Keeping information useful
© Copyright 1994-2002, Rishab Aiyer Ghosh. All rights reserved.
Electric Dreams #56
17/April/1995

The greatest vulnerability of a society that relies on information is the information itself, which must be reliable. Yet most, if not all, of the knowledge stored in vast memories, human and computer, around the world is stored independent of any notion of reliability. Every time some data is accessed, its accuracy is manually estimated, or guessed, in a very haphazard fashion. Our knowledge doesn't know its own worth.

This may have been acceptable when knowledge itself was not widely available. As long as it was closely guarded by librarians in mediaeval monasteries, master craftsmen in guilds, or researchers in obscure academic journals, the value of any knowledge was safely left to the few experts who had access to it. But with libraries on-line, guilds made redundant by the proliferation of formerly unavailable information, and academic papers often published on the Net before they appear in print, virtual expertise is available to anybody. Those not benefitting directly from restricted information flows would agree that this open access to knowledge is a good thing - but only if accompanied by widespread awareness of the knowledge's usefulness.

This awareness is what differentiates an expert from a merely interested person. An expert can estimate two things given a morsel of information - its context, which requires considerable understanding of the subject matter, and its reliability, which requires some experience of its sources. While putting data in context is a frightfully complex task for humans, let alone machines artificially intelligent or naturally dumb, reliability is something computers can handle.

Reliability involves answers to many questions, which, though usually implicit, should for the sake of computers accompany all data explicitly - where do they come from? Who created them? How reliable were they, according to their creators? And so on. Complications arise when information travels not in single gigantic leaps but through several stepping-stones in the infosphere, many of them in turn sources of yet more information. But these are the intricacies on which machines thrive, and a whole branch of computation - epistemic logic - exists for reasoning about reasoning. Epistemic logic defines techniques to calculate the reliability - for example - of data as they pass through a chain of possibly unreliable sources, by cleverly combining the reliability of the sources themselves with what they know of the worth of the data. Strictly implemented, epistemic logic often ends up claiming that most information is pretty unreliable - which is not terribly surprising when you think about it.
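The flavour of that calculation can be conveyed with a toy sketch. This is not a real epistemic logic - those are far richer - but a minimal, assumed model in which each hop in the chain has a trustworthiness score and its own claimed confidence in the data, combined multiplicatively under an independence assumption. The function name and the numbers are purely illustrative.

```python
# Toy sketch only: one simple, assumed way to combine reliabilities
# along a chain of sources. Each link contributes two factors in [0, 1]:
# how much we trust the source, and how confident the source itself
# claims to be in the data. Independence between links is assumed.

def chain_reliability(links):
    """links: sequence of (source_trust, claimed_confidence) pairs.
    Returns the combined reliability after every hop in the chain."""
    reliability = 1.0
    for source_trust, claimed_confidence in links:
        # Every hop can only erode reliability, never restore it.
        reliability *= source_trust * claimed_confidence
    return reliability

# Three fairly trustworthy hops still leave the data rather unreliable.
chain = [(0.9, 0.95), (0.8, 0.9), (0.85, 0.9)]
print(round(chain_reliability(chain), 3))  # → 0.471
```

Even with every source above 80% trustworthy, the chained result falls below half - which is the essay's point: strictly computed, most second- and third-hand information comes out pretty unreliable.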

Epistemic logic is so far used only in some experimental databases and artificial intelligence projects. To be used widely, it will have to coalesce into some standard, which will happen when enough people feel they need real knowledge on-line, not just impressions of it. To realize the promised land of all answers that cyberspace has been made out to be, electronic discussion forums need to provide easy ways of ensuring accuracy, and reliability needs to be woven into the World Wide Web. Till then, as all wise Net-dwellers know, frequent pinches of salt will remain the most important part of a digital diet.
