Friday, July 29, 2016

The Truth and Expertise Network

While specific facts and the deductions that follow from them are at the core, in the main we are interested in identifying those who are, and those who are not, good sources of the truth. Most particularly we want to identify those who are lying for their own advantage, since those lies are much more likely to cause harm than merely mistaken beliefs. We are mainly concerned with facts that are relevant to public policy, but even there we come to issues where well-meaning folk would say “you shouldn’t say that, even if it is true”. We’ll leave such delicate considerations until the next blog post.
For those who don’t want the technical details, the general idea is this: Individuals can specify their level of belief about claims made, about the motivation for claims, and about the trustworthiness of other individuals. Software can then warn you about claims based on the claim itself, or the people making it. The software would only follow links from the people you trust (and that they trust, etc). This might need some social engineering to actually work, and that is described in the following blog post.
So the idea is that participants in the scheme will have one or more public-private keypairs. These will be used to sign assertions of various sorts, discussed below. They will be of no use unless (a) people link to those keys in various ways; and (b) the assertions are made public (at the very least to some of the linking people).
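As a purely illustrative sketch of that signing step, here is what it might look like in Python with the cryptography library’s Ed25519 keys. The assertion record here is my own placeholder, not a proposed format:

import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# Generate a keypair; the public half is what other people link to.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
public_bytes = public_key.public_bytes(Encoding.Raw, PublicFormat.Raw)

# A hypothetical assertion record (the real schema is an open design question).
assertion = {
    "type": "claim-prior",
    "claim": "Smoking causes lung cancer.",
    "url": "https://example.org/article",
    "prior_percent": 99,
}

# Sign a canonical encoding, so any verifier reconstructs the exact same bytes.
message = json.dumps(assertion, sort_keys=True).encode("utf-8")
signature = private_key.sign(message)

# Anyone holding the public key can check it; verify() raises on a bad signature.
public_key.verify(signature, message)

The canonical encoding matters: a verifier can only check the signature if it can rebuild the precise byte string that was signed.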
People can make their main public key publicly available in places they are known to control, or give it to specific people directly. They can also hold keypairs that they don’t advertise as belonging to them, but endorse as reliable as if they belonged to some other, unknown person. With such a key they can make assertions that can’t be attributed to them, yet can still be used by people who trust them and the people they trust.
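A small self-contained sketch of that unattributed-key arrangement, with the same caveat that the record format is hypothetical:

import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

main_key = Ed25519PrivateKey.generate()       # the advertised identity
pseudonym_key = Ed25519PrivateKey.generate()  # never advertised as the owner's
pseudonym_pub = pseudonym_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

# The main key endorses the pseudonymous key as if it belonged to a stranger.
endorsement = {
    "type": "trust-honesty",
    "subject_key": pseudonym_pub.hex(),
}
endorsement_sig = main_key.sign(json.dumps(endorsement, sort_keys=True).encode("utf-8"))

# Assertions signed with pseudonym_key now travel through the trust network of
# anyone who trusts the endorser, without being attributable to the owner.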
I’ll list (some of) the assertions that can be made; sketches of how these might be represented, and checked against each other, follow the list. Software running on the user’s machine, on the machines of those she trusts, and on central servers will cooperate to warn the user about false claims, about claimants lacking the expertise they claim, and about claimants seeking to mislead. Perhaps the most important output will be information about internal contradictions in the trust network. If your trust network supports incompatible claims, that is an indication of a problem, such as people in your trust network being overly confident about an uncertain matter, or infiltration of the trust network by incompetent or bad actors. Tracking these things down will help everybody who wants to get a good handle on the truth.
  • “My prior (belief pending future evidence) for this claim to be true is P%”, where P is between 0 and 100. The claim should be a block of text, optionally accompanied by a wider block of text from the same source providing context, plus a URL giving the location.
  • “My prior that this claim is honestly believed by the claimant is …”
  • “I believe the claimant is … [with probability] acting on behalf of … [with probability]”
  • “I trust this person to only express honestly held beliefs”, giving a public key.
  • “I believe this person is an expert on …”
  • “I trust this person to choose others who are trustworthy” (thus allowing an extended network of trust).
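To make the list concrete, here is one possible, and entirely hypothetical, way of representing those assertion types as records; the field names are mine, not any agreed schema:

from dataclasses import dataclass
from typing import Optional

@dataclass
class ClaimPrior:
    """'My prior for this claim to be true is P%.'"""
    claim_text: str
    url: str
    prior_percent: float                 # 0..100
    context_text: Optional[str] = None   # optional wider block from the same source

@dataclass
class HonestyPrior:
    """'My prior that this claim is honestly believed by the claimant is P%.'"""
    claim_text: str
    url: str
    prior_percent: float

@dataclass
class AgencyBelief:
    """'I believe the claimant is acting on behalf of X (with probability).'"""
    claimant_key: str      # hex-encoded public key
    principal: str         # who the claimant is believed to act for
    probability: float     # 0..1

@dataclass
class TrustHonesty:
    """'I trust this person to only express honestly held beliefs.'"""
    subject_key: str

@dataclass
class ExpertiseBelief:
    """'I believe this person is an expert on <topic>.'"""
    subject_key: str
    topic: str

@dataclass
class TrustDelegation:
    """'I trust this person to choose others who are trustworthy.'"""
    subject_key: str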
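And here is the toy consistency check promised above: walk the trust links out from the user, collect the priors that trusted people have published on a claim, and flag the claim when those priors can’t reasonably coexist. The depth limit and spread threshold are arbitrary placeholders:

from collections import deque

def trusted_keys(me: str, delegations: dict[str, set[str]], max_depth: int = 3) -> set[str]:
    """Breadth-first walk of 'I trust this person to choose others' links."""
    seen, frontier = {me}, deque([(me, 0)])
    while frontier:
        key, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for nxt in delegations.get(key, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen

def contradictions(me, delegations, priors_by_claim, spread: float = 60.0):
    """Flag claims where trusted people's priors differ by more than `spread` points."""
    net = trusted_keys(me, delegations)
    flagged = []
    for claim, priors in priors_by_claim.items():
        in_net = [p for key, p in priors if key in net]
        if in_net and max(in_net) - min(in_net) > spread:
            flagged.append((claim, min(in_net), max(in_net)))
    return flagged

# Toy usage: alice trusts bob, bob trusts carol; bob and carol disagree sharply.
delegations = {"alice": {"bob"}, "bob": {"carol"}}
priors = {"Claim X": [("bob", 95.0), ("carol", 10.0)]}
print(contradictions("alice", delegations, priors))  # [('Claim X', 10.0, 95.0)]

A real implementation would presumably attenuate trust with distance and weight the priors accordingly; this just shows the shape of the computation.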
Systematizing all that (and more) is a tough job. It is similar to the job done by the IETF (Internet Engineering Task Force), and maybe we need an OTETF (Objective Truth Engineering Task Force).

[update: Naturally browser plugins and other user software will make it as easy as possible for users to participate in this scheme.]
