Monday, December 16, 2013

The Mathematics Game

The world mathematics community has become disenchanted with the system of peer review for academic journals, but is struggling to find a replacement. For the purposes of appointment and promotion, institutions need a way to evaluate mathematicians on their research and on the breadth and depth of their knowledge. This matters for another reason too: clever young people love being able to show off their inventiveness, and that is what leads many of them into mathematics.

Rather than inventing a solution from scratch, let's take what we know works and add a little cryptographic magic. Here are some things that we know work:
  1. The arXiv system for holding academic papers and for tracking changes to them.
  2. The MathOverflow system (and similar) for asking questions and for rating questions, answers and participants.
  3. Polymath style cooperative projects.
  4. Khan Academy and similar systems of self-paced learning.
  5. Repositories of knowledge such as Wikipedia and ncatlab.
  6. Math Olympiad and other competitions.
The proposal will have the following features:
  1. You can play the game at any level, starting with K-12 mathematics, and up to new research.
  2. Participant IDs are linked to unique real-world individuals. However you can play under pseudonyms, claiming the credit later if you do well, but never needing to own up to mistakes. Reviews can also be pseudonymous, freeing the reviewer to be honest.
  3. Abusive pseudonyms can be unmasked. Subsequent pseudonyms by that user will, for some time, have an elevated "abuse level" that users and software can take into account. Fair but tough reviews need to be endorsed by others as non-abusive to prevent abuse of the abuse system.
The system runs on various sorts of points and the interactions between them. The Stack Exchange folk (who run MathOverflow and similar sites) are experts on this and their advice should be sought. A possible scheme, sketched in code after the list, might be:
  1. Mathcoins are earned in various ways (including taking MOOCs and their accompanying tests), and can then be spent to allow the participant to attempt higher-level actions, which can allow them to move up on a more permanent basis.
  2. Achievement points are earned by well-regarded actions and they accumulate. This is rather like masterpoints in Bridge, and encourages the enthusiast as much as the skillful. This is important because there will be lots of work (such as marking middle-level participation) needed to keep things ticking over.
  3. Levels are more like the rankings in tennis, though more blurred. At the top it is the judgement of peers. At the bottom it is mostly automated. In between, the judgement of people above is the key. Moving up the levels is the objective of the game, and hopefully those at the top become stars, though they can, if they like, hide behind a pseudonym.
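To make the interactions concrete, here is a minimal sketch of how the three kinds of points might sit together. Every name and number is hypothetical; a real scheme would need Stack Exchange-style tuning.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    mathcoins: int = 0           # spendable currency
    achievement_points: int = 0  # accumulate forever, like masterpoints
    level: int = 1               # blurred ranking, judged from above

    def earn_coins(self, n: int) -> None:
        self.mathcoins += n

    def attempt_higher_action(self, cost: int) -> bool:
        # Coins buy the *attempt* at a higher-level action; success,
        # as judged by those above, is what moves the level up.
        if self.mathcoins < cost:
            return False
        self.mathcoins -= cost
        return True

p = Participant()
p.earn_coins(10)                     # e.g. from a MOOC and its tests
assert p.attempt_higher_action(cost=5)
```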
To start with, every participant would have an authenticating public key. The player can then generate as many additional key pairs as necessary to represent pseudonyms. The supporting activities (arXiv etc.) would need to be modified to support this (or replaced), including supporting the authentication of all actions.

The easy part will be for a participant to link a pseudonym to themselves (or to another pseudonym). All that is needed is to generate a "certificate" claiming that the two public keys represent the same person, and have the certificate signed by both keys.
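Here is a minimal sketch of such a dual-signed linkage certificate, using Ed25519 keys from the pyca/cryptography package. The certificate encoding and field names are invented for illustration.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def pub_bytes(priv: Ed25519PrivateKey) -> bytes:
    return priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

# The participant's two identities: the real key and a pseudonym key.
real_key = Ed25519PrivateKey.generate()
pseud_key = Ed25519PrivateKey.generate()

# The certificate body claims the two public keys are the same person.
body = b"same-person:" + pub_bytes(real_key) + b":" + pub_bytes(pseud_key)

# Both keys sign the claim, so neither half of the link can be forged.
cert = {
    "body": body,
    "sig_real": real_key.sign(body),
    "sig_pseud": pseud_key.sign(body),
}

def verify_link(cert, real_pub, pseud_pub) -> bool:
    try:
        real_pub.verify(cert["sig_real"], cert["body"])
        pseud_pub.verify(cert["sig_pseud"], cert["body"])
        return True
    except InvalidSignature:
        return False

assert verify_link(cert, real_key.public_key(), pseud_key.public_key())
```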

There are lots of other things that need to be done. They can all be done with a trusted 3rd party solution, and many of them can be done more elegantly and securely with cryptographic cleverness. It is also sometimes possible to divide information between trusted 3rd parties so that compromise of only one doesn't reveal important information. A toy trusted-3rd-party sketch follows the list.
  1. Proving that a pseudonymous identity has sufficient points/etc to participate in high level activities.
  2. Identifying the abusiveness status of pseudonyms without identifying the real participant.
  3. Transferring mathcoins to, from and between pseudonyms.
  4. ... and much more
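As promised, here is a toy sketch of the trusted-3rd-party approach to the first item: the 3rd party privately knows the pseudonym-to-participant mapping and the point totals, and signs a statement that mentions only the pseudonym. All names are hypothetical, and the cryptographically clever alternatives (blind signatures, zero-knowledge proofs) are not shown.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

ttp_key = Ed25519PrivateKey.generate()   # the trusted 3rd party's signing key
points = {"pseud-3f2a": 1200}            # private state held by the 3rd party

def attest_threshold(pseudonym: str, threshold: int):
    """Sign 'pseudonym has >= threshold points' without naming the owner."""
    if points.get(pseudonym, 0) < threshold:
        return None
    stmt = f"{pseudonym} has at least {threshold} points".encode()
    return stmt, ttp_key.sign(stmt)

# A high-level activity checks the attestation against the 3rd party's
# public key; it learns the pseudonym's standing but not its owner.
stmt, sig = attest_threshold("pseud-3f2a", 1000)
ttp_key.public_key().verify(sig, stmt)   # raises InvalidSignature if forged
```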
I think that the idea of Mathematics as a vast interconnected system, with no insurmountable barriers from the bottom to the top, would be very powerful and productive.

Saturday, November 16, 2013

The key2key project

After retiring I mostly pursued my interests in peak oil and in computer network security. While at CSIRO I had published an Internet Draft on "Basic Internet Security Model". It is still online at http://tools.ietf.org/html/draft-smart-sec-model-00, though it expired long ago. Later I tried to build on that to create a secure Internet, in what I called the Key2key Project.
After the recent security issues on the Internet (brought to light by Snowden) I thought I should look into reviving it. However it doesn't seem like something where I am likely to make any headway, given the cool-to-hostile reaction to my '99 Internet Draft years ago. Anyway, for the record, here is the last, rather dated and very incomplete, key2key overview document:

The Key2key Project

The end2end interest group created the ideas on scalability that led to the Internet. The aim of the key2key project is to extend this philosophical framework into the security area to create a secure overlay network.
A trusted system is one that can harm the truster. It may actually do harm if it fails in some way, or if the trust that was placed in it was misplaced.
Security is when you know which systems you trust, and explicitly agree to place that trust. We don't consider whether that is because the trusted systems are actually believed to be trustworthy, or just that the alternatives are believed to be worse. Food security is when you get to balance the risk that the food is poison against the risk of starvation. Food insecurity is when you are force fed.
In the Internet today, security is not end-to-end. As a result, Internet users are trusting intermediate hardware and software systems that they don't even know exist.
This document covers the following areas:
  • Modelling Internet entities and sub-entities. This is a necessary step to understanding the problem.
  • Modelling cryptographic security technology: hashes, encryption, verification, signatures.
  • Modelling communication between entities. This will make it possible to define when a protocol is secure, and define a framework for building secure protocols. These secure protocols will be necessary for building our secure overlay network.
  • Modelling the common and crucial situation when one entity executes software "on behalf of" another (OBO).
  • A device for human signatures (DHS), and the implications of its limitations.
  • Delegating specified limited powers to sub-entities.
  • Securely booting a PC and setting it up as a sub-entity capable of representing the user on the network, and referring matters beyond its delegation up to the DHS.
  • A protocol for communication by "on behalf of" execution. It is intended to show eventually, but not in this document, that this is the only reasonable approach to this problem.
  • A simplistic e-commerce application will illustrate in detail how these components work together to make a secure system.

Entities and sub-entities

Distributed computing is very different when the computers involved are under the control of a single entity, compared with the case where the computers are controlled by separate entities. For the former the important issue is performance. The key2key project is all about the latter, communication between separate entities. In this case the main issue is security [footnote: However key2key can have good performance. Though the main control communication in key2key is often forced to follow potentially low performance routes, bulk data transfer is direct].
Legal entities (people and organizations) have sub-entities, such as employees and computer systems, which are not legal entities themselves, but can be given a delegation to act on behalf of the legal entity. Legal entities are not connected directly to the network. So in order to perform actions on the Internet they need to have some way to give a delegation to a computer system to act on their behalf. This can be quite informal, and the legal implications of the mechanism chosen will rarely be tested in court. In this document we will discuss well defined mechanisms which are appropriate as the basis for more serious interaction between legal entities via the network.
We want to get the communication between separate legal entities via the network onto a sound logical footing. It is important to understand that an individual acting with delegation as an employee is, for our purposes, entirely different from that individual acting as themselves. The fact that these two (sub)entities share the same brain gives rise to serious security issues. However this problem predates computing and networking. We aren't going to attempt to solve it, though it is useful to consider how well traditional legal approaches carry over into the network world.

Cryptographic technology

The key2key project relies on certain capabilities that are usually provided by cryptographic technologies, but can sometimes be provided in a simpler way by a trusted third party:
  • Secure hash (cryptographic checksum). This is a small fixed-size number, typically 256 bits, which uniquely determines some larger bit string. In key2key: end points are represented by the secure hash of a public key; immutable files are represented by the secure hash of their contents. The required characteristics are that there is a vanishingly small probability that two bit strings will give the same hash, and that it is computationally infeasible, given a bit string, to find a different bit string that hashes to the same result. This capability could also be provided by a trusted 3rd party that remembered bit strings and returned a sequence number.
  • Encryption in key2key applications is used for access control of information that has to go via a 3rd party. Of course this often includes providers of network services. It is commonly the case that, if data is not completely public, it is easier to encrypt it than to evaluate whether the 3rd parties who will see it are entitled to do so. Note that the important public keys in key2key are not used for encryption, only for signature verification. Encryption public keys are always separate and usually temporary.
  • The bulk of communication between key2key end points is verified by a temporary agreed shared key (whether or not the communication is encrypted). This means that each party knows the communication came from the other, but doesn't allow them to prove that to a 3rd party; a short sketch follows this list.
  • Digital signing and verification is only used during the setup phase of communication, and for communications that the recipient wants to be able to prove to a 3rd party that they received. If clever algorithms based on sophisticated mathematics were to cease to be secure then a system using shared keys via a trusted third party would also be possible. Important long term public keys can use combined algorithms, and/or use multiple keys where the matching private keys are not held in one place.
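Here is the promised sketch of shared-key verification, using only the Python standard library; the key agreement step is assumed to have already happened. Because both ends hold the same key, a valid tag convinces the peer but proves nothing to a 3rd party: either side could have computed it.

```python
import hashlib
import hmac
import os

shared_key = os.urandom(32)   # stands in for the temporary agreed key

def tag(message: bytes) -> bytes:
    return hmac.new(shared_key, message, hashlib.sha256).digest()

msg = b"transfer block 42"
t = tag(msg)

# The receiver recomputes the tag; the constant-time comparison avoids
# leaking information through timing.
assert hmac.compare_digest(t, tag(msg))
```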
Communication in key2key is between end-points identified by the hash of a public key. The first thing sent between the parties is the public key itself, which must hash to the identifying hash to be accepted. After that other cryptographic services and keys can be agreed between the end points.
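A sketch of that identifying-hash check, assuming Ed25519 keys and SHA-256 (the text above says only "typically 256 bits"):

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

priv = Ed25519PrivateKey.generate()
pub_raw = priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

endpoint_id = hashlib.sha256(pub_raw).digest()   # what peers know us by

def accept_key(claimed_id: bytes, received_key: bytes) -> bool:
    # The first key received must hash to the identifying hash before
    # anything else in the conversation is believed.
    return hashlib.sha256(received_key).digest() == claimed_id

assert accept_key(endpoint_id, pub_raw)
```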

Logical communication model

Each end-point is under the control of a legal entity (or in rare cases multiple entities, in some and-or tree structure [footnote: In the 'and' case all communication goes to each of the entities, and anything coming from it is approved by all. In the 'or' case communication goes to an unknown one of the entities and anything coming from it is approved by one of them.]). The end points don't, by default, know what entity controls the other end. Often the initiating party will use a temporary public key just for that connection, and there may never be any call for the initiator to reveal who they are.
Two machines acting under common control might just move data back and forth according to some distributed computing algorithm that the owner has chosen to use. Communication between separate legal entities can only take place if it is meaningful. The agreed protocol must be able to be interpreted as a sequence of assertions and requests, in order for it to be possible to check if the protocol securely protects the interests of each party.
If end point 1 (EP1) sends the assertion "the sky is blue", then the receiving end can only infer and record the fact that "EP1 asserts that the sky is blue". Each end point keeps a store of beliefs and of business logic. When a request comes in, then the end point will effectively try to construct a proof that the request should be honoured.
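A toy illustration of that inference rule: the belief store never records the bare assertion, only the fact that a particular end point asserted it. Names are hypothetical.

```python
beliefs = set()

def receive_assertion(sender_id: str, assertion: str) -> None:
    # We don't learn "the sky is blue"; we learn that EP1 said it.
    beliefs.add((sender_id, assertion))

receive_assertion("EP1", "the sky is blue")
assert ("EP1", "the sky is blue") in beliefs
assert "the sky is blue" not in beliefs
```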
End points can also send "out of band" hints to the other end. The correctness or otherwise of hints doesn't affect the trust in the main communication. One sort of hint will be about how to contact 3rd party keys mentioned in the communication. This might save a lookup in a directory, or it might actually be the only way for the recipient to get that information. Another sort of hint will be a proposed proof for the recipient. This is desirable because proof construction is inherently undecidable, the receiver of the request might be unwilling to invest the resources, and it might be fairer for the requester to do the work. This sort of hint might look something like this in English translation: "Assuming your belief store holds 'trust bank public key about assertions of the form ...' and '...' then follow these steps ...".
Communication is between sub (or subsub) entities. Before events with real world significance (such as purchases) can take place, assertions about delegation may need to be exchanged, with a chain leading up to a key that is provably the key of the legal entity. However exchanges of real world significance can be anonymous on one or both sides, as in the real world when we go into a shop and pay cash.

"On Behalf Of" execution

We are familiar with the situation where we visit a web site like google or facebook or a poker server or an airline reservation site, and we perform actions which are carried out on our behalf on a computer that is not under our control. We might have an explicit or implicit legal contract, which might constrain how honestly or correctly the actions are carried out. But in general we have to assume that the requests we make will be handled in a way that suits the owner, not us, as we saw in the case of the cheating owner of a poker service, and in the case (some time ago) of a search for "linux" on MSN-India's search service, which returned linuxsucks.com as the first hit.
In other OBO cases we have a stronger expectation that the owner of the environment will honestly carry out the user's requests: when the owner provides a web hosting service, or a unix login service, or a container for isolated execution, or a virtual machine that the user seems to completely control.
Still, in all these cases it hardly seems wise for the user of the service to transfer, to that environment, credentials which have power over significant amounts of money or other valuable property. Rather than trying to work out what credentials can be transferred and when, the key2key project takes an alternative approach: credentials are never transferred, but access to external resources is still possible from the OBO execution in exactly the circumstances where this is secure. More on this later.

Device for Human Signatures

We want to make it possible for real world legal entities to interact via the network. What is needed is a way to link people to the network in a way that makes legal sense. The proposed solution will work for an individual representing themselves, or for an employee with some delegated ability to act for the employer. We don't consider the possibility of combining these in a single physical device.
The solution is a Device for Human Signature, DHS. The DHS requirements mean that it must be a separate device, not part of a more complex device. The proposed device has the following characteristics:
  • It has biometric authentication which is unchangeably linked to the owner.
  • It has a private key that is generated when first activated. Only the public key ever leaves the device.
  • It has a black and white screen and a mechanism for scrolling the image left-right and up-down.
  • It has a way that the owner can agree to sign what is displayed on the screen. This is such that it can't be done accidentally, nor can it be done without simultaneous biometric authentication.
  • There is another mechanism to clear the current image without signing it.
  • The device is connected to the world by wireless mechanisms and/or cable. If a cable is plugged in then it only uses that, which is desirable for signing things that have privacy restrictions. Either way it displays any offered image and, if signed, it sends the signature back on the reverse route.
The user signs the extended black and white image. She is not able to sign it until she has used the scroll control to view all of it.
The image will always be created, by a defined and public process, from information in a computer-friendly format (such as XML). For example, one of the known processes will be "English". The information in computer format, and the well-known translation process, will be sent along with the signature of the image when it is used for internal computer purposes. For legal purposes only the actual visible text applies.
Any computer software can "understand" the signed text by using the conversion process on the computer friendly variant and checking that the resultant image is the one that the user signed. E.g. the user might sign "pay $1000 from my account 061234567 to Example Company (ABN 1234) account 0698765". What they actually sign is an array of black and white dots which has the appearance of this sentence. However the receiving computer (presumably the bank) doesn't have to understand the visual dots because such signed documents always come with an accompanying computer friendly structure which converts to the image in a well defined mechanical way. The signed document comes with an accompanying solution to the problem of determining its meaning.
It is important to sign a picture rather than "text", because it removes questions about how the text was rendered, and as we see it works just as well.
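A sketch of the verification step follows. The `render` function is a stand-in for one of the defined public processes (such as "English"); a real renderer must be bit-exact and deterministic, and everything here is invented for illustration.

```python
import hashlib

def render(structured: dict) -> bytes:
    # Stand-in renderer: a real one produces a canonical black and
    # white bitmap. Here we fake one deterministically from the fields.
    text = (f"pay ${structured['amount']} from my account "
            f"{structured['src']} to account {structured['dst']}")
    return text.encode()   # pretend this is the bitmap

structured = {"amount": 1000, "src": "061234567", "dst": "0698765"}
image = render(structured)
signed_digest = hashlib.sha256(image).digest()   # what the DHS key signs

# The recipient re-renders the accompanying computer-friendly structure
# and checks it matches the image the human actually saw and signed.
assert hashlib.sha256(render(structured)).digest() == signed_digest
```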
The signing device is only intended to be used for important things, or to create a temporary delegation to some more practical computer system which will sign as needed to act on the network within that delegation.

Delegating to sub-entities

Delegation is particularly important for organizations delegating to employees or commercial network servers, and might be quite complicated, specifying what assertions and requests the delegate can make on behalf of the organization and what requests it will honour. This may not be practical for a person delegating to a computer system using the DHS: all the rules would have to be translated into English and read.
The form of delegation which will be initially implemented in key2key is a system of well-known named delegation types. In particular the user will probably give his desktop system the "Standard Anonymous Desktop" delegation, which will enable the user to work anonymously on the network, as we ordinarily do most of the time. When the desktop system needs extra delegation, that will appear as a specific delegation request on the user's DHS.

Architecture of key2key end-point computers

The DHS doesn't remove the need for end-point systems, particularly desktop systems, to be secure. The standard techniques of managed code and sandboxing are crucial to allow applications to run without the need for them to be trusted with the crown jewels: the ability to use the private key to sign assertions and requests.
The traditional file system model of files that can be updated in place is inappropriate for the needs of key2key. Instead files are read only and identified by their hash, so that they are to a large extent self-verifying. The traditional unix updateable file is actually a form of simple database, and is handled in that way with appropriate security mechanisms shared with other network accessible databases.
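A minimal sketch of such a content-addressed store, assuming SHA-256 as the hash:

```python
import hashlib

store = {}

def put(data: bytes) -> str:
    # A file's name *is* the hash of its contents.
    name = hashlib.sha256(data).hexdigest()
    store[name] = data
    return name

def get(name: str) -> bytes:
    data = store[name]
    # Self-verifying: any copy, from any source, can be checked locally.
    assert hashlib.sha256(data).hexdigest() == name
    return data

fid = put(b"immutable contents")
assert get(fid) == b"immutable contents"
```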
The architecture also covers the secure execution of code that is only partly trusted, and of code that is executed on behalf of an external entity.

Desktop system: booting and running

To do anything useful, a user needs to boot a desktop system. That system needs to be physically secure, and it should be booted from reliable read-only media to place it in a predictable state. That system needs to generate a private and public key pair to allow it to operate on the network using key2key mechanisms.
When that is all done, the next problem is to use the DHS to associate the desktop with the user and appropriate delegation. The desktop will generate an appropriate message, sometimes incorporating user input to adjust the delegation (though normally additional delegation is added later). That message will appear on the desktop's screen, and be transferred to the DHS by wire or wireless mechanism. The DHS will offer that to the user to sign. It will be in the user's own language and will say something like: "I have securely booted on trusted hardware, and the key signature of that system is 123456789ABCEDEF. It is delegated to act for me on all services not requiring specific delegation.". This signed result will be returned to the desktop system and sent as an assertion wherever needed.
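A sketch of that boot-time delegation assertion, assuming Ed25519 keys; the statement wording and fingerprint format are made up for illustration.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

user_dhs_key = Ed25519PrivateKey.generate()   # lives inside the DHS
desktop_key = Ed25519PrivateKey.generate()    # generated at boot

desktop_pub = desktop_key.public_key().public_bytes(Encoding.Raw,
                                                    PublicFormat.Raw)
fingerprint = hashlib.sha256(desktop_pub).hexdigest()[:16]

# What the human sees on the DHS screen (rendered as an image) and signs.
statement = (f"I have securely booted on trusted hardware, and the key "
             f"signature of that system is {fingerprint}. It is delegated "
             f"to act for me on all services not requiring specific "
             f"delegation.").encode()
delegation_sig = user_dhs_key.sign(statement)

# The desktop attaches (statement, delegation_sig) as an assertion
# wherever needed; anyone with the user's public key can check it.
user_dhs_key.public_key().verify(delegation_sig, statement)
```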

OBO execution model

Suppose that user X is running a shell on a remote computer owned and managed by Y, and a program tries to access a resource on a system owned by Z that X is allowed to access. The traditional approach is that X does something which reveals the credentials for accessing Z in a way that Y could easily take advantage of. X might type a password into the interactive session on Y, or might have transferred some cryptographic credentials, such as a Kerberos TGT or a private key, to Y. This is wrong.
The key2key approach is that the request from Y's system to Z's system must use Y's credentials. Y will normally tell Z that this is on behalf of X, but this will only be used by Z to reduce its willingness to agree to the request. If Z won't execute the request using Y's credentials then Y can seek an alternative way to make that request, and the natural and default alternative is to go back up the chain leading to the OBO execution. So in this simple case, Y will ask X to send the request to Z with X's credentials. And, of course, X is well placed to know if this is a request that naturally springs from the OBO execution on Y. If the execution of the request involves a bulk file transfer then that will go between Y and Z directly, and not be forced to go via X.
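A toy sketch of that referral rule, with a hypothetical access table standing in for Z's belief store and proof machinery:

```python
# Z's policy: X may use the resource directly; Y may not.
ACCESS = {("Z-resource", "X"): True, ("Z-resource", "Y"): False}

def z_handle(resource: str, caller: str, on_behalf_of: str = None) -> bool:
    # OBO information could only *reduce* Z's willingness; it never
    # substitutes for the caller's own credentials.
    return ACCESS.get((resource, caller), False)

def y_run_obo(resource: str, user: str, ask_user_to_send):
    if z_handle(resource, caller="Y", on_behalf_of=user):
        return "done via Y"
    # Refused: refer the request back up the chain. X is well placed
    # to judge whether it naturally springs from the OBO execution.
    return ask_user_to_send(resource)

result = y_run_obo(
    "Z-resource", "X",
    ask_user_to_send=lambda r: "done via X" if z_handle(r, "X") else "refused",
)
assert result == "done via X"
```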

Illustrative e-commerce application

Payment by Reservation (PbR) is the key2key native e-commerce application. It associates accounts with keys, and it handles need-to-know revelation of information about the end points: typically information is revealed only when there is a conflict.

update on Peak Oil

I summarized my attitudes to Peak Oil in an (anonymous) contribution to the Azimuth discussion a couple of years ago, reproduced below. It seems right [except the dubious comment on shale oil was way off]. World economic growth continues to be constrained by the fact that we can only slowly change infrastructure. Fossil fuel use continues to grow as we go to gas and back to coal for many applications.
  1. The world peak is different to the peaks we’ve seen in individual countries and fields, because the price can now rise. This should mean that the tail is more stretched out as otherwise uneconomic fields come into play (like tar sands, very heavy oil, coal liquefaction, abandoned fields, and maybe even oil shales). Also it means that there is a lot of pressure to get off oil as much as possible. Even the Saudis don’t want to burn oil for electricity. Electric cars are coming for some uses. Nuclear powered commercial shipping may be economic. Etc. It will be many decades before we run out of oil for high value applications. However it seems that many of these changes are not starting soon enough and things will be bad for a while.
  2. On energy density: The fuel of choice for interstellar flight is anti-matter. Lots of energy goes into making the powerful lightweight batteries we use in portable stuff like mobile phones and laptops. The message is that there is often good value in using lots of stationary energy to produce much smaller amounts of dense energy for transportation or other portable applications. This fact has been obscured by the availability of oil, which was cheap and already dense. Now that we are looking into this problem we may remember the quote from Sheikh Yamani (a former Saudi oil minister): “The stone age didn’t end because they ran out of stones”.
  3. Peak Oil is related to claims of Peak Fossil Fuel. However it seems that exploration and development of gas and coal have been suppressed by the availability of the more convenient liquid form. The claims with respect to coal are based on traditional extraction methods, but deeply buried coal can be accessed by underground coal gasification.
  4. A (possibly temporary) oil peak is happening now, with oil production unable to expand in response to price increases. It would be nice if we could get the economists who favour all possible effort to expand the economy (like Paul Krugman of the NY Times) to respond to the question: “That would result in greater oil consumption. What if the world can’t pump that much more oil at the moment?”. It would also be nice if we could admit that growth is going to be limited for a while and have a rational discussion about how society should handle that fairly. E.g. an answer might be to get people working but not spending, with future financial security, by forcing them to take some of their income as “Energy Crisis Bonds” which will retain their value as a fraction of GDP, but not be spendable until enough of the massive infrastructure changes have been implemented.

Saturday, October 26, 2013

cheap energy created the anthropocene

At http://math.ucr.edu/home/baez/balsillie/ John Baez has slides from his recent talks on the characterization of climate change and what we will do about it. They are clearly thought out and presented, as always. Climate change is just one aspect of the anthropocene: the new era created by human activity. One of the to-do actions is to leave fossil fuels in the ground.

The elephant in the room of this story is that our best chance to leave fossil fuels in the ground is to find cheaper energy and the only realistic chance of that lies in developing nuclear power. The trouble is that the anthropocene has arisen from cheap energy. It lets us destroy habitats, destroy fish stocks, and much more. Cheaper energy will make this worse, even if it fixes the CO2 problem. The answer will lie in extending the 19th/20th century idea of a national park, to create an international park which is a substantial subset of the biosphere. The other side of the coin is the human conquest of space. There are many lifeless worlds out there, just waiting for us to make them more interesting.

Friday, October 25, 2013

scientific errors in medicine

The obesity-health saga is an interesting example of scientific error. There is a clear correlation between being overweight and having various health problems, particularly Type 2 diabetes. So nobody looked more closely at that. Everyone is advised to lose weight.

But then they did look more closely and, lo and behold, we find that for most people carrying extra weight is actually protective: at any given level of fitness it is better to have more weight.

So why is carrying weight associated with disease? The answer is that most people who are fit are relatively slim because it is hard to keep the weight on if you get fit. So being overweight is correlated with lack of fitness, and that is the problem. If you can be fit and keep the weight, with a high proportion of muscle, then that is ideal.

Medical science seems particularly prone to jumping to conclusions based on correlation alone, but it is an easy mistake in many disciplines. Medical science is pretty good at other sorts of errors too, including pure guesswork, like "eating fat makes you fat" (most people lose weight on a high-fat low-carb diet), or that eating cholesterol will increase your cholesterol levels, leading to heart disease.