Julien Goodwin asks whether an insecure platform can perform secure computation [1]. My immediate reaction was to recall Charles Babbage’s quote: “On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ […] I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.” [2]
However, on careful reading of Julien’s post it seems that he is most interested in the integrity of the computations rather than their secrecy. He suggests the possibility of performing the computation twice and comparing the results. Of course the issue to consider is whether both computations could be subverted. It seems most likely that if you are using someone else’s computation cluster then performing the same calculation on two nodes of that cluster will give the same result both times, whether it’s right or wrong.
If there was a computation whose result could be verified with little effort then the problem would be easy to solve. For example, if I wanted to brute-force a passphrase for a GPG key then I could try all combinations that were known to be possible, and if one was flagged as correct then I could verify it in a millisecond. If none of the possibilities were flagged as correct then I could assume that the process was broken in some way. The problem with this is that such a passphrase can have arbitrary length (I know someone who uses more than 50 characters), so a brute-force attack on passphrases of up to 9 characters doesn’t exclude a 10 character passphrase.
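To make the “cheap to verify” property concrete, here is a minimal sketch in Python. It assumes the secret is protected by a plain SHA-256 digest, which is only a stand-in for GPG’s real (and deliberately slow) key derivation; the point is that the expensive search could be farmed out to untrusted machines while verifying a claimed hit costs one hash on the trusted machine.

```python
# Sketch only: a stored SHA-256 digest stands in for GPG's real key
# derivation.  search() is the expensive part that could be farmed out;
# verify() is the cheap check done locally.
import hashlib
import itertools
import string

ALPHABET = string.ascii_lowercase
TARGET = hashlib.sha256(b"cab").hexdigest()  # digest of the unknown passphrase

def search(max_len):
    """Expensive: try every candidate of up to max_len characters."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(ALPHABET, repeat=length):
            candidate = "".join(combo)
            if hashlib.sha256(candidate.encode()).hexdigest() == TARGET:
                return candidate
    return None  # space exhausted up to max_len; says nothing about longer passphrases

def verify(candidate):
    """Cheap: a single hash comparison on the trusted machine."""
    return candidate is not None and \
        hashlib.sha256(candidate.encode()).hexdigest() == TARGET

claimed = search(3)
print(claimed, verify(claimed))  # cab True
```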
Probably the best potential use for insecure systems is analysing large data sets. There have been several projects to harness unused computing resources for large calculations (protein folding and SETI are two examples). Most such projects use closed-source programs because the people who run the contests are afraid of cheats who modify the programs to merely say “no” repeatedly and quickly. Of course this wouldn’t be a problem if they didn’t have a high-score table, and disassembling the program to hack the protocol can’t be that difficult (consider the work invested in reverse engineering protocols such as SMB, which are much more complex).
For such large-scale public computation projects it would probably be reasonable to randomly send each batch of work to two machines in different regions and compare the results.
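A minimal sketch of that dispatch policy, assuming a hypothetical node inventory and a placeholder run_on() helper standing in for whatever actually ships a batch to a remote node:

```python
# Sketch only: NODES and run_on() are hypothetical placeholders for a real
# job scheduler.  Each batch is sent to two nodes in different regions and
# accepted only when both copies agree.
import random

NODES = {
    "eu": ["eu-node-1", "eu-node-2"],
    "us": ["us-node-1", "us-node-2"],
}

def run_on(node, batch):
    """Placeholder for dispatching the batch to a remote node and waiting."""
    return sum(batch)  # stand-in computation

def run_redundantly(batch):
    region_a, region_b = random.sample(list(NODES), 2)  # two different regions
    result_a = run_on(random.choice(NODES[region_a]), batch)
    result_b = run_on(random.choice(NODES[region_b]), batch)
    if result_a == result_b:
        return result_a
    # Disagreement means at least one node is faulty or cheating;
    # the batch should be re-dispatched to a fresh pair of nodes.
    raise RuntimeError("result mismatch, batch needs re-dispatch")

print(run_redundantly([1, 2, 3, 4]))  # 10
```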
Finally, if you want to perform calculations on secret data on someone else’s hardware then you may have lost before you even start.
There’s an extensive publication history on encrypted computation. In particular, there’s a way to encode any Boolean circuit into a reasonably sized set of operations that can be performed on encrypted inputs. This is useful if you’re OK with revealing which computation is happening, but not your input data. There’s a separate step that can be performed to obscure the actual computation as well.
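As a toy illustration of computing on encrypted inputs (not the circuit-encoding construction itself, which is closer to garbled circuits or fully homomorphic encryption): unpadded “textbook” RSA is multiplicatively homomorphic, so an untrusted party can multiply ciphertexts without ever seeing the plaintexts.

```python
# Toy only: textbook RSA with tiny parameters, never use in practice.
# The product of two ciphertexts decrypts to the product of the plaintexts,
# so the party holding the ciphertexts can compute without seeing the data.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+ modular inverse)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

a, b = 7, 12
ca, cb = encrypt(a), encrypt(b)

# The untrusted party multiplies ciphertexts without ever learning a or b.
c_product = (ca * cb) % n

assert decrypt(c_product) == (a * b) % n
print(decrypt(c_product))  # 84
```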
It’s common to guard against random failure by using several machines: look at the Space Shuttle with its five computers, each with its own identical core memory. Replicating identical machines is less useful against adversarial attack. But there’s an excellent paper by Claude Shannon (I think with Moore and von Neumann) that sets out mechanisms for measuring the reliability and crumminess of a system and its components, and then a method for building reliable systems from crummy components.
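As a worked example of the general redundancy idea (not the specific construction in that paper): if a component fails independently with probability p, then running three copies and taking a majority vote fails only when at least two copies fail, which for small p is far rarer than a single failure.

```python
# Worked example of redundancy arithmetic, not the Moore-Shannon construction:
# three independent copies with a perfect majority voter fail only when at
# least two of the copies fail.
def majority_failure(p):
    """P(at least 2 of 3 independent copies fail) = 3*p^2*(1-p) + p^3."""
    return 3 * p**2 * (1 - p) + p**3

for p in (0.1, 0.01, 0.001):
    print(f"single copy fails {p:.3%} of the time, "
          f"triple-voted system fails {majority_failure(p):.5%}")
```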