Truly random numbers are difficult to produce. The clue is in that very description: if a number can be produced once, it can be reproduced. And if a random number can be reproduced, it isn’t random.
Random numbers lie at the heart of information security. They are essential to infosec’s strongest weapon, encryption, where they are used to generate the keys. The problem has always been that if an attacker can reproduce the randomness used, he can reproduce the keys and more easily crack the encryption. For this reason, considerable intellectual capital has been spent over the years on developing ‘true randomness’.
The University of Texas at Austin is now claiming a breakthrough. A paper by computer science professor David Zuckerman and graduate student Eshan Chattopadhyay will be presented at the annual Symposium on Theory of Computing (STOC) in June. The paper is one of three that have been awarded ‘best paper’ status – and it has been creating excitement ever since it was published for peer review and comment on the Electronic Colloquium on Computational Complexity in August 2015.
Titled ‘Explicit Two-Source Extractors and Resilient Functions’, it describes a method of combining two ‘weakly random’ number sequences into one truly random sequence. Weakly random numbers, such as air temperatures or stock market prices, can show predictable patterns over time. By definition, there is nothing predictable in a truly random number.
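The new construction itself is far too involved to reproduce here, but the shape of the problem can be sketched with a much older textbook two-source extractor: the inner-product extractor of Chor and Goldreich. The Python sketch below is purely illustrative and is not the authors’ method. It combines two independent weakly random samples into a single nearly unbiased bit, and it only works when each sample is already more than half random (min-entropy rate above 1/2).

```python
# Illustrative only: the classic Chor-Goldreich inner-product two-source
# extractor, not the Zuckerman-Chattopadhyay construction. Given two
# independent n-bit samples, each with min-entropy rate above 1/2, their
# inner product over GF(2) is a nearly unbiased output bit.

def inner_product_extractor(x: bytes, y: bytes) -> int:
    """Return the GF(2) inner product (XOR of bitwise ANDs) of x and y."""
    assert len(x) == len(y), "the two weak samples must be the same length"
    acc = 0
    for xb, yb in zip(x, y):
        acc ^= xb & yb      # AND corresponding bytes, XOR into accumulator
    # Fold the 8-bit accumulator down to its parity: one output bit.
    acc ^= acc >> 4
    acc ^= acc >> 2
    acc ^= acc >> 1
    return acc & 1

# Two hypothetical weak samples, e.g. digitized sensor-noise readings.
sample_a = bytes([0b10110010, 0b01101100])
sample_b = bytes([0b11001010, 0b10011001])
print(inner_product_extractor(sample_a, sample_b))  # prints 0 or 1
```

Getting any unbiased output at all from two sources with far lower entropy than this classic requires is the problem the new paper solves.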
For more than 20 years Zuckerman has been working on a process he himself pioneered: the extraction of true randomness from a weakly random sequence. Until now, however, the process has required either a truly random seed or two sources that were both almost truly random in order to succeed.
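That older, seeded flavor of extraction is easy to sketch in the same toy style. The fragment below is a minimal sketch in the spirit of the Leftover Hash Lemma (the function name and parameters are illustrative, not from the paper): a truly random seed picks a pairwise-independent hash function, which is applied to the weak sample. Note that Python’s secrets module is quietly supplying the truly random bits the whole exercise was meant to produce.

```python
import secrets

# A toy seeded extractor in the spirit of the Leftover Hash Lemma: a truly
# random seed (a, b) selects a hash from a pairwise-independent family,
# which condenses the weak sample into nearly uniform output bits.
# Illustrative only; not the construction from the paper.

P = (1 << 61) - 1  # a Mersenne prime, giving cheap modular arithmetic

def seeded_extract(weak_sample: int, a: int, b: int, out_bits: int = 32) -> int:
    """Apply the seeded hash x -> (a*x + b) mod P, truncated to out_bits."""
    return ((a * weak_sample + b) % P) & ((1 << out_bits) - 1)

weak = 1234567890                  # stand-in for a weakly random measurement
a = secrets.randbelow(P - 1) + 1   # the seed itself must be truly random...
b = secrets.randbelow(P)           # ...which is exactly the circularity
print(seeded_extract(weak, a, b))
```

The circularity is the point: to run a seeded extractor, you must already possess true randomness from somewhere else.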
No more. “This is a problem I’ve come back to over and over again for more than 20 years,” says Zuckerman. “I’m thrilled to have solved it.” The new paper describes how to extract one truly random sequence from two weakly random sequences.
Methods for generating high-quality random numbers already exist, but they are computationally very demanding. The new method can produce even higher quality at lower cost. “One common way that encryption is misused is by not using high-quality randomness,” says Zuckerman. “So in that sense, by making it easier to get high-quality randomness, our methods could improve security.” This could improve the security of everything that demands high-quality encryption, from credit card transactions to military communications.
The research is being hailed as a major step forward in security. “When I heard about it, I couldn’t sleep,” says Yael Kalai, a senior researcher working in cryptography at Microsoft Research New England who has also worked on randomness extraction. “I was so excited. I couldn’t believe it. I ran to the (online) archive to look at the paper. It’s really a masterpiece.”
Vincent Rijmen, one of the two designers of the Advanced Encryption Standard (AES), points out that Zuckerman’s paper is theoretical rather than practical. It “is probably important within its own context,” he told SecurityWeek; “that is, deep theoretic reflections on randomness and cryptography.” The view that it does not, at least yet, have much practical value within cryptography was confirmed by Professor Ross Anderson of the Cambridge University Computer Laboratory. “It’s a theory paper,” he told SecurityWeek, “and unlikely to be of much engineering interest as far as I can see.”