The role of complexity theory in cryptography

One of the main goals of complexity theory is to prove lower bounds on the resources (e.g. time and/or space) needed to solve a certain computational problem. Cryptography can therefore be seen as its main "customer". (There are of course plenty of customers for upper bounds, i.e., for efficient algorithms.) While complexity theory has flourished and generated some beautiful and deep results, the needs of this very demanding customer have not even come close to being met. The fundamental problems are still wide open.

First, in order to be independent of the details of the computational model, complexity is not defined for a specific function, but only for an infinite class of functions parameterized by a parameter k (e.g. the input size); the complexity is then a function of k. In contrast to such asymptotic definitions, concrete cryptosystems (e.g. DES) are fixed, and an asymptotic extension is usually not defined. Moreover, one often distinguishes only between polynomial and super-polynomial complexity, although a finer distinction is of course possible, and desirable for concrete security statements and reductions.
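To make the contrast concrete (the following formalization is a standard textbook-style sketch, not part of the original text), an asymptotic security statement quantifies over a security parameter k and all efficient adversaries, whereas a concrete statement fixes a single system together with explicit resource bounds:

    Asymptotic:  for a family (f_k) and every polynomial-time adversary A,
                 \Pr_{x \leftarrow \{0,1\}^k}\left[ A(f_k(x)) \in f_k^{-1}(f_k(x)) \right] \le \mathrm{negl}(k).

    Concrete:    for one fixed function f and explicitly stated values of t and \varepsilon,
                 no adversary running in time at most t inverts f with probability greater than \varepsilon.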

Second, for the most part, classical complexity theory deals with worst-case complexity, a concept useless in cryptography, where breaking a system must be hard for almost all instances, not just for some of them. Ajtai's celebrated worst-case to average-case reduction for lattice problems was a breakthrough in the right direction. But in cryptography even average-case complexity results are not sufficient: hardness almost everywhere is required.
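The three notions can be contrasted informally as follows (this formalization is my own gloss; D denotes the relevant instance distribution and A an efficient adversary):

    Worst-case hardness:        for every A there exists some instance x on which A fails.
    Average-case hardness:      for every A,  \Pr_{x \leftarrow D}\left[ A \text{ solves } x \right] \le 1 - \delta  for a noticeable \delta.
    Hardness almost everywhere: for every A,  \Pr_{x \leftarrow D}\left[ A \text{ solves } x \right] \le \varepsilon  for a negligible \varepsilon.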

Third, instead of proving the hardness of finding an exact solution to a computational problem, one would like to prove that even approximating the correct solution is hard. There has been substantial progress in complexity theory in this direction, initiated by Feige et al., based on the theory of probabilistically checkable proofs.
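For concreteness (the definition is standard and not spelled out in the text): for a minimization problem with optimum value \mathrm{OPT}(x), an algorithm A is a \rho-approximation if

    \mathrm{OPT}(x) \le A(x) \le \rho \cdot \mathrm{OPT}(x)   for every instance x,

and an inapproximability result shows that achieving even some specific ratio \rho > 1 is already NP-hard.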

Fourth, the state of the art in proving lower bounds on the computational complexity of NP-problems is quite miserable, even just for worst-case complexity. The best known lower bounds are linear in the input size, which corresponds basically to the time needed to read the input. For example, in the circuit model of computation, a simple counting argument shows that almost all functions from n bits to 1 bit (i.e., functions {0,1}ⁿ → {0,1}) have circuit complexity Θ(2ⁿ/n), yet the best lower bound for any concrete function, due to N. Blum, is only 3n gates. Proving a lower bound of 4n, let alone n⋅log n or n², would be a major breakthrough.
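The counting argument alluded to runs roughly as follows (a standard back-of-the-envelope version with constants suppressed, not taken from the text):

    Number of functions \{0,1\}^n \to \{0,1\}:                   2^{2^n}
    Number of circuits with at most s gates of bounded fan-in:   at most 2^{c \, s \log s} for some constant c
    Covering all functions therefore requires  c \, s \log s \ge 2^n,  i.e.  s = \Omega(2^n / n).

Together with Shannon's matching upper bound of O(2ⁿ/n) gates for every function, this yields Θ(2ⁿ/n) for almost all functions, while no super-linear bound is known for any explicit function.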

Fifth, in 1994 the discussion about the right model of computation as the basis for complexity lower bounds was reopened when Peter Shor showed that certain problems (in fact the two cryptographically most relevant problems, namely factoring integers and computing discrete logarithms) can be solved in polynomial time on a quantum computer. While a quantum computer is still a theoretical model that may never be implemented at a sufficiently large scale to be of practical interest, computer scientists need to rethink the concept of computation. Basically, any process consistent with the laws of physics should be considered a computation, not just the steps of an idealized computational model such as a Turing machine.

From a cryptographic viewpoint, the most pressing research problem in complexity theory is to prove non-trivial lower bounds, perhaps polynomial, perhaps super-polynomial, on the complexity of breaking concrete cryptographic systems, for instance a one-way function. Even if P = NP were proved, which would rule out the existence of secure cryptosystems in a classical complexity-theoretic framework, secure cryptography might still be possible within the realm of polynomial computation.
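In concrete-security terms (this phrasing is a gloss, not the author's), such a lower bound would be an unconditional statement of the following form for some explicit, efficiently computable function f:

    f is (t, \varepsilon)-one-way:  every adversary A running in time at most t satisfies
                                    \Pr_{x \leftarrow \{0,1\}^n}\left[ A(f(x)) \in f^{-1}(f(x)) \right] \le \varepsilon,

proved for values of t substantially larger than the time needed to evaluate f, and ideally super-polynomial in n.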