Cook–Levin theorem

In computational complexity theory, the Cook–Levin theorem, also known as Cook's theorem, states that the Boolean satisfiability problem is NP-complete. That is, any problem in NP can be reduced in polynomial time by a deterministic Turing machine to the problem of determining whether a Boolean formula is satisfiable.

The theorem is named after Stephen Cook and Leonid Levin.

An important consequence of the theorem is that if there exists a deterministic polynomial-time algorithm for solving Boolean satisfiability, then there exists a deterministic polynomial-time algorithm for solving all problems in NP. Crucially, the same follows for any NP-complete problem.

The question of whether such an algorithm exists is called the P versus NP problem and it is widely considered the most important unsolved problem in theoretical computer science.

Contributions

The concept of NP-completeness was developed in the late 1960s and early 1970s in parallel by researchers in the US and the USSR. In the US in 1971, Stephen Cook published his paper "The complexity of theorem proving procedures"[1] in the conference proceedings of the newly founded ACM Symposium on Theory of Computing. Richard Karp's subsequent paper, "Reducibility among combinatorial problems",[2] generated renewed interest in Cook's paper by providing a list of 21 NP-complete problems. Cook and Karp each received a Turing Award for this work.

The theoretical interest in NP-completeness was also enhanced by the work of Theodore P. Baker, John Gill, and Robert Solovay, who showed that solving NP problems in oracle machine models requires exponential time. That is, there exists an oracle A such that, for all subexponential deterministic-time complexity classes T, the relativized complexity class NP^A is not a subset of T^A. In particular, for this oracle, P^A ≠ NP^A.[3]

In the USSR, a result equivalent to Baker, Gill, and Solovay's was published in 1969 by M. Dekhtiar.[4] Later Levin's paper, "Universal search problems",[5] was published in 1973, although it was mentioned in talks and submitted for publication a few years earlier.

Levin's approach was slightly different from Cook's and Karp's in that he considered search problems, which require finding solutions rather than simply determining existence. He provided 6 such NP-complete search problems, or universal problems. Additionally he found for each of these problems an algorithm that solves it in optimal time (in particular, these algorithms run in polynomial time if and only if P = NP).

Definitions

A decision problem is in NP if it can be solved by a non-deterministic algorithm in polynomial time.

An instance of the Boolean satisfiability problem is a Boolean expression that combines Boolean variables using Boolean operators.

An expression is satisfiable if there is some assignment of truth values to the variables that makes the entire expression true.
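
For example, the expression (x ∨ ¬y) ∧ (¬x ∨ y) is satisfiable, since setting both x and y to true makes each clause, and hence the whole expression, true; by contrast, the expression x ∧ ¬x is not satisfiable, because no truth value for x makes both conjuncts true.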

Idea

Given any decision problem in NP, construct a non-deterministic machine that solves it in polynomial time. Then for each input to that machine, build a Boolean expression which says that the input is passed to the machine, the machine runs correctly, and the machine halts and answers "yes". Then the expression can be satisfied if and only if there is a way for the machine to run correctly and answer "yes", so the satisfiability of the constructed expression is equivalent to asking whether or not the machine will answer "yes".
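
Viewed schematically (the group names below are only illustrative labels for the clause families spelled out in the proof, not standard notation), the constructed expression is a conjunction of the form B = Bstart ∧ Bcell ∧ Bmove ∧ Baccept, where Bstart fixes the initial tape contents, state, and head position; Bcell says that at every step each tape cell holds exactly one symbol, the machine is in exactly one state, and the head is at exactly one position; Bmove says that consecutive configurations are related by a legal transition of the machine and that unwritten cells keep their contents; and Baccept says that an accepting state is reached within the time bound.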

Proof

[Figure: Schematized accepting computation by the machine M.]

This proof is based on the one given by Garey and Johnson.[6]

There are two parts to proving that the Boolean satisfiability problem (SAT) is NP-complete. One is to show that SAT is an NP problem. The other is to show that every NP problem can be reduced to an instance of a SAT problem by a polynomial-time many-one reduction.

SAT is in NP because any assignment of Boolean values to the Boolean variables that is claimed to satisfy the given expression can be verified in polynomial time by a deterministic Turing machine. (The problems verifiable in polynomial time by a deterministic Turing machine are exactly the problems solvable in polynomial time by a non-deterministic Turing machine; a proof can be found in many textbooks, for example Sipser's Introduction to the Theory of Computation, section 7.3.)
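
To make the verification claim concrete, here is a minimal sketch of such a verifier in Python, assuming the expression is in conjunctive normal form and encoded as lists of signed integer literals (the encoding and the name verify_assignment are illustrative choices, not part of the theorem); it checks a claimed assignment in time linear in the size of the formula.

    # A minimal sketch of a polynomial-time verifier for SAT, assuming the
    # expression is in conjunctive normal form and each clause is a list of
    # signed integers (k stands for variable k, -k for its negation, as in the
    # DIMACS convention). The name verify_assignment is an illustrative choice.

    def verify_assignment(clauses, assignment):
        """Return True if assignment (a dict variable -> bool) satisfies every clause."""
        for clause in clauses:
            # A clause is true if at least one of its literals is true.
            if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
                return False
        return True

    # Example: (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2) is satisfied by x1 = x2 = True.
    clauses = [[1, -2], [-1, 2]]
    print(verify_assignment(clauses, {1: True, 2: True}))   # True
    print(verify_assignment(clauses, {1: True, 2: False}))  # False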

Now suppose that a given problem in NP can be solved by the nondeterministic Turing machine M = (Q, Σ, s, F, δ), where Q is the set of states, Σ is the alphabet of tape symbols, s ∈ Q is the initial state, F ⊆ Q is the set of accepting states, and δ ⊆ ((Q \ F) × Σ) × (Q × Σ × {−1, +1}) is the transition relation. Suppose further that M accepts or rejects an instance of the problem in time p(n), where n is the size of the instance and p is a polynomial function.

For each input, I, we specify a Boolean expression which is satisfiable if and only if the machine M accepts I.

The Boolean expression uses the variables set out in the following table. Here, q ∈ Q, −p(n) ≤ i ≤ p(n), j ∈ Σ, and 0 ≤ k ≤ p(n).

Variables | Intended interpretation | How many?
Ti,j,k | True if tape cell i contains symbol j at step k of the computation. | O(p(n)²)
Hi,k | True if M's read/write head is at tape cell i at step k of the computation. | O(p(n)²)
Qq,k | True if M is in state q at step k of the computation. | O(p(n))

Define the Boolean expression B to be the conjunction of the sub-expressions in the following table, for all −p(n) ≤ i ≤ p(n) and 0 ≤ k ≤ p(n):

Expression | Conditions | Interpretation | How many?
Ti,j,0 | Tape cell i initially contains symbol j | Initial contents of the tape. For i > n−1 and i < 0, outside of the actual input I, the initial symbol is the special default/blank symbol. | O(p(n))
Qs,0 | | Initial state of M. | 1
H0,0 | | Initial position of read/write head. | 1
¬Ti,j,k ∨ ¬Ti,j′,k | j ≠ j′ | At most one symbol per tape cell. | O(p(n)²)
⋁_{j∈Σ} Ti,j,k | | At least one symbol per tape cell. | O(p(n)²)
Ti,j,k ∧ Ti,j′,k+1 → Hi,k | j ≠ j′ | Tape remains unchanged unless written. | O(p(n)²)
¬Qq,k ∨ ¬Qq′,k | q ≠ q′ | Only one state at a time. | O(p(n))
¬Hi,k ∨ ¬Hi′,k | i ≠ i′ | Only one head position at a time. | O(p(n)³)
(Hi,k ∧ Qq,k ∧ Ti,σ,k) → ⋁_{(q, σ, q′, σ′, d) ∈ δ} (Hi+d,k+1 ∧ Qq′,k+1 ∧ Ti,σ′,k+1) | k < p(n) | Possible transitions at computation step k when head is at position i. | O(p(n)²)
⋁_{0≤k≤p(n)} ⋁_{f∈F} Qf,k | | Must finish in an accepting state, not later than in step p(n). | 1
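
The clause families above can be generated mechanically from a description of M. The following partial sketch in Python is illustrative only (the names tableau_clauses, var_T, var_H, var_Q and the simplified machine description are assumptions, and the transition clauses and the "tape remains unchanged unless written" clauses are omitted for brevity); it shows how the variables are numbered and how a few of the families translate into CNF clauses.

    # A partial sketch (not the full reduction) of generating some clause
    # families of B from a description of M. Clauses are lists of signed
    # integer literals, as in the verifier sketch above.

    from itertools import combinations

    def tableau_clauses(states, alphabet, start_state, accept_states, p_n):
        """Return CNF clauses (lists of signed integer literals) for part of B."""
        cells = range(-p_n, p_n + 1)   # tape positions -p(n) .. p(n)
        steps = range(p_n + 1)         # computation steps 0 .. p(n)

        # Number the variables Ti,j,k, Hi,k and Qq,k with distinct positive integers.
        index = {}
        def var(key):
            if key not in index:
                index[key] = len(index) + 1
            return index[key]
        def var_T(i, j, k): return var(('T', i, j, k))
        def var_H(i, k): return var(('H', i, k))
        def var_Q(q, k): return var(('Q', q, k))

        clauses = []
        clauses.append([var_Q(start_state, 0)])   # initial state Qs,0
        clauses.append([var_H(0, 0)])             # initial head position H0,0
        for i in cells:
            for k in steps:
                # At least one symbol per tape cell.
                clauses.append([var_T(i, j, k) for j in alphabet])
                # At most one symbol per tape cell: ¬Ti,j,k ∨ ¬Ti,j',k for j ≠ j'.
                for j, j2 in combinations(alphabet, 2):
                    clauses.append([-var_T(i, j, k), -var_T(i, j2, k)])
        for k in steps:
            # Only one state at a time: ¬Qq,k ∨ ¬Qq',k for q ≠ q'.
            for q, q2 in combinations(states, 2):
                clauses.append([-var_Q(q, k), -var_Q(q2, k)])
        # Must reach an accepting state no later than step p(n).
        clauses.append([var_Q(f, k) for f in accept_states for k in steps])
        return clauses

    # Example: a toy machine with states {'s', 'f'}, tape symbols {'_', '1'},
    # start state 's', accepting state 'f', and time bound p(n) = 2.
    print(len(tableau_clauses(['s', 'f'], ['_', '1'], 's', ['f'], 2)))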

If there is an accepting computation for M on input I, then B is satisfiable by assigning Ti,j,k, Hi,k and Qq,k their intended interpretations. On the other hand, if B is satisfiable, then there is an accepting computation for M on input I that follows the steps indicated by the assignments to the variables.

There are O(p(n)²) Boolean variables, each encodeable in space O(log p(n)). The number of clauses is O(p(n)³), so the size of B is O(log(p(n)) p(n)³). Thus the transformation is certainly a polynomial-time many-one reduction, as required.
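
To spell these counts out: each variable Ti,j,k is indexed by one of the 2p(n)+1 tape cells, one of the finitely many tape symbols, and one of the p(n)+1 steps, giving |Σ|·(2p(n)+1)·(p(n)+1) = O(p(n)²) such variables; the Hi,k variables likewise number O(p(n)²), and the Qq,k variables number |Q|·(p(n)+1) = O(p(n)). The largest clause family is "only one head position at a time", which has one clause for every pair of distinct cells i ≠ i′ and every step k, roughly (2p(n)+1)·p(n)·(p(n)+1) clauses, i.e. O(p(n)³).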

Consequences

The proof shows that any problem in NP can be reduced in polynomial time (in fact, logarithmic space suffices) to an instance of the Boolean satisfiability problem. This means that if the Boolean satisfiability problem could be solved in polynomial time by a deterministic Turing machine, then all problems in NP could be solved in polynomial time, and so the complexity class NP would be equal to the complexity class P.

The significance of NP-completeness was made clear by the publication in 1972 of Richard Karp's landmark paper, "Reducibility among combinatorial problems", in which he showed that 21 diverse combinatorial and graph theoretical problems, each infamous for its intractability, are NP-complete.[2]

Karp showed each of his problems to be NP-complete by reducing another problem (already shown to be NP-complete) to that problem. For example, he showed the problem 3SAT (the Boolean satisfiability problem for expressions in conjunctive normal form with exactly three variables or negations of variables per clause) to be NP-complete by showing how to reduce (in polynomial time) any instance of SAT to an equivalent instance of 3SAT. (First you modify the proof of the Cook–Levin theorem, so that the resulting formula is in conjunctive normal form, then you introduce new variables to split clauses with more than 3 atoms. For example, the clause (A ∨ B ∨ C ∨ D) can be replaced by the conjunction of clauses (A ∨ B ∨ Z) ∧ (¬Z ∨ C ∨ D), where Z is a new variable which will not be used anywhere else in the expression. Clauses with fewer than 3 atoms can be padded; for example, A can be replaced by (A ∨ A ∨ A), and (A ∨ B) can be replaced by (A ∨ B ∨ B) ).
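
The padding-and-splitting step just described is easy to mechanize. The following sketch in Python (the function name to_3sat and the signed-integer clause encoding are illustrative assumptions) converts clauses of arbitrary width into an equisatisfiable collection of clauses with exactly three literals each, introducing one fresh variable per split.

    # A minimal sketch of the padding-and-splitting step described above.
    # Clauses are lists of signed integer literals; a fresh variable Z is
    # introduced for each split.

    def to_3sat(clauses, num_vars):
        """Convert CNF clauses of arbitrary width into equisatisfiable 3-CNF."""
        result = []
        next_var = num_vars + 1
        for clause in clauses:
            clause = list(clause)
            # Pad short clauses by repeating a literal, e.g. (A) -> (A ∨ A ∨ A).
            while len(clause) < 3:
                clause.append(clause[0])
            # Split long clauses: (A ∨ B ∨ C ∨ D ∨ ...) becomes
            # (A ∨ B ∨ Z) ∧ (¬Z ∨ C ∨ D ∨ ...), repeated until the width is 3.
            while len(clause) > 3:
                z = next_var
                next_var += 1
                result.append([clause[0], clause[1], z])
                clause = [-z] + clause[2:]
            result.append(clause)
        return result

    # Example: (A ∨ B ∨ C ∨ D), with A..D numbered 1..4, becomes two clauses.
    print(to_3sat([[1, 2, 3, 4]], 4))   # [[1, 2, 5], [-5, 3, 4]]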

Garey and Johnson presented more than 300 NP-complete problems in their book Computers and Intractability: A Guide to the Theory of NP-Completeness,[6] and new problems are still being discovered to be within that complexity class.

Although many practical instances of SAT can be solved by heuristic methods, the question of whether there is a deterministic polynomial-time algorithm for SAT (and consequently all other NP-complete problems) is still a famous unsolved problem, despite decades of intense effort by complexity theorists, mathematical logicians, and others. For more details, see the article P versus NP problem.

References

  1. Cook, Stephen (1971). "The complexity of theorem proving procedures". Proceedings of the Third Annual ACM Symposium on Theory of Computing. pp. 151–158.
  2. Karp, Richard M. (1972). "Reducibility Among Combinatorial Problems". In Raymond E. Miller; James W. Thatcher (eds.). Complexity of Computer Computations. New York: Plenum. pp. 85–103. ISBN 0-306-30707-3.
  3. T. P. Baker; J. Gill; R. Solovay (1975). "Relativizations of the P = NP question". SIAM Journal on Computing. 4 (4): 431–442. doi:10.1137/0204037.
  4. Dekhtiar, M. (1969). "On the impossibility of eliminating exhaustive search in computing a function relative to its graph". Proceedings of the USSR Academy of Sciences (in Russian). 14: 1146–1148.
  5. Levin, Leonid (1973). "Универсальные задачи перебора" [Universal search problems]. Problems of Information Transmission (in Russian). 9 (3): 115–116. Translated into English by Trakhtenbrot, B. A. (1984). "A survey of Russian approaches to perebor (brute-force searches) algorithms". Annals of the History of Computing. 6 (4): 384–400. doi:10.1109/MAHC.1984.10036.
  6. Garey, Michael R.; David S. Johnson (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman. ISBN 0-7167-1045-5.