Jonathan Vos Post - Holographic Universe Theory of Everything ver 9


Notes 10 Jan 2012 on Oracle to overcome Turing limits of TOE, by Jonathan Vos Post. Draft 9.0 of Theory of Everything, 84 pp., 18,000 words; adds Appendix D: Group Complexity. [Replaces Draft 8.0 of 9 Jan 2012, 66 pp., 13,200 words, which adds Appendix C: Approximately Computed Cosmos; Draft 7.0 of Theory of Everything, 60 pp., 11,900 words, which adds Appendix B: Smart Membranes and fixes typos; Draft 6.0 of Theory of Everything, 45 pp., 9600 words; Draft 5.0 of Theory of Everything, 31 pp., 6250 words, which adds Appendix A: Quantization of the Membrane Model of the Holographic Universe; Draft 4.0 of Theory of Everything, 19 pp., 3550 words, which adds a Table of Contents and expands with more definitions; Draft 3.0 of Theory of Everything, 15 pp., 2850 words, which adds a comment by Dr. Timothy Poston, and a reply; and the 10 Oct 2011 Draft 2.0 of Theory of Everything, 12 pp., 2500 words.] The length of this monograph is partially due to lengthy reviews of technical material to make the presentation self-contained.

TABLE OF CONTENTS
Introduction
Formal Definitions, and Poston's Caveats to Post
Complexity Classes of Oracle Machines
Oracles and Halting Problems
Applications to Cryptography
Bibliography
Here was Draft 1.0
Appendix A: Quantization of the Membrane Model of the Holographic Universe
Black Hole Entropy
Black Hole Information Paradox
Limit on Information Density
Jacob Bekenstein's High Level Summary
Unexpected Connection
Energy, Matter, and Information Equivalence
Claimed Experimental Test at Gravitational Wave Detectors
Bekenstein bound and Bousso's holographic bound
Appendix B: Smart Membranes
Appendix C: Approximately Computed Cosmos
Appendix D: Group Complexity

Introduction

For the sake of argument, suppose that the 3 dimensions of space plus one dimension of time that we perceive the universe to have are a manifestation of a quantum cellular automaton operating according to discrete rules at a 2-dimensional boundary of the holographic cosmos. Given an "oracle" in the Computational Complexity sense (the theory of classifying problems based on how difficult they are to solve), a Physics Theory of Everything can violate the Church-Turing thesis. Given such a magical device, a hypercomputer, one could have what appears to be a finitely generated Turing Machine or Cellular Automaton model of the universe, or the universe itself {see Appendix A: Quantization of the Membrane Model of the Holographic Universe}, which can physically compute uncomputable functions, doing what does not violate physical law as such, but does violate computational complexity limits.

The Church-Turing thesis (formerly commonly known simply as Church's thesis) says that any real-world computation can be translated into an equivalent computation involving a Turing machine. In Church's original formulation (Church 1935, 1936), the thesis says that real-world calculation can be done using the lambda calculus (a formal logic developed by Alonzo Church and Stephen Kleene to address the computable number problem), which is equivalent to using general recursive functions. [Note: There are two camps of thought on the meaning of general recursive function. One camp considers general recursive functions to be equivalent to the usual recursive functions. For members of this camp, the word "general" emphasizes that the class of functions includes all of the specific subclasses, such as the primitive recursive functions (Rogers 1987, p. 27). The other camp considers general recursive functions to be equivalent to total recursive functions.] The Church-Turing thesis encompasses more kinds of computations than those originally envisioned, such as those involving cellular automata, combinators, register machines, and substitution systems. It also applies to other kinds of computations found in theoretical computer science such as quantum computing and probabilistic computing.

There are conflicting points of view about the Church-Turing thesis. One says that it can be proven, and the other says that it serves as a definition for computation. There has never been a proof, but the evidence for its validity comes from the fact that every realistic model of computation discovered so far has been shown to be equivalent. If there were a device which could answer questions beyond those that a Turing machine can answer, then it would be called an oracle. Some computational models are more efficient than others, in terms of computation time and memory, for different tasks.
For example, it is suspected that quantum computers can perform many common tasks with lower time complexity than modern computers, in the sense that for large enough versions of these problems, a quantum computer would solve the problem faster than an ordinary computer. In contrast, there exist questions, such as the halting problem, which an ordinary computer cannot answer, and according to the Church-Turing thesis, no other computational device can answer such a question. The Church-Turing thesis has been extended to a proposition about the processes in the natural world by Stephen Wolfram in his principle of computational equivalence (Wolfram 2002), which also claims that there are only a small number of intermediate levels of computing power before a system is universal and that most natural systems are universal. In complexity theory and computability theory, an oracle machine is an abstract machine used to study decision problems. It can be visualized as a Turing machine with a black box, called an oracle, which is able to decide certain decision problems in a single operation. The problem can be of any complexity class. Even undecidable problems, like the halting problem, can be used. The question raised, as I {Jonathan Vos Post} see it: if the putative Theory of Everything is a Cellular Automaton rule set, for cells at or near Planck-length in radius, on a 2-dimensional surface which is equivalent to 3 dimensions of space, and evolution over time is according to the Cellular Automaton rule set, then, consistent with the Bekenstein bound (see below) and/or Bousso's holographic bound (see below): can there be a Cellular Automaton rule set that yields the observed laws of Physics, or alternatively can the brane be a hypercomputer with a Cellular Automaton rule set plus an oracle, and can that oracle be equivalent to closed timelike curves (i.e., can the Brane Cellular Automaton signal itself backwards in time)?

Formal Definitions, and Poston's Caveats to Post

In the following couple of paragraphs, I must address this comment from Dr. Timothy Poston. Timothy (Tim) Poston (born 19 June 1945 in St Albans, Hertfordshire) is an English mathematician best known for his work on catastrophe theory. His 1972 Ph.D. at the University of Warwick, on "Fuzzy Geometry", was directed by Christopher Zeeman. Less well known was his role as an active founding member of the COUM Transmissions performance group with Genesis P-Orridge. They studied at Hull University together. Tim Poston wrote texts and advised COUM on physics and maths from 1968 to 1978. Tim Poston remains "Scientific Adviser" to P-Orridge to the present.

"It is peculiar, in this context, to read 'an oracle tape, on which an infinite sequence of B's and 1's is printed', with no operational definition of 'infinite.'" "If the Universe can contain such a tape or tape analogue, can we -- without crossing an event horizon -- effectively use it? Note that while quantum computers may speed some kinds of calculation, Quantum uncertainty limits the information density of matter and spacetime." {"There is strong evidence that the area of any surface limits the information content of adjacent spacetime regions, at 10^(69) bits per square meter," wrote Bousso in 2002; see Appendix A: Limit on Information Density.}

In the following, as Tim suggests, we first of all mean COUNTABLY infinite, as we do not have a 19th-century continuum nor any material with continuum properties; rather, the universe is quantized, and thus discrete, per Quantum Mechanics (QM). Of course, General Relativity (GR) does presume that spacetime is an infinitely subdivisible continuum. Resolving the conflict between QM and GR is beyond the scope of this paper. Second, we do not literally demand that a hypercomputer be built with a physically infinite tape. This is a FORMAL definition; what we consider is a functional ANALOGUE of such a Turing Machine, i.e., something functionally and computationally equivalent.

Now, on to the formal definition. An oracle machine is a Turing machine connected to an oracle. The oracle, in this context, is thought of as an entity capable of answering some collection of questions, and usually represented as some subset A of the natural numbers. Intuitively then, the oracle machine can perform all of the usual operations of a Turing machine, and can also query the oracle for an answer to a specific question of the form "is x in A?" The definition given here is one of several possible oracle machine definitions. All these definitions are equivalent, as they agree on which specific functions f can be computed by an oracle machine with oracle A. An oracle machine, like a Turing machine, includes: a work tape: a sequence of cells without beginning or end, each of which may contain a B (for blank) or a 1; a read/write head, which rests on a single cell of the work tape and can read the data there, write new data, and move left or right along the tape; and a control mechanism, which can be in one of a finite number of states, and which will perform different actions (reading data, writing data, moving the head, and changing states) depending on the current state and the data being read.
In addition to these components, an oracle machine also includes: an oracle tape, on which a COUNTABLY infinite sequence of B's and 1's is printed, corresponding to the characteristic function χ_A of the oracle set A; and an oracle head, which (like the read/write head) can move left or right along the oracle tape reading data, but which cannot write.

Formal definition. An oracle Turing machine is a 4-tuple M = (Q, δ, q0, F), where:
Q is a finite set of states; δ : Q × Γ × Γ → Q × Γ × {L, R} × {L, R}, with Γ = {B, 1}, is a partial function called the transition function, where L is left shift and R is right shift; q0 ∈ Q is the initial state; and F ⊆ Q is the set of halting states. The oracle machine is initialized with the work tape containing some input with finitely many 1's and the rest of the tape blank, the oracle tape containing the characteristic function of the oracle, A, and the Turing machine in state q0 with the read/write head reading the first nonblank cell of the work tape, and the oracle head reading the cell of the oracle tape which corresponds to χ_A(0). Thereafter it operates according to δ: if the Turing machine is currently in state q, the read/write head is reading a symbol S1, and the oracle head is reading S2, then if δ(q,S1,S2) = (q',S1',D1,D2), the machine enters state q', the read/write head writes the symbol S1' in place of S1, and then the read/write head moves one cell in direction D1 and the oracle head moves one cell in direction D2. At this point, if q' is a halting state, the machine halts; otherwise it repeats this same procedure. Oracle machines can compute functions as follows: if f is a function that takes natural numbers to natural numbers, M^A is a Turing machine with oracle A, and whenever M^A is initialized with the work tape consisting of n+1 consecutive 1's (and blank elsewhere) M^A eventually halts with f(n) 1's on the tape, then M^A is said to compute the function f. A similar definition can be made for functions of more than one variable, or partial functions. If there is an oracle machine M that computes a function f with oracle A, f is said to be A-computable. If f is the characteristic function of a set B, B is also said to be A-computable, and M is said to be a Turing reduction from B to A.
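To make the query mechanism concrete, the following Python sketch (added for illustration; it is not part of the formal definition above) decides a set B relative to an oracle for a set A, with the oracle passed in as a membership-test callable. The particular sets are ordinary computable toy examples, chosen only to show the shape of a Turing reduction; a genuinely super-Turing oracle could not, of course, be implemented as a Python function.

    # Illustrative sketch: deciding a set B relative to an oracle for a set A.
    # The oracle is just a membership-test callable; here A is a toy computable
    # set (the primes), used only to show the query discipline.

    def oracle_A(x: int) -> bool:
        """Characteristic function chi_A of the toy oracle set A = {primes}."""
        if x < 2:
            return False
        return all(x % d for d in range(2, int(x ** 0.5) + 1))

    def decide_B(n: int, oracle) -> bool:
        """Decide 'is n in B?', where B = { n : n + 2 is in A }.
        All information about A is obtained through oracle queries, mirroring
        the oracle-head reads of the formal machine above; this function is a
        Turing reduction from B to A."""
        return oracle(n + 2)

    # Example: 3 is in B because 5 is prime; 7 is not, because 9 is composite.
    print([n for n in range(10) if decide_B(n, oracle_A)])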
Complexity Classes of Oracle Machines

In computer science, satisfiability (often written in all capitals or abbreviated SAT) is the problem of determining if the variables of a given Boolean formula can be assigned in such a way as to make the formula evaluate to TRUE. Equally important is to determine whether no such assignment exists, which would imply that the function expressed by the formula is identically FALSE for all possible variable assignments. In this latter case, we would say that the function is unsatisfiable; otherwise it is satisfiable. To emphasize the binary nature of this problem, it is frequently referred to as Boolean or propositional satisfiability. SAT was the first known example of an NP-complete problem. That briefly means that there is no known algorithm that efficiently solves all instances of SAT, and it is generally believed (but not proven; see the P versus NP problem) that no such algorithm can exist. Further, a wide range of other naturally occurring decision and optimization problems can be transformed into instances of SAT. A class of algorithms called SAT solvers can efficiently solve a large enough subset of SAT instances to be useful in various practical areas such as circuit design and automatic theorem proving, by solving SAT instances made by transforming problems that arise in those areas. Extending the capabilities of SAT-solving algorithms is an ongoing area of progress. However, no current such methods can efficiently solve all SAT instances. The complexity class of decision problems solvable by an algorithm in class A with an oracle for a language L is called A^L. For example, P^SAT is the class of problems solvable in polynomial time by a deterministic Turing machine with an oracle for the Boolean satisfiability problem. The notation A^B can be extended to a set of languages B (or a complexity class B) by using the following definition: A^B is the union of A^L over all languages L in B.
When a language L is complete for some class B, then A^L = A^B, provided that machines in A can execute reductions used in the completeness definition of class B. In particular, since SAT is NP-complete with respect to polynomial time reductions, P^SAT = P^NP. However, if A = DLOGTIME, then A^SAT may not equal A^NP. It is obvious that NP is contained in or equal to P^NP, but the question of whether NP^NP, P^NP, NP, and P are equal remains tentative at best. It is believed they are different, and this leads to the definition of the polynomial hierarchy. Oracle machines are useful for investigating the relationship between the complexity classes P and NP, by considering the relationship between P^A and NP^A for an oracle A. In particular, it has been shown that there exist languages A and B such that P^A = NP^A and P^B ≠ NP^B. [Baker, 1975] The fact that the P = NP question relativizes both ways is taken as evidence that answering this question is difficult, because a proof technique that relativizes (i.e., is unaffected by the addition of an oracle) will not answer the P = NP question. Most proof techniques relativize.

It is interesting to consider the case where an oracle is chosen randomly from among all possible oracles (an infinite set). It has been shown that in this case, with probability 1, P^A ≠ NP^A. [Bennett 1981] When a question is true for almost all oracles, it is said to be true for a random oracle. This choice of terminology is justified by the fact that random oracles support a statement with probability 0 or 1 only. (This follows from Kolmogorov's zero-one law. In probability theory, Kolmogorov's zero-one law, named in honor of Andrey Nikolaevich Kolmogorov, specifies that a certain type of event, called a tail event, will either almost surely happen or almost surely not happen; that is, the probability of such an event occurring is zero or one.) This is taken as evidence that P ≠ NP. A statement may be true for a random oracle and false for ordinary Turing machines at the same time; for example, for a random oracle A, IP^A ≠ PSPACE^A, while IP = PSPACE. [Chang 1994]

In complexity theory, the satisfiability problem (SAT) is a decision problem whose instance is a Boolean expression written using only AND, OR, NOT, variables, and parentheses. The question is: given the expression, is there some assignment of TRUE and FALSE values to the variables that will make the entire expression true? A formula of propositional logic is said to be satisfiable if logical values can be assigned to its variables in a way that makes the formula true. The Boolean satisfiability problem is NP-complete. The propositional satisfiability problem (PSAT), which decides whether a given propositional formula is satisfiable, is of central importance in various areas of computer science, including theoretical computer science, algorithmics, artificial intelligence, hardware design, electronic design automation, and verification. A literal is either a variable or the negation of a variable (the negation of an expression can be reduced to negated variables by De Morgan's laws). For example, x_1 is a positive literal and not(x_2) is a negative literal. A clause is a disjunction of literals. For example, x_1 OR not(x_2) is a clause (read as "x-sub-one or not x-sub-two"). There are several special cases of the Boolean satisfiability problem in which the formulae are required to be conjunctions of clauses (i.e. formulae in conjunctive normal form).
Determining the satisfiability of a formula in conjunctive normal form where each clause is limited to at most three literals is NP-complete; this problem is called "3SAT", "3CNFSAT", or "3-satisfiability." Determining the satisfiability of a formula in which each clause is limited to at most two literals is NL-complete; this problem is called "2SAT". Determining the satisfiability of a formula in which each clause is a Horn clause (i.e. it contains at most one positive literal) is P-complete; this problem is called Horn-satisfiability. The Cook–Levin theorem states that the Boolean satisfiability problem is NP-complete, and in fact, this was the first decision problem proved to be NP-complete. However, beyond this theoretical significance, efficient and scalable algorithms for SAT that were developed over the last decade have contributed to dramatic advances in our ability to automatically solve problem instances involving tens of thousands of variables and millions of constraints. Examples of such problems in electronic design automation (EDA) include formal equivalence checking, model checking, formal verification of pipelined microprocessors, automatic test pattern generation, routing of FPGAs, and so on. A SAT-solving engine is now considered to be an essential component in the EDA toolbox.

Let me explain NP-complete a little more. NP-complete is a subset of NP, the set of all decision problems whose solutions can be verified in polynomial time; NP may be equivalently defined as the set of decision problems that can be solved in polynomial time on a nondeterministic Turing machine. A problem p in NP is also in NPC (the class of NP-complete problems) if and only if every other problem in NP can be transformed into p in polynomial time. NP-complete can also be used as an adjective: problems in the class NP-complete are known as NP-complete problems. NP-complete problems are studied because the ability to quickly verify solutions to a problem (NP) seems to correlate with the ability to quickly solve that problem (P). It is not known whether every problem in NP can be quickly solved. This is called the P = NP problem. But if any single problem in NP-complete can be solved quickly, then every problem in NP can also be quickly solved, because the definition of an NP-complete problem states that every problem in NP must be quickly reducible to every problem in NP-complete (that is, it can be reduced in polynomial time). Because of this, it is often said that the NP-complete problems are harder or more difficult than NP problems in general.

SAT was, as I said, the first known NP-complete problem, as proved by Stephen Cook in 1971 (see the Wikipedia article "Cook's theorem" for the proof). Until that time, the concept of an NP-complete problem did not even exist. The problem remains NP-complete even if all expressions are written in conjunctive normal form with 3 variables per clause (3-CNF), yielding the 3SAT problem. This means the expression has the form: (x_11 OR x_12 OR x_13) AND (x_21 OR x_22 OR x_23) AND (x_31 OR x_32 OR x_33) AND ... where each x is a variable or a negation of a variable, and each variable can appear multiple times in the expression. A useful property of Cook's reduction is that it preserves the number of accepting answers. For example, if a graph has 17 valid 3-colorings, the SAT formula produced by the reduction will have 17 satisfying assignments. NP-completeness only refers to the run-time of the worst-case instances. Many of the instances that occur in practical applications can be solved much more quickly.
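For concreteness, here is a deliberately naive, worst-case-exponential satisfiability check in Python, added as an illustration (practical SAT solvers are far more sophisticated): it tries all 2^n assignments of n variables against a formula in conjunctive normal form.

    # Illustrative brute-force CNF-SAT check (exponential in the number of
    # variables).  A formula is a list of clauses; a clause is a list of
    # integer literals, where k means variable k and -k means NOT variable k.
    from itertools import product

    def brute_force_sat(clauses, num_vars):
        """Return a satisfying assignment as a dict, or None if unsatisfiable."""
        for bits in product([False, True], repeat=num_vars):
            assignment = {i + 1: bits[i] for i in range(num_vars)}
            if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return assignment
        return None

    # (x1 OR x2 OR NOT x3) AND (NOT x1 OR x3) AND (NOT x2)
    print(brute_force_sat([[1, 2, -3], [-1, 3], [-2]], 3))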
SAT is easier if the formulas are restricted to those in disjunctive normal form, that is, if they are a disjunction (OR) of terms, where each term is a conjunction (AND) of literals (possibly negated variables). Such a formula is satisfiable if and only if at least one of its terms is satisfiable, and a term is satisfiable if and only if it does not contain both x and NOT x for some variable x. This can be checked in polynomial time.
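A minimal sketch of that polynomial-time check, added here in the same illustrative encoding as the CNF example above (a term is a list of integer literals, with -k denoting NOT x_k):

    # DNF satisfiability: a disjunction of terms is satisfiable iff some term
    # contains no complementary pair of literals.  This is a linear scan.
    def dnf_satisfiable(terms):
        for term in terms:
            literals = set(term)
            if not any(-lit in literals for lit in literals):
                return True   # this term alone can be made true
        return False

    # (x1 AND NOT x1) OR (x2 AND NOT x3)  ->  satisfiable via the second term
    print(dnf_satisfiable([[1, -1], [2, -3]]))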


Oracles and Halting Problems

It is possible to posit the existence of an oracle which computes a non-computable function, such as the answer to the halting problem or some equivalent. A machine with an oracle of this sort is a hypercomputer. Interestingly, the halting paradox still applies to such machines; although they determine whether particular Turing machines will halt on particular inputs, they cannot determine, in general, if machines equivalent to themselves will halt. This fact creates a hierarchy of machines, called the arithmetical hierarchy, each with a more powerful halting oracle and an even harder halting problem.

Applications to Cryptography

See Wikipedia Main article: Random oracle. In cryptography, oracles are used to make arguments for the security of cryptographic protocols where a hash function is used. A security reduction for the protocol is given in the case where, instead of a hash function, a random oracle answers each query randomly but consistently; the oracle is assumed to be available to all parties including the attacker, as the hash function is. Such a proof shows that unless the attacker solves the hard problem at the heart of the security reduction, they must make use of some interesting property of the hash function to break the protocol; they cannot treat the hash function as a black box (i.e., as a random oracle).

See also: Turing machine; Turing reduction; Interactive proof system.

Bibliography

Alan Turing, Systems of logic based on ordinals, Proc. London Math. Soc., 45, 1939. C. Papadimitriou. Computational Complexity. Addison-Wesley, 1994. Section 14.3: Oracles, pp. 339–343. Michael Sipser (1997). Introduction to the Theory of Computation. PWS Publishing. ISBN 0-534-94728-X. Section 9.2: Relativization, pp. 318–321. Martin Davis, editor: The Undecidable, Basic Papers on Undecidable Propositions, Unsolvable Problems And Computable Functions, Raven Press, New York, 1965. Turing's paper is in this volume. Papers include those by Gödel, Church, Rosser, Kleene, and Post. T. P. Baker, J. Gill, R. Solovay. Relativizations of the P =? NP Question. SIAM Journal on Computing, 4(4): 431-442 (1975). Bekenstein, Jacob D. (January 1981). "Universal upper bound on the entropy-to-energy ratio for bounded systems". Physical Review D 23 (2): 287–298. Bibcode 1981PhRvD..23..287B. doi:10.1103/PhysRevD.23.287. Bekenstein, Jacob D. (August 2003). "Information in the Holographic Universe — Theoretical results about black holes suggest that the universe could be like a gigantic hologram". Scientific American 17: p. 59. http://www.sciam.com/article.cfm?articleid=000AF072-4891-1F0A-97AE80A84189EEDF. J. D. Bekenstein, "Black Holes and the Second Law", Lettere al Nuovo Cimento, Vol. 4, No. 15 (August 12, 1972), pp. 737-740, doi:10.1007/BF02757029, Bibcode 1972NCimL...4..737B. Mirror link. Jacob D. Bekenstein, "Black Holes and Entropy", Physical Review D, Vol. 7, No. 8 (April 15, 1973), pp. 2333-2346, doi:10.1103/PhysRevD.7.2333, Bibcode: 1973PhRvD...7.2333B. Mirror link. Jacob D. Bekenstein, "Generalized second law of thermodynamics in black-hole physics", Physical Review D, Vol. 9, No. 12 (June 15, 1974), pp. 3292-3300, doi:10.1103/PhysRevD.9.3292, Bibcode: 1974PhRvD...9.3292B. Mirror link. Jacob D. Bekenstein, "Statistical black-hole thermodynamics", Physical Review D, Vol. 12, No. 10 (November 15, 1975), pp. 3077-3085, doi:10.1103/PhysRevD.12.3077, Bibcode: 1975PhRvD..12.3077B. Mirror link. Jacob D. Bekenstein, "Black-hole thermodynamics", Physics Today, Vol.
33, Issue 1 (January 1980), pp. 24-31, doi:10.1063/1.2913906, Bibcode: 1980PhT....33a..24B. Mirror link. Jacob D. Bekenstein, "Universal upper bound on the entropy-to-energy ratio for bounded systems", Physical Review D, Vol. 23, No. 2, (January 15, 1981), pp. 287-298, doi:10.1103/PhysRevD.23.287, Bibcode: 1981PhRvD..23..287B. Mirror link. Jacob D. Bekenstein, "Energy Cost of Information Transfer", Physical Review Letters, Vol. 46, No. 10 (March 9, 1981), pp. 623-626, doi:10.1103/PhysRevLett.46.623, Bibcode: 1981PhRvL..46..623B. Mirror link. Jacob D. Bekenstein, "Specific entropy and the sign of the energy", Physical Review D, Vol. 26, No. 4 (August 15, 1982), pp. 950-953, doi:10.1103/PhysRevD.26.950, Bibcode: 1982PhRvD..26..950B. Jacob D. Bekenstein, "Entropy content and information flow in systems with limited energy", Physical Review D, Vol. 30, No. 8, (October 15, 1984), pp. 1669-1679, doi:10.1103/PhysRevD.30.1669, Bibcode: 1984PhRvD..30.1669B. Mirror link. Jacob D. Bekenstein, "Communication and energy", Physical Review A, Vol. 37, Issue 9 (May 1988), pp. 3437-3449, doi:10.1103/PhysRevA.37.3437, Bibcode: 1988PhRvA..37.3437B. Mirror link. Marcelo Schiffer and Jacob D. Bekenstein, "Proof of the quantum bound on specific entropy for free fields", Physical Review D, Vol. 39, Issue 4 (February 15, 1989), pp. 1109-1115, doi:10.1103/PhysRevD.39.1109 PMID 9959747, Bibcode: 1989PhRvD..39.1109S. Jacob D. Bekenstein, "Is the Cosmological Singularity Thermodynamically Possible?", International Journal of Theoretical Physics, Vol. 28, Issue 9 (September 1989), pp. 967-981, doi:10.1007/BF00670342, Bibcode: 1989IJTP...28..967B. Jacob D. Bekenstein, "Entropy bounds and black hole remnants", Physical Review D, Vol. 49, Issue 4 (February 15, 1994), pp. 1912-1921, doi:10.1103/PhysRevD.49.1912, Bibcode: 1994PhRvD..49.1912B. Also at arXiv:gr-qc/9307035, July 25, 1993. Oleg B. Zaslavskii, "Generalized second law and the Bekenstein entropy bound in Gedankenexperiments with black holes", Classical and Quantum Gravity, Vol. 13, No. 1 (January 1996), pp. L7-L11, doi:10.1088/0264-9381/13/1/002, Bibcode: 1996CQGra..13L...7Z. See also O. B. Zaslavskii, "Corrigendum to 'Generalized second law and the Bekenstein entropy bound in Gedankenexperiments with black holes'", Classical and Quantum Gravity, Vol. 13, No. 9 (September 1996), p. 2607, doi:10.1088/0264-9381/13/9/024, Bibcode: 1996CQGra..13.2607Z. Jacob D. Bekenstein, "Non-Archimedean character of quantum buoyancy and the generalized second law of thermodynamics", Physical Review D, Vol. 60, Issue 12 (December 15, 1999), Art. No. 124010, 9 pages, doi:10.1103/PhysRevD.60.124010, Bibcode: 1999PhRvD..60l4010B. Also at arXiv:gr-qc/9906058, June 16, 1999. C. H. Bennett, J. Gill. Relative to a Random Oracle A, PA != NPA != co-NPA with Probability 1. SIAM Journal on Computing, 10(1): 96-113 (1981) “Boolean Satisfiability Problem”, Retrieved 23 Oct 2011 from http://en.wikipedia.org/wiki/Boolean_satisfiability_problem Bousso, Raphael (2002). "The holographic principle". Reviews of Modern Physics 74 (3): 825–874. arXiv:hep-th/0203101. Bibcode 2002RvMP...74..825B. doi:10.1103/RevModPhys.74.825. Bousso, Raphael (13 Aug 1999). "A Covariant Entropy Conjecture". Journal of High Energy Physics (7). arXiv:hep-th/9905177. Bibcode 1999JHEP...07..004B. doi:10.1088/1126-6708/1999/07/004. Bousso, Raphael (9 Aug 1999). "Holography in General Space-times". Journal of High Energy Physics (6). arXiv:hep-th/9906022. Bibcode 1999JHEP...06..028B. doi:10.1088/1126-6708/1999/06/028. 
Archived from the original on 24 June 1999. http://arxiv.org/abs/hep-th/9906022. Bousso, Raphael (5 Aug 2002). "The holographic principle". Review of Modern Physics 74 (3): 825–874. arXiv:hep-th/0203101. Bibcode 2002RvMP...74..825B. doi:10.1103/RevModPhys.74.825. Bouwmeester, D.; Ekert, A.; and Zeilinger, A. (Eds.). The Physics of Quantum Information: Quantum Cryptography, Quantum Teleportation, Quantum Computation. Berlin: Springer-Verlag, 2000. Brown, J. D. & Henneaux, M. (1986). "Central charges in the canonical realization of asymptotic symmetries: an example from three-dimensional gravity". Communications in Mathematical Physics 104 (2): 207–226. Bibcode 1986CMaPh.104..207B. doi:10.1007/BF01211590 . Brzezniak, Zdzislaw; Tomasz Zastawniak (2000), Basic Stochastic Processes, Springer, ISBN 3-5407-6175-6 Richard Chang, Benny Chor, Oded Goldreich, Juris Hartmanis, Johan Hastad, Desh Ranjan, Pankaj Rohatgi. The Random Oracle Hypothesis is False. Journal of Computer and System Sciences, volume 49, issue 1, pp.24–39. August 1994. ISSN 0022-0000. http://citeseer.ist.psu.edu/282397.html Chown, Marcus (15 January 2009). "Our world may be a giant hologram". NewScientist. http://www.newscientist.com/article/mg20126911.300. Retrieved 2010-04-19. Church, A. Abstract No. 204. Bull. Amer. Math. Soc. 41, 332-333, 1935. Church, A. "An Unsolvable Problem of Elementary Number Theory." Amer. J. Math. 58, 345-363, 1936. Davies, Paul. "Multiverse Cosmological Models and the Anthropic Principle". CTNS. http://www.google.com/search?hl=en&lr=&as_qdr=all&q=holographic+everything+site%3Actnsstars.org. Retrieved 2008-03-14. Hogan, Craig J. (2008). "Measurement of quantum fluctuations in geometry". Physical Review D 77 (10): 104031. arXiv:0712.3419. Bibcode 2008PhRvD..77j4031H. doi:10.1103/PhysRevD.77.104031 . “Holographic Principle”, Retrieved 23 Oct 2011 from http://en.wikipedia.org/wiki/Holographic_principle “Is there a physically universal cellular automaton or Hamiltonian?” Dominik Janzing (Submitted on 9 Sep 2010) arXiv Lloyd, Seth (2002-05-24). "Computational Capacity of the Universe". Physical Review Letters 88 (23): 237901. arXiv:quant-ph/0110141. Bibcode 2002PhRvL..88w7901L. doi:10.1103/PhysRevLett.88.237901. PMID 12059399. Majumdar, Parthasarathi (1998). "Black Hole Entropy and Quantum Gravity". ArXiv: General Relativity and Quantum Cosmology 73: 147. arXiv:gr-qc/9807045. Bibcode 1999InJPB..73..147M. Nielsen, M. and Chuang, I. Quantum Computation and Quantum Information. Cambridge, England: Cambridge University Press, 2000. “Oracle Machine”, Retrieved 10 Oct 2011 from "http://en.wikipedia.org/w/index.php?title=Oracle_machine&oldid=427436332" Penrose, R. The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford, England: Oxford University Press, pp. 47-49, 1989. Poston, Timothy, private communication, 18 October 2011 Pour-El, M. B. "The Structure of Computability in Analysis and Physical Theory: An Extension of Church's Thesis." Ch. 13 in Handbook of Computability Theory (Ed. E. R. Griffor). Amsterdam, Netherlands: Elsevier, pp. 449-470, 1999. Rogers, H. Theory of Recursive Functions and Effective Computability. Cambridge, MA: MIT Press, 1987. Rosenthal, Jeffrey S. (2006), A first look at rigorous probability theory, Hackensack, NJ: World Scientific Publishing Co. Pte. Ltd., p. 37, ISBN 9789812703712 Rowland, Todd. "Church-Turing Thesis." From MathWorld--A Wolfram Web Resource, created by Eric W. Weisstein. 
http://mathworld.wolfram.com/Church-TuringThesis.html Sakharov Conf on Physics, Moscow, (91):447-454. Susskind, Leonard (1995). "The World as a Hologram". Journal of Mathematical Physics 36 (11): 6377–6396. arXiv:hep-th/9409089. Bibcode 1995JMP....36.6377S. doi:10.1063/1.531249. Stroock, Daniel (1999), Probability theory: An analytic view (revised ed.), Cambridge University Press, ISBN 978-0-521-66349-6. Susskind, L., "The Black Hole War – My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics", Little, Brown and Company (2008). 't Hooft, Gerard (1993). Dimensional Reduction in Quantum Gravity. pp. 10026. arXiv:gr-qc/9310026. Bibcode 1993gr.qc....10026T. 't Hooft's original paper. Wolfram, S. A New Kind of Science. Champaign, IL: Wolfram Media, p. 1125, 2002.

=====================
Here was Draft 1.0

Here is a summary of some thoughts of mine on Computational Metaphysics relating to "Theory of Everything" -- which is a Term of Art, a phrase with a special meaning to those "in the club" -- but I have metaphysically questioned whether there can even BE such a theory. In my scholarly publications and in the blogosphere I have raised metaphysical questions about any putative Theory of Everything (ToE). Let me abstract some of these into a few telegraphic sentences. (1) If a ToE is a PHYSICAL THEORY, then it falls within the set of ALL POSSIBLE PHYSICAL THEORIES. (2) This set is itself a proper subset of what Astronomer/Philosopher Fritz Zwicky called the IDEOCOSM: the space of all possible ideas. (3) Specifically, if we restrict ourselves to THE SPACE OF ALL POSSIBLE MATHEMATICAL PHYSICS THEORIES, then we are faced with a question that I keep asking: (4) Is there a hyperplane or hypersurface that separates the SPACE OF ALL POSSIBLE PHYSICAL THEORIES from the SPACE OF ALL MATHEMATICAL THEORIES, MANY OF WHICH ARE "UNPHYSICAL"? (5) Suppose we try to avoid paradoxes that stem from such spaces of theories being infinite, and further restrict ourselves to THE SPACE OF ALL FINITELY GENERATED MATHEMATICAL PHYSICAL THEORIES. (6) This includes, therefore, all MATHEMATICAL PHYSICAL THEORIES represented in some LANGUAGE which has a finite number of symbols from a finite alphabet, and thus a finite number of syntactic terms, is based upon a finite set of axioms, and derives from those by a finite set of rules of inference. (7) We need to be extremely careful about whether we have recursively-defined axioms, recursively-defined rules of inference, and the like. (8) We try to make no a priori assumptions about structure, finiteness and the like, of EXPERIMENTALLY true observations in the PHENOMENOLOGY of this SPACE OF ALL FINITELY GENERATED MATHEMATICAL PHYSICAL THEORIES. INDEED, WE EXPECT THIS PHENOMENOLOGY TO HAVE INFINITE POSSIBLE EXPERIMENTAL RESULTS. (9) NOW WE ARE LEFT WITH VERY SUBTLE META-QUESTIONS ABOUT *completeness* AND *consistency*. (10) GIVEN GÖDEL'S RESULTS, WE SUSPECT THAT A CONSISTENCY PROOF OF THE SPACE OF ALL FINITELY-GENERATED MATHEMATICAL PHYSICAL THEORIES IS NOT POSSIBLE FROM WITHIN THAT SPACE. (11) BUT WHERE DO WE LIVE? WE HUMANS WITH OUR SCIENCE, OUR EQUIPMENT, OUR LOGIC -- ARE WE MADE OF MATTER AND ENERGY AND INFORMATION THAT CAN BE DESCRIBED BY A PUTATIVE THEORY OF EVERYTHING? GÖDEL'S METHODS ARE CONSTRUCTIVE, NOT NEEDING TRANSFINITE INDUCTION AND OTHER SUCH SUPER-METHODS. (12) IF NO SINGLE SUFFICIENTLY POWERFUL THEORY OF MATHEMATICS EXISTS (PER GÖDEL) THEN IT SEEMS THAT NO SUFFICIENTLY POWERFUL SINGLE THEORY OF MATHEMATICAL PHYSICS EXISTS. (13) THIS IS LIKELY EVEN IF DIFFERENT UNIVERSES IN THE METAVERSE EACH HAVE THEIR OWN THEORY OF EVERYTHING, OR EVEN IF OUR UNIVERSE, THOUGH FINITELY-GENERATED, HAS AN INFINITE NUMBER OF "LAWS" THAT OPERATE AT DIFFERENT ENERGY LEVELS OR COMBINATORICS OF INITIAL CONDITIONS AND BOUNDARY CONDITIONS. (14) In this metatheoretical sense, I question that there is any "Ultimate Reality" -- and am thus unable to answer Einstein's question (I paraphrase): "Did God have a choice when he created the physical laws of the universe?" (15) I discussed this a number of times with my mentor and co-author, Richard Feynman. Each day he seemed to have a different take on this.
=====================
Keywords: "Theory of Everything"; Computational Metaphysics; Ideocosm; Phenomenology; Finitely-generated; Completeness; Consistency; Meta-theory; Metaverse
=====================
Appendix A: Quantization of the Membrane Model of the Holographic Universe

The holographic principle is a property of quantum gravity and string theories which states that the description of a volume of space can be thought of as encoded on a boundary to the region -- preferably a light-like boundary like a gravitational horizon. First proposed by Gerard ‘t Hooft, it was given a precise string-theory interpretation by Leonard Susskind, who combined his ideas with previous ones of Gerard ‘t Hooft and Charles Thorn. In fact, as pointed out by Raphael Bousso, Thorn observed in 1978 that string theory admits a lower dimensional description in which gravity emerges from it in what would now be called a holographic way. In a larger and more speculative sense, the theory suggests that the entire universe can be seen as a two-dimensional information structure “painted” on the cosmological horizon, such that the three dimensions we observe are only an effective description at macroscopic scales and at low energies. Cosmological holography has not been made mathematically precise, partly because the cosmological horizon has a finite area and grows with time. This draft paper by Jonathan Vos Post adds this discussion, because of the metaphysical possibility that a cellular automaton, with or without an oracle, “lives” on the cosmological horizon, and defines the apparently 3-dimensional universe. The holographic principle was inspired by black hole thermodynamics, which implies that the maximal entropy in any region scales with the radius squared, and not cubed as might be expected. In the case of a black hole, the insight was that the informational content of all the objects which have fallen into the hole can be entirely contained in surface fluctuations of the event horizon. The holographic principle resolves the black hole information paradox within the framework of string theory. Black Hole Entropy See Wikipedia Main article: Black hole thermodynamics

An object with entropy is microscopically random, like a hot gas. A known configuration of classical fields has zero entropy: there is nothing random about electric and magnetic fields, or gravitational waves. Since black holes are exact solutions of Einstein’s equations, they were thought not to have any entropy either. But Jacob Bekenstein noted that this leads to a violation of the second law of thermodynamics. If one throws a hot gas with entropy into a black hole, once it crosses the horizon, the entropy would disappear. The random properties of the gas would no longer be seen once the black hole had absorbed the gas and settled down. The second law can only be salvaged if black holes are in fact random objects, with an enormous entropy whose increase is greater than the entropy carried by the gas. Bekenstein argued that black holes are maximum entropy objects -- that they have more entropy than anything else in the same volume. In a sphere of radius R, the entropy in a relativistic gas increases as the energy increases. The only limit is gravitational; when there is too much energy the gas collapses into a black hole. Bekenstein used this to put an upper bound on the entropy in a region of space, and the bound was proportional to the area of the region. He concluded that the black hole entropy is directly proportional to the area of the event horizon. Stephen Hawking had shown earlier that the total horizon area of a collection of black holes always increases with time. The horizon is a boundary defined by lightlike geodesics; it is those light rays that are just barely unable to escape. If neighboring geodesics start moving toward each other they eventually collide, at which point their extension is inside the black hole. So the geodesics are always moving apart, and the number of geodesics which generate the boundary, the area of the horizon, always increases. Hawking's result was called the second law of black hole thermodynamics, by analogy with the law of entropy increase, but at first, he did not take the analogy too seriously. Hawking knew that if the horizon area were an actual entropy, black holes would have to radiate. When heat is added to a thermal system, the change in entropy is the increase in mass-energy divided by temperature: dS = dM / T If black holes have a finite entropy, they should also have a finite temperature. In particular, they would come to equilibrium with a thermal gas of photons. This means that black holes would not only absorb photons, but they would also have to emit them in the right amount to maintain detailed balance. Time independent solutions to field equations don't emit radiation, because a time independent background conserves energy. Based on this principle, Hawking set out to show that black holes do not radiate. But, to his surprise, a careful analysis convinced him that they do, and in just the right way to come to equilibrium with a gas at a finite temperature. Hawking’s calculation fixed the constant of proportionality at 1/4; the entropy of a black hole is one quarter its horizon area in Planck units. The entropy is proportional to the logarithm of the number of microstates, the ways a system can be configured microscopically while leaving the macroscopic description unchanged. Black hole entropy is deeply puzzling -- it says that the logarithm of the number of states of a black hole is proportional to the area of the horizon, not the volume in the interior. 
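For reference, the relation just described ("the entropy of a black hole is one quarter its horizon area in Planck units") can be written out explicitly; this display is added here for the reader's convenience, in standard notation (Boltzmann's constant k_B, Newton's constant G, reduced Planck constant ħ, speed of light c, horizon area A, Planck length ℓ_P):

    S_{BH} = \frac{k_B A}{4\,\ell_P^{2}} = \frac{k_B c^{3} A}{4 G \hbar},
    \qquad \ell_P^{2} = \frac{G \hbar}{c^{3}}.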
Later, Raphael Bousso came up with a covariant version of the bound based upon null sheets. Black Hole Information Paradox See Wikipedia Main article: Black hole information paradox Hawking’s calculation suggested that the radiation which black holes emit is not related in any way to the matter that they absorb. The outgoing light rays start exactly at the edge of the black hole and spend a long time near the horizon, while the infalling matter only reaches the horizon much later. The infalling and outgoing mass/energy only interact when they cross. It is implausible that the outgoing state would be completely determined by some tiny residual scattering. Hawking interpreted this to mean that when black holes absorb some photons in a pure state described by a wave function, they re-emit new photons in a thermal mixed state described by a density matrix. This would mean that quantum mechanics would have to be modified, because in quantum mechanics, states which are superpositions with probability amplitudes never become states which are probabilistic mixtures of different possibilities. Troubled by this paradox, Gerard ‘t Hooft analyzed the emission of Hawking radiation in more detail. He noted that when Hawking radiation escapes, there is a way in which incoming particles can modify the outgoing particles. Their gravitational field would deform the horizon of the black hole, and the deformed horizon could produce different outgoing particles than the undeformed horizon. When a particle falls into a black hole, it is boosted relative to an outside observer, and its gravitational field assumes a universal form. ‘t Hooft showed that this field makes a logarithmic tent-pole shaped bump on the horizon of a black hole, and like a shadow, the bump is an alternate description of the particle’s location and mass. For a four-dimensional spherical uncharged black hole, the deformation of the horizon is similar to the type of deformation which describes the emission and absorption of particles on a string-theory world sheet. Since the deformations on the surface are the only imprint of the incoming particle, and since these deformations would have to completely determine the outgoing particles, ‘t Hooft believed that the correct description of the black hole would be by some form of string theory. This idea was made more precise by Leonard Susskind, who had also been developing holography, largely independently. Susskind argued that the oscillation of the horizon of a black hole is a complete description of both the infalling and outgoing matter, because the world-sheet theory of string theory was just such a holographic description. While short strings have zero entropy, he could identify long highly excited string states with ordinary black holes. This was a deep advance because it revealed that strings have a classical interpretation in terms of black holes. This work showed that the black hole information paradox is resolved when quantum gravity is described in an unusual string-theoretic way. The space-time in quantum gravity should emerge as an effective description of the theory of oscillations of a lower dimensional black-hole horizon. This suggested that any black hole with appropriate properties, not just strings, would serve as a basis for a description of string theory. 
In 1995, Susskind, along with collaborators Tom Banks, Willy Fischler, and Stephen Shenker, presented a formulation of the new M-theory using a holographic description in terms of charged point black holes, the D0 branes of type IIA string theory. The Matrix theory they proposed was first suggested as a description of two branes in 11-dimensional supergravity by Bernard de Wit, Jens Hoppe, and Hermann Nicolai. The latter authors reinterpreted the same matrix models as a description of the dynamics of point black holes in particular limits. Holography allowed them to conclude that the dynamics of these black holes give a complete non-perturbative formulation of M-theory. In 1997, Juan Maldacena gave the first holographic descriptions of a higher dimensional object, the 3+1 dimensional type IIB membrane, which resolved a long-standing problem of finding a string description which describes a gauge theory. These developments simultaneously explained how string theory is related to quantum chromodynamics, and afterwards holography gained wide acceptance.

Limit on Information Density

Entropy, if considered as information (see information entropy), is measured in bits. The total quantity of bits is related to the total degrees of freedom of matter/energy. For a given energy in a given volume, there is an upper limit to the density of information (the Bekenstein bound) about the whereabouts of all the particles which compose matter in that volume, suggesting that matter itself cannot be subdivided infinitely many times and there must be an ultimate level of fundamental particles. As the degrees of freedom of a particle are the product of all the degrees of freedom of its sub-particles, were a particle to have infinite subdivisions into lower-level particles, then the degrees of freedom of the original particle must be infinite, violating the maximal limit of entropy density. The holographic principle thus implies that the subdivisions must stop at some level, and that the fundamental particle is a bit (1 or 0) of information. The most rigorous realization of the holographic principle is the AdS/CFT correspondence by Juan Maldacena. However, J.D. Brown and Marc Henneaux had rigorously proved already in 1986, that the asymptotic symmetry of 2+1 dimensional gravity gives rise to a Virasoro algebra, whose corresponding quantum theory is a 2 dimensional conformal field theory. Jacob Bekenstein’s High Level Summary The physical universe is widely seen to be composed of “matter” and “energy.” In his 2003 article published in Scientific American magazine, Jacob Bekenstein summarized a current trend started by John Archibald Wheeler, which suggests scientists may “regard the physical world as made of information, with energy and matter as incidentals.” Bekenstein quotes William Blake, and asks whether the holographic principle implies that seeing “the world in a grain of sand,” could be more than “poetic license.” Unexpected Connection Bekenstein’s topical overview “A Tale of Two Entropies” describes potentially profound implications of Wheeler’s trend in part by noting a previously unexpected connection between the world of information theory and classical physics. This connection was first described shortly after the seminal 1948 papers of American applied mathematician Claude E. Shannon introduced today's most widely used measure of information content, now known as Shannon entropy. As an objective measure of the quantity of information, Shannon entropy has been enormously useful, as the design of all modern communications and data storage devices, from cellular phones to modems to hard disk drives and DVDs, rely on Shannon entropy. In thermodynamics (the branch of physics dealing with heat), entropy is popularly described as a measure of the “disorder” in a physical system of matter and energy. In 1877 Austrian physicist Ludwig Boltzmann described it more precisely in terms of the number of distinct microscopic states that the particles composing a macroscopic “chunk” of matter could be in while still looking like the same macroscopic “chunk.” As an example, for the air in a room, its thermodynamic entropy would equal the logarithm of the count of all the ways that the individual gas molecules could be distributed in the room, and all the ways they could be moving. Energy, Matter, and Information Equivalence Claude Shannon’s efforts to find a way to quantify the information contained in, for example, an e-mail message, led him unexpectedly to a formula with the same form as Ludwig Boltzmann’s. 
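As a small, concrete illustration of that formal parallel (added here for clarity; it is not part of Bekenstein's or Shannon's text), the following Python sketch computes the Shannon entropy H = -sum_i p_i log2 p_i of a probability distribution; for a uniform distribution over W equally likely microstates this reduces to log2 W bits, the same count that appears, up to the conventional factor k ln 2, in Boltzmann's S = k ln W:

    import math

    def shannon_entropy_bits(probabilities):
        """Shannon entropy H = -sum p*log2(p), in bits (0*log 0 taken as 0)."""
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    # Uniform distribution over W = 8 microstates: H = log2(8) = 3 bits,
    # corresponding to Boltzmann entropy S = k_B * ln(8).
    W = 8
    print(shannon_entropy_bits([1.0 / W] * W))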
Jacob Bekenstein summarizes that "Thermodynamic entropy and Shannon entropy are conceptually equivalent: the number of arrangements that are counted by Boltzmann entropy reflects the amount of Shannon information one would need to implement any particular arrangement..." of matter and energy. The only salient difference between the thermodynamic entropy of physics and the Shannon entropy of information is in the units of measure; the former is expressed in units of energy divided by temperature, the latter in essentially dimensionless "bits" of information, and so the difference is merely a matter of convention. The holographic principle states that the entropy of ordinary mass (not just black holes) is also proportional to surface area and not volume; that volume itself is illusory and the universe is really a hologram which is isomorphic to the information "inscribed" on the surface of its boundary.

The question raised, as I {Jonathan Vos Post} see it: if the putative Theory of Everything is a Cellular Automaton rule set, for cells at or near Planck-length in radius, on a 2-dimensional surface which is equivalent to 3 dimensions of space, and evolution over time is according to the Cellular Automaton rule set, then, consistent with the Bekenstein bound (see below) and/or Bousso's holographic bound (see below): can there be a Cellular Automaton rule set that yields the observed laws of Physics, or alternatively can the brane be a hypercomputer with a Cellular Automaton rule set plus an oracle, and can that oracle be equivalent to closed timelike curves (i.e., can the Brane Cellular Automaton signal itself backwards in time)?

Claimed Experimental Test at Gravitational Wave Detectors

The Fermilab physicist Craig Hogan claims that the holographic principle may imply quantum fluctuations in spatial position that would lead to apparent background noise or holographic noise measurable at gravitational wave detectors, in particular GEO 600.

Bekenstein bound and Bousso's holographic bound

Bekenstein bound [In physics, the Bekenstein bound is an upper limit on the entropy S, or information I, that can be contained within a given finite region of space which has a finite amount of energy -- or conversely, the maximum amount of information required to perfectly describe a given physical system down to the quantum level. It implies that the information of a physical system, or the information necessary to perfectly describe that system, must be finite if the region of space and the energy are finite. In computer science, this implies that there is a maximum information-processing rate for a physical system that has a finite size and energy, and that a Turing machine with unbounded memory is not physically possible.] [The universal form of the bound was originally found by Jacob Bekenstein as the inequality S <= 2 pi k R E / (ħ c), where S is the entropy, k is Boltzmann's constant, R is the radius of a sphere that can enclose the given system, E is the total mass-energy including any rest masses, ħ is the reduced Planck constant, and c is the speed of light. Note that while gravity plays a significant role in its enforcement, the bound is independent of Newton's Constant G. In informational terms, the bound is given by I <= 2 pi R E / (ħ c ln 2), where I is the information, expressed in number of bits, contained in the quantum states in the sphere (that is, I = S / (k ln 2)). The ln 2 factor comes from defining the information as the logarithm to the base 2 of the number of quantum states. The right-hand side of the foregoing relation, with E = m c^2, is approximately equal to 2.5769087×10^43 × (mass in kilograms) × (radius in meters) bits.]
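As a quick numerical check of that coefficient (this sketch is an illustration added to the text, not part of the quoted material), the following Python lines evaluate the informational bound I <= 2 pi R E / (ħ c ln 2) with E = m c^2, using standard SI values for the constants:

    import math

    # Illustrative check of the Bekenstein bound coefficient quoted above.
    hbar = 1.054571817e-34   # reduced Planck constant, J*s
    c = 2.99792458e8         # speed of light, m/s

    def bekenstein_bound_bits(mass_kg, radius_m):
        """Upper bound on the information (in bits) of a system of total
        mass-energy E = m*c^2 enclosed in a sphere of radius R."""
        energy = mass_kg * c ** 2
        return 2 * math.pi * radius_m * energy / (hbar * c * math.log(2))

    # Coefficient per kilogram per meter: about 2.577e43 bits, matching the
    # figure quoted above.
    print(bekenstein_bound_bits(1.0, 1.0))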
Bousso's holographic bound

[A simple generalization of the Black Hole entropy bound (cf. holographic principle) to generic systems is that, in quantum gravity, the maximum entropy which can be enclosed by a spatial boundary is given by a quarter of its surface area. However, as a general rule, this simple generalization appears to be false, because the spatial boundary can be taken to be within the event horizon of a black hole. It also fails in cosmological models.] [Raphael Bousso came up with a modification that the spatial boundary should not be a trapped surface. This led him to Bousso's holographic bound, also known as the covariant entropy bound. Basically, a spatial boundary is valid if, when you consider both null sheets orthogonal to its tangent space, the expansion factors of both point in the same direction. This defines inside and outside. The entropy of the interior region is bounded by the surface area of the boundary.]

Brane cosmology

[The central idea is that the visible, four-dimensional universe is restricted to a brane inside a higher-dimensional space, called the "bulk." If the additional dimensions are compact, then the observed universe contains the extra dimensions, and then no reference to the bulk is appropriate. In the bulk model, at least some of the extra dimensions are extensive (possibly infinite), and other branes may be moving through this bulk. Interactions with the bulk, and possibly with other branes, can influence our brane and thus introduce effects not seen in more standard cosmological models.] [One of the earliest documented attempts at brane cosmology dates to 1983. The authors discussed the possibility that the Universe has (3 + N) + 1 dimensions, but ordinary particles are confined in a potential well which is narrow along N spatial directions and flat along three others, and proposed a particular five-dimensional model. In 1998/99 Merab Gogberashvili published on the arXiv a number of articles in which he showed that if the Universe is considered as a thin shell (a mathematical synonym for "brane") expanding in 5-dimensional space, then it is possible to obtain one scale for particle theory, corresponding to the 5-dimensional cosmological constant and the thickness of the Universe, and thus to solve the hierarchy problem. It was also shown that the four-dimensionality of the Universe is the result of a stability requirement, since the extra component of the Einstein equations giving the confined solution for matter fields coincides with one of the conditions of stability.] [In 1999 the closely related Randall-Sundrum scenarios (RS1 and RS2; see 5-dimensional warped geometry theory for a nontechnical explanation of RS1) were proposed. These particular models of brane cosmology have attracted a considerable amount of attention. Later, the pre-big bang, ekpyrotic, and cyclic proposals appeared. The ekpyrotic theory hypothesizes that the origin of the observable universe occurred when two parallel branes collided. To date, no experimental or observational evidence of extra dimensions has been officially reported.
An analysis of results from the Large Hadron Collider in December 2010 severely constrains theories with large extra dimensions. See also: Kaluza-Klein theory; M-theory; String theory; Loop quantum cosmology.]

Entropic gravity

[Entropic gravity is a hypothesis in modern physics that describes gravity as an entropic force: not a fundamental interaction mediated by a particle, but a probabilistic consequence of physical systems' tendency to increase their entropy. The proposal has been intensely contested in the physics community. The probabilistic description of gravity has a history that goes back at least to Richard Feynman in the early 1960s, and to research on black hole thermodynamics by Bekenstein and Hawking in the mid-1970s. These studies suggest a deep connection between gravity and thermodynamics, which describes the behavior of heat and gases. In 1995, Jacobson demonstrated that the Einstein equations describing relativistic gravitation can be derived by combining general thermodynamic considerations with the equivalence principle. Subsequently, other physicists began to explore links between gravity and entropy.] [In 2009, Erik Verlinde disclosed a conceptual theory that describes gravity as an entropic force. On January 6, 2010 he published a preprint of a 29-page paper titled On the Origin of Gravity and the Laws of Newton. (It was published in April 2011.) Reversing the logic of over 300 years, it argued that gravity is a consequence of the "information associated with the positions of material bodies". This theory combines the thermodynamic approach to gravity with Gerardus 't Hooft's holographic principle. If proven correct, it would imply that gravity is not a fundamental interaction, but an emergent phenomenon which arises from the statistical behavior of microscopic degrees of freedom encoded on a holographic screen. The paper drew a variety of responses from the scientific community. Andrew Strominger, a string theorist at Harvard, said "Some people have said it can't be right, others that it's right and we already knew it — that it's right and profound, right and trivial." In July 2011 Verlinde presented the further development of his ideas in a contribution to the Strings 2011 conference, including an explanation for the origin of dark matter. Verlinde's article also attracted a large amount of media exposure, and led to immediate follow-up work in cosmology, the dark energy hypothesis, cosmological acceleration, cosmological inflation, and loop quantum gravity. Also, a specific microscopic model has been proposed that indeed leads to entropic gravity emerging at large scales.] [Verlinde's theory is criticized by Archil Kobakhidze on the basis that it fails to explain the gravitational bound states of neutrons observed in experiments with ultracold neutrons. This conclusion was disputed in a follow-up paper. Another criticism says that if the vacuum had a non-zero entropy density, then there would exist a privileged frame of reference, contrary to relativity. In a recent paper, Kobakhidze argues that the counter to his argument against entropic gravity is invalid. Kobakhidze argues that an entropic origin of gravity would be in conflict with the observation of interference patterns in neutron two-slit experiments, when the slits are separated along the direction of the gravitational field. If there were a much larger number of microstates in the slit at the lower gravitational potential, he argues, this would destroy the interference pattern that was observed.
An example of the intense debate that Verlinde has provoked is Sabine Hossenfelder’s “Comments on and Comments on Comments on Verlinde's paper ‘On the Origin of Gravity and the Laws of Newton’”.]

http://en.wikipedia.org/wiki/Entropic_gravity

R.P. Feynman, The Character of Physical Law (Messenger Lectures, 1964): Lecture 2, "The Relation of Mathematics to Physics"; Feynman's attempt at a statistical description of gravity starts 7 minutes into this video clip.
Jacobson, Theodore (4 April 1995). "Thermodynamics of Spacetime: The Einstein Equation of State". Phys. Rev. Lett. 75 (7): 1260–1263 (1995). arXiv:gr-qc/9504004. Bibcode 1995PhRvL..75.1260J. doi:10.1103/PhysRevLett.75.1260.
Padmanabhan, Thanu (26 November 2009). "Thermodynamical Aspects of Gravity: New insights". Rep. Prog. Phys. 73 (4): 046901 (2010). arXiv:0911.5004. Bibcode 2010RPPh...73d6901P. doi:10.1088/0034-4885/73/4/046901.
Mok, H.M. (13 August 2004). "Further Explanation to the Cosmological Constant Problem by Discrete Space-time Through Modified Holographic Principle". arXiv:physics/0408060 [physics.gen-ph].
van Calmthout, Martijn (12 December 2009). "Is Einstein een beetje achterhaald?" ["Is Einstein a bit outdated?"] (in Dutch). de Volkskrant. http://www.volkskrant.nl/wetenschap/article1326775.ece/Is_Einstein_een_beetje_achterhaald. Retrieved 6 September 2010.
Verlinde, Erik (6 January 2010). "On the Origin of Gravity and the Laws of Newton". arXiv:1001.0785 [hep-th].
E.P. Verlinde. "On the Origin of Gravity and the Laws of Newton". JHEP 04, 029 (2011). Bibcode 2011JHEP...04..029V. doi:10.1007/JHEP04(2011)029.
Overbye, Dennis (July 12, 2010). "A Scientist Takes On Gravity". The New York Times. http://www.nytimes.com/2010/07/13/science/13gravity.html?_r=1. Retrieved 6 September 2010.
E. Verlinde, "The Hidden Phase Space of our Universe", Strings 2011, Uppsala, 1 July 2011.
"The entropy force: a new direction for gravity", New Scientist, 20 January 2010, issue 2744.
"Gravity is an entropic form of holographic information", Wired Magazine, 20 January 2010.
Fu-Wen Shu; Yungui Gong (2010). "Equipartition of energy and the first law of thermodynamics at the apparent horizon". arXiv:1001.3237 [gr-qc].
Rong-Gen Cai; Li-Ming Cao; Nobuyoshi Ohta (2010). "Friedmann Equations from Entropic Force". Phys. Rev. D 81 (6): 061501(R) (2010). arXiv:1001.3470. Bibcode 2010PhRvD..81f1501C. doi:10.1103/PhysRevD.81.061501.
"It from Bit: How to get rid of dark energy", Johannes Koelman, 2010.
Easson; Frampton; Smoot (2010). "Entropic Accelerating Universe". Phys. Lett. B 696 (3): 273–277 (2011). arXiv:1002.4278. Bibcode 2011PhLB..696..273E. doi:10.1016/j.physletb.2010.12.025.
Yi-Fu Cai; Jie Liu; Hong Li (2010). "Entropic cosmology: a unified model of inflation and late-time acceleration". Phys. Lett. B 690 (3): 213–219 (2010). arXiv:1003.4526. Bibcode 2010PhLB..690..213C. doi:10.1016/j.physletb.2010.05.033.
Yi Wang (2010). "Towards a Holographic Description of Inflation and Generation of Fluctuations from Thermodynamics". arXiv:1001.4786 [hep-th].
Lee Smolin (2010). "Newtonian gravity in loop quantum gravity". arXiv:1001.3668 [gr-qc].
Jarmo Mäkelä (2010). "Notes Concerning 'On the Origin of Gravity and the Laws of Newton' by E. Verlinde (arXiv:1001.0785)". arXiv:1001.3808 [gr-qc].
Kobakhidze, Archil (15 January 2011). "Gravity is not an entropic force". Phys. Rev. D 83 (2): 021502 (2011). arXiv:1009.5414 [hep-th]. Bibcode 2011PhRvD..83b1502K. doi:10.1103/PhysRevD.83.021502.
Nesvizhevsky, V.V., et al. (17 January 2002).
"Quantum states of neutrons in the Earth's gravitational field". Nature 415 (6869): 297–299. Bibcode 2002Natur.415..297N. doi:10.1038/415297a. Chaichian, Masud; Oksanen, Markku and Tureanu, Anca. "On gravity as an entropic force". arXiv:1104.4650. Why gravity can't be entropic, Luboš Motl Pilsen Arxiv: Once more: gravity is not an entropic force R. Colella, A. W. Overhauser and S. A. Werner, “Observation of gravitationally induced quantum interference,” Phys. Rev. Lett. 34, 1472 (1975) Hossenfelder, Sabine. "Comments on and Comments on Comments on Verlinde's paper "On the Origin of Gravity and the Laws of Newton"" arXiv:1003.1015. http://arxiv.org/abs/1003.1015. Margolus–Levitin theorem Physical cosmology Implicate and explicate order according to David Bohm Appendix B: Smart Membranes For the sake of argument, suppose that the 3-dimensions of space plus one dimension of time that we perceive the universe to be are a manifestation of a quantum cellular automaton operating according to discrete rules at a 2-dimensional boundary of the holographic cosmos. Having in earlier discussion summarized current theories of the holographic universe, let me explain the notions of quantum cellular automata. To quote from Dominik Janzing “Is there a physically universal cellular automaton or Hamiltonian?” Dominik Janzing It is known that both quantum and classical cellular automata (CA) exist that are computationally universal in the sense that they can simulate, after appropriate initialization, any quantum or classical computation, respectively. Here we introduce a different notion of universality: a CA is called physically universal if every transformation on any finite region can be (approximately) implemented by the autonomous time evolution of the system after the complement of the region has been initialized in an appropriate way. We pose the question of whether physically universal CAs exist. Such CAs would provide a model of the world where the boundary between a physical system and its controller can be consistently shifted, in analogy to the Heisenberg cut for the quantum measurement problem. We propose to study the thermodynamic cost of computation and control within such a model because implementing a cyclic process on a microsystem may require a non-cyclic process for its controller, whereas implementing a cyclic process on system and controller may require the implementation of a non-cyclic process on a “meta”-controller, and so on. Physically universal CAs avoid this infinite hierarchy of controllers and the cost of implementing cycles on a subsystem can be described by mixing properties of the CA dynamics. We define a physical prior on the CA configurations by applying the dynamics to an initial state where half of the CA is in the maximum entropy state and half of it is in the all-zero state (thus reflecting the fact that life requires non-equilibrium states like the boundary between a hold and a cold reservoir). As opposed to Solomonoff’s prior, our prior does not only account for the Kolmogorov complexity but also for the cost of isolating the system during the state preparation if the preparation process is not robust. Other recent publications we consider include those below. Simulation of Quantum Mechanics Using Reactive Programming, Frédéric Boussinot (INRIA Sophia Antipolis) (Submitted on 11 Jan 2011) We implement in a reactive programming framework a simulation of three aspects of quantum mechanics: self-interference, state superposition, and entanglement. 
The simulation basically consists of a cellular automaton embedded in a synchronous environment which defines global discrete instants and broadcast events. The implementation shows how a simulation of fundamental aspects of quantum mechanics can be obtained from the synchronous parallel combination of a small number of elementary components. Subjects: Quantum Physics (quant-ph); Cellular Automata and Lattice Gases (nlin.CG). Cite as: arXiv:1101.2133v1 [quant-ph].

==============

A Quantum Game of Life, Pablo Arrighi, Jonathan Grattage (Submitted on 15 Oct 2010 (v1), last revised 12 Nov 2010 (this version, v2)): This research describes a three dimensional quantum cellular automaton (QCA) which can simulate all other 3D QCA. This intrinsically universal QCA belongs to the simplest subclass of QCA: Partitioned QCA (PQCA). PQCA are QCA of a particular form, where incoming information is scattered by a fixed unitary U before being redistributed and rescattered. Our construction is minimal amongst PQCA, having block size 2 x 2 x 2 and cell dimension 2. Signals, wires and gates emerge in an elegant fashion. Comments: 13 pages, 10 figures. Final version, accepted by Journées Automates Cellulaires (JAC 2010). Companion website with animated examples: this http URL. Subjects: Quantum Physics (quant-ph). Cite as: arXiv:1010.3120v2 [quant-ph].

==============

Is there a physically universal cellular automaton or Hamiltonian? Dominik Janzing (Submitted on 9 Sep 2010); abstract quoted in full at the beginning of this Appendix.


Comments: 27 pages, 1 figure. Subjects: Quantum Physics (quant-ph); Artificial Intelligence (cs.AI). Cite as: arXiv:1009.1720v1 [quant-ph].

==============

A Simple n-Dimensional Intrinsically Universal Quantum Cellular Automaton, Pablo Arrighi, Jonathan Grattage (Submitted on 4 Feb 2010 (v1), last revised 12 Oct 2010 (this version, v2)): We describe a simple n-dimensional quantum cellular automaton (QCA) capable of simulating all others, in that the initial configuration and the forward evolution of any n-dimensional QCA can be encoded within the initial configuration of the intrinsically universal QCA. Several steps of the intrinsically universal QCA then correspond to one step of the simulated QCA. The simulation preserves the topology in the sense that each cell of the simulated QCA is encoded as a group of adjacent cells in the universal QCA. Comments: 13 pages, 7 figures. In Proceedings of the 4th International Conference on Language and Automata Theory and Applications (LATA 2010), Lecture Notes in Computer Science (LNCS). Journal version: arXiv:0907.3827. Subjects: Quantum Physics (quant-ph). Journal reference: Lecture Notes in Computer Science 6031 (2010) 70-81. DOI: 10.1007/978-3-642-13089-2_6. Cite as: arXiv:1002.1015v2 [quant-ph].

==============

The body of the Arrighi–Grattage “A Quantum Game of Life” paper, whose abstract is quoted above, continues:

1. Introduction. (Quantum) Cellular Automata. Cellular automata (CA), first introduced by von Neumann [52], consist of an array of identical cells, each of which may take one of a finite number of possible states. The entire array evolves in discrete time steps by iterating a function G. This global evolution G is shift-invariant (it acts the same everywhere) and causal (information cannot be transmitted faster than some fixed number of cells per time step). Because this is a physics-like model of computation [27], Feynman [22], and later Margolus [28], suggested early in the development of quantum computation (QC) that quantising this model was important. This was for two main reasons. Firstly, in QCA computation occurs without extraneous (unnecessary) control, hence eliminating a source of decoherence, which is a problem for QC implementation. Secondly, they are a good framework in which to study the quantum simulation of a quantum system, which is predicted to be a major application of QC. From a Computer Science perspective there are other reasons to study QCA, such as studying space-sensitive problems in computer science (e.g. 'machine self-reproduction', 'Firing Squad Synchronisation', ...) in the quantum setting, or having more complete and formal models of physics' ability to compute. There is also the theoretical physics perspective, where CA are used as toy models of quantum space-time [25].
The first approach to defining QCA [2, 19, 53] was later superseded by a more axiomatic approach [9, 10, 44] together with more operational approaches [15, 36, 41, 42, 49, 53]. Key words and phrases: cellular automata, quantum computation, universality.
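As a toy illustration of the partitioned QCA scheme described in the abstract above, in which a fixed unitary U scatters the contents of each block, after which the partition is shifted and the scattering repeated, here is a minimal one-dimensional simulation sketch in Python/NumPy. Everything specific in it is an assumption made only for illustration: the six-qubit ring, the brick-wall (even/odd) partition into blocks of two cells, and the choice of U as a square root of SWAP. The Arrighi–Grattage construction itself is three-dimensional, with 2 x 2 x 2 blocks.

import numpy as np

def apply_pair_unitary(state, U, i, j, n):
    # Apply the 4x4 unitary U to cells (qubits) i and j of an n-qubit state vector.
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, [i, j], [0, 1])          # bring the block's two cells to the front
    rest = psi.shape[2:]
    psi = U @ psi.reshape(4, -1)                    # "scatter" the block with the fixed unitary
    psi = np.moveaxis(psi.reshape((2, 2) + rest), [0, 1], [i, j])
    return psi.reshape(-1)

def pqca_step(state, U, n):
    # One PQCA time step: scatter every block of the even partition, then of the shifted partition.
    for i in range(0, n, 2):                        # blocks {0,1}, {2,3}, ...
        state = apply_pair_unitary(state, U, i, i + 1, n)
    for i in range(1, n, 2):                        # shifted blocks {1,2}, {3,4}, ..., {n-1,0}
        state = apply_pair_unitary(state, U, i, (i + 1) % n, n)
    return state

def site_probs(state, n):
    # Probability that each cell is in state |1>.
    p = (np.abs(state) ** 2).reshape([2] * n)
    return [float(p.sum(axis=tuple(a for a in range(n) if a != k))[1]) for k in range(n)]

n = 6                                               # a ring of six two-dimensional cells
s, t = 0.5 * (1 + 1j), 0.5 * (1 - 1j)
U = np.array([[1, 0, 0, 0],                         # square root of SWAP, used as the fixed scattering unitary
              [0, s, t, 0],
              [0, t, s, 0],
              [0, 0, 0, 1]], dtype=complex)
state = np.zeros(2 ** n, dtype=complex)
state[1 << (n - 1)] = 1.0                           # one "particle" on cell 0, all other cells |0>
for step in range(4):
    print(step, [round(x, 3) for x in site_probs(state, n)])
    state = pqca_step(state, U, n)

Running this shows the single excitation spreading around the ring under the autonomous block dynamics, a one-dimensional analogue of the signals that propagate in the 3D construction.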

…. 4. Conclusion. This paper presents a minimal 3D PQCA which is capable of simulating all other PQCA, preserving the topology of the simulated PQCA. This means that the initial configuration and the forward evolution of any PQCA can be encoded within the initial configuration of this PQCA, with each simulated cell encoded as a group of adjacent cells in the PQCA, i.e. intrinsic simulation. The main, formal result of this work can therefore be stated as: Claim 1. There exists a 3D U-defined PQCA, with block size 2 and cell dimension 2, which is an intrinsically universal PQCA. Let H be a 3-dimensional V-defined PQCA such that V can be expressed as a quantum circuit C made of gates from the set Hadamard, Cnot, and R(pi/4). Then G is able to intrinsically simulate H. Any finite-dimensional unitary V can always be approximated by a circuit C(V) with an arbitrarily small error epsilon = max over states |phi> of || V|phi> - C(V)|phi> ||. Assuming instead that G simulates the C(V)-defined PQCA, for a region of s cells over a period t, the error with respect to the V-defined PQCA will be bounded by s·t·epsilon. This is due to the general statement that errors in quantum circuits increase at most proportionally with time and space [38]. Combined with the fact that PQCA are universal [7, 6], this means that G is intrinsically universal, up to this unavoidable approximation. Discussion. This PQCA is definitely minimal amongst the 3D PQCA. Reducing the block size or the cell dimension necessarily results in a trivial PQCA. Moreover, PQCA are the simplest and most natural class of QCA [3]. Nevertheless, it is not so clear that this PQCA is minimal amongst all intrinsically universal 3D QCA. Indeed, PQCA-cells are really QCA-subcells, and PQCA-blocks need to be composed with them shifted in order to yield the QCA neighbourhood. Based on these observations our QCA has QCA-cell dimension 2^8 and the 3D radius-1/2 neighbourhood. Whether there are some further simplifications in this more general setting is an open question. An important source of inspiration, along with the original Game of Life, was the simplest known intrinsically universal classical Partitioned CA [28], which has cell dimension 2. Called the BBM CA, it was itself directly inspired by the Billiard Ball Model [23]. Our scheme also features signals (billiard balls, or particles) bouncing and colliding. This analogy with physics is a desirable feature; Feynman’s sole purpose for the invention of QC [21] was that quantum computers would be able to simulate quantum systems efficiently, and hence predict their behaviour. It is still thought that quantum simulation will be one of the most important uses of quantum computers for society, with expected and potentially unexpected impact in quantum chemistry, biochemistry or nanotechnologies (e.g. in terms of the synthesis of specific purpose molecules). This was also one of the reasons why Feynman immediately promoted the QCA model of QC [22]; they are a natural mathematical setting in which to encode the continuous-time and space, sometimes complex, behaviour of elementary physical elements (such as wave equations of particles, possibly interacting, under some field, for example) into the more discrete-data discrete-time framework of quantum computation. This issue of encoding is non-trivial, but has largely been solved for the one-dimensional case, by using QCA as a mathematical framework in which to model the system of interest.
For example, several works [12, 13, 14, 20, 26, 30, 31, 32, 41, 46, 50] simulate quantum field theoretical equations in the continuous limit of a QCA dynamics. These results do not extend to more than one dimension due to the problem of isotropy. Whilst in one dimension the division of space into a line of points is an acceptable model, because it does not privilege the left over the right direction, in two dimensions the grid seems to inevitably favour the cardinal directions (N, E, S, W) over the diagonal ones. That this construction is similar to the Billiard Ball Model CA suggests a Quantum Billiard-Ball Model could be defined, moving away from the underlying grid, which would remedy this problem. This is planned for future work. After this work was completed, a two-dimensional, continuous-time, quantum version of the Game of Life was reported (see arXiv:1010.4666).

Appendix C: Approximately Computed Cosmos

I’ve alluded to this already: where does uncertainty come from? Why quantum chaos? How does Heisenberg uncertainty arise from cellular automata? Suppose that the universe does not need to EXACTLY compute its own next state? This question depends on the Computational Complexity of the Cosmos. If there are algorithms underlying physical reality, are they bounded above by a polynomial in computing time, or are they exponential or superexponential? It is hard to say in what intuitive sense the universe is performing a “search problem.” But that is the language in which the theory of computation, and the notion of NP-completeness, is usually treated. The approximation problems that arise, once we drop the requirement that the universe exactly computes its own next state, are also treated in that way. As we saw, back in the section on Complexity Classes of Oracle Machines, 3-colorings are a way of getting at NP-completeness. To get deeper into PCP (Probabilistically Checkable Proofs) and related material, see the textbook of Arora and Barak, Computational Complexity: A Modern Approach, Cambridge University Press, 2009; or the summary in Luca Trevisan, “On Khot’s unique games conjecture”, Bull. Amer. Math. Soc. 49 (2012), 91-111.

So here’s 3-coloring in a nutshell. Start with an undirected graph G. It is, formally, a pair (V, E) where V is a finite set, called vertices, and E is a finite set of unordered pairs of elements of V, called edges. The vertices u and v are called the endpoints of the edge {u, v}. The edge {u, v} is said to be incident on the vertices u and v. The goal (intuitively derived from the Four Color Map Theorem, and its long history, known to many outside the Math community) is to determine if there is a proper 3-coloring of the undirected graph G. That is, we want a coloring function c: V --> {0,1,2}, which you may imagine as c: V --> {red, yellow, blue} if you want explicit colors, that has the property that c(u) is not equal to c(v) for every {u, v} an element of E. That is, all the vertices are colored, one color per vertex, and there is never an edge that connects two vertices which have the same color. This problem is obviously capable of solution. For example, if there are |V| vertices total, then make a complete list, in whatever order you want, of all 3^|V| such coloring functions c: V --> {0,1,2} and check every one of them to see if it is a proper 3-coloring. With a little ingenuity you can reduce the work, for example to checking 3^(|V|-1) such coloring functions by fixing the color of one vertex in advance, which speeds things up somewhat. More ingenuity leads to slightly better improvements in running time.
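To make the brute-force idea concrete, here is a minimal Python sketch; the example graphs and function names are mine, chosen only for illustration. It enumerates all 3^|V| coloring functions and checks each one against every edge, and a returned coloring is exactly the kind of short, easily checked witness that the discussion of NP below refers to.

from itertools import product

def is_proper(coloring, edges):
    # A coloring is proper when no edge joins two vertices of the same color.
    return all(coloring[u] != coloring[v] for u, v in edges)

def three_colorable(num_vertices, edges):
    # Brute force: try all 3^|V| coloring functions c: V -> {0, 1, 2}.
    for coloring in product(range(3), repeat=num_vertices):
        if is_proper(coloring, edges):
            return coloring          # a proper 3-coloring, serving as a witness
    return None

# Example: a 5-cycle is 3-colorable, but the complete graph K4 is not.
cycle5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
k4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
print(three_colorable(5, cycle5))    # e.g. (0, 1, 0, 1, 2)
print(three_colorable(4, k4))        # None

Checking any single candidate coloring is fast, linear in the number of edges; it is the number of candidates that explodes with |V|.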
But nobody knows any way to do the job without a worst-case running time (in steps of computation) that grows like k^|V| for some constant k > 1. Even these sped-up, tricky approaches to solution become infeasible to compute even on graphs with only a few hundred vertices. So the big open question becomes: is there an algorithm whose worst-case running time is bounded above by a polynomial function of |V|? This open question is equivalent to the P versus NP problem, one of the six unsolved Clay Millennium Prize problems. Solve it, and you win a million dollars. What we do know is that this 3-coloring problem is NP-complete, to use that notion we discussed earlier, as first described by Cook, Karp, and Levin. Informally, NP is the collection of all computational problems which, like 3-coloring, involve searching an exponentially large list in order to find an element which satisfies some property. By contrast, P is the collection of all computational problems which can be solved by algorithms whose worst-case running time is bounded above by a polynomial function of the length of the input to the algorithm. A computational problem A is said to reduce to a computational problem B if (I’m still being informal) there is a way to encode instances of problem A as instances of computational problem B. That is, encode so that the existence of a polynomial-time algorithm for B automatically implies the existence of a polynomial-time algorithm for A. A computational problem B is said to be NP-hard if every problem A which is an element of NP reduces to B. If an NP-hard computational problem B is in NP, we say that B is NP-complete. Therefore we see (at this informal level) that an NP-complete problem can be solved in polynomial time if and only if P = NP. It is fairly easy to prove that NP-complete problems exist. In fact, problems as simple and straightforward as 3-coloring are NP-complete. The implications of this are startling, when you first see them. For example, assume that we have a fixed formal language in which to write mathematical proofs. The length of the proof means the number of symbols (or the number of pages) of a completely formal proof of some mathematical theorem, expressed in that language. It turns out that searching for a bounded-length proof of a given mathematical statement is an NP problem. A consequence of the NP-completeness of 3-coloring is: if P = NP, then there is an algorithm which, given an integer N, decides in time polynomial in N whether the Riemann hypothesis has a proof of at most N pages. There is nothing special about the Riemann hypothesis as I use it here. For any mathematical statement S and every integer N, it is possible to efficiently construct a graph of size polynomial in the length of S and in N, such that the graph is 3-colorable if and only if S has a proof in at most N pages. I have published elsewhere on how the problem, first posed by Hilbert, of finding the “simplest” proof of any mathematical statement S is a hideously subtle and tricky problem; Hilbert himself dropped it from his famous list of 23 problems as challenges to 20th century mathematicians, so it is the little-known Hilbert’s 24th problem. The consequences of P = NP would include the ability to find mathematical proofs in time polynomial in the length of the proofs, which most people find absurdly implausible. It is widely assumed that the resolution of the P versus NP problem will be to find that P is NOT equal to NP.
If one assumes that P is NOT equal to NP, as a conjecture, then the proof that a given problem is NP-complete can be viewed as a conditional proof that the problem cannot be solved in polynomial time. I reviewed this material quickly in order to raise the question: “What is the computational complexity of a Theory of Everything Algorithm underlying Physical reality which, rather than giving optimal exact solutions, gives arbitrarily good approximate solutions?” There is some beautiful theoretical computer science that addresses approximation algorithms, and the powerful drug PCP (Probabilistically Checkable Proofs). You know that people can play chess by mail. But this stuff gives you a fool-proof way to play poker by mail. I shall now use the results on intractable approximation problems and the proof of the PCP Theorem in the early 1990s by Arora, Lund, Motwani, Safra, Sudan, and Szegedy. The PCP Theorem can be imagined as a way to write mathematical proofs in a robust format: either the statement is true and the proof is correct, or else any purported proof must be erroneous in a constant fraction (say, more than 1%) of its positions, so that a verifier who randomly spot-checks just a few positions will catch a flaw with high probability. I suspect that the universe might work in such a way. {more TO BE DONE}

Appendix D: Group Complexity

Let us examine the computational complexity, and Turing machine equivalent, of our Holographic Universe Quantum Cellular Automata from another branch of theoretical Computer Science: Group Complexity. Here, we follow the lead (selectively) of Mark Sapir's long and complicated survey, “Asymptotic invariants, complexity of groups and related problems”, arXiv:1012.1325v4 [math.GR], 21 Mar 2011.

“The word and conjugacy problems are the most classical algorithmic problems for groups, going back to the work of Dehn and Tietze at the beginning of the 20th century. These very basic problems are interesting by themselves and because of applications to other areas of mathematics, especially topology…

“Let X be a set. A word over X is a sequence of elements of X. A group word over X is a sequence of elements of X and their inverses, i.e. symbols x^-1, where x is an element of X. The length of a word W is denoted by |W|. A group word is called reduced if it does not contain subwords of the form xx^-1 and x^-1x, where x is an element of X. Every group word can be made reduced by removing all subwords of these forms. The set F(X) of all reduced group words over X is equipped with the binary operation: the product UV of two reduced group words U, V is the result of reducing the concatenation of U and V. With this operation, the set F(X) turns into a group which is called the free group over X. The identity element of that group is the empty word denoted by ∅. For every group G, any map X → G extends uniquely to a homomorphism F(X) → G: any word is mapped to the product of the images of its letters. In particular, every group G generated by at most |X| elements is a homomorphic image of F(X) so that the image of X generates G. Let φ be one of these (surjective) homomorphisms.”

The word problem in G (related to φ) is the following Problem 2.1. (The word problem). Input: A reduced group word W over X. Output: “Yes” if φ(W) = 1 in G and “No” otherwise. Let us also introduce a “close cousin” of the word problem, the conjugacy problem. We say that two elements a, b of a group G are conjugate if there exists an element t (called a conjugator) such that tat^-1 = b. The conjugacy problem in G (related to φ) is the following Problem 2.2. (The conjugacy problem). Input: Two reduced group words U, V over X.
Output: “Yes” if φ(U) and φ(V) are conjugate in G and “No” otherwise.

“In principle, the existence of an algorithm to solve Problem 2.1 or 2.2 depends not only on G but also on φ. There is, however, an important case when φ does not matter. Suppose that X is finite. Then G = φ(F(X)) is called finitely generated. Let N be the kernel of φ. It is a normal subgroup of F(X). Suppose that N is generated as a normal subgroup by a finite set R (“generated as a normal subgroup” means that every element of N is a product of conjugates of elements of R and their inverses). Then we say that G has a finite presentation <X | R>. For finitely presented groups the solvability of the word or conjugacy problem (i.e. the existence of the needed algorithm) does not depend on the choice of φ or even on the choice of the finite set X (as long as φ is surjective). Note that the conjugacy problem is stronger than the word problem, that is, if the conjugacy problem is solvable in G, then the word problem is also solvable. Indeed, only the identity element can be a conjugate of the identity element.”

“The solvability of an algorithmic problem does not mean that the problem can be solved in practice. First of all the existence of an algorithm does not mean that it is readily available. For example, the word problem in every finite group is certainly decidable. But there is no algorithm that, given a finite set of relations r_i = 1, i = 1, . . . , n, and a relation r = 1 over a finite alphabet X, would decide whether every finite group generated by X where the relations r_i = 1 hold (i = 1, . . . , n) also satisfies r = 1. That is a result of Slobodskoj [A. M. Slobodskoii, Undecidability of the universal theory of finite groups. Algebra i Logika, 20(2):207-230, 1981.]. Thus there is no uniform algorithm solving the word problem in all finite groups at once. In general we usually ask for the existence of an algorithm solving a certain algorithmic problem but not for a procedure to find this algorithm. (In most concrete cases, if an algorithmic problem in a given group is decidable, we are able to write down an algorithm that solves the problem.)”

“Another, more important, obstacle is that the algorithm can be too slow. For example, if a finitely presented group G is residually finite (that is, the intersection of all its subgroups of finite index is trivial), then the word problem can be solved by the so-called McKinsey algorithm [J. C. C. McKinsey, The decision problem for some classes of sentences without quantifiers. J. Symbolic Logic, 8(3):61-76, 1943.]: list all finite quotients of G (that is, consider all finite groups H one by one, consider every map φ from the set of generators of G into H, check if this map takes all r_i to 1, and check whether the images of the generators of G generate H; if yes, then include the pair (H, φ) in the list of quotients), and at the same time list all the corollaries of the defining relations of G (say, list all products of conjugates of the r_i’s and their inverses). Every word r is either in the list of corollaries or is not trivial in one of the finite quotients. In the first case r = 1 in G; in the second case, r is not equal to 1 in G, and after some time we will surely know which of these options holds. It is clear that in general this algorithm is very slow. Even for a short word r it would take a lot of time to decide, using this algorithm, if r = 1 in a given residually finite group.”
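To make the quoted definitions concrete, here is a minimal Python sketch of free reduction and of the product in the free group F(X); the representation of a group word as a list of (letter, exponent) pairs is just an illustrative choice. Note that this only implements arithmetic in F(X) itself; deciding whether a word maps to 1 in a finitely presented quotient G is the word problem, which free reduction alone does not solve.

def free_reduce(word):
    # Reduce a group word, given as a list of (letter, exponent) pairs with exponent +1 or -1,
    # by repeatedly cancelling adjacent x x^-1 and x^-1 x.
    out = []
    for letter, exp in word:
        if out and out[-1][0] == letter and out[-1][1] == -exp:
            out.pop()                     # cancel x x^-1 or x^-1 x
        else:
            out.append((letter, exp))
    return out

def free_multiply(u, v):
    # Product in F(X): concatenate and reduce.
    return free_reduce(u + v)

# Example: (x y) * (y^-1 x^-1) reduces to the empty word, the identity element of F(X).
u = [("x", 1), ("y", 1)]
v = [("y", -1), ("x", -1)]
print(free_multiply(u, v))                # []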
“Thus the next thing to do, after we find out that an algorithmic problem is decidable, is to find the computational complexity of this problem. To make this more precise let us present here some concepts from Computational Complexity Theory. Any decision problem D may be considered as a membership problem for elements of some set B_D in a subset S_D. For example if D is the word problem for a group G = <X | R>, then B_D is the set of all reduced group words in the alphabet X, and S_D is the set of products of conjugates of elements of R and their inverses in the free group over X (i.e. the normal subgroup of the free group over X generated by R). With any element x in B_D one associates a number which is called the size of this element. Usually the size is roughly the minimal space which is needed to write x down. For example the size of a word is typically its length.”

The size depends on the way we choose to represent the elements. For example if x is a natural number then we can represent x as x units; then the size of x will be equal to x. If we represent x as a sequence of binary digits then the size of x will be approximately log_2(x). Algorithms may be realized by Turing machines (for a formal definition see below). We can assume that this machine is equipped with a voice synthesizer and can say two words, “Yes” and “No”. An algorithm solving the decision problem D starts working with an element x of B_D written on the tape of the machine. When it ends, it says “Yes” if x is an element of S_D or “No” if x is not an element of S_D. There are also different kinds of algorithms that say “Yes” when x is an element of S_D and work indefinitely long without saying anything if x is not an element of S_D. Note that if there is an algorithm that recognizes the “yes” part and there is an algorithm recognizing the “no” part of D, then there exists an algorithm solving D.

Turing machines (a formal definition). Recall one of the many equivalent definitions of a Turing machine, and that our cellular automata are capable of calculating precisely what Turing machines can calculate. A multi-tape Turing machine has k tapes and k heads observing the tapes. One can view it as a quadruple M = <A, B, C, D> where A is the input alphabet, B is the tape alphabet (A is equal to or properly contained in B), C = DisjointUnion of C_i over i = 1, . . . , k is the set of states of the heads of the machine, and D is a set of commands. Some (recognizing) Turing machines also have two distinguished vectors of state letters: sArrow_1 is the k-vector of start states, sArrow_0 is the k-vector of accept states. From Alpha to Omega. The leftmost square on every tape is always marked by α, the rightmost square is always marked by ω. The head is placed between two consecutive squares on the tape. A configuration of a tape of a Turing machine is a word αuqvω where q is the current state of the head of that tape, u is the word in the tape alphabet B to the left of the head and v is the word in B to the right of the head, and so the word written on the entire tape is uv. At every moment the head of each tape observes two letters on that tape: the last letter of u (or α) and the first letter of v (or ω). A configuration U of a Turing machine is the word U_1 U_2 . . . U_k where U_i is a configuration of tape i. Assuming that the Turing machine is recognizing, we can define input configurations and accepted (stop) configurations.
An input configuration is a configuration where the word written on the first tape is a nonempty word over the input alphabet A, all other tapes are empty, the head on the first tape observes the right marker ω, and the states of all tapes form the start vector sArrow_1. An accept (or stop) configuration is any configuration where the state vector is sArrow_0, the accept vector of the machine. We shall always assume (as can be easily achieved) that in the accept configuration every tape is empty. A command of a Turing machine is determined by the states of the heads and some of the 2k letters observed by the heads. As a result of a transition we replace some of these 2k letters by other letters, insert new squares in some of the tapes, and may move the heads one square to the left (right) with respect to the corresponding tapes. For example in a one-tape machine every transition is of the following form: uqv → u′q′v′ where u, v, u′, v′ are letters (possibly end markers) or empty words. The only constraint is that the result of applying the substitution uqv → u′q′v′ to a configuration word must be a configuration word again; in particular the end markers cannot be deleted or inserted. This command means that if the state of the head is q, u is written to the left of q and v is written to the right of q, then the machine must replace u by u′, q by q′ and v by v′. For a general k-tape machine a command is a vector [U_1 → V_1, . . . , U_k → V_k] where U_i → V_i is a command of a 1-tape machine; the elementary commands (also called parts of the command) U_i → V_i are listed in the order of tape numbers. In order to execute this command, the machine first checks if every U_i is a subword of the configuration of tape i (i = 1, . . . , k), and then replaces all U_i by V_i. A computation is a sequence of configurations C_1 → … → C_n such that for every i = 1, . . . , n-1 the machine passes from C_i to C_(i+1) by applying one of the commands d_j from D. A configuration C is said to be accepted by a machine M if there exists at least one computation which starts with C and ends with an accept configuration. A word u over the input alphabet A is said to be accepted by the machine if the corresponding input configuration is accepted. The set of all accepted words over the input alphabet is called the language accepted (recognized) by the machine.

Let C = C_1 → ... → C_n be a computation of a machine M such that for every j = 1, . . . , n-1 the configuration C_(j+1) is obtained from C_j by a command d_j from D. Then we call the word d_1 d_2 … d_(n-1) the history of this computation. The number n will be called the time (or length) of the computation. Let p_i (i = 1, . . . , n) be the sum of the lengths of the configurations of the tapes in the configuration C_i. Then the maximum of all p_i will be called the space of the computation. We do not consider only deterministic Turing machines; for example, we allow several transitions with the same left side. Note that Turing machines can also be viewed as rewriting systems where the objects are the configurations and the substitution rules are the commands of the machine. This point of view allows one to use the tools of string rewriting theory: normal forms, confluence, etc. One can also view Turing machines as semigroups of partial transformations of the set of configurations, generated by the transformations induced by the commands. This point of view allows one to explore dynamical properties of Turing machines (say, periodic computations). That is often needed when one tries to simulate a Turing machine by a group [A. Yu. Olshanskii, M. V. Sapir, Non-amenable finitely presented torsion-by-cyclic groups, Publ. Math. Inst. Hautes Études Sci. No. 96 (2002), 43–169 (2003).].
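As an illustration of the definitions above, here is a minimal Python sketch of a Turing machine simulator. To keep it short it uses the familiar single-tape, transition-function formulation (the current state and the scanned symbol determine the new state, the written symbol, and the head move), rather than the multi-tape command vectors just quoted; the example machine, which accepts words over {a} of even length, is an assumption made purely for illustration. The simulator also reports the number of steps taken and the number of tape cells visited, which foreshadows the time and space functions defined next.

def run_tm(delta, input_word, start, accept, blank="_", max_steps=10_000):
    # Simulate a deterministic single-tape Turing machine.
    # delta maps (state, scanned symbol) -> (new state, written symbol, move in {-1, 0, +1}).
    # Returns (accepted?, number of steps taken, number of distinct cells visited).
    tape = dict(enumerate(input_word))     # sparse tape; unwritten cells hold the blank symbol
    head, state, steps, visited = 0, start, 0, {0}
    while state != accept and steps < max_steps:
        scanned = tape.get(head, blank)
        if (state, scanned) not in delta:  # no applicable command: halt without accepting
            return False, steps, len(visited)
        state, written, move = delta[(state, scanned)]
        tape[head] = written
        head += move
        visited.add(head)
        steps += 1
    return state == accept, steps, len(visited)

# Illustrative machine: accept words over {a} of even length by crossing off letters while flipping parity.
delta = {
    ("even", "a"): ("odd", "_", +1),       # saw an 'a': parity flips
    ("odd",  "a"): ("even", "_", +1),
    ("even", "_"): ("accept", "_", 0),     # end of input with even parity: accept
}
print(run_tm(delta, "aaaa", "even", "accept"))   # (True, 5, 5)
print(run_tm(delta, "aaa", "even", "accept"))    # (False, 3, 4)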
The time and space functions. With every Turing machine A one can associate two important functions: the time function and the space function. The time (respectively, space) function t_A(n) (respectively s_A(n)) of a Turing machine A is the minimal number of steps of the machine (respectively, the number of cells on the tape visited by the machine) needed to decide, for any element x in S_D of size ≤ n, that x is indeed in S_D. We can compare the time (space) functions of algorithms as follows. If f, g : N → N, we write f ≺ g if for some constant C we have f(n) ≤ C g(Cn) + Cn + C for every n an element of N. We say that f and g are equivalent if f ≺ g and g ≺ f. Thus every sublinear function is equivalent to n; the functions n^5 and 2n^5 are equivalent, as are 2^x and 3^x, but the functions n^3.1 and n^3.2 are not. The following relation is clear: s_A(n) ≺ t_A(n). Indeed, even if at every step the algorithm used a new tape cell, s_A(n) would be only equivalent to t_A(n) (since the number of cells used by the machine at every step is bounded). In reality s_A(n) is almost always less than t_A(n). On the other hand the following relation also holds [M. R. Garey, D. S. Johnson, Computers and Intractability. A Guide to the Theory of NP-Completeness. A Series of Books in the Mathematical Sciences. Freeman, New York (1979).]: t_A(n) ≺ 2^(s_A(n)).

A very rarely accepting Turing machine. Turing machines, like groups, can have some extreme properties. It is well known, for example, that a Turing machine can have an undecidable halting problem, that is, there is no algorithm that checks whether the machine will ever stop starting with a given input configuration. This is obviously equivalent to the property that the time and space functions of this Turing machine are bigger than any given recursive function (i.e. a function N → N whose graph and its complement can be recognized by a Turing machine). In Section 4.6 [of Sapir's survey] we shall need a Turing machine with an even stronger property. Let X be a recursively enumerable language in a 2-letter alphabet, i.e. a set of binary words recognized by a (non-deterministic) Turing machine M. If x is an element of X then the time of x (denoted time(x) or time_M(x)) is, by definition, the minimal time of an accepting computation of M with input x. For any increasing function h: N → N, a real number m is called h-good for M if for every w an element of X, |w| < m implies h(time(w)) < m. Let h(n) = exp(exp(n)), that is, e to the power of e to the power of n. Theorem 2.6 (Olshanskii, Sapir [A. Yu. Olshanskii, Groups with quadratic-non-recursive Dehn function (with an appendix by A. Yu. Olshanskii and M. V. Sapir), preprint, 2010.]). There exists a Turing machine M recognizing a recursively enumerable non-recursive set X such that the set of all h-good numbers for M is infinite. It is easy to observe that the set G of h-good numbers of M cannot be recursively enumerable. Moreover it cannot contain any infinite recursively enumerable subset. Indeed, if G contained an infinite recursively enumerable subset R, then for every input configuration C of M we would start the Turing machine enumerating R, and wait till we get a number r from R that is bigger than the size of C. Then to check if M halts on C, we would check all (finitely many) computations of M starting with C and having length less than log log r.
The Turing machine M halts starting with C if one of these computations ends with the accept configuration. Hence the set X would be recursive. On the other hand, the complement of G is recursively enumerable, because in order to verify that r is not an h-good number one needs only to start M and wait till it accepts an input configuration C of size ≤ r but with time(C) > log log r. Thus G is an immune set and its complement is simple, in the terminology of [A. I. Mal’cev, Algorithms and recursive functions. Nauka, Moscow, 1965.]. Note that the very existence of simple and immune sets was a non-trivial problem solved by Muchnik and Friedberg using the famous priority argument, answering a question of (not a relative of this author) Emil Post (see [A. I. Mal’cev, Algorithms and recursive functions. Nauka, Moscow, 1965.] and [Robert I. Soare, Recursively Enumerable Sets and Degrees, Springer-Verlag, 1987.] for details). Our proof of Theorem 2.6 also uses the priority argument and Matiyasevich’s solution of Hilbert’s 10th problem [Yu. V. Matiyasevich, Hilbert’s tenth problem. Translated from the 1993 Russian original by the author. With a foreword by Martin Davis. Foundations of Computing Series. MIT Press, Cambridge, MA, 1993.].

The complexity classes. If there exists an algorithm A which solves the “yes” part of problem D and t_A(n) is bounded from above by a polynomial in n, then we say that D can be solved in polynomial time. Solvability in exponential time, polynomial space, exponential space, etc., is defined similarly. It is worth mentioning that if we modernize the Turing machine by, say, adding more tapes or heads, we won’t change the complexity of the problem much. For example a non-polynomial time (space) problem cannot become polynomial as a result of that. The class of all problems which can be solved in polynomial time is denoted by P. If in the definition of time complexity we replace deterministic Turing machines by non-deterministic Turing machines, then we obtain definitions of solvability in non-deterministic polynomial time, non-deterministic exponential time, etc. Recall that a non-deterministic Turing machine is more intelligent than a deterministic one: it does not blindly obey the commands of the program, but, at every step, guesses itself what the next step should be. Roughly speaking, a problem D can be solved in non-deterministic polynomial time if for every element x in S_D there exists a proof (usually called a witness) that x belongs to S_D, whose length (size) is bounded by a polynomial of the size of x and whose validity can be checked in time polynomial in the size of x. The class of all problems which can be solved in polynomial time by a non-deterministic Turing machine is denoted by NP. It is not known if P = NP. As we have seen, this is one of the central problems in Theoretical Computer Science. In order to prove that a problem is solvable in polynomial time it is enough to find a polynomial time algorithm solving this problem. We have mentioned that the complexity of the McKinsey algorithm is very high, but we do not know any finitely presented residually finite group with a really hard word problem. Thus we formulate: Problem 2.7. Is there a (time or space) complexity bound for the word problem in a finitely presented residually finite group? More precisely, what is the highest class in the time complexity hierarchy [M. R. Garey, D. S. Johnson, Computers and Intractability. A Guide to the Theory of NP-Completeness. A Series of Books in the Mathematical Sciences. Freeman, New York (1979).]
where the word problem in every finitely presented residually finite group belongs? An even bolder question: is the word problem of every finitely presented residually finite group in NP?

Reducing one problem to another. In order to prove that a problem D cannot be solved in polynomial (exponential, etc.) time, one has to take another problem Q which is known to be “hard” and reduce it to D. There are several kinds of reductions used in the Computer Science literature. One of them, the polynomial reduction in the sense of Karp [M. R. Garey, D. S. Johnson], is the following. A reduction of a problem Q to a problem D is a function φ from B_Q to B_D such that:
• An element x from B_Q belongs to S_Q if and only if φ(x) belongs to S_D.
• The element φ(x) can be computed in polynomial time; in particular the size of φ(x) is bounded by a polynomial of the size of x.
It is clear that if Q is “hard” and Q can be reduced to D then D is “hard” as well. Notice that, when we prove the undecidability of a problem D, we usually also reduce a problem Q known to be “hard” (which in this case means undecidable) to D. In order to reduce Q to D we find a similar mapping φ, but we do not care about the size of φ(x). The “basic hard” problem is usually the halting problem of a Turing machine (or of some other kind of machine such as Minsky machines). But it could be a different kind of undecidable problem, such as Hilbert’s 10th problem (see [Olga Kharlampovich, Mark Sapir. Algorithmic problems in varieties. Internat. J. Algebra Comput. 5 (1995), no. 4–5, 379–602., Section 6] for details) or membership in the range of a general recursive function as in McKenzie–Thompson [Ralph McKenzie, Richard J. Thompson, An elementary construction of unsolvable word problems in group theory. Word problems: decision problems and the Burnside problem in group theory (Conf., Univ. California, Irvine, Calif. 1969; dedicated to Hanna Neumann), Studies in Logic and the Foundations of Math., 71, pp. 457–478. North-Holland, Amsterdam, 1973.].

An NP-complete problem. Many known problems A are such that A is in NP and every other NP-problem polynomially reduces to A. Such problems are called NP-complete. One of these problems is the following exact bin packing problem [M. R. Garey, D. S. Johnson, p. 226], [135, p. 255]:

Problem 2.8. (Exact bin packing problem)
• Input: A k-tuple of positive integers (r_1, . . . , r_k) and positive integers B, N.
• Output: “Yes” if there exists a partition of {1, . . . , k} into N subsets {1, . . . , k} = S_1 DisjointUnion . . . DisjointUnion S_N such that for each i = 1, . . . , N we have SUM[j an element of S_i] r_j = B.
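A brute-force sketch of Problem 2.8 in Python (the item values and the function name are illustrative assumptions): it tries all N^k ways of assigning the k items to the N subsets and accepts only if every subset sums exactly to B. The exponential number of candidate assignments, against the trivially fast check of any single candidate, is again the signature of an NP problem.

from itertools import product

def exact_bin_packing(items, N, B):
    # Brute force for the exact bin packing problem: can the items r_1..r_k be split
    # into N groups, each summing exactly to B?  Tries all N^k assignments.
    # Returns one valid assignment (a tuple of group indices) or None.
    k = len(items)
    for assignment in product(range(N), repeat=k):
        sums = [0] * N
        for item, group in zip(items, assignment):
            sums[group] += item
        if all(s == B for s in sums):
            return assignment
    return None

# (4, 3, 2, 3, 2, 4) splits into 3 groups of sum 6; with B = 5 it does not.
print(exact_bin_packing([4, 3, 2, 3, 2, 4], 3, 6))   # some valid assignment such as (0, 1, 0, 1, 2, 2)
print(exact_bin_packing([4, 3, 2, 3, 2, 4], 3, 5))   # None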
Complexity classes and algebraic systems. A deep connection between Computer Science and algebra was found by R. Fagin [Ronald Fagin, Generalized first-order spectra and polynomial-time recognizable sets. In Complexity of Computation. SIAM-AMS Proceedings. 7, pages 43–73, 1974.]. He proved, in particular, the following amazing model-theoretic characterization of classes of finite algebras whose membership problem can be solved in non-deterministic polynomial time. Recall that an algebraic system is by definition a set with certain operations and predicates of various arities [A. I. Mal’cev, Algorithms and recursive functions. Nauka, Moscow, 1965.]. For example, a linearly ordered group is an algebraic system with one binary operation (product), one unary operation (taking inverses), one nullary operation (the identity element) and one binary predicate giving the order relation.

Theorem 2.9 (Fagin [Ronald Fagin, Generalized first-order spectra and polynomial-time recognizable sets. In Complexity of Computation. SIAM-AMS Proceedings. 7, pages 43–73, 1974.]). The membership problem for an abstract (i.e. closed under isomorphisms) class of finite algebraic systems is in NP if and only if it is the class of all finite models of a second-order formula of the following type: ThereExists Q_1 ThereExists Q_2 . . . ThereExists Q_n (Theta), where each Q_i is a predicate and Theta is a first-order formula.

Basically this theorem means that the membership problem of a class of finite algebraic systems is in NP if and only if we can describe the structure of algebraic systems of this class in terms of functions and relations. Since all known methods of studying the structure of algebraic systems are based on studying functions (homomorphisms, polynomial functions, etc.) and relations (congruences, orders, etc.), we can conclude that, for the membership problem of a class of finite algebras, being in NP is equivalent to admitting a reasonable structural description. Note that a word in an alphabet with k letters can be considered as an algebraic system as well: a word is a linearly ordered finite set (say, {1, 2, . . . , n}) with k unary predicates L_x indexed by the letters of the alphabet, such that L_x(p) holds if the p-th letter in the word is x. The set of all words in the k-letter alphabet can be defined by obvious axioms involving these k + 1 predicates (the order predicate should satisfy the axioms of linear order, plus for every p = 1, 2, . . . , n one and only one of the statements L_x(p) holds). Reduced group words can also be defined using similar axioms, etc. Hence being able to solve the word problem in a group in non-deterministic polynomial time means, by Fagin’s theorem, that we can find an algebraic description of the words that are equal to 1 in the group. Note that classes where the membership problem has other types of computational complexity have similar model-theoretic characterizations. Classes with membership problem in P have been characterized by Immerman [N. Immerman, Relational queries computable in polynomial time. Information and Control, 68:76–98, 1986.], Sazonov [V. Sazonov, Polynomial computability and recursivity in finite domains. Electronische Informationverarbeitung und Kybernetik, 16:319–323, 1980.], [V. Sazonov, A logical approach to the problem P=NP. Lect. Notes Comp. Sci., 88:562–575, 1980.] and Vardi [M. Y. Vardi, The complexity of relational query languages. In Proc. 14th ACM Symp. on Theory of Computing, pages 137–146, 1982.]. Classes with exponential time membership problem were characterized by Fagin. Classes with non-deterministic exponential time membership problem have been characterized by Fagin and by Jones and Selman [N. D. Jones, A. L. Selman, Turing machines and the spectra of first-order formulas with equality. J. Symbolic Logic, 28:139–150, 1974.]. For a detailed survey of these results see [Ronald Fagin, Finite model theory – a personal perspective. Theoret. Comput. Sci., 116:3–31, 1993.]. YOUR FEEDBACK IS WELCOME!

JVP/acp\TOE9.doc
