Douglas J. Matzke

Dallas, Texas

http://www.matzkefamily.net/doug/

Presented at: Towards a Science of Consciousness 1994 Tucson I

A dualist model of mind-body is proposed that may offer a computational solution to the mystery of human intelligence and ultimately shed light on consciousness. This model addresses the formal and physical limits to computation that critics of artificial intelligence argue are the reason computers will never produce really intelligent machines. The model describes potential ideal computational leverage mechanisms that are based upon primitive properties of the universe (such as space, time, and number of dimensions) derived from the consistency arguments of modern physics. An intuitive model of computational costs is introduced to create a framework for discussing how unlimited computation could be derived from space, time, and observer requirements. This cost model suggests that an ideal solution for unlimited intelligence would require a sparse, high-dimensional spacetime (unrestricted locality) and a formalized observer mechanism (a mobile observer framework based on a superset of inertial-frame properties). This solution simultaneously addresses the semantic issue of unrestricted locality by maintaining a space/time metric while going beyond the non-locality constraints of 4D physical implementation layers. Such a computational model would strongly support a dualist model of consciousness, in which the mind has properties distinct from the brain/body. This solution would explain why naturally occurring "real intelligence" computation systems exhibit consciousness, and why current artificial intelligence solutions are neither really intelligent nor conscious.

Is the brain a fantastic computer we have yet to decipher, or does some mysterious non-physical property called the "mind" evolve with and control this biological robot? Dualists have stated there are two distinct types of substance: mental and physical. Alternatively, materialists conclude from neurological research that the brain and the mind are one and the same. All but a handful of scientists believe that the "dualist" approach has been outmoded for decades[1]. Either approach to the mind-body (or mind-brain) problem must be built on a computational theory, since man's intelligence is as much a mystery as his consciousness. Purely mechanistic computational approaches do not add much insight into the mystery of consciousness[2]. My assumption is that a good computational model for intelligence must precede, and therefore support, any theory of consciousness. If a correct computational strategy could be developed to support the kind of "real intelligence" demonstrated by mankind, then it may also shed light on consciousness.

This paper presents a rationale, important properties, and computational framework that would be required for a dualist model of the mind. Most scientists will reject the need for a dualist model of the mind, but research in physics has uncovered a wealth of understanding that could be applied to the computational approach to the mind-brain problem. Physics has developed the concepts, theories, and techniques to deal with real items that can only be indirectly measured. For example, quark theory predicts that matter consists of tiny invisible quarks, but quarks themselves are not directly measurable. In fact, most of modern science depends on indirect measurements based on predictions from some theory. These same techniques can be applied to studying a real but nonphysical mind.

If a nonphysical mind really does exist, then it should be amenable to study in the same fashion as other physical theories that deal with indirectly observable phenomena. Once the cultural limitations of accepting the possibility of nonphysical (i.e., indirectly observable) mechanisms are addressed, the next major questions to be tackled for a dualist model of the mind-brain include concerns about representational, architectural, and physical limits.

Since humans are intelligent as well as conscious, a good predictive computational theory is the key requirement for a solution to the mind-brain puzzle. Such a theory must address the representational issue of information versus knowledge (or knowing). Information is traditionally considered a static measure, whereas knowing, meaning, and consciousness imply a dynamic action. Any representation of information and its dynamics impacts all aspects of our models, including the computer architectures we can conceive or build. Our computer-dominated representations for information, space, and time have limited our ability to develop a computational theory of the mind.

The most widely used computer design is the Von Neumann computer architecture. In this architecture, the memory and the processor are separated by a set of wires called a bus. This bus is called the "Von Neumann bottleneck" because all data must move through this limited-speed set of wires, no matter how fast the processor or how large the memory. This computer organization should be relabeled the "Newtonian bottleneck" because it reflects the computer industry's use of last century's models of an independent space and time. Computation requires both space (memory/communication) and time (processor or change) resources, but segregating these two resources seems to violate what modern physics has learned about a unified spacetime. A modern view of a unified spacetime is best reflected in cellular architectures, where small amounts of memory and logic produce a space of active data that approximates the organizational dynamics found in physics.
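
The contrast can be made concrete with a minimal Python sketch of a one-dimensional cellular automaton (illustrative only, not a model of any particular machine): each cell pairs one bit of memory with its own local update logic, so no central bus ever carries the data. Rule 110 is chosen here arbitrarily as the local rule.

```python
def step(cells, rule=110):
    """Advance every cell one tick using only its local neighborhood.

    Each cell reads just its left neighbor, itself, and its right
    neighbor (wrapping at the edges) -- memory and logic stay together.
    """
    n = len(cells)
    nxt = []
    for i in range(n):
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        nxt.append((rule >> neighborhood) & 1)  # look up the rule bit
    return nxt

cells = [0] * 16
cells[8] = 1                 # a single seed bit of "active data"
for _ in range(4):
    cells = step(cells)      # the whole space updates in parallel
```

Every tick touches all cells at once, which is the organizational contrast with a processor pulling one word at a time across a bus.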

Since computing is a physical action[3], physical limits impact the realization of any computational model. The speed of light, the wavelength of light, the discrete charge of an electron, noise margins, thermal problems, and the uncertainty principle are obvious limits to building physical computing machines. Other less obvious limits, such as the number of physical dimensions of our universe, the black hole limit, and rate of growth of exponential problems also have a profound impact on the size and nature of physical machines we can design or build.

Hubert Dreyfus[4], a critic of the artificial intelligence community, predicts that computers will never be intelligent due to known formal and computational limits. Dreyfus states that computer science has made interesting progress in mimicking human intelligence, but these algorithms require an exponential amount of computing resources for larger and larger problems. The problems of vision and language understanding, dynamic motion control, cryptography, and planning far exceed the abilities of any conventional computing machine. Future scalability limits ultimately restrict how powerful a computer we can design or build. It is for these reasons that understanding ordinary human intelligence may be a prerequisite to understanding consciousness.
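
The scaling problem Dreyfus describes can be illustrated with a toy Python count (the function name is mine) of the states a naive exhaustive search must examine: each added degree of freedom doubles the work, so doubling the problem size squares it.

```python
from itertools import product

def brute_force_states(n):
    """Count every assignment a naive search over n boolean choices examines."""
    return sum(1 for _ in product([0, 1], repeat=n))

# Exponential growth: 2**n states for n choices.
assert brute_force_states(10) == 1024
assert brute_force_states(20) == brute_force_states(10) ** 2
```

At n around 100 the count exceeds any physically realizable machine's lifetime of operations, which is the sense in which these problems "far exceed" conventional computing.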

It is clear that a revolutionary computational approach is needed to build truly intelligent machines, because the hard computational problems listed above are all members of the same formal class of algorithms. For these reasons, the following two areas of physics research are being investigated. First, scientists are studying quantum computing as a mechanism for exponential speedup[5]. The appeal of applying quantum physics for computational leverage (and consciousness) comes from its unusual properties of instantaneous, non-local correlations of discrete states. Second, relativity likewise encompasses the unusual properties of variable space and time. Some of these strategies for providing extraordinary computing resources might also provide insight concerning computational processes with properties suitable for consciousness. It is possible that systems that exhibit the self-organization required for human "real intelligence" (nothing artificial about it) may exhibit consciousness. The next section surveys aspects of physics that could build a conceptual framework for extraordinary computational facilities and consciousness.

Physics must ultimately develop a solution for human "real intelligence", because it represents an evolutionary, complexity-increasing informational process. This process must not violate what physicists know about the evolution of the complexity of the universe. Cosmologists who study the evolution of the universe[6] are combining techniques from information, quantum, and relativity theories, the three most successful theories of all time. These hybrid efforts describing the evolution of the universe could be applied to the evolution of the mind.

Research results from these three fields on higher-dimensional semantics may be applicable to the puzzle of intelligence and ultimately consciousness. Many interesting and complex information systems have higher-dimensional semantics and repeatedly show up in nature. In addition, computer scientists have demonstrated in coding theory[7], neural nets[8], and many forms of higher-dimensional mathematics that increasing the independent degrees of freedom can be computationally advantageous. It is also well known that simulating certain high-dimensional problem semantics on computers with a one-dimensional virtual memory system[9] is inefficient.

Many researchers are already cognizant of the benefits of combining these three powerful theories. In his famous "It from Bit" paper, John Wheeler elegantly describes how information and quantum theories can be combined[10]. His paper describes other hybrid efforts that combine information, gravity, and quantum theories by Bekenstein & Schiffer[11], Hawking[12], Penrose[13], and Unruh[14]. This work is exciting because it combines the sciences of the very large (gravity theory), very small (quantum theory), and very complex (information theory).

The challenge of building a grand theory that combines all known theories is the goal of many researchers[15]. These hybrid theories have interesting names such as Bitstring Physics[16], Grand Unified Theory, Quantum Gravity, Theory of Everything, and many others[17]. Just as in information applications, many of these theories depend on an emergent spacetime and are based on high dimensional semantics (symmetries in 5 or 10 dimensions). All these grand theories are topological and geometrical, which is similar to the classification of computer algorithms and architectures. Another common theme among these theories is the requirements and mechanisms for consistency. Consistency can be viewed as an informational cause behind conservation laws and is therefore more primitive than mass, energy, or even spacetime.

Topological consistency and higher dimensional spacetimes seem to be common themes relevant to information theory and the physical sciences. These ideas are the foundation for Einstein's relativity which ushered in an entirely new physical theory governing consistency laws and spacetime. Quantum theory is also based on an algebraic consistency of certain conserved properties described in Hilbert Space, a high dimensional mathematics. Both of these theories have made verifiable predictions that space and time must have non-intuitive properties in order to maintain these consistency laws. This combination of well understood physical mechanisms (consistency and spacetime) defines a framework for all physics. It is possible that computational leverage for intelligence and consciousness can arise from these powerful concepts and theories.

Consistency frameworks form the physical foundation for multiple observational viewpoints or different "points of view". Formally defining the interaction between the observer and the "action or thing being observed" is part of understanding the observation process. Historically, scientists have prided themselves on the belief that true science occurs when the observer does not participate in or disturb an act of measurement. Unfortunately, quantum physics measurements depend on how a question is asked or what question is asked. If an experiment asks particle questions, then the results are particle answers. If an experiment asks wave questions, then the results are wave answers. Likewise in relativity, asking how much "energy" is in a system depends on the observer's velocity and acceleration.

Four independent frameworks for observation have been developed: 1) information/sampling theory, 2) relativity inertial frames, 3) quantum wave function collapse, and 4) the self-referential aspect of mind, called consciousness. Ideally, these observational frameworks should be combined into one unified framework that describes consistent observation. The consistency arguments that were used to develop relativity theory should apply to all observational frameworks. These arguments are critical because they define the very nature of space and time, which are the primary resources for computation.

The main idea stated in Einstein's relativity principle was that "all inertial frames are totally equivalent for the performance of all physical experiments."[18] In other words, no matter where you are in space or what speed you are traveling, the laws of physics must be the same. The laws define the possible actions as well as the process of observing those actions from any vantage point.

The logical progression of consistency steps that follow from the relativity principle is:

1) Any point in space is as good as any other (position invariance)

2) Any direction is as good as any other (rotation invariance, or isotropy)

3) Any speed is as good as any other (velocity invariance), except that the maximum is the speed of light, c

4) Inertial frames are the mathematical framework for relating various vantage points

5) Many conserved properties, such as mass and energy, are not truly invariant primitives

6) Parametrized consistency metrics are the only true cornerstone for observation

One major outcome from relativity was experimental proof that the speed of light is constant no matter how you measure it, and no matter what speed you are traveling. In fact, mass, energy, distance, and time have changing values depending on one's speed. This result is required to keep anything that has mass from exceeding the speed of light, and to have all the laws of physics work consistently in all inertial frames, even those traveling very fast. The speed of light is a cosmic speed limit, and all observations using light are dependent on relativity principles. Gravity was also shown to be nothing more than an acceleration due to matter bending space and time. Relativity has the intuitive and mathematical framework to make it one of the most advanced theories of our time. Even observational frameworks of the mind could benefit from the power of this theory.
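
The "changing values depending on one's speed" follow from the standard Lorentz factor, which a short Python sketch can compute (the formula is standard special relativity; the speeds chosen are merely illustrative):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def gamma(v):
    """Lorentz factor: the amount by which time dilates, lengths
    contract, and mass-energy grows for an observer moving at speed v."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# At everyday speeds gamma is indistinguishable from 1; near c it grows
# without bound, which is what keeps massive objects below light speed.
assert abs(gamma(0.0) - 1.0) < 1e-12
assert abs(gamma(0.6 * C) - 1.25) < 1e-9   # 60% of c: clocks run 1.25x slow
```

The divergence of this factor as v approaches C is the mathematical form of the "cosmic speed limit" described above.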

Many of the consequences and predictions of the consistency models are counter-intuitive:

1) Consistency is more primitive than conservation laws of energy/mass, or space and time

2) Consistency requires light to follow locally "straight line" geodesics (curved spacetime)

3) Consistency results in constant C and variable space and time (Lorentz Transformation)

4) Inertial frames are completely relative and outside physics (they cannot be acted upon)

5) Consistency in quantum theory results in more extraordinary spacetime models than relativity

6) Causality must be replaced by synchronization among quantum events

Relativity theory really is a new physical theory of space and time that impacts all physical laws (and all observational frameworks). These astounding and counter-intuitive predictions are all based on absolute consistency requirements for the observation of physical events. When quantum events are considered (no direct observation possible), even more unusual spacetime properties emerge. In fact, Peter Shor[19] has recently described how the quantum collapse of states can be used to solve very hard cryptographic calculations.
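
Shor's result can be sketched classically: his quantum algorithm accelerates only the order-finding step of a known reduction from factoring. A hedged Python illustration of that classical reduction (function name is mine; the brute-force loop is exactly the part a quantum computer would speed up exponentially) is:

```python
from math import gcd

def factor_via_order(N, a):
    """Classical sketch of the reduction behind Shor's algorithm:
    find the order r of a mod N, then gcd(a**(r//2) +/- 1, N) splits N.
    Only the order-finding loop below gains a quantum speedup."""
    if gcd(a, N) != 1:
        return gcd(a, N)          # lucky guess: a shares a factor with N
    r, x = 1, a % N
    while x != 1:                 # brute-force order finding (slow classically)
        x = (x * a) % N
        r += 1
    if r % 2:
        return None               # odd order: this base yields no factor
    y = pow(a, r // 2, N)
    f = gcd(y - 1, N)
    return f if 1 < f < N else None

assert factor_via_order(15, 7) in (3, 5)   # 7 has order 4 mod 15 -> factor 3
```

The loop runs in time exponential in the number of digits of N, which is why this reduction alone does not break cryptography; the quantum version finds r in polynomial time.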

Physics research has uncovered many non-physical mechanisms that could be useful for creating computational leverage. Some of those ideas are:

1) Use ballistic computation and geodesics to reduce power costs

2) Locality could be manipulated to shorten perceived distances

3) Consistency mechanisms behave as superluminal synchronization primitives

4) Act upon inertial frames or consider higher-order Lorentz contraction (2d & 3d)

5) Quantum computers theoretically can provide exponential speedup

6) Consistency mechanisms interact outside normal linear time, excluding illegal time loops

7) Increased dimensionality increases degrees of freedom (number of bits of choice)

8) Prespatial and pretemporal change mechanisms may be hierarchical and sparse
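
Item 7 in the list above can be made concrete with a short Python sketch: in an n-dimensional binary hypercube every point has exactly n immediate neighbors, so each added dimension contributes one more independent bit of choice.

```python
def neighbors(vertex, n):
    """All points one bit-flip away from `vertex` in an n-dimensional
    binary hypercube -- one neighbor per dimension."""
    return [vertex ^ (1 << i) for i in range(n)]

# Degrees of freedom grow linearly with dimension, while the space
# itself (2**n vertices) grows exponentially.
assert len(neighbors(0, 3)) == 3      # 3 dimensions -> 3 choices
assert len(neighbors(0, 10)) == 10    # 10 dimensions -> 10 bits of choice
```

This is the sense in which a high-dimensional backdrop offers more "room" for information dynamics than 3d space.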

These ideas appeal to researchers studying the mind and consciousness because certain biological[20], psychological[21], parapsychological[22], and meditative research[23] strongly suggests that these properties are exhibited by the mind. An interesting point to note concerning computational leverage mechanisms is that they deal with cosmological issues, such as the framework of spacetime and the structure of the universe, and are thus "outside the box" of normal day-to-day physics. This is not surprising given that the evolution of the mind (both collectively and individually) deals with many of the same issues (information, complexity, and energy) as the evolution of the universe. The next section will succinctly describe a model dealing with observation and the mind that could encompass many of these computational leverage ideas.

Relativity and quantum mechanics have each formalized the role of an observer. Mind is possibly the ultimate observational mechanism and therefore must also have some kind of observational framework. An abstract observational framework can be constructed that allows computational leverage properties.

This framework is built upon Carver Mead's two costs of computation[24] which are:

1) Information is in the wrong place (spatial entropy): must move the data

2) Information is in the wrong form (logical entropy): must rotate or transform the data

Mead labeled these costs spatial and logical entropy to suggest a link between physics and information theory. His computational cost model can be intuitively applied to a physical, 3d geometric framework and extended to deal with computational leverage.
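
Mead's two costs can be caricatured in a few lines of Python (a toy rendering of my own, not Mead's formulation): movement cost grows with distance on the backdrop, and transformation cost grows with the number of positions in the wrong form.

```python
def spatial_cost(src, dst):
    """Spatial entropy, caricatured: Manhattan distance a datum must
    travel between two points on a 2-d grid."""
    return abs(src[0] - dst[0]) + abs(src[1] - dst[1])

def logical_cost(wrong_bits):
    """Logical entropy, caricatured: number of bit positions that must
    be transformed to put the datum in the right form."""
    return bin(wrong_bits).count("1")

# Total cost of a computation = cost of moving + cost of transforming.
total = spatial_cost((0, 0), (3, 4)) + logical_cost(0b1011)
assert total == 7 + 3
```

The leverage arguments that follow amount to driving each of these two terms toward zero.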

Mead's initial idea behind spatial entropy was based on traditional communications theory. This can be expanded to a modern view in which spacetime is the backdrop for events, which includes moving information through both space and time. The model can be expanded even more if the perception of "locality" for distances/times is distorted due to relativistic or quantum mechanisms. An additional mechanism for manipulating locality is to assume the dynamics of a sparse higher-dimensional space. As was mentioned earlier, information, relativity, quantum, and combined theories have all adopted higher-dimensional modeling, so it is reasonable to expect higher-dimensional semantics to enter into a high-leverage computational model. This hyperspace model would most likely be sparse because of the desirable properties that arise[25;26], similar to Wheeler's pregeometric model[27]. Thus the expanded spatial entropy theory deals with all of the backdrop issues of dimensionality, geometry, locality, and spacetime metrics required of an observational framework.
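
The appeal of a sparse high-dimensional space can be illustrated with a toy memory in the spirit of Kanerva's Sparse Distributed Memory[25] (the dimension, pattern count, and noise level are arbitrary choices of mine): random points in a high-dimensional binary space lie roughly half the dimension apart in Hamming distance, so even a heavily corrupted cue still recalls the right pattern.

```python
import random

random.seed(0)
DIM = 256                     # dimensionality of the binary space

def rand_vec():
    return random.getrandbits(DIM)

def hamming(a, b):
    """Number of bit positions in which two patterns differ."""
    return bin(a ^ b).count("1")

memory = [rand_vec() for _ in range(50)]   # sparsely stored patterns

def recall(cue):
    """Return the stored pattern nearest the (possibly noisy) cue."""
    return min(memory, key=lambda v: hamming(v, cue))

# Corrupt 20 of 256 bits of a stored pattern; recall still succeeds,
# because unrelated random patterns sit ~128 bits away from the cue.
target = memory[7]
noisy = target
for i in random.sample(range(DIM), 20):
    noisy ^= 1 << i
assert recall(noisy) == target
```

Sparseness is doing the work here: with only 50 patterns scattered through a space of 2^256 points, locality in the wrong sense never arises.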

Mead's logical entropy originally was a conventional algorithmic view of transformation (i.e., inputs to outputs) and data rotation (i.e., one form to another form). Recognizing that all actions and events can be placed in a spacetime backdrop, modern physics deals with rotation and transformation by using inertial frames to formally convert from one perspective to another. Conventional computer science takes a view of transformation costs where the stationary processor implements the algorithm and the data is mobile, whereas physics takes a mobile observer frame and stationary geometric backdrop for events. Inertial frames represent the only concept from physics that matches many of the desirable properties of a mobile observer that are exhibited by psychological research[28;29;30;31;32]. A physical theory for inertial frames (i.e., they have state and therefore require bits) needs to be created to explain how they can be included in physics (i.e., be acted upon) and how they can distort locality in more than one direction (i.e., >1d length contraction). Expanded consciousness experiences demand a mechanism with these properties[33].

The ultimate computational leverage is achieved when information is always in the correct location (unlimited locality) and always in the correct form (optimum mobile perspective). In the limit, spatial entropy is minimized in a sparse hyperspace with unlimited locality, and logical entropy is minimized when a mobile observer can choose the optimal perspective of an event backdrop (assuming that cost is not proportional to the size of the backdrop). If such a computational leverage model existed, it would be useful for looking at mental processes.

This paper introduced the idea that the "real intelligence" of humans may require revolutionary computational leverage due to physical limits of computation within normal 3d space and time. Modern physics theories that are based on observer consistency arguments have already defined many possible avenues for computational leverage based on indirect measurement and extraordinary views of space and time. These models of sparse hyperspacetime form a consistency backdrop for all possible events and all possible observer interactions. Consciousness may be a direct consequence of a dualist model of the mind-brain based on these consistency and computational leverage mechanisms. If the dualist mind exists outside normal spacetime, then the mind is akin to a "Gödel machine" that is capable of stepping outside of our normal spacetime limits.

Carver Mead's intuitive model of computational costs was expanded to provide an informal model for incorporating the following computational leverage ideas: 1) information is always in the correct place (unlimited locality), and 2) information is always in the correct form (optimum mobile perspective). Similar to quantum mechanical models, this dualist solution does not suffer from homunculus regression (infinite nesting of mind solutions) because the mind is not limited to 3d space but represents a topological consistency in a sparse hyperspacetime. Higher-dimensional models of the mind cannot be faithfully simulated using holograms or neural networks, because such simulations represent the spatial semantics but not the corresponding temporal speedups. A self-evolving conscious mind could emerge from such a hyperspacetime computation framework.

[1] Killheffer, Robert. 1993. "The Consciousness Wars." Omni. vol 16 number 1.

[2] Maudlin, Tim. 1989. "Computation and Consciousness." The Journal of Philosophy. vol 86 number 8, pages 407-432.

[3] Landauer, Rolf. 1992. "Information is Physical." Proceedings of the Workshop on Physics and Computation. IEEE Computer Science Press.

[4] Dreyfus, Hubert. 1992. "What Artificial Experts Can and Cannot Do." AI & Society. vol 6 number 1.

[5] Shor, Peter. 1994. "Algorithms for Quantum Computation: Discrete Log and Factoring.", preprint.

[6] Hawking, Stephen. 1988. A Brief History Of Time, From the Big Bang to Black Holes. Bantam Books, New York.

[7] Lucky, Robert. 1989. Silicon Dreams: Information, Man, and Machine. St. Martin's Press, New York.

[8] Kanerva, Pentti. 1988. Sparse Distributed Memory. MIT Press. Cambridge MA.

[9] Margolus, Norman and Toffoli, Tom. 1993, "CAM-8: A Computer Architecture based on Cellular Automata." Technical Report of MIT CAM8 group.

[10] Wheeler, John. 1989. "It From Bit," Proceedings 3rd International Symposium on Foundations of Quantum Mechanics, Tokyo.

[11] Schiffer, M. 1992. "The Interplay between Gravitation and Information Theory." Proceedings of the Workshop on Physics and Computation. IEEE Computer Science Press.

[12] Hawking, Stephen. 1975. Communications in Mathematical Physics 43:199.

[13] Penrose, Roger. 1979. Singularities and Time asymmetry in General Relativity: An Einstein Centenary Survey. editors S. W. Hawking and W. Israel. Cambridge University Press.

[14] Unruh, W. G. 1976. Physical Review D 14:870.

[15] Kaku, Michio. 1994. Hyperspace, A Scientific Odyssey Through Parallel Universes, Time Warps, and the Tenth Dimension. Oxford University Press

[16] McGoveran, David and Noyes, Pierre. 1989. "An Essay on Discrete Foundations for Physics." Physics Essays. vol 2 number 1.

[17] Kaku, Michio. 1994. Hyperspace, A Scientific Odyssey Through Parallel Universes, Time Warps, and the Tenth Dimension. Oxford University Press

[18] Rindler, Wolfgang. 1977. Essential Relativity: Special, General, and Cosmological. Springer-Verlag, New York.

[19] Shor, Peter. 1994. "Algorithms for Quantum Computation: Discrete Log and Factoring.", preprint.

[20] Sheldrake, Rupert. 1971. A New Science of Life, The Hypothesis of Formative Causation. J. P. Tarcher Publisher. Los Angeles.

[21] Ornstein, Robert. 1969. On the Experience of Time. Pelican Books.

[22] Jahn, Robert G. 1982. "The Persistent Paradox of Psychic Phenomena: An Engineering Perspective". Proceedings of the IEEE. Volume 70, No 2, February.

[23] Dillbeck, Michael and Alexander, Charles. 1989. "Higher States of Consciousness: Maharishi Mahesh Yogi's Vedic Psychology of Human Development." The Journal of Mind and Behavior. vol 10 number 4.

[24] Mead, Carver and Conway, Lynn. 1980. Introduction to VLSI Systems. Addison-Wesley Publishing. Menlo Park, CA. Pages 333-371.

[25] Kanerva, Pentti. 1988. Sparse Distributed Memory. MIT Press. Cambridge MA.

[26] Pietsch, Paul. 1981. ShuffleBrain: The Quest for the Hologramic Mind. Houghton Mifflin Company. Boston.

[27] Wheeler, John. 1962. Geometrodynamics. Academic Press. New York.

[28] Targ, Russell and Puthoff, Harold. 1977. Mind Reach. Dell Publishing.

[29] Jahn, Robert. 1982. "The Persistent Paradox of Psychic Phenomena: An Engineering Perspective." Proceedings of the IEEE. Vol 70 number 2.

[30] Monroe, Robert A. 1971. Journeys Out of the Body. Doubleday Press. New York.

[31] McMoneagle, Joseph. 1993. Mind Trek. Hampton Roads Publishing. Norfolk, VA.

[32] Moody, Raymond. 1975. Life After Life. Bantam Books. New York.

[33] Murphy, Michael and White, Rhea. 1978. The Psychic Side of Sports. Addison-Wesley Publishing. Menlo Park, CA.