

Will We Eventually Upload Our Minds?


Bruce Katz Interview

Written By: Surfdaddy Orca

Date Published: September 9, 2009

Bruce Katz received his Ph.D. in artificial intelligence from the University of Illinois. He is a frequent lecturer in artificial intelligence at the University of Sussex in the U.K. and serves as an adjunct professor of Computer Engineering at Drexel University in Philadelphia. Dr. Katz is the author of Neuroengineering the Future and Digital Design, as well as many journal articles.

Katz believes we are on the cusp of a broad neuro-revolution, one that will radically reshape our views of perception, cognition, emotion and even personal identity. Neuroengineering is rapidly advancing from perceptual aids such as cochlear implants to devices that will enhance and speed up thought. Ultimately, he says, this may free the mind from its bound state in the body to a platform independent existence.

Mind uploading
From Wikipedia, the free encyclopedia
Mind uploading or whole brain emulation (sometimes called mind transfer) is the hypothetical process of scanning and mapping a biological brain in detail and copying its state into a computer system or another computational device. The computer would have to run a simulation model so faithful to the original that it would behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.[1] The simulated mind is assumed to be part of a virtual reality simulated world, supported by a simplified body simulation model. Alternatively, the simulated mind could be assumed to reside in a computer inside (or connected to) a humanoid robot or a biological body, replacing its brain.
Whole brain emulation is discussed as a “logical endpoint”[1] of the computational neuroscience and neuroinformatics fields, both of which concern brain simulation for medical research purposes. It is discussed in artificial intelligence research publications[2] as an approach to strong AI. Among futurists and within the transhumanist movement it is an important proposed life extension technology, originally suggested in the biomedical literature in 1971.[3] It is a central conceptual feature of numerous science fiction novels and films.
Whole brain emulation is considered by some scientists as a theoretical and futuristic but possible technology,[1] although mainstream research funders remain skeptical. Over the years, several attempts have been made to predict when whole human brain emulation can be achieved; these predictions contradict one another, and some of the predicted dates have already passed. Substantial mainstream research and development is nevertheless being done in relevant areas, including the development of faster supercomputers, virtual reality, brain-computer interfaces, animal brain mapping and simulation, and information extraction from dynamically functioning brains.[4]
The question of whether an emulated brain can be a human mind is debated by philosophers, and may be contradicted by the dualistic view of the human mind that is common in many religions.
Overview

[Figures: neuron anatomical model; simple artificial neural network]
The human brain contains about 100 billion nerve cells called neurons, each individually linked to other neurons by way of connectors called axons and dendrites. Signals at the junctures (synapses) of these connections are transmitted by the release and detection of chemicals known as neurotransmitters. The established neuroscientific consensus is that the human mind is largely an emergent property of the information processing of this neural network.
Importantly, many leading neuroscientists have stated they believe important functions performed by the mind, such as learning, memory, and consciousness, are due to purely physical and electrochemical processes in the brain and are governed by applicable laws. For example, Christof Koch and Giulio Tononi wrote in IEEE Spectrum:
“Consciousness is part of the natural world. It depends, we believe, only on mathematics and logic and on the imperfectly known laws of physics, chemistry, and biology; it does not arise from some magical or otherworldly quality.”[5]
The concept of mind uploading is based on this mechanistic view of the mind, and denies the vitalist view of human life and consciousness.
Many eminent computer scientists and neuroscientists have predicted that computers will be capable of thought and even attain consciousness, including Koch and Tononi,[5] Douglas Hofstadter,[6] Jeff Hawkins,[6] Marvin Minsky,[7] Randal A. Koene,[8] and Rodolfo Llinas.[9]
Such a machine intelligence capability might provide a computational substrate necessary for uploading.
However, even though uploading depends on such a general capability, it is conceptually distinct from general forms of AI in that it results from the dynamic reanimation of information derived from a specific human mind, so that the mind retains a sense of historical identity (other forms are possible but would compromise or eliminate the life-extension feature generally associated with uploading). The transferred and reanimated information would become a form of artificial intelligence, sometimes called an infomorph or “noömorph.”
Even if uploading is theoretically possible, the amount of storage and computational power required are difficult to predict. Nevertheless, many theorists have presented models of the brain and have established a range of estimates of the amount of computing power needed for partial and complete simulations (citations needed for Boahen, Modha, Izhikevich, Bostrom and Sandberg, others). Using these models, some have estimated that uploading may become possible within decades if trends such as Moore’s Law continue.[10]
The prospect of uploading human consciousness in this manner raises many philosophical questions involving identity, individuality and the soul, as well as numerous problems of medical ethics and morality of the process.
Theoretical benefits
Speedup
A computer-based intelligence such as an upload could potentially think much faster than a human even if it were no more intelligent. Human neurons exchange electrochemical signals at a maximum speed of about 150 meters per second, whereas the speed of light is about 300 million meters per second, about two million times faster. Also, neurons can generate a maximum of about 200 action potentials or “spikes” per second, whereas the clock rate of modern computer chips is about 2 GHz (about ten million times greater) and continually increasing. So even if the computer components responsible for simulating a brain were not significantly smaller than a biological brain, and even if the temperature of these components was not significantly lower, Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence calculates that a simulated brain could run about 1 million times faster than a real brain, experiencing about a year of subjective time in only 31 seconds of real time.[11][12]
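The arithmetic behind these ratios is easy to check directly. A quick sketch using the paragraph's own round numbers (illustrative figures, not measurements):

```python
# Checking the paragraph's round numbers (illustrative figures, not measurements).
neuron_signal_speed = 150    # m/s, electrochemical conduction in neurons
speed_of_light = 3.0e8       # m/s
signal_ratio = speed_of_light / neuron_signal_speed
print(f"signal-speed ratio: {signal_ratio:.0e}")   # ~2e6, "two million times"

neuron_spike_rate = 200      # max action potentials per second
chip_clock = 2.0e9           # Hz, a modern chip's clock rate
rate_ratio = chip_clock / neuron_spike_rate
print(f"clock-rate ratio: {rate_ratio:.0e}")       # ~1e7, "ten million times"

speedup = 1.0e6              # Yudkowsky's assumed overall simulation speedup
seconds_per_year = 365 * 24 * 3600
print(f"one subjective year in {seconds_per_year / speedup:.1f} s of real time")
```

At a million-fold speedup, a 365-day year of subjective time compresses to about 31.5 seconds of real time, matching the figure quoted above.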
Immortality/Backup


Main article: Digital immortality
In theory, if the information and processes of the mind can be disassociated from the biological body, they are no longer tied to the individual limits and lifespan of that body. Furthermore, information within a brain could be partly or wholly copied or transferred to one or more other substrates (including digital storage or another brain), thereby reducing or eliminating mortality risk. This general proposal appears to have been first made in the biomedical literature in 1971 by renowned University of Washington biogerontologist George M. Martin.[3]
Multiple/parallel existence
Another concept explored in science fiction is the idea of more than one running “copy” of a human mind existing at once. Such copies could potentially allow an “individual” to experience many things at once, and later integrate the experiences of all copies into a central mentality, effectively allowing a single sentient being to “be many places at once” and “do many things at once.” Such partial and complete copies of a sentient being raise interesting questions regarding identity and individuality.
Issues
Bekenstein bound
The Bekenstein bound is an upper limit on information that can be contained within a given finite region of space which has a finite amount of energy or, conversely, the maximum amount of information required to perfectly describe a given physical system down to the quantum level.[13]
An average human brain has a mass of 1.5 kg and a volume of 1260 cm^3. Its rest-mass energy (E = m·c^2) is 1.34813·10^17 J, and if the brain is approximated as a sphere, then its radius (from V = 4·π·r^3/3) is 6.70030·10^-2 m.
The Bekenstein bound (I ≤ 2·π·r·E / (ħ·c·ln 2)) is then 2.58991·10^42 bits, and represents the maximum information needed to perfectly recreate the average human brain down to the quantum level. This implies that the number of different states (Ω = 2^I) of the human brain (and of the mind, if physicalism is true) is at most 10^(7.79640·10^41).
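These figures can be reproduced from standard physical constants. A short sketch, assuming the CODATA values of ħ and c:

```python
from math import pi, log, log10

# Reproducing the Bekenstein-bound figures quoted above from physical constants.
hbar = 1.054571817e-34   # J*s, reduced Planck constant
c = 2.99792458e8         # m/s, speed of light

m = 1.5        # kg, average brain mass
V = 1260e-6    # m^3 (1260 cm^3), average brain volume

E = m * c**2                        # rest-mass energy, ~1.34813e17 J
r = (3 * V / (4 * pi)) ** (1 / 3)   # radius of an equivalent sphere, ~6.70030e-2 m

# Bekenstein bound I <= 2*pi*r*E / (hbar*c*ln 2), in bits
I = 2 * pi * r * E / (hbar * c * log(2))
print(f"I = {I:.5e} bits")                           # ~2.58991e42
print(f"log10 of state count = {I * log10(2):.5e}")  # exponent in 2**I, ~7.7964e41
```

The last line converts the state count Ω = 2^I to a power of ten, giving the 10^(7.79640·10^41) figure.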
Computational issues

[Figure: Futurist Ray Kurzweil’s projected supercomputer processing power, based on Moore’s-law exponential growth of computer capacity, assuming a computational capacity doubling time of 1.2 years.]
Regardless of the techniques used to capture or recreate the function of a human mind, the processing demands are likely to be immense, due to the large number of neurons in the human brain along with the considerable complexity of each neuron.
Henry Markram, lead researcher of the “Blue Brain Project”, has stated that “it is not [their] goal to build an intelligent neural network”, based solely on the computational demands such a project would have.[14]
It will be very difficult because, in the brain, every molecule is a powerful computer and we would need to simulate the structure and function of trillions upon trillions of molecules as well as all the rules that govern how they interact. You would literally need computers that are trillions of times bigger and faster than anything existing today.[15]
Advocates of mind uploading point to Moore’s law to support the notion that the necessary computing power may become available within a few decades. However, the actual computational requirements for running an uploaded human mind are very difficult to quantify, potentially rendering such an argument specious.
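To see why such arguments are hard to pin down, here is a minimal sketch of a Moore's-law projection. Both capacity figures are placeholder assumptions chosen only for illustration; as noted above, the real requirements are very difficult to quantify, and the conclusion shifts by decades as the assumptions change:

```python
from math import log2

# Illustrative Moore's-law projection. Both capacity figures below are
# placeholder assumptions for this sketch, not established estimates.
current_flops = 1e15        # assumed capacity of today's machines
required_flops = 1e21       # assumed cost of running an uploaded mind
doubling_time_years = 1.5   # assumed capacity doubling time

doublings = log2(required_flops / current_flops)
years = doublings * doubling_time_years
print(f"{doublings:.1f} doublings, roughly {years:.0f} years away")
```

Under these particular assumptions the gap closes in about 30 years; raising the required capacity by a factor of a thousand adds roughly 15 more, which is the sense in which the argument is sensitive to inputs no one can yet measure.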
Philosophical issues
Copying vs. moving
Another philosophical issue with mind uploading is whether an uploaded mind is really the “same” sentience, or simply an exact copy with the same memories and personality; or, indeed, what the difference could be between such a copy and the original (see the Swampman thought experiment). This issue is especially complex if the original remains essentially unchanged by the procedure, thereby resulting in an obvious copy which could potentially have rights separate from the unaltered, obvious original.
Most projected brain scanning technologies, such as serial sectioning of the brain, would necessarily be destructive, and the original brain would not survive the brain scanning procedure. But if it can be kept intact, the computer-based consciousness could be a copy of the still-living biological person. It is in that case implicit that copying a consciousness could be as feasible as literally moving it into one or several copies, since these technologies generally involve simulation of a human brain in a computer of some sort, and digital files such as computer programs can be copied precisely. It is usually assumed that once the versions are exposed to different sensory inputs, their experiences would begin to diverge, but all their memories up until the moment of the copying would remain the same.
The problem is made even more serious by the possibility of creating a potentially infinite number of initially identical copies of the original person, which would of course all exist simultaneously as distinct beings. The most parsimonious view of this phenomenon is that the two (or more) minds would share memories of their past but from the point of duplication would simply be distinct minds (although this is complicated by merging). Many complex variations are possible.
Depending on computational capacity, the simulation may run slower or faster than elapsed physical time, with the result that the simulated mind would perceive the physical world running in fast motion or slow motion respectively, while biological persons would see the simulated mind in slow or fast motion respectively.
A brain simulation can be started, paused, backed-up and rerun from a saved backup state at any time. The simulated mind would in the latter case forget everything that has happened after the instant of backup, and perhaps not even be aware that it is repeating itself. An older version of a simulated mind may meet a younger version and share experiences with it.
Legal and economic issues
See also: Ship of Theseus
The only limited resources in a simulated world are computational resources, meaning simulation speed, and intellectual property. In a simulated society, rich simulated minds may pay for faster simulation time than others.
It may be difficult for authorities to verify that human rights are not being threatened in any computer in the world. It might, for example, be tempting for social science researchers to expose simulated minds, or whole isolated societies of simulated minds, to controlled experiments in which many copies of the same minds, or repeated reruns of the same simulation, are exposed to different test conditions.



h+: What trends do you see in cognitive enhancement modalities and therapies (drugs, supplements, music, meditation, entrainment, AI and so forth)?

BRUCE KATZ: There are two primary types of cognitive enhancement — enhancement of intelligence and enhancement of creative faculties. Even though creativity is often considered a quasi-mystical process, it may surprise some that we are actually closer to enhancing this aspect of cognition than pure intelligence.

The reason is that intelligence is an unwieldy collection of processes, and creativity is more akin to a state, so it may very well be possible to produce higher levels of creative insight for a fixed level of intelligence before we are able to make people smarter in general.

There appear to be three main neurophysiological ingredients that influence the creative process. These are: 1) relatively low levels of cortical arousal; 2) a relatively flat associative gradient; and 3) a judicious amount of noise in the cognitive system. [Editor’s note: A person with a steep associative gradient is able to make only a few common associations with a stimulus word such as “flight,” whereas those with a flat gradient are able to make many associations with the stimulus word. Creative people have been found to have fairly flat gradients, and uncreative people have much steeper gradients.]

All three ingredients conspire to encourage the conditions whereby cognition runs outside of its normal attractors, and produces new and potentially valuable insights.

Solving compound remote associate (CRA) problems illustrates how these factors work. In a CRA problem, the task is to find a word that is related to three items. For example, given “fountain”, “baking”, and “pop” the solution would be “soda.”

The reason CRA problems are difficult, and why creative insight helps, is that the mind tends to fixate on the stronger associates of the priming words (for example, “music” for “pop”), which in turn inhibits the desired solution.
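The CRA task itself is easy to state computationally as a set intersection over each cue's associates. A toy sketch follows; the association sets are invented for the example, not drawn from published word-association norms:

```python
# Toy model of the CRA task described above. The association sets are
# invented for this example; they are not drawn from word-association norms.
ASSOCIATES = {
    "fountain": {"water", "soda", "pen", "youth"},
    "baking":   {"bread", "soda", "oven", "powder"},
    "pop":      {"music", "soda", "corn", "culture"},
}

def solve_cra(cues):
    """Return the words associated with every cue (set intersection)."""
    return set.intersection(*(ASSOCIATES[c] for c in cues))

print(solve_cra(["fountain", "baking", "pop"]))  # {'soda'}
```

The difficulty Katz describes is not in the intersection but in generating the candidate sets: a mind fixated on a strong associate like “music” for “pop” never puts “soda” into the pool in the first place, which is where a flat associative gradient helps.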


What are the implications of this for artificially enhancing insight? First, any technique that quiets the mind is likely to have beneficial effects. These include traditional meditative techniques, but possibly also more brute-force technologies such as transcranial magnetic stimulation (TMS). Low frequency pulses (below 1 Hz) enable inhibitory processes, and TMS applied in this manner to the frontal cortices could produce the desired result.

Second, the inhibition of the more literal and less associative left hemisphere through similar means could also produce good results. In fact, EEG studies of people solving CRA problems with insight have shown an increase in gamma activity (possibly indicative of conceptual binding activity) in the right but not the left hemisphere just prior to solution.

Finally, the application of noise to the brain, either non-invasively, through TMS, or eventually through direct stimulation may encourage it to be more “playful” and to escape its normal ruts.

In the not too distant future, we may not have to rely on nature to produce the one-in-a-million combination [of a high IQ and creative insight], and be able to produce it at will on many if not all neural substrates.

h+: What are some of the issues (legal, societal, ethical) that you anticipate for such technology?

BK: My own opinion is that — except in the case of minors — we must let an informed public make their own choices. Any government-mandated set of rules will be imperfect, and in any case will deviate from the needs and desires of its individual citizens.

What we in the neuroengineering community should be pushing for is a comprehensive freedom of thought initiative, ideally enshrined as a constitutional amendment rather than as a set of clumsy laws. And we should be doing so sooner rather than later, before individual technologies come online, and before we allow the “tyranny of the majority” to control a right that ought to trump all other rights.

h+: What is your vision for the future of cognitive enhancement and neurotechnology in the next 20 years?

BK: Ultimately, we want to be free of the limitations of the human brain. There are just too many inherent difficulties in its kludgy design — provided by evolution — to make it worthwhile to continue along this path.

As I describe in my book, Neuroengineering the Future, these kludges include:

  • Short-term memory limitations (typically seven plus or minus 2 items),
  • Significant long-term memory limitations (the brain can only hold about as much as a PC hard disk circa 1990),
  • Strong limitations on processing speed (although the brain is a highly parallel system, each neuron is a very slow processor),
  • Bounds on rationality (we are less than fully impartial processors, sometimes significantly so),
  • Bounds on creativity.

