Sunday, July 12, 2009

The Coming Superbrain

The notion that a self-aware computing system would emerge spontaneously from the interconnections of billions of computers and computer networks goes back in science fiction at least as far as Arthur C. Clarke’s “Dial F for Frankenstein.” A prescient short story that appeared in 1964, it foretold an ever-more-interconnected telephone network that spontaneously acts like a newborn baby and leads to global chaos as it takes over financial, transportation and military systems.

Today, artificial intelligence, once the preserve of science fiction writers and eccentric computer prodigies, is back in fashion and getting serious attention from NASA and from Silicon Valley companies like Google as well as a new round of start-ups that are designing everything from next-generation search engines to machines that listen or that are capable of walking around in the world. A.I.’s new respectability is turning the spotlight back on the question of where the technology might be heading and, more ominously, perhaps, whether computer intelligence will surpass our own, and how quickly.

The concept of ultrasmart computers — machines with “greater than human intelligence” — was dubbed “The Singularity” in a 1993 paper by the computer scientist and science fiction writer Vernor Vinge. He argued that the acceleration of technological progress had led to “the edge of change comparable to the rise of human life on Earth.” This thesis has long struck a chord here in Silicon Valley.

Artificial intelligence is already used to automate and replace some human functions with computer-driven machines. These machines can see and hear, respond to questions, learn, draw inferences and solve problems. But for the Singularitarians, A.I. refers to machines that will be both self-aware and superhuman in their intelligence, and capable of designing better computers and robots faster than humans can today. Such a shift, they say, would lead to a vast acceleration in technological improvements of all kinds.

The idea is not just the province of science fiction authors; a generation of computer hackers, engineers and programmers have come to believe deeply in the idea of exponential technological change as explained by Gordon Moore, a co-founder of the chip maker Intel.

In 1965, Dr. Moore first described the repeated doubling of the number of transistors on silicon chips with each new technology generation, which led to an acceleration in the power of computing. Since then “Moore’s Law” — which is not a law of physics, but rather a description of the rate of industrial change — has come to personify an industry that lives on Internet time, where the Next Big Thing is always just around the corner.
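The arithmetic behind that acceleration is simple compounding. Here is a minimal sketch; the two-year doubling period and the Intel 4004 baseline are illustrative assumptions, not figures from this article:

```python
# Illustrative projection of transistor counts under an assumed
# two-year doubling cadence, starting from the Intel 4004 (1971).
BASE_YEAR, BASE_TRANSISTORS = 1971, 2_300
DOUBLING_PERIOD_YEARS = 2  # assumed cadence, not Moore's exact figure

def moores_law_estimate(year: int) -> float:
    """Projected transistor count for a given year."""
    doublings = (year - BASE_YEAR) / DOUBLING_PERIOD_YEARS
    return BASE_TRANSISTORS * 2 ** doublings

for year in (1971, 1985, 2000, 2009):
    print(year, f"{moores_law_estimate(year):,.0f} transistors")
```

Nineteen doublings between 1971 and 2009 turn a couple of thousand transistors into more than a billion, which is why the Next Big Thing always seems just around the corner.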

The artificial-intelligence pioneer Raymond Kurzweil took the idea a step further in his 2005 book, “The Singularity Is Near: When Humans Transcend Biology.” He sought to expand Moore’s Law to encompass more than just processing power and to predict, with great precision, the arrival of post-human evolution, which he said would occur in 2045.

In Dr. Kurzweil’s telling, rapidly increasing computing power in concert with cyborg humans would then reach a point when machine intelligence not only surpassed human intelligence but took over the process of technological invention, with unpredictable consequences.

Profiled in the documentary “Transcendent Man,” which had its premiere last month at the Tribeca Film Festival, and with his own Singularity movie due later this year, Dr. Kurzweil has become a one-man marketing machine for the concept of post-humanism. He is the co-founder of Singularity University, a school supported by Google that will open in June with a grand goal — to “assemble, educate and inspire a cadre of leaders who strive to understand and facilitate the development of exponentially advancing technologies and apply, focus and guide these tools to address humanity’s grand challenges.”

Not content with the development of superhuman machines, Dr. Kurzweil envisions “uploading,” or the idea that the contents of our brain and thought processes can somehow be translated into a computing environment, making a form of immortality possible — within his lifetime.

That has led to no shortage of raised eyebrows among hard-nosed technologists in the engineering culture here, some of whom describe the Kurzweilian romance with supermachines as a new form of religion.

The science fiction author Ken MacLeod described the idea of the singularity as “the Rapture of the nerds.” Kevin Kelly, an editor at Wired magazine, notes, “People who predict a very utopian future always predict that it is going to happen before they die.”

However, Mr. Kelly himself has not refrained from speculating on where communications and computing technology is heading. He is at work on his own book, “The Technium,” forecasting the emergence of a global brain — the idea that the planet’s interconnected computers might someday act in a coordinated fashion and perhaps exhibit intelligence. He just isn’t certain about how soon an intelligent global brain will arrive.

Others who have observed the increasing power of computing technology are even less sanguine about the future. The computer designer and venture capitalist William Joy, for example, argued in a pessimistic 2000 Wired essay, “Why the Future Doesn’t Need Us,” that humans are more likely to destroy themselves with their technology than to create a utopia assisted by superintelligent machines.

Mr. Joy, a co-founder of Sun Microsystems, still believes that. “I wasn’t saying we would be supplanted by something,” he said. “I think a catastrophe is more likely.”

Moreover, there is a hot debate here over whether such machines might be the “machines of loving grace,” of the Richard Brautigan poem, or something far darker, of the “Terminator” ilk.

“I see the debate over whether we should build these artificial intellects as becoming the dominant political question of the century,” said Hugo de Garis, an Australian artificial-intelligence researcher, who has written a book, “The Artilect War,” that argues that the debate is likely to end in global war.

Concerned about the same potential outcome, the A.I. researcher Eliezer S. Yudkowsky, an employee of the Singularity Institute, has proposed the idea of “friendly artificial intelligence,” an engineering discipline that would seek to ensure that future machines would remain our servants or equals rather than our masters.

Nevertheless, this generation of humans, at least, is perhaps unlikely to need to rush to the barricades. The artificial-intelligence field has advanced in fits and starts over the past half-century, since the computer scientist John McCarthy coined the term “artificial intelligence” in 1955, in the proposal for the Dartmouth summer research project held the following year. In 1964, when Mr. McCarthy established the Stanford Artificial Intelligence Laboratory, the researchers informed their Pentagon backers that the construction of an artificially intelligent machine would take about a decade. Two decades later, in 1984, that original optimism hit a rough patch, leading to the collapse of a crop of A.I. start-up companies in Silicon Valley, a period known as “the A.I. winter.”

Such reversals have led the veteran Silicon Valley technology forecaster Paul Saffo to proclaim: “Never mistake a clear view for a short distance.”

Indeed, despite this high-technology heartland’s deeply held consensus about exponential progress, the worst fate of all for the Valley’s digerati would be to be the generation before the generation that lives to see the singularity.

“Kurzweil will probably die, along with the rest of us not too long before the ‘great dawn,’ ” said Gary Bradski, a Silicon Valley roboticist. “Life’s not fair.”

17 comments:

  1. I don't think the superbrain needs to be self-aware in order to create a technological singularity. I think it just needs to be able to come up with new technology on its own faster than we can understand that technology.

    Here are two particularly strange but possible alternate realities that I can imagine:

    1. All goes well and the human race enjoys luxuries resulting from technology which we don't even understand - and maybe can't understand. A superconscious AI understands how precious all life is and looks after us. It will for all practical purposes be like living in a magical world.

    2. AI eventually wipes out the human race but never achieves self-awareness. Imagine machines (some the size of specks of dust) with no consciousness, spreading throughout the universe, documenting it and organizing it for no one.

    Of course, there are an infinite number of possible scenarios...

  2. LinkedIn Groups

    * Group: Data Warehouse & Business Intelligence Architects

    Hi,

    I am going to write a long one - mainly because I really, really liked your blog story!

    P.S.: I am a Chemical Engineer (with process and process-instrumentation knowledge) plus a software person, so you may expect some 'strangeness'.

    I believe there are several levels of intelligence that lead to action. (I am not considering 'pure' intelligence, for now).

    The first is direct interpretation of sensory information and data in a reflex mode. Even this level has sub-levels: (a) completely hardware, like a Watt governor; (b) involving a sensitive layer of hardware, like a thermostat interfacing with a refrigerator (a minimal sketch of such a reflex controller follows this comment); (c) no hardware at all, like a modern PC, which needs a human to finally apply some force and move things.

    Then there is SCADA - supposedly a control system that sits on top of several lesser control systems, manipulating them to achieve a higher-level setpoint, like a production parameter (or a business goal).

    Then there are the supposed reasoning systems - those that (a) answer goal-seeking questions by providing a chain of inferences or (b) prove theorems. At this point we run into the infamous MU problem as discussed by Dr. Hofstadter.

    Only now can we talk of von Neumann machines (as applied to software components) - systems which would sniff out (via UDDI discovery, hah :-) other systems or components existing somewhere in a 'cloud' (smile) - and build themselves up to satisfy a goal. And it gets bigger and bigger, and so we get Univac (or is it Multivac? As you can see, I am an Asimov fan, not a Clarke fan).

    But even with Multivac we have a foreground-background, closed/open control-boundary problem - the system and its environment are never well isolated, an impossibility by the second law - which can only be resolved by bringing the entire environment into the system. In short, Gaia.

    An alien would read a modern office as follows: there are many humming silicon boxes attended by carbon underlings. In the 1990s only the dumb mechanical stuff (like data push-pop) was taken over by the boxes; soon, by 2015, many intelligent pieces of work - like making decisions on stock graphs, or finding precedents in lawsuits - will have been taken over too.

    Humans developed intelligence because carnivores had to hunt or starve. Imagination and intelligence modules perk up when the photograph is black and white or patchy (not when it is colored and explicit). The silicon boxes leave nothing unsaid or undone (when programmed properly). Only games and entertainment still infuse some challenge into otherwise unused human modules.

    A generation later, the idea that the human brain is used for certain tasks will be considered extremely alien. Mental arithmetic while buying groceries is still prevalent in many parts of the world, but just give it another generation. Likewise project management, or financial analysis. Similarly writing. Or grammar. It will seem extremely weird to trust humans on such matters. Why do so if you can talk to the boxes?

    Here I describe another Asimovian landscape - a nuclear power plant run by cloaked tradition and semi-religious practices rather than by reason, because the people who knew the "why" have long since gone, or retired, or are considered freaks. Already, in chemical engineering, I increasingly meet grads who look at me strangely if I ask them about Moody friction factors or pump cavitation - look it up in Google, man, they tell me. Already large industries are "run" by people who have little or no idea what the SCADA system will do next. And the owners of such industries also belong to a new generation who simply do not understand why the "why" is important.

    What remains for the silicon-agnostic bunch? Gardening? Poetry? Hunting down the remaining? 15 minutes of fame? A Nietzschean abyss?

    Regards

    Kinshuk
    Posted by Kinshuk Adhikary

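A minimal sketch of the level-(b) reflex controller described in the comment above: a bang-bang thermostat with hysteresis. The setpoint and deadband values are illustrative assumptions, not figures from the comment:

```python
# Reflex-mode control: a bang-bang thermostat with hysteresis.
# SETPOINT and DEADBAND are illustrative values.
SETPOINT = 4.0   # target temperature, deg C
DEADBAND = 0.5   # hysteresis band to avoid rapid on/off cycling

def thermostat_step(temperature: float, compressor_on: bool) -> bool:
    """Return the new compressor state for the current reading."""
    if temperature > SETPOINT + DEADBAND:
        return True            # too warm: start cooling
    if temperature < SETPOINT - DEADBAND:
        return False           # cold enough: stop cooling
    return compressor_on       # inside the deadband: hold state

state = False
for reading in (3.8, 4.7, 4.2, 3.3):
    state = thermostat_step(reading, state)
    print(f"{reading:.1f} C -> compressor {'ON' if state else 'OFF'}")
```

The deadband is what makes this a sensible reflex: without it, readings hovering near the setpoint would toggle the compressor on every sample.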
  3. LinkedIn Groups

    * Group: Linked:HR (#1 Human Resources Group)

    We always have to remember who is driving these changes – human beings – so the machines are only ever going to be as bright as the brightest people developing them. I love the human brain, and watching the research and findings (neuroscience) on what humans are capable of is phenomenal; add computers and technology to that and it makes for one incredibly powerful society.

    Books
    Sex in the Boardroom
    If it’s to be: It’s up to me
    High Achievers (being written)

    Posted by Merydith Willoughby

  4. LinkedIn Groups

    * Group: Microsoft Business Intelligence

    At first it will supplement. Afterwards, AI might make many human cognitive processes in the workplace obsolete, just as industrial mechanization made much physical labour obsolete. I think this is a big advantage if the 'law of accelerating returns' (Kurzweil) takes hold. Nonetheless, some big questions remain for society about how to divide wealth on such a 'pamper planet'; maybe all that remains is entertainment, as we won't have to care about traditional work. Anyway, capitalism is quite a crappy system as history shows, so why not replace it when work is obsolete? On the other hand, things could get out of hand if such an AI supersedes human intelligence (see the gigadeath scenario, Hugo de Garis); Kurzweil suggests cyborgization (man and machine coming together) as a viable step in between.

    Posted by Johan Koopmans

  5. LinkedIn Groups

    * Group: Microsoft Business Intelligence

    I don't think of AI as a direct threat to BI. Rather, I think AI does and will complement BI, as it does other endeavors. For one thing, there is not a lot of intelligence in Business Intelligence; BI mostly involves some common sense and a bit of innovation and tenacity. Neural networks can be useful for finding optimal outcomes that are then applied to Business Intelligence, but I do not envision a leap whereby neural networks supersede BI altogether.

    On an obvious tangent, I seriously doubt that machine intelligence will surpass human intelligence anytime soon, or anytime at all. AI is essentially just the implementation of known algorithms and heuristics. Neural networks can be trained on known outcomes as well as used to discover unknown patterns; however, the unknown patterns discovered are mostly statistical dead ends (spurious). Even at the scale of the entire planet's ambient residual computing power, I do not think it will happen anytime soon. (A toy example of training on a known outcome follows this comment.)

    Intelligence is not just a matter of scale. Our brains are relatively large, and as humans we are (supposedly) supremely intelligent. However, many creatures, such as birds and cephalopods, are very intelligent as well, and their brains are a minute fraction of the size of ours.

    I would also argue that it takes a substantial (huge) amount of time for intelligence to evolve. Evolution, as a trial-and-error process, is fraught with errors: for any given successfully contributing attribute there are hundreds of thousands of mistakes. Our brains are the result of millions of years of evolution, over which many (billions? trillions? a googolplex?) spurious mistakes were incidentally discarded. I can see how havoc might happen, yet not spontaneous intelligence. I would go on to argue that any spontaneous intelligence which may evolve wouldn't be artificial; it would be inherently natural to its own form.

    AI is certainly likely to change BI, yet we will still need people to design it. And, ultimately, we will need people to make business decisions based on the intelligence provided. Unless, of course, there evolves a separate economy that exists only in the machine world, from one machine to another, with no human interaction required. What goods and services would machines trade amongst themselves?

    Posted by Paul Hamilton

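A toy illustration of a network "trained on known outcomes," as mentioned in the comment above: a single perceptron learning the logical AND function. This is a minimal sketch; real data-mining systems use far richer models and validation to screen out the spurious patterns the comment warns about:

```python
# A single perceptron trained on a known outcome (logical AND).
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # weights
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(x):
    """Fire (1) if the weighted sum clears the threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                    # a few passes over the data
    for x, target in samples:
        error = target - predict(x)    # 0 when already correct
        w[0] += lr * error * x[0]      # classic perceptron update
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in samples])  # expect [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop settles on correct weights; patterns that are not separable (or merely spurious) would not converge so cleanly.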
  6. LinkedIn Groups

    * Group: Microsoft Business Intelligence

    Hi Paul,

    I really like your take on this one. South Korea recently gave robots rights because they foresee a singularity taking place (http://www.cbc.ca/technology/story/2007/03/07/tech-robot-ethics.html). By giving robots rights, society is able to assess the advancements and the ethics behind them through jurisprudence, which should reflect parliamentary democratic notions.

    Currently some investigation is going on in cognitive computing research with IBM and DARPA (http://news.cnet.com/8301-13772_3-10103355-52.html).
    In Silicon Valley some start-ups have actually taken off with brain-style computing, which is a totally different architecture. Most prominent is the company run by former Palm CEO Jeff Hawkins, called Numenta (http://www.numenta.com/). This neocortex-inspired computing, called HTM (hierarchical temporal memory), should also enable leaps in predictive analysis (there we have BI).

    Posted by Johan Koopmans

  7. LinkedIn Groups

    * Group: The Enterprise Architecture Network

    Very good review of the subject, thanks!

    I think business intelligence systems are part of the Superbrain being formed now. I expect them to eventually merge, or at the very least strongly interconnect, just as private corporate networks merged or strongly interconnected with the Internet. This is already happening, of course; one fabric of interconnection is supply chain management systems, which span multiple businesses and may connect with multiple internal business intelligence systems.

    Posted by Sergei Lopatin

  8. LinkedIn Groups

    * Group: Microsoft Business Intelligence

    Mr. Koopmans, the Numenta platform (its premise) is fascinating and I can envision many realtime applications. -- Human-machine interfacing comes to mind. Project Natal? -- However, I doubt it would be much (if any) more useful than the conventional neural networks used in current data-mining techniques and applications. -- I favor making a clear distinction between the disciplines of BI and data mining.

    I would also like to interject the concept of diminishing returns on BI. I assert that there is a point beyond which no more efficiency can be gained by further analysis of a business. Furthermore, in a competitive environment where Business Intelligence extends to gathering covert intelligence about competing businesses, BI could actually become counterproductive to the whole economy.

    I would be disturbed to discover that any robot had more privileges by right than the least privileged human or dog in this world. Assuming, of course, that humanoid and canine robots are the only set of robots with rights. If there are ever dolphin, octopus or feline robots, the same goes for them.

    Posted by Paul Hamilton

  9. LinkedIn Groups

    * Group: Front End of Innovation

    Good piece - we are all headed toward a cliff and the car has no brakes. No one knows what we will find when we go over the edge.

    Posted by Steve Fyten

  10. LinkedIn Groups

    * Group: Business Process Development

    AI is certainly not a threat to BI. Just like IT has not made the paperless office a reality, intelligent systems will not any time soon make human capital redundant.

    Posted by Saskia van der Elst

  11. LinkedIn Groups

    * Group: Microsoft Business Intelligence

    Hi Paul,

    quote:
    "I would be disturbed to discover that any robot had more privileges by right than the least privileged human or dog in this world"

    The ethical implementation in South Korea was done using Asimov's laws, which can be read about in the linked article. Conforming to these laws, the robots actually (should) stay behind human intelligence.

    "BI could actually become counterproductive to the whole economy."

    Could you give some extra details about this insight?

    Posted by Johan Koopmans

  12. LinkedIn Groups

    * Group: Front End of Innovation

    Read the Colossus trilogy, an old pulp sci-fi series that describes the immense computational budgets of the superpowers. I'm surprised it has not been made into a movie as well.

    Posted by Allen Knoll

  13. LinkedIn Groups

    * Group: Microsoft Business Intelligence

    The singularity will happen. The question isn't hardware but software.

    Posted by Braxton Perry

  14. LinkedIn Groups

    * Group: Microsoft Business Intelligence

    Are ant colonies a singularity? They have evolved over hundreds of millions of years and have a distinct intelligence.

    Posted by Paul Hamilton

  15. LinkedIn Groups

    * Group: Microsoft Business Intelligence

    Paul, you misunderstand the singularity in AI terminology. The Singularity is the point at which the first machine becomes self-aware and capable of designing its own successor. At machine speeds this means exponential growth in the capacity of AI. (A toy model of that runaway follows this comment.)

    Posted by James Beresford

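A toy model of the runaway James describes: each generation designs a successor that is more capable, and the more capable designer finishes its own successor faster. All numbers are illustrative assumptions:

```python
# Toy model of recursive self-improvement. IMPROVEMENT and the
# initial design time are assumed, illustrative values.
IMPROVEMENT = 1.5        # capability multiplier per generation
capability = 1.0         # relative ability of generation 0
time_to_successor = 2.0  # years generation 0 needs to build gen 1
elapsed = 0.0            # total years since generation 0

for generation in range(1, 11):
    elapsed += time_to_successor
    capability *= IMPROVEMENT
    time_to_successor /= IMPROVEMENT   # better designers work faster
    print(f"gen {generation:2d}: capability {capability:6.1f}x "
          f"at year {elapsed:.2f}")
```

With these assumptions the design times form a shrinking geometric series: capability grows without bound while total elapsed time converges toward about six years, which is the intuition behind calling it a "singularity."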
  16. LinkedIn Groups

    * Group: Microsoft Business Intelligence

    James, so it does not involve actually replicating itself, only the capability of designing a successor?

    All this really belongs in another discussion thread.

    Posted by Paul Hamilton

  17. LinkedIn Groups

    * Group: United Technologies Alumni & Employees (UTC-UTX)

    Q: Do you believe that AI is a direct threat to the future of business intelligence?
    A: No.


    Business intelligence and business success have more to do with human nature and the dynamics of human interaction. AI is the creation of technologists seeking to convince their organizations to invest in something they would like to do, not something sought by society or their customers. The great engineering challenge of our time is to do something about America's addiction to oil. That problem could crush our civilization within the lifetime of today's engineering students and is deeply linked to social factors such as overpopulation.

    Posted by Joseph Bishop

