Forever In A Dream

Gather 'round, kids, 'cause today I'm going to tell you about the Spike.

The what?

The Spike. Also known as the Technological Singularity.

Ooh!

You see, about a hundred years ago people noticed that technology was advancing faster and faster, and the rate at which this advancement was accelerating was itself accelerating, forming a super-exponential growth curve.

What?

If you looked at, say, the 1990s, technology had advanced as much in the past decade as in the previous two, or in the entire century before that. And it would advance as much again in the next three or four years.

Oh.

So they plotted this on a graph, and stood back and looked at it. And around about 2050 A.D. - some time in the middle of the century, anyway - the curve went, to all intents and purposes, vertical.
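Here, I'll show you. This is only a toy sketch, with made-up numbers rather than anyone's real data, but it gives you the shape of the curve they were looking at:

    # A toy "Spike" curve: the level of technology grows as 1 / (time remaining
    # until the singularity date), so growth feeds on itself and the curve goes
    # vertical as the date approaches. The date and scale are illustrative only.

    T = 2050.0  # assumed singularity year

    def level(year):
        return 1.0 / (T - year)

    for year in (1900, 1970, 1990, 2010, 2030, 2045, 2049):
        time_to_double = (T - year) / 2  # the level doubles when the time remaining halves
        print(f"{year}: level {level(year):6.3f}, doubles again within {time_to_double:5.1f} years")

Watch the doubling time shrink from decades to months as 2050 approaches.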

What does that mean?

That means that technology was advancing faster than anyone could keep up. The new aircar you bought in the morning would be obsolete by lunchtime and a museum piece by the afternoon. And people were changing too, changed by the technology. If you took a day off from school to nurse a cold, you'd come back the next day to find that all your friends had advanced degrees in mathematical physics and you were still struggling with seventh-grade trig.

Cool! But - hang on - I'm still struggling with seventh-grade trig. What happened?

Well, it's more a case of what didn't happen. The Spike didn't happen.

Some people think that the basic idea was flawed; others point out that one of the key requirements for the Spike - computers with human-equivalent intelligence - still hasn't been met, and suggest that the Spike may still be waiting for us in the middle of this century. I'm not convinced by either group, so for now I'll concentrate on what the thinkers of the 20th and 21st centuries wrote about it.

And put that away unless you brought enough for everyone.

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly thereafter, the human era will be ended.
        —Vernor Vinge, 1993
Although the ever-increasing rate of technological advance was first highlighted by Alvin Toffler in his 1970 book Future Shock, it wasn't until 1993 and Vernor Vinge's paper The Coming Technological Singularity: How to Survive in the Post-Human Era that the idea really caught the public interest. [Editor's note: This paper can be found online for 21st century readers here.]

Vinge pointed out that given a small set of assumptions, the technological singularity was not merely likely but inevitable - and rapidly approaching. His estimates in 1993 were that it would not come sooner than 2005, and probably not later than 2030. Others fiddled with the numbers a little, based on assumptions about the difficulty of the problems involved and the exact nature of the growth curve, but most of the dates fell between 2025 and 2050.

So what happened? Given that the Spike is now running at least 50 years behind schedule, where did we go wrong?

Let's start by examining the mechanisms Vinge proposed for driving the Singularity.

The acceleration of technological progress has been the central feature of this century. I argue in this paper that we are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence. There are several means by which science may achieve this breakthrough (and this is another reason for having confidence that the event will occur):
  • There may be developed computers that are "awake" and superhumanly intelligent. (To date, there has been much controversy as to whether we can create human equivalence in a machine. But if the answer is "yes, we can", then there is little doubt that beings more intelligent can be constructed shortly thereafter.)

  • Large computer networks (and their associated users) may "wake up" as a superhumanly intelligent entity.

  • Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.

  • Biological science may provide means to improve natural human intellect.

The first three possibilities depend in large part on improvements in computer hardware. Progress in computer hardware has followed an amazingly steady curve in the last few decades. Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years.
            —Vernor Vinge
    In 1993, progress in computer hardware had indeed been following a steady curve for at least three decades, since the creation of the first integrated circuits in the early 1960s. But even in 1993 there were signs of trouble: the cost of new fabs (as the factories that built the integrated circuits were called) was rising rapidly with each new generation of chip technology (each generation being known as a node), and the cost of developing each new node was rising too.

    This trend not only continued but accelerated over the next decade, with the cost of a new fab rising into the billions (a great deal of money at the time) and all but the largest companies either forced to form partnerships to maintain competitiveness, or forced out of the market entirely.
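    To get a feel for the squeeze, here's the kind of compounding arithmetic involved. The starting figure and the doubling period (the old "Rock's Law" rule of thumb, which had the cost of a fab doubling roughly every four years) are assumptions chosen for illustration, not audited history:

        # Compounding fab costs under an assumed Rock's-Law-style doubling.
        # Both the starting cost and the doubling period are rough assumptions.

        cost = 1.0  # assumed cost of a leading-edge fab in 1993, in billions of dollars
        for year in range(1993, 2014, 4):  # assume the cost doubles roughly every four years
            print(f"{year}: roughly ${cost:.0f} billion per fab")
            cost *= 2

    Even with gentler assumptions the arithmetic only runs one way: fewer and fewer companies able to pay the table stakes.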

    At the same time, the race to develop ever faster and more powerful chips faltered. The new enemy: Heat.

    At 130 nm and 90 nm physics begins to work against the designer in regard to power. In previous technology generations, just moving to the lower geometry produced a significant power reduction. Going from 0.25 micron to 0.18 micron lowered the voltage from 2.5 to 1.8 volts, a drop of 0.7 volts. This factor alone could make up for a host of power problems. But future technologies will have voltage levels hovering consistently around 1.2 to 1.0 volts. Changing to a newer technology will provide little benefit to the power budget. Also at 130 and 90, the static or quiescent leakage becomes larger. Thinner gate oxides deliver the speed, but they do so at the price of increased leakage currents. For the foreseeable future process technology will not provide a meaningful solution to the challenge facing power.
            —EE Times, 15 January 2004
    For more than 30 years, chip designers had been given a free ride by the process specialists. Each new technology node was not only smaller and faster, but used less power. By the 180 nanometre (or 0.18 micron) node - the number refers to the size of the smallest features etched into the silicon - this was no longer true. Chips were still smaller and faster, but without other design changes they now consumed more power than before. More power meant more heat, and that heat had to be removed, or else the chip would malfunction.
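    To put some rough numbers on that, the textbook first-order relation for switching power is P ≈ C·V²·f, so with capacitance and clock speed held fixed, the voltage steps quoted above tell the story (leakage comes on top of this, and as the quote says, it only gets worse):

        # First-order dynamic (switching) power scales with the square of the voltage.
        # These ratios ignore leakage and any change in capacitance or clock speed.

        def relative_dynamic_power(v_new, v_old):
            """Dynamic power at the new voltage, relative to the old, from V squared alone."""
            return (v_new / v_old) ** 2

        print("0.25 micron -> 0.18 micron:", relative_dynamic_power(1.8, 2.5))  # ~0.52 - power nearly halved
        print("1.2 V       -> 1.0 V      :", relative_dynamic_power(1.0, 1.2))  # ~0.69 - a much smaller win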

    Huh? You lost me.

    What do you mean? Where did you get lost?

    Where you started talking about manometers.

    Nanometres. Oh well. Let's put it this way.

    For more than thirty years, computers had been getting faster and cheaper, and anyone projecting the curve forward could see the day coming when a computer with the calculating and memory capacity of the human brain would become available. In the first years of the 21st century, though, the laws of physics began to intervene, making progress very much more difficult.

    The curve began to flatten out.
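    The back-of-the-envelope arithmetic that had made the projections look so compelling went something like this. Both key figures are rough assumptions - estimates of the brain's raw capacity varied (and still vary) by orders of magnitude:

        import math

        # A 1990s-style projection: how long until hardware matches an assumed
        # estimate of the brain's raw processing capacity? All figures are rough.

        brain_ops      = 1e16  # assumed brain-equivalent operations per second
        start_ops      = 1e9   # assumed performance of a good 1993 machine, in operations per second
        doubling_years = 1.5   # assumed Moore's-Law-style doubling time

        doublings = math.log2(brain_ops / start_ops)
        print(f"doublings needed: {doublings:.1f}")
        print(f"crossing year: about {1993 + doublings * doubling_years:.0f}")

    On those assumptions the crossing lands in the late 2020s, comfortably inside Vinge's window. The assumption that failed was the steady doubling time.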

    Vinge was comfortable in his predictions because he saw multiple paths leading to the Singularity, three of them backed by a decades-long track record. The fourth - biological brain engineering - was a long shot.

    But all three of the likely paths relied on the same curve - exactly the curve that was starting to flatten out just as Vinge was writing his paper. Computers continued to improve, and human-computer interfaces in particular, but the growth curve was no longer super-exponential, and it no longer led inevitably toward the Singularity. Indeed, for much of the 21st century the growth was close to linear.

    And it was linear because, just as past improvements allowed ever greater resources to be brought to bear on the next advance, so each new advance proved ever more difficult to make. Sometimes advances came more easily, allowing a brief flowering of technological growth; sometimes they were relatively intractable, leading to years of stagnation.

    Today, although we do indeed possess computers with more processing capacity than the human brain (and systems with more memory capacity are commonplace), we don't - yet - have anything resembling a human intelligence embodied in a machine. Vinge did in fact consider this possibility:

    Well, maybe it won't happen at all: Sometimes I try to imagine the symptoms that we should expect to see if the Singularity is not to develop. There are the widely respected arguments of Penrose and Searle against the practicality of machine sapience. In August of 1992, Thinking Machines Corporation held a workshop to investigate the question "How We Will Build a Machine that Thinks". As you might guess from the workshop's title, the participants were not especially supportive of the arguments against machine intelligence. In fact, there was general agreement that minds can exist on nonbiological substrates and that algorithms are of central importance to the existence of minds.
            —Vernor Vinge

    In fact, Searle's arguments were largely discredited even then, and Penrose's hypothesis of a quantum origin for human intelligence was purely speculative. There still seems to be no fundamental roadblock towards high-level machine intelligence. It's just that it's hard.

    And that's what killed the Spike. The Technological Singularity relied on the assumption that we would have ever-increasing computational resources to address the problem of, well, increasing our computational resources, but that the problems we would have to solve would not increase at the same rate. When it turned out that the complexity of the problems increased as fast as - or even faster than - our ability to solve them, the inevitable Spike turned into the gentle hill of progress.
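    You can boil that argument down to a deliberately crude toy model - the update rule below is my own illustration, not anything Vinge or his critics wrote down. Progress at each step is the resources you can bring to bear divided by how hard the next problem is; the only question is whether the problems keep pace:

        # Toy model: capability grows by (capability / difficulty) at each step.
        # If difficulty stays fixed, growth compounds; if difficulty keeps pace
        # with capability, growth is merely linear. All numbers are illustrative.

        def run(problems_keep_pace, steps=10):
            capability = 1.0
            levels = []
            for _ in range(steps):
                difficulty = capability if problems_keep_pace else 1.0
                capability += capability / difficulty
                levels.append(round(capability, 1))
            return levels

        print("difficulty stays fixed:", run(problems_keep_pace=False))  # 2, 4, 8, ... a Spike
        print("difficulty keeps pace: ", run(problems_keep_pace=True))   # 2, 3, 4, ... a gentle hill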

    And instead of the transhuman era, we ended up with a very human era indeed.

    Posted by: Trixie Misa at 12:20 AM

    Comments

    1 Trixie is a lot smarter than Pixy.

    Posted by: Rossz at January 25, 2004 05:31 AM (43SjN)

    2 All I want is a computer fast enough to keep up with the user - me.

    Posted by: Stephen Macklin at January 25, 2004 06:10 AM (CSxVi)

    3 I refuse to be impressed until they design a computer that thinks like a woman.

    Posted by: Ted at January 25, 2004 01:24 PM (2sKfR)

    4 It's been noted before that Pixy is smarter than I am. So perhaps the true path to superhuman intelligence is just for me to keep spinning off a recursive stack of sub-personas...

    But I rather suspect that the process may be self-limiting.

    Posted by: Andrew Maizels at January 25, 2004 10:56 PM (jtW2s)

    5 That was a great read. So, boiling it all down - you're saying that hardware will not improve as fast as many hope it will, and also that the software problem of AI will be a lot harder than "singularitarians" think it will be.

    Fair enough - I've heard arguments for and against both of these assumptions (and they are, after all, just counter-assumptions), and I find most of the arguments on both sides quite convincing.

    In the end both sides seem to boil down to "well, we'll just have to wait and see, won't we?" - especially on the hardware issue. I will say this though - people have been predicting the decline of Moore's Law for a while now, and so far the designers have pulled tricks out of their hats to keep the pace of improvement up - it's a brave person that predicts this won't continue!

    Thanks again for the article, I forget how I found you (many blog-clicks I think), but I'm glad I did. As someone that finds the idea of the singularity a bit daunting, it's a semi-comforting idea that a hundred years from now it still won't have happened. Maybe too comforting. :-)

    And hey - by giving that post the date you have given it - won't it always appear on the top of your page? How did you stop it from doing so in movabletype? I use MT but am still a bit of a noob with it.


    Skev

    Posted by: Skev at February 07, 2004 02:57 AM (7wSfI)

    6 Thanks.

    Basically, yes, I think the problems will turn out to be harder than some people expect. I could be wrong, of course; this is just idle speculation on my part, and not an in-depth study.

    Moore's Law is in decline right now though. Take a look at how speeds ramped up during 2003... Basically, not at all. It's getting very hard and very expensive to move forward at the same pace we've become used to.

    Oh, and if you select a number of posts (rather than days) for your main index, you can also provide an offset. So I've skipped the first 6 posts, then listed the next 25 (I think).

    Posted by: Pixy Misa at February 07, 2004 10:43 AM (jtW2s)

    7 I wouldn't write Searle off yet. I actually think (if you avoid the hard AI hype from MIT) that he has some very valid points which add up to two key conclusions:
    - intelligence is ill defined but intelligence in computers is useful even if it is not human-like
    - human intelligence is probably harder than we can currently imagine
    So in some ways, Searle would be supporting the arguments raised here.

    Posted by: Ozguru at March 25, 2004 02:10 PM (/acvO)

    8 The problem I have with Searle is his "Chinese Room" argument, which is complete baloney. He tries to use it to prove that AI is impossible, but instead ends up demonstrating that he has no idea what he's talking about. The two points you raise are valid, but that's not what Searle's on about.

    I think Hofstadter demolished him pretty thoroughly in Gödel, Escher, Bach.

    Posted by: Pixy Misa at March 25, 2004 07:35 PM (+S1Ft)

    9 A book worth reading: Permutation City - Greg Egan, 1995 - Simulating humans using software designed for medical research is commonplace, but prohibitively expensive and slow... if I recall correctly about 17 times slower than realtime. Moore's Law has topped out, so there's no likelihood of a change in the near future.

    Anyway...

    I don't think computer performance is anywhere near topping out - there's all kinds of things that can be done to improve performance that don't have anything to do with increasing the clock rate of processors. After all, the "clock rate" of the human brain is pitiful, it just has the equivalent of millions or billions of parallel processors.

    What's holding massively parallel processors back is software. Or rather, the economics of the software industry. If you have ten thousand processors on a chip, each running at a modest few hundred megahertz, it's not going to be running Windows or Linux or Mac OS. So where's the market? Who's going to buy this cluster-on-a-chip? About the only mass market I can see for this sort of thing is photorealistic real-time animation... and even there you'll probably need to change your rendering languages to take advantage of it.

    Figuring out how that might work, well, that's an opportunity for your science fiction writers...

    Posted by: Peter da Silva at August 17, 2004 12:23 AM (p0BkR)
