
Thinkologist: The Dudley Lynch Blog on Brain Change

… a (mostly) good-natured critique of World Handling Skills & Tools

So Far, the Singularity Volunteer Fire Dept. Has Been Sounding Ten Alarms While Rushing Around Trying to Find Smoke

I don’t often experience writer’s block. Sleeping on a topic overnight is nearly always enough to restore a free flow of ideas and images. But it was not working that way with this thing called The Singularity. For days, I tried without success to tie a literary bow around a supposition that had fast become a phenomenon, one now on the verge of becoming the first Great Technological Religion. In repeated stare-downs with my computer screen, I lost.

In a moment, I’ll share what finally dissolved the plaque in my creative arteries on this subject, but first I may need to introduce you to the current high drama and low wattage of the whole Singularity debate.

The term owes its current fame to a 1993 essay by a California math professor, Vernor Steffen Vinge, titled “The Coming Technological Singularity.” Professor Vinge was not the first to raise the issue. But he was the first to supply a name worthy of building a whole “end of the world, at least as we know it”-fearing movement around this idea: that computer and other technologies are hurtling toward a time when humans may no longer be the smartest intelligences on the planet. Why? Because some kind of artificial intelligence (“AI”) will have surpassed us, bringing an end to the human era.

Dr. Vinge is now retired. But his Singularity idea has become another of those Californications that is sucking the air out of intellectually tinged, futuristically oriented salons and saloons faster than a speeding epiphany. The relentless personality under the hood of the Singularity phenomenon is a talented 61-year-old inventor and big-screen-thinking, oft-honored futurist from New York City and MIT named Ray Kurzweil.

Where “My Way” Is the Theme Song
The Singularity movement has just finished what one irreverent observer called Kurzweil’s “yearly Sinatra at Caesar’s.” He was referring to Singularity Summit 2009, held at the historic 92nd Street Y in New York City (the first summit was in 2006, and all the earlier ones had been in California). Between 800 and 900 enthusiasts paid $498 each to listen to 25 notables, including Kurzweil, on such subjects as The Singularity, transhumanism and consciousness.

I wasn’t there, but bloggers who were say that while Kurzweil wasn’t physically at the top of his game this year, his importance, his mesmerizing slides and his beliefs were as central as ever.

Futurist Kurzweil believes with all his heart that unimaginably powerful computers are soon going to be able to simulate the human brain, then far surpass it. He even has the year pegged for this to happen: 2029. He thinks great, wondrous, positive things will be possible for humanity because of this new capability. If you track Kurzweil’s day-to-day activities and influence, you quickly realize that he’s not so much Singularity’s prophet as its evangelist. His zeal is messianic. And he’s constantly on the prowl for new believers in a funky techno-fringe movement that is definitely showing legs.

Consider these developments:

• No fewer than four documentary films about The Singularity will be released within a year’s time. One debuted last April at the Tribeca Film Festival and was shown again a couple of weeks ago at the AFI Fest in Los Angeles. Transcendent Man features, or rather lionizes—who else?—Ray Kurzweil. The film is loosely based on his book, The Singularity Is Near. Movies called The Singularity Film and The Singularity Is Near are due out shortly; We Are the Singularity is still in production. One admiring critic writes of Transcendent Man, “[The] film is as much about Ray Kurzweil as it is about the Singularity. In fact, much of the film is concerned with whether or not Kurzweil’s predictions stem from psychological pressures in his life.” [Oh, my! Oh, no! How many times have we seen a movement that influences the fate of millions turn out to be the personification of one man’s neuroses?!!]

• Meanwhile, the debate continues over how soon the first and only coming of The Singularity will be (otherwise it would be named something like The Multilarity or perhaps just The Hilarity). At the Y, PayPal co-founder Peter Thiel gave voice to his nightmare that The Singularity may take too long, leaving the world economy short of cash. Michael Anissimov of the Singularity Institute for Artificial Intelligence, one of the movement’s most articulate voices, continues to warn that “a singleton, a Maximillian, an unrivaled superintelligence, a transcending upload”—you name it—could arrive very quickly and covertly. Vernor Vinge continues to say between 2005 and 2030. That means it could conceivably arrive (gasp!) on Dec. 21, 2012, bringing a boffo ending to the Mayan calendar, as some boffoyans are predicting. And, of course, Kurzweil’s charts say 2029.

• Science fiction writers continue to flee the potential taint of being credited with the phrase “the Rapture of the Nerds.” The Rapture, of course, is some fundamentalist Christians’ idea of a jolly good ending to the human adventure. Righteous people will ascend to heaven, leaving the rest of us behind to suffer. It’s probably the Singularitarians’ own fault that their ending sometimes gets mistaken for “those other people’s” ending. They can’t even talk about endings in general without “listing some ways in which the singularity and the rapture do resemble each other.”

• The Best and the Brightest among the Singularitarians don’t help much when they try to clear the air. For instance, there is this effort by Matt Mahoney, a plain-spoken Florida computer scientist, to explain why the people who are promoting the idea of a Friendly AI (an artificial intelligence that likes people) are the Don Quixotes of the 21st century. “I do not believe the Singularity will be an apocalypse,” says Mahoney. “It will be invisible; a barrier you cannot look beyond from either side. A godlike intelligence could no more make its presence known to you than you could make your presence known to the bacteria in your gut. Asking what we should do [to try to ensure a “friendly” AI] would be like bacteria asking how they can evolve into humans who won’t use antibiotics.” Thanks, Dr. Mahoney. We’re feeling better already!

• Philosopher Anders Sandberg can’t quit obsessing over the fact that the only way to AI is through the human brain. That’s because our brain is the only available working example of natural intelligence. And not just “the brain” in general will be necessary: it will need to be a single, particular brain whose personality the great, incoming artificial brain apes. Popsci.com commentator Stuart Fox puckishly says this probably means copying the brain of a volunteer for scientific tests, which is usually “a half stoned, cash-strapped, college student.” Fox adds, “I think avoiding destruction at the hands of artificial intelligence could mean convincing a computer hardwired for a love of Asher Roth, keg stands and pornography to concentrate on helping mankind.” His suggestion for getting humanity out of The Singularity alive: “[Keep] letting our robot overlord beat us at beer pong.” (This is also the guy who says that if and when the AI of The Singularity shows up, he just hopes “it doesn’t run on Windows.”)

• Whether there is going to be a Singularity, and when, and to what ends does indeed seem to correlate closely with the personality of the explainer or predictor, whether it is overlord Kurzweil or someone else. For example, Vernor Vinge is a libertarian who tends to be intensely optimistic and likes power cut and dried and left maximally in the hands of the individual. No doubt, he really does expect the Singularity no later than 2030, bar nothing. On the other hand, James J. Hughes, an ordained Buddhist monk, wants to make sure that a sense of “radical democracy”—which sees safe, self-controllable human enhancement technologies guaranteed for everyone—is embedded in the artificial intelligence on the other side of The Singularity. One has to wonder how long it will take for the Great AI that the Singularitarians say is coming to splinter and start forming opposing political parties.

• It may be that the penultimate act of the Singularitarians is to throw The Party to End All Parties. It should be a doozy, because you don’t have thoughts and beliefs like the Singularitarians’ without a personal right-angle-to-the-rest-of-humanity bend in your booties. The Singularity remains an obscurity to the masses in no small part because of the Singularitarians’ irreverence. Like calling the Christian God “a big authoritarian alpha monkey.” Or denouncing Howard Gardner’s popular theory of multiple intelligences as “something that doesn’t stand up to scientific scrutiny.” Or suggesting that most of today’s computer software is “s***”. No wonder that when the Institute for Ethics and Emerging Technologies was pondering speakers for its upcoming confab on The Singularity, among other topics, it added to the program a comic book culture expert, the author of New Flesh A GoGo and one of the writers for TV’s Hercules and Xena.

All of the individuals quoted above, plus a lengthy parade of other highly opinionated folks (mostly male) who typically have scientific backgrounds (and often an “engineering” mentality) and who tend to see the world through “survival of the smartest” lenses, are the people doing most of the talking today about The Singularity. It is a bewildering and ultimately stultifying babel of voices and opinions based on very little hard evidence and huge skeins of science-fiction-like supposition. I was about to hit delete on the whole shrill cacophony of imaginings and outcome electioneering that I’d collected when I came across a comment from one of the more sane and even-keeled Singularitarian voices.

That would be the voice of Eliezer Yudkowsky, a co-founder and research fellow of the Singularity Institute.

He writes, “A good deal of the material I have ever produced—specifically, everything dated 2002 or earlier—I now consider completely obsolete.”

As a non-scientific observer of what’s being said and written about The Singularity at the moment, I’d say a similar declaration would be a great idea for most everyone who has voiced an opinion thus far. I suspect it’s still going to be a while before anyone has an idea about The Singularity worth keeping.


7 Comments

  1. [...] being said and written about The Singularity at the moment”, has written up an article on the Singularity. Conclusion: it’s mostly a bunch of [...]

  2. Hi Mr. Lynch. I think part of the problem here is that you’re primarily getting the surface activity and not enough of the underlying arguments, which are scientific, somewhat complicated and not referenced enough (they aren’t soundbite-y); even many people in the Singularity community are preoccupied with the surface elements.

    For instance, why is it likely that true artificial intelligence could be created within the next few decades rather than hundreds of years from now? Well, no one knows for sure, but even if the probability is only 10%, the issue is worth paying attention to because the consequences could be so large. From a utilitarian calculus, looking at it makes sense.

    Why do we have reason to worry? Because AIs might pursue convergent subgoals that, if pursued aggressively enough, destroy human value. Why is human value fragile? Because it’s extremely complex and evolved piecemeal over millions of years, for one. Why don’t people realize this? Because we all have similar values in the universal sense, and because of moral realism. So people will program advanced AI while thinking that the values part will take care of itself, because morality feels intuitive to humans thanks to the oodles of complex neural hardware that makes it seem so. That neural hardware wouldn’t create itself spontaneously in artificial intelligences.

    There is a lot more, but it’s somewhat time-consuming to get into. I published a sort of “response” to your post here.

  3. [...] somewhat of an aside, Mr. Lynch criticized my critique of Gardner’s theory of “multiple intelligences” as “irreverent”. This [...]

    Mr. Anissimov is correct in that “irreverent” was not a good word choice to attach to his views on Howard Gardner’s theory. Or, more correctly, his views on Howard Gardner were not the right choice to illustrate Mr. Anissimov’s tendency to be free-wheeling, free-swinging and occasionally irreverent in his commentaries. For example, he loves to call humans “klutzes” and “apes” and to downgrade humans’ views of the relative position of their brand of intelligence in the overall spectrum of universal intelligence (at least as Mr. Anissimov anticipates it). All of which makes him a really fun read. Don’t miss Michael Anissimov.

  4. mungojelly says:

    I’m not sure I understand your point. As a Singularitarian, I find most of those people’s ideas pretty silly too. But we’re dealing with a situation that is very difficult to grasp, so we do have to remain open to a wide variety of possibilities. Regardless of anyone’s spun yarn about what the consequences will be, we really are crashingly close to the really sharp part of the curve. Machine intelligence is not just a theory but a daily reality now; look around you: it has been years since humans were toppled as chess champions, computers are now driving cars, and we have successfully simulated the first tiny slice of rat brain. There are various theories as to how far that puts us from artificial general intelligence, but I see no rational reason to doubt that we are plummeting in that direction.

    The critical threshold is not human-level intelligence. That’s a philosophically important transition, a threat to our ego, a deep transformation of our place in the world, but it is not at all the most important threshold. The important point, though it’s hard to say just where it is, is when AIs are smart enough to think constructively about the very problem of artificial intelligence itself. It doesn’t matter when they can talk or laugh or write poetry; it only matters when they are substantially helpful in producing the next, smarter generation of themselves. That is when the feedback loop becomes much tighter, and something very new (though it’s hard to say what) must very quickly emerge.


    Dear Mungojelly,

    I’ve been getting e-mails about this post sent through generators of anonymous messages like fartsfromtheheart.com, so I’m a bit suspicious of the motives (and opinions) of people who won’t identify themselves. (In such cases, are we already dealing with miscued artificial intelligences?)

    But your comments seem genuine and heartfelt and perhaps your mom and dad really did name you Mungojelly, in which case I’m slightly chagrined at having mentioned the identification issue at all.

    I thought my key point was that not only are a lot of silly things being said by Singularitarians, but the whole issue is in danger of becoming a religion or cult-like phenomenon at this early stage, probably as a direct result of our lack of dependable evidence on the subject. With respect, while I appreciate and learned from your cogent comments, I see signs of that emerging even in your brief observations. When we start speaking of “things” like AIs as “they,” we’ve already crossed a threshold in our minds that we aren’t justified in crossing, or well-informed enough at this point to cross. Currently, The Singularity is still science fiction, which goes directly back to the main point I thought I was making. That said, you’re my kind of Singularitarian. Thanks for writing!

    Dudley

  5. Wolfgang Pilz says:

    Dear Mr. Lynch,

    Thank you very much for your article. Your description of the ominous singularity and its followers is spot on.
    I’ve been lurking around the more relevant outlets for information about the singularity for some time now and had started to doubt myself a little bit. Might my doubts about the singularity have been unfounded? Is it maybe just me who can’t see the inevitable conclusion as clearly as they do? Your article really helped me answer those questions with a clear No!

    Now I also understand a little better why I so often felt like I was running in circles when arguing with Singularitarians. It’s the feeling you get when trying to argue with someone who has built himself an impenetrable ideological defense (Jehovah’s Witnesses come to mind).

    Greetings,
    Wolfgang

  6. [...] of the Prime Dolphin and of the Deep See Change Dolphin. It is one of these stories that, if the audacious theories of The Singulatarians come to pass, is most likely going to be the leading candidate for implantation in the “mind” [...]

  7. Alexander (M) says:

    Has Mr. Lynch watched the movies he discusses?

    He writes, “[The] film is as much about Ray Kurzweil as it is about the Singularity. In fact, much of the film is concerned with whether or not Kurzweil’s predictions stem from psychological pressures in his life.” [Oh, my! Oh, no! How many times have we seen a movement that influences the fate of millions turn out to be the personification of one man’s neuroses?!!]

    I have seen Transcendent Man… twice! At Tribeca and at AFI. It was an absolutely amazing, beautiful, life-changing event for me. I am still looking forward to the others, so I cannot comment on them, but I will say that Transcendent Man very clearly and elegantly articulates Kurzweil’s vision of the future while staying restrained and unbiased. I did not take away from it that Kurzweil’s predictions stem from his personal psychosis. On the contrary, I think the filmmaker Ptolemy was seeking to show that no man is an island and that everyone has psychological issues to deal with. This never clouds Kurzweil’s empirically based predictions. As he says in the film, he didn’t discover the Singularity and work backwards; he kept looking forward based on the trends that he grew up with and discovered a future period which will undoubtedly be quite profound. I think Kurzweil deserves credit for being so open and transparent. After all, he is working on helping people, informing people. He welcomes criticism and doesn’t run from it. The truth (as subjective as that is) is a long distance runner. I believe these films (at least Ptolemy’s) will vindicate Kurzweil in future years and decades.

    AX

    ____________

    Dear Alexander,

    Futurists don’t discover “a future period that will be quite profound.” And therein lies my main quibble with many of Ray Kurzweil’s admirers.

    Mr. Kurzweil hasn’t discovered anything because you can’t discover something that doesn’t yet exist. A good futurist understands that the best she or he can do is describe interesting scenarios, which is why the most interesting futurists are nearly always accomplished writers of science fiction.

    Futurism outside of science fiction is a calling that requires a huge amount of delicacy and caution when making appeals and offering commentary to one’s followers, precisely because there is such a slippery slope separating the thought that something might happen (what good futurists spend time thinking about) from the belief that one has foreseen the future, that is, has “discovered” it.

    Mr. Kurzweil may be a good futurist, exercising all due diligence and staying free of any personal belief that he has discovered the future. But if so, he’s being done a profound disservice by many of his followers, who seem to have accepted that Kurzweil’s vision of the future is predestined. This is why I’ve suggested that he’s in danger of creating more of an apocalyptic cult with his claims than a community of dispassionate observers and debaters of future possibilities.

    Thanks for writing. I appreciate your courtesy and civil tone.

    Dudley
