
Thinkologist: The Dudley Lynch Blog on Brain Change

… a (mostly) good natured critique of World Handling Skills & Tools

More and More Attention Is Being Paid to the Brain’s Powers of Thin Slicing

My all-time favorite description of what time is comes not from a scientist but from a writer of pulp science fiction, the late Ray Cummings. In 1922, he observed that time is “what keeps everything from happening at once.”

This is more than a not-half-way-bad way of describing time. It’s such a doggone-good way that even some very reputable scientists say it is hard to beat.

Today, professionals in a variety of fields are recognizing the importance of “keeping everything from happening at once.” Or, when the time crunch can’t be kept out of unfolding events, the importance of understanding how the brain seeks to cope when everything seems to be happening at once, and of making allowances for all-too-brief tick-tocks in time.

In California the other day, Sergeant Steve “Pappy” Papenfuhs, a police training expert, took up this subject with 275 lawyers who defend cops and their municipalities in lawsuits. The plaintiffs in these suits are often alleging wrongful deaths from police bullets.

When a molehill looks like a mountain
Papenfuhs is a great fan of Dr. Matthew J. Sharps, a psychology professor at California State University, Fresno, who has made a career of studying the actions of people who must make split-second, life-and-death-affecting decisions. Sharps has even gone so far as to do cognitive and psychological post-mortems of events like Custer’s last stand, the Battle of Mogadishu and the Battle of the Bulge.

He learned that cavalry soldiers at Little Big Horn tried to take cover behind small piles of soft soil, where they died. Because they were stupid? No, Sharps concluded, because when everything is happening at once, the brain has a tendency to grab at the first apparent possibility. There isn’t a lot of natural cover on the American Great Plains. And Custer’s men hadn’t been trained to think about beating a zigzag retreat until they could reach an arroyo or a big rock or something else more solid to duck behind than a prairie dog mound.

But it wasn’t what happened at Little Big Horn that, according to Papenfuhs, caused gasps of disbelief from the lawyers at his recent lecture. It was what happened in one of Sharps’ experiments: evidence of what the brain may decide when there’s very little time—and often very little information.

The details of Sharps’ study can be found here. But the discoveries that most dumbfounded the cop-defending lawyers were these: (A) Ordinary people have an overwhelming tendency to shoot people they believe are threatening them with a gun. (B) They will do so even if the perpetrator is holding a power screwdriver that they have mistaken for a weapon. (C) But only about one in 10 people believes it is appropriate for a police officer to fire under the same circumstances.

All these cops saw was the hair
In his book, Processing Under Pressure: Stress, Memory and Decision-Making in Law Enforcement, Sharps offers his G/FI (Gestalt/Feature Intensive) Processing Theory. Boiled down to a few words, it says that when everything is happening at once, the brain defaults to what it feels is most right (that’s the “gestalt” part). It really doesn’t even have to think about it; in fact, it usually doesn’t. If you want it to do something else—in cop talk, make good tactical decisions—then you’d better spend a lot of time upfront explicitly teaching the brain about what to look for and what to do when it finds it (that’s the “feature intensive” part).

Rapid cognition—or the lack of it—was, of course, the subject matter that The New Yorker magazine’s curiosity hog, Malcolm Gladwell, wrote about in Blink: The Power of Thinking Without Thinking. Interestingly, he got the idea for the book from—who else?—a bunch of cops. It happened when, on a whim, he let his hair grow wild like it had been as a teenager and suddenly started getting stopped a lot by the fuzz. One time he was grilled for twenty minutes as a rape suspect when his skin color, age, height and weight were all wrong. “All we [he and the actual rapist] had in common was a large head of curly hair,” he notes.

That piqued Gladwell’s interest. “Something about the first impression created by my hair derailed every other consideration in the hunt for the rapist, and the impression formed in those first two seconds exerted a powerful hold over the officers’ thinking over the next twenty minutes,” he says. “That episode on the street got me thinking about the weird power of first impressions.”

Like Professor Sharps, Gladwell was often riveted by how the brain responds—and sometimes how good it is when it does—to situations where everything is happening at once. Nor by any means are those two the first to pursue this. For years, research psychologist Gary Klein has been studying how people make decisions when pressured for time. When he first started, he assumed that people thought rationally even when time was being sliced thin. But then he met a fire commander who demurred when asked how he made difficult decisions. “I don’t remember when I’ve ever made a decision,” the firefighter said. So what did he do? He replied that he just did what was obviously the right thing to do.

On thin ice, it’s good to do thin slicing
This was the beginning of Klein’s years-long inquiry into what he ended up calling “Recognition-Primed Decision-Making.” It’s not a cut-and-dried process, since the decision-maker can change his or her mind from moment to moment and often needs to.

Say a fire commander goes into a burning house, believing it to be a regular kitchen fire. But as he’s scouting around he realizes that things are too quiet and too hot. He’s uncomfortable, so he orders his team out—just before the floor collapses. The big fire was in the basement. The guy didn’t even know the house had a basement; he just knew this fire was not behaving like other fires in his experience. Klein calls this “seeing the invisible.” In Blink, Gladwell borrowed a phrase from psychologists: “the power of thin slicing.” Like Klein, he marvels at how capable the human brain can be at making sense of situations based on the thinnest slice of experience.

There is growing evidence that in situations where there is incessantly too much information incoming and not nearly enough time to come to a decision in classic laboratory (“non-garbage-in, non-garbage-out”) fashion, it behooves someone needing a favorable decision from the decider to appeal to the brain’s “powers of thin slicing.”

Literary agent Jillian Manus offers such advice at writers’ conferences to wannabe authors who are battling uphill odds that their ideas for books will ever get the full attention of a reputable agent, much less get an offer of representation. The really good (“successful”) agents get hundreds of snail mail and/or e-mail queries weekly, if not daily. This is another of those “everything is happening at once” realities. So it is critical that a writer do everything possible to instantly engage an agent’s powers of thin-slicing.

Who knows what cagier blinks will turn up?
One of Manus’s suggestions is to give an agent a comparative pitch in the very first words of a query letter. That is, tell the agent that the work is “somewhat like a this and a this.” Jane Smiley’s 1992 Pulitzer Prize-winning novel, A Thousand Acres? It’s King Lear in a cornfield. Clueless, the movie? Emma meets Beverly Hills 90210. The war drama, Cold Mountain? Gone With the Wind meets Faulkner. The science fiction novel, The Last Day? Manus successfully pitched it to a publisher as Michael Crichton meets The Celestine Prophecy.

Some of the more daring minds in our midst think that the universe itself has taken steps to avoid being taxed with unmanageable demands on its processing power. Science fiction writer/astrophysicist David Brin speculates that the 186,000-miles-per-second limit on how fast light can travel may be an artifact “introduced in order not to have to deal with the software loads of modeling a cosmos that is infinitely observable.” Or at the level of the quantum, “the division of reality into ‘quanta’ that are fundamentally indivisible, like the submicroscopic Planck length, below which no questions may be asked.”

Though he doesn’t talk about it exactly in these terms, Brin even wonders if our growing powers of thin slicing have us on the verge of figuring out or at least strongly suspecting that we are all reconstituted virtual people living out our lives in a reconstituted virtual reality. A simulation created by greater-intelligences-than-are-we operating way out in front of us, time-wise.

On his blog the other day, Brin wrote: “Take the coincidence of names that keep cropping up, almost as if the ‘author’ of our cosmic simulation were having a little joke. Like the almost unlimited amount of fun you can have with Barack Obama’s name. Or the fact that World War II featured a battle in which Adolf the Wolf attacked the Church on the Hill, who begged help from the Field of Roses, which asked its Marshall to send an Iron-hewer to fight in the Old World and a Man of Arthur to fight across the greatest lake (the Pacific) … does the Designer really think we don’t notice stuff like this? Or maybe this designer just doesn’t care.”

As we get better and better at deciphering what goes on in our minds in a blink in time, maybe we’ll begin to notice all kinds of things that have been eluding our powers of thin slicing. Meanwhile, our interest in what we are already noticing can only grow.


I Couldn’t Find Much That Is New About Breakthrough Thinking. (And I’m Not Sure I Want To.)

I took a look the past couple of days to see if I could find any evidence of a breakthrough in the area of breakthrough thinking, and I didn’t find one.

This isn’t to say that there aren’t some interesting things going on.

For example, the guys and gals at Idea Champions are still touting the benefits of their Breakthrough Cafés. When the idea was first uncorked a few years ago, Fast Company, the magazine, sent a writer to observe. And what she noticed was about 30 people trying to brainstorm in a dimly lit banquet room. She wrote, “Amid Latin music, a spread of fusilli and tiramisu, and plenty of pinot noir, they’re trying to get past their pasts—and push unrealized ideas to reality.”

To which, I’d have to say, “Good luck!” but that’s just me. I’ve tried this sort of thing. My menu preferences are a bit different in that I prefer a spread of, say, Goat Cheese Enchiladas With Tomatillo Sauce and grilled bananas and ice cream, and plenty of Negra Modelo. Fortified with all that, the only thing I have trouble getting past is the overwhelming desire for a 90-minute nap.

The common thread is confusion
Maybe a full tummy and well-lubricated brain produce different strokes in other folks—and that seems to speak to the real issue here: there’s a mystery afoot. The day we unravel it, we can kiss one thing goodbye: breakthrough thinking, because once we know how it is done, everybody and their gutter spout cleaner will be able to do it. And we’ll soon have more breakthrough ideas than we can shake Harry Potter’s wand at, and that’s going to take the sheen off everything.

Having just spent several hours surfing on the topic, though, I’m not worried. Us chickens in the creative thinking business (that’s the word we use when we want to feminize/sanitize the idea of bloodying the nose of the status quo) are clearly as clueless as ever. Otherwise, we’d be able to do a better job of getting our stories straight even as we take one shot after another in the dark at a target that may or may not be there. (Remember what the young adept in the movie “The Matrix”—the one who was bending tableware with his telekinetic powers—said: “There is no spoon.”)

Because the most common feature that we so-called experts on breakthrough thinking seem to share is confusion on exactly what advice to offer people looking to us for a comforting, and helpful, word or two on breaking through their brain blocks.

When idealization is a bad habit
To illustrate, consider the potential for mental whiplash—or at least cognitive dissonance—for anyone who has just been told that “The No. 1 Habit of Highly Creative People” is solitude, and then is immediately told that “The No. 2 Creative Habit” is participation. That is, “connecting with others, being inspired by others, reading others, collaborating with others.” (In other words, having fusilli and tiramisu, and plenty of pinot noir, amid Latin music in a dimly lit banquet room!)

To be fair, I should point out that this breakthrough thinking guide did say that you shouldn’t try to have solitude and participation at the same time. That you should try to “balance” them. To which I say, again, “Good luck!” The problem when you start out trying to balance the settings and behaviors and circumstances that might lead to a breakthrough idea is that you are dangerously close to “idealizing” the process, and that’s a hobgoblin that can create big trouble in breakthrough thinking territory. As Robert Fritz (a breakthrough thinking authority I really enjoy, probably because he shares my bias against philosophers) notes, “Consistency to the ideal thwarts the creative spirit of innovation. It limits the imagination. It puts the mind in jail. It imposes a synthetic construct on real life.” Yea, verily, it does!

To “box” or not to “box”
What I found during my several hours of surfing on the topic of breakthrough thinking is that nothing has changed. We still don’t know how an idea that strikes us and/or others as radically fresh shows up in our awareness or how to make this a cookie-cutter process.

So those of us in the “innovative thinking industry” keep casting a wide, often conflicting net of ideas in the hopes that at least something we suggest will deliver a useful payoff for people who pay attention to us.

Ever since the nine-dot puzzle became popular in the late 1960s in creativity training seminars and books, “thinking outside the box” has been a common refrain. Then along comes a widely quoted article in Harvard Business Review that says forget about thinking outside the box. What you really need to do is “create a useful new box then think inside that.”

Just when we thought that suggesting our business clients ask their customers what they want could lead to a breakthrough idea, we get told that this is the wrong question. The right one: “What outcomes are you looking for?” Argues this author: “When you ask customers what they want, they respond with solutions (products, service features) they’ve already experienced. They can’t imagine new technologies or materials.”

Imagination is everything?
We get all lathered up about the importance of people using their imagination. We quote Einstein (“Imagination is everything.”), Vonnegut (“We are what we imagine ourselves to be.”), Picasso (“Everything you can imagine is real.”), Napoleon (“The human race is governed by its imagination.”) and Jesse Jackson (“If my mind can conceive it, and my heart can believe it, I know I can achieve it.”). And then we feel ourselves shot out of the saddle by being reminded that “the notion that imagination is everything is just silly. It certainly is something, a critical part of the creative process. But imagination alone will lead you to be an empty dreamer without any possibility of making your vision become reality.”

Ah, sigh. To say it again, if breakthrough thinking was easy, everybody and their Tibetan Mastiff would be doing it.

Which is why when I read about NBC’s upcoming series, “Breakthrough,” I get a bit queasy. The promos say the ever-opportunistic Tony Robbins is going to take people who have been terribly down on their luck or suffered horrendous setbacks in their lives and show them in front of millions of viewers how to “make a Breakthrough.”

I hope he can. And does. Hope he succeeds smashingly with every person he tries it with. But to say it again, one final time, if breakthrough thinking was easy, everybody and their sweat lodge guru would be doing it.


Maybe I’m Being Irrational. But This Terrible Oil Spill Has Ruined My Appetite at the Moment for Matt Ridley’s “Rational Optimism” Book

Let me share a few quick reasons why I’m not really a beach person.

Most visits to the beach quickly turn hot and sweaty. I’m more the 72-degree thermostat variety. Moreover, it is infernally difficult to leave the beach behind once you’ve been there; it adheres to your flesh and picnic utensils, invades your sandals, sticks to your clothes and goes home with you, defying all efforts to be rid of it. Besides that, it moves. You can hear thousands of sand grains displacing each other with every step you take. I keep asking myself, “Was that me or an earthquake?”

Still, because I’m a grandfather, my visits to the beach are not that infrequent. Earlier this week, there I was again, half-way up to my ankle bones in Florida’s incomparable bleached-white seaside sands.

However, my mind was restive. Rather than undulating waves and freshly soaked sands, I kept thinking of BP’s runaway oil well out there, down there, not far over the horizon. The thought that oil could soon be washing up on this very beach was impossible to ignore.

Hubris in the Air?
As I walked along the extraordinarily beautiful beaches of Siesta Key, Florida, it wasn’t BP’s hugely flawed CEO Tony Hayward who came to mind. You’ll probably remember Hayward’s self-pitying plaint the day he got word that the Deepwater Horizon drilling rig had caught fire in the Gulf of Mexico: “What did we do to deserve this?” The Brit on my mind was one Matt Ridley, whose controversial new book, The Rational Optimist: How Prosperity Evolves, is just out.

Because the Gulf oil spill has not only shattered Hayward’s credibility. It has also put a sizable dent in Ridley’s. Odd thing is that, perhaps to this very moment, neither of these gentlemen seems to have realized it. Hayward’s repetitive gaffes have made him a video poster boy for penthouse corporate hubris. And Ridley? When you go to his website, you get a Hayward-like, tone-deaf discussion mostly about “how rare such terrible oil spills have now become.”

I first became aware of Ridley’s gift as a popular science writer around the turn of the millennium when I read, cover to cover, his superb bestselling work, Genome: The Autobiography of a Species in 23 Chapters. In that book, he wasn’t all that far from his training as a naturalist, one with a Ph.D. in zoology from Oxford. In The Rational Optimist, as I’ll elaborate on a little bit more in a moment, he probably isn’t that far from home either. But Ridley’s particular stomping grounds this time don’t inspire the confidence in his analysis of “how prosperity evolves” that we had in his genome book about how biological evolution works.

Campaigning Against Gloom and Doom
Still and all, I’m more than a little interested in Ridley’s inquiry and open to the possibility that he might turn out to be right. His view, stated in a few words, is that humanity is on a roll and has been for, oh, 100,000 years. Decade by decade, century by century, things have been getting better, especially of late. He really thinks that things are going to keep getting better and better and is vexed that “eco-catastrophists” like those warning of global warming are such predictable spoilsports. (He feels that he was first misled as a young thinker by Rachel Carson’s Silent Spring and her anxiety about DDT and other chemicals.)

If you are interested in Ridley’s pet theory explaining why the human species has been so persistently (despite setbacks) blessed, you’ll find it summarized in numerous recent articles in the British and American press (like this one). Ridley is, in a way, still rooting around in his genetics ideas. Simply put, he suggests that ideas need sex, too. And they get it when trade picks up. The more people can interact with each other and trade their innovations, the more their ideas can have sex. And, ergo facto, that increases the rate of cultural and economic progress.

As Ridley told a reporter for The Guardian: “And if 6.7 billion people continue to keep specialising and exchanging and innovating, there’s no reason at all why we can’t overcome whatever problems face us—population explosions, food shortages, disease, poverty, terrorism, climate change, you name it. In fact I think it’s quite probable that in 100 years’ time both we and the planet will be better off than we are now.”

Another “Jolly Good Idea” Idea?
The rah-rah flavor of Ridley’s prose and pronouncements is sure to appeal to that slice of book-reading Americans and their pundits who love to pummel anyone who might suggest that “market fundamentalism” needs checks and balances and who counsel a strong ratio of realism and rationalism to optimism. (New York Times science section columnist John Tierney praised Ridley’s book in a column headlined, “Doomsayers Beware, a Bright Future Beckons.”)

Please don’t interpret what I’m about to say as a sign in the slightest that Anglophobia suddenly has a chokehold on your humble scribe. There are sound, lasting reasons why I watch the BBC World News nearly every night, experience something akin to pain if I miss the latest British literary melodrama on Masterpiece Theater, prefer MI5 over any of the CSI shows and count Inspector Lynley as my all-time favorite fictional gumshoe. But the more I look at the Ridley phenomenon the more I suspect that there is something peculiarly British in all this: that this is a raging instance of British ideas having wild sex with British ideas, some of them a bit hoary. (Nobody ever quite milked “trade” like the British Empire in its heyday milked trade!)

We now know that Matt Ridley turned to writing popular science books because he needed something to do after his wife, neuroscientist Anya Hurlbert, got a job at Newcastle University. He was the third member of the Ridley family to sit on the board of the infamous Northern Rock bank. He was non-executive chairman of the bank in 2007 when it experienced Britain’s first bank run in 150 years and ended up costing taxpayers twenty-plus billion dollars. Ridley’s family is British upper crust and has been wealthy for two centuries. He was born on third base and had no need to hit a triple. So we shouldn’t be surprised to find a patrician “We Shall Fight Them on the Beaches” quality in his jolly-good-idea, sunny-lip-upward expectations in The Rational Optimist.

As I walked on the beaches of Florida the other night, I made a decision. I’m not going to read Matt Ridley’s latest book. And it really isn’t Ridley’s fault. I even think I probably have a lot of “rational optimist” views myself. (For instance, few articles have ever gladdened my heart quite like Dr. Steven Pinker’s notable New Republic analysis of declining violence in the world.) But I think the screw-ups (from America’s political and regulatory myopia) and the screw-ups (hello, again, Mr. Hayward) who have put at risk much of the Gulf of Mexico and all who depend on the habitability of her waters leave me with little interest in hearing arguments that it only happens once every ______ years. So, coitus interruptus, Dr. Ridley. S*** happens.

If I hear that you are donating your book earnings to help the shrimpers of south Louisiana recover or have been spotted helping clean oil off dying pelicans and damaged sea turtles, maybe I’ll change my mind.


On Black Swan Wings: My Copy of Nassim Nicholas Taleb’s Book Got Me a Free Upgrade on a Flight from Tampa to Chicago

I’ve probably shared this universal “rule of thumb” with my readers more than once: The world is divided into people who divide the world into twos and those who don’t.

And now there’s this one: The world is divided into people who know what a “black swan event” is and those who are clueless. Judged by the number of people who have worked the phrase into their public pronouncements, the Society of the Black Swan is spreading faster than a wickedly clever (or cleverly wicked) tweet.

I just googled “black swan” and came up (in 0.27 seconds) with 10,100,000 hits. There may not have been a more popular catchphrase since “bird in the hand” (32,400,000 hits), “gift horse” (28,900,000 hits), “elephant in the room” (19,000,000 hits) and “bad penny” (13,200,000 hits).

Let me illustrate the drawing power of “black swan” by describing something that has never happened to me before and that, frankly, I never expect to happen again. I don’t know yet if it was a black swan event for me personally, but it certainly fulfilled the first of the three requirements for being one. That is, it was a surprise to the observers (that would include me, a poker-faced flight attendant on United Airlines Flight 569 from Tampa to Chicago, and about two rows of passengers forward of the row containing seat 13C and about four rows of passengers to the rear).

The value of carrying a trendy book
As Flight 569 was about to pull away from the gate, I noticed that two seats in the row in front of me were unoccupied. It used to be that the airline seldom objected to someone slipping into an unused space. But in today’s cutthroat skies, no sooner had I plopped down in one of the vacant seats than a stern-faced flight attendant was at my elbow demanding an additional $39 for the extra legroom on that row.

I demurred and returned to my original seat. The plane took off, I settled in, closed my eyes and was close to falling asleep. Suddenly, the second (poker-faced) flight attendant shook my shoulder firmly. “This gentleman,” she said, pointing to the lone occupant of the row I had just attempted to crash, “has paid for your upgrade.”

He was a pleasant-looking, middle-aged fellow who looked like he probably knew his way around a tennis court. “What’s this all about?” I asked. He gestured at the copy of Nassim Nicholas Taleb’s The Black Swan: The Impact of the Highly Improbable that I’d brought along to read. “You just looked like an interesting guy,” he shrugged. He was head of the economics department at one of the Chicago area’s elite, small liberal arts colleges. And a world-traveled expert on the financial impact of big sporting events and the financing of public sports arenas, among other things. We had a wonderful chat flying to Chicago. One of the things we talked about was black swan events.

1 out of 3 may get you a $39 upgrade
I’m going to assume that there probably isn’t a single person reading this blog item who has yet to hear the name “Nassim Nicholas Taleb” or who has not heard of his latest book. (If you haven’t, that’s okay, very Taleb-like, in fact, in that the “highly improbable” seems to be getting more and more—well—probable these days. Just go to Wikipedia and get up to speed.) The rest of my readers will likely understand when I observe that the reason I doubt that my getting an unexpected $39 upgrade from another passenger on Flight 569 will actually turn out to be a black swan event for me personally is that the other two criteria that Taleb requires for a genuine Black Swan are missing. Namely—

The event has a major impact.
After the fact, the event is rationalized by hindsight, as if it had been expected.

To which I will rejoin that one out of three isn’t a bad start. Certainly, my upgrade by a stranger who wanted to chat with me on a flight to Chicago was unexpected and thus met black-swan-event criterion No. 1:

The event is a surprise (to the observer).

Taleb is—what?—an intellectual swashbuckler? He’s a former Wall Street hedge fund manager and now a professor who thinks nearly all Wall Street traders, bank risk managers and economists are, plainly, posers and idiots. But then he also places most professional users of statistical methods, including academicians, physicians, philosophers and government financial regulators, in that category. This middle-aged Lebanese-born curmudgeon derisively calls them “quants” (as in quantitative users of mathematics who try to predict how the world, and especially the world of finance, works) and he’s at war with them because they keep trying to predict things that, or so Taleb says, turn out to be black swan events and thus are not predictable.

Sometimes, Black Swans are positive, but the ones we remember most may not be. In his book, first published in 2007, Taleb lumps together such events as the rise of the Internet, the personal computer, World War I and 9/11 as being Black Swans. The argument can be made (and it isn’t always clear that Taleb’s required triplet of rarity, extreme impact and retrospective predictability has been satisfied) that the Indian Ocean tsunami, Hurricane Katrina, the Haiti earthquake, the Eyjafjallajoekull volcano eruption and the BP oil spill, not to mention the Lehman Brothers bankruptcy, the sub-prime mortgage crisis and late-2000s recession and the now unfolding European sovereign debt crisis, should be added to that list.

An explanation for “just about everything”
Taleb apparently wants black swan event status more judiciously applied but welcomes a full appreciation for the role that such events have played on the development of humans and their civilizations. In an interview with The New York Times, he argued, “A small number of Black Swans explain almost everything in our world, from the success of ideas and religions, to the dynamics of historical events, to elements of our own personal lives.”

As the planet’s population keeps adding billions, more and more people are destined to find themselves unexpectedly thrown into the path of the highly improbable. Thus Taleb has staked out a growth industry that he currently has almost all to himself. Overthrowing the rule of the “quants” is going to require a full-scale paradigm shift (which will probably, in retrospect, turn out to be a Black Swan!). As financial reformers in the Obama Administration and the Congress have discovered, there’s simply too much power to be challenged and too much money to be made by people who claim to know how to predict and control the future.

Hopefully, Taleb has only just begun. Now that he’s riveted the attention of people (like my pro-active seatmate on Flight 569) on the importance of paying attention to unexpected events of large magnitude and impact, he needs to take his genius, impatience with the status quo and his salty joie de vivre on to the next, next challenge: helping humanity find better ways to prepare for what it can’t see coming.

The reigning black swan expert is …
And Taleb appears to be headed that way. A 2nd (paperback) edition of The Black Swan was released earlier this week by Taleb’s publisher. It contains (or so Taleb himself claims on his website) about 100 new pages on robustness and fragility. I’ve only read an excerpt. But it is clear that Taleb has been doing a lot of thinking about how best to merge skepticism and decision-making in the real world. He’s gone looking for clues in how other complex systems have managed to deal with black swan events and survive. And what he’s found in Mother Nature has blown him away.

He writes on the newest pages of his book:

“Mother Nature is clearly a complex system, with webs of interdependence, nonlinearities, and a robust ecology (otherwise it would have blown up a long time ago). It is an old, very old person with an impeccable memory. Mother Nature does not develop Alzheimer’s—actually there is evidence that even humans would not easily lose brain function with age if they followed a regimen of stochastic exercise and stochastic fasting, took long walks, avoided sugar, bread, white rice, and stock market investments, and refrained from taking economics classes or reading such things as The New York Times.

“Let me summarize my ideas about how Mother Nature deals with the Black Swan, both positive and negative—it knows much better than humans how to take advantage of positive Black Swans. First, Mother Nature likes redundancies, three different types of redundancies.”

And he’s off and running.

With Nassim Nicholas Taleb, it is always likely to be a wild ride, except when you are having a leisurely chat about his provocative claims and discoveries on your flight from Tampa to Chicago. Then it may turn out to be merely a pleasant surprise.


The Indefatigable “Strategy of the Dolphin™” Just Keeps on Giving. Its Forte: Helping the Whole Exceed the Parts

The healthy human brain is no dummy. By the time it reaches adulthood, it knows a lot about what works and what doesn’t work. Where it gets in trouble is when things that it thought worked no longer do so, at least not well enough.

When that brain was much younger and in the body of a child, change was much easier. The child brain is quite malleable. When it wants or needs to do something different, doing that different something usually isn’t nearly as difficult as doing something different is for adults.

At one point in his widely admired book, Brain and Culture, Yale psychiatrist Bruce Wexler explains it this way:

“During the first part of life, the brain and mind are highly plastic, require sensory input to grow and develop, and shape themselves to the major recurring features of their environments. During these years, individuals have little ability to act on or alter the environment, but are easily altered by it. By early adulthood, the mind and brain have elaborately developed structures and a diminished ability to change those structures. The individual is now able to act on and alter the environment, and much of that activity is devoted to making the environment conform to the established structures.”

A brain that “backs” its way into maturity
A lot of what happens as the brain ages and matures on the long, arduous journey from birth to adulthood has been a career-focus of the husband-wife research team of Drs. Stephen Rushton and Anne Juola-Rushton at the University of South Florida Sarasota-Manatee. A couple of weeks ago, the Rushtons were sharing their views with parents in Mumbai, India.

My interest was immediately captured by Stephen’s comment (as quoted by an Indian reporter) that “The child’s brain develops from the back to the front.” The two Drs. Rushton took their child-rearing audience on a tour of just how the child brain develops, beginning with the spinal cord and cerebellum and moving more or less sequentially over the years to the occipital lobes, parietal lobes, temporal lobes, motor cortex and finally to the frontal/pre-frontal lobes. This doesn’t mean that there are empty spaces where those later-developing structures are, but I understand exactly what the Rushtons are describing: an advancing “biopsychosocial” locus and focus—a forward-moving frontline—to a person’s cerebral capabilities.

While I’ve not yet had the opportunity to talk with the Rushtons about all this (I hope to—we have family in Sarasota and are there often), what I’ve heard thus far sounds highly supportive of many of Brain Technologies/Brain Me Up’s applications and explanations. This is particularly true of those based on the late Dr. Clare W. Graves’ “biopsychosocial” model of human development. That is to say, our Dolphin strategy models and materials.

My colleague, Dr. Paul Kordis, and I wrote our first “dolphin”-based work more than 20 years ago. Other works on the Graves model followed. Thus far, however, none seems to have caught the attention of a globe-spanning audience with quite the magnetism and usefulness of our book, Strategy of the Dolphin™: Scoring a Win in a Chaotic World. This work appears to speak directly to the desire of its admirers for a better way to understand the marvelous, mysterious dance between brain and culture and for better ways to use that knowledge in their own self-development, organizing and relationships.

From ‘best ever’ lists to the House of Lords
The assignments with which Strategy of the Dolphin has been tasked and the list of its admirers continue to grow.

Just the other day, we learned that our dolphin strategy provided the Inspirational Forum for Organizational Health with the theme for its 31st annual conference at The Hague, Netherlands—back in the late 1990s. We’d never have known had a blogger not recently revisited a report on a speech the organization’s president delivered in England’s House of Lords about a year after Princess Di’s tragic death.

While we know of no parents who have named their newborns after the dolphin (or us) because of the book, more than a few organizations have put “dolphin” in their name or dolphins in their logo in the book’s honor. (Alas, our company lawyer has had to remind more than a few enthusiasts that “Strategy of the Dolphin” is one of our trademarks.) In one language and then another, the book is frequently reviewed; here’s a recent French language review—of the French language version of the book, natch—written by a Belgian blogger. Self-development writers can’t seem to stay away from the book and its compelling metaphor for very long, as this recent U.K. article confirms.

And business students who gave oral book reports on SOD years ago sometimes discover that their professors never forgot how moved they were by their students’ enthusiasm for the book’s content.

Earlier this year, we learned that SOD is on the short list of “best business books ever” that management professor David C. Wyld maintains. Dr. Wyld has opined that “the authors’ insights are brilliant and so very relevant to the challenges most individuals and organizations faced through the nineties and still grapple with today: going for the elegant outcome; leveraging the wave; breaking set; being on purpose; seeing through the brain’s ‘time window’; releasing to a higher order; pushing the envelope; shifting in time. It’s deep and intelligent, but not intellectual. It’s a thoughtful blueprint and practical road map of useful insight.”

Thanks, Professor!

What keeps this book timely and relevant?
I was already mulling over Professor Wexler’s book and the Rushtons’ model of how children’s brains develop, along with some other ideas about how the brain deals with the need to change. Then came Wyld’s comment that the insights in SOD are still “so very relevant to the challenges most individuals and organizations face….” Why is that? I pondered. Eventually, I penned these thoughts:

To get the adult brain to change, you must work with the way that brain is already wired. It has a lot of beliefs, protocols, habits and practices already in place. It has a strong sense of how it thinks the world ought to be. The best way to make any headway in changing all this is to help people feel they can use all that knowledge they already have, but use it in exciting and productive new ways to do things differently.

The power of the Dolphin strategy is that it doesn’t require people to give up who they are. It simply asks them to take what they know and bring it into a wider, more productive context. Once they do that, what they often discover is that what they’ve added to the mix has not really been merely additive but also transformative. As the old saw puts it, the whole is suddenly more than the sum of the parts.

There’s a right time to think like a Carp (“self-sacrificially”). And a right time to think like a Shark (“controllingly”). And certainly, more and more times when it pays to think like a Dolphin (“situationally and pro-actively combinatorially”). You may need to think like all three in a short space of time. In today’s world, your audience or marketplace can change several times an hour. So at the moment, what people need more than anything else is a new comfort level for being more mentally and emotionally agile, versatile, competent.

This is what we teach with the Dolphin strategy. First, we offer a new way to think about the main ways that people believe, act and respond. There are only a handful of major filtering and belief-formulating scripts that people everywhere follow in daily life. Our goal with the Dolphin strategy is to help individuals recognize those overarching scripts and the behaviors they trigger faster than ever before. When they spot these scripts in others and themselves, they have valuable clues as to how to respond appropriately. And we may be introducing them to a new script—the script of the Dolphin thinker. In today’s marketplace, the Dolphin thinker—particularly, the Dolphin thinking executive and the Dolphin thinking entrepreneur—is going to win or achieve favorable outcomes more often than anyone else, for three reasons:

1) He or she sees change coming quicker than others (because he or she has more perspectives, and a wider perspective, with which to watch for change).

2) He or she understands better than most which change will matter and which may not (because the Dolphin worldview offers a better sense of what lies behind and beyond change and how other worldviews or belief holders are likely to respond to it).

3) He or she thrives on making new things happen, old things better and the world a more competently functioning place (because the appearance of new technologies, new viewpoints and new configurations of people working together doesn’t spook Dolphins but, to the contrary, excites their innovative spirit and outlook).

Not everyone is equipped to think like a Dolphin. But all Dolphins are equipped to help those around them think better, with less fear and inner resistance because the world is changing and needs to change even more.

________

For more information about the Dolphin strategy book and the other Brain Technologies self-growth materials, go here. And you can arrange to take our online Yo!Dolphin! Worldview Survey™ here.


History’s Longest Running Whack-a-Mole Game (“Dualism”) Continues. As Usual, Friends of the Right Brain Are Kicking (Left Brain) Posteriors and Taking Names

The physicist-turned-healer (G*d rest his soul—he’s no longer with us) fulminated against eating too much garlic. He said gorging on “the stinking rose” is a very bad thing for the brain.

He reasoned this way:

Garlic contains a poison called sulfone hydroxyl. The sulfone hydroxyl ion, he alleged, can penetrate the blood-brain barrier. Heavy garlic eaters, he warned, should be prepared to instantly lose millions of the very cells that link the brain halves. Loss of those cells, he averred, will lead to “desynchronization of the left and right brain hemispheres”—AND WE ALL KNOW HOW DANGEROUS THAT IS!!!

I’m afraid I find this one of those “time-to-debunk” moments. About the garlic, that is. But not about the idea of brain lateralization.

Debunk it all you want but the idea of finding value in looking at what the brain halves are and represent and do isn’t going away. Not in the popular news. Not in the cultural and worldview wars. Not even in the medical and other scientific literature. The concept is simply too useful. In illustration, argumentation and calculation, there’s nothing quite as easy as “cleaving the apple”—that is, dividing things in half. Dichotomizing.

[In the interest of full disclosure, let me say right away that most of Brain Technologies/Brain Me Up assessment models have important right brain/left brain components.]

The brain forever has dichotomies on its mind
The ancient Taoists did it with yin/yang. Religious types with good/evil. Philosophers with mind/matter. Particle scientists with wave/particle. Psychologists with nature/nurture. Law officers with good cop/bad cop. On and on and on. You just had to know that it was only a matter of time before “dualism”—or … harrumph! … co-eternal binary opposition—infested neuro discussions like kudzu.

Maybe, as one thoughtful observer has suggested, even as old dualisms get knocked down, “it seems that there is something about the wiring of the brain that leads to new dualisms springing up.” Talk about Whack-a-Mole!

That was certainly what the late George Kelly, the father of personal construct psychology, thought. “Our psychological geometry is a geometry of dichotomies [italics mine] rather than the geometry of areas envisioned by the classical logic of concepts, or the geometry of lines envisioned by classical mathematical geometries.” (Double harrumph!)

So what’s been happening lately in the popularized right brain/left brain arena? No mystery there. Same thing that’s been going on ever since the late Marilyn Ferguson’s The Aquarian Conspiracy. As usual, at least in literary and salon circles and the post-modernist-influenced domains of academia, friends of the right brain have been kicking the glial cells out of the left brain.

Bashing in the name of balance
Consider the late Dr. Leonard Shlain. The San Francisco surgeon first graced us with Art & Physics: Parallel Visions in Space, Time, and Light. Then he weighed in with The Alphabet Versus the Goddess: The Conflict Between Word and Image. Shlain’s bias was not even thinly veiled: he thought the left hemisphere of the brain had run amok for 5,000 years, and it was time to put it in its place. In fact, he argued that this is happening as we, the people, move away from dependence on the left brain’s fixation on its symbolic unit of choice: the alphabet. And move toward the right brain’s symbolic unit of choice. The image.

If Shlain had done surgery the way he went after the—quote—linear, abstract, logical pro-masculine left hemisphere and extolled the virtues of the—quote—holistic, visually oriented, iconic pro-feminine right hemisphere in his books, they just might have considered removing all sharp-edged objects before letting him in the O.R. In fact, by the time he got to the end of Alphabet/Goddess, he seemed to recognize that he had “expended considerable ink bashing the left brain.” So he pleaded for a balance in using both sides of the brain, despite such a stance having so clearly eluded him in his writings on the subject.

But then Shlain’s love affair with the right brain paled in comparison to Iain McGilchrist’s. A London psychiatrist, McGilchrist wrote the recently published The Master and His Emissary: The Divided Brain and the Making of the Western World.

In an interview just out on the blog Bookslut [Editor’s note: If you think her naming choice was piquant, wait ‘til you see her logo!], McGilchrist anguished over having had to take so many risks in his 600-page tome. For example, after castigating left brain researchers for thinking about the two sides of the brain as machines, he took the risk of turning the hemispheres into personalities, “with desires and values of their own….” And he promptly used this literary license to—right!—bash the glial cells out of the left brain personality.

Left brain origins for post-modernism?
He told Bookslut’s Jessa Crispin: “The left hemisphere sees only a very simple version of reality, is black and white in its view, tends to arrogant certainty, a view that it ‘knows it all already’ and doesn’t have to listen to anything new, and is in denial about its own short-comings. And it has a tendency to paranoia if it feels its position is being threatened….

“I do find it very hard to be optimistic at present, because, as I say in the book, the left hemisphere’s view pretends to have it all sewn up, and people are taken in by that, especially when it appears to come from the mouth of ‘science’ (usually biologists—the discoveries of physicists forced them long ago to abandon the Victorian mechanistic model, but the life sciences are slow in catching up). Not that the current arts scene is much better—post-modernism is no challenge to the left hemisphere’s view, but, as I suggest, an expression of it.”

And while it is less pugnacious, we shouldn’t overlook neuropsychiatrist Michael R. Trimble’s The Soul in the Brain: The Cerebral Basis of Language, Art, and Belief. More cautious than some of the brain wars’ luminaries, Dr. Trimble nevertheless focuses lovingly on what he calls the “seven L’s”—Language, Laudation, Lying, Laughter, Lachrymation, Lyric and Love. And each of these, he asserts, is “quintessentially driven by the right hemisphere.”

So the debate over which brain half is on top and which is to blame for the things humans do and which needs to be encouraged to contribute more or contribute less and how well they are connected and so forth continues unabated. And it’s not just the cultural warriors interested in doing the math.

Using your earlobes to tug your brain into shape
Professor Yash Gupta at Johns Hopkins’ B-school cites Apple Computer’s ability to use both its left and right brains as the reason for much of its success. A researcher at Cleveland Clinic thinks autism may be a result of the brain halves’ inability to “talk” adequately with each other. Humpback whales have been found to be either left-handed or right-handed (and probably right/left-brained, too). And neither the halves of songbirds’ brains nor their voices “lateralize” normally if the birds don’t get a chance to imitate the vocalizations of their caregivers in infancy.

So what’s a body to do if it feels its brain halves are not sufficiently in sync—that is, aren’t “balanced,” to use a favorite right brain way of phrasing it?

Well, there’s always Master Choa Kok Sui’s “superbrain yoga” exercise. It involves crossing your arms, tugging on your earlobes and doing deep-knee squats for three minutes, five if you can. Details are here.

Just please don’t eat the daisies—or overdose on the garlic.

______________

At Brain Technologies/Brain Me Up, we came early to the right brain/left brain party and plan to stay late. Talking about brain habits and choices this way is simply too valuable and constructive to our clients to ignore. So all of our personal and team assessments seek to get the brain to think more constructively about itself. Look in the mirror. Take the measure of not only its halves but its wholes. When you are ready to experience the power of knowing your own brain, we recommend starting with The BrainMap®, which is available both online and in paper versions. For more information, please go here.


A Special Valentine’s Day Reprise on Sex and the Brain: We Just Never Seem to Get Enough of Talking and Doing!

Valentine’s Day is just around the corner. That means every blogger and her bird dog are thinking about sex. But then, who needs Valentine’s Day as an excuse to think about sex?

The brain—as every psychobabble and (as you are seeing) thinking-skills aficionado is sure to remind you eventually—is arguably our major sex organ. So it should be no surprise that sex is never far removed from our thoughts. Which is amazing, since, as one scientist has noted, nobody is ever known to have died from a lack of it.

How far removed?

Well, that’s been a lively sex-on-the-brain issue lately. An online polling company (not to be confused with “a major scientific research institute”) has claimed that a typical male thinks about having sexual intercourse (not to be confused with a hug or a handshake) an average of 13 times a day, or about 5,000 times per year. A typical female? Only five times daily, or about 2,000 times per year. On average, how often do men actually have sex? About twice a week, this outfit reports.

One reason women don’t have more of it may be what often seems to be foremost on their minds when they do think about having sex. Condoms.

Men’s issue bloggers know to expect a deluge of comment any time they mention the “c” word. One frustrated respondent wrote, “Picture wrapping your vagina in a Walmart bag before sex, and you’ll have some idea of how condoms can feel at their worst.” One comment about condoms is sure to be followed by another, not infrequently from a woman reader. The above male comment prompted this female comment, “I mean, couldn’t I at least wrap my you-know in a bag from, say… IKEA?” Another shared, “Oddly enough, the biggest condom whiners I’ve ever been with both had STDs that could’ve been prevented if they’d wrapped up their junk.”

How big an issue is it to get a condom on a male when the lovers aren’t in a long-term, committed relationship? This devilishly clever, potentially offensive (so be warned!) piece of French graffiti animation about AIDS prevention probably offers a solid clue.

All of which is to observe that the subject of sex and the brain is as controversial as ever. But that’s not to say that we aren’t beginning to clarify some important matters:

Good sex (and good jazz) requires the prefrontal cortex to take a powder.

Specifically, the left lateral orbitofrontal and the dorsomedial prefrontal cortexes. The former polices self-control over basic drives like sex. The latter can lead to a suspension of judgment and reflection. Diminish both their outputs, and you can apparently liberate the libido. Brain imaging studies show deactivation of the same areas of the brain in jazz musicians. Ergo, good sex is really a zonked-out brain improvising! The “play” question then becomes: do you screw or do you riff?

Don’t hug the lug unless you are serious, sister!

Why not? The Big O’s. Oxytocin and the ovaries. One expert has issued this caution to women: “The effects of oxytocin can be incredibly disarming to a woman. Female animals injected with the stuff seem to throw caution to the wind and cuddle up with the first available male. And that is why, when women ask me for advice about men, I warn them, ‘Don’t hug the guy unless you plan to trust him.’” The ovaries produce testosterone. One woman with “arousal dysfunction” joined a scientific trial where some participants wore a testosterone patch. She blamed the patch when she suddenly had a desire to throw herself into the arms of a cousin at a funeral. The problem? Her patch was a placebo. The testosterone was of her own making.*

The Mars versus Venus thing is a brain issue.
Bestselling author John Gray was on the right track: men and women are from different planets. Their brains, that is. And the list of male-female brain differences is growing ever longer. Researchers are astonished that this hasn’t been realized sooner. But then most test subjects—human or animal—have been male. For example, only now are we realizing that women get better pain relief from the opioid painkiller nalbuphine and men from morphine. “It’s scandalous,” one Canadian researcher says. “Women are the most common pain sufferers, and yet our model for basic pain research is the male rat.” Often, men don’t understand brain differences as they affect sex, either. Therapists still marvel at how quickly the male brain can begin to suspect that its female partner is having an affair if she’s just not in the mood. (After all, if she doesn’t want sex with him, it must be because she’s getting it somewhere else.)

Forget the G spot. Think B spot.

This just in!!! A new study of 1,800 women at King’s College, London, suggests that the legendary G spot (a supposedly bean-sized vaginal area said to be the female body’s prime erogenous zone) is a myth. But never mind. Dr. Daniel Amen is a psychiatrist and brain-imaging junkie. He wants to show you some pitchers. (No, not dirty ones.) Pictures that suggest that the right temporal lobe—Amen’s B spot—is “the seat of orgasms.” (You can learn more in his book, Sex on the Brain: 12 Lessons To Enhance Your Love Life.) The B spot, the good doc says, is what can make love dangerous. He likes to talk about former astronaut Lisa Nowak. She donned adult diapers so she could drive hundreds of miles nonstop to confront a romantic rival. Amen thinks going into space may have affected her B spot!

The brain just can’t let the subject go.
And I’m not even going near the sex-on-the-brain problems of Tiger Woods, John Edwards, Mark Sanford, David Vitter or Eliot Spitzer. Instead, I’m going to talk about the compulsions of the Christian housewife who blogs at “Beyond the Pale.” On Dec. 16, she asked, “Is there sex in heaven?” Jesus never said never, she noted. Good thing, too. “[If] he’d flat-out said, ‘Well, kids, tough break, but no one will be gettin’ wichoo in heaven,’ all kinds of sex-crazed flaky goobers like me would say, ‘Seriously?…. Lemme get back to you on that salvation thing, Jesus…….’” On Jan. 13, she was back with “More sex in heaven.” Reassuring her readers that going to heaven doesn’t mean you are going to end up being a “little Hindu floaty thing.” Good thing, too. She said, “[If] MB wants to be a floaty thing in heaven, I am going to be royally pissed. I need the feel of his arms available for me forever.” MB is her husband. (“My Beloved.”)

But I can’t be all serious all the time about the subject of Valentine’s Day and sex-on-the-brain. I have to tell you one joke.

U.S. President Calvin Coolidge and his wife are visiting a poultry farm.

During the tour, Mrs. Coolidge inquires of the farmer how his farm has managed to produce so many fertile eggs with so few roosters. The farmer proudly explains that his roosters perform their duty dozens of times each day.

“Perhaps you could point that out to Mr. Coolidge,” pointedly replies the First Lady.

The President, overhearing the remark, asks the farmer, “Does each rooster service the same hen each time?”

“No,” replies the farmer, “there are many hens for each rooster.”

“Perhaps you could point that out to Mrs. Coolidge,” replies the President.

I don’t know whether President and Mrs. Coolidge ever actually visited a poultry farm or had such a conversation with its owner. But the Coolidge Effect—named after the joke—is real. Human males who have just ejaculated usually can’t have sex with the same female again without a rest. But if a different female enters the picture (and the room) right away … well, hello, Mr. President! One more set of physical and emotional complications on the ever-winding road that has emerged to keep our species around.

Happy Valentine’s Day!

_________
*Actually, as one neuropsychologist has explained, the sex-on-the-female-brain thing is a bit more complicated. Let’s say a woman spots someone interesting. The picture travels via the lateral geniculate nucleus to her visual cortex, which evaluates the “mate potential.” If it’s a go, the news is sent to the signal-boosting amygdala, which passes the spark to the hormone-controlling hypothalamus. The word next goes to the ovaries, for a release of testosterone. That’s when the left lateral orbitofrontal cortex and the dorsomedial prefrontal cortex get involved, shutting down inhibitions, judgment and reflection. Or something like that. At least when it’s a male being eyed by the female.


Once Upon a Time (About 15 Months Ago), Two Observers of the American Scene Looked Upon a Great Nation and Feared the Worst. We Still Do.

In the early fall of 2008, my colleague, Dr. Paul Kordis, and I promoted a new book idea to some of Madison Avenue’s top literary agents. Soon, we were being greeted every few days by a sound familiar to writers: a Bronx cheer.

In the book business, that translates to “Get lost, dullard!”

Reading our rejection notices carefully, we could see that most of these agents believed (a) Barack Obama was going to be elected President and (b) this would solve most of the problems we proposed to address. So we dropped the project. And it came to pass, as most of the world knows, that Barack Obama did become President. Otherwise, nothing much has happened. The problems we proposed to address weigh ever more heavily on the shoulders of the nation and the world. Our suggested solutions—we think—are more valid than ever.

We have no book to offer you. But I thought you might want to add an “Amen” or two as you sampled our sense of concern, our passion and our outrage at what has been allowed to happen to the United States of America. Here are a few excerpts from our sample chapter.


© 2008-2009 Dudley Lynch and Paul L. Kordis

This is not a book about niceties. No “read” for the squeamish. Or for those who prefer the status quo. Because we are here to alert you to no less than an approaching national cataclysm.

We seek to make a near-airtight case that our nation’s continued existence as the world’s foremost beacon of freedom and opportunity is in severe jeopardy. Don’t let anyone tell you any differently, and be vigilant around anyone who tries. America stands at the edge of the chasm, teetering. The once-strong flame of state, viewed from afar as that of a country and a people without historical peer (not even close), is flickering. An ill wind blows over the enterprise called the United States, and our country and our people are in serious peril. Make no mistake. If “business as usual” continues, the U.S. of A. is almost certain to go the way of the Titanic. Proud, boastful, mighty, capable and resting on the bottom. America, the Beautiful, will be toast.

This has happened—is happening—for the most part because much of our legacy, rights, laws, resources, strengths and trust is being steadily expropriated by powerful elites centered in big government, big business and big religion. It is an extraordinary set of puppet masters. One unlike any other oligarchy that has ever coalesced for the purpose of pulling America’s economic and political strings.

America’s most wealthy and most pythonic have been running this country like it was a Monaco-on-the-Potomac instead of the world’s most visible (and once most viable) democracy. They control our economy and corporations, our politics, our government, our media and many of our other institutions, including the so-called spiritual. We find the fingerprints of this hyper-powerful, hyper-wealthy clique time and again at the site of America’s economic disasters, great societal imbalances and crimes against the people and the republic, not to mention the rest of the world.

####
We ordinary folks understand what the “elite deviants” who have been calling the shots and squandering our national identity, integrity, heritage and wealth don’t. And that is this. Once upon a time, a parade of experimenters that stretches backwards into the mists of antiquity (and includes America’s founders) spent three or four thousand years trying to figure out how to make a society come together. And hang together. And thrive and strive always to do better for its citizens.

And figure it out they did!

They pretty much solved the big equations and argued out the fine points. They wrote it all down. As a consequence, an extraordinary place called America appeared on the scene a little more than two centuries ago. This neophyte of a polity across the ocean blue—this outlander to the world’s established orders—had its ups and downs. Its doubts. Its discouraging moments. Its brushes big-time with disaster. And yet, by the third quarter of the 20th Century, America had proved something more convincingly than any other major society before it. America proved that the great theorists and experimenters of human organizing over the eons had, indeed, gotten it right.

And then what did we do?

In a mere heartbeat in historical time, we promptly forgot nearly everything we’d learned! This is such an astounding demonstration of national amnesia and irresponsibility that it bears repeating: we humans spent thousands of years figuring out how to create the best large-sized, self-renewing, fairest-to-all-concerned society the world had ever seen. And then, in a few short years, we Americans promptly forgot nearly everything the risk-takers and civic savants of the ages had taught us!

Forgot it!

Squandered our advantage, debased our achievements. Thumbed our noses at our planetary neighbors. Ignored our most vulnerable and poorest. Mechanized the “bio-cide” of our other earth-mate species. And then left our covenants and promises strewn across the commonweal of our once great land like so much broken pottery.

We elevated corporations over individuals (by abandoning our anti-trust laws and watering down our laws and regulations and turning a blind eye to the egregious and never-ending misdeeds of giant companies).

We jettisoned fiscal responsibility (by letting Congress and the White House spend at will).

We sold out our workers and their families (by allowing our unions to be destroyed and our companies to be sold and opening our borders profligately to predatory importers).

We thumbed our noses at poor people (by cutting or failing to fund the programs they depended on to stay healthy and try to improve their plight).

We made higher education ever more expensive (by letting college tuitions and fees soar and refusing adequate public funding).

We permitted the dumbing down and debasement of our news media and what they report (by allowing extreme consolidation of ownership, allowing the media’s owners to meddle in the newsroom and removing requirements for fair use of the public’s airways).

We forgot what religion is really supposed to be about (by politicizing our religious institutions and building the walls between us and our neighbors of faith and unfaith ever higher).

We let our public places and shared spaces go to pot (by failing to maintain our infrastructure).

We saved nary a penny for a rainy day, either as individuals or as a people (by becoming the largest borrowers in history).

We poisoned the immense good will that many other peoples of the world held for us (by acting the bully and petulantly telling everyone else it was our way or the highway).

That, just for starters.

####
Can America be fixed?

Test the air.

Can you sense it? Can you feel it?

A stirring. A breeze amid the stagnancy. Your authors believe they can feel such a ripple. A zephyr that signals an awakening. If this promising murmur in the wind can be upgraded to a mighty blow, can be expanded and accelerated, and soon, then it is not quixotic to predict the possibility of another kind of finish to the game of chicken America is playing with its fate. A marathon race to the checkered flag along the edge of the abyss that the good guys and gals can win.

Your authors believe such an outcome is possible. We also believe this. If this hopeful breeze becomes a mighty force, it will be because this we of modest footprints and ordinary wing spans—We, the People—have remembered our civics lessons. Our history lessons. Our spiritual lessons. All at a time when the puppet masters at the very top of America’s pecking order of power and privilege will have gone right on thinking they need a civics lesson, history lesson or a Bible, Torah or Koran lesson like they need a hole in their Pucci money bag.

####
You may question whether things are really as bad as we are claiming. Whether it really is much closer to midnight than morning in America. Whether as a nation, we are truly driving without headlights in the dark. Whether our country’s problems are realistically approaching irresolvable overload. Whether Americans have gotten it so wrong that we may never again get it right.

We are prepared to show you a picture of the jeopardy America is in so scarifying and ominous that you may find it hard to fall asleep tonight.

We realize that we, as authors, must provide you with convincing answers. Answers based on the evidence. We also want you to feel and appreciate our own love for this country. Understand what we want it to be for our children and grandchildren. And share our unshakable belief that America is in great peril. Share our belief that it is worth saving. And come to see that only you, the “ordinary” people of this most extraordinary land, can save it.

Our one best hope is a radical and rapid return to normal. We must resuscitate the ideal. Remember the story. Restore the dream. Reclaim and refortify America’s soul. And we must equip We, the People, to unscrew a royally, totally screwed-over republic by taking back the memory, the promise, the foundation.

Because freedom isn’t ringing. Alabaster cities aren’t gleaming. Most Americans are no longer dreaming the dream. More than it ever has before, your country needs you in the driver’s seat, gripping the wheel and steering the people’s bus. America is badly broken. Only you can fix it. And there’s not a lot of time left to do it.

####
So Paul and I were thinking 15 months ago. In the interim, very little has happened to change our minds. As the New Year of 2010 approaches, whether you are a citizen of America or not, we invite you to do all you can to sense the stirring, find the breeze, encourage the ripple, fan the zephyr of what has been lost. And may your God or the operative force that animates your dearest hopes and dreams bless us all!


All of Us Are Like This 7-Year-Old Who Doesn’t Like His Story-Making to Be Interrupted

Friends of ours told us the other night about their grandson, now 7, who lives just down the street from them. That means he spends a lot of nights at their place, school nights included. And that means either his grandmother or his granddad (but usually his grandmother) is freighted with the task of rousting him for school in the morning.

While getting him awake is not often a problem, his grandparents say, getting his feet on the floor usually is. He loves to lie in bed, eyes wide open, eyes very active in fact. Looking first in one direction, then another, though almost never at you. Ask him what he’s doing, and you are inviting a minor Vesuvius of emotion, they report. “You are interrupting my story!” they say he’ll protest. It is clear that their grandson does not like his story-making interrupted. And I’ve come to realize that few of us do.

I’m going to assume that most of the emotion is being generated by his right hemisphere, which is irritated that its understanding of what the left side of his brain is currently up to has been disrupted. That’s because for a lot of things, until the left side of our brain supplies an explanation, the right side is left pretty much without one. This, at least, is what neuroscientist Michael Gazzaniga suggested years ago, and continues to suggest, with his theory of the interpreter.

Residing in the left hemisphere—or so “split brain” expert Gazzaniga concluded, as he explained (among many other places) in The Mind’s Past (page 174)—“The interpreter constantly establishes a running narrative of our actions, emotions, thoughts, and dreams. It is the glue that keeps our story unified and creates our sense of being a coherent, rational agent. It brings to our bag of individual instincts the illusion that we are something other than what we are. It builds our theories about our own life, and these narratives of our past behavior seep into our awareness.”

Ever since reading Dr. Gazzaniga’s theory of the interpreter, I’ve tended to tell anyone curious about what I do professionally that I’m a deadly serious student of the stories people tell themselves and others to explain who they are. You can notice this persistent thread running through nearly all of our models, books and assessment tools here at Brain Me Up. And few things interest me more than the “core” story people tell about themselves.

I’ve concluded that there aren’t very many core stories. And that understanding what your core story is and admitting to its realities, and constantly assessing when and where it makes sense to submit to guidance from your core story, are crucial to being an effective human. (Of course, some core stories are far better than others at equipping their users to know, or even to care, whether they are being effective humans.)

Any scholar or researcher who professes to be a “developmental” person, one who follows how a single person over a lifetime, and all persons over the generational expanses of time, assemble, enable and sometimes limit their personal qualities and skills, is hard at work seeking to understand the stories people tell themselves and others in an effort to explain who they are.

Years ago, I was introduced to the pioneering, self-described “biopsychosocial” theory of self-explanatory storytelling of the late Clare W. Graves, the American psychologist. I’ve yet to discover a better model. So I’ve spent much of my career seeking to make his model—which is sometimes called “the theory of everything” and can quickly overload anyone who comes to it just wanting to know a little bit about a few things—more accessible to ordinary souls.

I love all my model-children equally, but first among equals is the schematic that Dr. Paul Kordis and I put together a couple of decades ago and still continue to expand. That would be the water creatures model that was the focus of our book, Strategy of the Dolphin.

The users of the Carp story explain themselves to themselves as perennial victims. They see the world as being against them, and much of the time, they can be forgiven for thinking so. Life is hard. There aren’t a lot of opportunities to bootstrap one’s way upward economically, socially and culturally. There are more Carp storytellers on earth than any other kind. The Carp story reeks of vulnerability. Where it is heavily in use, there is often much resentment and anger and suffering. Can IEDs, suicide bombers, child and spousal abuse, public protests that turn bloody and political Tea Parties that turn shrill and accusatory be far behind?

Next comes the Shark storyteller. The user of the Shark story usually feels entitled. And often for good reason. They hold most of the cards and many of the marbles. The easiest way to learn how to tell the Shark story is to be the daughter or son of someone who told it well. In the 21st Century, the most formidable redoubt of the Shark storyteller is the major corporation, along with the governments and other agglomerates (like universities) that act like one. It is important to the Shark story user to appear confident, in the know, on top of things, and really a pretty good Jane or Joe. Funny thing, though, how often Shark waters turn bloody, good Jane, good Joe or not.

Someone who isn’t forced by dire life circumstances to use the Carp story and who has the sensibilities to understand what a dead-end the Shark story tends to be often gravitates toward a much more fructiferous story. In fact, it sometimes seems to me like the brain has suddenly discovered itself when it arrives at the ability to tell this next story. That’s because, welcome improvement that it is, the new story and its user soon seem to be surrounded by wretched excess. Not by money, necessarily, although users of this story often do well enough. But a wretched excess of ideas, possibilities, symbols, connections and desires. Originally, Dr. Kordis and I called this the Pseudo-Enlightened Carp story. But we eventually came to realize that this was probably too harsh and an unnecessary diversion.

Because in being censorious of the premature assumption, by persons suddenly able to tell this story, that they have arrived at enlightenment, we were probably steering people away from a realization that they are very close now—psychologically, operationally—to a radically new, fecund, competent kind of story that people on the planet increasingly need to hear and to which they need to adapt.

And so we changed the name of this new story to First Dolphin. It is only a beginning, important as it turns out to be. Truth be known and acknowledged, the First Dolphin story is the story being told of themselves by many of the people who are now feverishly connecting through Facebook and Twitter, who are raising the alarums about global ecological injury, who are scanning the heavens for signs of other intelligent “beings” in the universe, who are protesting against the treatment of the Carp storytellers and the abuses of the Shark storytellers and propagating the desire for a fairer, safer, more peaceful world.

Users of the First Dolphin story are nowhere near being able to live up to all their precepts or deliver on all their promises. But their story is a great improvement. And a critical spawning ground. Already, at Brain Me Up, we are tracking two additional stories that have grown from the First Dolphin’s: the stories of the Prime Dolphin and of the Deep See Change Dolphin. It is one of these stories that, if the audacious theories of The Singulatarians come to pass, is most likely going to be the leading candidate for implantation in the “mind” of the artificial intelligence that they are predicting is destined to exceed our own.

But enough for now. If you’d like to know which of these stories you currently use to explain to yourself and others who you are—well, that’s the intended function of our newest Brain Me Up assessment. It’s called the Yo!Dolphin!™ Worldview Survey. Go here to learn more. Be assured, our purpose is to help you understand and put to good use your life-story-making, not to interrupt it, whether you are lying in bed musing about it or have your feet on the floor.


So Far, the Singularity Volunteer Fire Dept. Has Been Sounding Ten Alarms While Rushing Around Trying to Find Smoke

I don’t often experience writer’s block. Sleeping on a topic overnight is nearly always enough to return a free flow of ideas and images. But it was not working that way with this thing called The Singularity. For days, I tried without success to tie a literary bow around a supposition that had fast become a phenomenon that is now on the verge of becoming the first Great Technological Religion. In repeated stare-downs with my computer screen, I lost.

In a moment, I’ll share what finally dissolved the plaque in my creative arteries on this subject, but first I may need to introduce you to the current high drama and low wattage of the whole Singularity debate.

The word first appeared in a 1993 essay written by a California math professor, Vernor Steffen Vinge. The full title was “The Coming Technological Singularity.” Professor Vinge was not the first to raise the issue. But he was the first to supply a name worthy of building a whole “end of the world at least as we know it”-fearing movement around this idea: that computer and other technologies are hurtling toward a time when humans may not be the smartest intelligences on the planet. Why? Because some kind of artificial intelligence (“AI”) will have surpassed us, bringing an end to the human era.

Dr. Vinge is now retired. But his Singularity idea has become another of those Californications that is sucking the air out of intellectually tinged, futuristically oriented salons and saloons faster than a speeding epiphany. The relentless personality under the hood of the Singularity phenomenon is a talented 61-year-old inventor and big-screen-thinking, oft-honored futurist from New York City and MIT named Ray Kurzweil.

Where “My Way” Is the Theme Song
The Singularity movement has just finished what one irreverent observer called Kurzweil’s “yearly Sinatra at Caesar’s.” He was referring to Singularity Summit 2009 (the first was in 2006 and before this one, all had been in California) at the historic 92nd Street Y in New York City. Between 800 and 900 enthusiasts paid $498 each to listen to 25 notables, including Kurzweil, on such subjects as The Singularity, transhumanism and consciousness.

I wasn’t there, but bloggers who were say Kurzweil wasn’t physically at the top of his game this year, but his importance, his mesmerizing slides and his beliefs were as central as ever.

Futurist Kurzweil believes with all his heart that unimaginably powerful computers are soon going to be able to simulate the human brain, then far surpass it. He even has the year pegged for this to happen: 2029. He thinks great, wondrous, positive things will be possible for humanity because of this new capability. If you track Kurzweil’s day-to-day activities and influence, you quickly realize that he’s not so much Singularity’s prophet as its evangelist. His zeal is messianic. And he’s constantly on the prowl for new believers in a funky techno-fringe movement that is definitely showing legs.

Consider these developments:

• No fewer than four documentary movies on The Singularity will be released within a year’s time. One debuted last April at the Tribeca Film Festival and also was shown a couple of weeks ago at the AFI Fest in Los Angeles. Transcendent Man features or rather lionizes—who else?—Ray Kurzweil. The film is loosely based on his book, The Singularity Is Near. Movies called The Singularity Film and The Singularity Is Near are due out shortly; We Are the Singularity is still in production. One admiring critic writes of Transcendent Man, “[The] film is as much about Ray Kurzweil as it is about the Singularity. In fact, much of the film is concerned with whether or not Kurzweil’s predictions stem from psychological pressures in his life.” [Oh, my! Oh, no! How many times have we seen a movement that influences the fate of millions turn out to be the personification of one man’s neuroses?!!]

• Meanwhile, the debate continues over how soon the first and only coming of The Singularity will be (otherwise it would be named something like The Multilarity or perhaps just The Hilarity). At the Y, PayPal co-founder Peter Thiel gave voice to his nightmare that The Singularity may take too long, leaving the world economy short of cash. Michael Anissimov of the Singularity Institute for Artificial Intelligence, one of the movement’s most articulate voices, continues to warn that “a singleton, a Maximillian, an unrivaled superintelligence, a transcending upload”—you name it—could arrive very quickly and covertly. Vernor Vinge continues to say between 2005 and 2030. That means it could conceivably arrive (gasp!) on Dec. 21, 2012, bringing a boffo ending to the Mayan calendar, as some boffoyans are predicting. And, of course, Kurzweil’s charts say 2029.

• Science fiction writers continue to flee from the taint of being believed to have authored the phrase “the Rapture of the Nerds.” The Rapture, of course, is some fundamentalist Christians’ idea of a jolly good ending to the human adventure. Righteous people will ascend to heaven, leaving the rest of us behind to suffer. It’s probably the Singulatarians’ own fault that their ending sometimes gets mistaken for “those other people’s” ending. They can’t even talk about endings in general without “listing some ways in which the singularity and the rapture do resemble each other.”

• The Best and the Brightest among the Singulatarians don’t help much when they try to clear the air. For instance, there is this effort by Matt Mahoney, a plain-spoken Florida computer scientist, to explain why the people who are promoting the idea of a Friendly AI (an artificial intelligence that likes people) are the Don Quixotes of the 21st Century. “I do not believe the Singularity will be an apocalypse,” says Mahoney. “It will be invisible; a barrier you cannot look beyond from either side. A godlike intelligence could no more make its presence known to you than you could make your presence known to the bacteria in your gut. Asking what we should do [to try to ensure a “friendly” AI] would be like bacteria asking how they can evolve into humans who won’t use antibiotics.” Thanks, Dr. Mahoney. We’re feeling better already!

• Philosopher Anders Sandberg can’t quit obsessing over the fact that the only way to AI is through the human brain. That’s because our brain is the only available working example of natural intelligence. And not just “the brain” in general will do; it will need to be a single, particular brain whose personality the great, incoming artificial brain apes. Popsci.com commentator Stuart Fox puckishly says this probably means copying the brain of a volunteer for scientific tests, which is usually “a half stoned, cash-strapped, college student.” Fox adds, “I think avoiding destruction at the hands of artificial intelligence could mean convincing a computer hardwired for a love of Asher Roth, keg stands and pornography to concentrate on helping mankind.” His suggestion for getting humanity out of The Singularity alive: “[Keep] letting our robot overlord beat us at beer pong.” (This is also the guy who says that if and when the AI of The Singularity shows up, he just hopes “it doesn’t run on Windows.”)

• Whether there is going to be a Singularity, and when, and to what ends, does indeed seem to correlate closely with the personality of the explainer or predictor, whether it is overlord Kurzweil or someone else. For example, Vernor Vinge is a libertarian who tends to be intensely optimistic and likes power cut and dried and left, as much as possible, in the hands of the individual. No doubt, he really does expect the Singularity no later than 2030, bar nothing. On the other hand, James J. Hughes, an ordained Buddhist monk, wants to make sure that a sense of “radical democracy”—which sees safe, self-controllable human enhancement technologies guaranteed for everyone—is embedded in the artificial intelligence on the other side of The Singularity. One has to wonder how long it will take for the Great AI that the Singulatarians say is coming to splinter and start forming opposing political parties.

• It may be that the penultimate act of the Singulatarians is to throw The Party to End All Parties. It should be a doozy. Because you don’t have thoughts and beliefs like the Singulatarians without a personal right-angle-to-the-rest-of-humanity bend in your booties. The Singularity remains an obscurity to the masses in no small part because of the Singulatarians’ irreverence. Like calling the Christian God “a big authoritarian alpha monkey.” Or denouncing Howard Gardner’s popular theory of multiple intelligences as “something that doesn’t stand up to scientific scrutiny.” Or suggesting that most of today’s computer software is “s***”. No wonder that when the Institute for Ethics and Emerging Technologies was pondering speakers for its upcoming confab on The Singularity, among other topics, it added a comic book culture expert, the author of New Flesh A GoGo and one of the writers for TV’s Hercules and Xena, among other presenters.

All of the individuals quoted above and a lengthy parade of other highly opinionated folks (mostly males) who typically have scientific backgrounds (and often an “engineering” mentality) and who tend to see the world through “survival of the smartest” lenses are the people doing most of the talking today about The Singularity. It is a bewildering and ultimately stultifying babel of voices and opinions based on very little hard evidence and huge skeins of science-fiction-like supposition. I was about to hit delete on the whole shrill cacophony of imaginings and outcome electioneering that I’d collected when I came across a comment from one of the more sane and even-keeled Singulatarian voices.

That would be the voice of Eliezer Yudkowsky, a co-founder and research fellow of the Singularity Institute.

He writes, “A good deal of the material I have ever produced—specifically, everything dated 2002 or earlier—I now consider completely obsolete.”

As a non-scientific observer of what’s being said and written about The Singularity at the moment, I’d say a similar declaration would be a great idea for most everyone who has voiced an opinion thus far. I suspect it’s still going to be a while before anyone has an idea about The Singularity worth keeping.
