
Thinkologist: The Dudley Lynch Blog on Brain Change

… a (mostly) good-natured critique of World Handling Skills & Tools

Maybe Science Fiction Is Dying, But If So, The ER Is As Crowded and Raucous As That Cantina In Star Wars

I’ve had a lifelong patchiness in my relationship to science fiction. In the up part of the cycle, I devour it and read—or watch—little else. Once I discovered Frank Herbert’s Dune saga, with its giant, spice-protecting sandworms in the deserts of Arrakis and all the rest, I had to read every volume, and did so with dispatch. Ditto with most of Robert Heinlein’s books and, to a lesser extent, Arthur C. Clarke’s and Isaac Asimov’s (at least, his sci-fi stuff). And then in the down part of the cycle, I’m deaf and blind to the genre.

This bipolar irregularity no doubt caused me to miss a once-in-a-lifetime opportunity as a sci-fi fan and may have cost me a unique opportunity as a writer. One of the greats of American science fiction joined the English faculty of my alma mater, Eastern New Mexico University, while I was enrolled there. It was a small school, but even so, I blew right past the fact that I could have studied science fiction writing under a winner of the Science Fiction Writers of America’s Grand Master award. The first honoree was Heinlein. The second was Jack Williamson of Elida, New Mexico.

Lately, I’ve been away. Again. Though I am aware of the availability of new sci-fi TV series like Fringe, Heroes and FlashForward, nothing has really captivated me sci-fi-wise on the TV screen or book page since The X-Files.

And probably this sleeping dog would still be slumbering had not an odd-sounding post shown up the other day in a Yahoo Group I belong to. The writer is a highly placed government bureaucrat. A really powerful one in scientific research circles, if for no other reason than that he presides over who gets multi-million-dollar government research grants. So he’s an accomplished veteran at eviscerating claims by others with whom he disagrees, and he does so regularly in this online community. But here he was, speaking with an unaccustomed tentativeness approaching raw awe.

His story had to do with his viewing the second of Arnold Schwarzenegger’s Terminator movies, the series in which post-apocalyptic artificially intelligent machines seek to exterminate what is left of the human race. What totally “creeped” him out about this movie, said this Washington bureaucrat, was that it contained several scenes that tracked real events and people in his life and career so faithfully and accurately that it felt like he was being spied on. Fresh on his mind, too, was one of the findings of his agency about good predictors of which emerging technologies would pan out (and should be funded) and which wouldn’t (and shouldn’t be). One finding was that new technologies treated as being favorable to humans in science fiction plots “somehow mysteriously prospered more than you would have expected.” (I’m not posting a link to his comments, since the group is private.)

I don’t know whether he was being spied on or not (he thinks not, after thinking about it). But what about the U.S. government’s finding that if science fiction thinks well of something being possible in the real world, it has a better-than-otherwise chance of really happening? Does, in fact, science fiction have a good track record of predicting anything? And does, in fact, anyone take science fiction very seriously anymore? I went looking for answers or, at minimum, opinions. Here’s a CliffsNotes version of what I found:

Science fiction requires an optimistic audience, so in a world of growing pessimism it may be dying as a viable literary category.

This is the view of George R. R. Martin, who writes both sci-fi and fantasy novels. Sci-fi had its heyday in the 1950s and 1960s, he observes. The future was an appealing place—one some people couldn’t wait to get to. They thought that their children and grandchildren would be better off and happier there. Now, he says, people worry about ecological problems, global warming and the growing instability of a world with nuclear proliferation. He says, “People no longer believe on some level that the future is going to be a good place and they prefer to read about other times and other places that are maybe not so scary as science fiction.”

The real and the fictional worlds have become so interwoven that good sci-fi writing gets lost in the hoopla, the buncombe and the macabre of the potentially real. The new ABC series, FlashForward, is based on Canadian sci-fi author Robert J. Sawyer’s acclaimed book of that title. In it, two scientists at the European CERN particle accelerator accidentally transport the world’s consciousness 21 years into the future, then return it a couple of minutes later. Naturally, the sudden memories of what is to be terrorize humanity. The best-selling thriller Angels and Demons has Vatican City under threat from a bomb made of antimatter stolen from CERN. Another best-seller, Blasphemy, has a mad physicist trying to use a CERN-like particle accelerator to talk with God. Then along come two halfway reputable physicists with a real theory about an experiment CERN hopes to do with its problem-beset new accelerator. They believe that the elusive subatomic particle, the Higgs boson, may be so “abhorrent to nature” that it could cause the natural world to try to reverse-engineer reality and wipe out the experimental apparatus trying to create it. They suggest that this might explain the serious mishaps that have struck the Swiss project. So … which is “the science fiction”?

Even science fiction writers have been disappointed that more of their predictions and expectations have not panned out. One candid enough to say so is the oft-honored Frederik Pohl, who has written sci-fi for more than 70 years (he celebrates birthday No. 90 on November 26). Very little of what science fiction has described, he says, has come to pass. “You can’t jump into your spaceship and fly off to Mars and have adventures with six-limbed green Martians, riding floats. It isn’t going to happen. There aren’t any,” he says. “I’m really kind of disappointed. I wish that we had had the right kind of spacecraft. And it doesn’t look like now they’re ever going to happen, or at least not in the immediate future, by which I mean, the time before the sun goes out.”

No matter how hard they try, writers of science fiction can’t escape the influence of the bigger social trends (biases, political correctnesses) of their times. And this isn’t necessarily good for their work. Sci-fi/fantasy writer Jo Walton makes this case in discussing Heinlein’s juvenile novel Time for the Stars, published in 1956. She suggests that if Heinlein had written it recently, “it would have been a different book in almost every way.” For example, earthlings wouldn’t be going out to exploit the galaxy. Earth would be dying because of global warming and pollution, not simple overpopulation. The book would focus on relationships, not adventure. Characters would have more sex, treated very differently. The odd incestuous relationship between Tom and his great-great-niece Vicky would be more explicitly sexualized at long distance and contain more angst. Says Walton, “I’d read it, but I probably wouldn’t keep coming back to it.” In other words, its topicality would have diminished its appeal, something some critics suggest is happening a lot to science fiction these days.

Prophecy and prophets that take themselves seriously—Nostradamus included—are usually delusional, but some science fiction reader/critics are concerned that sci-fi writers not back away from the prophetical challenge. James Wallace Harris worries, “Personally, I think science fiction is at a turning point—at a cusp—like when a religion turns from revelation to dogma.” He is captivated by Nassim Nicholas Taleb’s arguments in The Black Swan against trying to predict the future. (The black swan is Taleb’s brilliant metaphor for a future that seems to be predictable only in hindsight.) But Harris is also concerned that science fiction writers not discount their value as writers of philosophical fiction. That is, fiction that helps us imagine purposeful “black swans” useful in explaining why our species may be the first to come fully awake (in Harris’s words) “in the infinite foam of multiverse reality.”

And all this doesn’t really begin to do justice to the cacophonous debate under way about the health, the role, the purity (or the contamination), the state and the fate or the coolness or uncoolness of science fiction today. Is it true that science fiction has become too feminine in no small part because its ranks have been invaded by feminists? Is it true that science fiction perennially “eats its best”—that is, automatically redefines its most talented writers as being writers of some other type of literature the moment they become recognized or canonized? Is it true that some of the best sci-fi writers—Kurt Vonnegut, J. G. Ballard, Margaret Atwood—are, or were, right to resist the idea that they write science fiction at all?

Obviously a lot has been going on since I last paid much attention. I think I’ll go rent a copy of Blade Runner and get back in the hunt.


The Excitement (and Often the Claims) about the New “Brain Stuff” Is Still Running Ahead of Its Utility

You don’t have to spend much time googling or digging—or doing that old-fashioned thing: reading a book—these days to realize that the brain is often up to its usual tricks when the subject is neuroscientific research.

That is, the brain is simply going about its business. Sometimes, it lights up like a Christmas tree on the fMRI screens when asked to perform some task, or doesn’t light up much at all, or lights up in surprising locales or surprises researchers by not lighting up where they had hoped or thought it would.

At that point, all interpreters can do is argue their feelings and biases about what it all means. Of course, that’s what humans, including scientific researchers, have always done where the brain is concerned. And we don’t seem to be getting much closer to crossing the Rubicon—or maybe we should say getting past the albatross—of how to explain what we see when we map what researchers call “the subject’s neural state.”

Take, for example, Stanford economist Douglas Bernheim’s point in a just-published American Economic Journal article that is causing waves in the new field of “neuroeconomics.” Dr. Bernheim wants to be optimistic about neuroeconomics but isn’t yet. This is because of the circular nature of using brain data to measure something subjective. Happiness, for example.

Brain research still needs the tongue

Bernheim’s article and his point about circularity drew the attention of the two Northwestern University economists who write the “Cheap Talk” blog: “Since neural states don’t come ready-made with labels, we need some independent measurement of well-being to correlate with. That is, we have to ask the subject. Let’s assume we make sufficiently many observations coupled with ‘are you happy now?’ questions to identify exactly the happy states. What will we have accomplished then? We will simply have catalogued and translated subjective welfare statements. And using this catalogue adds nothing new.” Which is the central problem of a lot of expensive brain research.
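
To make the bloggers’ circularity point concrete, here is a minimal sketch in Python; the data, labels and region names are all hypothetical, invented purely for illustration. Once the “catalogue” is built from the subjects’ own answers, reading a brain state back out of it can only echo those answers:

    # A toy "catalogue" built the way the Cheap Talk bloggers describe:
    # neural states get labeled by the subjects' own "are you happy now?"
    # answers. All values and region names here are invented for illustration.
    observations = [
        ({"region_a": 0.2, "region_b": 0.9}, "yes"),
        ({"region_a": 0.8, "region_b": 0.1}, "no"),
    ]

    # The catalogue: a lookup from neural state to self-report.
    catalogue = {tuple(sorted(s.items())): answer for s, answer in observations}

    def well_being(state):
        # "Measuring" well-being from a scan just consults the catalogue...
        return catalogue[tuple(sorted(state.items()))]

    # ...so it can only ever return what the subject already told us.
    print(well_being({"region_a": 0.2, "region_b": 0.9}))  # -> "yes"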

The researchers who were seeking a “God” spot in the brain encountered a similar obstacle. Well, actually, they encountered several. Their goal was to pinpoint what part of the brain “correlates” with a mystical experience. Or at least the most intense Christian-type mystical experience that could be remembered by the Carmelite nuns who participated in the experiment.

Did anything on the fMRI screen light up? Absolutely. There was significant brain activity observed in the nuns’ right medial orbitofrontal cortex, right middle temporal cortex, right inferior and superior parietal lobules, right caudate, left medial prefrontal cortex, left anterior cingulate cortex, left inferior parietal lobule, left insula, left caudate, left brainstem and the extra-striate visual cortex. So, forget “God” spot and think “God” network!

But once again, this wasn’t the primary issue stumping the band. Clever though the experiment was, it didn’t—and couldn’t—tell us anything about God, such as whether there is One. Just as with the happiness question, the only way we can really learn “something about God” is to ask individuals who think they know something. And you really don’t need fMRI experiments to do that. As one poetic critic put it, those who use fMRI brain imaging to study the God issue still can’t “bridge the gap between the spiritual and the mundane.”

This observer added, “Until they do, there is simply no way to know whether the brain’s response to a religious experience is quantitatively different than its response to any of the deeply meaningful stimuli that surround our daily lives.”

Brain Magic for Investors Still Undiscovered?
Indeed, it is all too easy to get egg on one’s face by rushing in where old salts or your bitterest enemies know better than to tread. Alas, that appears to be what some of my favorite “neurosociety” advocates have done with some of their claims about the value of behavioral finance, neuroeconomics and the new “science of irrationality” in stock picking.

Russell Fuller and Richard Thaler are the brains behind Fuller & Thaler Asset Management, Inc., of San Mateo, CA, and a couple of investment portfolios set up to “exploit insights from behavioral finance.” The funds are called the Undiscovered Managers Behavioral Growth Fund and the Undiscovered Managers Behavioral Value Fund. The core idea is to avoid the consequences of this: “Under certain conditions behavioral biases cause market participants to misprocess information in the financial markets.”

So how are Dr. Fuller’s and Dr. Thaler’s funds doing in their quest to use behavioral finance discoveries to guide their trading decisions? Not well … and their enemies are gloating. One of the most outspoken is Paul B. Farrell, who writes for MarketWatch. He has named Fuller’s and Thaler’s funds “the Obama Nudgers Funds.” This is because Thaler co-authored the best-selling book Nudge: Improving Decisions About Health, Wealth and Happiness with Cass Sunstein, who is now high up in the Obama White House.

Farrell’s glee is at observing that the Obama Nudgers Funds have been outperformed in 1-year returns, 3-year annualized returns and 5-year annualized returns by what Farrell calls “the Lazy Portfolios.” These are eight well-diversified portfolios of no-load index funds that strive to cut operating costs, trading activity and taxes to a bare minimum (near zero). For the uninitiated, this means the “behavioral finance” guys are doing worse—sometimes much worse—than funds that do next to nothing investment-management-wise, behavioral or otherwise. It’s a quite normal outcome for investment managers, but an embarrassing one for brain studies iconoclasts who were hoping to do better.
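
For readers unfamiliar with the yardstick in that comparison: an n-year annualized return is the constant yearly rate that compounds to the same total growth. A minimal sketch in Python, with purely hypothetical numbers rather than the funds’ actual figures:

    # Annualized (compound average) return: the steady yearly rate that
    # would produce the same total growth over n years.
    def annualized_return(total_growth_factor, years):
        return total_growth_factor ** (1.0 / years) - 1.0

    # Hypothetical 3-year outcomes, for illustration only.
    lazy_portfolio = annualized_return(1.15, 3)   # grew 1.15x in 3 years
    behavioral_fund = annualized_return(1.04, 3)  # grew 1.04x in 3 years

    print(f"lazy: {lazy_portfolio:.2%}/yr vs. behavioral: {behavioral_fund:.2%}/yr")
    # -> lazy: 4.77%/yr vs. behavioral: 1.32%/yr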

That’s also what the “neurosociety” crowd (and you can include the Thinkologist in the group) is seeking to do: avoid unwitting or unnecessary mistakes by correcting blind behaviors caused by the workings of a brain we’ve misjudged, ignored or known far too little about before. But we’re just getting started at this, something we need to be reminded of often.

A lot going on—with the best yet to come
Even as passionate an advocate as Zack Lynch (no relation to this blogger), author of The Neuro Revolution: How Brain Science Is Changing Our World, admits that we are only in the beginning stages of the revolution he thinks is coming. He doesn’t believe it will reach critical mass in the public’s opinion (creating a perceptual shift toward a neurocosmos viewpoint) until the 2030s. Another neuroevangelist thinks we are 50 years away from a time when our new knowledge of things neuro will have thoroughly permeated and penetrated our lives and technologies.

But I’m personally encouraged by the growing discoveries and inquiries of the new approaches to neuroscience.

For example, there is growing evidence that, as one Caltech researcher put it, “We are biologically primed to be moral.” To be altruistic, to enforce fairness norms even when we have to pay a price ourselves.

I’m excited by the new brain studies on willpower and self-control. For example, we’re learning, or so the experts say, that if you overload certain areas of the brain, you weaken people’s abilities to resist temptation, such as eating foods that aren’t good for them. The challenge, obviously, is to avoid the overload. Our new pictures of the brain “pigging out” will help us figure out how best to model, and then to thwart, this self-destructive brain behavior.

Needed: cheaper toys and a comprehensive theory
The question of how to respond to the needs of the world’s have-nots in a neuro revolution is increasingly on our minds, and for all our self-protection, it needs to be. In one breath, one neuroethics expert noted that Olympic training programs are now using fMRI scans to correlate areas of depression and negativity in their athletes’ brains. In the next, she told about learning on a recent trip to Africa that the entire country of Uganda has only one fMRI machine.

I like the neurological nudgers’ idea of building in little pushes to get people’s brains to do the right thing or avoid the wrong thing—like getting hospital workers to wash their hands more often or putting warning bulbs in view in our homes to signal when we are using too much energy. The nudge factor is looking to be more and more important as we learn how quickly our mind quits thinking strategically, if it thinks strategically at all. And how little it really knows about what it really wants.

One thing is clear. The rush is on by “neuro” researchers to find tie-ins to the larger picture of what humans do—often together—with their brains.

That was made clear by this year’s Society for Neuroeconomics conference, which has just ended in Evanston, IL. One observer called this year’s program remarkably different from last year’s. “Much less rat studies and a lot of papers and posters on social interactions in humans,” he noted.

Now if fMRI manufacturers can just get the prices on their machines low enough that everyone—including Olympics coaches in Uganda—can afford them. And if we can learn enough from reading our new brain pictures to move toward producing a dependable “unified theory” about what it all means where the grey matter meets the road.


Is Twitter An Acquired Taste That Needs a Gourmet Chef’s Touch to be Really Effective?

About Twitter: some can and some can’t. Those who can, in some fashion or another, have been doing it all along, because, at its most basic, tweeting is gossiping. Those who can’t do it well simply won’t, at least for long. According to the Nielsen Company, that includes about 60 percent of those who at first thought they could, and would. But they quit when they realized that, for some folks, tweeting is the imagination’s equivalent of gout. On average, Nielsen says, discovering that takes about a month.

I’m not sure which camp I’m going to end up in. Along with 25 million others, there is now a Twitter account with my company’s moniker on it. I’m going to see how quickly I can get the hang—if I can get the hang—of producing worthwhile “thoughtoids” of 140 characters or fewer. But at the moment, I’m having fun as a Twitter groupie, hanging out at the edges of the microblogging movement and pondering profundities from company executives like this one: “You don’t make time for Twitter—you grout your day with it.”

Twitter’s co-founder Biz Stone often says short things like that. Or like this: “Tiny bits of information can have profound impact.”

Well, yes, even Plato understood what yelling “Fire!” in a crowd can do. (Plato would not have liked Twitter. Remember, he was against writing things down at all, fearing we’d lose the ability to remember.)

A “tweetiquette” learning curve
Even among proponents of tweeting, there is already ample criticism of bad tweeting habits and practices. For example, Irish telecom entrepreneur Pat Phelan’s Apoplexy Meter shoots off the scale when he hears sellers of software programs promise to ensnare “10,000 Twitter followers for you in 30 days.” He calls such marketers “Social Meeja” (for social media, natch) whores.

Already, there are rules of tweetiquette. The author of “The Twidiot’s Guide to Twitter Etiquette” suggests things like, “Don’t think that you are a celebrity when you hit thirty followers.” And there’s no shortage of advice on what it takes to build a decent Twitter following. “Social Meeja” expert Mike Prasad told one of those Rehab-Sundays-at-the-Pool-like “140” conferences in LA the other day that all it takes is a “great product and some ingenuity to build a decent following on Twitter.” Computer games expert Jeff Greenspeak opines that tweets work best if you just “take a specific, funny angle and stick to it.”

How will you know that you are getting the hang of it? Greenspeak says you’ll begin to make mental notes of things that you want to tweet rather than blog. Like the other day, when he overheard a co-worker dining out in Cologne complain—seriously—about how annoyed she was that most German restaurant menus were in German. For a budding Twitter carnivore, that’s tweet meat.

So what do really good tweets read like?

Before sharing some examples that get kudos from both cognoscenti and riff-raff, perhaps it would help to share some examples that get brickbats from everyone.

Good tweeters are probably born, not made

I snared these barf stirrers from a site called tweetingtoohard.com, where you can vote for your “tops in tastelessness” favorites. Here are tweet writers whose creations did well with voters:

• The Mary Kay executive who tweeted: “I make multi-million $ decisions on a regular basis—why is it soooo difficult to decide what to do with my hair?”

• The rich broad who shared, “OMG i was saying how i couldn’t afford the gas to fly daddy’s jet to the riviera this summer, and this barista totally rolled her eyes at me.”

• The narcissist who prattled, “The people who say I’m arrogant and shallow don’t see me when I’m at home with my wife. Did I mention that she’s a former swimsuit model?”

In visiting a site called Best Tweets, I noticed a couple of things about tweet writers who draw raves—and followers. I’m not talking about the rich and famous like Britney Spears (3,888,252 followers) and Ashton Kutcher (1,000,000+ followers) but simply tweeters who seem to have the knack for tweeting. I think you’ll agree that it helps if (1) you have a sense either of humor or of the ironic and/or (2) you are a natural-born storyteller.

For example, there’s @badbanana, a blogger named Tim Siedell (maybe), who comes up with tweets like “The Kindle version of Dan Brown’s new book is outselling the hard copy on Amazon. Meaning nobody wants to be seen reading it.” And, “I question the president’s decision to start a trade war with China this close to Christmas stocking stuffer season.” And, “Hugh Hefner is getting a divorce? Well, there goes his conservative Catholic fan base.” Is this guy one of Jay Leno’s writers or what?

Tweeters are young, but not too young

There is @Blue_Crab (probably a young woman but who knows?), who writes stuff like “I swear, if it’s not one thing, it’s my mother.” And, “So I sat on the baby and it just wriggled and screamed until the tequila hit. Babysitting is hard.” Or Adam Isacson (@adamisacson), a policy wonk on Latin America, who pens “smile if you love archness” gems like this one: “I honked and flipped someone off while listening to the Dalai Lama’s book on CD, and I–well, I think I attained enlightenment.”

There’s this one from @donchiefnerd, which I love: “‘Ahmadinejad!’ ‘Ahmadidtooejad!’ ‘Ahmadinejad!’ ‘Ahmadidtooejad!’ I hate it when 6-year-olds debate world politics.” And @trelvix’s “Sarah Palin will speak in Hong Kong on Thursday. This’ll be the former governor’s first trip to Europe since visiting Maine in April.” And @secretsquirrel’s “Unemployment: discovering you can put spreadable cheese on both sides of your toast & wondering if there’s a way to patent it.”

According to the Pew Internet & American Life Project, the average age of Twitter users is 31. When Pear Analytics studied a sample of 2,000 tweets, most of what it found was trivial: spam, self-promotion, pointless babble and tidbits of chit-chat (and some news, too!). The other day, investors coughed up another $100 million for Twitter executives and staff in that giant industrial warehouse in San Francisco to play around with. And the company does have goals, such as the announced desire to reach a billion users and become “the pulse of the planet.”

Does brain research offer any ideas on why Twitter has expanded so rapidly—and on whether it will continue to do so?

Yes, but most of it is inferential and not directly produced by studying the effects of tweeting. A primary reason why tweeting is like gossiping is that both trigger the feel-good hormone progesterone. But Twitter can overstimulate your brain, too (the scientific term is “continuous partial attention”), and always being “on” can affect the brain like smoking pot or missing a night’s sleep. It subtracts IQ and interferes with focus.

Important, but destined to fade?
Like Google’s, Twitter’s look and MO are streamlined. Fewer pictures, fewer words. The brain likes this. It means you get stuff quicker than you expected, and this triggers dopamine, another feel-good neurochemical that shows up when outcomes “are better than predicted.” On the other hand, Twitter is addictive. It seems to trigger seeking behaviors—as in always seeking the next fix, the newest buzz.
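
That “better than predicted” line echoes the textbook reward-prediction-error account of dopamine. Here is a minimal sketch in Python (illustrative numbers only, not from any study of Twitter) of how the surprise signal shrinks as outcomes become expected, which is one stock explanation for always chasing the next fix:

    # Reward-prediction error, textbook style: the dopamine-like signal is
    # the gap between the reward received and the reward expected, and the
    # expectation catches up over repeated checks. Numbers are illustrative.
    expectation = 0.0  # predicted payoff of checking Twitter
    alpha = 0.3        # learning rate

    for check in range(1, 6):
        reward = 1.0                     # each check turns up something new
        surprise = reward - expectation  # the "better than predicted" signal
        expectation += alpha * surprise  # expectation rises toward the reward
        print(f"check {check}: surprise = {surprise:.2f}")
    # Surprise fades toward zero, so the buzz demands ever-fresh novelty.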

60 Minutes and Vanity Fair announced the first results from a new monthly U.S. poll the other day. The age cohort most likely to view Twitter as an “important new tool” was the 18-to-29 group (22 percent). However, this age bracket also was the most likely to deem Twitter “a fad that will fade” (51 percent). The poll’s experts were puzzled. They noted, “In other words, the 18-to-29s believe in Twitter’s importance and its inevitable obsolescence, making them … what? Brooding and pessimistic? Wise beyond their years? Too busy tweeting to grasp the question?”

I just don’t sense that Twitter provides anything all that essential to the new communications mix, even though it can be entertaining, sometimes informative and, on a few occasions, may turn out to have planetary significance as an early-warning, quick-alert service. So I suspect it will turn out to be more Alka-Seltzer than Viagra. The thing you have to discover is whether there are enough people out there who can benefit from frequent reminders of who, what and/or where you are, and who really want you in their face that often.


With So Many Unhappy People Around, It’s a Very Apt Time to Think Anew about What Happiness Is and How to Make it Happen. (Even Though All the “Be Happy!” Talk and Techniques Aren’t Always Enough)

In the early 1990s, we traveled around Europe together for several weeks. Mostly by train, a few times by car, as we produced business seminars. He was a brainy, ambitious, sparely worded chap. A sly sense of humor: dry, cowboy-ish. Very good English, too, polished during an extended sojourn in America—he once addressed the downtown Los Angeles Rotary Club—but still clearly accented.

I have no idea why, or even how, he killed himself. The terse account on the Internet of his demise had to be run through Babel Fish because I don’t read the language. What was originally written was short and circumspect, and the machine translation is even less revealing.

It is probably safe to say that my ex-seminar-producing partner felt deeply unhappy and concluded that the paralyzing stalemate that his living had become wasn’t going to yield. So, tragically, he ended his life.

There are signs everywhere that a lot of people are unhappy. And there are plenty of people around who are asking why and suggesting steps for them to take and, increasingly, for their governments to take, to make happiness more accessible and widespread. Some are claiming that in places like the U.S., the United Kingdom and Germany, happiness has been stagnating for years.

An “enlightened” idea that bombed—for a while
As is often the case on matters of the public good, Europeans seem to be ahead of Americans and much of the rest of the world in their levels of official wonderment about how to help people be happier.

In Britain, for example, there is Richard Layard, the economist sometimes known as the “happiness czar.” Layard never misses a chance to campaign vigorously for his Principle of the Greatest Happiness.

He explains, “This says that I should aim to produce the most happiness I can in the world and, above all, the least misery.”

The idea sounds irrefutable and self-evidently right. And actually, it has been around for more than two centuries. Jurist Jeremy Bentham promoted it about the time America was born. As with Layard, Bentham advocated actions that increase everyone’s pleasure and decrease everyone’s pain. The concept caught on widely—and was called the most noble discovery of the Enlightenment.

But then, in one of history’s extraordinary ironies, no less a figure than John Stuart Mill, reared from boyhood under Bentham’s heavy influence, reversed all that, at least for a time. Mill tried living his young life by such a precept, and it nearly killed him. Mill was contemplating suicide when he discovered the Romantics—the Coleridges and Wordsworths—and came to this conclusion: “Ask yourself whether you are happy, and you cease to be so.”

Gross national product is out, bonheur is in
Mill concluded that happiness is like a crab—it approaches you sideways. He thought deliberately pursuing happiness was a deal-killer, a fool’s errand.

To measure happiness, you must first decide what happiness is, Mill concluded. He thought happiness is impossible to pin down. He put in a good word for pain, too. For example, falling in love often brings pain, but it is a part of any rich life. So Bentham’s single-minded “principle of utility” faded in economics and politics, buried by Mill’s “let happiness find you” arguments.

That was then, though. And this is now. There has come to be what is sometimes called “the science of happiness.” Even governments are starting to move into the picture or make noises like they’d like to.

In France last week, President Nicolas Sarkozy’s top-drawer commission to study what governments should be doing to make people happier released its report. Essentially, it called on governments to “help people produce the most happiness you can in the world and, above all, the least misery.” From now on, Sarkozy says, economic progress in France will be measured not by GDP (gross domestic product) alone but by “bonheur” (happiness). “The [banking] crisis doesn’t only make us free to imagine other models, another future, another world. It obliges us to do so,” he said, happily.

So we’ve come full circle. And if you are trying to decide how to be happy, a very full circle it is. It can be a very confusing one, too. Because neo-Benthamism has become very John-Stuart-Mill-like in its variety. That is, it has a kind of anything-goes, laissez-faire spirit about it.

Prescribed routes to happiness that take many paths
There is happiness psychologist Dr. Robert Holden, who says he can make happy optimists of clinical depressives simply by getting them to laugh or simulate laughter for 20 minutes a day and think positive thoughts all day long. Hypnotist Paul McKenna’s “Endorphin Button” exercise is quicker. You recall happy times, enhance the colors in your memory and squeeze your thumb and index finger together five times.

Neuroscientist Dr. Nick Lavidis had an epiphany while strolling through Yosemite National Park. The smell of freshly cut grass produced pleasant feelings. So Lavidis now markets a room spray that—you guessed it!—releases a chemical like that in grass cuttings. Lavidis says it stimulates the hippocampus, improving our memory functions and good feelings.

Sociologist Nicholas Christakis and political scientist James Fowler believe social relationships can cause happiness to be passed from person to person as if it were a contagious virus. They got the idea by studying the famed Framingham Heart Study, which started following 15,000 people back in 1948. Your happiness can not only affect your friends but also friends of your friends. And get this: it may affect your friends’ friends even if it didn’t affect your friends! (Unhappily, bad habits are transmitted this way, too: obesity, smoking or using harmful drugs.)

In his book on college students and achievement, Derek Bok, the former Harvard University president, flagged three consequences of poor health as producing long-lasting unhappiness: mental illness (notably depression), chronic pain and sleep deprivation (notably insomnia). He said these “afflict a surprising number of people and have a marked and continuing effect on well-being.”

Speaking of Harvard University, a study there has been tracking hundreds of students for more than 70 years. Researchers have concluded that seven major factors are most likely to produce happy old-timers: mature adaptations (or the ability to respond well to problems), education, a stable marriage, not smoking, not abusing alcohol, exercise and maintaining a healthy weight.

The “I’m OK, You’re OK” view of happiness
Numerous studies have suggested that childless couples experience more enjoyable times and fewer stressful ones than couples with children. The enjoyable-times penalty from having children is even greater for women. But if producing successful, happy, productive children makes you happy, having a family is a happiness no-brainer.

And that’s the anti-Benthamic rub to bringing any real coherence to the happiness movement: one person’s happiness maker may be another person’s pleasure eraser.

All of which causes experts like Dr. Caroline West to adjudge the Bentham-versus-Mill controversy a wash. West teaches a popular course called “The Philosophy of Happiness” at the University of Sydney. She says:

“We’re inclined to think that there is something that happiness really is. If we only knew which of enjoyment and aspiration-fulfillment happiness really was, then we would know what to be basing these and other important life decisions on. The problem is that there isn’t an answer to the question of what happiness really is. And there’s certainly no answer that everyone will agree with.

“What one person means by ‘happiness’ can be completely different to what the next person means, far more different than we commonly imagine…. Happiness can be used to refer to a momentary sensation, such as pleasure or enjoyment. Or it might refer to an enduring mood, such as tranquility or contentment. Or believing that one’s desires are being achieved, or the actual achievement of one’s desires. Or believing one’s life as a whole is going well, in terms of one’s own priorities. Or leading a life that is considered to be—from some objective standpoint—worthwhile.”

I wish that my former seminar partner and I could have talked about some of this. In his right mind, he would have enjoyed the discussion. On the trains of Europe in those yesteryear travels, we talked about a lot of things. I particularly remember one animated discussion, on a long trip between Mannheim and the Polish border, about John Galt’s radio speech in Atlas Shrugged. There was much, we both agreed, in the speech that spoke to our own sensibilities and ideals.

What I would have liked to share with my friend

If he had asked on those trips about my personal thoughts on being happy, I’d probably have said things like this:

Remember that happiness ebbs and flows. People have a range of happiness and move up and down in it. Of course, some are simply congenitally and seemingly forever joyous. And then the happiness capacity of others appears to vacillate somewhere between a passing break in the clouds and a murderous funk. The rest of us are somewhere in between and usually make do.

Brain chemistry is important but it isn’t everything. If you need antidepressants to ward off danger to yourself, hurry on to your physician. But remember that time can be a potent healer, too. And that learning is not a pain-free zone. An irreducible side effect of the good-feelings-from-the-medicine-cabinet drugs is that they close certain self-correcting and insight-filled windows on the mind and soul.

Few actions in life encourage expanded happiness and satisfaction more than “willing” oneself to initiate positive self-change. It’s both an art and a science. A key element is often proactively seeking out increased connectivity of the right kind—finding people you can be close to, or at least be around, who don’t mind you being happier.

Find your own happiness rhythms and honor them. Give in to the highs and enjoy them to the fullest. Accept the lows and understand that they are almost certain to pass. Then view and treat the in-betweens as the times when you are cleaning up the messes left over from previous train wrecks or wrong track choices and preparing for the arrival of the next great moments.

Understand that if you find happiness, it’s going to have to be on your terms. Happiness is not a pure quality. It is a concoction of tradeoffs negotiated between the self that you ideally wish to be and the self that bumps its nose against a surprise-prone, often uncooperative world every day. You need to find your own personal recipe for responding to this mix, or it will never work or taste right.

The current moment can be a real shrew. It lies a lot. It may profess to own you, soul and marrow, and insist it will never let go. When it says that, look it in the eye and spit in its face. And remind yourself that in a few hours, or a few days, or a few months, you will most likely be restored and healed, but the current moment will be nothing but a smear on a neuron, if that.

Get really, really good at the inner art of cleaning the slate. I’m not into meditating, but I’m told by those who are that it can be a very effective mental squeegee. What I often do is switch gears. Spring the unexpected on my mind. Read the unpredictable book. Watch another culture’s films. Visit a restaurant in a part of town that our neighbors or usual crowd wouldn’t think of being seen in. Or sometimes, just book a trio of big Ryder or Penske rental trucks and move halfway across the U.S. And watch from the corner of my eyes for the happiness crab to again sidle into view.

What my erstwhile European colleague and friend—may he gently rest in peace—would have thought of John Stuart Mill’s advice in those final moments, I have no idea. When you can see no light at the tunnel’s end, it probably doesn’t help to be told to forget happiness and just get on with living the best way you know how. But it’s probably good advice at most other times. Happiness may show up anyway. And even if it doesn’t, you’ll be a lot less unhappy at not having found all the happiness you think you deserve.


The Latest Business Buzz Word Is Trust, But Rather than Expanding the Supply, the TrustMe Movement Is Hugely Expanding the Number of People Who Have Reason to Wonder If You and I Are Trustworthy at All

Trust is a precious metal in my periodic table of people qualities, although I tend toward optimism that it can be justified. As readers of Dr. Paul Kordis’ and my book, Strategy of the Dolphin, know, it is a worldview thing with me. Evil, stupidity and blind belief show up much too often to treat trustworthiness as child’s play. Such qualities offend my desire for … well, competence and fairness. So I don’t bestow trust automatically, and I counsel others not to.

For example, I don’t trust automobile dealers. Not a single one of the lot, anywhere on Earth—not a whit. There is nothing in my experience or observation that indicates they deserve to be trusted. The car lots and auto showrooms of the world are marinated in greed, untruths and shady gamesmanship.

For similar reasons, I do not trust big-time politicians. Not a single one, anywhere on Earth. Now, there are some whom I admire more than others. But I don’t fully trust any of them, and you shouldn’t either. Because sooner or later, every prominent politician’s integrity goes on the auction block. And nearly all will claim righteousness or feign piety or swear ignorance or innocence when they sell out, and very few ever get indicted or penalized.

Admire Their Courage, But Be Cautious of Their Power
I do not trust cops. Not a single one, anywhere on Earth. I often admire their courage. And I find their job so fascinating that one of my favorite TV shows is Fox’s “Cops,” on Saturday night. But when you are in the clutches of a policeperson, for a brief but parlous time, you are at their total mercy. For that instant, they can be judge, jury and executioner. You can die, or be beaten, or be framed for a crime on the mere whim of the person behind the badge, and many victims around the world are, every day.

I do not trust ministers, priests, imams or rabbis. Probably most clergy people I’ve met are “good people,” and I’ve liked some of the ones I’ve known best a great deal. They often act sacrificially in admirable ways. They can provide wise, helpful counsel for many at difficult moments. But none I’ve ever met would I trust fully with my deepest questions about what it all means. Those who profess to respect my questioning show suspicions of being in camouflage; those who oppose it can be downright scary.

And now I must confess to a growing distrust of what I’ve come to call the New TrustMe Gurus of the business marketplace. There has been an explosion of them. They are promoting and peddling everything from nasal sprays to social networks and networking to books that tout things like the Joseph D. Pistone technique for winning friends and influencing people.

You may remember Pistone. He was the FBI agent who spent six years infiltrating the Bonanno crime family. In their new best-selling book, Trust Agents, digital marketing consultants Chris Brogan and Julien Smith admire how Pistone, using the alias of Donnie Brasco, won the Mafia’s trust by simply hanging around bars until the goons came to accept him as part of the scene. The point Trust Agents’ authors wish to make: you need to build up trust with your target markets before you make your move, not as you are making it.

Go Straight to the Heart of the Matter: the Pituitary Gland
Now, I’m willing to concede that many of the techniques in Trust Agents have value and are ethically light years ahead of the methods being advocated by some of the promoters of oxytocin, the “love hormone.”

Researchers from Zurich to Atlanta to Houston to Los Angeles are captivated these days by what happens when they squirt a few atomized drops of oxytocin into people’s (and rodents’) noses.

Oxytocin (not to be confused with OxyContin, a morphine-like drug associated with the death of DJ AM) is a short polypeptide hormone released by the pituitary gland. Within a few minutes of inhaling the hormone in sufficient quantity, trust becomes a five-letter word for everything from let’s make a date (or set one for nuptials) to where did you want me to sign to let’s spray the whole Middle East with this stuff. The New-Age-in-a-spray-bottle effect seems to last for two to four hours.

Liquid Trust® was reputedly the first oxytocin spray on the market. (There is now also a Liquid Trust Enhanced.) Sellers of LT have this advice for their business customers: “Use Liquid Trust in creative ways around your workplace. Before important presentations or meetings, spray some Liquid Trust around your desk or conference room [sic] see the magic happen. You could even spray some on memos or reports that you have to hand to your manager! Although they cannot smell it, Liquid Trust is there and working to increase trust in you.” [Go here for more tips, like spraying LT on thank you cards to your clients.]

The Trust Equation Is Still the Same: Stand and Deliver
However, it is neither the outpouring of glib “Chicken Soup for the Marketer” books nor the wretched excesses of the new Mary Kays of the oxytocin receptor industry that has triggered my disgruntlement with the new TrustMe movement in business. I’m simply disappointed that trust has been monetized and commodified and its pursuit irrationally “scaled” to the point where it is sure to be devalued when the trust bubble implodes.

The newly evangelical TrustMe movement in business simply isn’t producing. I know this because people who keep making me promises as part of the new TrustMe clique simply aren’t delivering, not any more than before. Tantalizing hints of imminent breakthrough developments trip off the lips and fingertips as easily as ever—never to be heard of again, just as before. Expressions and pledges of networking solidarity arrive en masse, only to wither like last week’s flower bouquet. It’s the same old same old, not the New Millennium.

What I think has happened is this: the TrustMe/social networking edifice is built on sands underlain by the same old human deep-water rip tides and whirlpools, and nobody has been doing any real core-sampling. While the neocortex poses, the limbic system and the reptilian brain continue to dispose.

Trust is still what our deepest instincts have always said it is: a very small circle. Earning trust still requires what it has always required: showing over time that you can deliver consistently on honest promises. You can have a thousand people in your LinkedIn network and three thousand Twitter followers and Facebook friends out the wazoo, and nothing fundamental about the trust equation changes. Commit + follow-through, again and again = trust.

The Danger Is Seeing Trust as a Numbers Game
Meanwhile, the demands of all that networking have made it nearly impossible for more and more of us to carry out the basics that can, over time, lead to the kind of trust that the new TrustMe business and social networking movement has been hoping to benefit from.

The experts call this “strategic trust.” This develops slowly, usually requiring years. It is very fragile, and can disappear in a finger’s snap. It happens, if it happens, because people stay around. They keep their promises. They radiate dependability and integrity in their actions. They reveal more and more of themselves and eventually, over time, become a “sum that is greater than the parts” in the experience and expectations of people for whom they count and on whom they count.

Few things are more fragile and require more tending than strategic trust. I’m not seeing very much of that emerging from the new TrustMe movement, and I don’t expect that it will. And that’s going to be very disappointing to a lot of folks.

They bought into the idea that trust-building can be a numbers game. And that being trusted is something that can be demonstrated and benefited from by showing up more and more often along the long tail of the Internet. By the time they figure out the truth, the authors of things like Trust Agents and the inventors of Liquid Trust will be long gone. And with them will go the only money anyone will make out of all this talk about how important it is to send word at the speed of light to an ever-growing myriad of message addicts (or message ignorers) of just how trustworthy you are.

Trust Values Are Eroding, Across the Board and the Seas
While Nero is fiddling, Rome shows signs of burning down. In its summer report on the top 10 trends for 2010, McKinsey, the big consulting company, says trust in business is declining. McKinsey points out that falling trust levels increase transaction costs, lower brand values and bring greater difficulties attracting customers and retaining talent.

Dr. Eric Uslaner, the trust-studying scholar at the University of Maryland-College Park, says generalized levels of trust have been declining in the United States for more than 30 years. The decline is substantial. While not the same as “strategic trust,” generalized trust is a barometer of sorts for the overall economic health of a society and its business environment. In poorer countries, both strategic and generalized levels of trust are abominable, and getting no better. This is, of course, one of the chief reasons that they are poor.

So trust is as important as ever. Too important, I think, to be left to the TrustMe Movement. This is my advice: don’t put a lot of trust, time or money in following the TrustMe hype. The last thing you need to do is let the TrustMe folks cause you to devote so much time to trying to network with people you hope you can trust and who will end up trusting you that you have no time to prove yourself trustworthy. Call that a fatal attraction to trying to do trust on the cheap.


Maybe There’s Very Little New to Be Said About Creativity But If So, There’s An Awful Lot of People Saying The Same Old Thing in Imaginative Ways on the Internet

All I did was ask Google Alerts to tell me for a couple of weeks every time the words “creativity,” “creative problem-solving” and “innovation” appeared in something new on the Internet. Before long, my e-mail box runneth over.

The intent was simple. I wanted to see if there was anything new under the sun being said on the subject of using our imagination.

After having scanned synopses of a few hundred Internet items, given several dozen of them my rapt attention and many dozens more at least a glance, it’s time to tell you what I’ve learned.

I can report some serious soul-searching about the alleged tepidness of today’s technological creativity. Also, that the fuss and bother about whether we are adequately encouraging creativity in young minds shows no signs of letting up. Nor is advice about how grownups can best partner with their own imaginations in short supply.

Perhaps the tartest candor I spotted on the lack of world-changing technological creativity in recent years was from a blogger named Raghuraman, who writes Intuivator. He tells us next to nothing about who he really is—his name may even be apocryphal—but he sounds like an engineer, possibly from India, and a very observant and thoughtful one.

Was 2008 Simply the Year Consumers Wised Up?
Raghuraman faults the creative technological spirit of the times worldwide for producing things that are merely larger, cheaper, faster, better, thicker and thinner instead of producing breakthroughs capable of feeding great new fundamental, society-improving shifts.

To him, creating the automobile was a fundamental shift; creating the SUV was not. Creating the computer was a fundamental shift; creating Web 2.0 was not. Creating the steam engine, aircraft and antibiotics were fundamental shifts, but
… starting with a 21-inch TV in the 1970s and adding two inches every year
… starting with a 10-MHz processor and adding transistors until you hit a 2.5-GHz multi-core
… moving from normal TV to high-definition TV
… going from a 10 Kbps modem to 10 Mbps fiber to 20 Mbps intense broadband
… going from newspapers to analytics to intense analytics to real-time analytics
… going from two-story buildings to 50-story buildings to 120-story buildings
… going from voice over IP to IP over voice
… going from pay for content (magazines) to get paid for content (AdSense)
… going from employment (you go to work) to telecommuting (work comes to you)
… going from TV (content comes to you) to the Internet (you go to content)
… going from stocks to derivatives to futures to options to derivative loans to traded derivatives of loans
… going from cars to sports cars to two-seaters to 10-seaters with different shapes and sizes and now the same car with different bodies from two different car vendors

were not fundamental shifts.

For the past 15 or so years, argues this critic, technological creators and innovators have been coasting on old creative and innovative waves and wares, “re-selling, re-packaging, re-wording, re-branding, re-furbishing, upgrading and pushing [products and technologies] mercilessly into the customer’s hands….The economic crisis of 2008 was the breaking point for all those innovations. [Suddenly,] the buying stopped, in a terrifying instant. Because, consumers realized that there was no real need for any of those things that they bought since the Ford Mustang or the PC.”

Copycatting at the Speed of Innovation
Mr. Raghuraman seems to be warning that too many of today’s business and technology innovators are whoring after the false gods of copycatting and concept recycling in the name of innovation, and we have reason to fear more of the same. An item the other day on a Wall Street Journal blog made that clear. Listen to this pair of MIT professors exult over how quickly and how cheaply a company can use today’s new IT capabilities to copycat and recycle concepts:

“Innovation initiatives that used to take months and megabucks to coordinate and launch can often be started in seconds for cents. And that makes innovation, the lifeblood of growth, more efficient and cheaper. Companies are able to get a much better idea of how their customers behave and what they want. This gives new offerings and marketing efforts a better shot at success.”

Uh-huh … until it doesn’t. And then, as we saw in 2008, the bottom may drop out.

I found several gloomy discussions of why entire countries are having grim thoughts about their perceived failures to innovate adequately. One expert lamented that Canadian success stories like the invention of the BlackBerry are far too rare. “[Innovation] is not in Canada’s DNA,” he concludes. A fearful observer of another country’s innovative paralysis notes, “Pakistan is experiencing a major existential crisis.”

Even in America, if not a renewal, there is at least a continuing concern about just how effective the country is proving to be at cultivating “skills of inquiry, problem solving and creative thinking” in its children. Think-tanker Andrea Batista Schlesinger has made the issue the focus of her new book, The Death of “Why?”: The Decline of Questioning and the Future of Democracy.

She writes, “We have the mistaken belief that even the most pressing challenges facing our country—climate change, globalization, healthcare, poverty—are problems to be ‘fixed’ once and for all, if only we can find the right solution and the right person to implement.”

What Lies at the Heart of Creativity? Good Question

Schlesinger wants us to teach our kids and acknowledge ourselves that we don’t know everything. “We cannot know everything,” she says. “Knowledge changes….The future is a moving target, and the ground beneath us will never be still. The only thing we can count on to see us through an uncertain future is our ability to ask questions.” One more time, she reminds us that asking good questions lies at the heart of knowing how to think creatively.

You don’t have to spend very much time researching creative thinking to realize that the British are world leaders in providing answers to this question: “How do you do a better job of teaching youngsters to think creatively?”

And a person you’ll find very near the top of any knowledgeable list of British creativity experts is Sir Ken Robinson. His big idea about teaching creativity is that you make it clear and operational, “like we have done with literacy.” And the big man with the big idea has spared no opportunity to spread his thoughts on just how to proceed with this. For example, his 2006 TEDTalk on the subject is among TED’s ten most-viewed programs and has helped make him a kind of rock star on the creativity circuit.

But any time you catch him elucidating on the subject, you quickly understand that what really bleeds through is his enthusiasm for being human.

“One of the points I make in the TEDTalk, and that I make generally, is that the human mind is essentially created. We live in worlds that we have forged and composed. It’s much more true than any of the species that you see. I mean, it seems to me that one of the most distinctive features of human intelligence is the capacity to imagine, to project out of our own immediate circumstances and to bring to mind things that aren’t present here and now. You know, to conceive of the past, to anticipate the future, and not just a future but multiple possible futures and many different sorts of pasts.”

Creativity Has Its Ways—and Often They Are Liberating
When we look at the Big Picture, we see that this is a defensible view of the way it’s been, and that, for all the bumps in the road, it has the potential to continue.

And that was what really struck me as I worked my way through Google Alerts’ smorgasbord on creativity: the sheer multiplicity of ideas out there in cyberspace and other spaces about how to have better ideas and produce unprecedented combinations—in short, to make a better world. It’s a feast of energies and aspirations, even if, in various aspects and at various times, it falls short.

For sure, how to be creative is one of America’s favorite topics. We may sometimes take a back seat to people like Britain’s Sir Ken in understanding how to use our educational institutions to make it happen. But when we put our best imaginations to capturing the essence of what creativity entails and how to have more of it, nobody gets to the nitty-gritty with quite the energy and, yes, creativity that we Americans bring to the topic of having a new idea.

Take, for example, this fellow named Hugh MacLeod. He enjoys drawing cartoons, usually weird ones, and he’s very good at it. Not that anyone much noticed for a long while. He confides:

“One evening, after one false start too many, I just gave up. Sitting at a bar, feeling a bit burned out by work and life in general, I just started drawing on the back of business cards for no reason. I didn’t really need a reason. I just did it because it was there, because it amused me in a kind of random, arbitrary way.

“Of course it was stupid. Of course it was uncommercial. Of course it wasn’t going to go anywhere. Of course it was a complete and utter waste of time. But in retrospect, it was this built-in futility that gave it its edge. Because it was the exact opposite of all the ‘Big Plans’ my peers and I were used to making. It was so liberating not to have to be thinking about all that, for a change.

“It was so liberating to be doing something that didn’t have to impress anybody, for a change.

“It was so liberating to be doing something that didn’t have to have some sort of commercial angle, for a change.

“It was so liberating to have something that belonged just to me and no one else, for a change.

“It was so liberating to feel complete sovereignty, for a change. To feel complete freedom, for a change.

“And of course, it was then, and only then, that the outside world started paying attention.”

And, wow, is it paying attention!

On stupidity’s cessation and other inspired topics
At the moment, MacLeod’s new book, Ignore Everybody: and 39 Other Keys to Creativity, is Amazon’s No. 1 selling book on creativity. Each of MacLeod’s 40 keys has its own small, snappy, pure-gold commentary. (You can read the first 12 commentaries and see a list of all 40 keys for free here.)

And if you don’t find the inspiration you need in MacLeod’s 40 keys, then you may very well find it in the list of quotes that one of the members of the Advertising group on Facebook sent his colleagues. That showed up in my automated Internet search, too. (My favorite is from Edwin H. Land, who said, “Creativity is the sudden cessation of stupidity.”)

If after all that, you are still hungry for more on this topic, just ask Google Alerts to run your own search for a few days on, say, “imagination.” You’ll get the good, the bad and the ugly, and that’s pretty much the way it’s always been with creativity, both the topic and what comes out (if anything does) at the end. Hugh MacLeod gets the last word on that, too: “Whatever choice you make, The Devil gets his due eventually.”


Here Are “Ten Rules of the Road” for Journeying on the Spiral Values Highway of Life, Courtesy of a Couple of “Rebels with an Agenda”

Not long ago, on a Sunday afternoon drive, the wife and I rounded a bend in the road near the hamlet of Cross Creek, Florida, and abruptly found ourselves staring at the “cracker”-styled farmhouse where the Pulitzer-Prize-winning novel, The Yearling, was written.

A few minutes later, we were viewing the battered upright typewriter the novel had been written on. We were soon to learn that the book’s author, Marjorie Kinnan Rawlings, had preceded us to Florida by almost exactly 80 years. Over the wood-burning cook stove on display in the kitchen, the tour guide assured us, Ms. Rawlings had slaved to prepare meals for guests like Ernest Hemingway, Thomas Wolfe, F. Scott Fitzgerald, Robert Frost, Margaret Mitchell and Zora Neale Hurston. Often, she was so worn out from cooking that she retired to bed early and her guests had to eat her superb meals without her.

And then the other night, a movie called “Cross Creek” appeared in the On Demand listings for our cable service. Mary Steenburgen played Marjorie Rawlings in the movie, released in 1983. The Rawlings depicted in the movie is shown battling her way into a psychological clearing where she could be the proprietor of her own sense of how the world works and what she wanted to be. The fight was not pretty, and it was one of the reasons she had ended up living in the hardscrabble “cracker” back country of northcentral Florida.

From the local library we checked out an autobiographical novel by Rawlings called Blood of My Blood. A spare, almost tortured story, this work tells how a badly conflicted daughter fought free of her egotistical, domineering, spiritually vacuous mother. Marjorie Rawlings had moved to Cross Creek to do epic tussling with the final stinging nettles of parental stifling and there had finally succeeded in freeing her literary self and adopting a new worldview.

I’ve become infatuated with Marjorie Kinnan Rawlings, who wrote The Yearling in the 1930s. I feel in some ways like Ms. Rawlings and I have shared similar kinds of growing-onward experiences. Call them “Being a Rebel with an Agenda” experiences.

I wouldn’t burden my readers with any of this were it not for the surprising discovery the other day that my own “Being a Rebel with an Agenda” experiences were being discussed by at least one business blogger in the United Kingdom. He’s read (and approves of) my often autobiographical work, The Mother of All Minds. In it I talk about casting off one worldview after another in pursuit of more complicated successors. (Aficionados of spiral values developmental theories will recognize this as moving up the complexity processing spectrum.) The blogger was wondering why some people can and some people can’t replace belief systems with some dependable regularity.

I’m not at all sure how far Rawlings might have eventually traveled on the developmental spiral. She died of a cerebral hemorrhage at age 57, which is young; she might have traveled farther had she lived longer. But immersing myself in Rawlings’s story and reading the British blogger’s remarks about my own experiences on the spiral have surfaced these thoughts:

10 Ways to Keep Adding Innovative New Lanes to Your Personal Capacity Highway


1. Develop an early, healthy distrust of voices around you that insist the world has to be a certain way. Learn to say with confidence, “Okay, but what about…” so you can feel what it is like to challenge these voices, to experience their push-back and to push back in return.

2. Insist on finding out what it feels like to have your own hands on the controls. Nobody learns how to drive by watching a video or listening to someone else’s narration.

3. And yet you also need to live vicariously the greatest number of lives of the greatest variety possible by reading (both fiction and non-fiction) and watching video depictions (both real and imaginary) observantly.

4. Accept that neither you nor anyone else is a unity of one but rather always a paradox of often irreconcilable fragments. Demand neither perfection nor a seamless assembly of your persona—not now, not ever.

5. Learn to surf your culture instead of absorbing it. This way you’ll be able to navigate your way in and out of it with considerable freedom instead of becoming one with it and thus its captive.

6. Come to terms with the realization that you’ll meet very few people in life who are comfortable with the kind of fluidity of mind and spirit you are developing. You can expect to feel alone a lot of the time even when surrounded by people you love and befriend.

7. Recognize that being alive guarantees suffering. The gift of being human is being able to decide what you’ll make of your suffering and the suffering of others. Be gentle with, but unyielding toward, those who want you to renounce your right to decide and acquiesce to their explanation.

8. Find that one person or that small number of people who can abide listening to you talk about what you are becoming. If you can’t talk about this aloud to real, listening, accepting ears other than your own, you’ll run the risk of real, dangerous depression and isolation.

9. Come to regard the experiences available at any one stage of your life and career as exhaustible. Make using them up a serious, ongoing goal. And make replacing them with very different ones a serious, ongoing passion.

10. Understand that all beliefs and all organizations that espouse and defend them are always hiding self-interested motives and promulgating undesirable self-limitations. The test of a belief at any point should always be, “Is this sensible in light of what I really, truly seek?”

I’d love to be able to have a meal with Marjorie Kinnan Rawlings and discuss this list at length. She wouldn’t even have to cook. We could eat at the only restaurant in Cross Creek. It’s called The Yearling.


There Is No Brain on Earth Quite Like the Chinese Brain, And Given the Coming Importance of That Brain, We Need to Understand Everything We Can About It

I have come not to bury the Chinese brain but to praise it. And to warn neuroscientists, particularly in the West, that they need to devote substantial resources to studying it, and do so urgently. There are bigger issues afoot than simply what we can learn by turning our fMRI beams on the brain tissue of people who grew up speaking the standard Beijing dialect of the Mandarin language.

But does it matter whether the newly proliferating “neuro lab rats” study Chinese brains, American brains, Luxembourgian brains or Sri Lankan brains? Isn’t a healthy human brain a healthy human brain wherever it is found? And isn’t the whole idea of focusing on brains in one country versus brains in another country a slippery ethical slope that could easily dump the whole scientific neuroenterprise in the lap of—yes—racism or worse … gasp! … a kind of eugenics profiling?

Well, first off, it is already clear that studying one country’s brains doesn’t count as studying them all. That idea flies straight into the headwinds of some of the latest neuroscience. One of the very first faceoffs between brains made in America and brains made in “East Asia” revealed that, in terms of similarities, something was rotten in Denmark.

Moreover, what Professor John Gabrieli at the McGovern Institute for Brain Research at MIT discovered speaks directly to my primary thesis: the Chinese brain is like none other. And in a century that is merely a decade or two away from China inexorably beginning to rule the world, the rest of us should hasten to understand the differences.

Surprised by the role that culture plays
You can get more details on Professor Gabrieli’s experiment here. Suffice it to say, it was the findings that should raise eyebrows. Brains made in America must work harder to make judgments for which society’s answers aren’t that clear. Brains made in East Asia must work harder to make judgments where society’s stance is not in doubt.

That outcome surprised the researchers. “Everyone uses the same attention machinery for more difficult cognitive tasks, but they are trained to use it in different ways, and it’s the culture that does the training,” Dr. Gabrieli said.

In other words, it is often the culture that shapes the brain, and differing cultures shape differing brains. The reason why the Chinese brain is like none other is in sizable measure because the Chinese culture is like none other. Again, you may beg to differ. And, again, I ask that you accompany me to an expert.

Meet Martin Jacques. He’s the author of When China Rules the World. I spotted him again the other day in Maclean’s, the Canadian news weekly. He was explaining why China is soon going to be the world’s pre-eminent economic, political and cultural superpower. I can remember only one other newsmagazine analysis that riveted me as much as this one (that was a Newsweek piece in the summer of ’74 showing how Watergate’s corruption reached the very top of American politics).

Jacques says the Chinese don’t represent a country, or nation-state, so much as a civilization, and he marvels at its “powerful centripetal quality.” He notes—and worries about—the centrality of race in the thinking of the Chinese people and their assumption of cultural superiority. He comments:

“I mean, 92 per cent of them think of themselves as of the same race. While this is clearly not true—the Han Chinese are in fact descended from many different races—it gives a kind of biological reason for Chinese unity. And you can see it in their attitude toward those within China’s borders who have not been integrated in this way. The Tibetans or the Uighurs in Xinjiang province, for example, are regarded as needing to be helped up to the level of the Han Chinese. It’s a patronizing and very assimilationist attitude.”

More than just a country called China
Part of it is the Chinese language, Jacques believes. And the Confucian values as applied to society and governance. And above all, the notion of the state as family—as the guardian of civilization. Not even the “Century of Humiliation” dating from the Opium Wars, not even Mao, with all his ruthlessness, could dislodge the Chinese from these beliefs. “It is a very remarkable characteristic,” says Jacques.

It is very much a postmodern biological “Great Wall of China,” a neurological Maginot Line in the brains of 1.4 billion people. It is one that is ordained to shape the brains of nearly every as-yet-unborn child of China because of a culture that has been fabulously successful at seeing itself as a civilization, not just a country.

Do I skate here on thin ice? Not if you are willing to be informed by the work of Bruce E. Wexler at Yale University. He published Brain and Culture a couple of years ago.

B&C is, in my judgment, an exemplary piece of research and argumentation that, at its simplest, says this: Up to young adulthood, the brain puts its neurons together based in no small part on what its environment is telling it. After that, the brain works mightily to shape its environment based in no small part on the way its neurons suggest it ought to be.

Each generation thus acts to shape the brains of the next, and this is where the Chinese civilization excels. Nor does the adult brain stop there, Professor Wexler says. Going forward, it hungers to lay the reality it constructed in those formative, neuron-linking years on all kinds of individuals, kin or not. And this is a quality that concerns a lot of people, including Martin Jacques. And, I might add, people like Australian Brian Hennessy, who has taught for the past three years at Chongqing Medical University and is currently providing psychological assistance to survivors of the Sichuan earthquake.

Segueing from a crime to cultural imperatives

The other day, Hennessy says, he had his wallet pickpocketed near his home in Chongqing. When he reported the incident to police, he says, he and his wife became the targets of the police investigation. They were hassled for hours. At first Hennessy didn’t understand. Then he realized it was as simple as this: the neighborhood police officers interrogating him and his Chinese-born and Chinese-speaking wife had lost face.

You’ll need to take this with a grain of salt because these are the words of an angry man, and words that I can’t check for accuracy. But in the context of what veteran China observers like Martin Jacques believe and brain researchers like Bruce Wexler have reported, it has the ring of reality.

Hennessy says the moment of truth for him arrived when he pointed out that the theft occurred in the police precinct’s “own backyard.” The policemen’s faces froze, he says.

“Suddenly, everything that I had read about and experienced in China gelled into one brief moment of enlightenment: I understood clearly what was really going on around me. Thank you, Buddha,” he writes. “A foreigner had been robbed in their area of responsibility, and embarrassing questions would be asked by their superiors. Institutional cultural imperatives as well as traditional cultural imperatives were guiding the behaviour of these investigators.”

Yes, Mr. Hennessy, that is also my point. Neuroscience has already shown us that the brain and its culture are inextricably linked. In some cultures more than others. In terms of internal coherence, the culture of China is perhaps the most powerful extant on Earth today. It believes itself to be superior to all other cultures. There is no reason not to believe that it believes the brains it produces are superior to all other brains.

A source of home-grown brain tissue only
Five years ago, the first brain bank specializing in the study of Chinese brains was established at the Xiangya Medical School of the Central South University of China. One of the reasons given by the project’s sponsors was that “the western based brain banks do not have an adequate supply of brain tissue from Chinese subjects.”

This time, the Chinese can be forgiven some of their self-preoccupation. Their brain is different. In ways that already matter and are about to matter more, the Chinese brain has done extraordinary things over the centuries. It is doing things today that are without peer (its brilliant economic strategy of the past few decades, for example). For all its challenges, it shows every promise of having its best days ahead.

But it is a brain formulated by fifty centuries of a civilization unique unto itself. Where the rest of us stand in the estimation of that civilization has yet to be clarified. To say it one more time, it is absolutely critical that we know as much as we can about how the neurons work in a brain that may be about to rule the world.


This Family Is Learning as They Go What It’s Like to Have a Child In Their Midst Whose Behavior Resembles a Pint-Sized Henry Kissinger’s—That Is, A Big Picture Thinker

Today’s commentary was prompted by listening to one mother’s frustration with a precocious, hyperactive six-year-old. Among other things, she says, “He never quits asking questions.” He also seems to be an extremely healthy demonstration of what chaos scientists call “self-organized criticality,” about which I’ll say more later.

In general terms, this kid’s brain cycles between chaos and stability again and again, moment by moment, hour by hour, day after day, moving first in one direction and then back again. He’s predictably unpredictable on the outside, and we can suspect on the inside, too, and it really takes a toll at times on the people around him, particularly those who love him most.

It is increasingly clear to his mother and father that something different is going on in their child’s head compared to many other children’s heads. And that this is surely going to be a continuing challenge to the adults in his life if they are not (1) agile enough in their own thinking to appreciate just how different his thinking is and (2) willing to work with the extra demands and needs this difference brings. Because we aren’t talking about a youngster whose behavior has him lagging behind. This kid is a kindergartner who already reads at a second grade level.

Noticing the things that don’t fit
Early on in my conversation with this often-exasperated mom, a snippet of dialogue from Sherlock Holmes’ famous story, “Silver Blaze,” popped into my mind. It goes like this:

Gregory (Scotland Yard detective): “Is there any other point to which you would wish to draw my attention?”

Holmes: “To the curious incident of the dog in the night-time.”

Gregory: “The dog did nothing in the night-time.”

Holmes: “That was the curious incident.”

This child seems to be just the type of thinker who, like the ever-curious, ever-observant master sleuth from Sir Arthur Conan Doyle’s stories, would notice that the dog did nothing in the night and wonder why not.

Then a few days later, while reading one of my favorite blogs, one that tracks children’s brain and learning research, I spotted a quote from another famous figure, the late, great scientist Richard Feynman. Feynman once said, “The thing that doesn’t fit is the interesting thing.”

And I knew instantly that Drs. Fernette and Brock Eide, the two gifted Edmonds, WA, physicians who write this blog, were onto something that the frustrated mother and father were going to find intensely interesting. (And, believe me, they have!)

Little people locked in a big-picture mind
It was yet another blog item that had triggered the Eides’ post. They had been reading management consultant Andrew Sobel’s thoughts on big-picture thinking. It’s the kind of thinking that CEOs will practically kill for. Henry Kissinger has been brilliant at it, for example. In 1968 he realized that the Soviet Union and China could both be encouraged to seek a common bond with America; this triangulation dominated superpower relations for 20 years, thanks to Kissinger’s big-picture thinking skills.

Sobel’s revisitation of Kissinger’s and other leaders’ big picture thinking skills prompted the Eides to wonder if big-picture skills are showing up far earlier and more often in our children than parents, teachers and other gatekeepers for the young are realizing and responding to. And immediately they concluded, “Pint-sized big picture thinkers really do exist and they seem to be over-represented among gifted children who underperform or cause behavioral disruptions in their early elementary school years.”

Moreover, the Eides suggest that the issue is not that pint-sized big picture thinkers can think this way but rather that they really can’t think any other way. And that the implications of this are manifold:

• Count on it, these children are going to have time management problems. For them, their learning environment is upside down and can be a real impediment.

• Writing assignments are hard not because they know too little but because they know too much.

• They feel like if they are going to understand anything at all, they have to understand a lot of things better. They are driven to know the overarching framework into which new bits of knowledge fit.

• They need to know why something is true, not just that it is true.

• They like discovering novel things, and they use novelties to generate new hypotheses or rules; they are inductive, not deductive, learners.

• For them, complexity often brings simplicity because with enough examples, a pint-sized big picture thinker can often spot a new pattern of meaning.

By now, I was hooked. I immediately forwarded the Eides’ blog item to the parents of what I’m just sure is another pint-sized big picture thinker in the making.

Meanwhile, I was off to surf the Internet for new findings on what’s happening inside such a child’s brain.

Butterflies are out, sand piles are in

One of the most intriguing discoveries of late was made just down Interstate 75 from me—at the University of South Florida, in Tampa. It involves self-organized criticality, the behavioral pattern I mentioned at the outset.

Self-organized criticality is a kind of chaos. When brain scientists first began trying to apply chaos theory in the late 1980s, they were all aflutter over the so-called “butterfly effect” (so named because, thanks to deterministic chaos, if a butterfly in China flaps its wings, the small perturbation may eventually cascade into a blizzard over New York City). But they could find little in the brain’s electricity resembling the butterfly effect.
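If you enjoy seeing such things with your own eyes, here is a toy sketch in Python of the butterfly effect at work, using the Lorenz system that gave the effect its name. Everything in it (the textbook parameter values, the size of the initial nudge) is my own illustration, not anything taken from the brain labs:

```python
# Two Lorenz-system trajectories that start almost identically soon
# diverge wildly: deterministic equations, unpredictable outcomes.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system one simple Euler step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)          # one trajectory
b = (1.0 + 1e-8, 1.0, 1.0)   # the same trajectory, nudged by a "wing flap"

for step in range(5001):
    if step % 1000 == 0:
        print(f"step {step:5d}: separation in x = {abs(a[0] - b[0]):.2e}")
    a = lorenz_step(*a)
    b = lorenz_step(*b)
```

A nudge of one part in a hundred million grows, within a few thousand steps, into a separation as big as the whole attractor. That, in a nutshell, is why weather (and much else) defies long-range prediction.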

However, in the 1990s, evidence of a growing “sand pile” effect quickly emerged. If you keep piling on sand grains, eventually you are going to get an avalanche. This is self-organized criticality. For a while, the pile grows predictably, and then suddenly and without notice, it “goes a grain too far” and collapses. We now know that the brain makes frequent and apparently fundamental use of self-organized criticality.
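And for the tinkerers again, here is a minimal Python sketch of the classic Bak-Tang-Wiesenfeld sandpile, the model physicists use to study self-organized criticality. The grid size and the avalanche-reporting threshold are my own arbitrary choices for illustration:

```python
import random

# Drop grains one at a time; any cell holding 4 or more grains "topples",
# shedding one grain to each neighbor (grains off the edge are lost).
# Most drops do little, but now and then a single grain "goes a grain
# too far" and triggers a large avalanche.

SIZE = 20  # the pile lives on a SIZE x SIZE grid

def topple(grid):
    """Relax the pile; return the number of topplings (the avalanche size)."""
    avalanche = 0
    unstable = True
    while unstable:
        unstable = False
        for i in range(SIZE):
            for j in range(SIZE):
                if grid[i][j] >= 4:
                    grid[i][j] -= 4
                    for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                        if 0 <= ni < SIZE and 0 <= nj < SIZE:
                            grid[ni][nj] += 1
                    avalanche += 1
                    unstable = True
    return avalanche

grid = [[0] * SIZE for _ in range(SIZE)]
for drop in range(20000):
    i, j = random.randrange(SIZE), random.randrange(SIZE)
    grid[i][j] += 1
    size = topple(grid)
    if size > 100:  # report only the big, sudden collapses
        print(f"drop {drop}: avalanche of {size} topplings")
```

Run it and you will see long stretches of quiet punctuated, without warning, by collapses of every size. Nobody tunes the pile to that critical point; it organizes itself there, which is exactly the property that intrigues brain researchers.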

Which brings us to what USF researcher Robert Thacker found. If you can keep the collapse of certain of the sand-pile-like electrical patterns in the brain moving for as little as a single additional millisecond (out of a typical 55 milliseconds), you can add as many as 20 points to a child’s IQ.

And how do you do that?

Well, how do you create a suitable classroom and home environment for a pint-sized big picture thinker?

Big-picture brain research needs a kick in the pants
The Drs. Eide have suggested that you make such a child’s environment as sensorially rich and varied as you can. They think you should throw “chronologically advanced experiences” at the youngster out the kazoo (second grade literature for the kindergarten big picture thinker is just great!). Patiently answer their questions and feed them more. And especially feed their hunger for subjects, phenomena and ideas that can be compared and contrasted.

And probably—and this is my observation, not the Eides’—this is going to help extend the duration of the sand pile collapses in a big-picture child’s brain, too.

If there is a fertile field in dire need of big-picture thinkers, it is brain research. As veteran split-brain researcher Michael Gazzaniga says, “neuro” research badly needs a unifying theory. Currently, it is vastly fragmented and has to resort more often than not to story lines borrowed from motivational thinkers and management theorists (thank you, Andrew Sobel!) to bring some coherence to all the insights and observations tumbling out of things like fMRI (functional Magnetic Resonance Imaging) laboratories.

Maybe that patience-testing, ever-questioning six-year-old kindergartner who is already reading at a second grade level will turn out to be the one who provides the new theory that Dr. Gazzaniga yearns for. Gazzaniga says it will come from giving neuro-research a good kick in the pants. And our friends’ son seems to be in training for participating in just such an act of levitation.


If You Think Being R. Buckminster Fuller Was a Challenge, Try Being His Ghost: A Report on Summer Doings and Travels of Bucky’s Legacy

A few weeks ago, I had occasion to wonder what the ghost of Richard Buckminster Fuller is doing these days. I am now ready to report.

For my younger readers, I may need to explain exactly who Bucky Fuller was. I wish I could. I’ve never really understood exactly who—or what—he was. If you believe in reincarnation, he might be remembered as Leonardo di ser Piero da Vinci taking a bow. Let’s just say this. He was an original. Hugely original. And far ahead of his times. His ultra-efficient designs for cars and houses and other objects and his unrelenting insistence that better care must be taken of his beloved Spaceship Earth, along with his natural charms and engaging intellectualism, captivated almost everyone he met. And the ghost of one of America’s greatest creative thinkers of the 20th Century is still at it.

Bucky’s ghost has been busy. So busy that I’m quite sure it hasn’t spent more than a night or two all summer at home in Mount Auburn Cemetery, the famed bird-watching preserve and final resting ground for the famous near Harvard Square in Cambridge. I’m sure that’s been a bit of a disappointment to the nature-loving Fuller. According to the park message board, birds spotted at Mount Auburn this year include the red-tailed hawk, eastern screech owl, warblers, song sparrows, scarlet tanager, house wren and great blue heron.

But I’m also quite certain that the missed bird watching opportunities were quickly forgotten by one of God’s most peripatetic poltergeists, for whom the summer months are always very busy.

Most years, a bit of a fuss is made in July, since Fuller was both born (on the 12th, in 1895) and died (on the 1st, in 1983) in that month. (You can check out his astrological chart here.) But his ghost seems not to have had to deal much with historical formalities this year. There were more fun things to do.

On June 5, the Museum of Contemporary Art in Chicago closed the doors on its run of one of the first major exhibitions about Fuller in many years. The show had debuted the previous summer at the New York Whitney Museum. Normally, Bucky’s ghost would choose opening day for an appearance but the way the Chicago museum ended its exhibition would have almost guaranteed—I’m just sure of it—an appearance by Fuller’s apparition.

It was the idea of a couple of local artists, Jennifer Karmin and Ira S. Murfin, to use the closing to mark the occasion of Bucky’s notable chat with the hippies in Golden Gate Park in the late 1960s. This magical moment in human discourse was captured on film. In Chicago, the two artists sat down on two overturned black plastic milk crates in the museum lobby and began to read from a transcript of the film. Soon others joined them and started reading parts. This continued for two hours, with both actors and audience coming and going. At one point there was an elderly gent in thick black spectacles and burr haircut with a wide smile on his face seen pretending to read a show poster on a nearby wall. That had to have been the ghost of you-know-who.

While in the Midwest, it would have made sense for Bucky’s ghost to have zipped over to the Henry Ford Museum in Dearborn, Michigan. Ah, the nostalgia. Of his famed Dymaxion Houses fabricated in Wichita, Kansas, only the one here remains. Built around a central mast. Made of aluminum alloys. Hovers like a flying saucer. It was truly sui generis. And like so many of Bucky’s ideas, it was a commercial failure.

Later in June, security guards at the University of Portland’s graduation ceremonies looked high and low for a figure spotted backstage without a security badge. He was never found, and speaker Paul Hawken never knew about the incident. Interestingly, the few who saw the gent said he disappeared right after Hawken, the well-known sustainability economist, read these lines from his speech:

“Basically, civilization needs a new operating system, you are the programmers, and we need it within a few decades. This planet came with a set of instructions, but we seem to have misplaced them. Important rules like don’t poison the water, soil or air, don’t let the Earth get overcrowded, and don’t touch the thermostat have been broken. Buckminster Fuller said that spaceship Earth was so ingeniously designed that no one has a clue that we are on one, flying through the universe at a million miles per hour, with no need for seatbelts, lots of room in coach, and really good food. But all that is changing.”

At that point, the witnesses said the little man raised both arms into the air, flashed “V” for victory signs and just disappeared behind the drapes. You-know-who again.

A good ghost, or so I’m told, tries to honor the legacy of its owner by keeping close tabs on the personal keepers of the memories and the sustainers of the flame, and it’s been a busy summer for Bucky’s ghost on that front, too.

Strangely enough, Michael Jackson’s death on June 25 had to have generated several frenetic days in a row for Bucky’s apparition. I mean … talk about Memory Lane. The “current trends examiner” for one website celebrated that Michael died doing what he loved. Then the commentator noted that Buckminster Fuller had done the same when he died in Los Angeles just days before his 88th birthday. This celebrity lover claims Bucky as a mentor. Shameless namedropping? Well, Bucky’s ghost is used to it. Even the guy who was designing the sets for the King of Pop’s final tour cited Fuller’s influence on his early career in interviews occasioned by Jackson’s demise.

And if the demands of unexpected passings were not enough, there were those scheduled Bucky-named events and the regular keepers of the memory to bless with a fly-by, no matter how quotidian.

In Liberty, Missouri, young Bradley Dice was being honored by the mayor for placing 10th in the National History Day competition with his project, “Buckminster Fuller: The Actions and Legacies of a Comprehensive Anticipatory Design Scientist.”

And there was homage to be paid to the eight students and faculty member from MIT who won this year’s $100,000 Buckminster Fuller Challenge Prize. Their idea for placing fleets of shared-use lightweight electric vehicles at automatic charging racks throughout a city won the day and the prize.

While it wasn’t altogether necessary, a whirlwind wraith-brush with Houston produced a feel-good moment on July 20, when the Apollo 11 landing’s 40th anniversary was observed. Some Houstonians still remember Bucky’s stirring words when he proclaimed Apollo 11—and by inference, Houston, too—to be at the “dead center of evolutionary events.”

In Israel, Tal Ronen continued to tell any audience that would invite him how he happened to launch a lucrative career as a globe-trotting transformational thinker and coach to top management who constantly preaches the need for environmental sustainability. Bucky gets the credit, he says. Ronen was 24 when he heard Fuller speak. He says Bucky asked his audience who would help steer Planet Earth, and Ronen says for a time he was the only person to raise his hand. He found the moment life-changing.

Somewhere along the way Trevor Blake had a similar epiphany. Blake is a sign language interpreter in Portland, Oregon. But his passion is tracking Bucky materials. His published bibliography references nearly a thousand printed works spanning Bucky’s life and career. Without Blake’s devotion, we might never have known just how central the date of “July 12” was in Buckminster Fuller’s life and its aftermath. (Go here for Blake’s list of things Fuller-ish that have happened on that date.)

Oh, yes, one more stop that I’m sure Bucky’s apparition made this summer. I feel sure he dropped in to trigger delicious shivers of recognition and relevance from the staff of the jubilat poetry magazine at the University of Massachusetts. There in their summer issue, amid selected aphorisms from the 13th Century German philosopher Albertus Magnus and African-American experimental poetry, was an excerpt from Bucky’s book, I Seem to be a Verb.

If Bucky was a verb, so is his ghost. Who, even though summer is only about half gone, must surely be ready for some birdwatching in Boston.
