The Design of the Brain

A look at the design of something that wasn’t designed at all.
Since you are reading
this sentence, I will make a bold assumption and assert that you have a brain.
This is neither sarcasm nor a metaphoric comment on your intellect or taste;
this is about the roughly three pounds of squishy tissue between your ears.
 
Game
show fans already have an inkling as to why; IBM finally showed off its natural-language-processing
computer Watson on the game show Jeopardy! in February, where it demolished its
fleshy opponents. That humanity could only sheepishly grumble about the
computer’s buzzer reflexes is a tacit admission that it could basically read
and understand the game’s clues as well as any human.
 
But
this is a bold, and not totally foolish, assumption only under certain
definitions of the word “read,” since computer programs had been scanning and
memorizing this text long before it hit your optic nerves. In fact, everyone
involved in the production of this article depends on that ability to
effortlessly recall each character in the order it was entered, and to rearrange
those characters into previous patterns at our discretion.
 
And
while they might be able to read, what our computers have no hope of doing—and
what Watson is perhaps only scratching the surface of—is coming up with the
idea for this article in the first place. So far, the only machine we know
capable of that kind of creative behavior is not the product of decades of
meticulous engineering, but millennia
of haphazard biological evolution. The
brain wasn’t designed to think, analyze, or create. It wasn’t designed at all.
But the brain is the only thing on
the planet that can surprise its owner with a novel idea, and how it does so
is one of our biggest
unanswered scientific questions. What makes us more than meat-machines,
programmed to sing, dance, and dream? What makes us human?
 
Picture by Kokoro & Moi 
 
 
The Undesigned
The basic building blocks of the brain are neurons, long,
branching cells that communicate with each other via electrochemical signals.
The human brain has roughly 100 billion of them, or more than ten times the
number of people on the planet. The organism with the simplest nervous system,
the nematode, has 302.
To
be totally reductionist, everything that happens in the brain can be boiled
down to electrical signals in these neurons. The electrical signals cause
chemicals known as neurotransmitters to jump the tiny gulf separating a neuron
from one neighbor or another, which sets off new electrical signals in the
recipient, and so on until you wiggle your left big toe or select the next
word in your sonnet. The difference lies in the pattern of neurons firing and
the path through the various parts of the brain that pattern takes.   
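The chain of events described above can be caricatured in a few lines of code. This is a toy sketch, not a real neural model: each “neuron” simply sums its incoming signals and fires once a threshold is crossed, passing a fixed, all-or-nothing signal to the next cell in the chain.

```python
# Toy illustration (not a biological model): a "neuron" fires when its
# summed input crosses a threshold, and firing is all-or-nothing.

def fire(inputs, threshold=1.0):
    """Return True if the summed input crosses the firing threshold."""
    potential = 0.0
    for signal in inputs:
        potential += signal
        if potential >= threshold:
            return True  # action potential: pass the signal on
    return False

def chain(stimulus, length=3, threshold=1.0):
    """Propagate a stimulus down a chain of toy neurons."""
    signal = stimulus
    for _ in range(length):
        if not fire([signal], threshold):
            return False  # the signal dies out; no toe wiggles
        signal = 1.0      # firing is all-or-nothing, so output is fixed
    return True

print(chain(1.2))  # a strong stimulus propagates: True
print(chain(0.4))  # a weak stimulus dies at the first neuron: False
```

The difference between wiggling a toe and writing a sonnet, in this cartoon, is not in the mechanism but in which chains fire and where they run.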
 
This
process is more or less identical in humans and nematodes, as both species’
neurons are the product of the same slow, incremental changes of evolution. What
separates the two species’ nervous systems can be traced back to what it took to
survive in the environments of our ancestors versus those of a millimeter-long
roundworm.
Nematodes’ neural development could stop once life’s most basic functions—breathing,
eating—were satisfied. The human hindbrain takes care of those, but to get to
complex sensory processing, and then to poetry, painting, and neuroscience, the
midbrain and forebrain needed to develop on top of it. 
 
But
when we concern ourselves with those uniquely human abilities, we’re really
talking about the part of the forebrain known as the cerebral cortex and its
frontal, parietal, occipital, and temporal lobes. Broadly speaking, they are
respectively the centers of decision-making, spatial perception, vision, and
speech. Of course, the actual mechanisms of all of the above involve both
higher specialization within each of those lobes and interactions with many
other parts of the brain.
 
The
organization, interactions, and specificity of these regions seem so orderly,
in fact, that it is tempting to think of them as being designed for their
various purposes. But not only did these structures arise from the ground up,
through millions of random mutations rather than a concerted effort, they did
so in an environment that was largely devoid of the things we think they’re so
purpose-built to interact with. To say there’s a part of the brain designed for
reading ignores the fact that there was nothing to read when the brain took
the shape it has today. 
 
“I
think part of what designers do is try to reverse engineer the human mind to
find out what kinds of things will tickle the brain,” says Gary Marcus,
professor of psychology at New York University, and author of Kluge, an
account of the brain’s haphazard evolution. “I don’t think there’s a simple
formula for it, because the brain itself is not a particularly simple system.”
 
In
Kluge, Marcus outlines two overlapping thinking systems that evolution
bestowed upon the brain: deliberative and reflexive. In the environment in which
these systems evolved, both were useful—you’d need to deliberate with your fellow
proto-humans about how to best corner your prey in order to eat, but allow your
reflexive systems to override your hunting strategy if you suddenly thought you
might be the one on the menu.
 
In
the modern context, the concurrence of these systems also has implications for
the diversity of art and culture. You find humor in both pie-in-the-face gags
and in complex satire, and beauty in both representational and abstract
paintings.  
 
“You
can imagine that if we were designed by intelligent designers, we’d only have
deliberative pleasure or we’d only have reflexive pleasure, but we have both
because evolution doesn’t think ahead,” says Marcus.  
 
Clearly,
appreciating culture takes a mix of both of these systems. But there’s more;
it’s only by combining the reflexive and deliberative systems that human brains
can create new ideas.

Picture by Kokoro & Moi 
 
The Process of a New Idea
Consider this peculiar aspect of your brain: You have an
awareness of things you have forgotten, and can recall things you never knew
you knew. More impressively, you can unconsciously cobble together bits of
half-remembered information and apply them to a problem at hand, producing a
eureka moment and an idea that seemingly came from nowhere.
 
Neuroscientists
use something akin to that definition when trying to pin down what is meant by
“insight.” It makes sense; the only place such ideas could actually come from
is within the brain itself, hence “in” plus “sight.” More broadly, these
scientists are delving into the electrochemical roots of creativity, the
creation of new ideas.  
Major
advances in brain imaging have assisted in examining those roots, but the real
hurdles to understanding this phenomenon are not technical. A review of the
last decade’s worth of research into the neuroscience of creativity, recently
published in the American Psychological Association’s Psychological Bulletin,
emphasizes this difficulty.
 
“An
insight is so capricious, such a slippery thing to catch in flagrante,
that it appears almost deliberately designed to defy empirical inquiry,” said
the review’s authors, Arne Dietrich and Riam Kanso. “To most neuroscientists,
the prospect of looking for creativity in the brain must seem like trying to
nail jelly to the wall.”
New
ideas are interesting and useful only because they are unpredictable; if we
knew where to look for them while in the bore of an MRI machine, we’d know
where to look for them when we’re hunched over our laptops and drafting tables.
 
The
closest we’ve been able to come in the lab involves experiments intended to
determine what parts of the brain are most active when someone is completing a
task that requires some mix of the reflexive and deliberative systems in the
brain, and can be done either creatively or in a systematic way.
 
The
most famous of these studies was conducted by John Kounios and Mark
Jung-Beeman, psychologists at Drexel University and Northwestern, respectively.
They asked participants to find the connection in a trio of words, such as
“bump, egg, step.” Did you get it? Did the word “goose” just come to you, or
did you try out lots of different words to see if any fit? If it was the
former, congratulations, you just had an insight.
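The “try out lots of different words” strategy, the deliberative alternative to insight, is easy to sketch in code. A minimal version follows; the compound list and candidate words are tiny hypothetical stand-ins for a real dictionary, not anything from the Kounios and Jung-Beeman study.

```python
# A sketch of the slow, deliberative strategy for the "bump, egg, step"
# puzzle: test each candidate word against every cue to see whether it
# forms a familiar compound. COMPOUNDS is a hypothetical mini-dictionary.

COMPOUNDS = {
    "goosebump", "goose egg", "goose step",
    "eggshell", "doorstep", "speed bump",
}

def forms_compound(candidate, cue):
    """Check candidate+cue and cue+candidate, with and without a space."""
    return (candidate + cue in COMPOUNDS
            or candidate + " " + cue in COMPOUNDS
            or cue + candidate in COMPOUNDS
            or cue + " " + candidate in COMPOUNDS)

def solve(cues, candidates):
    """Systematically test each candidate against all three cues."""
    for word in candidates:
        if all(forms_compound(word, cue) for cue in cues):
            return word
    return None

print(solve(["bump", "egg", "step"], ["shell", "door", "goose"]))  # goose
```

The point of the contrast: a program has to grind through the candidate list, while an insight delivers “goose” without any conscious search at all.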
 
While
participants were solving these riddles, Kounios and Jung-Beeman were watching
what was going on inside their brains. The research team used both EEG (which uses
electrodes on the scalp to sense the brain’s electrical signals and is
temporally accurate) and fMRI (which uses powerful magnets to detect blood
concentration in different parts of the brain and is spatially accurate) to
pinpoint what was happening at the eureka moment.
 
The
prefrontal cortex, the outer part of the frontal lobe, was a logical place to
look, as almost all complex decision-making originates in that part of the
brain. And indeed, the prefrontal cortex, as well as the anterior cingulate
cortex, which is involved in detecting contradictions and errors, were most
active when focusing on the parameters of the task.
 
But
the real work of generating an insight was done by another part of the brain,
the anterior superior temporal gyrus, where disparate pieces of information are
examined in parallel, then recombined into an insight. Kounios and Jung-Beeman
have gone on to examine the ways the brain might be primed to have such
insights, but the common theme of their research is that, after the
deliberative framing of a problem, the synthesis of the solution involves
reflexive behavior in the brain that may not even be consciously accessible.
 
Picture by Kokoro & Moi 
 
Man vs. Machine
As it happened, the computer Watson provided a perfect coda
for this research into insight. After the public display of its trivia
dominance, Congressman Rush Holt beat the computer in a private round of
Jeopardy!, partly thanks to a category that catered to our brains’ capacity to
pull together disparate pieces of information in semi-conscious fashion. The
category, “Presidential Rhyme Time,” did not require arcane knowledge; a list
of U.S. presidents and a rhyming dictionary would likely suffice to come up
with answers such as “What are Hoover’s Maneuvers?”
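That brute-force recipe, a list of presidents crossed with a rhyming dictionary, can be sketched directly. Everything here is a hypothetical fragment: a handful of presidents, a handful of dictionary words, and a crude rhyme test that just compares word endings rather than actual sounds.

```python
# A sketch of the brute-force "Presidential Rhyme Time" strategy: pair
# every president with every dictionary word and keep the near-rhymes.
# Both lists are illustrative fragments, not real data sources.

PRESIDENTS = ["Hoover", "Tyler", "Polk"]
DICTIONARY = ["maneuvers", "louvers", "yolks", "smiler"]

def crude_rhyme(a, b, tail=4):
    """Very rough rhyme check: do the last few letters match?"""
    return a.lower()[-tail:] == b.lower()[-tail:]

def rhyme_time(presidents, words):
    # Cross-index each name (pluralized to mimic the possessive
    # "Hoover's") against every dictionary word.
    return [(p, w) for p in presidents for w in words
            if crude_rhyme(p + "s", w)]

print(rhyme_time(PRESIDENTS, DICTIONARY))
# includes ("Hoover", "maneuvers"), among other near-rhymes
```

A machine can afford this exhaustive cross-indexing; under the game’s time pressure, a human brain cannot, which is exactly what makes the clue a test of insight rather than recall.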
 
And
while Watson surely had those words filed in its memory banks in a precise and
orderly fashion, it took something with a mess of neurons to put together the
ones that satisfied the clue. If you’ve played along with Jeopardy! before (and
Holt has; he’s a five-time champion in addition to being a plasma physicist)
you know how it feels to solve those kinds of clues. There is no systematic
checking off of presidential names, cross-indexed with a collection of potential
rhymes that could fit the bill; there’s simply not enough time. It’s like the
word trios in the Kounios experiment; you get it or you don’t.
 
Watson
didn’t, or more likely, couldn’t. Even with the computer’s massive processing-speed
advantage, the solely systematic, deliberative approach was no match for human
insight.
 
But
what of pure creativity? How do we generate those black-swan ideas if we don’t
have a kernel on which the deliberative systems of our brains can focus? Even
when we’re purely free-associating, there is always one framework at our
disposal: No matter the medium or the output, all products of human creativity
will—at least for now—be processed through a human brain, even if it’s only our
own. The haphazard biological machinery that allows us to enjoy creative work is
the same machinery that allows us to create it.
 
“All
designers need to be intuitive psychologists of human beings,” says Marcus. “If
you wanted to please the aesthetics of a robot, you might do something
different.”
