Strategies, the Brain, Neural Networks, and Cognitive Science: Part 1

Listen, this is a test. Ten crows were sitting on a roof. A hunter shot one of them with his gun. How many crows are left on the roof? Think about it for a moment before reading on.

The logical answer is, of course, nine. Any digital computer would tell you that. In fact, the correct answer is ten. The gun fired quietly, and the roof was flat. The dead crow fell and remained on the roof, and the other nine didn’t notice. But wait a minute! Suppose the gun fired loudly and the roof was sloped. Then the answer would be none, because the dead crow would slide off the roof, and the others, startled by the noise, would fly away.

These are questions posed by real life. Real life doesn’t give simple answers to complex, contextual questions. In real life, your brain works differently than a digital computer.

In the test, the logically correct answer is nine, but nine is just one answer. There are other ways to think about it. Thinking logically is not the same as just thinking. Notice how your brain shifted focus when you realized this wasn’t a logic test. The full context of the problem determines how we think about it. Context is part of the information. A new idea rarely comes from logical thinking. Logical thinking most often leads to contradictions. In reality, human beings aren’t particularly good at logical thinking. The laws of thought are not the laws of logic.

These articles address cognitive science and modern neurology. Most work on the brain and thinking today takes place in the field called neurocomputation, which has spent nearly 40 years studying the architecture of the brain’s neural networks. This is very different from the stimulus/response reflex-arc model of the brain, which 40 years ago laid the foundation for the development of digital computers and which also serves as the model for strategies in NLP.

The real brain doesn’t work in fixed steps; one cell interacts with the processes of other cells. The real brain is a globally interconnected, widely distributed, simultaneously operating set of parallel processes. Remember when we told you that strategies in NLP follow a step-by-step chain, one after another? Well, we lied. It all happens at once.

Clearly, our current understanding of internal computation in NLP needs updating. This is the first in a series of articles written to improve our models of strategies and to integrate some valuable discoveries from the past few decades that have now revolutionized computer science, neurology, and cognitive psychology.

A Little History

Ever since Egyptian surgeons opened up the brain 5,000 years ago, we’ve tried to understand it and find models for how it works. Aristotle thought the brain’s function was to cool the blood. Descartes thought consciousness resided in the pineal gland and that the rest of the brain contained memory in the form of pathways.

Other metaphors usually explain how the brain works in terms of man-made devices: communication through pipes, telephone switchboards, and most recently the digital computer, which gave us Neuro-Linguistic Programming (named in 1975 because it drew from neurology and linguistics). Programming refers to how we organize our actions and ideas to create results, and the metaphor comes from computer science (cybernetics).

There are significant differences between computers in 1975 and computers today. Obviously, the computer on my desk is more powerful than the one that took up the entire first floor of London University when I studied there in the 1970s. Computers have changed in both design and power, and our understanding of how the brain works has changed just as much. The digital computer has never been an adequate model of the brain. Artificial intelligence is not a replacement for the real thing.

Our assumption is that NLP is stuck with an outdated metaphor.

Digital Computers

The origins of the idea of a Thinking Machine may go back to George Boole’s book, “An Investigation of the Laws of Thought,” written in 1854. Boole described a way to mathematically define logic. He believed that the connection between algebra and language demonstrated a higher logic, which he called the laws of thought. Boolean logic is now widely used in digital computers.
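
As an illustration of what Boole formalized, here is a minimal sketch (Python; our own example, not anything taken from Boole’s book) of logic reduced to calculation over two values:

```python
# Boolean algebra in miniature: every proposition becomes one of two values,
# and reasoning becomes calculation over those values.
def AND(p, q): return p and q
def OR(p, q):  return p or q
def NOT(p):    return not p

# "If it rains and I have no umbrella, then I get wet."
rains, umbrella = True, False
gets_wet = AND(rains, NOT(umbrella))
print(gets_wet)   # True - the conclusion follows mechanically from the premises
```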

The next major step came almost 100 years later with Alan Turing, who developed a generalized model of computation that is still the basis of the most complex and powerful machines operating today. He proposed that machines manipulating binary algebra (zeros and ones) could carry out any computation that can be precisely specified.

John von Neumann took Turing’s idea and put it into practice. He was fascinated by how consciousness worked and believed he was modeling the brain. Von Neumann created a computer whose design was innovative at the time: a single memory that stored both the numbers to be calculated and the instructions (programs) for carrying out the calculations. This was a big step forward compared to computers that had to be rewired for each different type of operation. Von Neumann thought that such shared memory for data and programs was a model of the mind’s flexibility. However, it created a bottleneck: the contents of memory can only be accessed one element at a time. Modern computers have become far faster at these operations, but the bottleneck remains. The brain has no such bottleneck; it has billions of autonomous neurons functioning simultaneously.
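
To make the bottleneck concrete, here is a toy stored-program machine (Python; the four-instruction set and the memory layout are invented purely for illustration). One memory holds both the program and the data, and a single fetch-execute loop touches that memory one element at a time.

```python
# A toy of the von Neumann design: program and data share one memory, and a
# single fetch-execute loop accesses it one element at a time - the bottleneck.
memory = {
    0: ("LOAD", 100),            # the program...
    1: ("ADD", 101),
    2: ("STORE", 102),
    3: ("HALT", None),
    100: 2, 101: 3, 102: 0,      # ...and the data, side by side
}

pc, acc = 0, 0                   # program counter and accumulator
while True:
    op, addr = memory[pc]        # every step funnels through the same memory
    if op == "LOAD":
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break
    pc += 1

print(memory[102])               # 5: one instruction at a time, strictly in order
```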

The digital computer differs from the way our brain works in several respects. Digital computers operate through a central processing unit that controls the data. They process data one block at a time, and although a machine may have many processing units working in parallel, each one still operates in a linear, sequential way. Because of this bottleneck, the faster the units can work and the more of them can work at once, the better.

This reminds me of the somewhat illogical image of people examining Einstein’s brain to see if it was bigger than the average person’s. Since digital computers work in a linear and sequential way, they operate through an algorithm of cause, effect, and IF…THEN… logic. Modern research paints a picture of the brain as a much more complex set of processes.

Digital computers are often too precise for their own good. If the answer must be “yes” or “no,” this paradoxically limits your thinking. More often, the answer needs to be “maybe” or “possibly,” depending on what else is happening. High precision may be necessary in mathematical tasks, but most of the time it is unnecessary. We choose and filter from a whole range of possibilities and keep many options open as possibilities rather than trying to resolve them. The result of one train of thought is fed back into a similar process to refine it. Paradoxically, it takes enormous computing power for a computer to mimic this natural quality of human thinking.
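
Written in that IF…THEN style, the crow test from the start of this article looks something like the sketch below (Python; the function names and the “maybe” answer are our own contrivance for illustration). The first function is what a digital computer naturally produces; the second gestures at keeping the possibilities open until the context is known.

```python
# The crow test as a digital computer "thinks" it:
def crows_left_logical(crows_on_roof, crows_shot):
    # IF a crow is shot THEN subtract it - context never enters the calculation.
    return crows_on_roof - crows_shot

print(crows_left_logical(10, 1))   # 9, always

# The answer the article argues for: it depends on context, so the honest
# output is a range of possibilities, not a single number.
def crows_left_contextual(gun_was_loud=None, roof_was_sloped=None):
    if gun_was_loud is False and roof_was_sloped is False:
        return 10    # the dead crow stays on the flat roof; the rest never notice
    if gun_was_loud is True and roof_was_sloped is True:
        return 0     # the dead crow slides off; the rest fly away
    return "maybe 10, maybe 9, maybe 0 - it depends"

print(crows_left_contextual())     # the "maybe" answer
```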

Now, some truly decisive differences. Digital computers don’t learn; they store and retrieve knowledge, which gives us the metaphor of the human brain as a library or a database. In a computer, data is independent of the system that contains it. In the library or database metaphor, it doesn’t matter which library you found the book in; it will be exactly the same. A book or database can be transferred from one system to another without change.

Now we can see where this metaphor breaks down. You can’t transfer knowledge from one mind to another. The meaning of this article for you won’t be the same as it is for me. Meaning depends on context, as any crow will tell you.

There’s a well-known story about a computer analysis of cases collected in “Home Accident Reviews,” in which accidents on stairs were studied statistically and most were found to occur on the first and last steps. The logical suggestion: remove the first and last steps. A computer requires a programmer, someone outside the system, to supply the sense and context that the data alone does not carry.

There is hope and promise that computers will think and beat people at their own games. The best example of this is probably the work that came from developing computer programs capable of playing chess and competing with top chess masters. Initially, there were high hopes for this. It seemed like the perfect test. Chess players supposedly analyzed sequences of possibilities in their minds, and the right move was the one that brought victory or an advantage at the end of the game. The best chess players, according to this model, were those who could see far ahead and analyze a larger tree of possible moves.

Unfortunately, human players make mistakes. Either they fail to consider immediate possibilities, because the number of possible moves on a chessboard is astronomical, or they analyze a plan only as far as is needed to carry it out, without looking far enough ahead. Then they are surprised by an opponent’s move they didn’t foresee. All a computer had to do was calculate many moves ahead, further than a human chess player could, and consider all the possibilities a person missed. This is simple, at least in principle.
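
The “calculate further ahead” approach has a standard form: a minimax search over the tree of possible moves. The sketch below (Python; a generic illustration with invented toy functions, not the code of any actual chess program) shows the core idea, and why the astronomical number of positions matters.

```python
# A minimal sketch of "look many moves ahead": plain minimax search over a
# game tree. Real chess programs add far more, but the core idea is the same -
# enumerate possibilities further than a human player could.
def minimax(position, depth, maximizing, moves_fn, evaluate_fn):
    moves = moves_fn(position)
    if depth == 0 or not moves:
        return evaluate_fn(position)              # static score of the position
    scores = [minimax(m, depth - 1, not maximizing, moves_fn, evaluate_fn)
              for m in moves]
    return max(scores) if maximizing else min(scores)

# Toy usage: a "position" is a number, a "move" adds 1 or 2, the score is the
# final number - nothing like chess, just enough to exercise the machinery.
toy_moves = lambda p: [p + 1, p + 2] if p < 5 else []
toy_score = lambda p: p
print(minimax(0, 3, True, toy_moves, toy_score))

# The catch: with roughly 30 legal moves per chess position, looking d moves
# ahead means examining on the order of 30**d positions.
for d in range(1, 7):
    print(d, 30 ** d)
```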

Although computer chess has made great strides in the last 10 years, the gap between the best computer chess programs and the best human players is as wide as ever. Top-rated computer programs are ranked among the world’s top thousand players.

When you model a chess player’s success, you often find that they sense positions rather than analyze them. They base this feeling on similar positions they have encountered in the past. They don’t calculate as many moves ahead as a computer; where the computer calculates, masters recall positions visually in their minds. They discard many positions as undesirable without trying to analyze why they’re bad. When one of the best chess players was asked how many moves ahead he could see, he replied, “One. But it’s always the best one!”

Even in the simpler game of checkers, the unofficial champion Dr. Marion Tinsley was beaten by his closest competitor, the computer Chinook, in 2 out of 40 games. As a side note, Chinook could calculate 3,000,000 moves per minute and look 20 moves ahead. Dr. Tinsley, champion since 1955, said, “Chinook was programmed by a man, but I was programmed by God.”

The Neuro-Linguistic Metaphor

How does the programming metaphor affect NLP? Let’s think about modeling. NLP was originally developed by extracting patterns of idiosyncratic genius (Perls, Satir, and especially Erickson) and applying them in various fields. This was amazingly convenient and creative in some ways and disastrous in others. If you extract Erickson’s amazing hypnotic skills and treat them as if they can be transferred independently of Erickson’s ethics and values, you’re asking for trouble. And the problems will be proportional to the power of the tools you have. Perhaps that’s why Gregory Bateson, who endorsed The Structure of Magic I, later said, “NLP? If you encounter NLP, run as fast as you can in the opposite direction. I stopped sending people to study Milton; they all come back energetically hungry.”

Many NLP techniques read like algorithms. Step 1: establish rapport. Step 2: access the state. Step 3: … These step-by-step models of techniques are convenient as long as we remember that they don’t actually happen sequentially. They are a convenient fiction, a frozen abstraction. What does it mean to do a six-step reframe with new behavior generation during anchor collapse using a metaphor?
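
Taken literally, the metaphor turns a technique into something like the sketch below (Python; the step names are invented for illustration, and the whole thing is a deliberate caricature). The point is exactly what the paragraph above says: a real interaction does not decompose into discrete steps that finish one before the next begins.

```python
# The programming metaphor taken literally: an NLP technique written as a
# sequential procedure. This is the "convenient fiction" described above -
# in a live interaction none of these are discrete, completed steps.
def establish_rapport(client):    print("Step 1: establish rapport with", client)
def access_state(client):         print("Step 2: access the state")
def anchor_state(client):         print("Step 3: set the anchor")
def test_and_future_pace(client): print("Step 4: test and future-pace")

def do_technique(client):
    establish_rapport(client)     # each step assumed to finish before the next starts
    access_state(client)
    anchor_state(client)
    test_and_future_pace(client)

do_technique("a client")
```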

Another main area where the programming metaphor had an effect is strategies and modeling. Most stimulus-response anchoring and modeling of internal process strategies were based on the rebellion of Miller, Galanter, and Pribram against the limitations (and behavioral tyranny) of the stimulus-response reflex model of the central nervous system. A bit earlier, in 1923, its discoverers (Sherrington and Pavlov) had referred to the stimulus-response reflex as just a convenient fiction. Miller and his colleagues improved the model by adding feedback to the historically sequential model of neural communication.

The accepted wisdom of strategies is that you extract physiology, beliefs, and internal sequences of sensory representations with corresponding submodalities. Strategy diagrams are mapped as algorithms with loops, pointers, and steps. These maps are not the territory.

The Brain Is Not a Computer

The human brain weighs about three pounds and contains over 100 billion neurons. The cerebral cortex alone contains more than 10 billion neurons. The connections between nerve cells matter more than the cells themselves. A single neuron can have up to 100,000 inputs. The cortex contains over a million billion connections. If you counted them at one per second, it would take you 32 million years.
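
The counting figure is easy to check (the sketch below assumes “a million billion” means 10^15 connections, counted at one per second):

```python
# Checking the counting claim: a million billion connections at one per second.
connections = 10 ** 15
seconds_per_year = 60 * 60 * 24 * 365.25
years = connections / seconds_per_year
print(round(years / 1e6, 1), "million years")   # about 31.7 million years
```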

We don’t have electronics like that. No two brains are exactly alike. We’re born with all our neurons, and in the first year of life up to 70% of them die off as the brain’s structure forms. The surviving neurons form an even more complex network of connections, and our brain quadruples in size. Certain connections are strengthened by use while others die away. We learn from consequences and mistakes. Nerve cells specialize and form a hyper-dense network. The brain is not independent of the world; it is shaped by the world. Today, neurologists often describe the brain as an interconnected, decentralized, parallel-functioning, distributed network of simultaneous waves of interactively resonating patterns. The brain is a vast collection of hopes and fears all at once.

The computer metaphor would have consciousness run by a system of symbols governed by logical laws. If that were the case, consciousness could actually be studied independently of the brain. But consciousness is not the brain, and building theories of how consciousness works without considering how the brain operates is very risky. The brain surpasses all models because it builds all models.

The brain uses processes that change themselves: they create memory, which changes how we think about the future. We build perceptual filters that determine what we pay attention to, and what we pay attention to strengthens the networks that build those filters. The brain must model many different possible futures at the same time. We can’t know in advance what to pay attention to, because the world doesn’t come to us with labels attached. We attach the labels and then often forget we did, thinking the labels are an inherent part of the world. Computers can extend the nervous system; they can’t replace or model it. In fact, many cyberneticists build computers precisely to better understand how they think the brain might work.

Our second article will explore the kinds of computer neural networks modeled on the way the brain works, and then begin to lay out a new model of strategies that is not so digitally based.

In Conclusion: A Story from Gregory Bateson

He tells of a man who wanted to learn about the brain, how it really works, and whether computers would ever be smarter than humans. This man entered the following question into the most powerful modern computer (which took up an entire floor of the university): “Do you think you’ll ever think like human beings?”

The machine rattled and hummed, starting to analyze its own computing abilities. Finally, the machine printed its answer on a slip of paper. The man, hurrying and excited, read these neatly typed words: “That reminds me of a story…”

Authors: Brian Van der Horst, Joseph O’Connor
