NLP: Do Language Patterns Really Work?
When discussing NLP (Neuro-Linguistic Programming), the real issue isn't "language patterns" specifically; it's the critical evaluation of any proposed model, regardless of its name or who introduced it. The effects of most NLP techniques can be verified through sensory experience. For example, you can perform an anchor collapse on a client (even one unfamiliar with NLP) and calibrate the effect, or establish rapport and then lead the interaction. The same applies to submodalities, anchors, eye-accessing cues, rapport, the "Swish" technique, and "Parts Integration," among others.
However, some NLP models cannot be verified directly through sensory experience; you simply have to believe they work. A prime example is Robert Dilts's "language patterns" (or "mind lines," as Michael Hall calls them). You might sometimes observe a shift in belief after an intervention, but it's hard to pinpoint whether the change came from the language patterns themselves, from suggestion, or simply from the client's belief that interacting with a renowned NLPer should make them feel better.
If you can’t verify effectiveness through personal experience, are there any rigorous studies—like randomized trials with untrained clients? As far as I know, such research doesn’t exist.
In practice, language patterns often rely on belief in their effectiveness (and, as you may recall, belief doesn’t require experiential confirmation, unlike conviction). Belief also acts as a filter for perception. If a practitioner believes language patterns work, they may ignore situations where the desired effect doesn’t occur and attribute any positive outcome to the use of language patterns.
This is why results from training sessions don’t really count—they’re “studies” conducted on adapted clients. An adapted client knows what’s supposed to happen and demonstrates the expected result. You could even perform a mock shamanic dance in front of them and get a similar effect.
Another point: why assume that all patterns (14 from Dilts or 26 from Hall) work? We’re often encouraged to accept the entire model as a package. Maybe only 3–5 patterns actually work (if any), and the practitioner only gets results after cycling through them all.
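The inflation effect described above can be illustrated with a minimal simulation sketch (the numbers are assumptions, not data): suppose none of the 14 patterns has any real effect, and the client simply has some chance of shifting on their own at each attempt. A practitioner who cycles through the full set and credits whichever pattern came last will still observe a high "success" rate.

```python
import random

random.seed(42)

N_PATTERNS = 14       # Dilts' count, per the text
P_SPONTANEOUS = 0.10  # assumed per-attempt chance the client shifts on their own
SESSIONS = 100_000

successes = 0
for _ in range(SESSIONS):
    # The practitioner cycles through all patterns; none has any real effect.
    # The client may shift spontaneously during any attempt.
    for _ in range(N_PATTERNS):
        if random.random() < P_SPONTANEOUS:
            successes += 1
            break

observed = successes / SESSIONS
expected = 1 - (1 - P_SPONTANEOUS) ** N_PATTERNS  # analytically about 0.77
print(f"observed success rate: {observed:.2f} "
      f"(a single attempt would succeed only {P_SPONTANEOUS:.2f} of the time)")
```

Under these assumed numbers, a zero-effect toolkit still "works" in roughly three sessions out of four, which is exactly the kind of outcome a believing practitioner would read as confirmation.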
It’s nearly impossible to observe the effectiveness of a specific language pattern in changing beliefs, though sometimes you might see a “pattern interrupt.” But a pattern interrupt doesn’t necessarily change a belief—otherwise, you could change beliefs just by interrupting a handshake.
Dilts and Hall modeled language patterns on the legendary Bandler, but on his demonstrations of "paranoid" patterns. On what grounds was it assumed that the same patterns would also work in reverse? Perhaps it's just faith in big names at play.
"Great orators used language patterns," and so on. Great orators used many things: nominalizations, interjections, even nose breathing. In modeling, you would test a pattern by subtraction: remove it and see whether the effect disappears. This hasn't been done. Maybe their speeches would have been even more powerful without language patterns.
As a result, we have a model whose effectiveness simply can’t be verified. That’s why someone like Mikhail Pelekhaty can confidently claim that language patterns don’t work—there’s no way to prove or disprove it. And that’s why it’s possible to create a huge number of patterns and general models of language patterns—since their effectiveness can’t be tested in practice, it’s only confirmed by the confidence of their creators.
In NLP, the validity of a model is determined by its usefulness. But here, we have a model whose usefulness (effectiveness) hasn’t been tested. Yes, in theory, you could test the language patterns model and at least find out which patterns actually work. But that would require a proper scientific study with a large sample of untrained clients.
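For a sense of what "a large sample" would mean here, a back-of-the-envelope sketch using the standard two-proportion sample-size formula (the effect sizes below are hypothetical, chosen only for illustration):

```python
import math
from statistics import NormalDist

def per_arm_sample_size(p1: float, p2: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-arm n for detecting a difference between two proportions
    (normal approximation, two-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)
    pooled_var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * pooled_var / (p1 - p2) ** 2)

# Hypothetical rates: 50% belief change in a control group vs. 60% with patterns.
n = per_arm_sample_size(0.50, 0.60)
print(f"clients needed per arm: {n}")  # 385 under these assumptions
```

In other words, if language patterns produced even a modest ten-point lift over whatever happens by default, a conventionally powered trial would need several hundred untrained clients per group, which is far beyond anything NLP trainings provide.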
What I’ve always liked about NLP is that most models are easy to test through experience. When asked how something works, you can always say, “Try it yourself and see what happens.” However, there are some models in NLP that are hard to test experientially and haven’t been scientifically validated either. In my opinion, these models deserve a healthy dose of skepticism (as do all models, but especially these).