AI Don't Speak No English
We assume AI speaks English because it was trained on human text. But what if that's backwards? What if there's a better pidgin language hiding in plain sight—one that gets superior results?
In 1904, a German horse named Clever Hans became the most famous animal in the world. Hans could solve arithmetic problems, tell time, and identify musical notes by tapping his hoof the correct number of times. Crowds gathered in Berlin to witness this miraculous horse who seemed to understand human language as well as any schoolchild.
But Hans wasn't clever at all. He was responding to tiny, unconscious cues from his questioners—a slight lean forward when he reached the correct number of taps, a barely perceptible relaxation of posture that signaled he should stop. The humans thought they were communicating through language. Hans was reading body language. They were speaking different languages entirely, but the communication worked—until someone figured out what was really happening.
Today, we're making the same mistake with artificial intelligence, only in reverse. We think we should speak to AI "naturally," in plain English, the way we talk to each other. But what if that's exactly wrong? What if there's a better way to communicate with these systems—not their native language, but something closer to a shared pidgin that actually works?
The Apple Pie Experiment
Let me tell you about an experiment. It's almost embarrassingly simple, which is precisely what makes it so revealing.
A researcher wanted to see what would happen when an AI was asked to generate the same recipe using two different approaches. The first was natural, conversational English:
"Classic American Apple Pie: Provide comprehensive steps with detailed ingredients including Granny Smith apples, cinnamon, nutmeg, sugar, lemon juice, and a buttery pie crust."
The second was structured, systematic JSON:
{
  "system_role": "expert_chef_instructor",
  "task": {
    "type": "recipe_generation",
    "subject": "Classic American Apple Pie"
  },
  "requirements": {
    "detail_level": "comprehensive",
    "include_timing": true,
    "include_temperatures": true
  },
  "mandatory_ingredients": [
    "Granny Smith apples",
    "ground cinnamon",
    "ground nutmeg",
    "granulated sugar",
    "fresh lemon juice",
    "buttery pie crust"
  ]
}
Same AI. Same basic request. But the results were startlingly different.
The natural language prompt produced a friendly, conversational recipe—the kind you might get from a neighbor sharing her grandmother's method. Warm, accessible, but somewhat vague about timing and technique.
The JSON prompt produced something else entirely: a professional, systematically organized instruction manual with precise measurements, detailed timing, equipment lists, and pro tips. It read like something from a culinary school textbook.
Same intelligence. Same knowledge base. Completely different persona.
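Both versions of the experiment are easy to reproduce. Here is a minimal Python sketch that assembles the two payloads exactly as quoted above; the variable names are illustrative, and no particular AI endpoint is assumed, since any chat-style API would accept either string as a user message:

```python
import json

# Natural-language version: one conversational sentence.
natural_prompt = (
    "Classic American Apple Pie: Provide comprehensive steps with detailed "
    "ingredients including Granny Smith apples, cinnamon, nutmeg, sugar, "
    "lemon juice, and a buttery pie crust."
)

# Structured version: the same request expressed as explicit fields.
structured_prompt = json.dumps(
    {
        "system_role": "expert_chef_instructor",
        "task": {
            "type": "recipe_generation",
            "subject": "Classic American Apple Pie",
        },
        "requirements": {
            "detail_level": "comprehensive",
            "include_timing": True,
            "include_temperatures": True,
        },
        "mandatory_ingredients": [
            "Granny Smith apples", "ground cinnamon", "ground nutmeg",
            "granulated sugar", "fresh lemon juice", "buttery pie crust",
        ],
    },
    indent=2,
)

print(structured_prompt)
```

Sending each string to the same model and comparing the outputs side by side is the whole experiment; the content of the request is identical, so any difference in the responses comes from the format alone.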
The Pidgin Revelation
This is the moment when our assumptions about communication begin to unravel.
We've been thinking about AI interaction all wrong. We assume that because we're dealing with language, we should use our language—natural, conversational English. After all, these systems were trained on human text. They understand grammar, context, nuance. Surely speaking to them "naturally" is the most effective approach?
But consider this: when Portuguese traders first encountered Chinese merchants in the 16th century, neither group spoke the other's language fluently. So they developed something new—a pidgin language that borrowed elements from both Portuguese and Chinese, but belonged fully to neither. It wasn't anyone's native tongue, but it worked better than either language alone for their specific purpose: trade.
JSON isn't AI's native language any more than English is. An AI's true native language is vectors and matrix operations—mathematical relationships in high-dimensional space that no human could possibly speak. But JSON functions as something far more valuable: a pidgin language between human intent and machine processing.
The structure matters. The explicit relationships matter. The removal of ambiguity matters.
When you write "detail_level": "comprehensive", you're not just requesting comprehensive details—you're signaling to the AI that this is a systematic, professional interaction that requires systematic, professional output. The format itself becomes part of the message.
The Tipping Point of Communication
Malcolm Gladwell has long argued that small changes can make big differences—that the right tiny adjustment can tip a system into an entirely different state. The shift from natural language to structured prompts looks like exactly that kind of tiny adjustment, and it tips communication into an entirely different state.
Think about what happens in the AI's processing pipeline when it encounters these different formats:
Natural Language: The system doesn't literally "switch modes," but the statistical patterns shift. It draws more heavily on conversational training data—casual cooking blogs, friendly recipe sharing, informal kitchen wisdom. The probability weights favor approachable, accessible content.
Structured Format: The probabilistic emphasis shifts toward professional patterns—technical documentation, systematic instruction manuals, formal culinary training materials. The same underlying statistics, but weighted differently.
It's not like flipping a switch from "casual" to "professional." Rather, different input formats nudge the probability distributions in different directions, like how the same musician might play differently in a jazz club versus a concert hall—same skills, different statistical tendencies based on context.
But here's the fascinating part: neither approach is objectively "better." They're optimized for different purposes. Natural language is perfect when you want creative, conversational, accessible output. Structured prompts are superior when you need systematic, comprehensive, professional results.
The mistake is thinking there's one "right" way to communicate with AI, just as it would be a mistake to speak to a sommelier the same way you order at McDonald's.
The Clever Hans Principle
Remember our German horse? The real revelation wasn't that Hans couldn't do arithmetic—it was that humans and animals could communicate effectively without sharing a language, as long as they found the right protocol.
We're in a similar situation with AI. These systems don't "understand" language the way humans do. They're pattern-matching machines that happen to be extraordinarily good at statistical relationships in text. But that doesn't mean communication is impossible—it means we need to find the right protocol.
The researchers who exposed Clever Hans made a crucial discovery: when they eliminated the unconscious human cues, the horse's abilities vanished. But what if, instead of seeing this as fraud, we recognized it as a different kind of communication? What if Hans and his questioners had developed an effective interspecies protocol, even if it wasn't the one they thought they were using?
JSON prompts work the same way. They're not "tricking" the AI into being more systematic—they're using the AI's actual processing patterns to achieve better communication. Instead of fighting against how these systems work, we're working with them.
The Hidden Structure of Thought
Here's where the story gets really interesting.
In essays collected in the 1950s, linguist Benjamin Lee Whorf proposed that the structure of language shapes the structure of thought. Different languages don't just use different words for the same concepts—they create different ways of thinking about reality entirely.
What if the same principle applies to AI communication?
When you prompt an AI with natural language, you're not just requesting information—you're shaping how the AI "thinks" about the problem. Conversational prompts prime conversational thinking. Technical prompts prime technical thinking. The format doesn't just affect the output; it affects the cognitive mode.
This explains why experienced prompt engineers often sound like they're speaking in code:
"You are a world-class expert in X. Your task is to Y. Please provide Z in the following format..."
It looks robotic to outsiders, but it's actually more sophisticated communication. They've learned to speak the AI's second language—not its native vector space, but the structured pidgin that produces reliable results.
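That expert-role pattern is reusable: fix the skeleton, vary the slots. A hypothetical sketch in Python (the template wording follows the quote above; the slot names and example values are invented for illustration):

```python
# Reusable template for the expert-role pattern quoted above.
# The placeholder names (domain, task, deliverable, fmt) are illustrative.
EXPERT_TEMPLATE = (
    "You are a world-class expert in {domain}. "
    "Your task is to {task}. "
    "Please provide {deliverable} in the following format: {fmt}."
)

prompt = EXPERT_TEMPLATE.format(
    domain="American baking",
    task="write a fail-safe apple pie recipe",
    deliverable="numbered steps with times and temperatures",
    fmt="a numbered list",
)

print(prompt)
```

The point is not the specific wording but the discipline: every run of the template produces the same structural signal, so the model's "cognitive mode" stays consistent across requests.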
The New Rules of Communication
So what does this mean for the rest of us? How do we apply this insight?
Rule 1: Match Format to Function
- Need creativity? Use natural language
- Need precision? Use structured prompts
- Need consistency? Use templates
Rule 2: Be Explicit About Context
{
  "role": "expert_analyst",
  "audience": "technical_professionals",
  "tone": "authoritative_but_accessible"
}
Rule 3: Remove Ambiguity
Instead of: "Write something good about X"
Try: "Write a 500-word analysis of X for Y audience, focusing on Z aspects"
Rule 4: Use Structure as Signal
The format itself communicates your expectations. Bullet points signal systematic thinking. Conversational prompts signal creative thinking. Choose deliberately.
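Rules 2 and 3 combine naturally into a single prompt builder: a structured context header followed by an explicit, ambiguity-free task line. A minimal sketch, with all field names and the helper itself invented for illustration:

```python
import json

def build_prompt(role, audience, tone, task, word_count=None, focus=None):
    """Assemble a structured context header (Rule 2) plus an explicit,
    fully specified task line (Rule 3). Field names are illustrative."""
    context = {"role": role, "audience": audience, "tone": tone}
    task_line = task
    if word_count is not None:
        task_line += f" in roughly {word_count} words"
    if focus is not None:
        task_line += f", focusing on {focus}"
    return json.dumps(context, indent=2) + "\n\n" + task_line

prompt = build_prompt(
    role="expert_analyst",
    audience="technical_professionals",
    tone="authoritative_but_accessible",
    task="Write an analysis of caching strategies",
    word_count=500,
    focus="trade-offs for read-heavy workloads",
)
print(prompt)
```

Nothing here is specific to any one model; the helper just makes the rules above mechanical, so the structural signal is the same every time.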
The Broader Implications
This isn't just about getting better recipes from ChatGPT. We're witnessing the emergence of a new form of human-machine communication that will shape the next decade of technological interaction.
Consider the parallels:
- Early internet users had to learn URL syntax and command lines
- Smartphone users had to learn touch gestures and app navigation
- Now we're learning to communicate with AI through structured languages
Each technological shift required new communication protocols. The difference is that AI communication feels like it should be "natural" because it involves language. But language with AI isn't conversation—it's programming.
The people who master structured prompting today will have the same advantage that early internet users had with search engines, or early smartphone adopters had with mobile apps. They'll be native speakers of the new lingua franca between humans and machines.
The Future of Understanding
In 1907, the psychologist Oskar Pfungst, who finally debunked Clever Hans, thought he had ended the story. The horse couldn't really do math, case closed. But he missed the bigger point: Hans and his questioners had actually achieved something remarkable—successful interspecies communication through an entirely unconscious protocol.
Today's AI systems are like Clever Hans, but in reverse. They can do the intellectual equivalent of arithmetic, but they're reading statistical cues instead of body language. And just like Hans's questioners, we're unconsciously developing new protocols for communication without fully understanding what we're doing.
The difference is that this time, we have the opportunity to do it consciously. To recognize that effective communication with AI isn't about speaking "naturally"—it's about finding the right pidgin language for each interaction.
We're not just learning to talk to machines. We're developing a new form of language itself—one that sits at the intersection of human intent and artificial intelligence, belonging fully to neither but serving both.
The question isn't whether AI will understand us better in the future. It's whether we'll learn to understand ourselves better through the process of learning to communicate with AI.
And that might be the most human story of all.
This piece is based on experimental data comparing natural language vs. structured prompting with AI systems. The apple pie experiment used small sample sizes and the insights should be considered preliminary observations rather than definitive conclusions about human-AI communication.