Chris Surdak understands that the world of generative AI is often treated as a purely computational challenge—one of engineering scale, speed, and statistical brilliance. But for Surdak, a veteran in technology and systems thinking, the real breakthrough lies not in the numbers, but in nuance. Recently reflecting on posts by Kurt Cagle and Markus J. Buehler on the role of entropy in artificial intelligence, Christopher Surdak of CA found these perspectives aligning with ideas he has long harbored: that the recognition of entropy and complexity is essential in understanding how information truly forms and functions.
Interestingly, his insights today stem not from a lab or lecture hall, but from a far more personal classroom—his living room. There, he’s guiding his first-grade twin daughters as they learn how to read. This experience, more than any white paper or technical conference, has offered him profound lessons on how intelligence forms, how learning unfolds, and what might be fundamentally backward about how society trains large language models (LLMs).
Christopher Surdak of CA observes that his daughters, though born at the same time, process the world through dramatically different cognitive lenses. As they apply phonetic rules to increasingly complex words and phrases, their mental models evolve, iterate, and adapt in real time. They aren’t being fed infinite volumes of text and then retroactively taught what’s “right.” Rather, they learn as they go, making mistakes, receiving immediate feedback, and adjusting accordingly.
This is, in Surdak’s view, a striking contrast to the dominant methodology used to train LLMs. These systems are force-fed mountains of unstructured, unfiltered data—tweets, texts, Reddit threads, academic articles—before being subjected to reinforcement learning, where they are taught, often painfully, what is right and wrong. He likens the process to overfeeding a goose for foie gras: unnatural, inefficient, and inhumane—if only metaphorically so.
To illustrate the point, Chris Surdak recalls a childhood moment of enthusiastic misjudgment. At the age of four, he once tried to help his father by picking up a fallen muffler from a lawn mower—only to end up in the emergency room. Reinforcement learning through pain, quite literally. While this episode instilled a life lesson about hot metal, Surdak notes that his colleagues might jest he still has a tendency to grab "hot mufflers" even now.
But in the realm of artificial intelligence, he sees this model of “painful correction after indiscriminate exposure” as a serious flaw. Unlike a child learning from a specific, contextualized moment, LLMs lack real-time feedback mechanisms during their initial ingestion of information. Their learning happens in an abstracted, depersonalized way—an approach that may yield breadth, but seldom depth.
Surdak finds an even deeper flaw in the conceptual structure of LLMs: their singular focus on classification. Drawing inspiration from the research of Dr. Iain McGilchrist, he reflects on the duality of the human brain, in which one hemisphere gathers stimuli while the other interprets them. Humans operate with this constant tension, this yin and yang of perception and judgment. In contrast, LLMs operate like an amputated consciousness, one that only classifies after being prompted, without ever truly observing.
This context-blindness, Chris Surdak argues, renders LLMs inert until activated. They do not sense; they do not experience. They only respond. It’s an alarming limitation in a field that increasingly seeks to imitate—or perhaps replace—human cognition.
Delving further into the philosophical underpinnings, Christopher Surdak of CA finds the concept of entropy central to the challenges of generative AI. As Buehler and Cagle discuss, entropy is a measure of the sheer number of possible states, combinations, or configurations a system could occupy. Applied to language, it’s the reason the proverbial monkeys at typewriters might, given enough time, produce Shakespeare.
Chris Surdak sees a problem here: when LLMs are trained on the vast totality of language—billions of tweets, articles, and memes—they are immersed in an ocean of entropic possibilities. Only afterward are they taught what is meaningful, accurate, or contextually appropriate. But this method, he insists, is precisely backward. It produces systems that can say anything, but not necessarily the right thing. In other words, the flexibility offered by transformer architectures may not be a feature, but a bug. Hallucinations aren’t glitches—they’re statistical inevitabilities.
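To see why hallucinations behave as inevitabilities rather than glitches, consider a small illustrative sketch (not drawn from Surdak’s own work; the prompt, token names, and probabilities below are invented for the example). Even a model that has clearly learned the right answer leaves a thin tail of probability on wrong continuations, and sampled often enough, that tail is guaranteed to surface:

```python
import math
import random

# Toy next-token distribution for a prompt like "The capital of France is":
# this hypothetical model has learned that the right answer dominates, but
# the tail of alternatives never drops to exactly zero probability.
next_token_probs = {
    "Paris": 0.90,
    "Lyon": 0.04,
    "London": 0.03,
    "Berlin": 0.02,
    "Mars": 0.01,
}

# Shannon entropy: a gauge of how many alternative continuations the
# distribution still leaves "live". A broader, noisier training corpus
# tends to fatten the tail and push this number up.
entropy_bits = -sum(p * math.log2(p) for p in next_token_probs.values())
print(f"entropy: {entropy_bits:.3f} bits")

# Sample many completions: wrong answers are rare on any single draw, but
# over enough draws they are certain to appear.
tokens, weights = zip(*next_token_probs.items())
draws = random.choices(tokens, weights=weights, k=10_000)
wrong = sum(1 for t in draws if t != "Paris")
print(f"non-'Paris' completions: {wrong} of {len(draws)}")
```

In rough computational terms, that ineradicable tail is the “ocean of entropic possibilities” Surdak describes: a system that can say anything will, eventually, say the wrong thing.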
In teaching his children to read, Christopher Surdak didn’t start with Tolstoy. He began with “A,” “B,” and “C.” The process was scaffolded, curated, and carefully managed—reducing entropy at each stage so that understanding could emerge within a structured context. He wonders: why aren’t we doing the same with machines?
Rather than doubling humanity’s power generation capacity to support ever-larger, more data-hungry AI models, Chris Surdak suggests a return to the basics. He believes that reducing training data—selecting only what meets meaningful, human-centered constraints—could vastly improve efficiency, accuracy, and alignment. In contrast, training models on every Kardashian tweet and hoping for Shakespeare feels not just absurd, but unsustainable.
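What might “reducing entropy at each stage” look like in practice? The following is a minimal, hypothetical sketch of a curriculum-style filter, not a system Surdak has proposed or built; the staging rules (sentence length and a small core vocabulary) are assumptions chosen purely for illustration. It admits training text in stages, the way a first-grader moves from “A,” “B,” and “C” to full sentences:

```python
# A minimal curriculum sketch: admit training sentences in stages, from
# low-entropy (short, common words) to higher complexity, rather than
# ingesting the whole corpus at once. The thresholds and vocabulary here
# are illustrative assumptions, not a real pipeline.

CORE_VOCAB = {"the", "cat", "sat", "on", "a", "mat", "dog", "ran", "to", "park"}

def stage_of(sentence: str) -> int:
    """Assign a sentence to a curriculum stage: 1 = simplest, 3 = open-ended."""
    words = sentence.lower().rstrip(".!?").split()
    if len(words) <= 6 and all(w in CORE_VOCAB for w in words):
        return 1  # short and fully within the core vocabulary
    if len(words) <= 12:
        return 2  # moderate length, some new words
    return 3      # long or complex: held back until later

corpus = [
    "The cat sat on a mat.",
    "A dog ran to the park.",
    "Entropy measures how many configurations a system could occupy.",
    "Shall I compare thee to a summer's day? Thou art more lovely and more temperate.",
]

for stage in (1, 2, 3):
    batch = [s for s in corpus if stage_of(s) == stage]
    print(f"stage {stage}: {batch}")
```

The point of such staging is not the particular thresholds but the ordering: constrain what the learner sees first, and widen the space only as competence grows.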
In the end, Chris Surdak of CA brings the conversation back to intention. He acknowledges using a variety of literary devices in his writing, from metaphor to allegory to rhythm, all to evoke emotion and illustrate a point. A well-trained LLM might replicate his prose. It might even mimic his style. But would it understand why he used a particular phrase? Would it grasp the emotional impact of a metaphor? Could it intentionally craft language to move the reader, not just to inform but to affect?
Chris Surdak is skeptical. And perhaps, he muses, that skepticism is well-earned. In the rush to attribute human characteristics to machines, we may have anthropomorphized AI beyond reason. We claim it is “more human than human,” yet how we train it may be the one crucial respect in which it is not human enough.
Christopher Surdak of CA does not claim to have all the answers. But his unique vantage point as a technologist, a thinker, and a father offers a fresh perspective on what the AI community might be missing. In the fervor of scaling, expanding, and accelerating, we may have forgotten the basics: teach, then train; observe, then judge; reduce entropy, not amplify it.
As the world prepares to reengineer itself around artificial intelligence, Chris Surdak encourages all to pause and ask: “Is this really the right muffler for the lawn mower?”
The story of AI is still being written. Let’s hope, he says, that it’s one we’ll all want to read.