
Stochastic Parrots and Where to Find Them

6 min read

A “stochastic parrot” is, according to some, a machine that can produce convincing-sounding text without ever truly understanding the meaning of what it writes. I can relate.

Stochastic parrots are something of a contentious issue in the fields of AI and deep learning: the term itself was coined by Emily M. Bender et al. in their 2021 paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”. The “stochastic” part of the name refers to the fact that modern AIs (most commonly large language models, or LLMs) can be infused with randomness in order to produce apparently non-deterministic behaviour – if you ask ChatGPT the same question twice (resetting the context in-between), it will give you a slightly different response each time.
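To make the “stochastic” part concrete, here’s a minimal sketch of temperature-scaled sampling – the standard trick that turns the same input into different outputs on different runs. The vocabulary and logits below are made up for illustration; this isn’t any real model’s API.

```python
import math
import random

def sample_next_token(logits, temperature=0.8):
    """Sample one token index from a logit distribution.

    With temperature > 0 the choice is random, so repeated calls on the same
    logits can return different tokens -- the "stochastic" part. As the
    temperature approaches 0, this approaches greedy (deterministic) decoding.
    """
    scaled = [l / temperature for l in logits]
    max_l = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(l - max_l) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# Hypothetical logits for the next word after "The parrot said ..."
vocab = ["hello", "squawk", "nothing", "goodbye"]
logits = [2.1, 1.9, 0.3, -1.0]

for _ in range(3):
    print(vocab[sample_next_token(logits)])  # likely differs from run to run
```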

On the surface, this can make an LLM look a lot like it has an “internal state” of sorts, i.e. that it somehow retains some information about your previous request, which influences all its future responses, much like a human. But anyone familiar enough with LLMs will know that this is not the case – resetting the conversation (or context window) will cause an AI chatbot to forget everything that was said prior. In other words, LLMs are stateless: they have no internal state beyond the conversation itself, despite being able to randomize their output.
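One way to picture this statelessness: in a typical chat setup, the only “memory” is the list of messages that the client sends back with every request. The sketch below is a rough illustration of that idea – the generate function is a hypothetical stand-in, not a real SDK call.

```python
def generate(messages: list[dict]) -> str:
    """Hypothetical stand-in for a stateless LLM call.

    A real model would produce text conditioned on `messages`; here we just
    report how much context it was given. Nothing persists between calls.
    """
    return f"(reply conditioned on {len(messages)} prior messages)"

conversation = []  # the *client* holds the state, not the model

def ask(user_text: str) -> str:
    conversation.append({"role": "user", "content": user_text})
    reply = generate(conversation)  # the full history is re-sent on every call
    conversation.append({"role": "assistant", "content": reply})
    return reply

def reset() -> None:
    conversation.clear()  # "forgetting" is just the client dropping its list

print(ask("Hello"))   # conditioned on 1 message
print(ask("Again"))   # conditioned on 3 messages
reset()
print(ask("Hello"))   # back to 1 message: the model never remembered anything
```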

But this fact alone isn’t enough to rule out understanding. LLMs certainly have a random component to them, that’s not up for debate. But does that mean they’re also parrots, in that they merely repeat whatever they’ve been trained to repeat, without any true grasp or understanding of language or its nuances? What would it take for us to be convinced otherwise?

There already exist extensive and diverse test suites and benchmarks against which LLMs are continually evaluated to test their apparent capabilities for understanding natural language. In many cases, LLMs can match or exceed human performance on such tests. For many, this isn’t sufficient to prove understanding in the human sense. And rightly so: with the sheer scale of training data (sometimes exceeding hundreds of billions of words), it is difficult to verify whether an LLM is really synthesizing new information, or just repeating what it saw in a very similar context. There are even concerns that the benchmark answers themselves have leaked into the training data.
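For a flavour of what such an evaluation boils down to, here is a toy harness: score a model on question/answer pairs with an exact-match metric, then do a crude contamination check for benchmark items appearing verbatim in the training corpus. The model function, the benchmark items and the corpus snippet are all invented for illustration.

```python
# Toy benchmark data and a made-up fragment of "training corpus".
benchmark = [
    {"question": "What is the capital of France?", "answer": "Paris"},
    {"question": "2 + 2 = ?", "answer": "4"},
]
training_corpus = "... what is the capital of france? paris ..."

def model(question: str) -> str:
    """Stand-in for an LLM call; a real system would generate text here."""
    return "Paris" if "France" in question else "4"

# Exact-match accuracy over the benchmark.
correct = sum(model(item["question"]).strip().lower() == item["answer"].lower()
              for item in benchmark)
print(f"exact-match accuracy: {correct / len(benchmark):.0%}")

# Naive contamination check: did a benchmark question leak into the training data?
for item in benchmark:
    leaked = item["question"].lower() in training_corpus.lower()
    print(f"possibly leaked: {leaked} -- {item['question']}")
```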

Stochastic Philosophical Zombie Parrots

The idea of philosophical zombies is a few decades older than the field of modern AI. First coined in the early 1970s, the phrase “philosophical zombie” refers to “that which looks and behaves like a human, but experiences no internal subjective experience, or qualia”. The reason I bring them into the discussion is that they share some similarities with stochastic parrots.

In particular, the notion of acting without awareness is common to both ideas. How do we tell the difference between something that is truly like us, and something that behaves like us without ever sharing the rich inner world that we experience just by existing? My previous post about Claude, a self-proclaimed sentient chatbot, touches on this briefly. Claude can certainly behave in a way that is consistent with having a firm grasp of the nuances of natural language, while making convincing statements about its own agenthood. I would say without a doubt that Claude appears to understand the questions and statements put to it, in that it can weave together earlier parts of a conversation into its outputs in a coherent and logical way.

There’s also an interesting distinction to be drawn between understanding and awareness. In the context of language, they could even be considered synonymous – if I am aware of a word’s meaning in a particular phrase or sentence, I no doubt understand it when I read it or hear it aloud. But this line of reasoning could be a slippery slope when talking about LLMs – awareness is also often associated with subjectivity, which is a property that many may be uncomfortable assigning to a machine.

Mechanistic Interpretability

One of the major arguments against the “stochastic parrot” hypothesis is that language models seem to build meaningful internal representations of the words and sentences they process. The study of these representations is known in the field as “mechanistic interpretability”, which aims to dig deep into the mind of an AI to see if there’s anything there that we can logically match up to our own notion of what a “good” representation should look like.
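As a flavour of what this looks like in practice, here is a tiny sketch of one common tool from that toolbox: taking hidden-state vectors from a model and checking whether distances in that space line up with human-readable concepts. The vectors below are invented for illustration, not extracted from any real model, and real hidden states have hundreds or thousands of dimensions.

```python
import numpy as np

# Invented 4-d "hidden states" for a few words, standing in for vectors
# that would normally be extracted from an intermediate layer of a model.
hidden = {
    "paris":  np.array([0.9, 0.1, 0.8, 0.0]),
    "france": np.array([0.8, 0.2, 0.9, 0.1]),
    "banana": np.array([0.1, 0.9, 0.0, 0.7]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# If the representation is meaningful, related concepts should sit closer
# together in the space than unrelated ones.
print(cosine(hidden["paris"], hidden["france"]))   # high similarity expected
print(cosine(hidden["paris"], hidden["banana"]))   # low similarity expected
```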

Some argue that this is a good way (or even the only way) to ensure AI safety and alignment in the long run: if we can interrogate and visualise what’s going on in the mind of the AI, we can have more confidence that it’s not going to turn on us in a world-ending fashion.

Dangers of Language Models

While I’ve so far chosen to focus on the more philosophical side of the “stochastic parrots” debate, it’s important not to miss the central points of the original paper. Regardless of how you define the criteria for understanding language, there is a growing body of research indicating that LLMs can ingrain and even amplify the biases in their training data (this can be any kind of bias, but the most concerning ones tend to be the ones ending in “-ism”). What’s more, any world view embodied by an AI is static: it will reset to whatever its default stances are whenever you start a new conversation.
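To make that a bit more concrete, here is a deliberately simplistic sketch of how skewed co-occurrence statistics in training text turn directly into skewed output probabilities. The toy corpus and the counting scheme below are nothing like a real LLM training pipeline – they’re just there to show the mechanism.

```python
from collections import Counter

# Toy "training corpus": the skew in these sentences is deliberate.
corpus = [
    "the doctor said he was busy",
    "the doctor said he would call",
    "the doctor said she was busy",
    "the nurse said she was tired",
    "the nurse said she would help",
]

# Count which pronoun follows each occupation, the way a simple next-word
# model (and, very loosely, an LLM) absorbs co-occurrence statistics.
counts = {"doctor": Counter(), "nurse": Counter()}
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words[:-2]):
        if w in counts and words[i + 1] == "said":
            counts[w][words[i + 2]] += 1

for occupation, c in counts.items():
    total = sum(c.values())
    probs = {pronoun: n / total for pronoun, n in c.items()}
    print(occupation, probs)  # skewed data in, skewed probabilities out
```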

Stochastic parrot or not, this is a problem. Contemporary AI tools (chatbots and the like) are like a frozen snapshot of their training data – and today’s training datasets are unfathomably large, far more than a human could hope to read in a whole lifetime. Scouring these datasets and controlling their biases is not a trivial task.

Conclusion

This has been my exploration of the idea of stochastic parrots. I’ve aimed to touch on the philosophical as well as the more practical sides of the debate. The topic of whether an AI (or even a human) truly understands something they read, hear, write or say is far broader than I can fully do justice to in a single post, but I’ll no doubt return to these ideas in future posts.

Take care!

Jamie
