Why Hobbes would have thought LLMs were a big deal

Language is one of the most important topics in philosophy, and LLMs are providing new insights into it.

In Chapter 4 of the Leviathan, Hobbes discusses Speech and language in general. The chapter begins:

“The Invention of Printing, though ingenious, compared with the invention of Letters, is no great matter…But the most noble and profitable invention of all other, was that of Speech, consisting of Names or Appellations, and their Connexion; whereby men register their Thoughts; recall them when they are past; and also declare them one to another for mutuall utility and conversation; without which, there had been amongst men, neither Commonwealth, nor Society, nor Contract, nor Peace, no more than amongst Lyons, Bears, and Wolves.”

Hobbes can be a little hard to follow, so I “translated” this and other chapters into modern English here:

Hobbes: The Leviathan in modern English

Language in Philosophy

Hobbes was not the first, and certainly not the last, philosopher to talk a lot about language. Much of the 20th century was characterized by the so-called Linguistic Turn, in which language became central to many philosophers, some going so far as to see language as the only legitimate topic for philosophy.

Explicitly or implicitly, much of this attention comes down to the fact that humans are uniquely talented in language. Although animal communication is often underrated, it is clear that humans use language more, and differently, than other animals: writing is one undisputedly unique human invention.

And so, with LLMs able to simulate human language use far better than anything except humans, philosophers are looking on with interest, and should be. What stands out? What have we learned from LLMs?

Stringing words together into coherent sentences is actually easier than we thought. LLMs are able to do this through a stochastic process of predicting the next word after looking at countless examples.
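A minimal sketch of that predict-the-next-word idea, using a toy bigram model (nothing like a real transformer, which predicts from the whole preceding context rather than a single word, but the stochastic principle is the same):

```python
import random
from collections import defaultdict, Counter

# A toy illustration, not a real LLM: count which word followed each
# word in a tiny "training corpus", then sample the next word in
# proportion to those counts.
corpus = (
    "the invention of speech the invention of letters "
    "the invention of printing"
).split()

# For each word, tally how often each other word follows it.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def predict_next(word):
    """Sample a next word weighted by how often it followed `word`."""
    counts = follows[word]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

print(predict_next("invention"))  # always "of" in this corpus
print(predict_next("of"))         # "speech", "letters", or "printing"
```

Scaled up from word-pair counts over a dozen words to learned statistics over trillions of words, this is the core loop that produces LLMs' fluent sentences.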

Context is everything. LLMs are perfectly happy to spin plausible-sounding sentences that have nothing to do with the real universe. Chomsky was right that grammaticality is independent of meaning.

Language may be enough for some types of reasoning. Hobbes and most philosophers believed that language is about communication and is at best an aid to thinking. But logicians for a while tried to show that language could actually be used to compute things, and some went so far as to say thought is impossible without language. LLMs seem to show that some impressive forms of calculation can be done just by manipulating language. But the jury is still out on whether all LLMs are really doing is plagiarizing the internet while we project our beliefs about thought onto them.

Perhaps the most important thing we’re learning is related to this last point. Humans have a well-founded intuition that language users are special: intelligent, sapient, conscious, “people.” Even researchers who understand the algorithms behind LLMs are prone to moments of projecting human qualities onto LLMs. We need to get past that, which is going to become even harder as AI gets better.