The intentionality we lend to LLMs
Stick googly eyes on a sock and people will anthropomorphise it. Give it thirty seconds and someone will be doing its voice. Our brains are hard-wired for it, which is a strange thing to say about brains, isn’t it? Brains don’t have wires.
These ideas come up frequently in discourse around AI: whether it’s OK to say ChatGPT “thinks” something, or Claude “remembers” me, or an agent “decided” to ignore an instruction. Some people reject “AI” as a valid term at all, “intelligence” being unacceptable even with the “artificial” qualifier.
The topic often surfaces when someone reminds everyone else that these tools aren’t actually doing what the words imply, because words have meanings, dammit. So what motivates us to push back on the language of AI, when we let other linguistic evolutions slide?
I have my own view on this, and the sands have shifted for me recently. Here’s where I’m at:
Go on, then