This is a repost promoting content originally published elsewhere. See more things Dan’s reposted.
…
“Botsplaining,” as I use the term, describes a troubling new trend on social media, whereby one person feeds comments made by another person into a large language model (like ChatGPT), asks it to provide a contrarian (often condescending) explanation for why that person is “wrong,” and then pastes the resulting response into a reply. They may occasionally add in “I asked ChatGPT to read your post, and here’s what he said,” but most just let the LLM speak freely on their behalf without acknowledging that they’ve used it. ChatGPT’s writing style is incredibly obvious, of course, so it doesn’t really matter if they disclose their use of it or not. When you ask them to stop speaking to you through an LLM, they often simply continue feeding your responses into ChatGPT until you stop engaging with them or you block them.
This has happened to me multiple times across various social media platforms this year, and I’m over it.
…
Stephanie hits it right on the nose in this wonderful blog post from last month.
I just don’t get why somebody would *ask an AI* to reply to me on their behalf, but I see it all the time. In threads around the ‘net, I see people say “I put your question into ChatGPT, and here’s what it said…” I’ve even seen coworkers at my current and former employers do it.
What do they think I am? Stupid? It’s not like I don’t know that LLMs exist, what they’re good at, what they’re bad at (I’ve been blogging about it for years now!), and more importantly, what people think they’re good at but are wrong about.
If I wanted an answer from an AI (which, just sometimes, I do)… I’d have asked an AI in the first place.
If I ask a question and it’s not to an AI, then it’s safe for you to assume that it’s because what I’m looking for isn’t an answer from an AI. Because if that’s what I wanted, that’s what I would have gotten in the first place and you wouldn’t even have known. No: I asked a human a question because I wanted an answer from a human.
When you take my request, ignore this obvious truth, and ask an LLM to answer it for you… it is, as Stephanie says, disrespectful to me.
But more than that, it’s disrespectful to you. You’re telling me that your only value is to take what I say, copy-paste it to a chatbot, then copy-paste the answer back again! Your purpose in life is to do for people what they’re perfectly capable of doing for themselves, but slower.
Galaxy Quest had a character (who played a character) who was as useful as you are, botsplainer. Maybe that should be a clue?
How low an opinion must you have of yourself to volunteer, unsolicited, to be the middle-man between me and a mediocre search engine?
If you don’t know the answer, say nothing. Or say you don’t know. Or tell me you’re guessing, and speculate. Or ask a clarifying question. Or talk about a related problem and see if we can find some common ground. Bring your humanity.
But don’t, don’t, don’t belittle both of us by making yourself into a pointless go-between for me and an LLM. Just… don’t.