The dustup over ChatGPT allegedly mimicking a famous actress’s voice raised concerns about making artificial intelligence seem more human.
Opinion | Who is to blame for AI becoming ever more human?
Is it the tech companies? Or is it us?
Amanda Ripley: I know the first time I used ChatGPT, my brain just kept thinking of it as a human. So I kept saying “please” and “thank you,” for example. Have you ever caught your brain treating AI like it’s human?
Josh Tyrangiel: I started very polite. You’re trained in society to be nice to things. And now that I’ve been using it for a year-plus, I’m like: “Just do it. Just do the thing, spit it back out!” And then if it’s too long, I’m like, “Too long!” So we’ve definitely gotten into a productive but very dysfunctional relationship.
Amanda: So when it started out, there was kind of a honeymoon period. And now it’s like you’ve been married for 20 years, and you’re just barking orders.
Josh: Yeah. It totally started out like a Jane Austen novel where everyone is excessively polite. And now it’s very much like Edith and Archie Bunker — like an ancient relationship of screaming between the kitchen and the dining room.
Bina Venkataraman: This is really about human nature. The way that we interact with these chatbots is related to how we do things and how we’ve been doing them since time immemorial. We anthropomorphize animals. We project our own sense of what human consciousness, cognition and intelligence are onto other objects and other beings, and this is an extension of that.
We might be extrapolating their intelligence beyond where they actually are today. These systems are in fact far from the truly conscious, sentient beings that people fear: this sort of artificial general intelligence, or the singularity, or the scary scenario where AI takes over with its own intent and consciousness.
Amanda: You’re saying it’s less about the AI and more about us — and what we project onto the machine?
Josh: The first real-world example of this that most people have had is the hallucination phenomenon, which, in ordinary technological language, is just a software bug. The software doesn’t respond the way you expect it to.
But because we very rapidly condition this thing to be a kind of human mirror, it’s actually everybody’s first Milgram-experiment experience, because what you’re doing is placing the chatbot in a position of some authority. It has knowledge and authority over you. So the weirdness of inputting something and then getting back something that is clearly either wrong or slightly manipulative is largely because of us. We are used to software bugs. We’re not used to this position where we imbue people with trust and authority and they screw with our heads.
Listen to the full conversation here: