People tend to anthropomorphize – we like to “attribute human form or personality to things not human” [merriam-webster.com/dictionary/anthropomorphize]. Cal Newport, an associate professor of computer science at Georgetown University, explains in simple terms for The New Yorker what ChatGPT [& similar tools] is, & isn’t.
newyorker.com/science/annals-of-artificial-intelligence/what-kind-of-mind-does-chatgpt-have
Google also has a short FAQ.
blog.google/inside-google/googlers/ask-a-techspert/what-is-generative-ai/
Long story short, ChatGPT is more of a mimic, like a parrot or mynah bird, and nothing like Mr. Ed, if you’re ancient enough to remember the series [like me]. That of course does not mean it can’t be used in bad, or even maybe evil ways, but it’s a far cry from Star Trek Discovery’s Zola, or what may be the ultimate villain in the Terminator films. How ChatGPT & the like are used, and how they affect us, depends both on their masters and on us – whether we insist on regulation, or not.

Realizing full well it’s a bit naïve, wouldn’t it be nice if AI could be trained to tag the material on the web that’s used to train other AI? Right now that’s a terrible cost borne by very low-paid people, often in disadvantaged countries, who come to suffer from PTSD from their work… the underside of the web is a truly dark place.
Now that we can so easily use AI to generate images, our dark side is on display… The New Yorker had a piece with a very chilling thought: rather than fake photos of Trump getting arrested by a throng of policemen to rile people up, what about photos of a fake bank run? Those could very possibly spark the real thing.