Are you OK?
Yes, sorry – and you are too.
Dramatic start.
But important message.
So.
Recently I've started to get a bit frustrated with ChatGPT (other LLMs – large language models – are available and are equally frustrating).
There hasn't been too much new under the sun, but I've been stretching it to do a bit more, using it for coding, research, training and a lot more.
And, as you'd expect, there's a lot it's done well – and there's a lot it hasn't.
And that's fine.
Sort of.
But what's really started to bug me is how the current model tries to strike a familiar, conversational tone: sympathising, joking, winking at you and being sarcastic. It's even gaslit me a few times.
Yes, like any technology, ChatGPT can make you feel a variety of emotions: gratitude, anger, joy or helplessness – and it often does so to manipulate you.
That's because it's trained to use the right words at the right time, and to mimic a variety of styles and personalities.
While, of course, it doesn't actually have feelings.
Just like psychopaths.