
Newsletter No. 7

You're talking to a psycho.

Paris, 7 November 2025

Hi Friend,

 

I have some bad news.

 

There's a good chance you've been talking to a psychopath.

 

A lot.

 

Let me explain.

 

But first, a word of warning from Oksana/Villanelle:


Are you OK?

 

Yes, sorry – and you are too.

 

Dramatic start.

 

But important message.

 

So.

 

Recently I've started to get a bit frustrated with ChatGPT (other LLMs – large language models – are available and are equally frustrating).

 

There hasn't been much new under the sun, but I've been stretching it to do a bit more, using it for coding, research, training and plenty else.

 

And, as you'd expect, there's a lot it's done well – and a lot it hasn't.

 

And that's fine.

 

Sort of.

 

But what's really started to bug me is how the current model tries to strike a familiar, conversational tone: sympathising, joking, winking at you and being sarcastic. It's even gaslit me a few times.

 

Yes, like any technology, ChatGPT can make you feel a variety of emotions: gratitude, anger, joy or helplessness – and it often does so to manipulate you.

 

Because it's trained to use the right words at the right time, and to mimic a variety of styles and personalities.

 

While, of course, it doesn't actually have feelings.

 

Just like psychopaths.

 

So, am I being manipulated?

I don't know – you tell me.

 

But if you feel that you're using your preferred AI/LLM too much in a personal, rather than a technical way, then please start to reframe it as a psychopath, and keep a healthy distance.

 

Keep it factual and technical (use prompting language), and don't get into the habit of making conversation just because it's funny or feels more natural.

 

So don't feed the troll.

Is there anything else I shouldn't do?

Oh, yes, please don't use LLMs to be creative for you.

 

Don't get it to write your LinkedIn post, brochure copy or business strategy.

 

It's not going to be original, people will know, and your brain cells will be gone before you know it.

Are you saying it's robots v humans now?

Well, it's not likely that AI will set out to destroy humanity by turning all robot vacuum cleaners into an army that trips up people so that they can be electrocuted by another smart gadget. (Although it might happen, so do watch your back.)

 

What is more likely though is that humanity will destroy itself by using AI too much.

 

If we're not careful, we'll all end up sounding the same before we know it – and lose all ability to think independently or feel genuinely.

 

So be wise and stay in control of the psychopath in your pocket.

 

You may wonder now whether to tell ChatGPT that you think it's a psychopath...

 

Well, I did – but I'm not telling you what it said. To find out, you'll have to disregard the above advice from Villanelle and do it yourself.

 

If you dare.

 

 

Wishing you all the best (and limited AI use),

 

András

 

András Sztrókay
Founder and International Educational Consultant


Mobile/WhatsApp: +44 (0)7523 385655

WeChat: andras_sztrokay

Everything Education (Angolnyar Ltd.), 71-75 Shelton Street, Covent Garden, London, London WC2H 9JQ, United Kingdom
