Artificial intelligence is important…but so is artificial empathy: a ChatGPT demo

ChatGPT, from OpenAI, is, I think, society-altering software. It’s a form of generative text artificial intelligence. In other words, it writes the way a person does.

Oh, that doesn’t mean it uses the same techniques to write something, but what it does write appears to have been written by a human.

It’s important to point out here that it’s not copying and pasting: it’s writing something new, every time you ask it.

You can ask it to write a lot of things, and people are. There is a great deal of discussion around the idea of students using it to cheat. I find a lot of that talk assumes things where I would like to see the data that proves them. One is that all students want to cheat, and will if you don’t catch them. I didn’t cheat in school, and wouldn’t have wanted to. I’ve taught management and seen that same mindset, that all employees will slack off if you don’t prevent it. How do you find work fun and fulfilling (which I typically did) if you aren’t doing it honestly?

On the other hand, those discussions act like there aren’t other existing ways to cheat which use the internet. When some students want to cheat, keeping them off ChatGPT isn’t going to stop them…to paraphrase Dr. Ian Malcolm from Jurassic Park, “Lies will find a way.” 😉

What ChatGPT (and there are other similar programs) does is almost always referred to as “artificial intelligence” or AI, and in particular, generative AI.

Here’s the thing.

What impresses me the most about it is how it understands the context, how it knows what is appropriate to say, not just (or even mostly) how to produce fluid and flawless text (in terms of spelling and grammar).

It gives the appearance of empathy.

I have trained and hired trainers. When I would tell people, including applicants, what I was seeking in a new hire, I included empathy. I would define that for them as knowing how the students are feeling, and adjusting what you are doing based on that (the latter is crucial). Are they bored? Speed up. Confused? Slow down and use more analogies.

You can tell ChatGPT what emotion to use when writing, and it understands that.

I’m going to give you an illustration of that, where I asked ChatGPT to write something two different ways.

This was my first prompt in this experiment:

“Write a heartfelt apology for forgetting a birthday.”

Here’s how it responded (in seconds, by the way):

“I am deeply sorry for forgetting your birthday. It was thoughtless and inconsiderate of me, and I know that it must have hurt your feelings. I should have been there to celebrate with you and make the day special, and I regret that I let you down. Please know that I am truly sorry and that it will never happen again. I hope that you can find it in your heart to forgive me.”

Would I believe a human wrote that? Sure. It’s florid, but that makes sense here.

Here’s my second prompt:

“Write a lighthearted apology for forgetting a birthday.”

The response:

“Sorry I forgot your birthday, my bad. To make it up to you, let’s plan a do-over celebration and I promise to make it extra special. I’ll even throw in a cake shaped like a giant ‘I’m sorry’ for good measure. My apologies again, and happy belated birthday!”

Again, could I believe a human wrote this? Yes! Note that it even made a joke…I wouldn’t expect the person to actually make that “I’m sorry” cake.
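From my side as the user, the two experiments differed by a single adjective. A minimal sketch of what I was doing (the helper name here is my own invention, not anything from OpenAI):

```python
def tone_prompt(tone: str, task: str) -> str:
    """Build a prompt asking for the same task in a given emotional tone."""
    return f"Write a {tone} {task}."

# The two prompts from the experiment above:
heartfelt = tone_prompt("heartfelt", "apology for forgetting a birthday")
lighthearted = tone_prompt("lighthearted", "apology for forgetting a birthday")
```

Everything else about the request was identical; only the tone word changed, and ChatGPT changed everything else in response.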

I honestly have no idea how it does this. I doubt that “my bad” is explicitly labeled with the term “lighthearted” on the internet.

To me, this is what feels really different. It’s more like Rosie the Robot on The Jetsons than Robby the Robot from Forbidden Planet.

Actually, that’s not even enough. Rosie had a consistent personality. You can ask ChatGPT to adopt different “characters”.

I can see some real uses for this for people who are neuroatypical, particularly those who have difficulty recognizing emotion. There must be sentiment analysis going on here. A program like this could identify an angry (or loving) text or e-mail, and compose a response in the style requested! The AI: “You have an angry e-mail from the gym saying you’ve missed paying your membership.” User: “Write an apologetic response telling them I’ll pay them”, or “Write an angry response telling them I quit the gym last year.”
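The sentiment-analysis step I’m speculating about can be illustrated with a deliberately tiny, keyword-based sketch. To be clear, this is my own toy illustration, not how ChatGPT works: real sentiment analysis uses trained models, and these word lists are invented for the example.

```python
# Toy sentiment detector: counts emotion-laden keywords.
# Real systems use machine-learned models, not hand-made word lists.
ANGRY_WORDS = {"overdue", "unacceptable", "immediately", "failure", "demand"}
LOVING_WORDS = {"love", "miss", "dear", "wonderful", "hug"}

def classify_tone(text: str) -> str:
    """Return a rough tone label for a short message."""
    words = set(text.lower().replace(",", " ").replace(".", " ").split())
    angry = len(words & ANGRY_WORDS)
    loving = len(words & LOVING_WORDS)
    if angry > loving:
        return "angry"
    if loving > angry:
        return "loving"
    return "neutral"

gym_email = "Your membership payment is overdue. Pay immediately or lose access."
print(classify_tone(gym_email))  # -> angry
```

Once a message is labeled “angry”, the assistant could tell the user so, and then compose a reply in whatever tone the user requests, as in the gym example.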

My main concern is whether these resources will be evenly available. There is a free version of ChatGPT right now here:

https://chat.openai.com/auth/login

However, there will also be a pay version.

Picture two stores: a “Mom and Pop” store and a big corporation. BigCorp can afford the full version of ChatGPT, while Mom & Pop’s Corner Store can only afford the free version.

BigCorp has ChatGPT writing a “persuasive ad” for a product, creating instructions, and communicating with customers. For their monthly or annual subscription, they can use it as much as they want.

Mom & Pop’s has a limited-use version.

Will that exacerbate the resource divide between them?

Probably…and that’s just one issue.

Let’s ask ChatGPT!

“Write an optimistic paragraph about the societal impact of artificial empathy.”

“Artificial empathy has the potential to revolutionize the way we interact with technology and with each other. By creating machines that can understand and respond to human emotions, we can build more intuitive and personalized experiences that truly meet our needs. This can lead to greater understanding and connection between people, as well as more efficient and effective communication. Additionally, artificial empathy can help us to better understand and address mental health issues, as well as provide support for those who may be struggling with emotional difficulties. Overall, artificial empathy has the power to bring us closer together and create a more compassionate and understanding society.”

“Write a pessimistic paragraph about the societal impact of artificial empathy.”

“Artificial empathy has the potential to cause significant harm to society. As technology advances, it may become more difficult to distinguish between genuine human emotion and those artificially generated by machines. This could lead to a breakdown in trust and communication between people, as well as a devaluation of authentic emotions. Additionally, the use of artificial empathy in decision-making could perpetuate biases and undermine the autonomy of individuals. Furthermore, it could lead to a decrease in empathy and emotional intelligence among humans as they rely more on machines to understand and respond to emotions. As a result, society may become less empathetic, more isolated, and more prone to misunderstanding and conflict.”

Fascinating! Certainly, I see stylistic similarities (I didn’t say anything about style, so this is probably close to its “natural” style), but it gave me not just facts but perspectives.

We are very, very early in this. It has the potential, though, to have the kind of impact on society that the automobile did…or, at least, Google. 🙂

Have you used ChatGPT? Have you used a competitor? What do you think about it? Feel free to let me and my readers know by commenting on this post, or replying on Twitter:

Join thousands of readers and try the free The Measured Circle magazine at Flipboard!

All aboard our The Measured Circle’s Geek Time Trip at The History Project (AKA Enwoven)! Join the TMCGTT Timeblazers!

Bufo’s Alexa Skills

This post by Bufo Calvin originally appeared in The Measured Circle blog. To support this or other organizations, begin your Amazon shopping from a link on their sites: Amazon.com (Smile.Amazon.com)