Yes, We Can Create Empathetic A.I. Systems. This Is How We’re Getting Started

by Angel GambinoOctober 18th, 2018
Right now, those of us working on the frontiers of artificial intelligence have a major opportunity: we can be a force for good in a connected world.

By influencing the way people communicate with each other on social media, we have the power to help move the needle in one of two directions: toward more negativity, isolation, and conflict, or toward greater empathy, connection, and positivity.

I believe it’s our responsibility to choose the latter and that A.I. can empower empathetic engagement.

Here’s why.

The way many of us interact online is broken.

This is the current state of our world. Online anonymity has given people the perception they can say whatever they want whenever they want — however cruel it may be.

The result? Trolling, bullying, fake news — cancers that damage the online social sphere we all inhabit, and sometimes result in tragedies IRL (in real life). Even when communications are not masked by anonymity, the perceived physical distance creates a sense of disconnection between the cause and effect of cruel online comments.

The vexing truth, however, is that people respond to these negative messages, which makes the problem worse. Negative, sensational headlines get more attention, whether they’re on your nightly local news broadcast or your Twitter feed.

With this in mind, the question for those of us in a position to solve these problems needs to be: what can we do to encourage people to have more positive interactions?

For me and my team at Sensai — the A.I.-powered social media marketing company I founded — that question looks like this:

How do we help our customers identify the best ways, people, and communities to engage in a positive way? And how do we obtain, and then share, the insights we need to avoid engagements that create negative responses?

That’s where A.I. comes in. To answer the question above, we need to decipher a truly massive amount of unstructured data, and the only way we can do that is by using artificial intelligence and machine learning. (That’s what platforms like Facebook and Google do to moderate the massive quantities of content people share.)

But that’s not all.

Once we understand what triggers negative responses, we can use A.I. to encourage a more empathetic use of words, images, and media in the culture.

We can encourage more empathetic modes of engagement around all topics, in fact, no matter how polarizing or distressing they may be. While A.I.-based sentiment analysis is still in its early days, a number of techniques and tools are emerging that can form the building blocks for a better world, both online and off.
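To make the idea of a sentiment "building block" concrete, here is a deliberately simple sketch: a lexicon-based scorer that rates a draft post as net-positive or net-negative. The word lists, function name, and scoring formula are all hypothetical illustrations, not the techniques any real product uses; production systems rely on trained models rather than hand-picked word lists.

```python
# Purely illustrative lexicon-based sentiment scorer.
# The word lists below are tiny, hand-picked examples.
POSITIVE = {"love", "great", "delight", "thanks", "wonderful", "happy"}
NEGATIVE = {"hate", "awful", "stupid", "terrible", "fake", "angry"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]; positive means net-positive wording."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total
```

A system built on a score like this could flag a draft before it is posted and nudge the author toward warmer phrasing.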

My team and I are building a model for this.

Here’s how it works: a user signs up for our service, which allows our A.I. to analyze their social media presence and engagement. Based on that user’s specific goals — increasing brand awareness, promoting a product, recruiting new hires, etc. — our system offers content recommendations aimed at optimizing engagement, but specifically through positivity and empathy, rather than negativity.
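The recommendation step described above can be sketched as a simple filter-then-rank loop: discard candidate posts that fall below a positivity threshold, then order the rest by predicted engagement. Every name, field, and threshold here is a hypothetical stand-in, assumed for illustration only; it is not Sensai's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    predicted_engagement: float  # e.g. output of a model trained on past posts
    positivity: float            # e.g. output of a sentiment model, in [-1, 1]

def recommend(candidates: list[Candidate], min_positivity: float = 0.2) -> list[Candidate]:
    """Keep only sufficiently positive drafts, then rank by predicted engagement."""
    eligible = [c for c in candidates if c.positivity >= min_positivity]
    return sorted(eligible, key=lambda c: c.predicted_engagement, reverse=True)
```

The key design choice is that positivity acts as a hard gate rather than just one weight among many, so a negative post can never win on raw engagement alone.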

Such systems are uniquely positioned to improve our culture, which is undeniably fractured. Those in charge of such systems, then, have something of a responsibility to use them.

Essentially, A.I. can promote socially responsible social media use.

The key, ultimately, is how we develop our A.I.

We should start by teaching it that humanity wants to be good and infusing the system with a value set that encourages an empathetic view of data inputs, and then projects an empathetic output.

Here’s a basic example.

If our A.I. thinks you should post something about parenting, because your followers really responded to that last parent-related post you put up, it’s not going to suggest a video of parents cruelly pranking their kids. (YouTube is full of that type of content, sadly.) It will suggest a video of a parent delighting their child with a surprise trip to Disney World or a new pet, instead.

Or, if a follower posted something sarcastic or rude on your profile, the system could suggest a response based in empathy, rather than a cutting response that matched that follower’s negativity.

In my experience, this approach has proved disarming, transforming conflict into understanding — or at least, shutting it down.

In some cases, it’s true that an empathetic approach will go against a person’s natural disposition. However, if we make it easy — if all a person has to do is click, or type a few words — research shows that more people will do the right thing.

This is something we’re already seeing. Consider, for example, the response recommendations Google recently instituted in Gmail.

Let’s say you get an email inviting you to an event. Gmail offers responses like this: “Yes, I’d love to!”, “Sounds good!” or “Sorry, I won’t be able to make it.”

Of course, the purpose of this feature is to save you time. But it also saves you from sending a passive-aggressive, snarky, or sarcastic response.

Empathetic A.I. is about putting positive messages out into the social media sphere in efficient and convenient ways, which in turn encourages your followers to offer positive responses back. With enough momentum, you can positively impact moods and emotional well-being.

It’s vital for good-willed, mission-driven companies to be at the forefront of A.I. development because this technology could easily go the other way.

The impact of social media is exponential, and, I believe, immeasurable.

That’s why we must be so careful about the kind of A.I. we allow to influence the things we post and share online.

I know how much fear and skepticism there is about A.I. People are worried about it taking over the world, potentially diminishing the human purpose, or harming human society in a variety of other ways.

That skepticism is a good thing, because it holds all of us working in this field accountable. It forces us to be thoughtful about the types of A.I. tools that we put out into the world. We know that we must earn the public’s trust, and then continue to build on that trust as our A.I. capabilities develop.

The only way we can do that is by creating more positive interactions.

Some days, when online bullying or the latest fake news scandal is all over our feeds, it can be difficult to believe that social media can be a force for good in the world.

But I truly believe that, as long as we A.I. developers closely monitor our own impact, this technology will create a stronger human connection. We’ll better understand each other. And in turn, we’ll become more empathetic.

When we recruit diverse teams of men and women who sometimes interpret language differently, we have the greatest opportunity to expand our notion of empathy as it exists today. Together, we can go beyond it.

This is the real power — and responsibility — of A.I.