
A Future Forecaster on Habit-Forming Technology

by Nir Eyal, June 21st, 2018


Nir’s Note: Jane McGonigal is a game designer at The Institute for the Future and bestselling author of Reality is Broken and SuperBetter. In this interview with Max Ogles, McGonigal discusses the impact of future technologies on behavior, habits, and the way we design products.

Q: You recently worked on a project designed to visualize the future of technology. The idea was that, using a future, not-yet-existent product nicknamed FeelThat, people could actually share emotions with each other. (Here’s a link to the video.) What was the thinking behind it?

Jane McGonigal: This is a project with the Institute for the Future to look at some of the emerging technologies that are being prototyped, tested, and innovated right now. We try to imagine where those technologies might take us in a decade or more if they became widespread and popular. We use a process of collecting signals, or “clues,” about the future that suggest what might change our lives down the road. Then we try to extrapolate where these signals will lead if they’re amplified. The video is the result of gathering around 50 different signals, such as technologies that sense or collect data on our emotions and share it with others, or technologies that stimulate us neurologically so we change our feelings in real time.

All of that combined was in the story that you saw. But shortly after we shared the scenario and did the research, Mark Zuckerberg revealed that he had started a new internal research and development group specifically to do what we showed in the video. So this may just be what we’re all doing in the future.

Q: In your work with the SuperBetter platform and book, you’ve thought about different ways to amplify human capacity for good or positive behavior change. How does futuristic technology like FeelThat inform the way we think about behavior change?

JM: One of the things I learned in my research and development of SuperBetter is that depression and anxiety underlie many people’s inability to change habits of behavior. And that doesn’t just mean clinical depression, but also a sort of mild depression that many people have when they think about behavior change. They lack confidence or ability, and that’s reflected in the way they respond to setting goals or changing habits. SuperBetter was designed to let people hack their brains. So instead of having a low self-efficacy response when you are depressed or anxious about your goals, you have a high self-efficacy response. Games are really good at building that response.

To relate that back to this vision of future emotionally driven technologies, it seems it would be really helpful to have this kind of data about our feelings, anxiety, or depression, or even to have that information about the feelings of others. This type of technology could be an added layer of knowledge about people’s emotional state. Now you know when they might be most receptive to new information, or when they might be optimistic enough to do something difficult and not be discouraged by failure.

Q: We often see apps or online platforms for behavior change that struggle to convert in-app actions to real-life behaviors. Do you think this technology–the additional layer of data–could be helpful to that end?

JM: Yes. I think anytime you have more data about context, then you are better able to drive general behavior. In fact, you can imagine it getting complicated very fast in exciting ways. Not only do I know how you’re feeling, maybe I also know how your partner is feeling, or how your co-workers are feeling and whether this will be a high-stress day for you. The emotional information builds that context, similar to the way that mobile already gives us location information.

So in this future, not only do we know where you are and who you’re with, but we also know how you and the people around you are feeling. Learning to process that data in meaningful ways would be a really fun challenge for data scientists, designers, and psychologists: figuring out how to use that knowledge to support people.

Q: What do you think some of the obstacles or design implications would be in this possible future?

JM: Right now, we look around and say, “This future is definitely possible. People are making things that make this possible.” Do we want it to happen? That is an open question. I think a lot of startups and social media companies need to think ahead of the game; they need to be thinking a decade out about what they’re building. And it’s important to push things in a direction so that we don’t wake up 10 years from now and realize, “Wow, our really cool social media app just led to the downfall of democracy.” So that’s the framework for why we think about this.

We had 10,000 high school students talk about this fictional technology, and we learned some interesting things. One is that you can never overestimate people’s willingness to share data. We think people are going to be talking about privacy, but the drive for social engagement, particularly before you hit the age of 30, is the most important evolutionary drive. They will just share, share, share. I don’t think that will be an obstacle.

And a lot of the teenagers who played our game were concerned about not wanting to feel other people’s negative feelings. They don’t want to be exposed to them, they don’t want to share them, they don’t want to receive them. It makes you wonder about the way we curate our social media feeds today to try to project images of a perfect life. You know, what would it be like to try to project the image of a perfect mind? And how would we curate our thoughts and feelings if we knew we were sharing them, even with just friends and family?

Q: There are obviously negative consequences that might arise from a technology like the one we have been discussing. What is your approach to answering the question, “Should we build it?” How do you think about that?

JM: Kevin Kelly, one of the founders of Wired magazine and a long-time future forecaster himself, has a recent book about the technological forces that are inevitably shaping our future. One of the inevitable forces he identifies is biometrics: knowing more about ourselves, and how that would feed into products and services that are created for us.

On one hand, like Kevin Kelly, I think that there are certain technical trajectories that, once they start, don’t get stuck, because they feed certain human desires. For example, the desire to know ourselves is very fundamental and deep, right? So I don’t know whether we’ll get to make a choice about this future of deep biometric and neurological data being collected and shared in some way. The question is: how do we want to share it, who do we want to share it with, and do we want this data integrated in all of our apps and products? What are the most benevolent and empowering ways to embed this? What are the really gross and exploitative ways to use it? I think it’s not “Do we want this future?” but “What are we going to start doing right now to affect all of the dimensions of this future?”

Q: In what ways could a product manager or designer, for example, prepare to be really influential with future technology?

JM: One of the practical things we do at the Institute for the Future is have people practice both positive and shadow imagination about any signal they encounter. Whenever you hear about something new that you haven’t heard about before, you force your brain to interact with it both from a positive, hopeful, optimistic side and from a critical, pessimistic side. Most people have a bias about the future or about new technology; they default to either optimistic or pessimistic. If you don’t exercise both types of imagination, you tend to be really surprised. For example, people get surprised when something takes off and becomes really popular, or they can’t imagine that anyone would ever use a particular type of service.

That’s why I’m constantly running games. I ask thousands of people what they are excited about when they hear this, or what makes them anxious when they hear it. The more points of view you collect, the better you can overcome your own biases. Whatever your bias is, you can help correct it by having both kinds of reactions to every new thing you encounter in technology.

This interview has been edited for length and clarity.

Originally published at www.nirandfar.com on March 20, 2017.



If you found this post interesting, it would mean a lot to me if you could click on the “claps” icon below to let me know. That would really make my day — thanks!

Nir Eyal is the author of Hooked: How to Build Habit-Forming Products and blogs at NirAndFar.com. For more insights on using psychology to change behavior, join his newsletter and receive Nir’s free list of research-backed tips and tricks to increase your personal productivity.