
The Rise of AI Personal Assistants and Their Consequences

by Adrien Book, November 14th, 2023

Too Long; Didn't Read

GPTs are custom chatbots built on top of ChatGPT’s existing ‘knowledge’ using custom (private) data, and they can be tweaked to have specific goals or personalities. These ‘agents’ (as they should be called) can be created and configured without any coding knowledge, using only natural language.

During last week’s DevDay, OpenAI, under the direction of CEO Sam Altman, unveiled a series of updates. The company introduced ‘GPT-4 Turbo’, a more affordable model for external developers whose knowledge base now extends to April 2023, up from the previous cutoff of September 2021. Meanwhile, in an odd strategic move, the company is offering to cover its clients’ legal costs in copyright infringement suits (challenge accepted).


The biggest announcement, however, was the coming availability of what Altman called ‘GPTs’ (aka Generative Pre-trained Transformers; he’s not great at branding). In essence, GPTs are custom chatbots that can be built using custom (private) data on top of their existing “knowledge” and that can be tweaked to have specific goals or personalities.


One of the use cases already available to test in the ChatGPT app is “Game Time: I can quickly explain board games or card games to players of any age. Let the games begin!” Another is: “The Negotiator: I’ll help you advocate for yourself and get better outcomes. Become a great negotiator”. And, of course, my favorite: “Genz 4 meme: I help u understand the lingo & the latest memes.”


The AI “Agents” (as they should be called) can be created and configured without any coding knowledge, using only natural language. You simply give the agent a name and a description, define what it should do, how it should behave, and what it should avoid, then upload files to increase its proficiency at the task at hand. In a demo, Altman built a “startup mentor” that gives advice to founders based on talks he’d given in the past.
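
For the curious, here is a rough sketch of the same idea done programmatically, using the Assistants API announced at the same DevDay rather than the no-code GPT builder. The file name is hypothetical, and the parameter names reflect the first version of the openai-python v1 SDK, so details may have changed since.

```python
# Sketch: configuring a "startup mentor" agent via the Assistants API
# (openai-python v1, Nov 2023 shape; parameter names may differ today).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a file the agent can draw on (hypothetical file name)
talks = client.files.create(
    file=open("startup_talks.pdf", "rb"),
    purpose="assistants",
)

# A name, instructions on behavior and what to avoid, and a retrieval tool
mentor = client.beta.assistants.create(
    name="Startup Mentor",
    instructions=(
        "You give blunt, practical advice to startup founders. "
        "Base your answers on the attached talks. "
        "Avoid giving legal or financial recommendations."
    ),
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
    file_ids=[talks.id],
)
print(mentor.id)
```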


We are essentially witnessing wave 2 of the AI wars. Wave 1 productized and democratized large language models; we are now personalizing them to the individual. The same thing happened to the internet and social media (from 2007’s open Facebook to 2023’s personalized TikTok)… but it took 15 years, not one!


Four things stand out as we move to a personalized AI assistant world.

GPTs will displace millions of jobs.

Private AI assistants are something many companies have been clamoring for since ChatGPT came out a year ago. They have data like employee handbooks, benefits info, and customer service manuals… and they want to make that searchable and accessible through a chatbot without needing to code or make the data accessible to the public. This is now possible.
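
Continuing the sketch above, an assistant loaded with, say, an employee handbook could then be queried through the Assistants API’s thread-and-run flow. The assistant ID below is hypothetical, and the SDK details may differ.

```python
# Sketch: asking an internal-docs assistant a question (hypothetical ID).
import time
from openai import OpenAI

client = OpenAI()

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="How many vacation days do new hires get?",
)

run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id="asst_hr_handbook",  # hypothetical HR-handbook assistant
)

# Runs execute asynchronously; poll until the run reaches a terminal state
while run.status not in ("completed", "failed", "expired", "cancelled"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# The assistant's answer is the newest message on the thread
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)
```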


Let’s not kid ourselves. This will displace millions of jobs, as what is done by five people can now be done by two. Customer service is about to be decimated. Then will come HR. Accounting, too. Across organizations the world over, “support” functions will be halved, if not more. Companies were already working on this before last week’s announcement; that work has now been accelerated ten-fold. Managing these changes should be the government’s first priority.

A new economy will emerge.

One of Altman’s less-discussed announcements is that GPTs can be shared and will be monetizable in the near future. This would create, in essence, a new App Store, one of the 21st century’s greatest inventions.


It will be fascinating to see where customers place value since ChatGPT is open by design. Will it be in the custom data? In the personality given? Should it be the former, the companies with the most content will have the most power. Not much would change then, and anti-trust regulators should take a much closer look at these tools (as we are recreating the platforms of the last era of computing).


There is a potential for net positives, too. This is a huge opportunity for healthcare, for example. If an NGO trains algorithms based on the trove of medical data available online, on the millions of diagnostics and images available… we could make healthcare accessible to all for a tiny fraction of the prices we see today. Hell, it could even be free for some people who need it most. I already wrote about the democratizing power of AI; we are getting closer to that reality. We just have to be willing to make it happen.

Human interactions will change.

One of the first use cases I thought about when I started playing with the new GPT interface is feeding the AI all the conversations I’ve ever had with my wife to see if some simple daily conversations can be automated.


I won’t be the only one with similar thoughts. How can we know if any interaction online is real once these tools spread? And how long before we feed an AI the data (texts, emails, voice recordings, etc.) of someone who’s passed away to turn it into a facsimile of the real thing? Someone we can talk to to cope? Not long. In fact, it already exists… and just got easier to do.


Altman literally said in his keynote last week: “We will all have superpowers on demand.” While we are recreating a deity capable of making us live forever, we should make sure we don’t lose a little humanity in the process.

Dangerous AI use cases may emerge.

We are rapidly moving to a reality where AI Agents can not only talk about things but also act based on specific instructions and their given “persona.” As we personalize our AI agents/assistants, we will no doubt want them to take action on our behalf (something I predicted back in April). If the path to completing the action is not defined, the AI agent will make its own path.


This can lead to unwanted externalities if we’re not careful. Let’s say you want to reserve a table at a fancy restaurant. You explain to your AI Assistant that it’s very important to you. The AI then calls the staff and threatens them. Or it hires someone to do it. Or it emotionally manipulates the staff, whose info it found online. These tools are very much “black boxes”, and it’s important to put the right guardrails in place to ensure this doesn’t happen.
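
One minimal version of such a guardrail, assuming the standard tool-calling pattern in the Chat Completions API and a hypothetical book_table function: the model can only propose an action, and nothing executes until a human approves it.

```python
# Sketch: human-in-the-loop guardrail around an agent's tool calls
# (book_table is a hypothetical local function, not an OpenAI feature).
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "book_table",
        "description": "Reserve a restaurant table by phone or online form.",
        "parameters": {
            "type": "object",
            "properties": {
                "restaurant": {"type": "string"},
                "time": {"type": "string"},
                "party_size": {"type": "integer"},
            },
            "required": ["restaurant", "time", "party_size"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user",
               "content": "Get me a table for two at Chez Panisse at 8pm."}],
    tools=tools,
)

# The model may return zero or more proposed tool calls
for call in response.choices[0].message.tool_calls or []:
    args = json.loads(call.function.arguments)
    # Guardrail: the agent proposes, a human disposes
    answer = input(f"Agent wants {call.function.name}({args}). Approve? [y/N] ")
    if answer.lower() == "y":
        print("Would book with:", args)  # real booking logic would go here
    else:
        print("Action blocked.")
```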


We shouldn’t overstate the importance of the changes made available this week. GPTs are still mostly the “usual” ChatGPT with a sprinkling of personalization. You could already do most of the things highlighted above… but you would have had to input multiple prompts. All in all, this is a shortcut, not a leap forward. For now.


We are witnessing the formation of a generationally important company. Today, OpenAI is being careful and slow about the roll-out. But we need to watch them carefully: over the past centuries, it’s been rare to see a company become all-powerful… and use that power for good.


The Dawn of AI Personal Assistants

Good luck out there.


Also published here.