Is the ChatGPT honeymoon period over?
ChatGPT has taken the world by storm since its release last year. However, recent surveys suggest the honeymoon may be ending: a growing number of companies are restricting or outright banning its use at work.
Why are companies getting cold feet about ChatGPT?
It's not that they doubt its capabilities. Instead, they're worried about potential cybersecurity risks.
A Growing Concern: Data Leaks and ChatGPT
Generative AI tools are designed to learn from every interaction. The more data you feed them, the smarter they become. Sounds great, right?
But there's a gray area concerning where the data goes, who sees it, and how it's used.
These privacy concerns led the Italian data-protection authority to temporarily ban ChatGPT in March 2023, only lifting the ban after OpenAI added new privacy controls.
https://twitter.com/GergelyOrosz/status/1643903467734409216?embedable=true
For businesses, the worry is that ChatGPT might take user-submitted information, learn from it, and potentially let it slip in future interactions with other users.
OpenAI's guidelines for ChatGPT indicate that user data could be reviewed and used to refine the system. But what does this mean for data privacy?
The answer isn't clear-cut, and that's what's causing the anxiety.
Data Leak Worries: What's the Real Risk?
A Cyberhaven study found that by June 1, 2023, 10.8% of workers had used ChatGPT at work, and 8.6% had pasted company information into it. The most alarming statistic: 4.7% of workers had entered confidential information into ChatGPT.
And because of the way ChatGPT works, traditional security measures fall short. Most security products are designed to stop files from being shared or uploaded, but ChatGPT users simply copy and paste content into their browsers.
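To see the gap concretely, here's a minimal sketch of the kind of pattern-based check a security team might bolt onto outbound prompt text. The pattern names, regexes, and sample text are all hypothetical, and real DLP tooling is far more sophisticated:

```python
import re

# Hypothetical patterns a security team might flag before text reaches
# ChatGPT. Real DLP products use far richer detection than regexes.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "internal_marker": re.compile(r"(?i)\b(confidential|internal use only)\b"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

pasted = "CONFIDENTIAL: staging key is sk-abc123def456ghi789jkl012"
hits = flag_sensitive(pasted)
if hits:
    print(f"Blocked paste, matched: {hits}")
    # Blocked paste, matched: ['api_key', 'internal_marker']
```

Even a crude filter like this catches the obvious cases. What it can't recognize is proprietary source code or strategy documents, which is exactly what employees have been pasting.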
Now, OpenAI has added an opt-out option. Users can request that their data not be used for further training.
But the opt-out is not the default setting. So, unless users are aware and take proactive measures, their interactions might be used to train the AI.
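The API is a different story: OpenAI has said that since March 1, 2023, data submitted through its API is not used for model training unless customers explicitly opt in. Here's a minimal sketch of an API call under that policy, using the pre-1.0 openai Python library; the key and prompt are placeholders:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; load from a secret store in practice

# Per OpenAI's March 2023 policy change, data sent through the API is not
# used for model training by default. The consumer ChatGPT app is the
# opposite: training on conversations is on unless the user opts out.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize the plot of Hamlet."}],
)

print(response["choices"][0]["message"]["content"])
```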
The concerns donât stop there.
Even if you opt out, your data still passes through the system. And while OpenAI assures users that data is managed responsibly, conversations don't vanish immediately: OpenAI has said chats are retained for up to 30 days for abuse monitoring, even when history is disabled.
There's also the risk that something goes wrong.
On March 20, 2023, OpenAI took ChatGPT offline because of a bug that showed some users the titles of other users' chat histories. If those titles held private or sensitive details, other ChatGPT users might have seen them. The bug also exposed payment-related information belonging to some ChatGPT Plus subscribers.
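OpenAI's postmortem traced the incident to a bug in the open-source redis-py client: a request canceled mid-flight could leave a shared connection in a corrupted state, so the next user on that connection received someone else's cached data. Here's a toy illustration of that general failure class (not OpenAI's actual code): the moment replies are matched to requesters by queue position instead of identity, one dropped request misaligns everyone behind it.

```python
from collections import deque

class SharedConnection:
    """Toy model of the failure class: replies are matched to requesters
    by position on a shared connection, not by identity."""

    def __init__(self):
        self.waiting = deque()  # users awaiting a reply, in send order
        self.replies = deque()  # server replies, in the same order

    def send(self, user: str, query: str):
        self.waiting.append(user)
        self.replies.append(f"{user}'s {query}")

    def cancel_last(self):
        # The bug: the canceled request is forgotten, but its reply
        # is still in flight on the connection.
        self.waiting.pop()

    def receive(self):
        return self.waiting.popleft(), self.replies.popleft()

conn = SharedConnection()
conn.send("alice", "chat history titles")
conn.cancel_last()                    # alice cancels mid-flight
conn.send("bob", "chat history titles")
user, payload = conn.receive()
print(f"{user} received: {payload}")  # bob received: alice's chat history titles
```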
The Samsung Incident
The first major ChatGPT data leak occurred earlier this year and involved the tech giant Samsung. According to Bloomberg, sensitive internal source code was leaked after engineers pasted it into ChatGPT, prompting Samsung to ban generative AI tools on company devices.
https://twitter.com/SergioRocks/status/1653814248848470023?embedable=true
A leak like this can have severe implications: once proprietary information leaves the company, there is no way to recall it.
And it wasn't just Samsung. Amazon, another titan in the tech industry, had its own concerns. The company warned employees not to share confidential information with ChatGPT after reportedly noticing responses that closely resembled internal Amazon data.
If ChatGPT has Amazon's proprietary data, what's stopping it from inadvertently spilling it to competitors?
Lack of Clear Regulations
The rapid evolution and adoption of Generative AI tools have left regulatory bodies playing catch-up. There are limited guidelines around responsible use.
So, if there's a data breach because of the AI, who's responsible: the company using the tool, the employees, or the AI provider?
In OpenAI's terms of use, responsibility for submitted content and for how the output is used falls largely on the user, not on OpenAI.
That puts the risk on companies. Without clear regulations, they are left to decide on the best course of action. That's why many are now becoming more hesitant.
Conflicting Views from Tech Leaders
When it comes to deploying new technologies, businesses often look to tech leaders. If a tech giant adopts a new innovation, it's often seen as a green light for other companies to follow suit.
So, when a company as influential as Microsoft offers mixed signals regarding a technology, the ripples are felt across industries.
On the one hand, Microsoft has expressed reservations about the use of Generative AI tools. In January, Microsoft warned employees not to share "sensitive data" with ChatGPT.
https://twitter.com/BusinessInsider/status/1620528535017345025?embedable=true
But Microsoft also champions its own version of the technology, Azure ChatGPT. This iteration promises a safer and more controlled environment for corporate users.
This move raises questions: Is Azure ChatGPT genuinely immune to the risks Microsoft has pointed out in the broader Generative AI landscape? Or is this a strategic maneuver to ensure businesses stay within Microsoft's ecosystem?
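For businesses that do take the Azure route, the practical difference is mostly about the data boundary. Azure ChatGPT sits on top of the Azure OpenAI Service, so traffic goes to a company-managed Azure resource rather than OpenAI's shared endpoint, and Microsoft's documentation states that prompts sent there aren't used to train the underlying models. A minimal sketch of calling that service with the pre-1.0 openai Python library; the resource and deployment names are hypothetical:

```python
import openai

# Route requests to a company-managed Azure OpenAI resource instead of
# OpenAI's public endpoint. Resource and deployment names are hypothetical.
openai.api_type = "azure"
openai.api_base = "https://your-company.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "YOUR_AZURE_KEY"  # in practice, pull from a key vault

response = openai.ChatCompletion.create(
    engine="your-gpt-35-deployment",  # Azure addresses a deployment, not a model name
    messages=[{"role": "user", "content": "Draft a status update for the team."}],
)

print(response["choices"][0]["message"]["content"])
```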
The Balance of Adopting ChatGPT
When evaluating any new technology, businesses are often caught in a tug-of-war between the potential benefits and the potential pitfalls.
With ChatGPT, companies seem to be adopting a wait-and-see approach.
As the technology and its regulations evolve, the hope is that safer, more transparent practices will emerge.
For now, ChatGPT is great for a bit of coding help and content drafting. But I wouldn't trust it with proprietary data.