AI’s advancements over the years have brought both opportunities and setbacks. Notable breakthroughs have reshaped the internet, many for the better.
However, no one could have prepared for the launch of OpenAI’s ChatGPT, which has taken the world by storm. Its ability to hold a natural conversation while producing fluent, often insightful responses in seconds is unprecedented.
The moment the wider public got a look at what ChatGPT could do, it was clear that digital communication would never be the same again.
But with innovation often comes controversy, and in this case, OpenAI’s chatbot has had to contend with legitimate data privacy concerns.
The development of ChatGPT required a significant amount of data harvesting, and because OpenAI cannot explain exactly how the chatbot works or how it processes and stores data, familiar concerns and skepticism about data privacy practices have arisen from both thought leaders and government privacy watchdogs.
This matter is not lost on the public either. One recent survey found that 72.6% of iOS apps track private user data, with free apps being four times more likely to track user data than paid ones.
If that makes you worried, remember that the majority of people using ChatGPT are still on the free version, the very category the survey flags as most likely to track user data.
Given all of this, data privacy companies need to leverage the conversation generated by ChatGPT’s rise to deliver products that enhance data privacy rights and promote a stronger culture of data transparency and responsibility. People should know what their data rights are and how to use them, and these groundbreaking AI technologies should not be allowed to devolve into the immoral, profit-driven data tactics that many Big Tech companies rely on.
ChatGPT is a large language model (LLM), meaning it requires a vast amount of data to function correctly: that data is what enables it to make predictions and generate coherent text.
This means that if you have written anything online, it is highly likely that ChatGPT has already scanned and processed that information.
Moreover, LLMs like ChatGPT rely on mountains of data drawn from a range of online sources (ebooks, articles, and social media posts) to train their algorithms. That training is what lets them generate confident, natural-sounding responses that are often hard to distinguish from human-written text.
Simply put, any text you have already posted online has likely been used to train ChatGPT or the competing LLMs that are sure to follow in the wake of its success.
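To make “training on your text” concrete, here is a toy sketch of the core objective an LLM learns: predict each next token from the tokens before it. This is an illustrative simplification, not OpenAI’s actual pipeline; real systems use subword tokenizers and neural networks over trillions of tokens, while this sketch only shows how raw text becomes training pairs:

```python
# Toy illustration of next-token prediction, the objective LLMs train on.
# Assumption: a naive whitespace "tokenizer"; real models use subword vocabularies.

scraped_text = "anything you post online can become training data"
tokens = scraped_text.split()

# Each position in the text yields one (context -> next token) training pair.
training_pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

for context, target in training_pairs:
    print(f"given {context} -> predict {target!r}")
```

Multiply this by essentially the entire public web and you get a model that has “seen” your posts, which is exactly why the privacy questions below matter.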
To no surprise, data privacy concerns are valid, as OpenAI recently acknowledged that a bug in an open-source library allowed some users to see titles from other active users’ chat histories.
Additionally, a cybersecurity firm identified that a recently added component was vulnerable to an actively exploited security flaw.
OpenAI took ChatGPT offline while the bug was patched and the scope of the exposure was investigated.
The breach also revealed the payment information of 1.2% of ChatGPT Plus subscribers, including their first and last names, email addresses, payment addresses, payment card expiration dates, and the last four digits of their payment card number.
To say this is a data protection disaster is an understatement. ChatGPT may have more information inside it than any product on the planet, and it is already leaking sensitive information just months after its release.
Here’s the silver lining: drawing the general public’s attention to the actual privacy risks ChatGPT poses could provide an excellent opportunity for individuals to start understanding the importance of data protection, down to the finer details.
This is particularly important considering the rapid rate at which ChatGPT's user base is expanding.
Aside from implementing precautionary measures and staying vigilant, users need to exercise their data subject rights (DSR): the rights that let individuals see, correct, and delete the personal data a company holds about them.
Every user in the digital age must become an advocate for stronger data privacy regulations to take control of their personal information and ensure that it is being used with the utmost responsibility.
OpenAI seems to have responded to this, as new ChatGPT sessions now lead with prompts warning people not to input sensitive data or corporate secrets, since they are not secure once inside the system.
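For those who keep using these tools anyway, one practical precaution is to scrub obvious identifiers before a prompt ever leaves your machine. The sketch below is a minimal, hypothetical example; the `scrub` helper and its patterns are my own illustration, catch only the most obvious identifiers, and should be treated as a starting point rather than real protection:

```python
import re

# Hypothetical helper: redact obvious identifiers before sending text
# to any third-party AI service. These patterns are deliberately simple
# and will miss plenty of real-world PII.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\b\+?\d{1,3}[ -]?\(?\d{3}\)?[ -]?\d{3}[ -]?\d{4}\b"),
}

def scrub(prompt: str) -> str:
    """Replace likely PII with placeholder tags before the prompt leaves your machine."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(scrub("Ship to jane.doe@example.com, card 4242 4242 4242 4242."))
# -> Ship to [EMAIL], card [CARD].
```

Placeholder tags like these keep the prompt useful to the model while the identifying details never reach the service.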
Something like using one of the new ChatGPT plugins to do your grocery shopping might seem innocuous, but do you really want an unsecured digital record of everything you’ve been eating floating around the internet?
We as the public need to slow down and not get too caught up in the hysteria around new AI technologies until these privacy issues are sorted out.
Make no mistake: while users should hold up their end of the bargain, companies must be held accountable for poor data use and protection practices.
To that end, organizations of all sizes should provide transparent, comprehensible consent forms so individuals can clearly understand how their data will be used, where it will go, and which third-party entities may have access to it.
Moreover, leaders should provide clear paths for users to exercise their DSR as well as educate employees on ethical and appropriate data processing and storage practices.
We are far from accomplishing this: the majority of consent banners still occupy regulatory gray zones because they do not clearly advertise opt-in or opt-out rights, which vary with the user’s and the company’s location.
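To sketch what a transparent, auditable consent record could look like in practice, the hypothetical structure below names the purposes, the third parties, and whether consent was an explicit opt-in; none of this refers to any specific regulation or product:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of a consent record that makes the terms of consent
# explicit and auditable: what was collected, why, who else sees it, and
# whether the user opted in or merely failed to opt out.
@dataclass
class ConsentRecord:
    user_id: str
    purposes: list[str]       # e.g. ["order fulfillment", "analytics"]
    third_parties: list[str]  # every external recipient, named up front
    opt_in: bool              # True only for an explicit, affirmative choice
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def allows(self, purpose: str) -> bool:
        """Data may be used for a purpose only if the user explicitly opted in to it."""
        return self.opt_in and purpose in self.purposes

consent = ConsentRecord(
    user_id="u-123",
    purposes=["order fulfillment"],
    third_parties=["PaymentsCo"],
    opt_in=True,
)
print(consent.allows("analytics"))  # False: the user never consented to analytics
```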
Transparency, clarity, and accountability should be at the forefront of every organization’s data privacy practices.
The rise of ChatGPT begins a new era of data privacy vigilance, one in which organizations and individuals need to be equally proactive in ensuring that data is handled appropriately to avoid breaches and misuse.
ChatGPT is absorbing more data, at a faster pace, than perhaps any product in history, and if that balloon bursts, the ramifications for privacy will be unparalleled.
If companies want to stay clear of that fallout, they need to start exercising smarter data protection practices and rebuilding consumer trust in the internet; otherwise, a better collective digital future may be in peril.