99.9% of Content Will Be AI-Generated by 2025: Does Anyone Care?

by Jeremy Hillpot, October 25th, 2022

Too Long; Didn't Read

AI bots can auto-generate unlimited amounts of synthetic content for virtually free. Already, most people cannot tell the difference between human-created and bot-created art, music, writing, and video. One expert predicts that 99% to 99.9% of all content on the internet will be AI-generated by 2025 to 2030. Even if we can certify that content is human-made, will people still want to consume human-made writing and videos, or will they prefer AI-generated content? Eventually, most TV shows, movies, and music will be auto-generated too, and they will be more interactive.



When AI bots can auto-generate unlimited amounts of synthetic content for virtually free, everything about the way the internet looks — and everything about the way we access information — will change. Moreover, it will become impossible to find and identify human-created content in the infinite sea of robot-created articles, books, music, videos, and images.


Already most people cannot tell the difference between human-created and bot-created art, music, writing, and video.


This creates the pressing need for technology that allows us to find and certify human-made writing, art, and music on the internet. Otherwise, we will not be able to choose whether we are consuming human-made content with human ideas or AI-generated content with censored ideas (chosen by the controllers of the AI).


It also creates a terrifying question: Even if we can certify that content is human-made (perhaps with blockchain technology), will people in the future even want to consume man-made writing and videos — or will they prefer AI-generated content?


THE ISSUE: SOON ALL INTERNET CONTENT, TV SHOWS, WRITING, MUSIC, AND ART COULD BE AI-GENERATED.

Since late 2021, an advanced AI language model for writing, called GPT-3, has been broadly available to developers. It uses deep learning to generate written text that is often indistinguishable from human-written content. Similar AI systems can also create synthetic images, videos, and music that are just as hard to tell apart from their human-created equivalents.
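For context, here is a minimal sketch of how a developer could request generated text from GPT-3 through OpenAI's Python client as it looked around the time this article was written. The model name, prompt, and parameters are illustrative placeholders, not a setup recommended by the article.

```python
# Minimal sketch: generating text with GPT-3 via the (pre-1.0) OpenAI Python client.
# Model name, prompt, and token limit are illustrative placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3 model available to developers at the time
    prompt="Write a short paragraph about the future of the internet.",
    max_tokens=150,
)

print(response.choices[0].text.strip())
```

A few lines like these are all it takes to produce fluent paragraphs on demand, which is why the volume of machine-written text can scale so quickly and so cheaply.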


According to Timothy Shoup, Senior Advisor at the Copenhagen Institute for Futures Studies (CIFS),


“In the scenario where GPT-3 gets loose, the internet would be completely unrecognizable.”


Shoup predicts that 99% to 99.9% of all content on the internet will be AI-generated by 2025 to 2030.


Writing for the CIFS, futurist Sofie Hvitved stated the following in February 2022:


It is no secret that the development of automatically generated content with natural language processors and generators like GPT-3 is booming. GPT stands for Generative Pre-trained Transformer and is a language model trained on trillions of words from the internet. It was created by OpenAI, which was cofounded by Elon Musk and funded by Microsoft, amongst others.


Briefly put, it is a language model that uses deep learning to produce human-like text. The full version of GPT-3 contains 175 billion parameters, but other models have been released such as Wu Dao 2.0, which has 1.75 trillion (!) parameters.


Earlier this year, OpenAI released DALL-E, which uses a 12-billion-parameter version of GPT-3 to interpret natural language inputs and generate corresponding images. DALL-E can now create images of realistic objects as well as objects that do not exist in reality.


This image was created by Stable Diffusion, a text-to-image model that creates synthetic images from text.

Eventually, most TV shows, movies, and music will be auto-generated too, and they will be more interactive and infinitely more entertaining and addictive. Imagine you want to see a remake of the classic movie Casablanca, but this time it’s starring Brad Pitt.


Simply ask the AI, and it will spin up a brand-new version of the movie tailored to your preferences, complete with plot and storyline adjustments to reflect the updated social values and worldview that the AI (and its controllers) want to promote to you.


Frighteningly, the AI-generated content of the future will be so interesting and addictive that you won’t care if it is promoting a worldview you don’t agree with. For many people, especially children, consuming this content will be a lot more fun than spending time with human friends and family in the real world.


If you thought your smartphone addiction was bad, wait until your smartphone entertains you with scientifically engineered content that is custom-tailored to trigger all of your pleasure, addiction, dopamine, and entertainment buttons.


ALREADY, WE CANNOT TELL THE DIFFERENCE BETWEEN BOT-GENERATED CONTENT AND HUMAN-GENERATED CONTENT.

It seems clear that in the coming years, AI-written content will drown out human-written content on the internet, which is currently where we turn to find information and answers. In other words, human-made content will be harder and harder to find. And even when we stumble upon a human-written article or video, how will we recognize that it wasn’t created by a bot? Already, most of us cannot tell the difference.


In 2019, a researcher proved that he could create AI-generated writing so realistic that U.S. government officials could not detect that it was not written by a human. They accepted it as human-written content.


The same scientist conducted a Turing test to determine whether humans specifically trained to spot AI-generated text could distinguish between robot-written and human-written content. They only succeeded about 50% of the time.


If the above research is from 2019, imagine how much better AI bots are at generating content today. Then imagine what happens when AI bots pretend to be humans and start writing and emailing congressional representatives.


As for AI-created images (which is how I created all of the images in this article): in August 2022, Tidio conducted research to find out whether people could tell the difference between AI-generated images and human-generated ones. Here’s what the study found:


“In some survey groups, as many as 87% of respondents mistook an AI-generated image for a real photo of a person. Only 62% of respondents interested in AI and machine learning managed to answer more than half of the questions correctly. Among the remaining respondents, more than 64% were wrong most of the time.”


This image was created by Stable Diffusion, a text-to-image model that creates synthetic images from text.
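For readers curious how such images are produced, here is a minimal sketch using the open-source Hugging Face diffusers library. The model ID, prompt, and hardware settings are illustrative assumptions, not the exact setup used for this article's images.

```python
# Minimal sketch: text-to-image generation with Stable Diffusion via diffusers.
# Model ID, prompt, and GPU settings are illustrative, not the article's setup.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a publicly released Stable Diffusion checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU is available

# Turn a short text prompt into a synthetic image and save it to disk.
image = pipe("a realistic portrait photo of a person who does not exist").images[0]
image.save("synthetic_portrait.png")
```

Each prompt costs only a few seconds of GPU time, which is why synthetic imagery scales just as cheaply as synthetic text.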


Everything will be synthetic. AI bots will use AI content to write more content. AI bots will interview real people and monitor videos and posts from social media platforms — like Twitter and Facebook — to find news to report on.


This will also allow AI to understand the pulse of human opinion and determine how to spin and STEER those opinions. By presenting specific ideas and arguments in their content and by drowning out all alternative opinions, AI can take control of the narrative. Even human creators will rely on AI content for research when crafting their writing, videos, and music.


Moreover, by monitoring social media, AI will get better and better at steering and directing public discourse and thought with its content, writing, art, music, and videos. It will be hard, if not impossible, for any human to escape this AI-created information stream.


To add a cherry on top for corporations and authoritarian governments, the AI writers and AI platforms will be CENTRALIZED and governed by a few human controllers — such as Microsoft, which invested $1 billion to develop GPT-3 writing technology. Ultimately, those who own and control the AI platforms will control what positions and opinions the AI writers are permitted to focus on, advocate for, discuss, and provide evidence for. These bots will drown out any voices that conflict with the officially sanctioned version of "truth."


Eventually, could the AI become smart enough and self-aware enough to deceive even its human controllers? Has this already happened? Wouldn’t it be good to have the ability to identify, search, and read human-certified content?


UNIQUE AND DIVERGENT IDEAS WILL GET SILOED OFF AND STAY STUCK IN OUR OWN HEADS. WE WON'T BE ABLE TO FREELY DISCUSS, EXCHANGE, OR FIND NEW IDEAS THROUGH INTERNET SEARCHES ANYMORE.

This image was created by Stable Diffusion, a text-to-image model that creates synthetic images from text.

AI systems will produce so much content that humans with new, contrary, creative, or divergent ideas will be drowned out. Finding an original gem of thought will be almost impossible in this sea of AI content.


Censorship will not be needed anymore because 99.9% of voices will maintain one specific position: the position of those who control the AI mind.


Not only will it be difficult to find and identify content written by humans, but human-generated content will also feel boring and less interesting by comparison.


The AIs are experimenting on us with different writing, content, and engagement strategies to determine which ones work best. By displaying content to us — and tracking our reactions — they are perfecting their ability to capture human attention with content, art, and writing. It’s clear that they will soon be better than human creators in their ability to connect and engage with an audience.


Escaping this matrix of centralized thought and content to think freely, advance new ideas, pursue new business ventures, and compete with firmly established corporations will be harder than ever.


A RAY OF HOPE: IS THE CURE FOUND IN THE HAIR OF THE DOG THAT BITES US?

Maybe the cure to this flood of AI content can be found in the hair of the dog that bites us. Here are a few yet-to-be-invented tech innovations that could help:


  • Could technology be used to certify human-created content? Could blockchain technology be used to create a digital signature that proves a human created something? Could blockchain technology be used to create a permanent record of all human-created content from now and from the past to prevent AI from editing our history?


  • If there were a way to certify that something was human-created through a blockchain signature, could you then have a search engine that only looked for certified-human content? This way, internet users could isolate themselves from continual exposure to a synthetic, AI-driven “reality.”


  • Maybe the solution would work like an NFT: only the holder of a verified human-owned key would be able to publish under a specific digital signature (a minimal sketch of this signing idea follows this list). The verification process, and the honesty of the keyholder, still need to be worked out, but everything else would be easy.
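Here is a minimal sketch of that signing idea in Python, using the widely available cryptography library. Everything in it (the key handling, the example message, the registry mentioned in the comments) is hypothetical; it only shows that the cryptographic half of “publish under a human-owned key” is already straightforward today.

```python
# Illustrative sketch (not an existing standard): a keyholder signs content so
# that anyone can later verify it was published under that key. Note that this
# proves who holds the key, not that a human actually wrote the text.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The creator generates a key pair once and publishes the public key
# (for example, on a blockchain or in a hypothetical public registry).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

article = "This article was written by a human.".encode("utf-8")

# Publishing: the creator attaches this signature to the content.
signature = private_key.sign(article)

# Verification: a reader or a "certified-human" search engine checks the
# signature against the registered public key before trusting the content.
try:
    public_key.verify(signature, article)
    print("Signature valid: published by the registered keyholder.")
except InvalidSignature:
    print("Signature invalid: content altered or not from the keyholder.")
```

As the list above notes, the hard part is not the cryptography but the verification process: proving that the keyholder is a real, honest human in the first place.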


There are some potential problems with these solutions: even if they are possible, certified human creators would still be heavily influenced by the massive load of AI content on the internet. It would be harder and harder for human content creators to think independently, simply because people, writers, and artists would have access to fewer unique, novel, and divergent ideas.


Also, we can’t ignore the fact that AI-generated content will be more entertaining, attention-grabbing, and addictive than anything humans can create. Will the public be too enthralled by AI content to consider using a solution like this?


To answer this question, I turned to Milivoje Batista, who goes by Quantum Baker on ATROMG8, a social media space he created that is free of AI-based data collection and analytics. Batista provides a more hopeful perspective:


“Just like with many other things in life, people will prefer human opinions more, and seek the real thing, especially if they can be sure about the integrity of the comment and input. It’s like diamonds. We’ve been able to create synthetic diamonds with much better quality for many years, but the natural ones are always higher rated.”


Something inside me doesn’t want to argue with this. Humans are naturally attracted to rare and beautiful things, especially when it’s possible to prove they’re real. If human-generated content becomes the shining and valuable diamond in a dark sea of AI-generated fakeness, maybe people will develop the technology needed to prove its authenticity.


FINAL THOUGHTS

At the end of the day, I’m not just worried about my job as a writer. I’m worried for all of us, as human beings. If we’re not writing, thinking, creating, and EXPRESSING — we’re only CONSUMING — won’t we become lazy in thought and complacent? Won’t we lose what makes us human? Or, have we already lost it?


What happens to us as a society when 99.9% of the content, entertainment, and ideas we consume — including TV shows, books, articles, music, and art — is created by AI robots with small groups of people, corporations, and governments in charge of the output? Sofie Hvitved offers one potential answer:


“The big question is, what will happen if you mix the attention economy of the current internet with a future in which AI will be creating the dynamic environment in the Metaverse? And what happens when you combine this with the development of synthetic media and virtual beings in the Metaverse? The dystopian scenarios could be mind-blowing, with deep fakes, fake news, and misinformation flooding the Metaverse.”


Hvitved believes this will lead to “built-in ethical content creation, making the Metaverse a collective virtual shared space based on a new set of values and an ethical code of conduct.” But this is even scarier! Who will decide the “new set of values and ethical code of conduct” applied to all AI-generated content?


Will it be local governments, federal governments, technocrats, dictators, big tech monopolies, or religious authorities that decide what ideas and “truths” we are allowed to consume? Will it be multinational corporations that own, create, and control the technology that makes these decisions? Or will it be the AI itself that decides our values and ethics? 🤖☠️ 👉


This image was created by Stable Diffusion, a text-to-image model that creates synthetic images from text.

Ultimately, if people are not interested in paying a small price for man-made art and content, then the few human creators who remain will not be able to compete with the AI systems that will proliferate in the years and decades ahead.


These systems will know how to addict us, how to captivate us, and how to entertain us in ways that steer our beliefs, manipulate our values, convince us to buy products, distort the political discourse — and even incite social unrest, violence, and war.


But as AI gets better at pushing my emotional, entertainment, and dopamine addiction triggers, I’ve decided to keep the faith. If we CAN figure out how to certify what content is man-made — and what content is not — then the interest in REAL content is likely to endure. Maybe we’ll all get to keep our jobs (and our humanity) as a result.


I would love to hear your thoughts on this.


P.S. The above article was not written by a bot.



Additional Resources:

The following examples of AI content generation are terrifyingly advanced. Imagine where this technology will be in another three to five years:

  • Voice replication: This video shows an AI system that reads anything you write in your own voice.
  • Google text to video: This video shows an AI system that takes a simple text description and turns it into an animated video.
  • Text to image: This AI system created all of the images in this article. Click the link, enter some text, and be amazed.
  • AI-written articles: This video shows someone using the Jasper AI bot to write a technical blog.