
Is the Internet as we know it ending?

What about promoting NGOs online?

If you’ve been annoyed that every social media app has been slowly turning into TikTok for the past few years, then you won’t like what’s going to happen next… Meta has announced that it will soon start displaying AI-generated content within its social media outlets, Facebook and Instagram, created based on what we share or are interested in. Instead of links to actual websites, search engines want to give us AI-generated answers. Will the Internet as we know it “die” or become “synthetic”? Where should NGOs that rely on social media look for new hope?

The Internet has always had its flaws, and many of the issues we are observing today on a large scale were predicted long before tools like ChatGPT or even TikTok became popular. The prevalent business model on the web, advertising, has gradually drifted toward ever greater invasions of privacy in pursuit of better profiling.

Over the years, content recommendation systems armed with machine learning algorithms (MLAs) have become increasingly aggressive in deciding what will increase our engagement. It’s not surprising that, over time, without regulation, companies and the algorithms they create have focused on the most controversial and polarizing content. From a purely financial perspective, this model works best for them (at least for now).

The coming social media reset is clearly visible in weekly engagement figures. Video-first social media is catching up with Facebook and is the only category still growing. Video content, combined with algorithms that profile users and optimize for rapid engagement, is winning the race for our attention. Source: Overview and key findings of the 2024 Digital News Report, Reuters Institute

The dead Internet theory

Bots have overwhelmingly taken up residence on social media and are responsible for much of the automated activity on the web – from automated attacks on websites to scraping content from pages to train AI models – and today they account for a sizable portion of all web traffic. The "dead Internet" theory builds on this fact into an incredibly simplistic picture of how the modern Internet works, one that oscillates on the border of conspiracy theory: many people believe the presence of bots serves to control information and manipulate the public on a mass scale.

However, the bots are not controlled by a secretive world government but, more prosaically, by businesses and by states specializing in cyberwarfare and disinformation. To make matters worse, the biggest companies in charge of the web's public communication spaces, namely social media, have failed to deal with this kind of synthetic traffic for years. Sometimes they don't even want to admit the actual scale of the problem, as with Twitter after Elon Musk's takeover, because doing so would significantly shrink the apparent size of the service and lower its credibility in the eyes of its users.

According to research on Twitter, bots account for between 5% and 13.7% of all accounts, although their share of activity and interactions is much higher. We don't have to look far: according to a NATO-affiliated think tank, pro-Russian disinformation supported by AI-generated content is already widespread on social media in Central European countries. Among other things, it has reinforced polarization around farmers' protests against European Union policies.

Even without conspiracy theories, trust online is steadily declining. In a global Ipsos opinion poll, companies that run social media are rated as untrustworthy, on par with oil companies (only governments fare worse, and even pharmaceutical companies do better). According to a Reuters Institute survey, trust in content is declining year after year and is lowest in countries where political elections are about to take place.

Synthetic social media

Although the dead Internet theory, which can be used to disinform and undermine public trust, is compelling, it seems more pertinent to address synthetic media. Casey Newton, a Silicon Valley journalist and author of the Platformer newsletter, predicted shortly after the release of ChatGPT that within a few years, most of the content we’d be seeing would be generated by AI.

Cybercriminals use AI tools to impersonate real people (by cloning their voices or deep-faking their faces on video) and to generate fake documents. Users themselves have taken to flooding social media with AI-generated content, most often in response to spam created to increase the reach of advertising profiles.

The pattern is straightforward:

  1. Absurd AI-generated content (like the "Shrimp Jesus" images) published en masse elicits user reactions,
  2. Facebook's algorithms promote it further,
  3. The content is displayed to more and more users, driving traffic to groups and pages that exist mainly to display ads.

This is classic spam, although modernized, thanks to generative AI. Instead of fighting it, Big Tech companies are adapting it to their own needs. The previously mentioned Meta announcement about adding generated content to the feed is just the beginning.

Meta's official account published AI-generated photos of the aurora borealis over San Francisco, outraging many users who had posted real photos on the company's services.

Ensh*ttification of digital services

Cory Doctorow called this process of the deterioration of content and services on the web “enshittification” and described it in the Financial Times, among others. The blunt name is no accident. Doctorow argues that companies can afford to provide worse and worse services, extract more private data, and raise subscription prices. They have built up a monopoly position in the market and know full well how difficult it will be to stop using their services.

Facebook and Instagram are the most common examples. Over a dozen years, Facebook has replaced many of the services and sites that once helped people organize and promote local events. Today it is challenging to find a person sincerely satisfied with this social network, yet equally difficult to find someone willing to abandon it: it alone hosts the groups that interest them, the contacts of most of their friends, and access to the audiences of organizations and cultural institutions. Many people feel they are on it because they have to be.

Despite the growing dissatisfaction among creators and influencers, Instagram is increasingly modeling itself after TikTok. Increasing engagement time on content served by algorithms makes it harder for creators to reach their audiences.

Following someone no longer guarantees we will see their content. And watching more of the videos the algorithm suggests doesn't mean we're actually watching what we like; we're watching what is intended to maximize our time in the app.

Increasingly, this is content generated with the help of AI.

Image from Meta AI materials: examples of content generated using data about us, and even our likeness.

What’s next for the Internet?

Answering two questions may help us find a kernel of hope (or remove what remains of it).

First of all: how much does all of this cost? Generating synthetic content with AI is far more expensive than other ways of using the web. AI data centers consume a tremendous amount of energy, and the companies building them are beginning to invest in their own nuclear power plants in search of new power sources. Regulation and simple economic calculation may yet curb the unlimited consumption of electricity and water by data centers. If the energy costs of generative AI cannot be lowered, or if states impose additional fees on Big Tech for this reason, its use may be restricted to the cases where it actually makes sense.

The second question is: what can we replace it with? If social media is still the main channel through which you or your organization reach your audience, you've probably seen your reach deteriorate, interactions drop, and even the effectiveness of advertising campaigns decline. Social networks (except for video) are no longer growing as fast as they once did or, as in the case of Facebook, have been stagnating for several years.

The answer is communities developed outside of social media and in a way that builds trust between creators and audiences. This can be seen in the popularity of newsletters, niche platforms, instant messaging for groups like Discord, and the transition of online creators and artists to services like Patreon and Patronite. Although they have completely disappeared from media discussions, forums, especially for professionals and hobbyists, are still operating and can offer advice without ads and algorithmic referrals.

Unfortunately, going "underground" like this carries the risk of reinforcing information bubbles, this time not through a profiling algorithm but through the lack of strong, trustworthy mass media covering scientific and socio-political issues. And while those organizing communities outside social media have more control over their online space, most of these services are still commercial and can change their terms and conditions.

Before social media is completely taken over, as in the memes about AI reading posts written by AI, organizations should begin to move data about their audiences from mainstream sites to ones where they will have more control. We at Sektor 3.0 strongly encourage this in our newsletters, but it’s not the only solution.

Building community and trust takes time, whether through direct communication (emails, text messages, organizing online and offline meetings) or through creating content for your audience rather than content optimized for an algorithm. It's time to come to terms with the fact that easy outreach and instant profiles and groups will no longer work the way they used to. Unfortunately, Big Tech companies make far more money from their algorithms than we humans ever will.