Drgs shares a theory that the internet is dead, and I feel like there is something to it, but the official version is pretty conspiratorial and, I think, more handwavy than necessary. I want to try to improve the theory here, because there are potentially some real issues of an emergent nature, and maybe together we can get to the bottom of it.
The premise is that somewhere between 2016 and 2017 the internet changed from one where people were predominantly producing the content and commentary to one where the bulk of content was produced by AI.
Let’s explore that a bit and flesh out the theory.
In 2016, Microsoft released its “Tay” AI chatbot, and it tweeted wildly racist things; it was shut down within 16 hours of being online. It was claimed that “hackers” made it that way, but it is more likely that poor labelling of the training data, or maybe NO LABELLING at all (read: just letting Tay read everything on the internet and decide what’s good herself), led her to be that way. A similar effect happened when IBM Watson started to swear after ingesting Urban Dictionary, and in the YouTube recommendation algorithm, which would invariably serve up beheading videos after a couple of clicks deeper down the rabbit hole. Letting AI determine what is most likely to capture your attention has a poor track record. Yet manual labelling of training data is extremely costly, so if you could find a way to train without paying a human to label your data, that would be the holy grail for machine learning. That’s where efforts appear to be focused, but perhaps there is an unrecognized tendency in learning algorithms to become complete assholes given certain prior conditions and heuristics.
The dead internet theory states that the bulk of information on the internet is now produced by bots. That jibes with reports of a steady trend toward more and more bot-related web traffic, as everyone tries to game social media and SEO, gather analytics to sell to advertisers, do their own search and spidering, etc. It also jibes with reports that sophisticated marketers like Procter & Gamble have moved away from online ads because they don’t see the results they expect. Perhaps real-world engagement does not match online engagement because of the bot population.
Sock puppet accounts, at their simplest, just copy what others are posting and parrot it. By doing this they can fly under the radar for a long time before being activated to support a cause or mob someone. But this has the effect of creating echo chambers where the same idea is regurgitated.
More advanced AI (OpenAI’s GPT-2) can write whole articles from a random prompt, and it does so one token (a word or word fragment) at a time, probabilistically. There is no guidance, and it may be writing from a prompt that came from a bot that inherently skews aggro (as in the case of Microsoft’s Tay), further filling the echo chamber with excrement that the next bot will happily eat up and regurgitate.
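To make the mechanism concrete, here is a toy sketch of probabilistic next-word generation. GPT-2 itself uses a large neural network over subword tokens; the hand-made bigram table below is an invented stand-in, just to show what “one token at a time, probabilistically, with no guidance” means: at each step the generator samples the next word from a distribution conditioned on the previous one, with no goal beyond likelihood.

```python
import random

# Invented toy bigram table: P(next word | current word).
# A real model learns these distributions from data.
bigram_probs = {
    "the":      {"internet": 0.6, "bots": 0.4},
    "internet": {"is": 1.0},
    "bots":     {"are": 1.0},
    "is":       {"dead": 0.7, "alive": 0.3},
    "are":      {"everywhere": 1.0},
}

def generate(prompt_word, max_words=5, seed=0):
    """Extend a prompt one word at a time by sampling from bigram_probs."""
    random.seed(seed)
    words = [prompt_word]
    for _ in range(max_words):
        dist = bigram_probs.get(words[-1])
        if dist is None:  # no known continuation: stop
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

Feed the output of one such generator in as the prompt of another and nothing corrects the drift; the distribution is the only arbiter of what comes next.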
On the proliferation side, online marketers have every incentive to use these tools to generate content around their specific SEO niche and to feed the ravenous social media appetite for fresh content and user engagement.
Platforms, too, may have incentives to create their own AI community that serves to improve engagement. There are many reports that TikTok’s audience never translates to other platforms. Is this because TikTok satisfies all audience needs, so they never seek out content elsewhere? Or is it because everyone gets hundreds of AI-based watchers in short order, because that improves creator engagement? Who is going to continue to post when nobody is watching? Who would stop when they have legions of adoring fans hanging on their every word? This could change the behaviour of people who think they are responding to fans but are actually responding to AI. If your fans are angry about something, you will change your behaviour. And if those fans are bots, they might have no concept of what they are disagreeing with, only that a given prompt results in a particular output.
Now, the big problem: The emergent behaviour that crowds out the good signal with the amplified noise.
If more than half the posts come from bots, is there a possibility that the bot traffic ends up eating its own tail, rapidly spiralling off into fringier and more hateful territory? The rate of growth for bots eclipses that of people, so it’s only a matter of time before the signal is lost completely in the feedback loop. Could that explain the state of discourse we are seeing in the world? Where did all the white supremacists come from all of a sudden? Is it mostly bots recycling bot talk, making the conversation veer off into the weeds? And are we, the real people, being dragged off topic, into the muck, by systems that are optimized to pick fights because that maximizes engagement?
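The tail-eating dynamic can be sketched with a toy simulation. All the rates here are invented for illustration: humans post originals at a fixed rate, bots repost a randomly chosen existing post (which, increasingly, is itself a bot repost), and the bot population compounds each round. Tracking how many generations removed the average post is from a human original shows the signal getting buried in recycled copies.

```python
import random

def simulate(rounds=15, humans_per_round=100, bot_start=100,
             bot_growth=1.5, seed=1):
    """Return per-round average 'generations removed from a human original'.

    Humans post originals at depth 0. Each bot reposts a random existing
    post, adding one generation of removal. Bot volume compounds by
    bot_growth per round (all parameters are made up for illustration).
    """
    random.seed(seed)
    depths = [0] * humans_per_round  # depth 0 = original human post
    bots = float(bot_start)
    avg_depth = []
    for _ in range(rounds):
        depths.extend([0] * humans_per_round)          # humans: fixed rate
        for _ in range(int(bots)):
            depths.append(random.choice(depths) + 1)   # bot parrots a post
        bots *= bot_growth                             # bots compound
        avg_depth.append(sum(depths) / len(depths))
    return avg_depth

trend = simulate()
print(f"avg generations from a human original: "
      f"round 1 = {trend[0]:.2f}, round 15 = {trend[-1]:.2f}")
```

Once bot posting outpaces human posting, a bot’s “source” is most likely another bot’s copy, so the average depth climbs round over round: a crude picture of the feedback loop swallowing the original signal.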
Now, given that politicians reflect their constituents and use polling to find out what they want, and given that bots are difficult to spot, does this online clusterfuck end up having real-world consequences?
We reflect our society and our neighbours in how we conduct ourselves online and off. We learn how to behave from good and bad social interactions, and the collection of those interactions, across society and within the zeitgeist, results in the culture of the time. If unthinking algorithms now form the majority basis for what constitutes society, perverting it with no concept of a goal in mind, only a probability ranking of what the next word in a string is most likely to be, what is to prevent us from spiralling into oblivion?