We seem to be entering an “AI” apocalypse of sorts. The AIs aren’t going to kill us or even take our jobs; what they are doing is destroying the Internet commons by filling it with rubbish. This isn’t even real AI, just pattern-matching and prediction systems, mostly LLMs.
The Problem
Scott Shambaugh’s saga of being attacked and defamed by an OpenClaw AI bot is interesting and raises some disturbing possibilities for the future of online discussion [1]. Imagine what it would be like if everyone who was in any way notable for free software work had 100 such bots going after them.
Bruce Schneier and Nathan E. Sanders wrote an insightful article about the AI-generated-text arms race [3], primarily concentrating on situations where text assumed to be written by humans, but actually produced in bulk by bots, amounts to a denial of service attack on the people who have to review it. There are many situations, such as book publishing and letters to the editor of newspapers, where receiving new material from unknown people is an important part of the job, but where low-quality submissions are almost a DoS attack at the best of times.
Currently the email spam problem continues to get worse, and as LLM use increases it will get significantly worse. Email encryption isn’t viable [4]; the PGP web of trust never really worked well, as it’s too difficult for most users.
The amount of “AI” generated content that’s being recommended to users on platforms like YouTube and Facebook is steadily increasing and the amount of LLM generated commentary that purports to be from real people on Twitter and Facebook is also increasing. Here’s an informative blog post by Erich Schubert about this [5].
Potential Solutions
Surrender?
One option, and possibly the default, is to surrender and just let everything we built on the Internet over decades be destroyed. Whether to surrender is a decision that can be made on a per-service basis.
Twitter is pretty much useless anyway; I quit because Elon deliberately made it suck [6]. In my opinion this is not surrendering to what’s being done there, I’m just no longer wasting time on it and am using better options instead. I used to have about 300 followers on Twitter, and I don’t think many of them would ever choose to stop following me, so I presume that the roughly 1/3 who no longer follow me have quit Twitter entirely and deleted their accounts. I also presume that some of the remainder have done the same as me and kept a mostly inactive account. If Elon suddenly stopped being a stupid asshole it probably wouldn’t change anything, as the value of the system was in the connections to other people. Some people will consider my abandonment of Twitter to be surrender, and I accept that it’s not an unreasonable opinion. I think that the hundred or so Twitter followers of mine who deleted their accounts surrendered.
Facebook has been becoming a worse service; its business model is increasingly exploitative and its interface is designed to be addictive. It’s probably best avoided unless you really need it. The only good thing about Facebook at the moment is that Facebook Marketplace doesn’t take a cut of sales, and there are some really good deals on computers if you know what to look for. Unfortunately Facebook has a large number of users who are from marginalised communities and have no alternatives for communication. It would be good to get them migrated to other platforms.
We could just give up on a lot of general communications services, accept that good content is drowned out by rubbish, and let the Internet become divided between people who tolerate the rubbish and those who stop using large portions of the Internet to avoid it.
Using Non Commercial Services
Lemmy is a good FOSS federated alternative to Reddit which also covers some of the uses of Facebook. It needs more users to reach critical mass but is still quite usable. A post that might get a dozen comments on Reddit may get one comment on Lemmy, but that one comment will be a good one. Reddit doesn’t appear to be attacked much by LLM-generated content, at least not yet. Even if the Reddit model proves resilient to LLM attack, the Lemmy software can be used to replace some things that are done on Facebook.
Mastodon is a good FOSS federated replacement for Twitter; it has a decent user base including some VIPs. While it is aimed at the Twitter use case it can also cover a significant part of the Facebook use case.
There are some other FOSS social media programs which could take over other parts of the commercial social media environment.
Generally, commercially run Internet services will have a financial incentive to allow these problems to get worse, so we need to rely on FOSS software, non-commercial implementations, and government services.
Web Search
For a long time Google has had a monopoly on web search, but now it defaults to including an “AI Overview” at the start of the results which is sometimes useful but also sometimes very wrong. You can use the search URL “https://www.google.com/search?q=%s&udm=14” to get Google results without the rubbish, but I presume they will break that if it gets too popular.
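For concreteness, a search URL of this form can be built programmatically. This is a minimal sketch in Python; `udm=14` is the widely shared value for requesting Google’s plain “Web” results tab, and like any undocumented parameter it may stop working at any time:

```python
from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    """Build a Google search URL that requests the plain "Web" results tab,
    skipping the AI Overview and other extras (udm=14 is an unofficial,
    widely shared parameter and may break without notice)."""
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

print(web_only_search_url("lemmy vs reddit"))
```

The same URL pattern can be pasted into a browser’s custom search engine settings with `%s` in place of the query.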
Searxng is an AGPL-licensed metasearch engine that aggregates results from other engines; here’s the Searxng source [7] and here’s a list of Searxng instances if you want to try one [8].
Even using a metasearch engine like Searxng won’t help if the original data is overloaded with spam, but alleviating the problem is a good temporary measure.
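If you would rather not rely on the public instances, you can run your own. A minimal sketch using the project’s container image (the image name, tag, ports, and defaults here are assumptions; check the Searxng documentation for a production setup):

```shell
# Run a local SearXNG instance for testing (image name and internal
# port 8080 are assumptions based on the project's packaging).
docker pull searxng/searxng:latest
docker run --rm -d -p 8888:8080 --name searxng searxng/searxng:latest
# Then point a browser at http://localhost:8888
```

A self-hosted instance also lets you choose which upstream engines to aggregate, which matters if some of them become spam-flooded.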
Web of Trust for the Web?
I’ve idly considered the possibility of having some sort of rating system for web pages that uses a web of trust, so that you can securely use the trust ratings of friends of friends, etc. But given all the difficulties in using a web of trust for signing GPG keys among software developers (the demographic most skilled at doing such things) it doesn’t seem viable.
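To make the friends-of-friends idea concrete, here is a hypothetical sketch of trust-weighted page ratings; every name, data structure, and constant in it is an assumption for illustration, not an existing system:

```python
from collections import deque

DECAY = 0.5  # assumption: each hop of social distance halves a rater's weight

def trusted_rating(page, me, trust_edges, page_ratings, max_hops=2):
    """Breadth-first walk of the trust graph from `me`, keeping the strongest
    path to each user, then return the trust-weighted average of their ratings
    for `page` (None if nobody reachable has rated it)."""
    weights = {me: 1.0}
    queue = deque([(me, 0)])
    while queue:
        user, hops = queue.popleft()
        if hops == max_hops:
            continue
        for friend in trust_edges.get(user, []):
            w = weights[user] * DECAY
            if w > weights.get(friend, 0.0):  # keep the strongest path only
                weights[friend] = w
                queue.append((friend, hops + 1))
    total = weighted = 0.0
    for user, w in weights.items():
        rating = page_ratings.get(user, {}).get(page)
        if user != me and rating is not None:
            weighted += w * rating
            total += w
    return weighted / total if total else None

# Alice (direct friend) rates the page 1.0; Bob (friend of a friend) rates it 0.0.
trust = {"me": ["alice"], "alice": ["bob"]}
ratings = {"alice": {"example.com": 1.0}, "bob": {"example.com": 0.0}}
print(trusted_rating("example.com", "me", trust, ratings))  # ≈ 0.667
```

Even this toy version exposes the hard parts: the decay policy, how to aggregate conflicting ratings, and where the trust graph comes from in the first place, which is exactly where the GPG web of trust failed.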
Should we surrender the idea of having a usable public web?
In the early days of the web (before Google) it was standard practice to rely on recommendations from other people or from trusted sites to find new sites; that could be considered an informal web of trust. We could go back to that sort of usage pattern if Google and many of the big sites get overwhelmed by LLM-generated spam.
Wikipedia
I believe that Wikipedia will be on the front lines of this battle. Its model has always included anonymous contributions. Benjamin Mako Hill wrote an interesting blog post about research he did with Kaylea Champion into Wikipedia pages on taboo topics, which have a larger portion of contributors choosing to be anonymous than non-taboo pages [9]. Wikipedia also has a long history of being abused for various reasons; one case I witnessed was someone putting false content into Wikipedia pages in order to immediately cite them in support of their Facebook arguments. That sort of thing can be dealt with at human scale, but a large-scale attack by bots is a different problem to solve. Also, with the recent developments in AI, creating multiple web sites populated entirely to support one fake entry in Wikipedia is now plausible.
The upside of the attacks I predict is that they will attract the attention of the people who have the skills to develop counter-measures. While LLM bots are filling the inboxes of publishers with rubbish and messing up the Stack Overflow comment sections, not a lot of people are bothered; but once the attacks on Wikipedia get serious, everyone will take notice.
National AI
Bruce Schneier and Nathan E. Sanders wrote an interesting blog post about nationalised public AI [10]. While that won’t directly address this issue, it will get the right technology into the hands of people who can use it in the right way.
Conclusion
This is going to be a difficult problem to solve, more difficult than the email spam problem, which we have been unable to solve after 30 years of working on it.
This is also a very important problem. We are currently in an age where we have access to information that most people couldn’t even dream of 30 years ago. We also have disinformation that combines some of the worst aspects of authoritarian regimes throughout history with the worst aspects of cult brainwashing. If we lose access to the information but the disinformation remains (or gets worse) then the result will be terrible.
I don’t have great ideas for solving this. I have outlined some small ideas for mitigation, and I hope that others can expand on them.
Please write comments with any good ideas you have, or even ideas that don’t totally suck. A problem this difficult is not going to be solved in a blog comment, but a blog comment might point in the right direction.
- [1] https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/
- [2] https://tinyurl.com/26wm43e2
- [3] https://tinyurl.com/22ghka6s
- [4] https://tinyurl.com/29to4cw5
- [5] https://www.vitavonni.de/blog/202602/20260213dogfood-the-AI.html
- [6] https://etbe.coker.com.au/2026/03/25/death-of-twitter/
- [7] https://github.com/searxng/searxng
- [8] https://searx.space/
- [9] https://tinyurl.com/26m98gca
- [10] https://tinyurl.com/24xt9gst



