
If you were to believe ugly bald men with glasses, you would think we are all on the cusp of being replaced by artificial intelligence. Some even peddle the idea that our reality itself is generated by machines (then why would the machines make them look like that?).
This is not new. Medieval people suffered from the glass delusion. Once glass became a thing in the late medieval period, people who spent too much time thinking and not enough time picking turnips began to fear that they themselves were made of glass. They thought they could shatter at any moment.
And this wasn’t just something illiterate peasants dealt with; it seems to have afflicted mostly the thinkers among the upper classes. The Dutch polymath Caspar Barlaeus suffered from the glass delusion at some point: he too feared that he was made of glass.
In our era, the “smart” people suffer from a kind of techno-gnostic apocalyptic delusion. They are convinced AI is going to cause some sort of profound transformation of our world, that it will render us humans obsolete. Or at least they peddle these ideas with great success to raise money for their own business ventures.
The idea they peddle goes like this: the large language models train on data gathered from the Internet, eventually become smart enough to improve themselves, and finally reach super-human intelligence, at which point we become obsolete.
But that’s not really how things work. AI bumps into a bunch of problems when competing with the human nervous system. Today I want to explain some of the problems the AI bubble is going to run into. They are as follows:
- Energy use
- Distorted source material
- Inbreeding (AI trained on AI)
- Plagiarism
- Economic problems
- Accountability
Energy use
To start with, our brains make use of quantum effects for computation and are at least five million times more energy efficient than anything we can achieve with silicon-based computation. Computer chips get more energy efficient over time, but that’s a pretty massive gap to bridge. And their increasing efficiency comes at the cost of an increasing error rate, as the smaller, more efficient chips are more vulnerable to single-event errors.
The human brain benefits from the way the laws of physics inherently favor carbon-based life (the biocentric universe). And you might be wondering: well, fine, if you can’t make advanced AI out of “computronium”, because our own neurons are so much better at making use of quantum effects than the chatbots are, could you make AI out of neurons?
You can’t. You run into a number of problems. There are experiments with a small number of neurons driving little cars and other creepy stuff like that. But when you wish to stack neurons together in a Petri dish, you run into the problem that they produce waste and need a source of oxygen. Waste removal and oxygen delivery are handled by blood vessels, which need endothelial cells. The neural networks also need cells that engage in pruning, which our brain uses to make itself more efficient. Compare how hard it once was to learn to ride your bicycle as a kid to how easy it eventually became. That requires correct pruning, done by microglia. Finally, you would need a way to feed information into the neural network and to read from it. For us, that’s a body. The brain communicates through our (body) language and it receives information from our sensory organs. You don’t have that in a Petri dish.
Distorted source material
But more important to understand is that, like the AI models, our brains are trained through exposure to the world around us. When your brain was being trained as a child, you were exposed to data of far higher quality than anything available to artificial intelligence systems. We get to see the real world and we train our brains on the real world. AI is trained on an idealized online representation of reality.
Think, for example, of when you were fourteen. You may remember what your classmates looked like. Most of them had some pimples; some had faces covered in pimples. You know that this is what most teenagers look like. But ask AI for an image of a bunch of teenage boys and you just won’t find any that have acne. You won’t find other traits that teenage boys have either, like puffy nipples. Boys don’t like having puffy nipples, they don’t show them off, so they don’t end up on the Internet.
AI is only ever exposed to what we show off online, which we all intuitively understand to be an idealized representation of the world. And even among the data that we upload online, there is stuff that will be amplified more than other stuff. Images of Kim Kardashian will pop up more frequently in a data set than images of your racist uncle Bob.
So AI, being trained on an idealized representation of reality, is forced to learn from distorted training data, in contrast to your own mind. That’s part of the reason why, generally speaking, the products of AI feel a little too perfect and soulless. It’s trained on humans sharing their idealized image of their world. You don’t just share pictures of yourself without acne. You also share pictures of your street without the partially decomposed dead rats and dog turds found on the pavement. And even if you were to share pictures of how your street really looks, they would not go viral, so the AI would not encounter them in its training data. AI is not really able to figure that out.
Inbreeding
Eventually, the Internet begins to fill up with AI-generated data, based on this idealized representation of reality. And then the AI begins generating images based on that data. This is inherently going to be a distortion. From the data the AI generates, we then selectively choose the pictures that we desire. This is a further impoverishment of the total diversity that AI is able to generate. Most young people and artists hate AI; AI is mostly used by right-wing low-IQ boomers. As a result, the diminished variety that AI can produce is further impoverished by the cognitive filter of right-wing low-IQ boomers. This AI-generated junk then floods across the internet like diarrhea. It leads to this sort of crap:

You may have heard of this problem. It’s the AI inbreeding problem. You can see below an example of what you get after a few generations of AI being trained on its own output:

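If you want to see the mechanism in miniature, here is a minimal sketch (a toy example of my own, not anything an AI company actually runs): a simple statistical model is repeatedly refitted to its own output, while the “users” keep only the samples closest to the average. The spread of the surviving data shrinks with every generation.

```python
# Toy illustration of the "inbreeding" loop: a model is repeatedly refitted to
# its own output, and users keep only the samples closest to the average.
# (A made-up example; real image models are vastly more complex.)
import random
import statistics

random.seed(0)

def generate(mean, stdev, n):
    """Sample n data points from the current 'model' (here just a Gaussian)."""
    return [random.gauss(mean, stdev) for _ in range(n)]

# Generation 0: "real world" data with plenty of variety.
data = generate(mean=0.0, stdev=1.0, n=5000)

for gen in range(1, 8):
    # Train the next model on whatever data is currently floating around online.
    mean = statistics.fmean(data)
    stdev = statistics.pstdev(data)
    samples = generate(mean, stdev, 5000)
    # The human filter: only the "nice" samples near the average get reposted;
    # the tails (acne, cellulite, dead rats on the pavement) never make it back.
    data = sorted(samples, key=lambda x: abs(x - mean))[:2500]
    print(f"generation {gen}: spread of surviving data = {statistics.pstdev(data):.3f}")
```

Run it and the printed spread drops generation after generation; the tails, which is where all the interesting variety lives, are the first thing to go.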
Plagiarism
You can’t filter out the AI-generated data, because AI-generated data generally isn’t going to be labeled as such. Humans try to pass off AI-generated data as the work of real humans. They made an algorithmic plagiarism app and they use it to make real artists unemployed. And if, as an artist, you plan on suing them, you might want to hurry.
Economic problems
It’s not just that other artists have filed lawsuits, along with various newspapers and other media outlets. The AI companies are also operating at a loss. OpenAI operated at a five billion dollar loss in 2024. And that’s in an environment without genuine competition. It fundamentally doesn’t matter whether you’re theoretically capable of something when it’s not economically sustainable. We used to have faster-than-sound travel between the United States and Europe. But then oil became expensive and it came to an end. The Concorde first flew in 1969; the last flight was in 2003. The Soviet Union’s version, the Tupolev Tu-144, stopped flying after 1999.
You generally need a high share of AI-generated data to produce the AI inbreeding problem. However, that doesn’t matter, because there is an abundance of AI-generated data being produced, and anyone whose data is used to train AI models has an opportunity to hold the billionaires upside down and shake them around in the courts until money starts falling out of their pockets, so they can’t just keep using raw data.
It also doesn’t really matter if you favor rare data in your AI’s training data and try to encourage it to produce rare data. The reason it doesn’t matter is that you’re still stuck with people using AI to try to deceive morons. Nobody is going to scam guys out of their life savings with an AI-generated image of a fat middle-aged woman with cellulite. Even if your AI model generates such an image, its users will tend to discard it. They will use some kind of Kim Kardashian instead, so that’s what ends up in your data set.
But you’re also dealing with the tragedy of the commons: the Internet is the commons used by the AI companies. There may be all sorts of measures they could take to try to avoid polluting the data set they’re all using. But if we stop using their plagiarism machine because it generates fat ugly women with cellulite and boys with acne, and switch to their competitor instead, then they’re in trouble. They don’t really have a strong incentive to avoid polluting their shared common data set with distorting mediocrity. Any attempt at avoiding the loss of diversity that leads to a loss of users or increases their costs (by training on more data, for example) will tend to put them at a competitive disadvantage.
In the meantime, AI mostly has the effect of making the Internet less useful. We use the Internet in an effort to find information produced by other humans. If predatory humans are able to use AI to pose as other people, the Internet becomes less useful. You won’t use a dating app if most of the profiles you talk to are AI. The main use of AI so far is to allow humans to pretend to have skills and qualities they don’t genuinely possess. People use it to pretend to write code, or submit AI-generated college essays. This means the value of a college degree as a signal of competence disappears. Social media networks like Twitter, Reddit and Facebook are effectively becoming useless due to AI. The introduction of AI to social media networks causes you to selectively lose your smartest users.
What the large language models being passed off as “artificial intelligence” actually do is pretty simple: they try to predict the statistically most likely next word. That’s why they can never really hold anything together for very long either; any video they generate eventually gets really weird after a while. You just end up with something that is unrealistically bland, at least most of the time.
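To make “predict the statistically most likely next word” concrete, here is a minimal sketch (a toy bigram counter made up for illustration, nowhere near how a production model is actually built): count which word follows which in some training text, then always emit the most likely next word.

```python
# Toy "predict the most likely next word" model (a bigram counter).
# A deliberately tiny illustration, not how real LLMs are built.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat chased the dog around the mat"
)

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

# Generate text by always picking the statistically most likely next word.
word = "the"
output = [word]
for _ in range(12):
    if not next_word_counts[word]:
        break
    word = next_word_counts[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))  # quickly settles into a repetitive loop
```

It settles into a repetitive loop within a few words, which is the toy version of the blandness problem: always picking the safest continuation gives you grammatical mush.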
And when it does decide not to be bland for once, or you try to get it not to be bland, that’s extremely dangerous, because there’s a high chance it ends up producing something inappropriate: when humans give an unconventional answer to a situation, the unconventional answer is often terribly wrong. When someone stands on the ledge of a building, 90% of people will try to talk them down, but 10% may shout “jump”. And you don’t want your chatbot to get that one wrong.
Which brings me to the final problem:
Accountability
Why do we have lawyers, when everyone can just Google everything? Why do we have doctors, when we can just Google stuff? Why do we have judges? Why do we have compliance departments at companies?
The answer to all these questions is accountability. When something goes wrong, we want to be able to explain why something went wrong. We want to be able to have an investigation, find out whether someone made a mistake or simply behaved unethically and then deal with that in whatever way we see fit. We want to have someone who can take responsibility.
Now here’s an example:

In case you think I made this up, the link to the conversation is here. This is real. Every once in a while the supersmart chatbots will just tell you to kill yourself and there’s nobody who can tell you exactly why.
Now the important thing is as follows: it’s not just a problem when the AI chatbot you use to replace real human employees tells your users/customers that they should kill themselves.
The bigger problem is that you have no answer when someone asks what went wrong.
You can’t say “that employee has been fired”, because the employee doesn’t exist.
You can’t say “our AI model misbehaved”, because you don’t know WHAT went wrong.
The AI model you use is probably closed source, so they never show you its inner workings. If it’s open source, it’s still an utter headache to figure out why it did what it did; it would be like hiring a psychiatrist to find out why your employee insulted a customer.
But as soon as you get an answer like this, the next question becomes: alright, so what about the other things your AI chatbot told your customers? OK, once in a while it tells your customers to kill themselves. They will want to know: when it tells them to install some software on their computer, when it gives them medical advice, or any other sort of advice, can you trust it?
If Bob was found telling customers to kill themselves, you can isolate the problem to Bob, and customers who didn’t talk to Bob can breathe a sigh of relief. But if you have an AI system that replaced 200 employees, you can’t isolate the problem. You can’t point a finger at anyone. Except, that is, the moron who decided to replace 200 employees with AI. Don’t be that moron.
I’ve been wondering about something.
They say autism is caused by vaccines (and sundry environmental toxins)
Do you think your autism was a result of vaccination as an infant?
If so, do you consider that a good thing or a bad thing?
Is there a part of you which wishes it were a normie (had you not been infected with autism), or do you wave your freak autism flag high and want to encourage autism in the up-and-coming generations?
What exactly are we all doing here?
From another angle:
Should questionable vaccines be banned so as to save us from the scourge of angry autist children?
From yet another angle:
Should weirdos rule, or be relegated to subtard status as relatively impotent subversives who poke pitchforks into the normies?
I’m
1. The autism caused by neurodivergence, where you are not a normie and genuinely think differently, is not the same as the autism caused by inflammatory brain damage from vaccines, where you just can’t think at all.
2. Vaccines are about profit for state-linked enterprises, and coercive control of the population, where parents are forced to humiliate themselves by pretending that being forced or tricked into injecting their most precious children with dangerous chemicals was a good thing.
A rule by the autists would be hell on earth.
>A rule by the autists would be hell on earth.
No it would be heaven.
You brownoid normies have no idea what’s coming. You have no idea what coming out of the Kali Yuga really means.
Brahmin always overtakes those that feast on the suffering of his sacrifices. You’ve all been eating Brahmin, you’ve all been feeding off the suffering of autists like Christine Weston Chandler while they’ve been building terrible horrors in their heads that you can’t even comprehend to torture you with.
Hell was designed by an autist. The Demiurge is the toy God.
You must repent and beg God’s chosen for forgiveness; Satan only tortures the virtuous in the Kali Yuga but the coming New Dawn will be of a different order…
Respect the autist and they will do the same. Beg Chris Chan for forgiveness furfag and you might get lucky.
https://www.reddit.com/r/OneyPlays/comments/ow2hm5/chris_chans_final_arc/#lightbox
I’m joking btw I’m not actually insane.
The real use case for AI is to quickly analyse large datasets and make inferences on patterns in the data.
However, any and all decisions made as a result need to be verified by a human. You just can’t trust that the AI won’t make some error (or take some intentionally malicious action) with devastating consequences.
Even in the simple operation of a temperature control loop (which is quite difficult to automate perfectly with a machine), there might be regions of control where the AI can cause catastrophic failure via seemingly innocuous actions.
Nvidia now has frame gen where the GPU renders a frame, and then the AI units predict the next 3 frames from it. Allegedly, this is computationally much cheaper than rendering all 4 frames. So that’s good, but this is a case where, if it goes wrong, the consequence is only some artifacting in a video game display. It’s not some teenager being told to mix vinegar and bleach to clean the toilet.
I think there is an evolutionary cost to being vulnerable to AI outputs, and normies will either have to kill AI, or people will evolve to become fundamental skeptics.
Only those who were on Usenet, godlikeproductions, 4chan and worse can sustain the state of permanent skepticism, while believing that perhaps two mutually exclusive positions might both be true.
Normies, in the sense of non-tinfoilers, can’t live like that. This leads, among other effects, to historical accounts and beliefs about history being almost black and white. It also leads to massive memory-holing when the contradictions become unbearable. Like 9/11. Apparently USA college grads are beginning to forget what happened around the date of 9/11.
Maybe AI will accelerate the collective repression because somebody plants Prolog/logical thinking into it.
>Today is My Birthday because I’m POOR!
Not gonna lie, made me giggle, it’s so stupid.
A very precise description, from what I can tell.
Perhaps AI could help with jobs that require humans to behave like robots (repetitive tasks, etc.).
I guess they would still need supervision.
My hope is that young and old people wouldn’t have to waste their time in jobs like paralegal, junior lawyer, and others.
Even judges. Of course, the criminal cases will still need a huge contribution from humans.
But the other cases seem like a good field for AI, e.g. the collection of data, the summaries...
You are right that GPTs are basically auto-completion on steroids, as our man Linus Torvalds put it.
Still, I wonder if working on next paragraph prediction could lead to some “understanding” of structure…
P.S. Please remove the link (or post) I gave on the other thread.