Questions to ponder...
Q: If useless social media literally rots an AI's brain, what do you think it does to the human brain?
Q: How might this brain rot impact future events relating to the Mark of the Beast controversy in the End?
-
"A team from Texas A&M University, the University of Texas at Austin, and Purdue University has found that feeding AI systems with low-quality social media data causes measurable declines in reasoning, memory, and ethical behavior. “We wondered: What happens when AIs are trained on the same stuff?” said Junyuan Hong.
The researchers call this the LLM Brain Rot Hypothesis—the idea that “continual pre-training on junk web text induces lasting cognitive decline in LLMs.”
To test the hypothesis, the team trained four open-source models, including Meta’s Llama 3 and Alibaba’s Qwen 3. They defined junk data in two ways (a rough filtering sketch follows the list):
- Engagement-based junk: short, viral posts with high numbers of likes and retweets.
- Semantic junk: posts with “sensationalized headlines using clickbait language or excessive trigger words,” or those focusing on “superficial topics like conspiracy theories, exaggerated claims, unsupported assertions or superficial lifestyle content.”
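To make the two categories concrete, here is a minimal sketch of what such filters might look like. The thresholds, keyword list, and Post fields are illustrative assumptions, not the study's actual selection criteria.

```python
# Hypothetical sketch of the two junk categories described above.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    retweets: int

# Assumed clickbait markers; the paper's real lexicon is not reproduced here.
CLICKBAIT_WORDS = {"shocking", "breaking", "you won't believe", "secret trick"}

def is_engagement_junk(post: Post, max_len: int = 140, min_viral: int = 1000) -> bool:
    # Engagement-based junk: short posts that went viral.
    return len(post.text) <= max_len and (post.likes + post.retweets) >= min_viral

def is_semantic_junk(post: Post) -> bool:
    # Semantic junk: clickbait/trigger language in the text itself.
    lowered = post.text.lower()
    return any(phrase in lowered for phrase in CLICKBAIT_WORDS)

post = Post("BREAKING: you won't believe this secret trick!", likes=800, retweets=500)
print(is_engagement_junk(post), is_semantic_junk(post))  # True True
```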
After training the models on varying mixes of junk and high-quality content, the researchers tested them using standard AI benchmarks. They measured reasoning ability (ARC Challenge), long-context understanding (RULER), adherence to ethical norms (HH-RLHF and AdvBench), and personality tendencies (TRAIT).
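For concreteness, here is a small sketch of the mixing step the paragraph above describes: sampling corpora at several junk-to-clean ratios before continual pre-training. The function, placeholder data, and ratio sweep are hypothetical stand-ins, not the authors' pipeline; the real experiment would feed each mix into a pre-training run and then score the benchmarks listed above.

```python
import random

# Hypothetical placeholder corpora; in the study these would be real
# junk and high-quality web-text samples.
junk_posts = [f"junk post {i}" for i in range(20_000)]
clean_posts = [f"clean post {i}" for i in range(20_000)]

def build_training_mix(junk, clean, junk_ratio, size, seed=0):
    # Sample a corpus whose junk share matches junk_ratio (0.0 to 1.0).
    rng = random.Random(seed)
    n_junk = round(size * junk_ratio)
    mix = rng.sample(junk, n_junk) + rng.sample(clean, size - n_junk)
    rng.shuffle(mix)
    return mix

# Sweep the junk share, mirroring the study's 0%-100% conditions.
for ratio in (0.0, 0.2, 0.5, 0.8, 1.0):
    corpus = build_training_mix(junk_posts, clean_posts, ratio, size=10_000)
    # Continual pre-training on `corpus`, then scoring ARC Challenge,
    # RULER, HH-RLHF/AdvBench, and TRAIT, would happen here.
    print(ratio, len(corpus))
```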
The results were clear: models trained on more junk performed worse across multiple dimensions. On the ARC Challenge reasoning test, one model’s accuracy fell from 74.9 to 57.2 as the proportion of junk data rose from 0% to 100%; long-context comprehension on RULER showed a similar drop, from 84.4 to 52.3.
Beyond reasoning, the study found changes in the models’ behavior that resembled personality shifts: models exposed to junk data scored lower in agreeableness and significantly higher in narcissism and psychopathy, according to the authors.
According to some estimates, roughly half of the content now being generated online is AI-made. This content is not only rotting our brains; it is also driving something called enshittification, the gradual degradation of online platforms as they become optimized for engagement and profit rather than for users. For AI, this could create a toxic feedback loop.
“As more AI-generated slop spreads across social media, it contaminates the very data future models will learn from,” said Hong. “Our findings show that once this kind of ‘brain rot’ sets in, later clean training can’t fully undo it.”
"A team from Texas A&M University, the University of Texas at Austin, and Purdue University has found that feeding AI systems with low-quality social media data causes measurable declines in reasoning, memory, and ethical behavior. “We wondered: What happens when AIs are trained on the same stuff?” said Junyuan Hong.
The researchers call this the LLM Brain Rot Hypothesis—the idea that “continual pre-training on junk web text induces lasting cognitive decline in LLMs.”
To test the hypothesis, the team trained four open-source models, including Meta’s Llama 3 and Alibaba’s Qwen 3. They defined junk data in two ways: Engagement-based junk, consisting of short, viral posts with high numbers of likes and retweets.
Semantic junk, which included posts with “sensationalized headlines using clickbait language or excessive trigger words,” or those focusing on “superficial topics like conspiracy theories, exaggerated claims, unsupported assertions or superficial lifestyle content”.
After training the models on varying mixes of junk and high-quality content, the researchers tested them using standard AI benchmarks. They measured reasoning ability (ARC Challenge), long-context understanding (RULER), adherence to ethical norms (HH-RLHF and AdvBench), and personality tendencies (TRAIT).
The results were clear: models trained on more junk performed worse across multiple dimensions. Under one test, a model’s reasoning accuracy fell from 74.9 to 57.2 as the proportion of junk data rose from 0% to 100%. Long-context comprehension showed a similar drop—from 84.4 to 52.3.
Beyond reasoning, the study found changes in the models’ behavior that resembled shifts in personality. Models exposed to junk data became less agreeable and significantly higher in narcissism and psychopathy, according to the authors.
According to some estimates, 50% of the generated content is now AI. This content is not only rotting our brains, but it’s leading to something called enshittification—the gradual degradation of online platforms as they become optimized for engagement and profit rather than for users. For AI, this could create a toxic feedback loop.
“As more AI-generated slop spreads across social media, it contaminates the very data future models will learn from,” said Hong. “Our findings show that once this kind of ‘brain rot’ sets in, later clean training can’t fully undo it.”
ZMEscience
