Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this. Also, hope you had a wonderful Valentine’s Day!)
saw a family member today for the first time in three years. they immediately told me “with your background bro you should just go work in AI and get super rich.”
told them that the ai shit doesn’t work and that everything involving LLMs is downright unethical. they respond
“i had a boss that gave me the best advice: you can either be right or you can be rich.”
recently, i saw someone use the phrase “got my bag nihilism” and i feel it really captures the moment. i just don’t understand how people can engage in this kind of behavior and even live with themselves, let alone ooze pride. it’s repulsive.
(family member later outright admitted that his job is basically selling things to companies that they don’t need.)
I unfortunately do understand. I think there are severe tradeoffs between living a good life and living a virtuous life. Most people compromise to a lesser or greater degree and find ways to cope with that. Nihilism is one way.
To be fair it is really, really mentally taxing to be a young person who cares. You’re surrounded by a world that doesn’t. Everything is constructed to reward you if you simply stop. The effort to care is immense and the rewards are meager. The impact you can have on the world is so, so limited by your wealth, and wealth comes so, so easy if you just stop caring.
But you can’t. I mean, you can’t. If you stopped you wouldn’t be you anymore, it would destroy your soul. But it is gnawing. You could do the grift just for a bit. Save up $10k, maybe $20k. That’s life-changing money. How much good would it do to your family? Maybe you can forget that there are other families, ones you can’t see, that would be hurt. Well no. You can’t. You are better than that. And for that you will suffer.

i don’t think of myself as a young person (i’m closer to 40 than 30), but i agree with the sentiment. i often worry that it’s just don quixote energy and the windmills aren’t going to thank me when i’m in the ground with work experience that employers look at and scoff. 🤷
A worldview where one’s worth is measured by the balance in their bank account makes it really easy to flatten out morality.
Do you want Tylers Durden? Because this is how you get Tylers Durden.
OpenSlopware documents FOSS that sold out to LLMs. Is there an opposite of it, a hall of fame listing software that has unambiguously and vocally rejected LLM code, like the Zig programming language?
Quick TL;DR of my Discord Age Verification Experience™:
Using my face multiple times didn’t work due to the AV shitting itself inside out, but setting my DOB via Family Center somehow did it
Absolute fucking clown fiesta, Jesus Christ
How AI slop is causing a crisis in computer science | Nature h/t naked capitalism
One reason for the boom is that LLM adoption has increased researcher productivity, by as much as 89.3%, according to research published in Science in December.
Let’s not call it “productivity” - to quote Bergstrom, twice as many papers is not the same as twice as much science.
There was an underlying tension with an academia, and a society, that takes “productivity” by itself as an end goal, and the autogenerators are just the logical conclusion/extreme form of that. The tiny part of me that can still be optimistic hopes that this leads to a real good reexamination of what academia (and society) is even for.
Goodhart’s law in action.
Since the advent of ChatGPT in November 2022, the number of monthly submissions to the arXiv preprint repository has risen by more than 50% and the number of articles rejected each month has risen fivefold to more than 2,400 (see ‘Rejection rates climb’).
If I’m interpreting this right, the growth in the number of rejections is wildly outpacing the growth in submissions, which means not only are we getting a tsunami of slop but the bad papers are actively chasing away good ones.
Also your paper has to be truly irredeemable dogshit to get rejected from arxiv. Like you can post proofs of P=NP as long as they sound kinda coherent. 2400 monthly rejections is absurd.
WD and Seagate confirm: Hard drives for 2026 sold out (because the AI datacentres have stolen them all)
idk if the bubble will pop or slowly deflate, but i’m certain that in 10 years we’ll look back at the 2020s as the decade where tech stopped progressing in the way we know it - since we’re diverting all our resources to ai, there’s no longer any room left for anything else to grow
the 2010s crypto gpu shortage was the warning siren for this. it really hampered the growth of gpus because they permanently became so much more expensive - now the same is happening to memory, storage, and…well, gpus again! we’ve reached the point of reverse progress
2020s as the decade where tech stopped progressing in the way we know it
I mean, sure, but I think the underlying cause here is the end of Moore’s law and of the exponential growth of potential userbases as the world becomes fully connected. The Enshittocene can be viewed as a consequence of capital’s attempts to continue exponential growth while the fundamentals are no longer capable of sustaining it.
Indeed, fifteen years ago, Thailand had a horrific monsoon-induced flood displacing millions of people, and:
Thailand is the world’s second-largest producer of hard disk drives, supplying approximately 25 percent of the world’s production. Many of the factories that made hard disk drives were flooded, including Western Digital’s, leading some industry analysts to predict future worldwide shortages of hard disk drives. … As a result, most hard disk drive prices almost doubled globally, which took approximately two years to recover. Due to the price increase, Western Digital reported a 31 percent revenue increase and a more than doubled profit for fiscal year 2012.
As you say, we are no longer earning Moore’s Dividend and there is no longer opportunity in laying dark fiber for somebody else to rent or offering Facebook-only phones to reach the next billion users.
nasb, video from a climate scientist going over the claims by promptfondler ceos
womp, and wait for it, womp
This was not such an effective venture.
Rip the stately home.
I mean it’s presumably still standing, just with a slightly cheaper set of owners ;)
Good news, everyone’s favorite emacs is using AI now: https://www.vim.org/vim-9.2-released.php
@antifuchs s/is using AI/is being used for AI/g
There’s a big difference!

can somebody tell me whether neovim is selling out to LLMs or led by transphobes or getting money from Anduril or something, so I can decide whether to switch before getting disappointed again
somebody did show me neovim’s AI-assisted PR tag. nice, I can get disappointed all in one go instead of once per day
I poked around to see how far gone my main text editor is. They’re not about to join the Butlerian Jihad, but I think I can live with it.
We don’t care how you wrote the code, but we do care that you fully(!) understand it and how it solves the underlying issue. LLM coding assistants can help with tedious routine and investigation (such as constructing test cases), but they are not a replacement for understanding the problem as well as the code you touch. (Nvim’s codebase is full of… let’s say “history”, and generic models tend to do quite poorly here.)
What is not OK is to copy-paste responses from the LLM as your comments. We don’t want to play a game of telephone with the LLM (if it was smart enough to solve the problem, we would be doing that ourselves).
Except in special circumstances (and with explicit notes), all your comments and descriptions must be written by you yourself. (Use a translation tool if you must, but don’t let someone else put words in your mouth.)
Contributor actually was bullied into closing his PR, but maintainers reopened and merged it, as the change was fine apparently. Lol
disapointamaxxing
@antifuchs @techtakes Oh goodie they enshittified vim IS NOTHING SACRED?!? HAVE WE LIVED AND FOUGHT IN VAIN?!?!?
It looks more like they used an LLM (Copilot) to construct programs using the new features in the language.
So more “hop on the bandwagon using LLMs to advertise” rather than “we used LLMs to develop”
OTOH “these new features are so advanced you need AI to use them” is a bit of a weird sell
it has to be said, a runtime CVE in vim would be pretty embarrassing
AI Jobs Apocalypse is Here | UnHerd h/t naked capitalism
feels a bit critihype, idk
So, what happens to American politics when the script is flipped, and we enter a new era of white-collar precarity? We can look back to the recent past and recall that, after the 2008 recession, it was young men who got especially angry. Downwardly mobile urban millennials drifted toward radical Left-wing politics, including the Occupy Wall Street movement and both Sanders campaigns, myself included. In the current decade, the Gen-Z men shut out by elite institutions often join their grandfathers and turn toward MAGA, or worse, into Groypers. But an AI-driven white-collar apocalypse has no equivalent of the American Rescue Plan around the corner, and it will move faster through institutions because the people experiencing it — journalists, lawyers, policy staffers — are the ones who produce political legitimacy itself. When that class loses faith in the system’s stability, the political climate may quickly become volatile.
As I get older I am more and more disturbed by the selective memory of the GFC; no mention of the tea party or the fallout from the austerity measures they pushed in the middle of the country; no mention of how the bailout saved banks, not homes. The Tea Party won, not Occupy, and the current government is doing things beyond the Kochs’ wildest dreams.
If and when there is a crash, these dumbass CEOs deserve /nothing/. Let them lose their vacation houses. And, maybe grow some balls and send the fraudsters to jail where they belong.
sigh
the Gen-Z men shut out by elite institutions often join their grandfathers and turn toward MAGA, or worse, into Groypers.
Iirc that is not as often true as people claim it is. But yeah, not gonna click unherd to see if they have a source. Because blergh unherd.
unherd is a fash publication. to me this comes across as an AI take-ified rewrite of a 1994 luttwak essay i read recently, an endorsement of a revival of italian style fascism: https://www.lrb.co.uk/the-paper/v16/n07/edward-luttwak/why-fascism-is-the-wave-of-the-future
One ray of sunshine: when the bubble pops, there will be quite a few “billionaires” who will become mere millionaires, and they will find that their friends in government and the media will suddenly be much less interested in being around them at all.
Most expensive number 2 generator
Hey, at least it’s efficiently making number 2 on the side while spitting out user-prompted number 2s.
Apparently this sort of machine learning training pitfall I learned about a decade ago in an undergraduate-level class that I was like halfway paying attention to at a party school is now evidence of the impending AI apocalypse.
Wow, that highlighting really emphasises the insidious, nefarious behaviour. This is only a hop, skip, and jump away from, what was it again? Rhomboid? Rheumatoid bactothefuture?
Rhomboid? Rheumatoid bactothefuture?
Doc Brown couldn’t get optimal flux dispersal across the surface of the time machine without the heavy biofilm coating. It’s not a fetish thing, people! Stop saying that!
diamondoid. None of the derivatives I can come up with sound anywhere near as dumb as the actual word.
Derpadoid Burpateria
rolls 1, 1
I behold the paperclip calculator, and tremble in fear
New “AI is not a bubble” video just dropped: https://youtu.be/wDBy2bUICQY
A lot of skeptical comments are pointing out the flaws in this argument while the creator tries to defend themselves with mostly mediocre lines
Chatbots are a cognitive hazard, part infinity: AI Delusions Are Leading to Domestic Abuse, Harassment, and Stalking
It does seem more and more like the most relevant parallel is radicalization, particularly the concerns about algorithmic radicalization and stochastic terrorism we got back in the early 2010s. The machine system feeds the user back what they’ve put into it, validating that input and pushing the user into more extreme positions. When it happens through a community (“classical” radicalization) the fact that the community needs to persist serves to mediate or at least slow the destructive elements of the spiral. Your Nazi book club/street gang stops meeting if people go to prison, lose their jobs/homes, etc. Online communities reduce this friction and allow the spiral to accelerate to a great degree, but the group can still start eating itself if it accepts the wrong level of unhingedness and toxicity.
Algorithmic/stochastic radicalization, where the user moves through a succession of media environments and (usually online) communities, can allow things to accelerate even more because the user no longer actually has to maintain long-term social ties to remain engaged in the spiral. Rather than increasingly-destructive ideas echoing around a social space, the user can chase them across communities, with naive content algorithms providing a solid nudge in the right direction (pun wholly intended). However, the spiral is still dependent on the ability of the relevant media figures and communities to persist, even if the individual users no longer need a persistent connection to them. If the market doesn’t have space for a creator then their role in that network drops. Getting violent or destructive content deplatformed also helps slow down the spiral by adding friction back into the process of jumping to the next level of radicalism. Past a certain point you find yourself back in the world of needing to maintain a community because the ideology has gotten so rotten that there’s no profit in entertaining it. Past that you end up back with in-person or otherwise high-friction, high-trust groups because the openness of a low-friction online community compromises internal security in ways that can’t be allowed when you’re literally doing crimes.
Chatbot-induced radicalization combines the extreme low friction of online interactions with extremely high-value validation and a complete lack of social restrictions. You don’t have to retain a baseline connection to reality to maintain a relationship with a chatbot. You don’t have to make connections and put in the work to find a chatbot to validate your worst impulses the same way that you do to join a militia. Your central cause doesn’t have to be something to motivate anyone outside yourself. Your local KKK chapter probably has more on its agenda than hating your ex-wife (not that it doesn’t make the list, of course), but your chatbot instance will happily give you an even stronger echo chamber no matter how narrow the focus. And unlike the stigma associated with the kinds of hate groups and cults that would normally fill this role for people, the entire weight of the trillion-dollar tech industry seems to be invested in promoting these chatbots as reliable and trustworthy – even more so than the experts and institutions that are supposed to provide an anchor to counter this kind of descent. That’s the most dangerous part of our Very Good Friends’ projects on the matter. That’s how you get relatively normal people to act like they’re talking to God and He’s telling them everything they don’t want to admit they want to hear.
Semi-OT but a blog post where I’m just kinda gawking at the technology that saved my daughter’s life and the absurdity of comparing it to what now first comes to mind when we talk of “tech”.
Beautiful. As a dad, thank you for sharing!