Discussion about this post

Robert McKay

After I realized I had thoughts about this post that would need articulating, and before I forced myself to actually articulate them, I opened two suggested posts in other tabs. The urge to endlessly consume content that stimulates the prefrontal cortex without engaging it in effort is ubiquitous.

So what are my thoughts? I won't really know until I finish writing them down here. I felt there were a number of half-thought thoughts in the post, loose threads, holes...

I am less sanguine than you about people's taste causing them to "leave creativity to the professionals." TikTok and the rise of fanfic should refute this idea: as in so many domains, AI just amplifies and reflects back to us the pathologies of the pre-LLM Internet. Do TikTok and fanfic spawn outlier creative gems? Of course. In fact, because they are ungated human fora, they inject occasional doses of the new into a sclerotic, enshittified, and eminently sloppy "professional" media. The same goes for Substack, of course: full of slop (with and without LLMs), but also full of outlier gems you will never, by definition, find in the New York Times, where the filtering mechanisms render everything squeezed through them down to, well, slop. Same with Harvard, Wharton, or any other institution. Not the ponds to go hunting Black Swans in.

Writing plausibly correct sentences, video clips, and lines of code is not writing, filmmaking, or software engineering. This has become a national pursuit in the insular nation formerly known as the Twitterati (we need a new, Substack-branded name for this, clearly): trying to pin down what it is about human creation that models suck at. "Process knowledge" is one Dan Wang- / Packy McCormick-inspired answer, which I like, partly because it only seems to be a good answer on a test; actually it's a compressed gesture which, when unpacked, reveals the very thick, messy, rendering-resistant stuff of human production, which always involves violent, sustained contact with the world and other humans. There are no manuals or rubrics for this stuff.

So why can't LLMs render it out? They're trained literally on rubrics: their mindless Bayesian guessing tuned by underpaid, overworked humans on today's brutal assembly lines, churning out the apotheosis of "alignment," or what used to be called "adjustment" in human psychology: the industrial rendering of humans to conform them to molds. Soylent Green, like AI slop, is people. The void stares back.

