Ryan Broderick is currently my fav reporter on the Internet culture beat, and on this podcast he’s talking about Epstein and why he was interested in putting his resources into developing the early internet (spoiler: for crimes).
Tech
Ruling: AI art is not copyrightable.
Apparently one thing LLMs excel at is deanonymization at scale. The original promise of pseudonymity online was social and normative, over and above any question of technical depth: decent people don’t try to unmask you, because why. What strikes me today is how what used to be unacceptably antisocial behavior online is now both automated and unremarkable.
Over the last couple of weeks, I asked a couple of chatbots what could be known about me from this pseudonymous site, where I am more intentional about what I choose to reveal and conceal. They pulled the obvious but also drew conclusions from a few geographic details I’d mentioned in passing that were both revealing and correct. I also noticed that they only drew from the top two pages of information - anything beyond page two of posts wasn’t part of the compute. Archives are for humans?
People assume that there is some computer magic on the backend where the LLMs connect all your account logins behind the scenes, but no, in fact they do all this through inference, by linking your digital trail, your friends, your breadcrumbs of likes and hearts and follows, and obvs your posts, into a picture of who you are, practically and demographically.
Developing a new pet theory that you could crank AI outputs toward your vision by having an organized link blog revolution.
The Guardian has a long-running series where readers answer one another’s questions, which gives a pretty good point-in-time survey of how people of a certain demographic are feeling on a given subject. This week’s question is on AI futures: “what would happen to the world if computer said yes?”
Reading about a new “slow” RSS app called Current.
Some analysis on Anthropic and the Pentagon. Per @manton, it feels a little bit like the company isn’t fully on board with the product it produces. I find it interesting that the feds appear to be toying with the idea of nationalizing AI companies alongside other nationalization efforts.
Like Rickrolling on steroids: I didn’t even know P2P music file downloads were still a thing, but of course they are, and the scene had a big week recently when somebody trolled Soulseek by generating a bunch of popular songs sung in Homer Simpson’s voice and quietly offering the AI-generated Homer files up to other users. Someone then set up a digital radio station of all the Homer Simpson songs called D’oh Radio. Here’s more reporting from Gizmodo.
There is a long tradition of Simpsonifying things that go online, so this is the latest iteration. We observed Homer covers going viral on social in 2024 and 2025.
A new report shows that “just 100 users were responsible for almost 70 per cent of online conspiracy posts from influential accounts they examined in Canada.” Relatedly, a White House staffer is reportedly behind a massive Trump account on Twitter. This practice, called astroturfing, is a common information strategy among bad actors.
A guy put his life savings up against DOGE and won big on the new, anarchic betting markets online. This line hooked me: “Crucially, he’s been around long enough to see promises collide into reality and to know basic federal-budget math.”
Second-wave feminist critics of tech and engineering talked a lot about the “god trick” of presenting knowledge and information, particularly around math and science, in a way that suggests its objectivity is eternal, immortal, unknowable. Work by Safiya Umoja Noble and others extended this lens to Internet search and architecture, finding that the search algorithm was never neutral; instead, it was a series of business decisions wearing neutrality like a costume, creating a customer service experience. LLMs take that same trick and compress it further.
It’s an old idea, and one I’ve been drawing from while I tinker with Claude, which is purportedly the best in the game. The “god trick” is baked right into the AI interface: one input, one output, an authoritative-seeming answer, offered without named perspectives behind it, trained on text produced overwhelmingly by a narrow demographic that has historically had access to both literacy and publishing, and by programmers and new media drawing from the same well. Smushed together, it gives the impression that consensus exists where there are in fact many, many loose ends.
I increasingly find it annoying that even “good” AI outputs seem fixed on phrases like “key,” “core,” “exist,” “actually,” “never,” and possibly the worst sentence structure of all time, “it’s not X, it’s Y” — and I’ve begun to recognize how LLMs work like autocorrect for phrases and ideas, drawing from ranked search sources first before fanning out to more obscure sources, trying to determine and assert what’s important to me, a user known by demographics and data. It feels like a big linguistics machine, which is pretty cool in some regards, but also aggressively semantic. The math doesn’t always work to connect me to what I want to find because I am situated in my individual context in ways LLMs are not able to understand, with my memory, in my body, with my unique experiences, which shape and translate meaning for me as I interact with the world (and the web).
And so for you, in your body and memory and experience. An LLM can approximate the outputs of an experience without having access to the experience itself. Sometimes this is useful, sometimes it’s reckless.
Overall the dynamic reminds me of the famous scene from Good Will Hunting: Claude is a smart kid, and he’s never been outta Boston.
An open letter from employees of Google and OpenAI in support of Anthropic, against the DoD: notdivided.org
McKenzie Wark for Verso Books on Donna Haraway, in 2015: “Creating any kind of knowledge and power in and against something as pervasive and effective as the world built by postwar techno-science is a difficult task. It may seem easier simply to vacate the field, to try to turn back the clock, or appeal to something outside of it. But this would be to remain stuck in the stage of romantic refusal. Just as Marx fused the romantic fiction that another world was possible with a resolve to understand from the inside the powers of capital itself, so too Haraway begins what can only be a collaborative project for a new international. One not just of laboring men, but of all the stuttering cyborgs stuck in reified relations not of their making.”
New research shows that social media advertising suppresses voting in targeted communities, and is the first to quantify the effect of this kind of microtargeting on voter turnout.
Sharing Digital Animal, a new, curated playlist of songs about all the angst and joys of living with modern technology.
Wednesday, February 25, 2026 →
I’ve been spending more time in tech spaces online and getting good information from folks like @manton, creator of Micro.blog. Like this reflection on how to think about AI now that vibe coding works. Something I’m thinking about: there’s an emerging tension between those who see value in being able to immediately prototype an idea and the people downstream who have to manage the outputs/code over time. The ability to prototype every idea sounds like a superpower until you’re the one driving and maintaining the results.
Wednesday, February 25, 2026 →
Making the rounds on Twitter today, an AI programmer pulled together a dashboard comparing how LLMs respond when given a series of nonsense prompts. There is a notable difference in quality even between Claude’s Opus and Sonnet, in my experience. The lesser GPTs will take a wobbly claim as truth and run with it.
Some additional discussion of LLM release models, including open weight and “staggered openness,” where orgs “release previous versions of proprietary models once a successor is launched, providing limited insight into the architecture while restricting access to the most current innovations.”
“Open source,” “open weight,” and “proprietary” describe different relationships between LLM model producers and users, governing what you can see, modify and control. Comparing them isn’t necessarily about which is “best,” but about whether you’re optimizing for transparency, compliance or performance. Massive investment in proprietary models means the best-resourced research teams, the largest training runs, and the most sophisticated safety work tend to happen behind closed doors.
A new report from Pew on how teens view and use AI says that more than half of teens turn to chatbots for help with school, and some are turning to AI for emotional support and companionship.
“A brief history on writing, a forgotten technology: how the challenges we face in today’s digital age can be traced back to one of the most significant yet underrated innovations of all time.”
“Digital silence” is a new ethic emerging on the visual web, where travelers are no longer posting about their travels. Reading along, thinking of how I rarely post about family out of respect for their privacy, and recently took several road trips without a single post.
Closer to the machine
Monday, February 2, 2026
There is something about the AI moment that reminds me a lot of when the internet was new. A lot of what was imagined and promised about the internet was never realized. But much was.
I’ve been reading Ellen Ullman’s memoirs - “Life in Code” and “Close to the Machine” - and her observations about proximity to technology feel relevant here. Being close to the machine means understanding its actual capabilities and limitations apart from the prevailing sales narratives. It also means a kind of loneliness, because you are working in a space that others don’t yet see clearly or fully understand.
I suspect people thinking seriously about AI right now will experience something similar: a stretch of hostility and discomfort while the rest of the world catches up and the consumer market level-sets on the promises being made. In the interim, the hype will not match the reality, and the reality will sometimes exceed the hype in ways no one predicted. And for a while, how it works and why it matters won’t be legible to everyone at once.
Crunching for clarity
Friday, January 30, 2026
In 1999, academic and theorist Judith Butler famously won an award for the worst academic sentence, raising good questions about how we read difficult texts, who gets to access academic ideas, and the role of academic and plain language in and around the academy:
The move from a structuralist account in which capital is understood to structure social relations in relatively homologous ways to a view of hegemony in which power relations are subject to repetition, convergence, and rearticulation brought the question of temporality into the thinking of structure, and marked a shift from a form of Althusserian theory that takes structural totalities as theoretical objects to one in which the insights into the contingent possibility of structure inaugurate a renewed conception of hegemony as bound up with the contingent sites and strategies of the rearticulation of power.
This was my era in college, and I remember feeling challenged and overwhelmed as an undergrad trying to read theory for the first time. So today, I asked Claude.ai to put it in plain language, just to see what would happen. Here’s the after:
Scholars used to think that capitalism shaped society in consistent, predictable patterns. But newer thinking focuses on how power works—how it repeats, overlaps, and gets reshaped over time. This brought time into the picture and moved away from older theories that treated social structures as fixed. Instead, recognizing that structures can change opened up a new way of understanding power: it's tied to specific moments and strategies, and it's always being renegotiated.
The real problem is that it's not our quagmire
Friday, January 23, 2026
TikTok is not much better or worse than other major social platforms, I say. The primary arguments against TikTok, including data collection, algorithmic manipulation, potential foreign government access, addiction and influence on public opinion, apply with equal or greater force to American platforms. Meta has faced billions in fines for allowing privacy violations, enabled documented election interference, and its algorithms have been linked to mental health harms and the amplification of extremist content globally, including perpetuating a genocide in Myanmar. Google and other domestic platforms vacuum up vastly more user data with fewer restrictions.
The distinguishing factor isn’t the behavior but the ownership: TikTok’s parent company ByteDance is subject to Chinese law and intelligence relationships, while Meta and Google are subject to U.S. law and intelligence relationships. That’s a legitimate policy distinction, but rarely articulated honestly. Instead, the debate has been framed around purportedly unacceptable harms that American tech companies perpetrate routinely, creating a kind of security theater that lets domestic platforms escape equivalent scrutiny while positioning a foreign competitor for a forced sale or ban.
Affinity as an organizing principle
Friday, January 16, 2026
Reading this blog post by a political scientist explaining the problem with our fractured information landscape, and how calls for more information and media literacy are not likely solutions:
“In short, decades of research have demonstrated that our political beliefs and behavior are thoroughly motivated and mediated by our social identities: i.e., the many cross-cutting social groupings we feel affinity with. And as long as we do not account for this profound and pervasive dependence, our attempts to address the epistemic failures threatening contemporary democracies will inevitably fall short. More than any particular institutional, technological, or educational reform, promoting a healthier democracy requires reshaping the social identity landscape that ultimately anchors other democratic pathologies.”
As always, this drives me back to Haraway’s cyborg, a useful metaphor for thinking about our political, environmental and social tangle and how it butts up against emerging tech and science. (In Haraway’s context, it was the rise of STEM as a driving force in academia at the dawn of the computer age.) Bagg’s argument lands in familiar territory for anyone who’s wrestled with the cyborg metaphor. Both reject the assumption that better information alone will save us from ourselves, whether from context collapse or the dualisms (binaries, heh) that structure how we think about technology, nature, humanity and politics.
Bagg arrives at something parallel from political science: We trust information that affirms the groups we belong to. (Business and marketing, for what it’s worth, tell us the same thing from a slightly different angle: you’re most likely to convert on a recommendation from a trusted friend. The next best thing in our current media landscape: a trusted influencer you identify with, which is why TikTok increasingly feels like QVC.) The problem isn’t that people lack access to truth, it’s that they’ve lost affinity with the experts, institutions and collaborative practices that produce expertise.
Both perspectives point toward the same conclusion: you have to recognize shared affinities through the slow work of creating conditions where people want to trust each other across differences.
The trust gap
Friday, January 9, 2026
I suspect these three trends are connected: Women reportedly use AI at significantly lower rates than men—25 percent lower on average—in part because they’re more concerned about ethics, including privacy, consent and intellectual property. At the same time, countries with more positive social media experiences tend to be more open to AI, while Americans’ distrust is shaped by years of watching tech platforms erode trust. Meanwhile, one of the largest social platforms has turned its AI chatbot into a harassment tool—generating roughly one nonconsensual sexualized deepfake image per minute, disproportionately targeting women and girls.
When platforms enable abuse at scale, it makes sense that people most likely to be harmed would be most attuned to ethical concerns, and would thus be the most cautious about AI adoption.
A meta lesson about AI assistance
Sunday, December 7, 2025
I just completed my first attempt at coding using AI, in this case having Claude assist me with putting together a simple client-side OPML parser using Dave Winer’s FeedLand service.
Winer’s original script is pretty slick, and includes a list of all my feeds with titles, URLs, and categories; click-to-expand functionality to see the 5 most recent posts from each feed; clickable post titles that open articles in new tabs; sort options (by title or by update); and automatic updates when I change my FeedLand subscriptions.
You can check it out here: Feeds
The official documentation method didn’t initially work because Hugo (the blogging software behind micro.blog) was wrapping client-side templates around the script. The toolkit requires server-side dependencies that don’t exist on static sites like micro.blog, and we hit a cascade of missing JavaScript dependencies (jsonStringify, servercall, etc.). Each fix revealed another dependency, leading to some “sunk cost” frustrations for me. I kept trying because I wanted to see if Claude could pull it together. Through trial and error, I got to a point where the OPML file was rendered correctly without server dependencies or complex external libraries.
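For the curious, the core of what we ended up with looks something like this - a minimal sketch, not the actual code from my project, assuming the OPML sits at a known URL and using only plain JavaScript with no server dependencies. The function name and attribute handling here are illustrative; FeedLand’s real OPML carries the standard `text` and `xmlUrl` outline attributes.

```javascript
// Minimal client-side OPML parsing sketch (illustrative, not the exact
// code Claude and I landed on). Given the raw OPML text, it pulls out
// each feed's title and URL without any server-side help.
function parseOpml(opmlText) {
  const feeds = [];
  // Match every <outline ...> element, capturing its attribute string.
  const outlineRe = /<outline\b([^>]*)>/g;
  let m;
  while ((m = outlineRe.exec(opmlText)) !== null) {
    // Pull name="value" pairs out of the attribute string.
    const attrRe = /(\w+)="([^"]*)"/g;
    const attrs = {};
    let a;
    while ((a = attrRe.exec(m[1])) !== null) attrs[a[1]] = a[2];
    // Only outlines with an xmlUrl point at an actual feed;
    // folder outlines without one are skipped.
    if (attrs.xmlUrl) {
      feeds.push({ title: attrs.text || attrs.title || "", url: attrs.xmlUrl });
    }
  }
  return feeds;
}
```

In the browser you’d `fetch()` the OPML URL, pass the response text through `parseOpml`, and render the resulting list into the page - which is roughly the shape of the “no server dependencies” solution described above.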
Time invested: ~3 hours (including wrong turns)
Time it should take: 10 minutes
AI extended my code reach well beyond my practical skillset. I now have a dynamic, dedicated place to read and share news feeds as I wish.

Still, even when generative AI works and works well, I have significant concerns about its intellectual property implications, and this project brought those tensions into sharp focus. The AI could only help me because it was trained on documentation and intellectual work from the open source community - contributions made freely in the spirit of knowledge sharing, not to train commercial AI systems. I tapped into their expertise by paying Anthropic $15 a month. While I’m grateful for the accessibility this provides to non-developers like me, I recognize there’s an unresolved ethical question about whether this use respects the intent and labor of the original creators. The feat is incredible; the foundation it’s built on deserves careful consideration.
After the exercise was complete, I asked Claude how I could have improved my prompting to make this process easier, and in short, Claude said I could have been a web developer. But since I’m not, here’s what it recommended:
✅ When the process isn’t working, question the process mid-stream. Most people either give up or keep following bad advice deeper into rabbit holes. Stop and question the LLM’s process and ask for alternatives to force a reset.
✅ Push for usability. Keep bringing the conversation back to what you actually need the end result to do, not what’s technically impressive or “correct.” In my case, this meant repeatedly asking “can I click through to the articles?” rather than getting lost in discussions about CORS proxies or JavaScript syntax. Focus on outcomes, not implementation details.
✅ Ask for complete solutions. Instead of trying to mentally patch together incremental changes across multiple responses, ask the LLM to provide fresh, complete code each time. This prevents copy-paste errors and ensures you’re always working with a coherent, tested solution. There’s more than one way to crack an egg, but you want the whole egg regardless.
After all that, I got it to work but can’t figure out how to make it show up in my header menu, with or without Claude. TBD.
Rhetoric of intertextuality
Sunday, November 9, 2025
“… every text is connected to other texts by citations, quotations, allusions, borrowings, adaptations, appropriations, parody, pastiche, imitation, and the like. Every text is in a dialogical relationship with other texts. In sum, intertextuality describes the relationships that exist between and among texts. What follows is a discussion of the strategies of intertextuality.”