Meta

Developing a new pet theory: that you could, theoretically, crank AI outputs toward your vision through an organized link-blog revolution.

Second-wave feminist critics of science and technology talked a lot about the “god trick”: presenting knowledge and information, particularly around math and science, in a way that suggests its objectivity is eternal, immortal, unknowable. Work by Safiya Umoja Noble and others extended this lens to internet search and architecture, finding that the search algorithm was never neutral; instead it was a series of business decisions wearing neutrality like a costume, creating a customer service experience. LLMs take that same trick and compress it further.

It’s an old idea, and one I’ve been drawing from while I tinker with Claude, which is purportedly the best in the game. The “god trick” is baked right into the AI interface: one input, one output, an authoritative-seeming answer, offered without named perspectives behind it, trained on text produced overwhelmingly by a narrow demographic that has historically had access to both literacy and publishing, by programmers and new media drawing from the same well. Smushed together, it gives the impression that consensus exists where there are in fact many, many loose ends.

I increasingly find it annoying that even “good” AI outputs seem fixated on words like “key,” “core,” “exist,” “actually,” “never,” and on possibly the worst sentence structure of all time, “it’s not X, it’s Y” — and I’ve begun to recognize how LLMs work like autocorrect for phrases and ideas, drawing from highly ranked search sources before fanning out to more obscure ones, trying to determine and assert what’s important to me, a user known by demographics and data. It feels like a big linguistics machine, which is pretty cool in some regards, but also aggressively semantic. The math doesn’t always connect me to what I want to find, because I am situated in my individual context in ways LLMs cannot understand: in my memory, in my body, with my unique experiences, which shape and translate meaning for me as I interact with the world (and the web).

And so for you, in your body and memory and experience. An LLM can approximate the outputs of an experience without having access to the experience itself. Sometimes this is useful, sometimes it’s reckless.

Overall the dynamic reminds me of the famous scene from Good Will Hunting: Claude is a smart kid, and he’s never been outta Boston.

McKenzie Wark for Verso Books on Donna Haraway, in 2015: “Creating any kind of knowledge and power in and against something as pervasive and effective as the world built by postwar techno-science is a difficult task. It may seem easier simply to vacate the field, to try to turn back the clock, or appeal to something outside of it. But this would be to remain stuck in the stage of romantic refusal. Just as Marx fused the romantic fiction that another world was possible with a resolve to understand from the inside the powers of capital itself, so too Haraway begins what can only be a collaborative project for a new international. One not just of laboring men, but of all the stuttering cyborgs stuck in reified relations not of their making.”

Sharing Digital Animal, a new, curated playlist of songs about all the angst and joys of living with modern technology.

Closer to the machine

There is something about the AI moment that reminds me a lot of when the internet was new. A lot of what was imagined and promised about the internet was never realized. But much was.

I’ve been reading Ellen Ullman’s memoirs, “Life in Code” and “Close to the Machine,” and her observations about proximity to technology feel relevant here. Being close to the machine means understanding its actual capabilities and limitations apart from the prevailing sales narratives. It also means a kind of loneliness, because you are working in a space that others don’t yet see clearly or fully understand.

I suspect people thinking seriously about AI right now will experience something similar: a stretch of hostility and discomfort while the rest of the world catches up and the consumer market level-sets on the promises being made. In the interim, the hype will not match the reality, and the reality will sometimes exceed the hype in ways no one predicted. And for a while, how it works and why it matters won’t be legible to everyone at once.

Crunching for clarity

In 1999, academic and theorist Judith Butler famously won an award for the worst academic sentence, raising good questions about how we read difficult texts, who gets to access academic ideas, and the role of academic and plain language in and around the academy:

The move from a structuralist account in which capital is understood to structure social relations in relatively homologous ways to a view of hegemony in which power relations are subject to repetition, convergence, and rearticulation brought the question of temporality into the thinking of structure, and marked a shift from a form of Althusserian theory that takes structural totalities as theoretical objects to one in which the insights into the contingent possibility of structure inaugurate a renewed conception of hegemony as bound up with the contingent sites and strategies of the rearticulation of power.

This was my era in college, and I remember feeling challenged and overwhelmed as an undergrad trying to read theory for the first time. So today, I asked Claude.ai to put it in plain language, just to see what would happen. Here’s the after:

Scholars used to think that capitalism shaped society in consistent, predictable patterns. But newer thinking focuses on how power works—how it repeats, overlaps, and gets reshaped over time. This brought time into the picture and moved away from older theories that treated social structures as fixed. Instead, recognizing that structures can change opened up a new way of understanding power: it's tied to specific moments and strategies, and it's always being renegotiated.

A meta lesson about AI assistance

I just completed my first attempt at coding with AI, in this case having Claude assist me with putting together a simple client-side OPML parser for Dave Winer’s FeedLand service.

Winer’s original script is pretty slick, and includes a list of all my feeds with titles, URLs, and categories; click-to-expand functionality to see the 5 most recent posts from each feed; clickable post titles that open articles in new tabs; sort options (by title or by update); and automatic updates when I change my FeedLand subscriptions.

You can check it out here: Feeds

The method in the official documentation didn’t work at first because Hugo (the blogging software behind micro.blog) was wrapping client-side templates around the script. The toolkit requires server-side dependencies that don’t exist on static sites like micro.blog, and we hit a cascade of missing JavaScript dependencies (jsonStringify, servercall, etc.). Each fix revealed another missing dependency, leading to some “sunk cost” frustration for me, but I kept trying because I wanted to see if Claude could pull it together. Through trial and error, I got to a point where the OPML file rendered correctly without server dependencies or complex external libraries.
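For anyone curious why a dependency-free version is possible at all: an OPML subscription list is just XML, with each feed as an outline element carrying attributes like xmlUrl and title, so it can be parsed without any server-side toolkit. My version runs in client-side JavaScript, but here is a rough Python sketch of the same idea; the sample OPML snippet is made up, standing in for my actual FeedLand export.

```python
# Sketch of parsing an OPML subscription list with no server-side
# dependencies. The real project does this in client-side JavaScript
# (via the browser's DOMParser); the structure is identical.
import xml.etree.ElementTree as ET

# Hypothetical sample, not my real FeedLand export.
SAMPLE_OPML = """<?xml version="1.0"?>
<opml version="2.0">
  <head><title>Subscriptions</title></head>
  <body>
    <outline text="Example Blog" type="rss"
             xmlUrl="https://example.com/feed.xml"
             htmlUrl="https://example.com/" category="tech"/>
  </body>
</opml>"""

def parse_opml(opml_text):
    """Return a list of feeds with title, feed URL, site URL, and category."""
    root = ET.fromstring(opml_text)
    feeds = []
    for outline in root.iter("outline"):
        # In OPML 2.0, feed subscriptions are outlines with type="rss";
        # the human-readable label lives in the "text" attribute.
        if outline.get("type") == "rss":
            feeds.append({
                "title": outline.get("text", ""),
                "xml_url": outline.get("xmlUrl", ""),
                "html_url": outline.get("htmlUrl", ""),
                "category": outline.get("category", ""),
            })
    return feeds

feeds = parse_opml(SAMPLE_OPML)
print(feeds[0]["title"])  # Example Blog
```

From there, rendering is just looping over that list and writing out links, which is exactly the part that needed no toolkit at all.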

Time invested: ~3 hours (including wrong turns)

Time it should take: 10 minutes

AI extended my code reach well beyond my practical skillset. I now have a dynamic, dedicated place to read and share news feeds as I wish. But even when generative AI works, and works well, I have significant concerns about its intellectual property implications, and this project brought those tensions into sharp focus. The AI could only help me because it was trained on documentation and intellectual work from the open source community: contributions made freely in the spirit of knowledge sharing, not to train commercial AI systems. I tapped into their expertise by paying Anthropic $15 a month. While I’m grateful for the accessibility this provides to non-developers like me, I recognize there’s an unresolved ethical question about whether this use respects the intent and labor of the original creators. The feat is incredible; the foundation it’s built on deserves careful consideration.

After the exercise was complete, I asked Claude how I could have improved my prompting to make this process easier, and in short, Claude said I could have been a web developer. But since I’m not, here’s what it recommended:

When the process isn’t working, question the process mid-stream. Most people either give up or keep following bad advice deeper into rabbit holes. Stop and question the LLM’s process and ask for alternatives to force a reset.

Push for usability. Keep bringing the conversation back to what you actually need the end result to do, not what’s technically impressive or “correct.” In my case, this meant repeatedly asking “can I click through to the articles?” rather than getting lost in discussions about CORS proxies or JavaScript syntax. Focus on outcomes, not implementation details.

Ask for complete solutions. Instead of trying to mentally patch together incremental changes across multiple responses, ask the LLM to provide fresh, complete code each time. This prevents copy-paste errors and ensures you’re always working with a coherent, tested solution. There’s more than one way to crack an egg, but you want the whole egg regardless.

After all that, I got it to work but can’t figure out how to make it show up in my header menu, with or without Claude. TBD.

What is this site and why am I doing it?

A while back I stopped posting on most social media and moved to the fediverse. I still browse the social platforms to keep up with trends and friends, but I only post on my private IG and here.

What I share here is separate from but related to my professional life — I’m thinking out loud and making room for rough, unfinished ideas. I write mainly for myself, but if others find it useful, that’s great. The practice of reading and reflecting makes your thinking stick, and I am from a certain time and place, so this is how I approach learning and communicating about what I’m learning. It’s a habit.

While this is my preferred approach, I acknowledge that sharing unfinished ideas publicly is risky, and you have to accept accountability for the messiness that comes with it. But I also know that working through vulnerability in the act of writing lets you tap into your most creative, innovative self and test your ideas against an evolving sense of what’s good. The potential for an audience, however real or implied, keeps you more honest and less self-indulgent. Despite the trade-offs, I think it’s worthwhile.

As I add to this page, I’ll be thinking out loud about digital rhetoric and communication alongside emerging technology, and linking back to foundational ideas I see reflected online today. Occasionally I’ll say something longer.