Timothy Chester offers some thoughts on the place of AI-assisted software development in a modern research university, and suggests that just because you can doesn’t necessarily mean you should.
Out: Twitter on a vape. In: AI-powered crypto vape. Is this real? Who can tell anymore.
Maris Kreizman on AI pressures building in the publishing industry, particularly how it impacts writers and editors: “It’s not an ideal environment for productivity, let alone for making art.”
Tom’s Hardware on Mythos and marketing hype. Additional commentary from Michael Corn, who asks whether coverage like this establishes perceptions about cybersecurity or merely reinforces existing ones.
I once worked in a role where I keyed million-dollar manufacturing orders into SAP, information that fed directly into factory specs at a manufacturing facility in another country. Our regional office fed into a massive global electrical engineering firm that ran on thin margins (electricity delivery is a well-trod market), so our ability to deliver accurate orders on time was a differentiator in a field otherwise easily disrupted by chip shortages and brittle logistics chains.
It was a big job. I learned a ton about electrical engineering, manufacturing, and global logistics from a particular vantage point in North America. Our headquarters were in Sweden, with locations around the world supporting electrical grids with both hardware and software solutions. My colleagues and I worked in positions somewhere between B2B customer service, inside sales, and data entry, and we were expected to maintain a 99.8% accuracy rate because a single fat-finger error would cascade across myriad systems, impacting real-world operations to the tune of hundreds of thousands of dollars per error.
Once (and only once), I fat-fingered a serial number during data entry, ruining an entire shipment of widgets. In response, the factory in Mexico sent the incorrect order of widgets, about five pallets' worth, to my location in the United States so I could correct the order by hand. One by one, I had to physically remove each widget from a pallet, then from its individual shipping container, make the correction on the widget itself, and repackage it, signing my name on each unit to certify it had been corrected by an accountable employee. I can’t recall why the issue couldn’t have been fixed on the factory floor, but that wasn’t on the menu. It was going to stay my problem.
This was the one and only factory error I made in roughly five years of tenure, precisely because it was so painful to correct. The process was a little embarrassing, but nobody made it especially so. Instead, my coworkers up and down the org chart relayed a simple expectation: the desk workers need to pay attention to the details because the alternative is too costly. A few old-timers razzed me about it in good humor, but ultimately the error was mine and the fix was mine, and the experience stuck because the whole chain of responsibility understood the stakes and reinforced the consequences. They also trusted me to stick around and keep doing my best.
During my annual review that year, I was dinged for only having a 98% accuracy rate, and I knew why that was a fair assessment.
I thought about this when I read today’s NYT piece on whether 90% accuracy is good enough for LLMs.
Related-ish: The important legacy of the Sarbanes-Oxley Act.
Poell argues that AI is entangled with platform capitalism through shared infrastructure, reinforcing market concentration. The hype obscures the local realities of adoption and puts public alternatives in the position of justifying their existence while also advocating for their place in the market.
Seeing a lot of discussion about trade-offs related to vibe-coding.
Angst about data center development continues to grow in the Rust Belt.
Silicon sampling is the practice of using LLMs to run surveys without talking to any people at all.
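For the curious, here’s roughly what that looks like in practice: a minimal Python sketch where each “respondent” is just a persona prompt. The `ask_llm` function is a stand-in for whatever chat client you’d actually use, and the personas and question are invented for illustration, not drawn from any real study.

```python
# A minimal sketch of "silicon sampling": instead of fielding a survey,
# prompt an LLM once per synthetic respondent and tally the answers.
from collections import Counter

def ask_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (OpenAI, a local model, etc.)."""
    raise NotImplementedError("wire this to your LLM client of choice")

# Invented personas standing in for a sampled population.
PERSONAS = [
    "a 34-year-old nurse in Ohio",
    "a 68-year-old retired engineer in Arizona",
    "a 22-year-old graduate student in Oregon",
]

QUESTION = "Do you support building more data centers in your county? Answer Yes or No."

def silicon_sample(personas: list[str], question: str) -> Counter:
    """Ask the model to answer once per persona and count the responses."""
    tally = Counter()
    for persona in personas:
        prompt = f"Answer as {persona}. {question}"
        tally[ask_llm(prompt).strip()] += 1
    return tally
```

The appeal is obvious (no recruiting, no fieldwork, instant results), which is exactly why the practice deserves scrutiny: the “sample” only reflects whatever the model absorbed about those populations, not the populations themselves.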