Comms
Sunday, March 8, 2026 →
Come look over my shoulder while I explore how and whether LLMs are good writing tools: Here’s a wee version of the LLM comparison exercise I did with my team. We’ll make it a two-fer so you can see how the “good writing” skill works in practice, though we’ll see how that actually goes.
One of the more useful things you can do with an LLM is hold up a few ideas side by side and apply lenses to them. I know this history pretty well, so I asked a series of LLMs: why is Wisconsin’s cultural identity and cohesion stronger than Indiana’s, from a historical and business perspective?
Here are the answers in one doc, for comparison.
Each LLM will give us more or less the same story, different flavor. Within the industry, the differences across the models reflect “model personality.” Asking “why” instead of “whether” will probably drive the answer to favor Wisconsin. Using multiple lenses (two states, historical + business, identity + cohesion) forces the LLM to cross-reference across more of its training data, which tends to produce a more comprehensive answer.
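If you want to run this kind of side-by-side exercise yourself, the mechanics are easy to script. Here’s a minimal sketch assuming Python and API keys for Anthropic and OpenAI; the model names, the two-model lineup, and the output filename are illustrative stand-ins, not the four tools I compared above (I used the chat apps for those).

```python
# Minimal sketch: send one prompt to a couple of hosted models and
# collect the answers into a single markdown file for comparison.
# Model names and the output path are illustrative assumptions.
import anthropic
import openai

PROMPT = (
    "Why is Wisconsin's cultural identity and cohesion stronger than "
    "Indiana's, from a historical and business perspective?"
)

def ask_claude() -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model name; use what your account offers
        max_tokens=1024,
        messages=[{"role": "user", "content": PROMPT}],
    )
    return msg.content[0].text

def ask_gpt() -> str:
    client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{"role": "user", "content": PROMPT}],
    )
    return resp.choices[0].message.content

answers = {"Claude": ask_claude(), "GPT": ask_gpt()}
with open("comparison.md", "w") as f:
    for name, text in answers.items():
        f.write(f"## {name}\n\n{text}\n\n")
```

The important part is that the prompt goes to every model unchanged, which is what makes the personality differences visible when you read the outputs next to each other.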
For all the chatter about consciousness and whatever, remember that an LLM is, in effect, an endless series of if/then/elses applied to human language and semantics. Being able to talk about language and communication, to get meta with the tool about how you think through language, helps a lot when using one. This is maybe the one thing I like about experimenting so hard with the tools: I’m thinking about the technical side of writing and enjoying it quite a lot.
Functionally: all of them acknowledge hard historical truths within the subject matter and don’t shy away from critical perspectives, which is good. Both Gemini and Copilot include in-line links, which let you judge the output’s authority in the moment as a reader. I liked Copilot’s more than I expected here. Claude’s answers are more lyrical and provide more context, yet they don’t encourage checking against outside sources, because they include no links in the output. And you can see that even with the good writing skill calling out hard bans on certain structures, Claude plows right through them.
Model personality: Claude favors sociological answers where Copilot favors economic ones. Claude is also highly intellectual and narrative by comparison, and that narrative style can mask nuance by sinking relative context within the storytelling. Gemini simplifies, boosts and cheerleads where the others don’t, and really goes hard on Wisconsin’s reputation as a drinking and Packers state when there are stronger structural arguments in play. Copilot is tricky because it looks authoritative, like a briefing, which also makes it easily “extractable” for the user, but every citation requires verification unless this is one of those “good enough” tasks.
As a writer, something I find annoying across the whole spread is the semantic reveal. LLMs are semantic machines, and that machinery persistently shows itself in ways that sound weird to the human ear. All of them go out of their way to describe things as “structural,” “connective” as in “connective tissue,” “load-bearing” and “legible.”
Finally, I included a second tab where I asked Claude for analysis across the four outputs, where it suggests that my framing of the question is altogether kind of problematic. It shows how a strong prompt is sometimes also a bad approach.
There are a lot of possible takeaways here, but I’d rather set aside the question of which tool is “good” or “bad” or “better” and think more about the patterns across the tools and their implications.
Wednesday, March 4, 2026 →
You might call this a taste test: Obsessed with the story about the McDonald’s CEO and how his LinkedIn-style videos selling the McD’s franchise have escaped containment, leading to one of the funnier CEO/product marketing dynamics in recent history. Burger King’s CEO swooped in, holding widely marketed listening sessions with customers to demonstrate his love for the Whopper, in contrast with the deeply weird McD’s videos. Must read: Internet long-hauler Katie Notopoulos on how direct-to-public marketing works when the public is more familiar with the product than the org’s leaders.
My teenager thinks fast food is cool and subversive (cue the sound of one hundred moms groaning) and she and her friends regularly talk shop. They are Taco Bell fans and think burger stops are kind of gauche. Not gauche: Baja Blast.
In the meantime, the fast food sector is becoming a playground for AI approaches, causing a lot of nervous discussion on social media and in tech spaces.
Monday, March 2, 2026 →
Developing a pet theory that you could crank AI outputs toward your vision via an organized link-blog revolution.
Monday, March 2, 2026 →
Reading about a new “slow” RSS app called Current.
Thursday, February 26, 2026 →
New research shows that social media advertising suppresses voting in targeted communities, and is the first to quantify the effect of this kind of microtargeting on voter turnout.
Friday, January 30, 2026
In 1999, academic and theorist Judith Butler famously won an award for the worst academic sentence, raising good questions about how we read difficult texts, who gets to access academic ideas, and the role of academic and plain language in and around the academy:
The move from a structuralist account in which capital is understood to structure social relations in relatively homologous ways to a view of hegemony in which power relations are subject to repetition, convergence, and rearticulation brought the question of temporality into the thinking of structure, and marked a shift from a form of Althusserian theory that takes structural totalities as theoretical objects to one in which the insights into the contingent possibility of structure inaugurate a renewed conception of hegemony as bound up with the contingent sites and strategies of the rearticulation of power.
This was my era in college, and I remember feeling challenged and overwhelmed as an undergrad trying to read theory for the first time. So today, I asked Claude.ai to put it in plain language, just to see what would happen. Here’s the after:
Scholars used to think that capitalism shaped society in consistent, predictable patterns. But newer thinking focuses on how power works—how it repeats, overlaps, and gets reshaped over time. This brought time into the picture and moved away from older theories that treated social structures as fixed. Instead, recognizing that structures can change opened up a new way of understanding power: it's tied to specific moments and strategies, and it's always being renegotiated.
Sunday, November 9, 2025
“… every text is connected to other texts by citations, quotations, allusions, borrowings, adaptations, appropriations, parody, pastiche, imitation, and the like. Every text is in a dialogical relationship with other texts. In sum, intertextuality describes the relationships that exist between and among texts. What follows is a discussion of the strategies of intertextuality.”
Sunday, November 9, 2025
A while back I stopped posting on most social media and moved to the fediverse. I still browse the social platforms to keep up with trends and friends, but I only post on my private IG and here.
What I share here is separate from but related to my professional life — I’m thinking out loud and making room for rough, unfinished ideas. I write mainly for myself, but if others find it useful, that’s great. The practice of reading and reflecting makes your thinking stick, and I am from a certain time and place, so this is how I approach learning and communicating about what I’m learning. It’s a habit.
While this is my preferred approach, I acknowledge that sharing unfinished ideas publicly is risky, and you have to accept accountability for the messiness that comes with it. But I also know that working through vulnerability in the act of writing lets you tap into your most creative, innovative self and test your ideas against an evolving sense of what’s good. The potential for an audience, however real or implied, keeps you more honest and less self-indulgent. Despite the trade-offs, I think it’s worthwhile.
As I add to this page, I’ll be thinking out loud about digital rhetoric and communication alongside emerging technology, and linking back to foundational ideas I see reflected online today. Occasionally I’ll say something longer.
Sunday, August 17, 2025
I graduated from college right before the 2008 recession and bounced through some unpromising temp jobs until an opportunity emerged for a permanent position. Sometimes you just need to get in where you fit in, and so I did. That’s how I came to work for a regional cable company that used federal money to expand the new national broadband network, extending out to the rural communities dotting central Indiana.
It was a front row seat to the national broadband expansion efforts of the late 2000s. Our business ran right across the state, spanning the 80 or so miles from Attica to Kokomo, which included several small cities with large manufacturers, two public research universities, and several community and liberal arts colleges. The strip of broadband fiber at the core of our service followed existing highways and electrical lines that split the corn and soybean fields from town to town, feathering out to more rural areas from there.
I worked a variety of roles there that put me face-to-face with a classic technical problem: the last mile. On many occasions, someone would come in looking pensive and explain that the fiber had been extended all the way from town to their hamlet, and yet there was no plan to connect their property to the pole. Over time the pattern was clear: while the network was expanding, running a physical line to each individual property was too expensive and too property-specific to work at scale. These customers often left without a path forward, despite all their efforts and ours.
The “last mile problem” refers to the logistical challenges and high costs of the final leg of delivering goods or services to the end customer. It’s often the most difficult and expensive part of the supply chain, despite covering a relatively short distance. The pattern shows up everywhere: public transit can get commuters most of the way most of the time, but that final leg of the journey remains specific, individual, and problematic. E-commerce companies promise drone delivery, and scooter and bike-share apps claim to solve urban mobility gaps, but these technological optimizations persistently stall at scale, running up against the messy realities of sidewalks, intersections, and actual human behavior.
Tl;dr: I’ve been turning over this suspicion that AI automation will hit a classic “last mile problem,” especially in the public sector.
AI systems, particularly LLMs, are like those systems: they work incredibly well in their intended domain, processing and manipulating information. But because they’re fundamentally an information-only approach, they create their own last mile problem when we try to implement them in physical and context-specific environments. Public institutions are uniquely specific: they are often the originators and producers of knowledge and the keepers of original policy, tasked with making the rubber hit the road. Additionally, the comprehensive data protection required of public workers and institutions fundamentally hamstrings many potential applications.
Actually implementing recommendations is where you hit the last mile. This is the work of public administration.
Imagine an AI application trained on every facilities management manual ever written and tuned to synthesize best practices for HVAC optimization. It can analyze years of energy usage data and recommend precise temperature adjustments for different zones of a building, but it can’t feel that the third floor is always stuffy, or know that the facilities manager retired last year and took decades of institutional knowledge with him. Building A’s HVAC system was installed in 1987 and has a manual keypad. Building B’s system interfaces with the campus-wide monitoring system, but unreliably, and investigation is slated for later, someday, when resources allow. Professor Smith has taught in Room 204 for 25 years and will blow up your spot before he moves to a different classroom for maintenance. The engineers who manage these spaces are balancing human teams, people with time off and training and other priorities in life. So you need staff who understand the quirks of each area, the history of each system, the politics of which departments will accept changes and which will flood your inbox with complaints. You need someone who knows that the third floor always runs hot because of a design flaw from 1974, and that the solution isn’t more precise control but a $50,000 renovation that’s been deferred for a decade because a glittering new project across campus takes priority.
Imagine this tangle of questions and contingencies times infinity on every university campus in existence. Universities are like cities—they’ve been built and rebuilt over decades or centuries, with layers of systems and fiefdoms that weren’t designed to work together. AI recommendations assume a level of standardization that simply doesn’t exist. Every AI implementation in higher ed requires navigating multiple constituencies with different priorities and power structures. It’s like trying to redesign traffic patterns in a neighborhood where the residents, business owners, commuters, and city planners all have veto power and conflicting interests.
So. When looking at efficiency efforts spinning up across the education sector, I’m feeling pensive, trying to understand how exactly the house gets connected to the pole.
The promise of new tech in higher ed needs to account more deeply for the translation costs: the human labor, institutional knowledge, documentation, and local adaptation required to bridge between the general usefulness of the tech and the specific realities of public university work. Public employees want modernization and don’t want to fall behind. We want systems that work. We are also balancing a great deal of change and pressure as a sector, with fewer material resources than ever. We need less marketing and more right-sizing of the claims around AI against the political and technical realities of public administration.
This disconnect between technological promise and implementation reality becomes even more critical as higher education faces increased political scrutiny. When tech vendors promise that AI will solve efficiency problems or reduce administrative costs, institutions are under immense pressure to deliver measurable results quickly. But the translation costs we experience don’t disappear just because the political pressure to modernize increases.
The institutions that thread this needle will be the ones that accurately assess these translation costs upfront and set expectations accordingly—not the ones that assume the technology will magically bridge the gap between digital and physical, abstract and specific.
Friday, August 15, 2025
by Lauren Bruce
As communications professionals in higher education, we work for institutions built on the pursuit of knowledge and innovation, yet many of us feel uncertain about how to thoughtfully integrate one of the most significant technological advances of our time: artificial intelligence.
Over the past year, my team has wrestled with questions that didn’t exist in our profession just a few years ago. Should we use AI to draft articles and email copy? How do we disclose AI-generated content, or do we? When does AI assistance cross the line from helpful tool to ethical concern?
These aren’t abstract questions any longer. Over the last year, I had to overcome my own resistance to AI to develop practical, hands-on approaches that align with our institutional values while acknowledging the realities of modern communications work (more on that below). What I’ve learned is that the answers aren’t found in blanket policies or rules, but in applying our existing professional ethics to these new tools. Here is where I am today on the journey from AI praxis to practice.
Mission first
The foundation of responsible AI use in our field starts with a principle we already know: everything we do should advance our institution’s educational mission. Higher education exists to create, share, and preserve knowledge while fostering critical thinking and diverse perspectives, in service of students, faculty, researchers, workers and the world.
The bulk of our work comes from conversations with colleagues, understanding of our campus dynamics and processes, and professional judgment about what our community needs to hear. This inevitably means more work upfront, but it maintains the authenticity and institutional knowledge that our audience deserves, regardless of whether AI tools are part of the process.
Transparency without paranoia
Do I need to mention AI every time I use it? The answer isn’t simple, but I’ve found a helpful framework: consider whether your audience would feel misled if they knew how AI was involved in creating the content.
When I use AI to polish grammar and shape format, that feels similar to using spell-check – it’s helpful but not something that changes the fundamental nature of the content we wish to communicate. But when AI helps generate the main structure for a story about campus policy changes, that’s a different ball game. The audience expects those priorities and framing decisions to come from human judgment about what matters to our community.
Internally, we differentiate between the two by defining whether we are “automating” a process with AI or “augmenting” it. Full disclosure: my area of experience is in augmentation, not automation.
I’ve started recommending simple disclosures when AI plays a substantial role in content creation. A line like “This article was developed with AI assistance” maintains trust while allowing us to thoughtfully benefit from these tools. It’s not about being defensive; it’s about being transparent with the people we serve, especially as the tech and attitudes around it evolve over time. Additional qualifications can be included here, such as how the information was (or wasn’t) shaped and shared by AI (privacy implications abound).
Here, it’s important to remember to only use your university-approved tools, because university enterprise AI tools are modified to meet campus rules and requirements related to data handling.
The accuracy imperative
Perhaps nowhere are the stakes higher than with accuracy. In higher education communications, we’re not just sharing information—we’re stewarding public trust in our institutions and, by extension, in higher education itself. In addition, much of the information we are communicating is original, in that it’s new information that cannot be generated using the limitless soup of generative AI.
Every piece of AI-generated content requires human verification, especially anything involving numbers, research findings, or claims about institutional achievements. This means checking sources, confirming statistics, and ensuring that quotes are accurate and properly sourced. It’s more work, but the alternative—publishing incorrect information—could undermine years of relationship-building with community stakeholders and partners.
The promise of speed and efficiency that comes with generative AI must be balanced with the work of close reading: the skill and practice of carefully analyzing a passage’s language, content, structure, and patterns in order to understand what it means, what it suggests, and how it connects to our larger body of work. I firmly believe that close reading, learned in the Humanities and Social Sciences, will become increasingly important for understanding, shaping and steering AI output, especially with regard to public communication best practices.
Inclusion as a practice
AI bias isn’t an abstract concern—it shows up in subtle but significant ways in the work. I’ve noticed, for example, that AI tools often default to formal, academic language that might exclude first-generation college students, or suggest examples and metaphors that assume certain cultural backgrounds.
This has made me more intentional about prompt engineering—the way I request AI assistance. I build digital accessibility and plain language standards into my prompts, in alignment with institutional best practices. One tip is to draft the original using my chosen, intentional language, then ask for revisions using as much of the original verbiage as possible, as in the sketch below. The difference in output is significant, and the approach lets me focus on higher-order communication strategy while demonstrating both accuracy and inclusive values in our output.
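To make that tip concrete, here’s a minimal sketch of the draft-first, revise-second pattern. The requirements list and the function name are illustrative assumptions, not an institutional standard; the point is that the accessibility and plain language rules travel inside the prompt, and the model is told to preserve the original wording.

```python
# Illustrative sketch of the draft-first, revise-second prompt pattern.
# The requirements below are examples, not an institutional standard.
REQUIREMENTS = [
    "Use plain language, aiming for roughly an 8th-grade reading level.",
    "Prefer short sentences and active voice.",
    "Use descriptive link text, never 'click here'.",
    "Preserve as much of the original wording as possible.",
]

def revision_prompt(draft: str) -> str:
    """Wrap a human-written draft in a revision request that carries
    accessibility and plain-language rules along with it."""
    rules = "\n".join(f"- {r}" for r in REQUIREMENTS)
    return (
        "Revise the draft below for clarity and flow.\n"
        f"Follow these rules:\n{rules}\n\n"
        f"Draft:\n{draft}"
    )

print(revision_prompt("Registration for spring courses opens Monday."))
```

Because the human draft comes first, the model is revising toward your intentional language rather than generating framing from scratch.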
Privacy and the long view
Working at a public university means balancing transparency with appropriate privacy protections. We work within strict guidelines about what information can be included in AI prompts, particularly around student data, personnel information, and strategic planning discussions. Again, it’s important to only use your university-approved tools, because university enterprise AI tools are modified to meet campus rules and requirements related to data handling.
The challenge is that AI tools work best with context, but providing that context can sometimes mean sharing information inappropriately. I’ve learned to be creative about how I frame requests to AI tools—giving enough context for useful output while protecting sensitive information about individuals and institutional operations.
I focus AI prompts on publicly available information rather than including details from internal planning discussions or individual faculty concerns. It requires more thoughtful preparation, but it ensures we’re protecting appropriate confidentiality.
Speed vs. strategy
The efficiency of AI is seductive, especially when facing tight deadlines and endless communication requests. But I’ve learned that speed can’t come at the expense of quality or authenticity.
Authentic institutional voice and authority doesn’t emerge from algorithms—it requires the deliberate application of human judgment to ensure our plans and communications reflect our campus culture, embody our values, and resonate with our specific audiences. The strategic thinking we bring—our ability to read context, navigate relationships, and understand the subtle dynamics of higher education communication—cannot be automated.
Consider my own practice: I frequently engage AI as a collaborative thinking tool, particularly for structural planning and format development. However, AI’s default tendency toward comprehensive, multi-layered approaches often produces frameworks that are unnecessarily complex for the realities of university communication. This is where professional judgment becomes critical. Strong strategic foundations and institutional knowledge allow us to right-size AI’s expansive suggestions into focused, contextually appropriate communication plans that actually serve our goals and communities.
Looking ahead
What I’ve learned over this past year is that responsible AI use isn’t about following a rigid set of rules. It’s about applying the professional ethics we already have to new technological capabilities. The core principles that guide good communications work—accuracy, transparency, service to mission, respect for audience—remain the same.
What’s different is that we now have tools that can enhance our ability to live up to those principles, if we use them thoughtfully. AI can help us communicate more clearly, research more efficiently, and reach broader audiences. But only if we maintain our professional judgment about when, how, and why to use these tools.
As our field continues to evolve, I’m convinced that the communications professionals who thrive will be those who can harness the power of AI while maintaining the human insight, ethical judgment, and institutional knowledge that define excellence in our profession. The technology will keep changing, but our commitment to serving our institutions and communities through ethical, effective communication remains constant. That’s the foundation we build on, whether we’re writing with pen and paper, collaborating in a digital document, or prompting the most sophisticated AI tool on campus.
*This post reflects my ongoing learning about AI ethics in communications practice and was generated using the assistance of AI (Claude, Gemini). Cross-posted on LinkedIn.*
Tuesday, July 8, 2025 →
“In 2024, there were a total of 454 words used excessively by chatbots, the researchers report.” When does use of AI tip over into something fraudulent? Experts disagree.
Thursday, May 1, 2025 →
Private group chats are as maddening as public social media - and much harder to track. https://www.semafor.com/article/04/27/2025/the-group-chats-that-changed-america
Tuesday, June 25, 2024
This has a bad headline, but the gist is that AI is already beginning to be used to power racketeering and ransom business models against vulnerable human enterprises. The net effect is a general erosion of the trustworthiness of written communication, especially online, as the same tools we use to perform our work and extend our social lives are increasingly used to scam us.
Wednesday, June 19, 2024 →
Related to collecting: purchases of physical media are on the rise, with vinyl outselling everything and cassette tapes, CDs and DVDs making a comeback. I’m a longtime downloader and streamer, but have been buying vinyl lately myself. Indicative of lost trust in Big Tech?
Wednesday, October 25, 2023
Hilariously (sadly? regretfully?), since I’ve been writing online for public audiences since about 1997, I’ve been thinking about the art of posting, community building, and who benefits and how, for a very long time. All of this (https://blog.ayjay.org/the-three-paths-of-micro-blog/) sounds about right, specifically:
“…it will — by design — never be a place for you to monetize your brand, troll, shitpost, or become an influencer. But hey, there are plenty of other platforms better suited for that kind of thing. Micro.blog is better suited for the more human and humane paths I have identified here.”
Wednesday, October 25, 2023 →
Microblogging on the open web - and why care? book.micro.blog
Wednesday, October 25, 2023 →
This is one to watch: www.theverge.com/2023/10/2…
Tuesday, October 24, 2023 →
As a professional poster (derogatory), this is exactly the kind of posting stream I’m interested in supporting for institutional communications. When you speak on behalf of a company or institution, you need the power to own your own platforms and content, the agility to post to multiple channels from one place, and tools that are user friendly. We need support and ease of use, and since I’m also in public education, it would be great if we could do some of that for free or close to it.
I’m here after listening to this podcast yesterday, after a months-long conversation about abandoning social media in my department (https://www.theverge.com/2023/10/23/23928550/posse-posting-activitypub-standard-twitter-tumblr-mastodon). With the move to algorithmic CPC taking priority over newsworthiness across most platforms, we saw a major drop in engagement. That was after the ridiculous “pivot to video” disaster, though departments like mine nonetheless consider video production in a real way every few months. We decided to call it after Twitter, the final app that answered our need for just-in-time communication, decided to become a walled garden with bad SEO. We’re going to let those channels go dark and focus on talking to our stakeholders in more effective ways.
I’ve been playing with Bluesky and Mastodon on my own, and in many ways, this micro.blog communication channel fits what we need. It doesn’t hurt that I like an indie model with a friendly vibe (micro.blog functionality reminds me a little of early self-hosted blogging and this site feels a little like early Tumblr). The problem: our audiences are not here. We still need to be able to post to myriad platforms, including the big baddies that we’d rather ignore.