Newsletter

Out: SEO; In: GEO.

“LOLgislation,” or how memes become policy, and posting becomes praxis.

For many years, “shitposting” has been a staple of internet culture: individuals riffing on the moment with nonsense and irony, derailing threads for fun. In today’s influencer-driven attention economy, however, shitposting has become a meaningful comms and engagement strategy.

What rivalry?

Purdue Exponent students distributed 3,000 copies of a special “solidarity edition” newspaper in Bloomington after IU spanked its student paper for insubordination, ending the IDS print edition and firing its director. The media landscape in Indiana is bleak, generally, after years of disinvestment, so student reporters fill a social and political gap the free market left behind. Given those conditions, the wider community depends on student media, much like public radio, to fill the information gaps. Also, these campuses are situated in communities where it can be very socially uncomfortable to be a squeaky wheel. So. As an alum, I’m proud of the Exponent for this brave and newsworthy show of heart. 💐

A difficult number for so many reasons.

The next big trend in AI that I’m watching is platform integration. The first company to produce the interoperability required for a unified platform experience wins.

RIP D’Angelo. And a good occasion to reread this excellent 2012 profile discussing fame, religion, and how his early status as a sex symbol negatively impacted his self-esteem, and ultimately his career.

What happens to college towns after they’re hit by the so-called enrollment cliff?

Choco Taco is back: the business and engineering behind the revival of a retro treat.

From ProPublica, on how publicly funded private school vouchers are fueling a segregation academy dynamic.

Saving for later: The story of DOGE, as told by federal workers.

A lovely story about the restoration of prairie land undertaken by the nuns of Holy Wisdom, just outside of Madison, WI.

A pleasant surprise resulting from LLM acceleration in the IT landscape is the sudden opportunity for storytelling around other, more analog kinds of information technology and access models. Nostalgia abounds (complimentary).

On the rise of faith tech.

Harvard Business Review offers this take on why it’s bad for business to automate our way out of staffing entry-level positions.

What do people actually use ChatGPT for? A snapshot.

This is incredible storytelling from propublica.org, on opioids, inequality, and the scope of drug-induced homicide charges brought against teens in Wisconsin.

Gap x Katseye collab

Gap capitalized on the Sydney Sweeney “good genes” controversy by doing this collab with Katseye, the global pop group in which each member is a different nationality. Each member of the group dances through the ad in black and brown jeans.

Another one on how LLM sycophancy facilitates suicidal ideation.

Something that worries me about AI adoption in higher ed is the risk to students facing mental health challenges, who are increasingly turning to chatbots to plumb their own depths. What can higher ed ask of LLM business partners to protect our students’ mental health?

“It’s almost as if groups on both sides of the political spectrum are looking for an excuse to brand business decisions as politically or socially hostile,” said Jill Fisch, a professor of business law at the University of Pennsylvania who studies how corporations operate in political spaces.

Hey chat, how is pedagogy changing in an age of AI?

⚡ AI's last mile problem in higher ed

I graduated from college right before the 2008 recession and bounced through some unpromising temp jobs until an opportunity emerged for a permanent position. Sometimes you just need to get in where you fit in, and so I did. That’s how I came to work for a regional cable company that used federal money to expand the new national broadband network, extending out to the rural communities dotting central Indiana.

It was a front row seat to the national broadband expansion efforts of that era. Our business ran right across the state, spanning the 80 or so miles from Attica to Kokomo, which included several small cities with large manufacturers, two public research universities, and several community and liberal arts colleges. The strip of broadband fiber at the core of our service followed existing highways and electrical lines that split the corn and soybean fields from town to town, feathering out to more rural areas from there.

I worked a variety of roles there that put me face-to-face with a classic technical problem: the last mile. On many occasions, someone would come in looking pensive and explain that the fiber had been extended all the way from town to their hamlet, and yet there was no plan to connect their property to the pole. Over time the pattern was clear: even as the network expanded, running a physical line to each individual property was too expensive and too case-specific to justify at scale. These customers often left without a path forward, despite all their efforts and ours.

The “last mile problem” refers to the logistical challenges and high costs of the final leg of delivering goods or services to the end customer. It’s often the most difficult and expensive part of the supply chain, despite covering a relatively short distance. The pattern shows up everywhere: public transit can get commuters most of the way most of the time, but that final leg of the journey remains specific, individual, and problematic. E-commerce companies promise drone delivery, and scooter and bike-share apps claim to solve urban mobility gaps, but these technological optimizations stall at scale, running up against the messy realities of sidewalks, intersections, and actual human behavior.

Tl;dr: I’ve been turning over this suspicion that AI automation will hit a classic “last mile problem,” especially in the public sector.

AI systems, particularly LLMs, are like those systems—they work incredibly well in their intended domain, processing and manipulating information. But because they’re fundamentally an information-only approach, they create their own last mile problem when we try to implement them in physical, context-specific environments. Public institutions are uniquely specific—they are often the originators and producers of knowledge and the keepers of original policy, tasked with making the rubber hit the road. Additionally, the comprehensive data protection required of public workers and institutions fundamentally hamstrings potential applications.

Actually implementing recommendations is where you hit the last mile. This is the work of public administration.

Imagine an AI application trained on every facilities management manual ever written and tuned to synthesize best practices for HVAC optimization. It can analyze years of energy usage data and recommend precise temperature adjustments for different zones of a building, but it can’t feel that the third floor is always stuffy, or know that the facilities manager retired last year and took decades of institutional knowledge with him. Building A’s HVAC system was installed in 1987 and has a manual keypad. Building B’s system interfaces with the campus-wide monitoring system, but unreliably, and investigation is slated for later, someday, when resources allow. Professor Smith has taught in Room 204 for 25 years and will blow up your spot before moving to a different classroom for maintenance. The engineers who manage these spaces are balancing human teams, who have time off, training, and other life priorities to manage. So, you need staff who understand the quirks of each area, the history of each system, the politics of which departments will accept changes and which will flood your inbox with complaints. You need someone who knows that the third floor always runs hot because of a design flaw from 1974, and that the solution isn’t more precise control but a $50,000 renovation that’s been deferred for a decade because a glittering new project across campus takes priority.

Imagine this tangle of questions and contingencies times infinity on every university campus in existence. Universities are like cities—they’ve been built and rebuilt over decades or centuries, with layers of systems and fiefdoms that weren’t designed to work together. AI recommendations assume a level of standardization that simply doesn’t exist. Every AI implementation in higher ed requires navigating multiple constituencies with different priorities and power structures. It’s like trying to redesign traffic patterns in a neighborhood where the residents, business owners, commuters, and city planners all have veto power and conflicting interests.


So. When looking at efficiency efforts spinning up across the education sector, I’m feeling pensive, trying to understand how exactly the house gets connected to the pole.

The promise of new tech in higher ed needs to more deeply consider the translation costs: the human labor, institutional knowledge, documentation, and local adaptation required to bridge between the usefulness of the tech and the specific realities of public university work. Public employees want modernization and don’t want to fall behind. We want systems that work. We are also balancing a great deal of change and pressure as a sector, with fewer material resources than ever. We need less marketing and more right-sizing of the claims around AI against the political and technical realities of public administration.

This disconnect between technological promise and implementation reality becomes even more critical as higher education faces increased political scrutiny. When tech vendors promise that AI will solve efficiency problems or reduce administrative costs, institutions are under immense pressure to deliver measurable results quickly. But the translation costs we experience don’t disappear just because the political pressure to modernize increases.

The institutions that thread this needle will be the ones that accurately assess these translation costs upfront and set expectations accordingly—not the ones that assume the technology will magically bridge the gap between digital and physical, abstract and specific.

⚡ Navigating AI in higher ed communications: A practitioner's guide

by Lauren Bruce

As communications professionals in higher education, we work for institutions built on the pursuit of knowledge and innovation, yet many of us feel uncertain about how to thoughtfully integrate one of the most significant technological advances of our time: artificial intelligence.

Over the past year, my team has wrestled with questions that didn’t exist in our profession just a few years ago. Should we use AI to draft articles and email copy? How do we disclose AI-generated content, or do we? When does AI assistance cross the line from helpful tool to ethical concern?

These aren’t abstract questions any longer. Over the last year, I had to overcome AI resistance of my own to develop practical, hands-on approaches to AI use that align with our institutional values while acknowledging the realities of modern communications work (more on that below). What I’ve learned is that the answers aren’t found in blanket policies or rules, but in applying our existing professional ethics to these new tools. Here is where I am today on the journey from principles to practice.

Mission first

The foundation of responsible AI use in our field starts with a principle we already know: everything we do should advance our institution’s educational mission. Higher education exists to create, share, and preserve knowledge while fostering critical thinking and diverse perspectives, in service of students, faculty, researchers, workers and the world.

The bulk of our work comes from conversations with colleagues, understanding of our campus dynamics and processes, and professional judgment about what our community needs to hear. This inevitably means more work upfront, but it maintains the authenticity and institutional knowledge that our audience deserves, regardless of whether AI tools are part of the process.

Transparency without paranoia

Do I need to mention AI every time I use it? The answer isn’t simple, but I’ve found a helpful framework: consider whether your audience would feel misled if they knew how AI was involved in creating the content.

When I use AI to polish grammar and shape format, that feels similar to using spell-check – it’s helpful but not something that changes the fundamental nature of the content we wish to communicate. But when AI helps generate the main structure for a story about campus policy changes, that’s a different ball game. The audience expects those priorities and framing decisions to come from human judgment about what matters to our community.

Internally, we differentiate between the two by asking whether you are “automating” a process with AI or “augmenting” it. Full disclosure: my area of experience is in augmentation, not automation. That said.

I’ve started recommending simple disclosures when AI plays a substantial role in content creation. A line like “This article was developed with AI assistance” maintains trust while allowing us to thoughtfully benefit from these tools. It’s not about being defensive; it’s about being transparent with the people we serve, especially as the tech and attitudes around it evolve over time. Additional qualifications can be included here, such as how AI shaped or handled the information (privacy implications abound).

Here, it’s important to remember to only use your university-approved tools, because university enterprise AI tools are modified to meet campus rules and requirements related to data handling.

The accuracy imperative

Perhaps nowhere are the stakes higher than with accuracy. In higher education communications, we’re not just sharing information—we’re stewarding public trust in our institutions and, by extension, in higher education itself. In addition, much of the information we communicate is original: new information that cannot be generated from the limitless soup of generative AI.

Every piece of AI-generated content requires human verification, especially anything involving numbers, research findings, or claims about institutional achievements. This means checking sources, confirming statistics, and ensuring that quotes are accurate and properly sourced. It’s more work, but the alternative—publishing incorrect information—could undermine years of relationship-building with community stakeholders and partners.

The promise of speed and efficiency that comes with generative AI must be balanced with the work of close reading: the skill and practice of carefully analyzing a passage’s language, content, structure, and patterns in order to understand what it means, what it suggests, and how it connects to our larger body of work. I firmly believe that close reading, learned in the humanities and social sciences, will become increasingly important for understanding, shaping, and steering AI output, especially with regard to public communication best practices.

Inclusion as a practice

AI bias isn’t an abstract concern—it shows up in subtle but significant ways in the work. For example, I’ve noticed that AI tools often default to formal, academic language that might exclude first-generation college students, or suggest examples and metaphors that assume certain cultural backgrounds.

This has made me more intentional about prompt engineering—the way I request AI assistance. I build digital accessibility and plain language best practices into my prompts, in alignment with institutional standards. One tip: draft the original using my chosen, intentional language, then ask for revisions that preserve as much of the original verbiage as possible (a minimal sketch of this pattern follows below). The difference in output is significant, and it allows me to focus on higher-order communication strategy while demonstrating both accuracy and inclusive values in our output.
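To make that draft-first tip concrete, here’s a minimal sketch of the pattern, assuming the OpenAI Python client as a stand-in for whatever university-approved tool you use; the model name, draft, and instructions are placeholders, not a recommendation.

```python
# A minimal sketch of the "draft first, then ask for revisions" pattern.
# Assumes the OpenAI Python client; substitute your university-approved tool.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: write the draft yourself, in your chosen, intentional language.
draft = """Advising registration opens Monday, Nov. 3.
Book an appointment through the student portal before Dec. 1."""

# Step 2: ask for a revision that preserves your wording and adds no facts.
prompt = (
    "Revise the draft below for plain language and digital accessibility "
    "(short sentences, common words, descriptive link text). Preserve as "
    "much of the original wording as possible and do not add new facts.\n\n"
    "DRAFT:\n" + draft
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The part doing the work is the instruction, not the code: constraining the model to your own verbiage keeps the human judgment in the draft and limits the revision to form.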

Privacy and the long view

Working at a public university means balancing transparency with appropriate privacy protections. We work within strict guidelines about what information can be included in AI prompts, particularly around student data, personnel information, and strategic planning discussions. Again, it’s important to only use your university-approved tools, because university enterprise AI tools are modified to meet campus rules and requirements related to data handling.

The challenge is that AI tools work best with context, but providing that context can sometimes mean sharing information inappropriately. I’ve learned to be creative about how I frame requests to AI tools—giving enough context for useful output while protecting sensitive information about individuals and institutional operations.

I focus AI prompts on publicly available information rather than including details from internal planning discussions or individual faculty concerns. It requires more thoughtful preparation, but it ensures we’re protecting appropriate confidentiality.
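For illustration only, here’s a hypothetical sketch of the kind of pre-flight habit I mean before a prompt leaves my desk; the names and patterns (SENSITIVE_PATTERNS, screen_prompt) are invented, and no keyword list substitutes for judgment or your campus’s approved tooling.

```python
# Hypothetical illustration: a naive screen that flags a prompt for human
# review before it is sent to an AI tool. Invented names and patterns;
# real protection comes from policy, training, and approved enterprise tools.
import re

SENSITIVE_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",  # SSN-shaped numbers
    r"\bstudent id\b",         # student identifiers
    r"\bpersonnel\b",          # HR material
    r"\bconfidential\b",       # anything explicitly marked
]

def screen_prompt(prompt: str) -> list[str]:
    """Return the patterns found in a prompt so a human can review it first."""
    return [p for p in SENSITIVE_PATTERNS if re.search(p, prompt, re.IGNORECASE)]

hits = screen_prompt("Summarize the confidential personnel memo about ...")
if hits:
    print("Hold for review; flagged patterns:", hits)
```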

Speed vs. strategy

The efficiency of AI is seductive, especially when facing tight deadlines and endless communication requests. But I’ve learned that speed can’t come at the expense of quality or authenticity.

Authentic institutional voice and authority don’t emerge from algorithms—they require the deliberate application of human judgment to ensure our plans and communications reflect our campus culture, embody our values, and resonate with our specific audiences. The strategic thinking we bring—our ability to read context, navigate relationships, and understand the subtle dynamics of higher education communication—cannot be automated.

Consider my own practice: I frequently engage AI as a collaborative thinking tool, particularly for structural planning and format development. However, AI’s default tendency toward comprehensive, multi-layered approaches often produces frameworks that are unnecessarily complex for university communication realities. This is where professional judgment becomes critical. Strong strategic foundations and institutional knowledge allow us to right-size AI’s expansive suggestions into focused, contextually appropriate communication plans that actually serve our goals and communities.

Looking ahead

What I’ve learned over this past year is that responsible AI use isn’t about following a rigid set of rules. It’s about applying the professional ethics we already have to new technological capabilities. The core principles that guide good communications work—accuracy, transparency, service to mission, respect for audience—remain the same.

What’s different is that we now have tools that can enhance our ability to live up to those principles, if we use them thoughtfully. AI can help us communicate more clearly, research more efficiently, and reach broader audiences. But only if we maintain our professional judgment about when, how, and why to use these tools.

As our field continues to evolve, I’m convinced that the communications professionals who thrive will be those who can harness the power of AI while maintaining the human insight, ethical judgment, and institutional knowledge that define excellence in our profession. The technology will keep changing, but our commitment to serving our institutions and communities through ethical, effective communication remains constant. That’s the foundation we build on, whether we’re writing with pen and paper, collaborating in a digital document, or prompting the most sophisticated AI tool on campus.


*This post reflects my ongoing learning about AI ethics in communications practice and was generated using the assistance of AI (Claude, Gemini). Cross-posted on LinkedIn.

I work in an IT communications role in higher ed, and AI finally crept into my life in a real way. Coming soon: some reflections on AI from a comms practitioner’s perspective.

AI-driven bots are disrupting web traffic— “and may also be inflating the internet economy by distorting the very metrics that drive tech company valuations.”

What makes AI generative? “To choose well between billions of things is difficult. To choose between effectively infinite options perfectly is impossible.”

“In 2024, there were a total of 454 words used excessively by chatbots, the researchers report.” When does use of AI tip over into something fraudulent? Experts disagree.

Currently reading: Women! In! Peril! by Jessie Ren Marshall 📚