⚡ Navigating AI in higher ed communications: A practitioner's guide

by Lauren Bruce

As communications professionals in higher education, we work for institutions built on the pursuit of knowledge and innovation, yet many of us feel uncertain about how to thoughtfully integrate one of the most significant technological advances of our time: artificial intelligence.

Over the past year, my team has wrestled with questions that didn’t exist in our profession just a few years ago. Should we use AI to draft articles and email copy? How do we disclose AI-generated content, or do we? When does AI assistance cross the line from helpful tool to ethical concern?

These aren’t abstract questions any longer. Over the last year, I had to overcome my own resistance to AI in order to develop practical, hands-on approaches that align with our institutional values while acknowledging the realities of modern communications work (more on that below). What I’ve learned is that the answers aren’t found in blanket policies or rules, but in applying our existing professional ethics to these new tools. Here is where I am today on the journey from AI principles to practice.

Mission first

The foundation of responsible AI use in our field starts with a principle we already know: everything we do should advance our institution’s educational mission. Higher education exists to create, share, and preserve knowledge while fostering critical thinking and diverse perspectives, in service of students, faculty, researchers, workers, and the world.

The bulk of our work comes from conversations with colleagues, understanding of our campus dynamics and processes, and professional judgment about what our community needs to hear. Grounding our work in those sources inevitably means more effort upfront, but it maintains the authenticity and institutional knowledge that our audience deserves, regardless of whether AI tools are part of the process.

Transparency without paranoia

Do I need to mention AI every time I use it? The answer isn’t simple, but I’ve found a helpful framework: consider whether your audience would feel misled if they knew how AI was involved in creating the content.

When I use AI to polish grammar and shape format, that feels similar to using spell-check – it’s helpful but not something that changes the fundamental nature of the content we wish to communicate. But when AI helps generate the main structure for a story about campus policy changes, that’s a different ball game. The audience expects those priorities and framing decisions to come from human judgment about what matters to our community.

Internally, we differentiate between the two by asking whether we are “automating” a process with AI or “augmenting” it. Full disclosure: my area of experience is in augmentation, not automation. That said, the question of disclosure applies either way.

I’ve started recommending simple disclosures when AI plays a substantial role in content creation. A line like “This article was developed with AI assistance” maintains trust while allowing us to thoughtfully benefit from these tools. It’s not about being defensive; it’s about being transparent with the people we serve, especially as the technology and attitudes around it evolve over time. Additional qualifications can be included, such as how AI shaped the content and whether any information was shared with an AI tool along the way (privacy implications abound).

Here, it’s important to remember to use only university-approved tools, because university enterprise AI tools are configured to meet campus rules and requirements for data handling.

The accuracy imperative

Perhaps nowhere are the stakes higher than with accuracy. In higher education communications, we’re not just sharing information—we’re stewarding public trust in our institutions and, by extension, in higher education itself. In addition, much of the information we communicate is original: new information that can’t be pulled from the limitless soup of material that generative AI was trained on.

Every piece of AI-generated content requires human verification, especially anything involving numbers, research findings, or claims about institutional achievements. This means checking sources, confirming statistics, and ensuring that quotes are accurate and properly sourced. It’s more work, but the alternative—publishing incorrect information—could undermine years of relationship-building with community stakeholders and partners.

The promise of speed and efficiency that comes with generative AI must be balanced with the work of close reading: carefully analyzing a passage’s language, content, structure, and patterns to understand what it means, what it suggests, and how it connects to our larger body of work. I firmly believe that close reading, learned in the Humanities and Social Sciences, will become increasingly important for understanding, shaping, and steering AI output, especially with regard to public communication best practices.

Inclusion as a practice

AI bias isn’t an abstract concern—it shows up in subtle but significant ways in our work. I’ve noticed, for example, that AI tools often default to formal, academic language that can exclude first-generation college students, or suggest examples and metaphors that assume particular cultural backgrounds.

This has made me more intentional about prompt engineering—the way I request AI assistance. I build digital accessibility and plain language guidance into my prompts, in alignment with institutional best practices. One tip: draft the original in my own chosen, intentional language, then ask for revisions that preserve as much of the original wording as possible. The difference in output is significant, and it lets me focus on higher-order communication strategy while keeping both accuracy and inclusive values in what we publish.
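To make that concrete, here is a rough sketch of the kind of revision prompt I mean (the wording is illustrative, not an official template): “Revise the draft below for a general campus audience. Use plain language at roughly an eighth-grade reading level, short sentences, and headings and link text that work well with screen readers. Keep as much of my original wording, quotes, and terminology as possible, and flag any change that could alter the meaning.”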

Privacy and the long view

Working at a public university means balancing transparency with appropriate privacy protections. We work within strict guidelines about what information can be included in AI prompts, particularly around student data, personnel information, and strategic planning discussions. Again, this is why it’s important to use only university-approved tools.

The challenge is that AI tools work best with context, but providing that context can sometimes mean sharing information inappropriately. I’ve learned to be creative about how I frame requests to AI tools—giving enough context for useful output while protecting sensitive information about individuals and institutional operations.

I focus AI prompts on publicly available information rather than including details from internal planning discussions or individual faculty concerns. It requires more thoughtful preparation, but it ensures we’re protecting appropriate confidentiality.

Speed vs. strategy

The efficiency of AI is seductive, especially when facing tight deadlines and endless communication requests. But I’ve learned that speed can’t come at the expense of quality or authenticity.

Authentic institutional voice and authority don’t emerge from algorithms—they require the deliberate application of human judgment to ensure our plans and communications reflect our campus culture, embody our values, and resonate with our specific audiences. The strategic thinking we bring—our ability to read context, navigate relationships, and understand the subtle dynamics of higher education communication—cannot be automated.

Consider my own practice: I frequently engage AI as a collaborative thinking tool, particularly for structural planning and format development. However, AI’s default tendency toward comprehensive, multi-layered approaches often produces frameworks that are unnecessarily complex for the realities of university communication. This is where professional judgment becomes critical. Strong strategic foundations and institutional knowledge allow us to right-size AI’s expansive suggestions into focused, contextually appropriate communication plans that actually serve our goals and communities.

Looking ahead

What I’ve learned over this past year is that responsible AI use isn’t about following a rigid set of rules. It’s about applying the professional ethics we already have to new technological capabilities. The core principles that guide good communications work—accuracy, transparency, service to mission, respect for audience—remain the same.

What’s different is that we now have tools that can enhance our ability to live up to those principles, if we use them thoughtfully. AI can help us communicate more clearly, research more efficiently, and reach broader audiences. But only if we maintain our professional judgment about when, how, and why to use these tools.

As our field continues to evolve, I’m convinced that the communications professionals who thrive will be those who can harness the power of AI while maintaining the human insight, ethical judgment, and institutional knowledge that define excellence in our profession. The technology will keep changing, but our commitment to serving our institutions and communities through ethical, effective communication remains constant. That’s the foundation we build on, whether we’re writing with pen and paper, collaborating in a digital document, or prompting the most sophisticated AI tool on campus.


*This post reflects my ongoing learning about AI ethics in communications practice and was written with the assistance of AI (Claude, Gemini). Cross-posted on LinkedIn.
