As communicators, the tools at our disposal have never been more powerful. And the decisions on how to use these tools have never been harder.
In 2026, communications professionals are navigating a world where AI can analyze, draft, schedule, and optimize faster than any team. What it can't do is make the strategic decisions that build trust. What to say, to whom, and why? Those questions take a person to answer. For communicators today, the question isn't whether we should use AI; it's when we should use it.
To help make sense of where communications are heading in 2026, we’ve turned to three communications leaders and asked them to dust off their crystal balls: Cerys Thompson, Managing Director at CT Communications; Chris Lee, Vice President, Communication Consulting Practice, at Gallagher; and Tom Ormsby APR, FCPRS, Principal at Tom Ormsby Public Relations.
Here’s what they are watching.
The human-centric shift: beyond the AI hype
There's no denying that AI has brought a seismic shift to the communications profession. However, Cerys Thompson believes an AI reality check is in order. AI has made it faster and easier to generate content, but fast content is not the same as strategic content.
“The risk is that the conversation becomes focused on speed and scale rather than strategy,” she said.
AI hasn't solved the deepening public trust crisis or declining employee engagement. This is where the human touch is needed. Before writing anything or reaching for the AI tools, communicators should be asking questions.
“Questions like: What problem are we trying to solve? What do we want people to know, feel or do as a result?
“Technology can support communication, but it can’t replace thoughtful strategy or genuine empathy for your audience,” said Thompson.
The right guardrails need to be in place to guide communicators' use of AI. The Gallagher State of the Sector survey found that 36 per cent of respondents felt their organization was AI-ready.
"Without a proper commitment of resources and consideration, I think those organizations that view AI as somewhere between a novelty and a productivity tool face risks associated with governance, and losing their voice," said Chris Lee.
In 2026, successful communications practitioners will be the ones who bring the human touch back to the forefront of what they do.
“The communicators who will thrive in this environment are those who bring communication back to its core purpose: helping people understand each other, share meaningful stories and move forward together,” said Thompson.
Addressing the AI trust gap
Trust is the foundation of the communications profession. Trust is also something that AI can't generate, but can break.
As communicators, we know that transparency normally builds trust. When it comes to AI, studies have shown it can do the opposite. A University of Arizona study found that "the…act of disclosing AI usage can substantially diminish trust."[1] The loss of trust was smaller among audiences with a favourable view of technology.
Trust is also diminished when an organization is found to have used AI without disclosing it. This leaves communicators trying to manage reputations and trust with no easy answers. For Tom Ormsby, this is a conversation the industry needs to have in 2026.
“I think there’s an ethical reason to document the use of AI-generated materials that you are presenting as yours,” he said.
With the rapid advances in AI, and its growing acceptance by the public, Ormsby believes that developing a standard for ethical AI use by communications practitioners is essential. He believes transparency will prevail: owning the work, and the tools used to create it.
And this is where the human element is essential. As the tools continue to evolve, so too do the ethical questions. Communications professionals who think strategically, build trust, and know when human judgment matters most will be the ones to succeed. In 2026, AI isn't a threat to our profession; it's the reason for it.
Sources:
[1] https://www.sciencedirect.com/science/article/pii/S0749597825000172#ab005
AI Assisted–Human Reviewed
AI was used to summarize primary interviews and other original source materials used by the human author, as well as for ideation and proofing, before the final publication review.
