Businesses that are over-relying on AI in their PR and content creation are quickly realising how damaging it can be to their business and brand.

As well as the organisations rehiring staff that they had laid off in favour of AI, many are wrestling with the consequences of misinformation in AI-generated content.

When businesses put out content that is factually incorrect – whether that’s on owned, paid or earned channels – it threatens the foundations of their brand trust. 75% of consumers say transparency is important to them, and incorrect information in content can quickly erode that trust.

80% of consumers are sceptical about AI-generated content, and it’s easy to see why. AI-generated content misrepresents news almost half of the time, showing just how widespread the problem is.

Here, we’ll explore how AI-driven misinformation damages brand reputation, how you can rebuild trust if your brand has already been harmed, and how you can use AI in a smarter way.

AI-driven misinformation and its impact on brand trust

Using artificial intelligence comes with a high risk of incorrect or misinformed content. Not only can bad actors create convincing fake content that AI then pulls from, but AI platforms can also use outdated information or sources and even make up statistics and data – known as AI hallucinations.

The evolution of misinformation in the age of artificial intelligence

Online misinformation – particularly on social media platforms – is not a new problem, but it’s a significantly larger problem now that AI tools allow anyone to generate convincing fake content at scale.

Deepfakes are an especially insidious form of AI-generated misinformation. Fabricated videos, images and audio clips can convincingly impersonate your executives or brand representatives. The technology has advanced so rapidly that it’s now hard to distinguish fabrication from reality.

ChatGPT and similar generative AI platforms have made it simple to produce fake press releases, customer reviews and social media posts. These tools can mimic your brand’s voice and style, creating false narratives that appear authentic.

Synthetic personas pose another challenge for both businesses and news outlets. It’s now common for bad actors to create entirely fabricated individuals with realistic profiles, complete with AI-generated photos and backstories. These fake accounts are reducing trust in the media, and are forcing journalists to scrutinise information they receive from even trusted sources more closely.

The sophistication of AI-powered content creation means media monitoring is more important than ever. With monitoring in place, you can quickly identify fake or misleading content generated by others about your business and take fast action to tackle it.

The risks of using AI to generate content at scale

It’s easy to see the appeal of using AI to generate blogs, website service page content and press releases. But without careful consideration, this can dramatically damage your reputation.

It’s widely recognised that a lot of AI-generated content has a distinctive “AI voice”, making it easy to spot for seasoned professionals. But even if you can train an AI model to more closely match your brand voice, it can still hide nuggets of misinformation that customers and journalists can easily spot.

Generative AI models like Google’s AI Overview, ChatGPT and Perplexity essentially spit out aggregated information they’ve scraped from different websites. If that information is incorrect or outdated, the tool isn’t smart enough to know that and eliminate it.

Reputable organisations like the BBC, Sky News and the Associated Press are also now preventing generative AI tools from scraping their sites, meaning there are fewer trusted sources for these tools to draw on.

The lack of strong sourcing, personalisation and brand personality means that using AI to generate your own content has consequences for your reputation. There’s also the public’s perception of AI to contend with; take Duolingo as an example. After it announced that it would lay off staff in favour of AI, its social media platforms were flooded with negativity, and the company ended up deleting all of its TikToks. Faced with the enormous backlash, Duolingo backtracked on its decision.

How AI-generated content erodes credibility

Your audience’s ability to trust digital content has fundamentally shifted. Exposure to AI-generated misinformation reduces trust and influences decision-making, creating an ever more challenging environment for your public relations efforts.

The primary consequence of misinformation is a lack of trust amongst customers and trusted media contacts. Your customers, partners and stakeholders will approach all digital content with suspicion, including your legitimate communications. When audiences can’t distinguish authentic content from fabricated material – or they have reason to suspect you’re using AI-generated content – your strategic messages lose their impact.

This presents a number of challenges that have a measurable impact on your reputation.

Challenge | Impact on your brand
Content authenticity doubts | Reduced engagement from customers and journalists
Deepfake impersonation | Damaged executive and brand credibility
Fake social accounts | Polluted conversation around your brand
AI-generated negative reviews | Skewed perception of product quality

Your media relations strategy must both rely on authentic, human-generated content and be aware of misinformation that may have been spread about your brand. You can account for this uncertainty by building verification protocols and maintaining transparent communication channels with key contacts.

Strategies to defend brand trust against AI misuse

Protecting your brand from AI misinformation calls for multiple strategies, including proactive monitoring systems and clear ethical guidelines. It’s important to be able to address both content created within your organisation and content created by malicious outsiders.

Digital literacy and crisis preparedness

Your team needs strong digital literacy skills to be able to identify AI-generated misinformation confidently. Many professionals who’ve already been exposed to it will be able to identify certain tells, but those less involved may have a harder time.

Real-time monitoring tools provide important early warnings. Businesses can use customised dashboards to track brand mentions across social media and publications, allowing them to spot misinformation immediately and take action before it gains traction. This proactive approach means your communications team needs to be agile and responsive.
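As an illustration of the kind of early-warning filter a monitoring dashboard applies, here is a minimal Python sketch that flags brand mentions containing risk keywords for human review. The mention data, brand name and keyword list are all hypothetical; real monitoring platforms add far richer signals such as reach, sentiment and source authority.

```python
# Hypothetical risk keywords a communications team might watch for.
RISK_KEYWORDS = {"fake", "scam", "deepfake", "lawsuit", "recall"}

def flag_mentions(mentions, brand, risk_keywords=RISK_KEYWORDS):
    """Return mentions of `brand` that also contain a risk keyword,
    so the communications team can review them first."""
    flagged = []
    for m in mentions:
        text = m["text"].lower()
        if brand.lower() in text and any(k in text for k in risk_keywords):
            flagged.append(m)
    return flagged

# Illustrative sample of tracked mentions ("Acme" is a placeholder brand).
mentions = [
    {"source": "social", "text": "Loving the new Acme app update!"},
    {"source": "news", "text": "Viral deepfake shows Acme CEO making false claims"},
    {"source": "review", "text": "Acme support resolved my issue quickly."},
]

alerts = flag_mentions(mentions, "Acme")
for a in alerts:
    print(f"[{a['source']}] {a['text']}")
```

In practice a team would route flagged mentions into whatever triage workflow they already use, rather than printing them.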

Crisis management training and support means that if your business is impacted by falsified AI content, you can respond effectively and protect your reputation. That’s true whether you’ve been caught out using AI and it’s created incorrect information or if external parties are doing so with deepfakes or faked reviews. Media training further helps your senior leadership team and anyone else approached by the media deal with queries effectively.

Navigating compliance and legal risks

Putting processes in place means that when you identify any misinformation about your business, you can take action. It’s also important to decide whether to publicly address misinformation or handle issues quietly through direct engagement with publishers.

Your approach should balance transparency with strategic thinking, considering the legal implications of every statement you make. Documenting your decision-making processes can help you demonstrate compliance with any regulatory requirements.

Key considerations:

Area | Action required
Content verification | Establish fact-checking protocols before responding.
Transparency | Clearly communicate your AI usage policies.
Data protection | Ensure audience data handling meets privacy standards.
Addressing external threats | Assign responsibility for addressing misinformation from third parties.
Media responses | Determine whether or not you need to address issues publicly.
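The verification and transparency steps above can be turned into a simple pre-publication gate. The sketch below is a hypothetical example in Python; the checklist fields are illustrative and should be adapted to your own compliance and editorial policies.

```python
from dataclasses import dataclass

@dataclass
class ContentReview:
    """Illustrative pre-publication checklist for AI-assisted content."""
    facts_checked: bool = False        # fact-checking protocol completed
    ai_use_disclosed: bool = False     # AI usage policy communicated
    sources_current: bool = False      # sources verified as up to date
    human_editor_signoff: bool = False # human oversight before publishing

    def blockers(self):
        """List which checks still fail, so nothing ships unreviewed."""
        checks = {
            "facts_checked": self.facts_checked,
            "ai_use_disclosed": self.ai_use_disclosed,
            "sources_current": self.sources_current,
            "human_editor_signoff": self.human_editor_signoff,
        }
        return [name for name, ok in checks.items() if not ok]

    def ready_to_publish(self):
        return not self.blockers()

draft = ContentReview(facts_checked=True, ai_use_disclosed=True)
print(draft.ready_to_publish())  # still missing two checks
print(draft.blockers())
```

The point of encoding the checklist is that a draft cannot silently skip a step: the gate names exactly what is outstanding.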

Measuring and rebuilding audience trust post-misinformation

Data analytics tools can help you assess damage and track your brand’s recovery after it’s been impacted by AI misinformation. Your PR metrics should measure not just reach and engagement, but also sentiment shifts and trust indicators across different customer segments.

To truly understand what you need to do, assess both the immediate and the long-term reputational impact of any issues. Track audience engagement patterns before, during and after incidents to understand how misinformation has changed how customers feel about you. This data will inform how you rebuild trust.
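Comparing sentiment before, during and after an incident can be sketched very simply. The scores below are hypothetical averages on a −1 (negative) to +1 (positive) scale, as produced by whatever sentiment tool you already use.

```python
def average(scores):
    return sum(scores) / len(scores)

def sentiment_shift(before, during, after):
    """Compare average sentiment in each phase to quantify the damage
    and how far recovery has progressed."""
    b, d, a = average(before), average(during), average(after)
    return {
        "drop": round(b - d, 2),       # how hard the incident hit
        "recovered": round(a - d, 2),  # how much trust has returned
        "gap": round(b - a, 2),        # distance left to pre-incident levels
    }

# Illustrative scores sampled across the three phases of an incident.
report = sentiment_shift(
    before=[0.4, 0.5, 0.45],
    during=[-0.2, -0.35, -0.3],
    after=[0.1, 0.15, 0.2],
)
print(report)
```

Splitting the same calculation by customer segment would show which audiences have recovered and which still need targeted communication.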

How you can win trust back:

  • Create content that emphasises authenticity and transparency.
  • Analyse sentiment patterns across customer segmentation groups to understand reputation damage.
  • Be transparent about your remediation actions, whether that’s no longer using AI to generate content or reviewing third-party content more thoroughly.
  • Be honest and contrite when responding to queries from both the media and customers.

Ultimately, your content creation must reinforce your business and your people’s authenticity. Storytelling that highlights real experiences, data and proof points strengthens your credibility and helps audiences distinguish your genuine communications from AI-generated content.

Frequently asked questions

Businesses face real threats from AI-generated misinformation, whether they’re using it to create their own content or if external parties are. From revenue loss to damaged partnerships, the consequences of AI misinformation can be far-reaching and highly impactful. Here, we take a look at some of the most commonly asked questions around AI misinformation in PR and the potential effects.

What are the common repercussions for businesses facing AI-induced misinformation?

Your business can suffer financial and reputational damage when AI-generated misinformation targets your brand. Competitors and unhappy customers can quickly create fake news about your company, making consumers view your brand less favourably.

You may lose valuable partnerships when false information spreads about your business practices or products. Revenue drops often follow as consumers choose to spend their money elsewhere based on incorrect information they’ve encountered online.

This is also true if you’ve used AI to create your own content and it contains inaccuracies. Many consumers and businesspeople are sceptical of AI and its rising use in business, so this could come with considerable reputational damage too.

How does the spread of false information impact consumer trust in a brand?

Your customers are increasingly suspicious of all information they encounter as AI creates more misinformation. This general distrust affects your brand even when you’ve done nothing wrong.

Equally, with some types of falsified information like deepfakes, customers may struggle to distinguish between legitimate content from your company and fake material created by bad actors.

These trust deficits in society mean your customers already approach new information with suspicion. False claims about your products or services compound this problem, making it harder for you to communicate effectively with your audience.

Recovering from reputational damage takes far longer than the misinformation took to spread. You may spend months or years rebuilding relationships with consumers who encountered false information about your brand.

What strategies are effective for businesses to counteract malicious AI activities targeting their brand image?

You should respond quickly to false information with clear, factual corrections across all your communication channels. Speed matters because misinformation can spread rapidly, especially through social platforms.

Your communications team needs to maintain an active social media presence where you can directly address customer concerns. Engaging with your audience builds relationships that make them more likely to trust your version of events over false claims.

Media relations and building partnerships with trusted media outlets and industry publications can help you counteract false information. These established voices carry weight that your key audiences will listen to.

What is the role of public relations in managing the risks associated with artificial intelligence?

Your PR team serves on the front line of detecting and responding to AI-generated threats against your brand. They can monitor digital spaces for emerging issues and coordinate rapid response strategies when problems arise.

Public relations professionals like Polymedia help you develop clear messaging that counters false narratives without amplifying them further. You need skilled communicators who understand how to address misinformation without inadvertently spreading it to new audiences.

Your PR advisers should maintain relationships with journalists and fact-checkers who can help set the record straight. These connections prove invaluable because credible third parties can verify you in a way that people will trust.

PR teams guide you in building long-term brand resilience through consistent, transparent communication. This foundation of trust makes consumers more likely to believe your statements when false information emerges.

What measures should businesses take to proactively prevent the spread of misleading information by AI systems?

To prevent the spread of AI misinformation in PR, you should firstly implement strict guidelines for any AI tools your company uses in content creation. Clear parameters prevent your own systems from accidentally generating or spreading false information.

Your organisation should invest in employee training about AI misinformation risks. Staff members across all departments need to be able to recognise fake content and understand proper response protocols.

Establish verification processes for all content before publication, especially material created with AI assistance. Human oversight remains essential even when using advanced automation tools.

When you spot AI-generated misinformation created by a third party, it’s important to understand how to address it with the publisher and how to communicate with customers and stakeholders. Public statements and crisis communication plans may be necessary to prevent further spread.