AI's Transformative Impact on Public Relations and Social Media

Change is a constant in public relations and social media, but the emergence of artificial intelligence (AI) isn’t just another change – it’s a revolution. Generative AI goes beyond automating tasks. It has become a creative collaborator for many communication and public relations professionals, sparking fresh ideas for content creation, building media lists and helping overcome hurdles like writer’s block.

While still unfolding, it’s clear this is not just a trend. According to a recent study from Exploding Topics, 35% of global companies report using AI in their business already, and approximately half of businesses plan on incorporating AI into their processes this year. Additionally, one recent study projects the AI market will grow to nearly $108 billion in value by 2028. As we embrace AI advancements, we’re witnessing a fundamental shift in how we strategize, connect with audiences and craft our messages. And while its potential seems limitless, AI also presents challenges that require careful consideration. In this edition of Plain Talk, we’ll discuss the intersection of AI and communication, how this dynamic force is shaping the future and whether it’s time for you to jump on board.

How Can You Use AI Today?

There are many ways that AI tools are helping PR and social media professionals right now. If you are eager to try them out, here are a few uses worth considering:

  • Automating Routine Tasks

AI excels at automating repetitive tasks that eat into your valuable time. Things like scheduling social media posts on Facebook, Instagram and LinkedIn, identifying relevant hashtags and generating social and earned media coverage reports are all possible today. Using AI tools to assist can free up PR professionals and social media managers to focus on the bigger picture – crafting compelling content, being proactive with projects, having more time to handle urgent crises, and relationship-building with customers, media outlets and teammates. In fact, one recent study shows AI can save an employee an average of two and a half hours per day.

There are several automation platform tools that use AI to help professionals, including social media-specific tools like Sprout, which automatically chooses the best time of day to post and publishes the content on its own; SocialBee, which can generate an entire social media strategy; and Missinglettr, a social marketing platform that turns YouTube and blog content into social campaigns.

  • Improving Audience Engagement

AI-powered chatbots, like OpenAI’s ChatGPT, have become game-changers for social media interactions, media monitoring and community management. These intelligent assistants can engage with users in real time, answer questions, provide personalized recommendations and even facilitate transactions, like shipping products to influencers. This customized approach enhances user experience and allows brands to deliver relevant content exactly when audiences need it.

Third-party platforms like Pipedream can help users integrate OpenAI’s ChatGPT with Facebook’s Pages application programming interface (API). The integration supports several operations, including creating comments and posts, responding to events and even analyzing page data.
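To make the idea concrete, the request such a workflow hands to a chatbot can be sketched in a few lines. This is a hypothetical Python helper, not Pipedream’s actual workflow code or an official Meta integration; the `brand_facts` dictionary and function name are illustrative assumptions.

```python
# Hypothetical sketch: build the chat request a Pipedream-style workflow
# might send to OpenAI when a Facebook page receives a comment or message.
def build_reply_request(comment, brand_facts):
    """Assemble a chat-completion request that grounds the assistant
    in a small set of approved brand facts (illustrative only)."""
    facts = "\n".join(f"- {key}: {value}" for key, value in brand_facts.items())
    return {
        "model": "gpt-4o-mini",
        "messages": [
            {"role": "system",
             "content": "You are a helpful social media assistant. "
                        "Answer only from these brand facts:\n" + facts},
            {"role": "user", "content": comment},
        ],
    }
```

The resulting dictionary could then be passed to OpenAI’s chat completions endpoint, with the model’s reply posted back to the page through the Pages API. Keeping the system prompt limited to approved facts is one way to reduce the off-brand replies discussed later in this article.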

Meta also now offers its own AI tools within the platform. Within the business manager account, a manager can create automated responses for up to five keywords. When one of the keywords is used in an incoming message, the chatbot will recognize the term and send what it deems to be the correct reply. This can be helpful if someone is reaching out to ask a direct question, like store hours or a company phone number. These automated replies can be set up on a desktop or mobile app. However, this can also pose risks of an inappropriate response, which will be discussed later in this article.
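The keyword-matching behavior described above can be approximated in a few lines. This is an illustrative Python sketch, not Meta’s actual implementation; the keywords and canned replies are hypothetical.

```python
# Hypothetical sketch of keyword-based auto-replies (not Meta's actual code).
# Maps up to five keywords to canned responses, as described above.
AUTO_REPLIES = {
    "hours": "We're open Monday through Friday, 9 a.m. to 5 p.m.",
    "phone": "You can reach us at 555-0100.",
    "location": "We're at 123 Main St.",
    "returns": "Our return policy is at example.com/returns.",
    "shipping": "Orders ship within 2 business days.",
}

def auto_reply(message):
    """Return the canned reply for the first keyword found in the
    message, or None so a human can handle unmatched messages."""
    text = message.lower()
    for keyword, reply in AUTO_REPLIES.items():
        if keyword in text:
            return reply
    return None
```

Note the limitation this exposes: a complaint that happens to contain the word “hours” still triggers the canned reply, which is exactly the context problem discussed later in this article.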

  • Sparking Creative Ideas

Stuck in a content rut? For many, AI has become an ideation and brainstorming partner. Popular platforms for brainstorming creative ideas include a newly launched Magic Studio from Canva, which offers text drafting and image creation, ChatGPT and Google Gemini, among others. By analyzing extensive datasets, AI algorithms uncover hidden insights about audience interests, trending topics and emerging issues. This data-driven approach helps communication professionals develop content and PR campaigns that resonate with their target audience. This is often a great first step to get those brainwaves going.

Additionally, based on previous prompts and passages used, AI tools utilize predictive analytics to suggest creative angles, keywords and visuals. This adaptability allows PR and social media professionals to refine strategies and boost audience engagement. In a recent survey from Muck Rack, 64% of respondents use AI to write social copy, 58% use AI for research and writing press releases and 54% use AI to craft pitches for media outreach.

When writing and brainstorming with AI, it’s important to use a platform that best meets your needs. For general writing, like a blog, Google’s Gemini and Microsoft’s Copilot can be useful tools for creating a first draft. Semrush is also a popular AI tool, offering assistance with writing for websites, ads, social media content and more. For academic writing, however, a specialized research tool may be a better fit, since these tools scour academic journals to find articles relevant to the search terms you enter.

Remember – these tools shouldn’t replace your writing, but they can help assist and improve it while cutting back on the time you’d spend on these efforts.

Is AI Too Risky?

While AI offers a powerful toolkit for PR practitioners and social media professionals, it also has risks that must be addressed. Responsible use requires careful navigation of potential pitfalls. Potential risks can include:

  • Data Dilemmas: Security and Leaks

AI thrives on data, but this dependency raises significant concerns about privacy and security. Social media and PR agencies must be vigilant about the type of data fed into AI systems, avoiding confidential information like unreleased products, partnerships or event details. Inadvertent leaks through AI platforms could have disastrous consequences for a brand’s reputation and competitive edge.

For example, in 2023, Samsung employees in Korea entered confidential code into ChatGPT to help fix a bug. The platform retained that code, putting company secrets at risk and prompting Samsung to ban employee use of generative AI tools.

Another example of an AI misstep occurred when Microsoft’s AI research team, while publishing open-source training data on GitHub, accidentally exposed 38 terabytes of additional private data. This data included company secrets, passwords and thousands of private Microsoft Teams messages.

[Image: Microsoft documents leaked to GitHub. Image courtesy: Wiz]


We cannot reiterate this enough – legal counsel should always be consulted to ensure compliance with data protection regulations. Never place anything into these AI tools that hasn’t been publicly released.
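One practical safeguard for this rule is a pre-submission filter that scrubs flagged terms before any text reaches an external AI tool. Below is a minimal sketch, assuming your organization maintains its own list of confidential terms; the term list and function name here are hypothetical.

```python
import re

# Hypothetical list of confidential terms; a real deployment would pull
# this from a managed, access-controlled source rather than hardcoding it.
CONFIDENTIAL_TERMS = ["Project Falcon", "Q3 rebrand", "Acme partnership"]

def scrub(text):
    """Replace confidential terms with [REDACTED] before sending text
    to an external AI tool. Returns the scrubbed text plus a flag so
    redacted submissions can be routed for human review."""
    redacted = False
    for term in CONFIDENTIAL_TERMS:
        pattern = re.compile(re.escape(term), re.IGNORECASE)
        if pattern.search(text):
            text = pattern.sub("[REDACTED]", text)
            redacted = True
    return text, redacted
```

A filter like this is a backstop, not a substitute for training: it only catches terms someone thought to list, so the “never paste unreleased information” habit still has to come first.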

  • Who Owns an AI-Generated Image?

Another big question to address where content creation and AI are concerned is, “Who owns the copyright of an AI-generated image?” Is it the programmer, the user who prompts the system or the AI itself? AI’s capability to generate images is impressive, but this innovation raises significant legal and ethical concerns. Generative AI systems often train on trademarked images, leading to potential intellectual property violations. For example, some AI-generated images have been found with Getty watermarks, illustrating the severity of this issue. While companies like Getty are developing their own AI tools to navigate these complexities, many other platforms are not, so it’s important to be aware of this broader issue.

[Image: Getty Images – AI and Copyright]

AI training data may inadvertently include copyrighted material, raising concerns about whether the resulting generated images constitute derivative works. Courts will likely struggle with the concept of “fair use” in the context of AI-generated content. Artists, too, are pushing back against the unauthorized use of their work for AI training, escalating the debate.

Beyond the copyright issue, the ability to create highly realistic, fake images can be used to exploit and spread misinformation and propaganda. Deepfake photos and videos, for example, have become an increasingly significant issue and have the potential to affect reputation.

While using AI tools that generate images, it’s important to be aware of these potential issues; courts and lawmakers are still working out how to ensure fair and legal use of creative content.

  • Inappropriate Automated Responses

While we noted above how automated responses powered by AI can be helpful, they also carry several risks. AI tools may lack understanding of your overall brand or the context of questions and comments, which can lead to generic, irrelevant replies that fail to address specific user questions. This creates a frustrating experience for audiences and undermines brand trust if users don’t feel like they are communicating with an actual person. We’re seeing it more and more these days – live chats are offered on websites, but you end up chatting with a program instead of a person, and in many cases it just doesn’t seem to understand your question. The same can happen if you use AI to automate your responses. AI systems may also misinterpret user intent, potentially resulting in awkward miscommunications or even offensive responses that damage relationships.

Likewise, AI’s limited emotional intelligence can be problematic, particularly when dealing with emotionally charged situations. A poorly timed or insensitive response from a chatbot can escalate a situation and further alienate the audience. For example, in early 2024, Air Canada had to refund a passenger hundreds of dollars after its chatbot gave him the wrong information about a bereavement refund. When you rely on an AI tool, there are bound to be mistakes. Always leverage machine learning preference settings when available to personalize automated interactions.

Even with Meta, its five keyword responses can cause issues when it doesn’t understand the context of a message. If the keyword is set to ‘sale’ and someone messages to say, “I had an issue with the sale code,” but the autoreply sends a link to a current sale, this is going to be incredibly frustrating for the user. If you’re using AI to automate responses, for now, it’s best to set this type of reply for closed-ended questions or statements, like business hours, unless you are prepared to deal with a lot of frustrated customers.

  • Losing the Human Touch

While AI excels at streamlining repetitive tasks, overreliance on automation can also strip away the human element from communication. This is especially detrimental during client crisis management. Imagine a PR team relying solely on AI to identify and respond to a critical situation. This detached approach could lead to disastrous consequences for the client’s reputation. AI tools are valuable for brainstorming general solutions and drafting generic crisis communication statements, but they are a starting point that should assist – never replace – human judgment and decision-making. Allowing AI to react independently during a crisis can result in false statements, inappropriate emotional responses, incorrect sentiment analysis or unrealistic promises – issues that can be easily avoided with a human review process.

How easy is it to see when someone is using AI tools? There are a few obvious signs that a writer has used AI to assist them, including overuse of buzzwords and phrases like “transformative,” “notably,” “navigating the…” and “in the realm of.” Other examples are repetitive sentence structures and cliché metaphors and similes. If it doesn’t sound like the writer’s own voice, they’ve probably used AI tools to assist, and complete reliance on them is easy to spot. This is why it’s crucial to collaborate with AI and edit the suggested copy. You still want the writing to come from your team and not just a tool.

By being aware of these risks and implementing necessary safeguards, PR and social media professionals can leverage AI’s capabilities responsibly and maximize its positive impact on communication strategies and content generation.

Mitigating the Risks of Using AI by Implementing Security Measures

If organizations allow the use of AI tools, they must adopt proactive measures to maximize the benefits of AI technology and data analysis while mitigating the associated risks. Data security is the foundation of responsible AI use in PR and social media. Here are a few steps your organization can take to shore up your defenses:

  • Encryption Tools: Employ strong encryption methods that scramble data into unreadable code, even if intercepted by unauthorized parties. Think of it as wrapping your data in a layer of protection.
  • Take Control: Implement strict access controls to regulate who can access sensitive information. Role-based permissions ensure that only authorized personnel can view specific data sets relevant to their job function. Multi-factor authentication (MFA) adds an extra layer of security by requiring a secondary verification step beyond a simple password – like a code sent to your phone. Think of it as a two-key lock system for maximum security.
  • Proactive Defense: Regular security audits and vulnerability assessments are like conducting security drills. These proactive measures help identify and patch potential weaknesses in your system before they can be exploited. By constantly testing your defenses, you can stay ahead of potential security threats.
  • Transparency Builds Trust: Prioritizing data security protects your confidential information and builds trust with clients, stakeholders and the public. Demonstrating a commitment to data protection shows them you value their privacy and security.
  • Keep Confidentiality Offline: If you have confidential or sensitive information, don’t share it with AI platforms. Here’s an example: Rather than throwing in information about your company’s top-secret rebranding, use broad information that helps you create a safe, generic framework without giving top-secret information away. From there, personalize the content offline with the confidential information, and ensure it is in brand voice and written in a way that sounds like your company and not an AI tool. Think of AI as your brainstorming partner for general concepts, but leave the confidential information to the human experts.
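The role-based permissions and MFA steps above can be sketched as a single access check. This is an illustrative Python sketch under assumed roles and data sets, not a real identity system; production deployments would enforce this in an identity provider, not in application code.

```python
# Hypothetical role-to-dataset permissions map (illustrative only; a real
# system would store this in an identity provider, not in code).
ROLE_PERMISSIONS = {
    "social_manager": {"engagement_metrics", "post_schedule"},
    "pr_lead": {"engagement_metrics", "media_contacts", "crisis_plans"},
    "intern": {"post_schedule"},
}

def can_access(role, dataset, mfa_verified):
    """Grant access only if the role is permitted for the data set
    AND the user has completed multi-factor authentication."""
    return mfa_verified and dataset in ROLE_PERMISSIONS.get(role, set())
```

The point of the sketch is the AND: a valid role without MFA, or MFA without the right role, both fail closed, which is the "two-key lock" behavior described above.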

Regular Monitoring and Updates

AI tools are not “set it and forget it” solutions. Just like people, AI algorithms need periodic evaluations. These assessments verify their accuracy and ensure they deliver appropriate responses that align with your brand’s voice and ethical guidelines. This helps identify and address potential biases or unintended consequences in AI outputs.

[Images: two AI chatbots that don’t understand the question and fail to provide useful help; both needed human review and correction to prevent the issue from recurring. Image courtesy: LinkedIn]

Regular performance evaluations allow you to identify any deviations from expected behavior. Think of it like catching a bug in your system. Promptly addressing these issues helps maintain optimal system function and ensures your AI continues to deliver the best possible results.

AI is constantly evolving. By staying updated on industry advancements and new capabilities, you can continuously leverage the latest improvements to enhance your system’s performance. This ensures your AI stays sharp and operates at peak efficiency over time.

Regular monitoring and updates demonstrate your organization’s commitment to ethical AI use. This proactive approach builds trust with stakeholders and ensures your AI operates within the boundaries of regulatory requirements. By prioritizing continuous monitoring and updates, you can ensure that your AI-powered tools function effectively and ethically. This will make you a more reliable and trustworthy PR and social media partner.

Strike a Balance

AI is an intriguing new tool in the social media and PR industry, but it’s not a magic wand. The key lies in collaboration, not replacement. Imagine AI as a powerful research assistant and brainstorming partner. It can analyze data to identify solutions and craft general frameworks, but the human touch is irreplaceable.

PR pros and social media teams need proper training to work effectively with AI, allowing them to inject critical thinking, empathy and creative problem-solving into the mix. This empowers teams to create authentic and impactful communication that resonates with audiences on a deeper level. By striking the right balance between AI’s efficiency and the human touch, PR and social media professionals can achieve the winning formula for success in today’s dynamic communication landscape.

Are You Ready to Embrace the Change?

Integrating artificial intelligence into public relations and social media is reshaping communication strategies. While AI offers efficiency and creativity, it also poses risks that require careful management.

To navigate this landscape effectively, PR and social media professionals must prioritize security measures, regular monitoring and updates. Finding a balance between AI-driven automation and human empathy is crucial for maintaining authentic communication and building meaningful connections with audiences.

Organizations can harness AI’s benefits by adopting responsible practices while mitigating potential risks, ensuring a successful and impactful approach to modern communication.

If you need help taking your PR strategy and social media presence to the next level, send us a note or give us a call at 502.499.4209.

Today’s article was penned by Public Relations and Social Media Specialist Carly Curry.