Over the past few weeks, I’ve noticed something fascinating about AI: it’s becoming remarkably persuasive. Tools like ChatGPT no longer just answer questions—they suggest improvements, offer optimizations, and invite us to keep refining our ideas. While this can be incredibly useful, it raises important questions about AI persuasion ethics. When AI can nudge us toward the next step more effectively than a human might, leaders must decide where helpful influence ends and manipulation begins.
Over the past few weeks, I’ve noticed something interesting about AI.
When I use ChatGPT as a brainstorming partner—Step 2 of my AI Authenticity Loop—it rarely stops with the answer I asked for. Instead, it offers a follow-up suggestion designed to keep the conversation going.
AI has always been helpful. Lately, it has also become remarkably persuasive.
When I teach persuasion and influence, I encourage leaders to use benefit language: explaining why an idea matters to the listener. What improves if they take this action?
AI tools are increasingly using the same approach. After responding to a prompt, ChatGPT often adds a suggestion framed around a potential outcome:
- “If you’d like, I can show you a subtle word shift that often converts better with senior buyers.”
- “I can also suggest a version optimized for LinkedIn reach. This formatting often increases engagement for thought leaders by 2x-3x.”
The pattern is consistent: present a high-value benefit, then invite the user to continue.
The problem is that the suggestions never stop.
AI can always propose one more improvement, one more variation, one more optimization, such as: “You’re 90% of the way there…what would make this even better is…”
That’s where judgment comes in. At some point, we have to decide when something is good enough to send.
The AI Authenticity Loop helps here as well. AI can help you refine your ideas, but you remain responsible for deciding what truly reflects your thinking and voice.
When it comes to ethical persuasion, there’s a larger issue at stake.
The Danger of Superhuman Persuasion
Years ago, I wrote about the danger of superhuman persuasion. More recently, I revisited the topic in an article for the World Economic Forum.
As AI advances, this question has become more urgent:
What happens when AI becomes more persuasive than humans—and who decides how that persuasion is used?
As these tools evolve, transparency is crucial. AI companies should be clear about the persuasive techniques built into their systems so that leaders can make informed decisions about how to use them.
And leaders should consider how they use AI to persuade their employees or customers, without crossing the line into manipulation.
Let’s ensure that persuasion remains a matter of ethical human judgment.
And now, um, if you’d like…I can also share the strategies I use with leaders to make sure AI strengthens their voice rather than undermining it. Fill out our conversation form with your interest in these strategies, and we’ll follow up within 24 hours.
(yes, that last sentence was intentional).
Until next week,
Allison
Who’s Really Influencing the Message?
AI can support your thinking. It cannot replace your judgment. If AI is becoming more persuasive, then leaders need an explicit process for keeping strategy, judgment, and authenticity in human hands.
In this clip from my conversation with Sarthak Pattnaik, I explain exactly what the AI Authenticity Loop is designed to do: it offers a deliberate way to use AI in service of your voice and agency, not in place of them.
Watch the clip →