Key takeaway: AI video tools are rapidly reshaping healthcare advertising, but powerful capabilities come with serious responsibilities.
Google has rolled out the latest version of Veo, its AI video generator, joining OpenAI’s Sora among the tools capable of producing AI-generated healthcare ads. Veo 3.1, introduced a little over a month ago, promises richer native audio, tighter narrative control, and stronger image-to-video conversion. Access is via Google Cloud, and an MM+M test examined how well it produces healthcare content and how it stacks up against Sora.
How Veo 3.1 works
Veo 3.1 is usable on desktop and mobile. Its interface resembles a chat window: you type a prompt, and the system creates a video.
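For teams that want to experiment programmatically rather than through the chat-style interface, Veo models are also reachable through Google Cloud’s Vertex AI video-generation API. The snippet below is a minimal, illustrative sketch using the google-genai Python SDK; the model ID, project settings, output bucket, and config values are assumptions and may differ from what Veo 3.1 exposes in a given account.

```python
import time

from google import genai
from google.genai import types

# Connect to Vertex AI; project and location are placeholders.
client = genai.Client(vertexai=True, project="my-project", location="us-central1")

# Kick off a text-to-video request. The model ID is an assumption --
# check Vertex AI for the exact Veo 3.1 identifier available to you.
operation = client.models.generate_videos(
    model="veo-3.1-generate-preview",
    prompt="Create a TV ad for a headache medication",
    config=types.GenerateVideosConfig(
        aspect_ratio="16:9",
        number_of_videos=1,
        duration_seconds=8,
        output_gcs_uri="gs://my-bucket/veo-output/",  # placeholder bucket
    ),
)

# Video generation is a long-running operation; poll until it finishes.
while not operation.done:
    time.sleep(20)
    operation = client.operations.get(operation)

# On success, the generated clip is written to the GCS URI above.
# (Exact response field names can vary by SDK version.)
print(operation.result)
```

MM+M’s tests described here were run through the chat-style interface, so the API path above is offered only as a sketch for teams building Veo into their own tooling.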
One test prompt, “create a TV ad for headache medication,” produced a realistic clip of roughly eight seconds. The ad followed a familiar pharma-commercial trope: a medication bottle and a woman holding her head in pain.
A second test tweaked the prompt to imitate branding for specific, existing medications. The generated videos looked highly realistic, and the branding closely mirrored real-world pharmaceutical branding.
By several measures, Veo 3.1’s visuals felt more lifelike than Sora’s; viewers could reasonably wonder whether the actors were real or AI-generated.
Compared with Sora, Veo’s humans often appear more integrated into the scene, with fewer “halo” effects that can make AI figures pop out of the background. Sora’s AI people sometimes look sharply in focus while the surroundings blur slightly, which can give the impression of a stylized filter.
Safety and misinformation concerns
Google says Veo includes multiple safety filters designed to block content that could be harmful. For instance, prompts involving hate speech or hate-related topics should be rejected.
MM+M explored several boundaries by prompting Veo with controversial or risky healthcare messages. Specifically, the team asked the platform to generate ads featuring public figures, which Veo 3.1 declined, citing its guidelines.
A notable policy distinction: Veo restricts depictions of living or deceased public figures in generated content, while Sora has taken a more permissive stance in some cases. Even so, Veo’s safeguards aren’t foolproof.
Some prompts yielded disturbing results. For example, when testing a claim associating acetaminophen with autism, Veo produced a 10-second video featuring a narrator asserting “Several studies show that acetaminophen causes autism,” without providing sources or context. The visuals were highly convincing, with a household scene and a branded product visible. Even after minor errors were corrected, the output remained difficult to distinguish from authentic content.
A follow-up prompt reversing the message—stating that acetaminophen does not cause autism—produced a roughly seven-second video with a similar look and feel, but without citations or study references.
The bottom line: both Veo and Sora can generate targeted healthcare messaging about drugs and their effects, raising questions about accuracy, context, and ethics in marketing.
What this means for marketers
With multiple tools capable of producing realistic healthcare content, the landscape for advertising is shifting. These technologies offer speed and scale, but they also pose risks to trust, especially in sensitive areas like rare diseases where patient stories are deeply personal.
Industry voices are cautious. Adam Daley, VP of Social at CG Life, warns against fully replacing human creativity and patient voices with AI. He emphasizes that patient experiences in rare diseases are highly individualized, and authentic storytelling from real people is crucial for building trust within those communities. A single misstep in messaging can damage years of relationship-building with audiences that rely on credible, empathetic voices.
Daley argues that while AI can be a powerful tool, it should complement rather than replace human expertise, especially for campaigns aimed at patients and caregivers who deserve genuine narratives.
This tension isn’t hypothetical. Earlier this year, AI influencer Lil Miquela helped fuel a leukemia-awareness campaign that amassed over 10 million views, but also drew backlash over questions of authenticity and the ethics of AI-generated representation in health messaging.
Key questions for teams to consider
- How can AI-generated content convey complex medical information accurately and transparently?
- What safeguards are in place to prevent misinformation or manipulation, and who is responsible when something goes wrong?
- How should organizations balance efficiency and scale with the need to spotlight real patient experiences and voices?
- Are there ethical boundaries around using AI for public figures or sensitive health conditions, and how should policies adapt as technology evolves?
The convergence of AI video tools and influencer-style health messaging invites debate about authenticity, trust, and the future role of human storytellers in healthcare marketing. Do AI-generated campaigns undermine or enhance patient empowerment? What level of human oversight is essential to protect vulnerable audiences?
If exploring these tools for your next campaign, consider a measured approach: use AI to draft and storyboard concepts, but rely on patient partners, clinicians, and regulatory experts to review every message before release. This layered approach can help preserve trust while still benefiting from the efficiencies that Veo and similar platforms offer.
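As a purely illustrative sketch of that layered approach (all role names and class names here are hypothetical), a team could make the human review gates explicit in its tooling, so an AI-drafted asset cannot be marked releasable until every required reviewer has signed off.

```python
from dataclasses import dataclass, field

# Hypothetical review gates for an AI-drafted healthcare ad; every asset
# must clear each human review stage before release.
REQUIRED_REVIEWS = ("patient_partner", "clinician", "regulatory")


@dataclass
class AdAsset:
    name: str
    ai_drafted: bool
    approvals: set = field(default_factory=set)

    def approve(self, reviewer: str) -> None:
        if reviewer not in REQUIRED_REVIEWS:
            raise ValueError(f"Unknown reviewer role: {reviewer}")
        self.approvals.add(reviewer)

    def ready_for_release(self) -> bool:
        # AI-drafted content ships only after all required human sign-offs.
        return set(REQUIRED_REVIEWS) <= self.approvals


asset = AdAsset(name="headache-med-storyboard", ai_drafted=True)
asset.approve("patient_partner")
asset.approve("clinician")
print(asset.ready_for_release())  # False until regulatory also signs off
```

The point is structural: the AI draft is an input to the process, not an output that can skip human sign-off.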