Why AI-Generated Holiday Ads Fail — And What They Teach Us About Using AI in UX Work

Summary: 
AI holiday ads lack authenticity and emotional resonance, highlighting the need for human judgment and attention to users’ needs in UX and creative work.

Brands like McDonald’s and Coca-Cola have recently experimented with AI-generated holiday advertisements, only to face significant public backlash. Looking beyond the controversy, these AI initiatives offer a useful lens for examining responsible AI use in UX work.

Controversy Around AI-Generated Ad Campaigns

One of the most recent GenAI ad campaigns was McDonald’s Netherlands’ “The Most Terrible Time of the Year,” which depicted holiday chaos (cyclists slipping in snow, Santa Claus stuck in a traffic jam, disastrous family gatherings) and positioned McDonald’s as an escape from festive stress. The ad was pulled shortly after launch amid intense online backlash, with viewers saying it “ruined Christmas spirits” and labeling it “AI slop.” (Archived copies of the clip remain available through a marketing-research database.)

McDonald’s Netherlands’ Holiday Campaign: The Most Terrible Time of the Year (2025)

Prior to McDonald’s campaign, Coca-Cola had attempted, for two consecutive seasons, to reimagine its classic 1995 “Holidays Are Coming” ad with generative AI. The 2024 version, featuring red trucks in snowy scenes and people holding bottles, was created by partnering with several creative agencies using multiple AI models, but it was slammed for its “soulless” and “creepy” visuals, where characters appeared artificial and devoid of genuine emotion.

A screenshot of Coca-Cola’s AI-generated ad depicting a smiling person holding up a Coca-Cola bottle in a snowy night scene, with warm, glowing holiday lights blurred in the background.
Coca-Cola’s Holiday Campaign (2024)

In 2025, Coca-Cola doubled down with another AI ad featuring anthropomorphic animals (polar bears, pandas, sloths) admiring the Coca-Cola trucks, aiming to avoid human close-ups that plagued the previous version. While technically improved (e.g., better truck wheel animations), it still drew criticism for its artificial quality.

A screenshot of Coca-Cola’s AI-generated ad depicting three seals emerging from a river and watching illuminated Coca-Cola trucks cross a stone bridge in a snowy night landscape.
Coca-Cola’s Holiday Campaign (2025)

The Criticism

Both AI-generated ad campaigns sparked massive online backlash, and the criticism stemmed from a mix of aesthetic, ethical, emotional, and economic factors, often intertwined with broader societal concerns about AI’s role in creative work.

Technical and Aesthetic Shortcomings

Despite significant improvement in AI video models over the past few years, generative AI still struggles to achieve realism that aligns with real-world logic. AI-generated visuals often provoke the uncanny-valley effect — that uncomfortable sense of “almost human, but not quite.” Once viewers notice these subtle distortions, the emotional connection begins to collapse. In the case of holiday ads, instead of being drawn into a warm holiday narrative, viewers become preoccupied with what looks off.

To avoid exposing these (and other) limitations of AI, these ads relied heavily on workarounds: rapid montages of one-second clips, distant camera shots to conceal distortions, and minimal close-ups to avoid uncanny details. These choices mask the flaws in technology but also undermine the creative storytelling. Instead of a fluid, cohesive narrative, the ads feel stitched together — frantic, fragmented, and visually disorienting. Audiences can perceive when the narrative is shaped around what the technology can do rather than what the story should be.

Lost Authenticity

AI-generated content is often criticized as soulless and algorithmic, lacking human creativity and clear authorship. While humans may provide prompts and high-level direction, they do not fully control how those ideas are translated into the final output; much of the creative execution is determined by the model’s internal algorithm. When human involvement in the creative process is obscured and the outputs appear mass-produced, audiences are more likely to perceive the work as impersonal and inauthentic.

This directly contradicts the spirit of a traditional holiday ad, which is intended to create an emotional connection with its viewers. That mismatch helps explain why public backlash has been especially strong in these cases, whereas other brands’ smaller-scale AI marketing experiments (such as AI-generated social media content) have drawn comparatively muted reactions. Holiday campaigns carry unusually high emotional stakes: audiences bring deep-seated nostalgia, brand memories, and a well-established mental model of what these ads should feel like. When a campaign violates those expectations, the rejection is more intense — many see it as a betrayal of the brand’s heritage and the cultural significance of the holiday.

Human-Labor Substitute

The use of AI often signals a broader organizational strategy. In consumer-facing work, it is frequently interpreted as an intentional cost-cutting measure — one that prioritizes efficiency over craft at the expense of quality and jobs. These perceptions are reinforced by ongoing concerns about AI systems trained on uncredited creative work and positioned to replace the very labor that produced it.

To clarify, these ads are not produced entirely by AI. In both campaigns, substantial human labor was required to plan, direct, generate, curate, and refine the AI outputs.

  • Reporting on Coca-Cola’s 2025 campaign indicates that 5 AI specialists worked for about a month to generate roughly 70,000 video clips, with around 100 Coca-Cola staff involved across the broader production.
  • Similarly, in a since-deleted Little Black Book post (later archived by 80 Level), the filmmakers behind McDonald’s holiday campaign described a seven-week production involving up to 10 in-house AI and postproduction specialists, with each shot going through extensive iterative refinement.

Critics argue that this level of time, expertise, and budget could have been directed to support conventional human-driven production — employing real actors and crews to create more original and authentic content.

Why Companies Are Obsessed with AI Campaigns

As AI becomes increasingly associated with the future of work, organizations face growing pressure to adopt and publicly showcase AI initiatives to appear competitive and appeal to investors. High-profile ad campaigns like these are often used to demonstrate technological ambition and position the brand as “cutting edge.” However, this pursuit of visibility can lead companies to “use AI for the sake of using AI,” rather than to tell a meaningful story about the product or experience they build.

It is particularly notable that these companies proceeded with AI-generated campaigns despite a well-documented history of public backlash. After its 2024 campaign was met with criticism, Coca-Cola did not retreat; instead, it returned with another AI-driven effort. This pattern suggests that brands may have anticipated — if not strategically accepted — controversy, recognizing that backlash itself can amplify visibility and drive public attention and discussion.

According to research conducted by Jonah Berger and Katherine Milkman, “arousal” is a key driver of social sharing: content with high-arousal emotions (e.g., awe, anger, anxiety) is more likely to drive sharing compared to content with low-arousal emotions (e.g., sadness). The AI-generated ads primarily trigger high-arousal negative reactions, such as anger and disgust. Although these reactions are unfavorable, they still motivate sharing, often in the form of criticism or mockery. As a result, this content travels further and faster than a “safe,” traditional ad would.

What Failed AI-Generated Ad Campaigns Teach Us About UX Work

The backlash against these AI-powered holiday ads highlights important lessons for UX, as AI becomes increasingly embedded in the work we do. Like holiday campaigns, UX choices often affect people emotionally and carry lasting consequences. What can design teams learn from these AI-related missteps when applying AI in UX work?

Prioritize Building Long-Term Trust

Teams should focus on long-term impact rather than short-term gain. Although eliciting negative emotions may temporarily increase virality, it can adversely affect brand reputation. Repeated attempts to generate interest through AI initiatives may lose novelty over time, and the trust eroded by these tactics can be difficult to rebuild.

In UX, some businesses deliberately create deceptive patterns for short-term business gains, such as boosting conversions. While these tactics may appear effective in the moment, they undermine user trust and cause long-term harm to the relationship between users and the brand. Designers should actively avoid deceptive patterns and advocate against their use.

Begin with a Purpose, Not AI

These holiday campaigns were built around AI, and the story was bent to accommodate the technology. By focusing on the technology rather than using it in service of the narrative, the outcome emphasized novelty at the expense of meaning.

The same lesson applies to product design. Teams must start with a clear rationale for why AI belongs in a product or feature in the first place. “AI-powered” is not a meaningful value proposition to users. Whether a solution uses AI or not, it must solve a real user problem in a way that feels useful, trustworthy, and appropriate.

Before embracing the technology, teams should ask: What value does AI add to the existing workflows? What problem does it solve better than existing approaches? Organizations need a clear, intentional AI strategy, rather than blindly rewarding AI usage by measuring adoption.

Use AI to Assist, Not to Replace

UX professionals should approach AI adoption with particular care and sensitivity. Because of its broader societal implications, AI remains a subject of ongoing public debate. When generative AI is used heavily in highly visible, commercial work, it often attracts heightened scrutiny.

Currently, it is safer to use AI for internal processes than for end-user outputs. Instead of automating the entire process, apply AI selectively to discrete tasks while retaining human ownership of creative direction. Appropriate uses include early-stage ideation, lightweight copy refinement, and generating microcopy or image content for testing purposes.

Generative AI still requires substantial human direction to produce high-quality outcomes. It should therefore be treated as a support tool rather than a replacement for human work. As AI tools become increasingly available, the true differentiator is not the technology itself, but the quality of human thinking behind it.

When presenting work that involves generative AI, be transparent about how the technology was used. Rather than emphasizing technical novelty, foreground the human intent, creative vision, and storytelling that shaped the final outcome — these human elements are what separate meaningful work from purely AI-generated outputs that feel artificial.

Embrace Healthy Skepticism

It’s easy to laugh off failed AI-powered holiday campaigns or dismiss AI based on today’s uneven outputs. But don’t lose sight of the fact that these technologies are evolving fast and rapidly shaping the future of work. UX practitioners should adopt a stance of healthy skepticism: resist the hype, but don’t reflexively reject the tools either.

You can’t meaningfully critique AI in the abstract. The most effective way to stay grounded in a rapidly changing landscape is through proactive curiosity — run small, low-risk experiments and evaluate the results firsthand. Lived experience, not secondhand opinions, is what allows practitioners to retain judgment and agency as the technology evolves.

Conclusion

The AI-holiday-ad backlash is a reminder that people don’t judge AI solely by its output; they read its use as a signal of organizational priorities — whether a brand values novelty over craftsmanship or efficiency over care. In UX, those same signals shape trust: how AI is applied influences how users interpret both the experience and the organization behind it. AI should support, not dictate, the experience — responsible use requires a clear purpose, human ownership, and the restraint to apply AI only where it meaningfully aligns with design intent.

References

Jonah Berger and Katherine L. Milkman. 2013. Emotion and virality: What makes online content go viral? GfK Marketing Intelligence Review 5, 1 (May 2013), 18–23. https://doi.org/10.2478/gfkmir-2014-0022
