The two sides of gen AI-powered marketing
By Ed Hill, SVP EMEA, Bazaarvoice
If there was any doubt that the world of marketing is being transformed by generative AI, it was surely dispelled at this year’s Cannes Lions. At the glitzy annual festival of advertising, every conversation, presentation, and product pitch circled inexorably back to AI: “If you were branding this Cannes,” an executive from Meta reportedly commented, “it would be the AI Cannes”.
And the technology does seem to be generating much more than just discourse. According to analysts at McKinsey, marketing and sales functions that invest in AI see a revenue boost of 3-15% as well as a 10-20% lift in sales ROI. The same research indicates that 90% of commercial leaders expect to use gen AI tools often over the next two years.
Beyond the industry conversations and the bottom-line impact, it also seems fair to say that gen AI is delivering a distinct halo effect for businesses that embrace it. AI content start-ups regularly reach billion-dollar valuations, sometimes within a year of being founded, even against a prevailing trend of sharply restrained venture capital funding. In that context, being early to utilise gen AI both raises a marketing business’s capabilities and signals to the industry that it’s at the forefront of innovation more generally.
Is it, then, full steam ahead for marketing applications of gen AI, just as it was for the yachts congregating on the French Riviera for Cannes Lions?
The real value of real users
At the risk of swimming against the prevailing current, it’s worth discussing some of the risks that gen AI poses to marketers. The dangers of the technology have been widely noted at a broader social level: sowing disinformation, misleading consumers into buying products under false pretences, and other harmful outcomes all come with the territory of low-cost, mass-produced, human-like text and image generation.
The specific issues in a marketing context, however, have received less comment. Over the last decade or two, customers and audiences have gained a significant say in how businesses are seen and how their products are received. Media like television and billboard advertising may still be important to brand-building, but what now converts brand interest into a purchasing decision is often user-generated content (UGC). Product reviews and photos combine the persuasive sincerity of word-of-mouth recommendation with the universal availability of traditional marketing, and they are such powerful tools that many people are hesitant to buy online without them.
That quality of sincerity is, of course, precisely what AI-generated content inherently lacks. Research shows that over half of consumers will lose trust in a brand that lists fake reviews next to its products, and that 81% will avoid using a brand again once that trust has been lost. As powerful as gen AI is at mimicking the speech patterns of real people, it offers no substitute for the specificity, inventiveness, and insight that come with using a product in real life.
In the potential interaction between UGC and gen AI, then, we can see an example of how this new technology needs to be carefully managed in order to protect one of a business’s most important assets: trust.
It hardly needs spelling out how this combination could go wrong. There is every likelihood that unscrupulous actors will employ gen AI to pad products with positive reviews at scale, whether as a tactic used directly by brands or through a third-party vendor gulling brands into thinking they are receiving genuine feedback. Either way, high-quality text and image generation means that such fake UGC may be, at a glance, highly convincing.
Astute consumers, however, who have a personal financial stake in the reliability of that information, will eventually notice, and the damage to brand trust could be fatal. For marketers, it’s important to grasp that the risk is not confined to a one-off interaction between a consumer and a product: just as there is a positive halo effect for businesses that get on the front foot with gen AI, consumers who view its use negatively will create a dark halo effect that throws everything the business does into doubt.
This lens needs to be applied whenever and however we bring gen AI to bear in marketing practices. Does this use of technology treat the consumer with respect? Does it improve their brand experience over the long term, not just at the point of sale? Does it, ultimately, build trust?
Moving forward with gen AI
Beyond the obvious advice I can give here (don’t use gen AI to seed fake positive product reviews), what does all this mean for marketing and AI going forward?
One key point is that the situation is still evolving fast. Marketing technology vendors have been using AI and machine learning for many years, for everything from product recommendation to content moderation, and that accumulated skillset, particularly in spotting inauthentic content, is a powerful counter to mass-produced gen AI content.
Perhaps the most effective way of combating the risk of acquiring a dark AI halo, however, is to push forward with gen AI in ways that openly demonstrate to customers that your business is a responsible and forward-thinking practitioner of the technology. The same capabilities that let gen AI create fake content also let it coach users into creating better content of their own, whether that’s prompting them to mention additional aspects of a product in their review or automatically captioning their photos to make them more informative.
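To make that coaching idea concrete, here is a minimal sketch of what such a review-coaching step could look like. It assumes the OpenAI Python client and the gpt-4o-mini model purely for illustration; the function name, prompt wording, and product fields are hypothetical and do not describe any particular vendor’s implementation.

```python
# Hypothetical sketch: nudging a reviewer to add useful detail before they submit.
# Assumes the OpenAI Python client (pip install openai) and an OPENAI_API_KEY in the
# environment; the model choice, prompt wording, and product fields are illustrative only.
from openai import OpenAI

client = OpenAI()

def suggest_review_improvements(product_name: str, draft_review: str) -> str:
    """Ask an LLM to suggest aspects the shopper could add to their own draft review.

    The model only coaches the real author; it never writes or rewrites the review itself.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You help shoppers improve their own product reviews. "
                    "Suggest up to three specific aspects (fit, durability, value, etc.) "
                    "the reviewer could add from first-hand experience. "
                    "Never invent opinions or write the review for them."
                ),
            },
            {
                "role": "user",
                "content": f"Product: {product_name}\nDraft review: {draft_review}",
            },
        ],
    )
    return response.choices[0].message.content

# Example: prompt a shopper who wrote only "Great shoes!" to say what made them great.
print(suggest_review_improvements("TrailRunner 2 shoes", "Great shoes!"))
```

The design choice that matters here, in line with the trust argument above, is that the model only nudges the real author towards more useful detail; the words that end up next to the product remain genuine UGC.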
After all, as Cannes Lions proved, generative AI is here and there is no turning back. The job of marketers now is to ensure that their brands acquire the best possible version of the AI halo.