Synthetic advertising - statistics & facts
Synthetic ads: raising political concerns, reshaping TV ads as we know them
The upcoming 2024 presidential election season in the United States will differ from its predecessors in one pivotal way: It will be the first to feature widespread use of generative artificial intelligence (AI) by advertisers, which has already raised concerns over misinformation and manipulation. In November 2023, Meta announced that it would bar political campaigns and advertisers in other regulated industries from using its generative AI ad products. In the same month, Meta also declared that it would require advertisers to disclose whether their political ads on its platforms contain AI-generated or digitally altered content. The move came after a similar announcement made by Google two months prior. Critics had earlier denounced Meta, particularly during the 2016 presidential election, for failing to prevent the spread of misinformation on its apps. As of March 2024, at least 40 U.S. states had introduced legislation targeting AI-generated content that could harm a candidate or influence an election. Wisconsin, for example, passed a bipartisan law requiring the label “Contains content generated by AI” to appear at the beginning and end of all campaign-related synthetic ads.

On the broader TV and video spectrum, synthetic ads are already among us. Startups like Movio use generative AI to create videos featuring talking human avatars. Meanwhile, Japanese company En, Ltd. made the news in January 2024 when it featured an AI-generated model in a TV ad for its green tea drink.
Synthetic influencers and trust
Influencer marketing and Gen AI are now a celebrity couple. With the boom of tools such as Canva, Midjourney, and ChatGPT, social media content creators can efficiently generate advertising text (copy) and images. Creating more engaging content and managing social media accounts are among the leading benefits AI has brought influencers. Moreover, social media celebrities can now speak any language they want and thus target a larger audience online. They can also achieve various aesthetics faster using Gen AI tools, such as producing a neon-colored, New York-style photoshoot. Still, almost a third of global influencers surveyed in September 2023 stated that they would not disclose the use of AI programs in their content production.

On the marketers’ side of the business, generative AI is mainly adopted for influencer identification and for finding and distributing relevant content. While professionals on both sides of the game are grappling with these trends, consumers’ trust is being affected.
Synthetic ethics, mental health, and the future
In early 2023, more than a third of consumers in the U.S. stated that they would trust AI-generated influencer content as much as posts created by human influencers. Later in the same year, 64 percent of Americans said they would find AI-generated ads appealing only if companies disclosed their use of the technology.

Overall, synthetic imaging used for political advertising and generative AI’s algorithmic biases raise strong ethical concerns around brand trust, media transparency, and informed consent. Concerns about privacy and the impact on authentic creativity are also on the table. Finally, constant exposure to “perfect” synthetic models on social media could influence not just purchasing decisions but also consumers’ mental health. For advertisers, the solution is to be transparent about their use of AI. That would help earn future consumer trust and ensure that synthetic ads do not turn into a real mess.