What questions should you ask before using Big Tech’s gen AI advertising tools? Explore the key considerations you need to weigh, from how your content may be used in AI models to potential brand risks and the underlying motives of the major tech companies offering free AI tools.
What Content is Used to Train Gen AI Models?
Generative AI models are typically trained using publicly available content from across the web. According to a report from Retail TouchPoints by Nicole Silberstein, companies like Google and Meta have confirmed that their AI models rely heavily on this type of data.
For example, Google’s AI chatbot Bard and Meta’s text-to-image model Emu use publicly sourced content to enhance their capabilities.
While this approach benefits AI development, it raises ethical and legal concerns. Intellectual property rights and privacy issues become prominent when such vast amounts of public data are used. Research by Veale and Binns (2017) underscores the privacy risks of building models on data about people, urging companies to adopt ethical safeguards to minimize legal exposure.
Furthermore, a Harvard Business Review article by Waters and Green highlights the potential for unintended sharing of proprietary content when brands upload data to these platforms.
Ethical and Compliance Guidelines
While understanding the data sources is crucial, it’s equally important to address the ethical and legal implications that come with it. CMOs can mitigate these risks by implementing stringent ethical standards and compliance measures:
- Ensuring User Consent: Rigorous consent mechanisms must be in place for collected data, ensuring user awareness and agreement.
- Data Anonymization: Techniques such as pseudonymization and field removal can protect user privacy while still allowing data to be used for AI training (a minimal sketch follows this list).
- Navigating Copyright Issues: Conducting audits to ensure compliance with copyright laws and leveraging licensed data can minimize legal risks.
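As a rough illustration of the anonymization point above, here is a minimal Python sketch, assuming hypothetical field names and a brand-held secret key; it is not tied to any specific platform’s upload API. The idea is to hash direct identifiers and drop high-risk fields before a dataset ever leaves your own systems.

```python
import hashlib
import hmac

# Hypothetical secret held by the brand; never shared with the ad platform.
SECRET_KEY = b"replace-with-a-key-from-your-secrets-manager"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier (email, phone, user ID) with a keyed hash."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

def scrub_record(record: dict) -> dict:
    """Hash or drop fields that could identify a person before upload."""
    direct_identifiers = {"email", "phone", "user_id"}
    dropped_fields = {"name", "street_address", "ip_address"}
    cleaned = {}
    for field, value in record.items():
        if field in dropped_fields:
            continue  # no advertising value, high re-identification risk
        if field in direct_identifiers:
            cleaned[field] = pseudonymize(str(value))
        else:
            cleaned[field] = value
    return cleaned

if __name__ == "__main__":
    raw = {
        "email": "jane@example.com",
        "name": "Jane Doe",
        "ip_address": "203.0.113.7",
        "purchase_category": "outdoor gear",
        "last_purchase_days_ago": 12,
    }
    print(scrub_record(raw))
```

Using a keyed hash (rather than a plain one) means the platform cannot reverse or cross-reference identifiers without the brand’s key; which fields to hash versus drop entirely is a policy call for your legal and privacy teams.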
Are CMOs Comfortable with AI Using Their Brand Content?
Many CMOs express concern about their proprietary content being utilized by AI systems, potentially benefiting competitors. For example, Google’s recent tools allow brands to upload reference images, raising questions about their future use in AI training. Retail TouchPoints reveals that executives have been unclear about these practices, causing discomfort among CMOs about potential misuse of their content.
Regular data audits and an examination of past data misuse cases can help CMOs manage these risks effectively. Understanding how brand content is utilized and seeking transparency in platform policies are key steps in this process.
Why Are Platforms Offering Free AI Tools?
The motives behind offering free generative AI tools are rooted in data acquisition and financial gain. Retail TouchPoints explains that tech giants like Google, Meta, and TikTok use these tools to gather user-generated content, which helps refine their AI models.
This data-driven improvement cycle boosts the platforms’ competitive edge and drives ad revenue.
Historical precedents, such as the OpenAI-Johansson debacle highlighted in The Atlantic by Charlie Warzel, emphasize the need for cautious use of free AI tools. The true cost often lies in data privacy and long-term implications that are not immediately apparent.
Future-Proofing Strategies
To navigate the complexities of generative AI in marketing, CMOs should adopt forward-thinking strategies:
- Monitoring Regulatory Changes: Stay updated on AI regulations and compliance requirements to avoid legal pitfalls.
- Adopting Emerging Trends: Keep abreast of new AI developments and technologies that could influence marketing strategies.
- Strategic Roadmap: Develop a strategic roadmap to ensure resilient and adaptable marketing strategies in the face of rapid technological changes.
To stay informed and adopt best practices, sign up for our newsletter today!
References
- Veale, M., & Binns, R. (2017). Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. *Big Data & Society*, 4(2), 1-17. doi:10.1177/2053951717743530
- Waters, R., & Green, H. (2023). The AI responsibility gap: Why your business needs an AI ethics strategy. *Harvard Business Review*. Retrieved from https://hbr.org/2023/03/the-ai-responsibility-gap-why-your-business-needs-an-ai-ethics-strategy
- Warzel, C. (2023). AI’s secret sauce: Learning from the past to shape the future. *The Atlantic*. Retrieved from https://www.theatlantic.com/technology/archive/2023/05/ai-ethics-data-privacy-copyright/
—Inspired by: 5 Questions to Ask Before Using Big Tech’s Gen AI Advertising Tools, Retail TouchPoints, Nicole Silberstein












