Overlooked Costs of AI: Three Key Stories
1. Unauthorized Use of Artistic Styles
- A digital artist, Greg, had his name used as a prompt in AI image generation by 400,000 people without his consent, raising issues of intellectual property and style exploitation.
- Such misuse stems from poor or insufficient AI education in companies, which leads employees to rely on ungrounded social media advice.
- Actionable:
- Implement thorough AI education combining practical and theoretical knowledge to understand AI’s workings and limitations.
- Develop detailed company guidelines using a risk-reward matrix evaluating roles, tasks, risks, and benefits of AI automation.
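The risk-reward matrix above can be sketched in code. The roles, tasks, scores, and thresholds below are illustrative assumptions, not details from the talk; a real guideline process would calibrate them per organization.

```python
# Hypothetical sketch of a risk-reward matrix for AI automation decisions.
# All roles, tasks, and scores are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class TaskAssessment:
    role: str
    task: str
    risk: int     # 1 (low) to 5 (high): legal, IP, privacy, reputational exposure
    reward: int   # 1 (low) to 5 (high): time saved, quality or scale gained

    def recommendation(self) -> str:
        # High-risk tasks stay with humans regardless of potential reward
        if self.risk >= 4:
            return "human-only or strict review"
        # Clear net benefit: allow automation under written guidelines
        if self.reward > self.risk:
            return "automate with guidelines"
        # Ambiguous cases get a supervised pilot first
        return "pilot with oversight"

matrix = [
    TaskAssessment("Designer", "generate art in a named artist's style", risk=5, reward=3),
    TaskAssessment("Support", "draft replies to routine tickets", risk=2, reward=4),
    TaskAssessment("Finance", "approve vendor payments", risk=5, reward=2),
]

for a in matrix:
    print(f"{a.role:>8} | {a.task:<42} | {a.recommendation()}")
```

The point of the matrix is that the decision is made per role and per task, not once for the whole company.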
2. AI-Powered Impersonation and Financial Fraud
- Steve's story: he lost $25 million to a scam in which AI-generated voices impersonated his boss and colleagues.
- Elderly people like the speaker’s grandmother are targeted but can be safeguarded by security protocols (e.g., family code words).
- Organizations’ verification systems are vulnerable as humans cannot reliably detect AI fakes.
- Actions to counter risks:
- Enforce multi-factor verification (both online and offline) for sensitive tasks such as payments and recruitment.
- Use media monitoring tools to protect executives from impersonation.
- Label AI-generated content clearly to increase awareness of AI's capabilities.
- Encourage families to establish security codes for emergency verification.
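The multi-channel verification idea above can be sketched as a small check: a sensitive action proceeds only once confirmations arrive over channels independent of the one the request came in on. The channel names and required set below are hypothetical assumptions for illustration.

```python
# Minimal sketch of multi-channel verification for sensitive requests
# (payments, recruitment). Channel names are hypothetical placeholders.

# Confirmations we require before acting; a real policy would define these.
REQUIRED_CHANNELS = {"phone_callback", "email"}

def verify_request(request_channel: str, confirmations: set[str]) -> bool:
    # Discard confirmations that reuse the possibly compromised request
    # channel: a deepfaked video call cannot vouch for itself.
    independent = confirmations - {request_channel}
    return REQUIRED_CHANNELS <= independent

# A convincing video call alone is not verification:
assert not verify_request("video_call", {"video_call"})
# Callback to a known phone number plus a separate email thread passes:
assert verify_request("video_call", {"phone_callback", "email"})
```

The design choice is that verification must travel over a channel the attacker does not control, which is also the logic behind family code words.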
3. Deepfake Pornography and Psychological Harm
- Victoria’s story illustrates the trauma caused by deepfake pornography: fabricated, non-consensual sexual images created with AI.
- This phenomenon affects students, teachers, employees, celebrities, journalists, and can potentially affect anyone, especially women in power.
- Prevention requires broad education beyond AI alone, including digital literacy, media literacy, digital well-being, social media awareness, parasocial relationship education, and sex education.
- National initiatives (e.g., Poland) are starting to add digital well-being to curricula, but global efforts are needed.
The Importance of Transparency and Education
- Transparency about AI use in daily life and work helps combat misconceptions and potential harms.
- Actions:
- Publicly disclose when content is AI-generated or AI-assisted.
- Promote honest portrayals of life on social media to reduce harmful comparisons and unrealistic expectations (example of Anna and AI filters).
- Educate especially young people on the nature and limitations of AI chatbots to prevent mental health risks (example of Martin).
- Awareness can prevent victimization and promote empathy, reducing the likelihood of harm caused by AI misuse.
Call to Action
- The audience is encouraged to make a concrete decision on one AI-related topic to discuss with their family to ensure safety in the AI era.
- The speaker welcomes hearing about these discussions and the solutions that come out of them.
Key Actionable Items
- Provide comprehensive, practical, and theoretical AI education for all employees.
- Create detailed AI usage guidelines using a risk-reward approach per role and task.
- Enforce multi-channel, multi-factor verification for critical processes.
- Protect executives with monitoring tools against AI impersonation.
- Label AI-generated content clearly in public domains.
- Develop and promote digital literacy programs covering AI, media, digital well-being, and social issues.
- Encourage open conversations and transparency about AI use in families and workplaces.
- Advocate for educational curriculum changes to include digital well-being globally.
How much are we really paying for generative AI in business?
15:10 - 15:30, 27th of May (Tuesday) 2025 / INSPIRE STAGE
We've become infatuated with AI. From increasing task automation, through highly personalised experiences, to floods of content generated by artificial intelligence - we're drinking generative AI in big gulps. But what does it cost us? Morally and ethically? Where are the concerns about fake videos and AI safety for individuals and groups? And, when it comes to it, can we afford to pay for all those hidden costs ourselves? This is exactly what we need to analyse through real situations and case studies.
AUDIENCE:
Startup
Scaleup
Profitable Company
TRACK:
AI/ML
Growth
TOPICS:
FutureNow
Trending Now