Deepfakes for All: Uncensored AI Art Model Prompts Ethics Questions

Kyle Wiggers
August 24, 2022
The article discusses the rapid adoption and ethical implications of Stability AI's open-source AI image generator, Stable Diffusion. Released in August 2022, Stable Diffusion creates realistic images from text prompts and can run on consumer hardware, leading to rapid adoption by art-generation services such as Artbreeder and Pixelz.ai.
Key Points:
- Rapid Adoption: Stable Diffusion saw swift uptake in its first week, being integrated into services like NovelAI for story accompaniment and Midjourney for enhanced photorealism.
- Dual Use: While many applications are benign (e.g., art generation for stories, photorealism), the model's unfiltered nature has led to misuse.
- Misuse Concerns: Leaked early on 4chan, Stable Diffusion has been used to generate pornographic content, including nude celebrities and other explicit material.
- Ethical Dilemmas: The ability to generate realistic images from any prompt, including those of public figures, raises significant ethical questions, particularly regarding the creation of non-consensual deepfake pornography.
- AI Safety and Responsibility: Stability AI CEO Emad Mostaque acknowledged the misuse as "unfortunate" and stated the company was working with ethicists on safety mechanisms. The software includes a "Safety Classifier" to detect and block offensive images, but it can be disabled.
- Comparison to Other Models: Unlike OpenAI's DALL-E 2, which enforces strict content filters, Stable Diffusion is open source and imposes no technical restrictions on how it is used; its license prohibits certain applications, such as exploiting minors, but those terms cannot be enforced in software, presenting different challenges.
- Deepfake Pornography: The article highlights that women are disproportionately targeted by non-consensual deepfakes. A 2019 study indicated that 90-95% of deepfakes are non-consensual, and about 90% of those target women. This trend is expected to worsen with advanced AI models like Stable Diffusion.
- Expert Opinions:
  - Ravit Dotan (VP of Responsible AI at Mission Control) worries that synthetic images of illegal content could exacerbate real-world illegal behaviors, such as increasing child exploitation.
  - Abhishek Gupta (Principal Researcher at Montreal AI Ethics Institute) emphasizes the need to consider the entire lifecycle of AI systems, including post-deployment monitoring and controls to minimize harm, especially when powerful capabilities like Stable Diffusion are released "into the wild" without API rate limits or safety controls.
- Potential for Abuse: The combination of creating images of public figures and the lack of technical restrictions could enable bad actors to create pornographic deepfakes, potentially perpetuating abuse or implicating individuals in crimes.
- Scale and Automation: Unlike previous methods requiring manual effort, Stable Diffusion allows for automated, customized image generation at scale, making personalized blackmail attacks more feasible. Personal photos scraped from social media could be used to train models for targeted harmful imagery.
- Real-World Examples: The article references journalist Rana Ayyub becoming a target of deepfake porn created by nationalist trolls, leading to harassment that required UN intervention. It also mentions a case where a legitimate photo of a child triggered AI detection systems, leading to account disabling.
- Platform Responses: Platforms like OnlyFans and Patreon are implementing policies to address deepfakes and harmful content. OnlyFans reviews content with technology and human moderators, deactivating suspected deepfakes. Patreon has policies against abusive behavior and content causing real-world harm, and continuously monitors emerging risks.
- Enforcement Challenges: Despite platform efforts, enforcement remains uneven due to a lack of specific laws against deepfake pornography. New sites can easily emerge.
- "Into the Wild" Problem: Gupta highlights that when models are released "into the wild," controls like API rate limits and safety filters are bypassed, allowing malicious users to generate objectionable content at scale with minimal resources.
The article concludes by emphasizing the "brave new world" of AI image generation, where the power of these tools necessitates careful consideration of ethical implications, safety measures, and regulatory frameworks to mitigate potential harms.
Tags: AI, AI applications, AI capabilities, AI development, AI Ethics, AI governance, AI policy, AI regulation, AI research, AI Safety, AI security, AI trends, Computer Vision, Generative AI, Machine Learning, Open Source AI, Responsible AI
Original article available at: https://techcrunch.com/2022/08/24/deepfakes-for-all-uncensored-ai-art-model-prompts-ethics-questions/