
Google DeepMind
Google DeepMind has developed a digital watermarking tool called SynthID to identify AI-generated images and limit misinformation. SynthID embeds imperceptible watermarks into AI-generated images that remain detectable even after the image is cropped. The software works by embedding “signals” into individual pixels, making the watermark invisible to the human eye but identifiable by computers. SynthID uses two deep learning models trained together on a diverse set of images: one to embed the watermark and one to identify it.
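
DeepMind has not published SynthID’s architecture or training procedure, but the two-model pattern described above can be sketched conceptually. The toy PyTorch example below (all names, dimensions, and loss weights are illustrative assumptions, not DeepMind’s design) jointly trains an embedder that adds a small pixel-level perturbation and a detector that learns to spot it:

```python
# Conceptual sketch only -- not SynthID's actual models or training code.
import torch
import torch.nn as nn

class Embedder(nn.Module):
    """Adds a small, learned perturbation to the image pixels."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, image):
        # Scale the perturbation so the watermark stays visually imperceptible.
        return image + 0.01 * self.net(image)

class Detector(nn.Module):
    """Outputs a logit for 'this image carries the watermark'."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 1),
        )

    def forward(self, image):
        return self.net(image)

embedder, detector = Embedder(), Detector()
opt = torch.optim.Adam(list(embedder.parameters()) + list(detector.parameters()))
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(10):  # toy loop on random stand-in "images"
    clean = torch.rand(8, 3, 64, 64)
    marked = embedder(clean)
    logits = detector(torch.cat([clean, marked]))
    labels = torch.cat([torch.zeros(8, 1), torch.ones(8, 1)])
    # The detector learns to separate clean from marked images, while an
    # imperceptibility penalty keeps the embedded perturbation small.
    loss = loss_fn(logits, labels) + 10.0 * (marked - clean).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The imperceptibility penalty is what keeps the learned signal invisible; in a production system the training data would presumably also include cropped and re-compressed copies, so the detector survives the edits the source describes.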
OpenAI
OpenAI, along with other companies including Microsoft, Google, Meta, Amazon, Anthropic, and Inflection, has committed to developing technology that clearly watermarks AI-generated content. OpenAI has agreed to build robust provenance and/or watermarking systems for audio and visual content, as well as tools or APIs to determine whether a particular piece of content was created with its systems. OpenAI is also working on a way to watermark AI-generated text by embedding an “unnoticeable secret signal” that indicates the text’s origin.
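
OpenAI has not disclosed how its text watermark works. A published scheme in the same spirit is the “green list” watermark of Kirchenbauer et al. (2023), sketched below; the secret key, toy vocabulary, and bias strength are all illustrative assumptions, and this is not OpenAI’s actual method:

```python
# Hedged sketch of a "green list" text watermark -- not OpenAI's method.
# A secret key seeds a pseudorandom split of the vocabulary at each step;
# generation favors "green" tokens, and detection counts how often they occur.
import hashlib
import random

SECRET_KEY = "hypothetical-secret"          # illustrative only
VOCAB = [f"tok{i}" for i in range(1000)]    # toy vocabulary

def green_list(prev_token: str) -> set[str]:
    """Pseudorandomly mark half the vocabulary 'green', keyed on the secret."""
    seed = hashlib.sha256((SECRET_KEY + prev_token).encode()).hexdigest()
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def generate(n_tokens: int) -> list[str]:
    """Toy 'model': samples uniformly, but strongly prefers green tokens."""
    out = ["<s>"]
    rng = random.Random(0)
    for _ in range(n_tokens):
        greens = green_list(out[-1])
        pool = list(greens) if rng.random() < 0.9 else VOCAB  # 90% green bias
        out.append(rng.choice(pool))
    return out[1:]

def detect(tokens: list[str]) -> float:
    """Fraction of tokens in their step's green list: ~0.5 for ordinary
    text, well above 0.5 for watermarked text."""
    prev, hits = "<s>", 0
    for tok in tokens:
        hits += tok in green_list(prev)
        prev = tok
    return hits / len(tokens)

text = generate(200)
print(f"green fraction: {detect(text):.2f}")  # expect well above 0.5
```

Because ordinary text lands near a green fraction of 0.5, a detector holding the key can flag watermarked output with a simple statistical test, without needing the generating model.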
DALL-E
Although specific details about DALL-E’s watermarking approach are not publicly available, DALL-E images typically carry a visible watermark in a corner to indicate their source. This method has clear limitations: a visible corner mark can be cropped out or edited away with modest skill.
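
As a purely illustrative sketch (not OpenAI’s actual pipeline), the fragility of a visible corner mark is easy to demonstrate: the mark lives only in a handful of pixels, so cropping them removes it entirely.

```python
# Toy visible corner mark with Pillow -- illustrative only.
from PIL import Image, ImageDraw

def add_corner_mark(image: Image.Image, text: str = "AI") -> Image.Image:
    marked = image.copy()
    draw = ImageDraw.Draw(marked)
    w, h = marked.size
    # Draw a small badge in the bottom-right corner.
    draw.rectangle([w - 40, h - 20, w, h], fill=(255, 255, 255))
    draw.text((w - 36, h - 16), text, fill=(0, 0, 0))
    return marked

img = Image.new("RGB", (256, 256), (30, 144, 255))  # placeholder image
marked = add_corner_mark(img)
cropped = marked.crop((0, 0, 200, 200))  # cropping discards the badge
```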
Adobe Firefly
Adobe Firefly, a generative AI-powered content creation tool, embeds watermarks and metadata in each image generated. This approach helps users identify AI-generated content and promotes responsible use of the technology.
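
Adobe’s implementation is built on Content Credentials (C2PA metadata). As a hedged illustration of the general idea, provenance data carried inside the file itself, here is a toy example using a PNG text chunk via Pillow; this is not Adobe’s actual format, and the key and value are made up:

```python
# Toy provenance metadata in a PNG text chunk -- not Adobe's C2PA format.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (64, 64), (200, 50, 50))          # placeholder image
meta = PngInfo()
meta.add_text("provenance", "generated-by:example-model")  # made-up claim

img.save("tagged.png", pnginfo=meta)

reopened = Image.open("tagged.png")
print(reopened.text.get("provenance"))  # -> "generated-by:example-model"
```

Unlike an in-pixel watermark, plain metadata is lost if the file is re-encoded, which is presumably why Firefly pairs metadata with a watermark rather than relying on either alone.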
MidJourney
No information was found about MidJourney’s efforts to curb fake news through watermarking.
Real-life Consequences of AI-generated Content
AI-generated content can have significant consequences in various areas, such as politics and news. Some potential negative impacts include:
- Deepfakes: AI-generated images and videos can convincingly depict real people saying or doing things they never did, leading to misinformation, manipulation, and harm to individuals’ reputations.
- Non-consensual porn: AI-generated content can be used to create non-consensual pornographic material, violating individuals’ privacy and causing emotional distress.
- Copyright infringements: AI-generated content can lead to copyright infringements, as it becomes increasingly difficult to determine the original creator of a piece of content.
- Academic plagiarism: AI-generated text can be used to plagiarize academic work, undermining the integrity of educational institutions.
- Propaganda and misinformation: AI-generated content can be used to create and spread propaganda and misinformation, potentially influencing public opinion and political outcomes.
Watermarking AI-generated content can help mitigate these issues by making it easier to identify and track the origin of the content, promoting transparency and responsible use of AI technology.