
How Adobe manages AI ethics concerns while fostering creativity

In this exclusive ZDNET interview, discover how Adobe's Grace Yee leads the charge in embedding ethical considerations into AI with a focus on accountability, responsibility, and transparency.
Written by David Gewirtz, Senior Contributing Editor
Abstract AI in the shape of a person. Image: Andriy Onufriyenko/Getty Images

You may have seen this disclaimer at the bottom of some of ZDNET's articles: "You should consider the legal consequences (e.g., copyright) of using AI-generated images before implementing them into your work." It appears on articles that use AI image tools to illustrate or demonstrate AI concepts.

Dealing with the use of AI-generated images is a big problem for organizations. Many of the generative AI tools were trained on whatever their developers could scoop up from the internet, creating a copyright nightmare for many companies.

Also: How to selectively modify a Midjourney image to create an artistic statement

We here at ZDNET have banned the use of AI-generated images except in articles about AI-generated images. For example, I did a how-to about Midjourney where I showcased a piece of AI-generated art I produced. That piece of art was only allowed because the article was about using the AI tool. 

But if I wanted to use an AI-generated image as the hero shot for an article about the gear I use to create YouTube videos, that would be disallowed because the article itself isn't about AI image generation.

If any one company is at the center of this storm, it's Adobe.

Adobe makes a wide range of tools for creators, including my beloved Photoshop, and has been incorporating AI features for quite a few years. Adobe has attempted to resolve corporate concerns by using commercially safe training data, but even that has run into some controversy.

Also: Adobe included AI-generated images in 'commercially safe' Firefly training set

Grace Yee, Senior Director of Ethical Innovation (AI Ethics and Accessibility) at Adobe. Image: Adobe

To get Adobe's perspective on all of this, we had the opportunity to chat with Grace Yee, senior director of ethical innovation (AI ethics and accessibility) at Adobe. She is responsible for driving global, organization-wide work on ethics and for developing the processes, tools, training, and other resources that help ensure Adobe's AI innovations continue to evolve in line with the company's core values and ethical principles.

Grace advances Adobe's commitment to building and using technology responsibly, centering ethics and inclusivity in all of the company's work developing AI.

Also: Adobe unveils three new generative AI models, including the next generation of Firefly

One quick note before we begin. Grace talks a lot about Firefly, so let's make sure you know what that is. 

Firefly refers both to a free online tool and to the AI technology that underpins most of the generative AI features in Adobe's offerings.

And with that, let's get started.

ZDNET: Can you share insights on how Adobe has prioritized ethical considerations in the development of Firefly?

Grace Yee: At Adobe, ethical innovation is our commitment to developing AI technologies in a responsible way that respects our customers and communities and aligns with our values. Back in 2019, we established a set of AI Ethics Principles we hold ourselves to when we're innovating, including accountability, responsibility, and transparency.

With the development of Firefly, our focus has been on leveraging these principles to help mitigate biases, respond to issues quickly, and incorporate customer feedback. Our ongoing efforts help ensure that we are implementing Firefly responsibly without slowing down innovation.

ZDNET: With the rise of AI capabilities, including deepfake technology, what measures has Adobe implemented to address ethical concerns in Firefly?

GY: We recognized the potential misuse of generative AI and media manipulation several years ago, which is why, in 2019, Adobe co-founded the Content Authenticity Initiative to help increase trust and transparency online with Content Credentials. Like a "nutrition label" for digital content, Content Credentials can show important information such as the creator's identity, creation date, and whether AI was used to help consumers assess the trustworthiness of content.

In fact, Adobe recently conducted a Future of Trust survey, which showed that most people believe it is essential to have the right tools, like Content Credentials, to verify whether online content is trustworthy.

Content Credentials are available in our Firefly features, including Generative Fill and text-to-image creation, which is crucial to restoring trust in digital content online.
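To make the "nutrition label" idea a bit more concrete, here is a minimal Python sketch of the kind of provenance record Content Credentials expose -- creator, creation date, and whether AI was used. The field names and structure are illustrative assumptions only, not Adobe's or the C2PA's actual schema.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json


@dataclass
class ProvenanceRecord:
    creator: str        # who made or last edited the asset
    created_on: str     # creation date, ISO 8601
    ai_generated: bool  # whether generative AI was used
    tool: str           # the editing or generation tool


record = ProvenanceRecord(
    creator="Jane Example",
    created_on=date(2024, 5, 1).isoformat(),
    ai_generated=True,
    tool="Example Image Generator",
)

# A viewer could surface this "nutrition label" alongside the image.
print(json.dumps(asdict(record), indent=2))
```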

ZDNET: What strategies are crucial for embedding ethical considerations into AI tools from the outset, and how has Adobe approached this?

GY: Even before Adobe began work on Firefly, our Ethical Innovation team had leveraged our AI Ethics Principles to create a standardized review process for our AI products and features -- from design to development to deployment.

For any product development at Adobe, my team first works with the product team to assess potential risks, evaluate mitigations, and demonstrate how our AI Ethics Principles are being applied. This work is not done in isolation.

From there, features with a higher potential ethical impact go on to further technical review, including by the AI Review Board, which can ask questions and suggest improvements.
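To illustrate the shape of that risk-based triage, here is a minimal Python sketch that routes higher-impact features to a fuller review. The criteria and threshold are hypothetical; this is not Adobe's actual assessment or review logic.

```python
from dataclasses import dataclass


@dataclass
class FeatureAssessment:
    name: str
    generates_content: bool   # does the feature create new images or text?
    uses_personal_data: bool  # does it touch user or biometric data?
    user_facing: bool         # is the output shown directly to end users?


def needs_full_review(assessment: FeatureAssessment) -> bool:
    """Escalate features that trip two or more risk signals."""
    signals = [
        assessment.generates_content,
        assessment.uses_personal_data,
        assessment.user_facing,
    ]
    return sum(signals) >= 2


feature = FeatureAssessment(
    name="text-to-image",
    generates_content=True,
    uses_personal_data=False,
    user_facing=True,
)
print(needs_full_review(feature))  # True -> send to a deeper ethics review
```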

ZDNET: How are companies like Adobe ensuring that AI technologies enhance creativity without compromising ethical standards?

GY: AI is only as good as the data on which it's trained. The larger the data set, the more diverse and accurate the results, which can help avoid perpetuating harmful biases. But the data has to come from somewhere, and you need to respect things like privacy and copyright.

Also: The ethics of generative AI: How we can harness this powerful technology

That's why we trained our first Firefly model using only licensed images from our own Adobe Stock photography collection, openly licensed content, and public domain content where copyright has expired.

Additionally, to ensure copyrighted or branded materials are not created as part of Firefly's output, we have a content moderation team that performs extra filtering on the images before they enter the Firefly dataset.
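Here is a hedged Python sketch of the kind of dataset filtering Yee describes: keep only assets whose license falls within an allowed set and that carry no moderation flags. The license categories and moderation rule are assumptions for illustration, not Adobe's actual pipeline.

```python
from dataclasses import dataclass, field

ALLOWED_LICENSES = {"licensed-stock", "openly-licensed", "public-domain"}


@dataclass
class Asset:
    asset_id: str
    license: str
    moderation_flags: list[str] = field(default_factory=list)  # e.g. ["trademark"]


def eligible_for_training(asset: Asset) -> bool:
    """Keep only assets with an allowed license and no moderation flags."""
    return asset.license in ALLOWED_LICENSES and not asset.moderation_flags


assets = [
    Asset("a1", "licensed-stock"),
    Asset("a2", "scraped-web"),                   # rejected: license not allowed
    Asset("a3", "public-domain", ["trademark"]),  # rejected: flagged content
]
training_set = [a for a in assets if eligible_for_training(a)]
print([a.asset_id for a in training_set])  # ['a1']
```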

ZDNET: How does team collaboration at Adobe and within the industry at large enhance the development of safer AI technologies?

GY: We strongly believe that cross-team and cross-functional engagement expands perspectives and contributes to the culture of shared responsibility for the quality and safety of AI technologies. In addition to our product team, our Ethical Innovation team collaborates closely with Trust and Safety, Legal, and Internationalization teams to help account for possible issues, monitor feedback, and develop mitigations.

For example, last summer Firefly added AI-enabled multilingual support for our users globally. We reviewed and expanded our terminology to cover country-specific terms and connotations, and our Internationalization team made sure native speakers were part of the process.

ZDNET: What are the long-term ethical considerations of AI's impact on society?

GY: AI is not the sentient being you see in sci-fi movies. But there are harms that AI can cause today and in the long term, such as the spread of misinformation and harmful bias.

That's why we created the AI Ethics Impact Assessment, a multi-part assessment designed to identify features and products that could perpetuate harmful biases and stereotypes.

We believe this risk-based approach is a key component to a successful AI Ethics program and will help mitigate the long-term ethical considerations of AI's societal impact.

ZDNET: How does AI influence the concept of creativity in the digital age, with insights from Adobe's journey?

GY: When we launched Firefly, we viewed it as a co-pilot for creativity, not a replacement. With Firefly, anyone can type in a text prompt and watch their imagination come to life. Even people who have never used Photoshop or our other Creative Cloud products are now regularly using Firefly to weave imagery from just a few words.

Our mission is to give creators every advantage. As Firefly evolves, we'll continue to work closely with the creative community to build technology that supports and improves the creative process, including by 1) ensuring Firefly is producing outputs that reflect their needs, 2) developing future iterations of our AI technology using user feedback, and 3) addressing issues that require more complex decision-making and fixing blind spots in our guardrails.

Firefly user feedback has helped us understand what type of images our users are dreaming up and how they're interpreting the results, providing valuable insight into potential issues and areas for improvement.

This exercise has drawn our attention to issues that require more complex decision-making and highlighted some blind spots in our guardrails that we quickly fixed.

Beyond Firefly, companies need to make sure they build feedback mechanisms into all of their products that include generative AI.
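For readers building their own generative features, here is one minimal way such a feedback mechanism could look -- a hypothetical Python sketch, not any specific product's API: capture the prompt, a rating, and an optional report reason so flagged generations can be triaged by a human.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class GenerationFeedback:
    prompt: str
    rating: int                          # e.g. 1 = thumbs down, 5 = thumbs up
    report_reason: Optional[str] = None  # e.g. "harmful stereotype", "IP concern"


feedback_log: list[GenerationFeedback] = []


def record_feedback(prompt: str, rating: int, report_reason: Optional[str] = None) -> None:
    """Append a feedback entry; flagged items can be routed to human review."""
    feedback_log.append(GenerationFeedback(prompt, rating, report_reason))


record_feedback("a cat astronaut floating above Earth", 5)
record_feedback("mashup of two well-known logos", 1, report_reason="IP concern")

flagged = [f for f in feedback_log if f.report_reason]
print(len(flagged))  # 1 entry needs human review
```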

ZDNET: What is your vision for a responsible and ethical future in the use of AI within the creative industry?

GY: We're just scratching the surface of generative AI, and the technology is improving and becoming more prevalent every day.

Down the road, we anticipate that AI-powered applications like Firefly will become mainstream solutions for companies and professionals to improve efficiency, generate images that incorporate their brands, and unlock new possibilities -- all grounded in trust, transparency, and responsible innovation.

Final thoughts

ZDNET's editors and I would like to give a huge shoutout to Grace Yee for taking the time to engage in this in-depth interview. There's a lot of food for thought here. Thank you, Grace!

What do you think? Did her recommendations give you any ideas about how to engage with AI in your creator journey, or for your company or organization? Let us know in the comments below.


You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.
