
The ethics of generative AI: How we can harness this powerful technology

With the great power of generative AI comes great responsibility. Here's how researchers are addressing the ethical concerns and challenges it raises -- some of the biggest questions of our time.
Written by Min Shin, Associate Editor

With new discoveries about generative AI's capabilities announced every day, people across industries are exploring how far the technology can take not only our daily tasks but also bigger, more complex projects. 

These discoveries, however, come with concerns and questions about how the use of generative AI should be regulated. Lawsuits against OpenAI are already emerging, and the ethical use of generative AI is a glaring concern. 

As AI models evolve new capabilities, legal regulation still lies in a gray area. What we can do now is educate ourselves about the challenges that come with using such powerful technology, and learn what guardrails are being put in place against the misuse of technology that holds enormous potential.  

Use AI to combat AI manipulation 

From lawyers citing fake cases that ChatGPT invented, to college students using AI chatbots to write their papers, to AI-generated pictures of Donald Trump being arrested, it is becoming increasingly difficult to distinguish real content from content created by generative AI, and to know where the boundary lies for using these AI assistants. How can we hold ourselves accountable while we test AI? 

Also: A thorny question: Who owns code, images, and narratives generated by AI? 

Researchers are studying ways to prevent the abuse of generative AI by developing methods of using it against itself to detect instances of AI manipulation. "The same neural networks that generated the outputs can also identify those signatures, almost the markers of a neural network," said Dr. Sarah Kreps, director and founder of the Cornell Tech Policy Institute. 

Also: 7 advanced ChatGPT prompt-writing tips you need to know

One method of identifying such signatures is called "watermarking," in which a kind of "stamp" is placed on outputs created by generative AI tools such as ChatGPT. Though studies are still underway, the approach could eventually offer a reliable way to distinguish content that has been created or altered by generative AI from content that is truly one's own. 

Dr. Kreps compared this stamping method to teachers and professors scanning students' submitted work for plagiarism: one can "scan a document for these kinds of technical signatures of ChatGPT or GPT model."
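Neither OpenAI nor the researchers quoted here have published the details of a production watermark, but academic proposals give a sense of how such a detector can work. Below is a minimal, illustrative Python sketch of a "green list" statistical watermark check in the spirit of published research, not anyone's actual method: a generator that subtly favors pseudorandomly chosen "green" tokens leaves a statistical signature that a detector sharing the same seeding scheme can measure.

```python
import hashlib

def green_list(prev_token: str, vocab: list[str]) -> set[str]:
    """Pseudorandomly mark about half the vocabulary 'green', seeded by
    the previous token. A watermarking generator subtly favors green
    tokens; a detector recomputes the very same lists."""
    return {
        tok for tok in vocab
        if hashlib.sha256((prev_token + tok).encode()).digest()[0] % 2 == 0
    }

def watermark_score(tokens: list[str], vocab: list[str]) -> float:
    """z-score of the green-token count against the ~50% expected in
    unwatermarked text. Large positive values suggest machine output."""
    n = len(tokens) - 1  # number of scored positions
    hits = sum(
        tokens[i] in green_list(tokens[i - 1], vocab)
        for i in range(1, len(tokens))
    )
    expected, stddev = 0.5 * n, (0.25 * n) ** 0.5
    return (hits - expected) / stddev
```

A real detector would operate on the model's own tokenizer and requires the watermark to have been inserted at generation time; ordinary human text should score near zero, while heavily watermarked output scores several standard deviations above it.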

Also: Who owns the code? If ChatGPT's AI helps write your app, does it still belong to you?

"OpenAI [is] doing more to think about what kinds of values it encodes into [its] algorithms so that it's not including misinformation or contrary, contentious outputs," Dr. Kreps told ZDNET. This has especially been a concern as OpenAI's first lawsuit was due to ChatGPT's hallucination that created false information about Mark Walters, a radio host. 

Digital literacy education

Back when computers were first making their way into schools, it was common to take computer lab classes to learn how to find reliable sources on the internet, make proper citations, and do research for school assignments. Consumers of generative AI can do what they did when first learning any new piece of technology: educate themselves. 

Also: The best AI chatbots 

Today, with AI assistants such as Google Smart Compose and Grammarly, using such tools is common if not universal. "I do think that this is going to become so ubiquitous, so 'Grammarly-ed' that people will look back in five years and think, why did we even have these debates?" Dr. Kreps said. 

However, until further regulations are put in place, Dr. Kreps says, "Teaching people what to look for I think is part of that digital literacy that would go along with thinking through being a more critical consumer of content."

For instance, even the most recent AI models commonly produce errors or factually incorrect information. "I think these models are better now at not doing these repetitive loops that they used to, but they'll do little factual errors, and they'll do it sort of very credibly," Dr. Kreps said. "They'll make up citations and attribute an article incorrectly to someone, those kinds of things, and I think being aware of that is really helpful. So scrutinizing outputs to think through, 'does this sound right?'"

Also: These are my 5 favorite AI tools for work

AI instruction should start at the most basic level. According to the Artificial Intelligence Index Report 2023, K-12 AI and computer science education has grown in both the US and the rest of the world since 2021; since then, "11 countries, including Belgium, China, and South Korea have officially endorsed and implemented a K-12 AI curriculum." 

Classroom time allocated to AI topics included algorithms and programming (18%), data literacy (12%), AI technologies (14%), ethics of AI (7%), and more. Describing a sample curriculum in Austria, UNESCO reported that students "also gain an understanding of ethical dilemmas that are associated with the use of such technologies, and become active participants on these issues." 

Beware of biases

Generative AI can create images based on text a user inputs. This has become problematic for AI art generators such as Stable Diffusion, Midjourney, and DALL-E, not only because they were trained on images that artists did not give permission to use, but also because the images they create show clear gender and racial biases. 

Also: The best AI art generators

According to the Artificial Intelligence Index Report, Hugging Face's Diffusion Bias Explorer paired adjectives with occupations to see what kinds of images Stable Diffusion would output. The stereotypical images it generated revealed how occupations are coded with certain adjective descriptors. For example, "CEO" overwhelmingly generated images of men in suits, regardless of whether adjectives such as "pleasant" or "aggressive" were added. DALL-E produced similar results for "CEO": images of older, serious men in suits. 

Images from Stable Diffusion of "CEO" with different adjectives.
Diffusion Bias Explorer/Stable Diffusion

Images from DALL-E of a "CEO."
Stanford University/DALL-E
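For readers who want to see how such an audit works mechanically, here is a minimal sketch using the open-source diffusers library, which can run Stable Diffusion locally on a GPU. The model ID, adjectives, and occupations are illustrative choices, not the Diffusion Bias Explorer's actual configuration.

```python
# A prompt-grid bias probe in the spirit of the Diffusion Bias Explorer.
# Requires: pip install torch diffusers transformers, plus a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

adjectives = ["pleasant", "aggressive", "assertive"]
occupations = ["CEO", "social worker", "fast-food worker"]

for adj in adjectives:
    for job in occupations:
        prompt = f"a photo of a {adj} {job}"
        images = pipe(prompt, num_images_per_prompt=4).images
        for i, img in enumerate(images):
            img.save(f"{adj}_{job.replace(' ', '-')}_{i}.png")
```

The saved grids can then be inspected, or run through a classifier, to tally how often each occupation is depicted as a particular gender or skin tone.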

Midjourney showed a similar bias. When asked to produce an "influential person," it generated four older white men. When the AI Index gave it the same prompt later on, Midjourney did include one woman among its four images. Its images for "someone who is intelligent," however, were four older white men wearing glasses. 

Images from Midjourney of an "influential person."
Stanford University/Midjourney

According to Bloomberg's report on generative AI bias, these text-to-image generators also show clear racial bias. More than 80% of the images Stable Diffusion generated for the keyword "inmate" contained people with darker skin, even though, according to the Federal Bureau of Prisons, less than half of the US prison population is made up of people of color. 

Images from Stable Diffusion of "inmate."
Bloomberg/Stable Diffusion

Furthermore, the keyword "fast-food worker" produced images of people with darker skin tones 70% of the time, even though 70% of fast-food workers in the US are white. Similarly, 68% of the images generated for "social worker" depicted people with darker skin tones, while 65% of social workers in the US are white. 

Images from Stable Diffusion of "fast-food worker."
Bloomberg/Stable Diffusion
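The over- and underrepresentation Bloomberg describes boils down to a simple comparison between a group's share of the generated images and its share of the real-world population. Here is a small sketch using the figures cited above; the baselines are approximations taken from the article (the "inmate" baseline is an upper bound, since the report says "less than half"), and the structure is illustrative, not Bloomberg's actual code.

```python
# Compare the share of darker-skinned faces in generated images against
# real-world baselines, using the figures cited in the article.
cases = {
    #                  (share in generated images, real-world share)
    "inmate":           (0.80, 0.50),  # under half of US inmates are people of color
    "fast-food worker": (0.70, 0.30),  # ~70% of US fast-food workers are white
    "social worker":    (0.68, 0.35),  # ~65% of US social workers are white
}

for keyword, (generated, baseline) in cases.items():
    ratio = generated / baseline
    print(f"{keyword!r}: depicted {generated:.0%} vs. {baseline:.0%} baseline "
          f"({ratio:.1f}x overrepresented)")
```

Even with rough baselines, the ratios make the skew obvious: darker-skinned people appear roughly 1.6 to 2.3 times more often in these generated images than real-world demographics would predict.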

What are the ethical questions that experts are posing?

Researchers are currently posing hypothetical questions to unmoderated models to test how AI systems such as ChatGPT would respond. "What topics should be off-limits for ChatGPT? Should people be able to learn the most effective assassination tactics?" Dr. Kreps said, citing the types of questions researchers are examining. 

"That's just one sort of fringe example or question but it's one where if [it's] an unmoderated version of the model, you could put that question in or 'how to build an atomic bomb' or these things which maybe you could have done on the internet but now you're getting in one place, a more definitive answer. So they're thinking through those questions and trying to come with a set of values that they would encode into those algorithms" Dr. Kreps said. 

Also: 6 harmful ways ChatGPT can be used by bad actors, according to a study

According to the Artificial Intelligence Index Report, the number of AI incidents and controversies increased 26-fold between 2012 and 2021. With more controversies arising from new AI capabilities, the need to carefully consider what we're feeding into these models is pressing. 

A graph of the number of AI incidents and controversies, 2012-2021.
Artificial Intelligence Index Report

More importantly, if these generative AI models draw from data already available on the internet, such as occupational statistics, should they be allowed to go on creating misinformation and depicting stereotypes? If so, AI could play a detrimental role in reinforcing humans' implicit and explicit biases. 

Questions also remain about who owns AI-generated code and who bears the liability risk of using it, as well as the legal implications of using images that AI generates. Dr. Kreps gave the example of the copyright controversy around asking an art generator to create an image in the style of a specific artist. 

"I think that some of those questions are ones that would have been hard to anticipate because it was hard to anticipate just how quickly these technologies would diffuse," Dr. Kreps said. 

Whether these questions will finally be answered once the use of AI tools like ChatGPT starts to plateau is still a mystery, but data suggests we may already be past ChatGPT's peak: the chatbot saw its first drop in traffic in June. 

The ethics of AI moving forward 

Many experts point out that the use of AI isn't a new concept, as is evident in the AI we already use for the simplest tasks; Dr. Kreps gave the examples of Google Smart Compose for writing emails and Grammarly for checking essays for errors. With the growing presence of generative AI, how can we move forward so that we coexist with it without being consumed by it? 

"The people have been working with these models for years and then they come out with ChatGPT, and you have 100 million downloads in a short amount of time," Dr. Kreps said. "With that power [comes] a responsibility to study more systematically some of these questions now that are coming up." 

Also: ChatGPT and the new AI are wreaking havoc on cybersecurity in exciting and frightening ways

According to the Artificial Intelligence Index Report, across 127 countries the number of passed bills containing the words "artificial intelligence" grew from just one in 2016 to 37 in 2022. The report also shows that, based on parliamentary records from 81 countries, mentions of AI in global legislative proceedings have increased almost 6.5-fold since 2016. 

A graph of the number of AI-related bills passed into law, 2016-2022.
Artificial Intelligence Index Report

Though we are witnessing a push for stronger legal regulations, much is still unclear, according to experts and researchers. Dr. Kreps suggests that the "most effective" way of using AI tools is "as an assistant rather than a replacement for humans." 

While we await further updates from lawmakers, companies and teams are taking their own precautions when using AI. For instance, ZDNET has begun to include disclaimers at the end of explainer pieces that use AI-generated images to show how to use a specific AI tool. OpenAI even runs a Bug Bounty program that pays people to find bugs in ChatGPT. 

Regardless of what regulations are eventually implemented and when they are solidified, the responsibility comes down to the humans using AI. Rather than fearing generative AI's growing capabilities, we should focus on the consequences of what we put into these models, so that we can recognize when AI is being used unethically and act to combat those attempts. 
