
Organizations are fighting for the ethical adoption of AI. Here's how you can help

At a time when AI can have more risks than benefits, organizations are minimizing those risks by establishing AI policies to protect workers and consumers.
Written by Allison Murray, Staff Writer

As artificial intelligence becomes more intertwined with our daily lives, so do the ethical implications of the technology. As a result, organizations are advocating for workers and consumers whom AI could adversely impact -- and there are ways you can join the fight for society's ethical adoption of AI.

AI has already raised ethical concerns: it has been known to exhibit gender and racial biases, it has prompted privacy worries, such as when it is used for surveillance, and it has been exploited to spread misinformation.

If used correctly -- and ethically -- AI could benefit society as a whole and drive future technology forward. That's why these organizations are striving to counter the negative impacts and steer us in the right direction.

Also: The ethics of generative AI: How we can harness this powerful technology

One such nonprofit is ForHumanity, which examines and analyzes the risks associated with AI and autonomous systems, and works to mitigate those risks as much as possible. Ryan Carrier, ForHumanity's executive director and founder, told ZDNET that the organization is made up of volunteers from around the world.

"ForHumanity is 1,600 plus people from 91 countries around the world, and we're growing 40 to 60 people per month," Carrier said. "Volunteers are a full spectrum of consumers, workers, academics, thought leaders, problem solvers, independent auditors, etc., helping us with our auditable rules and even some training to become certified auditors."

The ForHumanity community is 100% open, and there are no restrictions on who can join. You simply have to register on the website and agree to a code of conduct. Anyone who volunteers for the nonprofit can participate in the process as much or as little as they'd like.

One of ForHumanity's main focuses is creating auditable rules for AI auditors (people who evaluate AI systems to ensure they work as expected), grounded in law, standards, and best practices and developed through a crowd-sourced, iterative, and collaborative process with its volunteers. The organization then submits these auditable rules to governments and regulators.

"We provide this level playing field, this ecosystem, where we encourage auditors, service providers, people in companies, etc., to use those rules to basically create compliance with the ever-changing landscape of laws, regulations, best practices, and so on," Carrier said.

So far, ForHumanity has submitted auditable rules to both the UK and EU governments and is close to having an approved certification scheme (that is, a complete set of auditable rules), which Carrier said would be the world's first for AI and algorithmic systems. Certification under such a scheme is the highest form of assured compliance, and nothing like it currently exists in AI.

"The rules that have been crafted are designed to mitigate risks to humans and provide a binary interpretation of compliance with the law," Carrier said, adding that the impact of voluntary certification schemes is that companies that invest in them will have a higher certainty that they are not failing to be compliant with the law.

"ForHumanity's mission is exclusively focused on humans.  So both consumers/users and employees will benefit from the implementation of these certification schemes," Carrier said. 

Also: Five ways to use AI responsibly

Another organization working on AI research and policy is the Center for AI and Digital Policy (CAIDP), which focuses on building AI education that promotes fundamental rights, democratic values, and the rule of law.

CAIDP runs free AI policy clinics where anyone interested in learning more about AI can come together. So far, 414 students have graduated from these learning sessions.

"Students [of the clinics] run the gamut of lawyers, practitioners, researchers, society advocates, etc., to learn how AI impacts rights and leave with skills and advocacy on how to keep governments accountable and how to affect change in the AI space," CAIDP president, Merve Hickok, told ZDNET.

Anyone can join these AI policy clinics to receive a CAIDP AI Policy Certification. Those interested in signing up can do so on CAIDP's website. The clinics last for a semester and require a time commitment of about six hours per week.

In addition to its education work on human rights and democracy, Hickok said, CAIDP also advocates for the protection of consumer rights regarding AI. In March, the organization filed a detailed complaint with the Federal Trade Commission regarding OpenAI, asking the agency to investigate and to halt the deployment of future models until guardrails are in place. The FTC has since opened an investigation, and data protection and consumer agencies around the world have launched their own investigations of ChatGPT.

Also: 6 AI tools to supercharge your work and everyday life

"We ask for safe guardrails for AI systems," Hickok added. "That AI systems should not be deployed without taking into account certain safety, security, and fairness measures. Consumers and society at large should not be testbeds -- they shouldn't be experimented on."

Another focus of CAIDP's advocacy is workers' rights -- especially worker surveillance.

"We submitted recommendations to the White House, EEOC, and different agencies (about workers' AI rights)," Hickok said. "Because we want workers to have a say in how they get to use or engage in AI systems, and not just be subject to surveillance and performance monitoring, etc., and being exploited."

CAIDP's annual AI & Democratic Values Index is an important piece of its work: a worldwide assessment of AI policies and practices across 75 countries. In 2022, for example, countries like Canada, Japan, and Colombia ranked high in terms of AI policies in place, while countries like Iran, Vietnam, and Venezuela scored low.


"Governments around the world are moving rapidly to understand the implications of the deployment of AI as more systems are deployed," the report reads." We anticipate that the rate of AI policymaking will accelerate in the next few years." 

CAIDP is already working with federal agencies and governments on these issues.

Also: Executives need better tech skills. Here are six ways to educate upward

And while the two organizations are taking different approaches to keeping AI accountable, both share the same concerns about the future of AI and its risks.

Carrier said the risks essentially break down into five areas of concern: ethical risks, bias, privacy, trust, and cybersecurity.

"It really depends on the use case and the biggest risks within that, but there's usually always many," he said. "And that's why we do what we do."

As far as what the future looks like for AI, both organizations are relatively optimistic that regulations will be adopted and risks will be controlled.

"From our perceptive, we want to ensure that AI is adopted and that it is always done in a way that is beneficial to humans by maximizing the risk mitigation of each individual tool," Carrier said. "In a perfect future, it would be that independent audit AI systems are mandatory for all AI and that algorithmic and autonomous systems impact humans meaningfully."
