
AI safety and bias: Untangling the complex chain of AI training

AI is progressing at a rapid pace. If we don't build safe systems now, we never will, says Intel researcher Lama Nachman.
Written by Dan Patterson, Contributor

AI safety and bias are urgent yet complex problems for safety researchers. As AI is integrated into every facet of society, understanding its development process, functionality, and potential drawbacks is paramount. 

Lama Nachman, director of the Intelligent Systems Research Lab at Intel Labs, said that including input from a diverse spectrum of domain experts in the AI training and learning process is essential. "We're assuming that the AI system is learning from the domain expert, not the AI developer... The person teaching the AI system doesn't understand how to program an AI system... and the system can automatically build these action recognition and dialogue models," she said.


This presents an exciting, if potentially costly, prospect: the system could keep improving as it interacts with users. "There are parts that you can absolutely leverage from the generic aspect of dialogue, but there are a lot of things in terms of just... the specificity of how people perform things in the physical world that isn't similar to what you would do in a ChatGPT," Nachman said. In other words, while current AI technologies offer strong dialogue systems, the shift toward understanding and executing physical tasks is an altogether different challenge.

AI safety can be compromised, she said, by several factors, including poorly defined objectives, a lack of robustness, and the unpredictability of an AI system's responses to specific inputs. When a system is trained on a large dataset, it may learn and reproduce harmful behaviors found in that data.

Biases in AI systems can also lead to unfair outcomes, such as discrimination or unjust decision-making. Bias can enter a system in numerous ways, for example through training data that reflects prejudices present in society. As AI continues to permeate everyday life, the potential for harm from biased decisions grows, reinforcing the need for effective methods to detect and mitigate these biases.
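As a rough illustration of what one such check can look like, the sketch below computes a demographic parity gap, the difference in positive-outcome rates between groups of people affected by a model's decisions. The data, group labels, and 0.1 warning threshold are illustrative assumptions for this example, not a method described by Nachman or Intel.

```python
# Minimal sketch of a demographic parity check: compare how often a model
# produces a positive outcome (e.g., a loan approval) for each group.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return (largest gap in positive-prediction rates, per-group rates)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model decisions for two groups (1 = positive outcome).
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(preds, groups)
print(f"Positive-outcome rates by group: {rates}")
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.1:  # illustrative threshold, not a standard
    print("Warning: outcomes differ substantially across groups.")
```

In this toy data the model approves 80% of group A but only 20% of group B, a gap of 0.60, which is the kind of disparity bias audits are designed to surface before a system is deployed.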


Another concern is the role of AI in spreading misinformation. As sophisticated AI tools become more accessible, there is a growing risk that they will be used to generate deceptive content that misleads public opinion or promotes false narratives. The consequences can be far-reaching, including threats to democracy, public health, and social cohesion. This underscores the need for robust countermeasures against AI-generated misinformation and for ongoing research to stay ahead of evolving threats.


With every innovation comes an inevitable set of challenges. Nachman proposed that AI systems be designed to "align with human values" at a high level, and suggested a risk-based approach to AI development that considers trust, accountability, transparency, and explainability. Addressing these issues now will help ensure that future systems are safe.
