
How Lenovo works on dismantling AI bias while building laptops

Product diversity reimagined: Lenovo's Ada Lopez reveals how diverse teams can shape the future of technology and accessibility for all, in this exclusive interview.
Written by David Gewirtz, Senior Contributing Editor
Ada Lopez, Senior Manager, Lenovo Product Diversity Office

Here's what you may already know about Lenovo: The multinational technology giant ships more PCs than any other company. Also, Lenovo's diverse business investments span tablets, monitors, accessories, smartphones, smart home and collaboration solutions, high-performance computing, augmented and virtual reality, commercial Internet of Things, software, services, and smart infrastructure data center solutions.

Also: AI safety and bias: Untangling the complex chain of AI training

But here's what you may not know about Lenovo: The company is also heavily invested in AI and has made another kind of diversity -- human diversity -- a top priority. I had the opportunity to interview Ada Lopez, senior manager of Lenovo's Product Diversity Office. She generously shared her time, and the result is this fascinating, wide-ranging conversation about the company's efforts to dismantle AI bias and promote inclusion.

Let's dig right in.  

ZDNET: Please introduce yourself and give us a little background on how you came to be running Lenovo's Product Diversity Office.

Ada Lopez: My name is Ada Lopez and I am the Senior Manager of the Product Diversity Office at Lenovo. I have over 18 years of experience as a teacher, and as both a product and project manager.

As a child born in Cuba who immigrated to the US at age 5, I had to confront and solve issues of cultural, linguistic, and familial exclusion. Issues of diversity and inclusion have been essential to my survival -- and I mean that literally -- for as long as I can remember. In my role at Lenovo, I can now apply my efforts to removing the technological barriers and biases that might exclude any of our customers.

I want to make sure that Lenovo's products are as accessible to users of all abilities and other underserved populations as they are to everyone else. Because we are constantly breaking new ground, my job is very exciting. We're working in a long-neglected area where there are no set answers. It also means that I need to be a bit disruptive since -- at the company level -- I'm asking technology specialists to expand their view of what constitutes a successful product.

ZDNET: Can you discuss the long-term societal effects of unchecked AI bias?

AL: AI is changing the business landscape, and Lenovo recognizes the importance of implementing AI safely and responsibly. To meet this need, Lenovo established the Responsible AI Committee, a group of 20 employees representing diverse backgrounds across gender, ethnicity, and disability.

Also: The ethics of generative AI: How we can harness this powerful technology

Together, they review internal products and external partnerships across the principles of diversity and inclusion, privacy and security, accountability and reliability, explainability, transparency, and environmental and social impact.

ZDNET: What are common misconceptions about AI bias in the tech industry?

AL: There's a misconception that there's nothing we can do to stop bias from infiltrating AI systems.

We can begin mitigating the risk of AI bias today by ensuring that we have talent with a variety of backgrounds and lived experiences. Establishing internal protocols that bring the diverse perspectives of programmers and designers into the process is the first step toward addressing many of the biases in the data sets AI uses to generate outputs.

This is something businesses can begin today!

ZDNET: Can you provide an example of AI bias in automated systems and its societal impact?

AL: Because the information for AI programs is pulled from preexisting internet sources, these systems may be unable to filter out biased opinions and perspectives. Ultimately, this can lead to an imbalanced future -- one in which AI may never reach its full potential as a tool for the greater good.

A common example we are experiencing is gender bias. With much of the data online skewing toward men, research conducted by Boston University in collaboration with Microsoft found that systems trained on Google News data associate men with titles such as "captain" and "financier." In contrast, women are associated with "receptionist" and "homemaker."
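
To make that finding concrete, here is a minimal Python sketch of the kind of measurement such a study performs: projecting occupation words onto a "gender direction" in the word2vec Google News embeddings. It assumes the gensim library, and the occupation words are illustrative, not the study's exact list.

```python
# A minimal sketch of measuring gender association in word embeddings,
# in the spirit of the study described above. Assumes the gensim library;
# the first call downloads the ~1.6 GB Google News vectors.
import gensim.downloader as api
import numpy as np

vectors = api.load("word2vec-google-news-300")

# A crude "gender direction": average of several he/she style differences.
pairs = [("he", "she"), ("man", "woman"), ("his", "her")]
gender_direction = sum(vectors[m] - vectors[f] for m, f in pairs) / len(pairs)

def gender_lean(word):
    """Cosine projection: positive leans male, negative leans female."""
    v = vectors[word]
    return float(np.dot(v, gender_direction) /
                 (np.linalg.norm(v) * np.linalg.norm(gender_direction)))

# Illustrative occupation words, not the study's exact list.
for title in ["captain", "financier", "receptionist", "homemaker"]:
    print(f"{title:>12}: {gender_lean(title):+.3f}")
```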

Also: Generative AI should be more inclusive as it evolves, according to OpenAI's CEO

Many AI systems trained on biased data -- often created by largely male teams -- have created significant problems for women. These prejudices show up in credit card companies offering men better terms, and in medical tools that screen men more favorably for COVID and liver disease -- areas where wrong decisions can damage people's financial or physical health.

We've also seen racial discrimination in US healthcare systems that use AI, according to Prolific. The AI system was designed to predict which patients needed extra medical care, analyzing their healthcare cost history.

The system assumed that cost indicates a person's healthcare needs, but it didn't account for differences in how Black and white patients pay for and access care. Because of this discrepancy, Black patients received lower risk scores -- they were assumed to be on par, cost-wise, with healthier white patients -- and didn't qualify for the same extra care as white patients with the same conditions.
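
Here is a toy Python simulation -- entirely synthetic data, not the real system -- of how using cost as a proxy for need can produce exactly this disparity: two groups with identical health needs, but one with systematically lower historical spending.

```python
# Entirely synthetic illustration (not the real system) of cost-as-proxy
# bias: two groups with identical needs, unequal historical spending.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000
need = rng.normal(50, 10, n)          # true healthcare need (unobserved)
group = rng.integers(0, 2, n)         # 0 or 1; need is identical across groups

# Group 1 spends less at the same level of need (access barriers, etc.).
cost = need * np.where(group == 1, 0.7, 1.0) + rng.normal(0, 5, n)

# The flawed model: predict cost and treat the prediction as a "risk score".
# In reality group membership leaks in via correlated features; it is
# modeled directly here for brevity.
X = np.column_stack([need + rng.normal(0, 2, n), group])
risk = LinearRegression().fit(X, cost).predict(X)

# Enroll the top 20% "riskiest" patients in the extra-care program.
cutoff = np.quantile(risk, 0.80)
for g in (0, 1):
    sel = group == g
    print(f"group {g}: mean need {need[sel].mean():.1f}, "
          f"enrolled {100 * (risk[sel] >= cutoff).mean():.0f}%")
```

Both groups have the same average need, but the group that historically spent less is largely shut out of the program.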

ZDNET: Can you describe a challenge Lenovo faced regarding AI bias and how it was resolved?

AL: We once unveiled a hyper-realistic AI-powered avatar during an employee event to demonstrate powerful generative AI technology.

We didn't expect the negative feedback it received from employees, but it provided a learning opportunity that will shape how we create avatars in the future. Detailed surveying of employees gave us insight into user perceptions and helped us address concerns about inadvertent bias in future iterations.

Also: Algorithms soon will run your life - and ruin it, if trained incorrectly

We must apply real rigor to our own solutions as well as the work of our partners, where diversity, equity, and inclusion needs to be a proven priority. We use dedicated tools to evaluate bias in data and identify sub-populations that might be under-represented or somehow segmented.

We also use open-source software called AI Fairness 360 to evaluate different algorithms and training data and to mitigate bias. This goes deeper than protected classes, too -- for example, checking for bias against socioeconomic groups defined by variables like income level or credit score.
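
For a flavor of what such a check looks like, here is a minimal sketch using the open-source AI Fairness 360 toolkit. The dataset and column names are hypothetical; Lenovo's actual pipeline isn't public.

```python
# Minimal sketch of the kind of check AI Fairness 360 supports. The CSV
# and column names here are hypothetical, not Lenovo's actual pipeline.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.read_csv("loan_decisions.csv")  # hypothetical training data

# Wrap the dataframe; note the protected attribute need not be a legally
# protected class -- an income-bracket flag works the same way.
data = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["low_income"],
)

privileged = [{"low_income": 0}]
unprivileged = [{"low_income": 1}]

metric = BinaryLabelDatasetMetric(
    data, unprivileged_groups=unprivileged, privileged_groups=privileged
)
# Disparate impact: ratio of favorable-outcome rates between groups;
# 1.0 is parity, and values below ~0.8 are a common red flag.
print("disparate impact:", metric.disparate_impact())

# One mitigation: reweight training examples so the favorable outcome is
# statistically independent of the protected attribute before retraining.
reweighted = Reweighing(
    unprivileged_groups=unprivileged, privileged_groups=privileged
).fit_transform(data)
```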

ZDNET: How does Lenovo's Product Diversity Office work to identify and correct potential biases in AI?

AL: While AI bias can rarely be eliminated entirely, we strive to manage and mitigate it as much as possible by ensuring the training dataset represents people from diverse backgrounds.

At Lenovo, we established a Responsible AI Committee bringing together 20 people of diverse backgrounds to decide the principles that AI must support in the organization.

ZDNET: How does the diversity of a development team influence the mitigation of AI bias?

AL: Promoting and encouraging diversity within the workplace is crucial: it ensures that we bring in talent with a variety of backgrounds and lived experiences. As I mentioned above, establishing internal protocols that bring the diverse perspectives of programmers and designers into the process mitigates the risk of baking a significant number of biases into the data sets AI uses to generate outputs.

Also: 6 ways business leaders are exploring generative AI at work

Business leaders play a large role in controlling what AI looks like and can unlock. It is imperative that organizations thoroughly plan for what responsible AI usage means and remain committed to upholding that ideal. Engaging with stakeholders to determine potential problems and establishing best practices will require constant attention from leadership and respective teams, but doing so is essential.

ZDNET: What role do data sources play in perpetuating AI bias, and how can this be addressed?

AL: Lenovo's Data for Humanity report found that 88% of business leaders say that AI technology will be an important factor in helping their organization unlock the value of its data over the next five years. So, when these companies collect, process, or use data, there is a risk that any findings could be shaped by bias.

ZDNET: How can AI bias impact decision-making in various sectors, like healthcare or finance?

AL: There are abundant examples of bias in healthcare with or without AI. With AI, the challenge is partly that an algorithm might recognize patterns in the data and draw the wrong conclusion. Even though that data set supports the conclusion, there may be key variables missing. Or, as is often the case, the pattern may be the product of historical misdiagnosis or neglect within a specific group.

Also: Will an AI-powered robocop keep New York's busiest subway station safe?

Policing data is a common example of data reinforcing bias. If certain communities are policed more, then the arrests are higher. For the AI, arrests equate to crime, so the conclusion might be that crime is greater. The data enshrines the biases and patterns. Context is everything here.
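
A toy simulation makes that feedback loop visible. The numbers are invented: two districts with identical crime rates, one patrolled more heavily, with patrols reallocated each year based on arrest counts.

```python
# Toy simulation with invented numbers: identical underlying crime, but
# patrols (and therefore arrests) start skewed and feed back on themselves.
import numpy as np

true_crime = np.array([10.0, 10.0])   # two districts, equal crime rates
patrols = np.array([0.7, 0.3])        # district 0 starts more heavily patrolled

for year in range(5):
    # Arrests reflect crimes *observed*, which scales with patrol presence.
    arrests = true_crime * patrols
    # A naive model reads arrests as crime and over-allocates accordingly.
    weights = arrests ** 1.2
    patrols = weights / weights.sum()
    print(f"year {year}: arrests {arrests.round(1)}, patrols {patrols.round(2)}")
```

The model's "data-driven" conclusion that district 0 has more crime is wrong on day one, and each round of reallocation makes it look more true.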

ZDNET: What advancements in AI technology are being made to detect and correct bias?

AL: Explainability is advancing quickly, so we have a better understanding of how an AI generated something. Linear regression algorithms are extremely explainable, but neural network processes will always have hidden elements. Still, there are new ways to demystify and better explain the AI, and it's important for companies like Lenovo to take advantage of those advancements.

We also see greater transparency in the source data and the model used in AI, so we can better identify and correct gaps and deficiencies. Without transparency, it's impossible to interrogate and improve the training data and algorithms.
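
The contrast Lopez draws is easy to see in code. In a linear regression, the coefficients are the complete, global explanation of the model; a neural network offers no such readout. This sketch uses scikit-learn, with made-up feature names.

```python
# Sketch of the explainability gap, with made-up feature names: a linear
# model's coefficients are its complete, global explanation.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0, 0.1, 500)

model = LinearRegression().fit(X, y)
for name, coef in zip(["age", "income", "tenure"], model.coef_):
    print(f"{name:>7}: {coef:+.2f}")  # e.g. "age: +2.00" -- the whole story

# A neural network has no equivalent readout; post-hoc tools (SHAP values,
# saliency maps) approximate attributions rather than expose exact weights.
```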

ZDNET: In what ways can consumer feedback be used to identify and correct AI bias?

AL: In most instances, customer feedback should be a last resort. During development, teams need to very deliberately consult and represent diverse groups to mitigate bias -- this needs to happen at the foundation of any AI.

Also: How trusted generative AI can improve the connected customer experience

However, customer feedback can become valuable with smaller sub-populations or when addressing intersections of multiple dimensions -- for example, sexuality, race, and gender identity.
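
In practice, that means slicing evaluation metrics by intersecting attributes rather than looking only at aggregates. Here is a hypothetical sketch with pandas; the columns and values are invented for illustration.

```python
# Hypothetical sketch: aggregate accuracy can hide failures at the
# intersection of attributes, so slice metrics by subgroup. Column names
# and values are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "gender":  ["f", "f", "m", "m", "f", "m", "f", "m"],
    "race":    ["a", "b", "a", "b", "b", "a", "a", "b"],
    "correct": [1, 0, 1, 1, 0, 1, 1, 1],
})

# Per-intersection accuracy and sample size; tiny n flags the subgroups
# where direct customer feedback may be the only usable signal.
report = (df.groupby(["gender", "race"])["correct"]
            .agg(accuracy="mean", n="size"))
print(report)
```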

ZDNET: How can interdisciplinary approaches enhance the understanding and reduction of AI bias?

AL: Lenovo's Responsible AI Committee consists of people with very different backgrounds and areas of expertise, including security, sales, privacy, law, and diversity and inclusion. We benefit greatly from that diversity of opinions and very rigorous review of technology.

And we complement that with peer-reviewed studies and research conducted with different goals and scopes. AI is not new, but the current scale and speed of deployment is unprecedented, so we need to be extremely thoughtful and vigilant.

ZDNET: What advice would you give to other tech companies in tackling the issue of AI bias?

AL: As humans, no matter how hard we try, we inherently have biases -- both conscious and unconscious. There will always be some level of bias within the various levels of programming, but we can remain diligent in ensuring people understand and recognize their biases.

Also: Do companies have ethical guidelines for AI use? 56% of professionals are unsure 

This is also why it's necessary to build teams with different experiences, backgrounds, and perspectives.

ZDNET: Any other thoughts you want to share with ZDNET's global audience?

AL: AI has the potential to completely shift how our world operates. As with any technology, we must understand its capabilities, as well as the drawbacks of its use. Leaving AI with little supervision can be problematic, especially as this technology becomes smarter.

Instead, we need to question and challenge the outputs and examine those controlling the inputs. We should explore AI and use it as an assistant, but it has not reached the point where we can fully rely on it.

Final thoughts

ZDNET's editors and I would like to share a huge shoutout to Ada for taking the time to engage in this in-depth interview. There's a lot of food for thought here. Thank you, Ada!

Also: The best Lenovo laptops: Expert tested

What do you think? Did Ada's recommendations give you any ideas about how to improve problems of bias and diversity in your organization? Let us know in the comments below.


You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter on Substack, and follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.
