
Five ways to use AI responsibly

Professionals are panning for data nuggets in the artificial intelligence gold rush. This expert tells you how to reap rewards without taking risks.
Written by Mark Samuels, Contributor

The inexorable rise of generative artificial intelligence (AI) and automation has huge implications for professionals and the organizations they work for.

Operational tasks are being automated and job roles are being changed. However, the full impact of this revolution on businesses, employees, customers, and society at large is almost impossible to gauge.

As AI is adopted and adapted during the remainder of the decade, the full effects of this transformation will become clearer. For now, increasing numbers of professionals are experimenting with emerging technologies such as ChatGPT.

Also: The best AI chatbots: ChatGPT and other noteworthy alternatives

As these explorations take place, responsible AI -- which Accenture defines as the practice of designing, developing, and deploying AI with good intentions -- becomes increasingly critical.

Yet this requirement for responsibility is not necessarily being met: the consultancy reports that just 35% of consumers trust how AI is being implemented by organizations.

So, as professionals start scaling up their use of AI, how can they ensure that they and their organizations are using this emerging technology with responsibility in mind?

That's the key priority for Carter Cousineau, vice president of data and model governance at news and information specialist Thomson Reuters, who helps her company use AI and machine learning responsibly.

Like other blue-chip enterprises, Thomson Reuters is only just starting to explore the potential of many of these emerging technologies, especially generative AI.

However, Cousineau says one thing is already clear from these nascent moves: "Responsible AI has to be embedded in ethics throughout the entire lifecycle."

Also: Today's AI boom will amplify social problems if we don't act now

Her views on the importance of the ethical implementation of AI are not just based on her time at Thomson Reuters.

She was formerly the Managing Director of the Center for Advancing Responsible and Ethical Artificial Intelligence at the University of Guelph. Her research interests span a range of topics, from human-computer interaction to trustworthy AI.

Alongside her academic pursuits, Cousineau -- who spoke with ZDNET at the recent Snowflake Summit 2023 in Las Vegas -- has worked with technology start-ups, not-for-profit organizations, smaller businesses, and Fortune 500 companies.

Since joining Thomson Reuters in September 2021, she's put into practice some of the things she learned during her research activities.

"It's been exciting to go into a corporation and to think about how we can influence and change the culture to help drive trust," she says.

Also: AI and advanced applications are straining current technology infrastructures

From the moment data is used all the way through to the decommissioning of a model, her global team covers the AI lifecycle and ensures information and insight are used in a well-governed and ethical manner.

She says there are five key things professionals must consider if they want to exploit AI responsibly.

1. Get ready for regulations

After putting the foundations for ethical AI in place at Thomson Reuters, Cousineau is now making sure employees stick to these well-honed principles on an ongoing basis.

She recognizes, however, that different lines of business have different requirements for AI and automation. What's more, those demands change as further pressure comes from external lawmakers.

Whether it's the General Data Protection Regulation, the EU AI Act, or pending rules on automation, her team puts the right checks and balances in place to help Thomson Reuters staff innovate with data and models in a flexible yet safe and secure manner.

"We look at all those regulations and ensure that, when new rules come around, we're ready," she says.

Also: AI ethics toolkit updated to include more assessment components

2. Stay open to change

Cousineau is not a fan of businesses creating one standardized checklist for AI and then assuming the job is done.

"Your models -- especially in generative AI -- keep learning. Your data keeps transforming and there are different uses of that data. So, creating visibility of your data model assets is huge."

She says openness to change is crucial for meeting legal obligations, but at some point the emphasis shifts and that openness becomes operationally beneficial, too.

"It's important your teams start to see and understand how to build that culture of responsible AI because they need to know more [about] how those data models are used through a producer or consumer lens."

Also: Leadership alert: The dust will never settle and generative AI can help

3. Use data impact assessments

Whether a model is at the proof-of-concept stage or moving toward production, Cousineau says Thomson Reuters continually checks for responsible AI across all use cases.

Right from the early days of automation, her team undertakes data impact assessments. This groundwork has proven crucial as new use cases relating to generative AI have emerged.

"We partnered with general counsel and our approach utilizes skip logic. Based on your use case, it will prompt you to the appropriate stages or ethical concerns that you would have to answer and asks for the use case at hand," she says.

"That approach builds a picture of supporting privacy needs, data governance, model governance, and ethics for your use case. And it's something that allows us to react quickly. So, as we get the results of that data Impact assessment, our team works right away on mitigating those risks within various teams -- and the risks are very specific."

Also: Train AI models with your own data to mitigate risks

4. Build trusted partnerships

Cousineau says Thomson Reuters works with a range of individuals and organizations to ensure privacy, security, and responsibility are front and center across all AI use cases.

Those partnerships span lines of business and go beyond the firewall to include relationships with customers and technology partners, including Snowflake.

"For me, responsible AI goes back to partnering with trusted organizations. It's about building an infrastructure and creating that collaborative culture," she says.

"That work is about being able to ensure your models are transparent, that you can explain them, and that they're interpretable and treating people and their data fairly. It's also about considering sustainability and the computing power that's needed to power your models."

Also: Nvidia teams up with Snowflake for large language model AI

5. Be aware of financial costs

Finally, as you start to experiment with AI, Cousineau says it's crucial to remember that turning systems on can be easier than turning them off.

Just as professionals have found that it's difficult to move their data from one cloud provider to the next, so they should be aware of the potential costs as they build their AI models and supporting IT infrastructures.

Responsible AI, therefore, involves thinking about long-term financial exposure.

Also: These are my 5 favorite AI tools for work

"Be aware of the complexity of costs," she says. "Once you've built generative AI into your use case -- and that's something you want to keep using going forward -- then it's very hard to migrate off that large language model, should you want to go to a different language model, because it's learned off that system. So, the cost in the migration work is very complex."
