The World Economic Forum has launched a new report that details how organisations can take an ethical approach to designing technology and using it responsibly.
The report, Ethics by Design -- An Organizational Approach to Responsible Use of Technology, details three design principles that organisations can adopt to promote ethical behaviour when creating, deploying, and using technology.
The first principle is paying timely attention to the ethical implications of technology -- by building awareness through training and internal communication channels, developing organisational "nudges" such as checklists and due diligence reminders, and weaving values and ethics into the company culture.
Another principle advised in the report is developing a system that helps people recognise ethical conduct, such as by introducing frameworks for ethical decision-making, involving leaders in the promotion of ethical decisions, and building a diverse organisation.
Introducing incentives and culture-change activities to encourage ethical behaviour is the third design principle recommended by the report. The report said this approach fosters empathetic relationships, creates organisational coherence, and promotes an organisational environment that remains flexible, adaptable, and stable.
According to World Economic Forum head of artificial intelligence and machine learning Kay Firth-Butterfield, following these design principles will help prompt better and more ethical behaviours.
"The ethical challenges will only continue to grow and become more prevalent as machines advance. Organisations across industries – both private and public – will need to integrate these approaches," she said.
The report also identified five traits shared by organisations that use technology ethically and are willing to consider the potential of ethical tech as part of their decision-making process: technical knowledge, social responsibility, a foundation of trust, ethical deliberation, and leadership commitment.
See also: AI and ethics: One-third of executives are not aware of potential AI bias (TechRepublic)
The report also outlined 10 recommendations for organisations -- described as proven effective -- that go beyond conventional incentives such as compliance training, financial compensation, or penalties. These include investing in the design principles above, using assessments to gauge an organisation's current maturity, teaching practices to members of ethical deliberation bodies, and integrating ethical commitments into hiring, orientation, training, and evaluation.
Ethics in technology, particularly when it comes to artificial intelligence, has become increasingly topical, with both governments and tech giants taking a stance on the issue.
For instance, last month, the White House issued guidance to US federal agencies on how to regulate AI applications produced in the US. The president also signed a second executive order focusing on AI development last week.
Meanwhile, the Australian Human Rights Commission has published a guide on how to recognise and prevent AI bias. Similarly, the Singapore Computer Society launched a reference document to guide businesses in the development of AI.
- Blockchain aims to solve AI ethics and bias issues
- Humans in space: The ethical and policy risks explained
- The trouble with AI: Why we need new laws to stop algorithms ruining our lives
- The state of AI in 2020: Biology and healthcare's AI moment, ethics, predictions, and graph neural networks
- The algorithms are watching us, but who is watching the algorithms?
- Australian insurer uses AI for car accident claims
- Australia gets a national guide to help assess effectiveness of STEM initiatives