Workplace monitoring is everywhere. Here's how to stop algorithms ruling your office

A new report lays out five recommendations to protect us from the rapid rise of automated workplace-monitoring and decision-making tools.


Algorithms are increasingly playing a role in organizational decision making, raising a number of social and ethical issues.


Tougher rules are needed on the use of algorithms to monitor employees and to make workplace decisions, following a huge rise in workplace monitoring, according to a committee of MPs and peers.

The UK's All-Party Parliamentary Group (APPG) on the Future of Work warned that the growing reliance on algorithmic surveillance and management tools is associated with "significant negative impacts on the conditions and quality of work across the country". 


In particular, it found that pervasive monitoring and automated decision making were associated with "pronounced negative impacts on mental and physical wellbeing, as workers experience the extreme pressure of constant, real-time micro-management and automated assessment".


The group's report, The New Frontier: Artificial Intelligence at Work, came as the European Commission's Joint Research Centre published separate research on electronic monitoring and surveillance in the workplace. It too found that the explosive growth of AI-based tools poses a profound risk to worker wellbeing, threatening to erode trust between employer and employees and risking further psycho-social consequences unless action is taken to regulate their use.

"AI is transforming work and working lives across the country in ways that have plainly outpaced, or avoid, the existing regimes for regulation," the APPG report reads.

"With increasing reliance on technology to drive economic recovery at home, and provide a leadership role abroad, it is clear that the government must bring forward robust proposals for AI regulation to meet these challenges."

The APPG makes five recommendations aimed at ensuring more fairness and transparency in the UK's AI ecosystem, particularly around the use and regulation of algorithm-based monitoring, management and decision-making tools.

Recommendation 1: The Accountability for Algorithms Act  

Central to the cross-parliamentary group's recommendations is the creation of an Accountability for Algorithms Act, or 'the AAA'.

This would provide "an overarching, principles-driven framework for governing and regulating AI" in the workplace, as well as mechanisms to ensure that real people (as opposed to machines) maintain oversight of any significant decisions made by algorithms.


The AAA would include new rights and responsibilities, to ensure that all significant impacts from algorithmic decision-making on work or workers are considered.

To be effective, the AAA would need to be based on four planks, namely:

  • Identifying individuals and communities who might be impacted by algorithmic decisions – particularly vulnerable people and those with disabilities.
  • Undertaking risk analysis aimed at outlining potential pre-emptive actions – specifically, "preventing individual and social injury".
  • Taking appropriate action in response to any analysis undertaken – meaning designers, developers and anyone else involved in the supply chain would need to address and mitigate any risks identified in their algorithms.
  • Ongoing impact assessment and appropriate responsive action – impact assessment should begin at the earliest stage of the design cycle and continue as an ongoing, transparent process.

Recommendation 2: Updating digital protection

The AAA should fill gaps in existing protections against technology at work, including providing workers with easy-to-access information detailing the "purpose, outcomes and impact of algorithmic systems at work" and a right for them to be involved in shaping their design and use.

It also calls for a number of protections for employees who are spending an increasing proportion of their time online, and who are therefore more exposed to the negative aspects of work in the digital age.


For example, the report says all employees should be given a right to flexible working unless there is a strong business reason not to grant it, as well as the right to disconnect from work outside of agreed hours – something that is gaining traction in European countries such as Portugal, which recently introduced a law effectively banning bosses from contacting their employees outside working hours.

The report also calls for greater protection for employees who might be subject to monitoring, including transparency into how their data might be used to track their performance.

As such, while the AAA should be a vehicle for greater clarity into the use and purpose of workplace-monitoring tools, there should also be safeguards in place to protect developers from copyright infringement and stop algorithms being exploited or "gamed" by workers.

Recommendation 3: Enabling a partnership approach

To ensure AI-based tools are designed with the interest of the wider public in mind, the government should develop partnerships with developers and the wider AI ecosystem.

Unions and NGOs should also be given additional rights when it comes to requesting transparency and involvement regarding how algorithms are used in the workplace.

"This should start with employers informing relevant trade unions when algorithmic systems with significant impacts are adopted in a workplace so that meaningful consultation can commence," the report reads.

The report calls for unions to be integrated into the AI ecosystem even further. "Unions should also be allowed to develop new roles within the AI ecosystem to redress a growing imbalance of information and power and help deliver genuinely human-centred AI in the public interest," it reads. It proposes that the UK Trade Union Congress (TUC) be given the role of developing and delivering artificial intelligence training to workers.

Recommendation 4: Enforcement in practice

The report says the Government's Digital Regulation Cooperation Forum (DRCF), which currently consists of the Information Commissioner's Office (ICO), the Competition and Markets Authority (CMA) and the Office of Communications (Ofcom), should be expanded to include the Equality and Human Rights Commission, which promotes and upholds human rights in Britain.

It argues that, while the DRCF was created to ensure greater cooperation on digital and online regulation, there is still "a very mixed picture" when it comes to who is responsible for what. This makes it difficult to establish which body is accountable for upholding workers' digital rights, particularly as new technologies evolve rapidly.


"We need new mechanisms and resources to establish regulatory common capacity and enforce the AAA, alongside existing protections," the report states. "The members and remit of the DRCF should be expanded to include the EHRC and [a] new single enforcement body for employment rights."

The DRCF should also be given the means to run regulatory sandboxes to experiment with different approaches to governance.

Recommendation 5: Supporting human-centred AI

The APPG's fifth recommendation proposes that the UK incorporates a collection of fundamental rights and values into the development and application of new AI and automation-based technologies in the workplace.

Specifically, it calls for the APPG's Good Work Charter – which incorporates the rights and freedoms protected in the European Social Charter and International Covenant on Economic, Social and Cultural Rights – to play a central role in the UK's national AI Strategy, which was published in September.

"A sharper focus on Good Work for all will enable the development of human-centred AI and a human-centred AI ecosystem," the report reads.

"The evidence contributed to this inquiry from organisations across civil society, business, the trade union movement and academia has made a compelling case that a fresh approach to regulation is needed to maximise the opportunities and address the challenges of fast-paced technological change at work," the report says.