Fujitsu has developed a new AI model that it claims can capture and quantify how much a person is concentrating when performing tasks.
Conventionally, AI models that are designed to quantify concentration have been created by training algorithms to recognise the expressions and behaviours of people performing specific tasks, such as e-learning, Fujitsu explained.
But since facial expressions and behaviour differ depending on the task involved and the cultural background in which each person grew up, it has been difficult to develop AI models that can analyse different, specific situations, the company said.
Fujitsu claimed its new AI model can detect a person's concentration across various tasks because it identifies features common to all subjects and analyses each facial muscle group separately, making it less easily influenced by subjects' cultural backgrounds.
"The technology captures changes over a short period of a few seconds, such as a tense mouth, and long-term changes over periods of tens of seconds, such as staring intently, with time frames optimised for each action unit," Fujitsu said.
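Fujitsu has not published implementation details, but the description above — a separate, optimised time window for each facial action unit (AU) — can be illustrated with a minimal sketch. The AU names, window lengths, and frame rate below are hypothetical, chosen only to mirror the article's examples of a short-term cue (a tense mouth) and a long-term cue (staring intently):

```python
import numpy as np

FPS = 10  # hypothetical frame rate of the face-tracking input

# Each action unit gets its own analysis window, per Fujitsu's description:
# a few seconds for fast cues, tens of seconds for sustained cues.
AU_WINDOWS_SEC = {
    "AU23_lip_tightener": 3,       # short-term cue: tense mouth
    "AU05_upper_lid_raiser": 30,   # long-term cue: staring intently
}

def windowed_features(au_series: dict) -> dict:
    """Average each AU's per-frame intensity over its own optimised
    window, taken from the most recent frames of the recording."""
    feats = {}
    for au, series in au_series.items():
        win = AU_WINDOWS_SEC[au] * FPS
        feats[au] = float(np.mean(series[-win:]))
    return feats

# Toy usage: 60 seconds of synthetic AU intensities in [0, 1].
rng = np.random.default_rng(0)
series = {au: rng.random(60 * FPS) for au in AU_WINDOWS_SEC}
print(windowed_features(series))
```

In a real system, features like these would feed a downstream classifier; the sketch only shows the per-AU windowing idea, not Fujitsu's actual model.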
When building the AI model, Fujitsu collected data from 650 people across China, Japan, and the United States as part of its machine learning dataset. The dataset included information focused on people engaging in tasks, like memorisation and searching, that require concentration.
Using the newly developed AI model, the concentration levels of participants in each country were estimated for a series of tasks, and Fujitsu claimed the model estimated the degree of concentration with more than 85% accuracy.
In addition, when the AI model was evaluated using data covering both concentrated states and lapses caused by drowsiness, Fujitsu said it was able to confirm declines in concentration from indicators of sleepiness.
Fujitsu hopes the latest AI model can be applied across various services, such as online classes, online meetings, and sales activities, which have expanded due to the uptick in remote learning and working.
Across the globe, organisations such as the American Civil Liberties Union (ACLU) have condemned the use of faulty facial recognition systems. In a blog post last year, ACLU senior staff attorney Ashley Gorski said: "Over the last couple years, it's become increasingly clear that facial recognition technology doesn't work well, and would be a civil liberties and privacy nightmare even if it did."
Meanwhile, the Council of Europe published new guidelines at the start of the year that criticised the deployment of facial recognition technologies in workplace and learning contexts.
The watchdog advised that where the technology is used exclusively to determine an individual's skin colour, religious belief, sex, ethnic origin, age, health or social status, the use of facial recognition should be prohibited, unless it can be shown that its deployment is necessary and proportionate.
Under the same conditions, the Council of Europe said the ban should also apply to digital tools that can recognise emotions, detect personality traits, or mental health conditions, as they could be used unfairly in hiring processes or to determine access to insurance and education.