
Smartphone usage patterns can reveal your personality type: Research

Employers are increasingly using AI-based psychological profiling as part of their hiring processes. That's a labour rights issue, and it should worry us.
Written by Stilgherrian, Contributor

Data collected from your smartphone usage patterns can be used to predict four of your Big Five personality traits, according to new research from Princeton University.

The Big Five model, also known as the OCEAN model, was first developed in the 1980s. It is the most widely used and well-established system in psychology for organising personality traits.

The Princeton researchers say that the data they gathered could partially predict four of these traits: openness to experience, conscientiousness, extroversion, and emotional stability. They could not reliably determine agreeableness, however.

Data was gathered on users' communication and social behaviour, music consumption, app usage, mobility, overall phone activity, and daytime and nighttime activity levels.

Some data streams were more useful than others. For example, in predicting the "love of order and sense of duty" facet of the conscientiousness trait, an especially useful data stream was the phone's average battery level when it wasn't on a charging cable.
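
To make the battery-level example concrete, here is a minimal, hypothetical sketch of how such a prediction could work: per-user features aggregated from phone logs feed an ordinary regression model that outputs a trait score. The feature set, synthetic data, and model choice below are illustrative assumptions, not the Princeton team's actual pipeline.

```python
# Illustrative sketch only: predicting a personality trait score from
# smartphone-derived features. Features and data are invented for
# demonstration and do not reflect the researchers' actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_users = 200

# Hypothetical per-user features aggregated from phone logs.
X = np.column_stack([
    rng.uniform(0.2, 1.0, n_users),  # mean battery level when unplugged
    rng.poisson(40, n_users),        # apps opened per day
    rng.uniform(0, 120, n_users),    # night-time screen minutes
])

# Toy ground truth: conscientiousness loosely tracks battery care.
y = 2.5 + 2.0 * X[:, 0] - 0.005 * X[:, 2] + rng.normal(0, 0.3, n_users)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")
```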

"The accuracy of these predictions is similar to that found for predictions based on digital footprints from social media platforms," the researchers wrote.

"[This] demonstrates the possibility of obtaining information about individuals' private traits from behavioural patterns passively collected from their smartphones."

The Princeton team noted the obvious ethical concerns.

"The present work serves as a harbinger of both the benefits and the dangers presented by the widespread use of behavioural data obtained from smartphone," they wrote.

On the plus side, they noted that such data could give staff recruitment processes genuine behavioural measures rather than mere estimates derived from self-reported questionnaires.

"At the same time, we should not underestimate the potential negative consequences of the routine collection, modelling, and uncontrolled trade of personal smartphone data," they wrote.

"Mounting evidence suggests that these data can and are being used for psychological targeting to influence people's actions, including purchasing decisions and potentially voting behaviours, which are related to personality traits."

Using AI to spot the job hoppers

Meanwhile, Australian startup PredictiveHire reckons it can tell whether a job candidate is likely to "job hop" or change jobs more frequently than an employer might like, based solely on their responses to open-ended questions from a chatbot.

"The language one uses when responding to interview questions related to situational judgment and past behaviour is predictive of their likelihood to job hop," the company's researchers wrote in a recently-published paper [PDF].

"The ability to objectively infer a candidate's likelihood towards job hopping presents significant opportunities, especially when assessing candidates with no prior work history," they wrote.

"On the other hand, experienced candidates who come across as job hoppers, based purely on their resume, get an opportunity to indicate otherwise."

The company said it found a "positive correlation" between a candidate's likelihood to job hop and the personality trait "openness to experience" in the HEXACO model, an alternative to the OCEAN model that adds a sixth trait called "honesty-humility".
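
A simple version of this kind of language-based screening can be sketched as a text classifier: free-text interview answers are vectorised and a model is fitted against historical tenure labels. Everything below, including the sample answers, labels, and model, is an invented toy, not PredictiveHire's actual system.

```python
# Illustrative toy only: scoring interview answers for "job hop"
# likelihood. Texts and labels are invented; a real system would need
# large labelled datasets and careful validation for bias and accuracy.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

answers = [
    "I stayed five years and led the same team through two migrations.",
    "I get bored quickly and enjoy trying a new company every year.",
    "I value stability and long-term projects with deep ownership.",
    "I've had four jobs in three years chasing new challenges.",
]
job_hopped = [0, 1, 0, 1]  # invented labels: 1 = left roles frequently

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(answers, job_hopped)

probe = ["I like to move on once a project ships."]
print(f"predicted job-hop probability: {clf.predict_proba(probe)[0, 1]:.2f}")
```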

As your correspondent has previously written, however, big data is a dangerous, faith-based ideology. It envisages a world free of human biases, where theory-free correlation is as good as causation, if only you have enough data.

Last year, researchers at Cornell University reviewed vendors of algorithmic pre-employment assessments and found that most of them claimed to be a fairer alternative to human-based hiring.

Rather than removing biases, unexamined algorithms often simply learn gender and racial biases from our own biased language, or otherwise replicate real-world biases, such as in racist crime prediction systems.
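
As a deliberately tiny illustration of how bias rides along in language, the sketch below measures whether occupation words sit closer to "he" than to "she" in a word-embedding space. The vectors here are hand-made to mimic the skew that researchers have measured in embeddings trained on real web text; nothing in this toy is learned from data.

```python
# Toy demonstration of measuring gender association in word embeddings.
# Vectors are fabricated to mimic the skew found in embeddings trained
# on real text; in practice they would come from a trained model.
import numpy as np

vectors = {
    "he":       np.array([1.0, 0.1, 0.0]),
    "she":      np.array([-1.0, 0.1, 0.0]),
    "engineer": np.array([0.7, 0.9, 0.2]),
    "nurse":    np.array([-0.6, 0.9, 0.2]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for word in ("engineer", "nurse"):
    bias = cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])
    print(f"{word}: he-vs-she association = {bias:+.2f}")
```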

And while potential biases are starting to get attention, that's not the whole story. Algorithmic hiring is a labour rights issue.

According to Nathan Newman, adjunct associate professor at the John Jay College of Criminal Justice, big data is being used to drive down workers' wages.

"Data analysis is increasingly helping to lower wages in companies beginning in the hiring process where pre-hire personality testing helps employers screen out employees who will agitate for higher wages and organise or support unionisation drives in their companies," he wrote in a 2017 law paper [PDF].

"With big data, the best way to defeat a drive to organise a union in a company's workplace is to never hire people willing to stand up to their employer in the first place."

This "algorithmic surveillance" can also allow employers to "assess which employees are most likely to leave and thereby limit pay increases largely to them".

Solon Barocas, one of the Cornell researchers, would agree.

"We should not let the attention that people have begun to pay to bias and discrimination issues actually crowd out the fact that there are a bunch of other issues," Barocas told MIT Technology Review.

"Job hopping, or the threat of job hopping, is one of the main ways that workers are able to increase their income."

One privacy professional has described these AI-based hiring practices as "digital phrenology", advising that if a prospective employer uses this technology, you should run for the door. Sage advice on both counts.
