Bad drivers beware: MIT's self-driving car AI rates how selfish you are on the road

The MIT tech could improve the way self-driving vehicles respond to human drivers around them.

Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed an AI system for self-driving cars that classifies human drivers' social personalities to improve decision-making in tricky road situations.  

MIT researchers believe it could also be used to train self-driving cars to exhibit more human-like behavior that other people on the road could understand more easily. 

The AI stems from the idea that self-driving cars can be programmed to classify the social personalities of other drivers. This classification allows the vehicle to make better predictions about upcoming dangers.

SEE: The new commute: How driverless cars, hyperloop, and drones will change our travel plans (TechRepublic cover story) | Download the PDF version

The researchers use 'social value orientation' to assess how egotistical or 'prosocial' other drivers are. Using this assessment, the system generates real-time driving trajectories for self-driving cars. 

The system could be used to address the mismatch in behaviors and expectations between human drivers and autonomous vehicles. 

While human drivers sometimes take calculated risks to avoid a situation, self-driving cars are programmed to be cautious and obey road rules, particularly at difficult-to-navigate intersections and four-way stops.  

This caution could explain the high number of incidents in which human drivers rear-end self-driving cars. It is also how Apple's self-driving vehicle got into its first crash last year, as it was slowly merging onto a highway.   

"Creating more human-like behavior in autonomous vehicles (AVs) is fundamental for the safety of passengers and surrounding vehicles, since behaving in a predictable manner enables humans to understand and appropriately respond to the AV's actions," said Wilko Schwarting, the lead author on the new research paper

The algorithm they have developed scores each driver as cooperative, altruistic, or egoistic, based on how much the driver demonstrates care for themselves versus care for others. 

The approach also draws on social psychology and game theory for understanding social situations among competing players.

For example, when a car is merging into a lane, a driver already in that lane has two options: let the other car merge (cooperative) or block it (egoistic). The researchers found that cars trying to merge into a lane behave more competitively than those already in the lane. 
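In the social psychology literature the MIT team draws on, social value orientation is commonly expressed as an angle between pure self-interest and pure concern for others, with standard cut-offs separating egoistic, prosocial (cooperative), and altruistic behavior. As a rough sketch of the idea — the function names are invented here, and the thresholds come from the general SVO literature rather than the MIT paper — a classifier might look like this:

```python
import math

def svo_angle(reward_self, reward_others):
    """Angle (in degrees) between caring only about oneself (0 degrees)
    and caring only about others (90 degrees)."""
    return math.degrees(math.atan2(reward_others, reward_self))

def classify_driver(reward_self, reward_others):
    # Thresholds are the conventional SVO cut-offs, used here for illustration.
    angle = svo_angle(reward_self, reward_others)
    if angle > 57.15:
        return "altruistic"   # weights others' reward far above their own
    elif angle > 22.45:
        return "prosocial"    # balances own and others' reward (cooperative)
    else:
        return "egoistic"     # weights mostly their own reward

# A driver who gives up most of their own progress to let another car merge
print(classify_driver(reward_self=0.3, reward_others=0.7))  # prints "altruistic"
```

In a real-time system, the reward terms would be estimated from observed driving behavior (for example, whether a driver yields or accelerates when another car signals a merge), and the resulting classification would feed into the car's trajectory prediction.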

The system was also trained to understand when it's appropriate for a self-driving car to display more assertive behavior, such as when making a lane change in heavy traffic.

SEE: The top 3 companies in autonomous vehicles and self-driving cars

The MIT researchers say the system is not ready for real roads yet. However, they will soon start exploring how their model works for pedestrians, bicycles, and other road users. They also intend to use social value orientation to assist decision-making in household robots.

They argue that the system could also be helpful for human drivers, providing a second set of eyes that could detect an aggressive driver in a blind spot. The system could give an alert in the rear-view mirror, allowing the driver to adjust their driving patterns.