Self-driving cars are undoubtedly the future. They have the potential to significantly reduce collisions, be more economical, and eventually remove the need for driving licenses.
Currently, though, they’re far from perfect. Despite thousands of hours of training on practice courses, they still make mistakes. Other road users can create unpredictable situations, and a single poor decision can lead to injury or death.
As a result, a new model from Microsoft and MIT focuses on identifying these weaknesses. During training, researchers flagged each moment the AI was about to make a mistake, and that feedback was folded into the machine learning model.
The result is an AI that can hopefully pinpoint situations where it needs more information before acting. The approach was validated in video games, where a human corrected the path of a trained agent whenever it made an error.
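The core loop described above can be sketched in a few lines. This is a hypothetical, simplified illustration, not the researchers' actual method: a stand-in agent policy, a simulated human supervisor that flags errors, and a routine that aggregates those flags into per-situation error rates so the agent knows where to defer. The state names, functions, and threshold are all invented for the example.

```python
import random

def agent_action(state):
    # Hypothetical trained policy: always steers "right",
    # which happens to be wrong in construction zones.
    return "right"

def human_flags_mistake(state, action):
    # Simulated human supervisor: flags the action whenever it
    # would be an error in the current situation.
    return state == "construction" and action == "right"

def learn_blind_spots(states, n_rollouts=200, threshold=0.5):
    """Aggregate human corrections into per-state error rates and
    return the states where the agent should ask for help."""
    flags = {s: 0 for s in states}
    counts = {s: 0 for s in states}
    for _ in range(n_rollouts):
        s = random.choice(states)
        a = agent_action(s)
        counts[s] += 1
        if human_flags_mistake(s, a):
            flags[s] += 1
    return {s for s in states
            if counts[s] and flags[s] / counts[s] > threshold}

blind_spots = learn_blind_spots(["open_road", "intersection", "construction"])
print(blind_spots)  # {'construction'}
```

In this toy version, the agent ends up knowing it is unreliable in construction zones, which is exactly the kind of "I need more information" signal the article describes.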
“Many times, when these systems are deployed, their trained simulations don’t match the real-world setting [and] they could make mistakes, such as getting into accidents. The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors,” said Ramya Ramakrishnan, a graduate student at MIT’s Computer Science and Artificial Intelligence Laboratory.
Eventually, the technique could be used in real-world scenarios. For example, a human could take control in uncertain situations, and the AI would take note of what they do. The ultimate goal is for self-driving cars to require no human input at all, but that could be a long way off. In the meantime, solutions like this could keep users safer on the road.