Assuming the equipped software and hardware work decently: the car has hundreds of sensors that can "see" much more than a human, so the calculations behind any decision should be that much more accurate.
The problem is not the number of sensors but understanding what's going on. Vision systems have a hard time with rare events. To avoid false positives, where they panic unnecessarily because the sensors catch sight of a street sign or something, they tend to ignore such circumstances. But sometimes there really is a pedestrian or a parked car.
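The tradeoff can be sketched numerically. This is a toy illustration, not any real detector: the score distributions and thresholds below are invented, but they show why a threshold high enough to silence false alarms from clutter also starts missing the rare real obstacles.

```python
import numpy as np

# Hypothetical detector scores: higher = more "obstacle-like".
# All numbers are made up for illustration.
rng = np.random.default_rng(1)
clutter = rng.normal(0.3, 0.15, 1000)     # street signs, shadows, overpasses
pedestrians = rng.normal(0.6, 0.15, 50)   # rare genuine obstacles

def counts(threshold):
    """Return (false alarms on clutter, hits on real obstacles)."""
    false_alarms = int((clutter > threshold).sum())
    hits = int((pedestrians > threshold).sum())
    return false_alarms, hits

# A low threshold slams the brakes for clutter constantly; a high
# threshold that suppresses those false alarms also suppresses
# detections of real pedestrians.
print(counts(0.4))  # many false alarms, most pedestrians caught
print(counts(0.8))  # few false alarms, most pedestrians missed
```

Tuning the threshold just moves errors between the two failure modes; it doesn't make the system understand what it's looking at.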
Here's another example, where a Tesla on Autopilot drove past over 100 meters of traffic cones before slamming into a cop car: https://www.zerohedge.com/news/2018-12-12/tesla-autopilot-slams-police-car-despite-100-meters-traffic-cones-and-warning-0 The article also mentions that when Musk was on 60 Minutes, the Autopilot made an apparently illegal lane change.
The underlying problem is that AIs and computer vision systems don't have any human understanding, so they are easily fooled by unusual situations and especially by adversarial actions. For example, an autonomous car could run you over without realizing it if you are in a strange place under strange lighting. For a good essay with links, see https://www.nytimes.com/2018/11/05/opinion/artificial-intelligence-machine-learning.html?fbclid=IwAR0KF3AhWtKQSkcJsqXjZ9ly1elFOcz7D-m8R1t7l-h69vrqYbpMNkP9X0Y
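How little it can take to fool such a system can be shown with a minimal sketch of a gradient-sign (FGSM-style) adversarial perturbation. The "model" here is a stand-in, just a random linear classifier on a flattened fake image; for a linear score w·x + b the gradient with respect to the input is simply w, so a tiny per-pixel nudge against the margin flips the decision:

```python
import numpy as np

# Stand-in for a vision model: a fixed linear classifier on a
# flattened 8x8 "image". Weights and input are random; everything
# here is hypothetical, chosen only to illustrate the mechanism.
rng = np.random.default_rng(0)
w = rng.normal(size=64)
b = 0.0

x = rng.normal(size=64)
score = w @ x + b
label = score > 0            # say: True = "obstacle", False = "clear"

# FGSM-style attack: step every pixel by epsilon against the input
# gradient of the score (which, for a linear model, is exactly w).
# Pick epsilon just past the margin so the decision is guaranteed to flip.
eps = abs(score) / np.abs(w).sum() * 1.01
x_adv = x - eps * np.sign(w) * np.sign(score)

adv_label = (w @ x_adv + b) > 0
print(label, adv_label)              # the two labels disagree
print(float(np.abs(x_adv - x).max()))  # yet each pixel moved by only eps
```

Deep networks aren't linear, but the same one-step sign-of-gradient trick transfers to them surprisingly well, which is why imperceptible perturbations can make a classifier confidently misread a scene.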
It's hard to guess how far we are from capturing understanding in algorithms. My book "What Is Thought?" http://www.whatisthought.com argued that the vast computational resources of evolution contributed critically, in which case we may never get there.