My understanding is that autonomous cars are now more a matter of regulation than technology. One problem, though, is that they will have to be much safer than human drivers to win that regulatory approval. We can tolerate human fallibility, but we can't tolerate a programmable machine that hasn't taken account of known safety issues. At this point, any safety problem with cars will be assumed to be known, so when an autonomous vehicle fails in a given situation, the public will go nuts. Pushing this hysteria will be all of the industries that depend economically on people as drivers (and there are a lot of them).
I don't think we're close to being capable of making the risk/benefit decision: "these cars are 75% safer than human drivers, so we will overlook their failure to deal with these rare but known situations, or these rare but known flaws in the detection systems." In real terms, we won't trade 40,000 deaths per year at human hands for 10,000 deaths per year at the hands of machines' programming, if that programming results in 10,000 deaths that a good human driver would have avoided.