Tesla ‘Autopilot’.

https://en.wikipedia.org/wiki/Tesla_Autopilot

As we move forward, FSD (Full Self-Driving) will use AI to help make its decisions. This will not be about programming a car to do X, Y or Z. Explainable AI is a huge area of research: understanding how and why autonomous systems (not just cars) make the decisions they do is very difficult.

Would swerving around the obstacle risk a collision with oncoming traffic or pedestrians?

So in this situation, do you let the car decide to kill the pedestrians, or to collide with oncoming traffic (say a cyclist) and kill them? We can be in a situation where any impact is unavoidable. Scenarios like this are played out in labs across the world every day, and they certainly make for an interesting debate amongst my students.
 
Agree with the above, though in this scenario the only two things on the road were the car and the deer. The car is fitted with sensors front, back, left and right, so it should have been ‘aware’ of this. The ‘kill a deer or risk an accident’ argument simply doesn’t apply. I’m also very interested to know whether it has the AI to recognise that the deer isn’t a child, recumbent cyclist or wheelchair user, all of which would have a similar pixel or radar footprint. Did the AI fail to recognise anything at all, or did it calculate that, even on a totally empty road, ploughing through without even braking was the safest option? I’d like to see this one debugged!
 
Agree with the above, though in this scenario the only two things on the road were the car and the deer. The car is fitted with sensors front, back, left and right, so it should have been ‘aware’ of this. The ‘kill a deer or risk an accident’ argument simply doesn’t apply. I’m also very interested to know whether it has the AI to recognise that the deer isn’t a child, recumbent cyclist or wheelchair user, all of which would have a similar pixel or radar footprint. Did the AI fail to recognise anything at all, or did it calculate that, even on a totally empty road, ploughing through without even braking was the safest option? I’d like to see this one debugged!


1) It could have decided that what was in fact a deer was road spray, a reflection, anything. As I understand it, there is little or no AI onboard at the moment; the AI intended for FSD is still undergoing testing.

There is no debugging of AI systems - understanding why a neural network made a specific decision is exceptionally complex.
 
My Subaru has an ‘EyeSight’ system, which is essentially adaptive cruise and lane assist with emergency braking etc. I probably wouldn’t have ordered it, but it’s standard equipment so I had no choice. After initial scepticism, I love it. The system definitely improves driver safety, no question. It’s particularly good on a long journey and in heavy traffic, and makes driving so much easier. Of course, I’m still steering and can override at any point. Full autonomy is clearly the way it will go.
 
Autonomous cars will never be 'bombproof', but sooner or later they will be better than human drivers (maybe they already are?). Humans are far from bombproof, as the statistics show. The most interesting aspect of autonomous cars, to me, is the legal one: whose fault is it when a vehicle causes an accident? The maker's? The driver's (who wasn't driving)?

I myself think driving a car is among the most fun one can have as a human, and I have no wish to own an autonomous car. I would rather save the money, go by train/bus or even taxi, and use the savings to buy better HiFi.

Horses were autonomous, btw. Coming out of the pub drunk, you just mounted the carriage and told the horse 'Home!', and it knew how to get there.
 
Tesla’s advertising is disgraceful. “Autopilot” is nothing more than adaptive cruise control with a lane-change function. I fully understand that a plane’s autopilot is even less sophisticated, but only because I learned that from a pilot: the general public think it really is an automated pilot.

Every time a car running this system ploughs into a clearly visible barrier, some Tesla spokesperson comes along and says “the driver has to be alert - it’s not a self-driving car”, etc., but Tesla doesn’t fit an attention detector to its cars, and it allows the car to be driven hands-off for far longer than is wise. They are calling the next beta release “Full Self-Driving”, when it is anything but. (One of the more recent cases was a blind-drunk owner using the system, whose car smashed into the back of a stopped police patrol car at 70 mph. The police car was pulled over with blue lights flashing at the time.)

The system is not legal for use in Europe because Tesla could not show that it was adequately tested for European traffic conditions. This is a major problem with machine learning, as @gintonic pointed out: these aren’t programmed systems; they’re trained. You keep feeding the network with data and twiddling some switches until you get the right outputs from it, but the actual “wiring” inside is opaque. Tesla takes further risks by relying heavily on a visual system, even though visual AI systems are notoriously easy to fool and degrade badly in snow, rain or fog (everyone else supplements with radar or LIDAR, which is technically a visual system, but one with far more accurate ranging).
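As a minimal sketch of that “trained, not programmed” point (nothing here reflects Tesla’s actual stack; the network, data and learning rate are all invented for illustration), a tiny neural network can learn XOR perfectly well, yet the weights it ends up with are just a block of numbers with no readable rule inside:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR truth table

# "The wiring": weights and biases, initialised at random.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# "Keep feeding it data and twiddling the switches": plain gradient descent.
for _ in range(50_000):
    h = sigmoid(X @ W1 + b1)                # hidden layer
    out = sigmoid(h @ W2 + b2)              # network output
    g_out = (out - y) * out * (1 - out)     # gradient of squared error
    g_h = (g_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ g_out
    b2 -= g_out.sum(axis=0)
    W1 -= X.T @ g_h
    b1 -= g_h.sum(axis=0)

print(out.round(2).ravel())   # typically close to the XOR targets [0 1 1 0]
print(W1)                     # the "explanation": an opaque block of numbers
```

Inspecting W1 tells you nothing about why a given input produced a given output, and that is the same explainability problem, several orders of magnitude smaller.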

The company I work for bought a Tesla as a promotional item for customers. On the first trip on the Interstate, being a car full of engineers, my colleagues decided to test out the “autopilot”. Within five minutes, the car nearly killed them twice: once, during a lane-drop, when it refused to yield to an 18-wheeler that was ahead of it in the lane to the right and had literally nowhere else to go, and a second time by trying to jump into a space that another car was entering from the other side. No way on earth could we give it to a member of the public... and that was on Texas roads, with three to five lanes each way, where people sit in lane and don’t make the constant lane changes you see in Europe.
 
Not sure what your comment really means, but a move to autonomous vehicles will be a huge advance. It's OK, it's human nature to struggle to embrace change, so you're not alone, but thank heavens for those of us with more vision.
I look forward to working autonomous cars. Tesla is not even close.
My point is that even the fairly early cars were a step forwards in safety compared with horses.
I really hope that the day will come when I am told to use a self-driving car for my own good.

I am interested to know how Tesla could identify a deer optically and classify it as small. North America has the moose, which would be a very bad idea to hit. To a purely optical system, how do you tell the difference?
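A back-of-envelope pinhole-camera sketch of that ambiguity (the focal length, heights and distances below are invented numbers, not anyone's specs): a single camera measures angular size, which confounds true size with range, so on pixels alone a near deer, a far moose and a child can look identical. Radar or LIDAR ranging is what breaks the tie.

```python
FOCAL_PX = 1000.0   # hypothetical camera focal length, in pixels

def pixel_height(real_height_m: float, distance_m: float) -> float:
    """Pinhole projection: how tall an object appears on the sensor."""
    return FOCAL_PX * real_height_m / distance_m

print(pixel_height(1.0, 40.0))   # 1.0 m deer at 40 m   -> 25.0 px
print(pixel_height(2.0, 80.0))   # 2.0 m moose at 80 m  -> 25.0 px, identical
# A child of roughly deer height at 40 m gives the same 25 px footprint too.
```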
 
It's worth reading up on the levels of autonomous driving; here a Tier 1 automotive supplier explains the standardised levels:

https://www.aptiv.com/en/insights/article/what-are-the-levels-of-automated-driving

  • Level 0 - No Automation. This describes your everyday car. ...
  • Level 1 - Driver Assistance. ...
  • Level 2 - Partial Automation. ...
  • Level 3 - Conditional Automation. ...
  • Level 4 - High Automation. ...
  • Level 5 - Full Automation.

Tesla is Level 2, as is my BMW. Interestingly, drivers prefer the more functional Tesla system to the BMW Driver Assistance Pro, but NCAP prefers the BMW system: because it demands more driver involvement, the driver stays more alert and is therefore better able to take over when needed, which makes it safer.

The higher levels need car-to-car communication, but not just that in my view: it needs all cars communicating, as a single rogue human driver could cause havoc.

There is a very real issue in designing software that must decide between killing a pedestrian, hitting a baby in a pram, or crashing into a cyclist when an accident is unavoidable. Which is the least bad option? This all needs to be built into the software if vehicles are to be truly autonomous.
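To make that concrete, here is a deliberately crude sketch of what 'building it into the software' amounts to (the outcomes and cost weights are invented; no manufacturer publishes theirs): when every manoeuvre ends badly, a planner can only rank predicted outcomes by a cost function, and choosing those numbers is an ethical decision rather than an engineering one.

```python
# Invented example weights: the point is that *someone* has to choose them.
COST = {"hit_pedestrian": 100.0, "hit_cyclist": 90.0, "hit_deer": 10.0}

def least_bad(manoeuvres: dict[str, str]) -> str:
    """Return the manoeuvre whose predicted outcome has the lowest cost."""
    return min(manoeuvres, key=lambda m: COST[manoeuvres[m]])

options = {
    "brake_straight": "hit_deer",       # unavoidable impact ahead
    "swerve_left": "hit_cyclist",       # into oncoming traffic
    "swerve_right": "hit_pedestrian",   # onto the pavement
}
print(least_bad(options))   # -> brake_straight
```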
 
There is a very real issue in designing software that must decide between killing a pedestrian, hitting a baby in a pram, or crashing into a cyclist when an accident is unavoidable. Which is the least bad option? This all needs to be built into the software if vehicles are to be truly autonomous.

This is a key tenet in some of my lectures and tutorials on my Explainable AI and Ethics module.
 
1) It could have decided that what was in fact a deer was road spray, a reflection, anything. As I understand it, there is little or no AI onboard at the moment; the AI intended for FSD is still undergoing testing.

That's really surprising. I had assumed all of the camera image analysis (detecting road boundaries, other vehicles, pedestrians, traffic signs) was already AI-based; I can't imagine any other way to do it. Perhaps collision avoidance and automatic braking are based on radar and not images? Even so, it's surprising that the car would hit the deer.
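On the 'road spray' point above, a purely hypothetical fusion rule (the threshold, range and function are all invented, not any real AEB logic) shows how a low-confidence camera detection with no confirming radar return can simply be discarded, which is one way a real object gets treated as noise:

```python
from typing import Optional

CONFIDENCE_THRESHOLD = 0.7   # invented value

def should_brake(camera_confidence: float,
                 radar_range_m: Optional[float]) -> bool:
    """Brake if radar confirms something close, or vision alone is confident."""
    if radar_range_m is not None and radar_range_m < 30.0:
        return True   # strong radar return close ahead
    return camera_confidence >= CONFIDENCE_THRESHOLD

print(should_brake(0.4, None))   # weak visual detection, no radar -> False
```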
 
I've been involved with some of the angst between Tier 1s and the vehicle OEMs. It's a minefield.
 

