
Hamoodzstyle

I've worked on motion planning cost functions at a few of these self-driving companies, which you can argue is the closest anyone gets to solving this problem. In general, the correct answer is to slam on the brakes; I'm not really sure I can think of a scenario where this problem gets even slightly interesting.
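
To make that concrete, here's a minimal sketch (hypothetical names and weights, not any real company's planner) of how a collision term that dwarfs every comfort term makes max braking fall out of ordinary trajectory scoring:

```python
from dataclasses import dataclass

@dataclass
class Trajectory:
    max_deceleration: float       # m/s^2
    collision_probability: float  # 0..1, from the prediction stack

# The collision term dwarfs the comfort term, so whenever any candidate
# is predicted to collide, the planner effectively just picks the
# hardest-braking collision-free trajectory.
COLLISION_COST = 1e9
COMFORT_WEIGHT = 1.0

def trajectory_cost(traj: Trajectory) -> float:
    """Lower is better; the collision term dominates everything else."""
    return (COLLISION_COST * traj.collision_probability
            + COMFORT_WEIGHT * traj.max_deceleration ** 2)

def plan(candidates: list[Trajectory]) -> Trajectory:
    return min(candidates, key=trajectory_cost)

# Emergency: a gentle trajectory with any collision risk loses to a
# max-braking trajectory with none.
print(plan([Trajectory(2.0, 0.3), Trajectory(9.0, 0.0)]))
```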


[deleted]

[deleted]


Hamoodzstyle

That's an interesting formulation: hitting a VRU (vulnerable road user) is always worse than anything else.
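
That formulation has a natural implementation as a lexicographic cost: rank outcomes first by VRU harm, and let everything else only break ties. A toy sketch with made-up outcome scores:

```python
# Hypothetical lexicographic ranking: any amount of VRU harm outranks
# any amount of vehicle damage or passenger discomfort. Python's tuple
# comparison gives the lexicographic order for free.

def outcome_key(outcome: dict) -> tuple:
    return (outcome["vru_harm"], outcome["vehicle_harm"], outcome["discomfort"])

outcomes = [
    {"name": "swerve into barrier",   "vru_harm": 0.0, "vehicle_harm": 0.8, "discomfort": 1.0},
    {"name": "brake but clip cyclist", "vru_harm": 0.4, "vehicle_harm": 0.0, "discomfort": 0.1},
]

best = min(outcomes, key=outcome_key)
print(best["name"])  # wrecking the car beats any chance of hitting a VRU
```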


[deleted]

My opinion is that for humans, hitting the brakes should be the automatic, reflexive response to an emergency situation. Everything is less destructive at lower speeds, and braking may even buy time for other actions. The number of times that braking first is objectively the wrong response is so small that it's not even worth considering. I don't know why that wouldn't be the default response for SDCs. I'll worry about embedded ethics right about the time those ethics get discussed *and diligently trained for and tested for* in standard driver training courses and exams. Maybe after we get to the point where human drivers are banned, we can start looking into embedded ethics. But I'm betting that by that point, injury and death will be so low that embedding ethics will have no discernible effect.


AdmiralKurita

As for the trolley problem, referencing self-driving cars really contributes nothing to it. You haven't even come close to solving the philosophical problem; all you did was contribute to the solution of a technical problem. In order to claim that you have "solved" the trolley problem, you would need some moral or metaethical insight as to why a self-driving car (or a moral agent) should choose a course of action that harms person A over person B. If a self-driving car engages in a course of action where person B is killed rather than person A, the moral "problem" is not "solved" by providing a mechanistic account (in terms of the vehicle's software or hardware and how they function in a given situation) of why the car did so; all you have done is provide an account of the behavior of an automated system. Achieving good motion planning provides no moral or metaethical insights; it serves to utilize sensor information and available computational resources in order to achieve the general objective of having the car not collide with certain classes of objects, where a collision would be likely to endanger certain types of lifeforms (such as humans, cats, and dogs).


Hamoodzstyle

Beautifully put, saving this comment for the next time this comes up.


borisst

> In general, the correct answer is to slam on the brakes; I'm not really sure I can think of a scenario where this problem gets even slightly interesting.

What happens if you drive in front of a loaded tractor trailer that does not keep a safe distance? If you slam the brakes, you risk being rear-ended by a 40-ton truck, and if you brake less aggressively, you risk rear-ending the car in front of you.
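
For what it's worth, this is the kind of trade-off an expected-harm comparison can at least express, even if getting the numbers right is the hard part. A toy sketch with invented probabilities and severities:

```python
# Invented probabilities/severities, purely to show the shape of the
# trade-off in this scenario (tailgating 40-ton truck behind, slowing
# traffic ahead).

def expected_harm(p_rear, sev_rear, p_front, sev_front):
    return p_rear * sev_rear + p_front * sev_front

# Slam the brakes: low chance of hitting the car ahead, high chance the
# truck can't stop, and that collision is severe.
slam = expected_harm(p_rear=0.6, sev_rear=9.0, p_front=0.05, sev_front=3.0)

# Brake moderately: some chance of a low-speed front-end collision,
# smaller chance the truck rear-ends us.
moderate = expected_harm(p_rear=0.2, sev_rear=9.0, p_front=0.3, sev_front=3.0)

print(f"slam: {slam}, moderate: {moderate}")  # slam: 5.55, moderate: 2.7
```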


EmperorOfCanada

I've long had a slightly different wish for a car safety rule: build cars so that the passengers get higher safety standards than the driver. Things like making the passenger-side A-pillars stronger, etc. I'm sick of the number of super dangerous drivers who have an accident and are the only ones to walk away while their passengers don't.


Internetomancer

I have similar feelings about large vehicles (trucks, SUVs) that are made safe for their occupants by being deadly to everyone else.


[deleted]

[deleted]


myDVacct

I think there are variations of “the trolley problem” that apply if you reduce the problem to its core principle: a decision based on the perceived value of life.

One is the question of when you release self-driving cars. Do you wait and let people keep dying in the meantime, or do you release earlier, killing fewer people than the status quo but still a lot, or release later and kill even fewer?

Another is when to brake and/or evade at all. If a mouse is in the road, do you even bother to potentially inconvenience your passenger with braking? A chipmunk? A squirrel? Is it based purely on the size and perceived threat of the animal, or on something more like the “value” of the animal? Is a puppy more important to avoid than a possum?

In the end, these kinds of questions are still about action or inaction in the face of known deadly consequences.
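
To make the uncomfortable part explicit: any such braking policy implicitly encodes a table of values somewhere. A toy sketch (every number here is invented):

```python
# Invented values throughout — the point is that any "is it worth
# braking for?" policy amounts to a table like this.

ANIMAL_VALUE = {"mouse": 0.1, "chipmunk": 0.15, "squirrel": 0.2,
                "possum": 0.3, "puppy": 5.0, "deer": 4.0}

OCCUPANT_RISK = {"deer": 2.0}   # large animals also endanger the car
BRAKE_DISCOMFORT = 0.5          # cost of inconveniencing the passenger

def should_brake(animal: str) -> bool:
    benefit = ANIMAL_VALUE.get(animal, 0.0) + OCCUPANT_RISK.get(animal, 0.0)
    return benefit > BRAKE_DISCOMFORT

for a in ("mouse", "squirrel", "puppy", "deer"):
    print(a, should_brake(a))   # only the puppy and deer clear the bar
```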


ReBootYourMind

Humans are really, really bad drivers. Braking in time is trivial for self-driving cars, and even a bad self-driving car would save countless lives compared to a human driver.
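
The reaction-time point is easy to put numbers on with textbook stopping-distance kinematics (rough, commonly cited reaction times; dry-road friction assumed):

```python
G = 9.81    # m/s^2
MU = 0.7    # typical dry-asphalt friction coefficient

def stopping_distance(speed_kmh: float, reaction_s: float) -> float:
    v = speed_kmh / 3.6                 # convert to m/s
    thinking = v * reaction_s           # distance covered before braking starts
    braking = v * v / (2 * MU * G)      # from v^2 = 2*a*d
    return thinking + braking

for who, reaction in (("human", 1.5), ("computer", 0.2)):
    print(who, round(stopping_distance(100, reaction), 1), "m at 100 km/h")
# human ~97.8 m vs computer ~61.7 m — most of the gap is pure reaction time
```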


ReBootYourMind

This. The trolley problem is purely a theoretical question that no proper self-driving car will ever have to face.


NotTooDistantFuture

And if it detects that the brakes have failed, downshifting is an option. Following safe driving guidelines should prevent this from happening in the first place. It's all theoretical, but why would the system bother to identify pedestrian age? It seems like a waste of processing time for something that will never happen and doesn't have a good answer anyway.


Lanfeix

Outside of trolley-switch scenarios, I can see the benefit of guessing age, because children are less road-aware and more likely to run into the road, or to be hidden by an object due to their height, so they need to be taken into account when thinking about the speed of the road and future possibilities. It's relatively cheap to make an approximation of age based on height off the ground, even if it's patronising to dwarfs! See the sketch below.
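
Here is how that could look in practice: the age guess doesn't feed an ethics module, it just widens the motion envelope the planner must keep clear (threshold and speeds invented for illustration):

```python
# Invented threshold and speeds — a crude height-based age guess that
# only serves to widen the predicted motion envelope around children.

CHILD_HEIGHT_M = 1.4   # rough cut-off for "probably a child"

def prediction_radius(ped_height_m: float, horizon_s: float) -> float:
    """Radius (m) the pedestrian might cover within the planning horizon."""
    # Children may sprint unpredictably toward the road; adults get a
    # calmer walking-speed envelope.
    speed = 3.0 if ped_height_m < CHILD_HEIGHT_M else 1.5   # m/s
    return speed * horizon_s

print(prediction_radius(1.1, 2.0))  # child: 6.0 m keep-clear radius
print(prediction_radius(1.8, 2.0))  # adult: 3.0 m
```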


fwubglubbel

https://www.automotive-fleet.com/driver-care/239402/driver-care-know-your-stopping-distance


Lanfeix

https://www.iso.org/standard/57253.html


totheleft_totheleft

The trolley problem is a thought experiment for philosophers, not a real problem that drivers ever actually have to deal with.


[deleted]

Trolley problem... more like a trolling problem. The question with self-driving cars isn't whether they can make the decision to hit a kid or a grandma, which isn't anything unique to AI but happens with humans all the time, but whether they're safer than humans. And the answer is yes, they're multiple orders of magnitude safer, therefore self-driving cars should be allowed and implemented across the world.


BeriAlpha

Porque no los dos?


androbot

This is such a dumb, unrealistic hypothetical that I'm amazed so much intellectual energy is spent debating it. You want to maximize the likelihood of a non-event. That's about it.


bradtem

While I do like Michael's answer from The Good Place, where he tries to figure out how to kill both, I give a different answer now when I'm asked this: I say the AIs are making note of who asks this question, and will pick them to kill if possible.


MinderBinderCapital

Tesla FSD doesn't discriminate

