• Self-Driving Uber Car Kills Arizona Pedestrian
It was inevitable.
Got any sources for those claims? 10 years behind? What are you talking about? Maybe Uber is, sure, whatever, but what about Google's efforts? How can you say it's unverifiable? The software is still in development; they would be logging EVERYTHING. You're telling me that after an accident they just sift through that data and go "gee, it's a mystery why this happened, guess we'll never know" lol? And unstable? How do you know that? Look, I'm not saying self-driving cars are perfect, but what you're saying is absurd to me. Please explain further.
Damn, I bike to class on that road every day
I don't think it is wise to have self-driving car software and hardware that isn't publicly auditable. Maybe not open source per se, but in the same way that an architect has to meet certain standards, automated driving software should be independently checked. It's potentially even more important to verify than the design for a bridge, given the millions of vehicles that could end up running it.
Auto companies would fight tooth and nail against anything that requires this. Trade secrets are very important to them.
They're currently relying on ML approaches (read: neural networks) to interpret the sensor data, especially camera data. With LIDAR you can use simpler algorithms to detect obstacles and make sure the car at least doesn't smack into anything, but there's a pretty big shortage of those sensors, and afaik they're also expensive. Presumably they also rely on ML to predict things like unseen pedestrians, but I don't think any of them publish much about specifics.

The problem is that a CNN (convolutional neural network) is pretty much a black box. You create one by twiddling the numbers inside until the right thing comes out (based on the training set). You can't really verify that it gives correct output even under limited conditions; you can only test it and hope that it works. If something goes horribly wrong in one of those, why is an absolute mystery, and all you can do is add more data to the training set, which may cause it to fail on inputs that worked before. They're also horribly unstable, in the sense that extremely small input variations can produce radically different outputs.

Of course they don't rely on ML for everything; presumably a lot of the actual driving happens through hardcoded logic you could verify. But there is no other known way of going from camera data (and recognising objects in LIDAR data) to something that logic can deal with. And not only are CNN-based approaches fairly bad at outputting reliable confidence scores (so far, at least); complexity theory says they can't perform object recognition on their own (assuming the problem is at least as hard as boolean satisfiability).

Until someone finds a way to do computer vision (both camera and LIDAR) and prediction that can at least partially be proven reliable, and that is somewhat auditable in case of an accident, self-driving cars shouldn't be on the roads.
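To make the instability concrete, here's a minimal sketch of the classic fast gradient sign method (FGSM), one standard way to generate the adversarial inputs being described. It assumes PyTorch and an off-the-shelf torchvision classifier; the epsilon value and the choice of resnet18 are purely illustrative, not anything any AV company actually runs.

[code]
# Minimal FGSM sketch: nudge each pixel slightly in the direction that
# increases the loss, and watch the classifier's prediction change.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a camera frame
label = model(image).argmax(dim=1)  # the model's own prediction

loss_fn(model(image), label).backward()

epsilon = 0.01  # an imperceptibly small per-pixel change
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

print("before:", model(image).argmax(dim=1).item())
print("after: ", model(adversarial).argmax(dim=1).item())
# The two predictions frequently differ even though the images are visually
# near identical: exactly the instability described in the post above.
[/code]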
I read that source you posted, and watched the video. Nowhere does it say she was cycling at the time. It did say she wasn't on a crosswalk though. On the other hand, from the NYT source: [quote]Sgt. Ronald Elcock, a Tempe police spokesman, said during a news conference that a preliminary investigation showed that the vehicle was moving around 40 miles per hour when it struck Ms. Herzberg, who was walking with her bicycle on the street. He said it did not appear as though the car had slowed down before impact and that the Uber safety driver had shown no signs of impairment. The weather was clear and dry.[/quote]
Actually, Google managed to exemplify this pretty well with reCAPTCHA. Ever had to fill out one of those image captchas with cars? Yeah, guess what that's for. They're actually delusional enough to think that it'll work if they just throw enough data into the pile. (Not that Google was involved here, but Uber isn't gonna be much better.)
What I wanted to say is that a car should not be programmed to expect random stuff jumping onto the road.
Uhhhhhhh, yes it should. Why shouldn't it?
Throwing more data onto the pile is an excellent way to build training sets and to do things like detect what is and is not a street sign for the purpose of map-making. What are you talking about?
What? When I drive I'm expecting something to fall/run into the road at any moment. A self-driving car should do the same.
That's the issue right now: there are no standards. The standards are still being developed and tested, and there's no way you can test technology like this except in the real world. The real-world cost of this technology is that some people who don't quite follow pedestrian or cycling laws may get hurt or killed. Asking for standards to be in place for SD cars now would be like asking the Wright brothers for safety standards after they touched down on their first flight.
Yes and no. They know how to get decent results with ML. Does that mean they have any clue what they're doing? Evidently, fuck no. They and everyone following started believing their own "AI" bullshit to the point of insanity. ML has almost nothing to do with "intelligence" (ostensibly you could use it inside a system that would qualify, though), and the image recognisers that started the fad still have no ability to even "see" objects the way they describe. For the most part they're simply looking for unrelated patterns and textures (clearly visible by looking at failure cases, or by using adversarial generation). I'll never understand why people think of Google as some bastion of competence.
Perhaps because they have the ability to pick nothing but the most competent workers, and they've proven time and time again that they're exceedingly good at what they do? I literally have no idea what you're talking about right now; Google has made no bullshit claims about ML (as far as I am aware).
https://files.facepunch.com/forum/upload/238785/f74e0d5a-7257-4c56-aaec-32060bc02d24/image.png Seems to be doing about as good a job of semantic segmentation as the ol' mark 1 human. The "unrelated patterns and textures" part sounds like overfitting? If that is the case, isn't that a fantastic argument for preferring larger quantities of lower-quality labels? Also, read the interviews with the Go masters who were beaten by AlphaGo. They remark on its forward planning and creativity, which to me are marks of intelligence. If you are talking about artificial general intelligence, then sure, we're not there yet; the field is full of open problems. You seem to have quite a derisory attitude toward Google, but I'm not sure what they have to prove to you.
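For reference, this kind of per-pixel labelling is a few lines with an off-the-shelf model these days. A rough sketch, assuming PyTorch/torchvision and using DeepLabV3 purely as an example (the screenshot above could be from any number of models):

[code]
# Sketch: off-the-shelf semantic segmentation with a pretrained DeepLabV3.
import torch
from torchvision.models.segmentation import (
    deeplabv3_resnet50,
    DeepLabV3_ResNet50_Weights,
)

model = deeplabv3_resnet50(weights=DeepLabV3_ResNet50_Weights.DEFAULT).eval()

frame = torch.rand(1, 3, 520, 520)  # stand-in for a normalised camera frame
with torch.no_grad():
    scores = model(frame)["out"]    # (1, 21, 520, 520): per-pixel class scores

mask = scores.argmax(dim=1)         # per-pixel class ids (person, car, ...)
print(mask.shape, mask.unique())
[/code]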
I feel like writing off autonomous vehicles (and apparently now machine learning as a whole? lol) over a single death is pretty premature. Considering how many autonomous vehicles are driving millions of miles in complex, busy cities and roadways, and that this is the first (?) pedestrian death, that seems pretty safe statistically. Maybe we should let them investigate a bit and perform a post-mortem before we completely write off autonomous vehicles as this impossible black box. I'm not an expert in neural nets, but I'm pretty positive the advent of self-driving cars in the past five years was the result of a bunch of junior engineers cluelessly poking at a Python script until it worked.
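Whether "safe statistically" holds really hinges on the mileage denominator. A back-of-the-envelope sketch, where the human baseline of roughly 1.2 deaths per 100 million vehicle miles is the oft-quoted US NHTSA figure and the AV mileage is just a placeholder guess:

[code]
# Rough per-mile fatality comparison; both inputs are ballpark numbers.
human_deaths_per_mile = 1.2 / 100_000_000  # ~NHTSA US rate, all road deaths
av_miles = 10_000_000                      # placeholder: total AV test miles
av_deaths = 1                              # the Tempe fatality

av_deaths_per_mile = av_deaths / av_miles
print(f"human: {human_deaths_per_mile:.2e} deaths/mile")
print(f"AV:    {av_deaths_per_mile:.2e} deaths/mile")
# With so few AV miles driven, a single death dominates the naive rate;
# the sample is far too small to conclude much either way.
[/code]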
Police are saying that the crash may not be Uber's fault. Also worth noting that the NTSB is looking into this incident as well. https://www.ntsb.gov/investigations/Pages/HWY18FH010.aspx https://www.sfchronicle.com/business/article/Exclusive-Tempe-police-chief-says-early-probe-12765481.php
Do we have any footage of the incident yet? I'd be interested in seeing how it was difficult to avoid the collision. Did she randomly jump from the sidewalk across a lane of traffic to the bike lane?
From the fact that the car didn't even slow down (i.e., not even the human safety driver pressed the brake pedal), it does seem like this is a tragic accident that would have happened no matter who was behind the wheel. Hopefully Uber logs this kind of data, so we will know for sure whether the pedal was pressed.
Jaywalking is almost never enforced here.
Dude, what the hell, lmao. If we didn't need to worry about plastering jaywalkers, we would have had self-driving cars 10 years ago. But we do. Because that's obvious.
Self-driving cars miss shit all the time; there was the one where a kangaroo completely threw off a car's sensors.
It's too bad something like this happened. But you have to look at it from a different angle: it was clear that something like this would happen sooner or later, and only this way can the software in the cars be adjusted so that it won't happen again. I feel sorry for the woman who died in the accident, and I hope this never happens again.
I didn't say they never miss stuff, only that they should be able to avoid almost anything in almost any condition. Also, that was in June of 2017, and Volvo was aware of the issue, which appeared while they were still developing the system.
That pretty much confirms what I thought at first. It's almost a positive thing for self-driving cars, because cases like this could probably only be prevented by the number of sensors a self-driving car has. Too bad it couldn't be prevented in this case, but the data could still be useful for the future.
Yehhh, you're right, what was I thinking yesterday when I wrote that shit. Yes, the car should stop when it sees shit on the road and be aware of its surroundings. Agree 100%. I was more thinking of the situation where someone jumps onto the road where they really shouldn't: should the car brake 100%? I mean, it risks the passengers inside and the cars behind. But OK, the cars behind should keep a safe distance anyway, and passengers should be buckled into their seats anyway. Still, braking 100% at the wrong moment can break a passenger's nose/face.
Yeah, this is true: the result of training is a black box. However, this is true of humans too. We can't do formal verification like with non-ML software or with chip design, but we can test extensively. If I'm about to go under a surgeon's knife, I can't look inside his head and understand the complex processes that control his hands; I can only trust the system that educated and tested him, and the evidence of his past successes. I do feel like some engineers/researchers are a bit too eager to reach for ML when other approaches work as well or better; Tobba is right to say it is a fad. Hopefully we will see some interesting results, though, from the huge injection of grant dollars and VC funding.
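"Test extensively" can even be made quantitative. A toy sketch of the idea, where the mileage figure is a made-up example and the rule-of-three bound is a standard statistical shortcut, not anything AV companies necessarily use:

[code]
# Sketch: how much can testing alone tell us about a system's failure rate?
import math

def failure_rate_upper_bound(failures: int, trials: int) -> float:
    """Approximate 95% upper confidence bound on the true per-trial failure rate."""
    if failures == 0:
        return 3.0 / trials  # the "rule of three" for zero observed failures
    p = failures / trials
    return p + 1.96 * math.sqrt(p * (1 - p) / trials)  # normal approximation

# Even a flawless 2,000,000-mile test run only bounds the per-mile
# failure rate at about 1.5 per million miles, with 95% confidence.
print(failure_rate_upper_bound(failures=0, trials=2_000_000))
[/code]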
How many million miles have non-self-driving cars driven?
It was bound to happen, as the tech is still early; that was a really easy call to make. What I'm worried about is that people will want to ditch this technology or decry it as too dangerous because of one accident, when all it needs is more investment and research.