Google Cars: 1 million miles with 0 accidents upsets media
112 replies
[QUOTE=rampageturke 2;47747801]I fail to see how being rear ended is ALWAYS the fault of the driver behind.[/QUOTE] The fact that they weren't following at a safe distance to stop in time.
[QUOTE=Levelog;47747844]The fact that they weren't following at a safe distance to stop in time.[/QUOTE] I guess in theory someone could swerve into the lane directly in front of you and then slam on the brakes, but that seems like something which would be very rare, because it would almost have to be intentional.
[QUOTE=rampageturke 2;47747801]I fail to see how being rear ended is ALWAYS the fault of the driver behind.[/QUOTE] Depends on your definition of "at fault." Legally and for insurance purposes the only way you cannot be at fault for a rear-end collision is if someone backs into you or strikes your vehicle and then proceeds to push you into someone else. Now, if we're going for more of a causation definition, there's lots of ways you can be at fault for causing a rear end accident, like slamming on your brakes for no goddamn reason.
[QUOTE=Levelog;47747844]The fact that they weren't following at a safe distance to stop in time.[/QUOTE] People brake checking isn't uncommon, and a lot of the time in situations where you get overtaken, the gap that results between you and the car in front is tiny. [editline]18th May 2015[/editline] Also when merging traffic.
[QUOTE=AntonioR;47747758]There are several problems here: - First, those people will lose their jobs (which possibly pay more than the average trucker's) - Then Discovery channel won't be able to make an adrenaline-boiling show like World's Toughest Trucker or something - Then you won't get your daily dose of truck entertainment You see how many people this affects :v:[/QUOTE] This is Discovery you're talking about. They'll manage to make footage of a driver fucking around with his phone whilst the truck drives itself seem much more hype than it actually is.
[QUOTE=Antdawg;47741848]Makes you wonder how many accidents they avoided by having the driver take over from the computer at a moment's notice I'm not saying this to diss self-driving cars, but a faultless record after a million miles is a huge thing. I haven't read anywhere about how many times a driver has had to override the computer because I don't think Google has published that.[/QUOTE] Wow this guy.
[QUOTE=Fourm Shark;47748001]I can't tell you how many times I've nearly been in a collision because some idiot didn't check his blind spot before lane changing. Self driving cars can't come soon enough.[/QUOTE] Worst case are those who don't know the traffic laws. A crazy woman (no license, I guess) came within 8~10cm of hitting me in a roundabout, shortly followed by her two kids, who were following her in a second car. If I hadn't fixed the old brakes on that bike ...
[QUOTE=deadoon;47741919]Probably 0, since human reaction time is usually around 1/4 second, compared to a computer's reaction time in the milli- or microseconds.[/QUOTE] [QUOTE=willtheoct;47746999]Wrong. They do exactly as you tell them. They are not capable of doing otherwise. They also are [B]much[/B] faster than humans in reaction time. To say a computer isn't flawless carries a tone of "computers make mistakes too, y'know", which is the most ludditic (if that is an adjective?) sentence, up there with "One time I was using a calculator and it came up with the wrong number"[/QUOTE] [QUOTE=Rapscallion92;47748004]Wow this guy.[/QUOTE] This is why all aircraft are fully automated and pilots as a profession are obsolete. As we all know, computers do exactly what you tell them and have very fast reaction times, and that's all it takes to handle the complexity of a task like flight perfectly with no human input. There has never, ever been a case of a pilot needing to take over for autopilot when something unexpected arose that the autopilot couldn't handle, nor has there ever been a case of autopilot making a wrong decision, especially during testing and prototyping. Anyone who would [I]dare[/I] to ask if a [I]prototype[/I] has ever had bugs or issues that required a human to take over must clearly be some sort of Luddite. :rolleyes: Some people are getting awfully defensive over a reasonable question, is this just techno-fetishism or what? As far as I'm concerned self-driving cars can't come soon enough but wtf is with this 'you're a Luddite if you imply computer control might not be 100% perfect yet'?
[QUOTE=catbarf;47748446]This is why all aircraft are fully automated and pilots as a profession are obsolete. As we all know, computers do exactly what you tell them and have very fast reaction times, and that's all it takes to handle the complexity of a task like flight perfectly with no human input. There has never, ever been a case of a pilot needing to take over for autopilot when something unexpected arose that the autopilot couldn't handle, nor has there ever been a case of autopilot making a wrong decision, especially during testing and prototyping. Anyone who would [I]dare[/I] to ask if a [I]prototype[/I] has ever had bugs or issues that required a human to take over must clearly be some sort of Luddite. [/QUOTE] To be fair, flight is a lot different and more complex than driving on a road.
[QUOTE=catbarf;47748446]This is why all aircraft are fully automated and pilots as a profession are obsolete. As we all know, computers do exactly what you tell them and have very fast reaction times, and that's all it takes to handle the complexity of a task like flight perfectly with no human input. There has never, ever been a case of a pilot needing to take over for autopilot when something unexpected arose that the autopilot couldn't handle, nor has there ever been a case of autopilot making a wrong decision, especially during testing and prototyping. Anyone who would [I]dare[/I] to ask if a [I]prototype[/I] has ever had bugs or issues that required a human to take over must clearly be some sort of Luddite. Some people are getting awfully defensive over a reasonable question, is this just techno-fetishism or what? As far as I'm concerned self-driving cars can't come soon enough but wtf is with this 'you're a Luddite if you imply computer control might not be 100% perfect yet'?[/QUOTE] The problem of automating flying is much harder; there's no points of reference, navigation is in 3D space, not all obstacles are detectable (turbulence), speeds are much greater, and the aircraft can't simply stop if the automated systems detect a failure in a sensor. A car, on the other hand, is navigating through what is effectively a 2D environment using sensors to directly detect its environment and any potential obstacles. Now, the implication was that Google is effectively lying, and perhaps they are; the point of this thread is that some media outlets, just like the post that started this discussion, presume (or give the impression that) Google is twisting the data to be favorable. Also, you're the only one that mentioned "perfect"; the point isn't that self-driving cars are flawless (they aren't), it's that they can be (and perhaps already are) less flawed than humans.
[QUOTE=OfficerLamarr;47749015]To be fair, flight is a lot different and more complex then driving on a road.[/QUOTE] [QUOTE=DaMastez;47749129]The problem of automating flying is much harder; there's no points of reference, navigation is in 3D space, not all obstacles are detectable (turbulence), speeds are much greater, and the aircraft can't simply stop if the automated systems detect a failure in a sensor. A car, on the other hand, is navigating through what is effectively a 2D environment using sensors to directly detect its environment and any potential obstacles.[/QUOTE] There's a reason we've had autopilot for decades but computer-driven cars are such a new thing. Flight is a clean environment where instruments provide all the necessary data for decision-making, the kind of situation perfect for computer control, and even still pilots are considered necessary for handling abnormal situations. Nobody's questioning computers as decision-makers, but a computer can only make decisions on the input it receives, and how to process the enormous amount of visual information a human can easily take in has always been a cutting-edge issue in robotics. What Google is doing is basically unprecedented, and it's not unreasonable to question whether the technology as it stands is 100% reliable on its own.
[QUOTE=ridinmybike;47741865]this is just the beginning, next the google cars will take our jobs[/QUOTE] You mean they will transform into humanoid shapes and taek err jerbs? [editline]18th May 2015[/editline] [QUOTE=DaMastez;47749129]The problem of automating flying is much harder; there's no points of reference, navigation is in 3D space, not all obstacles are detectable (turbulence), speeds are much greater, and the aircraft can't simply stop if the automated systems detect a failure in a sensor. A car, on the other hand, is navigating through what is effectively a 2D environment using sensors to directly detect its environment and any potential obstacles. Now, the implication was that Google is effectively lying, and perhaps they are; the point of this thread is that some media outlets, just like the post that started this discussion, presume (or give the impression that) Google is twisting the data to be favorable. Also, you're the only one that mentioned "perfect"; the point isn't that self-driving cars are flawless (they aren't), it's that they can be (and perhaps already are) less flawed than humans.[/QUOTE] You always have a point of reference when using GPS or INS, which are commonly available. And for the point of sensor failures, that is why you have redundancy: even if a computer fails, the other two in the plane still work. Just think about it: is it easier to program a pathfinding algorithm that clears all these obstacles, or an algorithm which controls heading and altitude? That is why we got autopiloted airplanes many, many decades before we even started on autopiloted cars.
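The heading-and-altitude half of that comparison really is the easy part. A minimal sketch of a proportional heading-hold loop, the core of a classic autopilot mode (the gain and clamp values are made-up illustrative numbers, not from any real autopilot):

```python
# A plain proportional controller is enough to hold a heading.
# Gain and command limits below are illustrative assumptions.

def heading_hold(current_deg: float, target_deg: float, gain: float = 0.5) -> float:
    """Return a steering command (degrees) toward the target heading."""
    # Wrap the error into -180..180 so the aircraft takes the shortest turn.
    error = (target_deg - current_deg + 180) % 360 - 180
    command = gain * error
    # Clamp to a maximum commanded deflection of +/-25 degrees.
    return max(-25.0, min(25.0, command))

print(heading_hold(350, 10))   # → 10.0 (small right turn across north)
```

Obstacle-aware pathfinding, by contrast, needs perception, mapping, and search over an environment full of unpredictable actors, not just one error term fed back into a control surface.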
[QUOTE=catbarf;47749154]There's a reason we've had autopilot for decades but computer-driven cars are such a new thing. Flight is a clean environment where instruments provide all the necessary data for decision-making, the kind of situation perfect for computer control, and even still pilots are considered necessary for handling abnormal situations. Nobody's questioning computers as decision-makers, but a computer can only make decisions on the input it receives, and how to process the enormous amount of visual information a human can easily take in has always been a cutting-edge issue in robotics. What Google is doing is basically unprecedented, and it's not unreasonable to question whether the technology as it stands is 100% reliable on its own.[/QUOTE] Now, the question is: how long have we had computers that can learn actions by doing them, from a basic instruction set? Also, I had already noted the flaw in his argument that a person might be able to avoid an accident by taking over when they see a problem. The flaw is that the human will always be behind the computer time-wise; when they take over, their action might be wrong or contradict what the computer was doing during that quarter second. And a quarter second is the perfect-world figure (press the button when it lights up); real-world situations require more processing time, making it even longer. Combine that with a simple turn-left-vs-turn-right comparison: if the computer spent a quarter second turning left and the human instantly swaps to turning right, it takes another quarter second just to undo the turn, wasting a full half second that could have gone toward avoiding something. It doesn't help that you quote me on this when a couple of posts later I specifically state this: [QUOTE=deadoon;47741968]Computers may not be flawless, but their system is a learning AI type computer. 
[B]The more situations where it detects a near accident or an actual one, the better it will be at avoiding them.[/B] Also, there is usually nothing you can do about someone being a dumbass and rear-ending you. If a person takes over for the car, that means there is now an increased delay between something happening and it being reacted to. The current AI is not perfect (there is always some sort of compromise when dealing with any system) and is designed to be a bit overly cautious, which means it will take longer to get places than an aggressive driver (technically a flaw, but also a design compromise), yet will get into fewer accidents.[/QUOTE] The bolded point is a key component of my argument for why this is a good idea.
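The quarter-second point translates directly into distance. A back-of-the-envelope sketch (all reaction-time figures here are illustrative assumptions, not measurements from the thread):

```python
# Distance covered before braking even begins, for various reaction delays.
# The delay values are rough illustrative assumptions.

def reaction_distance(speed_kmh: float, reaction_s: float) -> float:
    """Metres travelled during the reaction delay, before brakes engage."""
    speed_ms = speed_kmh / 3.6          # convert km/h to m/s
    return speed_ms * reaction_s

for label, delay in [("computer (~10 ms)", 0.01),
                     ("human, best case (~250 ms)", 0.25),
                     ("human, distracted (~1.5 s)", 1.5)]:
    metres = reaction_distance(100, delay)   # at 100 km/h
    print(f"{label}: {metres:.1f} m before braking starts")
```

Even the best-case human delay costs roughly a car length at highway speed before braking starts, while the computer's head start is measured in centimetres.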
[QUOTE=smidge146;47741886]And it will probably bring on a lot more jobs.[/QUOTE] So all the drivers, chauffeurs, taxi drivers, concierges, and so on, everyone whose job is related to driving (hell, maybe soon even flying or sailing!), will just go [QUOTE]FUCK IT! I'm off to greener pastures![/QUOTE] and hop seamlessly into the IT/experimental engineering and AI industry under Google's watchful eye, right? I don't think so. A driver knows how to drive, not how to engineer and implement advanced algorithms for driverless vehicles.
[QUOTE=catbarf;47749154]There's a reason we've had autopilot for decades but computer-driven cars are such a new thing. Flight is a clean environment where instruments provide all the necessary data for decision-making, the kind of situation perfect for computer control, and even still pilots are considered necessary for handling abnormal situations.[/QUOTE] That's the point: airplanes have had autopilot for a long time, but autopilots aren't designed to fly the plane on their own, and they can't deal with "abnormal" situations; they aren't meant to replace pilots, they assist them. You list flying having a "clean environment where instruments provide all of the data" as a plus, which I would argue is the biggest issue with flying; it's primarily based off instruments, especially for autopilot, and instruments can be wrong or misleading. Sure, visual data is "messy," but it's also real; it's a real picture of the environment around you, the environment you're navigating through. Also, humans might "take in" a lot of visual information, but they don't process it all that well. A lot of it gets skipped or ignored, lost because attention was focused elsewhere, even for a short period of time. It also helps that the car doesn't necessarily need to identify everything in great detail; it just needs to pick out what it doesn't want to hit, which isn't as hard of a task. I do agree with your general point; questioning how good Google's self-driving car is, is something which needs to be done. Accepting whatever Google says is just as bad as dismissing whatever Google says as a lie. I think part of it is (as other threads have shown) that there are people who simply don't want self-driving cars for any reason, and those who come off as trying to simply outright dismiss the viability of self-driving cars tend to be, consciously or not, grouped into that category. [QUOTE=Impact1986;47751530]You always have a point of reference when using GPS or INS that is commonly available. 
And for the point of sensor failures, that is why you got redundancy. Even if a computer fails, the other 2 in the plane still work. Just think about it: Is it easier programming a pathfinding algorithm to clear all these obstacles or an algorithm which controls heading and altitude. That is why we got autopiloted airplanes many many decades before we even started with autopiloted cars[/QUOTE] I would argue that autopilot being around for so long while still requiring a human pilot is simply an indication the technology hit a roadblock at some point; that creating an airplane that can fly itself, including dealing with non-ideal conditions, is hard. Though I would suspect liability, and the general stakes involved, have a lot to do with it.
[QUOTE=Antdawg;47741848]Makes you wonder how many accidents they avoided by having the driver take over from the computer at a moment's notice I'm not saying this to diss self-driving cars, but a faultless record after a million miles is a huge thing. I haven't read anywhere about how many times a driver has had to override the computer because I don't think Google has published that.[/QUOTE] Statistically, 3 or 4 of those, considering that more miles were driven without a driver.
[QUOTE=coyote93;47756163]Why would anyone buy a google car? Driving is fun.. The only use I can see for it is as taxi's or something.[/QUOTE] It's really just more of an annoyance for me now for the daily commute. If I could remote into a system and get some work done while my car drove me to work, I'd prefer that.
[QUOTE=Levelog;47757293]It's really just more of an annoyance for me now for the daily commute. If I could be remoted into a system getting some work done while my car drove me to work, I'd prefer that.[/QUOTE] I wouldn't be able to get work done anyways. I get motion sickness if I'm not driving. Trying to do other things only makes it worse.
[QUOTE=coyote93;47756163]Why would anyone buy a google car? Driving is fun.. The only use I can see for it is as taxi's or something.[/QUOTE] Not everyone enjoys driving. There are plenty of people who don't want to be driving, or shouldn't be.
I think the argument that a computer can have errors isn't even relevant. Even if X people died after 10 years of automated cars, the death count is going to be so insanely low compared to humans driving that it doesn't even become an argument. No matter how you put it, automated cars are [U]objectively[/U] superior drivers, at least until the element of human instinct comes into play: variables that computers can't improvise around.
[QUOTE=catbarf;47748446]Some people are getting awfully defensive over a reasonable question, is this just techno-fetishism or what? As far as I'm concerned self-driving cars can't come soon enough but wtf is with this 'you're a Luddite if you imply computer control might not be 100% perfect yet'?[/QUOTE] The problem is when the aforementioned Luddites spread their beliefs to others who aren't as informed on the subject, which eventually leads to fear and uncertainty, and a high risk of: -Automated cars getting banned in certain states -Unnecessary or even destructive regulations -Unnecessary amounts of testing before coming to market -Negative stigma, which keeps consumers buying manual cars and keeps the rate of accidents high -That stigma can also put a dent in Google's profits Where has this happened before? Google Glass. By calling out the Luddites and exposing the many flaws in their thinking, we can help strengthen advances in technology that would otherwise be banned.