So We Know This Won't Have a Manual Transmission Available - Page 2 - Dorkiphus.net

#11 · 03-27-2018, 01:24 PM · smdubovsky (Silver Spring, MD)

Quote:
Originally Posted by HughA44s
I am trying to understand the problem they are trying to solve here.
1) Safer cars
2) More free time.

Already happened to commercial airplanes. Almost all of it is autonomous now, w/ massively increased safety. In most situations, true autonomous cars w/ sensor arrays (and not just a Tesla driver 'assist' w/ a simple camera or two) are probably safer than the majority of drivers now. They don't actually have to achieve zero accidents to be successful (though public perception may not judge it that way). They just need to achieve a safety rate better than humans. The rise of the machines is upon us.
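
Back of the envelope, "better than humans" is a measurable bar. A minimal sketch in Python (the baseline of roughly 1.2 fatalities per 100M vehicle miles is an approximate published US figure, and the statistics are the standard rule of three; neither comes from this thread) of how many fatality-free miles a fleet would need before it could claim that bar with 95% confidence:

Code:
# Rule of three: zero fatalities observed over n miles puts the 95% upper
# confidence bound on the true fatality rate at roughly 3 / n. To claim the
# fleet beats the human baseline we need 3 / n < baseline, i.e. n > 3 / baseline.
HUMAN_BASELINE = 1.2e-8  # ~1.2 fatalities per 100M miles (approximate US figure)

miles_needed = 3 / HUMAN_BASELINE
print(f"fatality-free miles needed: {miles_needed:,.0f}")  # 250,000,000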

I *ENJOY* most driving. I never txt/talk on a phone. The radio isn't important. Heck, I don't even talk to real passengers much. But I have to admit that if I had a long, boring, traffic-laden commute, I could think of many more productive things to do with my time than sitting there swearing at other ignorant drivers.
__________________
Stephen
www.salazar-racing.com
1970 914/6 - 3.0L GT
1983 911SC - 3.32L IROC
1984 930
2008 S2R1000, dirt bikes (some gas, some electric), Sherco trials bike
Sold: 2001 Boxster (hers), 2003 996tt x50 , SpecE30, 1996 E36M3 GTS2 racecar, 2015 Mustang GT

#12 · 03-27-2018, 03:37 PM · Croc R (Chevy Chase, MD)

Quote:
Originally Posted by HughA44s
Interesting in light of a driverless Uber car recently mowing down and killing a lady in a crosswalk.
From the overhead photo I saw of the accident scene, there is no visible crosswalk. Nevertheless, the car should have detected the pedestrian, who was walking her bicycle across a multi-lane divided road. Also, there was a backup driver in the car, and he was apparently distracted.
__________________
Harleigh
(ex-'60 356B Roadster, '70 914-6, '85 911, '94 968, 2012 Cayman R)

#13 · 03-27-2018, 08:51 PM · HughA44s (Woodbridge, VA)

These two posts illustrate the difficulty of developing large-scale autonomous systems (in this case a large-scale driverless-car network, necessary for the "rise of the machines" scenario) and the differences in the engineering issues faced, especially from an observability/controllability perspective. In the first case (aircraft landing systems), all of the parameters and events necessary for a successful landing can be detected and/or controlled. The NTSB and the industry have ensured this is so through interface documents, procedures, system design, regulations, etc. Unobservable or random events have been reduced to an absolute minimum. The landing of an aircraft is a controlled event, isolated to interactions with defined systems. Case in point: in most situations the landing of an aircraft is not subject to random events such as a lady walking across the runway with a bicycle, by purpose and design. It is interesting to note that I can get these highly controlled closed systems to fail 99% of the time if I introduce events outside their ability to observe and control. In this case the system is defined as the airport and the airplane in the act of landing. This example is not a good analogy to a system of driverless cars (which has to react to an unending series of random events) for that reason. Having said that, the example of the aircraft belly-flopping at SF and the Uber incident do have one thing in common: the person acting as backup to prevent the system from failing was a moron and trusted the system too much.

Now, with the above case determined to be an incorrect analogy from an observability/controllability perspective, let's look at the Uber incident. For a minute, let's define AI as being able to recognize an event (lady with a bicycle in front of me) and react correctly (not running her down) without human intervention. That is the only way to build a system that can operate successfully in an environment with a large number of random and unpredictable events, conditions, objects, etc.

Let's assume (hope) that there is a requirement present that roughly states: the AI subsystem shall recognize objects which it does not want to run down.

I am sure the accident report will address some of the following as causes:

1. The sensor was at fault because it failed to pick up the photons reflected by the lady and turn them into electrons. Not likely.

2. The image produced by the signal-processing system was not recognized as a lady with a bicycle, and therefore the car did not know it should not run her down. Very likely. This could be because the person loading the object images (for comparison and matching, an AI function) into the AI system never contemplated that it would have to recognize a lady with a bicycle: a lady, yes, but with a bicycle, no. At issue here is the vast number of objects, and orientations of those objects, that we do not want the Uber car to run down, all of which have to be defined, characterized for orientation, digitized, etc. Funny thing about AI systems: they have to be told what to recognize. I am sure these systems have progressed, but they still have a big issue with unpredictable objects.
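
To make "they have to be told what to recognize" concrete, here is a toy closed-set recognizer (hypothetical labels and matching logic, not Uber's actual pipeline): anything that was never enrolled comes back as unknown:

Code:
# Toy closed-set recognizer: it can only label what someone enrolled.
KNOWN_OBJECTS = {"pedestrian", "car", "truck", "bicycle"}  # hypothetical label set

def classify(detection: str) -> str:
    """Return the label if enrolled, else 'unknown'."""
    return detection if detection in KNOWN_OBJECTS else "unknown"

print(classify("pedestrian"))               # 'pedestrian'
print(classify("pedestrian_with_bicycle"))  # 'unknown': never enrolled, so
                                            # downstream logic may mishandle it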

3. The AI system did not recognize that a lady with a bicycle could exist outside of a crosswalk. Most likely case. In short, the Uber car was programmed to recognize that a crosswalk always has a white line for detection and that it must stop there. This would be a simple AI case in terms of observability/controllability (white line: stop; no white line: go). I can easily imagine some SW engineer at Uber coming up with this and getting promoted. After all, it was the lady's fault for crossing the road in front of an Uber car where there was no white line. We know that these systems like to follow road markings.
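
That kind of shortcut is easy to sketch, and the failure mode falls right out of it (a hypothetical strawman rule, not Uber's documented logic):

Code:
def should_stop_naive(pedestrian_ahead: bool, crosswalk_lines_detected: bool) -> bool:
    # Strawman rule: only yield to pedestrians at marked crosswalks.
    return pedestrian_ahead and crosswalk_lines_detected

# Jaywalker at night, mid-block: pedestrian present, no white lines.
print(should_stop_naive(True, False))  # False -> the rule says "go"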

4. The backup driver was a moron, as were the pilots in the SF crash.

So, let's take the AI observability/controllability issues discussed above (an infinite set of cases and conditions) and multiply them by 50 cars moving at 65 mph on 495: (50 factorial) * (550 Uber engineers) * (1,000,000 objects and conditions to be modeled), and you get a massive disaster. So, let's get back to the posts:

What problem are we solving?

More free time: I do not consider sitting in stark fear of what or who my Uber car is going to run down "free time." If I get bored, I can simply wonder whether the next ignored DR or feature is going to kill me.

Safer driving: Perhaps. As long as every case detailed above, and every possible condition, and every possible event is controlled, verified, regression-tested, validated, etc., then maybe; but until then I will drive myself and avoid driverless cars.
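
For a sense of what "regression tested" would even mean here, a minimal pytest-style sketch (hypothetical planner interface, not any vendor's API) of the kind of scenario test such a system would need by the thousands:

Code:
# Hypothetical planner with the corrected rule: pedestrians matter whether
# or not crosswalk lines are present (the lines argument no longer matters).
def plan_action(pedestrian_ahead: bool, crosswalk_lines_detected: bool) -> str:
    return "brake" if pedestrian_ahead else "proceed"

def test_jaywalker_at_night():
    # Regression case from the Tempe incident: pedestrian, no crosswalk lines.
    assert plan_action(pedestrian_ahead=True, crosswalk_lines_detected=False) == "brake"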

The rise of the machines is upon us: No worries. Human beings are too stupid to develop machines that replace them. It turns out that humans are pretty good at recognizing random and unpredictable events and reacting to them correctly most of the time. AI and robots will continue to be successful in controlled conditions and environments; I agree with that.

#14 · 03-27-2018, 09:20 PM · Vicegrip (The other Woodstock.)

Quote:
Originally Posted by HughA44s
These two posts illustrate the difficulty of developing large-scale autonomous systems (in this case a large-scale driverless-car network, necessary for the "rise of the machines" scenario)... [snip]
Way overthinking this. The system does not need to know "bike" or "person." It needs to know location and vector. It matters not if it is a bike or a brick wall; it is not supposed to hit it. The complexity comes in when it has to pick between the bike and the brick wall.
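
A minimal sketch of that class-agnostic idea (toy numbers and a hypothetical interface; real stacks fuse lidar/radar tracks): brake on any tracked object whose time-to-collision drops below a threshold, whatever it is:

Code:
def time_to_collision(range_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if nothing changes; inf if not closing."""
    return range_m / closing_speed_mps if closing_speed_mps > 0 else float("inf")

def should_brake(range_m: float, closing_speed_mps: float, ttc_threshold_s: float = 3.0) -> bool:
    # Class-agnostic: bike, person, or brick wall all get the same treatment.
    return time_to_collision(range_m, closing_speed_mps) < ttc_threshold_s

# Object 40 m ahead, closing at 17.9 m/s (~40 mph): TTC ~2.2 s -> brake.
print(should_brake(40.0, 17.9))  # True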

The cars are using sensing systems well outside the spec of the video you see. In this case I suspect the car either did not sense the person and bike, or did not sense and process them in time.

One thing that might be a factor: the woman walking the bike was right at the edge of a pool of light. There might have been enough small, just-right/just-wrong factors to delay the sensing of the person pushing the bike.
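
One plausible mechanism for that kind of delay (a generic m-of-n tracking pattern, assumed here rather than taken from the Uber design): trackers often require several consecutive hits before confirming an object, so flickering detections at the edge of the light pool push confirmation later:

Code:
def frames_to_confirm(detections: list[bool], hits_required: int = 3) -> int | None:
    """Frame index at which a track is confirmed after `hits_required`
    consecutive positive detections; None if never confirmed."""
    streak = 0
    for i, hit in enumerate(detections):
        streak = streak + 1 if hit else 0
        if streak == hits_required:
            return i
    return None

clean   = [True, True, True, True, True, True]
flicker = [True, False, True, True, False, True, True, True]  # edge of the light pool
print(frames_to_confirm(clean))    # 2 (confirmed on the third frame)
print(frames_to_confirm(flicker))  # 7 (five frames later; at ~40 mph each
                                   #    0.1 s frame is ~1.8 m of travel)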

Agree with SMD. In the long run, AI cars will be safer than human-driven cars. Then you can dither away all day long on your phone. I drive around Tysons Corner and I think I am going to start carting around a paintball gun to shoot the assholes that take an extra 8 seconds to put the phone down and sloooowly start to drive through the green light. Two or 3 of them in one cycle and half of the poor bastards in the back miss the light. The dipshits that slow down and stall/stop 30 feet behind the next car while face-down in a phone get shot too.
__________________
http://vimeo.com/29896988

“Those that can make you believe in absurdities can make you commit atrocities.” Voltaire.

"There is grandeur in this view of life...." Darwin.

The mountains are calling and I must go.

“The earth has music for those who listen”
Shakespeare.

You Matter.
(Until you multiply yourself times the speed of light squared. Then you Energy)

“We’ve got lots of theories, we just don’t have the evidence.”

#15 · 03-27-2018, 10:10 PM · HughA44s (Woodbridge, VA)

Let's see. I will concede this point: "The system does not need to know bike or person." I will, however, continue to contend that there is a limitless set of conditions available to cause a rules-based algorithm to make bad "decisions," and once you put 10 of these cars together on a freeway the situation becomes impossible to characterize or test adequately. Agree to disagree on the long-run safety of these systems.
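
The "impossible to test adequately" point is really a combinatorics point, and a toy count (all numbers here are illustrative assumptions) shows how fast exhaustive scenario testing falls over:

Code:
# Joint scenario space for N interacting cars: if each car can take one of
# k maneuvers at each of t decision steps, the joint space is k ** (N * t).
k, N, t = 5, 10, 10        # maneuvers per step, cars, decision steps (toy values)
scenarios = k ** (N * t)
print(f"{scenarios:.3e}")  # ~7.9e+69, far beyond any test campaign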

#16 · 03-27-2018, 10:57 PM · Trak Ratt (Alexandria/Mt. Vernon, VA)

Quote:
Originally Posted by HughA44s
Interesting in light of a driverless Uber car recently mowing down and killing a lady in a crosswalk.
I watched the video several times, even slowing it down, and didn't acquire the woman till just before the collision. I don't think she was in a crosswalk, but that doesn't usually carry a death sentence.
__________________
David

I hope to arrive to my death, late, in love, and a little drunk!

Just because I don't care doesn't mean I don't understand... Homer Simpson

"That's what's keeping me out of F1.... Too much mental maturity...." N0tt0n

Some cause happiness wherever they go; others whenever they go.

CHAOS, PANIC, AND DISORDER my work here is done...

Live without pretending, Love without depending, Listen without defending, Speak without offending

#17 · 03-28-2018, 08:28 AM · realroadrage (Gaithersburg, MD)

Quote:
Originally Posted by HughA44s
Let's see. I will concede this point: "The system does not need to know bike or person." I will, however, continue to contend that there is a limitless set of conditions available to cause a rules-based algorithm to make bad "decisions," and once you put 10 of these cars together on a freeway the situation becomes impossible to characterize or test adequately. Agree to disagree on the long-run safety of these systems.
Same is true for human drivers.
__________________
Everything has changed

#18 · 03-28-2018, 08:56 AM · Dandelion (Herndon/Reston, VA)

Quote:
Originally Posted by HughA44s
4. The backup driver was a moron, as were the pilots in the SF crash.
To nitpick this a bit (hey, it's Dorki!): the landing system at SFO was out of service in the Asiana accident (assuming that's what you're referring to), the crew knew it, and landing without the ILS is something pilots are trained to do routinely.

With the ILS out of service, there was no way for the aircraft to land itself that day given the available NAVAIDs, so the pilots needed to fly the aircraft, and they were at the controls flying it. So there was no automation which failed, and no automation for which the pilots were acting as backup.

More specifically: there was no outside event for which the aircraft did not have a sensor, there was no outside event which caused unanticipated behavior of the automation, there was no new condition for which the automation was not programmed, and there was no failure of the automation itself.

The pilots in this accident were at fault in that they were unaware of what the aircraft was being commanded to do (i.e., autopilot mode awareness). The accident furthermore exposed some cultural issues with effective use of Cockpit Resource Management, as the more junior crew was unable to bring the issue to the attention of the captain.
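
In code terms, a mode awareness failure is a silent divergence between the mode the crew assumes and the mode the system is actually in. A minimal sketch (hypothetical mode names, loosely modeled on the finding that the autothrottle was in a HOLD mode the crew believed was still managing speed):

Code:
from enum import Enum

class AutothrottleMode(Enum):
    SPEED = "maintaining target airspeed"
    HOLD = "thrust frozen; NOT protecting airspeed"

def mode_awareness_alert(assumed: AutothrottleMode, actual: AutothrottleMode) -> str | None:
    # The dangerous case is a silent divergence between belief and reality.
    if assumed is not actual:
        return f"ALERT: crew assumes {assumed.name}, system is in {actual.name}"
    return None

# Asiana-like situation: crew believes speed is protected; autothrottle is in HOLD.
print(mode_awareness_alert(AutothrottleMode.SPEED, AutothrottleMode.HOLD))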

But this is not the first case where a flight crew misunderstood what the airplane was being commanded to do (see Air Inter 148, an A320 accident in 1992), and unfortunately it will probably not be the last.

At the same time, it seems likely that autopilot mode awareness failure is something that we may have already seen in the autonomous car world.

Also, FWIW, I tend to agree with Hugh on the issues he raises - but I'm a pessimist in this regard.

ed
__________________
ed

2016 GT4
2012 Cayman R
2005 Lotus Elise
1994 RX-7 R2


#19 · 03-28-2018, 09:04 AM · Vicegrip (The other Woodstock.)

Quote:
Originally Posted by HughA44s
Let's see. I will concede this point: "The system does not need to know bike or person." I will, however, continue to contend that there is a limitless set of conditions available to cause a rules-based algorithm to make bad "decisions," and once you put 10 of these cars together on a freeway the situation becomes impossible to characterize or test adequately. Agree to disagree on the long-run safety of these systems.
I am talking about this one event. The car was not able to detect and react to a solid object in its path. This goes to detection and processing time. I suspect there was a compromise in the detection/processing systems, along with external factors, that pushed the conditions below the detection threshold.

For the basic hard-line process, it need not determine whether it is an elk or a deer, just that there is something there and where "there" is.
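
Detection and processing time translate directly into distance, which is why that compromise matters. A back-of-envelope sketch (illustrative numbers; ~40 mph roughly matches reports of the Tempe car's speed):

Code:
# Total stopping distance = distance covered during sense/process/actuate
# latency plus the physical braking distance v^2 / (2 * mu * g).
def stopping_distance_m(speed_mps: float, latency_s: float, mu: float = 0.8) -> float:
    g = 9.81
    return speed_mps * latency_s + speed_mps**2 / (2 * mu * g)

v = 17.9  # ~40 mph in m/s
for latency in (0.5, 1.5):
    print(f"latency {latency} s -> {stopping_distance_m(v, latency):.1f} m to stop")
# 0.5 s -> ~29.4 m; 1.5 s -> ~47.3 m. Every extra second of processing
# costs ~18 m before the brakes even bite.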

#20 · 03-28-2018, 09:38 AM · N0tt0N (DC)

Ed - Watch it. Some of my best friends are Asianaians and they pilot just fine.

People watch too many movies if they think AI is magic. There is plenty of coverage of the challenges of not only object detection and identification but also the decision tree for choosing how the driver/passengers (or other 'objects' outside the vehicle) die in a zero-options outcome. My personal training and experience in AI is sadly dated at this point. The semi-autonomous cars are crashing into fire trucks, Jersey walls with stripes leading into them, etc. To be fair, they can apparently avoid Star Wars walkers just fine.

There's a reason advanced weapons systems cost one or more billion dollars. And they don't meet the apparent success criterion of 'zero collateral damage,' last I checked.

That said, I would rather have the other 99.9% of cars on the road be autonomous than from Maryland. At least the lemmings would follow each other off the cliff instead of randomly changing lanes and making GPS turns at the last minute.

Every time I see a headline that starts with "If the Federal Government doesn't act now..." I know this will end badly.

Time for Dirk to start doing checkout rides for EVERYONE on the highway! Green Run Group -> Far Right Lane, etc.
__________________
Martin
2011 Cayman S (Gone) - Hardtop Blechster
2006 Cayman S (DD)
2016 Mazda CX-5 (Her DD)
2002 Boxster S (Gone) - Ragtop Blechster - Pura Patina!

Dorkiphus: I buy it for the articles