AJI wrote: ↑30 May 2019, 06:37
Nickel wrote: ↑30 May 2019, 05:34
You stated that it could be argued that they are better based on the ai deciding not to merge. that's clearly your subjective opinion, no? anyways thanks for the downvote.
I said nothing of the sort. In fact, I didn't say anything at all about AI.
There seem to be two different conversations going on here, one about AVs and one about AI.
As far as I know, and I'm happy to be corrected, there is no AI in current AV tech (or anything else for that matter) because it doesn't exist.
Perhaps @Phil can chime-in on this one?
Deep Learning (a subset of Machine Learning, which is in turn a subset of the catch-all, often misrepresented "Artificial Intelligence") is a fundamental part of the self-driving technology being developed by car manufacturers and technology companies.
It absolutely exists and is in use across a broad range of industries from manufacturing to finance to drug discovery.
An important thing to remember, though, is that Deep Learning (as part of ML) is a classification/regression capability which helps spot and extract (hopefully useful) features from data. In an AV's case this means images for lower-level AV systems, which can then warn the driver of an impending situation, or images plus appropriate behaviours for higher-level AV systems, which can deal with such situations themselves.
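To make that concrete, here's a toy sketch in Python/PyTorch of the lower-level case: a classifier scores a single camera frame and raises a driver warning. The class names, network and threshold are all invented for illustration and the model is untrained - this is nobody's actual AV stack.

```python
# Toy sketch: score one camera frame, warn the driver if something is spotted.
# Class list, architecture and threshold are all made up for illustration.
import torch
import torch.nn as nn

CLASSES = ["clear_road", "vehicle", "pedestrian", "cyclist"]  # hypothetical labels

model = nn.Sequential(                      # stand-in for a real trained network
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, len(CLASSES)),
)
model.eval()

frame = torch.rand(1, 3, 224, 224)          # stand-in for one RGB camera frame

with torch.no_grad():
    probs = torch.softmax(model(frame), dim=1)[0]

label = CLASSES[int(probs.argmax())]
confidence = float(probs.max())
if label != "clear_road" and confidence > 0.5:
    print(f"WARNING: {label} ahead ({confidence:.0%} confidence)")
```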
How you act on those insights with respect to morality is another point entirely, but again the inputs and outputs of certain actions can be used as training data, and the system can then infer which actions to take in any given situation using the classifications it has determined from its understanding of the world around it.
The AVs are not "programmed" in the traditional sense, where their capabilities are limited by the expertise, knowledge or talent of the programmers and are based on conditional statements such as "if the object walks like a duck and quacks like a duck then it's (probably) a duck".
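For contrast, that traditional rule-based style might look like this toy sketch (using the duck example above); it can only ever handle the cases its author thought to write down:

```python
# Hand-written rules: the system's "knowledge" is exactly what the
# programmer typed in, nothing more.
def classify(obj: dict) -> str:
    if obj.get("walks_like") == "duck" and obj.get("quacks_like") == "duck":
        return "duck"      # only as good as the rules someone thought of
    return "unknown"       # anything unanticipated falls through

print(classify({"walks_like": "duck", "quacks_like": "duck"}))  # -> duck
```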
Instead, these systems (models) are trained using millions of images and scenarios which represent the world around them. The system's ability to recognise scenarios and objects is improved by iterating over these images, assessing the accuracy of the system's recognition and associated behaviour, adjusting the emphasis the system puts on certain aspects of the image, and repeating (ad infinitum) to reduce error.
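Here's a minimal sketch of that train/assess/adjust/repeat loop in Python/PyTorch, with random stand-in data in place of real road imagery (nothing here resembles a production AV pipeline, it just shows the shape of the process):

```python
# Minimal train/assess/adjust/repeat loop with stand-in data.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 4))  # toy classifier
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.rand(64, 3, 32, 32)      # stand-in "road scenes"
labels = torch.randint(0, 4, (64,))     # stand-in ground-truth classes

for epoch in range(100):                # iterate over the images...
    logits = model(images)              # ...attempt recognition...
    loss = loss_fn(logits, labels)      # ...assess accuracy (error)...
    optimizer.zero_grad()
    loss.backward()                     # ...work out which weights to adjust...
    optimizer.step()                    # ...shift emphasis, and repeat
```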
An often-quoted definition of Machine Learning is:
"A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E" - Tom Mitchell, Carnegie Mellon
For humans, the rate of accumulating experience is limited by time. For these systems, experience can be acquired at the speed of the processor and ultimately just downloaded.
It's very similar in concept to the learning steps any baby/child goes through, where certain behaviours are rewarded or encouraged; the system is nudged towards a low-error/high-accuracy state through this repeated training process.
Once the model is at a desired level of accuracy (and repeatability) it is replicated and loaded into many AVs and set loose...
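That replication step is, in effect, just copying a file of weights. A sketch (file name and architecture are illustrative only):

```python
# After training in the lab, the model is a file of weights that can be
# copied into any number of vehicles - each must build the same architecture.
import torch
import torch.nn as nn

def make_model() -> nn.Module:
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 4))

trained = make_model()                              # imagine this was trained
torch.save(trained.state_dict(), "av_model_v1.pt")  # one artefact...

deployed = make_model()                             # ...replicated per vehicle
deployed.load_state_dict(torch.load("av_model_v1.pt"))
deployed.eval()
```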
There are obvious risks and issues here:
- The system model is only as good (accurate) as the training process
- The behaviours and actions associated with recognising certain events still need to be trained and incentivised in such a way that outcomes are desirable - no point in the AV recognising something in the road and driving into it anyway
- There are moral judgements (causing the least bad accident) and social conventions (letting cars out of busy junctions... or not) that are very difficult to represent/teach