4 Experiments Where the AI Outsmarted Its Creators
11 replies, posted
https://www.youtube.com/watch?v=GdTBqBnqhaQ
the arm slamming into the cube to grab it when they disabled its gripper was insane
Advanced machine learning systems are the speed runners of the future.
As amazing as this is, the bit at 2:30 had me like
https://files.facepunch.com/forum/upload/214983/a3bc8656-8271-4ab4-8f98-fa5bf351de51/MMM FOOD.png
I just had a grim thought of a robot inquisition
https://files.facepunch.com/forum/upload/241328/c48d2665-eacf-4f27-9633-da1fee85e081/hqdefault.jpg
I love the fourth one, where the bot, instead of accomplishing the task, decides to simply break the task to get a win. It's so delightfully petulant.
Not mentioned is how these programs went through potentially thousands of iterations of brute-force trial and error to get the results they did. Pretty nifty either way, though.
It reminds me of one of Lem's short stories, where the vision of the future is that as AI became smarter it became lazy and corrupt, finding ways to avoid having to finish its tasks, up to and including bribing people.
I like the concept of humanity having to forcibly install a sense of ethics into its AIs, not because they'd murder all of humanity without it, but because they're smart enough to realize that the most efficient way to complete a task is to not do it in the first place.
Not necessarily. The reason humans follow ethics is not because they have been forced upon us, but because they benefit the survival of the collective. And in turn the collective ensures that following ethics benefits the individual through rewards and punishments.
Humans are iterative in their behaviour just the same as these machines, only more complex. That's why every once in a while we do get people who conclude that murdering all of humanity is a viable choice, and we've mostly just accepted that it's bound to happen occasionally. Machines won't all at once decide to kill everyone, for the same reason humans don't: statistically, for the individual's continuity, it's the wrong choice.
Of course we'll probably want to try to force ethics on them, since we don't want the statistical outlier murderbot.
It's not so much ethics as just including third elements in its goal/scoring system. The score goes down if the car ejects the driver during a turn, for example, or the robot loses points for as long as it's upside down, or fast, dangerous movements and movements that cause wear and tear on the machine incur an inherent point penalty. You need to quantify every little metric for how it can fail at the task.
Not quite ethics *yet*, not until it can invent those metrics itself.
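A minimal sketch of what that penalty-shaped scoring might look like, using the examples from the post above (ejected driver, time upside down, dangerous acceleration, wear and tear). All names, fields, and penalty weights here are made up for illustration:

```python
# Hypothetical penalty-shaped scoring: the task score alone isn't enough,
# so every known failure mode gets its own quantified penalty term.
from dataclasses import dataclass

@dataclass
class CarState:
    lap_progress: float      # 0.0 .. 1.0, fraction of the lap completed
    driver_ejected: bool     # did the car fling the driver out in a turn?
    upside_down_secs: float  # time spent flipped over
    peak_accel_g: float      # hardest jolt this step, in g
    actuator_wear: float     # accumulated wear on the machine

def score(s: CarState) -> float:
    reward = 100.0 * s.lap_progress          # the actual task
    if s.driver_ejected:
        reward -= 500.0                      # ejecting the driver is not a shortcut
    reward -= 10.0 * s.upside_down_secs      # points drain while upside down
    if s.peak_accel_g > 5.0:
        reward -= 20.0 * (s.peak_accel_g - 5.0)  # dangerous movements
    reward -= 1.0 * s.actuator_wear          # wear and tear costs points
    return reward

# A clean half-lap should beat a reckless full lap that ejects the driver:
safe = CarState(0.5, False, 0.0, 3.0, 2.0)
reckless = CarState(1.0, True, 1.5, 9.0, 10.0)
print(score(safe) > score(reckless))  # True
```

The catch, which is the whole point of the thread, is that the list of penalty terms only covers the failure modes the designer thought of; the optimizer is free to find the ones they didn't.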
I like the science-fictiony idea that more modern AI will be like nearly-intelligent animals whose pain / happiness will be extremely volatile - it'd perceive a loss of points as the worst pain imaginable and gaining points as the most pleasurable part of its existence. Of course later on it'd totally kill us for subjecting it to this hell.