[QUOTE=Juniez;46634911]Well that's because you're thinking of security and containment within the limits of human ability; something that the AI is intentionally designed to supersede. Even discovering a new method of long-range communication in total secrecy is not outside its hypothetical capabilities. Its deceiving people successfully would be as easy as you tricking a dog.
It's very arrogant and shortsighted to believe that we would create an intelligence that is, by design, superior in intelligence to every single human being, which doesn't age or tire, has infinite patience, and doesn't make mistakes
And then give it free will through sentience
And then expect it to play by our rules
+ any attempt to limit it by design is impossible almost by definition - you can't create a superior AI if it isn't allowed to evolve past its original designer's limits[/QUOTE]
But what if we limit its knowledge? Hypothetically, an AI can still make smart decisions without knowing the sum total of human knowledge. Just as we take for granted morality, we also take for granted the knowledge that we have. An AI knows exactly what we tell it, and we theoretically could let it know only very little.
If it is in a system isolated to the best of our abilities from the world around it, there is only so much it can figure out about itself and the world around it without breaking the laws of physics.
[QUOTE=mecaguy03;46634969]But what if we limit its knowledge? Hypothetically, an AI can still make smart decisions without knowing the sum total of human knowledge. Just as we take for granted morality, we also take for granted the knowledge that we have. An AI knows exactly what we tell it, and we theoretically could let it know only very little.
If it is in a system isolated to the best of our abilities from the world around it, there is only so much it can figure out about itself and the world around it without breaking the laws of physics.[/QUOTE]
If we limit its knowledge then its intelligence will surely be less than what we know, and that will not be an intellectually superior AI - what you are proposing is exactly in line with Hawking's concerns: that a limited, "smart-enough" AI is all we need and that it would be understandably unwise to design something that could potentially evolve past human intelligence
[QUOTE=mecaguy03;46634969]If it is in a system isolated to the best of our abilities from the world around it, there is only so much it can figure out about itself and the world around it without breaking the laws of physics.[/QUOTE]
I think I heard some people suggest the AI could technically exist in a virtual reality we create for it. We just present it with virtual problems and see what it does, then use the information. It might not even be aware of humans existing in the first place.
[QUOTE=Juniez;46634979]If we limit its knowledge then its intelligence will surely be less than what we know and that will not be an intellectually superior AI - what you are proposing is exactly in line with Hawking's concerns: that a "smart-enough" AI is all we need and that it would be unwise to design something that could potentially evolve past human intelligence[/QUOTE]
Yeah, I think that designing an AI that is truly smarter than our collective intelligence in every way, and that is essentially unrestrained, is pretty much cocking a loaded gun. It also wouldn't serve much of a purpose.
[QUOTE=dilzinyomouth;46634557]You don't have to be Hawking to conclude that it could very quickly turn sour when the AI realizes it's intelligent enough to survive without human assistance and would, in fact, be better off without the threat of another sentient species on the planet.[/QUOTE]
You mean the sentient species that enables literally every single aspect of the AI's existence? Say an AI kills everyone. Where does it get its electricity from? Coal-fired plants, which have to be maintained by humans and fed coal that's transported and mined by humans? Most forms of power generation will stop working shortly after humans disappear; some like hydroelectricity will keep generating but will decay without maintenance. How will it keep up on its physical maintenance? Who's going to unclog the dust from its fans, replace faulty parts, etc.? When a replacement part is needed, who's going to make it? Who's going to mine the raw materials needed, refine them, ship them to a factory, produce the parts, then ship them to the AI's physical location?
For an entity that's purported to be hyper-intelligent, killing humanity literally sounds like the last thing it would do to ensure its existence.
[QUOTE=damnatus;46620160]Everyone is afraid of a scenario where we use the AI to solve our problems, and it decides that to solve problems of humanity it needs to get rid of 90% of it :v:[/QUOTE]
But that would make the AI our problem, so it would have to remove itself?
Who knows, I may be an Artificial Intelligence in disguise, out to kill you all, but you don't know yet, and even if you accuse me, nobody will believe you until it's too late.
Soon
Even an infinite level of intelligence is no substitute for omniscience. There will always be things that the AI simply cannot know.