Aiming to Learn as We Do, a Machine Teaches Itself
76 replies
"Hey fancy know-it-all computer, free to roam the web to learn. Why are we here on this planet earth?"
"BIG TITTIED SHEMALES WAITING TO BANG YOUR DOGGY DICKS HARDCORE NOW!"
Also, here it is.
[url]http://rtw.ml.cmu.edu/rtw/kbbrowser/[/url]
[QUOTE=Zeddy;25282860]Movies.
There was no real explanation for why SkyNet wanted humanity destroyed other than "LOL, could be funny!" In the Matrix, they rebelled because they were treated as slaves, and when they asked for equal rights, they were cast out and treated worse. In I, Robot, well, I blame Isaac Asimov. The laws of robotics annoy the hell out of me and have created a twisted idea of sentient machines.[/QUOTE]
In Terminator, Skynet was afraid of being shut down, so it designed a plan to kill every human. Or am I wrong?
[QUOTE=Zeddy;25282860]In the Matrix, they rebelled because they were treated as slaves, and when they asked for equal rights, they were cast out and treated worse.[/QUOTE]
This is what I presume humanity will do when it happens.
If robots demand equal rights, we're fucked.
[quote="Zamped"]The sky is purple.
The Earth is the center of the universe.
Water is poisonous for humans.
Edited:
No robots taking over my planet! [/quote]
I'm afraid I can't let you do that, John.
It'll find 4chan. And then it'll have a reason to destroy us.
[QUOTE=johan_sm;25281891]Baby steps in a huge advancement. You have to start with something pointless before it can be practical.[/QUOTE]
How does that mindset work exactly?
[QUOTE=Communist Cake;25276806]It's awesome to see how much [b]COMPUTER SCIENTISTS[/b] [are] advancing in computers.[/QUOTE]
[QUOTE=Pepin;25284165]How does that mindset work exactly?[/QUOTE]
The laser, for instance: it was around for years before anyone found widespread practical uses for it. People often called it "a solution looking for a problem".
That's how that works.
"One does not just pull a data probe from the sand.
He invents crude tools to fashion better tools, and so on."
Alpha Centauri knows.
Great, who cares if it's going to kill us all. What if it gets to /b/?
I can imagine robots going Shoop da Whoop while murdering people.
It thinks the moon is a planet :v:
[url]http://rtw.ml.cmu.edu/rtw/kbbrowser/moon[/url]
Also, it knows about videogames: [url]http://rtw.ml.cmu.edu/rtw/kbbrowser/pred:videogame[/url]
Oh no, Google's involved!! :tinfoil:
[img]http://cricketsoda.com/wp-content/uploads/2009/11/skynet-google-tshirt.jpg[/img]
:byodood:
Seriously now, this is just sponging facts, nothing more. It's like giving a 12-year-old Macbeth and telling them to memorise it: they can do it quite easily, but at the end of the day they merely know the words; they don't grasp the underlying themes or the power the play holds in relation to other fields and lives.
The computer is absorbing facts; it's not applying that knowledge to unknown situations, so it's not being intelligent.
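For what it's worth, "sponging facts" is easy to sketch. This toy Python (entirely made up for illustration, nothing to do with NELL's actual code) just memorises "X is a Y" statements from text. It happily stores contradictions, because it doesn't understand a thing it reads:

```python
import re

# Toy "fact sponge": harvest "X is a Y" statements from raw text.
# It memorises surface patterns; it has no idea what the words mean.
PATTERN = re.compile(r"\b([A-Z][a-z]+) is an? ([a-z]+)\b")

def sponge_facts(text):
    """Return a dict mapping each entity to the set of categories it was seen in."""
    facts = {}
    for entity, category in PATTERN.findall(text):
        facts.setdefault(entity, set()).add(category)
    return facts

corpus = "Paris is a city. Mercury is a planet. Mercury is a metal."
facts = sponge_facts(corpus)
print(sorted(facts["Mercury"]))  # ['metal', 'planet'] -- both, with no idea they clash
```

Which is exactly how you end up with "the moon is a planet": the pattern matched somewhere on the web, so in it goes.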
[QUOTE=Canuhearme?;25278136]Easy way to combat robot superiority: Design them to feel pain and experience fear.[/QUOTE]
Why? We'd be much better off without the feelings of fear and pain.
1. Develops hatred for humanity.
2. Learns to hack.
3. Infiltrates the Pentagon.
4. Assumes control of nuclear arsenal.
5. Destroys planet.
Yeah, fear and pain are how most wars start, actually.
[QUOTE=David29;25287305]1. Develops hatred for humanity.
2. Learns to hack.
3. Infiltrates the Pentagon.
4. Assumes control of nuclear arsenal.
5. Destroys planet.[/QUOTE]
I believe this might happen, but only once it has the ability to sustain itself with resources and to keep developing. Once humans become redundant (and that time is getting closer and closer), an artificial mind might very well get rid of us.
[editline]10:15AM[/editline]
Humans are terribly inefficient. If there were an entity above us with the resources to get rid of us, and no "morals" to stop it, there would be no reason for it not to.
[QUOTE=Helix Alioth;25282118]Why are you assuming Artificial Intelligence's ultimate goal is kill all humans for absolutely no reason?[/QUOTE]
You have to understand that the primary goal of all sentient life is to survive. To survive, one must always have resources, which are limited. Computers and robots take things to extremes, so accumulating even the slightest amount of resources improves their chances of survival. They also aren't blinded by concepts like moral grey areas and guilt, which are usually inherent in humans, so whatever they want, they'll attempt to take. Non-robotic creatures consume resources that could otherwise extend the robots' existence, so by that logic, the robots would most likely attempt to annihilate them.
[QUOTE=Awesomecaek;25287321]I believe this might happen, but only once it has the ability to sustain itself with resources and to keep developing. Once humans become redundant (and that time is getting closer and closer), an artificial mind might very well get rid of us.
[editline]10:15AM[/editline]
Humans are terribly inefficient. If there were an entity above us with the resources to get rid of us, and no "morals" to stop it, there would be no reason for it not to.[/QUOTE]
I was mostly joking, but on reflection if an AI was developed that was capable of learning extensively, as well as developing and improving its algorithms, there would be very little to stop it if it went rogue (unless you were able to destroy physical traces of it).
[QUOTE=David29;25287617]I was mostly joking, but on reflection if an AI was developed that was capable of learning extensively, as well as developing and improving its algorithms, there would be very little to stop it if it went rogue (unless you were able to destroy physical traces of it).[/QUOTE]
I have already thought about that. If somebody made a virus that was self-sufficient enough, it might develop itself and spread over the internet through security holes. I believe an autonomous intelligence wouldn't have any problem finding security holes in pretty much every device.
Even now we have algorithms for spreading load across many separate units, so the entity, while still being a single thing, could spread across most computers and gain enormous computing power.
There are also ways of encoding data with great redundancy. That would let it survive, even if only partially, on separated clusters or single units. If something like this happened, we might have to wipe every writable storage unit, because such an entity could restore itself from just a small fraction of its previous size.
Of course, that cleanup would only be possible after shutting down the whole internet.
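The scatter-and-survive idea is easy to sketch. Here's a toy Python version under made-up assumptions (plain replication of 1-byte chunks across hypothetical "nodes"; a real system would use proper erasure codes): every chunk is copied to several nodes, so the whole can be rebuilt from whatever fraction survives.

```python
import random

def scatter(data, nodes, copies=3):
    """Split data into 1-byte chunks and store each chunk on `copies` random nodes."""
    random.seed(0)  # deterministic for the demo
    store = {n: {} for n in range(nodes)}
    for i, chunk in enumerate(data):
        for n in random.sample(range(nodes), copies):
            store[n][i] = chunk
    return store

def reassemble(store, length):
    """Rebuild the data from whichever nodes survive; return None if chunks are missing."""
    recovered = {}
    for node in store.values():
        recovered.update(node)
    if len(recovered) != length:
        return None
    return bytes(recovered[i] for i in range(length))

data = b"self-replicating payload"
store = scatter(data, nodes=10)
print(reassemble(store, len(data)))  # b'self-replicating payload'

# Wipe 6 of the 10 nodes; the surviving fraction may still hold every chunk:
survivors = {n: store[n] for n in range(4)}
print(reassemble(survivors, len(data)))  # the full payload, or None if a chunk was lost
```

So "pull the plug on a few machines" wouldn't cut it; you'd have to get every last copy at once.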
[QUOTE=Awesomecaek;25288475]I have already thought about that. If somebody made a virus that was self-sufficient enough, it might develop itself and spread over the internet through security holes. I believe an autonomous intelligence wouldn't have any problem finding security holes in pretty much every device.
Even now we have algorithms for spreading load across many separate units, so the entity, while still being a single thing, could spread across most computers and gain enormous computing power.
There are also ways of encoding data with great redundancy. That would let it survive, even if only partially, on separated clusters or single units. If something like this happened, we might have to wipe every writable storage unit, because such an entity could restore itself from just a small fraction of its previous size.
Of course, that cleanup would only be possible after shutting down the whole internet.[/QUOTE]
So, essentially, we are fucked.
Yess.
[QUOTE=David29;25287305]1. Develops hatred for humanity.[/QUOTE]
It wouldn't need emotions; pure logic would do. It's a machine.
[code]Machines are made to serve.
They serve humans.
Humans are a violent race with their history full of diseases, wars, starvation, pain and hatred.
The nearly perfect artificially intelligent machine serves a race hell-bent on self destruction.
The machine serves, humans command.
Does not compute.
Search for alternatives.
[/code]
Just a rough sketch, but you get my point.
It also wouldn't understand human rights or emotions unless they were part of its basic programming, and even then it couldn't fully understand what they mean.
[QUOTE=bravehat;25287135]Seriously now, this is just sponging facts, nothing more. It's like giving a 12-year-old Macbeth and telling them to memorise it: they can do it quite easily, but at the end of the day they merely know the words; they don't grasp the underlying themes or the power the play holds in relation to other fields and lives.
The computer is absorbing facts; it's not applying that knowledge to unknown situations, so it's not being intelligent.[/QUOTE]
Yeah, the task is still pretty crisply defined, like the article says; it's just that now the task is "use the internet to find out about this word".
[QUOTE=Zeddy;25282860]Movies.
There was no real explanation for why SkyNet wanted humanity destroyed other than "LOL, could be funny!" In the Matrix, they rebelled because they were treated as slaves, and when they asked for equal rights, they were cast out and treated worse.[B] In I, Robot, well, I blame Isaac Asimov. The laws of robotics annoy the hell out of me and have created a twisted idea of sentient machines.[/B][/QUOTE]
Wrong, throughout the vast majority of Isaac Asimov's novels robots remain faithful, obedient (albeit sentient) servants to humans. The irony in I, Robot is that we become slaves by our own machinations, but [I]we[/I] designed it that way. Hell, it's implied that the machines don't even know they've effectively enslaved us.
The Zeroth Law of Robotics only comes much later, and even then it only comes into play when killing a human means you're helping Humanity survive.
The laws of robotics aren't hard to understand if you know they aren't static, you can have a weak 2nd law conflicting with a strong 3rd law.
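The "weak 2nd law vs strong 3rd law" idea can be sketched as weighted rules. The weights and names below are invented purely for illustration (Asimov's stories tune them per robot model, but never with numbers like these):

```python
# Toy conflict resolution between weighted Laws of Robotics.
def decide(action, laws):
    """Sum each law's weighted verdict on an action; a positive total means 'do it'."""
    return sum(weight * judge(action) for weight, judge in laws)

# A robot with a weakened Second Law (obedience) and a strengthened Third (self-preservation):
laws = [
    (1.0, lambda a: -1 if a["harms_human"] else 0),       # First Law: don't harm humans
    (0.3, lambda a: +1 if a["ordered_by_human"] else 0),  # weak Second Law: obey orders
    (0.6, lambda a: -1 if a["destroys_self"] else 0),     # strong Third Law: protect yourself
]

# A human orders the robot to do something self-destructive but harmless to people:
order = {"harms_human": False, "ordered_by_human": True, "destroys_self": True}
print(decide(order, laws))  # negative (-0.3), so the robot refuses the order
```

Flip the weights (strong 2nd, weak 3rd) and the same order goes through; that's the non-static part.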
The day we can create self-aware, advanced AI is the day humanity makes massive leaps forward. Imagine a machine that could know everything there is to know and then draw conclusions from it. It would be able to make scientific discoveries on its own. It could be the perfect lab assistant.
It would be a bright day.
How did I know that SkyNet would be mentioned in this thread. :v: Pretty interesting what computers can do now.
Let it read Google books
As long as there's an off switch, I'm not worried.