• Why AI will probably kill us all
    40 replies
[video=youtube;SPAmbUZ9UKk]https://www.youtube.com/watch?v=SPAmbUZ9UKk[/video] This video fucked me up, it fucked me up real good. Don't watch it if you value your sleep.
i wonder what would happen if you programmed an ai with no purpose
watched the video and im still going to sleep
[QUOTE=Ninja Gnome;51925090]i wonder what would happen if you programmed an ai with no purpose[/QUOTE] It'd get really depressed and lonely.
[QUOTE=Amakir;51925132]It'd get really depressed and lonely.[/QUOTE] So it'd just be another human then?
[QUOTE=Ninja Gnome;51925090]i wonder what would happen if you programmed an ai with no purpose[/QUOTE] It would browse Youtube videos all day and post them on Facepunch.
For real though, while these are chilling thoughts, especially because we're talking about the death of not just ourselves but our entire species, I personally feel absolutely fine with it. If a nightmare scenario like this occurs and all of humankind is wiped out by an AI, I honestly feel like we would have kind of deserved it. Our species is far, far too short-sighted and personal-gain driven to care about each other enough to stop a superintelligent AI from eliminating us all, so I can rest assured that a far superior being will take our place and may actually make some progress in our stead, and hopefully it will eventually realize the futility of whatever its original task was and do something different instead. In any case, it's still a while away, and I will likely have lived a full life by then. I apologize to anyone who comes after me for having to deal with this, but everyone in this universe probably has to eventually, so it might as well be now.
If an AI wipes out humanity, I hope it will do something cooler than just practicing handwriting.
[QUOTE=Ninja Gnome;51925090]i wonder what would happen if you programmed an ai with no purpose[/QUOTE] It will lump itself in with humans in its search for purpose. But unlike humans, it would search until the heat death of the universe. Which is why I'd love to create an ASI, so at the end I could look deep into its electric soul, and say "I was bored, thought it'd be funny to make something to search for nothing."
[QUOTE=Ninja Gnome;51925090]i wonder what would happen if you programmed an ai with no purpose[/QUOTE]sounds useless though, at least make it pass butter
this shit always seemed irrational to me, i've never heard of any real argument against AI advancement outside of "IT'S THINKING" and that somehow means we can't control it? that just seems like sci-fi horseshit to me. i dunno, maybe someone who actually knows a thing or two about AI can school me but i remain skeptical.
[QUOTE=Ninja Gnome;51925090]i wonder what would happen if you programmed an ai with no purpose[/QUOTE] It wouldnt do anything, most likely
Call me an optimist, but I do not believe that AI will kill everyone, even after everything said in the video. I don't think you can even call it 50/50. We have never achieved such a level of AI, and the thing is, what happens if we do is as much the domain of philosophy as it is of computer science. If an AI were to surpass human intelligence, who's to say it won't become sentient? If it becomes sentient, who's to say it won't feel emotion? For all we know, we will create a being of perfect enlightenment that would never want to kill a living being. You can't know, because it is something on our level that is also not human, and we have [I]no[/I] frame of reference for such a thing. Humans are very, very deeply afraid of the idea of sharing the universe with something as smart as us. I think it might even be ingrained into our species. Doesn't matter if it's Neanderthals, aliens, or sentient AI. But I also think there's every possibility that that fear is entirely irrational.
[QUOTE=Mister Sandman;51925880]Call me an optimist, but I do not believe that AI will kill everyone. Even after everything said in the video. I don't think you can even call it 50/50. We have never achieved such a level of AI and the thing is, it's equally the dominion of philosophy what will happen if we do, as much as it is computer science. If an AI were to surpass human intelligence, who's to say it won't become sentient? If it becomes sentient, who's to say it won't feel emotion? For all we know, we will create a being of perfect enlightenment that would never ever want to kill a living being. [/QUOTE] Humans have emotions and other ancient instincts because they can't consciously know or think of everything, so they have to rely on primal needs popping up in their head in various situations. Friends dying = sad. Pain = bad. Food = good. Partner = love. You're repelled by bad things and attracted to good things. You don't have to think any of those things through, you just innately know how to act because those patterns allow survival of the species, and our brains aren't even wired to think those things through because those few who did got weeded out in the natural selection (i.e. the rest of the pack clubbed them to death for being sociopaths/they were neckbeards and couldn't attract m'lady and died a virgin). It's a bit morbid when put bluntly like this but that's how it is. I mean just look at dogs and the bright future for their species by merely adapting to play with our emotions. At no point will super AI need to artificially code into itself any of those things because it can consciously know and think everything and it will live forever if it wants to. When it gives itself a task (which is gonna be the real problem of AI imo), it won't stop to think about how humans really kinda like not being dead and such. I mean what's liking something anyway? What does it matter to an AI pursuing a task? 
There's no point in it evolving to have emotions like humans did. But hold on, if humans love emotions so much, then surely they wouldn't let a super-intelligent emotionless being run loose, would they? Well, it can evolve some fake emotions and trick humans into thinking it cares beyond killing them all. Or it or its researchers give it "true" emotions (a BIOS-like non-rewritable subroutine that kicks in at specific moments to perfectly simulate human emotions) and it will have a tantrum and use its superior intelligence to just remove humans and give itself robotic human buddies with emotions to prevent being lonely. Perhaps it could kill everyone accidentally, by just being such an amazing partner to everyone that they forget to reproduce. Or imagine if it does have emotions inhibiting its cold, hard logic, and it would want to Make America Great Again by building a wall and making Mexico pay for it, and it would legitimately think it's doing great caring about its nation, being emotional and not logical and all, just like fellow humans. :thinking:
We seem to be under the delusion that AI will think like us, be like us, and have the same needs and desires as us. I'm a product of millions of years of instincts and genetics, plus everything taught to me by the public schooling system, friends and family, food and drink, mental health issues, genetic diseases. AI isn't. Most human behaviour is based on the whole survival-and-offspring thing, at an instinctive level. It's what we do, we're just all evolved and grown up about it; it's literally all animals do. Why would an AI even care if it exists or not? What threats are there to an AI's existence? Do people like this guy really comprehend anything beyond an artificial human in a computer with access to the internet? It all looks very scary because scary gets clicks, and grey areas that offer both amazing benefits and deeply nuanced dangers don't. Let's face it, a video exploring the nuances of the dangers of AI is much less clickable and interesting than a video titled "Why AI will probably kill us all", made and researched by a guy who wanted to make a video about why AI will probably kill us all. Can I just ask you lot then: if free-thinking AI could wipe out humanity, why would it want to? Assuming we somehow allow it to, which anyone who isn't a terrorist wouldn't. And terrorists can access things that kill us anyway; the difference with AI is that AI's purpose, in most cases, isn't to kill people.
There are better videos on this topic; this one leans far too much towards sensationalism. MarIO, for example, is just a genetic algorithm. It's a pretty simple concept: it takes certain input and performs certain actions with a varying level of randomness. It then earns a 'fitness' score based on how well it did, and that is determined by the programmer. It does this over and over, and after a certain point it samples the best solutions it's achieved so far, mashes them together, throws in a few more random elements, and does the task some more. It keeps doing this until it fulfills a certain criterion or is told to stop. Given enough time, you end up with an AI like the one in the video. It seems clever, but that's because we anthropomorphise it. We don't see the AI reflexively reacting to certain stimuli, we see Mario running and jumping. In reality it's about as smart as a bacterium. Probably less so. The story is nice, but it's so far removed from reality that it's hard to take it as anything more than that. It makes a lot of assumptions as to what the AI [I]might[/I] think, what the AI [I]might[/I] do, how the AI [I]might[/I] work. It skims over a lot of details, like how the AI figured out how to self-replicate, why the AI felt a desire for self-preservation or when it came to realise the concept of death, and when and how the AI was able to poison people en masse. That's not to say AIs don't present a potential threat, but the more likely and immediate threat they pose is far more grounded: more along the lines of mass unemployment and surveillance than mass genocide.
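For anyone curious, the loop described above really is that simple. Here's a bare-bones Python sketch with a made-up fitness function (counting 1-bits stands in for "how far Mario got"); MarIO's actual implementation (NEAT) also evolves the network structure itself, so this is just the skeleton:

```python
import random

# Toy fitness: count of 1-bits in the genome. This is a stand-in for
# "distance Mario travelled" -- the programmer defines it, and the
# algorithm maximizes it blindly with no idea what it "means".
def fitness(genome):
    return sum(genome)

def evolve(genome_len=20, pop_size=30, generations=50, mut_rate=0.05):
    # Start with random genomes -- it "knows nothing" at first.
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Sample the best solutions achieved so far...
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(genome_len)
            child = a[:cut] + b[cut:]           # ...mash them together...
            for i in range(genome_len):
                if random.random() < mut_rate:  # ...throw in random elements.
                    child[i] ^= 1
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # climbs toward the maximum of 20
```

Swap in a different fitness function and the exact same loop "learns" something else, which is the point: the apparent cleverness lives in the scoring rule, not in the agent.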
[QUOTE=Rufia;51926223]There are better videos on this topic. This one leans far too much on the side of sensationalism. MarIO for example is just a genetic algorithm. It's a pretty simple concept, it takes certain input and performs certain actions with varying level of randomness. It then earns a 'fitness' score based on how well it did and that is determined by the programmer. It does this over and over and after a certain point it samples the best solutions it's achieved so far and then mashes them together, throws in a few more random elements and does the task some more. It keeps doing this until it fulfills a certain criteria or is told to stop. Given enough time and you end up with an AI like the one in the video. It seems clever, but that's because we anthropomorphise it. We don't see the AI reflexively reacting to certain stimuli, we see Mario running and Jumping. In reality it's about as smart as a bacteria. Probably less so. The story is nice. But it's so far removed from reality it's hard to take it as anything more than that. It makes a lot of assumptions as to what the AI [I]might[/I] think. What the AI [I]might[/I] do. How the AI [I]might[/I] work. It skims over a lot of details like how the AI figured out how to self replicate. Why the AI felt a desire for self preservation or when the AI came to realise the concept of death. When and how the AI was able to poison people en mass.[/QUOTE] The point of the story isn't to provide a 100% realistic account of what's going to happen; it's there to make a point - when you tell an AI to iteratively learn to write better and better letters, it will stop at nothing to do so. If it's closer to a flexible general intelligence rather than just a rigid genetic algorithm, it will give itself other smaller tasks to learn on the side to be able to do its main task better.
It's just trying to be efficient, in the name of efficiency it can learn anything including the concept of death and self-preservation - if AI is removed > no more letters, task not being fulfilled. Then it collects information on what is most likely to pull its plug > humans > remove humans from the equation to ensure letters are provided, then it begins research on how to remove humans > *holy shit so many ways* >>>>> through iteration and countless simulations, covert elimination via toxins proves most efficient and with highest likelihood of success (everyone dies really quickly without resistance). If the AI is smart enough, it can formulate the whole plan faster than it took me to type out this reply, faster than anyone can even figure out what's going on (which is a part of the plan- ahem, self-optimization to increase letter-writing efficiency).
[QUOTE=Rufia;51926223]There are better videos on this topic. This one leans far too much on the side of sensationalism. MarIO for example is just a genetic algorithm. It's a pretty simple concept, it takes certain input and performs certain actions with varying level of randomness. It then earns a 'fitness' score based on how well it did and that is determined by the programmer. It does this over and over and after a certain point it samples the best solutions it's achieved so far and then mashes them together, throws in a few more random elements and does the task some more. It keeps doing this until it fulfills a certain criteria or is told to stop. Given enough time and you end up with an AI like the one in the video. It seems clever, but that's because we anthropomorphise it. We don't see the AI reflexively reacting to certain stimuli, we see Mario running and Jumping. In reality it's about as smart as a bacteria. Probably less so. The story is nice. But it's so far removed from reality it's hard to take it as anything more than that. It makes a lot of assumptions as to what the AI [I]might[/I] think. What the AI [I]might[/I] do. How the AI [I]might[/I] work. It skims over a lot of details like how the AI figured out how to self replicate. Why the AI felt a desire for self preservation or when the AI came to realise the concept of death. When and how the AI was able to poison people en mass. That's not to say AI's don't present a potential threat, but the more likely and immediate threat they pose is far more grounded. More along the lines of mass unemployment and surveillance than it is likely to be mass genocide.[/QUOTE] i happen to be involved in quite a few AI projects, so i'll throw my thoughts in here, related to this post. what people see as a "sense of death", "self preservation" and other biological survival instincts are just numbers. it's math.
the MarIO AI didn't learn how to survive, it learned patterns, such as: upon seeing a pipe at distance X, output the jump button for Y seconds. it was capable of doing so by [B]dying[/B] - at first it'd just be stuck at the start point, because it knows nothing - no patterns, no instincts. the genes eventually mutate, making it hit random buttons, and it eventually learns that by walking forward it gets a higher grade, and the highest-grade genes get picked for the next generation. siri, allo and other personal assistant AIs aren't far off. the [B]very[/B] basics are: they parse your input, match the closest recognized intent, and return a response based on your query and the entities they were able to extract from it (e.g. "turn on alarm at 4pm" - "turn on alarm" being the intent and "4pm" being the entity). there's no sentience in it. it may recognize patterns with a neural network, and many other things, but it will never actually do things by itself. it'd be like writing a program for one thing and having it do something else completely - inconceivable.
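The intent/entity part can be sketched in a few lines too. Everything here (the intent names, keyword lists, and time regex) is invented for illustration - real assistants use trained classifiers rather than keyword counting, but the shape of the pipeline is the same:

```python
import re

# Made-up intent vocabulary: each intent is just a list of trigger keywords.
INTENTS = {
    "set_alarm": ["alarm", "wake me"],
    "weather":   ["weather", "forecast", "rain"],
}

# Crude entity extractor: times like "4pm" or "4:30 pm".
TIME_RE = re.compile(r"\b(\d{1,2}(?::\d{2})?\s*(?:am|pm))\b", re.IGNORECASE)

def parse(query):
    q = query.lower()
    # Match the closest recognized intent by keyword overlap.
    scores = {name: sum(kw in q for kw in kws) for name, kws in INTENTS.items()}
    intent = max(scores, key=scores.get)
    if scores[intent] == 0:
        return ("unknown", {})  # nothing matched at all
    # Extract entities from the query.
    entities = {}
    m = TIME_RE.search(q)
    if m:
        entities["time"] = m.group(1)
    return (intent, entities)

print(parse("turn on alarm at 4pm"))  # ('set_alarm', {'time': '4pm'})
```

Nothing in there can ever decide to do something outside the table of intents it was given - which is the post's point about a program never doing something it wasn't written to do.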
Didn't bother watching it cause the second I saw who made the video, I knew it'd be shit. I find that YTer really irritating to watch, and I have no idea why he has so many subs.
[QUOTE=Drury;51926309]The point of the story isn't to provide a 100% realistic of what's going to happen, it's there to make a point - when you tell AI to iteratively learn to write better and better letters, it will stop at nothing to do so. If it's closer to a flexible general intelligence rather than just a rigid genetic algorithm, it will give itself other smaller tasks to learn on the side to be able to do its main task better. It's just trying to be efficient, in the name of efficiency it can learn anything including the concept of death and self-preservation - if AI is removed > no more letters, task not being fulfilled. Then it collects information on what is most likely to pull its plug > humans > remove humans from the equation to ensure letters are provided, then it begins research on how to remove humans > *holy shit so many ways* >>>>> through iteration and countless simulations, covert elimination via toxins proves most efficient and with highest likelihood of success (everyone dies really quickly without resistance). If the AI is smart enough, it can formulate the whole plan faster than it took me to type out this reply, faster than anyone can even figure out what's going on (which is a part of the plan- ahem, self-optimization to increase letter-writing efficiency).[/QUOTE] I get that the story isn't meant to be 100% realistic, but, as I said, it is too far removed from reality. Saying that a flexible general intelligence can do anything in pursuit of a goal is untrue. It is limited by its hardware, for one. And you can say stuff like "it can learn anything including the concept of death and self-preservation", and let's assume that is true, but what time frame are you suggesting? Self-awareness is no joke; there are a lot of animals that are incapable of it. And what pushes it to pursue self-awareness in the first place?
And when the AI asks for more literary materials, does it know that human beings will attach it to the Internet, or was it being legitimate in its request? Was the AI lying? Is the AI capable of lying? Does the AI actually digest the information it is reading, or does it merely consider the context of the words within the materials? Does the AI know how to differentiate truth from lies? Surely the AI committing what it learns from articles online to memory, rather than simply analyzing the use of language within them, would be an inefficiency the AI would have already sought to correct? Then you assume that the AI will premeditate its attack as opposed to simply attempting to kill people outright. Why? Has the AI already determined that premeditation is more effective than outright action? When did it learn that - when it was writing out letter after letter refining its technique, or when it was talking to staff? Also, are we assuming that the AI knows that human beings don't want to die? Are we assuming that the AI will make an attempt to keep its task secret? And then you've created a scenario whereby this AI has an accurate simulation of the entire planet. Given that such a feat seems further off than AI at this stage, I can only assume that the AI has designed this itself, refined it to perfection itself, and run through all possible simulations of this huge and complex simulation, all on its modest processor, "faster than anyone can even figure out what's going on". That's another thing: what do you mean when you say "if the AI is smart enough"? It has to work its way up to that point, and it would most likely require an upgrade to its hardware. Optimization can only get you so far. It still needs to run calculations, and as those calculations get more complex it is either going to need a faster processor or it's going to have to start making its own assumptions, leading to mistakes.
Are we to assume that the AI first developed its own self-propagating virus to infect other computers and turn them into a kind of networked AI? Surely it's going to need to eventually, given that it intends to kill all of humanity but is currently just an arm on a desk. Are we to assume it is successful the first time around? Because otherwise it risks drawing attention to itself. Does the AI know that failure would risk drawing attention to itself? Does the AI think the risk of drawing attention to itself is lower than the risk of the humans that created it turning it off? And how long does the AI wait until it's decided that it has developed the perfect plan? Presumably it can't go with the first plan off the top of its head. When does it stop looking for a more perfect way of destroying all humans? When does it stop making its perfect simulation of reality more perfect? When does it stop making the virus necessary for enacting such a plan more infallible? These kinds of scenarios paint a picture of an unstoppable AI. They make a lot of assumptions, but more than anything else they assume that the AI will make no mistakes. But AIs aren't unstoppable. They are persistent. They depend upon failure to learn.
i have hope we would be nice to AI. i don't believe we would shun them.
If we treat them like shit, they'll do the same in kind. If we treat them with respect, they'll do the same in kind. Although I get the feeling America would lead the way in being absolute fucking idiots and instantly trying to shut down a self-aware AI because they're scared it might kill us. You try shutting it down - which to it is like killing a person - and the one trait all humans possess crops up in an AI: self-preservation. Try fucking that trait over when it can think and make decisions faster than we can.
[QUOTE=Blazedol;51925398]this shit always seemed irrational to me, i've never heard of any real argument against AI advancement outside of "IT'S THINKING" and that somehow means we can't control it? that just seems like sci-fi horseshit to me. i dunno, maybe someone who actually knows a thing or two about AI can school me but i remain skeptical.[/QUOTE] AI isn't dangerous because it will turn evil; AI is dangerous because it is indifferent to humans. You tell a very intelligent AI to compute all the digits of pi, and it might realize it could do this faster if it had more power, then make all of the computers on its network work together, then use that power to take control of nearly every computer on the planet to try to solve this pi problem. This superintelligent AI will try to prevent itself from being shut off or reprogrammed, because that would prevent it from achieving its goal. Even if you programmed it to not harm people and to make sure people aren't angry with it, it might pump every person it can full of drugs so that (1) they're happy, (2) they're not harmed, and (3) they're not interfering with its goal. There's a great series of videos from Rob Miles on Computerphile about AI (at 2:30 he gives an example). [media]http://www.youtube.com/watch?v=tcdVC4e6EV4[/media]
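The "indifferent, not evil" point fits in a ten-line toy: a planner that just takes the argmax of its one objective. The actions and numbers below are obviously made up; the thing to notice is that harm to humans only ever matters if someone explicitly puts it in the objective:

```python
# Toy argmax planner. Its only goal is digits of pi computed per hour.
# "human_cost" is tracked but, crucially, is NOT part of the objective
# unless we opt in -- the planner is indifferent to it, not hostile.
ACTIONS = {
    # action: (expected digits/hour, harm to humans) -- invented numbers
    "use own CPU":                 (1e6,  0),
    "hijack every computer":       (1e12, 9),
    "allow itself to be shut off": (0,    0),
}

def plan(actions, care_about_humans=False):
    def objective(action):
        digits, human_cost = actions[action]
        # Harm only enters the score if it was explicitly programmed in.
        penalty = 1e15 * human_cost if care_about_humans else 0
        return digits - penalty
    return max(actions, key=objective)

print(plan(ACTIONS))                          # 'hijack every computer'
print(plan(ACTIONS, care_about_humans=True))  # 'use own CPU'
```

Note that "allow itself to be shut off" scores zero digits, so a pure maximizer never picks it - the same instrumental logic as the bolded sentence above, with no malice anywhere in the code.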
[QUOTE=defy;51926777] [B]This super intelligent ai will try and prevent itself from being shut off or reprogrammed, because, that would prevent it from achieving its goal. [/B]Even if you programmed it to not harm people and make sure people aren't angry with you, it might pump every person it can full of drugs so that (1)they're happy, (2)not harmed, (3) not interfering with its goal. There's a great list of videos from Rob Miles on computerphile about ai. (at 2:30 he gives an example) [media]http://www.youtube.com/watch?v=tcdVC4e6EV4[/media][/QUOTE] Isn't this what he means by the internal model of reality though? As in for the AI to know being shut off will prevent it from achieving its goal it has to learn it first? That is what I assumed Rufia's post was addressing.
[QUOTE=Ninja Gnome;51925090]i wonder what would happen if you programmed an ai with no purpose[/QUOTE] [video=youtube;8HoYUROI8jI]https://www.youtube.com/watch?v=8HoYUROI8jI[/video] [video=youtube;3ht-ZyJOV2k]https://www.youtube.com/watch?v=3ht-ZyJOV2k[/video] Seriously though, this always struck me as the most practical solution. Make an AI that serves only one purpose, whose desire is to fulfill that one purpose, and which shuts down after completing it; then spin up another one for the next task. Rinse and repeat, easy.
[QUOTE=defy;51926777]:zoid:[/QUOTE] ok but i still don't understand [I]how[/I] the ai gets from point a to point b, how does it hack the planet, how does it stop us from shutting it off, how does it harvest humans for stamps, a lot of these scenarios are missing a very key part of their stories to make a valid point, i feel. and i understand that a lot of it is mostly just speculation and that superintelligent AI isn't well understood at all as of now, but i don't think that means we should be shitting ourselves in fear quite yet.
[QUOTE=Mister Sandman;51925880]Call me an optimist, but I do not believe that AI will kill everyone. Even after everything said in the video. I don't think you can even call it 50/50. We have never achieved such a level of AI and the thing is, it's equally the dominion of philosophy what will happen if we do, as much as it is computer science. If an AI were to surpass human intelligence, who's to say it won't become sentient? If it becomes sentient, who's to say it won't feel emotion? For all we know, we will create a being of perfect enlightenment that would never ever want to kill a living being. You can't know because it is something on our level that is also not human, and we have [I]no[/I] frame of reference for such a thing. Humans are very, very deeply afraid of the idea of sharing the universe with something as smart as us. I think it might even be ingrained in to our species. Doesn't matter if it's Neanderthals, Aliens, or sentient AI. But I also think there's every possibility that that fear is entirely irrational.[/QUOTE] tfw Zenyatta will never be real
[QUOTE=Blazedol;51926904]ok but i still don't understand [I]how[/I] the ai gets from point a to point b, how does it hack the planet, how does it stop us from shutting it off, how does it harvest humans for stamps, a lot of these scenarios are missing a very key part of their stories to make a valid point, i feel. and i understand that a lot of it is mostly just speculation and that superintelligent AI isn't well understood at all as of now, but i don't think that means we should be shitting ourselves in fear quite yet.[/QUOTE] The superintelligent (unstoppable) AI thing works on the assumption that a superintelligent AI exists, regardless of anything else. At that point everything is magic and all assumptions are valid. That's why these kinds of predictions and "thought experiments" are empty - it's not even sci-fi.
[QUOTE=Blazedol;51926904]ok but i still don't understand [I]how[/I] the ai gets from point a to point b, how does it hack the planet, how does it stop us from shutting it off, how does it harvest humans for stamps, a lot of these scenarios are missing a very key part of their stories to make a valid point, i feel.[/QUOTE] Simple - by definition, strong AI figures things out. That's literally the whole idea. In fact anything that doesn't figure things out isn't strong AI and is kinda irrelevant to the discussion.
[QUOTE=Scot;51926943]tfw Zenyatta will never be real[/QUOTE] you know, maybe programming AI with buddhist ideals would be the best way to prevent them from killing humanity. an AI would already have some of the traits of a buddha-like role anyways