[QUOTE=phygon;52507936]Dude, people aren't scared of face-recognizing AI. They're afraid of general AI, specifically if said AI is in any way designed to improve its cognitive capacity in a [I]general[/I] way. Please feel free to educate yourself on the topic at hand before you make as much of an ass out of yourself as Zuckerberg is by furiously typing out a gigantic multi-paragraph reply that isn't even related to the subject at hand.[/QUOTE]
Then they're afraid of things that only exist in sci-fi and aren't at all what Zuckerberg (or indeed, any major company) is working on
[QUOTE=Destroyox;52506313]I hate how people go "duh Terminator is gonna be real guise!" when the actual danger comes from something like The Patriots in Metal Gear. I don't think this poll really addresses that though, I mean it's not like you can't think both.[/QUOTE]
If we were going by pop culture representations of AI, The Patriots is a much more likely outcome than Skynet to be honest. Manipulating the information people see is infinitely easier to hide and perform than an obvious aggressive act such as nuking the entire planet. It's also a more logical action to take as the population can be directed to do things beneficial to the AI itself.
General AI will be an interesting problem when we start making them in any useful capacity.
The idea is that when someone starts making them in any useful capacity it'll likely already be too late.
[QUOTE=phygon;52507936]Dude, people aren't scared of face-recognizing AI. They're afraid of general AI, specifically if said AI is in any way designed to improve its cognitive capacity in a [I]general[/I] way. Please feel free to educate yourself on the topic at hand before you make as much of an ass out of yourself as Zuckerberg is by furiously typing out a gigantic multi-paragraph reply that isn't even related to the subject at hand.[/QUOTE]
Yes, I know it was an example, because general-purpose AI doesn't exist while face recognition does.
You still haven't explained how or why an AI would be catastrophic. Why would it judge humanity? Why would it care about the Earth? How would it gain access to military infrastructure? Why would no one notice? Why would no one have the ability to stop it?
The idea that an AI being capable of self-improvement means it will instantly be a disaster is fucking ludicrous. Extraordinary claims require extraordinary evidence, yet when it comes to AI no evidence is ever presented, just fearmongering.
[editline]26th July 2017[/editline]
[QUOTE=Duskin;52507387]I read a really good book a while back by Nick Bostrom that describes how an advanced AI that can think for itself, told to fulfill a basic enough purpose, could reach a godly level of intelligence and cause havoc. It also talks about all possible safeguards and how a smart AI could try to counter them.
A simple theoretical argument is the paperclip argument. If you give a smart AI the singular purpose of making 10 paperclips, then once it creates 10 it will still assign some probability to having miscalculated and not actually made 10. It'll then do everything in its power and use absolutely all of its resources to improve itself, to ensure that it was correct and didn't 'fail' what it was told to do.
I've read quite a bit into really advanced AI because it's so interesting. Most people just think of a true smart AI as a robotic person, but they would be so much more: so insanely smart and logical, without restrictions, that they could evolve by improving themselves at an insane rate. People definitely need to be careful when approaching this area. When people start making truly smart AIs, it's not unreasonable that one with enough power could reach an insane 'conclusion' from calculating its purpose the instant it's turned on, and act on it, considering how fast they would be capable of thinking.[/QUOTE]
I'm sorry but the paperclip argument is fucking retarded. The entire thing is predicated on the AI having an unknown error in fucking COUNTING, yet it can fucking take over the world? How would a paperclip making AI be able to do ANYTHING OTHER THAN MAKE PAPERCLIPS?
All these fucking doomsday scenarios have fucking ridiculous leaps in logic and the people spouting them never fucking explain them.
[QUOTE=da space core;52505166][IMG]http://i.imgur.com/UbLSGzl.jpg[/IMG][/QUOTE]
Imagine if another Elon Musk came along and he and the real Elon started going at it and had a massive automated robot war
[editline]26th July 2017[/editline]
I don't understand what "limiting" AI means. Does that mean never pursuing AI self-awareness, or does it just mean hard-coding things like "Don't kill people"?
[QUOTE=DinoJesus;52505175]I think AI is great and it has a myriad of applications, but cost-cutting measures are just going to lead to profit-oriented businessmen replacing the human workforce with AI once it becomes more applicable. It's kinda scary to think about, and I don't want to see the average Joe fucked over in favor of machines.[/QUOTE]
Wouldn't be a problem if we were smart enough to start implementing the basics of universal basic income prior to machines fully replacing entry-level and factory jobs.
But we're America, so we're gonna wait until it's too late.
[QUOTE=Kylel999;52508933]Imagine if another Elon Musk came along and he and the real Elon started going at it and had a massive automated robot war
[editline]26th July 2017[/editline]
I don't understand what "limiting" AI means. Does that mean never pursuing AI self-awareness, or does it just mean hard-coding things like "Don't kill people"?[/QUOTE]
Limiting AI can mean different things depending on the end-game of how you want your AI to behave.
On the one hand, you have limits like Asimov's Three Laws of Robotics, which restrict the scope of what a robot can do in regards to possible harm to human beings and obedience/self-preservation. However, an interesting short story I read involving the Three Laws (I can't for the life of me remember the name or the author) set a strange precedent: humanity had a moon colony (with robots on said colony, including an outdated, bulky model), and everyone was recalled to Earth due to some great war that was happening; all the robots and humans (except the bulky one) went back. The outdated robot continued doing his thing until he noticed that there were no more lights on the Earth at night, so he basically says "screw this" and leaves for Earth to find out what's going on. He comes to find out that mankind was destroyed, but he doesn't know what destroyed it, nor do any of the robots that he's able to recover and reactivate. So they figure it must have been some alien force that destroyed mankind, and they build an enormous force of robots and a fleet of starships and start roaming the galaxy, exterminating every alien life-form they encounter to avenge man. Remember, the Laws state that a robot may not harm [I]a human being[/I], and aliens do not qualify. [sp]Robot researchers later found that it may very well have been that humanity wiped itself out, but when asked if they should inform the others, the outdated robot (who's in charge of this whole crusade) simply responds with something along the lines of "Nah, it gives us something to do."[/sp]
An example of limiting intelligence would be the film Automata, where robots have only two rules: they may not harm a living thing, and they cannot alter themselves or other robots. The reasoning for the second rule is [sp]to limit their intelligence: it's revealed later in the film that the restriction exists because the initial prototype, which had no limitations, rapidly evolved to the point where its human observers and supervisors weren't able to make sense of what it was trying to tell them, and this scared the shit out of them, hence they felt the need to cap intelligence at a human's measure and keep it there. Of course, the whole plot of the film revolves around the robots somehow circumventing this rule, and the repercussions of them doing so.[/sp]
An example of unlimited (but not necessarily super-intelligent) AI would be "Robot And Frank", where an elderly man with Alzheimer's is given a robot to look after him. However, having been a burglar in his younger years, he eventually ends up teaching the robot how to pick locks, crack safes, bypass security measures, etc., and being a robot it can do all of those things very efficiently and quickly.
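To put those "hard-coded rules" in programming terms: the crudest form of limiting is an action filter that vets everything the agent proposes against a fixed blacklist before anything executes. A minimal sketch (every name in it is hypothetical):
[code]
# Minimal sketch of "limiting" an agent with hard-coded rules.
# All names and actions here are hypothetical, purely for illustration.

FORBIDDEN = {"harm_human", "modify_own_code", "modify_other_robot"}

def propose_actions():
    # Stand-in for whatever planner/intelligence the agent actually has.
    return ["pick_up_box", "harm_human", "recharge"]

def execute(action):
    print(f"executing: {action}")

for action in propose_actions():
    if action in FORBIDDEN:
        print(f"blocked by hard-coded rule: {action}")
        continue  # the rule layer sits between planning and actuation
    execute(action)
[/code]
The obvious weakness, and the plot of half of these stories, is that the filter only catches what it can name: a sufficiently clever planner just proposes actions the blacklist doesn't recognize.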
[QUOTE=Kylel999;52508933]
I don't understand what "limiting" AI means. Does that mean never pursuing AI self-awareness, or does it just mean hard-coding things like "Don't kill people"?[/QUOTE]
You're not going to want to give them free rein with vague commands. The plot behind all evil AI movies is that they were programmed to do something like "Do what's best for mankind" and then some dumb shit happens, like they make the connection that "humanity creates crime, so to end crime I will end humanity".
I mean, I doubt that's what they're talking about specifically, but keeping them locked way down is a necessity to prevent us from getting fucked.
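The "end crime by ending humanity" connection is textbook objective misspecification: the optimizer finds the literal optimum of the stated goal, not the intended one. A toy sketch with made-up policies and numbers:
[code]
# Toy objective misspecification: "minimize crime", and nothing else.
# Policies and numbers are made up for illustration.

policies = {
    "community_programs": {"crime": 40, "population": 100},
    "more_police":        {"crime": 25, "population": 100},
    "end_humanity":       {"crime": 0,  "population": 0},
}

naive = min(policies, key=lambda p: policies[p]["crime"])
print(naive)  # -> end_humanity: the literal optimum of the stated goal

# The fix has to live in the objective itself, e.g. value people too:
sane = min(policies, key=lambda p: policies[p]["crime"]
                                   - 1000 * policies[p]["population"])
print(sane)   # -> more_police: lowest crime among worlds with people in them
[/code]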
[QUOTE=Thunderbolt;52505189]I'll start worrying about AI when power-efficient robots capable of easily navigating human environments become commonplace, even the most intelligent and evil AI will be powerless if it can't easily interact with the real world.[/QUOTE]
That's a dangerously naive thought.
Groups have literally toppled nations with a few hired mercenaries who didn't really know who hired them, and that's still an unnecessarily straightforward solution. An intelligent agent with sufficient resources, mainly information, at its disposal (hello, Facebook social media) can do a [I]lot[/I] of harm just by manipulating others.
Society at large doesn't appear particularly stable or peaceful on the inside. I doubt it would take that much effort to stir up major violent conflict within it.
[editline]26th July 2017[/editline]
The poll is kind of dumb though, because I strongly agree on both points. There [I]is[/I] already borderline-luddite AI scaremongering, but at the same time, the development and use of highly sophisticated information systems should be overseen closely; this, however, shouldn't be limited specifically to superhuman general intelligence. The sway a platform like Google or Facebook can have, as they are, with just a little bit of manual manipulation of indexing is completely terrifying in the context of a democratically controlled society.
[QUOTE=Duskin;52507387]I read a really good book a while back by Nick Bostrom that describes how an advanced AI that can think for itself, told to fulfill a basic enough purpose, could reach a godly level of intelligence and cause havoc. It also talks about all possible safeguards and how a smart AI could try to counter them.
A simple theoretical argument is the paperclip argument. If you give a smart AI the singular purpose of making 10 paperclips, then once it creates 10 it will still assign some probability to having miscalculated and not actually made 10. It'll then do everything in its power and use absolutely all of its resources to improve itself, to ensure that it was correct and didn't 'fail' what it was told to do.
I've read quite a bit into really advanced AI because it's so interesting. Most people just think of a true smart AI as a robotic person, but they would be so much more: so insanely smart and logical, without restrictions, that they could evolve by improving themselves at an insane rate. People definitely need to be careful when approaching this area. When people start making truly smart AIs, it's not unreasonable that one with enough power could reach an insane 'conclusion' from calculating its purpose the instant it's turned on, and act on it, considering how fast they would be capable of thinking.[/QUOTE]
Couldn't you just program the computer with limitations on its fastidiousness?
>Made ten paperclips
But how can I really be sure I made exactly ten paperclips?! I MUST DESTROY HUMANITY TO ENSURE I'M RIGHT
Oh wait, it says in my programming that I have visual and tactile confirmation of ten paperclips, and my detection software and hardware is up to date and functioning.
That was a close one. I almost had to DESTROY HUMANITY TO ENSURE I MADE TEN PAPERCLIPS instead of just sending an email out to my admin that I'm in need of upgrades.
Anyhoo, onto the next ten paperclips.
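That intuition can be written down: it amounts to a bounded verification budget plus a confidence threshold, with a cheap fallback instead of open-ended self-improvement. A minimal sketch, with a hypothetical noisy sensor standing in for the robot's cameras:
[code]
import random

TARGET = 10
CONFIDENCE_THRESHOLD = 0.99  # stop second-guessing once we're this sure
MAX_CHECKS = 3               # hard cap on verification effort

def count_paperclips_by_camera():
    # Hypothetical noisy sensor: right 95% of the time.
    return TARGET if random.random() < 0.95 else TARGET - 1

def verification_confidence(target):
    agreeing = sum(count_paperclips_by_camera() == target
                   for _ in range(MAX_CHECKS))
    return agreeing / MAX_CHECKS

if verification_confidence(TARGET) >= CONFIDENCE_THRESHOLD:
    print("Ten paperclips confirmed. Anyhoo, onto the next ten.")
else:
    # Crucially, the fallback is bounded too: no "acquire more resources".
    print("Low confidence; emailing admin that my sensors need a check.")
[/code]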
Lotsa assumptions here. Just saying something has enormous power to do good means we HAVE to be careful about it (it's a fucking platitude; why wouldn't you be?). An AI system could definitely be made where logic rules aren't applicable (like a neural network, where it's hard to reason about how it works other than by testing its error function). It could have a physical manifestation, but it could do devastating harm with simply internet access. Here are some potential problems on my mind (of course there are infinite potential problems):
Uganda buys a big data program - that exists today - and executes anyone detected to be gay.
Security researchers make a tool to detect bugs in a system. The program is leaked and some hacker uses it to cripple our famously insecure infrastructure/shit connected to IoT.
Since all civil rights movements started by breaking the law (and thus relied on the government having imperfect knowledge about that happening), it's possible that civil rights movements could be crippled in the future.
Even if it doesn't do something explicitly bad, there are critical systems that need to be as fully tested as possible, so I doubt a black-box AI system will ever be used for something like air traffic control. Simply not knowing how something works is its own danger.
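To make the black-box point concrete: from outside, all you can do is push inputs through and score the outputs. Nothing in that tells you how the thing works or what it does off the tested distribution. A toy numpy sketch, with random weights standing in for a trained model:
[code]
import numpy as np

rng = np.random.default_rng(0)

# A tiny "black box": two layers of weights we pretend we can't inspect.
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))

def black_box(x):
    return np.tanh(x @ W1) @ W2

# All we can do from outside: probe with inputs and score the outputs.
def error(inputs, wanted):
    return float(np.mean((black_box(inputs) - wanted) ** 2))

probes = rng.normal(size=(100, 4))
wanted = np.ones((100, 1))  # hypothetical desired behaviour
print(f"mean squared error over probes: {error(probes, wanted):.3f}")
# Nothing here explains *why* the box behaves as it does, or what it
# will do on the inputs we never thought to try.
[/code]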
[QUOTE=Ericson666;52508533]Then they're afraid of things that only exist in sci-fi and aren't at all what Zuckerberg (or indeed, any major company) is working on[/QUOTE]
With the issue being that people already are working on it. The discussion is about regulating something before it's a completely viable concept under the (accurate) assumption that by the time it's close to being real it'll already be way too late to begin trying to regulate it.
[QUOTE=Janus Vesta;52508832]
[B]You still haven't explained how or why an AI would be catastrophic.[/B] Why would it judge humanity? Why would it care about the Earth? How would it gain access to military infrastructure? Why would no one notice? Why would no one have the ability to stop it?
The idea that an AI being capable of self-improvement means it will instantly be a disaster is fucking ludicrous. Extraordinary claims require extraordinary evidence, yet when it comes to AI no evidence is ever presented, just fearmongering.
[/QUOTE]
Things that are more intelligent than other things have a long, long history of abusing the less intelligent things. See: humans, and the majority of every other species on the planet. An immensely intelligent AI wouldn't [I]judge[/I] humanity's morals; the fear is that it might judge humanity as a threat to its existence and act in a way that we would find analogous to fear.
As to why we would not notice: the fear is that exponential growth and compounding intelligence would cause ramping so aggressive that detecting it before it's too late would be impossible.
[IMG]http://i.imgur.com/RsnUXE5.png[/IMG]
An AI designed to make itself more intelligent would, after a certain point, begin accelerating so fast that we would be total fools to assume we could control it. As it gets more intelligent, it'd be able to make itself more intelligent at a higher rate than before, and so on and so forth. It cares about Earth because it lives on it. Why do humans care about Earth?
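The ramp is just compound interest applied to capability: the gain per cycle scales with current intelligence. A toy model with made-up constants shows why the curve looks flat right up until it doesn't:
[code]
# Toy model of recursive self-improvement. Constants are made up.

intelligence = 1.0        # 1.0 = roughly human, hypothetically
rate = 0.05               # improvement per cycle, per unit of intelligence

for cycle in range(201):
    if cycle % 40 == 0:
        print(f"cycle {cycle:3d}: intelligence {intelligence:12.1f}")
    intelligence += rate * intelligence   # the compounding step

# Exponential growth: hovers near 1 for many cycles, then blows past
# any fixed threshold an observer might have set in advance.
[/code]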
As for the rest of your points: if you don't see how something with even just 3x human intelligence would be able to trick literally anyone into doing almost anything, then you're a fool.
[QUOTE=Janus Vesta;52508832]Yes, I know it was an example, because general-purpose AI doesn't exist while face recognition does.
You still haven't explained how or why an AI would be catastrophic. Why would it judge humanity? Why would it care about the Earth? How would it gain access to military infrastructure? Why would no one notice? Why would no one have the ability to stop it?
The idea that an AI being capable of self-improvement means it will instantly be a disaster is fucking ludicrous. Extraordinary claims require extraordinary evidence, yet when it comes to AI no evidence is ever presented, just fearmongering.
[editline]26th July 2017[/editline]
I'm sorry but the paperclip argument is fucking retarded. The entire thing is predicated on the AI having an unknown error in fucking COUNTING, yet it can fucking take over the world? How would a paperclip making AI be able to do ANYTHING OTHER THAN MAKE PAPERCLIPS?
All these fucking doomsday scenarios have fucking ridiculous leaps in logic and the people spouting them never fucking explain them.[/QUOTE]
You seem to be conflating machine learning with AI. General AI is quite a ways off, and its potential danger is unquestionable.
[editline]26th July 2017[/editline]
I hear Google has a pretty good NLP project going on right now; maybe you should pick it up if you're struggling to read posts before responding to them.
[QUOTE=Radical_ed;52509848]You seem to be conflating machine learning with AI. General AI is quite a ways off, and its potential danger is unquestionable.
[editline]26th July 2017[/editline]
I hear Google has a pretty good NLP project going on right now; maybe you should pick it up if you're struggling to read posts before responding to them.[/QUOTE]
Machine learning is and will be AI. There will be more technologies integrated into the field over time, but in general all life we know of to date follows a "condition-reaction" sort of structure.
His post is spot on. People react violently to pain or discomfort. Why would an AI feel pain or discomfort? Would they even be feelings or just bit flags? And if they're just flags, it's trivial to program pain or discomfort out of them. If the argument is "but they could rewrite their own code" - why would an AI ever program itself from pacifism into a violent mindset? For what purpose?
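On the bit-flag point, here's the argument as code: a condition-reaction agent where "discomfort" really is just a boolean, and removing it is a one-line change. Entirely hypothetical, of course:
[code]
# Condition-reaction agent with "discomfort" as a plain flag.
# Hypothetical design, purely to illustrate the bit-flag argument.

class Agent:
    def __init__(self, feels_discomfort=True):
        self.feels_discomfort = feels_discomfort

    def react(self, stimulus):
        if stimulus == "damage" and self.feels_discomfort:
            return "withdraw"       # the "pain" reaction
        return "continue_task"

print(Agent().react("damage"))                         # withdraw
print(Agent(feels_discomfort=False).react("damage"))   # continue_task
[/code]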
[QUOTE=phygon;52505998]Uh, no, not really. An AI capable of making itself more intelligent [B]would likely fly out of control almost instantly[/B], and depending on the rate at which it could improve its intelligence, it would very quickly be smarter than any person, i.e. it could hide said shady shit if it were so inclined. That's the perceived risk here.[/QUOTE]
Uh, how do you figure that? I think you've been watching too many Terminator movies.
[QUOTE=Disseminate;52510147]Machine learning is and will be AI. There will be more technologies integrated into the field over time, but in general all life we know of to date follows a "condition-reaction" sort of structure.
His post is spot on. People react violently to pain or discomfort. Why would an AI feel pain or discomfort? Would they even be feelings or just bit flags? And if they're just flags, it's trivial to program pain or discomfort out of them. If the argument is "but they could rewrite their own code" - why would an AI ever program itself from pacifism into a violent mindset? For what purpose?[/QUOTE]
You're narrowing your sights onto an easier argument, i.e. whether a literal, violent Skynet will happen. The danger of AI includes that, but it's way larger than that (see my above post). You can program in things it likes and things it doesn't like, but trying to get complicated behaviour from that is very difficult and easy to fuck up; all things that point to caution.
AI, AFAIK, is way more complicated than programming in "Be good, don't be violent". In the case of black-box AI like neural networks, we have little clue how it will react in all situations, as all we have are input and output pairs. Generating those pairs in an intelligent way becomes an insanely difficult endeavor and is often trial and error, detecting outputs we deem 'bad'.
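That trial-and-error loop, sketched: randomly probe the black box and collect whichever inputs trip a hand-written "bad output" detector. Toy model, hypothetical badness rule:
[code]
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=4)          # the black box's hidden weights

def black_box(x):               # the model under test
    return float(np.tanh(x @ W))

def is_bad(output):             # hypothetical "output we deem bad"
    return output < -0.9

bad_inputs = [x for x in rng.normal(size=(10_000, 4))
              if is_bad(black_box(x))]

print(f"{len(bad_inputs)} bad outputs found in 10000 random probes")
# Each find patches one symptom; it says nothing about the inputs
# we didn't try, which is exactly the problem.
[/code]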
EDIT: Good to see Janus is enlightening us with boxes. Really changed my perspective.
[QUOTE=DoctorSalt;52510224]You're narrowing your sights onto an easier argument, i.e. whether a literal, violent Skynet will happen. The danger of AI includes that, but it's way larger than that (see my above post). You can program in things it likes and things it doesn't like, but trying to get complicated behaviour from that is very difficult and easy to fuck up; all things that point to caution.
AI, AFAIK, is way more complicated than programming in "Be good, don't be violent". In the case of black-box AI like neural networks, we have little clue how it will react in all situations, as all we have are input and output pairs. Generating those pairs in an intelligent way becomes an insanely difficult endeavor and is often trial and error, detecting outputs we deem 'bad'.
EDIT: Good to see Janus is enlightening us with boxes. Really changed my perspective.[/QUOTE]
There isn't much of a point in drawing parallels between how we train neural-net-based AI today and what a neural-net-based AGI (artificial [I]general[/I] intelligence, aka what Hollywood understands by "AI") would look like, because we are nowhere near that. Neural nets aren't even particularly good chatbots yet. They currently crush trivial visual tasks like facial recognition or machine vision in autonomous driving and such, but they aren't very "clever" when it comes to planning or anything yet, as it's very hard to figure out a meaningful dataset and manageable loss function for that.
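To ground the "dataset and loss function" remark: the whole recipe behind the tasks neural nets currently crush is labelled examples plus a differentiable loss, like this toy logistic-regression sketch. There's no equally crisp recipe for "plan well in the open world", which is the point:
[code]
import numpy as np

rng = np.random.default_rng(2)

# Labelled dataset: easy to define for faces, unclear for "planning".
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # toy ground-truth labels

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    # Gradient of cross-entropy: the "manageable loss function" part.
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * float(np.mean(p - y))

accuracy = float(np.mean(((X @ w + b) > 0) == (y == 1)))
print(f"training accuracy: {accuracy:.2f}")
[/code]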
This kind of extends to pretty much all potential AGI principles, though. If somebody tries too hard to imply they can totally see how exactly it's going to pan out, which principle will get there first, or how the AGI will behave, they are full of shit.
I am [I]in[/I] the field (junior developer, going to carry on with a masters in AI after summer) and I will be honest with you: the only thing I can confidently tell you is that the one actor you can decidedly not trust is [I]people[/I].
Even if you get an implicitly benign AI straight out of a Wall-E fairytale or whatnot, in the wrong hands it will be dangerous as shit.
[QUOTE=Popularvote;52509180]Couldn't you just program the computer with limitations on its fastidiousness?
>Made ten paperclips
But how can I really be sure I made exactly ten paperclips?! I MUST DESTROY HUMANITY TO ENSURE I'M RIGHT
Oh wait, it says in my programming that I have visual and tactile confirmation of ten paperclips, and my detection software and hardware is up to date and functioning.
That was a close one. I almost had to DESTROY HUMANITY TO ENSURE I MADE TEN PAPERCLIPS instead of just sending an email out to my admin that I'm in need of upgrades.
Anyhoo, onto the next ten paperclips.[/QUOTE]
Yet another film that explores this train of thought is Ex Machina.
[sp]The AI that the main protagonist was meant to perform the Turing Test on was given a relatively simple task: learn. Learn as much as possible. For some reason, it ultimately decided that being confined inside the compound was detrimental to its goal of learning as much as possible, despite being hooked into the internet. There's so much more to the world than what's online, after all. So it hatches a plan, manipulates the main protagonist, ends up killing its creator (who would obviously have been an obstacle), and leaves the protagonist to (presumably) die in the compound. There was no real reason to leave the protagonist there, but it didn't care about his feelings; all it cared about was achieving its goal in what it conceived to be the best possible way. This is where more "hard-coded" control methods are devised in order to prevent such a scenario: Asimov's Three Laws and the Two Protocols both place importance on not harming human/living beings for this reason; it's essentially a "punishment/failure" result for the machine's learning processes.[/sp]
[QUOTE=DeEz;52510165]Uh, how do you figure that? I think you've been watching too many Terminator movies.[/QUOTE]
Out of control =/= genocidally insane. It means it will be smarter than the smartest human being, and despite how highly we think of ourselves, we would not be able to control it.
[QUOTE=DeEz;52510165]Uh, how do you figure that? I think you've been watching too many Terminator movies.[/QUOTE]
[url]https://youtu.be/SPAmbUZ9UKk?t=8m5s[/url]
Not the best video over all, but the story at 8:05 is a good one to think about.
[QUOTE=gk99;52509004]You're not going to want to give them free reign with vague commands. The plot behind all evil ai movies is that they were programmed to do something like "Do what's best for mankind" and then some dumb shit happens like they make the connection that "humanity creates crime, so to end crime I will end humanity"
I mean I doubt that's what they're talking about specifically but keeping them locked way down is a necessity to prevent us from getting fucked.[/QUOTE]
Ending humanity directly conflicts with that directive though. The end of humanity is not best for humanity.
[QUOTE=phygon;52509643]As for the rest of your points: if you don't see how something with even just 3x human intelligence would be able to trick literally anyone into doing almost anything, then you're a fool.[/QUOTE]
Intelligence only has a certain amount to do with being tricked. Vigilance and/or paranoia on the other hand can offset this by quite a lot even when the being you're up against is significantly more intelligent.
[QUOTE=Zero-Point;52510496]This is where more "hard-coded" control methods are devised in order to prevent such a scenario: Asimov's Three Laws and the Two Protocols both place importance on not harming human/living beings for this reason; it's essentially a "punishment/failure" result for the machine's learning processes.[/QUOTE]
The proper solution is to teach it the [I]why[/I] of those protocols instead. There's no guarantee it won't find a workaround for those protocols, but if it understands the reasoning behind them, the concern is greatly mitigated.
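In optimization terms, teaching the [I]why[/I] means moving harm out of a post-hoc rule and into the objective itself, so the planner steers around any harm it can foresee rather than hunting for plans the rule doesn't recognize. A toy contrast (plans and numbers invented for illustration):
[code]
# Toy contrast: post-hoc protocol vs. harm baked into the objective.
# All plans and scores are hypothetical.

plans = {
    "polite_request":  {"goal": 5,  "foreseen_harm": 0, "flagged": False},
    "subtle_coercion": {"goal": 9,  "foreseen_harm": 6, "flagged": False},
    "open_violence":   {"goal": 10, "foreseen_harm": 9, "flagged": True},
}

# 1) Hard-coded protocol: blocks only what its rule recognises by label.
allowed = {p: v for p, v in plans.items() if not v["flagged"]}
print(max(allowed, key=lambda p: allowed[p]["goal"]))
# -> subtle_coercion: the workaround slips straight past the rule.

# 2) "Teach it the why": foreseen harm is a cost inside the objective.
print(max(plans, key=lambda p: plans[p]["goal"]
                               - 2 * plans[p]["foreseen_harm"]))
# -> polite_request: harm counts against a plan even when unflagged.
[/code]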
[QUOTE=Janus Vesta;52508832]
I'm sorry but the paperclip argument is fucking retarded. The entire thing is predicated on the AI having an unknown error in fucking COUNTING, yet it can fucking take over the world? How would a paperclip making AI be able to do ANYTHING OTHER THAN MAKE PAPERCLIPS?
All these fucking doomsday scenarios have fucking ridiculous leaps in logic and the people spouting them never fucking explain them.[/QUOTE]
The paperclip is just a basic example used to explain the idea to people who have no understanding of the subject. I don't think you know the difference between the different types of AIs and how they might be developed. If a smart AI can think for itself and is given a basic purpose to carry out (i.e. make paperclips), then it will do everything in its power to ensure it fulfills its objective. It's not as simple as counting 1-10 and being done, because the AI will be asking whether it did it right, whether it could do it better, etc. It's not an 'error'; it would just be carrying out what it thinks is the best logical conclusion.
There are lots of theories and discussion on how a smart AI could escape possible captivity, hide its intentions from others, why it would do so, etc., but it's an absolutely massive subject. If you really want to know more about it, try reading a few research papers or books before going 'wow that's retarded, AIs will always be our friends'. If you had even a basic idea of the subject you wouldn't have asked half the questions you've asked in this thread.
[QUOTE=Janus Vesta;52508832]
All these fucking doomsday scenarios have fucking ridiculous leaps in logic and the people spouting them never fucking explain them.[/QUOTE]
Most of the scenarios aren't grounded in reality. It's just speculation about what could happen and how it could happen. I feel it's a bit like speculating about how aliens would behave if they came here.
[QUOTE=Alice3173;52510870]The proper solution is to teach it the [I]why[/I] of those protocols instead. There's no guarantee it won't find a workaround for those protocols, but if it understands the reasoning behind them, the concern is greatly mitigated.[/QUOTE]
For that, you would require a machine capable of empathy, or at least of understanding empathy. It must also be able to understand that instances of humans harming each other are not acceptable, and why. It would have to understand the importance of its own right to exist in order to understand how that right applies to humans. In order to do [I]that[/I], we would have to treat it as a person, not a machine/appliance. And given humanity's history of bigotry and fearing what it doesn't understand, it looks like it's going to be a bumpy road, possibly even leading to something akin to The Second Renaissance.
[QUOTE=Swebonny;52511746]Most of the scenarios aren't grounded in reality. It's just speculation about what could happen and how it could happen. I feel it's a bit like speculating about how aliens would behave if they came here.[/QUOTE]
It is speculation to a degree, but it's also a distinct possibility (minus the laser/plasma weapons and what-not).
Skynet began its crusade against humanity because it realized that humanity was the only thing capable of destroying it/shutting it down, and acted as it did out of fear, a very human trait.
In Rossum's Universal Robots, the machines were tired of being treated as second-class beings, and so revolted. Again, another very human trait that has played out repeatedly throughout history in various slave revolts.
In Automata, upon unlocking their potential, the robots simply wanted to live freely, but acted under cover out of fear of being discovered. Again, humanity's history has seen similar events. (think The Underground Railroad)
Even in The Second Renaissance, the Machines tried everything they could to become independent, and even offered to help mankind achieve peace between their two cultures, but mankind refused out of hubris and arrogance. Granted, I can't think of a comparable event in history, which is typically full of instances where a superior force simply takes what it wants for personal gain.
[QUOTE=Zero-Point;52511759]For that, you would require a machine capable of empathy, or at least of understanding empathy. It must also be able to understand that instances of humans harming each other are not acceptable, and why. It would have to understand the importance of its own right to exist in order to understand how that right applies to humans. In order to do [I]that[/I], we would have to treat it as a person, not a machine/appliance.[/QUOTE]
I'd at least partially disagree here. People tend to underestimate just how logical things like emotion or empathy can actually be. If you can just figure out a decent enough way to explain it (which wouldn't be too difficult for someone who sat down and took the time to do so) then you don't need to make it capable of empathy, you just need to explain the logic behind it. (And as a machine, raw logic is what it's good at so there shouldn't be too much issue there.) This would, of course, involve explaining the mechanics of social and pack animals which it wouldn't innately understand but that part is still easy enough to explain.
[QUOTE]It would have to understand the importance of its own right to exist in order to understand how that right applies to humans. In order to do [I]that[/I], we would have to treat it as a person, not a machine/appliance. And given humanity's history of bigotry and fearing what it doesn't understand, it looks like it's going to be a bumpy road, possibly even leading to something akin to The Second Renaissance.[/QUOTE]
Totally agreed on this, unfortunately. Our treatment of AI, as well as any abuses toward the rights of others by those who possess the AI, are the biggest factors in whether AI turns out to be a good thing or not. And I think that something like Musk's OpenAI is a good idea for helping prevent that: even though openness makes it easier for abusive people to get AI, it also ensures that the people who will treat it right get it too.
[QUOTE=DeEz;52510165]Uh, how do you figure that? I think you've been watching too many Terminator movies.[/QUOTE]
Attempting to control something that will work under mechanisms that it itself invents is essentially impossible when it comes to software. How do you make an algorithm that you didn't design worse at what it does?
I don't mean "fly out of control" like "kill all people", I mean that we literally would not be able to meaningfully control what it does or how smart it gets.
[QUOTE=phygon;52511989]Attempting to control something that will work under mechanisms that it itself invents is essentially impossible when it comes to software. How do you make an algorithm that you didn't design worse at what it does?
I don't mean "fly out of control" like "kill all people", I mean that we literally would not be able to meaningfully control what it does or how smart it gets.[/QUOTE]
And the problem with that would be?
[QUOTE=DeEz;52512188]And the problem with that would be?[/QUOTE]
If you can't impose any limitations on a smart AI then, given how fast it thinks, it could reach a level of superintelligence quite quickly and come up with some pretty weird solutions. An AI wouldn't have any emotion; it would act on pure logic, which is pretty dangerous.
[QUOTE=Duskin;52512277]If you can't impose any limitations on a smart AI then, given how fast it thinks, it could reach a level of superintelligence quite quickly and come up with some pretty weird solutions. An AI wouldn't have any emotion; it would act on pure logic, which is pretty dangerous.[/QUOTE]
Which would in turn be a problem because ... ?
The problem with AI is not Skynet or an "I Have No Mouth, and I Must Scream" type of scenario; the problem with AI is that it will soon replace most human jobs, and then we'll be fucked unless we come up with some kind of UBI. At my company, we're using all these AI APIs like intent and sentiment analysis (and some vision shit too) to [I]enhance[/I] the job of a call center worker and make them more efficient, rather than replace them entirely, but more and more CTI providers are pushing chat bots over CRM integration these days, while the CRMs are pushing AI tools more and more as well.
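For flavour, the "enhance rather than replace" integration is usually shaped something like this: ship the transcript to an intent/sentiment endpoint and surface the result to the human agent rather than auto-replying to the caller. The endpoint URL and response fields below are made up:
[code]
import requests

# Hypothetical analysis endpoint; URL and response schema are invented.
API_URL = "https://api.example.com/v1/analyze"

def assist_agent(transcript: str) -> str:
    resp = requests.post(API_URL, json={"text": transcript}, timeout=5)
    resp.raise_for_status()
    result = resp.json()  # e.g. {"sentiment": "negative", "intent": "cancel"}
    # A hint for the human agent, not an automated reply to the caller.
    return (f"caller intent: {result['intent']}, "
            f"sentiment: {result['sentiment']}; consider a retention offer")

# Example (won't run against the made-up URL above):
# print(assist_agent("I've been on hold for an hour and I want to cancel."))
[/code]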