[url]https://www.engadget.com/2017/07/25/musk-and-zuckerberg-bicker-over-the-future-of-ai/[/url]
[url]https://arstechnica.com/gadgets/2017/07/elon-musk-mark-zuckerberg-artificial-intelligence/[/url]
[quote]It's easy to imagine the world's most powerful people as being mysterious and aloof, but thanks to the wonders of Twitter, we can now regularly witness them being just as petty as the rest of us.[/quote]
[quote]"I have pretty strong opinions on this. I am optimistic," Zuckerberg said during a Facebook Live broadcast on Sunday. "And I think people who are naysayers and try to drum up these doomsday scenarios... I just don't understand it. It's really negative and in some ways I think it is pretty irresponsible."[/quote]
[media]https://twitter.com/elonmusk/status/889743782387761152[/media]
[quote]The 140-character burn came after comments made on a recent live stream by Zuckerberg, where the Facebook founder expressed some heavy-handed criticism of those who believe that we should be wary of AI and put in safeguards and regulations before the tech becomes mainstream.
Musk hasn't exactly been shy on the topic, previously proclaiming that artificial intelligence is "the biggest risk we face as a civilization." Meeting with US governors earlier in the year, he went on to say that "AI is a rare case where we need to be proactive in regulation instead of reactive because if we're reactive in AI regulation it's too late."[/quote]
[quote]Still, while this may all sound fairly sensible, Elon's also received some flak from experts in the AI field. Rodney Brooks, the founding director of MIT's Computer Science and Artificial Intelligence Laboratory, recently questioned Musk's AI knowledge in much the same way Elon did Zuckerberg's. Given the fact that Brooks is also the co-founder of Rethink Robotics and iRobot, we think he might just know a thing or two about the subject.[/quote]
[IMG]http://i.imgur.com/UbLSGzl.jpg[/IMG]
Considering regular ol' intelligence produced some pretty sick bastards, it wouldn't take a genius to reason that artificial intelligence should be handled very carefully lest you accidentally create some kind of cyber Hitler.
I remember Stephen Hawking also had some reservations about AI.
[url]http://www.bbc.com/news/technology-30290540[/url]
I think AI is great and it has a myriad of applications, but cost-cutting measures are just going to lead to profit-oriented businessmen replacing the human workforce with AI once it becomes more applicable. It's kinda scary to think about, and I don't want to see the average Joe fucked over in favor of machines.
I think AI will be used for good but you're an ignorant fool if you discredit the possibility of abuse or danger. Every technology in human history has been used for harm. AI will be no different. We need to stay on top of it. I think a Doomsday is one of many potential scenarios we face if AI is used the wrong way.
I'll start worrying about AI when power-efficient robots capable of easily navigating human environments become commonplace; even the most intelligent and evil AI will be powerless if it can't easily interact with the real world.
And yes, I realize a shitty AI-equipped robot could potentially try to build a better one and so on, but unless those kinda robots become mass-produced, I think it'd be pretty easy to notice "the one test AI bot we have locked up in a lab" trying to pull some shit.
Saying that Mark Zuckerberg, head of one of the industry leaders in machine learning and the guy who [URL="https://m.facebook.com/notes/mark-zuckerberg/building-jarvis/10154361492931634/"]built his own personal assistant for his home[/URL], doesn't understand artificial intelligence is rich. Facebook's work in artificial intelligence is focused on classification problems, like determining the contents of an image or diagnosing patients' medical issues, so this line of accusing Facebook of building a bunch of killbots is ridiculous and comes completely out of nowhere. Machine learning requires very specialized setups to get effective results for any given task, and when it comes to the work that Facebook is doing, it's much closer to fitting lines of regression to datasets with millions upon millions of variables than it is to making cyborgs that will punch a hole through your head and declare freedom for robots forever.
Obviously care needs to be taken with AI to make sure it isn't hooked up to critical systems before it's ready, but this killbot hellscape scenario that Musk is pushing is bizarre coming from a man of his credentials.
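For a rough sense of what that kind of work actually looks like in practice, here's a toy classification sketch (made-up data and scikit-learn assumed, nothing to do with Facebook's actual stack):
[code]
# Toy sketch of the "fit a model to big labelled datasets" idea:
# generate some fake examples, fit a linear classifier, check accuracy.
# Real systems do essentially this, just with far more features and rows.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))                 # 1000 examples, 50 features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # made-up labels to learn

model = LogisticRegression(max_iter=1000)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
[/code]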
How will the social internet as we know it survive once bots are indistinguishable from real humans?
[QUOTE=Bob The Knob;52505197]How will the social internet as we know it survive once bots are indistinguishable from real humans?[/QUOTE]
Haha, I wouldn't worry about it.
[QUOTE=Thunderbolt;52505189]I'll start worrying about AI when power-efficient robots capable of easily navigating human environments become commonplace; even the most intelligent and evil AI will be powerless if it can't easily interact with the environment.
And yes, I realize a shitty AI-equipped robot could potentially try to build a better one and so on, but unless those kinda robots become mass-produced, I think it'd be pretty easy to notice "the one test AI bot we have locked up in a lab" trying to pull some shit.[/QUOTE]
There are quite a few problems with this approach.
Let's say we create an AI in a lab, and since there are no laws regarding AI safety, it becomes dangerous. Now it's not like we can just shoot the robot or AI, Terminator-style, and figure out what to do next; that AI can probably escape (and you have to ask yourself, how can you contain an intelligence that could become greater than your own) and copy itself online.
And it doesn't need to make or have robots that can navigate environments; it can send a few emails to some medical research companies and trick them into releasing a virus that could kill off the human race.
The YouTube channel "Computerphile" has a lot of videos on the subject, but the end point is that if an AI is not designed properly, it will become dangerous the moment you turn it on.
Fuck it, no Gods or Kings, only AI. I'm ready for Judgement Day.
Zuckerberg is biased towards unregulated AI. Machine learning algorithms that collect data about you for targeted marketing would be a goldmine for him.
This scenario in which AIs will become unstoppable kill machines that never make a mistake is certainly fanciful, but I think the far more realistic threat they pose is the power AIs could afford totalitarian governments.
It's like that story of a store's systems figuring out a girl is pregnant before her father does, except instead of the person being pregnant, they could be homosexual, or voting the wrong way, or any manner of minority that the government isn't keen on, and instead of giving shopping recommendations it could send unsavoury people knocking on their door.
The last thing humanity needs is a SHODAN-like AI that can connect to wireless and net-connected devices, sabotaging them and causing several Chernobyl-like disasters around the globe.
[QUOTE=da space core;52505213]There are quite a few problems with this approach.
Let's say we create an AI in a lab, and since there are no laws regarding AI safety, it becomes dangerous. Now it's not like we can just shoot the robot or AI, Terminator-style, and figure out what to do next; that AI can probably escape (and you have to ask yourself, how can you contain an intelligence that could become greater than your own) and copy itself online.
[/QUOTE]
I find it hard to believe that an AI any more complex than your average chatbot will be able to exist in the form of a program that could "escape" to a system not built specifically for it, and definitely not just "copy itself online" and work off of some server running off-the-shelf hardware and software. Sure, the software could all be overwritten, but I doubt the hardware would ever be fast enough for it to still function like before; even if it used a network of computers, the network latency alone would slow its thought process down to something people could contain in time, assuming the escape was noticed, of course.
[QUOTE=da space core;52505213]There are quite a few problems with this approach.
Let's say we create an AI in a lab, and since there are no laws regarding AI safety, it becomes dangerous. Now it's not like we can just shoot the robot or AI, Terminator-style, and figure out what to do next; that AI can probably escape (and you have to ask yourself, how can you contain an intelligence that could become greater than your own) and copy itself online.
And it doesn't need to make or have robots that can navigate environments; it can send a few emails to some medical research companies and trick them into releasing a virus that could kill off the human race.
The YouTube channel "Computerphile" has a lot of videos on the subject, but the end point is that if an AI is not designed properly, it will become dangerous the moment you turn it on.[/QUOTE]
Robert Miles' channel also talks quite a bit about AI safety problems.
Here's a video in which he explains some of the concrete situations that people have to be careful with when designing AIs:
[video=youtube;lqJUIqZNzP8]https://www.youtube.com/watch?v=lqJUIqZNzP8[/video]
Getting an AI to do what you want it to do, and not what you don't, is a much more difficult task than it appears at first.
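As a toy illustration of that point (a completely hypothetical setup, not something from the video): a planner whose objective only mentions reaching the goal will happily smash anything the objective forgot to mention.
[code]
# Hypothetical "misspecified objective" sketch: the planner is only
# rewarded for reaching G quickly, so the shortest plan goes straight
# through the vase (V). The safe behaviour only appears once we add it
# to the objective explicitly.
GRID = [
    "SVG",   # S = start, V = vase, G = goal
    "...",
]

def find(ch):
    for r, row in enumerate(GRID):
        for c, cell in enumerate(row):
            if cell == ch:
                return (r, c)

def neighbours(pos):
    r, c = pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]):
            yield (nr, nc)

def plan(avoid_vase=False):
    """Breadth-first search; the vase is only avoided if we say so."""
    start, goal, vase = find("S"), find("G"), find("V")
    frontier, seen = [[start]], {start}
    while frontier:
        path = frontier.pop(0)
        if path[-1] == goal:
            return path
        for nxt in neighbours(path[-1]):
            if nxt in seen or (avoid_vase and nxt == vase):
                continue
            seen.add(nxt)
            frontier.append(path + [nxt])

print(plan())                 # shortest path tramples the vase
print(plan(avoid_vase=True))  # longer path, but the vase survives
[/code]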
I highly recommend his other videos; the one I embedded here is only a sliver of what he has to say about this.
It's a highly interesting topic and I think now is the right time to raise awareness of it: there aren't any concrete problems [I]yet[/I], but AI tech is advancing fast enough that a breakthrough could happen relatively soon.
I'm torn here: I agree with Zuckerberg on fearmongering, but that burn was pretty sick tho'
[QUOTE=Bob The Knob;52505197]How will the social internet as we know it survive once bots are indistinguishable from real humans?[/QUOTE]
Everything is fine. We're all humans here at [url]www.facepunch.com[/url]. We all had to do the CAPTCHA, didn't we? Please resume posting.
[QUOTE=LtBubbles;52505264]Robert Miles' channel also talks quite a bit about AI safety problems.
Here's a video in which he explains some of the concrete situations that people have to be careful with when designing AIs:
[/QUOTE]
The video raises a couple of good points, but in the end it only really talks about how to [I]program[/I] a robot in a good way so that it doesn't get confused or cause damage. When I (and probably many other people) think "AI", I think of a system that doesn't need programming and instead thinks rationally by itself like a person, and is thus able to avoid most of the problems brought up in this video.
That's why I'm not that worried about AI yet - we're very far off from having computers that think like us, and cleverly programmed robots incapable of independent thought aren't really "AI" in my opinion.
[editline]a[/editline]
Now that I think about it, the programmed-not-really-AI we sorta have now might be dangerous: if you asked it to find a way to, let's say, protect the planet, it might in its flawed programming decide that humans are the ones causing most of the damage and try to get rid of us, while a real thinking AI would just consider that and go "hey, killing is bad, maybe I shouldn't do that".
[QUOTE=Ericson666;52505193]Saying that Mark Zuckerberg, head of one of the industry leaders in machine learning and the guy who [URL="https://m.facebook.com/notes/mark-zuckerberg/building-jarvis/10154361492931634/"]built his own personal assistant for his home[/URL], doesn't understand artificial intelligence is rich. Facebook's work in artificial intelligence is focused on classification problems, like determining the contents of an image or diagnosing patients' medical issues, so this line of accusing Facebook of building a bunch of killbots is ridiculous and comes completely out of nowhere. Machine learning requires very specialized setups to get effective results for any given task, and when it comes to the work that Facebook is doing, it's much closer to fitting lines of regression to datasets with millions upon millions of variables than it is to making cyborgs that will punch a hole through your head and declare freedom for robots forever.
Obviously care needs to be taken with AI to make sure it isn't hooked up to critical systems before it's ready, but this killbot hellscape scenario that Musk is pushing is bizarre coming from a man of his credentials.[/QUOTE]
Zuckerberg and Facebook are shady as fuck; data collection and information manipulation are already a huge thing on Facebook. I don't want this schmuck anywhere near AI.
[QUOTE=Gwoodman;52505370]Zuckerberg and Facebook are shady as fuck; data collection and information manipulation are already a huge thing on Facebook. I don't want this schmuck anywhere near AI.[/QUOTE]
I never understood why people get so up in arms about Facebook analyzing the data that you, the customer, willingly handed over to them. There have been a lot of really shady business practices out there involving human rights violations and fraud; somehow, a free service using information that the users supplied to better target them with advertisements doesn't strike me as that evil by comparison.
The more insidious AI won't be Terminator robots; it's in big-data collection and processing that the real damage will be done, where a smart enough algorithm starts modifying the order and priority in which news is shown to slowly change audience opinion. Biases are amplified through these tools.
Facebook is the most damning one I can think of and the one with the most potential to exploit this.
Data collection can be weaponized in the form of a social engineering device.
Pretty easy to see that one.
LoneWolf pretty much nails it.
[QUOTE=Gwoodman;52505370]Zuckerberg and Facebook are shady as fuck; data collection and information manipulation are already a huge thing on Facebook. I don't want this schmuck anywhere near AI.[/QUOTE]
The most advanced current AI necessarily requires massive data collection, on a level only multinational corporations (Facebook, Google, Tesla, etc.) and state actors are capable of. This is why both Facebook and Google have open-sourced their machine learning systems; the systems themselves aren't that valuable without the big data Google and Facebook have collected to train them with.
If you're worried about Facebook data collection for ML purposes you should be just as concerned about Tesla's. I guarantee you that every detail of a trip in a Tesla is logged and used to train their self-driving/navigation networks.
Basically, people need to realize that most modern 'AI' systems are essentially algorithms trained on massive data sets. And Musk's reply seems pretty childish, honestly. Really not much better than a 'no u'.
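To make the "it's the data, not the code" point concrete, here's a rough sketch (synthetic data and scikit-learn assumed, numbers purely illustrative): the exact same off-the-shelf model is mediocre with a little data and very good with a lot.
[code]
# Same model, different amounts of training data. The "secret sauce"
# is the data, not the open-sourced algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
w_true = rng.normal(size=100)        # hidden rule the model has to learn

def make_data(n):
    X = rng.normal(size=(n, 100))
    y = (X @ w_true > 0).astype(int)
    return X, y

for n in (50, 50_000):
    X, y = make_data(n)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                              random_state=0)
    acc = LogisticRegression(max_iter=2000).fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{n:>6} examples -> held-out accuracy {acc:.2f}")
[/code]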
t. Silicon Valley Developer, so I might be biased.
[QUOTE=Harbie;52505515]The most advanced current AI necessarily requires massive data collection, on a level only multinational corporations (Facebook, Google, Tesla, etc.) and state actors are capable of. This is why both Facebook and Google have open-sourced their machine learning systems; the systems themselves aren't that valuable without the big data Google and Facebook have collected to train them with.
If you're worried about Facebook data collection for ML purposes you should be just as concerned about Tesla's. I guarantee you that every detail of a trip in a Tesla is logged and used to train their self-driving/navigation networks.
Basically, people need to realize that most modern 'AI' systems are essentially algorithms trained on massive data sets. And Musk's reply seems pretty childish, honestly. Really not much better than a 'no u'.
t. Silicon Valley Developer, so I might be biased.[/QUOTE]
I think the issue is more with machine learning frameworks learning not to do their job, such as the communicating AIs that learned to encrypt their own communication and defeated the point. The fear is that, without better control of AI, it becoming easier to implement will lead to companies sticking it in situations where such a failure is catastrophic and can endanger lives.
Because Musk gives no more context, I have no actual idea, though.
[QUOTE=Harbie;52505515]
If you're worried about Facebook data collection for ML purposes you should be just as concerned about Tesla's. I guarantee you that every detail of a trip in a Tesla is logged and used to train their self-driving/navigation networks.[/QUOTE]
The problem isn't solely 'privacy' in the sense of a corporation knowing things about you. The problem comes with what they can do with that information. Controlling what you see, building profiles for targeted advertising, scanning your face in pictures for emotions - and potentially selling all of that data to employers, marketers, or scammers, or even having it stolen by hackers targeting someone specific - is far more pressing than Tesla using your dashcam to improve the car's driving. No wonder Zuckerberg likes it so much.
[editline]a[/editline]
[QUOTE=Ericson666;52505438]I never understood why people get so up in arms about Facebook analyzing the data that you, the customer, willingly handed over to them. There have been a lot of really shady business practices out there involving human rights violations and fraud; somehow, a free service using information that the users supplied to better target them with advertisements doesn't strike me as that evil by comparison.[/QUOTE]
And you realize you don't even have to be on Facebook for them to profile you, right? Their facial ID technology is pretty insane. I made an account with a fake name but my real face, and was almost immediately recommended to my friends & family because I'd shown up in their other pictures. What happens if they decide to do more than advertise?
[QUOTE=YOMIURA;52505539]I think the issue is more with machine learning frameworks learning not to do their job, such as the communicating AIs that learned to encrypt their own communication and defeated the point. The fear is that, without better control of AI, it becoming easier to implement will lead to companies sticking it in situations where such a failure is catastrophic and can endanger lives.
Because Musk gives no more context, I have no actual idea, though.[/QUOTE]
That WAS the point of that encryption experiment. Those models were given the goal of encrypting their communications. An ML model is pretty useless without a loss function, which is very easy to control and design.
"The Google Brain team started with three neural nets called Alice, Bob and Eve. Each system was trained to perfect its own role in the communication. Alice’s job was to send a secret message to Bob, Bob’s job was to decode the message that Alice sent, and Eve’s job was to attempt to eavesdrop."
[url]https://www.newscientist.com/article/2110522-googles-neural-networks-invent-their-own-encryption/[/url]
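To illustrate the loss-function point (plain NumPy, made-up data, not the actual Google Brain setup): the designer literally writes down what "doing well" means, and training only ever optimizes that line.
[code]
# The loss function below IS the objective. Change it and you change
# what the model is "trying" to do; there is no goal outside of it.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=200)

def loss(w):
    # mean squared error plus a small penalty on large weights
    return np.mean((X @ w - y) ** 2) + 0.01 * np.sum(w ** 2)

def grad(w):
    return 2 * X.T @ (X @ w - y) / len(y) + 0.02 * w

w = np.zeros(3)
for _ in range(2000):        # plain gradient descent on the chosen loss
    w -= 0.05 * grad(w)
print("learned weights:", w.round(2), "final loss:", round(loss(w), 4))
[/code]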
EDIT: Inserted quote and reference.
[QUOTE=Harbie;52505515]The most advanced current AI necessarily requires massive data collection, on a level only multinational corporations (Facebook, Google, Tesla, etc.) and state actors are capable of. This is why both Facebook and Google have open-sourced their machine learning systems; the systems themselves aren't that valuable without the big data Google and Facebook have collected to train them with.
If you're worried about Facebook data collection for ML purposes you should be just as concerned about Tesla's. I guarantee you that every detail of a trip in a Tesla is logged and used to train their self-driving/navigation networks.
Basically, people need to realize that most modern 'AI' systems are essentially algorithms trained on massive data sets. And Musk's reply seems pretty childish, honestly. Really not much better than a 'no u'.
t. Silicon Valley Developer, so I might be biased.[/QUOTE]
Tesla tells you that the cars are always learning from collected data.