The Artificial Intelligence Thread - Only 50 years before Robots enslave us
46 replies
[IMG]http://i.imgur.com/IHr9AR7.png?1[/IMG]
[B]What is it:[/B]
This thread is for the discussion of Artificial Intelligence, Machine Learning, Robotics, Computer Vision, Speech Recognition and all that other technology that scares fans of the Terminator franchise. It's a place to talk about the current state of AI and how close it is to the human brain.
[B]Nvidia's Computer Vision for self driving cars:[/B]
[video=youtube;zsVsUvx8ieo]http://www.youtube.com/watch?v=zsVsUvx8ieo[/video]
[B]IBM's Watson, one of the most advanced Machine Learning systems around[/B]
[video=youtube;_Xcmh1LQB9I]http://www.youtube.com/watch?v=_Xcmh1LQB9I[/video]
[B]Asimo, one of the most advanced humanoid robots around[/B]
[video=youtube;FShZddlsjkA]http://www.youtube.com/watch?v=FShZddlsjkA[/video]
Other Relevant media:
[url]http://www.youtube.com/watch?v=ldXEuUVkDuw[/url]
[url]http://www.youtube.com/watch?v=_ozUGDd3Cik[/url]
I apologise for any lack of content. I'll add more later.
robots are cool
Cool thread. Time to brag. I'm specialising in Computational Biology, which means I take a lot of AI-related courses. Currently I'm doing pure Neuroscience at Karolinska Institutet, and I'm working on my Bachelor's thesis on 2D self-organisation in swarm robotics.
I want to be robot engineer :D
Oh, I've also written a semi-serious paper with a few other students about reducing the size of word prediction systems by building a second word prediction system on top of the existing one, which only predicts from a set of words it knows the user uses often. Turns out it works and shrinks the memory used considerably, but it also fucks up accuracy.
So yeah it turned out shit :v:
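The trick described above can be sketched roughly like so. This is a toy illustration I'm making up, not the system from the paper (the class and names are hypothetical): a compact bigram model restricted to the user's most frequent words answers first, and a full bigram table acts as the fallback.

```python
from collections import Counter, defaultdict

class CascadedPredictor:
    """Toy two-tier predictor: a compact model over the user's most
    frequent words, with a full bigram table as fallback."""

    def __init__(self, corpus, top_k=100):
        words = corpus.split()
        frequent = {w for w, _ in Counter(words).most_common(top_k)}
        self.small = defaultdict(Counter)   # bigrams whose successor is a frequent word
        self.full = defaultdict(Counter)    # complete bigram table
        for prev, nxt in zip(words, words[1:]):
            self.full[prev][nxt] += 1
            if nxt in frequent:
                self.small[prev][nxt] += 1

    def predict(self, prev):
        # Ask the compact model first; fall back to the full table.
        table = self.small.get(prev) or self.full.get(prev)
        return table.most_common(1)[0][0] if table else None
```

The memory saving in a real system would come from keeping only the small table on-device; this toy keeps both, so it only shows the lookup logic (and why accuracy suffers when the small table answers).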
[QUOTE=Swebonny;47381630]Oh, I've also written a semi-serious paper with a few other students about reducing the size of word prediction systems by building a second word prediction system on top of the existing one, which only predicts from a set of words it knows the user uses often. Turns out it works and shrinks the memory used considerably, but it also fucks up accuracy.
So yeah it turned out shit :v:[/QUOTE]
Well at least you learned something :v:
Better to have tried and failed than not to have tried at all.
Obligatory reading material:
[IMG]http://ecx.images-amazon.com/images/I/51eZddz71zL._SY344_BO1,204,203,200_.jpg[/IMG]
[url]http://www.amazon.co.uk/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/ref=sr_1_1?ie=UTF8&qid=1427147692&sr=8-1&keywords=superintelligence[/url]
As recommended by Elon Musk:
[IMG]https://dl.dropboxusercontent.com/u/10518681/Screenshots/2015-03-23_21-56-47.png[/IMG]
(and famous Facepunch user Trumple)
It's a fascinating read, would probably be a bit difficult to get into if you don't have a technical background (it uses a lot of Machine Learning terminology) but don't let that put you off. You can learn as you go along
AI is the best shit in computer science.
[QUOTE=MuffinZerg;47381979]AI is the best shit in computer science.[/QUOTE]
Have you actually taken an AI computer science course? What the courses actually teach is the theory behind AI, like computability and heuristics, which is extremely boring.
IMO, AI is only cool and interesting when you're looking at the final products applying it. The low-level theory that backs it up is quite dull.
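For anyone unfamiliar with what those courses mean by "heuristics": a heuristic is just a cheap estimate of remaining cost that steers a search. The textbook example is A* on a grid with Manhattan distance as the heuristic; here's a minimal sketch (not tied to any particular course):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid (0 = free, 1 = wall).
    Returns the number of steps in a shortest path, or None."""
    def h(p):
        # Manhattan distance: admissible heuristic on a grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    frontier = [(h(start), 0, start)]      # (f = g + h, g, position)
    best = {start: 0}
    while frontier:
        _, g, pos = heapq.heappop(frontier)
        if pos == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = pos[0] + dr, pos[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1
                if ng < best.get((r, c), float("inf")):
                    best[(r, c)] = ng
                    heapq.heappush(frontier, (ng + h((r, c)), ng, (r, c)))
    return None
```

The heuristic is what makes it "informed": the search expands cells that look closest to the goal first instead of flooding outwards blindly.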
[QUOTE=Trumple;47381908]Obligatory reading material:
[IMG]http://ecx.images-amazon.com/images/I/51eZddz71zL._SY344_BO1,204,203,200_.jpg[/IMG]
[url]http://www.amazon.co.uk/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/ref=sr_1_1?ie=UTF8&qid=1427147692&sr=8-1&keywords=superintelligence[/url]
As recommended by Elon Musk:
[IMG]https://dl.dropboxusercontent.com/u/10518681/Screenshots/2015-03-23_21-56-47.png[/IMG]
(and famous Facepunch user Trumple)
It's a fascinating read, would probably be a bit difficult to get into if you don't have a technical background (it uses a lot of Machine Learning terminology) but don't let that put you off. You can learn as you go along[/QUOTE]
Speaking of stuff recommended by Elon, read this:
[url]http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html[/url]
It's thought provoking that we could be on the verge of such a drastic change in history.
[QUOTE=Trumple;47381908]Obligatory reading material:
[IMG]http://ecx.images-amazon.com/images/I/51eZddz71zL._SY344_BO1,204,203,200_.jpg[/IMG]
[URL]http://www.amazon.co.uk/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/ref=sr_1_1?ie=UTF8&qid=1427147692&sr=8-1&keywords=superintelligence[/URL]
As recommended by Elon Musk:
[IMG]https://dl.dropboxusercontent.com/u/10518681/Screenshots/2015-03-23_21-56-47.png[/IMG]
(and famous Facepunch user Trumple)
It's a fascinating read, would probably be a bit difficult to get into if you don't have a technical background (it uses a lot of Machine Learning terminology) but don't let that put you off. You can learn as you go along[/QUOTE]
I need to get that book. It seems to be written by someone who knows his shit.
Edit: Oh neat he's even Swedish. And he did a PhD in the same area I'm heading towards.
Although I still think fear of any kind of AI is irrational. I'll see if the book will change me :v:
[QUOTE=B!N4RY;47382325]Have you actually taken an AI computer science course? What the courses actually teach is the theory behind AI, like computability and heuristics, which is extremely boring.
IMO, AI is only cool and interesting when you're looking at the final products applying it. The low-level theory that backs it up is quite dull.[/QUOTE]
Compatibility and heuristics are great. The theory can be boring sometimes, but still.
I am self learning so I probably have a different experience.
AI is an amazing development, it brought the modern world to our current phase of progress and wealth. Where we go with it next depends on our morality and greed, provided that we don't kill ourselves first, methinks.
[QUOTE=B!N4RY;47382325]Have you actually taken an AI computer science course? What the courses actually teach is the theory behind AI, like computability and heuristics, which is extremely boring.
IMO, AI is only cool and interesting when you're looking at the final products applying it. The low-level theory that backs it up is quite dull.[/QUOTE]
I actually think AI courses in CS are generally much more fun than other CS courses. The material feels much more exotic than the usual things you learn.
[QUOTE=Swebonny;47384801]I actually think AI courses in CS are generally much more fun than other CS courses. The material feels much more exotic than the usual things you learn.[/QUOTE]
Neuronets are fucking amazing when they uncover correlations you couldn't have thought of.
[QUOTE=ToastedNaan;47384785]AI is an amazing development, it brought the modern world to our current phase of progress and wealth. Where we go with it next depends on our morality and greed, provided that we don't kill ourselves first, methinks.[/QUOTE]Most intelligent people, and those who are interested in AI, tend to have a good moral grounding and are generally more interested in the technology itself rather than how they can use it to turn a profit.
[QUOTE=RoboChimp;47384842]Most intelligent people, and those who are interested in AI, tend to have a good moral grounding and are generally more interested in the technology itself rather than how they can use it to turn a profit.[/QUOTE]
I think this is quite an optimistic standpoint. If a superintelligence breakthrough is published, it wouldn't be long before the world has access to it, and the world is not just full of moral people
[editline]24th March 2015[/editline]
[QUOTE=B!N4RY;47382325]Have you actually taken an AI computer science course? What the courses actually teach is the theory behind AI, like computability and heuristics, which is extremely boring.
IMO, AI is only cool and interesting when you're looking at the final products applying it. The low-level theory that backs it up is quite dull.[/QUOTE]
I've taken a Machine Learning course and found it absolutely fascinating. There's sometimes a lot of maths, but it's no different from the maths you learn on other courses; only now you're using those fundamentals to make a bit of software that predicts the future.
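That "predicting the future" bit is less mystical than it sounds: the first model most ML courses cover is ordinary least squares, which fits a line through observed points and extrapolates. A toy sketch (the names are mine, not from any syllabus):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b: the closed-form solution
    that minimises the sum of squared errors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

def predict(model, x):
    """Extrapolate the fitted line to an unseen x."""
    a, b = model
    return a * x + b

# Fit y = 2x + 1 from four noiseless observations, then "predict the future"
model = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
future = predict(model, 10)   # 21.0
```

Everything fancier (neural nets included) is the same idea with more flexible functions and more data.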
[editline]24th March 2015[/editline]
[QUOTE=Swebonny;47382970]I need to get that book. It seems to be written by someone that knows his shit.
Edit: Oh neat he's even Swedish. And he did a PhD in the same area I'm heading towards.
Although I still think fear of any kind of AI is irrational. I'll see if the book will change me :v:[/QUOTE]
I think it may change your opinion, but that's just me :)
The way I see it is, when we reach the point of self-improving machines, where does it end? It would, overnight, become a very powerful tool. You could wake up one morning to find that all of the low-paid jobs can be done by some clever super-intelligent software. So the greedy companies take note and sack everyone, replacing them with robots that don't need to eat, sleep or rest, and unemployment shoots through the roof. As time goes on, more complex jobs that used to require a lot of talent are replaced by these flawless machines too. It's this transition phase that would ruin us. Sure, after many years, society could catch up and adapt to a world where everything is performed for us. But during that transition phase, we'll go through some strange limbo where we still need money to live, while the corporations no longer need humans to make what they sell - i.e. the corporations get free labour, but still charge us for the products.
We already have the robotics side down; we just need the brains. Sooner or later, the brain breakthrough will happen. Then what is the point of employing you or me when a flawless robot can do it 100x faster?
[editline]24th March 2015[/editline]
[QUOTE=OvB;47382819]Speaking of stuff recommended by Elon, read this:
[url]http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html[/url]
It's thought provoking that we could be on the verge of such a drastic change in history.[/QUOTE]
That's a very good read, thanks
Any idea if books by Ray Kurzweil (e.g. "How to Create a Mind" and "The Singularity is Near") are good?
[QUOTE=Trumple;47384852]
The way I see it is, when we reach the point of self-improving machines, where does it end? It would, overnight, become a very powerful tool. You could wake up one morning to find that all of the low-paid jobs can be done by some clever super-intelligent software. So the greedy companies take note and sack everyone, replacing them with robots that don't need to eat, sleep or rest, and unemployment shoots through the roof. As time goes on, more complex jobs that used to require a lot of talent are replaced by these flawless machines too. It's this transition phase that would ruin us. Sure, after many years, society could catch up and adapt to a world where everything is performed for us. But during that transition phase, we'll go through some strange limbo where we still need money to live, while the corporations no longer need humans to make what they sell - i.e. the corporations get free labour, but still charge us for the products.
We already have the robotics side down; we just need the brains. Sooner or later, the brain breakthrough will happen. Then what is the point of employing you or me when a flawless robot can do it 100x faster?[/QUOTE]
As you said, hopefully we'll be able to adjust to not having to perform the menial jobs that intelligent robots could do for us, and it'll free us up to pursue all kinds of other interests. I believe we'll definitely need safeguards against a potential AI threat, but it is possible we could also harness the exponential growth of the AI's intelligence to boost our own knowledge faster than ever before.
[QUOTE=Trumple;47384852]I think this is quite an optimistic standpoint. If a superintelligence breakthrough is published, it wouldn't be long before the world has access to it, and the world is not just full of moral people
[/QUOTE]
Yes, but the people who build it will probably foresee it being abused and take that into account. I assume the first human-level AIs will require hardware that will only be available to a select few due to cost; I don't think it will be a product to begin with. Military abuse is possible, but I assume the people involved with the AI would be consulted.
My other point is that with decades of development and observation, the researchers will have a much better idea of how the AIs act in certain situations; it's not as though it's going to go from the intelligence of a rat to a human overnight. So it's not going to be like opening Pandora's box.
[QUOTE=Trumple;47384852]
I think it may change your opinion, but that's just me :)
The way I see it is, when we reach the point of self-improving machines, where does it end? It would, overnight, become a very powerful tool. You could wake up one morning to find that all of the low-paid jobs can be done by some clever super-intelligent software. So the greedy companies take note and sack everyone, replacing them with robots that don't need to eat, sleep or rest, and unemployment shoots through the roof. As time goes on, more complex jobs that used to require a lot of talent are replaced by these flawless machines too. It's this transition phase that would ruin us. Sure, after many years, society could catch up and adapt to a world where everything is performed for us. But during that transition phase, we'll go through some strange limbo where we still need money to live, while the corporations no longer need humans to make what they sell - i.e. the corporations get free labour, but still charge us for the products.
We already have the robotics side down; we just need the brains. Sooner or later, the brain breakthrough will happen. Then what is the point of employing you or me when a flawless robot can do it 100x faster?
[/QUOTE]
Those are indeed very valid concerns in the event an AI like that gets made and exploited to the max. The book has been ordered. Should be here on Monday or Friday if I'm lucky.
[QUOTE=dnqboy;47384912]As you said, hopefully we'll be able to adjust to not having to perform the menial jobs that intelligent robots could do for us, and it'll free us up to pursue all kinds of other interests. I believe we'll definitely need safeguards against a potential AI threat, but it is possible we could also harness the exponential growth of the AI's intelligence to boost our own knowledge faster than ever before.[/QUOTE]
Yeah, that's an interesting possibility - learning from something we've created. The link OvB posted touches on that briefly: it might be like us trying to teach a chimp, or even an ant, our most basic way of thinking. We can maybe train a chimp to do certain things, but it can't grasp even a fraction of what we understand. Even less so for an ant; its perspective is too limited. Perhaps trying to learn from a superintelligent machine would be somewhat like trying to teach an ant addition!
[editline]24th March 2015[/editline]
[QUOTE=RoboChimp;47384971]Yes, but the people who build it will probably foresee it being abused and take that into account. I assume the first human-level AIs will require hardware that will only be available to a select few due to cost; I don't think it will be a product to begin with. Military abuse is possible, but I assume the people involved with the AI would be consulted.
My other point is that with decades of development and observation, the researchers will have a much better idea of how the AIs act in certain situations; it's not as though it's going to go from the intelligence of a rat to a human overnight. So it's not going to be like opening Pandora's box.[/QUOTE]
Ehh, I'm not so sure. I'd love to think that, but history would disagree. Look at the Black-Scholes model, developed in 1973: a way of pricing things called "options" in the finance world. What options are isn't really relevant to this discussion; what is relevant is how the model was abused by banks and hedge fund managers. People saw this new tool, saw the great possibilities it could bring, and ran with it. People made a lot of money, and firms that exploited the new tool performed better than ever. People started to get cocky, and the model was abused (or rather, simply used with too much confidence in its abilities) - that's when things went wrong. The model isn't really suited to predicting extreme market fluctuations; rather, it handles small fluctuations, the little bumps you see on a day-to-day basis for a stock.
[IMG]https://dl.dropboxusercontent.com/u/10518681/Screenshots/2015-03-24_12-46-47.png[/IMG]
Green - bits where Black-Scholes might work
Red - bits where it might not!
To cut a long story short, the Russian financial crisis happened in the late 90s, and since the Black-Scholes model didn't cope well with such extreme market fluctuations, the aforementioned firms that abused the model lost billions. They were bailed out etc. and people learned their lesson, but not until after the damage was done.
I wouldn't underestimate human greed!
Here's a link to an article explaining the above story in more detail: [URL]http://www.bbc.co.uk/news/magazine-17866646[/URL]
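For the curious, the closed-form call price the model gives is short enough to sketch in a few lines (standard parameter names: spot S, strike K, maturity T in years, risk-free rate r, volatility sigma). The catch the story above illustrates is that the constant-sigma assumption quietly bakes in "small, well-behaved fluctuations":

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# An at-the-money one-year call at 5% rates and 20% volatility:
price = bs_call(100, 100, 1.0, 0.05, 0.2)   # ~10.45
```

Note that sigma is a single constant: the formula assumes the stock jiggles with one fixed, known volatility forever, which is exactly what blows up in a crisis.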
I don't know what your views on Kurzweil are, but whether you think he's a visionary or a deranged old man you should still give this a read. It's really interesting and it tries to dissect how a human brain works and how to apply that into designing an artificial intelligence.
[IMG]http://d.gr-assets.com/books/1355117137l/13589153.jpg[/IMG]
Personally I'm really interested in computer vision, but it's so complicated and would require me to quit work to go to uni, and at the end of the day I probably still wouldn't be smart enough to actually contribute anything to the field. I may just have to sit back, relax and watch it evolve from the sidelines.
[QUOTE=Buck.;47385555]Personally I'm really interested in computer vision, but it's so complicated and would require me to quit work to go to uni, and at the end of the day I probably still wouldn't be smart enough to actually contribute anything to the field. I may just have to sit back, relax and watch it evolve from the sidelines.[/QUOTE]
That's how I feel, love computer vision, but I'm not smart enough to contribute anything to it.
[QUOTE=MuffinZerg;47384775]Compatibility and heuristics are great. The theory can be boring sometimes, but still.
I am self learning so I probably have a different experience.[/QUOTE]
I really don't think you know what I'm talking about.
[QUOTE=Swebonny;47384801]I actually think AI courses in CS is generally much more fun than other CS courses. The stuff feels much more exotic than the usual things you learn.[/QUOTE]
There are quite a lot of people who enjoy theoretical computer science more than the applied stuff. If you're one of those people, I wouldn't blame you for feeling that way.
I'm pretty sure most of the people visiting this type of thread have already seen it, but since the issue of us learning from a superintelligence has been brought up a few times already, I recommend that those of you who haven't seen Automata yet do what you can to see it (it's even on Netflix!) - it briefly covers such a scenario.
[url]http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html[/url]
This is a pretty interesting read; it's what got me into this whole AI thing in the first place.
Presented my first artificial neural network lab today. Slides, if anyone's interested: [URL]https://docs.google.com/presentation/d/1SDzH8zKVGfWUo7kY5p0pWa1ETjib--qyqx7PALTL0E4/edit?pli=1#slide=id.g8fa5275b4_0126[/URL]. It covers two of the most common neural networks, plus some results from letting them classify sets of data and approximate a 3D Gaussian function.
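In a similar spirit (though 1D rather than 3D, and not necessarily the networks from the slides), here's roughly what "approximating a Gaussian with a neural network" looks like: a tiny one-hidden-layer tanh net trained with plain full-batch gradient descent to fit exp(-x^2). The layer size, learning rate, and iteration count are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.exp(-X ** 2)                        # target: a 1D Gaussian bump

# One hidden layer of 16 tanh units; output is a linear combination of them.
W1 = rng.normal(0.0, 1.0, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.1, (16, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)               # forward pass, hidden activations
    out = H @ W2 + b2
    err = out - y                          # gradient of half-MSE w.r.t. out (up to 1/N)
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)     # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
# mse should end up well below the ~0.21 you'd get from always predicting zero
```

The function-approximation task in the lab is the same thing one dimension up: two inputs instead of one, same forward/backward machinery.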
[video=youtube_share;40riCqvRoMs]http://www.youtube.com/watch?v=40riCqvRoMs[/video]
Watch at 1.5x speed; it's quite long if you're already familiar with Neural Nets.
Basically, they gathered up 15 million images, cleaned and labelled them, and fed them to a Neural Network.
The data can be obtained here: [url]http://image-net.org/[/url]
The book still isn't here - it hasn't even shipped. I also ordered it with an Interstellar production book, so I guess that's why it's taking time.