• TED Talks: How we're teaching computers to understand pictures
[video=youtube;40riCqvRoMs]http://www.youtube.com/watch?v=40riCqvRoMs[/video] A kind of "progress report" from current researchers studying machine learning, in particular visual recognition. The video summarizes the problem of visual recognition and explains why it isn't as simple to implement artificially as one might think, followed by examples of the speakers' own progress in the field. To date, they estimate that their visual recognition algorithm can identify objects within pictures at the level of a 3-year-old.
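One way to see why the problem is harder than it looks: the "obvious" approach of comparing raw pixels doesn't capture what's actually *in* a picture. This is a hypothetical toy sketch (not the speakers' actual system) using made-up 3x3 grayscale grids:

```python
# Toy sketch (hypothetical, not the talk's real algorithm): raw pixel distance
# is a poor proxy for "what object is in the picture", which is one reason
# naive approaches to visual recognition fail.

def pixel_distance(a, b):
    """Sum of absolute pixel differences between two flattened images."""
    return sum(abs(x - y) for x, y in zip(a, b))

# A made-up 3x3 "cat" pattern, and the same pattern shifted one pixel right.
cat       = [0, 9, 0,
             9, 9, 9,
             0, 9, 0]
cat_shift = [0, 0, 9,
             9, 9, 9,
             0, 0, 9]
# A completely different pattern: a plain vertical bar.
bar       = [0, 9, 0,
             0, 9, 0,
             0, 9, 0]

# To a human, cat and cat_shift show the "same" object, but pixelwise the
# unrelated bar is actually *closer* to the cat than its own shifted copy.
print(pixel_distance(cat, cat_shift))  # -> 36
print(pixel_distance(cat, bar))        # -> 18
```

The shifted copy of the same object scores as *more* different than an unrelated pattern, which is exactly the kind of failure that pushes researchers toward learned features instead of raw pixel comparisons.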
Fascinating area of research. It's gonna be decades before we crack it, but neural networks and better heuristics are getting us there.
I feel like in order to make any AI to a standard that people can actually interact or work with, it would have to fully simulate every area of the human brain. This one can do very basic image recognition, but on its own it is useless; it needs speech to actually produce output. Speech output in turn requires access to an internal memory, a repertoire of words linked to internal prototypes of "things", in order to choose a correct word. To even mimic human speech, it needs yet another network that can generate appropriate grammar and get the syntax correct. Even at that level it still wouldn't sound right, because humans are highly emotional, use metaphors, and don't speak solely literally, so yet another network would be needed to mimic emotionality, and that one would also need to tie into memory to understand the context of the situation. Further, it would then need an entire frontal system to keep itself in check. It will take a long time and a lot of research to get a system that mimics humans past the level of a toddler, I think.
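The "stack of networks" idea above can be sketched as a pipeline where each stage stands in for a whole subsystem. Everything here is hypothetical — the module names, the tiny lookup tables, and the image labels are all made up for illustration, not a real model:

```python
# Hypothetical sketch of chaining subsystems: vision -> memory/lexicon ->
# grammar. Each function is a trivial stand-in for an entire network.

def recognize(image_id):
    """Stand-in vision module: maps an image to an internal concept."""
    concepts = {"img_001": "dog", "img_002": "ball"}  # made-up labels
    return concepts.get(image_id, "unknown")

def lexicon(concept):
    """Stand-in memory module: maps a concept to candidate words."""
    words = {"dog": ["dog", "puppy"], "ball": ["ball"]}
    return words.get(concept, ["thing"])

def grammar(candidates):
    """Stand-in language module: wraps a chosen word in a sentence frame."""
    return "I see a " + candidates[0] + "."

# Chaining the stages, as the post describes: recognition alone is useless
# until the later modules turn its output into speech-like text.
print(grammar(lexicon(recognize("img_001"))))  # -> "I see a dog."
```

Of course, the real point of the post is that each stand-in here would itself be a huge research problem, and the emotional/contextual and "frontal" layers don't even appear in this sketch.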
Come join us in this thread if this stuff fascinates you: [url]http://facepunch.com/showthread.php?t=1457207[/url]