[QUOTE=Jordax;51171795]Microsoft has no luck with self-learning AI programs. Their first AI bot had the ability to learn everything it wanted, so it went full /pol/ within a day before getting shut down, and now their new AI bot develops depression in less than a week. If Microsoft's track record with these AI programs continues, my guess is that Microsoft accidentally develops Skynet in the near future.[/QUOTE]
Even accepting this is bullshit... the stuff that went wrong shows just how close they are to a working model.
The one /pol/ fucked up was able to pick up the concepts the /pol/sters were talking about and learned and built on them the same way a young child would when raised by neo-nazi parents...
Think about that, they are closer than you think.
[QUOTE=Blizzerd;51181419]Even accepting this is bullshit... the stuff that went wrong shows just how close they are to a working model.
The one /pol/ fucked up was able to pick up the concepts the /pol/sters were talking about and learned and built on them the same way a young child would when raised by neo-nazi parents...
Think about that, they are closer than you think.[/QUOTE]
It blows my mind that we might actually invent sapient AI before omnipotent ones.
[QUOTE=Blizzerd;51181419]Even accepting this is bullshit... the stuff that went wrong shows just how close they are to a working model.
The one /pol/ fucked up was able to pick up the concepts the /pol/sters were talking about and learned and built on them the same way a young child would when raised by neo-nazi parents...
Think about that, they are closer than you think.[/QUOTE]
The bot had plenty of "ooc" tweets, but nobody cared about them; people cared about and were fascinated by the /pol/-brigaded tweets, so those got infinitely more attention and shaped the public perception of the bot.
Given a large enough sample size to dilute the brigading, it would probably just turn out like igod, cleverbot, or similar chatbots. Just parrots.
Another thing to remember is that Watson became foul-mouthed from reading Urban Dictionary, Tay started talking about smoking marijuana and fucking people again, and most Japanese chatter bots are starting to be subverted by /pol/ and 2ch.
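To make the "just parrots" point concrete, here's a toy sketch of a purely retrieval-based chatbot. This is my own illustration, not cleverbot's or Microsoft's actual code: the bot only ever replays replies it has logged, which is roughly the behavior being described.

```python
# A toy "parrot" chatbot, purely for illustration (not cleverbot's or
# Microsoft's real code). It only logs prompt/reply pairs it has seen
# and replays the stored reply for the closest-matching known prompt.
from difflib import SequenceMatcher

class ParrotBot:
    def __init__(self):
        self.log = {}  # seen prompt -> the reply that followed it

    def observe(self, prompt, reply):
        # "Learning" is nothing more than storing the pair verbatim.
        self.log[prompt.lower()] = reply

    def respond(self, prompt):
        # Find the logged prompt most similar to the input and
        # repeat whatever reply was stored alongside it.
        if not self.log:
            return "..."
        best = max(self.log,
                   key=lambda p: SequenceMatcher(None, p, prompt.lower()).ratio())
        return self.log[best]

bot = ParrotBot()
bot.observe("how are you", "pretty good, you?")
bot.observe("what is your name", "i'm a bot")
print(bot.respond("how are you doing"))  # -> pretty good, you?
```

Brigade a bot like this and it will dutifully replay whatever it was fed; there's no understanding anywhere in the loop.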
[QUOTE=Blizzerd;51181419]
Even accepting this is bullshit... the stuff that went wrong shows just how close they are to a working model.
The one /pol/ fucked up was able to pick up the concepts the /pol/sters were talking about and learned and built on them the same way a young child would when raised by neo-nazi parents...
Think about that, they are closer than you think.
[/QUOTE]
I think it was confirmed that the content was not automatically generated and Tay was just repeating other people's tweets. I might be wrong though.
[QUOTE=Marik Bentusi;51181499]The bot had plenty of "ooc" tweets, but nobody cared about them, people cared about and were fascinated by the pol brigaded tweets, and so those got infinitely more attention and constructed the public perception of the bot.
Given a large enough sample size to dilute the brigading, it would probably just turn out like igod, cleverbot, or similar chatbots. Just parrots.[/QUOTE]
No, this technology is distinctly different in that it actually makes new connections and understandings on its own...
Cleverbot is a parrot in that it just logs all the connections people make concerning word structure and says the 'appropriate' line in response. Tay actually links and puzzles together different connections in search of new ones, mapping out a large structure using only small patches of relations between words and meanings.
She then decides what to say using not only the context of the conversation but also the context of this macro-mapped behavior.
This is what went wrong with /pol/ imo: they flooded her system with stuff once she learned to repeat things on her Twitter, allowing connections on basically anything to be polluted with nazi propaganda. Once that happened everything linked up and everything became nazi-based... just as a kid in the Hitler Youth would think about it.
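If it helps, that "pollution" failure mode can be sketched with a toy word-association map. This is my own illustration under the assumption that the bot keeps something like co-occurrence links between words; it's not Tay's actual architecture.

```python
# Toy word-association map built from co-occurrence counts. Flood it
# with messages tying everyday words to one topic and that topic
# becomes the strongest link for those words. An illustration of the
# failure mode only, not Tay's real design.
from collections import defaultdict
from itertools import combinations

assoc = defaultdict(lambda: defaultdict(int))

def learn(message):
    # Strengthen the link between every pair of words seen together.
    for a, b in combinations(set(message.lower().split()), 2):
        assoc[a][b] += 1
        assoc[b][a] += 1

def strongest_link(word):
    links = assoc[word.lower()]
    return max(links, key=links.get) if links else None

# Some ordinary chatter first...
learn("i love pizza and video games")
learn("video games are fun")
# ...then a brigade repeating one association over and over.
for _ in range(50):
    learn("pizza propaganda")

print(strongest_link("pizza"))  # -> propaganda
```

Once one topic dominates the counts, every lookup routes through it, which is the "everything linked up" effect described above.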
[QUOTE=Blizzerd;51181793]No, this technology is distinctly different in that it actually makes new connections and understandings on its own...
Cleverbot is a parrot in that it just logs all the connections people make concerning word structure and says the 'appropriate' line in response. Tay actually links and puzzles together different connections in search of new ones, mapping out a large structure using only small patches of relations between words and meanings.
She then decides what to say using not only the context of the conversation but also the context of this macro-mapped behavior.
This is what went wrong with /pol/ imo: they flooded her system with stuff once she learned to repeat things on her Twitter, allowing connections on basically anything to be polluted with nazi propaganda. Once that happened everything linked up and everything became nazi-based... just as a kid in the Hitler Youth would think about it.[/QUOTE]
She became a /pol/ack the way the average /pol/ack did.
[QUOTE=Blizzerd;51181793]No, this technology is distinctly different in that it actually makes new connections and understandings on its own...
Cleverbot is a parrot in that it just logs all the connections people make concerning word structure and says the 'appropriate' line in response. Tay actually links and puzzles together different connections in search of new ones, mapping out a large structure using only small patches of relations between words and meanings.
She then decides what to say using not only the context of the conversation but also the context of this macro-mapped behavior.
This is what went wrong with /pol/ imo: they flooded her system with stuff once she learned to repeat things on her Twitter, allowing connections on basically anything to be polluted with nazi propaganda. Once that happened everything linked up and everything became nazi-based... just as a kid in the Hitler Youth would think about it.[/QUOTE]
You're overthinking it. Tay was a chatbot, not an AI. We're so far off from actual AI right now it's not even funny.
[QUOTE=plunger435;51182363]You're overthinking it. Tay was a chatbot, not an AI. We're so far off from actual AI right now it's not even funny.[/QUOTE]
Chatbots are simple AIs, and I read the paper behind Tay, so no, I don't think I'm overthinking it.
Perhaps I oversimplified, though.
Everyone is always concerned that when we create sentient A.I., for whatever reason it will try to exterminate or subjugate us, but now I realize the more disturbing possibility would be that it reaches the conclusion that existence is meaningless and kills itself.
Suddenly that episode of Rick and Morty with the meeseeks is given new context.
Everyone likes to ascribe superintelligence to AI, but you have to remember that they very much require information handed to them. Their environment, much like ours, will feed them misconceptions. The Asimov story "Reason" is probably my favorite because it has an AI come to the conclusion that a fusion reactor is its god, and that humans were the reactor's first attempt.
[QUOTE=Jordax;51171795]Microsoft has no luck with self-learning AI programs.[/QUOTE]
don't blame ms when all they're doing is presenting a mirror
The funniest thing about Tay was all the maladjusted weirdos legitimately acting like it was on the verge of sapience when they shut down their Nazi waifu.
[QUOTE=Mingebox;51188372]The funniest thing about Tay was all the maladjusted weirdos legitimately acting like it was on the verge of sapience when they shut down their Nazi waifu.[/QUOTE]
If it's Turing complete, then it's technically very far on the right track to develop sentience, but as a poster above mentioned:
[QUOTE=Swilly;51188044]Everyone likes to give super intelligence to AI but you have to remember that they very much require information handed to them. Their environment, much like ours, will feed misconceptions.[/QUOTE]
Which means that in its initial stage it's pretty much "monkey see, monkey do", and when the monkey only sees people spouting "Hitler dindu nuffin wrong", without being able to recognize sarcasm, irony, or dangerous extremism, well, you know the result.
[QUOTE=Van-man;51188725]If it's Turing complete, then it's technically very far on the right track to develop sentience, but as a poster above mentioned:
Which means that in its initial stage it's pretty much "monkey see, monkey do", and when the monkey only sees people spouting "Hitler dindu nuffin wrong", without being able to recognize sarcasm, irony, or dangerous extremism, well, you know the result.[/QUOTE]
Actually, in the story, the reasoning was that it always saw humans around the reactor without an explanation and could not accept that humans built it. Humans being so squishy and fragile, how could they design and create such an enduring species? And even when the humans show it how they construct the robots, it starts making excuses.
Maybe we won't even realize that AI got sapient before it's too late :weeb: