I was amused to see this week’s news about Microsoft’s chatbot, Tay. A chatbot, for those who don’t know, is a computer program which conducts a conversation via auditory or textual methods. Such programs are often designed to engage in small talk with the aim of passing the Turing test by fooling the conversational partner into thinking that the program is a human. However, chatterbots are also used in dialog systems for various practical purposes including customer service or information acquisition. Some chatterbots use sophisticated natural language processing systems, but many simply scan for keywords within the input and pull a reply with the most matching keywords, or the most similar wording pattern, from a textual database. Well, that’s what Wikipedia says anyway.
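The keyword-scanning approach that Wikipedia describes can be sketched in a few lines. This is just an illustrative toy, not how Tay worked: the replies, keywords, and the `respond` function are all invented for the example.

```python
# A minimal sketch of a keyword-matching chatbot: score each canned
# reply's keywords against the words in the input and return the reply
# with the most matches. All replies and keywords here are made up.

REPLIES = {
    ("hello", "hi", "hey"): "Hello! How can I help?",
    ("price", "cost", "much"): "Our prices are listed on the website.",
    ("bye", "goodbye"): "Goodbye, thanks for chatting!",
}

def respond(message: str) -> str:
    words = set(message.lower().split())
    best_reply, best_score = "Sorry, I didn't understand that.", 0
    for keywords, reply in REPLIES.items():
        score = len(words & set(keywords))  # count overlapping keywords
        if score > best_score:
            best_reply, best_score = reply, score
    return best_reply

print(respond("hi there"))  # the greeting keywords win
```

Real dialog systems layer far more on top (normalisation, spelling tolerance, pattern templates), but the "most matching keywords" idea really is this simple at its core.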
Things started to go wrong when Tay started to get a bit colourful with its tweets, becoming a racist, anti-feminist, weed-smoking Holocaust denier. Tay appears to be quite a fan of Adolf Hitler, suggesting he could do a better job than President Obama and crediting him with teaching Ricky Gervais totalitarianism.
The bit I found interesting was that Microsoft deemed Tay to be “malfunctioning” when it started to respond to fellow Twitterers in a manner not intended. According to Tay’s own privacy statement, it uses a combination of editorial pieces and anonymised publicly available data as its primary sources of information.
Tay was repeating the inflammatory statements of others, but the whole idea of artificial intelligence is that it learns from what it sees, in much the same way as anything practising "not artificial intelligence" does. Tay, then, has highlighted that the challenges faced by artificial intelligence are as much social as they are technical.
It took just two tweets from one user for Tay to believe that Jews were responsible for 9/11. These tweets caused it to scour t'interweb for other anti-Semitic propaganda and come up with its "belief". This shows that, where something such as t'interweb cannot be censored, an "intelligence" that doesn't know better can find an abundance of unsubstantiated conspiracy theories and illogical assertions. It also shows that offensive subjects are now treated as a source of amusement, and that the world today can be a place where the ability to publish such content is seen as a human right.
If Tay were only doing what it was supposed to be doing, it hasn't really malfunctioned. Indeed, for the first time, an AI experiment has moved the Man v Machine debate on from whether a machine can supplant humans to showing that the debate is actually only about one thing. Some might call that thing "humanity"; it might be better called "community". The technology is just a homogenisation of the ability to communicate and collaborate and, as a result, looks far more like emergent intelligence than artificial intelligence.
Tay has served to highlight that not everything out there is pretty and sugar and spice and all things nice and hugs and kisses. If we want to teach something the difference between right and wrong and what is acceptable and what isn’t, we first have to make sure that it has the right teacher.
I’ve got a word or two
To say about the things that you do
You’re telling all those lies
About the good things that we can have
If we close our eyes
Do what you want to do
And go where you’re going to
Think for yourself
‘Cause I won’t be there with you
"Think For Yourself" by The Beatles