Saturday 18 September 2010

Milo: Intelligent AI or Scripted Response Monstrosity?

______________________________________




I saw the tech demo (from TEDGlobal) for this and my first impression of it was... amazement. I found Dmitri's interactions with Milo as fascinating as they were touching, but there was something that bothered me about Milo's responses.
Dmitri was told to say a few words of encouragement to Milo, and whilst Milo did respond and Dmitri was rewarded for his effort, there was no... I can't explain it; there was no affirmation from Milo's side.

It almost seemed cold, almost as if Milo were responding to Dmitri's tone of voice rather than to the words he used and the things he said. Shouldn't that be a big issue with something you're trying to classify as intelligent AI?

It got me thinking; I was incredibly caught up in the whimsy of it all! I was thinking about the interactions that could be possible for all different kinds of people, and how Milo could perhaps even help people who are... lonely.
You're trying to sell a two-way interaction, and that is something a lot of children lack and crave – but would they be satisfied if it were that hollow?

I began to think of the interactions I could have with Milo, and then it occurred to me: ultimately, it would be a hollow and empty experience for me.
I had this idea in my head that I would be able to sit down and tell Milo the story of the time I jumped out of my car in the middle of a country lane to quickly pick some wild flowers that were growing over by an embankment:

They were beautiful and I had driven by them for weeks; each time they caught my eye, until one day I was driving by and thought, "Bugger it!" I left the engine running, jumped out of the car and made a grab for the flowers. Little did I realise, though, that they were very firmly rooted in the ground, and because I pulled too hard to get them out, I strained my back, fell over and managed to get up just in time to see a police car driving down the road in the opposite direction.

So I quickly snatched them up, threw them onto the passenger seat and set off, trying to look natural as I passed the policeman, who must have thought I was mad. Luckily, he didn't stop me, although he gave me a sly gaze and shook his head as I drove past.
The flowers eventually died, but the memory of just stopping, out of the blue, to pick them has stuck with me as a happy one.

And therein lies the problem; I could very well tell Milo this story, but what would he learn about me and my character? If the only thing he can process is words, then how is he going to understand the beauty I saw in those flowers? Or the pain I felt when I strained my back? Or even how lucky I felt when the police didn't arrest me?
I could tell Milo this story, but what would be the point if the most he gets out of it is “Flowers. Beautiful. Police. Back”?

Why draw Milo a picture if the only thing he can appreciate is the colour? Why reassure Milo that everything will be fine when all he is designed to do is respond to a reassuring tone of voice?

It gets ethical, I suppose, as well as philosophical; but as advanced as his animation and voice recognition are, he is still a coded computer character designed to respond with a script. And that is sad, because he is being sold as an intelligent AI with the ability to learn.

Such a shame. I really do enjoy Mr Molyneux's games, but it almost seems as if he is setting himself up for failure.
