Turing tests III: The notion of NO intelligence

One commenter on the previous post raised an interesting point, identifying our financial markets as post-human and their algorithms and agents as essentially opaque black-box systems, and predicting that these would be the origin of artificial intelligence.

First, I think that the origin of AI is an interesting question. If we believe that intelligence will emerge from a set of algorithms, it seems to matter which algorithms they are, since different algorithms are under different selective pressures. We could imagine that the algorithms selected for in financial markets are different from those selected for in Turing tests, in driving cars or in playing Jeopardy. Intelligence stemming from DRM would be an interesting premise for a sci-fi short story.

Second, I think the black-box nature of some algorithms also makes it interesting to think about what we ascribe zero intelligence to. If we can predict exactly what results an interaction with a system will lead to, if we can predict the system completely, then we ascribe no intelligence to it. A clock is not intelligent, and we can reproduce and predict its behavior easily. But as soon as we leave the wholly predictable we end up with a system where the question of intelligence can be raised.
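To make that concrete, here is a minimal toy sketch, purely my own illustration of the point rather than anything from the post or the commenter. It assumes we equate the room for ascribing intelligence with how often our best model of a system fails to predict it; the clock and the black-box agent, and all their values, are invented.

```python
# Toy formalization (my own illustration): if an observer's model
# predicts a system's behavior perfectly, we ascribe the system zero
# "intelligence"; the less predictable it is, the more room there is
# for the question of intelligence to arise at all.

def ascribed_intelligence(predicted, observed):
    """Return the fraction of observations the model failed to predict.

    0.0 means the system is fully predictable (a clock), so we ascribe
    no intelligence to it; anything above 0.0 leaves the question open.
    """
    misses = sum(p != o for p, o in zip(predicted, observed))
    return misses / len(observed)

# A clock: our model predicts every tick, so ascribed intelligence is 0.
clock_model = ["tick", "tock"] * 4
clock_obs = ["tick", "tock"] * 4
print(ascribed_intelligence(clock_model, clock_obs))  # 0.0

# A black-box agent: our model misses sometimes, and the question opens.
agent_model = ["buy", "buy", "sell", "hold"]
agent_obs = ["buy", "sell", "sell", "buy"]
print(ascribed_intelligence(agent_model, agent_obs))  # 0.5
```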

This says something about why we as human beings detect intelligence. We do it in order to find ways and methods to predict systems, and intelligence simply provides us with a new set of tools for doing that. If we think a system is intelligent we shift to the conceptual structures of wants, needs, rationality, feelings, logic et cetera, and we have developed a pretty good predictive apparatus around those.

And that may be the evolutionary explanation for why we are so well equipped to detect intelligence: once we do, we are better able to predict the systems and to interact with them to our, sometimes mutual, benefit.

Now, I find this interesting, because you could imagine a weird world in which people failed the Turing test all the time, i.e. identified systems as intelligent that would not hold up under any more in-depth examination of intelligence. Why should we have the ability to identify intelligent systems, one may ask? The answer is probably that it carries a strong selective advantage: it helps us predict what a system will do.

That which you can predict and explain fully, however, has no intelligence whatsoever. This needs more thought.

Cleverbot and being somewhat human…or a unicorn

Cleverbot is an interesting project that just recently crossed my radar screen. I like the online interface, and it has also brought an interesting instance of the Turing test to my attention. In this case it is rapid-fire Turing tests, where the participants rate “how human” something is on a scale from 0 to 100 percent. Cleverbot, in one of its incarnations, got to 42.1% human.
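If one wanted to make that scoring concrete, a minimal sketch might look like the following. This is purely my own hypothetical reconstruction of the scheme described above; the per-judge ratings are invented, and only the 42.1% figure comes from the actual event.

```python
# Hypothetical sketch of the rapid-fire scoring scheme: each judge
# rates "how human" a participant seems on a 0-100 scale, and the
# participant's overall score is the mean of those ratings.

def humanness_score(ratings):
    """Mean of per-judge 0-100 'how human' ratings."""
    return sum(ratings) / len(ratings)

judge_ratings = [30, 55, 48, 35, 42.5]  # invented example values
print(f"{humanness_score(judge_ratings):.1f}% human")  # 42.1% human
```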

This highlights a weakness of the Turing test, I think. The notion that something can be somewhat human is clearly wrong, a misuse of the concept of humanity. Or is it? Worrying examples from human history indicate that we think this way about enemies, other tribes and other races to an extent that is awful. We dehumanize that which we think we need to destroy, or that which threatens or scares us. But would any racist say that another race is just 42.1% human? No, probably not. It doesn’t work that way. We would either accept them as human or define them as something else. Being human might actually be far more binary than we think.

In one way, then, Cleverbot is a great example of the limitations of the Turing test. But it has also produced some great humor. Witness this fantastic video of Cleverbot talking to another instance of itself, to hear how language sounds when it, truly, runs on empty.

My favorite comment: “I am not a robot. I am a unicorn”.

An interesting aside: their discussion in parts sounds very much like an old married couple arguing. Maybe that is also language running on empty…

But I may be a “meanie”.


Turing tests II: Wittgenstein and Voigt-Kampff

Could a machine think? — Could it be in pain? — Well, is the human body to be called such a machine? It surely comes as close as possible to being such a machine.

But a machine surely cannot think! — Is that an empirical statement? No. We only say of a human being and what is like one that it thinks. We also say it of dolls and no doubt of spirits too. Look at the word “to think” as a tool.

Ludwig Wittgenstein, Philosophical Investigations, §§ 359-360

Wittgenstein’s note comes to mind as I continue thinking about Turing tests. I think this quote has been read as saying that there can be no test for intelligence or thinking, but it is really not saying that at all. It merely says that we apply the concept to things that are like us. And the interesting part is that there are so many qualities that could be used to assess that likeness. At what point will we simply give up and call something human?

Enter the Voigt-Kampff machine, the monster Turing test in the movie Blade Runner. In that test Deckard, the bounty hunter, has to use extreme equipment to detect the tell-tale lack of empathic response that reveals someone as a replicant. There is a legitimate ethical question here about how much we are allowed to test an entity to determine that it is not human, and about how arbitrary those tests can be. The Turing test easily degenerates into a shibboleth. Here Deckard uses the Voigt-Kampff machine to determine whether Rachael is an artificial or a real mind:

Oh, and on the note of alternative Turing tests, I now add civilization-scale tests (is this an intelligent civilization?) and the meta-Turing test (is this person able to detect that they are in a Turing test situation?). More to come. Bear with me…