Turing tests III: The notion of NO intelligence

One commenter on the previous post raised an interesting point in identifying our financial markets as post-human, with their algorithms and agents as essentially opaque black-box systems, and predicted that they would be the origin of artificial intelligence.

First, I think that the origin of AI is an interesting question. If we believe that intelligence will emerge from a set of algorithms, it matters which algorithms they are, since different algorithms are under different selective pressures. We could imagine that the algorithms selected for in financial markets differ from those selected for in Turing tests, driving cars or playing Jeopardy. Intelligence stemming from DRM would be an interesting premise for a sci-fi short story.

Second, I think the black-box nature of some algorithms also makes it interesting to think about what we ascribe zero intelligence to. If we can predict a system completely, knowing exactly what results any interaction with it will lead to, we ascribe no intelligence to it. A clock is not intelligent, and we can reproduce and predict its behavior easily. But as soon as we leave the wholly predictable, we end up with a system where the question of intelligence can be raised.
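The clock example can be made concrete. A fully predictable system is one where an independently built model reproduces its behavior exactly, leaving nothing for a notion of intelligence to explain. A minimal sketch (the `clock` and `model` functions are illustrative inventions, not from the post):

```python
# A "clock": a fully deterministic system whose state is a pure function of time.
def clock(t):
    return t % 12  # hour-hand position after t hours

# Our predictive model of the clock: the same rule, reconstructed independently.
def model(t):
    return t % 12

# Every prediction matches the observed behavior exactly, so by the
# argument above we ascribe no intelligence to the system.
mismatches = sum(1 for t in range(1000) if clock(t) != model(t))
print(mismatches)  # 0
```

The moment `model` starts failing to match `clock`, the post's question opens up: do we explain the residual by a better mechanism, or by ascribing wants, beliefs and reasoning to the system?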

This says something about why we as human beings detect intelligence. We do it in order to find ways and methods to predict systems, and intelligence simply provides us with a new set of tools for doing that. If we think a system is intelligent, we shift to conceptual structures of wants, needs, rationality, feelings, logic et cetera, and we have developed a pretty good predictive apparatus for those.

And that may be the evolutionary explanation for why we are so well-equipped to detect intelligence: once we do, we are better able to predict the systems in question, and to interact with them to our sometimes mutual benefit.

Now, I find this interesting, because you could imagine a weird world in which people failed the Turing test all the time, i.e. identified systems as intelligent that would not survive any more in-depth examination of intelligence. Why should we have the ability to identify intelligent systems, one may ask, and the answer is probably that it carries a strong selective advantage: being able to predict what the system will do.

That which you can predict and explain fully, however, has no intelligence whatsoever. This needs more thought.


1 comment.

  1. Maybe in the near future we will see finance algos starting to learn and being creative, i.e. some sort of machine learning, ANNs (artificial neural networks).
