Music this weekend – VCMG

OMG! OMG! My friend Mikael pointed me to this, and I just remembered that Mykel mentioned it too (synth friends have to be named Mikael, it seems). And it is awesome. This is what Speak & Spell would have sounded like had it been released now, or like a prequel to Speak & Spell.

OK, that messed up my music listening plans. Now I will be listening to this for the foreseeable future. Vince Clarke. Martin Gore. (Bows and says “iamnotworthyiamnotworthy” :)

Alternative Turing Tests I: The Double Test and the Network Test

Lately the notion of testing systems for intelligence has occupied some of my free time. One thing that interests me is alternative designs for the well-known Turing test. There are quite a few variants already out there, but I wanted to add two more.

The first is the double Turing test, where the computer is tasked with determining whether the interlocutor on the other side is human, and the human is tasked with detecting a computer.

The salient question becomes whether the two detection algorithms differ. We know they will, since we have seen reverse Turing tests and captchas, but who will detect the other first? This version, really a Turing game, seems fairer.
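As a toy illustration of this Turing game, one could simulate it as a race between two detectors. The accuracy parameters and the round structure here are entirely hypothetical, just to make the "who detects the other first" question concrete:

```python
import random

def double_turing_game(human_accuracy=0.7, machine_accuracy=0.6,
                       rounds=20, seed=0):
    """Simulate the double Turing test as a race: each round, the human
    judge and then the machine judge get one chance to correctly identify
    the other. Whoever succeeds first wins. The accuracies are toy
    parameters, not empirical values."""
    rng = random.Random(seed)
    for r in range(1, rounds + 1):
        if rng.random() < human_accuracy:
            return ("human", r)    # the human unmasks the machine first
        if rng.random() < machine_accuracy:
            return ("machine", r)  # the machine unmasks the human first
    return ("draw", rounds)        # neither side detected the other

winner, round_no = double_turing_game()
```

With these made-up numbers the human usually wins early; the interesting experiments would of course vary the two accuracies against each other.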

The other variant that I think is interesting is a Network Turing Test. Here the task is for one network to determine whether another network contains more human than computer actors.

The network that gets closest wins. The reason for tweaking the test this way is that we rarely, if ever, should think about intelligence as a non-networked concept. Variants on this test would be detecting at least one human or computer in the network, or eliminating the possibility that any one node in the network is either.
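The estimation version of the network test can be sketched as a small scoring game. The actor labels and the guesses below are made up purely for illustration:

```python
def network_turing_test(network, guess_a, guess_b):
    """The network is a list of actor labels ("human" or "computer").
    Two competing networks each submit a guess for the number of humans
    in it; the guess closest to the true count wins (ties are a draw)."""
    true_count = sum(1 for actor in network if actor == "human")
    err_a = abs(guess_a - true_count)
    err_b = abs(guess_b - true_count)
    if err_a < err_b:
        return "A"
    if err_b < err_a:
        return "B"
    return "draw"

network = ["human", "computer", "human", "human", "computer"]
result = network_turing_test(network, guess_a=3, guess_b=1)
```

The "detect at least one human" and "eliminate the possibility that any node is human" variants would just swap in different scoring rules.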

In principle, though, this raises the more important question of what it means to test for intelligence, or for equivalence. The Wittgensteinian answer seems to be that we misunderstand the word intelligence, or its grammar, if we think we can test for the existence of it. Maybe the concept of the Turing test is an inverse indication of something…

Command capitalism: a one-cycle pony?

I have just started reading Acemoglu and Robinson’s Why Nations Fail. It is an interesting book for many reasons, but one argument the authors explore that I find particularly interesting is the notion that the command capitalist economies that seem to be doing so well will ultimately fail. The argument has three parts. The first is essentially that growth in our society is Schumpeterian: we need periods of creative destruction, and that is how capitalism grows.

The authors then note that the process of creative destruction always pits entrepreneurs against incumbents. The incumbents will behave as rent-seekers and seek legislation to protect their interests and hope that they can legislate away change. Entrepreneurs will power through and create new technologies, organizations and business models that disrupt the old.

And then, finally, Acemoglu and Robinson observe that in a command economy the state will own the majority of the big companies, or be entwined with them. The will to see change and innovation, and to encourage creative destruction, the engine of growth, declines radically with the stake you have in the incumbents.

Acemoglu & Robinson also point out that in command economies a small elite makes these decisions, almost guaranteeing that there will be no real impetus to embrace disruption and destruction. Thus a command economy can get one really good cycle, but that is about all. Certainly an interesting discussion, and a welcome counter-argument to the notion that command capitalism is better than open and free markets. Let’s see what the evidence tells us in a couple of years. The book, however, is a great read.

The Barlow Theorem

The more I think about it, the clearer it seems to me that with the reduction in the cost of technologies comes an increased need to control people, if you want to control the technologies.

There is a tipping point somewhere, where a technology hits the mainstream and, essentially through ease of use and availability, becomes very hard to control. Think about mobile phones and regulating who takes pictures where, or go back further and think about writing. How would one regulate writing in our society? That is a technology that has, in a sense, passed the point of regulability. This may be worth thinking more about. In the meanwhile I will give the notion a name, let’s call it the Barlow Theorem, and sketch out some ideas. Here is a first sketch.

The idea here is that in the technological society we see a phase shift at a certain cost for some technologies, where the regulability of the technology vanishes (what does that mean, and how would we measure it?) and it instead becomes necessary to regulate the use and possession of the technology, and essentially to control people. Not giving state licenses for use, but prohibiting possession of certain technologies.

What the Barlow theorem would imply is that the power you need to exercise to control a technology is, in a sense, inversely proportional to the cost of the technology. In order to stop an extremely low-cost technology you need draconian power. That does not seem wholly unlikely.
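If one wanted a toy formalization of that inverse relation, with P the coercive power required to control a technology and c its cost (both, of course, hypothetical quantities with no agreed-on units, and the functional form a pure guess):

```latex
P(c) = \frac{k}{c}, \qquad \lim_{c \to 0^{+}} P(c) = \infty
```

As the cost of a technology approaches zero, the power required to suppress it grows without bound, which is one way of reading the draconian-power claim above.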

Of course, there is something unsatisfactory with that notion too, like, isn’t it obvious? Why state it with such pathos? Is there something different happening in the information society that was not happening before, here? Well, with the acceleration of technological change you could argue that the time required for a technology to reach the point where the shift happens is shrinking very quickly.

I could also imagine someone saying “big deal, the state has always controlled people, not technology”, but that misses the wider point. It is a question of direct coercion versus requirements imposed on the code. It is the notion that the code is less and less centrally controlled, written and open to control. It is an empirical question whether this is true or not, of course. How would you measure the control over code in a society today? That is an interesting question, too, actually.


Cyberlibertarianism 2.0

Mathias Klang has written an interesting piece on my last post about the next Internet war. His theory is essentially that “power abhors a vacuum” and that the powers that exist now will re-assert themselves when they are challenged. I agree. I only think that the price gets higher in each iteration of that process.

We have lived through the first cycle of Internet policy, where Barlow gave way to Lessig, who gave way to Wu & Goldsmith, essentially going from “the net will set us free” via “code will regulate us” to “nation states can still lock you up and torture you”. All of that is true. But as technology becomes more and more powerful, control over technology will slowly converge with control over people. Chokepoints will disappear and individual ability to connect will increase. As that happens the next iteration starts, with a new cyberlibertarianism, and this time the ability of nation states to lock us up and torture us becomes the focus of the cycle. If they do not, they actually lose control.

So am I saying the net will solve all problems? Absolutely not. What I am saying is that in order to make John Perry Barlow’s prophecies stay prophecies the state will have to exercise more force and coercion over individual citizens.

Mathias is right. We have heard it before. Marx noted that history repeats itself, first as tragedy, then as farce. In a sense we can but hope that cyberlibertarianism 2.0 will be a farce. We have lived through the tragedy of a net that is ever more tightly regulated and restricted: censorship, kill-switches, surveillance programmes. We deserve a farce with a happy ending.

The thing that sometimes worries me is that the alternative is not the status quo. It is not tinkering with the net as is. Because the net will continue to evolve and technology will make us even more powerful. The second time around the alternative to Barlow is not Lessig or even Wu&Goldsmith. It is Solzhenitsyn.

I think I get Mathias’ point. My counterpoint would be that the price of cynicism just went up a notch.