What is your care/share-ratio?

Thursday, December 22, 2011

This article in MIT Technology Review made me think. The author sets out by comparing Moore’s law and Zuckerberg’s law and notes that there are fundamental differences between them. Z’s law says that online sharing will double yearly over the foreseeable future. The article’s author notes that this is not interesting in itself without an additional concept of “caring”.

We could simplify this and think of the caring/sharing ratios for different networks: how much material we share and how much of it we care about or actually consume. (Presumably we could even define social networks as networks that depend on their care/share ratio for long-term survival.) As the c/s-ratio approaches zero there is a point at which people just quit.

That allows us to ask a couple of interesting questions about social networks:

1) Is the c/s-ratio dependent on the number of connections or other network properties? I.e. is there a social density relationship? This seems almost certain, but I am not sure what the relationship looks like.

2) Is the c/s-ratio dependent on social segmenting (lists, circles et cetera)? Does social segmenting help us care more?

3) What is the unit of measurement? Time spent caring/sharing? Or megabytes shared/cared about? One way to understand Zuckerberg’s law would be to say that it applies to the amount of data shared, as in the size of the shared data set. That will grow as we share more high-resolution images, and that will happen as laws like Moore’s increase the capacity of the technology. But if I share a 1 MB picture one year and a 2 MB picture the next year — is it really accurate to say that my sharing doubled? It seems clear that Zuckerberg’s law was conceived in terms of megabytes, not hours. Therein lies much of the challenge to it. (A small sketch after this list illustrates how the choice of unit changes the conclusion.)

4) Is the c/s-ratio applicable to other online phenomena like gaming? I share and care at the same time when I play WoW, right? It seems that there is something asynchronous going on in the c/s-analysis.

5) It seems obvious that the c/s ratio will develop in a certain way over time, and that it will compete with other things we care for. Attention economics needs to be developed in more detail. Is there any way that we can increase our attention? I think visualization – as the article author hints – may be really important here. Applications like Wisdom allow us to “meta-care” about things aggregated into visualizations.

6) What does the c/s-ratio look like if we order the networks we participate in today? 4sq, FB, Twitter, G+…I think my order is G+, FB, Twitter and then everything else (but of course I am biased). I guess my email has a c/s-ratio too. Hm.
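To make the unit question in 3) concrete, here is a minimal Python sketch. All the numbers are invented for the example; the only point is that the same behaviour looks like “doubled sharing” in megabytes and like “no change” in hours:

```python
# Toy illustration of question 3): the same sharing behaviour measured in
# two different units. All numbers below are invented for the example.

def care_share_ratio(cared, shared):
    """c/s ratio: how much is actually cared about per unit shared."""
    return cared / shared if shared else float("inf")

# Year 1: I share one 1 MB photo; friends spend two minutes looking at it.
# Year 2: I share one 2 MB photo (better camera); friends still spend two minutes.
shared_mb = {"year 1": 1.0, "year 2": 2.0}           # sharing measured in megabytes
shared_hours = {"year 1": 2 / 60, "year 2": 2 / 60}  # sharing measured in hours of material
cared_hours = {"year 1": 2 / 60, "year 2": 2 / 60}   # hours others actually spent caring

for year in ("year 1", "year 2"):
    by_mb = care_share_ratio(cared_hours[year], shared_mb[year])
    by_hours = care_share_ratio(cared_hours[year], shared_hours[year])
    print(f"{year}: c/s by MB = {by_mb:.3f}, c/s by hours = {by_hours:.3f}")

# Measured in MB, sharing has "doubled" and the ratio has halved;
# measured in hours, nothing has changed. The unit choice drives the conclusion.
```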

There is a lot more (some notes also here), but those are some initial thoughts for now. On a not so related note, there is snow in Sweden, and everything is very Christmassy. It is kind of nice to have snow in December. I did not think I would feel this way.

Argument from different model – a new foul in the theory of argument?

Friday, December 9, 2011

In Dietrich Doerner‘s excellent book on decision making, charmingly called The Logic of Failure, he develops the following problem solving model:

  1. Formulation of goals
  2. Formulation of models AND gathering of information
  3. Prediction and extrapolation
  4. Planning of actions, decision making and execution
  5. Review of effects and revision
This is a great model for several reasons, but the most important thing in Doerner’s model, and the one left out of many others, is the first element of the second step: the formulation of, precisely, a model. It is essential to problem solving, and far too little time is usually spent on it — because it is hard.
So what does it mean to develop a model? Let’s look at a practical problem. Let’s assume that I want to win an election. That is my goal. Now, what has to happen next is not that I run out and look at stats and opinion polls and God knows what, but rather that I agree with my team on how elections are won. What is the model we are working with? Do we believe that the electorate consists of three different categories of voters, for example? Right, swing and left? If so, do we care about the left and right or only the swing vote? How do we think someone decides if they are a swing voter? What matters most? If one member of the team thinks it is about the appearance of winning (i.e. the sharp jaw, the great hair) and another thinks it is the issues, and specifically the economy, then their gathering of information and planning will look very different. If it is a little bit of both, maybe that is a good thing to agree on too.
A weak model will almost guarantee a weak outcome. If you want to beat a chess master or a karate fighter your model will be chess or karate as a formal system, and you will try to figure out if they favor an opening or a special kick through gathering information. You will not expect the chess player to kick you, or the karate fighter to move his queen.
Yet many people have only very hazy models of what the game they are playing looks like. So a new generation of problem solvers has recommended that you think about your problem as a game. What does playing the game look like? How do you win? How do you keep score? Any more complex problem deserves to be turned into a game, if for no other reason than the utility of showing you your model and letting you examine it with others.
Whether you do it by formal gamestorming or just by thinking about this as a systems science challenge (as in systems thinking) does not really matter, I think. But being explicit about my models is one of the things I am working hard on – and I think an exciting thing to think about.
A classical example of model failure is perhaps found in politics. It is the case where someone believes that they are right on the substance, and cannot understand why the system still comes to the wrong conclusion. When that happens, frustration ensues and people start calling each other names, claiming that the other side is irrational or evil. But that is rarely the case. It is far more common that people have different models of reality than that they share your model and are evil.
Model disconnect is probably a very common phenomenon, and a very upsetting one. I can think of several examples of this in economics, climate change and information policy that would benefit from recognizing that.
Perhaps we should introduce a new foul in the theory of rational argument? Just as we have outlawed argumentum ad hominem we could introduce argument from different model as a foul – but for both parties.
If they do not agree on the model, well, then they are not really arguing at all. They are just loudly misunderstanding each other.
There is a lot of that, though, I fear.

The imperative of openness for data society

Wednesday, December 7, 2011

Aficionados of Isaac Asimov’s Foundation series will like this article from MIT Technology Review. In the article we can read about how physicists have developed methods that allow forecasting, much like weather forecasting, of purchasing decisions. And the way they do this should sound familiar to Asimov fans:

Here’s how they do it. These guys think of humans as if they were atoms interacting with each other via three different forces. The first is advertising, which they think of as a general external force, like a magnetic field. The second is a word-of-mouth effect, which they model as a two-body interaction. Finally, they think of rumours as an interaction between three bodies.
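As a rough intuition for what such a model could look like, here is a toy, Ising-style sketch. It is not the authors’ actual model: the update rule, the coupling strengths and the noise are all my assumptions, chosen only to show the three kinds of forces side by side.

```python
import random

# Toy sketch of "humans as interacting atoms" with three forces:
# an external advertising field, a pairwise word-of-mouth coupling,
# and a three-body rumour term. Parameters and update rule are assumptions.

N = 200   # number of agents
H = 0.1   # advertising: uniform external field
J = 0.5   # word-of-mouth: strength of pairwise influence
K = 0.3   # rumour: strength of three-body influence

# state[i] is +1 if agent i intends to buy, -1 otherwise
state = [random.choice([-1, 1]) for _ in range(N)]

def step(state):
    new = state[:]
    for i in range(N):
        a, b = random.sample([j for j in range(N) if j != i], 2)
        # local "field" on agent i from the three forces
        field = H + J * state[a] + K * state[a] * state[b]
        # the agent aligns with the field, with some noise
        new[i] = 1 if field + random.gauss(0, 0.5) > 0 else -1
    return new

for t in range(50):
    state = step(state)

print(f"share intending to buy after 50 steps: {state.count(1) / N:.2f}")
```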

The possibility of predicting the system as the number of forces working on the atoms increases is of course the really tricky question. Asimov also used the analogy of a gas and established a number of important theorems that form the basis of psychohistory. One of the most important, but one that I believe is often excluded from enthusiastic discussions of how we could use data sets to predict different things, is the second (or third, accounts vary) theorem of psychohistory. It says:

that the population should remain in ignorance of the results of the application of psychohistorical analyses

I have seen people react differently to this idea, that the usefulness of a predictive system declines with how well the predictions are known, but most reactions seem to want to trivialize the problem. Yes, people will say, but let’s try anyway. I think that is a good attitude, but it is theoretically interesting to examine what it would mean if we move down the road that research like the one referenced in the MIT Tech Review opens up.

In one sense psychohistory is a manipulative tool. A society that can be predicted loses some, perhaps all, of its democratic nature, and we move from a free market to a more planned economy. Hayek famously argued that the knowledge coordination problem is not only best solved, but only solvable, through markets, but there are those who believe that modern technology proves him wrong — that we could predict and plan society much better now with the large data sets that are accumulating everywhere in our society. They are not necessarily wrong, but their conclusion, per the second theorem, depends on those data sets and predictions being kept secret from others.

But, they argue, that is fine. Because the value of living in a carefully predicted and planned society is larger than the value of living in a society where everyone has access to the data used for the predictions and where these predictions are thus rendered useless. Their arguments will range from national security and health to economics and social equality.

Let’s assume for a moment that the complexity facing any attempt at psychohistory is not such that it renders all attempts futile. That we could in fact develop social predictions with high accuracy through the use of large data sets, and that we could act on them and plan our societies accordingly. The question we then have to ask ourselves is this:

Does the value of predictability trump the value of openness?

There is an assumption here that is worth highlighting. And that is that for a democracy to remain open it cannot be predictable by only a few. That is a complex and perhaps provocative assumption that I think we should examine. I believe this to be true, but others will say that our democracy already is predictable, in some sections and instances, only to a few, that they build their power base on that information asymmetry, and that it is reasonably open still. Maybe. But I think that those asymmetries are not systematic to our democracy, but confined to those phenomena, like stock markets, where they are certain to be important, but where they also do not threaten the nature of democracy as such.

In summary, if we share the data and allow everyone to use it, then predictability goes down. And I think it decreases fairly fast. So in a simplistic graph:
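A minimal stand-in for that graph: the exponential shape and the constant are my assumptions, since the only claim above is that predictability falls fairly fast as access widens.

```python
import math

# Predictability as a function of how widely the data and predictions are
# shared. The functional form (exponential decay) and the constant k are
# assumptions made only to sketch the curve.

def predictability(openness, k=5.0):
    """openness in [0, 1]: share of the population with access to the data."""
    return math.exp(-k * openness)

for share in (0.0, 0.1, 0.25, 0.5, 1.0):
    print(f"openness {share:.0%}: predictability {predictability(share):.2f}")
```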

This leads to the guess that in the information society, open access to data is not only a great way to encourage new entrepreneurship, it is actually also a safeguard for an open democracy. Governments are still, by far, the largest data holders, I think, and the data they collect is very suitable for social predictions. (And yes, companies should do this too. We do, and others do too, through tools like trends and insights, and we believe that the world is better for it, with lots of research flowing from that shared data.)

I remember reading Asimov, and re-reading him, with awe. I loved the idea of psychohistory, and thought that if only the math geeks were in control then things would turn out just fine. I thought Asimov himself was a prophet. Turns out that he was quite sceptical, and often talked about both the advantages and disadvantages of being able to predict societies and plan them.

In fact, in Asimov’s later writings it turns out to be a robot that encourages Seldon to develop his science into a social means of control. As we all know, the robots in Asimov operate under the three laws, and they want to reduce harm to humans. A predictable society would allow for that to be done as carefully as possible, but it would also curtail some of the human spirit, creativity and holy insanity that make us human.

If there is a conclusion here it seems to be this: explore the amazing value of data under the imperative of openness, to the full extent possible, so that our societies gain from the new, fantastic age of data innovation, discovery and exploration that we are entering, but never compromise on that openness in the pursuit of macro-social predictability.

At what reform cost does democracy break?

Tuesday, December 6, 2011

Imagine that a lawmaker came to you and said that he wanted to propose a law that would contain the following provision:

This law can only be changed after 15 years of complete consensus in the legislature and after paying 10 million USD.

What would your natural reaction to that provision be? I am guessing that it would be a mixture of incredulity and horror, right? Because if we start legislating like this we completely invalidate the democratic processes that are enshrined in our constitutions.

But all laws are a little bit like this. To change a law is always harder than it is to put it in place. Revoking a law is very, very hard, since we believe that there is some wisdom in evolving the institution of the law respectfully (an idea that I agree with, to a certain degree – I do think that society goes through periods of punctuated equilibrium where it is appropriate to change and revoke many laws, but that is another matter).

One way to understand this is to think about it as a matter of cost. All laws have, because of the system they are embedded in, a certain reform cost associated with them. A number of observations flow from this assumption.

First, that not all laws have the same reform cost. Laws that are embedded in systems with greater inertia have a higher reform cost than laws that are more or less isolated in national legislation. A simple example would be a European directive vs a national law in a non-harmonized area. The European directive takes longer to change, and the cost associated with reforming it is greater than the cost of changing purely national legislation.

Second, the order of magnitude of reform cost goes up for every system that the law ties into. Let us define a way to measure this. Let u be the reform cost of a law. Here is my proposed hypothesis:

u for any given law can be calculated as the cost of changing the law multiplied by the number of different independent regulatory systems it is embedded in.

This is somewhat inexact, and only works as a guiding principle, but let’s take an example. The European directive on electronic signatures has been embedded in 27 systems, so the reform cost for it is 27 times what it would cost to change a rule that had only been embedded in a single country’s legislation. But that does not really capture the complexity of different levels in the regulatory system, right? A European directive has to be implemented in national law, so changing it is a twofold process – first change the directive, then implement it across 27 different countries. I think that deserves another multiplier. So we end up with the notion that the reform cost depends on the number of regulatory systems that need to change times the number of actual regulatory levels (international, regional, national). That seems to give us an interesting way to extend the hypothesis. So.

u of a law is the cost of reforming it, times the number of regulatory systems it is embedded in, times the number of levels of regulation that come into play.

In the EU case we would get 27*2*reform cost. The number of countries times the number of levels times the cost. We could always throw a national constant in there for specific cost adjustments in some countries. So if a country just accepts EU directives without implementation, well, its constant would reflect canceling out the lion’s share of the two-level cost.
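A minimal calculator for the hypothesis, with the e-signatures directive as the example. The base cost is an arbitrary unit, and the national constant is the optional adjustment mentioned above:

```python
# Rough calculator for the reform-cost hypothesis sketched above.
# Everything here is illustrative; the base cost is an arbitrary unit.

def reform_cost(base_cost, regulatory_systems, regulatory_levels, national_constant=1.0):
    """u = base cost x number of regulatory systems x number of regulatory levels."""
    return base_cost * regulatory_systems * regulatory_levels * national_constant

# Purely national law in a non-harmonized area: one system, one level.
national = reform_cost(base_cost=1, regulatory_systems=1, regulatory_levels=1)

# The e-signatures directive: embedded in 27 member states, two levels
# (first change the directive, then re-implement it nationally).
directive = reform_cost(base_cost=1, regulatory_systems=27, regulatory_levels=2)

print(national)   # 1
print(directive)  # 54: 27 * 2 * the base reform cost
```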

So, why is this interesting, then? A very legitimate question (and thanks for staying with us this long!). The reason it is interesting is that I would argue that very high reform costs undermine the legitimacy not only of legislation but of democracy. Just as in the opening example, we start thinking that this particular legislation breaks democracy since it is so hard to change. Why bother to engage?

Now to the next observation. As we go towards a more and more globalized context, our legislation accrues higher and higher reform costs. A law flowing from an international treaty, embedded in a directive, implemented in national law and protected through a trade agreement has four levels of complexity, and if it is implemented in any number of countries, well, reforming it is almost impossible.

Should we allow for that?

Well, you may argue: we arrived at those laws in legitimate ways, so why not? If a law is decided upon in a legitimate way, and put in place without force, it is legitimate. I think this is a flawed argument. The reason is really interesting. I think the legislative process is asymmetric: it is easier to produce laws than to revoke them. This asymmetry creates a situation where the sum total of democratically arrived at legislation may be undemocratic in effect.

Let’s say that again. The sum of democratically decided legislation is not always democratic in effect. In fact, there can be cases where it hurts democracy, because of exorbitant reform costs.

So what can we do? We need to start thinking about symmetry in legislation: how our processes for revoking and sunsetting legislation can be enhanced, and how we can craft instruments for challenging sclerosis in the legislative system. We need to start measuring and carefully thinking about reform cost as a very peculiar cost on democratic and civic engagement. Reform cost is a powerful deterrent for anyone who is engaged in an issue. If I want to change something and find that I face a condition like the one described at the beginning of this text, I will be dissuaded.

Activism, if we want it to be something other than empty posturing, needs to have a chance to succeed. Civic participation, for it to make sense, must make a difference. In a society with exorbitant reform costs none of that is true.

Let’s figure out how to change that. Maybe change begins by measuring.


ADBJ turns 30 — five questions for the coming 30 years

Tuesday, December 6, 2011

Today I had the opportunity to give a talk, remotely via a link, about the challenges of the coming 30 years where law meets technology. As a former board member, member and general admirer of ADBJ, it was an extra pleasure. Below is a summary of the points I spoke from.

  1. The information society develops in the tension between two different visions of the future. One is the image of an ordered and controlled society where technology supports a rational human being, the vision perhaps most clearly articulated in Paul Otlet’s Mundaneum. The other is the image of a society where technology has exploded, rationality wavers and chaos opens up, the image Borges gives us in The Library of Babel. The law navigates between order and chaos, control and openness.
  2. Over the coming thirty years we will see a host of new challenges. They are not only about new technology, but ultimately about new concepts: the new concepts that follow from how the technology is used. The model for understanding the interplay of law and technology that I believe ADBJ and IRI have always stood for is one where technology does not determine how the law transforms, but nor does the law regulate technology; rather, both technology and law follow a changing social practice, new patterns of use. I chose five questions that interest me, and that I believe will become important.
  3. How do we regulate artificial agency? Where we used to think the question was about artificial intelligence, it is becoming increasingly clear that the exciting question for the law is the question of agency, of who is doing something. This applies not only to questions of liability, where the question of who the “self” is that drives a self-driving car is obvious, but also to copyright, where artificial creativity challenges the classical concepts of work and author. When Apple’s assistant Siri gives tips on where to hide bodies, she displays the beginnings of agency, but is it aiding and abetting? Artificial agency challenges the law at its core, since we have always assumed that responsibility is something only humans can take.
  4. How do we regulate knowledge production? We live in a society where the number of things we could find out is actually larger than the number of things we want to know. There are plenty of examples of this, not least the private market for genetic tests, as 23andme.com shows. We are seeing an unprecedented data explosion, and sense that innovation, research and creativity will seek new expressions in data, but as humans we have always had an uncomfortable relationship with knowledge. Think of the Tree of Knowledge, of Prometheus, of Faust: we have a very ambivalent relationship to the value of knowledge. The regulation of knowledge production becomes central not least when the state shares the data it has collected. When Cameron recently chose to try to open up the NHS databases, he was met by protests demanding that he refrain, in the name of privacy. How do we regulate knowledge production in balance with privacy, intellectual property and freedom of expression?
  5. How do we regulate stories and storytelling? Information technology has turned out to be a storytelling technology, and the story has become the new relevant unit for understanding how society develops. We narrate everything about ourselves, about others and about the world around us on Facebook, Twitter and other social networks. Ricoeur’s observation that all identity is narrative breaks through in the privacy field, and the question of who the narrator is gains new relevance in the case of the Chinese 50 Cent Army. The question of when we may tell something is fiercely debated in connection with the shutdown of the Internet in Egypt, the debate about shutdowns in the United Kingdom in connection with the riots, and the actual shutdown of the network on the Californian commuter rail system BART. How we regulate human storytelling will fundamentally affect how society develops.
  6. How do we regulate borders? Technology is global, politics local and the law still national. The growing tension between the three is aggravated by increasing socio-economic density. We are moving from humanity being interconnected through six degrees of separation to four. We are connected and joined together in entirely different structures than those that have traditionally carried regulation, politics and the exercise of power. Technology tempts us with re-territorialization through IP geolocation, but something fundamental is lost if we go down that road. So what do we do?
  7. How do we regulate ourselves? We are becoming part of ever more complicated networks. Sensors, cameras, microphones and new forms of technology turn us into cyborgs interconnected with technical systems. We internalize the technology and become part of it. This almost Heideggerian endpoint, where we place ourselves at the disposal of the technical systems, raises questions about who we are and how we acquire rights, how we can change ourselves and what we have the right to change.
  8. In a future Mundaneum, a controlled and ordered world, we will perhaps demand human responsibility, limit what we can find out, control the stories, uphold the territorial borders and try to maintain a strict separation between the human and the technological. In a future Library of Babel, artificial agency is a fact, knowledge production is absolute (what we can know, we know), the stories compete in Darwinian fashion but are not limited by law, the borders are dissolved and the human being is a new concept, an extended concept that also covers the systems we have become part of.
  9. All these questions will be decided in the borderland where ADBJ is active, and it is more important than ever that we have an open debate about where we want to end up. The future is open.
I am looking forward to the 60th anniversary in 2041!

Should we teach coding in schools?

Monday, December 5, 2011

In the UK right now there is an interesting debate about whether schools should teach programming instead of having kids do boring things in office software. I think the answer is a resounding yes. Not only because, as some say, “coding is the new Latin”, but because there is something else here that I think is extremely valuable: the way of thinking that coding teaches.

Computer science, to me, is interesting not chiefly because it is about computers, but because it is actually a new kind of science. The way you are taught to think about the world in computer science, the way you are taught to approach problems and challenges: all of that is what makes it clear to me that teaching coding in school is teaching rationality in motion.

The Economist, as usual first on the ball, realized this early on and wrote in a 2006 article about a Microsoft project that Computer Science is changing science:

That claim is not being made lightly. Some 34 of the world’s leading biologists, physicists, chemists, Earth scientists and computer scientists, led by Stephen Emmott, of Microsoft Research in Cambridge, Britain, have spent the past eight months trying to understand how future developments in computing science might influence science as a whole. They have concluded, in a report called “Towards 2020 Science”, that computing no longer merely helps scientists with their work. Instead, its concepts, tools and theorems have become integrated into the fabric of science itself. Indeed, computer science produces “an orderly, formal framework and exploratory apparatus for other sciences,” according to George Djorgovski, an astrophysicist at the California Institute of Technology.

Couldn’t we just teach math, then? Math is all good and fine, but programming is taking math and making sure that you use it to do things. The kick you get out of completing your first program is amazing. I still remember the canned-response AI-wannabe program I wrote on my first Swedish computer, an ABC 80. It was, um, horrible, but I learned so much from that experience. And this is where coding is a way of doing science. I discovered a lot by trying to build an intelligent program, about both computers and humans. From the basic insight that humans obviously do not have canned responses (well, some might, but you know what I mean), to why it is really hard to construct software that can answer questions, and how little needs to change in an input string for the conditional IF-test not to work… the things that exercise taught me were about programming, sure, but it was also a great way to understand the problem I wanted to solve. I discovered and understood things about the world through trying to code models of them.
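For flavour, here is a minimal modern stand-in for that kind of canned-response program. Mine was in BASIC on the ABC 80; this Python sketch is only meant to show how brittle the exact-match IF test is:

```python
# A minimal sketch of a canned-response program and its brittleness.

CANNED = {
    "how are you?": "I am fine, thank you.",
    "what is your name?": "I have no name, only responses.",
}

def reply(user_input: str) -> str:
    # The brittle part: an exact-match IF test. Change one character
    # in the input string and the condition no longer holds.
    if user_input in CANNED:
        return CANNED[user_input]
    return "I do not understand."

print(reply("how are you?"))   # I am fine, thank you.
print(reply("How are you?"))   # I do not understand. (one capital letter breaks it)
```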

Computer science offers a new way to understand the world, to think about it as algorithms and data structures and data sets. That is extremely powerful. So should we teach kids coding instead of teaching them to cut and paste in word processing software? It does not seem to be a very hard question, does it?

Now, this requires that teachers know coding, or that people who know coding want to teach. And it requires that coding is taught in teacher training. It requires rethinking computer science: not classifying it as a science about computers, but as a new kind of problem solving with new kinds of tools.

But it is definitely worth it. The last paragraph in a column in the Guardian today brought that home to me:

That’s why software is like magic: all you need is ability. And some children, for reasons that are totally and wonderfully mysterious, have an extraordinary aptitude for programming – just as some have a musical, mathematical or artistic gift. If the government excludes computer science from the national curriculum then it will be effectively slamming the door to the future.

Opening up for kids who are algorithmic learners, say, who understand the world as data structures and algorithms, seems just as important as adapting to visual and auditory learners.

In Sweden today we start learning English at a young age. That is great. Let’s add coding from the third grade. Imagine a world where no kid leaves school in Sweden without six years of coding – six years of discovering what the world looks like through the lens of computer science… that would be interesting, now, wouldn’t it?

Vanderlyle Crybaby Geeks live in San Francisco

Monday, December 5, 2011

Okay, so that was a pretty amazing concert with The National yesterday. A friend recently said that the audience in SF is hard to win over and desperately wants to show how laid back it is (which did not exactly work at the VNV Nation concert, since Ronan shouts at the audience to dance and move around more or less constantly), and I have to say I am a little disappointed in my fellow audience members. There were plenty of opportunities to jump a little, but apparently it is cooler to sway with a glass of red wine, spilling it in a suitably insightful way. Then again, it may be the band that encourages this. The singer drank roughly a bottle of wine during the concert. You can see it in the last number too, a sing-along on my favorite track from the new album: Vanderlyle Crybaby Geeks.

Some kind soul captured the whole thing on YouTube, and it is worth watching:

The only time people jumped was more out of shock, during Mr November, when the singer simply leapt out into the crowd, walked around and sang at the top of his lungs. Imagine my surprise when he turned up next to me, gave me a friendly pat on the back and disappeared onward into the mass of people. Here you can see him sneaking around in general and singing as best he can. Poor mic guy.

There is something about concerts, though. I think the old AI skeptic Hubert Dreyfus (Heideggerian galore) may be onto something when he writes that we need to find our way back to the moments when reality glows Dionysically (in the book All Things Shining). Or it is simply that I have grown so old that I need a philosophical alibi for jumping around like a twenty-year-old and singing along half decently in the choruses. It is probably the latter.

In any case: I will explain everything to the geeks.

Wish list – The Ultimate Christmas Hamper!

Saturday, December 3, 2011

I go completely weak when I see things like this:

The Ultimate Hamper from Bureau Direct in the United Kingdom. What a perfect Christmas present! And only 235 pounds! What is it about notebooks, pens and folders that is so magical? The truth is that I do not really know. I think it is that they represent a kind of tool for learning and thinking, and that they contain a shaky promise of time for self-improvement.

Self-improvement. It is a beautiful word. I would like to spend more time improving myself. Go deeper. Write more. Read more.

While I figure out how to find that time, I can always buy The Ultimate Christmas Hamper. If I have all those notebooks, fountain pens and pads, then time will almost have to bend and curve around them, will it not? Time must surely feel obliged to give me a chance to fill those blank pages with reflections (and reflexions), epigrams, loose fragments and aphorisms – existence itself ought to see the importance of letting my inner Horace march freely, like a field marshal of thought, across the white pages.

It is a given, really. Just place the order. (Whistling with such cheerfulness that the denial of reality is barely even noticeable.)

Margulis on being controversial

Saturday, December 3, 2011

Q: Do you ever get tired of being called controversial?
A: I don’t consider my ideas controversial. I consider them right.

Lynn Margulis’ response when interviewed in a magazine about her ideas on evolution, Gaia and more. While I am more inclined to believe in the Medea hypothesis than the Gaia hypothesis, I think Margulis’ response is a very thoughtful one. She recently passed away. Edge has a small piece that is refreshingly honest, with a note from Jerry Coyne that I think shows more respect than many other sickeningly sweet obituaries do.

Optimal social density?

Friday, December 2, 2011

According to some interesting research out of Facebook, the degrees of separation between different nodes in the social network are decreasing. In but a few decades, if these numbers are right, we have gone from 6 degrees of separation to somewhat more than 4. Now, the interesting question is whether this is a good thing or not. Are lower degrees of separation (or, to put it sloppily, higher social density) a good thing or not?
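One back-of-the-envelope way to connect social density to degrees of separation is the random-graph approximation, where the mean path length is roughly ln(N)/ln(k) for N nodes with k connections each. Real social networks are not random graphs, so treat this only as an intuition pump:

```python
import math

# Rough relationship between average number of connections (density)
# and degrees of separation, using the random-graph approximation
# mean path length ~ ln(N) / ln(k). This is a sketch, not a model of
# the Facebook data mentioned above.

def mean_path_length(nodes, avg_connections):
    return math.log(nodes) / math.log(avg_connections)

N = 7_000_000_000            # roughly the world's population in 2011
for k in (50, 150, 1000):    # 150 is about the Dunbar number
    print(f"avg connections {k}: about {mean_path_length(N, k):.1f} hops")
```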

Obviously, making the case that it is good seems easy. We could argue that a more connected world will be more peaceful, that knowledge will be shared more easily and that we will all be able to understand each other better. But that seems to imply that the lower the number, the better. This seems obviously untrue, though, right? Imagine a world in which everyone is connected to everyone else. That would be much the same as not being connected to anyone, since any interaction is still limited by basic human nature.

So here is a question: what would be the optimal social density for a general purpose online network? There are a number of different possible answers to that question.

  • Any density that follows from the total number of nodes in the network and a first set of connections that do not violate the Dunbar number. You could imagine that social networks are most useful when you can use and connect with all of your contacts in a natural way.
  • Any density that follows from the scarcity of attention. I.e. you could imagine a social network as an attention consumption device, and the density should be determined by the attention you are willing to invest in it, with a ceiling value of your total attention overall.

It is also possible to argue that the overall number is without interest, and that what we should focus on is the number of hops within cliques. What happens with nations, companies and other clique like networks over time? Do we see the same reduction in the number of hops there? 

Is it possible to think of values or advantages in systems that have really high degrees of separation? 7 and up? I sometimes wonder what society would be like if the degrees of separation were double what they are today, 12 or more. How would such a society work? Better or worse? As some of the comments on the RWW article point out, we may already live in such a society, and the measured figure may only reflect the degrees of separation of the people who have joined FB. That set is surely biased. This is true, but it still allows us to ask what optimal social density looks like and what determines it. It seems that there is a cut-off point somewhere, where the lack of separation leads to social meltdown.
I wonder if there are any natural experiments we can look at, where two groups with different degrees of separation compete for the same resources, and whether we can see if one social density is superior to another. Or if there would be a way to simulate such an experiment to see what social densities are optimal.
What would that look like? I guess you could imagine an equation that took into account the benefits of social density, and then we would have to model the costs. One way to do it would be to say that the speed of signals depends on social density, but that the accuracy of the signal depends on separation. That is not quite right, though. As we know, signals often deteriorate when they have to travel through a number of different nodes in a network, so lower social density would also increase the unreliability of the signal. Maybe there is something there, and maybe attention needs to be factored into the simulation too.
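A toy sketch of that simulation idea: in a denser network a message needs fewer hops, so it arrives faster, but every hop can distort it a little. The per-hop error rate is an assumption, and attention costs are left out entirely:

```python
import random

# Toy sketch of the speed/fidelity trade-off discussed above.
# per_hop_error is an assumed probability that a hop distorts the message.

def fidelity(path_length_hops, per_hop_error=0.05, trials=10_000):
    """Share of messages that arrive undistorted after the given number of hops."""
    intact = sum(
        all(random.random() > per_hop_error for _ in range(path_length_hops))
        for _ in range(trials)
    )
    return intact / trials

for hops in (4, 6, 12):
    print(f"{hops} hops: delivery time ~{hops} steps, fidelity {fidelity(hops):.1%}")
```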
I wonder if there is a sci-fi story about two civilizations with completely different social densities at war? Ok, off to check that. Enjoy your Friday, folks.