Neutral Emergence

On Sunday, I wrote about cheating in probabilistic systems, but one thing I left out was that these systems are fundamentally neutral. A while ago, John Robb (quoting the Nicholas Carr post I referenced) put it well:

To people, “optimization” is a neutral term. The optimization of a complex mathematical, or economic, system may make things better for us, or it may make things worse. It may improve society, or degrade it. We may not be able to apprehend the ends, but that doesn’t mean the ends are going to be good.

He’s exactly right. Evolution and emergent intelligence don’t naturally flow toward some eschatological goodness; they move forward under their own logic, and they often solve problems we don’t want solved. For example, in global guerrilla open source warfare, this emergent community intelligence is slowly developing forms of attack (such as systems disruption) that make it an extremely effective foe for nation-states.

Like all advances in technology, the progress of self-organizing systems and the emergent results they produce can be used for good or for ill. In the infamous words of Buckethead:

Like the atom, the flyswatter can be a force for great good or great evil.

Indeed.