In the latest edition of CloudViews Unplugged, Andi Mann and George Watt discuss, amongst other topics, research done at the University of California, Berkeley. The topic of this research is Swarm Computing, which appears to be somewhat related to research on robots. Swarm Computing goes way beyond simply interconnecting many devices through the Internet and enabling them to exchange information. It creates a self-organizing network of entities whose rules of interaction and reaction can produce something resembling a living organism.
Many of you will know the book ‘The Swarm’ by Frank Schätzing, in which protozoa work together to form a collective intelligence that manifests itself by forming larger organisms. The same principles apply when enabling robots in remote locations to work together on common tasks, e.g. gathering in one location after having done individual exploration. Every robot needs to perform some active tasks to complete that mission, but the master algorithm controlling it all sits above the individual algorithms owned by each robot.
The idea behind Swarm Computing is the same: individual devices all have their own software and capabilities, but they are enabled to collectively pursue a bigger goal, an overarching purpose, something that is beyond their individual reach. This is not accomplished through some sort of centralized control instance that has access to all devices, understands their capabilities and instrumentalizes them. Remember the robots, maybe on the Moon or on Mars? There is no practical way of exercising centralized remote control – too much distance from Earth, no command center on site, and far too little knowledge about the situation and condition of every individual robot. Rather, the individual devices need to be able to take care of themselves, and to understand and communicate their ability to contribute to an external command, without necessarily understanding the purpose behind that command.
Let’s try and get practical. Imagine we would like to optimize energy generation and consumption in a given country, and let’s limit this to private households for simplicity’s sake. There are nuclear plants, coal plants, solar and wind energy; there are the power grids; and there is a large number of devices demanding power. Now imagine that every device (or at least one per household) were able to understand its own power demand patterns and exercise some measure of control. For example, my central heating knows I want the water to be at 65°C, but I don’t really care at what time exactly it is heated up, so it has the freedom to decide this on its own within the limits I specify. Now enable all those devices to exchange some basic messages. Rather than trying to control it all from one central location, devices within a certain proximity (thus sharing a common part of the power grid) can start adjusting their power usage, e.g. optimizing it for when the most solar and wind energy is available within this area (say, from solar panels on private roofs), and so each logical part of the national grid starts optimizing itself. Then, on the next level up, grid controllers can start talking to other grid controllers in order to coordinate power requirements, and talk to the devices below to make them rearrange their power consumption if needed. None of the devices involved needs to understand the whole picture, yet we’d get a self-organizing mesh of devices that is highly flexible and scalable.
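To make the household example a little more tangible, here is a minimal sketch in Python. Everything in it is an illustrative assumption – the device names, the made-up renewable forecast and the simple scheduling rule – not a real smart-grid protocol; the point is only that each device decides for itself and the “message” to the rest of the swarm is just the reduced surplus it leaves behind.

```python
# Minimal sketch of the neighbourhood example above: each device knows only
# its own demand and the shared local forecast; there is no central
# controller. All names and numbers are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Device:
    name: str
    demand_kwh: float      # energy this device must draw once per day
    flexible_hours: list   # hours during which the owner allows it to run
    scheduled_hour: int = -1  # chosen by the device itself


def local_renewable_forecast():
    # Stand-in for solar/wind forecasts within one grid segment: expected
    # surplus in kWh for each hour 0..23, peaking around midday (made up).
    return {h: max(0.0, 8 - abs(h - 13)) for h in range(24)}


def self_schedule(devices):
    """Each device picks the flexible hour with the highest remaining
    renewable surplus, then lowers that surplus for the devices after it.
    No device ever sees the whole picture."""
    surplus = local_renewable_forecast()
    for dev in devices:
        best = max(dev.flexible_hours, key=lambda h: surplus[h])
        dev.scheduled_hour = best
        surplus[best] -= dev.demand_kwh  # implicit message to the swarm


heater = Device("water heater", demand_kwh=3.0, flexible_hours=list(range(6, 18)))
washer = Device("washing machine", demand_kwh=1.5, flexible_hours=[10, 11, 12, 13, 14])
car = Device("EV charger", demand_kwh=7.0, flexible_hours=list(range(8, 20)))

self_schedule([heater, washer, car])
for d in (heater, washer, car):
    print(d.name, "->", d.scheduled_hour)
```

Note how the devices spread themselves across the midday solar peak purely by reacting to what earlier devices left over – the same effect a central scheduler would aim for, but without one.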
How would this work in IT? Imagine Swarm Computing applied to Infrastructure Management, Automation, Monitoring, Performance and Capacity Management. Rather than following the traditional approach of defining, provisioning and registering instances, we’d establish those instances with some basic self-awareness and the ability to communicate with and react to other instances. Cloud isn’t that far away from this today – think about automatically adjusting virtualized workloads across hypervisor grids. The big difference is: with Swarm Computing we wouldn’t use one central piece of automation software; rather, the virtualized instances would talk to each other and to their nearest grid controller, those would talk to other grid controllers (potentially even outside the perimeter of their own datacenter), and workloads would then adjust themselves.
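The workload idea above can be sketched in a few lines, again purely as an illustrative assumption rather than any real hypervisor API: hosts sit in a ring, each host compares its load only with its immediate neighbour, and hands over one unit of work at a time when it is clearly busier. Repeated locally, this levels out the cluster without any central automation software.

```python
# Sketch of swarm-style workload balancing: each "host" gossips only with
# its right-hand neighbour in a ring and shifts one workload per round if
# the imbalance is real. No global coordinator is involved.
def rebalance_step(loads):
    """One gossip round over all hosts; loads[i] is the number of
    workloads currently running on host i."""
    new = loads[:]
    n = len(new)
    for i in range(n):
        j = (i + 1) % n
        if new[i] > new[j] + 1:  # only migrate when it actually helps
            new[i] -= 1
            new[j] += 1
    return new


# Start with a very uneven distribution of workloads across four hosts.
loads = [10, 0, 4, 2]
for _ in range(10):
    loads = rebalance_step(loads)
print(loads)  # the spread levels out without central control
```

The same purely local rule scales to any number of hosts – and, just as in the power-grid example, a “grid controller” level could apply the identical logic one step up, between clusters or even between datacenters.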
With kind regards!