A wireless sensor network consists of a large number of spatially distributed sensors. Each sensor (or node) can send packets to, and receive packets from, other nearby sensors. Message packets are passed from node to node in the direction of a sink. To this end, each node is aware of its 'hop' distance from the sink; the hop distances are refreshed by the periodic transmission of small control packets. The sensors are low-power, low-capacity devices, designed for unattended operation over extended time periods.
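The hop-distance refresh can be pictured as a breadth-first flood of control packets outwards from the sink: a node's hop distance is the length of the shortest path the flood took to reach it. A minimal sketch, assuming a simple adjacency-list representation (the function name and graph layout are illustrative, not taken from SwarmNet):

```python
from collections import deque

def hop_distances(neighbours, sink):
    """Breadth-first flood from the sink: each node's hop distance is
    the minimum number of transmissions needed to reach the sink."""
    dist = {sink: 0}
    frontier = deque([sink])
    while frontier:
        node = frontier.popleft()
        for nbr in neighbours[node]:
            if nbr not in dist:          # first control packet to arrive wins
                dist[nbr] = dist[node] + 1
                frontier.append(nbr)
    return dist

# A tiny chain network: 0 (the sink) - 1 - 2 - 3
net = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(hop_distances(net, 0))   # {0: 0, 1: 1, 2: 2, 3: 3}
```

In a real network the flood is repeated periodically, so hop distances track nodes that fail or move.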
One example of a possible use of such a network is the early detection of forest fires. The sensors are distributed throughout the forest. An onboard heat detector triggers the sensor into action and data packets are transmitted towards the sink. Typically, many sensors in a small area might start streaming packets at the same time, leading to congestion and lost packets. The design of routing algorithms to deal with such scenarios is an active research field. In particular, the routing algorithm must minimise congestion.
Transmission failures arise from a number of factors. For example, if two or more packets are transmitted towards a specific node at about the same time, they are liable to 'collide', i.e. electromagnetically interfere. The colliding packets are not received and must be retransmitted. Furthermore, sensors have limited memory; packets are stored in a short queue awaiting retransmission. If the queue of a target node is full, the arriving packet is rejected and lost. Packet congestion, therefore, degrades the performance of the network.
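The limited queue can be modelled as a fixed-capacity buffer that rejects arrivals when full. A sketch, assuming a per-node drop counter (the class name and capacity value are illustrative):

```python
class NodeQueue:
    """Fixed-capacity packet buffer: arrivals to a full queue are dropped."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.packets = []
        self.dropped = 0

    def arrive(self, packet):
        if len(self.packets) >= self.capacity:
            self.dropped += 1          # congestion loss
            return False
        self.packets.append(packet)
        return True

q = NodeQueue(capacity=2)
print([q.arrive(p) for p in ("p1", "p2", "p3")])  # [True, True, False]
print(q.dropped)                                   # 1
```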
The assignment is an investigation of a simple sensor strategy based on swarm intelligence. Naively, we might propose a simple 'greedy' network algorithm: nodes transmit packets to neighbouring nodes with lower hop distance, and a random winning node is chosen in case of a tie. However, there is great potential for congestion, since packets do not base their movement on other packets; each simply does what is locally best for itself. Swarming, which is a type of collective intelligence, may help.
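The greedy rule can be sketched in a few lines, assuming the hop distances are already known (the function name and network are illustrative, not SwarmNet's API):

```python
import random

def greedy_target(node, neighbours, hop):
    """Greedy rule: among neighbours with a lower hop distance than this
    node, forward to one with the smallest hop distance; ties are broken
    uniformly at random."""
    closer = [n for n in neighbours[node] if hop[n] < hop[node]]
    best = min(hop[n] for n in closer)
    return random.choice([n for n in closer if hop[n] == best])

# Node 3 is two hops out; nodes 1 and 2 are both one hop from the sink.
hop = {0: 0, 1: 1, 2: 1, 3: 2}
net = {3: [1, 2]}
print(greedy_target(3, net, hop))   # 1 or 2, chosen at random
```

Note that the choice ignores queue lengths entirely, which is exactly why many simultaneous streams funnel into the same low-hop nodes and collide.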
At a simple level, it has been suggested that swarming animals (for example bees) follow two opposing rules: move towards the group (attraction), but move away when it becomes too crowded (repulsion). Translated to packets, the attraction of a target node can be written as

    A(q) = q(2a - q)

where a is a constant and q is the queue length on the target node, so that queues of length 0 and length 2a have zero attraction, and a queue of length a has the maximum attraction.
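The attraction is an inverted parabola in q; a one-line sketch, assuming the functional form q(2a - q), which is consistent with the stated zeros at 0 and 2a and the maximum at a:

```python
def attraction(q, a):
    """Swarm attraction of a target node with queue length q: zero at
    q = 0 and q = 2a, maximal at q = a (an inverted parabola)."""
    return q * (2 * a - q)

print([attraction(q, a=3) for q in range(7)])  # [0, 5, 8, 9, 8, 5, 0]
```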
Packets also need to get home, so we add the greedy part: candidate target nodes are restricted to neighbours with a lower hop distance, and the packet moves to the candidate with the greatest attraction (ties broken at random).
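One way to read the combined rule is the sketch below, assuming the greedy restriction to strictly closer neighbours and the attraction function described above (names and network are illustrative):

```python
import random

def swarm_target(node, neighbours, hop, queue_len, a):
    """Swarm rule sketch: among neighbours strictly closer to the sink,
    pick the one whose queue length gives the greatest attraction
    q * (2a - q); ties are broken at random."""
    closer = [n for n in neighbours[node] if hop[n] < hop[node]]
    if not closer:
        return None                     # no forward progress possible

    def attract(n):
        q = queue_len[n]
        return q * (2 * a - q)

    best = max(attract(n) for n in closer)
    return random.choice([n for n in closer if attract(n) == best])

# Node 2's queue is full (q = 2a), so its attraction is zero.
hop = {0: 0, 1: 1, 2: 1, 3: 2}
queue_len = {1: 3, 2: 6}
net = {3: [1, 2]}
print(swarm_target(3, net, hop, queue_len, a=3))  # 1 (attraction 9 beats 0)
```

Unlike the greedy rule, the packet here steers away from the congested node even though both candidates are equally close to the sink.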
The algorithm is encoded in the assignment source code, SwarmNet, which simulates high traffic in a random sensor network.
Your task is to investigate under what conditions (if any) swarming might offer superior performance compared with the greedy algorithm. The common measures of performance are the packet delivery rate and the end-to-end delay.
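Both measures can be computed from per-packet records; a sketch, assuming the simulator reports how many packets were injected, how many arrived, and each delivered packet's delay (the numbers below are illustrative, not SwarmNet output):

```python
def delivery_rate(sent, delivered):
    """Fraction of injected packets that reached the sink."""
    return delivered / sent

def mean_end_to_end_delay(delays):
    """Average time from injection to delivery, over delivered packets only."""
    return sum(delays) / len(delays)

print(delivery_rate(sent=200, delivered=150))   # 0.75
print(mean_end_to_end_delay([4, 6, 5, 9]))      # 6.0
```

When comparing algorithms, report both measures together: a scheme can raise the delivery rate while lengthening routes, and hence the delay.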
Write a short comparative analysis of the performance of the two algorithms across their possible parameterisations, and then present it to your peers and supervisors.