See the article by Michael Marshall on this subject:
Virtual robots have “evolved” to cooperate – but only with close relatives. The finding bolsters a long-standing “rule of thumb” about how cooperation has evolved, and could help resolve a bitter row among biologists.
They created simple robots and simulated their behaviour over 500 generations. Each robot had 33 'genes'; robots with more genes in common were more closely related, which defined a 'closeness' function between them. Robots earned points by collecting 'food', and could either keep their points or share them with another robot.
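A minimal sketch of how such a 'closeness' function could work, assuming relatedness is simply the fraction of the 33 gene positions two robots share (the gene representation here is my assumption, not the study's actual encoding):

```python
def relatedness(genes_a, genes_b):
    """Fraction of gene positions where the two robots match: 0.0 (no
    genetic relationship) to 1.0 (identical genetics)."""
    assert len(genes_a) == len(genes_b) == 33
    shared = sum(1 for a, b in zip(genes_a, genes_b) if a == b)
    return shared / len(genes_a)

clone = [1] * 33
stranger = [0] * 33
print(relatedness(clone, clone))     # identical genomes -> 1.0
print(relatedness(clone, stranger))  # no genes in common -> 0.0
```

Any measure that maps shared genes onto the [0, 1] interval would do; a simple per-position match is just the easiest to picture.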
They ran the experiment 200 times, varying the relatedness parameters and the level of reward for sharing points with another robot. At each 'generation' they kept the most successful (i.e. the highest-scoring) robots. Here are their conclusions:
The team found that, over several generations, a pattern emerged: robots became more likely to share points with another if the two robots were highly related and if the benefit associated with a cost was high. In detail, a robot would share its points only if the number of points received by the second robot, multiplied by a fraction indicating the relatedness of the two robots (with “0” indicating no genetic relationship and “1” indicating identical genetics), was greater than the number of points donated by the first robot. As a result robots with few or no genes in common were unlikely to share points, while those with many genes in common were more likely to share.
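The sharing condition described above is Hamilton's rule: cooperate only when the benefit to the recipient, weighted by relatedness, exceeds the donor's cost. A small sketch of that decision (function and parameter names are illustrative, not from the study):

```python
def should_share(benefit_to_recipient, cost_to_donor, relatedness):
    """Hamilton's rule as it emerged in the simulation: the first robot
    shares only if r * B (recipient's points scaled by relatedness)
    exceeds C (the points the donor gives up)."""
    return relatedness * benefit_to_recipient > cost_to_donor

# Donating 5 points that are worth 20 points to the recipient:
print(should_share(20, 5, 0.5))  # 0.5 * 20 = 10 > 5 -> True
print(should_share(20, 5, 0.1))  # 0.1 * 20 = 2 < 5 -> False
```

This makes the observed pattern concrete: between near-strangers (r close to 0) almost no benefit is large enough to justify sharing, while between near-clones (r close to 1) even modest benefits clear the bar.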
The result of the experiment depends heavily on the reward mechanism implemented (the cost/benefit of sharing points). If the tested reward mechanism didn't build the 'closeness' between robots into the sharing payoff from the start (or if that bias was balanced out across the 200 runs), then the conclusions go a long way towards explaining how we evolved to cooperate. Aren't you curious what they would discover if they let it run for 500 or 5,000 more generations?