Just finished a paper describing some new results on open-ended memetic evolution from the Artificial Culture project. In it I describe in some detail one experiment in which two robots imitate each other's movements. The robots here don't simply imitate the last thing they saw; instead each robot learns and stores every observed movement sequence, and when it's that robot's turn to dance it selects one of its 'learned' dances from memory, at random.
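The learn-everything-then-select-at-random behaviour can be sketched in a few lines. This is a hypothetical illustration, not the project's actual robot code: the `Robot` class, `observe` and `choose_dance` names are all my own invention here.

```python
import random

class Robot:
    """Hypothetical sketch of the behaviour described above."""
    def __init__(self, name):
        self.name = name
        self.memory = []  # every observed movement sequence is stored

    def observe(self, dance):
        # Store a copy of the observed dance; in the real robots the
        # learned copy may differ from what was actually performed.
        self.memory.append(list(dance))

    def choose_dance(self):
        # When it is this robot's turn, pick a learned dance
        # uniformly at random from everything it has ever seen.
        return random.choice(self.memory)

r = Robot("epuck12")
r.observe([90, 90, 90])        # e.g. a triangle-like pattern (three turns)
r.observe([45, 135, 45, 135])  # e.g. a figure-of-8-like pattern (four sides)
print(r.choose_dance() in ([90, 90, 90], [45, 135, 45, 135]))  # prints True
```

The key point is that nothing is forgotten and nothing is preferred: selection is uniform over the whole remembered repertoire.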
Here is a plot of the movements of the two robots for one experiment. The picture was generated by a tool developed by Wenguo Liu that lets us 'play back' the tracking data recorded by the Vicon position-tracking system; the tool changes the colour of each 'dance', which makes it much easier to analyse what's going on during the experiment.
E-puck 9 (on the left) starts by making a three-sided 'triangle' dance, numbered 1 above. E-puck 12 (on the right) then imitates it - badly - as meme 2, a kind of figure-of-8 pattern. Interestingly, this four-sided figure-of-8 pattern then appears to become dominant: first through the initially poor-fidelity imitation (1 → 2), then the high-fidelity imitation of meme 2 by e-puck 9 (2 → 3), then the re-enactment of meme 2 as meme 4. Subsequent copies of the same figure-of-8 meme are reasonably faithful, which reinforces that meme's dominance.
Since the robots select which observed and learned meme to enact at random, there is no 'direction' to the meme evolution here. Memes can get longer or shorter - both in the number of sides to the movement pattern and in the length of those sides - and the resulting patterns arise unpredictably from the robots' imperfect 'embodied' imitation. Thus we appear to have demonstrated open-ended memetic evolution.
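A minimal simulation captures the flavour of this dynamic: two agents alternately enact a randomly chosen remembered dance while the other stores a noisy copy. All names and noise parameters below are assumptions for illustration only; the real noise comes from embodied imitation, not an explicit random number generator.

```python
import random

def noisy_copy(dance, p_change=0.2):
    """Imperfect 'imitation': side lengths jitter, and sides are
    occasionally lost or gained, so memes can shrink or grow."""
    copy = [side + random.gauss(0, 2.0) for side in dance]
    if random.random() < p_change and len(copy) > 2:
        copy.pop(random.randrange(len(copy)))   # sometimes lose a side
    elif random.random() < p_change:
        copy.append(random.uniform(5.0, 20.0))  # sometimes gain a side
    return copy

random.seed(1)
# Agent A starts with one 'triangle' meme (three side lengths); B knows nothing.
memories = {"A": [[10.0, 10.0, 10.0]], "B": []}
dancer, watcher = "A", "B"
for turn in range(20):
    if memories[dancer]:
        dance = random.choice(memories[dancer])  # random selection: no direction
        memories[watcher].append(noisy_copy(dance))
    dancer, watcher = watcher, dancer

# The set of meme lengths that has accumulated after 20 exchanges:
print(sorted({len(d) for d in memories["A"] + memories["B"]}))
```

Because selection is uniform and copying is noisy, there is no fitness gradient: whatever variation in length and shape accumulates is an undirected random walk through the space of movement patterns.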
Here is a low-resolution (sorry) screen-captured movie of the sequence: