Evolving Virtual Ecology: Machine learning thesis project

Tools: Java, Eclipse, Processing

I developed an artificial life system populated by virtual creatures that evolve, die, feed, reproduce and learn audio commands that a user can give through a microphone. The project started as a master's degree thesis and grew into an article presented at the ArtsIT international conference. All of the algorithms were written by me from scratch.

Inspiration

Inspired by cellular automata and other living systems within the field of artificial life, I decided to create my own model of a living system. Projects such as Conway's Game of Life, Craig Reynolds' flocking boids, Karl Sims' evolved virtual creatures and Larry Yaeger's Polyworld inspired me a lot.

What is it?

It is an artificial life model: essentially a 2D world inhabited by various artificial creatures. Each creature has a brain of its own and acts according to its genes, brain impulses and primal needs. What makes these creatures interesting is that a user can interact with them by playing sounds or speaking through the microphone. The creatures learn from these interactions, and ultimately the user can establish control over them.

So how does the interaction work?

A user produces a sound into the mic (for example, plays a note on a piano) and the creatures react by moving. When the application starts, their neural network is still untrained, so they react in random directions. Let's say that each time I play the note A, I want the creatures to move to the right. For them to learn to associate the direction right with the note A, I click and drag the mouse to the right every time after I play the note A. After I repeat this three or four times, they should have learned the association, and soon I can use my piano as if it were a game controller, as the sketch below illustrates.
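Below is a rough Processing-style sketch of that interaction loop, assuming the Minim library for audio input. The direction constants are mine, and the commented-out net is a hypothetical stand-in for the project's own network code.

```java
import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioInput in;
FFT fft;
float[] lastSpectrum;

final int UP_DIR = 0, RIGHT_DIR = 1, DOWN_DIR = 2, LEFT_DIR = 3;

void setup() {
  size(800, 600);
  minim = new Minim(this);
  in = minim.getLineIn(Minim.MONO, 1024);      // 1024 samples -> 513 spectrum bins
  fft = new FFT(in.bufferSize(), in.sampleRate());
  lastSpectrum = new float[fft.specSize()];    // 513 values
}

void draw() {
  // Analyse the live microphone signal every frame and remember it,
  // so a later mouse drag can label the sound that was just heard.
  fft.forward(in.mix);
  for (int i = 0; i < fft.specSize(); i++) lastSpectrum[i] = fft.getBand(i);
  // net.predictDirection(lastSpectrum) would move the creatures here.
}

void mouseDragged() {
  // The dominant drag axis becomes the training label for lastSpectrum.
  int dx = mouseX - pmouseX, dy = mouseY - pmouseY;
  int label = abs(dx) > abs(dy) ? (dx > 0 ? RIGHT_DIR : LEFT_DIR)
                                : (dy > 0 ? DOWN_DIR : UP_DIR);
  // net.train(lastSpectrum, label);            // one backpropagation step
}
```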


How does it work?

Deep Q-learning is famous for teaching computers to complete classic arcade games and even play Doom. I decided to use it partly because I wanted to learn it, and partly because I wanted the creatures to take previous actions into account when deciding what to do next.

Deep Q-network configuration for interpreting 513 audio spectrum values.

My deep Q-network analyses 513 spectrum values from the microphone in real time and computes its output immediately. Backpropagation runs only when the user drags the mouse in a direction: that direction becomes associated with the set of audio spectra similar to the ones heard just before the drag. I used the tanh activation function for the output and mean squared error for the backpropagation of each layer.
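As an illustration, here is a minimal plain-Java sketch of such a network: one hidden layer, tanh activations and a mean-squared-error backpropagation step. The hidden layer size, learning rate and class name are my own placeholders, not the exact thesis configuration, and biases are omitted for brevity.

```java
import java.util.Random;

public class SpectrumNet {
    static final int IN = 513, HIDDEN = 64, OUT = 4;  // 4 movement directions
    static final double LR = 0.01;                    // illustrative learning rate

    final double[][] w1 = new double[IN][HIDDEN];
    final double[][] w2 = new double[HIDDEN][OUT];
    final double[] hidden = new double[HIDDEN];
    final double[] out = new double[OUT];

    SpectrumNet(long seed) {
        Random r = new Random(seed);
        for (double[] row : w1) for (int j = 0; j < HIDDEN; j++) row[j] = r.nextGaussian() * 0.01;
        for (double[] row : w2) for (int k = 0; k < OUT; k++) row[k] = r.nextGaussian() * 0.01;
    }

    // Forward pass: runs every frame on the live spectrum.
    double[] forward(double[] spectrum) {
        for (int j = 0; j < HIDDEN; j++) {
            double sum = 0;
            for (int i = 0; i < IN; i++) sum += spectrum[i] * w1[i][j];
            hidden[j] = Math.tanh(sum);
        }
        for (int k = 0; k < OUT; k++) {
            double sum = 0;
            for (int j = 0; j < HIDDEN; j++) sum += hidden[j] * w2[j][k];
            out[k] = Math.tanh(sum);
        }
        return out;
    }

    // One MSE backpropagation step; called only when the user drags the mouse.
    void train(double[] spectrum, double[] target) {
        forward(spectrum);
        double[] dOut = new double[OUT];
        for (int k = 0; k < OUT; k++)
            dOut[k] = (out[k] - target[k]) * (1 - out[k] * out[k]);   // MSE grad * tanh'
        double[] dHidden = new double[HIDDEN];
        for (int j = 0; j < HIDDEN; j++) {
            double sum = 0;
            for (int k = 0; k < OUT; k++) sum += dOut[k] * w2[j][k];  // uses w2 before its update
            dHidden[j] = sum * (1 - hidden[j] * hidden[j]);
        }
        for (int j = 0; j < HIDDEN; j++)
            for (int k = 0; k < OUT; k++) w2[j][k] -= LR * dOut[k] * hidden[j];
        for (int i = 0; i < IN; i++)
            for (int j = 0; j < HIDDEN; j++) w1[i][j] -= LR * dHidden[j] * spectrum[i];
    }
}
```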

An additional deep neural network was used to develop the interaction between the agents and their environment. The food in the world emits audio signals, so the creatures can locate it using their audio sensors. The data from the audio sensors is fed forward through this network, and the output determines whether an agent will eat the food, ignore it or avoid it (see the sketch below).
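The decision step could look something like this sketch; the feedForward() stub stands in for the project's own network, and the enum simply mirrors the three outcomes described above.

```java
public class FoodBrain {
    enum FoodAction { EAT, IGNORE, AVOID }

    // Stub standing in for the real feed-forward network: one output per action.
    double[] feedForward(double[] audioSensors) {
        return new double[FoodAction.values().length];
    }

    // The strongest output neuron picks the behaviour towards the food.
    FoodAction decideOnFood(double[] audioSensors) {
        double[] out = feedForward(audioSensors);
        int best = 0;
        for (int k = 1; k < out.length; k++)
            if (out[k] > out[best]) best = k;
        return FoodAction.values()[best];
    }
}
```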

Problems and solutions

One of the key problems throughout development was how the agents could perceive sounds from their own environment. The solution was an artificial ear, represented as a grid of sensors around each agent.

Because the sounds are transmitted as pings, each sensor needed a temporary memory so that it would remember which ping it received a moment ago. Whenever a sound is produced, a sensor fires and then holds the value for several frames, unless it is overwritten by a different value. A sketch of one such sensor follows.
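A single sensor with such a memory could be as simple as the sketch below; the hold time and field names are illustrative, and the value 1 for "no frequency" follows the caption beneath.

```java
public class EarSensor {
    static final float NO_FREQUENCY = 1f;  // grid value when nothing is heard
    static final int HOLD_FRAMES = 30;     // illustrative hold time

    float memorisedFrequency = NO_FREQUENCY;
    int framesLeft = 0;

    // Called when a ping reaches this sensor: overwrite the memory.
    void detect(float frequency) {
        memorisedFrequency = frequency;
        framesLeft = HOLD_FRAMES;
    }

    // Called once per frame: count down and forget when the time runs out.
    void update() {
        if (framesLeft > 0 && --framesLeft == 0) {
            memorisedFrequency = NO_FREQUENCY;
        }
    }
}
```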

Artificial ear system. Each of the three images is a graphical representation of the grid of sensors. Each agent in the image is producing sounds. The selected agent has a grid of sensors around it; the numbers on the grid represent detected and memorised audio frequencies, and 1 means no frequency was detected.

Some of the neural networks written for this project are deep Q-networks, and therefore rely on n-dimensional arrays (tensors). It is commonly assumed that such arrays require a pre-written library, as they are not part of the Java language. Plenty of libraries provide them, but none were used in this work: all of the calculations were written by me, and the n-dimensional arrays are of my own design as well.
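For illustration, here is one common way to hand-roll such an array in Java: a flat buffer indexed through row-major strides. This is a generic sketch of the idea, not the implementation from the project.

```java
public class NDArray {
    final double[] data;   // one flat buffer holds every element
    final int[] shape;
    final int[] strides;   // row-major strides map indices to the buffer

    NDArray(int... shape) {
        this.shape = shape.clone();
        this.strides = new int[shape.length];
        int size = 1;
        for (int d = shape.length - 1; d >= 0; d--) {
            strides[d] = size;
            size *= shape[d];
        }
        this.data = new double[size];
    }

    private int offset(int... index) {
        int off = 0;
        for (int d = 0; d < index.length; d++) off += index[d] * strides[d];
        return off;
    }

    double get(int... index)           { return data[offset(index)]; }
    void   set(double v, int... index) { data[offset(index)] = v; }
}
```

For example, new NDArray(4, 3, 2) allocates a 4×3×2 tensor backed by a single flat array of 24 doubles.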