Machine learning experiments

Tools: Processing (Java)

Self-set challenges involving the implementation of neural networks and evolutionary algorithms, with the goals of learning, having fun, and preparing for a bigger project (an evolving virtual ecology).

What was done?

While preparing for my final thesis, I created several systems using neural networks and evolutionary algorithms. "Jumping over ball" is a system that teaches agents to jump over obstacles, as in a platformer video game; "Bug world" is a virtual ecosystem populated by creatures that feed, die, reproduce, and evolve through generations of mutation; and "Deep colour guess" is a deep neural network that learns to name colours.

Jumping over ball

In this challenge, a deep neural network had to learn, through evolutionary computation, to navigate past obstacles in a 2D world. To increase the difficulty of the task, the positions of the circles were randomised on each run, so the network could never rely on timing to learn the correct actions. Instead, it had to rely on sensor inputs, much like a self-driving car.

As a result, the agents proved able to figure out the correct jumping times and movement actions after around 20 generations.
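The evolutionary loop behind this can be sketched as follows. This is a minimal illustration, not the original Processing code: the sensor set, population size, fitness rule, and all names here are hypothetical stand-ins for the idea of evolving network weights by selection and mutation.

```java
import java.util.Arrays;
import java.util.Random;

// Minimal neuroevolution sketch: a population of single-layer "brains"
// maps sensor readings to a jump decision, and the best brains are
// cloned with small mutations each generation. All sizes and the toy
// fitness rule are illustrative, not the original project's values.
public class NeuroEvolution {
    static final int SENSORS = 3;   // e.g. obstacle distance, height, speed
    static final int POP = 20;      // population size
    static final Random rng = new Random(42);

    // One "brain": a weight vector; output > 0 means "jump".
    static double act(double[] weights, double[] sensors) {
        double sum = 0;
        for (int i = 0; i < SENSORS; i++) sum += weights[i] * sensors[i];
        return sum;
    }

    // Toy fitness: reward jumping exactly when the obstacle is close.
    static double fitness(double[] w) {
        double score = 0;
        for (double dist = 0.1; dist <= 1.0; dist += 0.1) {
            double[] sensors = {dist, 0.5, 0.3};
            boolean jumped = act(w, sensors) > 0;
            boolean shouldJump = dist < 0.4;   // "close enough" threshold
            if (jumped == shouldJump) score++;
        }
        return score;   // out of 10 test distances
    }

    // Clone a parent and nudge one weight with Gaussian noise.
    static double[] mutate(double[] parent) {
        double[] child = parent.clone();
        child[rng.nextInt(SENSORS)] += rng.nextGaussian() * 0.5;
        return child;
    }

    public static double evolve(int generations) {
        double[][] pop = new double[POP][SENSORS];
        for (double[] w : pop)
            for (int i = 0; i < SENSORS; i++) w[i] = rng.nextGaussian();
        double best = 0;
        for (int g = 0; g < generations; g++) {
            // Sort by fitness, best first.
            Arrays.sort(pop, (a, b) -> Double.compare(fitness(b), fitness(a)));
            best = fitness(pop[0]);
            // Replace the worst half with mutated copies of the best half.
            for (int i = POP / 2; i < POP; i++) pop[i] = mutate(pop[i - POP / 2]);
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println("best fitness after 20 generations: " + evolve(20));
    }
}
```

Because the fittest individuals are kept unmodified (elitism), the best score can only improve from one generation to the next.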

Bug world

Bug world is a simple ecosystem of cannibalistic creatures: bugs. The bugs are represented as black dots that begin their life cycle at a very small size. They wander aimlessly, driven by the need to feed and by fear.
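A single bug's decision rule can be sketched as two competing drives. This is a hypothetical reconstruction, not the project's code: the class name, fear radius, and growth rule are all illustrative stand-ins.

```java
// Minimal sketch of one bug's behaviour: fear of a larger bug and the
// drive to feed compete to pick a movement direction each tick.
// All thresholds and names here are illustrative assumptions.
public class Bug {
    double x, y, size, hunger;

    Bug(double x, double y, double size) {
        this.x = x; this.y = y; this.size = size; this.hunger = 0;
    }

    // One tick: flee a nearby larger bug if there is one, else head to food.
    void step(double foodX, double foodY, Bug nearest) {
        hunger += 0.01;                                // hunger grows every tick
        boolean afraid = nearest != null && nearest.size > size
                && dist(nearest.x, nearest.y) < 50;    // fear radius
        double dx, dy;
        if (afraid) {                                  // fear wins: move away
            dx = x - nearest.x; dy = y - nearest.y;
        } else {                                       // feeding drive: move toward food
            dx = foodX - x; dy = foodY - y;
        }
        double len = Math.hypot(dx, dy);
        if (len > 0) { x += dx / len; y += dy / len; } // unit step
        if (dist(foodX, foodY) < 2) {                  // close enough to eat
            size += 0.1;                               // eating makes the bug grow
            hunger = 0;
        }
    }

    double dist(double px, double py) { return Math.hypot(px - x, py - y); }
}
```

In the real system such parameters would be subject to mutation, so that evolution, rather than hand-tuning, shapes how fearful or greedy each lineage becomes.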

This model is very far from realistic biology, but realism was not my goal. With this project, I aimed to create a virtual ecosystem with fictional rules: I wanted to experiment with the capabilities of evolutionary algorithms and test how evolution would influence the behaviour of very simple creatures. The inspiration came from the book Biological Bits by Alan Dorin.

Naturally, the bugs' behaviour becomes more refined over successive generations.

Deep colour guess

The code draws seven rectangles of different colours, and the deep neural network writes a colour name underneath each rectangle. The table shows the network's outputs at different numbers of training cycles, ranging from 0 (no backpropagation) to 10,000 backpropagation passes.
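The core idea can be sketched with a much smaller network: a single softmax layer trained by backpropagation to map an RGB triple to a colour name. The original project used a deeper network and more colours; the three classes, training data, and hyperparameters below are illustrative assumptions.

```java
// Toy version of the colour-naming network: a single softmax layer
// trained by backpropagation on a handful of labelled RGB triples.
// The class list, data, and learning rate are illustrative, not the
// original project's configuration.
public class ColourGuess {
    static final String[] NAMES = {"red", "green", "blue"};
    static final double[][] DATA = {
        {1.0, 0.0, 0.0}, {0.9, 0.1, 0.1},   // reds
        {0.0, 1.0, 0.0}, {0.1, 0.8, 0.2},   // greens
        {0.0, 0.0, 1.0}, {0.2, 0.1, 0.9},   // blues
    };
    static final int[] LABELS = {0, 0, 1, 1, 2, 2};

    double[][] w = new double[3][3];   // weights[class][channel]
    double[] b = new double[3];        // per-class bias

    // Softmax probabilities for each colour name.
    double[] forward(double[] rgb) {
        double[] z = new double[3];
        double max = Double.NEGATIVE_INFINITY;
        for (int c = 0; c < 3; c++) {
            z[c] = b[c];
            for (int i = 0; i < 3; i++) z[c] += w[c][i] * rgb[i];
            max = Math.max(max, z[c]);
        }
        double sum = 0;
        for (int c = 0; c < 3; c++) { z[c] = Math.exp(z[c] - max); sum += z[c]; }
        for (int c = 0; c < 3; c++) z[c] /= sum;
        return z;
    }

    // One backpropagation pass over the whole training set.
    void trainEpoch(double lr) {
        for (int n = 0; n < DATA.length; n++) {
            double[] p = forward(DATA[n]);
            for (int c = 0; c < 3; c++) {
                // Gradient of cross-entropy loss w.r.t. the pre-softmax output.
                double grad = p[c] - (c == LABELS[n] ? 1 : 0);
                for (int i = 0; i < 3; i++) w[c][i] -= lr * grad * DATA[n][i];
                b[c] -= lr * grad;
            }
        }
    }

    // The name the network would write under a rectangle of this colour.
    String name(double[] rgb) {
        double[] p = forward(rgb);
        int best = 0;
        for (int c = 1; c < 3; c++) if (p[c] > p[best]) best = c;
        return NAMES[best];
    }

    public static void main(String[] args) {
        ColourGuess net = new ColourGuess();
        System.out.println("before training: " + net.name(new double[]{1, 0, 0}));
        for (int epoch = 0; epoch < 1000; epoch++) net.trainEpoch(0.5);
        System.out.println("after training:  " + net.name(new double[]{1, 0, 0}));
    }
}
```

With zero training cycles the weights are untrained and the names are essentially arbitrary; as the number of backpropagation passes grows, the guesses converge on the correct colour names, which is the effect the table illustrates.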