Genetic Algorithms: Crossover

So far in this series on genetic algorithms, I've talked about encoding individuals and selection methods. The next task in a genetic algorithm is to mate individuals using an operator called crossover.

Crossover produces two (or more) new individuals (children) from two (or more) existing individuals (parents). The idea behind crossover is twofold: Sharing of worthwhile traits allows for individuals to be created which inherit the best traits of their parents (optimization), and new combinations of traits may reveal as-yet-unexplored possibilities (search).



In this post I'll describe a few general crossover techniques. These approaches are not order-preserving, meaning that they do not ensure that a viable solution is created when the solution involves the ordered traversal of genes (e.g., traveling from city to city for the TSP). If you want a good 10,000-foot overview of order-preserving approaches (plus some nice graphic illustrations), I suggest starting with the Wikipedia page on crossover.

Single-point Crossover


Single-point crossover creates two new individuals from two existing individuals - chosen via selection - by picking a single point in the chromosome string (usually at random) and swapping the contents of the two parents beyond the crossover point (marked with a vertical pipe below).
aaaaa | aaaaaaaaaaa    Parent A
bbbbb | bbbbbbbbbbb    Parent B

aaaaa | bbbbbbbbbbb    Child AB
bbbbb | aaaaaaaaaaa    Child BA
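The diagram above can be sketched in a few lines of Python. The post doesn't include code, so the function name and the list-based chromosome representation here are my own choices:

```python
import random

def single_point_crossover(parent_a, parent_b):
    """Swap the tails of two equal-length parents at a random point."""
    # Pick a point strictly inside the chromosome so neither child
    # is an exact copy of a parent.
    point = random.randint(1, len(parent_a) - 1)
    child_ab = parent_a[:point] + parent_b[point:]
    child_ba = parent_b[:point] + parent_a[point:]
    return child_ab, child_ba

parent_a = list("aaaaaaaaaaaaaaaa")
parent_b = list("bbbbbbbbbbbbbbbb")
child_ab, child_ba = single_point_crossover(parent_a, parent_b)
print("".join(child_ab))
print("".join(child_ba))
```

Each child keeps the head of one parent and the tail of the other, matching the Child AB / Child BA picture above.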

Multi-point Crossover


Multi-point crossover is exactly like single-point, except that more than one crossover point is chosen. This approach allows for more than two offspring to be created from two parents:
aaaaa | aaaaaa | aaaaa    Parent A
bbbbb | bbbbbb | bbbbb    Parent B

aaaaa | aaaaaa | bbbbb    Child AAB
aaaaa | bbbbbb | aaaaa    Child ABA
aaaaa | bbbbbb | bbbbb    Child ABB
bbbbb | aaaaaa | aaaaa    Child BAA
bbbbb | aaaaaa | bbbbb    Child BAB
bbbbb | bbbbbb | aaaaa    Child BBA
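A common way to implement the multi-point variant is to cut both parents at the same random points and alternate segments between the two children. This is a minimal sketch with names of my own choosing; it produces one complementary pair per call rather than all 2^(n+1) - 2 combinations at once:

```python
import random

def multi_point_crossover(parent_a, parent_b, n_points=2):
    """Cut both parents at n_points random positions and alternate segments."""
    points = sorted(random.sample(range(1, len(parent_a)), n_points))
    bounds = [0] + points + [len(parent_a)]
    child_ab, child_ba = [], []
    for i in range(len(bounds) - 1):
        seg_a = parent_a[bounds[i]:bounds[i + 1]]
        seg_b = parent_b[bounds[i]:bounds[i + 1]]
        if i % 2 == 0:           # even segments stay put
            child_ab += seg_a
            child_ba += seg_b
        else:                    # odd segments are swapped
            child_ab += seg_b
            child_ba += seg_a
    return child_ab, child_ba

ab, ba = multi_point_crossover(list("aaaaaaaaaaaaaaaa"),
                               list("bbbbbbbbbbbbbbbb"))
print("".join(ab))
print("".join(ba))
```

With n_points=2 this yields children of the form ABA and BAB from the table above; to generate the other combinations, you can choose independently for each segment which parent to draw from.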

Uniform Crossover


Uniform crossover gives every gene in a pair of selected individuals a fixed probability (commonly 0.5) of being swapped in the children. With k genes, this approach can produce 2^k - 2 distinct new children from 2 parents: every gene-by-gene combination except exact copies of the two parents.
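A minimal sketch of uniform crossover, again with naming of my own; each gene position is swapped between the two children independently with probability swap_prob:

```python
import random

def uniform_crossover(parent_a, parent_b, swap_prob=0.5):
    """Swap each gene between the two children with probability swap_prob."""
    child_a, child_b = list(parent_a), list(parent_b)
    for i in range(len(child_a)):
        if random.random() < swap_prob:
            child_a[i], child_b[i] = child_b[i], child_a[i]
    return child_a, child_b

ab, ba = uniform_crossover(list("aaaaaaaaaaaaaaaa"),
                           list("bbbbbbbbbbbbbbbb"))
print("".join(ab))
print("".join(ba))
```

Note that at every position the pair of children still holds exactly one gene from each parent, so no genetic material is lost or duplicated; only its distribution between the two children changes.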

There are many other crossover techniques I haven't discussed, and much of the contribution in genetic algorithms research comes from new crossover operators designed for particular individual encoding schemes. There are as many possibilities as there are potential individual encodings.

Please feel free to leave me comments or reach out on Twitter or LinkedIn.
