Programming Collective Intelligence chapter 11 notes

Published: 2014-06-07
Tagged: python readings guide

Programming Collective Intelligence Notes - Chapter 11

The final chapter of Programming Collective Intelligence introduces the reader to genetic programming. Genetic programming bears a slight resemblance to what we did in chapter 5 with the genetic optimization algorithm: both approaches make extensive use of mutation and crossover to converge on an optimal solution. However, the code from chapter 5 searched for an optimal set of parameters to feed a function, whereas this chapter's code actually writes programs to solve problems.

You got that right - this chapter introduces a tool that writes code for us! Mandatory "robots will take our jobs" joke here. Our code in this chapter will be fairly simple, but it will illustrate just how far one can get with an hour of spare time. The code will only cover a handful of primitives: if, greater than, addition, subtraction, and multiplication. Using this very simple set of functions, our code will evolve solutions to two problems. The first - a mathematical function. The second - a very simple game.

Some Theory

How does this magic work? As with many cool things in computer science, it's about trees and recursion. The programs our code generates are made out of nodes, and there are three kinds of them: a general node (the Node class), a parameter node (ParamNode) and a constant node (ConstNode). Starting with the last, a ConstNode simply holds a value that is to remain constant; evaluating it returns that value. A ParamNode passes a variable input along the tree: evaluating it returns the input parameter at the index the node points to. Finally, we have the jack-of-all-trades Node, which holds a function along with references to child nodes of any type. Evaluating a Node recursively evaluates all of its children - ParamNodes, ConstNodes, and any child Nodes all the way down - and then applies its function to the collected results.
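
To make that concrete, here's a minimal sketch of the three node types. The constructor signatures are my own guesses for illustration - the chapter's classes wrap functions with some extra metadata - but the evaluation logic is the same:

class ConstNode:
    def __init__(self, value):
        self.value = value

    def evaluate(self, params):
        # A constant ignores the inputs entirely.
        return self.value

class ParamNode:
    def __init__(self, index):
        self.index = index

    def evaluate(self, params):
        # Hand back whichever input parameter this node points at.
        return params[self.index]

class Node:
    def __init__(self, func, children):
        self.func = func          # a plain function, e.g. lambda a, b: a + b
        self.children = children  # any mix of Node/ParamNode/ConstNode

    def evaluate(self, params):
        # Recursively evaluate every child, then apply this node's
        # function to the collected results.
        results = [child.evaluate(params) for child in self.children]
        return self.func(*results)

For instance, Node(lambda a, b: a + b, [ParamNode(0), ConstNode(5)]).evaluate([2]) walks down to the leaves and comes back up with 7.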

Imagine a tree composed of the above nodes. Feeding it some parameters and asking it to evaluate itself launches a cascade of nodes calling other nodes for answers. These answers travel back up the tree until we finally arrive at a single value - the answer. Fascinating, isn't it?

The *genetic* in genetic programming comes in when we generate a whole population of trees and evaluate them. We then choose the ones with the lowest error and promote them to the next generation. We sprinkle in a bit of mutation and crossover, so that we end up with a new population of trees that are all related in some way to the top winners of the previous generation. Then we evaluate the trees again, pick out the best ones, "breed" them, and on this process goes until we either reach the correct answer or at least get very close to it.
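
In pseudocode-ish Python, the whole loop fits on a napkin. This is a rough skeleton under my own assumptions (the population size, selecting from the top 20, and the helper signatures are all made up), with mutate and crossover being the helpers described next:

import random

def evolve_sketch(make_random_tree, score, mutate, crossover,
                  pop_size=500, max_gens=50):
    # Rank by error, keep the winners, refill the population with
    # mutated crossovers of the winners - repeat until perfect or bored.
    population = [make_random_tree() for _ in range(pop_size)]
    for _ in range(max_gens):
        ranked = sorted(population, key=score)
        if score(ranked[0]) == 0:
            break  # a perfect tree - stop early
        # Elitism: the top two trees survive untouched.
        population = ranked[:2]
        while len(population) < pop_size:
            parent1 = random.choice(ranked[:20])
            parent2 = random.choice(ranked[:20])
            population.append(mutate(crossover(parent1, parent2)))
    return sorted(population, key=score)[0]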

As a quick reminder about mutation and crossover: mutation replaces a random part of a tree with something new, whereas crossover takes two trees and swaps a part of the first tree with a part of the other. It's easy to remember which is which, because mutation creates random change in a single tree (think Hulk) and is fairly rare, while crossover "breeds" two trees together, like passing on genes from two parents.
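
Here's a hedged sketch of both operations in terms of the Node classes from above (the book's versions differ in details such as the exact probabilities):

import random
from copy import deepcopy

def mutate(tree, make_random_tree, prob=0.1):
    # Occasionally rip out this subtree and grow a fresh random one
    # in its place - the Hulk moment.
    if random.random() < prob:
        return make_random_tree()
    result = deepcopy(tree)
    if isinstance(result, Node):
        result.children = [mutate(child, make_random_tree, prob)
                           for child in result.children]
    return result

def crossover(tree1, tree2, prob=0.3):
    # Occasionally graft in a subtree from the second parent; otherwise
    # keep descending through the first parent, crossing each child
    # against a random subtree of the second.
    if random.random() < prob:
        return deepcopy(tree2)
    result = deepcopy(tree1)
    if isinstance(result, Node) and isinstance(tree2, Node):
        result.children = [crossover(child, random.choice(tree2.children), prob)
                           for child in result.children]
    return result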

Practice!

Let's step through a few examples of using the code from this chapter to get a more intuitive understanding of what's happening inside. First, we'll use the aptly named example_tree function to get a small tree to play with, evaluate it, and see how it behaves.

import gp
example_tree = gp.example_tree()
example_tree.display()
if
 isgreater
  p0
  3
 add
  p1
  5
 subtract
  p1
  2

example_tree.evaluate([3, 3])
1
example_tree.evaluate([4, 2])
7

The handy display function makes it easy to visualize the tree. Let's see how the first evaluation works its way through it: if parameter 0 is greater than 3, add 5 to parameter 1, or else subtract 2 from parameter 1. Parameter 0 is 3, which is not greater than 3, so we subtract 2 from parameter 1 and get 1. All is well. The second evaluation takes the first branch instead, since parameter 0 is now 4 and therefore greater than 3, and adding 5 to parameter 1 yields 7.

Let's try our hand at building our own tree:

tree = gp.Node(gp.ifw, [
    gp.Node(gp.gtw, [gp.ParamNode(0), gp.ParamNode(1)]),
    gp.Node(gp.addw, [gp.ParamNode(0), gp.ParamNode(1)]),
    gp.Node(gp.subw, [gp.ParamNode(0), gp.ParamNode(1)]),
])
tree.display()
if
 isgreater
  p0
  p1
 add
  p0
  p1
 subtract
  p0
  p1

tree.evaluate([5, 3])
#[Out]# 8
tree.evaluate([3, 5])
#[Out]# -2

This tree is just as simple as the previous one: if parameter 0 is greater than parameter 1, add them, otherwise subtract parameter 1 from parameter 0. Building a tree by hand can mean a confusing number of parentheses, but I really advise having a go at it and making a few simple trees like this, because it greatly improves your understanding of the material.
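
One trick that keeps the parentheses manageable is assembling the tree from named pieces:

condition = gp.Node(gp.gtw, [gp.ParamNode(0), gp.ParamNode(1)])
when_true = gp.Node(gp.addw, [gp.ParamNode(0), gp.ParamNode(1)])
when_false = gp.Node(gp.subw, [gp.ParamNode(0), gp.ParamNode(1)])
tree = gp.Node(gp.ifw, [condition, when_true, when_false])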

Now we can tackle the first problem: a mathematical function. First, we will build a data set that combines both features and results. We will then use this data set to evolve a tree that takes the same parameters and produces the same results as those in the data set. This example illustrates the use of score_function, which calculates the error of a tree when applied to the data set. The lower the score, the closer we are to the correct function.
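
If you're wondering what such a scoring function boils down to, here's a sketch - I'm assuming each row of the data set holds the two inputs followed by the expected result:

def score_function(tree, dataset):
    # Sum of absolute errors over the hidden data set.
    # A score of 0 means the tree reproduces every row perfectly.
    error = 0
    for x, y, expected in dataset:
        error += abs(tree.evaluate([x, y]) - expected)
    return error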

rf = gp.get_rank_function(gp.build_hidden_set())
gp.evolve(2, 500, rf, mutation_rate=0.2, breeding_rate=0.1, pexp=0.7, pnew=0.1)
5876
4222
1200
400
225
225
200
0
score!
add
 multiply
  p0
  p0
 add
  5
  add
   p1
   add
    add
     p0
     p0
    add
     p1
     p0

First, we create a scoring function tailored to our randomly generated set of example data. Then we use the evolve function to do the mutating, crossing-over and selection business for us. We can see that with each generation the score keeps going down until we hit a 0 and score! The displayed tree makes 0 errors on our hidden example data set - we were able to evolve a tree that correctly mimics the hidden mathematical function! For the curious, you can check out the real hidden function on line 121 of the source. You'll quickly notice that the hidden function is much more concise than what our tree is doing. This is because the evolution algorithm throws whatever works at the problem to get as close to correct as possible, with no regard for elegance. This can be fixed by pruning the tree, which unfortunately isn't covered in The Book.
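
Incidentally, if you flatten the evolved tree by hand, all that bloat collapses nicely:

p0*p0 + (5 + (p1 + ((p0 + p0) + (p1 + p0)))) = p0**2 + 3*p0 + 2*p1 + 5

which is presumably the concise form sitting on that line 121.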

Creating SkyNet

The first thing that comes to mind for any sane programmer is to use what we've learned so far to create some game AI. Our "AI" is going to use only the basic functions we've defined so far, so it needs a very simple game, called Grid War. The game is played by two players on a grid board. The players take turns moving up, down, left, or right, and the aim is to capture the opponent by moving onto the square they are standing on. Make one wrong move and you're captured.
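
To get a feel for what the game engine has to do, here's a stripped-down sketch under my own assumptions - the board size, the move limit, and what the trees see as parameters are all guesses, and the book's engine has extra rules I'm skipping:

import random

def grid_game_sketch(players, size=4, max_turns=50):
    # Returns 0 or 1 for the index of the winning player, -1 for a tie.
    moves = [(0, -1), (0, 1), (-1, 0), (1, 0)]  # up, down, left, right
    # Drop both players onto distinct random squares.
    positions = [[random.randint(0, size - 1), random.randint(0, size - 1)]
                 for _ in players]
    while positions[0] == positions[1]:
        positions[1] = [random.randint(0, size - 1),
                        random.randint(0, size - 1)]
    for _ in range(max_turns):
        for i, player in enumerate(players):
            me, other = positions[i], positions[1 - i]
            # The tree sees both positions and its output picks a move.
            choice = player.evaluate(me + other) % 4
            dx, dy = moves[choice]
            # Clamp to the board - ramming a wall wastes the turn.
            me[0] = min(max(me[0] + dx, 0), size - 1)
            me[1] = min(max(me[1] + dy, 0), size - 1)
            if me == other:
                return i  # landing on the opponent captures them
    return -1  # move limit reached: a tie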

Here we'll mainly use three functions: grid_game, tournament, and evolve. We already covered evolve. grid_game is the actual *game engine*, if we can apply this term to something so simple: it manages the players, their moves, and the score. tournament, on the other hand, takes a list of players and pits them against each other in grid_game to see which player comes out on top. Using these functions in conjunction with evolve gives us a way to, well, evolve a more sophisticated tree.
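
And here's roughly what a tournament might look like - the round-robin structure is the gist of it, while the exact point values are my own guess:

import gp

def tournament_sketch(players):
    losses = [0] * len(players)
    for i in range(len(players)):
        for j in range(len(players)):
            if i == j:
                continue
            winner = gp.grid_game([players[i], players[j]])
            if winner == 0:
                losses[j] += 2  # player j lost outright
            elif winner == 1:
                losses[i] += 2  # player i lost outright
            else:
                losses[i] += 1  # a tie costs both players a point
                losses[j] += 1
    # Fewest losses first - this ranking is what evolve feeds on.
    ranked = sorted(zip(losses, players), key=lambda pair: pair[0])
    return [player for _, player in ranked]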

First, let's create two random trees and play them against each other:

p1 = gp.makerandomtree(5)
p2 = gp.makerandomtree(5)
gp.grid_game([p1, p2])
#[Out]# 1

In this case player 2 (p2) is the winner; the call would have returned 0 if player 1 had won and -1 on a tie. Since p1 and p2 are completely random trees, it shouldn't be a surprise if either of the players moved in circles or tried to ram through the walls. Now we have to evolve a proper program - and what better way to test it out than to play a human against it?

winner = gp.evolve(5, 100, gp.tournament, maxgen=50)
gp.grid_game([winner, gp.HumanPlayer()])
#[Out]# 0
gp.grid_game([winner, gp.HumanPlayer()])
#[Out]# 0
gp.grid_game([winner, gp.HumanPlayer()])
#[Out]# 0
gp.grid_game([winner, gp.HumanPlayer()])
#[Out]# 0

Well, I got beat five times in a row. Either I'm really bad at Grid War or the AI got to be pretty decent. I wonder if I could apply this to other games such as chess, go, or even tic-tac-toe? Or better yet - poker! That would get me lots of money really fast - and trouble as well!

Something Ends, Something Begins

That's it - The Book ends. It was quite a journey, but thinking that I've reached an end would be a mistake: now would be the right time to head out and apply all of this knowledge to problems. Kaggle hosts a number of very interesting problems for the machine learning crowd - some are easy and make great practice and portfolio material, whilst others could be considered jobs in their own right (they have high rewards). I also have to mention that Kaggle has a great community.

Employment aside, machine learning is just plain fun. There are so many ways to make cool projects based on this knowledge - a neural-network-controlled RC car, face recognition, figuring out apartment/bicycle/airfare prices based on scraped data, figuring out accident or crime hotspots in the city you live in, and so much more. While the more black-boxy approaches require a bit more study (I'm looking at you, neural networks), other algorithms like k-nearest neighbors, k-means clustering, decision trees, or naive Bayes are simple enough to code up in 30-45 minutes. That's something anyone can do while bored in class, waiting for a train or, hell, even on their daily commute. As clichéd as it is, I gotta say that the sky's the limit.
