For my game_agent, I tested 3 heuristics listed below. Each heuristic assumes I am Player 1.
Each agent was run 5 times in a row, and the results were averaged over the 5 runs.
Highest average score: 77.29%, from the Aggressive agent.
I also tried a few other ideas. One was weighting the score of each move by the depth at which it was found, so a move found at depth 3 would have its score multiplied by 3 (score = score * depth). This didn't hurt performance, but it didn't help either. I tried variations on this: adding the depth instead of multiplying by it (score = score + depth), and weighting the score exponentially with depth. I also tried randomly selecting one of the above heuristics on each call, which gave a score of 59.52%.
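The depth-weighting variants described above can be sketched as small scoring functions. This is a hypothetical reconstruction, not the original code: the function names, and the base of 2 in the exponential variant, are my own illustrative choices.

```python
# Hypothetical sketch of the depth-weighting variants described above.
# `base_score` is the heuristic value of a move; `depth` is the search
# depth at which that value was found.

def weight_multiplicative(base_score, depth):
    # score = score * depth
    return base_score * depth

def weight_additive(base_score, depth):
    # score = score + depth
    return base_score + depth

def weight_exponential(base_score, depth, base=2):
    # Exponential depth weighting; the base of 2 is an assumption.
    return base_score * (base ** depth)
```

For example, a move scored 10 at depth 3 becomes 30 under the multiplicative scheme, 13 under the additive one, and 80 under the exponential one with base 2.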
As you can see, the Aggressive agent won by a hair, but all 3 were very close. The highest single-match score came from our custom close_to_you heuristic, as did our lowest single-match score.
For choosing the final agent to play, I concluded that the Aggressive agent was the best because:
Beating the ID_Improved score proved very difficult; it took many tries and many heuristics to get an agent that came close, and finally one that beat it by a hair. All my heuristics are 'lightweight': they try to maximize the depth of the search rather than compute the perfect evaluation at each level.
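To illustrate what a 'lightweight' heuristic means here, a minimal sketch follows. The report does not show the actual Aggressive evaluation, so everything below is an assumption: a common aggressive formulation penalizes the opponent's mobility more heavily than it rewards our own, and the weight of 2.0 is an illustrative guess. It uses only two counts that any move generator already produces, so it adds almost no per-node cost and leaves the time budget for deeper search.

```python
def aggressive_score(own_moves, opp_moves, opp_weight=2.0):
    """Score a position from Player 1's perspective.

    Hypothetical sketch of a lightweight 'aggressive' heuristic:
    `own_moves` and `opp_moves` are the legal-move counts for each
    player. Weighting the opponent's mobility more heavily
    (opp_weight > 1) makes the agent prefer moves that restrict the
    opponent. The weight value is an assumption, not taken from the
    original agent.
    """
    return float(own_moves) - opp_weight * float(opp_moves)
```

Because the evaluation is just a subtraction of two counts, it is cheap enough to call at every leaf, which is exactly the trade-off described above: a coarser score per node in exchange for more plies of search.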