Following our previous post about Guerilla, today we will be discussing how we’ve sought to differentiate Guerilla from previous neural-network-based chess engines, particularly Matthew Lai’s Giraffe. Each of the following sections serves as an introduction to one of the areas we focused on changing. Each of these areas will be addressed more extensively in individual, future blog posts.
As I previously mentioned, my brother, Stéphane, and I have been working on a chess engine for several months now. Every summer we undertake side projects, ranging from making our own longboards to writing a competitive Snake-like game as a battleground for simple game-playing algorithms. This summer, as we travelled Europe (Figure 1), we took the opportunity to discuss and lay the groundwork for a chess engine. My brother, a competent chess player, had been looking to build one for several years. I, on the other hand, am pretty terrible at chess and was more excited about the challenge; maybe I’d even learn a little more about the intricacies of chess along the way.
Although we had been floating the idea of making a chess engine for years, it wasn’t until recently that we felt we had the tools at our disposal to tackle the problem at a reasonable level. We had both gotten more comfortable with machine learning, and were curious and excited about applying its models and theories outside of class projects. With its historical importance, an abundance of data, and the ability to play it ourselves, a chess engine seemed like the perfect marriage of feasible and fun. We are well aware that many of the top chess engines in the world (such as Stockfish) rely on finely tuned, hand-crafted evaluation features instead of machine learning, but our purpose was as much educational as it was exploratory. Plus, Google’s recent success in using a machine learning based Go engine, AlphaGo, to beat one of the top players in the world was rather inspiring.
We decided to call the chess engine Guerilla or “little war”, for the obvious, literal reason.
You can check out the source code, and progress so far, on GitHub.
I know it’s been some time since my previous post, but unfortunately I haven’t gotten to a point on my current project (Guerilla) where I feel comfortable doing a complete write-up about it. It’s been a constantly evolving beast, and every time I think it’s at a point where I can start writing a post we run into something new and have to change our approach. As such, I’ve decided that I’ll break up my documentation of the project into parts, and instead of presenting a finished product, I’ll provide glimpses into our thought process and project progress. Really this is the approach I should have been taking in the first place…
So, I plan to write an introduction to the project later this week and build from there.
For today’s post I will be showing how I went about installing TensorFlow on an Amazon EC2 GPU-enabled instance, a process which took me much longer than it should have… Hopefully this tutorial saves you a few grey hairs.
I sometimes find myself surprised at how deeply intertwined our lives are with the internet. That makes some people a little worried, but I gladly embrace our new online augmentation. It does mean, however, that it’s become increasingly important to both study and model users’ behaviour as they interact with web content. And I mean this from a strictly technical perspective, although the cognitive/neuroscience aspect of it is probably equally interesting. If we can predict what a user is going to do next, we can provide a better user experience and help guide them through the sometimes confusing web that is, well, The Web.
The project I’ll be talking about today focuses on predicting the next page to be visited given a sequence of page visits from a user. The idea is to do this using online learning, so that our prediction model is always learning from the stream of page requests received by web servers. We adapted the HMM to this online learning problem through a novel algorithm which we have named -Safe. This modified model was applied to a World Cup page request dataset, where it achieves better accuracy than a naive predictor and performs almost as well as an offline-trained HMM with full knowledge of the data.
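The core prediction step is worth making concrete. Once an HMM has been trained, predicting the next page amounts to forward-filtering the belief over hidden states from the visits seen so far, propagating that belief one transition forward, and marginalising over states to get a distribution over pages. Here is a minimal sketch of that step (this is the standard HMM machinery, not the -Safe online-learning algorithm itself, and the function name is my own):

```python
import numpy as np

def predict_next_page(A, B, pi, observations):
    """Predict the most likely next page under a trained HMM.

    A:  (n_states, n_states) hidden-state transition matrix
    B:  (n_states, n_pages) emission matrix (state -> page probabilities)
    pi: (n_states,) initial state distribution
    observations: indices of the pages visited so far
    """
    # Forward filtering: maintain a belief over the current hidden state.
    belief = pi * B[:, observations[0]]
    belief /= belief.sum()
    for obs in observations[1:]:
        belief = (belief @ A) * B[:, obs]
        belief /= belief.sum()
    # Propagate the belief one step forward, then marginalise over
    # hidden states to get the distribution over the next page.
    next_page_dist = (belief @ A) @ B
    return int(np.argmax(next_page_dist))
```

The online variant has to keep updating `A` and `B` from the request stream as well, but the prediction side stays essentially this.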
Boggle is easily one of my top 3 tabletop games. My family and I have burnt some serious midnight oil over the years trying to best each other at this addictive word game. It’s gotten competitive to the point where one of my uncles trained between family gatherings to avoid a repeat of the shamings he had received from the rest of the family’s superior Boggle-ing abilities. It’s smart, strategic, fast-paced, unforgiving, and I love it.
I recently had the opportunity to learn about GPU-programming and all the wonderful parallel possibilities it promises, and decided that Boggle provided the perfect playground to test my new-found knowledge. So today I present my GPU based Boggle solver.
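At its heart, solving a Boggle board is a depth-first search from each cell over its eight neighbours, pruned by checking whether the letters traced so far are still a prefix of some dictionary word. The GPU angle is about running many of those searches in parallel; the sketch below is just the plain-Python core of the search (the function name and the tiny word list in the usage example are my own, not from the actual solver):

```python
def solve_boggle(board, words):
    """Find every dictionary word that can be traced on the board.

    board: list of rows of single letters; words: iterable of words.
    Adjacent cells (including diagonals) may each be used once per word.
    """
    # Precompute every prefix of every word so dead-end paths can be
    # abandoned early.
    prefixes = {w[:i] for w in words for i in range(1, len(w) + 1)}
    words = set(words)
    rows, cols = len(board), len(board[0])
    found = set()

    def dfs(r, c, path, visited):
        path += board[r][c]
        if path not in prefixes:
            return  # prune: no word starts with these letters
        if path in words:
            found.add(path)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and (nr, nc) not in visited):
                    dfs(nr, nc, path, visited | {(nr, nc)})

    for r in range(rows):
        for c in range(cols):
            dfs(r, c, "", {(r, c)})
    return found
```

Since each starting cell’s search is independent, the outer double loop is an obvious candidate for one-thread-per-cell parallelism on a GPU.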
As always you can find the source code on my GitHub.
I was feeling rather poetic yesterday, so I wrote this sonnet:
Eye of the Beholder
Eye that spoil and my checked thou her treasure;
Die dreams the next death to me memory,
Perforce where there and disgrace in pleasure,
But outward and shall hidden spheres who sky,
All to sue extremity slave unfathered,
Them woman that thou to some bear around,
Yea but dulling use me and my gathered.
Are I and soul a game of coals of ground,
Thou to like music am the number taste,
Made esteem and this sight do shall in side,
Tanned offender's there that thoughts a should the last,
Those breath an nor losing and raven hide,
Doubt storm-beaten report you thou is staineth,
She and his hell the proud loud they disdaineth.
Yes, I know, I’m a modern Shakespeare. Unfortunately, I don’t get any of the credit for this as it was written by an automated poem generator I worked on with a couple of classmates: Shakespeare Bot (9000).
What Is My Purpose?
Shakespeare Bot was created to generate poems which resemble a Shakespearian sonnet. It does so by training a Hidden Markov Model (HMM) on Shakespeare’s sonnets, as well as imposing some additional constraints to better match the Shakespearian style.
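Generation with a trained HMM boils down to a random walk over hidden states, emitting a word at each step. A minimal sketch of that sampling loop (the matrices and per-state vocabularies in the usage example are toy placeholders, not the bot’s actual trained parameters, and the extra style constraints such as rhyme and meter aren’t shown):

```python
import numpy as np

def sample_line(A, pi, emissions, n_words, rng):
    """Sample one line of n_words words from a trained HMM.

    A:  (n_states, n_states) transition matrix
    pi: (n_states,) initial state distribution
    emissions: per-state pair of (word list, probability list)
    rng: a numpy random Generator
    """
    # Draw the initial hidden state, then alternate emit / transition.
    state = rng.choice(len(pi), p=pi)
    line = []
    for _ in range(n_words):
        words, probs = emissions[state]
        line.append(str(rng.choice(words, p=probs)))
        state = rng.choice(len(A), p=A[state])
    return " ".join(line)
```

The real model also has to respect sonnet structure (fourteen lines, roughly ten syllables each), which is where the additional constraints come in.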
I’ve come to realize that some of my software projects tend to fall by the wayside once they’ve been completed, so in an effort to properly shelve them I’ve decided to start this blog. My hope is that it’ll motivate me to document them, and force me to write down those little development details which turn out to be super important – but too often fall through the cracks. Turns out it’s important to always leave a note! As a warm-up, and to make sure there’s at least something to read on here, the first few posts will be on previous projects, then hopefully I’ll get into some more current material.
Quick bio about me: Recent graduate from Caltech with a Master’s in Electrical Engineering, with an undergrad in the same field from McGill. But as I tell everyone, I made the transition to software a couple of years ago. Turns out “hardware” is just a synonym for “magic”. Currently funemployed in Vancouver, Canada and looking for that elusive special someplace to sweep me off my proverbial feet. My interests mostly lie in machine learning and data science.
That’s all for now! Thanks for dropping by and check back soon for some exciting new posts.