How I hacked a gaming site

A while back somebody showed me a casual gaming site where people play each other at a variety of card games. There's no money involved; you play for points and to climb the rankings among the site's thousands of players. The site offers a limited amount of free play time per day, beyond which you have to pay for a subscription. I gave it a shot and played for a bit.

But then my AI instinct started thinking about how to make a computer play cards. Could I write a program that would play perfectly? And could I hack the system to obtain the stream of game data necessary for an automated player to participate in real, live games? It seemed an interesting technical challenge for a small project.

Having decided on the particular card game to play, I had to figure out how to make the computer play it. I figured that with the right gameplay data I could implement a minimax-type algorithm with a heuristic evaluation function and do a decent job. So that's what I did, with a little tuning to get the heuristic right and some basic probability judgements about which cards could be dealt.

Minimax algorithm for tic-tac-toe (Sussex university)
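The actual card game and its tuned heuristic aren't described here, so as an illustration only, here is the shape of the minimax idea on a trivial perfect-information game (Nim: take 1 to 3 sticks, whoever takes the last stick wins). The game, scores, and function names are my own placeholders, not the project's code:

```python
# Minimax sketch on a trivial game (Nim: take 1-3 sticks, the player
# who takes the last stick wins). The real project used the same idea
# with a tuned heuristic for a card game; this only illustrates the
# algorithm's recursive structure.

def minimax(sticks, maximizing):
    """Return the best achievable score (+1 win, -1 loss) for the
    maximizing player from this position."""
    if sticks == 0:
        # The previous player took the last stick and won, so the
        # player to move now has lost.
        return -1 if maximizing else 1
    scores = []
    for take in (1, 2, 3):
        if take <= sticks:
            scores.append(minimax(sticks - take, not maximizing))
    return max(scores) if maximizing else min(scores)

def best_move(sticks):
    """Pick the move with the best minimax score for the side to move."""
    return max((t for t in (1, 2, 3) if t <= sticks),
               key=lambda t: minimax(sticks - t, False))
```

In a card game the recursion cannot reach terminal positions in time, so it is cut off at a fixed depth and the heuristic evaluation function scores the frontier positions instead, with probabilities standing in for the unseen deals.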

Now came the hard part: getting the program to read the game information as it's being played. The program has to stay in sync, seeing everything that happens in order to make its moves. A human just looks at the screen and sees the cards dealt and played, the points, and so on, but a program sitting on your computer can't do that easily.

In fact, the visual route is one of three I considered. It entails capturing screenshots of the gaming area continuously and then using image recognition algorithms to detect dealt cards, played cards, scores and other visual components. That's tricky and computationally expensive in real time. The game I selected plays very quickly, with people making their moves in seconds, so this route would be almost impossible in a mini project.

The second route I considered was capturing the network traffic exchanged with the gaming site. Since all the game information has to be transmitted to the game client (in the browser), you could in theory monitor the connection and sniff out what you need. I brought up a browser console and looked at the requests and the returned data. Not good: the traffic was encoded with some scheme seemingly designed to make it unreadable, and not something to be reverse engineered in a couple of days.

Now, something I haven't mentioned until now is that this gaming site uses Java applets embedded in the web browser to create the gaming client interface, i.e. the gaming window where you see the cards and players and make your moves. I inspected the HTML and found references to the code files (JAR files). What about decompiling that code and reading the reverse-engineered source to see what was going on?

Compiling (Monash university)

I ran a decompiler and started looking at the reconstructed package structure and source classes. It was a lot of code, and some names were cryptic, but I managed to spot files and packages with meaningful names. Tracing calls across 20 or so files narrowed the search down to one candidate file that looked promising: its methods represented meaningful game events, and their parameters and attributes appeared to contain the desired game state information.

But then what? Even if I could trace the code to the important locations, how could I get it to talk to my program? After all, that code could not be modified; it comes straight from the gaming site's server. And even if there were some way to locally alter the client, the code would not compile after a flaky decompilation process.

There didn't seem to be an easy way around this… except for a tracing tool based on code instrumentation. BTrace allows you to inject code into an external JVM process at methods you specify via type signatures. The code you inject is subject to strong restrictions and requires a special API to access the host code's parameters and state. It's cumbersome, but it works well: you connect to an external VM and it does its magic.

Simple btrace script (Lennart Schedin)

What I had to do, then, was inject code into the key methods to access the important information. I could output this data as text and pipe it to my program, which would parse the input, synchronize its game state, and run the minimax algorithm. The result would be a set of possible moves and scores, along with a search tree so that you could review the computer's thinking.
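The exact text format the injected probes emitted isn't documented here, so the consuming side can only be sketched under an assumption: say each traced event arrives as one line, e.g. "DEAL 7H KS" or "PLAY KS" (event names and card codes are hypothetical). The parser keeping the mirrored game state could then look like this:

```python
# Hypothetical consumer for the traced event stream. The real line
# format isn't shown in the write-up; this assumes one event per line,
# e.g. "DEAL 7H KS" or "PLAY KS", with made-up event names.

def parse_event(line):
    """Split one traced line into an (event, payload) pair."""
    parts = line.strip().split()
    return (parts[0], parts[1:]) if parts else (None, [])

class GameState:
    """Minimal mirror of the game as seen through traced events."""
    def __init__(self):
        self.hand = []
        self.played = []

    def apply(self, event, payload):
        if event == "DEAL":
            self.hand.extend(payload)
        elif event == "PLAY":
            for card in payload:
                self.played.append(card)
                if card in self.hand:
                    self.hand.remove(card)

def run(stream):
    """Replay a stream of traced lines (e.g. sys.stdin) into a state."""
    state = GameState()
    for line in stream:
        state.apply(*parse_event(line))
    return state
```

In the real setup the stream would be the tracing tool's stdout piped into the program, and after each event the synchronized state would be handed to the search.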

All that was left was to try it out. That meant connecting to the gaming site, launching the tracing tool piped into the game program, and making its moves manually in the browser. I could have done a bit more work to have the program play directly, but the main interesting bits were done.

So, how did it do? The whole setup had a winning rate above 90%, which is not surprising given the streamed card data combined with minimax. After a few days of free play the rating shot up very quickly, and it could easily achieve the top spot among approximately 8000 players.

Wrapping up, it was an interesting technical challenge for a small project.

You're probably thinking: isn't using a computer bot cheating? Of course it's cheating; that's why I call this a hack, even though it's only the game client that got hacked. The site should plug these holes so that people can't win systematically this way, especially considering that it's a paid service. Although I suspect that the few people (if any) who set up something like this do it for the challenge and shelve the solution as soon as it works.

Note: feel free to contact me for details if you think your site is the victim!

Loops, search and the creative process

I have described rationality as the ability to represent the world correctly (in a brain). Correct representation allows for correct predictions. Predictions in turn allow choosing those actions that will attain goals; an intelligent agent can predict the result of its actions and develop plans accordingly. It sounds complicated but it’s really a routine process. Here’s a pigeon doing something like it.

Without the ability to develop plans (and during learning stages) an agent is limited to trying every possibility and seeing what happens. But trying everything out in the real world can be slow and even dangerous. So intelligent agents simulate the outside world internally to carry out the trial and error process much more quickly, and safely. This trial and error process is a loop.

A creative process can also be seen as a search containing a trial and error loop. I'm using the term creative process loosely and inclusively: it applies to any process where we are creating something with a specific intent and criterion. Call the result art if you like; it can be a painting, a computer program, a piece of music or the design for a bridge. There are many manifestations, but they can all be interpreted as a search for a solution in a large space of possibilities, and finding good solutions is not trivial.

This is why trial and error is necessary: it cannot be known ahead of time what the best solution will be, nor how good a particular solution is, except by trying them out. During such trials the creator evaluates each solution, and the evaluation propagates information back into the search for other solutions, establishing a feedback loop.
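This generate-evaluate loop can be stated in its barest computational form: propose a candidate, score it against the criterion, and feed the score back by keeping the best so far. The toy objective below (candidates are lists of numbers, scored by closeness of their sum to 100) is of course a stand-in for any real creative criterion:

```python
import random

# The generate-evaluate feedback loop in its simplest form: propose a
# candidate, score it, keep it if it beats the best so far. The "art"
# here is a list of numbers and the criterion is a toy score.

def score(candidate):
    # Toy evaluation: prefer candidates whose sum is close to 100.
    return -abs(sum(candidate) - 100)

def search(trials, rng):
    best, best_score = None, float("-inf")
    for _ in range(trials):
        candidate = [rng.randint(0, 50) for _ in range(4)]  # create
        s = score(candidate)                                # evaluate
        if s > best_score:                                  # feed back
            best, best_score = candidate, s
    return best, best_score

best, best_score = search(2000, random.Random(0))
```

Everything that follows about loop speed applies directly: the faster each create-evaluate round trip, the more candidates fit in the same amount of time.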

Does this view of the creative process offer any insights as to how to best carry it out?

First, it is clear that, all other things being equal, the more possibilities are considered, the higher the quality of the final solution will be. So speeding up the feedback loop, removing barriers and delays between creation and evaluation, can dramatically improve the outcome. It turns out that the speed of the feedback loop is very sensitive to advances in technology; digital photography is one example.

Advances like these can be interpreted as a tightening of the feedback loop, accelerating the creative process and allowing more possibilities to be explored. Digital photography is a straightforward case: by eliminating chemical processing, the time between taking a shot and judging the result is drastically reduced (as are constraints like a limited number of exposures per roll of film). There are potential pitfalls; a faster feedback loop may also encourage a less disciplined and principled approach. But this is a side effect of the advance rather than a property of the advance itself, which, ceteris paribus, is usually a net gain.

3D modeling

And another point can be made. When searching for solutions, the creator will inevitably be limited to a fraction of the search space, and the quality of the final solution can only be as high as the best present in that fraction. There is a tradeoff: if the space is too large, the search gets bogged down filtering many low-quality results; if it is too small, the quality of the best solution in the examined space is bounded, producing stagnation.

So the second insight is that adding a random element to a creative process allows the search space to be expanded beyond what would otherwise be possible. This is the role of accident in art: it allows creators to transcend their limits, in exchange for more labor or a possibly worse outcome.
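The same effect shows up in the simplest search algorithms. A purely deterministic local search stalls on the best solution of its immediate neighborhood, while injecting randomness (here, random restarts; the two-peak objective is a made-up illustration) lets it reach regions it would never visit on its own:

```python
import math
import random

# Sketch of how randomness expands the explored space: plain hill
# climbing gets stuck on whichever peak is nearest its start, while
# random restarts can land in the basin of a better one. The objective
# is a toy curve with a small peak near x=2 and a taller one near x=8.

def objective(x):
    return 3 * math.exp(-(x - 2) ** 2) + 6 * math.exp(-(x - 8) ** 2)

def hill_climb(x, step=0.1):
    """Greedy local search: move while a neighbor scores higher."""
    while True:
        if objective(x + step) > objective(x):
            x += step
        elif objective(x - step) > objective(x):
            x -= step
        else:
            return x

def with_restarts(restarts, rng):
    """Same climber, started from random points; keep the best result."""
    starts = [rng.uniform(0, 10) for _ in range(restarts)]
    return max((hill_climb(s) for s in starts), key=objective)
```

Starting from x = 0, the plain climber settles on the lesser peak near x = 2; with a handful of random restarts the search almost always finds the taller peak near x = 8, at the cost of running the climber several times over, the "more labor" in the tradeoff above.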


Additional interpretations are possible in terms of search and search spaces, for example:

– The role of knowledge and expertise as a search heuristic

– Process specifics as search bias

Learning vs memorizing

The other day the feasibility of AI came up in casual conversation. Somebody argued that machines can only do what they’re told to, that they just follow a pre-programmed set of rules or commands, and so on.

In response to this I normally cite the field of machine learning as an example of computers doing things they were not explicitly programmed to do (I'll use classification here). When recognizing characters, for instance, the computer is not given an explicit set of rules to determine which letter a handwritten element corresponds to. Rather, it learns from a training set, which for classification is a set of elements together with their correct labels (classifications).

An objection was made to my response: that the process is supervised; even though the programming is not explicit, it is nonetheless programming in the form of example-classification pairs. In effect, the computer is just being fed rules, even if we call that process learning. Now, I could have gone on about unsupervised learning, but there’s a better point to make.

The supervised/unsupervised distinction is not the key to whether learning occurs. The defining characteristic of learning is that the computer can correctly classify examples it has not seen before.

An agent that only memorizes the training set can correctly identify previously seen elements, but it cannot deal with unseen ones; its memorized knowledge has nothing to say about them. A learning agent, in contrast, manages to extract from the training set knowledge that can be used to classify new, unseen elements. The key to learning is thus the ability to generalize from examples: the acquisition of knowledge from specific cases that is applicable in general.
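The contrast can be made concrete with a toy classification task (the data and labels below are invented for the example). The memorizer is a lookup table that fails on anything it hasn't stored; even the crudest learner, nearest-neighbour, generalizes to unseen inputs:

```python
# Memorizing vs. learning on a toy 1-D classification task.
# Training data: numbers labeled "low" or "high" (invented example).

training = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]

def memorizer(x):
    """Pure memorization: answers only for inputs seen in training."""
    table = dict(training)
    return table.get(x)  # None for anything unseen

def nearest_neighbour(x):
    """A (very) simple learner: label x like its closest training example."""
    return min(training, key=lambda pair: abs(pair[0] - x))[1]
```

The memorizer returns nothing for an unseen input like 3.5, while the nearest-neighbour rule classifies it correctly as "low": it has extracted something general (proximity structure) from the specific cases.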

However, this is not the end of the story. An agent may have the ability to learn, but that is not enough to guarantee that learning does in fact take place [1]. The extra necessary ingredient is that the target of learning must be learnable. If no “explanation” exists that reproduces the data, then there is nothing to be learned [2]; the data is just “noise” [3]. This is why a phone book cannot be learned, only memorized.


[1] This is another statement of the problem of induction

[2] Alternative formulations of this are: that the data has no internal structure; the data cannot be compressed; the data’s constituent elements have zero mutual information

[3] Unfortunately there is no way to tell whether an “explanation” exists or not, short of finding it. Hence the uncomputability of Kolmogorov complexity.
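Footnote [2]'s compression formulation can be demonstrated directly with an off-the-shelf compressor: structured data shrinks dramatically, while random bytes (standing in for "noise") do not compress at all:

```python
import os
import zlib

# Footnote [2] in action: regular data compresses well, random bytes
# barely at all, mirroring the learnable vs. noise distinction.

structured = b"ABCD" * 1000   # highly regular, 4000 bytes
noise = os.urandom(4000)      # incompressible with overwhelming probability

ratio_structured = len(zlib.compress(structured)) / len(structured)
ratio_noise = len(zlib.compress(noise)) / len(noise)
```

The regular sequence compresses to a small fraction of its size because a short "explanation" (repeat "ABCD") reproduces it; the random block has no such explanation, so the compressed form is essentially as large as the original.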

Hints of generality in machine learning

One of the themes in AI is the progressive replacement of hand-crafted knowledge and programming with autonomous learning. Another is the progressive shift from narrow, domain-specific abilities to generally applicable performance. These themes are closely related: domain-specific performance is usually achieved by encoding domain-specific expert knowledge, provided by humans, into the AI itself. This encoding is fixed and static, and potentially brittle if the agent is exposed to a domain outside its narrow region of applicability.

In this talk we see hints of generality in machine learning via the replacement of hand-crafted, tuned features (used as input to a later learning stage) with a phase that autonomously learns those features. Because domain-specific features, for example for vision or speech recognition, are typically designed by subject-matter experts, learning is constrained by that manual, domain-bound work: one cannot use vision features on an audio problem, and so forth.

However, when machine learning is targeted at learning the features themselves (in an unsupervised scheme), the potential for generality appears. If features can be autonomously learned for various domains, the machine learning process becomes general insofar as the learned features' performance is comparable or superior to that achieved with hand-crafted knowledge. This is exactly what is demonstrated here.
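The talk's actual pipeline isn't reproduced here, but the shape of the idea can be sketched with the simplest possible device, k-means: cluster unlabeled data, then represent each input by its distances to the learned centroids. Nothing in the procedure is specific to the domain; the data, cluster count, and function names below are illustrative choices:

```python
import random

# Minimal sketch of unsupervised feature learning: learn k centroids
# from unlabeled 1-D data with k-means, then use the distance to each
# centroid as a learned feature vector. Real systems learn far richer
# features, but the domain-agnostic shape of the idea is the same.

def kmeans(data, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(data, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for x in data:
            nearest = min(range(k), key=lambda i: abs(x - centroids[i]))
            clusters[nearest].append(x)
        # Move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

def features(x, centroids):
    """Represent x by its distance to each learned centroid."""
    return [abs(x - c) for c in centroids]

data = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]   # two obvious groups, no labels
centroids = kmeans(data, k=2)
```

A later supervised stage would then train on these feature vectors instead of on hand-designed ones; swapping in audio or image data changes nothing about the feature-learning step itself, which is the point about generality.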

And, to make it even more relevant to current research avenues, this is related to findings in neuroscience that suggest the general applicability of some kinds of learning in the neocortex, where for example subjects can learn to see with their auditory or somatosensory cortex (after rewiring of the optic nerve). This suggests that a single learning algorithm may be at work, at least for learning concepts at low levels of the hierarchy, close to the perceptual level, which is exactly where features reside.