Recently I was fortunate enough to be brought aboard a student-driven game design project by Jake Ross and some other students from Texas A&M’s Visualization Lab (you should really check them out; they send a lot of animators to studios like Pixar and DreamWorks). Together with a core team of about eight, we spent a year building an iPad game in our free time, which we named Titan Ph.D. I built the AI (artificial intelligence), and this is the first of a three-post series on how I did it.
Computers can seem pretty dumb sometimes, can’t they? Why can’t they just learn how to do things the way we do? Learning comes so effortlessly to us humans; we don’t even remember learning something as extraordinarily complicated as speech – it just sort of happened. If I showed you 10 pictures, 5 with cats in them and 5 without (actually, this is the internet, so 11 of those 10 pictures would have cats in them, but bear with me), you could easily identify which images contained cats. But computers are basically math machines: unless you can very precisely define what a cat is, a computer will not be very good at such a task. That’s where neural networks come in – what if we could simulate a human brain? And, like a human brain, what if we could purpose our simulation to look only at cats?
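To give a flavor of what “simulating a brain” means before we dig in, here’s a toy sketch of a single artificial neuron (this is not the game’s actual AI, and the “cat features” are invented purely for illustration): it takes some inputs, multiplies each by a weight, sums them, and “fires” if the total clears a threshold. Real networks wire many of these together and learn the weights from data instead of having them hand-picked.

```python
# A toy artificial neuron: a weighted sum of inputs passed through a
# step function. The features (pointy ears, whiskers) and the weights
# are made up for illustration -- real networks learn weights from data.

def neuron(inputs, weights, bias):
    """Fire (return 1) if the weighted sum of inputs plus bias is positive."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total + bias > 0 else 0

# Hand-picked weights: both features matter equally, and we need both
# to clear the threshold -- this neuron acts like a tiny AND gate.
weights = [1.0, 1.0]
bias = -1.5

print(neuron([1, 1], weights, bias))  # pointy ears AND whiskers -> 1 ("cat")
print(neuron([1, 0], weights, bias))  # only one feature         -> 0
```

The interesting part, which the next posts get into, is that nobody has to hand-pick those weights: given enough labeled examples, a learning algorithm can adjust them automatically.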