Allan Habiger
Computer Science Graduate & Software Engineer
I am a graduate of the University of Minnesota, Twin Cities, with a Bachelor of Science in Computer Science. When it comes to software development, I enjoy solving problems I have never encountered before.
Alongside that, my interests are quite broad, which has given me the opportunity to work on a wide variety of topics and projects. Over the years I have gotten to explore machine learning, computer graphics, full-stack development, UI/UX research, databases, and many more topics.
Othello Agent
An Othello agent that leverages a multi-objective Monte Carlo tree search, alpha-beta pruning, and eight distinct evaluation functions, all weighted according to game stage.
This was a self-directed research project inspired by a class assignment. We had full freedom over what we could do, and I have always had a passion for attempting to solve games with machine learning, so I took it upon myself to make an Othello agent that could hold its own against other implementations.
I had no clue this game even existed before this project, so I had to learn about its rules, strategies, and existing solutions. Thankfully, there are many papers on the topic, and I slowly pieced together how I wanted my agent to behave. The idea was to make an agent using multiple evaluation functions so it could accurately adhere to the basic strategies of the game. The evaluation functions were as follows: win, material, mobility, positional, corner, edge, frontier, and parity.
- Win: a standard check for whether a move wins the game.
- Material: how many pieces you have.
- Mobility: how many legal moves you have.
- Positional: a premade table of values for each board position.
- Corner: how many corners you hold.
- Edge: how many pieces you have on the edges.
- Frontier: how many of your pieces are adjacent to an empty square.
- Parity: whether an even or odd number of empty squares remain.
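To give a feel for how a few of these functions and the stage-based weighting fit together, here is a rough Python sketch. The board representation, function names, and weight values are my own illustrative choices, not the project's actual code or tuned parameters.

```python
# Hypothetical sketch of stage-weighted evaluation for an 8x8 Othello board.
# Board: 8x8 list of lists, 1 = our piece, -1 = opponent, 0 = empty.

CORNERS = [(0, 0), (0, 7), (7, 0), (7, 7)]

def material(board, player):
    # Net piece count from `player`'s perspective.
    return sum(cell for row in board for cell in row) * player

def corners(board, player):
    # Net corner ownership from `player`'s perspective.
    return sum(board[r][c] for r, c in CORNERS) * player

def parity(board, player):
    # +1 when an odd number of empty squares remain, else -1.
    empties = sum(cell == 0 for row in board for cell in row)
    return 1 if empties % 2 == 1 else -1

def game_stage(board):
    # Fraction of the board filled: 0.0 = opening, 1.0 = endgame.
    return sum(cell != 0 for row in board for cell in row) / 64

def evaluate(board, player):
    # Weights shift from corner control early toward material and
    # parity late (illustrative numbers, not the project's values).
    t = game_stage(board)
    return ((1 - t) * 10 * corners(board, player)
            + t * 2 * material(board, player)
            + t * 5 * parity(board, player))
```

The key idea is that each term stays cheap to compute, and the blend factor `t` lets one evaluator serve every phase of the game.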
With this implementation I was able to beat Google's Othello agent on its hardest difficulty at a search depth of eight. To take this further, I could apply a reinforcement learning approach: using my current implementation to provide reward information, I could train a neural network to learn the best move for each board position.
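For context, the alpha-beta pruning the agent relies on follows the standard minimax pattern. A minimal generic sketch (the move-generation, state-application, and evaluation callbacks here are placeholders, not the project's Othello-specific code):

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing,
              moves_fn, apply_fn, eval_fn):
    # Standard minimax with alpha-beta pruning over an abstract game.
    moves = moves_fn(state)
    if depth == 0 or not moves:
        return eval_fn(state)
    if maximizing:
        best = -math.inf
        for m in moves:
            best = max(best, alphabeta(apply_fn(state, m), depth - 1,
                                       alpha, beta, False,
                                       moves_fn, apply_fn, eval_fn))
            alpha = max(alpha, best)
            if alpha >= beta:  # opponent would never allow this line
                break
        return best
    else:
        best = math.inf
        for m in moves:
            best = min(best, alphabeta(apply_fn(state, m), depth - 1,
                                       alpha, beta, True,
                                       moves_fn, apply_fn, eval_fn))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best
```

Pruning is what makes a depth-eight search affordable: whole subtrees are skipped once it is clear the opponent would avoid them.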
HABe Engine
This has been a long-term project of mine that I have always wanted to do. I have always enjoyed games and have already made a few of my own, as well as mods for others. One thing I hadn't done, though, was make a game engine I could build a game with. After deciding to start this project, I had a lot of fun slowly building my way up to where I am now.
I first started out by simply making a game loop that creates a window. This took me a little while, as it was all completely new to me. After figuring that out, I wanted to be able to see something, so I used OpenGL to draw a triangle. I could see, but I couldn't do anything, so I built on what I had for the window and finally added input handling. This was what I would consider my first milestone, both for the engine itself and for my motivation on the project.
After this I slowly started expanding the idea, redoing it from the ground up. While not planning directly, I was focused on building an engine I could add features to and that would let me make a game easily. This led to many generations of game engines that taught me a lot: I explored cross-platform development, rendering techniques, and networking, and I slowly pieced together what I wanted my game engine to be.
For my current version, I took the time to make an extensive UML diagram before implementing anything. Settling the structure and design choices up front was necessary, as this is a serious attempt. This led to a multi-month design process, out of which I arrived at the following design.
It is a Windows-only C++23 game engine with a runtime lifecycle, Win32 windowing, an ECS world, and rendering, animation, audio, asset, physics, and event systems. It uses DirectX 12 for rendering, implementing the swap chain, command submission, descriptor management, GPU resources, upload paths, and render graphs; XAudio2 and X3DAudio for audio, implementing 3D sound, spatialization, and reverb; and Jolt for physics, adding collisions and constraints. Assets for each game need to be cooked into an optimized binary format, which is done through a file-cooker app built on the engine.
Eigenmountains
A new project I am currently undertaking, and one I am very excited about. I got the idea for an "eigenmountain" while in Nevada staring at the mountain ranges. Curious how I could possibly model a mountain's creation, I started thinking about the techniques I knew: noise functions and physics simulations. Then, for a reason I can't quite explain, I started thinking about eigenfaces and how you can use them to generate new faces, and I wondered whether I could do the same thing for mountains; hence the project name, eigenmountain.
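To make the eigenfaces analogy concrete: if you flatten a set of heightmaps into vectors, the principal components are literally "eigenmountains", and a new mountain can be sketched as the mean plus a weighted combination of them. A toy numpy sketch of that idea, using synthetic stand-in data rather than real terrain:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for real heightmap data: 50 "mountains" of 16x16 pixels.
heightmaps = rng.random((50, 16 * 16))

# Eigenfaces-style PCA: centre the data, then take the top components.
mean = heightmaps.mean(axis=0)
centered = heightmaps - mean
# SVD of the centred data; the rows of vt are the principal directions.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenmountains = vt[:8]  # the top 8 "eigenmountains"

# A new mountain = mean + weighted combination of eigenmountains.
weights = rng.normal(size=8)
new_mountain = (mean + weights @ eigenmountains).reshape(16, 16)
```

This linear version is exactly the "too simple" baseline discussed below; the project replaces it with a learned, nonlinear latent space.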
Before settling on my current design, I had a lot of research to do on modern techniques for machine learning and generative models. I knew I needed to classify the mountains somehow, but I also knew PCA is far too simple for the job. Looking into related techniques, I found variational autoencoders, which are similar in spirit to PCA but learn to extract features from the data, linking each feature to a scalar value. This sounded like pretty much exactly what I needed, so I went to work on taking the VAE output and using it to generate new mountains. Using the residuals from the VAE, I can tie them to their outputs, use a CNN to recreate the 2D heightmap, and then use that to generate a model of the mountain. Data was fairly simple to find, as I had previously looked for geological data like this for a different project; I am using Copernicus DEM GLO-30 data, sanitized into the correct format.
With the base details ironed out, I honestly did not believe this would work. I knew how complex the task was, and I didn't know whether the VAE would be able to extract features or whether it would overgeneralize and entangle them. So, to maximize my chances, I looked into papers with versions of my techniques better optimized for this task. For my VAE, I am using a beta total correlation variational autoencoder (β-TCVAE): plain VAEs tend to entangle features together; β-VAE fixes this but suffers because the more independent the features, the worse the reconstruction; and β-TCVAE addresses both with fewer drawbacks. I am also using a 2D ResU-Net CNN, which is better suited to 2D images and directly uses the residuals from my VAE to reconstruct the heightmap. The basic flow: the VAE produces new residuals, I use those to generate a heightmap with the CNN, and then I use the heightmap to generate a 3D model of the mountain.
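For background, the reason β-TCVAE can disentangle without wrecking reconstruction is visible in its objective. The standard decomposition splits the VAE's KL term into three parts and penalizes only the total-correlation term, which measures how entangled the latent dimensions are (this is textbook notation, not my own derivation):

```latex
\mathcal{L} =
\mathbb{E}_{q(z|x)}\!\left[\log p(x|z)\right]
- \alpha\, I_q(x; z)
- \beta\, \mathrm{KL}\!\left(q(z)\,\Big\|\,\textstyle\prod_j q(z_j)\right)
- \gamma \sum_j \mathrm{KL}\!\left(q(z_j)\,\|\,p(z_j)\right)
```

The middle KL term is the total correlation. β-VAE effectively scales the entire KL, which also punishes the terms needed for good reconstruction; β-TCVAE scales only the total correlation (typically with α = γ = 1), which is the "fewer drawbacks" trade-off described above.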