CSCI 5611: Animation and Planning in Games
Theory behind algorithms used to bring virtual worlds to life. Computer animation topics. Real-time, interactive techniques used in modern games. Physically-based animation, motion planning, character animation, simulation in virtual worlds.
Bouncing Ball (Assignment 1 - Check In)
3D OpenGL rendering of a bouncing ball. It uses OpenGL, GLEW, GLM, and a custom shader manager based off a homework assignment from CSCI 5607 Fundamentals of Computer Graphics, taught by Dr. Victoria Interrante.
This check-in includes multiple camera angles. Pressing B tracks the ball, aligning the camera and increasing the projection draw distance to keep the platform in view. Pressing R resets the view to an orbiting camera.
A vertex shader and a fragment shader were used to calculate lighting.
Particle Systems (Assignment 1)
I developed the animation in OpenGL with two separate GLSL shader programs (one for Phong illumination, the other for particle rendering). I used tinyobjloader to load the 3D models and stb_image to load the textures.
The water simulation (without transparency) benchmarked at 212,408 particles at an average of 36 frames per second on my laptop. With transparency (as in the fire), I averaged ~50k particles at 40 frames per second.
I attempted to use animated textures for the campfire, but was unable to completely animate the textures by the due date. It did create some pretty interesting visuals, though!
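The core particle update is a simple forward-Euler integration step. The sketch below is a minimal version of that loop with a ground-plane bounce; the struct and constants are illustrative, not the project's actual types:

```cpp
#include <vector>

// Minimal particle state; in the real project this lives in GLM vec3s.
struct Particle {
    float px, py, pz;   // position
    float vx, vy, vz;   // velocity
    float life;         // seconds remaining
};

// One forward-Euler step: apply gravity, integrate, age the particle.
void stepParticles(std::vector<Particle>& ps, float dt) {
    const float g = -9.8f;
    for (Particle& p : ps) {
        p.vy += g * dt;           // acceleration -> velocity
        p.px += p.vx * dt;        // velocity -> position
        p.py += p.vy * dt;
        p.pz += p.vz * dt;
        p.life -= dt;
        if (p.py < 0.0f) {        // bounce off the ground plane with damping
            p.py = 0.0f;
            p.vy = -0.7f * p.vy;
        }
    }
}
```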
Two required simulations in context: a water hydrant and a campsite fire
3D user-controlled camera (rotation and translation)
Particle-obstacle interactions (water particles with the ball, cauldron interaction with the fire)
All four performance benchmarks (212,408 particles at an average of 36 fps, thanks C++!)
[Attempted, see video] textured sprites for particles
Art Contest submission
The biggest difficulties I encountered concerned the multiple shaders, loading the OBJ files, and wrangling all the vertex buffer objects. I had extremely limited experience with OpenGL (just a few weeks near the end of CSCI 5607), so the entire process was a learning experience. This submission doesn't have the polish I would have liked, but I feel like I understand OpenGL significantly better after this assignment.
Vertical Spring System (Assignment 2 - Check In)
In this assignment we are creating a cloth simulation. For the Assignment 2 check-in, we were expected to create a vertical thread modeled as a system of springs.
In addition to the spring simulation, I also added a few additional features to my base simulation:
Added the ability to press P to enable picking mode, then drag to move draggable objects (accomplished via ray casting)
Added the ability to press SPACE and SHIFT to move up and down, respectively.
Smoothed the camera movement by moving a camera target, then linearly interpolating the eye position toward that target.
Began investigating spatial hashing for object collisions, inspired by "Optimized Spatial Hashing for Collision Detection of Deformable Objects". The code is commented out for this check-in submission.
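The hashing scheme from that paper maps a 3D cell to a bucket by XORing the cell coordinates multiplied by large primes. A minimal sketch (the function name and table size are my own):

```cpp
#include <cstdint>
#include <cmath>
#include <cstddef>

// Hash a 3D position into one of `tableSize` buckets, following the scheme
// from "Optimized Spatial Hashing for Collision Detection of Deformable
// Objects" (Teschner et al.): XOR of cell coordinates times large primes.
std::size_t spatialHash(float x, float y, float z,
                        float cellSize, std::size_t tableSize) {
    long long i = (long long)std::floor(x / cellSize);
    long long j = (long long)std::floor(y / cellSize);
    long long k = (long long)std::floor(z / cellSize);
    std::uint64_t h = (std::uint64_t)(i * 73856093LL)
                    ^ (std::uint64_t)(j * 19349663LL)
                    ^ (std::uint64_t)(k * 83492791LL);
    return (std::size_t)(h % tableSize);
}
```

Nearby particles land in the same bucket, so candidate collision pairs come from a single bucket lookup instead of an all-pairs test.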
Physical Simulation (Assignment 2)
In this assignment we focused on physical simulations. I created a fairly advanced cloth simulation as well as a 30x30x30 3D Eulerian fluid simulation.
Real-time rendering (see video for FPS, usually ~50-60)
3D rendering, with a user-controlled camera (WASD, space to go up, shift to go down)
3D Mass-spring cloth simulation
Drag terms (see the cloth dragging on the table)
One-way cloth-object interaction with the rectangular table
Demo video shows a 30x30 cloth with obstacle interaction, as well as a 60x60 flag.
Textured simulated objects (red cloth, American Flag)
Thread-parallel implementation with OpenMP, which allows the stable 60x60 flag. It maxes out at a 200x200 flag, but the threads become too long to render, and increasing the spring constant k makes the simulation unstable.
Attempted a spatial data structure. I did not complete self-collision, so the spatial data structure went untested.
Billowing/blowing wind effect simulated with aerodynamic drag.
Attempted self-collision. I ran into issues with the triangle-triangle intersection test and did not complete it in time.
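The per-spring force at the heart of the mass-spring cloth is Hooke's law plus a damping term along the spring axis. A minimal sketch, with illustrative names rather than the project's actual API:

```cpp
#include <cmath>

struct V3 { float x, y, z; };

// Force on particle a from the spring a--b: Hooke term proportional to the
// stretch past restLen, plus damping kd on the relative velocity projected
// onto the spring axis.
V3 springForce(V3 pa, V3 pb, V3 va, V3 vb,
               float restLen, float k, float kd) {
    V3 d{pb.x - pa.x, pb.y - pa.y, pb.z - pa.z};
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    V3 dir{d.x / len, d.y / len, d.z / len};
    // Relative velocity along the spring direction.
    float relVel = (vb.x - va.x) * dir.x + (vb.y - va.y) * dir.y
                 + (vb.z - va.z) * dir.z;
    float mag = -k * (restLen - len) + kd * relVel;  // stretch pulls a toward b
    return V3{mag * dir.x, mag * dir.y, mag * dir.z};
}
```

Particle b receives the equal and opposite force, which keeps the system's momentum conserved.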
Extended Stam's GDC '03 paper to a 3D Eulerian fluid simulation. The demo shows the first attempt with water, which did not turn out very well. I had some issues imposing external forces (gravity) without the water disappearing, so I pivoted the fluid to a smoke/fog. The demo shows a 30x30x30 simulation in the context of a steam locomotive.
In the additional video below, I demonstrate a 38x38x38 3D Eulerian fluid simulation at 30 fps, which exceeds the highest performance expectation of 200x200 (54,872 > 40,000 cells).
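The 3D extension keeps Stam's structure: a flattened (N+2)^3 grid with a one-cell boundary, and Gauss-Seidel relaxation for the diffusion step. A sketch of the indexing and one relaxation sweep (names are mine, not the paper's C code):

```cpp
#include <vector>

// Flattened indexing for an (N+2)^3 grid with a one-cell boundary, following
// the layout from Stam's "Real-Time Fluid Dynamics for Games" extended to 3D.
inline int IX(int i, int j, int k, int N) {
    return i + (N + 2) * (j + (N + 2) * k);
}

// One Gauss-Seidel relaxation sweep of the diffusion step: each interior cell
// moves toward the average of its six neighbors, weighted by a = dt*diff*N^3.
void diffuseSweep(std::vector<float>& x, const std::vector<float>& x0,
                  float a, int N) {
    for (int k = 1; k <= N; ++k)
        for (int j = 1; j <= N; ++j)
            for (int i = 1; i <= N; ++i)
                x[IX(i, j, k, N)] =
                    (x0[IX(i, j, k, N)] +
                     a * (x[IX(i - 1, j, k, N)] + x[IX(i + 1, j, k, N)] +
                          x[IX(i, j - 1, k, N)] + x[IX(i, j + 1, k, N)] +
                          x[IX(i, j, k - 1, N)] + x[IX(i, j, k + 1, N)])) /
                    (1 + 6 * a);
    // In the full solver this sweep repeats ~20 times, followed by the
    // boundary-condition pass (set_bnd in Stam's code).
}
```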
Art contest submission.
At any time during the simulation, press P to toggle between picking and flying around. When picking, the mouse ray casts to the cloth. The controls for the 3D Eulerian fluid are E to emit smoke, G to toggle gravity, and the arrow keys for directional forces.
The code is split across three branches on GitHub.
'assignment2' is the cloth table.
'assignment2-flag' is the flag.
'assignment2-fluid-3D' is the 3D simulation.
C++ / OpenGL (with shaders)
GLM for vectors
OpenMP for thread processing
tinyobjloader for loading OBJ models
stb_image for texture loading
Reference: Jos Stam, "Real-Time Fluid Dynamics for Games". Proceedings of the Game Developer Conference, March 2003.
Tomas Möller, "A Fast Triangle-Triangle Intersection Test". Journal of Graphics Tools, 1997.
Motion Planning (Assignment 3 - Check In)
In this check-in, I created a game character with a 0.5 m radius in a large 20 m x 20 m room. The character starts at the bottom left (-9, -9), and when the user presses N (for navigate), it navigates to the top right (9, 9). There is a single obstacle in the room, represented by a sphere of radius 2 meters at coordinates (0, 0). The user can also press M to navigate the character back to (-9, -9) for a looping demo.
I am using a probabilistic roadmap (PRM) to plan a path for the agent from the start to the goal, with Dijkstra's algorithm for the pathfinding.
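Dijkstra over the roadmap can be sketched with a standard priority-queue implementation; the adjacency-list layout here is illustrative, with edge weights as the Euclidean distances between sampled configurations:

```cpp
#include <vector>
#include <queue>
#include <limits>
#include <utility>
#include <functional>

// Dijkstra over a PRM stored as an adjacency list; returns the shortest
// distance from `src` to every roadmap node.
std::vector<float> dijkstra(
    const std::vector<std::vector<std::pair<int, float>>>& adj, int src) {
    const float INF = std::numeric_limits<float>::infinity();
    std::vector<float> dist(adj.size(), INF);
    using QE = std::pair<float, int>;  // (distance, node)
    std::priority_queue<QE, std::vector<QE>, std::greater<QE>> pq;  // min-heap
    dist[src] = 0.0f;
    pq.push({0.0f, src});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;  // skip stale queue entries
        for (auto [v, w] : adj[u])
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
    }
    return dist;
}
```

The actual path is recovered by also recording each node's predecessor when its distance improves.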
The code is in the 'assignment3-checkin' branch of my main opengl-animation repository.
Motion Planning (Assignment 3)
In this assignment, we focused on building a motion planning framework used to animate groups of agents navigating in complex environments.
For my assignment, I implemented:
Boids local interaction technique
PRM and A* global navigation strategy
The roadmap accounts for the extent of the agent with multiple obstacles
Agents are able to successfully navigate through environments with local minima
Nicely rendered 3D scenes with models to give context
Supports full 3D navigation with 3D flocking birds
The user can add and move obstacles at run time (press I to interact)
Dynamically choose agent start and goal positions at run time
The agent is animated as a virtual walking character
Character rigging (plays mo-cap from a character animation file)
Path smoothing (walk to the furthest visible node on path)
Allow agents to rotate to reach a goal
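The Boids local interaction combines three steering terms per agent: separation pushes away from close flockmates, alignment matches their average heading, and cohesion pulls toward their center. A minimal 2D sketch with illustrative weights (the project's tuned values may differ):

```cpp
#include <vector>

struct Boid { float px, py, vx, vy; };

// One boid's steering update from its neighbors, then a forward-Euler step.
void steer(Boid& b, const std::vector<Boid>& neighbors, float dt) {
    if (neighbors.empty()) return;
    float sx = 0, sy = 0, ax = 0, ay = 0, cx = 0, cy = 0;
    for (const Boid& n : neighbors) {
        sx += b.px - n.px;  sy += b.py - n.py;   // separation: away from others
        ax += n.vx;         ay += n.vy;          // alignment: average velocity
        cx += n.px;         cy += n.py;          // cohesion: average position
    }
    float inv = 1.0f / neighbors.size();
    float accx = 1.5f * sx + (ax * inv - b.vx) + (cx * inv - b.px);
    float accy = 1.5f * sy + (ay * inv - b.vy) + (cy * inv - b.py);
    b.vx += accx * dt;  b.vy += accy * dt;
    b.px += b.vx * dt;  b.py += b.vy * dt;
}
```

For the full 3D flocking birds, the same three terms run on 3D vectors, and the global PRM path supplies a fourth goal-seeking term.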
Other things, not necessarily related to the assignment:
Generates oriented bounding boxes (OBBs) for each entity! Includes ray casting for selecting items and an OBB-OBB collision test
ASSIMP importer with bone support. Bone matrices are calculated on the GPU.
ASSIMP model animation support
Procedurally generated tile world! I didn't continue with it, but it was really fun to make.
Bounding box rendering
Press "I" to interact. Click to select an item (the bounding box will turn RED). Click and drag any obstacle. Rotate a selected obstacle with LEFT and RIGHT arrow keys. Click anywhere on the ground to direct a selected player entity.
Assimp for 3D models
OBB intersection methods from Real-Time Rendering and Essential Mathematics for Games
Plenty! First, importing models required me to write my own ASSIMP importer. Since ASSIMP creates nodes in a tree format, I needed to rewrite how my Phong renderer worked.
The oriented bounding boxes were a pain, specifically all the changes of coordinate spaces. If I had more time, I would have transformed the bounding boxes along with the animations, allowing the player to get "skinny" and slip through smaller paths.
For sweeping OBBs, I ended up just using small steps and testing a LERPed and SLERPed position vector and quaternion.
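That substepping approach can be sketched as follows; `overlapAt` stands in for the actual static OBB-OBB test, and the position LERP shown here is paired with a quaternion SLERP in the real code:

```cpp
#include <functional>

struct P3 { float x, y, z; };

// Conservative "swept" test by substepping: sample LERPed positions between
// the start and end of the frame and run the static overlap test at each.
bool sweptHit(P3 from, P3 to, int steps,
              const std::function<bool(P3)>& overlapAt) {
    for (int s = 0; s <= steps; ++s) {
        float t = (float)s / steps;
        P3 p{from.x + t * (to.x - from.x),
             from.y + t * (to.y - from.y),
             from.z + t * (to.z - from.z)};
        if (overlapAt(p)) return true;  // first colliding sample wins
    }
    return false;
}
```

The trade-off is that obstacles thinner than the step size can still be tunneled past, so the step count has to be chosen relative to object speed.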
I originally had the oriented bounding boxes using their own modelview matrix, but simulating the object's extent around other objects caused this to quickly break down. This problem was hard to track down because the bounding box renderer also used the modelview matrix. Eventually I just bit the bullet and switched to calculating the bounding box separately from the object's rendering logic.
Procedurally Generated Planet (Final Project)
For the final project, I wanted to implement Keplerian orbits with procedurally generated planets. This was an individual project.
Current State of the Art
Rendering Improvements to the State-of-the-Art
Cphere had some of the best rendering, and for the planet, my goal was to implement similar ideas. I was only able to view the WebGL version, so everything had to be re-implemented by inspecting the OpenGL ES sources. In their demo, almost every renderer is strictly fixed around (0, 0, 0) and supports only one planet. I was able to generalize the water rendering, the cloud translations, and the lens flare. I was also able to reuse the textures from the WebGL demo, which was really nice since I'm not an artist myself.
My implementation improves upon that demo by breaking up the shaders into separate renderers and generalizing the vector math algorithms around a given point instead of the world origin. I also expanded the shadow pass to run at the scale of planets and implemented the same scene graph and bounding box rendering system I had in my previous assignments.
I personally didn't like the way the planetary sphere was implemented in the terrain generation (landmasses were generated separately from the planet and took a very long time to generate anything). As an improvement, I used a simplex noise function as an elevation on each of the sphere's vertices. Since this simply deforms the planet's mesh, it was much easier to work with and, I imagine, less computationally intensive.
The Cphere demo built its planetary sphere from rings stacked on top of each other. This has the drawback of bunching vertices near the poles, which, given the simplex noise function, meant there was more detail near the poles. My implementation uses a cube sphere instead, which distributes the vertices evenly and makes drawing textures easier.
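The cube-sphere construction itself is simple: grid each of the six cube faces, then push every vertex onto the sphere by normalizing it. A sketch:

```cpp
#include <cmath>

struct Pt { float x, y, z; };

// Cube-sphere mapping: a point on the unit cube is projected onto the unit
// sphere by normalizing it. Each cube face becomes one grid patch, giving a
// far more even vertex distribution than stacked latitude rings.
Pt cubeToSphere(Pt p) {
    float len = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    return Pt{p.x / len, p.y / len, p.z / len};
}
```

Elevation is then applied by scaling each unit-sphere vertex along its own direction by the noise value.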
The last rendering improvement I made was implementing a multi-planet camera. I used the Three.js trackball orbit camera algorithms to improve the Cphere camera controller. Much like how the trackball has a center focus area, my planet camera is centered on the currently selected planet. The selection algorithm is a simple ray-sphere intersection test from the point where the user clicked.
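The ray-sphere selection test reduces to solving a quadratic for the ray parameter t; a sketch (assuming a normalized ray direction):

```cpp
#include <cmath>

struct R3 { float x, y, z; };

// Solve |o + t*d - c|^2 = r^2 for t and report whether a non-negative root
// exists, i.e. the sphere is hit in front of the ray origin. With |d| = 1
// the quadratic is t^2 + 2*b*t + cterm = 0 where b = dot(o - c, d).
bool raySphere(R3 o, R3 d, R3 c, float r) {
    R3 oc{o.x - c.x, o.y - c.y, o.z - c.z};
    float b = oc.x * d.x + oc.y * d.y + oc.z * d.z;
    float cterm = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - r * r;
    float disc = b * b - cterm;       // quarter discriminant
    if (disc < 0.0f) return false;    // ray misses the sphere
    float s = std::sqrt(disc);
    return (-b - s) >= 0.0f || (-b + s) >= 0.0f;  // a root in front of the ray
}
```

When several planets are hit, the one with the smallest positive t (nearest to the camera) is selected.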
The Cphere planet rotates based on longitude and latitude. My improvement uses the same longitude and latitude ideas (which simplify zooming and smooth rotation around the planet), but outputs a camera position and quaternion, which I can smoothly LERP and SLERP to when a new planet is selected.
Most planetary games use circular orbits, which visually work fine. I wanted to implement Keplerian orbits, which utilize 6 orbital parameters to fully define an elliptical orbit.
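The time-dependent part of those six elements is the anomaly, recovered from Kepler's equation M = E - e·sin(E), which has no closed-form solution; a few Newton iterations suffice. A sketch:

```cpp
#include <cmath>

// Recover the eccentric anomaly E from the mean anomaly M (radians) and the
// eccentricity e (0 <= e < 1) by Newton's method on f(E) = E - e*sin(E) - M.
double solveEccentricAnomaly(double M, double e, int iters = 10) {
    double E = M;  // good initial guess for small eccentricity
    for (int i = 0; i < iters; ++i) {
        double f  = E - e * std::sin(E) - M;
        double fp = 1.0 - e * std::cos(E);  // derivative f'(E)
        E -= f / fp;                        // Newton step
    }
    return E;
}
```

From E, the true anomaly and the position on the ellipse follow directly, and the remaining elements (inclination, node, argument of periapsis) are fixed rotations applied once per orbit.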
Terrain generation! I integrated ImGui to make real-time adjustments and regenerate the currently selected planet. See the screenshot below:
In the end, the key algorithm that worked best for me used two types of simplex noise. The first was four layers of normal [-1, 1] simplex noise to create the continents: the strength affects the total magnitude, the base roughness sets the initial frequency, and the roughness multiplies the frequency every layer. I also include a min value to maintain the spherical shape near the oceans. Once the continents were created, I used a ridged noise function, 1 minus the absolute value of the [-1, 1] noise, to create the mountain ranges, with similar parameters for each layer.
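A sketch of that layered elevation function; `noise3` stands in for the simplex noise (any [-1, 1] 3D noise works here), and the ridged mountain pass would reuse the same loop shape with 1 - |n| per layer:

```cpp
#include <functional>
#include <algorithm>

// Any [-1, 1] 3D noise source; the project uses simplex noise.
using Noise3 = std::function<float(float, float, float)>;

// Fractal continent elevation at a sphere vertex (x, y, z): each layer adds
// finer detail (frequency *= roughness, amplitude halves), and minValue
// clamps the result so the oceans stay spherical.
float elevation(float x, float y, float z, const Noise3& noise3,
                float strength, float baseRoughness, float roughness,
                float minValue, int layers) {
    float value = 0.0f, freq = baseRoughness, amp = 1.0f;
    for (int l = 0; l < layers; ++l) {
        // Remap raw noise from [-1, 1] to [0, 1] before accumulating.
        float n = (noise3(x * freq, y * freq, z * freq) + 1.0f) * 0.5f;
        value += n * amp;
        freq *= roughness;
        amp *= 0.5f;
    }
    return std::max(0.0f, value - minValue) * strength;
}
```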
The shadow map mostly worked, as long as the sun is far away (i.e., it acts like a directional light source).
Multiple stable Keplerian orbits and rendering multiple planets! I was even able to get moons on my planets! See below:
What didn’t work
Camera motion between planets can add unnecessary rotations. I believe this is due to the quaternion rotations conflicting with the longitude and latitude rotations I use to determine where on the new planet to animate to.
Shadow mapping breaks when the sun is in the center. The sun doesn't know which direction to project its camera, so the camera becomes undefined. More on this in the future work section.
Objects that orbit something often collide over time. Since the orbital parameters are constant, finding ones that would never collide at planet-generation time would require forward-simulating all of the possible orbital paths. This is a problem I wasn't able to find a good solution for.
Currently, this system only supports one-body problems. As more planets are added, each would exert its own gravitational influence. I attempted to approximate this with spheres of influence (if a mass is far enough away, another body's gravity won't affect it), but I was unable to get any noticeable results.
As more orbits are added and simulated forward, the computational cost would also increase. I was unable to get orbital collision checking to work, but it would likely be one of the first major bottlenecks.
The planet meshes themselves would also become a bottleneck and would need some sort of tessellation or scene-graph culling to render only the planets in view, at an appropriate level of detail. Additionally, planets far enough away that terrain detail can't be seen could be replaced with a quad to simplify rendering.
In the future, I would like to incorporate transfer calculations into the simulation. It was something I was planning since the beginning, but I was unable to get it working in a way I was confident showing off. Turns out rocket science is hard.
I would also like to figure out a way for the shadow mapping to work when the sun is in the center. My initial ideas are to use something like a panoramic camera, or to let the problem solve itself with ray tracing.
Finally, I would like more realistic terrain texturing. Grass placement is currently determined via simplex fractal noise, while the other textures are chosen purely by height; I would like to extend the noise-based approach to those texture types as well.
OpenGL, GLFW, C++
OpenMP for thread processing
stb_image for texture loading
OpenCV (for video export)
ImGUI for debugging and real time planet generation tweaking