Researchers Develop an AI that Generates Animations for Video Games
Researchers at the University of Edinburgh and Method Studios have developed an artificial neural network system that animates video game characters.
The researchers, led by Daniel Holden, call their artificial neural network system a “Phase-Functioned Neural Network” (PFNN). Traditionally, video game animation relies on painstakingly pre-scripted clips. Holden told Ars Technica, “So, instead of storing all the data and selecting which clip to play with, [we] have a system which actually generates animations on the fly, given the user input.”
In the paper titled “Phase-Functioned Neural Networks for Character Control,” published on The Orange Duck, the researchers write that the PFNN generates animations based on the user’s control input, the previous state of the character, and the geometry of the scene. The PFNN was trained for 30 hours on an NVIDIA GeForce GTX 660 GPU. According to the researchers, once trained, the system is “extremely fast and compact, requiring only milliseconds of execution time and a few megabytes of memory.”
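The “phase-functioned” part of the name refers to how the network’s weights are produced: rather than being fixed, they are regenerated every frame as a smooth, cyclic function of a phase variable tracking where the character is in its gait cycle (in the paper, a cubic Catmull-Rom spline blending four stored sets of weights). The Python sketch below is a minimal illustration of that idea under those assumptions; the class, layer sizes, and initialization are invented for demonstration and are not the authors’ code.

```python
import numpy as np

def elu(x):
    """Exponential linear unit, the activation reported in the paper."""
    return np.where(x > 0.0, x, np.exp(np.minimum(x, 0.0)) - 1.0)

def catmull_rom(p0, p1, p2, p3, t):
    """Cubic Catmull-Rom interpolation between p1 and p2, t in [0, 1)."""
    return 0.5 * (
        2.0 * p1
        + (p2 - p0) * t
        + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t ** 2
        + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t ** 3
    )

class PhaseFunctionedNet:
    """Toy phase-functioned network: the weights used at each frame are a
    smooth, cyclic blend of four stored control weight sets, selected by
    the character's animation phase."""

    def __init__(self, dims, n_controls=4, seed=0):
        rng = np.random.default_rng(seed)
        # One full set of layer weights/biases per spline control point.
        self.W = [[rng.standard_normal((dims[i + 1], dims[i])) * 0.1
                   for i in range(len(dims) - 1)] for _ in range(n_controls)]
        self.b = [[np.zeros(dims[i + 1])
                   for i in range(len(dims) - 1)] for _ in range(n_controls)]
        self.n = n_controls
        self.layers = len(dims) - 1

    def weights_at(self, phase):
        """Blend the control weight sets at a phase value in [0, 2*pi)."""
        u = (phase % (2.0 * np.pi)) / (2.0 * np.pi) * self.n
        k, t = int(u), u - int(u)
        idx = [(k - 1) % self.n, k % self.n,
               (k + 1) % self.n, (k + 2) % self.n]
        W = [catmull_rom(*(self.W[i][l] for i in idx), t)
             for l in range(self.layers)]
        b = [catmull_rom(*(self.b[i][l] for i in idx), t)
             for l in range(self.layers)]
        return W, b

    def forward(self, x, phase):
        """One frame: x packs the user controls, the previous character
        state, and local terrain geometry; the output is the next state."""
        W, b = self.weights_at(phase)
        h = x
        for Wl, bl in zip(W[:-1], b[:-1]):
            h = elu(Wl @ h + bl)
        return W[-1] @ h + b[-1]

# Toy usage: the feature sizes here are made up, not the paper's.
net = PhaseFunctionedNet(dims=(48, 128, 40))
frame = net.forward(np.zeros(48), phase=1.3)
print(frame.shape)  # (40,) -- the next pose/state vector
```

Because only the four control weight sets are stored, the memory footprint stays small even though the effective weights vary continuously with the motion, which is consistent with the “few megabytes” figure the researchers report.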
As shown in the video shared by the researchers, a character animated with the PFNN performs natural-looking movements such as walking and running over rough terrain, jumping over obstacles, climbing over large rocks, and crouching under low ceilings.
The researchers revealed that the PFNN still cannot handle complex interactions with the environment well, in particular precise hand movements such as those needed to interact with other objects in the scene or to climb walls.
The researchers hope that one day their work could be applied to physically based animation, allowing an animated character to move across terrain under different physical conditions, such as unstable rope bridges or slippery floors. They also hope to see their work applied in other fields, such as videos or other periodic data, for instance functional magnetic resonance imaging (fMRI) images of heartbeats.