The 17th Annual ACM SIGGRAPH Conference on Motion, Interaction and Games (MIG)
George Mason University – Mason Square (Arlington Campus)
November 21 – 23, 2024
Welcome to MIG 2024!
Motion plays a crucial role in interactive applications, such as VR, AR, and video games. Characters move around, objects are manipulated or move under physical constraints, entities are animated, and the camera moves through the scene. Motion is currently studied in many different research areas, including graphics and animation, game technology, robotics, simulation, and computer vision, as well as physics, psychology, and urban studies. Cross-fertilization between these communities can considerably advance the state of the art in the area.
The Motion, Interaction, and Games conference aims to bring together researchers from these fields to present their most recent results, initiate collaborations, and help advance this research area. The conference will feature regular paper sessions, poster presentations, and keynote speeches by a selection of internationally renowned speakers in all areas related to interactive systems and simulation. The conference includes entertaining cultural and social events that foster casual and friendly interactions among the participants.
NEWS
- Registration is open! The early bird registration deadline has been extended to November 1st, 2024 (originally October 21st, 2024).
- The paper submission deadline has been extended to July 26th, 2024.
- Paper submission is open.
- Call for papers is announced.
- The 17th annual ACM SIGGRAPH conference on Motion, Interaction, and Games (MIG ‘24) will take place at George Mason University – Mason Square (Arlington Campus), 21-23 November 2024.
SCOPE
We invite original work on a broad range of topics, including but not limited to:
- Animation systems
- Behavioral animation
- Character animation
- Clothes, skin and hair
- Crowd simulation
- Deformable models
- Facial animation
- Game interaction and player experience
- Game technology
- Gesture recognition
- Group and crowd behavior
- Human motion analysis
- Interaction in virtual and augmented reality
- Interactive storytelling in games
- Machine learning techniques for animation
- Motion capture & retargeting
- Motion control
- Motion in Performing Arts
- Motion in sports
- Motion rehabilitation systems
- Multimodal interaction: haptics, sound, etc.
- Navigation & path planning
- Particle systems
- Physics-based animation
- Real-time fluids
- Virtual humans
SUBMISSION
We invite submissions of original, high-quality papers on any of the topics of interest mentioned above or any related topic. Submissions may be 4-6 pages for short papers and up to 10 pages for long papers, excluding references. We encourage authors to submit their work as a short paper if the content fits within the 6-page limit. Videos are required for techniques involving motion or animation.
All accepted papers, long and short, will appear in the conference proceedings and be archived in the ACM Digital Library.
All submissions should be formatted using the SIGGRAPH formatting guidelines (sigconf). The LaTeX template can be found here. For the review version, please use the command:
\documentclass[sigconf, screen, review, anonymous]{acmart}
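For authors starting from scratch, a minimal skeleton along the following lines may be helpful; it is an unofficial sketch that assumes only the standard acmart class, and the title, author block, and bibliography file name are placeholders, so please defer to the official template for any required metadata:

% Minimal, unofficial skeleton for a review submission (placeholder metadata).
\documentclass[sigconf, screen, review, anonymous]{acmart}

\begin{document}

\title{Your Paper Title}

% The 'anonymous' option suppresses author identities in the compiled PDF.
\author{Anonymous Author}
\affiliation{%
  \institution{Anonymous Institution}
  \country{Country}}

\begin{abstract}
A one-paragraph summary of the contribution.
\end{abstract}

\maketitle

\section{Introduction}
Body text goes here.

% 'references.bib' is a placeholder bibliography file name.
\bibliographystyle{ACM-Reference-Format}
\bibliography{references}

\end{document}

For the camera-ready version, the review and anonymous options would typically be removed, but please follow the camera-ready instructions provided with the acceptance notification.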
The review process will be dual-anonymous, and papers must not have previously appeared in, or be currently submitted to, any other conference or journal. All papers will be reviewed by at least three (3) experts from the Program Committee. There is no rebuttal process.
Papers and supplementary material should be submitted using EasyChair: https://easychair.org/my/conference?conf=mig2024
Extended Journal Submissions
After the conference, all authors of accepted long papers are invited to submit revised and extended versions of their work to a special issue of the Computers & Graphics journal. Extended papers must include at least 30% new material, such as new results, additional experiments, and/or improvements to the original methodology. Extended versions will be evaluated based on their scientific contribution rather than page length.
Posters
We also invite submissions of poster papers on any of the topics of interest and related areas. Each submission should be 1-2 pages in length. Two types of work can be submitted directly for poster presentation:
- Work that has been published elsewhere but is of particular relevance to the MIG community can be submitted as a poster. This work and the venue in which it is published should be identified in the abstract;
- Work that is of interest to the MIG community but is not yet mature enough to appear as a paper.
Posters will not appear in the conference proceedings or the ACM Digital Library.
IMPORTANT DATES
Long and Short Paper Submission Deadline: July 26, 2024 (extended from July 19, 2024)
Long and Short Paper Acceptance Notification: September 13, 2024 (extended from September 6, 2024)
Long and Short Paper Camera Ready Deadline: September 27, 2024
Poster Submission Deadline: September 13, 2024
Poster Notification: September 27, 2024
Final Version of Accepted Posters: October 4, 2024
All submission deadlines are 23:59 AoE (Anywhere on Earth)
PROGRAM
Time slot | Thursday, November 21 |
---|---|
8:30-9:00 | Registration |
9:00-9:15 | Opening session |
9:15-10:15 | Keynote 1 – Lingjie Liu |
10:15-10:45 | Coffee break |
10:45-12:00 | Papers session 1 – Controlling Characters |
11:45-2:00 | Lunch (on your own) |
2:00-3:30 | Papers session 2 – Physics Simulation |
3:30-4:00 | Coffee break |
4:00-5:00 | Papers session 3 – Virtual Reality |
6:00-8:00 | Reception |
Time slot | Friday, November 22 |
---|---|
8:30-9:00 | Registration |
9:00-10:00 | Keynote 2 – Moritz Bächer |
10:00-10:30 | Coffee break |
10:30-12:00 | Papers session 4 – Character Animation |
12:00-2:00 | Lunch (on your own) |
2:00-3:30 | Papers session 5 – Expressive Characters |
3:30-5:00 | Posters fast forward |
Time slot | Saturday, November 23 |
---|---|
8:30-9:00 | Registration |
9:00-10:00 | Keynote 3 – Eakta Jain |
10:00-10:30 | Coffee break |
10:30-11:45 | Papers session 6 – Natural Phenomena and AI |
11:45-12:00 | Closing ceremonies and awards |
KEYNOTES
Lingjie Liu (University of Pennsylvania)
Title: High-Fidelity Human Pose Tracking and Realistic Motion Synthesis in Real-World Scenes
Abstract: In recent years, accurately tracking human poses in world coordinate space and synthesizing realistic and precisely controllable motions that seamlessly integrate into the digital reconstruction of real-world environments has become crucial for advancing applications in augmented reality, virtual reality, robotics, and more. In this talk, I will introduce our approach to high-precision human pose estimation within world coordinates, enabling accurate human motion reconstruction even in challenging scenarios from in-the-wild video data. Next, I will discuss our work on training human motion generative models that synthesize realistic motions with fine-grained semantic and/or trajectory controls. These approaches enable us to incorporate synthesized motions within digital reconstructions of real-world environments. Finally, I will address current limitations in the field and propose directions for future research to tackle these challenges, paving the way toward more immersive and interactive digital environments.
Bio: Lingjie Liu is the Aravind K. Joshi Assistant Professor in the Department of Computer and Information Science at the University of Pennsylvania, where she leads the Penn Computer Graphics Lab. She is also a member of the General Robotics, Automation, Sensing & Perception (GRASP) Lab. Previously, she was a Lise Meitner Postdoctoral Research Fellow at the Max Planck Institute for Informatics. She received her Ph.D. degree from the University of Hong Kong in 2019. Her research interests are at the interface of Computer Graphics, Computer Vision, and AI, with a focus on Neural Scene Representations, Neural Rendering, Human Performance Modeling and Capture, and 3D Reconstruction.
Moritz Bächer (Disney Research)
Title: Breathing Life into Disney’s Robotic Characters
Abstract: At Disney, we are redefining entertainment robotics. In this talk, I will first discuss model-based techniques that aid with the design and control of expressive robotic characters. At the core of these techniques is a differentiable representation of the robot’s time-varying state. In the second part of my talk, I will provide insight into our learning-based tools that enable the rapid design of freely-roaming robotic characters that can execute artist-specified animations or generated motions, using deep reinforcement learning. I will highlight how a combination of self-supervised learning and reinforcement learning results in realistic, human-level robot movement, enabled by extracting the structure in human motion.
Bio: Moritz Bächer is the Associate Lab Director of Disney’s Zurich-based robotics team, where he leads a strategic program focusing on the development of novel model- and learning-based tools for the design and control of believable robotic characters. His core expertise is the optimal design and control of both soft and rigid systems, using a combination of differentiable simulation and reinforcement learning. Prior to joining Disney, Moritz received his Ph.D. from the Harvard School of Engineering and Applied Sciences and his master’s degree from ETH Zurich.
Eakta Jain (University of Florida)
Title: Interacting with Non-Human Intelligent Agents
Abstract: Robots are entering our lives and workplaces as companions and teammates. Interacting with non-human intelligent agents will become a necessary component of both life and work. While this may seem unprecedented, in reality, humans have successfully collaborated with non-human agents before: humans and animals have partnered for millennia. What can we learn from human-animal interaction that informs human-robot interaction? This talk will present a first examination of human-horse interaction to inform human-robot interaction. I will discuss our findings based on three sources gathered over a year of fieldwork: observations, interviews and journal entries. I will also offer design guidelines based on these findings and opportunities for interdisciplinary, multi-cultural research in this open scientific frontier.
Bio: Dr. Eakta Jain is an Associate Professor of Computer and Information Science and Engineering at the University of Florida. She received her PhD and MS degrees in Robotics from Carnegie Mellon University and her B.Tech. degree in Electrical Engineering from IIT Kanpur. She has industry experience at Texas Instruments R&D labs, Disney Research Pittsburgh, and the Walt Disney Animation Studios. Dr. Jain is interested in the safety, privacy, and security of data gathered for user modeling, particularly eye tracking data. Her areas of work include graphics and virtual reality, generation of avatars, and human factors in the future of work and transportation. Her research has been nominated for multiple best paper awards and has been funded through faculty research awards from Meta and Google, federal funding from the National Science Foundation, the National Institute of Mental Health, and the US Department of Transportation, and state funding from the Florida Department of Transportation. Dr. Jain is an ACM Senior Member. She served as Technical Program Chair for the ACM Symposium on Eye Tracking Research and Applications (2020) and the ACM/Eurographics Symposium on Applied Perception (2021). She serves on the ACM SAP Steering Committee (2022-2024) and as a Director on the ACM SIGGRAPH Executive Committee (2022-2025).
PAPERS
All talks will be followed by a 5-minute audience Q&A.
Session + papers | Type | Duration |
---|---|---|
Papers Session 1 – Controlling Characters (Session Chair: Victor Zordan) | 55m | |
Deformable Elliptical Particles for Predictive Mesh-Adaptive Crowds. Dominic Ferreira, Liam Shatzel and Brandon Haworth | Long | 15m |
Controller ratings versus performance in a mobile augmented reality platform game. Aline Normoyle, Neha Thumu and Yi Fei Cheng | Short | 10m |
Social Crowd Simulation: Improving Realism with Social Rules and Gaze Behavior. Reiya Itatani and Nuria Pelechano | Long | 15m |
Papers Session 2 – Physics Simulation (Session Chair: Adam Bargteil) | 1h15m | |
Gram-Schmidt Voxel Constraints for Real-time Destructible Soft Bodies. Tim McGraw | Long | 15m |
Adaptive Sub-stepping for Constrained Rigid-body Simulation. Sheldon Andrews and Chris Giles | Short | 10m |
Adaptive Distributed Simulation of Fluids and Rigid-bodies. Haoyang Shi, Sheldon Andrews, Victor Zordan and Yin Yang | Long | 15m |
Estimating Cloth Elasticity Parameters From Homogenized Yarn-Level Models. Joy Xiaoji Zhang, Gene Wei-Chin Lin, Lukas Bode, Hsiao-Yu Chen, Tuur Stuyck and Egor Larionov | Long | 15m |
Papers Session 3 – Virtual Reality (Session Chair: Eric Paquette) | 1h | |
The Effects of Virtual Character’s Intelligence and Task’s Complexity during an Immersive Jigsaw Puzzle Co-solving Task. Minsoo Choi, Dixuan Cui, Alexandros Koilias and Christos Mousas | Long | 15m |
A Comparative Study of Omnidirectional Locomotion Systems: Task Performance and User Preferences in Virtual Reality. Deyrel Diaz, Luke Vernon, Elizabeth Skerritt, James O’Neil, Andrew Duchowski and Matias Volonte | Long | 15m |
The Impact of Color and Object Size on Spatial Cognition and Object Recognition in Virtual Reality. Deyrel Diaz, Andrew Duchowski, Matias Volonte, Andrew Robb, Sabarish Babu and Chris Pagano | Long | 15m |
Papers Session 4 – Character Animation (Session Chair: Tim McGraw) | 1h20m | |
Real-time Diverse Motion In-betweening with Space-time Control. Yuchen Chu and Zeshi Yang | Long | 15m |
ReGAIL: Toward Agile Character Control From a Single Reference Motion. Paul Boursin, Yannis Kedadry, Victor Zordan, Paul Kry and Marie-Paule Cani | Long | 15m |
Dog Code: Human to Quadruped Embodiment using Shared Codebooks. Donal Egan, Alberto Jovane, Jan Szkaradek, George Fletcher, Darren Cosker and Rachel McDonnell | Long | 15m |
Factorized Motion Diffusion for Precise and Character-Agnostic Motion Inbetweening. Justin Studer, Dhruv Agrawal, Dominik Borer, Seyedmorteza Sadat, Robert W. Sumner, Martin Guay and Jakob Buhmann | Long | 15m |
Papers Session 5 – Expressive Characters (Session Chair: Aline Normoyle) | 1h15m | |
ProbTalk3D: Non-Deterministic Emotion Controllable Speech-Driven 3D Facial Animation Synthesis Using VQ-VAE. Sichun Wu, Kazi Injamamul Haque and Zerrin Yumak | Long | 15m |
Expressive Animation Retiming from Impulsed-Based Gestures. Bienvenu Marie, Pascal Guehl, Quentin Auger and Damien Rohmer | Short | 10m |
An Iterative Approach to Build a Semantic Dataset for Facial Expression of Personality. Satya Naga Srikar Kodavati, Anish Kanade, Wilhen Alberto Hui Mei and Funda Durupinar | Long | 15m |
EmoSpaceTime: Decoupling Emotion and Content through Contrastive Learning for Expressive 3D Speech Animation. Philine Witzig, Barbara Solenthaler, Markus Gross and Rafael Wampfler | Long | 15m |
Papers Session 6 – Natural Phenomena and AI (Session Chair: Paul Kry) | 1h | |
Twister Forge: controllable and efficient animation of virtual tornadoes. Jiong Chen, James Gain, Jean-Marc Chomaz and Marie-Paule Cani | Long | 15m |
Implicit and Parametric Avatar Pose and Shape Estimation From a Single Frontal Image of a Clothed Human. Fares Mallek, Carlos Vázquez and Eric Paquette | Long | 15m |
From Words to Worlds: Transforming One-line Prompt into Immersive Multi-modal Digital Stories with Communicative LLM Agent. Danrui Li, Samuel Sohn, Sen Zhang, Che-Jui Chang and Mubbasir Kapadia | Long | 15m |
POSTERS
- Predicting Users’ Difficulty Perception in a VR Platformer Game. Erdem Murat, Liuchuan Yu, Siraj Sabah, Haikun Huang and Lap-Fai Yu
- Joint Computational Design of Workspaces and Workplans. Yongqi Zhang, Haikun Huang, Erion Plaku and Lap-Fai Yu
- Enriching Physical-Virtual Interaction in AR Gaming by Tracking Identical Real Objects. Liuchuan Yu, Ching-I Huang, Hsueh-Cheng Wang and Lap-Fai Yu
- Dragon’s Path: Synthesizing User-Centered Flying Creature Animation Paths in Outdoor AR. Minyoung Kim, Rawan Alghofaili, Changyang Li and Lap-Fai Yu
- Reactive Gaze during Locomotion in Natural Environments. Julia Melgare, Damien Rohmer, Soraia Raupp Musse and Marie-Paule Cani
- Evaluation of Body Parts-based Latent Representations for Skeletal Human Motion Reconstruction. Philippe de Clermont Gallerande, Ludovic Hoyet, Ferran Argelaguet, Phillipe Gosselin and Quentin Avril
INTERNATIONAL PROGRAM COMMITTEE
- Aline Normoyle, Bryn Mawr College, USA
- Babis Koniaris, Edinburgh Napier University, United Kingdom
- Ben Jones, University of Utah, USA
- Brandon Haworth, University of Victoria, British Columbia, Canada
- Caroline Larboulette, Université de Bretagne, France
- Catherine Pelachaud, Sorbonne Université, France
- Christos Mousas, Purdue University, USA
- Claudia Esteves, Universidad de Guanajuato, Mexico
- Damien Rohmer, Ecole Polytechnique de Paris, France
- Daniel Holden, Epic Games, Canada
- Donald Engel, University of Maryland, Baltimore County, USA
- Edmond Ho, University of Glasgow, Scotland
- Eric Patterson, Clemson University, USA
- Floyd Chitalu, Independent
- Hang Ma, Simon Fraser University, British Columbia, Canada
- Hong Qin, Stony Brook University, USA
- James Gain, University of Cape Town, South Africa
- Julio Godoy, University of Minnesota Twin Cities, USA
- Katja Zibrek, Inria Rennes, France
- Kenny Erleben, University of Copenhagen, Denmark
- Lesley Istead, Carleton University, Canada
- Marie Andréia Formico Rodrigues, University of Fortaleza, Brazil
- Matthias Teschner, University of Freiburg, Germany
- Michael Neff, University of California Davis, USA
- Mikhail Bessmeltsev, University of Montreal, Canada
- Miles Macklin, Nvidia, USA
- Mubbasir Kapadia, Roblox, USA
- Nuria Pelechano, Universitat Politècnica de Catalunya, Spain
- Panayiotis Charalambous, CYENS – Center of Excellence, Cyprus
- Pei Xu, Stanford University, USA
- Rachel McDonnell, Trinity College Dublin, Ireland
- Rahul Narain, Indian Institute of Technology Delhi, India
- Rinat Abdrashitov, Meta Reality Labs, Toronto, Canada
- Ronan Boulic, Ecole Polytechnique Federale de Lausanne, Switzerland
- Shinjiro Sueda, Texas A&M University, USA
- Snehasish Mukherjee, Roblox, USA
- Sophie Joerg, University of Bamberg, Germany
- Stephen Guy, University of Minnesota, USA
- Tianlu Mao, Institute of Computing Technology, Chinese Academy of Sciences, China
- Tiberiu Popa, Concordia University, Quebec, Canada
- Xiaogang Jin, Zhejiang University, China
- Yin Yang, University of Utah, USA
- Yiorgos Chrysanthou, University of Cyprus, Cyprus
- Yuting Ye, Meta, USA
- Zerrin Yumak, Utrecht University, Netherlands
ORGANIZING COMMITTEE
- Conference Chairs: Adam Bargteil
- Program Chairs: Sheldon Andrews, Soraia Musse
- Poster Chair: Zachary Ferguson
- Local Chair: Erdem Murat
REGISTRATION
Registration is now open! Please use this link. If you have any questions or require any assistance with your registration, please do not hesitate to contact us at motioningames24@gmail.com.
Registration Type | Early (By 10/21/2024) | Regular (After 10/21/2024) |
---|---|---|
ACM Professional Member | $400 | $450 |
ACM Non-Member | $450 | $500 |
ACM Student Member | $200 | $250 |
ACM Student Non-Member | $225 | $275 |
Visa Support
ACM is able to provide visa support letters to attendees, authors with accepted papers or posters, and members of the conference committee. If you are a recipient of an ACM-, SIG-, or Conference-funded travel grant, please include this information in your request. For visa support letters, please complete the following request online: https://supportletters.acm.org/. Please allow up to 10 business days to receive a letter. All requests are handled in the order they are received. The information below should be included with the request:
- Your name as it appears on your passport
- Your current postal mailing address
- The name of the conference you are registering for. Only accepted authors may request a visa support letter prior to registering for the conference.
- Your registration confirmation number
- If you have any papers accepted for the conference, please provide the title and indicate whether you are the “sole author” or a “co-author”. Authors may indicate their paper title; if you do not have a paper, speakers may provide the title of their presentation instead.
VENUE
George Mason University – Mason Square (Arlington Campus)
3351 Fairfax Dr, Arlington, VA 22201
Navigation
Upon arriving, please find the event registration in the lobby on the ground floor of Van Metre Hall (the main building).
If you enter from the front door shown in the picture below, the registration table is to your left at the end of the hall.
Reception
On Thursday at 6pm, we will have our reception at Lyon Hall, a restaurant just an 8-minute walk from Mason Square. Guests will be served hors d’oeuvres and a drink.
Transportation
Attendees arriving at either DCA or IAD airport can take the Metro to the Virginia Square-GMU station. DCA arrivals will need to take the Blue and Silver Lines; IAD arrivals can take the Silver Line directly. The venue is an 8-minute walk from the metro station.
Accommodation
Below are some hotels near the venue. These hotels have also been marked on the map. Buses run in the area and are convenient for transportation.
Arlington and Washington, D.C.
Arlington, Virginia, is a vibrant and diverse community located directly across the Potomac River from Washington, D.C. Known for its rich history, Arlington is home to iconic landmarks such as the Arlington National Cemetery, the Pentagon, and the Marine Corps War Memorial. With its unique blend of urban and suburban environments, Arlington offers a wide range of cultural, recreational, and dining experiences.
Fun fact: Arlington has been ranked the healthiest city in America. You are likely to find many healthy food options and joggers around!
Just a short metro ride from Arlington, you’ll find Washington, D.C., our nation’s capital. It’s a city rich in history and packed with landmarks that define American democracy, like the U.S. Capitol, the White House, and the Supreme Court. At the heart of D.C. is the National Mall, a vast green space that’s home to amazing museums under the Smithsonian umbrella and famous memorials such as the Lincoln Memorial, the Washington Monument, and the Vietnam Veterans Memorial.