Program

Keynote Speakers

Prof. Dr. Barbara Solenthaler, Computer Graphics Lab, ETH Zurich

Title: From Digital Humans to Digital Patients

Abstract: Recent advances in graphics – particularly AI-driven digital human modeling, character animation, and conversational agents – have shown how effectively we can visualize and interact with digital characters. These breakthroughs power film, games, and XR, demonstrating the enormous potential of graphics methods for scalable, high-fidelity face modeling and new forms of human-computer interaction. Their potential in healthcare, however, remains largely untapped. Yet we cannot simply transfer existing methods into clinical practice: medical applications bring specific requirements such as anatomically grounded representations, the ability to model pathological and evolving anatomy, functional prediction, and generalization to out-of-distribution groups such as infants. Addressing these needs requires rethinking how we capture, represent, simulate, render, and interact with digital humans.

In this talk, I will share our efforts to bridge this gap: from creating anatomical face models for predicting treatment outcomes, to modeling infants and designing appliances and implants, to developing AI-driven characters as health companions. These projects illustrate not only what graphics can enable in healthcare, but also how the unique challenges of medical applications can drive new methods and insights for the broader graphics community.

Bio: Barbara Solenthaler is a Titular Professor at the Computer Graphics Lab, Department of Computer Science, ETH Zurich. She received her PhD in Computer Science from the University of Zurich and was a Hans Fischer Fellow at the Institute for Advanced Study, Technical University of Munich. Her research focuses on animation and simulation, with a particular emphasis on the intersection of computer graphics and healthcare, where her team develops technologies for 3D digital patient twins. She is also committed to translating scientific advances for societal benefit and contributes actively to the international computer graphics community through editorial and organizational service.

Prof. Dr. Marc Erich Latoschik, Universität Würzburg

Title: Determinants of a Metaverse: From Avatars to Zero Latency

Abstract: What happens when your avatar looks exactly like you — down to the last wrinkle? In this keynote, we’ll explore how lifelike 3D avatars and cutting-edge XR + AI research are reshaping the way we learn, heal, and connect. You’ll see how virtual embodiment is pushing the boundaries of identity, how social VR is turning classrooms into interactive playgrounds, and how VR therapy is helping people recover from knee surgery, manage obesity, and even sharpen their public-speaking skills. But we’ll also peek at the flip side: when latency glitches break immersion, when VR gambling clouds judgment, and when privacy comes under fire from biometric tracking. Along the way, we’ll share fresh theories of XR, new risks we’ve uncovered, and bold ideas to keep the metaverse safe, human-centered, and inspiring.

Bio: Marc Erich Latoschik is Professor and Chair for Human-Computer Interaction at the University of Würzburg, where he leads one of the top research groups worldwide in Extended Reality (XR). His work bridges computer science, AI, psychology, and cognitive sciences, with a focus on immersive and interactive systems. After early contributions to multimodal VR interfaces in the late 90s, his current research covers topics such as virtual embodiment, avatar realism, social XR, gamification, and therapeutic and educational applications of VR/AR. Marc has published more than 400 peer-reviewed articles, received multiple awards, and serves on leading program committees in the field.

Prof. Dr. Amit H. Bermano, Tel Aviv University

Title: Taming the beast: Controllability for Generative Diffusion Models

Abstract: Generative tasks are currently addressed almost exclusively using the foundation-model approach. This means that large, typically diffusion-based, generative models are trained on generic data and afterwards employed for specific tasks. Aside from computational resources, perhaps the biggest challenge with this approach is controllability: it is difficult to balance specific user requirements with the vast knowledge these models carry.

In this talk, I discuss my recent attempts to control these beasts. Focusing primarily on works in the body motion generation domain, but also touching upon 2D images, I portray control methods that utilize the input condition and noise spaces, intervene between generation steps, and, of course, manipulate the attention mechanism. The talk presents methods for combining motion generation with physical environments, multi-topology motion generation, style transfer and personalization, and a deep discussion on the attention mechanism and how to exploit it.

Bio: Amit H. Bermano is an associate professor at Tel Aviv University, currently on sabbatical at ETH Zurich. His research focuses on visual computing, with an emphasis on generative models across visual domains including images, video, animation, and geometry. Amit obtained his undergraduate and master’s degrees in Israel and his doctoral degree at ETH Zurich, working in collaboration with Disney Research Zurich as a student and later as a post-doc. Most of his postdoctoral work was performed at Princeton University, in the Princeton Graphics Group.

Industry Speakers

Dr. Antoine Milliez, Creatures R&D Lead, Industrial Light & Magic

Title: Pragmatic Solutions in Creature Animation at ILM

Abstract: At any given time, over 50 shows are in production in parallel at Industrial Light & Magic, with hundreds of artists located across five sites around the world. Innovating amid that constant buzz can seem tricky, as any disruption to existing workflows can have serious consequences. In addition, the support load carried by R&D teams in a 50-year-old company is substantial.
So how do we do it? In this talk, we’ll look at examples of pragmatic solutions to technical problems and how they’ve been implemented at ILM. We’ll also discuss areas where we’ve integrated state-of-the-art technology, and what it took to get there.

Bio: Antoine Milliez is a Staff R&D Engineer at Industrial Light & Magic and has worked on character rigging, animation and simulation tools since 2017. As Creatures R&D Lead, he oversees technical efforts across the character pipeline. He holds a PhD in Computer Graphics from ETH Zurich and has spent 5 years at Disney Research working on novel ways to stylize animations and simulations.

Dr. Thabo Beeler, Researcher, Google XR

Title: Digital Humans for Android XR

Abstract: Google and Samsung just released the first product powered by Android XR, Google’s new operating system developed for augmented and virtual reality devices. Digital Humans played a central role in developing Android XR and the Galaxy XR headset – from product design, to natural input, to user experiences. In this presentation, I will give a quick overview of some of the underlying Digital Human technology we developed for Android XR and how it is being used.

Bio: Thabo Beeler is a Research Director at Google, where he is leading the work on Digital Humans for Android XR. Prior to that he was a Principal Research Scientist at Disney Research, where he headed the Capture and Effects group and led the research initiative on Digital Humans. He has worked on Digital Humans for more than 15 years, has published over 80 papers on the subject, and his research has been recognised by several awards, including the prestigious Eurographics Young Researcher Award and the Eurographics PhD Award. His work has been utilised in over 50 feature films to create photoreal digital actors and was recognised by a Technical Achievement Award of the Academy of Motion Picture Arts and Sciences in 2019. More information can be found at thabobeeler.com.

Prof. Dr. Verónica Orvalho, Founder & CEO, Didimo

Title: From One to Millions: The New Science of Scalable Character Creation

Abstract: As virtual worlds grow in scale and interactivity, character creation has become a critical bottleneck. Studios still spend weeks on a single rig-ready character, while modern workflows, from UGC platforms to AI-assisted production and large-scale simulation, demand not individuals but populations: thousands of consistent, stylized, and animation-ready characters. In this presentation, I will share how our research at Didimo and the Popul8 platform automates this pipeline end-to-end, unifying morphable models, topology conversion, stylization learning, asset fitting, and animation retargeting into scalable infrastructure. Drawing from production deployments with Electric Square/Keywords, SONY, Colossal Order, and Playable Worlds, I will outline the research challenges ahead for motion, interaction, and population-scale character systems.

Bio: Verónica Orvalho is the founder and CEO of Didimo, the deep-tech company behind Popul8™, a platform that automates 3D character creation for games, simulations, and digital worlds. With a PhD in Computer Graphics and as a Professor at the University of Porto, she has built and patented breakthrough technologies that redefine how digital humans come to life, used by Sony, Universal Studios, Colossal Order, and leading game developers worldwide. A TEDx speaker and frequent keynote at international scientific conferences, Verónica blends academic depth with entrepreneurial energy, turning research into real-world impact. Recognized as a European Women Innovator, EIC Ambassador, and SIGGRAPH General Submissions Chair, she bridges science and creativity, transforming advanced AI and computer graphics into tools that empower artists and scale gaming. Her mission is simple: make the digital world more human.

Schedule

Paper Sessions

The published papers can be found in the ACM proceedings.

Session 1 — Virtual Reality and Augmented Reality (Session Chair: Damien Rohmer)

Title | Authors | Type
Feel to Aim: Haptic Assistance for Enhanced Targeting in Virtual Reality | Jean Botev, Johannes Günter Herforth, Marnix Van den Wijngaert | Long
Enhancing Foveated Rendering with Weighted Reservoir Sampling | Ville Cantory, Darya Biparva, Haoyu Tan, Tongyu Nie, John Schroeder, Ruofei Du, Victoria Interrante, Piotr Didyk | Long
Storyboarding in Extended Reality: Leveraging Real-world Elements in Storyboard Creation | Federico Manuri, Federico Mafrici, Andrea Sanna | Short
Beyond Buttons: A User-centric Approach to Hands-free Locomotion in Virtual Reality via Voice Commands | Jan Hombeck, Henrik Voigt, Kai Lawonn | Invited
Immersive Data-Driven Storytelling: Scoping an Emerging Field Through the Lenses of Research, Journalism, and Games | Julián Méndez, Weizhou Luo, Rufat Rzayev, Wolfgang Büschel, Raimund Dachselt | Invited

Session 2 — Animation and Style (Session Chair: Joseph Kider)

Title | Authors | Type
Implicit Bézier Motion Model for Precise Spatial and Temporal Control | Luca Vögeli, Dhruv Agrawal, Martin Guay, Dominik Borer, Robert Sumner, Jakob Buhmann | Long
Trajectory-aware Smears for Stylized 3D Animations | Lou Tremolieres, Jean Basset, Pierre Bénard, Pascal Barla | Short
Earthbender: An Interactive System for Stylistic Heightmap Generation using a Guided Diffusion Model | Danial Barazandeh, Gabriel Zachmann | Long

Session 3 — Games and Simulation (Session Chair: Aline Normoyle)

Title | Authors | Type
A Time- and Space-Efficient Adaptation of the Space Foundation System for Digital Games | Daniel Dyrda, Kerstin Pfaffinger, Claudio Belloni, Martin Schacherbauer, Johanna Pirker, Gudrun Klinker | Long
XPBD Simulation of Constitutive Materials with Exponential Strain Tensor | Ozan Cetinaslan | Long
Understanding Player Dynamics in Battle Royale Environments: A Data-Driven Analysis Using the Caldera Dataset | Emily Port, Christopher Jacobs, Joseph T. Kider Jr. | Long

Session 4 — Character Animation (Session Chair: Ronan Boulic)

Title | Authors | Type
MIRRORED-Anims: Motion Inversion for Rig-space Retargeting to Obtain a Reliable Enlarged Dataset of Character Animations | Théo Cheynel, Thomas Rossi, Omar El Khalifi, Oscar Fossey, Damien Rohmer, Marie-Paule Cani | Long
DRUMS: Drummer Reconstruction Using Midi Sequences | Theodoros Kyriakou, Panayiotis Charalambous, Andreas Aristidou | Short
Real-time Hand Motion Synthesis for Playing a Virtual Guitar | Ryan Canales, Sophie Jörg | Long
High-Fiving with the Machine: Synthesising Reactive Motion from Human Input with Interaction Contact Labels | Oliver Hixon-Fisher, Jarek Francik, Dimitrios Makris | Long

Session 5 — Faces and Poses (Session Chair: Ryan Canales)

Title | Authors | Type
PhonemeNet: A Transformer Pipeline for Text-Driven Facial Animation | Philine Witzig, Barbara Solenthaler, Markus Gross, Rafael Wampfler | Long
Prompt-to-Animation: Generating Cognitively-Grounded Facial Expressions with LLMs | Funda Durupinar, Aline Normoyle | Long
Data-driven Modeling of Subtle Eye Region Deformations | Glenn Kerbiriou, Quentin Avril, Maud Marchal | Invited

Session 6 — Avatars and Agents (Session Chair: Funda Durupinar)

Title | Authors | Type
“I Don’t Like My Avatar”: Investigating Human Digital Doubles | Siyi Liu, Kazi Injamamul Haque, Zerrin Yumak | Long
Social Presence in Virtual Reality: A Comparative Study of AI NPCs and Human Instructors | Markus Schmidbauer, Johanna Pirker, Elisabeth Mayer, Thomas Odaker | Long
Investigating How Text and Motion Style Shape Directness in Embodied Conversational Agents | Michael O’Mahony, Cathy Ennis, Robert Ross | Short
Projective Multi-Agent Dynamics | Tomer Weiss | Long

Poster Session

Title | Authors
How Age Shapes Navigation Strategies: Insights from Two Serious Games | Nana Tian, Daniel McKeown, Doug Angus and Victor Schinazi
Guided by Thought: Investigating Virtual Reality Environments for Immersive Psychoeducation and Emotion Regulation | Marco Steiner, Georg Arbesser-Rastburg, Saeed Safikhani and Johanna Pirker
How Do I Implement It? Towards Software Patterns for Accessible Player Experience | Chrysa Bika, Kerstin Pfaffinger, Daniel Dyrda, Martin Schacherbauer and Johanna Pirker
Interaction Design for Exploring Complex Urban Data With Digital Twins in Virtual Reality | Georg Arbesser-Rastburg, Marco Steiner, Saeed Safikhani, Anna Schreuer, Jürgen Suschek-Berger, Lisa-Maria Fochler, Hermann Edtmayer and Johanna Pirker
LoomaXR: A Multi-User Platform for Performing Arts Co-Creation, Streaming and Interaction | Mercè Álvarez de la Campa Crespo, Kalli Koulloufidou, Fotos Frangoudes, Chrysostomos Chadjiminas, Ismail Hadjri, Theodoros Kyriakou, Alex Baldwin, Panayiotis Charalambous and Kleanthis Neokleous
Emotion-Driven Virtual Actors: EEG and Multimodal Data from Live Actor Performances | Natali Tereza Chavez
“We Are Not Prompts”: Game Designers’ Perception of Generative AI | Sultan Alharthi
Phase-Based Motion Reconstruction with Joint-Angle Constraints | Amir Azizi, Panayiotis Charalambous, Andreas Panayiotou and Yiorgos Chrysanthou
User identification based on conversational gestures | Aline Normoyle and Sophie Jörg
Unified Motion Retrieval by Example | Marilena Lemonari, Nicolas Hadjisavvas, Chrysostomos Chadjiminas, Panayiotis Charalambous and Efstathios Stavrakis
Lab Presentation: The Bamberg Computer Graphics Lab | Sophie Jörg and Luís Fernando de Souza Cardoso