Keynote Speakers

Prof. Dr. Barbara Solenthaler, Computer Graphics Lab, ETH Zurich
Title: From Digital Humans to Digital Patients
Abstract: Recent advances in graphics – particularly AI-driven digital human modeling, character animation, and conversational agents – have shown how effectively we can visualize and interact with digital characters. These breakthroughs power film, games, and XR, demonstrating the enormous potential of graphics methods for scalable, high-fidelity face modeling and new forms of human-computer interaction. Their potential in healthcare, however, remains largely untapped. Yet we cannot simply transfer existing methods into clinical practice: medical applications bring specific requirements such as anatomically grounded representations, the ability to model pathological and evolving anatomy, functional prediction, and generalization to out-of-distribution groups such as infants. Addressing these needs requires rethinking how we capture, represent, simulate, render, and interact with digital humans.
In this talk, I will share our efforts to bridge this gap: from creating anatomical face models for predicting treatment outcomes, to modeling infants and designing appliances and implants, to developing AI-driven characters as health companions. These projects illustrate not only what graphics can enable in healthcare, but also how the unique challenges of medical applications can drive new methods and insights for the broader graphics community.
Bio: Barbara Solenthaler is a Titular Professor at the Computer Graphics Lab, Department of Computer Science, ETH Zurich. She received her PhD in Computer Science from the University of Zurich and was a Hans Fischer Fellow at the Institute for Advanced Study, Technical University of Munich. Her research focuses on animation and simulation, with a particular emphasis on the intersection of computer graphics and healthcare, where her team develops technologies for 3D digital patient twins. She is also committed to translating scientific advances for societal benefit and contributes actively to the international computer graphics community through editorial and organizational service.

Prof. Dr. Marc Erich Latoschik, Universität Würzburg
Title: Determinants of a Metaverse: From Avatars to Zero Latency
Abstract: What happens when your avatar looks exactly like you — down to the last wrinkle? In this keynote, we’ll explore how lifelike 3D avatars and cutting-edge XR + AI research are reshaping the way we learn, heal, and connect. You’ll see how virtual embodiment is pushing the boundaries of identity, how social VR is turning classrooms into interactive playgrounds, and how VR therapy is helping people recover from knee surgery, manage obesity, and even sharpen their public-speaking skills. But we’ll also peek at the flip side: when latency glitches break immersion, when VR gambling clouds judgment, and when privacy comes under fire from biometric tracking. Along the way, we’ll share fresh theories of XR, new risks we’ve uncovered, and bold ideas to keep the metaverse safe, human-centered, and inspiring.
Bio: Marc Erich Latoschik is Professor and Chair for Human-Computer Interaction at the University of Würzburg, where he leads one of the top research groups worldwide in Extended Reality (XR). His work bridges computer science, AI, psychology, and the cognitive sciences, with a focus on immersive and interactive systems. Following early contributions to multimodal VR interfaces in the late 1990s, his research now covers topics such as virtual embodiment, avatar realism, social XR, gamification, and therapeutic and educational applications of VR/AR. Marc has published more than 400 peer-reviewed articles, received multiple awards, and serves on leading program committees in the field.

Prof. Dr. Amit H. Bermano, Tel Aviv University
Title: Taming the Beast: Controllability for Generative Diffusion Models
Abstract: Generative tasks are currently addressed almost exclusively with the foundation-model approach: large, typically diffusion-based, generative models are trained on generic data and afterwards employed for specific tasks. Aside from computational resources, perhaps the biggest challenge with this approach is controllability; it is difficult to balance specific user requirements with the vast knowledge these models carry.
In this talk, I discuss my recent attempts to control these beasts. Focusing primarily on work in the body motion generation domain, but also touching upon 2D images, I present control methods that utilize the input condition and noise spaces, intervene between generation steps, and, of course, manipulate the attention mechanism. The talk covers methods for combining motion generation with physical environments, multi-topology motion generation, and style transfer and personalization, along with a deeper discussion of the attention mechanism and how to exploit it.
Bio: Amit H. Bermano is an Associate Professor at Tel Aviv University, currently on sabbatical at ETH Zurich. His research focuses on visual computing, with an emphasis on generative models in various visual domains, including images, video, animation, and geometry. Amit obtained his undergraduate and master's degrees in Israel and his doctoral degree at ETH Zurich, working in collaboration with Disney Research Zurich as a student and a postdoc. Most of his postdoctoral work was performed at Princeton University, in the Princeton Graphics Group.

Industry Speakers

Dr. Antoine Milliez, Creatures R&D Lead, Industrial Light & Magic
Title: Pragmatic Solutions in Creature Animation at ILM
Abstract: At any given time, over 50 shows are being worked on in parallel at Industrial Light & Magic by hundreds of artists across five sites around the world. Innovating amid that constant buzz can seem tricky, as any disruption to existing workflows can have serious consequences. In addition, the support load carried by R&D teams in a 50-year-old company is substantial.
So how do we do it? In this talk we'll look at examples of pragmatic solutions to technical problems and how they've been implemented at ILM. We'll also talk about areas where we've integrated state-of-the-art technology, and what it took to get there.
Bio: Antoine Milliez is a Staff R&D Engineer at Industrial Light & Magic and has worked on character rigging, animation, and simulation tools since 2017. As Creatures R&D Lead, he oversees technical efforts across the character pipeline. He holds a PhD in Computer Graphics from ETH Zurich and spent five years at Disney Research working on novel ways to stylize animations and simulations.

Dr. Thabo Beeler, Research Director, Google XR
Title: Digital Humans for Android XR
Abstract: Google and Samsung have just released the first product powered by Android XR, the new operating system from Google developed for augmented and virtual reality devices. Digital Humans played a central role in developing Android XR and the Galaxy XR headset – from product design, to natural input, to user experiences. In this presentation I will give a quick overview of some of the underlying Digital Human technology we developed for Android XR and how it is being used.
Bio: Thabo Beeler is a Research Director at Google, where he leads the work on Digital Humans for Android XR. Prior to that he was a Principal Research Scientist at Disney Research, where he headed the Capture and Effects group and led the research initiative on Digital Humans. He has worked on Digital Humans for more than 15 years, has published over 80 papers on the subject, and his research has been recognized by several awards, including the prestigious Eurographics Young Researcher Award and the Eurographics PhD Award. His work has been used in over 50 feature films to create photoreal digital actors and earned a Technical Achievement Award from the Academy of Motion Picture Arts and Sciences in 2019. More information can be found at thabobeeler.com.

Schedule
Paper Sessions
Session 1 — Virtual Reality and Augmented Reality
| Title | Authors | Type |
|---|---|---|
| Feel to Aim: Haptic Assistance for Enhanced Targeting in Virtual Reality | Jean Botev, Johannes Günter Herforth, Marnix Van den Wijngaert | Long |
| Enhancing Foveated Rendering with Weighted Reservoir Sampling | Ville Cantory, Darya Biparva, Haoyu Tan, Tongyu Nie, John Schroeder, Ruofei Du, Victoria Interrante, Piotr Didyk | Long |
| Storyboarding in Extended Reality: Leveraging Real-world Elements in Storyboard Creation | Federico Manuri, Federico Mafrici, Andrea Sanna | Short |
| Beyond Buttons: A User-centric Approach to Hands-free Locomotion in Virtual Reality via Voice Commands | Jan Hombeck, Henrik Voigt, Kai Lawonn | Invited |
| Immersive Data-Driven Storytelling: Scoping an Emerging Field Through the Lenses of Research, Journalism, and Games | Julián Méndez, Weizhou Luo, Rufat Rzayev, Wolfgang Büschel, Raimund Dachselt | Invited |
Session 2 — Animation and Style
| Title | Authors | Type |
|---|---|---|
| Implicit Bézier Motion Model for Precise Spatial and Temporal Control | Luca Vögeli, Dhruv Agrawal, Martin Guay, Dominik Borer, Robert Sumner, Jakob Buhmann | Long |
| Trajectory-aware Smears for Stylized 3D Animations | Lou Tremolieres, Jean Basset, Pierre Bénard, Pascal Barla | Short |
| Earthbender: An Interactive System for Stylistic Heightmap Generation using a Guided Diffusion Model | Danial Barazandeh, Gabriel Zachmann | Long |
Session 3 — Games and Simulation
| Title | Authors | Type |
|---|---|---|
| A Time- and Space-Efficient Adaptation of the Space Foundation System for Digital Games | Daniel Dyrda, Kerstin Pfaffinger, Claudio Belloni, Martin Schacherbauer, Johanna Pirker, Gudrun Klinker | Long |
| XPBD Simulation of Constitutive Materials with Exponential Strain Tensor | Ozan Cetinaslan | Long |
| Understanding Player Dynamics in Battle Royale Environments: A Data-Driven Analysis Using the Caldera Dataset | Emily Port, Christopher Jacobs, Joseph T. Kider Jr. | Long |
Session 4 — Character Animation
| Title | Authors | Type |
|---|---|---|
| MIRRORED-Anims: Motion Inversion for Rig-space Retargeting to Obtain a Reliable Enlarged Dataset of Character Animations | Théo Cheynel, Thomas Rossi, Omar El Khalifi, Oscar Fossey, Damien Rohmer, Marie-Paule Cani | Long |
| DRUMS: Drummer Reconstruction Using MIDI Sequences | Theodoros Kyriakou, Panayiotis Charalambous, Andreas Aristidou | Short |
| Real-time Hand Motion Synthesis for Playing a Virtual Guitar | Ryan Canales, Sophie Jörg | Long |
| High-Fiving with the Machine: Synthesising Reactive Motion from Human Input with Interaction Contact Labels | Oliver Hixon-Fisher, Jarek Francik, Dimitrios Makris | Long |
Session 5 — Faces and Poses
| Title | Authors | Type |
|---|---|---|
| PhonemeNet: A Transformer Pipeline for Text-Driven Facial Animation | Philine Witzig, Barbara Solenthaler, Markus Gross, Rafael Wampfler | Long |
| Prompt-to-Animation: Generating Cognitively-Grounded Facial Expressions with LLMs | Funda Durupinar, Aline Normoyle | Long |
| Data-driven Modeling of Subtle Eye Region Deformations | Glenn Kerbiriou, Quentin Avril, Maud Marchal | Invited |
| Canonical Pose Reconstruction from Single Depth Image for 3D Non-rigid Pose Recovery on Limited Datasets | Fahd Alhamazani, Paul L. Rosin, Yu-Kun Lai | Invited |
Session 6 — Avatars and Agents
| Title | Authors | Type |
|---|---|---|
| “I Don’t Like My Avatar”: Investigating Human Digital Doubles | Siyi Liu, Kazi Injamamul Haque, Zerrin Yumak | Long |
| Social Presence in Virtual Reality: A Comparative Study of AI NPCs and Human Instructors | Markus Schmidbauer, Johanna Pirker, Elisabeth Mayer, Thomas Odaker | Long |
| Investigating How Text and Motion Style Shape Directness in Embodied Conversational Agents | Michael O’Mahony, Cathy Ennis, Robert Ross | Short |
| Projective Multi-Agent Dynamics | Tomer Weiss | Long |