Day 1

12 May 2021

*Note: Post-conference registration is open until 6 June 2021. Post-conference registrants will be able to access the on-demand recordings of ICIR until 30 June 2021. Thank you.


*All times are in US Eastern Time (ET).


Time Topic/Title On-Demand Link
09:00 – 09:15 Opening and Welcome
Wo Chang, ICIR General Chair, NIST, US
View Session
09:15 – 10:15 Moderator: Wo Chang, ICIR General Chair, NIST, US
Keynote – Mixed Reality, Robotics, and Spatial Intelligence
Marc Pollefeys, ETH Zurich and Director of Science, Microsoft, Switzerland
View Session
10:15 – 10:30 Break N/A
10:30 – 11:30 Session Chair: Ehsan Azimi, Provost’s Postdoctoral Fellow, Johns Hopkins University, US

Paper: xR4DRAMA: Enhancing Situation Awareness using Immersive (XR) Technologies
Spyridon Symeonidis, Sotiris Diplaris, Nicolaus Heise, Theodora Pistola, Athina Tsanousa, Georgios Tzanetis, Elissavet Batziou, Christos Stentoumis, Ilias Kalisperakis, Sebastian Freitag, Yash Shekhawat, Rita Paradiso, Maria Pacelli, Joan Codina, Simon Mille, Montserrat Marimon, Michele Ferri, Daniele Norbiato, Martina Monego, Anastasios Karakostas and Stefanos Vrochidis

Paper: Manipulating Avatars for Enhanced Communication in Extended Reality
Jonathon Hart, Thammathip Piumsomboon, Gun Lee, Ross Smith and Mark Billinghurst

View Session
11:30 – 12:10 Moderator: Kathy Grise, Senior Program Manager, Future Directions, IEEE, US
Invited Talk – Self Healing Systems in an Intelity World
Aishwarya Asesh, Data Scientist II, Adobe, US
View Session
12:10 – 13:00 Meal Break N/A
13:00 – 14:00 Moderator: Kathy Grise, Senior Program Manager, Future Directions, IEEE, US
Keynote – Learning from Multi-Agent, Emergent Behaviors in a Simulated Environment
Danny Lange, Senior VP of Artificial Intelligence and Machine Learning, Unity, US
View Session
14:00 – 15:00 Session Chair: Lyuba Alboul, Sheffield Hallam University, UK

Paper: Creating Immersive Experiences based on Intangible Cultural Heritage
Theodora Pistola, Sotiris Diplaris, Christos Stentoumis, Evangelos Stathopoulos, Georgios Loupas, Theodore Mandilaras, Grigoris Kalantzis, Ilias Kalisperakis, Anastasios Tellios, Despoina Zavraka, Panagiota Koulali, Vera Kriezi, Valia Vraka, Foteini Venieri, Stratos Bacalis and Stefanos Vrochidis

Paper: Augmented Reality Assisted Orbital Floor Reconstruction
Yihao Liu, Ehsan Azimi, Nikhil Dave, Cecil Qiu, Robin Yang and Peter Kazanzides

View Session
15:00 – 15:15 Break N/A
15:15 – 16:15 Moderator: Elizabeth Chang, ICIR Publication Chair, University of Maryland, US
Keynote – Spatial Perception in Immersive Virtual Environments
Victoria Interrante, Professor, University of Minnesota, US
View Session
16:15 – 17:05 Moderator: Elizabeth Chang, ICIR Publication Chair, University of Maryland, US
Invited Talk – Interactive Platform for Medical Procedures in Mixed Reality
Ehsan Azimi, Provost’s Postdoctoral Fellow, Johns Hopkins University, US
View Session
17:05 – 17:10 Day 1 Closing Remarks
Wo Chang, ICIR General Chair, NIST, US
View Session


Keynote Speakers:

Professor Marc Pollefeys, ETH Zurich and Director of Science, Microsoft – Mixed Reality, Robotics, and Spatial Intelligence

Abstract: Mixed Reality (MR) allows us to blend virtual information with the real world. The ability to map the spatial environment (and keep track of the device’s position within it) is critical to delivering compelling MR experiences. Being able to recognize objects and semantically understand different components of the scene can further enrich the interaction. Similarly, the same capabilities can enable robots to operate autonomously. When MR devices can also track and understand user activities, they can better assist the user in performing a complicated task or facilitate collaboration with robots. In this talk, we will see how computer vision and AI technology enable compelling Intelligent Reality scenarios.

Dr. Danny Lange, Unity – Learning from Multi-Agent, Emergent Behaviors in a Simulated Environment

Abstract: A revolution in reinforcement learning is happening, one that helps companies create more diverse, complex virtual simulations to accelerate the pace of innovation. Join this session to learn about environments already created that have yielded surprising advances in AI agents, and to better understand how the emergent behaviors of multiple AI agents in a simulated environment can lead to optimal designs and real-world practices.

Dr. Victoria Interrante, University of Minnesota – Spatial Perception in Immersive Virtual Environments

Abstract: Immersive Virtual Reality (VR) technology has valuable potential applications in architecture and design, but ensuring accurate spatial understanding in VR is critical to the success of those efforts. In this talk I will review some of the work my lab has done to enhance the utility of VR for architecture and design applications, focusing primarily on the investigation of factors influencing spatial perception accuracy in immersive architectural environments, but also touching on methods for more effectively supporting interpersonal communication during immersive design reviews, and other issues of potential interest to architectural and interior designers.

Invited Speakers:

Mr. Aishwarya Asesh, Data Scientist II, Adobe, US – Self Healing Systems in an Intelity World

Abstract: An intelligent reality is a technologically enhanced reality that improves human cognitive performance and judgment. It can make a worker more effective by displaying information from the Internet of Things within physical reality. Since reality unfolds in real time, analytics is a crucial component of intelligent realities. Consider an engineer looking at a machine while wearing a VR headset: they can see both the service history and predictions of future failures. This gives the engineer a view into the fourth dimension of time, both backward and forward. Instead of having to take the machine apart and do the heavy lifting, the user can see an IoT-driven Mixed Reality rendering projected on the outside. They could also see a virtual rendering of the operations of the same type of machine at a distant location. Then they can interface with both artificial and human remote experts about the next steps, which could include an expert driving virtual overlays into the technician’s view. As a wearable computer, the headset brings distant resources into the engineer’s operational reality. They can interact with the entire system architecture without reaching for even a smartphone or laptop. This kind of technology can strongly drive the future of MR, including other applications such as prosperity index mapping using satellite images and data-driven decision synopsis mapping onto operational workflows.

Dr. Ehsan Azimi, Provost’s Postdoctoral Fellow, Johns Hopkins University, US – Interactive Platform for Medical Procedures in Mixed Reality

Abstract: Mixed reality is an emerging technology that connects virtual space and the real world seamlessly and enables physicians to visualize anatomical targets directly on the patient’s anatomy. This talk presents the architecture and implementation of an interactive mixed reality platform for medical procedures. It discusses the critical challenges with head-mounted optical see-through systems that needed to be addressed in this implementation. Various use cases of this platform are discussed and evaluated in detail. For surgery, our navigation system uses mixed reality on a head-mounted display that directly overlays preoperative imaging on the operative field. This “x-ray” vision brings the information from preoperative imaging to where the surgeon needs it, reducing error and facilitating better outcomes. It provides the surgeon with the location of the desired target overlaid intraoperatively, directly on the patient’s anatomy. The talk also assesses the benefits of using mixed reality for training in medical procedures. Drawing on work with leading surgeons at Johns Hopkins Hospital, the presentation also discusses the practical implications of bringing the platform to clinical readiness for various procedures.

Sponsors