{"id":394,"date":"2026-03-25T23:58:56","date_gmt":"2026-03-25T23:58:56","guid":{"rendered":"https:\/\/cellmicrocosmos.org\/conferences\/ERVR\/?page_id=394"},"modified":"2026-03-26T01:23:18","modified_gmt":"2026-03-26T01:23:18","slug":"program-2025","status":"publish","type":"page","link":"https:\/\/cellmicrocosmos.org\/conferences\/ERVR\/program-2025\/","title":{"rendered":"Program 2025"},"content":{"rendered":"\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a>Monday 3rd February 2025<\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Electronic Imaging Symposium Highlights Session<\/strong><br>Location: Grand Peninsula D<br><br><\/td><td>Mon.&nbsp;11:00&nbsp;am&nbsp;&#8211;&nbsp;12:20&nbsp;pm<\/td><\/tr><tr><td>Join us for a session that highlights the breadth of the EI Symposium with short papers selected by their Chairs from EI conferences.<br>The full papers are given at other times in the program.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Electronic Imaging Welcome Lunch (lunch provided)<\/strong><br>Location: The Grove<br><br>The Electronic Imaging Symposium All-Conference Welcome Lunch provides a wonderful opportunity to get to know and interact with new and old EI and SD&amp;A colleagues. 
Plan to join us for this relaxing and enjoyable event.<\/td><td>Mon.&nbsp;12:20&nbsp;&#8211;&nbsp;2:00&nbsp;pm<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td class=\"has-text-align-left\" data-align=\"left\"><img fetchpriority=\"high\" decoding=\"async\" width=\"250\" height=\"317\" class=\"wp-image-412\" style=\"width: 150px;float: right;\" src=\"https:\/\/cellmicrocosmos.org\/conferences\/ERVR\/wp-content\/uploads\/2026\/03\/Milanfar_Peyman.jpg\" alt=\"Milanfar Peyman Photo\" srcset=\"https:\/\/cellmicrocosmos.org\/conferences\/ERVR\/wp-content\/uploads\/2026\/03\/Milanfar_Peyman.jpg 250w, https:\/\/cellmicrocosmos.org\/conferences\/ERVR\/wp-content\/uploads\/2026\/03\/Milanfar_Peyman-237x300.jpg 237w\" sizes=\"(max-width: 250px) 100vw, 250px\" \/>EI Plenary 1<br>Location: Grand Peninsula D Mon. 2:00 &#8211; 3:00\u00a0pm<br><br><strong>EI Symposium Welcome<\/strong> <br><strong><br>PLENARY: Imaging in the Age of Artificial Intelligence<\/strong> <br><strong>Peyman Milanfar, Google (United States)<\/strong>  \u00a0 <br><br><em>Abstract:<\/em> AI is revolutionizing imaging, transforming how we capture, enhance, and experience visual content. Advancements in machine learning are enabling mobile phones to have far better cameras, with capabilities like enhanced zoom, state-of-the-art noise reduction, and blur mitigation, as well as post-capture features such as intelligent curation and editing of your photo collections, directly on device. This talk will delve into some of these breakthroughs, and describe a few of the latest research directions that are pushing the boundaries of image restoration and generation, pointing to a future where AI empowers us to better capture, create, and interact with visual content in unprecedented ways. <br><br><em>Biography:<\/em> Peyman Milanfar is a Distinguished Scientist at Google, where he leads the Computational Imaging team. 
Prior to this, he was a Professor of Electrical Engineering at UC Santa Cruz for 15 years, two of those as Associate Dean for Research. From 2012-2014 he was on leave at Google-x, where he helped develop the imaging pipeline for Google Glass. Over the last decade, Peyman&#8217;s team at Google has developed several core imaging technologies that are used in many products. Among these are the zoom pipeline for the Pixel phones, which includes the multi-frame super-resolution (&#8220;Super Res Zoom&#8221;) pipeline, and several generations of state-of-the-art digital upscaling algorithms. Most recently, his team led the development of the &#8220;Photo Unblur&#8221; feature launched in Google Photos for Pixel devices. Peyman received his undergraduate education in electrical engineering and mathematics from UC Berkeley and his MS and PhD in electrical engineering from MIT. He holds more than two dozen patents and founded MotionDSP, which was acquired by Cubic Inc. Along with his students and colleagues, he has won multiple best paper awards for introducing kernel regression in imaging, the RAISR upscaling algorithm, NIMA: neural image quality assessment, and Regularization by Denoising (RED). He&#8217;s been a Distinguished Lecturer of the IEEE Signal Processing Society and is a Fellow of IEEE &#8220;for contributions to inverse problems and super-resolution in imaging&#8221;.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Coffee Break<\/strong><\/td><td>Mon. 3:00 &#8211; 3:30&nbsp;pm<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Stereoscopic Displays and Applications Session 1<\/strong><br>Stereoscopy &amp; Spatial Perception<br>Location: Grand Peninsula D<br>Session Chair: Andrew Woods, Curtin University (Australia)<\/td><td>Mon. 
3:20&nbsp;&#8211; 5:10&nbsp;pm<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>3:20 pm : <strong>SD&amp;A Conference Welcome and Introduction<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/doi.org\/10.2352\/EI.2025.37.2.SDA-B02\"><\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>3:40 pm : <strong>3D distortion modeling: A software package to represent image space perception in stereoscopic displays<\/strong>, Eleanor O&#8217;Keefe | KBR; Richard Tompkins | Vision Products LLC; Eric Seemiller | KBR; Marc Winterbottom, Steven Hadley | USAF 711 HPW\/RHMO (United States) [SDA\u2060-\u2060329] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4302\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/www.youtube.com\/watch?v=2iEzmvzNyes&amp;list=PLoksP178KYM71ZBcJwMXNOxFJBIu1MiSL&amp;index=2\"><\/a>&nbsp; <a href=\"https:\/\/doi.org\/10.2352\/EI.2025.37.2.SDA-329\"><\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><img decoding=\"async\" width=\"500\" height=\"621\" class=\"wp-image-413\" style=\"width: 150px;float: right;\" src=\"https:\/\/cellmicrocosmos.org\/conferences\/ERVR\/wp-content\/uploads\/2026\/03\/24_11_KatieFico_Headshot2.jpg\" alt=\"Katie Fico Photo\" srcset=\"https:\/\/cellmicrocosmos.org\/conferences\/ERVR\/wp-content\/uploads\/2026\/03\/24_11_KatieFico_Headshot2.jpg 500w, https:\/\/cellmicrocosmos.org\/conferences\/ERVR\/wp-content\/uploads\/2026\/03\/24_11_KatieFico_Headshot2-242x300.jpg 242w\" sizes=\"(max-width: 500px) 100vw, 500px\" \/>SD&amp;A Keynote 1 Mon. 
4:00\u00a0pm &#8211; 5:00\u00a0pm<br><br><strong>KEYNOTE:<br>Beyond the Screen Plane:<br>Stereo at Walt Disney Animation Studios<\/strong> [SDA\u2060-\u2060330] <br><strong>Katie Fico, Stereoscopic Supervisor, Walt Disney Animation Studios (United States)<\/strong> \u00a0 <br><br><em>Abstract:<\/em> Moana 2 Stereoscopic Supervisor Katie Fico shares how Walt Disney Animation Studios uses 3D technology as a storytelling tool to create unique and compelling immersive experiences. She offers insights into the creative and technical processes behind some of Disney Animation&#8217;s films, as well as the technological innovations of the craft that she has been a part of during her 25-year tenure.<br><br><em>Biography: <\/em>Katie Fico was raised in the Northridge neighborhood of Los Angeles, California, and received a bachelor&#8217;s degree in art from her hometown&#8217;s own California State University, Northridge. Her career at Walt Disney Animation Studios started in 1997 as a compositor on Disney Animation&#8217;s feature film Dinosaur, marking the beginning of her more than 25 years at the studio. Fico is a 3-time Advanced Imaging Society Lumiere Award winner for best Animated Stereography for her work on Zootopia, Frozen, and the animated short, Feast. She is currently working on Zootopia 2.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Session Break<\/strong><\/td><td>Mon. 5:00 &#8211; 5:20&nbsp;pm<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a>SD&amp;A 3D Theatre<\/a><br>Producers: John Stern, retired (United States); Eric Kurland, 3-D SPACE Museum (United States); Andrew Woods, Curtin University (Australia). Mon. 
5:20 to 6:50&nbsp;pm<br>This ever-popular session of each year&#8217;s Stereoscopic Displays and Applications Conference showcases the wide variety of 3D content that is being produced and exhibited around the world. All 3D footage screened in the 3D Theatre session is shown in high-quality polarized 3D on a large screen. The final program will be announced at the conference, and 3D glasses will be provided.<br><br>See: <a href=\"http:\/\/stereoscopic.org\/3dcinema\/index.html\">the list of exhibited content<\/a>.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td>SD&amp;A Conference Annual Dinner Mon. 7:00 to 10:00&nbsp;pm<br>The annual informal dinner for SD&amp;A attendees. An opportunity to meet with colleagues and discuss the latest advances. There is no host for the dinner. Information on venue and cost will be provided on the day at the conference.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a>Tuesday 4th February 2025<\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Stereoscopic Displays and Applications Session 2<\/strong><br>Visualization Facilities<br>Location: Grand Peninsula D<br>Session Chair: Laurie Wilcox, York University (Canada)<br><strong>Joint Session<\/strong> with the Engineering Reality of Virtual Reality (ERVR) conference<\/td><td>Tue. 
8:40&nbsp;&#8211; 10:30&nbsp;am<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>8:40 am : <strong>SD&amp;A Day 2 Welcome<\/strong><\/p>\n\n\n\n<p>8:50 am : <strong>Case study: Love letter to skating &#8211; VR180 stereoscopic post-production workflow<\/strong>, Andrew Woods, Daniel Adams, Cassandra Edwards, Kerreen Ely-Harper, Andrea Rassell | Curtin University (Australia) [SDA\u2060-\u2060332] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4622\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/www.youtube.com\/watch?v=nlJc6jpyoqg&amp;list=PLoksP178KYM71ZBcJwMXNOxFJBIu1MiSL&amp;index=3\"><\/a>&nbsp; <a href=\"https:\/\/doi.org\/10.2352\/EI.2025.37.2.SDA-332\"><\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>9:10 am : <strong>NEMO Explorer XR &#8211; The development of an ocean-based immersive co-design environment<\/strong>, Alyssa Liu, Rian Stephens, Elise Hodson, Carla Amaral, Christopher Ross, Jasmine Black, Paul Anderson, Ashley Hall, Bjorn Sommer | Royal College of Art (United Kingdom) [SDA\u2060-\u2060333] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4538\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/www.youtube.com\/watch?v=OTfkIHT1Kc4&amp;list=PLoksP178KYM71ZBcJwMXNOxFJBIu1MiSL&amp;index=4\"><\/a>&nbsp; <a href=\"https:\/\/doi.org\/10.2352\/EI.2025.37.2.SDA-333\"><\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><img decoding=\"async\" width=\"500\" height=\"621\" class=\"wp-image-414\" style=\"width: 150px;float: right;\" 
src=\"https:\/\/cellmicrocosmos.org\/conferences\/ERVR\/wp-content\/uploads\/2026\/03\/daniel_sandin2.jpg\" alt=\"Daniel Sandin Photo\" srcset=\"https:\/\/cellmicrocosmos.org\/conferences\/ERVR\/wp-content\/uploads\/2026\/03\/daniel_sandin2.jpg 500w, https:\/\/cellmicrocosmos.org\/conferences\/ERVR\/wp-content\/uploads\/2026\/03\/daniel_sandin2-242x300.jpg 242w\" sizes=\"(max-width: 500px) 100vw, 500px\" \/>SD&amp;A Keynote 2 Tue. 9:30\u00a0am &#8211; 10:30\u00a0am<br><br><strong>Half a Century of Innovation in Interactive Electronic Displays for Art and Science at the Electronic Visualization Laboratory (EVL) at UIC and the Qualcomm Institute at UCSD<\/strong><br>[SDA\u2060-\u2060334] <strong>Daniel J. Sandin,<br>University of Illinois at Chicago (United States)<\/strong> \u00a0 <a href=\"https:\/\/www.youtube.com\/watch?v=ecZ8cIn_jaY&amp;list=PLoksP178KYM71ZBcJwMXNOxFJBIu1MiSL&amp;index=1\"><\/a> <br><br><em>Abstract:\u00a0<\/em>The Electronic Visualization Laboratory (EVL), established in 1973 at the University of Illinois at Chicago (UIC), specialized in interactive electronic displays even before the advent of the frame buffer. By ingeniously combining digital and analog systems, EVL enabled real-time interactive computer graphics through the Graphics Symbiosis System (GRASS). This system was instrumental for animator Larry Cuba in creating the computer graphics for the original 1977 &#8220;Star Wars&#8221; film (which was done frame by frame on 35mm film), as well as contributing to lesser-known movies like &#8220;UFO: Target Earth&#8221;, for which the special effects were captured on video. &#8220;Spiral PTL&#8221; is a work preserved in the Museum of Modern Art&#8217;s video art collection. Throughout the decades, EVL not only advanced the technology of computer graphics but also deeply integrated art, science, and education researchers and teaching faculty. 
This collaboration led to the creation of an interdisciplinary MFA program in Electronic Visualization, bridging UIC&#8217;s Engineering College with its School of Art and Design. EVL&#8217;s later innovations include the development of numerous interactive stereoscopic and autostereoscopic systems, most notably the Cave Automatic Virtual Environment (CAVE). This paper will describe and analyze these technological advancements, discussing both their successes and their challenges in adoption by scientists, engineers, and artists. The narrative will reflect on how these technologies have shaped interdisciplinary collaboration and the evolution of electronic art and visualization techniques over the past fifty years.<br><br><em>Biography:<\/em> Daniel J. Sandin is director emeritus of the Electronic Visualization Lab (EVL) and a professor emeritus in the School of Art and Design at the University of Illinois at Chicago (UIC). As an artist, he has exhibited worldwide, and has received grants in support of his work from the Rockefeller Foundation, the Guggenheim Foundation, the National Science Foundation, and the National Endowment for the Arts. His video animation &#8220;Spiral PTL&#8221; is in the inaugural collection of video art at the Museum of Modern Art in New York. In 2007 Sandin received the IEEE VGTC Virtual Reality Technical Achievement Award. 
He went on to develop a series of VR display systems including Varrier, a stereo VR display that does not require 3D glasses, and CAVE2, a large (20-foot) cylindrical display based on LCDs.<br><br>See also: <a href=\"https:\/\/www.youtube.com\/watch?v=AoFrKER3aBM&amp;list=PLoksP178KYM71ZBcJwMXNOxFJBIu1MiSL&amp;index=16\"><\/a> <a href=\"https:\/\/www.youtube.com\/watch?v=AoFrKER3aBM&amp;list=PLoksP178KYM71ZBcJwMXNOxFJBIu1MiSL&amp;index=16\">3D Film &#8220;A Study of 4D Julia Sets&#8221;<\/a> (as shown in Dan Sandin&#8217;s presentation)<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Coffee Break<\/strong><\/td><td>Tue. 10:30 &#8211; 11:00&nbsp;am<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Stereoscopic Displays and Applications Session 3<\/strong><br>History &amp; Future of Immersive Technologies<br>Location: Grand Peninsula D<br>Session Chair: Takashi Kawai, Waseda University (Japan)<br><strong>Joint Session<\/strong> with The Engineering Reality of Virtual Reality (ERVR) conference<br><\/td><td>Tue. 
11:00 am&nbsp; &#8211; 12:40&nbsp;pm<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>11:00 am : <strong>Collaborative spatial streaming: real-time auto-calibrating system for multi-device dynamic 3D capture<\/strong>, Tyler Bell | University of Iowa (United States) [SDA\u2060-\u2060335] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4554\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/www.youtube.com\/watch?v=Ai-Lb0e2TLU&amp;list=PLoksP178KYM71ZBcJwMXNOxFJBIu1MiSL&amp;index=5\"><\/a>&nbsp;<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>11:20 am : <strong>Active 3D flat panel displays: A new implementation of an old idea<\/strong>, Michael Weissman, Peter Giokaris | independent (United States) [SDA\u2060-\u2060336] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4611\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/www.youtube.com\/watch?v=_YzHmtqOwGk&amp;list=PLoksP178KYM71ZBcJwMXNOxFJBIu1MiSL&amp;index=6\"><\/a>&nbsp;<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>11:40 am : <strong>The rise and fall of SENSIO &#8211; Lessons for the next wave of consumer 3D<\/strong>, Nicholas Routhier | CubicSpace Technologies, Inc. 
(Canada) [SDA\u2060-\u2060337] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4576\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/www.youtube.com\/watch?v=2VseFsCiOGI&amp;list=PLoksP178KYM71ZBcJwMXNOxFJBIu1MiSL&amp;index=7\"><\/a>&nbsp;<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>12:00 pm : <strong>Baskerville Small Performances Project<\/strong>, Mark Box, Scott Maloney | Cambridge University (United Kingdom) [SDA\u2060-\u20604175] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4175\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>] (cancelled)<br><\/p>\n\n\n\n<p>12:20 pm : <strong>Bringing historical stereographs to XR headsets<\/strong>, Nicholas Routhier | CubicSpace Technologies, Inc. (Canada) [ERVR\u2060-\u2060159] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4578\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/www.youtube.com\/watch?v=EkfSh4ec-kI&amp;list=PLoksP178KYM71ZBcJwMXNOxFJBIu1MiSL&amp;index=8\"><\/a>&nbsp;<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Lunch Break (lunch not supplied this day)<\/strong><\/td><td>Tue. 
12:40 &#8211; 2:00&nbsp;pm<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><img loading=\"lazy\" decoding=\"async\" width=\"250\" height=\"300\" class=\"wp-image-415\" style=\"width: 150px;float: right;\" src=\"https:\/\/cellmicrocosmos.org\/conferences\/ERVR\/wp-content\/uploads\/2026\/03\/Grace_Kuo.jpg\" alt=\"Grace Kuo Photo\">EI Plenary 2<br>Location: Grand Peninsula D Tue. 2:00 &#8211; 3:00&nbsp;pm<br><strong><br>PLENARY: Holographic Displays: Past, Present, and Future<\/strong> <strong>Grace Kuo, research scientist, Display Systems Research, Meta (United States)<\/strong> <a href=\"https:\/\/www.youtube.com\/watch?v=wmXYTUu9i0s&amp;list=PLoksP178KYM4TQsakiyBsGrGhsYVsdSnG&amp;index=2\"><\/a> &nbsp; &nbsp; <br><br><em>Abstract:<\/em> Holograms have captured the public imagination since their first media representation in Star Wars in 1977. Although fiction, the idea of glowing, 3D projections is based on real-world holographic display technology, which can create 3D image content by manipulating the wave properties of light. However, in practice, the image quality of experimental holograms has significantly lagged traditional displays until recently. What changed? This talk will delve into how hardware improvements met ideas from machine learning to spark a new wave of research in holographic displays. We&#8217;ll take a critical look at what this research has achieved, discuss open problems, and explore the potential of holographic technology to create head-mounted displays with glasses-form factor. <br><br><em>Biography:<\/em> Grace Kuo is a research scientist in the Display Systems Research team at Meta where she works on novel display and imaging technology for virtual and augmented reality. She&#8217;s particularly interested in the joint design of hardware and algorithms for imaging systems, and her work spans optics, optimization, signal processing, and machine learning. 
Kuo&#8217;s recent work on &#8220;Flamera&#8221;, a light-field camera for virtual reality passthrough, won Best-in-Show at the SIGGRAPH Emerging Technology showcase and received widespread positive press coverage from venues like Forbes and UploadVR. Kuo earned her BS at Washington University in St. Louis and her PhD at the University of California, Berkeley, advised by Drs. Laura Waller and Ren Ng.<br><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Coffee Break<\/strong><\/td><td>Tue. 3:00 &#8211; 3:35&nbsp;pm<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Stereoscopic Displays and Applications Session 4<\/strong><br>Stereoscopic Vision<br>Location: Grand Peninsula D<br>Session Chair: Eleanor O&#8217;Keefe, KBR (United States)<\/td><td>Tue. 3:30 &#8211; 5:30&nbsp;pm<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>3:30 pm : <strong>Convexity biases in stereoscopically viewed ground terrain<\/strong>, Brittney Hartle, Robert Allison, Laurie Wilcox | York University (Canada) [SDA\u2060-\u2060339] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4235\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/www.youtube.com\/watch?v=_dg4Ki2YFFg&amp;list=PLoksP178KYM71ZBcJwMXNOxFJBIu1MiSL&amp;index=9\"><\/a>&nbsp; <a href=\"https:\/\/www.youtube.com\/watch?v=5WYzscpo5Aw&amp;list=PLoksP178KYM71ZBcJwMXNOxFJBIu1MiSL&amp;index=10\"><\/a>&nbsp;<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>3:50 pm : <strong>Stereoscopic radiography: New possibilities in the digital era using low cost, existing technology<\/strong>, Boris Starosta | independent (United States) [SDA\u2060-\u2060340] &nbsp;[<a 
href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4485\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/www.youtube.com\/watch?v=f0Nd_6amEm8&amp;list=PLoksP178KYM71ZBcJwMXNOxFJBIu1MiSL&amp;index=11\"><\/a>&nbsp; <a href=\"https:\/\/www.youtube.com\/watch?v=M2EdYEC-z6E&amp;list=PLoksP178KYM71ZBcJwMXNOxFJBIu1MiSL&amp;index=12\"><\/a>&nbsp; <a href=\"https:\/\/doi.org\/10.2352\/EI.2025.37.2.SDA-340\"><\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>4:10 pm : <strong>User experience and intent to adopt VR across levels of immersion: A case study of the flight simulation game Elite Dangerous<\/strong>, Aleshia Hayes | University of North Texas (United States) [SDA\u2060-\u2060341] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4508\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/www.youtube.com\/watch?v=aFoXw7T2HUc&amp;list=PLoksP178KYM71ZBcJwMXNOxFJBIu1MiSL&amp;index=14\"><\/a>&nbsp; <a href=\"https:\/\/doi.org\/10.2352\/EI.2025.37.2.SDA-341\"><\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>4:30 pm : <strong>Sensory mechanisms underlying cybersickness<\/strong>, Douglas Gill | FlightSafety International (United States) [SDA\u2060-\u2060342] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4545\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>] (Extended Presentation)<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a 
href=\"https:\/\/www.youtube.com\/watch?v=VOhaHOt2JwA&amp;list=PLoksP178KYM71ZBcJwMXNOxFJBIu1MiSL&amp;index=13\"><\/a>&nbsp;<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>5:10 pm : <strong>Effects of stereoscopic representations in sublime experiences induced by immersive VR<\/strong>, Yoshihiro Banchi, Taisei Tsukahara | Waseda University; Tomohiro Ishizu | Kansai University; Takashi Kawai | Waseda University (Japan) [SDA\u2060-\u2060343] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4253\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/www.youtube.com\/watch?v=0fuyaWR-ap8&amp;list=PLoksP178KYM71ZBcJwMXNOxFJBIu1MiSL&amp;index=15\"><\/a>&nbsp; <a href=\"https:\/\/doi.org\/10.2352\/EI.2025.37.2.SDA-343\"><\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td>Electronic Imaging Symposium Demonstration Session and Exhibits Happy Hour<br>Location: Regency ABC<br>EI Demonstration Chair: Tyler Bell, University of Iowa (United States)<br>SD&amp;A Demonstration Chair: Bjorn Sommer, Royal College of Art (United Kingdom) Tue.&nbsp;5:30&nbsp;&#8211;&nbsp;7:00&nbsp;pm<\/td><\/tr><tr><td>Demonstrations<br>This symposium-wide, hands-on, interactive session provides a perfect opportunity to witness electronic imaging in action firsthand. Attendees can see the latest research, compare commercial products, ask questions of knowledgeable demonstrators, and even make purchasing decisions about a range of electronic imaging products. The demonstration session hosts a vast collection of technologies and products, and is also a valuable networking opportunity.<br>The session will include demonstrations by presenters from the Stereoscopic Displays and Applications conference, and you will see a range of stereoscopic products with your own two eyes. More information about previous years&#8217; demonstrations: <a href=\"http:\/\/www.stereoscopic.org\/demo\/index.html\">http:\/\/www.stereoscopic.org\/demo\/index.html<\/a>.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a>Wednesday 5th February 2025<\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>The Engineering Reality of Virtual Reality (ERVR) conference Session 1<\/strong><br>XR for Urban Design &amp; Social Applications<br>Location: Grand Peninsula D<br>Session Chair: Sharad Sharma<br><strong>Joint session<\/strong> with the Stereoscopic Displays and Applications (SD&amp;A) conference.<\/td><td>Wed. 
9:00 &#8211; 10:30&nbsp;am<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>9:00 am : <strong>ERVR Welcome and Introduction<\/strong><\/p>\n\n\n\n<p>9:10 am : <strong>The XR stream &#8211; Grand challenges for ocean and cities from a London perspective<\/strong>, Bjorn Sommer, Rian Stephens, Rashi Agarwala, Ayushi Saxena, Zak Berry, Elise Hodson, Carla Amaral, Christopher Ross, Alyssa Liu, Jasmine Black, Paul Anderson, Ashley Hall | Royal College of Art (United Kingdom) [ERVR\u2060-\u2060160] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4406\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/www.youtube.com\/watch?v=PxGAAtAl9gM&amp;list=PLoksP178KYM53xZQPkE-VFAeiLvBGkznB&amp;index=1\"><\/a>&nbsp; <a href=\"https:\/\/doi.org\/10.2352\/EI.2025.37.13.ERVR-160\"><\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>9:30 am : <strong>Re-envisioning Paris landmarks &#8211; VR used to evaluate and judge architectural design competitions<\/strong>, Kevin Gilson | WSP (United States) [SDA\u2060-\u2060344] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4560\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/www.youtube.com\/watch?v=wjBPHCyWjr4&amp;list=PLoksP178KYM53xZQPkE-VFAeiLvBGkznB&amp;index=2\"><\/a>&nbsp;<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>9:50 am : <strong>Look around you! 
Situating extended reality within the urban fabric<\/strong>, Carolina Ramirez-Figueroa | Royal College of Art (United Kingdom), Campbell Orme | Meta Reality Labs (United States) [ERVR\u2060-\u2060161] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4470\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/www.youtube.com\/watch?v=4D1vxQXs7ek&amp;list=PLoksP178KYM53xZQPkE-VFAeiLvBGkznB&amp;index=3\"><\/a>&nbsp; <a href=\"https:\/\/doi.org\/10.2352\/EI.2025.37.13.ERVR-161\"><\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>10:10 am : <strong>Can virtual reality and artificial intelligence improve quality of life for individuals with dementia through reminiscence therapy?<\/strong>, Gloria James-Avalos | UNT (United States) [ERVR\u2060-\u2060162] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4558\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>] (cancelled)<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Coffee Break<\/strong><\/td><td>Wed. 10:30 &#8211; 11:00&nbsp;am<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>The Engineering Reality of Virtual Reality (ERVR) conference Session 2<\/strong><br>VR for Education &amp; Learning<br>Location: Grand Peninsula D<br>Session Chair: Bjorn Sommer<\/td><td>Wed. 
11:00 am &#8211; 12:30&nbsp;pm<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>11:00 am : <strong>Enhancing teacher training with AI-guided simulations in smart virtual reality<\/strong>, Lee Flores, Seth King, Vedansh Airen, Tyler Bell | University of Iowa (United States) [ERVR\u2060-\u2060163] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4557\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>11:20 am : <strong>Virtual reality as a value engineering method in machine shop learning<\/strong>, Myles Cupp, Marie Vans | Colorado State University (United States) [ERVR\u2060-\u2060164] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4390\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/www.youtube.com\/watch?v=XyldrlWEwsI&amp;list=PLoksP178KYM53xZQPkE-VFAeiLvBGkznB&amp;index=4\"><\/a>&nbsp; <a href=\"https:\/\/doi.org\/10.2352\/EI.2025.37.13.ERVR-164\"><\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>11:40 am : <strong>Evaluating the impact of interaction level on content learning in the Eureka VR Environment for Mining Engineering Education<\/strong>, Rojin Manouchehri, Levi Scully, Araam Zaremehrjardi, Umut Kar, Pengbo Chu, Frederick Harris Jr., Sergiu Dascalu | University of Nevada Reno (United States) [ERVR\u2060-\u2060165] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4532\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a 
href=\"https:\/\/www.youtube.com\/watch?v=FLjgvSg7Jxw&amp;list=PLoksP178KYM53xZQPkE-VFAeiLvBGkznB&amp;index=5\"><\/a>&nbsp; <a href=\"https:\/\/doi.org\/10.2352\/EI.2025.37.13.ERVR-165\"><\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>12:00 pm : <strong>bestie: An immersive, interactive, intelligent storytelling companion<\/strong>, Tyler Bell | University of Iowa (United States) [ERVR\u2060-\u2060166] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4537\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Lunch Break (Lunch Provided)<\/strong><\/td><td>Wed. 12:30 &#8211; 2:00&nbsp;pm<\/td><\/tr><tr><td colspan=\"2\"><br><a><\/a> Electronic Imaging Symposium Poster Session (lunch provided)<br>Location: The Grove<br>Wed.&nbsp;12:30&nbsp;&#8211;&nbsp;2:00&nbsp;pm<br>Conference attendees are encouraged to attend the Symposium-wide Poster Session where authors display their posters and are available to answer questions and engage in in-depth discussions about their work.<br><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><img loading=\"lazy\" decoding=\"async\" width=\"250\" height=\"300\" class=\"wp-image-416\" style=\"width: 150px;float: right;\" src=\"https:\/\/cellmicrocosmos.org\/conferences\/ERVR\/wp-content\/uploads\/2026\/03\/Gerard_Medioni.jpg\" alt=\"Gerard Medioni Photo\">EI Plenary 3<br>Location: Grand Peninsula D<br>Wed. 
2:00 &#8211; 3:00&nbsp;pm<br><strong>PLENARY: Prime Video: a Differentiated Viewing Experience<\/strong> <strong>G\u00e9rard Medioni, vice president and distinguished scientist, Prime Video &amp; Studios<\/strong> <br><br><em>Abstract:<\/em> This talk presents an overview of the technology components powering the Prime Video customer experience. Going beyond title level information, we segment the video into shots and scenes, parse each scene to infer semantic content, and use it for a number of applications, such as content moderation, subtitles, dubbing, audio descriptions. We also augment the original content with artwork and video clips, provide cast and music recognition in X-Ray, all of which feed into the recommendation presentation. The talk ends with a presentation of AI-powered innovative features in live broadcast of sports events. <br><br><em>Biography:<\/em> G\u00e9rard Medioni is a member of the leadership team for Amazon Prime Video &amp; Studios group. Prior to joining Prime Video, Medioni was responsible for leading AI and computer vision-based research efforts powering Amazon&#8217;s Just Walk Out technology and the Amazon One palm recognition service that combines cutting-edge biometrics, optical engineering, generative AI, and machine learning to deliver a new means of identification, entry, payment, and age-verification. The recipient of several prestigious awards recognizing his contributions to both academia and industry, Medioni is a Fellow of the National Academy of Inventors, ACM, IAPR, IEEE, AAAI, and AAIA and is a member of the National Academy of Engineering. He received the IEEE PAMI Mark Everingham Prize and APSIPA Industrial Distinguished Leader award, and serves on the advisory board of the IEEE Transactions on PAMI and the Image and Vision Computing journal. He is the Vice President of the Computer Vision Foundation. 
The author of four books, more than 90 journal papers and 280 conference articles, and the recipient of 121 patents, he is also the editor, with Sven Dickinson, of the Computer Vision series of books for Springer and serves as co-chair of many technical conferences (CVPR, ICPR, ACCV, WACV). Medioni is Professor Emeritus of Computer Science at USC, where he served as the Computer Science Department Chair from 2001 to 2007. Prior to joining Amazon in 2014, he consulted with numerous companies and startups. He received his Dipl\u00f4me d&#8217;Ing\u00e9nieur from ENST, Paris, and MS and PhD from USC.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Coffee Break<\/strong><\/td><td>Wed. 3:00 &#8211; 3:30&nbsp;pm<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>The Engineering Reality of Virtual Reality (ERVR) conference Session 3<\/strong><br>VR\/AR for Research, Training &amp; Emergencies<br>Location: Grand Peninsula D<br>Session Chair: Tyler Bell<\/td><td>Wed. 
3:30 pm &#8211; 5:10&nbsp;pm<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>3:30 pm : <strong>The Trojan horses of virtual reality<\/strong>, Bjorn Sommer | Royal College of Art (United Kingdom) [ERVR\u2060-\u2060167] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4404\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/doi.org\/10.2352\/EI.2025.37.13.ERVR-167\"><\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>3:50 pm : <strong>ScryVR: A systematic framework for accelerating experimental research in VR<\/strong>, Levi Scully, Jose Toro-Cerna, Pengbo Chu, Frederick Harris Jr., Sergiu Dascalu | University of Nevada Reno (United States) [ERVR\u2060-\u2060168] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4546\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/www.youtube.com\/watch?v=Ihvl8IpdbYg&amp;list=PLoksP178KYM53xZQPkE-VFAeiLvBGkznB&amp;index=6\"><\/a>&nbsp; <a href=\"https:\/\/doi.org\/10.2352\/EI.2025.37.13.ERVR-168\"><\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>4:10 pm : <strong>A collaborative virtual reality environment module for active shooter response training and decision making<\/strong>, Pranav Moses | University of North Texas (United States) [ERVR\u2060-\u2060169] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4481\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a 
href=\"https:\/\/www.youtube.com\/watch?v=7leH8Rfu26E&amp;list=PLoksP178KYM53xZQPkE-VFAeiLvBGkznB&amp;index=7\"><\/a>&nbsp; <a href=\"https:\/\/doi.org\/10.2352\/EI.2025.37.13.ERVR-169\"><\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>4:30 pm : <strong>A mobile augmented reality application for indoor emergency evacuation and navigation<\/strong>, Keerthana Srinivasan | University of North Texas (United States) [ERVR\u2060-\u2060170] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4482\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/www.youtube.com\/watch?v=JX9tHBmI3v4&amp;list=PLoksP178KYM53xZQPkE-VFAeiLvBGkznB&amp;index=9\"><\/a>&nbsp; <a href=\"https:\/\/doi.org\/10.2352\/EI.2025.37.13.ERVR-170\"><\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>4:50 pm : <strong>VR\/AR-NRP: Improving training for the neonatal resuscitation program using virtual and augmented reality<\/strong>, Mustafa Yalin Aydin, Vernon Curran, Peter Attia | Memorial University of Newfoundland; Susan White | Eastern Health, Newfoundland and Labrador; Lourdes Pena-Castillo, Oscar Meruvia-Pastor | Memorial University of Newfoundland (Canada) [ERVR\u2060-\u2060171] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4513\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/www.youtube.com\/watch?v=z-6va3K2Vh8&amp;list=PLoksP178KYM53xZQPkE-VFAeiLvBGkznB&amp;index=8\"><\/a>&nbsp; <a href=\"https:\/\/doi.org\/10.2352\/EI.2025.37.13.ERVR-171\"><\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a>Thursday 6th February 
2025<\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Human Vision and Electronic Imaging (HVEI) conference Session 7<\/strong><br>Perception in Augmented\/Virtual\/360\u00b0 Environments<br>Location: Regency A<br>Session Chair: Alex Chapiro, Meta<br><strong>Joint session<\/strong> with the Stereoscopic Displays and Applications (SD&amp;A) and The Engineering Reality of Virtual Reality (ERVR) conferences<\/td><td>Thu. 8:30 &#8211; 10:30&nbsp;am<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>8:30 am : <strong>HVEI Keynote: Transparency and Scission in Augmented Reality<\/strong>, Michael Murdoch, Rochester Institute of Technology (United States) [HVEI\u2060-\u2060193] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4660\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/doi.org\/10.2352\/EI.2025.37.11.HVEI-193\"><\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>9:30 am : <strong>Investigation of whether perspective guide vergence when gazing at moving object in 360-degree images<\/strong>, Hisaki Nate, Tamaki Takamura | Tokyo Polytechnic University (Japan) [HVEI\u2060-\u2060194] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4167\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/doi.org\/10.2352\/EI.2025.37.11.HVEI-194\"><\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>9:50 am : <strong>The impact of realistic avatars on self-other perception in virtual environments<\/strong>, Hiroyuki Morikawa | Tokyo University of Technology; Shota Maruyama, Yoshihiro Banchi, Takashi 
Kawai | Waseda University (Japan) [ERVR\u2060-\u2060158] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4352\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/doi.org\/10.2352\/EI.2025.37.13.ERVR-158\"><\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>10:10 am : <strong>From Polaroid to augmented reality: The enduring advantages of white borders<\/strong>, Michael Murdoch | Rochester Institute of Technology (United States) [HVEI\u2060-\u2060195] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4206\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/doi.org\/10.2352\/EI.2025.37.11.HVEI-195\"><\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Coffee Break<\/strong><\/td><td>Thu. 10:30 &#8211; 11:00&nbsp;am<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Human Vision and Electronic Imaging (HVEI) conference Session 8<\/strong><br>Fundamental and Extended Visual Perception<br>Location: Regency A<br>Session Chair: Rafal Mantiuk<br><strong>Joint session<\/strong> with the Stereoscopic Displays and Applications (SD&amp;A) and The Engineering Reality of Virtual Reality (ERVR) conferences<\/td><td>Thu. 
11:00 am &#8211; 1:00&nbsp;pm<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>11:00 am : <strong>Computational trichromacy reconstruction: empowering the color-vision deficient to recognize colors using augmented reality<\/strong>, Yuhao Zhu, Ethan Chen, Colin Hascup, Yukang Yan, Gaurav Sharma | University of Rochester (United States) [HVEI\u2060-\u2060196] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4427\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>11:20 am : <strong>JPI-first (JPI-0197): Effectiveness of visual, auditory, and haptic guidance cues for visual targets in virtual environments<\/strong>, Hila Sabouni | Iowa State University (United States) [HVEI\u2060-\u2060197] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4739\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/doi.org\/10.2352\/J.Percept.Imaging.2025.8.000403\"><\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>11:40 am : <strong>JPI-first (JPI-0196): Experimental investigation of depth cues for small-field light sources in darkness<\/strong>, Yuko Harada, Midori Tanaka, Takahiko Horiuchi | Chiba University (Japan) [HVEI\u2060-\u2060198] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4693\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/doi.org\/10.2352\/J.Percept.Imaging.2024.7.000405\"><\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>12:00 pm : <strong>Impact of 
camera height and field-of-view on distance judgement and gap selection in digital rear-view mirrors in vehicles<\/strong>, Felix Thulinsson, Niclas S\u00f6derlund, Shirin Rafiei, Bo Schenkman, Anders Djupsj\u00f6backa, B\u00f6rje Andr\u00e9n, Kjell Brunnstr\u00f6m | RISE Research Institutes of Sweden AB (Sweden) [HVEI\u2060-\u2060199] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4524\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/doi.org\/10.2352\/EI.2025.37.11.HVEI-199\"><\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>12:20 pm : <strong>JIST-first (JIST1933): Influence of display sub-pixel arrays on roughness appearance<\/strong>, Kosei Aketagawa, Midori Tanaka, Takahiko Horiuchi | Chiba University (Japan) [HVEI\u2060-\u2060200] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4710\" target=\"_blank\" rel=\"noreferrer noopener\">ABSTRACT<\/a>]<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/doi.org\/10.2352\/J.ImagingSci.Technol.2024.68.6.060404\"><\/a> (<a href=\"https:\/\/doi.org\/10.2352\/J.ImagingSci.Technol.2025.69.4.040601\">erratum<\/a>)<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>12:40 pm : <strong>Cross-modal brain plasticity in haptic perception, kinesthetics &amp; spatial navigation: Profound interhemispheric asymmetry<\/strong>, Lora Likova, Kristyo Mineff, Zhangziyi Zhang, Michael Liang, Christopher Tyler | Smith-Kettlewell Eye Research Institute (United States) [HVEI\u2060-\u2060201] &nbsp;[<a href=\"https:\/\/pcm.secure-platform.com\/imaging\/solicitations\/102002\/sessiongallery\/94033\/application\/4678\" target=\"_blank\" rel=\"noreferrer 
noopener\">ABSTRACT<\/a>]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Monday 3rd February 2025 Electronic Imaging Symposium Highlights SessionLocation: Grand Peninsula D Mon.&nbsp;11:00&nbsp;am&nbsp;&#8211;&nbsp;12:20&nbsp;pm Join us for a session that highlights the breadth of the EI Symposium with short papers selected by their Chairs from EI conferences.The full papers are given at other times in the program. Electronic Imaging Welcome Lunch (lunch provided)Location: The Grove The Electronic Imaging Symposium All-Conference Welcome [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-394","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/cellmicrocosmos.org\/conferences\/ERVR\/wp-json\/wp\/v2\/pages\/394","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cellmicrocosmos.org\/conferences\/ERVR\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/cellmicrocosmos.org\/conferences\/ERVR\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/cellmicrocosmos.org\/conferences\/ERVR\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/cellmicrocosmos.org\/conferences\/ERVR\/wp-json\/wp\/v2\/comments?post=394"}],"version-history":[{"count":18,"href":"https:\/\/cellmicrocosmos.org\/conferences\/ERVR\/wp-json\/wp\/v2\/pages\/394\/revisions"}],"predecessor-version":[{"id":450,"href":"https:\/\/cellmicrocosmos.org\/conferences\/ERVR\/wp-json\/wp\/v2\/pages\/394\/revisions\/450"}],"wp:attachment":[{"href":"https:\/\/cellmicrocosmos.org\/conferences\/ERVR\/wp-json\/wp\/v2\/media?parent=394"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}