For now, the lab prototype has an anemic field of view of just 11.7 degrees, significantly narrower than that of the Magic Leap 2 or even the Microsoft HoloLens.
But Stanford’s Computational Imaging Laboratory has an entire page of visual aid after visual aid suggesting it may be onto something special: a thinner stack of holographic components that could nearly fit into standard eyeglass frames, and that can be trained to project realistic, full-color, moving 3D images at varying depths.
Comparison of the optics of existing AR glasses (a) and the prototype glasses (b), with the 3D-printed prototype (c). Photo: Stanford Computational Imaging Laboratory
Like other AR glasses, they use waveguides, the components that guide light through the glasses to the wearer’s eyes. However, the researchers say they have developed a unique “nanophotonic metasurface waveguide” that can “eliminate the need for bulky collimating optics,” as well as a “trained physical waveguide model” that uses artificial intelligence algorithms to dramatically improve image quality. The study says the models “are automatically calibrated based on camera feedback.”
Objects, both real and augmented, can vary in depth. GIF: Stanford Computational Imaging Laboratory
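The paper’s learned waveguide model isn’t published as code, but “automatically calibrated based on camera feedback” describes a camera-in-the-loop fitting procedure familiar from computational holography: display test patterns, capture what the optics actually produce, and adjust a differentiable model of the optics until its predictions match the captures. The PyTorch sketch below illustrates only that general loop under heavy simplifying assumptions — the waveguide is reduced to a single learned Fourier-domain transfer function and the camera is simulated — so it does not reflect the actual physics or parameterization in the Nature paper.

```python
# Illustrative camera-in-the-loop calibration sketch. Everything here is a
# stand-in: the "physical" optics are a hidden Fourier-domain transfer
# function, and the camera is simulated with added noise.
import torch

N = 64  # simulated display / sensor resolution
torch.manual_seed(0)

# Hidden ground-truth transfer function standing in for the real optics,
# which the calibration can observe only through camera intensity captures.
true_tf = torch.exp(1j * 2 * torch.pi * torch.rand(N, N))

def camera_capture(phase: torch.Tensor) -> torch.Tensor:
    """Simulated camera: intensity after the (unknown) physical waveguide."""
    field = torch.exp(1j * phase)
    out = torch.fft.ifft2(torch.fft.fft2(field) * true_tf)
    return out.abs() ** 2 + 0.01 * torch.randn(N, N)  # sensor noise

# Learnable proxy model: a phase-only transfer function parameterized by theta.
theta = torch.zeros(N, N, requires_grad=True)

def model_predict(phase: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
    """Differentiable model of the optics, to be fitted to camera feedback."""
    field = torch.exp(1j * phase)
    out = torch.fft.ifft2(torch.fft.fft2(field) * torch.exp(1j * theta))
    return out.abs() ** 2

opt = torch.optim.Adam([theta], lr=0.05)
for step in range(500):
    probe = 2 * torch.pi * torch.rand(N, N)  # random probe pattern to display
    measured = camera_capture(probe)         # feedback from the camera
    loss = torch.mean((model_predict(probe, theta) - measured) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final calibration loss: {loss.item():.4f}")
```

Once a proxy model like this tracks the hardware closely enough, it can in principle be back-propagated through to precompute holograms that compensate for the real optics’ imperfections, which is the kind of image-quality gain the researchers describe.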
Although the Stanford technology is currently only a prototype, with the working models apparently attached to a 3D-printed bench and frames, the researchers are looking to disrupt the current spatial computing market, which also includes bulky passthrough mixed reality headsets like Apple’s Vision Pro, Meta’s Quest 3, and others.
Dr. Gun-Yeal Lee, who helped write the paper published in Nature, says there is no other AR system that compares in terms of both capabilities and compactness.
Credit: www.theverge.com