Issue #7/2024
E. V. Vlasov, P. S. Zavialov, E. S. Zhimuleva
Design and Evaluation of High-Resolution Wide-Angle Oculars for a 3D Multifocal Head-Mounted Display
DOI: 10.22184/1993-7296.FRos.2024.18.7.564.568
Design and Evaluation of High-Resolution Wide-Angle Oculars for a 3D Multifocal Head-Mounted Display
E. V. Vlasov, P. S. Zavialov, E. S. Zhimuleva
Technological Design Institute of Scientific Instrument Engineering, Siberian Branch of the Russian Academy of Sciences, Novosibirsk
The design and analysis of the optical scheme of wide-angle oculars for an immersive multifocal head-mounted display are presented. The ocular is based on a beam-splitting cube used as a splitter to generate two image planes. The field-of-view angle is 44°×33°, 60° diagonally. The maximum radius of the circle of confusion (RMS radius) is 13.2 µm, with a pixel size of 9.3 µm.
Keywords: 3D image, head-mounted display, stereoscopic lenses
Article received: 11.10.2024
Article accepted: 28.10.2024
Introduction
The trend in the development of individual aids for navigation, orientation, and control of human movement is driven by their increasingly widespread use in such areas as sports, medicine, interactive computer games, and combat missions. Particularly stringent requirements are imposed on mobile navigation aids for military purposes, including their tactical, technical, weight, and dimensional specifications.
At present, augmented reality goggle technologies are being actively developed abroad. Popular developments include those by Epson (Epson Moverio BT‑200, BT‑300), Lumus, Microsoft (Microsoft HoloLens), Sony (Sony SmartEyeglass), Google (Google Glass), and others. Augmented reality goggles are based on several different technologies for projecting an image from a microdisplay onto a rear-projection screen located in front of the human eye. One of these technologies projects the image using stereoscopic lenses [1–4].
One of the significant drawbacks of existing and developing augmented reality goggles is their very small field of view (FOV), typically only about 20–40 degrees diagonally. Increasing the viewing angle in the schemes described above leads to a significant increase in the dimensions and weight of the goggles.
Design of the optical scheme
All subsequent designs of the stereoscopic lens for generating a three-dimensional scene image were performed on the basis of Fourier optics using the ZEMAX-EE optical design program by Focus Software, Inc. Low-power AMOLED SXGA060 microdisplays by Olightek and FLCOS microdisplays by Forth Dimension Displays, each with a resolution of 1280 × 1024, are considered as image sources. In [5, 6], diffraction analysis of a linear combination of two image planes showed that the eye accommodates to the peak of the intensity curve regardless of aberrations. Moreover, it was found that the intensity of the half-sum does not remain constant at different pupil sizes, since the eye is itself one of the display components (the image-plane intensities are combined on the retina). In the program, the design was performed by reverse ray tracing for visible radiation within the spectral range from 486 nm to 656 nm. A simplified eye model was used, and several eye models (Lotmar, Atchison) were checked during the design [7].
The ocular shown in Fig. 1 takes two flat images generated by the microdisplays 1 and forms their virtual images using the optical elements 2–5. Owing to the beam-splitting cube 2, the virtual images are coaxial and perpendicular to the optical axis. The radiation from the image elements of the microdisplays is focused by the eye's crystalline lens into image spots on the retina in such a way that the maximum total energy falls on the fovea (the central pit of the retina) of the observer's eye.
The design was performed for two image planes located at distances of 1 m (1 D) for the near plane and 4 m (0.25 D) for the far plane. This ensures 3D depth perception of the sharply rendered space from 1 meter to infinity. Objects located at the specified distances are in focus for the human eye.
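As an illustrative check (our own sketch, not part of the article's design files), the two plane distances can be restated in diopters; the short Python snippet below simply reproduces this arithmetic, with helper names of our own choosing.

```python
# Illustrative sketch: restating the two image-plane distances in diopters.
# Distances are taken from the text; the helper function name is ours.

def vergence_diopters(distance_m: float) -> float:
    """Vergence (in diopters) of an object located distance_m meters away."""
    return 1.0 / distance_m

near_plane = vergence_diopters(1.0)   # near plane at 1 m  -> 1.00 D
far_plane = vergence_diopters(4.0)    # far plane at 4 m   -> 0.25 D

# The two focal planes are 0.75 D apart, and the far plane lies only 0.25 D
# from optical infinity (0 D), which is why the sharply rendered space is
# described as extending from 1 m to infinity.
print(near_plane, far_plane)          # 1.0  0.25
print(near_plane - far_plane)         # 0.75 D between the planes
print(far_plane - 0.0)                # 0.25 D from the far plane to infinity
```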
The system is not rigidly aligned with respect to the eye. The device with two built-in stereoscopic lenses is worn on the observer's head, so the eye position can shift along three axes within ±2 mm (along and across the optical axis). In addition, the eye's pupil, which serves as the aperture stop of the system, moves across the field of view during observation, and this should not qualitatively affect the resulting image.
For reliable perception of the resulting image, it is important to select a field-of-view angle such that the observer's peripheral vision is involved in scene formation. The field-of-view angle obtained in the ocular is 44° (the eyeball rotation angle during observation is 45–50°).
To allow refocusing from one plane to the other without visible displacement of objects, an additional constraint is imposed that prevents the same image point from shifting between configurations.
The design was performed for 16 configurations, accounting for pupil displacement, changes in the optics between the beam-splitting cube and the microdisplay, and the dioptric shift.
Figure 2 shows the circle of confusion (spot) diagram for all configurations. It is evident that the greatest image deterioration occurs when the eye is displaced across the optical axis, as well as when additional negative refractive power is introduced. The maximum circle of confusion radius (RMS radius) is 13.2 µm with a pixel size of 9.3 µm. Distortion does not exceed 5%, and field curvature is no more than 0.05 mm.
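To put these two figures side by side, the sketch below (our own illustration, not a computation from the article) expresses the reported worst-case RMS spot radius in units of the microdisplay pixel pitch.

```python
# Illustrative comparison of the reported worst-case spot size with the pixel size.
# Both values are quoted from the text; the interpretation in pixels is ours.
rms_radius_um = 13.2   # maximum RMS circle of confusion radius over all configurations
pixel_um = 9.3         # microdisplay pixel size

radius_px = rms_radius_um / pixel_um          # ~1.4 pixels
diameter_px = 2.0 * rms_radius_um / pixel_um  # ~2.8 pixels

print(f"RMS radius   ~ {radius_px:.2f} px")
print(f"RMS diameter ~ {diameter_px:.2f} px")
```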
During the design process, the system was also examined for the occurrence of ghost images in sequential operation mode. Fig. 3 shows the image obtained for the designed optical system in the Zemax program in non-sequential mode. It is evident that the obtained image has sufficiently high contrast, ghost images and glare are absent, and distortion and chromatic aberration are weakly expressed.
The developed stereoscopic pair of oculars makes it possible to generate a wide-format 3D image without changing the format of the left and right oculars. Figure 4 shows an image obtained with the oculars set to "infinity".
The original array format is 4 × 3, or 1280 × 1024 pixels. Let the 3D stereo and 3D mono areas be equal. Then in each ocular the mono area occupies 1/3 of the array, and the total area of the 3D image increases by 33.3%. The horizontal extent of the image becomes 4 + 4/3 = 16/3. The resulting format is (16/3) × 3, or 16 × 9, which corresponds to a wide-format 3D image.
In our case, with an "angular" ocular format of 44° × 33° (4 × 3), we get (44° + 44°/3) × 33°, or 58°40′ × 33°, i.e. an addition of almost 12° to the field of view of one ocular.
It is also possible to obtain a 2 × 1 format. In this case, the left and right images overlap by half, the total 3D field is 66° × 33°, and the diagonal FOV is about 74°.
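The format arithmetic above can be reproduced in a few lines of code. The sketch below is only an illustration of the overlap bookkeeping; the function and variable names are our own, and the diagonal is computed by treating angles like lengths, which is an approximation.

```python
# Illustrative recomputation of the combined-format arithmetic from the text.
# Per-eye values are quoted from the article; the helper name is ours.

def combined_width(per_eye_width: float, mono_fraction: float) -> float:
    """Total horizontal extent when each eye contributes a monocular strip
    equal to mono_fraction of its width outside the shared stereo region."""
    return per_eye_width + per_eye_width * mono_fraction

# Pixel format: a 4 x 3 array with a mono strip of 1/3 of the width per eye.
print(combined_width(4, 1 / 3))        # 5.333... = 16/3  ->  (16/3) x 3 = 16:9

# Angular format: 44 deg x 33 deg per eye, same 1/3 mono strip.
print(combined_width(44, 1 / 3))       # 58.67 deg  ->  58 deg 40' x 33 deg

# 2 x 1 variant: the images overlap by half, i.e. each eye adds a 1/2 mono strip.
width = combined_width(44, 1 / 2)           # 66 deg
diagonal = (width ** 2 + 33 ** 2) ** 0.5    # ~73.8 deg (approximate for angles)
print(width, diagonal)
```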
Based on the optical scheme designs for the oculars, design documentation for the housing was developed. The ocular structure provides such operational features as adjustment of the center-to-center (interpupillary) distance of the oculars and a dioptric adjustment of ±1 D to adapt the oculars to the observer's eyes. According to the developed design documentation, the units and parts were manufactured and a prototype of the stereoscopic oculars was assembled (Figure 5).
The conducted research and the evaluation of comprehensive image-quality criteria for the optical unit of the augmented reality goggles, using the eye models, showed the following: the maximum circle of confusion radius (RMS radius) is 13.2 μm with a pixel size of 9.3 μm; distortion does not exceed 5% and field curvature is no more than 0.05 mm, and these aberrations can be corrected in software; the resulting image on the retina has sufficiently high contrast, ghost images and glare are absent, and distortion and chromatic aberration are weakly expressed.
In the future, it is planned to use this type of system in the development of new-generation simulators for astronauts and flight simulators, as well as in control systems for robotic devices.