WiMi Hologram Cloud has rolled out a real-time networked holographic microscopy interactive solution. This system significantly enhances resolution along the optical axis by using three collimated RGB LEDs that illuminate the sample from different angles. Each camera channel independently records a hologram, which is then processed on a GPU to reconstruct a corresponding volume.
These volume reconstructions, although each is limited in axial resolution, are overlapped to generate a comprehensive volumetric image. This image mirrors the surface profile of a simple microscopic object with a high degree of precision.
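The article does not specify the reconstruction algorithm; a common approach for refocusing a recorded hologram at many depths is angular spectrum propagation. The sketch below is a minimal, hypothetical illustration of that idea: each channel's hologram is numerically propagated to a stack of depths, and the per-channel volumes are fused by averaging. All function names and parameters are my own assumptions, not WiMi's implementation.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field a distance z via the angular spectrum method.

    Assumed illustration only: fields are sampled on a square grid with
    pixel pitch dx; evanescent spatial frequencies are suppressed.
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # clip evanescent components
    transfer = np.exp(1j * kz * z)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

def reconstruct_volume(hologram, wavelength, dx, depths):
    """Refocus one hologram at a stack of depths, returning intensity slices."""
    return np.stack([np.abs(angular_spectrum_propagate(hologram, wavelength, dx, z))
                     for z in depths])

def fuse_volumes(volumes):
    """Fuse per-channel volumes (e.g. one per RGB LED angle) by averaging."""
    return np.mean(volumes, axis=0)
```

On a GPU the same FFT-based pipeline maps directly onto libraries such as CuPy, which is what makes per-frame volume reconstruction feasible in real time.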
The new system supports real-time interaction via VR devices, allowing users to manipulate the holographic images using gestures or more complex remote control interactions. When an event occurs, such as the creation, destruction, or displacement of an optical trap, the updated configuration data is sent to the holographic engine that controls the optical hardware.
The engine processes this data and computes an optimized digital hologram, which is displayed on a spatial light modulator (SLM). An infrared laser beam reflected from the SLM is phase-modulated accordingly, and the resulting diffraction-limited spots, after propagation through the microscope objective, correspond exactly to their virtual counterparts.
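One textbook way to compute such a phase hologram for a set of trap positions is the "gratings and lenses" superposition: each spot contributes a blazed grating (lateral displacement) plus a Fresnel lens term (axial displacement), and the phase of the summed field is sent to the SLM. The sketch below assumes this standard technique; the source does not say which algorithm WiMi uses, and every parameter here is illustrative.

```python
import numpy as np

def gratings_and_lenses(spots, n=256, wavelength=1.064e-6, focal=2e-3, pitch=8e-6):
    """Compute a phase-only SLM mask placing one trap per (x, y, z) spot.

    Assumed parameters: n x n SLM pixels of the given pitch, an infrared
    wavelength, and an effective focal length for the objective.
    """
    coords = (np.arange(n) - n / 2) * pitch
    X, Y = np.meshgrid(coords, coords)
    field = np.zeros((n, n), dtype=complex)
    for x0, y0, z0 in spots:
        grating = 2 * np.pi * (X * x0 + Y * y0) / (wavelength * focal)  # lateral shift
        lens = np.pi * z0 * (X ** 2 + Y ** 2) / (wavelength * focal ** 2)  # axial shift
        field += np.exp(1j * (grating + lens))
    return np.angle(field)  # phase pattern in [-pi, pi] for the SLM
```

Each VR interaction event would then amount to recomputing this mask with the updated spot list and uploading it at the SLM's refresh rate.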
A key feature of the technology is its smooth user experience, aided by the SLM's 60 Hz refresh rate, which corresponds to a minimum display latency of about 17 ms.
The system also equips users with tools for tracking objects and monitoring their coordinates over time. This aids in precise alignment of multiple particles or cells in 3D space, assisting in observing stochastic behavior and biological interactions under controlled initial conditions. The technology even enables manipulation of micromachined objects with complex shapes, simplifying and accelerating the assembly of multi-component microsystems.
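Tracking objects over time typically means linking detected coordinates between successive frames. The article does not describe WiMi's tracker, so the following is a minimal, assumed sketch of greedy nearest-neighbour linking in 3D, with a displacement threshold to treat distant detections as lost or newly appeared objects.

```python
import numpy as np

def link_frames(prev_pts, next_pts, max_disp=5.0):
    """Greedily link 3D coordinates between two frames.

    Returns (prev_index, next_index) pairs; each next-frame point is
    matched at most once, and links beyond max_disp are rejected.
    """
    if not prev_pts or not next_pts:
        return []
    links = []
    used = set()
    nxt = np.asarray(next_pts, dtype=float)
    for i, p in enumerate(prev_pts):
        d = np.linalg.norm(nxt - np.asarray(p, dtype=float), axis=1)
        d[list(used)] = np.inf  # exclude already-matched points
        j = int(np.argmin(d))
        if d[j] <= max_disp:
            links.append((i, j))
            used.add(j)
    return links
```

Chaining such links across frames yields per-object trajectories, which is what makes it possible to monitor coordinates and align multiple particles under controlled initial conditions.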
The technology is particularly effective for imaging objects of a size comparable to light's wavelength, such as bacteria. Convolving the actual object shape with a point spread function approximating a 3D Gaussian results in a slightly blurred 3D image. Despite this, the method reliably infers the geometric parameters of the shapes, thereby allowing for precise volume reconstruction.
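The blurring described above can be illustrated in one dimension: convolution with a Gaussian PSF widens the observed profile, and because variances add under Gaussian convolution, the underlying object width can still be recovered by quadrature subtraction. This is a minimal sketch of that reasoning, not WiMi's actual inference procedure.

```python
import numpy as np

def gaussian_psf(x, sigma):
    """Normalized 1D Gaussian point spread function."""
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    return g / g.sum()

def blur(profile, sigma, dx=1.0):
    """Convolve a 1D object profile with a Gaussian PSF (circular, via FFT)."""
    n = len(profile)
    x = (np.arange(n) - n // 2) * dx
    psf = np.fft.ifftshift(gaussian_psf(x, sigma))  # center kernel at index 0
    return np.real(np.fft.ifft(np.fft.fft(profile) * np.fft.fft(psf)))

def deblurred_sigma(observed_sigma, psf_sigma):
    """Variances add under Gaussian convolution, so the object width is
    recoverable: sigma_obj = sqrt(sigma_obs^2 - sigma_psf^2)."""
    return np.sqrt(observed_sigma ** 2 - psf_sigma ** 2)
```

Since the convolution conserves total intensity, integrated quantities such as object volume remain estimable even when edges are blurred, which is why geometric parameters can still be inferred reliably.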