
Sensor Requirements for Better Pixels

Klaus Weber, a Camera Sr. Product Marketing Manager at Grass Valley, gave a thorough analysis of various camera architectures and the needs of next-generation image capture. His conclusion: CMOS sensors are superior to CCD for high frame rates, high dynamic range and higher resolution. Plus, he thinks a global shutter (360-degree shutter angle) camera is best for live applications, and larger pixels are needed on smaller imagers. How he arrived at this conclusion, and his recommendation for a three-imager solution, are described below.


Weber was primarily concerned with live TV production, where the focus is on 2/3” image sensors and related lenses. However, he noted at the beginning of his presentation that nearly all sensors – single-chip or three-chip, from cinema to consumer – have photosites of roughly the same size: 4×4 to 7×7 microns (with the exception of one CCD sensor with 10×5 micron photosites).

CMOS sensors outperform CCD in sensitivity across frame rates by a wide margin (2X for progressive scan), and they hold a 2.5X advantage in dynamic range.

So what are the compromises? Most CMOS sensors use a rolling shutter, meaning they expose and read out photosites in bands that scroll down the frame from top to bottom. The issue is that a moving object is imaged at a slightly different time at the top of the frame than at the bottom, so vertical lines can be rendered as skewed, non-straight shapes.
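The skew can be estimated with simple arithmetic: the horizontal displacement of an edge grows with each row's readout delay. A minimal sketch, using illustrative numbers (1080 rows, a 10 ms frame readout and a 500 px/s object speed are assumptions, not figures from Weber's talk):

```python
# Sketch: rolling-shutter skew for a vertical edge moving horizontally.
# All numbers below are illustrative assumptions.

ROWS = 1080                # sensor rows
READOUT_S = 0.010          # time to scan from the top row to the bottom row
SPEED_PX_S = 500.0         # horizontal object speed in pixels/second

row_time = READOUT_S / ROWS          # readout delay between adjacent rows

def edge_offset(row):
    """Horizontal shift of the edge at a given row relative to the top row."""
    return SPEED_PX_S * row * row_time

skew = edge_offset(ROWS - 1) - edge_offset(0)
print(f"skew across the frame: {skew:.2f} px")
```

A global shutter makes `row_time` effectively zero, which is why the skew disappears.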

Global shutters that expose a full frame all at once are therefore preferred. However, chip makers must add two additional transistors at each photosite, adding cost and reducing the fill factor. Fill factor can be restored by adding a microlens array on top of the sensor, with some added cost.

The size of the photosite matters too. The bigger the photosite, the more light it can gather and the better its low-light performance. Making the pixel smaller while maintaining performance requires an increase in quantum efficiency. Quantum efficiency is already above 60%, so don’t expect any dramatic improvements any time soon. For more resolution and high dynamic range, bigger imagers will be needed to maintain performance levels.
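A quick back-of-the-envelope calculation shows why quantum efficiency (QE) cannot rescue small photosites. Signal scales roughly with photosite area times QE; the 4 and 7 micron pitches and the 60% QE floor come from the article, while the linear-area model is a simplifying assumption:

```python
# Sketch: photosite area vs. quantum efficiency (QE).
# Signal is modeled as proportional to photosite area times QE.

def relative_signal(pitch_um, qe):
    """Relative signal for a square photosite (arbitrary units)."""
    return (pitch_um ** 2) * qe

big   = relative_signal(7.0, 0.60)   # large photosite at today's ~60% QE
small = relative_signal(4.0, 0.60)   # small photosite at today's ~60% QE
print(f"7um vs 4um signal ratio: {big / small:.2f}x")

# Even a perfect 100% QE cannot close the area gap:
small_max = relative_signal(4.0, 1.00)
print(f"4um at 100% QE recovers only {small_max / big:.2f}x of the 7um signal")
```

The area ratio (49/16 ≈ 3x) dwarfs the remaining QE headroom (at most 1/0.6 ≈ 1.67x), which is the article's point: beyond a certain pixel density, only a bigger imager maintains performance.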

Weber also noted that the bottleneck for high-frame-rate operation is not the sensor but the A/D conversion stages. This can be addressed by adding more A/D blocks and multiplexing data through them.
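The sizing logic can be sketched as a throughput calculation: the sensor must deliver pixels-per-frame times frames-per-second samples each second, and the A/D stage must keep up. The 120 fps target and 50 Msps per-converter rate below are illustrative assumptions, not Grass Valley specifications:

```python
# Sketch: how many parallel A/D blocks a given frame rate demands.
# The frame rate and per-ADC sample rate are illustrative assumptions.
import math

PIXELS = 1920 * 1080           # photosites read out per frame
FPS = 120                      # target high frame rate
ADC_RATE = 50e6                # samples/second one A/D block can digitize

required = PIXELS * FPS                    # samples/second from the sensor
blocks = math.ceil(required / ADC_RATE)    # parallel A/D blocks needed
print(f"need {required / 1e6:.0f} Msps -> {blocks} parallel A/D blocks")
```

Doubling the frame rate doubles `required`, which is why multiplexing across more A/D blocks, rather than a faster photosite, is the practical lever.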

3-chip prism solution

Wide color gamut cameras are here today, but the rest of the production, distribution and display pipeline is still maturing.

With all these considerations in mind, Weber next looked at a common digital cinema camera architecture – a single large sensor with a Bayer filter pattern (RGBG). Here, a 4K sensor may have 4096 x 2160 photosites, but it does not capture a full 4K RGB image. Because of the Bayer filter pattern, it effectively captures a 2K green image and 1K red and blue images. The process of de-Bayering is in fact a scaling (interpolation) process that can happen in the camera or in outboard processors, and it creates an RGB image with 4K of red, green and blue pixels. That is why 5K, 6K and even 8K single-sensor cameras would be preferred, but at additional cost.
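The sample-count claim is easy to verify by tiling the Bayer pattern over the photosite grid described in the article. A minimal sketch, assuming the common RGGB tile layout (the article only says RGBG):

```python
# Sketch: per-channel sample counts in a Bayer mosaic, assuming an
# RGGB tile. Green covers half the photosites; red and blue a quarter each.
import numpy as np

H, W = 2160, 4096                        # photosite grid from the article
rows, cols = np.mgrid[0:H, 0:W]

green = ((rows + cols) % 2 == 1)         # G sites on the checkerboard diagonals
red   = (rows % 2 == 0) & (cols % 2 == 0)
blue  = (rows % 2 == 1) & (cols % 2 == 1)

total = H * W
print(f"green samples: {green.sum() / total:.0%}")
print(f"red samples:   {red.sum() / total:.0%}")
print(f"blue samples:  {blue.sum() / total:.0%}")
```

Every missing value at each photosite must be interpolated from its neighbors, which is why the article characterizes de-Bayering as a scaling process rather than true full-resolution capture.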

Plus, these large sensors are ideal for cinematic production, where a shallow depth of field is preferred for artistic reasons. But in the broadcast environment, smaller sensors with a greater depth of field are preferred.

As a result, Weber sees the 2/3” 3-chip CMOS solution remaining the best choice for broadcast. Here, he described using three conventional 1080p imagers for the red, green and blue channels with a prism to split the incoming light. Creating a UHD image from this requires scaling in both the horizontal and vertical directions. He proposed a novel solution that includes a “spatial offset of the three imagers, an adapted optical filtering and in the RGB RAW domain, means in 4:4:4 and before any detail correction is applied.” However, he was not clear exactly what the optical set-up would look like. Nevertheless, Grass Valley apparently tested this solution in September 2014 at a MotoGP race in Misano, Italy.
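Weber did not detail the optics, but the general spatial-offset idea can be illustrated in one dimension: two imagers of the same resolution, shifted by half a photosite relative to each other, sample the scene at twice the density of either alone. A toy sketch of that principle (not Grass Valley's actual design, which offsets three imagers):

```python
# Toy 1-D illustration of the spatial-offset principle: two 1080-sample
# imagers shifted by half a photosite yield 2160 distinct sample positions.
import numpy as np

n = 1080
grid_a = (np.arange(n) + 0.0) / n        # first imager's sample positions
grid_b = (np.arange(n) + 0.5) / n        # second imager, half-pixel offset

combined = np.sort(np.concatenate([grid_a, grid_b]))
print(f"samples per line: {len(combined)} (vs {n} from one imager)")
```

The offset samples carry genuinely new scene information, so the subsequent HD-to-UHD scaling is reconstruction rather than pure interpolation.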

Weber also described a 4-imager solution that adds a second 1080p imager for the green channel, exactly mimicking the Bayer pattern. This is not a new concept – it was tried for SD cameras – but each green channel loses half its sensitivity and alignment is more complex, so this approach seems less likely. –Chris Chinnock