What They Say
PetaPixel reported on some very clever software technology from Google that is capable of dramatically improving low-resolution images. The technique is called SR3, or ‘Super-Resolution via Repeated Refinement’. Google said:
“SR3 is a super-resolution diffusion model that takes as input a low-resolution image, and builds a corresponding high resolution image from pure noise. The model is trained on an image corruption process in which noise is progressively added to a high-resolution image until only pure noise remains.”
“It then learns to reverse this process, beginning from pure noise and progressively removing noise to reach a target distribution through the guidance of the input low-resolution image.”
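The two processes Google describes can be sketched in a few lines. This is a toy illustration only, assuming a simple Gaussian noise schedule: the real SR3 model uses a trained neural network to predict and remove the noise at each step, which is mocked here by nudging the sample toward an upsampled copy of the low-resolution input (a hypothetical stand-in, not Google's method).

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50                              # number of noising / denoising steps
betas = np.linspace(1e-4, 0.1, T)   # assumed per-step noise schedule

def forward_noising(hi_res):
    """Corrupt a high-res image step by step until it is mostly noise."""
    x = hi_res.copy()
    for beta in betas:
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * rng.standard_normal(x.shape)
    return x

def reverse_refinement(guide):
    """Start from pure noise and progressively remove it, guided by the
    (upsampled) low-res input. A trained network would predict the noise;
    here we simply pull the sample toward the guide at each step."""
    x = rng.standard_normal(guide.shape)      # begin from pure noise
    for beta in betas[::-1]:
        x = x + beta * (guide - x)            # mock denoising step
    return x

hi = rng.random((8, 8))               # stand-in "high-resolution" image
noisy = forward_noising(hi)           # forward: image -> noise
restored = reverse_refinement(hi)     # reverse: noise -> image
print(np.abs(restored - hi).mean() < np.abs(noisy - hi).mean())
```

Even with this crude stand-in denoiser, the reverse pass ends far closer to the original image than the fully noised version, which is the basic structure the real model learns.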
There is a great animation of the process here.
In an evaluation of the images, the group asked viewers to choose between original high-resolution images and upscaled images created using the process. The ‘confusion rate’ was 47.4%, close to the 50% rate that would indicate viewers could see no difference at all.
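For clarity, the confusion rate is simply the fraction of forced-choice trials in which viewers picked the generated image over the real one; at 50% they would be guessing at random. The tallies below are made up for illustration:

```python
# Hypothetical trial counts, chosen only to reproduce the reported rate.
fooled = 474      # trials where the viewer chose the generated image
trials = 1000     # total two-alternative forced-choice trials

confusion_rate = fooled / trials
print(f"{confusion_rate:.1%}")  # prints 47.4%; 50.0% would mean pure guessing
```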
What We Think
As a hobby, I do some desktop publishing and have to say that this kind of software would be incredibly useful. It's surely only a matter of time before this kind of process can be applied to video as well. Wow. That would really boost the amount of 4K or 8K content available, and it's a chance for those with old content to sell it to us again.
Because this technique is based on removing noise, it could be applicable to reducing audio noise as well as image noise.
I couldn’t see any reference to how difficult it would be to build this technology into a real-time chip but, although I'm not a semiconductor engineer, it looks like the kind of massively parallel process that could really benefit from hardware acceleration. (BR)