Microsoft wants to improve sub-screen cameras with AI

One of the main problems with today's sub-screen cameras is poor image quality. The cause is diffraction in the transparent OLED display (the so-called T-OLED), which in some cases does not let enough light through for the camera underneath to capture a high-quality picture.
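Conceptually, this kind of degradation is usually modelled as the clean image being blurred by the display's point spread function (PSF), with sensor noise added on top. The sketch below illustrates the idea; the Gaussian PSF and noise level are illustrative assumptions, not measurements of a real T-OLED panel.

```python
# Minimal sketch of a T-OLED degradation model: blur with an assumed PSF,
# then add Gaussian sensor noise. Purely illustrative, not Microsoft's pipeline.
import numpy as np
from scipy.signal import convolve2d

def simulate_toled_capture(clean, psf, noise_sigma=0.02, rng=None):
    """Blur a clean grayscale image with the display PSF and add noise."""
    rng = np.random.default_rng() if rng is None else rng
    blurred = convolve2d(clean, psf, mode="same", boundary="symm")
    noisy = blurred + rng.normal(0.0, noise_sigma, size=blurred.shape)
    return np.clip(noisy, 0.0, 1.0)

# Stand-in PSF: a small normalized Gaussian kernel (assumption).
x = np.arange(-3, 4)
g = np.exp(-x**2 / 2.0)
psf = np.outer(g, g)
psf /= psf.sum()

clean = np.random.rand(64, 64)            # placeholder for a clean image
degraded = simulate_toled_capture(clean, psf)
```

Because the blur and noise follow the physics of the display in a repeatable way, a network trained on such pairs can learn to invert them.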

Researchers at Microsoft tackled this problem and found that the loss of image quality occurs in a predictable way, which means it can be compensated for. To do this, they used a U-Net neural network, which simultaneously improves the signal-to-noise ratio and reduces image blur (a rough sketch of this kind of network is shown after the examples below). The screenshots below show an example of how the system works.

This is how a picture looks from a conventional front-facing camera:

This is the same shot taken through a T-OLED screen, without processing:

And this is the result with processing:

As you can see, there is practically no difference between the first and third pictures. And although it has not yet been announced when the technology will appear in smartphones, it seems likely to happen in the foreseeable future.
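For context, a U-Net is an encoder-decoder network with skip connections that preserve fine detail while the network removes noise and blur. Below is a minimal sketch (assuming PyTorch); the layer sizes, channel counts, and training setup are illustrative assumptions, not the architecture Microsoft actually used.

```python
# Minimal U-Net-style restoration network: degraded T-OLED capture in,
# restored RGB image out. Sizes below are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 32)
        self.enc2 = conv_block(32, 64)
        self.bottleneck = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.out = nn.Conv2d(32, 3, 1)

    def forward(self, x):
        e1 = self.enc1(x)                    # full-resolution features
        e2 = self.enc2(self.pool(e1))        # 1/2 resolution
        b = self.bottleneck(self.pool(e2))   # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.out(d1)                  # restored RGB image

model = SmallUNet()
degraded = torch.rand(1, 3, 128, 128)        # stand-in for a T-OLED capture
restored = model(degraded)
print(restored.shape)                        # torch.Size([1, 3, 128, 128])
```

In a setup like this, training pairs would be (image captured through the T-OLED, reference image from an unobstructed camera), with an L1 or L2 loss between the network output and the reference.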
