One of the coolest new features of Lightroom is the Dehaze slider, a control that seemingly magically strips away “haze” (mist, fog, smog, etc.) and reveals what it was obscuring.
However, it’s not magic. In fact, the methods for dehazing a photo are often fairly simple (but clever). In this post, we’ll see how dehazing works by going through a method described in this research paper: html, pdf. In doing so, we’ll cover the following topics:
- What is haze?
- How do we teach a computer to tell which parts of a photo are hazy?
- How do we remove haze from the photo?
What is haze?
We see objects from the light reflected off them:
But that’s not all we see. Some of this light will get absorbed or scattered by the air it passes through. In addition, ambient light will bounce off these air particles and be reflected back to your camera.
Most of the time this is not a big problem. However, if the air is full of additional particles such as water droplets (fog) or soot particles (smog), then visibility will decrease. This occurs because the haze stops the light from the original object, and replaces it with additional reflected light.
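This physical picture is often summarised by a simple imaging model (the one used in the paper linked above). In my notation here, I is the pixel your camera records, J is the true scene radiance we want to recover, A is the ambient light, and t is the transmission (how much of J’s light survives the trip through the haze):

```latex
% Haze imaging model, per pixel x:
%   I(x) = observed colour at the camera
%   J(x) = true scene radiance (the haze-free image)
%   A    = ambient (atmospheric) light colour
%   t(x) = transmission, between 0 (fully hazy) and 1 (no haze)
I(x) = J(x)\,t(x) + A\,\bigl(1 - t(x)\bigr)
```

The first term is the object’s light surviving the haze; the second is the ambient light the haze reflects in its place. Dehazing amounts to estimating A and t, then solving for J.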
Dehazing a photo requires a couple of steps:
- Figure out how hazy each pixel is.
- Figure out the colour of the light reflected by haze.
- Remove the haze colour from each pixel in proportion to how hazy that pixel is.
Light reflected due to haze has a particular colour: the colour of the ambient light that is reflecting. This colour should be fairly constant over the whole picture, so if we can calculate it, we can just remove that constant value from every pixel.
Determining how hazy each pixel is causes a bit more of a headache. The method I describe here uses a clever technique called the “dark channel”.
The dark channel – i.e. a map of haze
What is the dark channel? The inventors of the method have a succinct description:
The dark channel … is based on the statistics of haze-free outdoor images. We find that, in most of the local regions which do not cover the sky, it is very often that some pixels (called “dark pixels”) have very low intensity in at least one color (rgb) channel… Therefore, these dark pixels can directly provide accurate estimation of the haze’s transmission.
Let’s break this down and see how we go from a photo to a dark channel.
Each pixel in the original image will have red, green, and blue colour values associated with it. In an 8-bit picture, like most JPGs, these will vary between 0 and 255. Each of these is called a colour channel. High values mean more intensity in that colour.
In normal pictures with no haze, most small regions of the picture will have a low value for one of these colours, due to shadows, naturally dark objects, or colourful things (with a low value in another colour). However, the ambient light reflected by haze will have fairly high values in all three colours.
To calculate the dark channel, we take every pixel in the image and look at the region surrounding it (e.g. a square of 15×15 pixels centred on the pixel of interest). For each pixel in that square we take its lowest colour value, and then we take the minimum of these across the whole square. This is the dark channel value for the original pixel. In other words, we pick the lowest colour value of all the colours in all the surrounding pixels.
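The description above can be sketched in a few lines of code. This is a minimal, unoptimised version (a real implementation would use a fast minimum filter); the function name is mine, and it assumes a NumPy image array with values scaled to [0, 1]:

```python
import numpy as np

def dark_channel(image, patch_size=15):
    """Dark channel: per-pixel minimum over the colour channels,
    followed by a minimum over a patch_size x patch_size window."""
    # image: H x W x 3 array with values in [0, 1]
    min_rgb = image.min(axis=2)            # lowest colour value at each pixel
    pad = patch_size // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    h, w = min_rgb.shape
    dark = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            # minimum over the square centred on (i, j)
            dark[i, j] = padded[i:i + patch_size, j:j + patch_size].min()
    return dark
```

A pure red image, for instance, has a dark channel of all zeros (its green and blue channels are zero everywhere), while a washed-out white image has a dark channel close to one.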
You will note that we have gone from a colour photo composed of three colour channels to an image with just one channel. This will be a black-and-white (technically “greyscale”) image.
Here’s what the dark channel looks like for David LaCivita’s landscape:
Just as predicted, the haze-free foreground has very low dark channel values in each 15×15 square, whereas the hazy region has much lighter values.
Calculating the atmospheric light
Now that we know which pixels are hazy, we want to know the colour of the light that is being reflected by these pixels. This is called the atmospheric (or ambient) light.
The RGB values of the atmospheric light are basically the RGB values of the haziest pixels. This is where the haze is opaque, so all the colour is from reflected atmospheric light (rather than from the objects behind the haze).
Unfortunately, we can’t just take the brightest pixel in the original image as the ambient light, because the brightest pixel might be from something in the foreground (e.g. a washed-out reflection).
The authors of the paper point out a clever way to get the ambient light using the dark channel.
- The lightest regions of the dark channel correspond to the haziest part of the original image.
- If we select the brightest 0.1% of the dark channel we will get the haziest pixels.
- Switching back to the original RGB values of these same pixels, we can take the brightest as the atmospheric light.
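These three steps are easy to sketch in code. As before this is a minimal version with my own naming, assuming a [0, 1] NumPy image and a dark channel computed as described earlier; I take “brightest” in the last step to mean the candidate pixel with the highest summed RGB intensity:

```python
import numpy as np

def atmospheric_light(image, dark):
    """Estimate the ambient light A from the haziest pixels."""
    h, w = dark.shape
    n = max(h * w // 1000, 1)             # top 0.1% of pixels, at least one
    flat_dark = dark.ravel()
    flat_img = image.reshape(-1, 3)
    idx = np.argsort(flat_dark)[-n:]      # indices of the haziest pixels
    candidates = flat_img[idx]
    # take the brightest of those pixels (highest overall intensity) as A
    return candidates[candidates.sum(axis=1).argmax()]
```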
Transmission: how much light gets through the haze
The final thing we need is the transmission. This is an estimate of how much of the light from the original object is making it through the haze at each pixel.
The calculation of this is a little technical (for the mathematically minded, it is explained in the paper linked above). Essentially it will look like the inverse of the dark channel picture (i.e. hazy parts of the image have lower transmission values):
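For the curious, here is a rough sketch of the estimate from the paper: normalise the image by the ambient light A, take its dark channel, and flip it. The omega factor (0.95 in the paper) deliberately leaves a trace of haze so distant objects still look natural. As elsewhere, the names and the [0, 1] NumPy image convention are my own:

```python
import numpy as np

def estimate_transmission(image, A, omega=0.95, patch_size=15):
    """t = 1 - omega * dark_channel(I / A)."""
    normalised = image / A                 # divide each channel by A
    min_rgb = normalised.min(axis=2)
    # same minimum filter as in the dark channel calculation
    pad = patch_size // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    h, w = min_rgb.shape
    dark = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            dark[i, j] = padded[i:i + patch_size, j:j + patch_size].min()
    return 1.0 - omega * dark
```

A region that is pure ambient light (every pixel equal to A) gets a transmission of 1 − omega = 0.05, i.e. almost completely hazy, which is what we want.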
Poof…and the haze is gone
With an estimate of the transmission and the atmospheric/ambient light we can recover the original image. Again, this step involves some maths, but essentially the atmospheric light is subtracted from each pixel in proportion to the transmission at that pixel. The results (from my fairly basic Python implementation):
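The recovery step can be sketched as inverting the haze model for each pixel, assuming the image, the ambient light A, and the transmission map t have been estimated as described above (the t0 floor, 0.1 in the paper, stops us dividing by near-zero transmission in the very haziest areas):

```python
import numpy as np

def recover(image, A, t, t0=0.1):
    """Invert the haze model: J = (I - A) / max(t, t0) + A."""
    t = np.clip(t, t0, 1.0)[:, :, np.newaxis]   # broadcast t over RGB
    J = (image - A) / t + A
    return np.clip(J, 0.0, 1.0)                 # keep colours in range
```

Note the subtraction of A is scaled by 1/t: the lower the transmission at a pixel, the more ambient light is removed and the more the remaining signal is amplified. Where t = 1 (no haze), the pixel is returned unchanged.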
You will note that there are some imperfections.
For example, the picture has a blocky/streaky texture, most noticeable in the halo around the foreground. This is due to the 15×15 pixel areas we used to estimate the dark channel (you can see the same blockiness in the dark channel and transmission pictures). We can get around this by smoothing out the transmission map (the authors of the paper use a method called soft matting).
These are the basics of how dehazing works. In a future post I will dig into the details more thoroughly.