Restoration of distorted images is one of the most interesting and important problems of image processing, from the theoretical as well as the practical point of view. There are two special cases: blurring due to incorrect focus and blurring due to motion. These defects, which everyone knows well and which are very difficult to repair, were chosen as the subject of this article. Other image defects (noise, incorrect exposure, geometric distortion) have long been correctable; any good photo editor has tools for them.
Why are there almost no tools for correcting blur and defocus (apart from the unsharp mask)? Is it perhaps impossible? In fact, it is possible: development of the corresponding mathematical theory started approximately 70 years ago, but, like other image-processing algorithms, deblurring algorithms have become widely used only recently. So, below is a couple of pictures to demonstrate the wow effect:
I decided not to use the well-known Lena image, but found my own picture of Venice instead. The right image really was produced from the left one; moreover, no tricks such as a 48-bit format were used (in that case the source image would be restored almost 100%). The left side is just a regular PNG with synthetic blur. The result is impressive, but in practice not everything is so good.
Introduction
Let’s start from afar. Many people think that blurring is an irreversible operation and that the information is lost for good, because each pixel turns into a spot, everything mixes up, and with a large blur radius we get a nearly flat color all over the image. But that is not quite true: all the information is merely redistributed according to certain rules and can, under certain assumptions, be restored. The only exception is the borders of the image, in a strip whose width equals the blur radius; no complete restoration is possible there.
Let’s demonstrate this with a small one-dimensional example. Suppose we have a row of pixels with the following values:
x_{1}  x_{2}  x_{3}  x_{4}… – Source image
After blurring, the value of the left neighbor is added to each pixel: x’_{i} = x_{i} + x_{i-1}. Normally the sum should also be divided by 2, but we drop that for simplicity. As a result we have a blurred image with the following pixel values:
x_{1} + x_{0}  x_{2} + x_{1}  x_{3} + x_{2}  x_{4} + x_{3}… – Blurred image
Now let us try to restore it. We subtract sequentially according to the following scheme: the first pixel from the second, the result of the second from the third, the result of the third from the fourth, and so on. We get the following:
x_{1} + x_{0}  x_{2} – x_{0}  x_{3} + x_{0}  x_{4} – x_{0}… – Restored image
As a result, instead of a blurred image we got the source image with an unknown constant x_{0} added to each pixel with alternating sign. This is much better: we can pick this constant visually, we can assume it approximately equals x_{1}, or we can choose it automatically using the criterion that neighboring pixel values should change as little as possible, etc. But everything changes as soon as noise is added (and it is always present in real images). With the described scheme, the noise accumulates from stage to stage into the total value, which can eventually produce an absolutely unacceptable result. Still, as we have seen, restoration is quite possible even with such a primitive method.
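The toy scheme above can be reproduced in a few lines; here is a Python/NumPy sketch (the article's own code is Matlab, and the pixel values below are made up purely for illustration):

```python
import numpy as np

# 1-D blur x'_i = x_i + x_{i-1}, then the alternating-subtraction restoration.
x = np.array([5.0, 3.0, 8.0, 2.0, 7.0])   # hypothetical source row x_1..x_5
x0 = 4.0                                   # unknown boundary pixel x_0

# Blur: each pixel gets its left neighbour added
left = np.concatenate(([x0], x[:-1]))
blurred = x + left                         # [9, 8, 11, 10, 9]

# Restore: r_1 = b_1, r_i = b_i - r_{i-1}
restored = np.zeros_like(blurred)
restored[0] = blurred[0]
for i in range(1, len(blurred)):
    restored[i] = blurred[i] - restored[i - 1]

# Each restored pixel equals the source pixel plus x_0 with alternating sign
print(restored - x)   # -> [ 4. -4.  4. -4.  4.]
```

The difference from the source is exactly the constant x_{0} with alternating sign, as derived above.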
Blurring process model
Now let us pass on to a more formal and scientific description of the blurring and restoration processes. We will consider only grayscale images, assuming that to process a full-color image it is enough to repeat all the steps for each of the RGB channels. Let us introduce the following definitions:
f(x, y) – source image (non-blurred)
h(x, y) – blurring function
n(x, y) – additive noise
g(x, y) – blurring result image
We will form the blurring process model in the following way:
g(x, y) = h(x, y) * f(x, y) + n(x, y) (1)
The task of restoring a blurred image consists in finding the best approximation f'(x, y) to the source image. Let us consider each component in more detail. Everything is quite clear with f(x, y) and g(x, y), but h(x, y) deserves a couple of words: what is it? During blurring, each pixel of the source image turns into a spot (in the case of defocus) or into a line segment or some path (in the case of motion blur). Equivalently, each pixel of the blurred image is “assembled” from the pixels of some nearby area of the source image. All of these overlap, which produces the blurred image. The rule according to which one pixel is spread out is called the blurring function; synonyms are PSF (point spread function) and kernel, among others. The size of this function is smaller than the size of the image itself; in the first “demonstration” example the size of the function was 2, because each result pixel was the sum of two pixels.
Blurring functions
Let us see what typical blurring functions look like. From here on we use the tool that has become standard for such purposes, Matlab; it contains everything required for the most diverse experiments with image processing (among other things) and lets us concentrate on the algorithms, leaving the routine work to function libraries. However, this comes at the cost of performance. So, back to the PSF; here are some examples:
PSF in the case of a Gaussian function: fspecial('gaussian', 30, 8);
PSF in the case of motion blur: fspecial('motion', 40, 45);
The process of applying the blurring function to another function (in this case, to an image) is called convolution: some area of the source image convolves into one pixel of the blurred image. It is denoted by the operator “*”, not to be confused with ordinary multiplication! Mathematically, for an image f of size M x N and a blurring function h of size m x n, convolution can be written as follows:

g(x, y) = \sum_{s=-a}^{a} \sum_{t=-b}^{b} h(s, t) \, f(x - s, y - t)    (2)
where a = (m - 1) / 2 and b = (n - 1) / 2. The process opposite to convolution is called deconvolution, and solving such a problem is quite nontrivial.
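A direct, naive implementation of equation (2) helps make the index bookkeeping concrete. This is a Python/NumPy sketch with a made-up 4x4 "image" and a uniform 3x3 kernel, using circular (wrap-around) handling of the borders:

```python
import numpy as np

# Direct evaluation of equation (2): g(x, y) = sum_s sum_t h(s, t) f(x-s, y-t)
def convolve2d(f, h):
    M, N = f.shape
    m, n = h.shape
    a, b = (m - 1) // 2, (n - 1) // 2
    g = np.zeros_like(f, dtype=float)
    for x in range(M):
        for y in range(N):
            for s in range(-a, a + 1):
                for t in range(-b, b + 1):
                    # wrap indices at the borders (circular convolution)
                    g[x, y] += h[s + a, t + b] * f[(x - s) % M, (y - t) % N]
    return g

f = np.eye(4)                    # toy "image": a diagonal line
h = np.full((3, 3), 1.0 / 9.0)   # normalized uniform blur kernel
g = convolve2d(f, h)
print(g.sum())                   # a normalized kernel preserves total energy: 4.0
```

Real implementations use FFT-based convolution instead of this O(MNmn) quadruple loop, which is exactly what the convolution theorem below justifies.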
Noise model
It only remains to consider the last summand in formula (1), the one responsible for noise: n(x, y). Noise in digital sensors can have many causes, but the basic ones are thermal vibrations and dark current. The amount of noise also depends on a number of factors: ISO value, sensor type, pixel size, temperature, magnetic field strength, etc. In most cases the noise is Gaussian (set by two parameters, mean and variance); it is additive, does not correlate with the image, and does not depend on pixel coordinates. The last three assumptions are very important for what follows.
Convolution theorem
Let us get back to the restoration task: we need to somehow invert the convolution while keeping the noise in mind. From formula (2) we can see that getting f(x, y) from g(x, y) is not easy: computed straightforwardly, it gives a huge system of equations. The Fourier transform comes to the rescue; we will not cover it in detail here, because a lot has been written on the topic already. There is the so-called convolution theorem, according to which convolution in the spatial domain is equivalent to ordinary multiplication in the frequency domain (element-by-element multiplication, not matrix multiplication). Correspondingly, the operation opposite to convolution is equivalent to division in the frequency domain. This can be expressed as follows:

f(x, y) \ast h(x, y) \Leftrightarrow F(u, v) \cdot H(u, v)    (3)
where H(u, v) and F(u, v) are the Fourier transforms of h(x, y) and f(x, y) respectively. The blurring process (1) can then be rewritten in the frequency domain as:

G(u, v) = H(u, v) \cdot F(u, v) + N(u, v)    (4)
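A quick numerical check of the convolution theorem makes formula (4) tangible. This Python/NumPy sketch (not from the article) compares a circular convolution computed directly in the spatial domain against element-wise multiplication of the FFTs:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((8, 8))          # random test "image"
h = np.zeros((8, 8))
h[:3, :3] = 1.0 / 9.0           # 3x3 box kernel, zero-padded to the image size

# Circular convolution done directly: g(x,y) = sum_s,t h(s,t) f((x-s)%8, (y-t)%8)
g_direct = sum(h[s, t] * np.roll(f, (s, t), axis=(0, 1))
               for s in range(8) for t in range(8) if h[s, t] != 0)

# The same convolution via the frequency domain
g_fft = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h)))

print(np.allclose(g_direct, g_fft))  # True
```

Note that the FFT route implements circular convolution, which is why the Matlab scripts below pass the 'circular' option to imfilter: it keeps the blur model and the frequency-domain restoration consistent.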
Inverse filter
Here we are tempted to divide this equation by H(u, v) and get the following estimate \hat{F}(u, v) of the source image:

\hat{F}(u, v) = \frac{G(u, v)}{H(u, v)} = F(u, v) + \frac{N(u, v)}{H(u, v)}    (5)
This is called inverse filtering, and in practice it almost never works. Why? To answer this, look at the last summand in formula (5): wherever the function H(u, v) takes values close to zero or equal to it, this summand dominates. This happens in almost every real example; to see why, let us recall what a spectrum looks like after the Fourier transform.
So, we take the source image,
convert it into a grayscale one, using Matlab, and get the spectrum:
% Load image
I = imread('image_src.png');
figure(1); imshow(I); title('Source image');
% Convert image to grayscale
I = rgb2gray(I);
% Compute Fourier transform and center it
fftRes = fftshift(fft2(I));
% Show results
figure(2); imshow(mat2gray(log(1 + abs(fftRes)))); title('FFT - amplitude spectrum (log scale)');
figure(3); imshow(mat2gray(angle(fftRes))); title('FFT - phase spectrum');
As a result we get two components: the amplitude spectrum and the phase spectrum (which, by the way, many people forget about). Note that the amplitude spectrum is shown on a logarithmic scale, because its values vary tremendously, by several orders of magnitude: they are maximal in the center (millions) and quickly decay almost to zero away from the center. This is exactly why inverse filtering works only at zero or nearly zero noise levels. Let us demonstrate this in practice with the following script:
% Load image
I = im2double(imread('image_src.png'));
figure(1); imshow(I); title('Source image');
% Blur image (PSF is not defined in the original listing; any kernel from above works)
PSF = fspecial('gaussian', 30, 8);
Blurred = imfilter(I, PSF, 'circular', 'conv');
figure(2); imshow(Blurred); title('Blurred image');
% Add noise
noise_mean = 0;
noise_var = 0.0;
Blurred = imnoise(Blurred, 'gaussian', noise_mean, noise_var);
% Deconvolution (inverse filtering: Wiener filter with NSR = 0)
figure(3); imshow(deconvwnr(Blurred, PSF, 0)); title('Result');
noise_var = 0.0000001 
noise_var = 0.000005 
It is clearly seen that adding even a tiny amount of noise causes serious distortions, which substantially limits the applicability of the method.
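The same breakdown can be reproduced without Matlab. This Python/NumPy sketch (values chosen purely for illustration) blurs a synthetic image in the frequency domain, then applies the inverse filter of equation (5) with and without a tiny amount of noise:

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.random((64, 64))               # synthetic "image"

h = np.zeros((64, 64))
h[:7, :7] = 1.0 / 49.0                 # 7x7 box blur, zero-padded
H = np.fft.fft2(h)                     # this H has no exact zeros on a 64-grid

G = np.fft.fft2(f) * H                 # noiseless blur in the frequency domain
noise = 0.001 * rng.standard_normal((64, 64))
G_noisy = G + np.fft.fft2(noise)       # blur plus a very small Gaussian noise

restored = np.real(np.fft.ifft2(G / H))              # near-perfect
restored_noisy = np.real(np.fft.ifft2(G_noisy / H))  # ruined by amplified noise

err = np.abs(restored - f).max()
err_noisy = np.abs(restored_noisy - f).max()
print(err, err_noisy)   # the second error is orders of magnitude larger
```

The noise term N/H blows up exactly at the frequencies where |H| is small, which is what the amplitude spectrum above predicted.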
Existing approaches to deconvolution
There are approaches that take the presence of noise into account; one of the first and most popular is the Wiener filter. It treats the image and the noise as random processes and finds an estimate f' of the undistorted image f such that the mean square deviation between them is minimal. The minimum is achieved with the following function in the frequency domain:

\hat{F}(u, v) = \left[ \frac{1}{H(u, v)} \cdot \frac{|H(u, v)|^2}{|H(u, v)|^2 + S_n(u, v) / S_f(u, v)} \right] G(u, v)    (6)
This result was obtained by Wiener in 1942. We will not give the detailed derivation in this article; those interested can find it here. The functions S_{n} and S_{f} denote the energy spectra of the noise and of the source image respectively. As these values are rarely known, the fraction S_{n} / S_{f} is replaced by some constant K, which can be roughly characterized as a noise-to-signal ratio.
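With S_n / S_f replaced by a constant K, the Wiener filter is only a few lines. Here is a Python/NumPy sketch (the Matlab equivalent is deconvwnr; all values below are illustrative):

```python
import numpy as np

# Wiener deconvolution, equation (6) with Sn/Sf replaced by the constant K.
def wiener_deblur(blurred, psf, K):
    H = np.fft.fft2(psf, s=blurred.shape)
    # conj(H) / (|H|^2 + K) is equation (6) rewritten in a numerically safe form
    W = np.conj(H) / (np.abs(H) ** 2 + K)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

rng = np.random.default_rng(2)
f = rng.random((64, 64))
psf = np.full((5, 5), 1.0 / 25.0)          # 5x5 box blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(psf, s=f.shape)))
blurred += 0.001 * rng.standard_normal(f.shape)   # small Gaussian noise

restored = wiener_deblur(blurred, psf, K=1e-4)
# Unlike the inverse filter, the result stays stable in the presence of noise
print(np.abs(restored - f).mean() < np.abs(blurred - f).mean())  # True
```

The constant K in the denominator is what prevents the division blow-up: where |H|^2 is much larger than K the filter behaves like the inverse filter, and where |H|^2 is small the response is smoothly attenuated instead of exploding.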
The next method is constrained least squares filtering; other names are Tikhonov filtering and Tikhonov regularization. Its idea consists in posing the task in matrix form and solving the corresponding optimization problem. The resulting equation can be written as follows:

\hat{F}(u, v) = \left[ \frac{H^*(u, v)}{|H(u, v)|^2 + \gamma |P(u, v)|^2} \right] G(u, v)    (7)
where \gamma is the regularization parameter and P(u, v) is the Fourier transform of the Laplace operator (a 3 x 3 matrix).
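Equation (7) differs from the Wiener filter only in the denominator, so it can be sketched the same way. A Python/NumPy illustration (the Matlab equivalent is deconvreg; kernel and parameter values are assumptions for the demo):

```python
import numpy as np

# Tikhonov-regularized deconvolution, equation (7): the penalty gamma * |P|^2
# uses the FFT of a 3x3 discrete Laplacian instead of the constant K.
def tikhonov_deblur(blurred, psf, gamma):
    lap = np.array([[0.0,  1.0, 0.0],
                    [1.0, -4.0, 1.0],
                    [0.0,  1.0, 0.0]])       # discrete Laplace operator
    H = np.fft.fft2(psf, s=blurred.shape)
    P = np.fft.fft2(lap, s=blurred.shape)
    F_hat = (np.conj(H) * np.fft.fft2(blurred)
             / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2))
    return np.real(np.fft.ifft2(F_hat))

rng = np.random.default_rng(3)
f = rng.random((64, 64))
psf = np.full((5, 5), 1.0 / 25.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(psf, s=f.shape)))
blurred += 0.001 * rng.standard_normal(f.shape)

restored = tikhonov_deblur(blurred, psf, gamma=1e-4)
print(np.abs(restored - f).mean() < np.abs(blurred - f).mean())  # True
```

Because |P| grows with frequency, the penalty suppresses exactly the high frequencies where noise amplification is worst, while leaving the low frequencies (where |P| is small) almost untouched.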
Another interesting approach was proposed independently by Richardson (1972) and Lucy (1974), so it is called the Lucy-Richardson method. Its distinctive feature is that it is nonlinear, unlike the first three; potentially this can give a better result. The second feature is that the method is iterative, so the stopping criterion becomes an issue. The main idea is to use the maximum likelihood method under the assumption that the image obeys a Poisson distribution. The calculation formulas are quite simple and use no Fourier transform; everything is done in the spatial domain:

f_{k+1}(x, y) = f_k(x, y) \left[ h(-x, -y) \ast \frac{g(x, y)}{h(x, y) \ast f_k(x, y)} \right]    (8)
Here the symbol “*”, as before, denotes convolution. This method is widely used in programs for processing astronomical photographs, where deconvolution (rather than the unsharp mask used in photo editors) is the de facto standard. An example is Astra Image; these are examples of its deconvolution. The computational cost of the method is quite high: depending on the number of iterations, processing an average photograph can take many hours or even days.
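The iteration (8) is short enough to sketch directly. This Python/NumPy version (not the article's implementation; it uses FFT-based circular convolution for speed, and the test image and iteration count are arbitrary) applies the update rule on noiseless data:

```python
import numpy as np

# Circular convolution of a with a kernel given by its precomputed FFT
def conv(a, b_fft):
    return np.real(np.fft.ifft2(np.fft.fft2(a) * b_fft))

# Richardson-Lucy iterations, equation (8)
def richardson_lucy(blurred, psf, iters):
    H = np.fft.fft2(psf, s=blurred.shape)
    H_mirror = np.conj(H)            # FFT of the flipped kernel h(-x, -y)
    f = np.full_like(blurred, 0.5)   # flat, strictly positive initial estimate
    for _ in range(iters):
        ratio = blurred / np.maximum(conv(f, H), 1e-12)  # g / (h * f_k)
        f = f * conv(ratio, H_mirror)                    # multiplicative update
    return f

rng = np.random.default_rng(4)
f_true = rng.random((32, 32)) + 0.1          # strictly positive "image"
psf = np.full((5, 5), 1.0 / 25.0)
blurred = conv(f_true, np.fft.fft2(psf, s=f_true.shape))

restored = richardson_lucy(blurred, psf, iters=50)
print(np.abs(restored - f_true).mean() < np.abs(blurred - f_true).mean())
```

Note the two properties the update relies on: all quantities stay non-negative (which matches the Poisson model), and a normalized PSF makes each iteration approximately flux-preserving.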
The last method considered here, or rather a whole family of methods now being actively developed, is blind deconvolution. In all the previous methods the blurring function PSF was assumed to be known exactly, but in practice that is not true; usually we know only an approximate PSF from the type of visible distortion. Blind deconvolution is the attempt to take this into account. The principle, without going deep into details, is quite simple: a first approximation of the PSF is chosen, deconvolution is performed with one of the methods above, the quality of the result is assessed according to some criterion, the PSF is tuned based on that assessment, and the iteration repeats until the required result is achieved.
Practice
Now that we are done with the theory, let us pass on to practice, starting with a comparison of the listed methods on an image with synthetic blur and noise.
% Load image
I = im2double(imread('image_src.png'));
figure(1); imshow(I); title('Source image');
% Blur image
PSF = fspecial('disk', 15);
Blurred = imfilter(I, PSF, 'circular', 'conv');
% Add noise
noise_mean = 0;
noise_var = 0.00001;
Blurred = imnoise(Blurred, 'gaussian', noise_mean, noise_var);
figure(2); imshow(Blurred); title('Blurred image');
estimated_nsr = noise_var / var(Blurred(:));
% Restore image
figure(3); imshow(deconvwnr(Blurred, PSF, estimated_nsr)); title('Wiener');
figure(4); imshow(deconvreg(Blurred, PSF)); title('Regul');
figure(5); imshow(deconvblind(Blurred, PSF, 100)); title('Blind');
figure(6); imshow(deconvlucy(Blurred, PSF, 100)); title('Lucy');
Results:


Wiener filter 
Tikhonov regularization 


Lucy-Richardson filter 
Blind deconvolution 
Conclusion
To finish the first part, let us consider examples of real images. So far all the blurs were artificial, which is good for practice and learning, but it is very interesting to see how all this works on real photos. Here is one such image, shot with a Canon 500D using manual focus (to produce the blur):
Then we run the script:
% Load image
I = im2double(imread('IMG_REAL.PNG'));
figure(1); imshow(I); title('Source image');
% PSF
PSF = fspecial('disk', 8);
noise_mean = 0;
noise_var = 0.0001;
estimated_nsr = noise_var / var(I(:));
I = edgetaper(I, PSF);
figure(2); imshow(deconvwnr(I, PSF, estimated_nsr)); title('Result');
And get the following result:
As we can see, new details appeared in the image and sharpness increased considerably; however, ringing artifacts appeared along the sharp borders.
And here is an example with real motion blur: the camera was fixed on a tripod, a fairly long exposure was set, and an even movement during the exposure produced the following blur:
The script is almost the same, but the PSF type is “motion”:
% Load image
I = im2double(imread('IMG_REAL_motion_blur.PNG'));
figure(1); imshow(I); title('Source image');
% PSF
PSF = fspecial('motion', 14, 0);
noise_mean = 0;
noise_var = 0.0001;
estimated_nsr = noise_var / var(I(:));
I = edgetaper(I, PSF);
figure(2); imshow(deconvwnr(I, PSF, estimated_nsr)); title('Result');
Result:
Again the quality increased noticeably: window frames and cars became recognizable. The artifacts are different this time, unlike in the previous defocus example.
Practical implementation. SmartDeblur
I created a program that demonstrates restoration of blurred and defocused images; it is written in C++ using Qt.
For the Fourier transform I chose the FFTW library as the fastest open-source implementation.
The program is called SmartDeblur; Windows binaries and sources are available on GitHub:
https://github.com/YVladimir/SmartDeblur/downloads
All source files are under the GPL v3 license.
Screenshot of the main window:
Main functions:
- High speed: processing an image of 2048x1500 pixels takes about 300 ms in Preview mode (while the adjustment sliders move); high-quality processing may take a few minutes
- Real-time application of parameter changes (no Preview button)
- Full-resolution processing (no small preview window)
- Deep tuning of kernel parameters
- Easy and friendly user interface
- Help screen with an image example
- Deconvolution methods: Wiener, Tikhonov, Total Variation prior
And in conclusion, let me demonstrate one more example with a real (not synthetic) out-of-focus blur:
That’s all for the first part.
In the second part I will focus on the problems of processing real images: building and estimating PSFs, more complex and advanced deconvolution techniques, methods of eliminating the ringing effect, and a review and comparison of existing deconvolution software.
If you have any questions please contact me by email – I will be happy if you give me any feedback regarding this article and SmartDeblur.
P.S. A Russian translation can be found here
References
1. Digital Image Processing. Rafael C. Gonzalez, Richard E. Woods
2. Digital Image Processing Using MATLAB. Rafael C. Gonzalez, Richard E. Woods, Steven L. Eddins
—
Vladimir Yuzhikov
My Google+ profile
UPDATE: Article’s discussion on reddit
Very interesting! By the way did you see refocus plugin (http://refocus.sourceforge.net)? Based on deconvolution too.
Vladimir Yuzhikov (20121017 05:09:22)
Yes, I tried the Refocus plugin, but its interface is not very convenient and it has not been maintained since 2003.
Garrett (20121021 08:13:43)
This was awesome! Great write up. The end with the real blur was spectacular. I think blur removal and interpolation of data will be something we will really see a big growth of in the future.
Vladimir Yuzhikov (20121021 08:19:41)
Garrett, thanks! Soon I will publish the second part of article which will be devoted to practical problems and their solutions
terlmaa (20121021 08:31:02)
Sounds like a solid plan to me dude. www.OverAnon.tk
Robert (20121021 08:36:16)
This is amazing!
Anon (20121021 08:42:55)
I could see this possibly being used in forensics in the future
Jen Matthews (20121021 08:43:07)
This is good work! What’s great about this is that you’ve put up all of your steps! Good going!
Anonymous (20121021 08:46:53)
Wow. This is amazing.
George Sinise (20121021 08:56:08)
Enhance.
Kevin Strongafur (20121021 09:06:44)
Thank you for all the detail. It’s nice to know what’s going on behind all these functions. I would ask though, does this deblur tool differ from the one Adobe presented at last year’s Max Conference, which is still in the works for a future version of Photoshop.
Mustafa (20121021 09:39:44)
LOL magic of the CSI was true then
TheOnlyJoker (20121021 09:41:00)
Wow dude! Awesome! I really appreciate how much time you’ve spent on such a great productive project! It’s a really awesome little tool you’ve developed.
Anthony (20121021 09:53:44)
You guys are masters, congrats on making some superpimp type shit
ctan (20121021 10:00:28)
Absolutely amazing work.
Mikola Lysenko (20121021 10:15:32)
Long ago I did a similar thing for a class project, and wrote a pretty detailed write up of my findings. You can find the report here: http://pages.cs.wisc.edu/~bmsmith/projects/2009/graphics838p1/report.pdf (15mb PDF warning) I also kept a running blog/journal of the project while I was working on it as well: http://research.cs.wisc.edu/graphics/Courses/AdvancedGraphics09/Mikola/Project1Blog Fun stuff!
Oren Tirosh (20121021 10:19:34)
I believe the ring artifacts in the nonsynthetic images are mostly a result of nonlinearities. I wonder if there is some iterative process that could model and compensate for these nonlinearities. If the image has areas that you know should be of uniform color (e.g. the road or the white area of the page) these can be used as reference for such an algorithm.
Vladimir Yuzhikov (20121021 10:46:42)
Mikola, thanks for the link – will look at your blog
Vladimir Yuzhikov (20121021 10:49:23)
Oren Tirosh, the ringing effect is the most common artifact of deconvolution. One of the goals of the next SmartDeblur release is to suppress this and other artifacts.
ivanhoe (20121021 11:48:09)
Great results. Are there any techniques for fixing the blur that results from small camera movements (like shaking of hands)?
Henk Poley (20121021 11:55:59)
Would you be able to extract where the distortion is, and then use inpainting techniques to fill in the blanks? Even just having the distortion map as an alphachannel or something would be nice to have for further image processing.
Vladimir Yuzhikov (20121021 12:01:08)
ivanhoe, you can choose “Motion blur” defect type in the SmartDeblur and restore that photos
Vladimir Yuzhikov (20121021 12:02:51)
Henk Poley, “Would you be able to extract where the distortion is, and then use inpainting techniques to fill in the blanks?” – it is slightly different problem – but in general, yes
PhotoEnthusiast (20121021 12:59:40)
Very interesting. How useful would this be for images which is only partially slight out of focus? Would the side effects and artifacts from the deblurring affect, or leak into, the neighboring sharp areas of the image? I’m thinking of usage where the image quality is prioritized, contrary to information retrieval.
Scott (20121021 14:23:21)
Will you ever make a version for Mac users??
Vladimir Yuzhikov (20121021 14:38:58)
Scott, the sources are compilable for Windows, Linux and Mac OS, because they are based on Qt, so you can compile it yourself. But I published Windows binaries only. Also you can look at the discussion about it on reddit: http://www.reddit.com/r/technology/comments/11uaqm/restoration_of_defocused_and_blurred_images_with/c6pntv2
Peter (20121021 15:06:58)
This is pretty cool. Most photos have parts that are out of focus in front or in the back, with this your able to get these parts sharp So you can answer “who was that man in the back?” As a result i think you found a way to retrieve or restore depth information from a flat picture with no extra tools. You didnt wrote it for that, but it can be used for that. You can retrieve depth information from a single image. And taking this to a far deeper level, i think (but i am not sure) that this should interest anyone who wonders how information of a 3d world could be stored on a flat surface, this reminds me of holograms, and some scientist who say a blackhole’s surface is a holografic presentation of a 3d world, as all info is still there. I know i am getting a bit to far and make huge mind jumps here. But i do think that this math you using, could be used to Study other fields as well.. (Hubble image not sharp… etc)
YuX (20121021 17:11:03)
Please, bring it to Mac. Thanks!
adrian (20121021 20:32:33)
is there a way to do this with video? 1080p I’d be very interested in using this on a film I’m editing.
Henk Poley (20121022 01:46:21)
Pretty easy to build on OS X. * Install fftw3 with macports * Install QtSDK from https://qtproject.org/downloads * git clone https://github.com/YVladimir/SmartDeblur.git * Open src/SmartBlur.pro with Qt Creator * Edit line 40 of SmartBlur.pro so it points to the macports libs :: unix:LIBS += -L/opt/local/lib/ -lfftw3_threads -lfftw3 * On Mountain Lion the build (green play button) will fail because QT is unsupported there. You can comment /* */ out the warning pragma to make it build. The application crashes sometimes, but only when you load a new image while the high quality renderer is still working on the old image data. Also there appears to be a bug: the low quality color render (almost) never shows, so often only the grey scale version is shown, or one of the RGB color planes.
Spazturtle (20121022 02:43:59)
adrian, Yes, extract the frame from the video (very easy) run them through the program, then put them back into a video format.
Peter D (20121022 02:51:26)
Gull, S.F. and Skilling J. (1984) Maximum Entropy Image Reconstruction in IEE Proc., 131F, 64659.
Joseph Nguyen (20121022 04:32:21)
Incredible software.
Matt W (20121022 05:10:13)
Peter D beat me to it, Maximum Entropy Image Reconstruction is where it’s at, although it still has significant artefacts in the presence of noise. There is an open source Maximum Entropy Deconvolution library written by Håvard Rue in the 1990s available from here: http://www.math.ntnu.no/~hrue/index_eng.html
Chris J. Bartle (20121022 05:29:10)
I was an unbeliever, but I put it to the test and I have seldom been so impressed with a program. This is a work of pure genius. Thank you for sharing.
Alexander Mikhalev (20121022 05:40:30)
Wonderful work, very impressive.
Matthew Davis (20121022 05:56:57)
Hello! First of all, great work! This is truly amazing. Second, I would love to partner with you and develop this for the iPhone. I currently develop iOS applications for the California Institute of Technology and Telecommunications (CalIT2, www.calit2.net). The amount of people that could use this technology in a mobile setting is astounding. If you are interested, please get in touch with me via my email. Thanks and keep up the amazing work!
Elliot Geno (20121022 09:46:46)
I wonder what would happen if you took the Wiener filter, Tikhonov regularization, LucyRichardson filter and Blind deconvolution and found an average between the different algorithms. Seems like the most accurate information would be shared across the different results. The least accurate information could be flagged for review based on the type of area sampled from. For instance, Blind Deconvolution works well for detailed parts of images where the Weiner filter looks best for broad areas of similar color. Take the worst offending pixels (based on a threshold of wildly inaccurate pixels vs similar pixels across algorithms) and use the Weiner filter for the broad areas and the Blind Deconvolution for the edge details.
Michele (20121022 09:55:35)
Wow, your work is really great and definitely will have very interesting applications, continues on this path, and you will have much success.