Oct 28 2012
 
Premium WordPress Themes

WordPress is one of the most popular applications in the web design community, not only for its ease of use as a blogging platform, but for its versatility in any kind of content-managed website. Building custom themes for WordPress is pretty straightforward, making it one of the easiest templating systems to master. This post rounds up 15 of the best WordPress theme tutorials, each taking you through the process of building your own WP theme from scratch.

 

How To Create a WordPress Theme: The Ultimate WordPress Theme Tutorial

View the WordPress theme tutorial

This thorough 11 part tutorial series takes you through every detail of creating your own WordPress theme from scratch. The series begins with a look at the structure of a WordPress theme before taking a close look at each of the various template files.

So you want to create WordPress themes huh?

View the WordPress theme tutorial

Even some of the oldest WordPress theme tutorials are still the best today. I remember reading this WPDesigner post back in 2007 and using it myself to get the hang of building WordPress themes. A lot of features have been included in WordPress since 2007, but the core process of building themes remains the same.

How to Create a WordPress Theme from Scratch

View the WordPress theme tutorial

Sam Parkinson takes us through the process of building this custom blog theme in his tutorial on Nettuts+ and describes the use of each template file along with code samples.

Designing for WordPress: Complete Series

View the WordPress theme tutorial

Another tutorial that I remember being really useful was Chris Coyier’s video screencast series. This three part tutorial takes a live look at building a WordPress theme, which gives that extra insight you just don’t get from a written guide.

How to Code a WordPress 3.0 Theme from Scratch

View the WordPress theme tutorial

Oneextrapixel hosts a more up-to-date WordPress theme tutorial that explains some of the newer features and functionality, such as custom post thumbnails and the revised method of calling the whole comments list and comment form from WordPress 3.0 onwards.

WordPress Theme Development Training Wheels

View the WordPress theme tutorial

Nur Ahmad Furlong’s Training Wheels tutorial series is written specifically for those who haven’t had any experience with CMS code or PHP and covers the very basics of creating a custom WordPress theme, right down to the theme screenshot.

Building Custom WordPress Theme

View the WordPress theme tutorial

Chapter two of the complete WordPress theme guide on Web Designer Wall focuses on the building of a custom theme. Parts one and three cover the installation of WordPress and moving/exporting WordPress to create a comprehensive series for anyone building a complete WordPress powered website.

How to Build a Custom WordPress Theme from Scratch

View the WordPress theme tutorial

One of my first WordPress theme tutorials was posted on my Blog.SpoonGraphics website. In this tutorial I cover the basics of WordPress theme creation and go through the process I used when building my ‘Sticky’ WordPress theme.

Developing Your First WordPress Theme

View the WordPress theme tutorial

Dan Walker’s tutorial on Wptuts+ not only covers the basic how-to of building WordPress themes, but also includes tips on creating public themes. The initial overview of features alone is useful for anyone creating a generic theme to sell or give away.

Basics Of WordPress Theme Design

View the WordPress theme tutorial

This multi-part tutorial from WPShout takes a detailed look at each of the various template files and gives a simple description of how it all works in “plain English”. Code samples stripped of the HTML make it easy to identify the WordPress template tags.

How to Build A WordPress Theme From Scratch

View the WordPress theme tutorial

The complete code samples in this tutorial from Developer Drive make it easy to see how WordPress template tags are combined with HTML to provide that dynamic functionality, and are easy to copy and paste straight into your working files.

How To Create WordPress Themes From Scratch

View the WordPress theme tutorial

All the tutorials so far have explained how to build a WordPress theme, but this guide from Themetation explains the whole process of building a WordPress powered website from the Photoshop concept, the HTML/CSS coding and finally the WordPress implementation.

Create a Typography Based WordPress Blog Theme

View the WordPress theme tutorial

Just in case you’re new around here, I also have a bunch of WordPress theme tutorials. This recent tutorial takes you through the process I used to create my Typo WordPress theme.

How to Build a Basic Portfolio WordPress Theme

View the WordPress theme tutorial

One of my more popular WordPress theme tutorials here on Line25 was my guide to creating a basic portfolio theme. This tutorial covers the usual template tags, but also explains how custom page templates can be used to tailor the blogging platform into more of a content management system.

How To Create Your Own Custom WordPress Theme

View the WordPress theme tutorial

Finally we have one of my first WordPress theme tutorials on Line25. Of my collection, I’d recommend this post in particular, as it’s the one where I went into the most detail describing how the various template files and template tags work.

Written by Chris Spooner

Chris Spooner is a designer who has a love for creativity and enjoys experimenting with various techniques in both print and web. Check out Chris’ design tutorials and articles at Blog.SpoonGraphics or follow his daily findings on Twitter.

26 Comments

  1. I’m redesigning my site, this post will help me! Thank’s Chris!

  2. Alexandros

    What about Joomla! and Drupal ?
    Very Nice Article I Like all of them

  3. Nice will go through these thanks.

  4. Finaly some web site who can help me to create a theme!
    Thank Chris!

  5. Travis Welborn

    Now this is a post I’ll be reading through for a little while. Great idea for a post, Chris!

  6. Great work, thank you

    thanks

  7. Hello, I’m contacting you because we are looking for website developers, web designers and graphic artist. We would love to partner with you to help introduce our product to the public. We produce website commercials and web spokespeople.
    We are now doing partnerships with web designers that would include a kickback for every client brought to us. Also, we work with several clients looking to have websites built, so we would be referring clients to you. We would also feature you on our Partners page of our site.
    If this would be something of interest to you, please feel free to check out our website http://www.virtualwebmodel.com
    and contact us with any questions you may have.

    We look forward to working with you!

  8. Good post but maybe you shouldn’t have included some of those older tutorials

    • Peter Greathad

      The older tuts are outstanding. I’m working my way through one now, it’s the first that really explains the php tags and how to build a wp powered site. And sorting out what to do with the deprecated calls is educational too!

  9. Good Post, I found some great tutorials that help me , thank you 🙂

  10. Great one Chris! Thanks a lot for this compaction.

  11. wow… great resources, Thanks for share this tutorial 🙂

  12. Awesome suggestions.Nothing like some work along tutorials and hands on action to learn faster.

    Or of course, we could just buy a theme for $25 and be done!

  13. Wow! This is a great list. I’ve actually used several of these sites in the past to help with theme development, but I’ll check out the other ones too.

  14. Word press will always remain my one of the best Platform to publish content. The best part of this CMS is that they have number of plugins inbuild in the which makes the management of entire website as an ease. Even I have got plugins to run my eCommerce website through Word press platform.

  15. I currently use Drupal 7 and I’ve only recently been getting to grips with WordPress and so far I’m enjoying it as well. Thanks for the great article!

  16. Very nice post, helps to create stunning and unique theme. Thanks for this tutorial.

  17. steve

    Chris – Naturally your Tutorials are the best in that list!

    I actually created the theme for my site based on your tutorials.

  18. johan

    “roundup” posts are generally RUBBISH and this fails to break the trend.

    this post offers no added value, you would save time simply issuing a query in GOOGLE. 🙁

    Now, if this post actually went into some detail about which one was the best to read and why, it would be worthwhile.. ugh

  19. Excellent, we are in process of renewing our theme for a custom theme in WP and this info will be helpful, of course.

  20. I need to rebuild my template. More speed and less features!

  21. Posts like these have helped me a lot with my coding skills.

    Thank you and keep em coming.

  22. Been googling for good wp-tutorials for a while, thanks for the tips 🙂

  23. If you are only looking at WordPress for its CMS feature, and not its blogging feature “CMS Made Simple” is something to consider. As it provides the same tools as wordpress, it is just easier to set up and manage on a day to day basis.

Comments are now closed

Oct 22 2012
 

 

http://yuzhikov.com/articles/BlurredImagesRestoration1.htm

Vladimir Yuzhikov

RESTORATION OF DEFOCUSED AND BLURRED IMAGES

Restoration of distorted images is one of the most interesting and important problems of image processing – from the theoretical as well as the practical point of view. There are two special cases: blurring due to incorrect focus, and blurring due to movement – and these very defects (which each of you knows very well, and which are very difficult to repair) were selected as the subject of this article. As for other image defects (noise, incorrect exposure, distortion), we have long known how to correct them; any good photo editor has tools for that.

Why are there almost no tools for correcting blur and defocus (apart from the unsharp mask) – is it perhaps impossible at all? In fact, it is possible – development of the corresponding mathematical theory started approximately 70 years ago, but, like other image processing algorithms, deblurring algorithms became widely used only recently. So, below is a couple of pictures to demonstrate the WOW effect:

I decided not to use the well-known Lena, but instead found my own picture of Venice. The right image really was made from the left one; moreover, no tricks like a 48-bit format (in which case there would be almost 100% restoration of the source image) were used – the left side is just a regular PNG with synthetic blur. The result is impressive… but in practice not everything is so good.

Introduction

Let’s start from afar. Many people think that blurring is an irreversible operation and that the information is lost for good, because each pixel turns into a spot, everything mixes up, and with a large blur radius we get a flat color all over the image. But that is not quite true – all the information is merely redistributed according to certain rules and can definitely be restored under certain assumptions. The only exception is the borders of the image, of width equal to the blur radius, where no complete restoration is possible.

Let’s demonstrate it using a small example for a one-dimensional case – let’s suppose we have a row of pixels with the following values:
x1 | x2 | x3 | x4… – Source image

After blurring, the value of each pixel has its left neighbour’s value added to it: x’i = xi + xi-1. Normally the result should also be divided by 2, but we will omit that for simplicity. As a result we have a blurred image with the following pixel values:
x1 + x0 | x2 + x1 | x3 + x2 | x4 + x3… – Blurred image

Now we will try to restore it by sequentially subtracting values according to the following scheme – the first pixel from the second one, the result from the third one, that result from the fourth one, and so on. We get the following:
x1 + x0 | x2 – x0 | x3 + x0 | x4 – x0… – Restored image

As a result, instead of a blurred image, we got the source image with an unknown constant x0 added to each pixel with alternating sign. This is much better – we can choose this constant visually, we can assume it is approximately equal to x1, or we can choose it automatically with the criterion that the values of neighboring pixels change as little as possible, etc. But everything changes as soon as we add noise (which is always present in real images). In the described scheme, the noise contribution accumulates into the total value at each step, which can eventually produce an absolutely unacceptable result; but, as we have seen, restoration is quite possible even with such a primitive method.
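The whole toy example above can be sketched in a few lines. I use NumPy here rather than the Matlab used later in the article, and the pixel values and the constant x0 are made up for illustration:

```python
import numpy as np

# Hypothetical source row of pixels: x1, x2, x3, x4
source = np.array([5.0, 2.0, 7.0, 1.0])
x0 = 3.0  # unknown pixel just outside the image border

# Blur: each pixel gets its left neighbour added, x'i = xi + xi-1
blurred = source + np.concatenate(([x0], source[:-1]))

# Restore by sequential subtraction: r1 = g1, ri = gi - r(i-1)
restored = np.empty_like(blurred)
restored[0] = blurred[0]
for i in range(1, len(blurred)):
    restored[i] = blurred[i] - restored[i - 1]

# The result is the source with x0 added with alternating sign:
# x1 + x0 | x2 - x0 | x3 + x0 | x4 - x0
signs = np.array([1.0, -1.0, 1.0, -1.0])
assert np.allclose(restored, source + signs * x0)
```

Running this confirms the claim: the restored row differs from the source only by the alternating ±x0 offset.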

Blurring process model

Now let’s pass on to more formal and scientific description of these blurring and restoration processes. We will consider only grayscale images, supposing that for processing of a full-color image it is enough to repeat all required steps for each of the RGB color channels. Let’s introduce the following definitions:
f(x, y) – source image (non-blurred)
h(x, y) – blurring function
n(x, y) – additive noise
g(x, y) – blurring result image

We will form the blurring process model in the following way:
g(x, y) = h(x, y) * f(x, y) + n(x, y) (1)

The task of restoring a blurred image consists in finding the best approximation f'(x, y) to the source image. Let’s consider each component in more detail. The functions f(x, y) and g(x, y) are quite clear. But what about h(x, y)? During blurring, each pixel of the source image turns into a spot in the case of defocusing, and into a line segment (or some path) in the case of ordinary motion blur. Put differently, each pixel of the blurred image is “assembled” from the pixels of some nearby area of the source image. All of these overlap, which results in a blurred image. The principle according to which one pixel is spread out is called the blurring function; other names are the PSF (point spread function) or kernel. The size of this function is smaller than the size of the image itself – for example, in the first “demonstration” example the size of the function was 2, because each result pixel was the sum of two pixels.

Blurring functions

Let us see what typical blurring functions look like. Hereinafter we will use the tool that has become standard for such purposes – Matlab; it contains everything required for the most diverse experiments with image processing (among other things) and lets us concentrate on the algorithms, shifting all the routine work to function libraries – though at the cost of performance. So, let’s get back to the PSF; here are some examples:


PSF of a Gaussian blur, generated with: fspecial(‘gaussian’, 30, 8);


PSF of a motion blur, generated with: fspecial(‘motion’, 40, 45);

The process of applying the blurring function to another function (in this case, to an image) is called convolution, i.e. some area of the source image convolves into one pixel of the blurred image. It is denoted by the operator “*” – but do not confuse it with simple multiplication! Mathematically, for an image f with dimensions M x N and a blurring function h with dimensions m x n, it can be written as follows:

g(x, y) = h(x, y) * f(x, y) = Σ(s = -a..a) Σ(t = -b..b) h(s, t) · f(x - s, y - t) (2)

Where a = (m – 1) / 2, b = (n – 1) / 2. The process opposite to convolution is called deconvolution, and solving this task is far from trivial.

Noise model

It only remains to consider the last summand of formula (1), the one responsible for noise: n(x, y). There can be many causes of noise in digital sensors, but the basic ones are thermal vibrations (Brownian motion) and dark current. The amount of noise also depends on a number of factors, such as the ISO value, sensor type, pixel size, temperature, magnetic fields, etc. In most cases the noise is Gaussian (set by two parameters – mean and variance), and it is also additive, does not correlate with the image, and does not depend on pixel coordinates. The last three assumptions are very important for the further work.

Convolution theorem

Let us get back to the initial task of restoration – we need to somehow reverse the convolution, bearing the noise in mind. From formula (2) we can see that it is not so easy to get f(x, y) from g(x, y) – calculated straightforwardly, it yields a huge system of equations. But the Fourier transform comes to the rescue; we will not cover it in detail, because a lot has already been said about it. There is the so-called convolution theorem, according to which convolution in the spatial domain is equivalent to ordinary multiplication in the frequency domain (element-by-element multiplication, not a matrix product). Correspondingly, the operation opposite to convolution is equivalent to division in the frequency domain. This can be expressed as follows:

f(x, y) * h(x, y) ⇔ F(u, v) · H(u, v) (3)

Where H(u, v) and F(u, v) are the Fourier transforms of h(x, y) and f(x, y) respectively. So, the blurring process (1) can be rewritten in the frequency domain as:

G(u, v) = H(u, v) · F(u, v) + N(u, v) (4)
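The convolution theorem is easy to verify numerically. A small NumPy sketch (an illustration of the math with made-up data, not the article’s Matlab code): a circular convolution computed directly in the spatial domain matches an element-by-element product in the frequency domain.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((8, 8))        # a small random "image"
h = np.zeros((8, 8))
h[:3, :3] = 1.0 / 9.0         # 3x3 box-blur PSF, zero-padded to image size

# Frequency domain: G = H .* F (element-by-element product)
g_freq = np.real(np.fft.ifft2(np.fft.fft2(h) * np.fft.fft2(f)))

# Spatial domain: circular convolution g(x, y) = sum h(s, t) f(x-s, y-t)
g_spat = np.zeros_like(f)
for s in range(8):
    for t in range(8):
        g_spat += h[s, t] * np.roll(f, (s, t), axis=(0, 1))

assert np.allclose(g_freq, g_spat)  # both routes give the same blur
```

Note that the FFT route implies circular (wrap-around) boundaries, which is also why the Matlab scripts below pass 'circular' to imfilter.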

Inverse filter

Here we are tempted to divide this equation by H(u, v) to get the following estimate F^(u, v) of the source image:

F^(u, v) = G(u, v) / H(u, v) = F(u, v) + N(u, v) / H(u, v) (5)

This is called inverse filtering, but in practice it almost never works. Why? To answer this question, let us look at the last summand in formula (5) – wherever the function H(u, v) takes values close or equal to zero, the contribution of this summand dominates. This happens in almost every real example – to see why, let’s recall what a spectrum looks like after the Fourier transform.
So, we take the source image,

convert it into a grayscale one, using Matlab, and get the spectrum:

% Load image
I = imread('image_src.png');
figure(1); imshow(I); title('Source image');
% Convert image into grayscale
I = im2double(rgb2gray(I));
% Compute Fourier transform and center it
fftRes = fftshift(fft2(I));
% Show result
figure(2); imshow(mat2gray(log(1 + abs(fftRes)))); title('FFT - amplitude spectrum (log scale)');
figure(3); imshow(mat2gray(angle(fftRes))); title('FFT - phase spectrum');

As a result we have two components: the amplitude and phase spectra. By the way, many people forget about the phase. Note that the amplitude spectrum is shown on a logarithmic scale, because its values vary tremendously – by several orders of magnitude: the values are maximal in the center (millions) and quickly decay almost to zero away from it. Because of this, inverse filtering works only when the noise is zero or nearly zero. Let’s demonstrate this in practice, using the following script:

% Load image
I = im2double(imread('image_src.png'));
figure(1); imshow(I); title('Source image');
% Blur image
Blurred = imfilter(I, PSF, 'circular', 'conv');
figure(2); imshow(Blurred); title('Blurred image');
% Add noise
noise_mean = 0;
noise_var = 0.0;
Blurred = imnoise(Blurred, 'gaussian', noise_mean, noise_var);
% Deconvolution
figure(3); imshow(deconvwnr(Blurred, PSF, 0)); title('Result');

Left: noise_var = 0.0000001. Right: noise_var = 0.000005.

It is clearly seen that adding even a tiny amount of noise causes serious distortions, which substantially limits the applicability of the method.
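The same breakdown can be reproduced outside Matlab. Here is a NumPy sketch of naive inverse filtering per formula (5), on synthetic data with an assumed Gaussian PSF (my own illustration, not the author’s code): with no noise the division by H restores the image essentially exactly, but noise divided by near-zero values of H blows up.

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.random((32, 32))                     # synthetic source image

# Circular Gaussian PSF, sigma = 1.5; its spectrum decays almost to zero
d = np.minimum(np.arange(32), 32 - np.arange(32)).astype(float)
h1 = np.exp(-d ** 2 / (2 * 1.5 ** 2))
h = np.outer(h1, h1)
h /= h.sum()
H = np.fft.fft2(h)

g_clean = np.real(np.fft.ifft2(H * np.fft.fft2(f)))   # blur, no noise
g_noisy = g_clean + rng.normal(0.0, 1e-6, f.shape)    # add tiny noise

def inverse_filter(g):
    # F^ = G / H -- formula (5) without regard for the noise term
    return np.real(np.fft.ifft2(np.fft.fft2(g) / H))

err_clean = np.max(np.abs(inverse_filter(g_clean) - f))  # essentially zero
err_noisy = np.max(np.abs(inverse_filter(g_noisy) - f))  # huge
```

Even noise with a standard deviation of 1e-6 is enough: the high-frequency values of H are so small that the ratio N/H dominates the restored image, exactly as the text describes.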

Existing approaches to deconvolution

There are approaches that take the presence of noise into account – one of the first and most popular is the Wiener filter. It treats the image and the noise as random processes and finds an estimate f' of the distortion-free image f such that the mean square deviation between them is minimal. The minimum is achieved by the following function in the frequency domain:

F^(u, v) = [ 1 / H(u, v) ] · [ |H(u, v)|² / ( |H(u, v)|² + Sn(u, v) / Sf(u, v) ) ] · G(u, v) (6)

This result was found by Wiener in 1942. We will not give the detailed derivation in this article; those interested can find it here. Sn and Sf denote the power spectra of the noise and of the source image respectively – as these values are rarely known, the fraction Sn / Sf is replaced by some constant K, which can be roughly characterized as a noise-to-signal ratio.
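With Sn / Sf replaced by a constant K, formula (6) simplifies to conj(H) / (|H|² + K), since 1 / H = conj(H) / |H|². A NumPy sketch on synthetic data (this mirrors what Matlab’s deconvwnr does with an NSR argument, but is my own illustration, not the article’s code):

```python
import numpy as np

def wiener_deconv(g, psf, K):
    """Formula (6) with Sn/Sf ~= K:  F^ = conj(H) / (|H|^2 + K) * G."""
    H = np.fft.fft2(psf, s=g.shape)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * np.fft.fft2(g)
    return np.real(np.fft.ifft2(F_hat))

rng = np.random.default_rng(2)
f = rng.random((32, 32))                 # synthetic source image
psf = np.full((3, 3), 1.0 / 9.0)         # 3x3 box-blur PSF

# Circular blur plus Gaussian noise
H = np.fft.fft2(psf, s=f.shape)
noise_var = 1e-6
blurred = np.real(np.fft.ifft2(H * np.fft.fft2(f)))
blurred += rng.normal(0.0, np.sqrt(noise_var), f.shape)

# K estimated as noise variance over signal variance
restored = wiener_deconv(blurred, psf, K=noise_var / f.var())
```

Unlike the naive inverse filter, the K term in the denominator keeps near-zero values of H from amplifying the noise, so the restored image is much closer to the source than the blurred observation.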

The next method is “Constrained Least Squares Filtering”, other names: “Tikhonov filtering”, “Tikhonov regularization”. Its idea consists in formulating the task in matrix form and solving the corresponding optimization problem. The resulting equation can be written as follows:

F^(u, v) = [ H*(u, v) / ( |H(u, v)|² + γ · |P(u, v)|² ) ] · G(u, v) (7)

Where γ is the regularization parameter and P(u, v) is the Fourier transform of the Laplacian operator (a 3 × 3 matrix).

Another interesting approach was proposed independently by Richardson (1972) and Lucy (1974), and is known as the Lucy-Richardson method. Its distinctive feature is that it is nonlinear, unlike the first three – which can potentially give a better result. The second feature is that the method is iterative, so the stopping criterion becomes an issue. The main idea is to use the maximum likelihood method under the assumption that the image follows a Poisson distribution. The calculation formulas are quite simple and do not use the Fourier transform – everything is done in the spatial domain:

fk+1(x, y) = fk(x, y) · [ h(-x, -y) * ( g(x, y) / ( h(x, y) * fk(x, y) ) ) ] (8)

Here the symbol “*”, as before, denotes convolution. This method is widely used in programs for processing astronomical photographs, where deconvolution (instead of the unsharp mask used in photo editors) is a de facto standard. An example is Astra Image; here are its deconvolution examples. The computational complexity of the method is quite high – processing an average photograph can, depending on the number of iterations, take hours or even days.
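A minimal NumPy re-implementation of iteration (8), using circular convolution and a noiseless synthetic blur (again an illustration of the formula, not the software mentioned above; the convolutions are done via the FFT purely for brevity):

```python
import numpy as np

def conv2_circ(a, b):
    # circular 2-D convolution of two equal-size arrays via the FFT
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def richardson_lucy(g, psf, n_iter=100):
    h = np.zeros_like(g)
    h[:psf.shape[0], :psf.shape[1]] = psf                # embed PSF
    h_flip = np.roll(np.flip(h), (1, 1), axis=(0, 1))    # h(-x, -y), circular
    f = np.full_like(g, g.mean())                        # flat first estimate
    for _ in range(n_iter):
        ratio = g / np.maximum(conv2_circ(f, h), 1e-12)  # g / (h * fk)
        f = f * conv2_circ(ratio, h_flip)                # multiplicative update
    return f

rng = np.random.default_rng(3)
f_true = rng.random((32, 32)) + 0.1          # strictly positive source
psf = np.full((3, 3), 1.0 / 9.0)             # 3x3 box-blur PSF
h_full = np.zeros_like(f_true)
h_full[:3, :3] = psf
g = conv2_circ(f_true, h_full)               # noiseless circular blur

f_est = richardson_lucy(g, psf)
```

The update is multiplicative, so a positive first estimate stays positive throughout – one reason the method suits photon-count (Poisson) data such as astronomical images.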

The last method considered – or rather, a whole family of methods that are now being actively developed – is blind deconvolution. In all the previous methods the blurring function PSF was assumed to be known exactly, but in practice it is not: usually we know only an approximate PSF inferred from the type of visible distortion. Blind deconvolution attempts to take this into account. The principle is quite simple, without going deep into details – a first approximation of the PSF is selected, then deconvolution is performed with one of the methods above, the quality of the result is measured by some criterion, the PSF is tuned based on it, and the iteration repeats until the required result is achieved.

Practice

Now we are done with the theory – let’s move on to practice. We will start by comparing the listed methods on an image with synthetic blur and noise.

% Load image
I = im2double(imread('image_src.png'));
figure(1); imshow(I); title('Source image');
% Blur image
PSF = fspecial('disk', 15);
Blurred = imfilter(I, PSF, 'circular', 'conv');
% Add noise
noise_mean = 0;
noise_var = 0.00001;
Blurred = imnoise(Blurred, 'gaussian', noise_mean, noise_var);
figure(2); imshow(Blurred); title('Blurred image');
estimated_nsr = noise_var / var(Blurred(:));
% Restore image
figure(3); imshow(deconvwnr(Blurred, PSF, estimated_nsr)); title('Wiener');
figure(4); imshow(deconvreg(Blurred, PSF)); title('Regul');
figure(5); imshow(deconvblind(Blurred, PSF, 100)); title('Blind');
figure(6); imshow(deconvlucy(Blurred, PSF, 100)); title('Lucy');

Results:

Wiener filter Tikhonov regularization
Lucy-Richardson filter Blind deconvolution

Conclusion

To conclude the first part, let us consider examples with real images. So far all the blurs were artificial, which is fine for practice and learning, but it is very interesting to see how all this works on real photos. Here is one example of such an image, shot with a Canon 500D using manual focus (to produce the blur):

Then we run the script:

% Load image
I = im2double(imread('IMG_REAL.PNG'));
figure(1); imshow(I); title('Source image');
% PSF
PSF = fspecial('disk', 8);
noise_mean = 0;
noise_var = 0.0001;
estimated_nsr = noise_var / var(I(:));
I = edgetaper(I, PSF);
figure(2); imshow(deconvwnr(I, PSF, estimated_nsr)); title('Result');

And get the following result:

As we can see, new details appeared in the image and sharpness increased considerably; however, ringing artifacts appeared along the sharp borders.

And here is an example with real motion blur – the camera was fixed on a tripod, a fairly long exposure was set, and an even movement during the exposure produced the following blur:

The script is almost the same, but the PSF type is “motion”:

% Load image
I = im2double(imread('IMG_REAL_motion_blur.PNG'));
figure(1); imshow(I); title('Source image');
% PSF
PSF = fspecial('motion', 14, 0);
noise_mean = 0;
noise_var = 0.0001;
estimated_nsr = noise_var / var(I(:));
I = edgetaper(I, PSF);
figure(2); imshow(deconvwnr(I, PSF, estimated_nsr)); title('Result');

Result:

Again the quality increased noticeably – window frames and cars became recognizable. The artifacts are different this time, unlike the previous example with defocusing.

Practical implementation. SmartDeblur

I created a program that demonstrates restoration of blurred and defocused images. It is written in C++ using Qt.
For the Fourier transform I chose the FFTW library, as the fastest open-source implementation.
The program is called SmartDeblur; Windows binaries and sources can be downloaded from GitHub:
https://github.com/Y-Vladimir/SmartDeblur/downloads
All source files are under the GPL v3 license.

Screenshot of the main window:


Main functions:

  • High speed. Processing an image of 2048×1500 pixels takes about 300 ms in Preview mode (while the adjustment sliders are being moved). High-quality processing may take a few minutes
  • Real-time parameters changes applying (without any preview button)
  • Full resolution processing (without small preview window)
  • Deep tuning of kernel parameters
  • Easy and friendly user interface
  • Help screen with image example
  • Deconvolution methods: Wiener, Tikhonov, Total Variation prior

And in conclusion, let me demonstrate one more example with a real (not synthetic) out-of-focus blur:

That’s all for the first part.
In the second part I will focus on the problems of processing real images: building and estimating PSFs, more complex and advanced deconvolution techniques, methods of eliminating the ringing effect, and a review and comparison of existing deconvolution software.
If you have any questions please contact me by email – I will be happy if you give me any feedback regarding this article and SmartDeblur.

P.S. A Russian translation can be found here

References

1. Digital Image Processing. Rafael C. Gonzalez, Richard E. Woods
2. Digital Image Processing Using MATLAB. Rafael C. Gonzalez, Richard E. Woods, Steven L. Eddins


Vladimir Yuzhikov
My Google+ profile

UPDATE: Article’s discussion on reddit

Comments:
Sander U.  (2012-10-17 04:14:24)
Very interesting! By the way did you see refocus plugin (http://refocus.sourceforge.net)? Based on deconvolution too.

Vladimir Yuzhikov  (2012-10-17 05:09:22)
Yes, I tried the Refocus plugin, but its interface is not very convenient and it has not been maintained since 2003.

Garrett  (2012-10-21 08:13:43)
This was awesome! Great write up. The end with the real blur was spectacular. I think blur removal and interpolation of data will be something we will really see a big growth of in the future.

Vladimir Yuzhikov  (2012-10-21 08:19:41)
Garrett, thanks! Soon I will publish the second part of article which will be devoted to practical problems and their solutions

terlmaa  (2012-10-21 08:31:02)
Sounds like a solid plan to me dude. www.Over-Anon.tk

Robert  (2012-10-21 08:36:16)
This is amazing!

Anon  (2012-10-21 08:42:55)
I could see this possibly being used in forensics in the future

Jen Matthews  (2012-10-21 08:43:07)
This is good work! What’s great about this is that you’ve put up all of your steps! Good going!

Anonymous  (2012-10-21 08:46:53)
Wow. This is amazing.

George Sinise  (2012-10-21 08:56:08)
Enhance.

Kevin Strongafur  (2012-10-21 09:06:44)
Thank you for all the detail. It’s nice to know what’s going on behind all these functions. I would ask though, does this deblur tool differ from the one Adobe presented at last year’s Max Conference, which is still in the works for a future version of Photoshop.

Mustafa  (2012-10-21 09:39:44)
LOL magic of the CSI was true then

TheOnlyJoker  (2012-10-21 09:41:00)
Wow dude! Awesome! I really appreciate how much time you’ve spent on such a great productive project! It’s a really awesome little tool you’ve developed.

Anthony  (2012-10-21 09:53:44)
You guys are masters, congrats on making some super-pimp type shit

ctan  (2012-10-21 10:00:28)
Absolutely amazing work.

Mikola Lysenko  (2012-10-21 10:15:32)
Long ago I did a similar thing for a class project, and wrote a pretty detailed write up of my findings. You can find the report here: http://pages.cs.wisc.edu/~bmsmith/projects/2009/graphics838p1/report.pdf (15mb PDF warning) I also kept a running blog/journal of the project while I was working on it as well: http://research.cs.wisc.edu/graphics/Courses/AdvancedGraphics09/Mikola/Project1Blog Fun stuff!

Oren Tirosh  (2012-10-21 10:19:34)
I believe the ring artifacts in the non-synthetic images are mostly a result of nonlinearities. I wonder if there is some iterative process that could model and compensate for these nonlinearities. If the image has areas that you know should be of uniform color (e.g. the road or the white area of the page) these can be used as reference for such an algorithm.

Vladimir Yuzhikov  (2012-10-21 10:46:42)
Mikola, thanks for the link – will look at your blog

Vladimir Yuzhikov  (2012-10-21 10:49:23)
Oren Tirosh, ring effect is the most artifact of deconvolution. One of the goal of next SmartDeblur release – to supress this and other artifacts

ivanhoe  (2012-10-21 11:48:09)
Great results. Are there any techniques for fixing the blur that results from small camera movements (like shaking of hands)?

Henk Poley  (2012-10-21 11:55:59)
Would you be able to extract where the distortion is, and then use in-painting techniques to fill in the blanks? Even just having the distortion map as an alpha-channel or something would be nice to have for further image processing.

Vladimir Yuzhikov  (2012-10-21 12:01:08)
ivanhoe, you can choose “Motion blur” defect type in the SmartDeblur and restore that photos

Vladimir Yuzhikov  (2012-10-21 12:02:51)
Henk Poley, “Would you be able to extract where the distortion is, and then use in-painting techniques to fill in the blanks?” – it is slightly different problem – but in general, yes

PhotoEnthusiast  (2012-10-21 12:59:40)
Very interesting. How useful would this be for images which is only partially slight out of focus? Would the side effects and artifacts from the de-blurring affect, or leak into, the neighboring sharp areas of the image? I’m thinking of usage where the image quality is prioritized, contrary to information retrieval.

Scott  (2012-10-21 14:23:21)
Will you ever make a version for Mac users??

Vladimir Yuzhikov  (2012-10-21 14:38:58)
Scott, sources are compilable for Windows, Linux and Mac OS – because they are based on Qt – you can complie it. But I published windows-binaries only. Also you can look at the discussion about it on reddit: http://www.reddit.com/r/technology/comments/11uaqm/restoration_of_defocused_and_blurred_images_with/c6pntv2

Peter  (2012-10-21 15:06:58)
This is pretty cool. Most photos have parts that are out of focus in front or in the back; with this you’re able to get these parts sharp, so you can answer “who was that man in the back?” As a result I think you found a way to retrieve or restore depth information from a flat picture with no extra tools. You didn’t write it for that, but it can be used for that: you can retrieve depth information from a single image. And taking this to a far deeper level, I think (but I am not sure) that this should interest anyone who wonders how information of a 3D world could be stored on a flat surface. This reminds me of holograms, and of some scientists who say a black hole’s surface is a holographic representation of a 3D world, as all the information is still there. I know I am getting a bit too far and making huge mind jumps here, but I do think that this math you are using could be used to study other fields as well (Hubble image not sharp… etc.)

YuX  (2012-10-21 17:11:03)
Please, bring it to Mac. Thanks!

adrian  (2012-10-21 20:32:33)
Is there a way to do this with video (1080p)? I’d be very interested in using this on a film I’m editing.

Henk Poley  (2012-10-22 01:46:21)
Pretty easy to build on OS X:
* Install fftw-3 with macports
* Install QtSDK from https://qt-project.org/downloads
* git clone https://github.com/Y-Vladimir/SmartDeblur.git
* Open src/SmartBlur.pro with Qt Creator
* Edit line 40 of SmartBlur.pro so it points to the macports libs: unix: LIBS += -L$$/opt/local/lib/ -lfftw3_threads -lfftw3
* On Mountain Lion the build (green play button) will fail because Qt is unsupported there. You can comment out the warning pragma (/* */) to make it build.
The application crashes sometimes, but only when you load a new image while the high quality renderer is still working on the old image data. Also there appears to be a bug: the low quality color render (almost) never shows, so often only the grey scale version is shown, or one of the RGB color planes.

Spazturtle  (2012-10-22 02:43:59)
adrian, yes – extract the frames from the video (very easy), run them through the program, then put them back into a video format.
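That frame round-trip can be sketched with ffmpeg (an assumption: any frame extractor works, and the file names, frame rate and codec below are illustrative; SmartDeblur itself is a GUI tool, so the per-frame deblurring step in the middle is manual or scripted separately):

```shell
# 1. Extract every frame of the clip as numbered PNGs (names are examples).
mkdir -p frames deblurred
ffmpeg -i input.mp4 frames/frame_%06d.png

# 2. Deblur each PNG in frames/ (e.g. with SmartDeblur), saving results
#    under the same names in deblurred/.

# 3. Reassemble the processed frames, matching the source frame rate.
ffmpeg -framerate 24 -i deblurred/frame_%06d.png -c:v libx264 -pix_fmt yuv420p output.mp4
```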

Peter D  (2012-10-22 02:51:26)
Gull, S.F. and Skilling J. (1984) Maximum Entropy Image Reconstruction in IEE Proc., 131F, 646-59.

Joseph Nguyen  (2012-10-22 04:32:21)
Incredible software.

Matt W  (2012-10-22 05:10:13)
Peter D beat me to it, Maximum Entropy Image Reconstruction is where it’s at, although it still has significant artefacts in the presence of noise. There is an open source Maximum Entropy Deconvolution library written by Håvard Rue in the 1990s available from here: http://www.math.ntnu.no/~hrue/index_eng.html

Chris J. Bartle  (2012-10-22 05:29:10)
I was an unbeliever, but I put it to the test and I have seldom been so impressed with a program. This is a work of pure genius. Thank you for sharing.

Alexander Mikhalev  (2012-10-22 05:40:30)
Wonderful work, very impressive.

Matthew Davis  (2012-10-22 05:56:57)
Hello! First of all, great work! This is truly amazing. Second, I would love to partner with you and develop this for the iPhone. I currently develop iOS applications for the California Institute of Technology and Telecommunications (CalIT2, www.calit2.net). The amount of people that could use this technology in a mobile setting is astounding. If you are interested, please get in touch with me via my email. Thanks and keep up the amazing work!

Elliot Geno  (2012-10-22 09:46:46)
I wonder what would happen if you took the Wiener filter, Tikhonov regularization, Lucy-Richardson filter and blind deconvolution and found an average between the different algorithms. It seems like the most accurate information would be shared across the different results. The least accurate information could be flagged for review based on the type of area sampled from. For instance, blind deconvolution works well for detailed parts of images whereas the Wiener filter looks best for broad areas of similar color. Take the worst offending pixels (based on a threshold of wildly inaccurate pixels vs similar pixels across algorithms) and use the Wiener filter for the broad areas and blind deconvolution for the edge details.
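A hypothetical sketch of the per-pixel fusion that comment describes (plain Python on toy pixel grids; nothing here is SmartDeblur's API, and a real implementation would operate on the four filters' actual outputs):

```python
def fuse_restorations(results, disagreement_threshold):
    """results: list of equally sized 2-D pixel grids (lists of lists),
    one per deconvolution algorithm. Returns (fused, flagged): fused is
    the per-pixel mean across algorithms; flagged marks pixels whose
    spread across algorithms exceeds the threshold, i.e. candidates for
    review or for falling back to a single preferred filter."""
    rows, cols = len(results[0]), len(results[0][0])
    fused = [[0.0] * cols for _ in range(rows)]
    flagged = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            samples = [img[r][c] for img in results]
            fused[r][c] = sum(samples) / len(samples)
            spread = max(samples) - min(samples)
            flagged[r][c] = spread > disagreement_threshold
    return fused, flagged

# Example: three "algorithms" agree on the first pixel, disagree on the second.
a = [[100, 100]]
b = [[102, 140]]
c = [[101, 60]]
fused, flagged = fuse_restorations([a, b, c], disagreement_threshold=20)
print(fused)    # [[101.0, 100.0]]
print(flagged)  # [[False, True]]
```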

Michele  (2012-10-22 09:55:35)
Wow, your work is really great and will definitely have very interesting applications. Continue on this path, and you will have much success.


 Posted by at 7:25 pm
Oct 21, 2012
 

http://www.fit.ac.jp/~tanaka/fitsat.shtml

FITSAT-1 (NIWAKA)

A Small Artificial Satellite Developed at the Fukuoka Institute of Technology

Takushi Tanaka

JA6AVG

 

----------------------------------- News -----------------------------------
20121005: FITSAT-1 was deployed from ISS at 15:44 on 4th October (UTC).
20121006: We have received a lot of signal and telemetry reports. All reports show FITSAT-1 has started working.
…
20121016: We got the temperatures, voltages and currents stored in FITSAT-1 by remote commands. Those signals (437.445MHz, AX.25 packet 1200bps) are replies to remote commands. They can be received only in the local area.
20121017: New TLE for FITSAT-1:
FITSAT-1 (NIWAKA)
1 38853U 98067CP 12290.28654574 .00042939 00000-0 70055-3 0 125
2 38853 51.6512 239.6745 0015224 173.3140 186.8061 15.52030183 1793
20121020: First picture from NIWAKA via 5.8GHz.
20121021: Flashing LED is not started yet.
--------------------------------------------------------------------------------

 

We developed a 5.8 GHz high speed transmitter for artificial satellites.
It consists of an exciter module with a 115.2kbps FSK modulator and
a linear amplifier which amplifies a 10mW signal to 4W.

But these two modules were too big for a CubeSat, so we developed a
new module which combines the exciter with a linear amplifier of 2W output.

Using this module, we have developed a small artificial satellite
named FITSAT-1. It also has the nickname "NIWAKA".
The shape is a 10cm cube, and the weight is 1.33kg.

The main mission of this satellite is to demonstrate the high speed
transmitter we developed. It can send a JPEG VGA picture (480x640)
within 6 sec.

NIWAKA was deployed from the International Space Station
at 15:44 on 4th October 2012 (UTC), as shown here.


(JAXA movie 60MB)

NIWAKA also uses the 430MHz band for beacon transmission and remote commands.

The beacon signal is a standard Morse code CW signal.
The signal starts with “HI DE NIWAKA …” and telemetry data follows.


(telemetry format)

The following table summarizes the radio frequencies of NIWAKA.


NIWAKA has another experimental mission to test the possibility of optical
communication by satellite. It will actually twinkle as an artificial star.

NIWAKA’s high power LEDs will be driven with more than 200W pulses to
produce extremely bright flashes. These, we hope, will be observable by
the unaided eye or with small binoculars.

NIWAKA will write messages in the night sky with Morse code as:


(JAXA movie 120MB)

The LEDs will also be driven in a faint-light detection mode.
The light will be received by a photomultiplier-equipped telescope
linked to the 5.8 GHz parabolic antenna.

A 10Hz signal with 30% duty cycle is modulated with a 5kHz signal, also
at 30% duty cycle, so the average input power will be
220W x 0.3 x 0.3 ≈ 20W. In order to detect the faint light, a high gain
amplifier with a 5kHz filter may be useful.
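The duty-cycle arithmetic above can be checked directly (plain Python, restating only the figures in the text):

```python
# Average power of the LED drive: a 220W peak input gated by two nested
# 30% duty cycles (the 10Hz envelope and the 5kHz modulation).
peak_w = 220.0
duty_envelope = 0.30   # 10Hz signal, 30% duty
duty_carrier = 0.30    # 5kHz signal, 30% duty
avg_w = peak_w * duty_envelope * duty_carrier
print(round(avg_w, 1))  # 19.8, i.e. roughly the 20W quoted above
```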

The Morse code, on the other hand, is modulated with a 1kHz signal at 15%
duty cycle, so the signal can directly drive a speaker through an AF
amplifier to make the Morse code audible.


Overview (Flight Model)


(Bottom View)

Block diagram


The NIWAKA body is made by cutting a section of 10cm square aluminum
pipe. Both ends of the cut pipe are covered with aluminum plates.
The surface of the body is finished with black anodic coating.

The CubeSat slide rails and side plates are not separate; they are
made as a single unit. The thickness of the square pipe is 3mm, but
the surfaces to which the solar cells are attached are thinned to 1.5mm
because of the weight limit. In order to make the 8.5mm square CubeSat
rails, 5.5mm square aluminum sticks are attached to the four corners of
the square pipe.

The following picture shows the inside of NIWAKA.

The “L” at upper left is a lever to push the deployment switch.
The rotor at upper center is the 437MHz antenna extension mechanism;
the antenna element is stored in a spiral inside a polycarbonate case.
This mechanism was developed by an undergraduate student.
At upper right there are connectors for the flight pin and testing functions.


The trajectory of the ISS is inclined 51.6 deg from the equator, so NIWAKA will
travel between 51.6 degrees south latitude and 51.6 degrees north latitude.

NIWAKA carries a mounted neodymium magnet that forces it to always point
toward magnetic north, like a compass. When NIWAKA rises above the horizon
it will be to the south of the Fukuoka ground station, and the magnet's
alignment of the satellite with the earth's magnetic field will aim both
the 5.8 GHz antenna and the LEDs accurately enough that the Fukuoka ground
station falls within their main beams.

We will perform both 5.8 GHz high-speed and optical communication experiments
for about 3 minutes as the satellite travels along the orbit shown as the red
line in the figure.


The name NIWAKA comes from “Hakata Niwaka”, a traditional form of impromptu
comical storytelling performed with this mask. Hakata is the old name of
Fukuoka city.


After Deployment from NASA pictures




Ground Station

The 5.840 GHz signal is converted to 440 MHz by an LNB attached at the
focal point of a 1.2m parabolic antenna. The parabola is mounted on an
equatorial telescope mount.

The 440 MHz signal is converted to 10.7 MHz by an AR8600 receiver.
The 10.7 MHz signal is detected by a 280/500 kHz FM detector.
Since we use simple FSK, the FM detector directly generates the RS232C
signal for the PC.

A JPEG picture consists of 128-byte packets as follows:

(Photo Data)

00 00 7A 00 FF D8 FF E0 …
01 00 7A 00 09 0A 16 17 …

12 34 56 00 ….. FF D9 …

That is, the first 4 bytes and the last 2 bytes of each packet are not part
of the photo data. The data size of every packet except the last is
122 (= 7A hex) bytes. A JPEG picture starts with “FFD8” and ends with “FFD9”.
The JPEG picture is reassembled by concatenating the data part of each
packet, i.e. by removing the first 4 bytes and the last 2 bytes.
20 VGA pictures are sent at a time, and each picture takes around 4-6 sec.
There is an 8 msec interval between packets and a 5 sec interval
between pictures.
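The packet layout described above maps directly to code. A minimal sketch (a hypothetical helper, not the actual ground-station software) that rebuilds the JPEG from the received packets:

```python
def reassemble_jpeg(packets):
    """packets: the received 128-byte packets, in order.
    Each packet carries a 4-byte header and a 2-byte trailer that are not
    part of the photo data; the payloads in between, concatenated, form
    the JPEG stream."""
    payload = b"".join(p[4:-2] for p in packets)
    # The JPEG proper runs from the FFD8 marker to the FFD9 marker,
    # which also trims any padding carried in the final packet.
    start = payload.index(b"\xff\xd8")
    end = payload.index(b"\xff\xd9", start) + 2
    return payload[start:end]

# Two synthetic 128-byte packets: 4-byte header, 122-byte data, 2-byte trailer.
p1 = b"\x00\x00\x7a\x00" + b"\xff\xd8" + bytes(120) + b"\x00\x00"
p2 = b"\x01\x00\x7a\x00" + bytes(120) + b"\xff\xd9" + b"\x00\x00"
img = reassemble_jpeg([p1, p2])
print(img[:2].hex(), img[-2:].hex(), len(img))  # ffd8 ffd9 244
```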

Circuit Schematics

All circuits were designed by Mr. Takakazu Tanaka (JA6CYY), the founder and
current chairman of Logical Product Corp. We are grateful to him for opening
his designs to the public.

[1]  5.84GHz to 440MHz converter (LNB)
[2]  5.4GHz local oscillator for converter
[3]  DC-power supply to LNB through coax
[4]  10.7MHz discriminator for 115.2kbps FSK

Software

We thank Timothy (HB9FFH), who made a telemetry decoder for FITSAT-1.
It is available from the Carpcomm website:
http://carpcomm.com/satellite/fitsat1

One of our students also made similar software for Windows:
http://turing.cs.fit.ac.jp/~fitsat/CWFM/FITSAT_CW_Analyzer1.zip

All programs for the 5.8GHz link were developed on Linux. A simple shell
script controls receiving the data and displaying the pictures. Here is the
tgz file which we have developed.

References

[1] Takushi Tanaka, Takakazu Tanaka: "Development of a 5.8GHz-band High Speed Communication Radio Module
    for Small Artificial Satellites", Bulletin of Information Science Inst., Fukuoka Inst. Tech., vol.20,
    pp.1-6, 2009 (in Japanese).
[2] Kenta Tanaka, Takushi Tanaka, Yoshiyuki Kawamura:  "Development of The Cubesat FITSAT-1", UN/Japan nano
    satellite symposium (to appear in October 2012).
[3] Yuka Mizoguchi, Kaihua FENG, Takanori Soda, Toshiki Otsuka, Tatsuro Kinoshita, Kohei Nishimoto,
    Yoshiyuki Kawamura, Takushi Tanaka: "Observation of The LED signal from FITSAT-1", UN/Japan nano
    satellite symposium (to appear in October 2012).
[4] Yoshiyuki Kawamura, Takushi Tanaka: "Emission of LEDs from an ultra-small satellite",
    The 418th Topical Meeting of the Laser Society of Japan (to appear October 2012).
[5] Takushi Tanaka, Yoshiyuki Kawamura: "Over view of FITSAT-1",
    The 53rd Symposium on Space Science and Technology in Japan, (to appear November 2012).

 


Current position


 

Verification Card

FITSAT-1 starts sending its beacon signal 30 min after deployment. Please
send a signal report and your postal address to fitsat1@hotmail.co.jp, with
a cc to tanaka@fit.ac.jp, and you will receive this verification card.

The beacon frequency of FITSAT-1, 437.250MHz, conflicts with the satellite
PRISM of Tokyo Univ. Please confirm that the CW starts with “HI DE NIWAKA …”.

The orbit is almost the same as that of the ISS.

 


 Posted by at 8:39 pm
Oct 20, 2012
 

The Museum of HP Calculators


HP Forum Archive 21

[ Return to Index | Top of Index]

[WP34s] Accurate Clock Message #1 Posted by Katie Wasserman on 8 Apr 2012, 12:48 a.m. 

I added a crystal and load capacitors to my WP34s and was bothered by the inaccuracy of the results, as others have reported here. While you might get lucky and find a crystal within the stated 20 or 30 ppm tolerance of most of these 32K crystals, you typically end up losing or gaining about 2 seconds per day — not terrible, but not as good as a $10 Timex watch.

I decided to change one of the load capacitors to a tiny trimmer (variable) capacitor and used that to “pull” the crystal to better accuracy. I found that there was enough room inside the calculator to use thru-hole devices for the crystal, fixed capacitor and variable capacitor:

While a little messy, it works just fine and with some trial and error over the course of many days I was able to adjust the clock to within about 1 second per week. The oscillator is not temperature compensated so this accuracy is only obtainable at a constant temperature. Also this won’t last forever as the crystal will age, but it’s much better than when I started with two fixed 6pf capacitors.
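The accuracy figures in this post follow from simple arithmetic: a crystal off by f ppm drifts f microseconds every second. A quick check of the numbers (plain Python, figures from the post):

```python
SECONDS_PER_DAY = 24 * 60 * 60        # 86400
SECONDS_PER_WEEK = 7 * SECONDS_PER_DAY

def drift_seconds(ppm, interval_s):
    # A crystal off by `ppm` parts per million gains or loses
    # ppm * 1e-6 seconds for every second of the interval.
    return ppm * 1e-6 * interval_s

print(round(drift_seconds(20, SECONDS_PER_DAY), 2))   # 1.73 s/day at 20 ppm
print(round(drift_seconds(30, SECONDS_PER_DAY), 2))   # 2.59 s/day at 30 ppm
# The trimmed result of about 1 s/week corresponds to roughly:
print(round(1.0 / SECONDS_PER_WEEK * 1e6, 2))         # 1.65 ppm
```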

For the record I used the following parts:

Digikey 300-8301-ND – Citizen 32,768Hz 6pf load xtal

Digikey 490-3752-ND – Murata 6pf ceramic capacitor

Mouser 242-2820 – Xicon variable capacitor 2.8 – 20 pf

-Katie

Edited: 8 Apr 2012, 1:34 a.m. after one or more responses were posted

Re: [WP34s] Accurate Clock Message #2 Posted by Walter B on 8 Apr 2012, 1:25 a.m., in response to message #1 by Katie Wasserman 

Thanks, Katie, for sharing 🙂


[ Return to Index | Top of Index ]

Go back to the main exhibit hall

 Posted by at 8:18 pm
Oct 18, 2012
 
Phone-Controlled Lock Lets You Lock Your Door From Anywhere In The World

Computerized door lock ‘Lockitron’ allows users to lock and unlock their doors remotely from anywhere in the world with their phones.
Simple and intuitive, access to your Lockitron can be shared with family, friends and guests in a few quick taps.
For security, you can choose to receive a notification whenever your door is unlocked, whether by phone or key—and if you lose your phone, simply disable its access by changing your password.
Compatible with any phone on the market, the Lockitron can even be operated by older phones via simple text message commands.
If you are using an iPhone 4S or iPhone 5, Lockitron will work even if the internet or power goes out.
Here’s how you can achieve keyless entry wherever you want:




[via  Lockitron]
 Posted by at 8:37 am
Oct 14, 2012
 

http://www.geekestateblog.com/

You are here: GeekEstate Blog » About

About

GeekEstate Blog was founded by Zillow.com as a resource for real estate professionals who want to learn more about how they can grow their business through smart use of technology.

As the name implies, this blog’s varied authors are geeks – people with experience providing technology services to the real estate industry – who understand technology and how it can work for you. Topics discussed here range from a review of the latest cell phone to website tips for search engine optimization (SEO).

The project was the brainchild of Drew Meyers when it launched in summer 2007. He managed the site until his departure from Zillow in early 2010. In late 2010, ownership of the site was transferred to Drew.

The opinions expressed on this blog are posted by its authors and don’t necessarily reflect the views of Drew Meyers. GeekEstate Blog’s contributors are not employed or paid by anyone (unless otherwise stated). If you’d like to become a contributor to this blog, please let us know!

Disclosures: I worked for Zillow for close to 5 years, still have many friends there, and am a shareholder in the company. Virtual Results was my employer from late 2010 until December of 2011, and I have a referral agreement in place for sending Virtual Results business.

 Posted by at 10:10 am
Oct 8, 2012
 

Astronomy Picture of the Day

Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.

2012 October 7

The Same Color Illusion
Image Credit: Edward H. Adelson, Wikipedia
Explanation: Are squares A and B the same color? They are! To verify this, either run your cursor over the image or click here to see them connected. The above illusion, called the same color illusion, illustrates that purely human observations in science may be ambiguous or inaccurate, even for such a seemingly direct perception as relative color. Similar illusions exist on the sky, such as the size of the Moon near the horizon, or the apparent shapes of astronomical objects. The advent of automated, reproducible measuring devices such as CCDs has made science in general, and astronomy in particular, less prone to, but not free of, human-biased illusions.

 

Tomorrow’s picture: horse of a different color 




Authors & editors: Robert Nemiroff (MTU) & Jerry Bonnell (UMCP)
NASA Official: Phillip Newman Specific rights apply.
NASA Web Privacy Policy and Important Notices
A service of: ASD at NASA / GSFC
& Michigan Tech. U.

 Posted by at 9:26 pm
Oct 2, 2012
 

LINKS:

 

http://dl.acm.org/citation.cfm?id=1606374

http://www.flexijet.info/en/produkte/flexijet-4architects/das-flexijet-4architects/

http://www.jmatech.com/

http://www.acuitylaser.com/AR200/sensor-technical-data.html

 

 Posted by at 9:17 pm