If you have ever dug into the nature of light as it pertains to photography, or become involved in a discussion about how narrow (small) apertures adversely affect image quality, you have probably heard of diffraction. It is one of the better-known technical aspects of photography, familiar even to those who are not technically minded. Despite that broad recognition, it is also one of the most misunderstood. I have explained the true nature of diffraction on many forums, on many occasions. Now that I have my own personal blog at my fingertips, I thought I would put down, in a single cohesive article, what the Diffraction Myth is, and debunk it with what I hope will be a relatively simple explanation backed up by some real-world examples and visual aids.
Before diving into what the diffraction myth is, it helps to fully understand what diffraction is. Diffraction is a physical phenomenon, a trait of waves as they move past an obstacle. Diffraction presents with all waveforms: not just light, but the entire electromagnetic spectrum, as well as waves moving through water, sound waves, and so on. Any time a waveform encounters an obstacle, it will bend around that obstacle. The obstacle may be a small one with space around it, or a large one with only a slit or a hole through which the wave may pass. The simplest explanation I can offer as to why diffraction occurs would be that any obstacle introduces “drag”. (NOTE: Drag and “bending” of light is not what actually happens. What actually happens is a much more complex topic that covers how diffraction is an intrinsic trait of electromagnetic waves. You can read more about that here, if you wish. For now, “drag” and “bending light” are simpler terms that describe the effect well enough.)
“Drag” on a light wavefront increases as you get closer to the obstacle. In the case of an opening in a large obstacle (such as the aperture in a lens), the part of the wave near the center is able to pass through largely unscathed. The part of the wave closest to the edge of the opening experiences the most drag, and “bends” considerably, propagating off at a different angle on the far side of the opening. A larger opening leaves more of the wave minimally affected, while a smaller opening affects more of the wave. The two images below (from Wikipedia) demonstrate the effects of diffraction with a large and small opening on a waveform passing through them. Note that the smaller opening causes a more significant effect:
When light from a single point source passes through the round opening created by the diaphragm in a lens, the aperture, diffraction causes that point of light to spread out in a circular shape: a bright spot surrounded by concentric rings, each subsequent ring slightly dimmer than the previous. This is called an Airy Pattern. The central bright spot is called the Airy Disc. Below is an example of an Airy Pattern (again from Wikipedia), and I’ll be using this image to debunk the Diffraction Myth later on.
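To put rough numbers on how the aperture drives the size of that pattern: the diameter of the Airy Disc, out to its first dark ring, is approximately 2.44 × λ × N, where λ is the wavelength of the light and N is the f-number. A quick sketch in Python (the function name is mine, and green light at 550nm is an assumption):

```python
import math

def airy_disc_diameter_um(f_number, wavelength_nm=550):
    """Diameter of the Airy disc (to the first dark ring), in micrometres.

    Uses the standard Fraunhofer-diffraction approximation:
    d = 2.44 * wavelength * f-number
    """
    return 2.44 * (wavelength_nm / 1000.0) * f_number

for n in (2.8, 5.6, 11, 22):
    print(f"f/{n}: Airy disc ~ {airy_disc_diameter_um(n):.1f} um")
```

Stopping down from f/2.8 to f/22 grows the disc by roughly a factor of eight, which is exactly why narrow apertures soften the image.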
The featured photo on this post is also an example of diffraction, although of a slightly different nature. The featured photo is a crop of the out-of-focus blur circles (bokeh) of Christmas lights. Each of those big circles is an Airy Pattern. Unlike the Airy Pattern of the laser above, the Airy Pattern of my image is relatively flat. This is because a very wide aperture was used (f/2.8), so the effects of diffraction are minimal (additionally, the lens has a bit of spherical aberration, which changes how light is focused onto the sensor, further “flattening” the Airy Pattern). The wave nature of light is clearly visible, however, particularly near the fringes of the blur circles. Additionally, one can see further diffraction as light bends around tiny flecks of dust on the elements of my lens, creating tiny diffraction patterns of their own. Unlike diffraction caused by an opening, dust is a point obstruction, acting more like a rock in a pond.
Now that you know what diffraction is, and how it affects light, it’s time to debunk the Diffraction Myth. What is the diffraction myth? If you have ever been involved in a discussion about why you shouldn’t use narrow apertures like f/16, f/22, etc., then you have probably been told the reason is diffraction. Furthermore, if you have ever been told by anyone, such as an over-the-counter DSLR sales rep at a camera store, that you shouldn’t get a camera with smaller pixels because they will experience problems with diffraction, you’ve had first-hand experience with the Diffraction Myth. The notion that a higher resolution sensor will produce worse IQ than a lower resolution sensor because of diffraction is the evil heart of the myth, and is usually based on the DLA, or Diffraction Limited Aperture, of the sensor (the point at which blur caused by diffraction begins to affect IQ at a pixel level). These notions are a fallacy, and I’ll explain why throughout the rest of this article.
When it comes to diffraction at small apertures like f/16 or f/22, it is true that your IQ drops at those levels. Diffraction puts an upper limit on the resolving power of a lens, and each narrower aperture lowers that limit. Diffraction, however, blurs an image in a very even, very predictable way. That means softening caused by diffraction can fairly easily be corrected with some sharpening while post-processing. The “Smart Sharpen” feature of Adobe Photoshop when used in “Lens Blur” mode, or the “Detail” slider in Lightroom’s sharpening tool, are both designed to correct diffraction blur in an image. The use of these features allows one to desoften an image that has been softened by diffraction blur, often to a very acceptable degree, without significantly affecting detail. IQ from a photo taken at f/22 can frequently be restored to such a degree that it rivals the IQ of a photo taken at f/4.
The myth really comes into play when one compares photos taken at f/22 to photos taken “wide open”, at a lens’s maximum aperture. All lenses suffer from two types of image-quality-degrading factors. Diffraction is of course one of those IQ-degrading factors. Optical Aberrations are the other. Aberrations generally only occur when a lens is used at or near its maximum aperture (“wide open” in photographer speak.) Most of the time, most lenses perform far worse when used “wide open”, which may be f/2.8, f/1.8, even f/1.4, than when used at the narrowest aperture setting like f/22, or even f/32 if it is available. A single lens may suffer from a variety of optical aberrations wide open, including Chromatic Aberration, Spherical Aberration, Coma, Distortion and a variety of others. Optical aberrations are generally most prevalent when light from the outer edges of the lens elements is used, as that is where light usually has to bend the most. The greater the degree to which light is bent, the more likely it is that aberrations in the way that light is focused will occur. Aberrations are reduced when a lens is stopped down, because the diaphragm blades block light from the periphery of the lens, progressively allowing only light from the center, where aberrations are lowest, to be focused on the sensor.
The animated GIF image below demonstrates the effects of aberrations and diffraction from the Canon EF 50mm f/1.4 USM lens. The EF 16-35mm f/2.8 L USM is used as a subject, as it has great detail and clean, sharp edges around text. The 50mm lens has an aperture range of f/1.4 to f/22. The animated GIF cycles through all of the major aperture stops. The most important comparison to make is f/22 vs. f/1.4…the difference is huge, and strongly in favor of f/22. Another key comparison to make is f/22 vs. f/2.8. While f/2.8 does not exhibit nearly the same degree of aberrations as f/1.4, slight color fringing is still evident around the white lettering. The lens does not reach its “sweet spot”, the point where optical aberrations and diffraction are at their relative minimums, until f/8! Even at f/5.6 some fringing from chromatic aberration can still be seen. By f/11 and through f/16 and f/22, the sensor is “diffraction limited”; however, note that IQ is still better than at most other apertures, except perhaps f/8 and possibly f/5.6.
The notion that diffraction is an IQ killer at narrow apertures, while it may have applied in the past when enlarging 35mm film for print at much larger sizes, no longer really applies today. Beyond the simple fact that diffraction tends to be less destructive to IQ than optical aberrations are at much wider apertures, with digital photography we have far more power to correct things like diffraction blurring with some post-processing on a computer. As I mentioned before, diffraction is a predictable and well-understood form of blurring. When something is predictable and well-understood, we have the power to reverse the process. With a deep enough understanding, we could theoretically undo diffraction softening entirely, restoring all of the image detail that was actually present in the scene. Generally speaking, we do not have that depth of knowledge, so we use approximations and “good enough” techniques.
In the animated GIF below, I’ve sharpened the f/22 image from the animation above to be about as sharp as the f/8 image. I used the most basic technique for sharpening, the Unsharp Mask tool in Adobe Photoshop CS6. The restoration of the f/22 image is not complete, and sharpening enhanced noise a touch; however, the softening in that image has largely been reversed. If the entire image were scaled to web size, the f/22 and f/8 images would be virtually indistinguishable. Better results could be achieved with more advanced tools, such as the Smart Sharpen tool in Photoshop, or third-party tools such as Sharpener and Dfine from Nik Software, or Detail 3 and Denoise 5 from Topaz, along with more attentive care to the settings used.
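For the curious, the idea behind an unsharp mask is simple enough to sketch in a few lines. The toy implementation below is entirely my own (a box blur stands in for the Gaussian blur real tools use, and it is not what Photoshop does internally), but it shows the principle: subtract a blurred copy from the image and add the difference back, which steepens softened edges.

```python
import numpy as np

def unsharp_mask(image, radius=1, amount=1.0):
    """Basic unsharp mask: sharpened = image + amount * (image - blurred)."""
    # Separable box blur (a crude stand-in for a Gaussian blur).
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    blurred = image.astype(float)
    for axis in (0, 1):
        blurred = np.apply_along_axis(
            lambda row: np.convolve(row, kernel, mode="same"), axis, blurred)
    return image + amount * (image - blurred)

# A soft edge (roughly how diffraction renders a sharp edge) regains
# contrast after sharpening: the 0.3 -> 0.7 transition becomes steeper.
soft_edge = np.tile(np.array([0.0, 0.1, 0.3, 0.7, 0.9, 1.0]), (6, 1))
sharpened = unsharp_mask(soft_edge, radius=1, amount=1.5)
```

This also shows why sharpening "enhances noise a touch": anything that differs from its blurred neighborhood, including noise, gets amplified by the same `amount` factor.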
The Diffraction Myth extends its nasty little fingers even further. Many photographers always aim to use their lens at a certain aperture in order to maximize IQ. Some never shoot narrower than f/8; some try to shoot at f/4 as often as possible if their lens produces optimal quality at that aperture (which is usually the case with top-end glass, like Canon’s Mark II telephoto and supertelephoto lenses.) While it is true that the resolving power of a lens tends to peak somewhere between f/3.5 and f/6.3 on most lenses, it is not necessarily always artistically valid to use the aperture that produces the highest resolution image.
A prime example of this fact would be landscape photography. Landscapes contain immense volumes of detail. You have large-scale elements such as mountains, lakes, rivers, etc., while concurrently also having fine detail: leaves or pine needles on trees, striations in the rock, ripples on the water’s surface, and for ultra-wide-field photography you might even have full foreground detail in the form of flora, maybe some rocks or boulders, a log or other deadfall. One could aim to maximize the foreground detail at f/4, at the cost of defocus in the distant mountains. Some may think this is an acceptable tradeoff, depth of field blur rather than diffraction blur…however this, again, is a fallacy.
Blur caused by a thin depth of field is non-linear: the farther away from the focal plane you go, the more blurry things will get, so DOF affects different parts of a scene to differing degrees. In contrast, diffraction blur is linear and uniform. Softening caused by diffraction affects the entire photo in exactly the same way, regardless of how close to or far from the focal plane any given element of the scene is. Non-linear deconvolution algorithms exist that can correct spatially varying defocus, however they are complex and usually not as effective as even a basic sharpening algorithm. A simple unsharp mask, let alone a more advanced sharpening tool, can restore a heavily diffraction-blurred image back to acceptable levels of sharpness.
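Because diffraction blur is one and the same convolution everywhere in the frame, it can be largely undone in the frequency domain. The 1-D toy below is my own illustration of the idea (a Wiener-style filter; the 5-tap kernel is an arbitrary stand-in for an Airy PSF, not a real lens measurement): blur two point sources with a known kernel, then divide the blurred spectrum by the kernel's spectrum, damped so weak frequencies don't explode.

```python
import numpy as np

# Two point light sources ("stars") on a 64-sample line.
signal = np.zeros(64)
signal[20], signal[40] = 1.0, 0.6

# Symmetric 5-tap blur kernel, a hypothetical stand-in for an Airy PSF.
kernel = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
psf = np.zeros(64)
psf[:5] = kernel
psf = np.roll(psf, -2)  # centre the kernel on sample 0

# Diffraction-style blur: one spatially invariant (circular) convolution.
H = np.fft.fft(psf)
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))

# Wiener-style inverse: divide by the PSF spectrum, damped by a small
# constant k so near-zero frequencies stay bounded.
k = 1e-3
restored = np.real(
    np.fft.ifft(np.fft.fft(blurred) * np.conj(H) / (np.abs(H) ** 2 + k)))

# The blurred peaks are flattened to 0.4 and 0.24; the restored peaks
# come back close to their original heights of 1.0 and 0.6.
```

Depth-of-field blur gets no such easy inverse, because every depth in the scene would need its own kernel.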
From an artistic standpoint, if it is important to have a very deep depth of field, shooting at a narrower aperture such as f/11, f/16, even f/22, would be better than shooting at f/4 or even f/8. This rule applies even if you are not shooting landscapes. Any time you need additional depth of field to capture your entire subject sharply, diffraction blur will always be the preferable IQ detractor over depth of field blur, as it will always be more correctable. As a caveat, it is often the case that one wants a thin depth of field. In such cases, maximum sharpness of the entire scene is not an important factor. First and foremost, use the aperture setting that achieves your artistic vision, and concern yourself with diffraction (or, for that matter, optical aberrations) last. Art trumps technical accuracy every day, all day, 365 days a year, every year. 😉
Beyond the simple notion that diffraction is an IQ killer, which would be more along the lines of “less than accurate” than “flat out false”, there is an extension of this myth that is indeed flat out incorrect. The DLA Myth is based on the notion that sensors with smaller pixels produce worse image quality than sensors with larger pixels, because of diffraction. In some ways, smaller pixels ARE worse than larger pixels, and under certain circumstances, sensors with smaller pixels can produce worse IQ than sensors with larger pixels. The best example of this would be the fact that, all else being equal (i.e. same sensor manufacturing process and pixel design), a sensor with smaller pixels will be noisier than a sensor with larger pixels. The reasons for that are probably grounds for another article dedicated explicitly to Noise, so I’ll leave that for another day.
When it comes to the notion that diffraction makes sensors with smaller pixels produce worse IQ than sensors with larger pixels, assuming noise is not a factor, we are definitely into “flat out false” territory. Technical camera review sites will often throw around the term “DLA”, which stands for Diffraction Limited Aperture. In reality, the DLA has little to do with the lens, and everything to do with the sensor and its pixel size. All the term refers to is the aperture at which diffraction begins to affect IQ, and “begins” is the key word. Most of the time, when I get into a discussion involving diffraction and smaller sensors, the other party misunderstands the DLA as a “turnaround point”: the point where diffraction kicks in and degrades IQ so much that a camera affected by it produces WORSE image quality than any camera NOT affected by it. That line of reasoning is completely false, and I’ll try to demonstrate why with some visual aids and a little math.
First off, before we dive into what the DLA is and what it means for IQ, it is important to understand that diffraction is always present. It does not matter what aperture on your lens you are using; diffraction is an ever-present aspect of how lenses resolve an image. The reasons why were explained at the beginning of this article, but the point bears repeating: at wide apertures, the degree to which diffraction presents is low, whereas at narrow apertures it is high. At wider apertures, diffraction is often significantly overpowered by optical aberrations. It is also important to point out that diffraction is entirely a physical attribute of light caused solely by the lens. The sensor does not cause diffraction.
The Diffraction Limited Aperture is the point at which the size of the Airy Disc of a resolved point of light becomes as large as a pixel. Logically, it follows that a sensor with smaller pixels will become diffraction limited at wider apertures than a sensor with larger pixels. However, when a sensor becomes diffraction limited, it can and usually will continue to produce better images, because it continues to resolve more detail. The image below demonstrates. In the left-hand column two hypothetical sensors are depicted. On top is a “large pixel” sensor. On bottom is a “small pixel” sensor, whose pixels are one-quarter the width of the larger pixels. That means sixteen smaller pixels fit into the area of one single larger pixel. Both “sensors” are being illuminated by a single point of diffracted light. The diffraction of that point of light is the same for both sensors. The sensor with smaller pixels is diffraction limited, while the sensor with larger pixels is not.
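That definition can be turned into a rough formula. One common rule of thumb puts the DLA where the Airy Disc radius (1.22 × λ × N) reaches the pixel pitch; exact definitions vary between sources, and the 510nm wavelength below is my own assumption, chosen because it lands close to the DLA figures commonly published for these two cameras.

```python
def dla_f_number(pixel_pitch_um, wavelength_nm=510):
    """Rough f-number at which the Airy disc grows to about one pixel.

    Threshold taken as the Airy disc *radius* (1.22 * wavelength * N)
    reaching the pixel pitch; this is an approximation, and published
    DLA definitions differ slightly.
    """
    return pixel_pitch_um / (1.22 * wavelength_nm / 1000.0)

print(f"4.30 um pixels (7D-class):   DLA ~ f/{dla_f_number(4.30):.1f}")
print(f"6.95 um pixels (1D X-class): DLA ~ f/{dla_f_number(6.95):.1f}")
```

Note what the formula does and does not say: it marks where diffraction begins to be visible at the pixel level, nothing more. Smaller pixels simply hit that onset at a wider aperture.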
In the right-hand column are images that depict what the actual pixels themselves would resolve in such a scenario. It should be obvious that the sensor with smaller pixels, despite being thoroughly diffraction limited, is producing a more accurate representation of the Airy Pattern than the sensor with larger pixels! The Airy Pattern itself is by no means interesting. It is, for all intents and purposes, a simple spot of light; one wouldn’t think twice about it most of the time. Regardless, smaller pixels resolve that point of light with more definition and accurate intensity, while larger pixels resolve…a dim square, adjoined by even dimmer squares. Assuming both sensors have similar noise characteristics, the sensor with smaller pixels is, without question, and despite being diffraction limited, producing better, more accurate results. As a matter of fact, progressively smaller pixels will always be better from a detail-resolving standpoint. Assuming noise characteristics are the same, at no time, ever, will a sensor with smaller pixels produce worse image quality than a sensor with larger pixels. Simply put…larger pixels are incapable of resolving any definition…size, shape, intensity…when that point of light is smaller than a pixel.
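The large-pixel vs. small-pixel comparison can also be sketched numerically. In this toy model (entirely my own; a Gaussian stands in for the bright Airy core, and all sizes are arbitrary units), the same diffracted spot is averaged over big pixels and over pixels one-quarter their width:

```python
import math

def spot(x, y, sigma=2.0):
    """A Gaussian spot: a crude stand-in for the Airy core."""
    return math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))

def sample(pixel_size, grid_radius, steps=4):
    """Average the spot over each pixel of a square grid centred on it."""
    pixels = {}
    for i in range(-grid_radius, grid_radius + 1):
        for j in range(-grid_radius, grid_radius + 1):
            total = 0.0
            for a in range(steps):
                for b in range(steps):
                    # midpoint sub-samples within pixel (i, j)
                    x = (i + (a + 0.5) / steps - 0.5) * pixel_size
                    y = (j + (b + 0.5) / steps - 0.5) * pixel_size
                    total += spot(x, y)
            pixels[(i, j)] = total / steps ** 2
    return pixels

large = sample(pixel_size=4.0, grid_radius=1)  # 3x3 grid of big pixels
small = sample(pixel_size=1.0, grid_radius=4)  # 9x9 grid of small pixels

# The small-pixel grid records a smooth falloff from the centre outward;
# the large-pixel grid records little more than one bright square with
# dimmer neighbours, losing the shape of the pattern entirely.
```

The small-pixel samples trace the spot's intensity profile; the large pixels average the whole spot into a couple of flat values, which is exactly the "dim square adjoined by dimmer squares" described above.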
If you photograph things with lots of fine detail, smaller pixels will usually be better than larger pixels. Even if those smaller pixels are noisier, there are reasons why that may not be an issue, and ways to solve that problem (future KC articles will cover those things.) A great example of where small pixels can be very useful is bird photography. Birds are difficult subjects. They are extremely alert, very wary of their surroundings, and difficult to get close to. They are also packed full of fine detail…every single feather is a microcosm of detail in and of itself. Assuming you do not have the option of getting closer to a bird when photographing it, a camera like the Canon EOS 7D, which has a sensor packed with 18 million tiny 4.3µm pixels, is probably going to be better than something like the Canon EOS 1D X, which, while it too has 18 million pixels, has much larger 6.95µm pixels. The 7D is capable of resolving 2.6 times as much detail on the same distant subject as the 1D X at optimal apertures, and it continues to out-resolve the 1D X at diffraction-limited apertures. (Granted, there are many other reasons why the 1D X is still a better camera overall; perhaps those tidbits will go into a future Tech KC article.)
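For what it's worth, that 2.6× figure falls straight out of the two pixel pitches: the 7D packs 6.95/4.3 ≈ 1.62× as many pixels per millimetre of sensor, which squares to ≈ 2.61× as many pixels on any given patch of the subject.

```python
# Resolution advantage of 4.3 um pixels over 6.95 um pixels,
# for the same subject at the same distance and focal length.
large_pitch_um = 6.95  # Canon EOS 1D X pixel pitch
small_pitch_um = 4.30  # Canon EOS 7D pixel pitch

linear_ratio = large_pitch_um / small_pitch_um  # pixels per mm, ~1.62x
areal_ratio = linear_ratio ** 2                 # pixels per area, ~2.61x
print(f"{linear_ratio:.2f}x linear, {areal_ratio:.2f}x areal")
```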
(NOTE: This is a simplified example. In reality, we have more than a single point of light. One could think of any point on any object resolved by a lens as its own discrete point of light. In a normal photograph, all of those discrete points of light produce Airy discs that overlap. That overlap is where image softening comes from. On a pixel-relative basis, individually resolved points can and will soften the results…relative to an image exposed with the same sensor at a wider aperture. Relative to itself, any sensor will produce worse IQ at a narrower aperture than at a wider aperture. The key point in debunking the DLA Myth is that smaller pixels cannot produce worse IQ than larger pixels can. Even when the detail they are resolving is softened by diffraction, smaller pixels will still be resolving as much or more detail. Eventually, assuming you could stop down far enough, a higher resolution sensor with smaller pixels, scaled down to the same image dimensions as produced by a lower resolution sensor with larger pixels, will produce identical image quality. But never worse.)
A visual explanation of why sensors with smaller pixels can continue producing better IQ is good for general purpose internet discussion. For those of you who wish to know more, I’ll bring in the heavy artillery in a future KC article. There are some fundamental mathematical concepts that explain in more specific terms why smaller pixels are better than larger pixels from the standpoint of resolving detail…even in the face of diffraction. There are some additional caveats to the whole “image quality” discussion, however, noise and dynamic range being the most significant. Future technical KC articles will discuss these factors, and hopefully build upon the basis of knowledge presented here in the Diffraction Myth.