Empirical Study: Extreme digital upscaling
For all of the theory in my article about high-quality inkjet printing, that’s all it currently is… theory. It is the end result of days of research into the physical characteristics of printers, the theory behind printing and ink, the concepts of DPI and PPI, and so on. The real question is, how does it stack up against empirical evidence? Does it withstand the test of reality? In this small study, I’ll be looking at whether digital can really compare to film when it comes to significant enlargements, and whether maximum quality can be obtained when upscaling for extremely large format prints. It has long been held that film holds a significant advantage in this area; however, I believe that digital is just as capable as film when it comes to printing significant enlargements at high PPI.
For this particular study, I will be working with a macro photo of a species of fly. The fine details visible in this little bug (as repulsive as it may be), particularly in the eyes, make it an exceptionally good subject for exploring upscaling and sharpening for print. One should always consider the amount of inherent detail when deciding how to upscale an image for print. Many photos simply do not have the level of detail that would warrant such painstaking techniques; however, as you can see in this 100% crop, there is more than enough interesting fine detail that one would want to preserve, even when enlarging 300% or more:
In my article about quality inkjet printing, the acuity of the human eye, and average viewing distances, I noted that as the viewing distance increases, the print resolution can be reduced without any noticeable loss of detail. While this is true, it assumes that a viewer of a large print will indeed observe it from the expected distance. In practice, the assumed viewing distance is not guaranteed, and many a viewer steps in for a closer look, often expecting to see more detail. Achieving maximum detail in a large print can be important in producing a print that will, quite literally, draw your viewers in.
When viewing a photograph, detail is often lost due to the way the image was processed, or obscured by imperfections in the way it is filtered and rendered. One of the key aspects of detail is sharpness. Ideal sharpness is perceived when acutance (the definition of edges between areas of perceptible contrast) and resolution (the distinction between closely spaced fine details) are both high. The various kinds of processing applied to a digital photograph, from passing through an anti-alias filter in-camera to scaling an image up in Photoshop, can all affect the sharpness of an image. A variety of methods exist to improve the sharpness of an image, and at lower resolutions they can be quite effective. The real challenge arises when you need to maintain the maximum level of detail in an image during extreme enlargements.
Data in the detail
When scaling an image up by any significant degree, say more than double its native size, you often suffer from information anemia and information fabrication defects. The more resolution your native image has, the more leeway you have; however, enlargements beyond 2x will usually introduce some degree of softening, loss of detail, and artifacting. Image enlargements are usually achieved by increasing the resolution of an image and applying some kind of scaling filtration, such as nearest neighbor (which produces blocky, pixelated images) or bicubic (which smooths out the differences between enlarged pixels). Image detail is usually preserved by applying some kind of sharpening filter, such as an unsharp mask, which attempts to artificially improve the acutance of an image by hardening the edges softened by bicubic (or more advanced) scaling filtration.
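The difference between the two scaling filters described above is easy to see in code. This is a minimal sketch using the Pillow library (an assumption on my part; any image library with selectable resampling filters behaves similarly), with a tiny synthetic gradient standing in for a real photo:

```python
# A minimal sketch of the two scaling filters described above, using Pillow.
# The tiny synthetic gradient is an illustrative stand-in for a real photo.
from PIL import Image

# Build a 4x4 grayscale test image with four distinct tones per row.
src = Image.new("L", (4, 4))
src.putdata([0, 64, 128, 255] * 4)

# Nearest neighbor copies source pixels verbatim -> blocky, pixelated result.
blocky = src.resize((16, 16), Image.NEAREST)

# Bicubic interpolates new pixels from neighbors -> smooth, slightly soft result.
smooth = src.resize((16, 16), Image.BICUBIC)

# Nearest neighbor preserves only the original four gray levels...
print(sorted(set(blocky.getdata())))  # [0, 64, 128, 255]
# ...while bicubic fabricates many intermediate tones between them.
print(len(set(smooth.getdata())) > len(set(blocky.getdata())))  # True
```

The last two lines show the trade-off in miniature: nearest neighbor invents nothing (and so looks blocky), while bicubic invents plausible in-between values (and so looks smooth but softer).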
Both scaling filtration and sharpening try to “preserve” detail by fabricating information. Only an original image at its native size contains “real” information; any enlargement will contain a combination of real and fabricated information. Doubling the linear dimensions of an image quadruples the number of pixels, yet the data stored in those extra pixels can only be generated and approximated from the original image. Bicubic filtration “fills in” extra pixels by fabricating information from nearby original pixels.
Sharpening simulates high acutance by lightening lighter content and darkening darker content along edges. Both processes are limited, imperfect mathematical algorithms that can introduce various kinds of undesirable artifacts into an image when they encounter something that falls outside the domain of the algorithm. In this test, I’ll be comparing several common image upscaling techniques. The most common form of image enlargement is the Bicubic upscale, which is often followed by an Unsharp Mask sharpening filter.
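The “lighten the light side, darken the dark side” mechanism can be sketched from scratch. This is an illustrative toy, not how Photoshop actually implements it: real unsharp masks use a Gaussian blur, while I use a crude 3x3 box blur here purely to keep the sketch short, and the `unsharp_mask` function and its defaults are my own invention:

```python
# An illustrative from-scratch unsharp mask, assuming a NumPy float image
# in the range [0, 255]. Real implementations use a Gaussian blur; the 3x3
# box blur here is a simplification to keep the sketch short.
import numpy as np

def unsharp_mask(img, amount=0.75, threshold=3):
    # Crude 3x3 box blur as the low-pass (softened) estimate of the image.
    padded = np.pad(img, 1, mode="edge")
    blurred = sum(padded[y:y + img.shape[0], x:x + img.shape[1]]
                  for y in range(3) for x in range(3)) / 9.0
    # The "mask" is the high-frequency content: original minus blurred.
    mask = img - blurred
    # The threshold suppresses sharpening of low-contrast differences (noise).
    mask[np.abs(mask) < threshold] = 0
    # Adding the scaled mask back lightens the light side of each edge and
    # darkens the dark side -- the acutance boost described above.
    return np.clip(img + amount * mask, 0, 255)

# A hard edge: flat 50s next to flat 200s.
step = np.array([[50.0] * 4 + [200.0] * 4] * 8)
out = unsharp_mask(step)
print(out[0])  # pixels adjacent to the edge overshoot below 50 and above 200
```

Those overshoots are exactly the halos mentioned later: at modest amounts they read as crispness, but push the amount too far and they become visible bright and dark fringes.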
A variety of third-party scaling tools exist these days, such as Perfect Resize, PhotoZoom, etc. These tools employ more advanced algorithms, including fractal and S-Spline scaling, in combination with unsharp masking, to produce some impressive upscaling results when compared to Bicubic. Despite their high-tech nature, a very simple trick can be employed to produce the best results without any need for fancy algorithms or special post-scale sharpening: iterative bicubic scaling.
The sample images used below were scaled up from a center crop of an 18mp, 5184×3456 pixel image from a Canon EOS 7D. At 300ppi, the original full-size uncropped image could generate a 17″x11″ print without any scaling (a nearly ideal size to print with an adequate 1″ border on 13″x19″ A3+ paper). The test will scale the original image enough that it could print a borderless 36″x24″ print at 300ppi. This is an upscale of roughly 2x over the original size, or 200% scaling, which is enough to demonstrate the differences in scaling and sharpening techniques.
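The print-size arithmetic above reduces to one rule: print inches = pixels ÷ ppi. A quick sanity check (assuming the full 7D frame of 5184×3456 pixels):

```python
# Sanity-checking the print-size arithmetic: print inches = pixels / ppi.
# Assumes the full (uncropped) Canon EOS 7D frame of 5184x3456 pixels.
PPI = 300
native_w, native_h = 5184, 3456

# Native print size at 300ppi -> the ~17"x11" print mentioned above.
print(native_w / PPI, native_h / PPI)   # 17.28 11.52

# A borderless 36"x24" print at 300ppi needs this many pixels:
target_w, target_h = 36 * PPI, 24 * PPI
print(target_w, target_h)               # 10800 7200

# Linear scaling factor required -> the roughly 2x ("200%") upscale tested here.
print(round(target_w / native_w, 2))    # 2.08
```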
NOTE: Extreme upscaling starts at around 200% of the original size, but does not necessarily stop there. Landscape photography in particular, especially from professional landscape photographers, as well as macro photography of both flora and insects, is often a target for extreme upscaling of as much as 300-400% (prints as large as 60″ on the long side).
The obvious starting point for our test is bicubic scaling. This is the Photoshop default and the de facto standard way most people scale their images. It can provide good results when the ability to view maximum detail is not a concern, and is generally more than adequate for most upscaling. When there is not a lot of fine detail in an image, standard bicubic is all you will usually need.
To compensate for the softening caused by Bicubic filtering, an unsharp mask is often applied to improve the acutance of fine details. A sharpening filter is often the best approach to improving detail in an upscaled image for enlargements of 2x or less, as well as for downscaling. When performing significant enlargements of greater than 200%, algorithms that sharpen by trying to enhance acutance can often do more harm than good, and haloing can become prevalent at filter settings strong enough to have any real effect. Alternative upscaling methods will generally be required for extreme enlargements. The sample below compares standard Bicubic and Bicubic with an Unsharp Mask of 75%, 1.5 radius, and a threshold of 3:
In the animated GIF comparison above, you can see that while a decent amount of sharpening can improve standard Bicubic filtering considerably, it will also sharpen noise, and may create halos and other artifacts like color shimmering or moiré, especially if your sharpening is too heavy.
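For reference, the Bicubic-then-Unsharp-Mask workflow discussed above can be reproduced outside Photoshop. This sketch uses Pillow (my choice of library, not something the tests above used), whose built-in `UnsharpMask` filter happens to take the same three knobs quoted in the sample settings; the synthetic gradient is a stand-in for a real photo:

```python
# The "Bicubic upscale, then Unsharp Mask" workflow, sketched with Pillow.
# The 75% / 1.5 radius / threshold 3 settings mirror those quoted above;
# the synthetic gradient is an illustrative stand-in for a real photo.
from PIL import Image, ImageFilter

src = Image.new("L", (64, 64))
src.putdata([x * 4 for y in range(64) for x in range(64)])

# Step 1: plain bicubic 2x upscale (the Photoshop default resample).
upscaled = src.resize((128, 128), Image.BICUBIC)

# Step 2: unsharp mask to restore acutance lost to bicubic softening.
sharpened = upscaled.filter(
    ImageFilter.UnsharpMask(radius=1.5, percent=75, threshold=3))

print(sharpened.size)  # (128, 128)
```

Note that Pillow’s `percent` corresponds to the strength (“amount”) setting; pushing it well past 100 on a 2x-plus enlargement produces exactly the haloing described above.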
S-Spline & Fractal Scaling
Many third-party scaling tools exist that can be used to perform extreme enlargements of digital images. They provide some of the most advanced scaling algorithms available today, and can generally do an excellent job upscaling certain types of images. Many of these algorithms are tuned for certain types of image content, and are not ideal for every kind of image. PhotoZoom’s S-Spline scaling is adept at identifying high-contrast edges where acutance enhancement is most beneficial and crisp, smooth definition is important. It is capable of preserving smooth edge detail through considerable enlargements.
Similarly, Perfect Resize’s (formerly Genuine Fractals) fractal scaling is also adept at maintaining geometric structure through the use of fractal compression and interpolation. No single algorithm is ideal, however. S-Spline scaling has a tendency to pass over finer details in its quest for ideal geometric enlargement, and can often flatten areas of lower-contrast detail. Perfect Resize has similar problems with detail; however, being based on a fractal algorithm, it is better at preserving some fine detail, at the cost of not being quite as adept at geometric perfection as S-Spline scaling. These tools can be superb when used with the right kinds of images, such as architecture, or images that intrinsically have minimal low-contrast detail and/or a lot of important geometric content.
Iterative Bicubic Scaling
Neither bicubic filtering nor alternative filtering algorithms such as Lanczos, S-Spline, Fractal, etc. are capable of preserving maximum detail at any size. The greater the difference between the original size and the destination size, the more information must be fabricated to “fill in the holes,” so to speak. A simple, logical solution to this problem, once one takes the time to ponder it, is to reduce the difference: scale an image from its native size to your desired destination size in discrete steps, or iterations, each a fraction of the difference between native and destination.
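The stepwise approach just described can be sketched in a few lines with Pillow. Both the `iterative_bicubic` helper and the 10%-per-step factor are my own choices for illustration, not a standard API; the loop enlarges until one more step would overshoot, then finishes with a single exact resize to the target:

```python
# A sketch of iterative (stepped) bicubic upscaling with Pillow: enlarge by a
# small fraction per iteration instead of jumping straight to the target size.
# The 10% step factor is a common, but arbitrary, choice.
from PIL import Image

def iterative_bicubic(img, target_w, target_h, step=1.10):
    w, h = img.size
    # Upscale in small increments while another full step still fits...
    while w * step < target_w and h * step < target_h:
        w, h = round(w * step), round(h * step)
        img = img.resize((w, h), Image.BICUBIC)
    # ...then finish with one exact resize to the target dimensions.
    return img.resize((target_w, target_h), Image.BICUBIC)

# A blank stand-in image; in practice this would be the photo being enlarged.
src = Image.new("RGB", (500, 400))
big = iterative_bicubic(src, 1000, 800)
print(big.size)  # (1000, 800)
```

Each pass through the loop only has to fabricate a small fraction of new pixels from a large body of existing ones, which is the whole point of the technique.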
Take our sample image as an example, scaling from 17″x11″ to 36″x24″. Performing a direct Bicubic upscale would increase the image size by 209% in both dimensions. Content would need to be generated to fill in 59,844,096 of the 77,760,000 pixels from the 17,915,904 pixels’ worth of original image data. That is nearly 77% of the upscaled image’s total area, a hefty cost and a considerable drain on image detail. The vast majority of the image would be purely fabricated content. Alternatively, the image could be scaled up in stages, say 10% at a time. The benefit of such an approach is that, at each step, you generate a small amount of new content from a bulk of existing content. Each subsequent step only needs to generate about 17.35% of the new image, rather than 77%, and each step has much more accurate information to work with when generating content.
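The pixel bookkeeping above is easy to re-derive. The direct jump must fabricate everything beyond the original pixel count, while a single 10% linear step only fabricates the area beyond 1/1.1² of itself:

```python
# Re-deriving the pixel bookkeeping for the direct 36"x24" jump vs. one 10% step.
orig = 5184 * 3456            # 17,915,904 original pixels
target = 10800 * 7200         # 77,760,000 pixels at 36"x24" / 300ppi

fabricated = target - orig
print(fabricated)                           # 59844096 pixels to invent
print(round(fabricated / target * 100, 1))  # 77.0 -> ~77% of the result

# One 10% linear step grows the area by 1.1^2 = 1.21x, so the newly
# fabricated share of each intermediate image is:
per_step = (1.1 ** 2 - 1) / 1.1 ** 2
print(round(per_step * 100, 2))             # 17.36 -> just over 17% per step
```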
It used to be that iterative bicubic scaling had to be done manually: literally increasing the width and height of the image by 10% at a time, in several stages, until you just surpassed your target, then downscaling back to your exact target. (Iterative bicubic usually resulted in overshooting a bit, hence the single final downscale.) Today, Adobe Photoshop CC offers a special scaling option in its Image Size feature, called Preserve Details. With Preserve Details, you get iterative bicubic scaling automatically; Photoshop works out the iterations for you. It also has a noise-reduction option, which helps reduce noise in smooth regions without sacrificing detail in high-detail areas. Here is an example of Photoshop’s Preserve Details scaling method with 35% NR, in comparison with standard Bicubic and Bicubic w/ sharpening:
Comparing the above sample to the original direct Bicubic example, there is a noticeable difference in the sharpness of fine details. Most notable are the cells in the eye and the hairs towards the middle-right side. This scaling is comparable to the second Bicubic example with the ample Unsharp Masking applied, and in fact it better preserves certain details that the Unsharp Mask did not. It is also comparable to the S-Spline and Fractal scaling, as can be seen in the comparison below:
Note that while both S-Spline and Fractal scaling produce sharper edges and, in many senses, “crisper” detail, that detail is often rather flat in comparison to iterative bicubic scaling. The soft gradients and edges that are important to some detail, like the cells of a compound eye, are lost due to the way information is processed by S-Spline and Fractal scaling. Instead of preserving original data, it is replaced with regenerated content, hence the sometimes flat look of certain fine details.
Note: some posterization has occurred in smoother regions due to saving the image as a .gif. Posterization is not normally inherent in Bicubic scaling, although it may be present in Fractal and S-Spline scaling.
While it has long been held that film has a considerable edge over digital when printing significant enlargements, I believe that is an old misconception that can be empirically tested and put to rest today. As with digital enlargements, film enlargements ultimately still fabricate information when scaled beyond their original size, simply via a different mechanism. With film, it is often easier to bring out fine details (and fine imperfections) that exist and make them more prevalent in an enlarged image; however, on a size-comparable basis, film doesn’t ultimately contain significantly *more* original information than digital.
One caveat here: shooting with a larger film format will obviously capture more original data; however, significantly enlarging a 4×5 slide to 55×36 is not all that much better than enlarging an 18-22mp digital photograph to 55×36 when using advanced scaling techniques. On the flip side, with digital you may actually have more options at your disposal for preserving detail, or for controlling how intermediate detail is fabricated during significant enlargement, than you do with film, and careful massaging of your original pixel data can produce some incredible results.
As a side note, huge enlargements of film, either smaller formats or large format, are usually done by scanning the film first, and digitally scaling up anyway.
While performing this test, a single enlargement of the original image was made by scaling it to 55″x36″. The result was a whopping 16500×11003 pixels in size, a monstrous 181 megapixels, some 318% of the original size! The image was compared to a direct Bicubic version as well as a Bicubic with Unsharp Masking. The iterative scaling preserved at least as much detail as the sharpened version, without the tonal flattening of low-contrast detail or the harsh edging of fine details. Examples of all three versions below (direct bicubic, bicubic w/ sharpening, staged 5% scaling):
A 55″ enlargement is a huge size, and maximum detail can easily be preserved in a digital image for printing at such sizes. Prints of 50-55″ are fairly popular amongst experienced landscape photographers, and a landscape photograph looks truly superb when framed and wall-mounted at such a size. So for all you digital photographers out there who have heard for years that you can’t get a high-quality super-enlargement from digital: here’s to proving the naysayers wrong.