Why Do Your Fall Images Look Better This Year?

Yellow aspen in the Eastern Sierra

Eastern Sierra Fall Colors

I often receive supportive feedback on my photography, as well as questions about how I get my results.  Since I’m “in this for the photography” I tend to prioritize shooting over writing, so answering those questions makes a great opportunity for a blog post.  This time, I’ll let the answer be the blog post, illustrated with photos that I’ve post-processed in the past month, fall 2017…

Yosemite Daylight Long Exposure Composite

On 9 Nov 17, 5.52PM PST ———- said:
Jeff:
I see a dramatic change in your fall images….much improved, even though the old ones were great to start with. What software are you using to develop your images? It looks like you are using focus stacking for the landscapes as well. Is this so?
Nice job, ————

Hi ———-,

I’ll answer in two parts, first regarding post-processing.

I honestly don’t know if I can narrow it down to one or two factors and answer the question completely, but here goes… Everyone’s looking for ways to improve their photography, and the questions often assume that a new camera or post-processing software must be the key.  To be sure, cameras and applications do evolve, so there are benefits to new versions, but there’s a lot to be said for the influence of experience and personal stylistic choices.

Spring in the Fall

It would be really easy to simply provide “the answer” and point to one new product that will provide the magic bullet.  You find that all over the Internet with people paid to promote products, and they often do not follow FTC guidelines to properly identify their social media and blog “reviews” of their sponsors’ products as paid ads.  I’m unencumbered by product/manufacturer relationships, so I can take a more comprehensive and less biased approach.

I do find Adobe Lightroom 5, and lately 6, to be better than older versions of the software, and when I re-process files from as recently as two years ago I often get better results.  But here’s the catch: I also notice that I’m using a different approach and different settings than I did even two years ago.  So I can’t attribute the improvements solely, or even mainly, to newer Lightroom software.

Fall Colors in the Virgin River Narrows

I’ve been using Photomatix from HDRsoft for many years, and I remember that as early as 2009 I was occasionally layering my best edit of the original photo on top of the HDR result to make it more realistic.  Unfortunately that required exporting the files to Photoshop for the layering.  I prefer the photography side of the process over the computer/graphic-arts side, so I often just settled for an average of the three exposures in Photomatix, and touched that up in Lightroom instead.  The new Photomatix 6, which I started using in beta last spring, includes layering any of the original files on the HDR output, and enables blending with a slider from 0 to 100%.  So in addition to being able to select from more preset HDR results, it takes little extra effort to blend in the best straight photographic result that you were able to produce in Lightroom.
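For readers who like to see the math, a blend slider like that amounts to a simple per-pixel linear mix of the HDR output and the original exposure. This is my own minimal sketch in NumPy, not Photomatix’s actual code; the function name and arrays are illustrative:

```python
import numpy as np

def blend_original(hdr: np.ndarray, original: np.ndarray, amount: float) -> np.ndarray:
    """Linearly blend the original exposure back over the HDR result.

    amount: 0.0 keeps the pure HDR output, 1.0 returns the original photo.
    Both images are float arrays with values in [0, 1].
    """
    alpha = np.clip(amount, 0.0, 1.0)
    return (1.0 - alpha) * hdr + alpha * original

# Tiny 1x2-pixel RGB example: a 50% blend lands halfway between the two images.
hdr = np.array([[[0.8, 0.6, 0.4], [0.2, 0.2, 0.2]]])
orig = np.array([[[0.4, 0.4, 0.4], [0.6, 0.2, 0.0]]])
print(blend_original(hdr, orig, 0.5))
```

A higher blend amount pulls the result back toward the straight photographic edit, which is exactly why it’s such an easy way to rein in an over-cooked HDR look.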

That would certainly account for many of the files that I post-processed in Photomatix, but I try to tag all of them with HDR and Photomatix, so you can see for yourself that it’s not a huge percentage of my overall fall results.

Yosemite Fall Dogwoods

So what’s left is some combination of experience and what I choose to do with it.  I think I’ve become more demanding of my results, which forces me to take a more critical look at them.  I often say that I prefer to spend five minutes or less post-processing a photo on my computer, but to get better results it’s necessary, at a minimum, to take the lead of Ansel Adams and invest some time in dodging and burning.

Stylistically, while I always preferred to produce more or less realistic images, sometimes digital cameras simply didn’t have the dynamic range to capture an entire natural scene well, so I’ve decided to accept the compromise of visibly manipulated results.  As cameras get better in subtle ways and I continue to master my skill with the various techniques and tools available, including the software tools, I can shift my focus to stylistic choices instead of fighting the tools to get an acceptable result.

Fall Calm

I recall deciding to get a little more assertive with contrast and blacks about a year ago.  At some point earlier this year I decided to produce some more colorful results, although I still don’t want people’s first impression to be “manipulated”.  I may not always succeed, but I’m exploring a wider range of results, and reining myself in when I can detect that a photo is crossing some invisible line.  You could boil it down to developing my own effects, range and style, mainly within the bounds of what Lightroom can do, but occasionally using Photomatix if/when the dynamic range of the scene warrants it.

The next logical question is what am I doing in Lightroom.  The short answer is that what I like about landscapes is the photography “pursuit of light” side in the field, experiencing the moment itself, so as mentioned, I tend to keep my adjustments under five minutes or so per photo on the computer, whenever possible.  I push as much quality as I can back to the capture side of the process, and automate some of the post-processing, so I can get back outside.  The fine details of how I achieve that, from image capture through post-processing, are probably best left for interactive post-processing demos during my workshops, since sharing my process and some of my favorite locations is exactly how I continue to pursue photography.

Yosemite's El Capitan in the Fall by Jeff Sullivan on 500px.com



500px Offers onOne Perfect Effects 9 free to Current Members

Trial of OnOne Perfect Effects 8, Second Beach, Half Dome from Sentinel Bridge, Bodie on a Snow Day

onOne Perfect Effects, a set on Flickr.

If you’re a 500px member, they’re offering a free copy of onOne Perfect Effects 9.  Try this link (or if it doesn’t work, check your inbox for an email from 500px a few days ago):

http://www.on1.com/500px/?utm_campaign=Get-ready-for-International-Womens-Day-Plus-get-a-free-download-from-on1-and-more-26-02-2015

I saw a similar offer a while back to get Perfect Effects 8 for free, so I’ve used it on a few photos.  I find it particularly interesting for conversion to black and white: I can pre-process in Adobe Lightroom, transfer the photo to Perfect Effects to select from many conversions and fine-tune the effects, and then a TIF file is transferred back to Lightroom for any additional fine-tuning required.  It’s flexible, fast and seamless. (I’m not sponsored by either company, just sharing a workflow that I’m exploring and finding useful.)

There can also be an interesting outcome de-saturating a color image for an aged photo look, or restoring/improving color on an image which has had its color affected by another process such as HDR. Hop on over to my Perfect Effects Flickr album and see some examples. I’ll add more images to Flickr for the album shortly.

Moon and Half Dome

Free Perfect Effects 9 from on1 Software


Color Accuracy vs. Art in Photo Post-processing, the Case for HDR

Tree on Fire, originally uploaded by Jeff Sullivan.

I see the question of color accuracy versus artistic license posed a lot online, in discussion groups, even under photos. To me, liking Photoshop but not liking HDR would be analogous to liking wrenches but not liking hammers. Sure, many people wield HDR poorly, but many carpenters wield a hammer poorly too… what does that have to do with hammers? In other words, what does the existence of poor results have to do with the potential value or utility of the tool?

Many people vilify HDR; I don’t get it. Most people play guitar poorly, but that won’t keep me from enjoying the work of many talented guitarists. Of course everyone’s entitled to their opinion and their own tastes. If classical music fans want to say, “Ugh, I think I hear a guitar in that piece!”, or photography fans want to say “Ugh, Galen Rowell used graduated neutral density filters!”, that’s their privilege. Surely HDR software will get better and better at expanding dynamic range while producing unobtrusive results, and as that value is delivered for more and more shots, I’ll have terabytes of exposure-bracketed images to draw upon.

I find HDR a useful tool about 80% of the time, with maybe 5-10% of all shots I choose to keep being simply not possible without it.

My example above is pretty obvious and results like that may be an acquired taste, but can you identify which of the following photos was shot with HDR and which were not?

Perhaps more to the point, which do you like better? If you can’t tell how an image was produced, does the process or tool used matter?

As for whether or not a result matches an original scene, no photograph does (unless the entire scene is pure white or pure black).

Consider the scene’s brightness. An original scene contains light in a range of up to 17 stops, our eyes can handle 13 stops, a film camera can handle about 11 stops, the best full frame digital cameras at most 8-9 stops. Most of the digital cameras with small format sensors that most people shoot with are probably closer to 4-5 stops. How do you restore some fraction of the shadow and highlight detail in those 8-9 lost stops of light, if not with High Dynamic Range techniques?
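Since each stop doubles the light, those stop counts translate into contrast ratios of 2^n. Here’s a quick back-of-the-envelope check of the numbers above; the three-frame bracket at the end is my own illustrative arithmetic, not a claim about any specific camera:

```python
# Each "stop" doubles the amount of light, so a range of n stops
# spans a contrast ratio of 2**n : 1.
for name, stops in [("natural scene", 17), ("human eye", 13),
                    ("film camera", 11), ("full-frame digital", 9),
                    ("small-sensor camera", 5)]:
    print(f"{name}: {stops} stops = {2**stops:,}:1 contrast")

# One way HDR closes the gap: bracketing three exposures 2 EV apart
# extends the coverage of a single 9-stop sensor.
base_stops = 9
bracket_spacing = 2                      # frames at -2, 0, +2 EV
covered = base_stops + 2 * bracket_spacing
print(f"3-frame bracket covers roughly {covered} stops")
```

The 131,072:1 ratio of a 17-stop scene against the 512:1 ratio of a 9-stop capture is the gulf that exposure bracketing and HDR merging are trying to bridge.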

Then consider the color. The camera’s sensor has one range of colors that it can sense. The proprietary RAW format each camera saves the file in has another range of colors that it can store. The monitor you display it on has yet another. You may edit and save in a format with a lot of colors, like TIFF, but eventually the image tends to get converted to 8-bit JPEG format for printing or display online, trying to represent the infinite shades of natural color in only 256 levels each of Red, Green, and Blue (RGB). The printer, meanwhile, typically uses a subtractive CMYK color scheme of Cyan, Magenta, Yellow and blacK, which doesn’t match or directly overlap any of the other color spaces used along the way.

Color accuracy is a great concern for graphic artists and the fashion industry, so the Pantone™ system was developed to provide 700 calibrated colors across scanners, monitors and printers.  Even back in the Sony Trinitron™ cathode-ray-tube days, a calibrated monitor could often display 16 million colors, so even using color calibration systems we’re more than 16 million colors short of a truly accurate end-to-end result across all the technologies and color maps your images will pass through.

I worked for years as an applications engineer at the world’s leading color printer manufacturer.  You’ll hear a lot about how vivid colors are, perhaps how many colors a monitor or printer can accept as input, but rarely will you hear about how many colors can actually be printed at a given spot on the paper, the challenges of color space mismatch as an image moves from device to device, color representation bottlenecks (such as JPEG), or the general lack of calibration and accuracy at every step along the way.
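Where that “16 million colors” figure comes from is simple bit-depth arithmetic, which is easy to verify:

```python
bits_per_channel = 8
levels = 2 ** bits_per_channel          # 256 levels each for R, G, B
jpeg_colors = levels ** 3               # every R,G,B combination
print(f"{levels} levels/channel -> {jpeg_colors:,} representable colors")

# A 16-bit-per-channel TIFF, by contrast, can distinguish vastly more:
tiff_colors = (2 ** 16) ** 3
print(f"16-bit TIFF: {tiff_colors:,} representable colors")
```

That 16,777,216-color JPEG palette sounds generous until you remember it has to stand in for a continuous spectrum, and that only a handful of those values are ever calibrated against a physical reference.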

Fortunately, human perception is very forgiving.  It was important for us to be able to discern between millions of colors, but we never developed much ability to remember them accurately.  Cognitively we group colors into roughly 11 categories, but we have little ability to recall even distinctly different colors from memory, let alone subtle shades of them.

Our brains also distort color, and try to assign the brightest thing in a scene to be white. That’s why our cameras and software adjust images to a certain “white balance” (strictly a human perceptual distortion). The ambient light available when viewing an image (outdoors in sun, in shade, under incandescent light, fluorescent, etc.) seriously affects our perception of the result as well.
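White balance isn’t one fixed algorithm, either. One classic textbook heuristic, the “gray-world” assumption, illustrates how arbitrary the correction is: it simply assumes the average color of a scene should be neutral gray, and scales the channels to make it so. A minimal sketch (my own illustration, not what any particular camera or Lightroom actually does):

```python
import numpy as np

def gray_world_balance(img: np.ndarray) -> np.ndarray:
    """Gray-world white balance: scale R, G, B so their means match.

    img: float array of shape (H, W, 3) with values in [0, 1].
    """
    channel_means = img.reshape(-1, 3).mean(axis=0)
    # Scale each channel toward the overall mean brightness.
    scale = channel_means.mean() / channel_means
    return np.clip(img * scale, 0.0, 1.0)

# A warm-cast image (red channel too strong) gets pulled back to neutral.
warm = np.array([[[0.6, 0.3, 0.3], [0.6, 0.3, 0.3]]])
print(gray_world_balance(warm))
```

Different heuristics (gray-world, white-patch, camera presets) give different “correct” results from the same data, which is the point: the neutral reference is a perceptual convention, not a physical fact of the scene.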

Our eyes and brains are not carbon copies from person to person. Some people report noticeably different perception even from eye to eye. There’s truly no such thing as “reality” when it comes to white balance and human color perception.

So given the essentially insurmountable issues at every step of the process, how can anyone claim to have produced an accurate copy of any given moment? What would that even mean… accurate to an electronic device, to one person, or to which subset of people, and under which ambient lighting conditions for viewing?

Must we “go with the flow” and pretend with the charlatans that accuracy is possible (or even a desirable goal), or is it safe to observe that the “this is just as it happened” emperor has no clothes?

To each his own though… everyone is entitled to like or not like something for any reason or for no reason. HDR simply happens to be one tool that I find not just extremely useful, but indispensable. I’d sooner part with even basics like UV filters and circular polarizers.

If photographers aspire to be some sort of sterile recording device, a sort of walking copy machine, then they can be replaced by webcams nailed to trees or doorjambs. Most definitions of “art” require human involvement and influence… a departure from sterile reality. Accept the inaccuracies of color capture and representation, the massive distortions inherent in human color perception, and the lack of human color memory, and recognize the opportunity to free yourself to exercise your human side, your artistic side.  Any departure from the fruitless pursuit of perfection will set you free.

Update in early 2013: Wow, what a turnaround… after getting a full-frame camera in early 2009 and starting to use Lightroom, I began to have far more success with single exposures and far less need or desire to put in the extra time and effort required for HDR.  I take less time per photo while having far more control over the result… a great example of why it’s useful to remain open to new techniques and to periodically evaluate new tools.  I’ve even seen great improvements when moving from Lightroom 3 to 4, which I only noticed by going back and trying to edit in 3.  The color translation tables used to interpret RAW files seem to be much better in Lightroom 4.  But man, that must be a mind-bender for people who were absolutely convinced that they had end-to-end accurate color.  How did they choose between an “Adobe” or “Camera Standard” interpretation of the RAW file originally, and if the color translation for that setting changed for their camera from Lightroom 3 to 4, which one is accurate?  There are other choices… Camera Landscape, Camera Neutral, Camera Portrait… it’s quite possible that our color perception varies so much over time, by viewing conditions, and by subject type that a different color translation is best in each unique situation.

I should also clarify that a lack of accuracy is not necessarily an argument against monitor calibration, which helps you get more predictable prints (if you manage color profiles all the way through to printing) and discern more colors while you edit.  It won’t fix the inherent end-to-end accuracy problem, the color space translation issues, or the fundamental mismatch between digital imaging and the human perception system… there’s no one right or wrong approach.  Fight our human failings and try to impose order and reason on them, or accept imperfection and embrace the freedom to create; it’s entirely up to you.
