The Showfoto Handbook
From here I will speak from my experience as a Canon user, but I will guess that most or all
entry-level and mid-range dSLRs behave in a similar manner. Canon offers the user several
picture styles - neutral, standard, portrait, landscape, and so forth - that determine what kind
of processing is done to the raw image file to produce the final image, whether the processing
is done "in-camera" or later, using the proprietary Canon DPP software. The Canon DPP raw
processing software does give the user additional control, but it still manipulates the raw
image file in accordance with the chosen picture style. Most of the Canon picture styles add a
heavy S-curve and extra color saturation to give the picture more "pop". Even if you choose the
"neutral" picture style (the Canon picture style that gives you the least modified tonality) and
select "less contrast", "less saturation", "no noise reduction", and "no sharpening" in the DPP
raw development dialog, you will find, if you know what to look for, that an S-curve and
shadow denoising have still been applied to your image.
Dcraw (which Showfoto uses to convert raw files to image files) doesn't add an S-curve to your
image tonality. Dcraw gives you the lights and darks that are actually recorded by the camera
sensor. According to Tindeman (an excellent read and source of good advice, with links to
equally good sources of additional information), dcraw is one of only a handful of raw develop-
ers that actually gives you the "scene-referred" tonality. Ufraw also produces a scene-referred
image by default (although ufraw gives the user the option to modify the scene-referred image
by changing the tonal distribution and saturation). And the dcraw/ufraw scene-referred image
IS flat-looking, because the camera sensor records light linearly, whereas our eyes constantly
interact with our brain to accommodate dim and bright areas in a scene, meaning our brain
to some extent "applies an S-curve" to the scene to enable us to better focus on the areas of
particular interest as we look around.
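The difference between linear, scene-referred tonality and an S-curved rendition can be sketched numerically. The smoothstep function below is a minimal, purely illustrative stand-in for the much more elaborate proprietary curves the camera makers apply; it is not any manufacturer's actual curve.

```python
def s_curve(x):
    """A classic "smoothstep" S-curve: boosts midtone contrast while
    compressing shadows and highlights (x in the range 0..1)."""
    return x * x * (3.0 - 2.0 * x)

# A linear, scene-referred ramp of tones...
linear = [i / 10.0 for i in range(11)]
# ...and the same ramp after the S-curve: shadows pushed down,
# highlights pushed up, midtone contrast increased.
curved = [s_curve(x) for x in linear]
```

Black, white, and middle gray stay put, but tones below the midpoint get darker and tones above it get lighter - exactly the "pop" described above, bought at the price of squashing the extremes.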
3.2.4.3 The embedded jpeg preview looks so much nicer than dcraw’s output. What is the
value in scene-referred tonality?
When you take a picture, presumably you have an idea of what you want the final image to look
like. It is much easier to achieve that final image if you don't have to "undo" things that have
already been done to your image. Once Canon (or Nikon, or Bibble, etc.) has applied their
proprietary S-curves, shadow-denoising, sharpening, and so on to your image, then your shadows,
highlights, edge detail, etc., are already squashed, clipped, chopped, and otherwise altered and
mangled. You've thrown information away and you cannot get it back. Especially in the shad-
ows, even with 16-bit images (actually 12 or 14 bits, depending on the camera, but encoded
as 16 bits for the computer's convenience), there just isn't that much information to begin with.
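The scarcity of shadow information follows directly from linear encoding: each stop down from clipping halves the number of raw levels available. A quick illustrative calculation, not tied to any particular camera:

```python
def levels_in_stop(bits, stops_down):
    """Number of distinct raw levels available in the stop that lies
    'stops_down' stops below clipping, for a linear sensor with the
    given bit depth (0 = the brightest stop)."""
    return (2 ** bits) >> (stops_down + 1)

# A 12-bit raw file devotes 2048 of its 4096 levels to the brightest
# stop, but only 64 levels to the stop five stops down in the shadows.
twelve_bit_shadows = levels_in_stop(12, 5)
fourteen_bit_shadows = levels_in_stop(14, 5)  # four times as many
```

This is why heavy-handed shadow manipulation baked in by the camera hurts so much: the shadows had few levels to spare in the first place.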
It seems to me that the heart and soul of image processing is the deliberate manipulation of image
tonality, color, selective sharpening, and so forth, such that the viewer focuses in on what you,
the photographer, found of particular interest when you took the picture. Why give the art of
image processing over to some proprietary raw processing software? In other words, "flat is
good" if you'd rather give your images your own artistic interpretation. The alternative is to let
the canned, proprietary algorithms produced by Canon, Nikon, Bibble, etc. interpret your images
for you. (On the other hand, there is no denying that for many images, those canned algorithms
are really pretty good!)
3.2.4.4 Well, that's all very interesting. I can see the value in starting my image-editing with
a scene-referred rendition instead of the eye-popping rendition that I see in the em-
bedded jpeg. But I'm telling you, the images produced by digikam/dcraw look really,
really bad! Why?
Well, that depends. If the image looks very dark, then you asked dcraw to output a 16-bit file and
you have run into a problem with dcraw not applying a gamma transform before outputting the
image file. You can use imagemagick to apply the appropriate gamma transform to the image
file produced by dcraw. Or you can find or make a camera profile with a gamma of 1. Or you can
use ufraw, which applies the gamma transform for you.
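The dark look is exactly what you would expect from displaying linear data on a gamma-encoded display. As an illustration, here is the standard sRGB encoding curve; the exact transform imagemagick applies depends on the options you give it, so treat this as the general shape of the fix, not a recipe.

```python
def srgb_encode(linear):
    """Standard sRGB gamma encoding for a linear value in [0, 1]:
    the kind of transform missing from dcraw's linear 16-bit output."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

# Linear 18% gray encodes to roughly 0.46 - nearly half-brightness on
# screen - which is why un-gamma'd linear files look so dark.
midtone = srgb_encode(0.18)
```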
If your image has pink highlights, there's a solution. For an explanation of the problem, along
with the command line cure for this problem, see this "Luminous Landscape" forum post.
If the image isn't dark but it looks really weird, probably you made some injudicious choices
in the digikam/dcraw user interface. The digikam/dcraw interface conveniently allows you to
"dial in" options that you would otherwise have to specify at the command line. However,
convenience always comes at a price. First, the interface might not provide access to all the
options that are available at the command line (as of Showfoto 0.9.4, only some of the dcraw
command line options are available from the interface). And second, to get the most from the
digikam/dcraw interface, you have to know what the buttons, sliders, etc. in the interface actually
do. Which means you need to know what happens at the command line if you want to get the
best results from using the interface. (This tutorial will not attempt to document how to use the
digikam/dcraw user interface. Digikam is developing at a rapid pace and anything I might write
about the digikam/dcraw interface would surely be outdated in the near future.)
For example, if your embedded jpeg has very nice deep rich shadows but the digikam/dcraw-
produced jpeg or tiff has blotchy red line patterns in the shadow areas, then you probably put an
"x" in the "Advanced, Black point" option, with the slider set to 0. Uncheck the Black point box
and try again. This box in the digikam/dcraw interface corresponds to the "-k" option when us-
ing dcraw at the command line. The "-k" option allows you to override dcraw's best estimate of
where, in the shadow tones of your image, digital signal starts to override background noise.
If you don't use the "-k" option at the command line, then dcraw calculates an appropriate value
for you, based on its estimate of background noise. For my Canon 400D, the dcraw-calculated
background noise value is usually around 256 (the command line option "-v" will tell dcraw to
tell you what it's doing as it processes your raw file). If, however, I use the "-K /path/to/black-
frame.pgm" option to tell dcraw to subtract out a black frame, then dcraw will report the black
point as "0", as there is now no need to set it higher to avoid the deepest shadows in the image,
where noise typically drowns out signal. (A "black frame" is an exposure taken with the lens cap
on, with the same exposure settings as, and ideally right after, taking the image being processed.
The "-K" option allows dcraw to subtract background noise from the image.)
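Conceptually, what the black-frame subtraction does is very simple: remove the noise floor pixel by pixel so the true black point can sit at zero. The sketch below uses made-up sensor values (the 256-ish noise floor mirrors the 400D example above) and is not dcraw's actual implementation.

```python
def subtract_black_frame(raw, black):
    """Per-pixel dark-frame subtraction, clipped at zero - conceptually
    what dcraw's -K option does with a lens-cap exposure."""
    return [max(r - b, 0) for r, b in zip(raw, black)]

# Hypothetical sensor values: faint signal riding on background noise...
raw_pixels  = [300, 280, 4000, 260]
# ...and the noise alone, recorded with the lens cap on at the same settings.
black_frame = [256, 258, 255, 270]

cleaned = subtract_black_frame(raw_pixels, black_frame)
# With the noise floor removed, the black point can safely sit at 0.
```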
3.2.4.5 Where do I find good information on digital noise?
See the following excellent articles:
• http://www.ronbigelow.com/articles/noise-1/noise-1.htm
• http://www.cambridgeincolour.com/tutorials/noise.htm
• http://www.clarkvision.com/imagedetail/digital.signal.to.noise/
3.2.4.6 Where do I find good information on the dcraw command line options?
The very best source of information on how dcraw processes raw files is found here.
If you want to work with raw files, I recommend that you read Guillermo's article two or three
times over. Guillermo believes that dcraw produces output superior to the raw processing done
by commercial raw processors. After testing every commercial raw processing program I could
find, I eventually came to the same conclusion: dcraw produces superior results.
The dcraw manpage explaining all the command line options is here.
3.2.4.7 Why are the Canon and Nikon colors better than the colors produced by dcraw?
Color rendition is one place where the Canon (and presumably Nikon) proprietary raw devel-
oping software does a really, really good job. Why? Because the proprietary raw processing
software is coupled with camera profiles that are specific to raw images coming from your make
and model of camera, when processed using your make and model camera's proprietary raw
processing software. I've checked extensively, using an "eyedropper" to compare the output
of various raw developers using various camera profiles from various sources - a very tedious
though instructive process. With ufraw and dcraw (from the command line if not from digikam's
dcraw user interface), you can apply Canon's camera-model-picture-style-specific color profile(s)
to the dcraw output during the raw development process, and the colors will still NOT be exactly
the same as what Canon produces. Likewise, Bibble profiles work pretty well with the Bibble
software, but in my opinion they don't work quite as well with dcraw as they do with Bibble's
own software. And so on. And so forth.
3.2.4.8 Why is a camera profile specific to a given make and model of camera?
Digital cameras have an array of millions of little light sensors inside, making up either a CCD
or a CMOS chip. These light-sensing pixels are color-blind - they record only the amount, not
the color, of light falling on them. So to allow pixels to record color information, each pixel is
capped by a transparent red, green, or blue lens, usually alternating in what is called a Bayer
array (except for Foveon sensors, which work differently). A raw image is nothing more than an
array of values indicating "how much light" passed through the red, blue, or green lens cap to
reach the sensor.
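The layout of a standard RGGB Bayer mosaic can be generated with a couple of lines; this is a sketch of the common pattern, though the exact arrangement varies by camera model.

```python
def bayer_color(row, col):
    """Color of the filter cap over the pixel at (row, col) in a
    standard RGGB Bayer mosaic: green on half the pixels, red and
    blue on a quarter each."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

# The top-left corner of the mosaic; each raw value records only
# "how much light" made it through one such colored cap.
mosaic = [[bayer_color(r, c) for c in range(4)] for r in range(4)]
```

Green gets twice the sites of red or blue because human vision is most sensitive to luminance in the green part of the spectrum.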
Clearly, pixel response to light is the result of lots of camera-specific factors, including the nature
of the sensor array itself, the precise coloring/transmissive qualities of the lens caps, and the
particular analog-to-digital conversion and post-conversion processing that happens inside the
camera to produce the raw image that gets stored on the card.
3.2.4.9 What does "analog-to-digital conversion" mean?
"Analog" means continuously varying, like how much water you can put in a glass. "Digitizing"
an analog signal means that the continuously changing levels from the analog signal source are
"rounded" to discrete quantities convenient to the binary numbers used by computers. The
analog-to-digital conversion that takes place inside the camera is necessary because the light-
sensing pixels are analog in nature - they collect a charge proportional to the amount of light
that reaches them. The accumulated charge on each pixel is then turned into a discrete, digital
quantity by the camera's analog-to-digital converter. This, by the way, explains why a 14-bit
converter is better than a 12-bit converter - more precision in the conversion output means less
information is thrown away in the conversion process.
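The "rounding" can be made concrete with a toy converter; the numbers here are illustrative, not measurements from any real camera.

```python
def quantize(analog, bits):
    """Round a continuously varying charge (scaled to 0..1) to the
    nearest of the 2**bits discrete levels a converter can output."""
    return round(analog * (2 ** bits - 1))

def step_size(bits):
    """Spacing between adjacent digital levels: the precision limit
    of the conversion."""
    return 1.0 / (2 ** bits - 1)

# A 14-bit converter distinguishes 16384 levels against a 12-bit
# converter's 4096, so less information is rounded away per sample.
```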
3.2.4.10 Why is a camera profile specific to the raw processing program used to develop the
raw file?
The whole point of interpolation using demosaicing algorithms such as dcraw's default AHD
is to guess what color and intensity of light actually fell on any given pixel by interpolating
information gathered from that single pixel plus its neighboring pixels (see the Wikipedia article).
Every raw processing program makes additional assumptions such as "when is it signal and
when is it background noise?", "at what point has the sensor well reached full saturation?",
and so forth. The resulting output of all these algorithms and assumptions is a trio of RGB
values for each pixel in the image. Given the same raw file, different raw processors will
output different RGB values.
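The simplest way to see what "interpolating from neighboring pixels" means is bilinear averaging, sketched below with made-up sample values. Real algorithms such as AHD are far more sophisticated - they adapt to edges rather than averaging blindly - but the basic idea is the same.

```python
def interpolate_green(mosaic, row, col):
    """Estimate the green value at a non-green pixel by averaging the
    green samples above, below, left, and right of it - a bare-bones
    bilinear interpolation."""
    neighbors = [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
    values = [mosaic[r][c] for r, c in neighbors
              if 0 <= r < len(mosaic) and 0 <= c < len(mosaic[0])]
    return sum(values) / len(values)

# Hypothetical green-channel samples around a red pixel at the center
# (zeros mark positions where no green sample was recorded):
raw = [[0, 100, 0],
       [90, 0, 110],
       [0, 100, 0]]
green_at_center = interpolate_green(raw, 1, 1)  # (100 + 100 + 90 + 110) / 4
```

Every one of the choices buried in a real demosaicer - how to weight neighbors, how to handle edges, what to do at the sensor borders - nudges the final RGB trio, which is why no two raw processors agree exactly.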
3.2.4.11 Where do I find a generic profile for my camera?
The ufraw websitesectiononcolormanagementhas information on where to find ready-made
camera profiles. If you poke around the Showfoto users forum archives, you’ll find additional
advice. If you keep hunting and experimenting, likely you will find a generic profile that works
´´well enough´´. However, as stated above, it’s an unfortunate fact of digital imaging that the
118
48
The Showfoto Handbook
camera profiles supplied by Canon, Nikon, and the like don’t work as well with raw converters
other thaneach cameramanufacturer’sownproprietary raw converter. Whichis why Bibble and
Phase One, for example, have to make their own profiles for all the camerasthat they support. So
eventually you may decide that you want a camera profile that is specific to your camera, your
lighting conditions, and your raw processing workflow.
3.2.4.12 How do I get a camera profile specific to my camera, lighting conditions, and raw
workflow?
Many commercial services will profile your camera for a fee. Or you can use LProf
to profile your camera yourself. If you want to profile your own camera, you will need an "IT8
target", that is, an image containing squares of known colors. Along with the IT8 target, you will
receive the appropriate set of known values for each square of color on the target.
If you plan to use LProf to profile your camera, check the documentation for a list of recom-
mended targets. To profile your camera, you photograph the IT8 target under specified lighting
conditions (for example, in daylight, usually taken to mean noon on a sunny day in the summer,
with nothing nearby that might cast shadows or reflect color casts) and save the image as a raw
file. Then you process the raw file using your particular raw processing software+settings and
run the resulting image file through the profiling software. The profiling software compares the
RGB values in the image produced by your camera+lighting conditions+raw processing routine
with the RGB values in the original target and then produces your camera (icc) profile.
Profiling a camera is exactly analogous to profiling a monitor. When profiling a monitor, the
profiling software tells the graphics card to send squares of color with particular RGB values
to the screen. The spectrophotometer measures the actual color that is produced on the screen.
When profiling a camera, the known colors are the RGB colors in the original patches on the IT8
target, which the profiling software compares to the colors produced by the digital image of the
target, which was photographed in selected lighting conditions, saved as raw, then processed
with specific raw processing software+settings.
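The comparison step can be sketched as a fitting problem: find the correction that maps the camera's measured values onto the target's known values. A real ICC profile stores a much richer correction (matrices, curves, or lookup tables); the per-channel gain below, fitted to made-up patch values, is only a toy illustration of the idea.

```python
def fit_channel_gain(measured, reference):
    """Least-squares gain g minimizing sum((g*m - r)**2) over the
    patches of one color channel."""
    return (sum(m * r for m, r in zip(measured, reference))
            / sum(m * m for m in measured))

# Hypothetical red-channel patches: this camera+raw-workflow combination
# reads red about 20% low, so the fitted correction is a gain of 1.25.
measured_red  = [40, 80, 120, 160]
reference_red = [50, 100, 150, 200]
red_gain = fit_channel_gain(measured_red, reference_red)
```

Because the fit is performed on the output of your particular raw workflow, the resulting profile is only valid for that workflow - which is exactly why the profile is specific to camera, lighting, and raw processor alike.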
Here is a link to a "howto" for using LProf v1.11 and ufraw (and by analogy, any other raw
processor) to produce a camera profile. Debian Lenny has LProf 1.11.4 in the APT repositories.
More recent versions can be built from CVS. And here is a link to an affordable, well-regarded
IT8 target.
3.2.4.13 How do I apply a camera profile to the 16-bit image file produced by my open source
raw processing software?
If you are using the digikam/dcraw interface, here is how to tell Showfoto which camera profile
to use. If you are using dcraw from the command line, you have the choice of outputting your 16-
bit image file with or without the camera profile already applied. If you ask dcraw to output the
file without applying the camera profile, you can use LCMS's tifficc utility (also at the command
line) to apply the camera profile. The advantage of using tifficc is that you can tell LCMS to use
high quality conversion (dcraw seems to use the LCMS default, medium). The disadvantage, of
course, is that applying your camera profile from the command line adds one extra step to your
raw workflow. If you are using ufraw, consult the ufraw user's guide.
3.2.5 The PCS: color profiles point to real colors in the real world
3.2.5.1 Camera, scanner, working space, monitor, printer - what do all these color profiles
really do?
A color profile describes the color gamut of the device or space to which it belongs by specifying
what real color in the real world corresponds to each trio of RGB values in the color space of the
device (camera, monitor, printer) or working space.
The camera profile essentially says, "for every RGB trio of values associated with every pixel in
the image file produced from the raw file by the raw processing software, 'this RGB image file
trio' corresponds to 'that real color as seen by a real observer in the real world'" (or rather, as
displayed on the IT8 target if you produced your own camera profile, but it amounts to the same
thing - the goal of profiling your camera is to make the picture of the target look like the target).
You cannot see an image by looking at its RGB values. Rather you see an image by displaying
it on a monitor or by printing it. When you profile your monitor, you produce a monitor profile
that says "this RGB trio of values that the graphics card sends to the screen" will produce on the
screen "that real color as seen by a real observer in the real world".
What the monitor profile and the camera profile have in common is the part (in italics above)
about "that real color as seen by a real observer in the real world". Different trios of RGB numbers
in, respectively, the monitor and camera color spaces point to the same real, visible color in the
real world. Real colors in the real world provide the reference point for translating between all
the color profiles your image will ever encounter on its way from camera to screen to editing
program to print or the web.
3.2.5.2 How can a color profile point to a real color in the real world?
Real people don’t even see the same colors when they look at the world, do they?
A long time ago (in 1931, although refinements continue to be made), the International Commis-
sion on Illumination (CIE) decided to map out and mathematically describe all the colors visible
to real people in the real world. So they showed a whole bunch of people a whole bunch of colors
and asked them to say when "this" color matched "that" color, where the two visually matching
colors were in fact produced by differing combinations of wavelengths. What was the value of
such a strange procedure? Human color perception depends on the fact that we have three types
of cone receptors with peak sensitivity to light at wavelengths of approximately 430, 540, and
570 nm, but with considerable overlap in sensitivity between the different cone types. One
consequence of how we see color is that many different combinations of differing wavelengths
of light will look like "the same color".
After extensive testing, the CIE produced the CIE-XYZ color space, which mathematically de-
scribes and models all the colors visible to an ideal human observer ("ideal" in the sense of
modeling the tested responses of lots of individual humans). This color space is NOT a color pro-
file in the normal sense of the word. Rather it provides an absolute "Profile Connection Space"
(PCS) for translating color RGB values from one color space to another. (See here and here.)
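For a matrix-based profile, "pointing to a real color" is literal arithmetic: the profile stores a matrix that maps the device's linear RGB values into CIE-XYZ. The well-known sRGB (D65) matrix serves as an illustration; a camera profile would store a different matrix (or a richer structure) but works the same way.

```python
# The standard linear-sRGB-to-XYZ matrix (D65 white point): the kind
# of mapping a matrix-based color profile stores.
SRGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def rgb_to_xyz(rgb):
    """Translate a linear sRGB trio into the CIE-XYZ connection space."""
    return [sum(row[i] * rgb[i] for i in range(3)) for row in SRGB_TO_XYZ]

white = rgb_to_xyz([1.0, 1.0, 1.0])  # lands on the D65 white point
```

Translating between two profiles then amounts to going RGB -> XYZ with one device's matrix and XYZ -> RGB with the inverse of the other's - the PCS is the common meeting ground in the middle.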
CIE-XYZ is not the only PCS. Another commonly used PCS is CIE-Lab, which is mathematically
derived from the CIE-XYZ space. CIE-Lab is intended to be "perceptually uniform", meaning "a
change of the same amount in a color value should produce a change of about the same visual
importance" (cited from the Wikipedia article). Wikipedia says "The three coordinates of CIELAB
represent the lightness of the color (L* = 0 yields black and L* = 100 indicates diffuse white;
specular white may be higher), its position between red/magenta and green (a*, negative values
indicate green while positive values indicate magenta) and its position between yellow and blue
(b*, negative values indicate blue and positive values indicate yellow)" (cited from the Wikipedia
article).
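The "perceptually uniform" lightness axis can be computed directly from the CIE definition. Note how linear 18% gray comes out near L* = 50, i.e. perceptual middle lightness - the same eye-versus-sensor mismatch discussed earlier, captured in a formula.

```python
def lab_lightness(y, y_n=1.0):
    """CIE-Lab L* for a relative luminance y (white point y_n). The
    cube root is what makes equal L* steps roughly equal visual steps."""
    t = y / y_n
    if t > (6 / 29) ** 3:
        f = t ** (1 / 3)
    else:
        f = t / (3 * (6 / 29) ** 2) + 4 / 29
    return 116 * f - 16

# Linear 18% gray reflects only 18% of white's light, yet sits at
# about L* = 49.5 - close to middle lightness on this eye-like scale.
middle_gray = lab_lightness(0.18)
```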
To be useful, color profiles need to be coupled with software that performs the translation from
one color space to another via the PCS. In the world of Linux open source software (and also
much closed source, commercial software), translation from one color space to another usually
is done by LCMS, the "little color management system". For what it's worth, my own testing
has shown that LCMS does more accurate color space conversions than Adobe's proprietary
color conversion engine.