I'm sure people here would be interested in seeing your results but I would
ask you to exercise judgment in how much technical detail you post here
(information as to links, however, is always welcome).
On behalf of the photographers, let me suggest that you are unintentionally
misusing a term. If something is in the public domain it can be freely used
by anyone for any purpose, such as inclusion in a calendar that is offered
for sale. I think you mean that the photographer should be willing to
extend a limited permission to others to experiment with the images, but
not to reuse them otherwise. Perhaps this should be clarified.
My apologies. I hope everyone understands what I meant.
d) They must be at the maximum _hardware_ resolution of the capture device.
This excludes, for example, images captured on a 300 dpi scanner that yields
1200 dpi images via interpolation.
e) Digital camera images must come from three-shot cameras and/or scanbacks.
More
specifically, they cannot come from a camera using a mosaiced sensor, since
in such cameras, a full color image is made by interpolating colors across
the missing color elements in the array.
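As a rough illustration of restriction (e) — this is my own sketch, not part of the proposed test — a few lines of NumPy show why a mosaiced sensor measures only a third of the color samples a full-color image needs. I assume a common RGGB Bayer layout here; other mosaic arrangements give the same one-third fraction.

```python
import numpy as np

def bayer_mask(h, w):
    """Boolean mask of shape (h, w, 3): True where an RGGB Bayer
    sensor actually measures that channel at that pixel site."""
    mask = np.zeros((h, w, 3), dtype=bool)
    mask[0::2, 0::2, 0] = True   # red at even rows, even columns
    mask[0::2, 1::2, 1] = True   # green at even rows, odd columns
    mask[1::2, 0::2, 1] = True   # green at odd rows, even columns
    mask[1::2, 1::2, 2] = True   # blue at odd rows, odd columns
    return mask

m = bayer_mask(4, 4)
measured = m.sum()    # channel samples the hardware actually captured
total = m.size        # channel samples a full-color image requires
print(measured, total)   # 16 of 48: exactly one third
```

The remaining two-thirds of the channel values must be interpolated from neighbors during demosaicing, which is the "computer generated information" at issue.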
And Dan's reply:
I think it would be a big mistake to leave these restrictions in. Those are
real-world categories, and those advocating a 16-bit workflow AFAIK
advocate using it in them as well.
More important, these files would be an important validation of your
method. As I said in a previous post, I am skeptical that there's a
statistically valid method of measuring how accurate a scan is. The ways
that might seem obvious to some would probably report that the images you
are excluding would be at least as accurate as the untouched scans. And if
it turns out that the method can't distinguish between real scans and
munged data then the method has next to no value.
My tests will have nothing whatever to do with accuracy. I am measuring
significant bits, that is, how many bits actually contain any image data and
how many contain non-image data (e.g. noise). The reason I would like to
impose the above restrictions is much the same argument put forth by others
regarding computer-generated gradients, namely that for the most part,
computer-generated pixels, made by interpolation, contain no noise and
thus would skew the noise measurement. I have run my program on images
from mosaiced digital cameras and these images show "image" data all the way
down to the least significant bits, because these low bits came about
largely by interpolation. A mosaiced digital camera image contains, at a
minimum, two-thirds computer-generated information.
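To make the idea concrete — and to be clear, this is only my own crude sketch, not Bruce's actual program — one plausible way to measure significant bits is to estimate the noise floor from the residual against a 4-neighbor average (which on a smooth image is mostly noise) and count how many bits of the full 16-bit scale sit above that floor:

```python
import numpy as np

def significant_bits(channel, full_scale=65535.0):
    """Rough estimate of how many bits of a 2-D 16-bit channel
    carry image data rather than noise."""
    x = channel.astype(np.float64)
    core = x[1:-1, 1:-1]
    # Average of the 4 neighbors; on smooth content the residual
    # against it is dominated by noise (and slightly overestimates it).
    smooth = (x[:-2, 1:-1] + x[2:, 1:-1] +
              x[1:-1, :-2] + x[1:-1, 2:]) / 4.0
    noise_rms = (core - smooth).std()
    if noise_rms <= 0:
        return 16.0                  # no measurable noise floor
    return float(np.log2(full_scale / noise_rms))

# Synthetic check: a smooth 16-bit ramp plus noise that drowns the low bits
rng = np.random.default_rng(0)
ramp = np.tile(np.linspace(0.0, 65535.0, 256), (256, 1))
noisy = np.clip(ramp + rng.normal(0.0, 256.0, ramp.shape), 0, 65535)
print(significant_bits(noisy.astype(np.uint16)))   # roughly 8 bits survive
```

A noise-free interpolated gradient would yield a residual of zero and report all 16 bits as "significant" — which is exactly the skew described above, and why computer-generated pixels must be excluded from such a test.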
Granted, such images are frequently encountered in the real world and so
would make an interesting follow-on study. But it is important that we start
with a more controlled test before expanding into other types of imagery.
Dan, if I find some 16-bit images that really DO contain significantly more
than 8 bits of real data (and hopefully this conclusion comes not just from
my own tests, but also from other independent researchers), this may provide
you with valuable, public domain image data...
It can't hurt and in some areas of the discipline it may help. In the
specific area we are talking about, it's unlikely to shed much light. Even
making the assumption that the test is valid, if it reports that most
scanners see deeper than 8 bits, that still doesn't validate a high-bit
workflow; it just says that a high-bit workflow not only has a smoother
histogram, but one that is statistically more significant. Which
information, with $1.50, gets one on the subway.
I agree. My test will prove (subject to scrutiny of course) how many
significant image bits you are starting with. Period. But imagine
(hypothetically) that I find that none of these expensive "high-bit"
scanners produces more than 8 bits of actual image data. That could help
explain your findings, because all those extra bits contain nothing but
noise anyway, so of course they are "dead weight".
BTW, my tests will have little, if anything, to do with the image histograms.
...to support your claim that 16-bit offers no advantages over 8-bit.
I have made no such claim. What I have said is that while a number of
people have advocated extensive work in 16-bit, nobody to my knowledge has
ever offered evidence that there's any quality gain in doing so. I have
said that my own experiments show none, and my work with Todd's files
hasn't changed that view. However, I obviously have not tested every
conceivable type of image in every conceivable circumstance and every
conceivable workflow. Therefore, I would welcome anyone who feels that they
have images (and a record of what has been done to them) that would show an
advantage to message me so that I can make arrangements to test them. I
only ask that they be real photographs, not computer-generated art, and
that the corrections be, by as liberal a definition as you like,
real-world, not something like repeated gross darkening and gross
lightening.
Thanks for the clarification.
Bruce J. Lindbloom, Pictographics Intl. Corp.