Re: High Andes: Dan's comments

Dan Margulis

On Apr 7, 2021, at 9:14 AM, Kenneth Harris <reg@...> wrote:

Perhaps this deserves a separate thread, but I'd like to drop this in here as these exercises wind up.  What interests me most here is the question of the wisdom of crowds vs. the judgment of experts, going back to the MIT "retoucher" used as a model for the AI programming. For a crowd to make good assessments, it needs to be large, diverse, and not self-conferring.  Can a crowd make good esthetic assessments?  

I suppose it depends on the crowd. If referring to group members, we’re hardly typical of the general public. We have biases that they don’t: because we know what tricks we are using and can see their telltale signs, we assume that laypeople can, too, and that it will bother them. So we’re more inclined than they are to reject a version for what we consider oversharpening, or for colors being too loud. OTOH, they are quick to punish those who don’t have a full tonal range.

What happens when the MIT researchers anoint the chosen one to train the AI?  How does that bake in bias, and at the same time open up room for expression by creating a norm to counter?  Professionally, I have to land a picture in a spot where the photographer, the client, the model's agency, the ad agency, the stylist, and hair+makeup are all okay with the picture, and not just do that, but satisfy all these stakeholders while doing the thing I'm also hired to do, not have it look like it came out of a major shop, i.e., simply checking those boxes and being done with it.  It was easy for me to produce images that deviated meaningfully from my anticipated typical submission, yet it was hard for me to produce a par that had a major variance from Dan's, which for me opens more questions in terms of bias/background, set size, and the generalizability of pictorial esthetics.

Preferences between versions are partly a matter of taste, but a lot is technical and artistic proficiency. When I was teaching ACT, up to eight people would submit versions and then we would decide which were the best one(s). Approximately half the time the vote would be unanimous for a certain version as best. Not surprising, since half the versions would have disqualified themselves.

And that’s the case here. You and Gerald often post lists of your favorites before knowing mine. Granted, all three of us have very different tastes generally, yet usually any two of us would agree on three of the top five. That wouldn’t happen if we had 30 or 40 excellent entries to choose from. Typically, however, only eight or ten could realistically be considered, so even if we each selected our choices by flipping coins we’d still have a lot of overlap on our lists.

The question is also how a group would determine its preference. Is it the version that gets the most votes? And how do we adjust for images that some like and others hate? That’s the virtue of the par: we can criticize it, but nobody is really going to dislike it.

More commonly a group tries to reach a conclusion by consensus, which usually results in a bland choice. As Benjamin Franklin once said about our discipline, “If all Printers were determin’d not to print any thing till they were sure it would offend no body, there would be very little printed.”

Something along those lines is described in the final pages of CC2E. Book publishers traditionally consider the cover artwork of supreme importance, and probably it was in a time when people actually went to bookstores. I imagine that 95% of those buying CC2E did so on the basis of a thumbnail online, but that did not prevent the entire staff of Peachpit Press from getting involved in the decision and spending hours and hours on it.

First, they investigated stock photography of canyons, without checking with me first. I pointed out that I know some pretty good photographers, that I have quite a selection of canyon photography to choose from, and that I myself have a bit of experience in preparing images for CMYK. They asked for samples, and I gave them a dozen, all tagged sRGB. By whatever method they were using they narrowed it down to two, and they sent me high-quality proofs of their designs for each one. The images were quite a bit louder than I had expected, because the art director knew nothing about color management and had set up her system to open all files in Adobe RGB, ignoring any embedded profiles. After discussion, we selected one, but they requested that I submit a few color-corrected-for-CMYK versions of it so that they could make a more informed decision.

At about that time the beta readers were getting rather uppity about the quality of some of my corrections elsewhere in the book, and I suggested to them that if they were such hotshots maybe they’d like to prepare their own versions of the cover art, now that a certain original had finally been chosen. And I said I would send the best results off to Peachpit, without identifying them or expressing my own opinion.

What followed was something quite similar to our case studies: nearly a dozen entries, followed by a vigorous discussion that got even more vigorous when I announced I wasn’t going to submit one of the group’s favorites because I didn’t like it, and since it was my book, by God, I’d veto any entrant I pleased. But I okayed the group’s other favorite, an Italian entry. I also submitted two of my own, and that of one other beta reader, and I told the group that all four would be proofed, and then six people from Peachpit would make the decision without help from any of us. The group, but especially the Italian delegation, predicted theirs would be chosen. I named the more exuberant of my two versions as the likely winner.

Then, the apocalypse. The art director to whom I submitted the four unilaterally vetoed the Italian version as being “sharpened too much.” (It was no such thing; it merely assigned contrast to an area that bothered her.) She proofed only the other three and sent them out for a final decision, nearly causing an international incident when the beta readers heard what she had done. I then revised my prediction: I said they would now pick the more conservative of my two versions.

And that proved correct. Groups like these often split the difference. If all four had been submitted, ranked from most to least conservative they would have been: the other beta reader’s version; my two; and the Italian version. I thought that they’d see the Italian version as the most aggressive of the four and would choose my less conservative one as a substitute. But with the Italian version removed from the competition, they would see my predicted winner as the most aggressive, and would pick my conservative version instead.

Bottom line: the question is rather complicated, whether experts or laypeople make the decision.
