New posting, MIT set

Dan Margulis

I have posted an addition to my blog series on the MIT 5k dataset at

Like any other workflow, PPW gets better results in certain cases than in others. It’s nice to be able to predict which. For example, if one of the images in the MIT competition is of a canyon, it’s a safe bet that PPW is going to win handily. In others, like ones that include lots of different colors, none of which require much detail, PPW has little advantage. And some images have so little potential that no system is much better than the others. Plus, there is always the possibility of user error, as in cases where I misinterpreted what the point of the correction was and thereby did worse than the MIT retouchers. I’ve shown examples of all these categories in previous posts in the series.

This time the topic is a little trickier: cases where I anticipated an easy win for PPW, did nothing obviously wrong in the correction, and yet did not do much better than the MIT retouchers. The three examples I show all have something to do with browns, which should be a hint.

It was an interesting learning experience for me, and I hope it will be for you too.

Dan Margulis