Can you immediately determine whether an image you see accurately represents the news it is covering? Are you confident that you can tell the difference between images that have been enhanced and images that have been manipulated?
Emily Shriver, a graduate student at RIT, is researching how well the average media consumer can determine how much a photojournalistic image has been post-processed.
If you would like to participate in her research, sort any number of images through her survey: https://imagesurvey.cad.rit.edu
Here’s her story: 

By Emily Shriver
My interest in photography started in high school when I joined the yearbook club. I slowly learned how to use a DSLR, mostly through exploration and button mashing. I also took advantage of my high school’s dark rooms and digital image editing labs. I was always interested in distorting the images I took. My interest in photography grew when I toured Rochester Institute of Technology’s campus. I thought I wanted to be a war correspondent and study photojournalism, but after hearing about the photographic sciences program, I knew that it would be a better fit. It gave me both the knowledge about the image pipeline as well as how the human visual system perceives images. A week before I graduated, I was accepted into the Print Media Sciences MS program at RIT and accepted an internship at PAR Government Systems. So, I chose to take the opportunity to do both! Learning to work remotely on first graduate studies, and then work, was a big learning experience. At PAR, I was assigned to work on DARPA’s MediFor program as Lead Image Manipulator. I learned a lot about image manipulation techniques, the different quirks of how cameras write their image files and about the field of media forensics.

I became very interested in how computational detectors of image manipulation compare with visual inspection of manipulated images. Sometimes computers would catch things that visual analysis wouldn't, and vice versa.

Through more research, I found that there were very few studies looking specifically at how well average media consumers could distinguish between manipulated, enhanced, and original images, especially photojournalistic ones. With support from my graduate advisors, Professors Christine Heusner and Michael Riordan, I put together a survey of 165 test images sourced from local photographers and public domain websites. I was lucky enough to find seven different photographers who were willing to let me use and manipulate their raw and original photographs.

The final survey contains a mix of images that are enhanced, manipulated, or original. Using definitions from the NPPA and the AP, I drew the line between enhancement and manipulation. Enhancements are changes to an image that would be considered appropriate for photojournalistic integrity; these adjustments mostly correct aesthetics for output, such as slight sharpening, small increases in saturation, or histogram changes to bring back blown-out highlights. Manipulations are changes that would not be considered ethical, such as adding, removing, or drastically changing content in an image. It was important to me that my survey was mobile friendly, so cropping was not counted as either an enhancement or a manipulation. In real-world photojournalistic scenarios, however, cropping an image could completely change its context and would not be considered ethical.

Has it been manipulated or enhanced?

I was inspired by the work of Dr. Hany Farid at Dartmouth, Dr. Victor Schetinger, and Dr. Nightingale, among others. All of my sources can be found on my website: https://eas7793.cad.rit.edu/wordpress/index.php/list-of-sources/. My thesis study is unique in that it looks only at photojournalistic images, and the survey mechanism is a web-based application that anyone can take. I am promoting it through social media, and I am trying to understand how the average person categorizes the different images while viewing them much as they would in a social media feed. It surprised me how many news stories there were about fake images being passed around on social media, and how, even though the medium was new, the historical precedent ran deep. Two books that were illuminating on the topic were The Reconfigured Eye (Mitchell, 1992) and The Burden of Visual Truth: The Role of Photojournalism in Mediating Reality (Newton, 2001).

In order to keep the barrier to participation very low, no demographic information is tracked for individual users. Google Analytics is used to collect general location information if users allow their browsers to share it. I hope to get a broad understanding of how people sort the images when seeing them on their own devices, just as they would when looking at their news feed. The survey uses a Tinder-style swipe interface so that the user can quickly assign each image to a category by swiping or dragging it left, right, or up. I hope to have my results out by the end of fall, once the survey reaches 2,000 individual participants. As of the beginning of August, the survey had about 800 participants, having opened only at the beginning of May.
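For readers curious how a swipe-to-categorize mechanic like this can work, here is a minimal TypeScript sketch of the idea. It is purely illustrative: the names, the direction-to-category mapping, and the response shape are assumptions for this example, not details of the survey's actual implementation.

```typescript
// Illustrative sketch only (hypothetical names and mapping, not the survey's
// actual code): how a swipe or drag gesture on an image card could be turned
// into one of the three categories the survey uses.

type Category = "original" | "enhanced" | "manipulated";
type SwipeDirection = "left" | "right" | "up";

// Which direction maps to which category is an assumption for illustration.
const swipeToCategory: Record<SwipeDirection, Category> = {
  left: "manipulated",
  right: "enhanced",
  up: "original",
};

interface Response {
  imageId: string;    // identifier of the image card that was swiped
  category: Category; // the participant's judgment
  timestamp: number;  // when the swipe happened (ms since epoch)
}

// Record a swipe anonymously: no demographic fields are collected,
// in keeping with the low-barrier design described above.
function recordSwipe(imageId: string, direction: SwipeDirection): Response {
  return {
    imageId,
    category: swipeToCategory[direction],
    timestamp: Date.now(),
  };
}

// Example: a participant drags image "img-042" to the right.
console.log(recordSwipe("img-042", "right"));
```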

I hope to follow up this research by conducting it with a global audience and with more diverse image sets and manipulators. I would also like to see whether, if the survey had game-like features, it could train participants to better detect an image's post-processing category visually as they went along. Overall, I hope to use my research to bring awareness to the importance of cultivating visual media literacy and of not believing everything that we see.

More information and sources for her research can be found on her website: https://eas7793.cad.rit.edu/wordpress

Here’s the original photo.