We built a prototype to help readers judge the credibility of photos on social media.
This is part two of a behind-the-scenes look into the research, design and prototyping behind The News Provenance Project’s proof of concept that shows how publishers could use blockchain to surface source information about news photography. Read part one here.
How might news outlets leverage blockchain technology to surface the source information for news photography? This is a question we set out to explore at The News Provenance Project, which is a part of the New York Times R&D Lab. After speaking to 34 users total over one round of discovery research and two rounds of prototype testing, we learned how news consumers decide whether to trust a photo on social media. We used these findings to design a proof of concept to explore how news outlets might provide provenance information for news photography on social media platforms.
First, as part of our discovery research, we conducted in-depth interviews with 15 daily users of social media from different backgrounds and geographic locations, and with diverse news preferences.
During these interviews, we elicited reactions to several design concepts around surfacing the provenance of news photography. Each concept framed provenance slightly differently, from a “verified” checkmark to an emphasis on verification by a network of major news organizations. We also asked participants to rank metadata by which details most convinced them that an image was credible.
Here are some of our findings:
A checkmark isn’t enough information to provide credibility
Participants explained that a checkmark — similar to what appears on verified accounts on Twitter and Facebook — did not give them enough information to feel confident about what was being verified. The concept showing a simple checkmark was the bottom-ranked choice for nearly all of our participants.
Frame as “sourced” instead of “verified”
While many participants said they found confidence in the word “verified,” a concept framed around the word “sourced” was more successful with the majority of our participants. They valued having more information that they could follow up on themselves, rather than an endorsement that a photo was verified by others, without more information about what it meant to be verified.
Multiple pictures build more confidence than the edit history of a single picture
In one of our concepts, we showed a feature that displayed the edits, such as a change in color contrast, made to a photo. Several participants noted that they didn’t need data this detailed unless they had reason to believe the edits dramatically altered what the photo represented. In contrast, they noted that other related pictures or videos from the scene would give them more confidence that an event occurred.
Emphasize familiar photo metadata that’s easy for a consumer to understand
In an exercise where we asked participants to rank photo metadata according to what they thought was most useful, they chose familiar details such as “source,” “original caption” and “publication history” over more technical and unfamiliar terms, like “encrypted” or “stored in an immutable database,” that are relevant to blockchain technology.
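To illustrate what surfacing these familiar fields might look like in practice, here is a minimal sketch of a provenance record. The class and field names are our own illustration, not the project's actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PhotoProvenance:
    """Hypothetical provenance record built around the familiar,
    consumer-friendly fields participants ranked highest. These
    field names are illustrative, not the project's real schema."""
    source: str                       # the outlet or photographer of record
    original_caption: str             # the caption as first published
    publication_history: List[str] = field(default_factory=list)

record = PhotoProvenance(
    source="Example News Agency",
    original_caption="Flooding on Main Street after the storm.",
    publication_history=["2019-07-01: Example News Agency"],
)
```

A consumer-facing signal would render these plain-language fields directly, while technical details like encryption or storage could stay behind the scenes.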
Show the process, with evidence of oversight and accountability
In another exercise, we asked participants to compare and discuss a selection of common credibility indicators, such as the Nutrition Facts label and the USDA organic icon. Because these indicators are connected to organizations that provide oversight, participants told us they inspired confidence that there would be consequences for failing to comply with standards. This indicated to us that a provenance signal for news photography could provide assurance that an umbrella organization is responsible for ensuring transparency and accuracy.
If a provenance signal fails a user once, they may not fully trust it again
A signal that uses language like “true,” rather than “sourced” may fail in breaking news cases or if a correction is needed later on. To build trust, we should clearly emphasize what is known about the history of a photo, rather than offering a guarantee that a photo does or does not represent the complete truth.
Testing photo provenance on a simulated social media feed
We incorporated these findings into a prototype that included visuals with and without a provenance signal, as well as fake ads and other elements to simulate a real social media feed. The provenance signal we designed was a label overlaid on the photo, which expanded to show more information about the photo’s metadata, publishing history and related photos captured of the same event. We conducted two rounds of testing, first with a group of seven people, followed by iterative design changes and a second round with 12 frequent users of social media.
The content in our prototype fit into two main categories: “hard to believe, but true” and “false context.” We wanted to know how source information could influence the perceived credibility of dramatic but accurate “hard to believe” photos. We also wanted to know what participants thought about “false context” posts, where the social media poster claimed a photo showed one event when it actually depicted another.
We had participants complete a series of tasks that were designed to help us understand the usability and comprehension of the provenance signal, as well as how our participants made judgments about a photo’s credibility.
What worked: Provenance helps, even in a polarized media climate
People trust photo source details, even if they perceive an editorial slant in a headline
In our discovery research, we’d spoken to many people who have low trust in mainstream media. We wondered whether that distrust would inform how people view individual news photos even if the source information was visible. Encouragingly, this is not what we observed. There was almost no skepticism of the surfaced source information, though people did express skepticism about perceived editorial slant.
This is promising, as it meant that people with low trust in mainstream media institutions could still trust basic facts of a news photo, even in cases where social media users apply false context.
When source information is provided, “hard to believe” photos become easier to trust
Including source information on photos of contentious topics, such as immigration, did not affect most participants’ confidence in the veracity of the photos. When someone did express lower confidence in a photo, it was because of an unknown source or because a photo “looked photoshopped.”
Providing source information on some photos doesn’t completely discredit others
We wondered whether the presence of source information on some photos but not others might delegitimize credible news publishers without access to these provenance tools. We did not find support for this. Some posts, such as a cat picture, were completely trusted even without a provenance signal. Other posts, such as a news story about a flood, were still largely found credible without the signal, though some people said the signal would have made them more confident.
This indicates that a signal like this would not necessarily damage the credibility of amateur photographers and credible outlets that don’t surface photo metadata; instead, it could give publishers an incentive to adopt it in order to increase audience confidence.
Perhaps most encouraging of all, many people expressed unprompted enthusiasm for the concept: they appreciated having convenient access to details about the origins of a photo and its context, and reflected on times they could have used it to confirm ambiguous visuals on Facebook or Twitter.
What didn’t work
Many people didn’t discover false context at all
Unless explicitly pushed to consider a post’s credibility, almost all the people we spoke to glossed over the details in the source information. Even when they noticed the photo source peripherally, they assumed the source must match what the social media poster had written about the photo. We realized we needed clearer, more prominent cues for people to include the source information in their judgment of a photo.
It was hard for people to comprehend why there would be false context
Many participants expressed confusion when the details didn’t match the post’s description. They didn’t quite understand how that dissonance could happen, assuming it was an accident or a technical glitch. We realized the signal needed to factor in the potential of photos shared with false context.
The shorter signal made participants overconfident
In one version of our prototype, we displayed the photo source information in a tab that people needed to click to expand; only the name of the news outlet was visible by default. For some participants, the source name alone was enough proof that a post was accurate, even when it had been miscaptioned. Many people noted they would not have expanded the source unless they were motivated to learn more. This risk was less likely with the longer signal, where more details were presented on the surface.
Less editorial history, more related articles
The people we spoke to were less interested in seeing the history of how publishers used a photo and more interested in having access to other headlines, captions, summaries and links about the event depicted in the photo. This tied back to our earlier findings that interest, more than truth-seeking, drives user behavior on social platforms. People want more context because they are interested in a story, not because they are trying to prove whether a photo is real.
Building the blockchain-based proof of concept
The lessons we learned from our discovery research and prototype testing informed how we built a blockchain-based proof of concept, which is available to view on our website.
Our changes included drawing more attention to details that could inform a person’s gut reaction, like age and caption of a photo. We also incorporated prompts and resources to support more critical thinking, and to help people make sense of potential dissonance between a mis-captioned photo and its original context. Finally, we provided more photos and article links related to the event depicted in a photo to help people explore a story more on their own.
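The chaining idea behind a blockchain-backed provenance record can be sketched in a few lines. Everything here, including the function name, the record fields, and the use of SHA-256, is our own illustration under stated assumptions, not the project's actual implementation:

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a provenance record together with the previous entry's
    hash, forming a tamper-evident chain. This is a toy stand-in
    for a real blockchain backend."""
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Each publication event (illustrative data) links to the prior hash,
# so altering an earlier record invalidates every later entry.
chain = ["0" * 64]  # genesis value
events = [
    {"source": "Example News Agency", "caption": "Flood on Main St.",
     "published": "2019-07-01"},
    {"source": "Example News Agency", "caption": "Updated caption.",
     "published": "2019-07-02"},
]
for event in events:
    chain.append(record_hash(event, chain[-1]))
```

Because each entry commits to everything before it, readers and publishers alike could detect whether a photo's publishing history had been altered after the fact.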
This proof of concept was created as a provocation to show publishers that it is possible to use the journalistic work they already do to promote clarity in public discourse. Yet, there are still many open questions for how publishers might further expand on this work to address problems of misinformation and trust in news photography online.
Providing basic source information for visuals is relatively low tech, yet we see the potential to build an entire system that connects accurate source information for all images, from point of capture to publication and distribution across the internet.
We hope that these findings can also serve as an encouraging starting point for other publishers looking for ways to use metadata to build trust, helping audiences believe in what they see in credibly sourced visual journalism.
To learn more about the user research and design we did for this project, read Part One.
Emily Saltz is UX Lead for The News Provenance Project in The New York Times R&D Lab. Find her on the Internet @saltzshaker, trying to keep up with the latest critical tech hot takes.
What If Every News Photo on Social Media Showed Contextual Information? was originally published in NYT Open on Medium, where people are continuing the conversation by highlighting and responding to this story.