Interview: Are We Misinformed About Misinformation?

Yves here. As Lambert might say, “BAHA!” But it would be nice if the “misinformation” challenges came earlier and more often.

By Sara Talpos, contributing editor at Undark. Originally published in Undark

In June, the journal Nature published an opinion piece suggesting that the harms of online misinformation are poorly understood. The authors of the paper, representing four universities and Microsoft, conducted a review of the behavioral science literature and identified what they described as three common misconceptions: that the average person's exposure to false and inflammatory content is high, that algorithms drive this exposure, and that many broader problems in society are largely caused by social media.

“This idea that people go on YouTube to watch baking videos and end up at Nazi websites – this is very rare,” said David Rothschild, an economist at Microsoft Research who is also a researcher with the University of Pennsylvania’s Penn Media Accountability Project. That’s not to say that extreme cases don’t matter, he and his colleagues write, but treating them as typical can contribute to misunderstanding – and divert attention from more pressing issues.

Rothschild spoke to Undark about the paper in a video call. Our conversation has been edited for length and clarity.

Undark: What inspired you and your co-authors to write this piece?

David Rothschild: The five authors of this paper have all been doing research in this space for years, trying to understand what is happening on social media: what is good, what is bad, and especially how it differs from the narrative we hear in the mainstream media and from other researchers.

In particular, we kept running into these questions about the experience of the average consumer – the typical person versus the extreme example. A lot of what we were seeing, and a lot of the common understanding – referencing many studies – described a fairly extreme scenario.

The second part of that is the heavy emphasis on algorithms, a lot of concern about algorithms. What we’re seeing is that a lot of harmful content doesn’t come from an algorithm pushing it on people. Actually, it’s quite the opposite: the algorithms tend to pull you toward the center.

And then there are these questions about correlation and causation. A lot of research, and especially mainstream media coverage, conflates the proximate cause of something with its underlying cause.

A lot of people say: “Oh, these yellow-vest riots happened in France; they were organized on Facebook.” Well, France has had riots for a few hundred years. People find ways to organize without social media.

On proximate cause – the proximate way people planned [January 6] – there was certainly a lot happening online. But then the question is: Could these things have happened in an offline world? These are difficult questions.

Writing an opinion piece in Nature really allows us to reach stakeholders outside academia and engage in a broader conversation, because there are real-world implications: research is being done, money is being spent, and platforms are under pressure to solve a problem that people are talking about.

UN: Can you talk about the example of the 2016 election: What did you find out about it, and what role may the media have played regarding disinformation?

DR: The bottom line is that what the Russians did in 2016 is interesting and newsworthy. They invested heavily in creating sleeper Facebook organizations that posted viral content and then slipped in a bunch of fake news toward the end. It makes sense, and I certainly understand why people are interested. But ultimately, what we wanted to ask was, “How much of an impact could that plausibly have?”

The impact is really hard [to measure], but at least we can put it in the perspective of people’s news feeds and show that views of Russian fake news were a small fraction of people’s news consumption on Facebook – let alone their overall Facebook use, of which news is just a small part, let alone their news consumption in general. Especially in 2016, most people, even young people, were still consuming more news on television than on social media, let alone the internet.

While we agree that any amount of fake news is probably not good, there is plenty of research showing that repeated interaction with content is what drives people’s basic understanding of the world – the narrative, however you want to describe it. Being hit occasionally with some fake news, at the very low rates the average consumer sees, is just not the driving force.

UN: From reading your Nature paper, it seems you found that journalists are spreading misinformation about the effects of misinformation. Is that accurate? And if so, why do you think it happens?

DR: In the end, it makes a good story. Nuance is difficult – very difficult – and negativity is compelling.

UN: So what is the good story, specifically?

DR: That social media is harming your children. That social media is a problem.

There is a general pull toward covering things in a negative light. And of course there is a long history of people panicking over new technologies and ascribing all of society’s ills to them – whether that was the internet, or television, or radio, or music, or books. You can go back in time and see all these kinds of concerns.

Ultimately, there will be people who benefit from social media, there will be people who are harmed by it, and on the whole society will continue to adjust and improve with new technologies, as it always has. But that’s just not as interesting a story as “social media creates these problems,” stated without measurement.

“Social media is the problem, and it’s really the algorithms” offers a very simple and tangible solution: fix the algorithms. And it avoids a much harder question – one that we often don’t want to ask – about human nature.

A lot of the research that we cite here – and this is what I think makes people uncomfortable – shows that a certain segment of the population demands horrible things. They want content that is racist, degrading, and violence-inducing. That demand can be satisfied on various social media platforms, and it was satisfied before in other forms of media – books, movies, radio, whatever people were reading, watching, or listening to in earlier eras.

Ultimately, the various channels we have can shift how easily such content is distributed. But the existence of that demand is a question of human nature that is beyond my power as a researcher to solve – beyond the power of most people. I think that makes it difficult and uncomfortable, and I think that’s why a lot of journalists prefer to fall back on “social media bad, algorithms are the problem.”

UN: On the same day Nature published your piece, the journal also published a comment titled “Misinformation poses a greater threat to democracy than you might think.” The authors suggest that “concern about the expected avalanche of election-related misinformation is warranted, given the potential of misinformation to promote alienation and undermine trust in electoral processes.” What is the average person to make of these seemingly divergent views?

DR: We certainly do not want to give the impression that we condone any misinformation or harmful content, or underestimate the impact it has on the people it affects. What we are saying is that the problem is concentrated away from the average consumer, in extreme pockets – and reaching those takes a different approach, and a different allocation of resources, than the traditional research and the traditional questions about targeting the average consumer and this supposed broad impact.

I read that piece and I don’t think it’s wrong; I just don’t see who they’re arguing with, really. It’s not a big leap from what they say to “Well, we have to fight it where it is, fight it where the problems are.” I think the two pieces talk past each other, in a sense.

UN: You are a Microsoft employee. How would you convince skeptical readers that your paper is not an attempt to downplay the negative effects of the technology industry’s profitable products?

DR: The paper has four academic co-authors, and it went through an incredibly rigorous process. You may not have noticed: We submitted this paper on Oct. 13, 2021, and it was finally accepted on April 11, 2024. I’ve had some crazy review processes in my time, but this one was intense.

We came in with ideas based on our own academic research. We supplemented it with the latest research, and we continue to supplement it as new work comes out – especially research that cuts against our original view.

The bottom line is that Microsoft Research is a very diverse place. For those who are not familiar with it, it was founded on the Bell Labs model: there is no internal review process for publications out of Microsoft Research, because the belief is that the integrity of the work depends on its not being vetted by the company. The idea is to use that position to engage in discussion and understanding about the impact of things close to the company, as well as things that have nothing to do with it.

In this case, the work is fairly far removed from the company’s products. It’s a really nice position to be in. Much of the work is co-authored with academic partners, and that is certainly important for ensuring there are clear guardrails in the process and for ensuring the academic integrity of the work.

UN: I forgot to ask you about your team’s methods.

DR: It’s obviously different from a traditional research paper. In this case, it really started with conversations among the co-authors about the collaborative work, and the separate work, we were doing that we felt wasn’t reaching the right places. We began by laying out a few theses we had about the differences between our academic work, the broader body of academic work, and what we were seeing in the public discussion. Then we did a comprehensive literature review.

As you can see, we have somewhere over 150 citations – 154 citations. And through this incredibly long review process at Nature, we went step by step to make sure there was nothing that couldn’t be backed up: where appropriate, by the academic literature, or, where appropriate, by what we could cite from the public discussion.

The idea was to create, hopefully, a broader piece that allows people to really engage with what we think is a very important conversation – and that’s why I’m so excited to talk with you today – about where the real harms are and where the pressure should be applied.

None of us is a firm believer in staking out a position and clinging to it in the face of new evidence. Social media keeps changing. What we have now with TikTok, and Reels, and YouTube Shorts is a very different thing from the social media of a few years ago – with long videos – or the dominant social media a few years before that, with news feeds. These will continue to be things you want to monitor and understand.



