
Social media encourages division and anxiety, but solving its fundamental problems is extremely complex


Despite various studies and counter-studies, many of them funded by the networks themselves, social media remains a deeply problematic channel for messages that provoke division and fuel harmful movements.

But its influence is often misunderstood, or its elements are conflated to obscure the facts, for a variety of reasons. The real impact of social media cannot be reduced to algorithms or amplification alone. The most significant damage comes from the connection itself: the ability to see into the minds of people you know, in a way that simply wasn't possible in the past.

Here's an example. Say you're fully vaccinated against COVID, you trust the science and you've done what health officials advised, with no worries about the process. But then you see a post from an old friend, let's call him 'Dave', in which Dave expresses concern about the vaccine and explains why he's hesitant to get it.

You may not have talked to Dave for years, but you like him and you respect his opinion. Suddenly this is not a faceless, anonymous activist you can easily dismiss; this is someone you know, and it makes you wonder whether there might be more to the anti-vax argument than you thought. Dave has never seemed stupid or gullible. Maybe you should dig a little deeper.

So you do: you read the links Dave posted, check out related posts and articles, maybe even browse a few groups to understand the issue better. Maybe you start commenting on anti-vax articles. All of this tells Facebook's algorithms that you're interested in the topic and that you'll engage with similar posts more and more often. The recommendations in your feed start to change, you become more involved in the topic, and all of it pushes you further toward one side or the other of the debate, encouraging division.
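The feedback loop described above can be sketched as a toy model. This is purely illustrative, with hypothetical names and weights, not Facebook's actual system: each engagement raises a per-topic affinity score, and the feed ranks posts by that affinity, so every click makes similar posts more likely to surface.

```python
from collections import defaultdict

class ToyFeed:
    """Toy engagement-driven ranker; illustrative only, not Facebook's algorithm."""

    def __init__(self):
        self.affinity = defaultdict(float)  # topic -> learned interest score

    def engage(self, topic, weight=1.0):
        # Clicking, commenting, or joining a group all feed the same signal.
        self.affinity[topic] += weight

    def rank(self, posts):
        # posts: list of (title, topic); higher-affinity topics float to the top.
        return sorted(posts, key=lambda p: self.affinity[p[1]], reverse=True)

feed = ToyFeed()
for _ in range(3):                      # a few clicks on vaccine-skeptic content...
    feed.engage("vaccine-skepticism")
posts = [("Local sports recap", "sports"),
         ("Why I'm hesitant", "vaccine-skepticism")]
print(feed.rank(posts)[0][0])           # ...and that content now ranks first
```

The point of the sketch is that no part of the loop requires malicious design; optimizing for engagement alone is enough to narrow the feed.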

But it didn't start with the algorithm, which is a key point of refutation in Meta's counter-arguments. It started with Dave, someone you know, publishing an opinion that aroused your interest.

That is why broader campaigns to manipulate public opinion are such a concern. The disruption campaigns run by Russia's Internet Research Agency in the lead-up to the 2016 US elections are the most obvious example, but similar efforts are happening all the time. Last week brought reports that the Indian government has used bot networks and crude social media campaigns to "flood the zone" and shift public debate away from certain topics by pushing alternative ones on Facebook and Twitter. Many NFT and crypto projects now try to cash in on the broader hype by using Twitter bots to make their offerings look more popular and reputable than they are.

Most people, of course, are increasingly wary of such schemes and more willing to question what they see online. But, much like the classic Nigerian email scam, it takes only a very small number of people getting hooked to make the effort worthwhile. Labor costs are low and the process can be heavily automated. And it takes only a few Daves to have a big impact on public discourse.

The motivations behind these campaigns vary. For the Indian government it is about controlling public discourse and stifling possible dissent, while for the scammers it is about money. There are many reasons why such efforts are made, but there is no doubt that social media provides them a valuable, scalable conduit.

But the counter-arguments are selective. Meta says political content is only a small part of the total material shared on Facebook. That may be true, but the count covers only shared articles, not personal posts and group discussions. Meta also says divisive content is actually bad for business because, as CEO Mark Zuckerberg explains:

We make money from ads, and advertisers consistently tell us they don't want their ads next to harmful or angry content. And I don't know any tech company that sets out to build products that make people angry or depressed. The moral, business and product incentives all point in the opposite direction.

Yet at the same time, Meta's own research has shown Facebook's power to influence public opinion, especially in a political context.

Back in 2010, around 340,000 additional voters turned out for the US Congressional elections because of a single Facebook get-out-the-vote message shown on election day.

According to the study:

"About 611,000 users (1%) received an 'informational message' at the top of their News Feed that encouraged them to vote, provided a link to information on local polling places, and included a clickable 'I Voted' button with a counter of Facebook users who had clicked it. About 60 million users (98%) received a 'social message' that included the same elements but also showed the profile pictures of up to six randomly selected Facebook friends who had clicked the 'I Voted' button. The remaining 1% of users were assigned to a control group that received no message."

Facebook message on election day

The results showed that those who saw the second message, with their friends' pictures included, were more likely to vote, ultimately translating into around 340,000 extra people going to the polls as a result of that peer nudge. And that was a small-scale test by Facebook's standards, covering 60 million users; the platform is now approaching 3 billion monthly active users worldwide.
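As a sanity check on those figures, the implied per-user effect is tiny but adds up at Facebook's scale. This is illustrative arithmetic using only the numbers reported above; the extrapolation is naive, since today's user base and context differ from the 2010 experiment.

```python
# Figures as reported above (approximate).
social_group = 60_000_000       # users shown the "social message"
extra_voters = 340_000          # additional turnout attributed to the nudge

lift = extra_voters / social_group
print(f"Per-user turnout lift: {lift:.2%}")   # roughly half a percent

# Naive extrapolation: at ~3 billion monthly actives, even a tiny
# per-user effect translates into tens of millions of people.
monthly_actives = 3_000_000_000
print(f"Same lift at full scale: {lift * monthly_actives:,.0f} people")
```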

It is clear, from Facebook's own evidence, that the platform wields significant influence through peer insight and personal sharing.

So it is not Facebook specifically, nor the infamous News Feed algorithm, that is the key culprit in this process. It is people, and what people choose to share. As Zuckerberg has repeatedly pointed out:

Yes, we have big disagreements, maybe more now than at any other time in recent history. But part of that is because we're bringing our issues to the table, issues that haven't been talked about for a long time. More people from more parts of our society have a voice than ever before, and it will take time for those voices to be heard and woven into a coherent narrative.

Contrary to the suggestion that it creates more problems, Meta sees Facebook as a tool for real social change: that through freedom of expression we can reach a point of better understanding, and that providing a platform for all should, in theory, ensure better representation and connection.

That may be true from an optimistic point of view, but the ability of bad actors to sway those shared opinions matters just as much, and their messages are just as often the ones amplified among your network connections.

So what can be done, beyond what Meta's enforcement and moderation teams are already doing?

Well, probably not much. Detecting duplicate text in posts would seemingly help, and platforms already do this in various ways. Restricting sharing around certain topics could also have some impact. But realistically, the best way forward is what Meta is already doing: working to identify the originators of such material and removing the networks that amplify questionable content.
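One common approach to the duplicate-text detection mentioned above is word-shingling plus Jaccard similarity, which catches near-copies that differ by only a few words. This is a minimal sketch; production systems use far more robust fingerprinting (MinHash, SimHash) at scale.

```python
def shingles(text, k=3):
    """Set of k-word shingles; lowercased so trivial edits don't evade matching."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b, k=3):
    """Similarity in [0, 1]; 1.0 means identical shingle sets."""
    sa, sb = shingles(a, k), shingles(b, k)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original  = "Doctors do not want you to know this one simple trick about vaccines"
near_copy = "Doctors do not want you to know this simple trick about the vaccines"
unrelated = "Local team wins the championship after a dramatic final game"

print(jaccard(original, near_copy))   # high: flagged as a near-duplicate
print(jaccard(original, unrelated))   # near zero: ignored
```

A coordinated bot campaign posting lightly reworded copies of the same message scores high against the seed text, which is exactly the signal a platform needs to cluster and remove the network.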

Would removing the algorithm work?

Maybe. Whistleblower Frances Haugen pointed to the News Feed algorithm, and its focus on driving engagement above all else, as a key issue, since the system is effectively designed to amplify content that provokes argument.

That is certainly problematic in some contexts, but would it stop Dave from sharing his thoughts on an issue? No, it wouldn't, and nothing stops the Daves of the world from getting their information from dubious sources like the ones highlighted here. But social media platforms and their algorithms facilitate and accelerate that process, and provide entirely new avenues for sharing.

Various measures could be taken, but the effectiveness of each is questionable, because much of this is not a social media problem but a people problem, as Meta says. The trouble is that we now have access to other people's thoughts, and we will not agree with some of them.

In the past we could carry on blissfully unaware of our differences. In the age of social media, that is no longer an option.

Will this ultimately, as Zuckerberg argues, lead us to understanding, to a more integrated and civil society? The results so far suggest we still have a long way to go.



Naveen Kumar
