Prime Minister Jacinda Ardern and French President Emmanuel Macron are hosting a summit in Paris on May 15 (local time) with world leaders and tech company CEOs to discuss how they can prevent social media being used to organise and promote terrorism.
Ardern hopes the attendees will agree to a pledge called the ‘Christchurch Call’, which aims to eliminate terrorist and violent extremist content online.
The SMC asked experts to comment.
Marianne Elliott, Co-Director of The Workshop, comments:
“What I hope comes from the summit are concrete, specific and immediate commitments from tech companies to change their policies and practices to prevent the upload and spread of violent extremist content. We know they can do this, if they have the will. I’m also hoping for commitments from governments and tech companies to work together for better regulation and oversight of content published on and disseminated through these platforms, and specifically, I hope to see a commitment to a review of the way algorithms are contributing to radicalisation and the rapid distribution of extremist content. I’ll be looking for concrete steps towards greater transparency and accountability from the platforms, both to their users and to independent regulatory bodies around the world. And finally, I’m hoping for a commitment to an ongoing process to address some of the wider issues contributing to the extreme end of harm, and some sort of road map for next steps in this process.
“What do I expect? I expect those specific commitments to stop the very worst of this content being published, albeit with caveats, and I expect some more vague and partial commitments on the wider issues including the role of the algorithms.”
Conflict of interest statement: Marianne is the lead researcher on the Digital Media and Democracy report released on 8 May 2019, which was primarily funded by the New Zealand Law Foundation with additional funding from Luminate Group.
Dr Kevin Veale, Lecturer in Media Studies, Massey University, comments:
“I think the ‘Christchurch Call’ is a fantastic place for this discussion to be starting, and it’s good that Jacinda Ardern is bringing the conversation to such prominence.
“However, this isn’t the first time that social media platforms have been implicated in terrorism. This is the first time that a terrorist attack in a ‘Western’ country was broadcast via the internet, but Facebook has been a significant factor in the genocide of Rohingya Muslims in Myanmar. This is not an isolated case: previous studies have demonstrated a link between Facebook use and violence against refugees in Germany, as well as YouTube’s complicity in propagating and profiting from neo-Nazi and white-supremacist content through its service.
“I hope the summit draws attention to these cases, and to the fact that social media platforms profit both from an indifference to harassment and from harassment itself. Dealing with these problems falls within the realm of corporate responsibility: they have been known about for a substantial amount of time, yet the platforms have done nothing to remedy their contributions to harassment campaigns.
“Potentially, pressure from governments and the threat of regulation will mean there is some movement. However, I expect that the social media companies themselves will offer primarily technological solutions based on filtering and algorithms, which can be, and visibly are, gamed by bad actors.
“Possibly the discussion will turn to removing anonymity from social media services or the internet, despite the evidence that many people engage in online abuse quite comfortably under their own names.
“New Zealand and other countries do get some benefit from social media platforms, but we also need to ask where the scales are set: what do we REALLY get out of allowing them to connect pervasively to so many aspects of our societies? There will have been high-level policy discussions weighing the benefits and risks involved in participating in the ‘Five Eyes’ surveillance network, but have similar policy discussions considered Facebook’s capacity to gather personal information and communication? What would happen if we – and potentially other countries connected to the ‘Christchurch Call’ discussions – flatly said that we were blocking Facebook from operating in our territory entirely until concessions were made?
“The issue, to an extent, is political will: if we cannot expect to tax Facebook and other social media giants based on their profit within our countries, we cannot expect to have enough leverage to change them in other ways either. In Germany and France, local law requires Twitter to block and filter neo-Nazi content; for some reason, Twitter has elected to only apply such a filter to those countries. Legislative action and regulation can have an impact, as we can see in examples like this.”
No conflict of interest. Dr Veale gave a talk on Tuesday 14th May about how social media companies profit from racism, abuse and harassment.
Associate Professor Alistair Knott, Department of Computer Science, University of Otago, comments:
“It’s great to see Jacinda Ardern taking the initiative in calling for regulation of social media companies in the wake of the Christchurch attacks. Jacinda’s focus is on preventing the posting and dissemination of violent extremist content on social media sites. That’s understandable, given the trauma caused by the Christchurch video – and given its potential as propaganda and precedent for other like-minded extremists. However, removing videos of atrocities is essentially a reactive process that happens after the event. We should also be thinking about proactive reforms to social media platforms that prevent the growth of extremism.
“What turns people into extremists? Videos of attacks have an effect at one end of the extremist spectrum – but we should also be thinking about processes that move people towards extremism from more neutral positions. Obviously these are complex processes, but here again, the way information is shared in social networks may play a role. On platforms like Facebook and Twitter, users are shown the kinds of items they have previously shown some interest in. There is some evidence that this pushes users into ‘bubbles’ of increasingly narrow political or religious viewpoints. When Jacinda and colleagues consider how to regulate social media companies, they might want to think not just about removing depictions of terrorist atrocities, but also about exercising some control over the algorithms that choose items for users’ feeds. Small changes could potentially have large effects in reducing the polarisation of opinions that leads to extremism.
“Social media companies have become hugely powerful distributors of information in our society. In some ways, the policies of these companies are like government policies: they affect everyone, and small tweaks can have big effects. At present, tech companies’ policies are dictated solely by commercial considerations, rather than the public good. There are good arguments that governments should get more involved in their operation.”
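To make Knott’s point concrete, here is a minimal Python sketch of engagement-driven feed ranking of the kind he describes, together with one ‘small change’ – a quota of feed slots reserved for items outside a user’s existing interests. All names, data structures and numbers here are hypothetical simplifications for illustration, not any platform’s actual ranking code.

```python
# Toy model of interest-based feed ranking and one possible "small change".
# All names and structures are hypothetical simplifications; real rankers
# are far more complex, but the narrowing dynamic works the same way.
import random

def rank_feed(items, user_interests, diversity_quota=0.0):
    """Order candidate items by overlap with the user's past interests.

    diversity_quota is the fraction of feed slots reserved for items that
    share no topics with the user's history; 0.0 reproduces the pure
    'show more of what they already clicked on' behaviour.
    """
    ranked = sorted(items,
                    key=lambda item: len(item["topics"] & user_interests),
                    reverse=True)
    n_diverse = int(len(ranked) * diversity_quota)
    if n_diverse == 0:
        return ranked

    # Reserve some slots for items outside the user's existing interests.
    diverse = [i for i in ranked if not (i["topics"] & user_interests)]
    random.shuffle(diverse)
    chosen = diverse[:n_diverse]
    rest = [i for i in ranked if i not in chosen]
    return chosen + rest[:len(ranked) - len(chosen)]

# Example: a user whose history is entirely one political viewpoint.
items = [
    {"id": 1, "topics": {"politics_a"}},
    {"id": 2, "topics": {"politics_a", "outrage"}},
    {"id": 3, "topics": {"gardening"}},
    {"id": 4, "topics": {"politics_b"}},
]
interests = {"politics_a", "outrage"}

print([i["id"] for i in rank_feed(items, interests)])                       # pure bubble
print([i["id"] for i in rank_feed(items, interests, diversity_quota=0.25)]) # one diverse slot
```

Even in this toy model, a quota of zero reproduces the narrowing behaviour Knott describes: the user is only ever shown topics they have already engaged with, while a single regulated parameter changes what reaches the top of the feed.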
No conflict of interest.
Associate Professor Dave Parry, Head of Department, Computer Science, AUT, comments:
“In technical terms, simply making preference setup clearer, allowing people to have a ‘whitelist’ of approved sources and only allowing upload by verified users would go a long way to reducing the viewing of despicable videos like the one recorded in Christchurch. A set of expectations for these measures and for takedown response times, including automated systems to detect suspicious behaviour, could form the basis of a reasonable set of rules that can be enforced.
“Although not perfect, this could be enforced at a national level, and issues of differing levels of censorship between countries avoided. The key element is that social media companies will have to take steps to ensure that users are verified, including their age, and that, at least within the company, users can be linked to real people. Because of privacy issues, this will also require regulations that stop companies simply using this information to increase advertising revenue.
“A set of ‘best practice’ guidelines may be the most we can hope for from the current meeting, along with some sharing of techniques for suspicious activity detection. Unfortunately, automatic recognition techniques are extremely useful to intelligence agencies, and it is unlikely that much will be revealed from those sources.
“The major conflict in these cases is not between free speech and censorship; it is between convenience and harm. These interventions will make it slightly more difficult to upload your snowboarding exploits, but they will also reduce the number of people who could be greatly distressed and damaged by offensive material. If the leaders can make this point, then the meeting will be a success and lead to sensible and acceptable measures.”
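For illustration, here is a minimal Python sketch of the three mechanisms Parry proposes – verified-only upload, per-viewer whitelists of approved sources, and an enforceable takedown deadline. The field names, the one-hour deadline and the default-visibility behaviour are all assumptions made for this sketch, not any platform’s actual policy.

```python
# Minimal sketch of the upload rules Parry describes: verified uploaders,
# per-viewer whitelists of approved sources, and a takedown deadline.
# Field names, the one-hour deadline and the default-visibility choice
# are hypothetical assumptions, not any platform's actual policy.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

TAKEDOWN_DEADLINE = timedelta(hours=1)  # assumed enforceable response time

@dataclass
class User:
    user_id: str
    identity_verified: bool = False  # identity (incl. age) checked at signup
    approved_sources: set = field(default_factory=set)  # this viewer's whitelist

def may_upload(uploader: User) -> bool:
    """Rule 1: only verified accounts may publish video."""
    return uploader.identity_verified

def visible_to(viewer: User, uploader: User) -> bool:
    """Rule 2: a viewer with a whitelist sees only approved sources."""
    if not viewer.approved_sources:
        return True  # no whitelist configured: default visibility (an assumption)
    return uploader.user_id in viewer.approved_sources

def takedown_overdue(flagged_at: datetime, now: datetime) -> bool:
    """Rule 3: flagged content must be actioned within the deadline."""
    return now - flagged_at > TAKEDOWN_DEADLINE

# Example: an unverified account cannot upload, and is invisible to a
# viewer who has configured a whitelist.
anon = User("anon123")
alice = User("alice", identity_verified=True, approved_sources={"nzherald"})
print(may_upload(anon))         # False
print(visible_to(alice, anon))  # False: 'anon123' is not on alice's whitelist
```

The point of the sketch is Parry’s trade-off: each check adds a little friction for ordinary uploaders, in exchange for rules simple enough that a regulator could actually test compliance against them.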
No conflict of interest.