Fifty people have died in Christchurch following Friday’s shooting at two mosques.
The alleged offender livestreamed the attack on Facebook and posted links to a manifesto on Twitter. Facebook suspended the account after requests from the NZ police. This morning Facebook said it had removed 1.5 million videos of the attack within the first 24 hours. A second person appeared in court today over the alleged re-posting of the livestream.
The SMC gathered expert comment on the media ethics of reporting on the livestream and on how online platforms can find and remove this type of video.
Associate Professor David Parry, Head of Department of Computer Science, AUT, comments:
“Automatically spotting and removing offensive videos is an extremely demanding task. Although Facebook does employ human moderators, there are probably fewer than 10,000 of them, whereas the number of posts per day is in the billions.
“Some of the automated moderation is effective. In many cases, indecent images or posts related to self-harm or suicide can be identified and flagged for moderation. However, in both of these cases, there are obvious features to look for. Identifying naked bodies or particular body parts can often be done using quite simple algorithms that look for shapes and colour combinations. Words that are commonly used in regard to self-harm can also be identified.
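By way of illustration, here is a minimal sketch of the kind of ‘simple’ screening described above – a keyword watch-list for text and a crude colour-based check for images. The keyword list, thresholds and function names are invented for the example and are far cruder than anything a platform would actually deploy.

```python
# Toy illustrations of simple moderation heuristics: keyword matching for text
# and a colour-based check for images. All lists and thresholds are invented.
import numpy as np

SELF_HARM_KEYWORDS = {"self-harm", "suicide"}  # hypothetical watch-list

def flag_text(post: str) -> bool:
    """Flag a post for review if it contains any watch-listed phrase."""
    text = post.lower()
    return any(keyword in text for keyword in SELF_HARM_KEYWORDS)

def skin_tone_ratio(image_rgb: np.ndarray) -> float:
    """Crude share of pixels whose colour falls inside a 'skin-like' RGB range."""
    img = image_rgb.astype(int)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & ((r - g) > 15)
    return float(skin.mean())

def flag_image(image_rgb: np.ndarray, threshold: float = 0.4) -> bool:
    """Flag an image for human review if 'skin-like' pixels dominate the frame."""
    return skin_tone_ratio(image_rgb) > threshold

print(flag_text("thinking about self-harm tonight"))                        # True
print(flag_image(np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)))   # usually False
```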
“At a more complex level, there is a huge stock of known offensive images and of text indicating self-harm. Using machine learning techniques such as deep learning, highly accurate predictions can be made by training models on these known examples and counterexamples – effectively by finding the images that look most similar to the one being tested.
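The ‘most similar known example’ idea can be sketched as a nearest-neighbour search over feature vectors. This is a simplified illustration only: the random vectors stand in for the embeddings a trained deep network would produce, and the similarity threshold is invented.

```python
# Illustrative sketch: compare a new image's feature vector against a bank of
# vectors computed from known offensive images, and flag near-duplicates.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(query: np.ndarray, known_bank: np.ndarray) -> tuple[int, float]:
    """Return (index, similarity) of the closest known example."""
    scores = [cosine_similarity(query, known) for known in known_bank]
    best = int(np.argmax(scores))
    return best, scores[best]

# Stand-in 128-dimensional feature vectors for 1,000 known offensive images.
rng = np.random.default_rng(0)
known_bank = rng.normal(size=(1000, 128))
query = known_bank[42] + rng.normal(scale=0.05, size=128)  # near-duplicate of item 42

index, score = most_similar(query, known_bank)
flagged = score > 0.9   # invented threshold; in practice tuned on validation data
print(index, round(score, 3), flagged)
```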
“However, video is a great deal more difficult to identify, especially in real time. A lot of the features that an automatic system would pick up are common to many GoPro-type videos – movement due to walking, changing scenes as the camera wearer moves around, sounds of breathing and speech that may be out of breath and, of course, people in the field of view. All of these could be perfectly innocent. In addition, because these vile videos are, fortunately, so rare, there is very little data to train a recognition algorithm on.
“Real-time video is especially difficult to identify: some of the scenes will be very common, the speech will often be hard to recognise, and even then it may not use keywords that mark the video as needing removal. Facebook (and other social media such as YouTube or Instagram) depend on user-generated content, and because the market is potentially very easy to enter, the companies involved are very reluctant to censor material unnecessarily or even to delay its transmission.
“Once a video is online it is very easy to download it and either post it again in unmodified form or make small changes to its length, colour balance or format, and repost it to the same website or another one. These changes delay the automatic identification and removal of the video. It is also quite possible that automated posting programmes (‘bots’) are being used, which might explain the very large number of re-posts, although with billions of users there are likely to be a reasonably large number of people who want to repost for extremist reasons or simply to offend.
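A small sketch of why such changes defeat exact matching: any edit, however trivial, to the uploaded file produces a completely different digest, so a blocklist of exact file hashes will miss the altered copy. The byte strings below are stand-ins for real video files.

```python
# Illustrative sketch: a one-byte change to a file yields an entirely different
# SHA-256 digest, so exact-hash blocklists miss modified re-uploads.
import hashlib

original = b"stand-in for the original video file"
altered  = b"stand-in for the original video file "  # trivially modified copy

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(altered).hexdigest())   # no resemblance to the first digest
```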
“To address this, Facebook could do a number of things, all of which it may feel would cause issues with its business model and its attitude to speech. Firstly, the number of moderators could be increased – in conjunction with a time delay, this would increase the chance of stopping these videos from being posted.
“More work could go into identifying and potentially banning users who have extreme or offensive views – however, Facebook has historically been very reluctant to ban any sort of speech which could be seen as expressing political views. In addition, identifying users can be very difficult, especially if they start using some sort of IP address hider or virtual private network.
“Work on scene recognition and on identifying activity – for example, people behaving suspiciously in front of CCTV cameras – is progressing, but it is still in its infancy. It is much better at recognising common activities and sport, for example, because of the large amount of training data available.
“In 2017 Facebook started a pilot to remove ‘revenge porn’ images; however, this required users to send Facebook the original or similar images, which demanded a great deal of trust in the system from the victim.
“Ultimately there is not a huge amount the social media networks can do while working under their current business model, at least until activity recognition becomes better. The question is whether the social licence to operate this model – effectively no pre-moderation and little censorship – will continue to attract users and advertisers.”
No conflict of interest.
Dr Belinda Barnet, Senior Lecturer, Media and Social Media Major, Swinburne University of Technology (Australia), comments:
“Facebook and Twitter have done an incredible job getting rid of ISIS and other extremist-related content over the last two years. To do this they’ve deployed a combination of algorithmic tools, community reporting, and human moderators.
“I don’t feel they’ve paid as much attention to right-wing extremism, and in many cases have promoted it. There is advertising revenue to be had for example. So yes, I feel they do already have the tools and the capacity to combat this: it just hasn’t been their focus. So we need to encourage them to change focus and take White Supremacist and right-wing content as seriously as other dangerous content. Because it is dangerous.
“They did respond fairly quickly to the NZ massacre and removed both video content and links to the video. But could we redirect attention to stopping violent livestreams from occurring in the first place, for example? The AI capacity exists if it were turned to that end.”
No conflict of interest declared.
Glynn Greensmith, lecturer in the Department of Journalism, School of Media, Creative Arts and Social Inquiry, Curtin University (Australia), comments:
“Do we lead or do we follow?
“The internet is a swamp, but it is not the wild west. The main reason we have not applied any real checks and balances to the system is because we have not asked. Well, now the right questions are being asked by the right people, so we will get to see what can be done.
“What must not be taken for granted, what must not be accepted, is the notion that we stop before we start because it goes in the ‘too hard pile’.
“I am told constantly that my calls for more responsible reporting are moot because of the internet. Before the internet, we were told it was not acceptable for other reasons.
“Let’s use the evidence, let’s try and do it right. It is my firm belief that if we effectively apply a better understanding of the relationship between mass murder and the way it is reported, then other problematic areas, such as social media platforms, are more likely to be held to account. The evidence tells us the first step we can take, let’s take it.”
No conflict of interest.
Dr Alistair Knott, Dept of Computer Science, AI and Law in New Zealand project, University of Otago, comments:
“We don’t know exactly what methods the big tech companies use to detect copies of the Christchurch video footage being posted: their methods are all closely-guarded trade secrets. But we can guess, based on recent techniques that have been presented in the public domain.
“In this particular case, the specific aim is to detect instances of one particular video automatically, rather than to classify a type of video (e.g. one containing offensive content defined more generally). This could be called ‘video recognition’, rather than ‘video classification’.
“There are a few techniques that could be used. I’ll mention three:
1. A simple technique would be to use methods that take frames of the target video and compute numbers from them that are almost guaranteed to be unique – that is, they’re very unlikely to be computed from any video other than the target video. These techniques are called ‘checksum’ techniques. But they’re relatively easily thwarted by manipulating the images in the video – for instance, changing the brightness, contrast, and so on [see the first sketch after this list].
2. Another method, which is more resistant to that, is to embed what’s called a ‘digital watermark’ in the video. This is a signature that doesn’t show up in the image, but that can be identified by running a computational process over the image file. The general field here is called ‘steganography’, which covers various ways of hiding messages in images (and thus, by extension, videos) [see the second sketch after this list]. Digital watermarks are designed to be resistant to simple image manipulations. But obviously this only works if the video of interest carries a digital watermark. We could envisage legislation requiring all video cameras to add a digital watermark – but that is certainly not the case at the moment, and it would be quite a heavy piece of legislation, because it would make all videos from new cameras identifiable.
3. A final method would be to use the AI techniques used more widely in video classification: namely, machine learning techniques that are ‘trained’ to detect videos in a certain class. The best techniques these days are ‘deep networks’, and specifically ‘convolutional networks’, which are the state-of-the-art AI techniques in most areas of machine vision. These are probably the main method the tech companies use to detect video content of various types. In the case of the Christchurch video, the idea would be to take the original video, plus various manipulations of it, and train a deep network to recognise videos of this general kind [see the third sketch after this list].
- Deep networks typically take lots of computer resources to train. They might take many days to run on the kind of hardware that university computer science departments could call on. But big tech companies have vast computing resources: if they devoted lots of resources to a task like this, they could probably do a good job.
- There’s a limit to how many target videos could be searched for at any given time, because each uploaded video would have to be passed through a detector for each target video before being allowed through. But again, it’s a matter of resources: the more resources are devoted to this filtering task, the more videos can be scanned for.
- It’s also important to note that most big tech companies appear to be keeping humans in the loop for these filtering decisions: the algorithms alert human judges, who make the final decision. YouTube has published some information about its approach. Fast responses in this case also require very large teams of human judges.
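To make the first technique concrete, here is a minimal sketch of frame-level checksum matching and of how a simple brightness change thwarts it. The frames are random stand-in arrays; a real system would work on decoded video frames and far larger checksum sets.

```python
# Sketch of technique 1: match uploaded frames against checksums of the target
# video, and show that a small brightness change defeats the exact match.
import hashlib
import numpy as np

def frame_checksum(frame: np.ndarray) -> str:
    return hashlib.sha256(frame.tobytes()).hexdigest()

# Checksums computed in advance from frames of the target video (stand-ins here).
rng = np.random.default_rng(1)
target_frames = [rng.integers(0, 256, (72, 128, 3), dtype=np.uint8) for _ in range(5)]
target_checksums = {frame_checksum(f) for f in target_frames}

def is_known_copy(uploaded_frames) -> bool:
    """True if any uploaded frame matches a checksum of the target video."""
    return any(frame_checksum(f) in target_checksums for f in uploaded_frames)

print(is_known_copy(target_frames))   # True: an exact copy is caught
brighter = [np.clip(f.astype(int) + 10, 0, 255).astype(np.uint8) for f in target_frames]
print(is_known_copy(brighter))        # False: a brightness tweak evades the checksums
```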
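The second technique can be illustrated with the simplest possible watermark: hiding a short bit pattern in the least significant bits of a frame. Real digital watermarking schemes are designed to survive compression and editing, which this toy version would not; the signature and frame below are invented for the example.

```python
# Sketch of technique 2: a least-significant-bit (LSB) watermark, the simplest
# form of steganographic marking. Invisible to the eye, recoverable by software.
import numpy as np

WATERMARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical 8-bit signature

def embed_watermark(frame: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide the signature in the least significant bits of the first pixels."""
    marked = frame.copy()
    flat = marked.reshape(-1)
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return marked

def extract_watermark(frame: np.ndarray, length: int) -> np.ndarray:
    """Read the signature back out of the least significant bits."""
    return frame.reshape(-1)[:length] & 1

frame = np.random.randint(0, 256, (72, 128, 3), dtype=np.uint8)
marked = embed_watermark(frame, WATERMARK)
print(extract_watermark(marked, WATERMARK.size))             # recovers [1 0 1 1 0 0 1 0]
print(np.abs(marked.astype(int) - frame.astype(int)).max())  # pixels change by at most 1
```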
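Finally, a toy version of the third technique: a small convolutional network trained to label individual frames as coming from the target video (including its manipulated variants) or not. The architecture, input size and dummy data are invented for the example; production systems would be vastly larger and trained on real frames.

```python
# Sketch of technique 3: a small convolutional network classifying frames as
# 'target video' vs 'other'. Dummy data stands in for real training frames.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # assumes 64x64 input frames

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = FrameClassifier()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on dummy data: frames from the target video and its
# manipulated variants would be labelled 1, unrelated frames labelled 0.
frames = torch.rand(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))
loss = loss_fn(model(frames), labels)
loss.backward()
optimiser.step()

# At upload time, frames scoring above some threshold would be flagged.
with torch.no_grad():
    scores = model(frames).softmax(dim=1)[:, 1]
```

In practice the decision would be aggregated over many frames of an upload rather than made frame by frame, and flagged uploads would be routed to the human reviewers mentioned in the point above.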
“I believe big tech companies should be required by law to devote sufficient resources to important tasks like this. It’s not in their immediate financial interest to do it – and public opinion is a relatively weak instrument for obliging them to do the right thing, because they hold such monopoly positions. So they won’t do it unless they’re made to by law.
“It’s not clear how attuned the big tech companies are to white supremacist terrorist content. In 2018, Facebook reported some numbers for how many ISIS posts had been deleted, but no numbers for white supremacist posts.”
No conflict of interest declared.
Professor Ursula Cheer, School of Law, University of Canterbury, comments:
“One thing I have observed this time about the TV coverage of Christchurch hospital is that there were endless repeated shots of identifiable people, who were distressed, semi-clothed and covered in blood, being delivered to and taken into the hospital. In my view, that breached the privacy of vulnerable people and media should have thought about that.
“I am utterly fed up with the ‘grief’ coverage now and am increasingly resorting to the ‘off’ button, and so are many others.
“Our media laws apply to anyone or any company publishing here and having a legal presence in NZ. If they can be tracked down, that works for online publication too. But if material is published overseas or from overseas, our laws cannot reach those publishers. That is why the main resort is the online publishing companies like Facebook and Twitter. They are actually accepting responsibility to do something now, which is good.
“But mainstream media would do well not to direct people to social media, as that only causes people to look. The less oxygen given to the manifesto and the video, the better. The same applies to the alleged gunman’s name. And when the trial proceeds in court, it is very important that media do not encourage a social media feeding frenzy, so that the justice system can do its job and the trial is not jeopardised.”
No conflict of interest declared.
Marianne Elliott, Co-Director of The Workshop, comments:
“A good starting point is this set of recommendations for reporting on mass shootings, which includes:
- Minimize reporting on the perpetrators.
- Use the perpetrator’s photo sparingly, especially in follow-up stories.
- Avoid putting photos of the perpetrator side by side with a victim.
- Caution against quoting from manifestos.
“The livestream video was created with the intent to spread hatred and fear. If re-traumatising survivors and the families of those killed isn’t a good enough reason not to share it, then the knowledge it would advance the intentions of a terrorist must be.
“When it comes specifically to reporting on online extremism in the wake of a terrorist attack, Data & Society’s guide on ‘Better Practices for Reporting on Extremists, Antagonists, and Manipulators Online’ is useful.
“One of the most complex issues is how to report on false and manipulative accounts, which abound online in the wake of a tragedy. This report points out that “claims emanating from 4chan [at such a time] are pointedly performative, and almost certainly false.”
“Cognitive science tells us repeating falsehoods, even to debunk them, has the effect of spreading the falsehood further. It is therefore critical that reporters understand that reporting such stories, even with the intention of debunking falsehood, will spread the falsehoods, encourage the people generating them, and “most pressingly, incentivise future bad actions.”
Note: Marianne is currently researching digital media and democracy with funding from the Law Foundation.
Dr Lyn Barnes, former AUT Senior Lecturer, comments:
“My biggest concern for journalists covering the Christchurch massacre is for those who covered Pike River or the earthquakes and lacked support at the time, as many will be re-experiencing traumatic feelings they may not have dealt with. I am also concerned for novice journalists, who are not prepared for some of the unpleasant things they may experience, directly or indirectly, through people sharing their grief.
“Hopefully, editors are more alert to symptoms of stress and are assertive about it: that is, tell the journalists to take some time out, rather than let them continue.”
No conflict of interest.