Labour rights activist and former content moderator Daniel Motaung speaks about the role of organising and its challenges, and his aspirations for the future of content moderation.
Julia: When it all started about three years ago, you were one of the first whistleblowers, right?
Daniel: Yes, roughly three years ago. It has been something of a roller coaster and I fear that the movement in Africa and the Global South is experiencing a serious downturn. For instance, in Kenya, which has attracted attention as a content moderation hub since the whistleblowing, Big Tech companies like Meta, which owns Facebook, have begun to country hop – I believe they want to destabilise the movement and counter any gains made. As a result, the problems of content moderation are not lessening; rather, they are being moved elsewhere. Another thing to be aware of is that content moderation is not just Facebook’s problem. There’s plenty of information online about Facebook and TikTok’s content moderation, but there’s almost nothing about X (formerly Twitter) and other social media platforms whose business models depend on it.
Julia: What made you blow the whistle? Was there a specific moment and do you ever regret doing it?
Daniel: I need to start off by saying that I became a whistleblower by accident. I had no intention whatsoever of whistleblowing, but I found myself in a position where my life was under threat. I intuitively realised that my sanity, my whole being, my entire existence as a person was being threatened. I reacted because I had stopped living when I needed to continue to be alive. I was compromised mentally, isolated, depressed and potentially suicidal. Because of that, and the fact that I am naturally a person who doesn’t fold easily, I fought back. The one thing I knew at the time was that I didn’t want to die – and if I was going to die, I was going to die fighting. Looking back, I have no regrets because it was the only thing I could do. I genuinely feel like not fighting back would have meant losing everything that gave me a sense of life. Whistleblowing was my means of survival.
Julia: And it was mental health issues caused by the content that put you in that position?
Daniel: My mental health issues were caused by content moderation. At some point I found myself isolated and felt like I was losing my mind, but even though I didn’t know or understand what was happening to me, I always had the desire to get out into the world and speak and fight. Luckily, with the support of other people and funding, I was able to get help and regain myself. I say ‘luckily’ because I did not have the money for mental healthcare and did not even know I needed that kind of help, who to talk to or where to even start this fight. So, I was introduced to psychotherapy and the rest is history. If I hadn’t spoken out then, I would not be here today speaking with confidence and happiness. I would still be depressed, isolated, having sleepless nights, flashbacks and anxiety. As I speak of this luck, I’m reminded of my former colleagues and I wonder what is happening to them, because I know for certain that, at this point, there isn’t anyone or anything extending a helping hand to assist them with whatever they are going through because of content moderation and Facebook.
Julia: Is labour organising and speaking out about injustices something you’ve always done or that you started doing in recent years, after what happened to you?
Daniel: Interestingly, I came across organising and labour issues before Sama, when I was still at Rhodes University studying Industrial Sociology and Politics, where I was introduced to socialism and Marxism. I’ve always been interested in levelling the scales between the haves and the have-nots, and I was intrigued by the theoretical proposition that the current capitalist society is a society of conflict between the so-called bourgeoisie and the proletariat. Also, I come from poverty, but working at Sama as a content moderator was my first real-life experience of injustice in connection with labour and workers’ rights.
In a way, organising my co-workers originated at an unconscious level, on the basis of my prior experience as a student of social sciences, but I was also motivated by my experience at university, having been part of the broader ‘Fees Must Fall’ movement in 2015 and 2016. It was basically second nature to me: I knew about social and political movements and understood the agency of human beings, especially when they find themselves under pressure.
When I speak about labour struggles, I am also speaking about online digital workers more broadly: how they’re being exploited, how they are not recognised as workers – even in law. And an interesting thing here is that workers who are outsourced do not necessarily see themselves as workers; they usually see themselves as performing piecemeal jobs, or as contractors.
Julia: Does this make organising challenging?
Daniel: Yes, because the first question becomes how long these people will work for a particular organisation. And the second question is: if these people are temporary rather than permanent workers, what is the point of unionising or organising? The obvious answer is ‘no point’, because their arrangement is for a limited time. In practice, it only makes sense to be a member of a union or a labour movement if you are working as a permanent employee. Whether or not that is technically correct, the reality is that no one bothers with unionisation where temporary employees are concerned. Case in point: if content moderators unionise in Kenya and their employment contracts end, or the social media company suddenly removes that work from the country, what happens to those people who have unionised? That movement will die out.
Julia: So we need broad alliances among gig workers, online digital workers and content moderators?
Daniel: Yes! To transcend nationalism we need broad solidarity on regional and global levels. What we are dealing with is a global problem which requires a global solution.
Right now, in my work at the Safe Content Advocacy Network, the organisation I founded at the beginning of this year, I have found it extremely difficult to mobilise anyone, because the problem we are dealing with is one of nationalism and the failure of different bodies to realise that we need to come together and act as a single unit. Within countries you find smaller groupings that are very difficult to organise or mobilise because they are isolated and somewhat aggressive towards unification, because everyone wants to lead. I started by pushing the union movement in Kenya, which failed because of a lack of solidarity and the aggression I’ve mentioned, amongst other things. Beyond that, it has proven extremely difficult to stretch to other countries and create an environment whereby unionisation transcends mere national borders to draw support from a global force that can bargain with another global force (which these Big Tech companies are). For emphasis: the union I imagine needs to draw strength from other unions in the same global movement, or from organisations of content moderators, in order for it to be able to push back against the tactics that Facebook and these other Big Tech companies adopt. If this doesn’t happen, they will continue to country hop and we will never reach the point where the exploitation of content moderators comes to an end. Nor will content moderation become the officially recognised profession that I think it should become, if we are serious about the safety of content moderators and of users.
Julia: Talking about the future of content moderation, what is the first thing a global movement or global union should address? Where would you start?
Daniel: The first is security of tenure. Without it, the status of a worker at any multinational corporation is constantly threatened, and individuals in this scenario will never get to a point where they feel safe enough to fight for their rights and transform their working environment. We must, at a conscious level, see ourselves as permanent workers before we can even talk about fixing the system and making it safer. Otherwise, we’ll continue to face the capitalist phenomenon of ‘the reserve army of labour’, where there are simply a lot of unemployed people who could do the work, so if you complain, they’ll fire you because you are replaceable. As long as workers remain expendable, we cannot actually transform the system or change anything.
Julia: What would you do next, after achieving security of tenure?
Daniel: Next, I would professionalise content moderation. Right now, as it stands, content moderation does not require any special training. The only things required are that you speak a certain language and can use a computer. Consequently, content moderators are easily replaceable because, theoretically, anybody can do the job. For me, professionalisation is critical both for security of tenure and for the system itself. The system has failed and continues to fail because content moderation lacks a professional standard to subscribe to, and regulation against which it can be judged. This is only the tip of the iceberg. Developing professional training programmes as part of this system will ensure that content moderation is actually effective.
Julia: If you are thinking outside of the current system, what is your most radical take?
Daniel: Well, I’m a practical person who refuses to enter the realm of ideas that are not realistic or are unhelpful to the problems that we currently face. First, we need to accept that there is no way that, where social media platforms are concerned, multinational corporations will be regulated in the next hundred years, because there is no political will. Governments simply don’t know what to regulate in respect of content moderation and artificial intelligence – note that I am intentionally not using the words ‘gig work’.
So my most radical idea is organising labour. I believe the best way to challenge these major global powers, neutralising them and protecting ourselves as labour in the first instance, is by working together as a global unit, a well-oiled machine that can force these corporations to come to the table to agree on a global policy that ensures the rights of content moderators. Imagine if, one day, all the content moderators working for Facebook, for example, came together and refused to moderate unless the company came to the bargaining table where compromises could be made. We could negotiate and agree on security of tenure alongside appropriate pay for content moderators, and then begin a conversation about how to make content moderation safe and what must be adopted to make this safety a reality.
Moving as a global unit to prevent these companies from continuing to exploit content moderators is very important. It is the only way in which content moderators can begin to make serious and noticeable gains in the fight against exploitation, in the absence of any regulation. Therefore, international solidarity is essential.
Julia: Can this ever be a humane job, and if so, how?
Daniel: I believe it can – and that is why I speak of the development of a content moderation system that is effective both in protecting users from harmful content and in ensuring that content moderators do not have to suffer from it whilst policing these platforms.
In my opinion, we already have the tools to make this happen; all that’s needed is to organise them. With a training programme built around these tools, we can create a system that makes content moderation a humane profession, safe and free of exploitation. That is what I’m trying to achieve through my work.
If I am asked whether emotional suffering can be prevented on the job, my answer is in the affirmative. I believe, and am open to being proven wrong, that people have different levels of emotional tolerance for toxic content. Put differently – yes, most content moderators suffer from on-the-job trauma, but not all do. If this is correct, it seems to me that one of the things we need to do to make content moderation safer is to create specialisations to ensure that those with a higher tolerance for graphic material deal with graphic content, and those with a lower tolerance deal with textual content.
This is not to suggest we would not need psychologists anymore. Even those with a higher tolerance can still be traumatised and need psychological intervention or treatment. What I’m saying is that people with a lower tolerance to toxic content should not be subjected to graphic content because they will definitely be traumatised.
For example, imagine two separate babies left unattended by their parents. Each plays with a snake, but only one gets bitten. The one who played happily grows up to become a snake specialist, while the one who was bitten develops a fear of snakes. As such, you cannot force the latter to work with snakes, since their fear could develop into PTSD. Adults of sound mind already know at a conscious or unconscious level what’s potentially traumatising, but this knowledge alone does not cause PTSD or mental health issues. I believe only actual experience (physically felt or otherwise observed) can cause a person to suffer mental health problems, and that it depends on their pre-existing tolerance for graphic content. I’m not interested in why their tolerance is higher, or in the possibility of them being psychopaths – that is a different conversation entirely. All I’m interested in is whether they are likely to suffer from mental health problems as a result.
The good thing in all of this is that we now have artificial intelligence available to us as a tool to make this work safer. I’m not advocating for AI to replace us because, as some commentators say, AI only has the IQ of a four-year-old – and a four-year-old is still better because they have a worldview and can reason by themselves, whereas AI is told to reason in a certain fixed manner through programming. All I’m saying is that AI could be used as a sorting tool to identify what is graphic and what is not. A specialist content moderator could then be tasked with deciding whether the content in question is in violation or not.
Another thing is to have global rules, not Facebook’s or TikTok’s, because in the bigger scheme of things, all content moderators do the same work and their problems are the same, irrespective of the company that employs them. Mining has industry-wide safety standards; content moderation ought to have the same.
There are a lot of things that can be done to fix content moderation. Of course, for some of those solutions, user satisfaction and the loss of users might be brought up by the companies, but honestly, that is not a concern for me. As research shows, social media is addictive and people will still use these platforms, even if they become tedious or make posting a bit more difficult.
Julia: What is the most important thing you would like to see governments do?
Daniel: History teaches us that governments only ever act at the end of the timeline, when the fight against an injustice is about to end and a revolution has already been going in a certain direction. I would like governments to regulate the industry, the starting point being a call for proposals for possible solutions, which can then be tested and, if successful, codified into law. Ideally this would be done at an international level, where it would be simpler for a treaty to develop, so that different countries around the world can ratify it into their domestic laws – but I do not believe this could happen now. Therefore, I am focused more on doing the groundwork and making sure we consolidate our power so we can outline clear demands that will feed into the kind of policy that we want to develop.
Julia: And to end this great interview, could you tell us what you are currently working on?
Daniel: Right now I am working on content moderation as a research fellow with Research ICT Africa, based in Cape Town, and I am doing advocacy work through the Safe Content Advocacy Network, which aims to find solutions to content moderation problems and organise content moderators on a global level. The focus of my work is to find a system of content moderation which ensures content moderators and users are safe; the bigger project is to professionalise content moderation.
The solidarity of content moderators is important to me and I’m working very hard to form a global unit that can fight for content moderators and possibly all online digital workers. This will be a vehicle for change and if things go well, in the continued absence of any regulatory body, this global unit will be a platform for developing solutions and bargaining for their implementation.
Julia: Thank you for the interview and all the best for your new organisation, Daniel.