Q&A with Eirliani Abdul Rahman, Senior Fellow at the Georgetown University Collaborative on Global Children’s Issues
In this interview, Senior Fellow Dr. Eirliani Abdul Rahman shares her expertise at the intersection of artificial intelligence (AI), technology, and child online safety. Drawing on her experience with Big Tech, she reflects on her 2022 decision to resign from Twitter’s Trust and Safety Council and the challenges that lie ahead with the development of ethical AI tools that protect vulnerable communities, especially in the fight against child trafficking.

Your background encompasses public health, AI, and child online safety. How have these diverse fields intersected in your work to protect children across digital spaces and various country contexts?
I am interested in the asymmetries of power and inequity embedded within technology, as well as the challenges that stakeholders in the child online protection ecosystem face when engaging with Big Tech. My scholarship thus far has focused on issues of trust and safety within Big Tech, using feminist approaches.
In the context of child trafficking, I am interested in addressing the question of how we can develop ethical AI that is survivor-centric. How can we respect the dignity of historically marginalized communities while developing tools that protect survivors? And how can we disrupt the sociotechnical systems that enable criminal activities?
What motivated your decision to resign from Twitter’s Trust and Safety Council? How do you think your public resignation has impacted global conversations surrounding online safety for children?
I made the difficult decision on December 8, 2022, to resign from Twitter’s Trust and Safety Council, taking two other council members, Anne Collier and Lesley Podesta, with me. The choice was between staying with a social media platform that had seen a meteoric rise in hate speech and attempting to influence its new leadership, or leaving and having no say over its increasingly erratic content moderation policies.
Before Elon Musk’s purchase of Twitter, I had written down several red lines for myself. Should Musk cross them, I told myself, I would resign. Those red lines were crossed. We know from research by the Anti-Defamation League and the Center for Countering Digital Hate that slurs against Black Americans and gay men jumped within a month of Musk’s takeover, and antisemitic posts soared more than 61% in the first two weeks. Another red line was crossed when previously banned accounts, including those that had incited others to violence, were reinstated.
I do not regret my decision. In September 2023, All Tech is Human, a large movement in the responsible tech space, named our resignation a “key moment” in trust and safety in Big Tech.
Looking forward, how do you think your work on children's rights and safety will evolve with the rapid advancement of AI in the coming years?
The issue of trafficking is urgent and needs a collaborative approach at the societal level. Globally, more than two-thirds of trafficking victims are forced into labor, including more than 10 million adults and approximately four million children. Developing an ethical large multimodal model (LMM) can help law enforcement corroborate victim statements by analyzing trafficking communications. Trafficking victims may be fearful or unwilling to disclose fully to law enforcement agencies and/or may have difficulty recalling traumatic events. The non-testimonial evidence that prosecutors can then obtain to help secure a conviction will help survivors over the long term.
However, LMMs risk being built on discriminatory stereotypes: recent research shows that biases in source material can become entrenched. It is critical to build AI tools that minimize the retraumatization that can occur during the investigation and trial processes. Developing a predictive model alone is not enough; we also need survivors to inform these processes.
I am a senior advisor for technology policy with Just Rights for Children, an alliance of more than 250 civil society organizations working to end violence against children in India. We are pushing for a survivor-centered lens in developing an ethical LMM that respects human dignity, minimizes the use of surveillance technologies, and recognizes that survivors are not a monolith: their lived experiences and perspectives can differ.
Centering survivors’ needs means giving them meaningful participation in designing intervention tools that can protect others from being trafficked. Thoughtfully designed technology has broader implications for cybersecurity: it can empower and strengthen the anti-human trafficking community and disrupt illicit operations. With my partners, I hope to develop a proof of concept in the Indian context that could help address these harms.
What advice do you have for students passionate about technology and human rights, particularly when it comes to advocating for children's issues?
Be curious and engage critically with the likes of Big Tech. Don’t accept something as a given, especially if it is a narrative that social media platforms themselves may have driven. Recognize that technology is a societal issue and not one that should be left to the so-called “experts” in tech. We all have a say in this and in shaping the future of technology. Through the Collaborative on Global Children’s Issues, I would like to offer learning opportunities related to Big Tech and child protection, as well as safety by design.
Tell us about your background. How are you bringing your lived experience into your work?
Aside from speaking about my experience resigning from the Twitter Council, I hope to enrich discussions with my perspectives on Indigenous AI. I identify as an Indigenous woman from the Malay Archipelago (present-day Brunei, Indonesia, Malaysia, and Singapore). While no single Indigenous perspective exists, Indigenous-centered AI design challenges the Western techno-utilitarian lens and its neocolonial knowledge production. Context is important.
Author and social critic bell hooks described “talking back” as an act of resistance that challenges the politics of domination that “would render us nameless and voiceless.” Building on hooks’s work, I have written about the concept of “coding back.” Coding back challenges the asymmetrical power dynamics embedded within the technology companies and institutions that develop AI. By coding back, we interrogate power and weave in resistance to the furious pace of technology. This resistance, whether performed at the individual or the collective level, is about including Indigenous cultural values and ways of being to disrupt how we think about technology.
I am an advocate for more publicly accountable technologies. The question we need to ask is not “What’s possible?” but rather: “What’s responsible?”