The march of technology has taken a huge leap in the 21st century with AI, which promises the ability to do almost anything, at any time, often with little accountability. Ethics can fall short when such power lands in the hands of rulers. AI may be a necessity of our time, yet one cannot ignore the risks of deploying it without reliable safeguards.
Rumman Chowdhury, however, offers a distinctive perspective on the potential impact of artificial intelligence (AI). Recognized as one of the leading thinkers in the field, she coined the term ‘moral outsourcing’ to address questions of accountability and governance in AI.
Chowdhury’s expertise spans several domains: she is a Responsible AI Fellow at Harvard, a global AI policy consultant, and the former leader of Twitter’s META team (Machine Learning Ethics, Transparency, and Accountability).
Moral outsourcing, in her framing, attributes sentience and choice to AI systems, allowing technologists to shift responsibility for the products they build onto the products themselves. By this logic, Chowdhury argues, technical advances are cast as inevitable while the biases they encode become entrenched.
In a 2018 TED Talk, Chowdhury highlighted the problem with phrases like ‘racist toaster’ or ‘sexist laptop’ when discussing AI.
She believes such modifiers absolve humans of responsibility for the products they create. This language reinforces a systemic ambivalence that parallels the ‘banality of evil,’ philosopher Hannah Arendt’s account of how collective ignorance allowed atrocities like the Holocaust to occur.
Chowdhury emphasizes that such acts depend not only on the individuals in power who direct them, but also on the ordinary people who support and participate in them.
Chowdhury promotes the concept of red-teaming, which involves inviting external programmers and hackers to test technology’s safeguards and identify potential flaws. This approach is seldom implemented in the tech industry due to the reluctance of technologists to allow others to scrutinize their creations.
Currently, she is organizing a red-teaming event at Def Con, the hacker convention, hosted by its AI Village community. In collaboration with OpenAI, Microsoft, Google, and the Biden administration, hundreds of hackers will probe ChatGPT, producing a unique dataset for testing purposes.
According to Chowdhury, true regulation and enforcement can only be achieved through collectivism. In addition to third-party auditing, she actively contributes to multiple boards across Europe and the US to shape AI policy.
She cautions against excessive regulation that overcorrects without addressing underlying issues. Chowdhury recognizes how hard it is to define toxic or hateful behavior in AI systems and acknowledges that the work is ongoing.
From the beginning of her tech career, Chowdhury noticed a gap in understanding between technologists and the people their products affect, and she set out to bridge it. Her work centers on interpreting human behavior through data.
She notes a notion prevalent in technology that treats humanity as flawed and technology as a savior, observing language such as ‘body hacks’ that suggests an aspiration to detach from humanity and optimize purely through technology.
With her fascination for the complexities and unpredictability of human nature, Chowdhury pursued a political science degree at MIT as an undergraduate. Later, after feeling dissatisfied with non-profit work that underutilized models and data, she pursued a master’s degree in quantitative methods at Columbia University.
Rumman Chowdhury’s innovative approach to accountability and governance in AI, her concept of moral outsourcing, and her commitment to red-teaming strategies provide a fresh perspective on responsible AI development and regulation. Her work strives to bridge the gap between technology and humanity, emphasizing the need for collective efforts to ensure AI’s responsible and ethical advancement.