After being forced out of Google, Timnit Gebru launches her own AI research lab.
Almost exactly a year ago, Timnit Gebru, co-lead of Google's AI ethics team and one of the top experts in the field, was fired after raising concerns internally. Now she has set up shop with the new DAIR lab, specializing in exactly the kinds of topics Google seemed eager to sideline.
The Distributed Artificial Intelligence Research Institute is, according to its press release, “an independent, community-based research institute created to counter Big Tech's pervasive influence on the research, development and deployment of AI.”
Designed from the ground up to incorporate and highlight diverse perspectives and to challenge the processes used by companies such as Google, Amazon, and Facebook/Meta, DAIR will be independently funded. It will focus on publishing research without the publish-or-perish pressure of academia or the paternalistic interference of big companies that can chill researchers, as Gebru explained to the Washington Post.
To date, the lab has raised $3.7 million from the Ford Foundation, MacArthur Foundation, Kapor Center and Open Society Foundations. That should be enough to get started and pay researchers well, making this kind of position an alternative to working at one of the deep-pocketed companies that fund so much AI research.
We have asked Gebru for more information on DAIR's approach and upcoming research directions, and will update this post if we hear back. In the meantime, the two people already on staff give an idea of what to expect. Safiya Noble, author of Algorithms of Oppression and a MacArthur genius grant recipient, will serve on DAIR's advisory board. We recently hosted her on a panel at TC Sessions: Justice, where she spoke about the dangers of treating technology as neutral or beneficial as it becomes more widespread and “normal.”
Raesetje Sefala is DAIR's first research fellow; her recent work uses satellite imagery to quantify geographic and economic segregation in South Africa.
“AI needs to be brought back down to earth,” Gebru said. “It has been elevated to a superhuman level that leads us to believe it is both inevitable and beyond our control. When AI research, development and deployment is rooted in people and communities from the start, we can get in front of these harms and create a future that values equity and humanity.”