Elon Musk fired Twitter's 'Ethical AI' team


As more problems with AI have surfaced, including biases around race, gender, and age, many tech companies have installed “ethical AI” teams ostensibly dedicated to identifying and mitigating such issues.

Twitter’s META unit (Machine Learning Ethics, Transparency, and Accountability) has been more progressive than most in releasing details of problems with the company’s AI systems and in allowing outside researchers to probe its algorithms for new issues.

Last year, after Twitter users noticed that a photo-cropping algorithm seemed to favor white faces when choosing how to crop images, Twitter took the unusual step of letting its META unit publish details of the bias it had uncovered. The group also launched one of the first-ever “bias bounty” contests, which let outside researchers test the algorithm for other problems. Last October, the team, led by Rumman Chowdhury, also published details of inadvertent political bias on Twitter, showing that right-leaning news sources were, in fact, promoted more than left-leaning ones.

Many outside researchers saw the layoffs as a blow, not just to Twitter, but to efforts to improve AI. “What a tragedy,” tweeted Kate Starbird, an associate professor at the University of Washington who studies online misinformation.


“The META team was one of the only good case studies of a technology company running an AI ethics group that interacts with the public and academia with substantial credibility,” says Ali Alkhatib, director of the Center for Applied Data Ethics at the University of San Francisco.

Alkhatib says Chowdhury is incredibly well regarded within the AI ethics community and that her team did genuinely valuable work holding Big Tech to account. “There aren’t many corporate ethics teams that deserve to be taken seriously,” he says. “Theirs was one of those whose work I taught in classes.”

Mark Riedl, a professor studying AI at Georgia Tech, says the algorithms used by Twitter and other social media giants have a huge impact on people’s lives and need to be studied. “It’s hard to discern from the outside whether META had an impact inside Twitter, but the promise was there,” he says.

Riedl adds that letting outsiders probe Twitter’s algorithms was an important step toward more transparency and understanding of AI issues. “They were becoming a watchdog that could help the rest of us understand how AI was affecting us,” he says. “META researchers had outstanding credentials with a long history of studying AI for social good.”

As for Musk’s idea of open-sourcing the Twitter algorithm, the reality would be far more complicated. Many different algorithms affect how information is presented, and they are hard to understand without the real-time data they are fed in the form of tweets, views, and likes.

The idea that there is a single algorithm with an explicit political slant might oversimplify a system that can harbor more insidious biases and problems. Uncovering them is precisely the kind of work Twitter’s META group was doing. “There aren’t many groups that rigorously study their own algorithms’ biases and errors,” says Alkhatib of the University of San Francisco. “META did that.” And now that is no longer the case.
