Meta is still working on changes recommended in last year’s civil rights audit

More than a year after its first civil rights audit, Meta says it is still working on a number of the changes the auditors recommended. The company has released an update detailing its progress on those recommendations.

According to the company, it has already implemented 65 of the 117 recommendations, while another 42 are listed as in progress or ongoing. However, there are six areas in which the company says it is still determining the “feasibility” of the changes, and two recommendations on which the company has declined to take further action. Notably, some of these touch on the most controversial issues raised in the original 2020 audit.

That original report, published in July 2020, said the company needed to do more to stop “pushing users towards extremist echo chambers.” It also said the company should address problems related to algorithmic bias, and it criticized the company’s handling of Donald Trump’s posts. In its update, Meta says it has not yet made all the changes the auditors sought regarding algorithmic bias. The company has implemented some of them, such as consulting outside experts and increasing the diversity of its AI team, but says other changes are still “under evaluation.”

Specifically, the auditors called for a mandatory company-wide process to “avoid, identify and address potential sources of bias and discriminatory outcomes in the development or implementation of AI and machine learning models” and to “regularly test existing algorithms and machine learning models.” Meta said that recommendation is “under evaluation.” Likewise, the audit recommended “mandatory training on understanding and mitigating sources of bias and discrimination in AI for all teams building machine learning algorithms and models.” That proposal is also listed as “under evaluation,” Meta said.

Some content moderation updates are also “under evaluation.” These include a recommendation to improve the “transparency and consistency” of decisions on moderation appeals, and a recommendation that the company study additional aspects of how hate speech spreads and how it can use that data to address targeted hate more quickly. The auditors also recommended that Meta “disclose additional information” about which users are targeted with voter suppression on its platform. That recommendation is likewise “under evaluation.”

The only two recommendations Meta rejected outright relate to its elections and census policies. “The auditors recommended that all user-generated reports of voter interference be routed to content reviewers to determine if the content violates our policies, and that we add an option to appeal decisions on content reported as voter interference,” Meta wrote. The company said it decided not to make those changes because they would slow down the review process and because “the vast majority of content reported as voter interference does not violate company policy.”

Separately, Meta also said it is building “a framework for studying our platforms and identifying opportunities to increase fairness” when it comes to race in the United States. To do so, the company will conduct “off-platform research” and analyze its own data using surnames and zip codes.


Naveen Kumar
