The federal government's retreat from AI safety research could leave a void of standards for risk-proofing AI models. The Center for Civil Rights and Technology at The Leadership Conference on Civil and Human Rights wants to help fill that gap with a new framework meant to help companies and other orgs design and deploy AI systems with equity in mind. The 36-page document covers each stage of the development process with considerations for protecting the civil rights of marginalized groups, as well as case studies and resources. It's aimed at companies and investors in "specific sectors that utilize consumer-focused tech," including those at particular risk for discrimination, like housing, banking, and healthcare.

"Private industry doesn't have to wait on Congress or the White House to catch up; they can start implementing this Innovation Framework immediately," Kostubh "KJ" Bagchi, VP of the Center for Civil Rights and Technology, said in a statement.

Founded in 1950, the Conference is a coalition of national organizations born out of the civil rights movement. The group formed the Center for Civil Rights and Technology as a joint project with its education and research arm in 2023 to advocate specifically around AI and privacy, industry accountability, and broadband access.

The framework's release came just before Commerce Secretary Howard Lutnick renamed the National Institute of Standards and Technology's AI Safety Institute, dropping the word "safety." NIST released a widely cited AI risk management framework in 2023 under President Biden that faced opposition from some Republicans, including Senator Ted Cruz, who called the org's AI safety standards "woke."

Keep reading here.—PK