In recent years, content moderators working for Big Tech companies, who have risked consequences like PTSD, have begun to organize for better treatment. Now, Meta has announced a wide rollout of its AI support assistant for Facebook and Instagram, along with plans to “reduce our reliance on third-party vendors” employing humans for content enforcement.
“While we’ll still have people who review content, these systems will be able to take on work that’s better suited to technology, like repetitive reviews of graphic content or areas where adversarial actors are constantly changing their tactics, such as with illicit drug sales or scams,” the company said.








































