The online store is launching a new service: users can now edit and enhance product descriptions, much as in wiki communities. In other words, customers can suggest edits and comment on changes made by others. The store needs a tool that detects toxic comments and flags them for moderation.
Train a model that classifies comments as toxic or non-toxic. You have a dataset of user comments annotated for toxicity.
The goal is to build a model with an F1 score of at least 0.75.
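A minimal baseline sketch of the task, not the required solution: TF-IDF features plus logistic regression, evaluated with F1 on a held-out split. The inline corpus and the column names `text` and `toxic` are hypothetical stand-ins for the real annotated dataset.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Hypothetical labelled comments: 1 = toxic, 0 = acceptable.
data = pd.DataFrame({
    "text": [
        "great edit, thanks for the fix",
        "you are a complete idiot",
        "nice improvement to the description",
        "this is garbage and so are you",
        "helpful change, well sourced",
        "shut up, nobody asked you",
        "clear wording, approved",
        "what a stupid pointless edit",
        "good catch on the typo",
        "you people are morons",
        "thanks, this reads much better",
        "delete your account, loser",
    ],
    "toxic": [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1],
})

# Stratified split keeps the toxic/non-toxic ratio in both parts.
X_train, X_test, y_train, y_test = train_test_split(
    data["text"], data["toxic"],
    test_size=0.25, stratify=data["toxic"], random_state=42,
)

# A pipeline ensures the vectorizer is fitted only on the training split.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(class_weight="balanced", max_iter=1000)),
])
model.fit(X_train, y_train)

preds = model.predict(X_test)
score = f1_score(y_test, preds)
print(f"F1 on held-out split: {score:.2f}")
```

On the real dataset the same pipeline would be tuned (n-grams, regularization strength, class weighting) and compared against stronger models until the 0.75 F1 target is met.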