Amazon Transcribe Toxicity Detection is a tool designed to moderate social media platforms by identifying and classifying toxic speech. It uses audio cues, such as tone and pitch, to detect toxic intent, improving upon systems that focus only on specific terms. By flagging and categorizing toxic speech, it minimizes the volume of data that must be reviewed manually, enabling content moderators to manage the discourse on their platforms quickly and efficiently.
Toxicity detection covers the following categories of offensive content:
- Graphic: Uses detailed and vivid imagery to amplify discomfort or harm.
- Harassment or Abuse: Imposes disruptive power dynamics, affecting the recipient’s psychological well-being.
- Hate Speech: Criticizes, insults, or dehumanizes a person or group based on their identity.
- Insult: Includes demeaning, humiliating, or belittling language.
- Profanity: Contains impolite, vulgar, or offensive words or phrases.
- Sexual: Indicates sexual interest or activity using references to physical traits or sex.
- Violence or Threat: Includes threats to inflict pain or injury, or expressions of hostility.
- Toxicity: Contains potentially toxic words or phrases across any of the above categories.
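Each transcript segment comes back with an overall toxicity score plus per-category confidence scores, which a moderation pipeline can filter against a threshold. The sketch below is illustrative: the field names (`toxicity_detection`, `toxicity`, `categories`) mirror the documented output shape, but verify them against an actual job result, and the threshold of 0.6 is an arbitrary example.

```python
# Illustrative sketch: filter transcript segments whose overall toxicity
# score exceeds a moderation threshold. Field names follow the Transcribe
# toxicity output shape but should be checked against real job output.

def flag_toxic_segments(results: dict, threshold: float = 0.6) -> list:
    """Return segments whose overall toxicity score meets the threshold."""
    flagged = []
    for segment in results.get("toxicity_detection", []):
        if segment.get("toxicity", 0.0) >= threshold:
            # Keep the most likely category alongside the text for triage.
            categories = segment.get("categories", {})
            top = max(categories, key=categories.get) if categories else None
            flagged.append({"text": segment.get("text"), "top_category": top})
    return flagged

# Hand-made example payload (not real API output):
sample = {
    "toxicity_detection": [
        {"text": "hello there", "toxicity": 0.02,
         "categories": {"profanity": 0.01, "insult": 0.02}},
        {"text": "[redacted insult]", "toxicity": 0.91,
         "categories": {"profanity": 0.40, "insult": 0.91}},
    ]
}
print(flag_toxic_segments(sample))
# → [{'text': '[redacted insult]', 'top_category': 'insult'}]
```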
This plugin returns the transcript, along with speaker diarization information, for audio or video files in FLAC, MP3, MP4, Ogg, WebM, AMR, or WAV format stored in Amazon S3.
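A transcription job with toxicity detection and speaker diarization enabled can be sketched with the boto3 `start_transcription_job` call. The job name, S3 URI, and speaker count below are placeholder assumptions; note that toxicity detection supports US English audio.

```python
# Sketch of the request a plugin like this would send via boto3
# (pip install boto3). The job name and S3 URI are placeholders.

def build_job_params(job_name: str, media_uri: str) -> dict:
    """Build parameters for transcribe.start_transcription_job(**params)."""
    return {
        "TranscriptionJobName": job_name,
        "Media": {"MediaFileUri": media_uri},
        "LanguageCode": "en-US",  # toxicity detection supports US English
        # Enable all toxicity categories (Graphic, Harassment or Abuse, ...).
        "ToxicityDetection": [{"ToxicityCategories": ["ALL"]}],
        # Diarization: label up to 4 distinct speakers (assumed limit here).
        "Settings": {"ShowSpeakerLabels": True, "MaxSpeakerLabels": 4},
    }

params = build_job_params("moderation-job-001", "s3://example-bucket/clip.mp3")
# With AWS credentials configured, the job would then be started with:
#   import boto3
#   transcribe = boto3.client("transcribe")
#   transcribe.start_transcription_job(**params)
print(params["ToxicityDetection"])
```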
Made with ❤️ by wise:able
Discover our other Artificial Intelligence-based Plugins