Livestream Moderation
Enabling safe and engaging live-streaming experiences.
The Livestream Moderation feature enables real-time, automated moderation of live streams. It offers both frame and audio detection, providing comprehensive content moderation with minimal latency.
To enable the livestream moderation feature, please contact our support team. The cost of moderation is $0.135 per minute of live video broadcast.
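For example, a 60-minute broadcast incurs 60 × $0.135 = $8.10 in moderation charges.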
Livestream moderation scans ongoing live streams so that inappropriate, offensive, and undesirable content is caught before it is published. The system detects undesirable content across five categories:
Pornographic Content
Violent Content
Prohibited Content
Inappropriate Content
Profanity Content
Once you have enabled livestream moderation, you need to set the confidence level for each moderation category. Confidence levels represent the degree of certainty the AI system has in identifying specific content categories. Adjusting these levels fine-tunes the system's sensitivity to meet your moderation needs.
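As a rough illustration, the per-category settings can be pictured as a map from each category to its two thresholds. This is a minimal sketch in TypeScript with hypothetical key names and an assumed 0–1 scale; the actual settings are managed through the console, not through an object like this.

```typescript
// Hypothetical per-category confidence settings (illustrative only).
// Each category carries a "flagged" and a "terminated" threshold,
// shown here at their documented defaults (40% and 75%).
const confidenceLevels: Record<string, { flagged: number; terminated: number }> = {
  pornographic:  { flagged: 0.40, terminated: 0.75 },
  violent:       { flagged: 0.40, terminated: 0.75 },
  prohibited:    { flagged: 0.40, terminated: 0.75 },
  inappropriate: { flagged: 0.40, terminated: 0.75 },
  profanity:     { flagged: 0.40, terminated: 0.75 },
};
```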
There are three levels of moderation:
Pass: The content passes moderation without issues.
Flagged: The content is flagged for review.
Terminated: The content is terminated and removed from the system.
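The mapping from a confidence score to one of these outcomes can be sketched as follows. This is an illustrative TypeScript snippet, not the platform's actual implementation: the `evaluate` function and its names are assumptions, and the real evaluation happens server-side.

```typescript
type ModerationResult = "pass" | "flagged" | "terminated";

interface CategoryThresholds {
  flagged: number;    // flag when confidence exceeds this (default 0.40)
  terminated: number; // terminate when confidence exceeds this (default 0.75)
}

// Map a detection confidence score to a moderation outcome,
// checking the stricter threshold first.
function evaluate(confidence: number, t: CategoryThresholds): ModerationResult {
  if (confidence > t.terminated) return "terminated";
  if (confidence > t.flagged) return "flagged";
  return "pass";
}

// A frame scored at 62% confidence exceeds the default flag threshold (40%)
// but not the termination threshold (75%), so it is flagged.
console.log(evaluate(0.62, { flagged: 0.40, terminated: 0.75 })); // "flagged"
```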
Flagged live streams will be marked as flagged in the livestream list menu for the admin. The system will not take any automatic actions; however, the admin can choose to stop or delete the livestream from the console.
Terminated live streams will be stopped, and end-users will not have access to playback. Posts containing terminated live streams will be soft-deleted. Users will see a terminated stream as if it ended normally, and no flag indicators will appear in the UIKit.
By default, the confidence levels are set as follows:
Flagged: Content with a confidence level higher than 40% will be flagged.
Terminated: Content with a confidence level higher than 75% will be terminated.
A threshold of "0" makes the system maximally sensitive: any detection, no matter how low its confidence, will trigger the corresponding action, potentially resulting in false positives even when the content is not inappropriate.
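Using the hypothetical `evaluate` sketch above, `evaluate(0.01, { flagged: 0, terminated: 0.75 })` returns `"flagged"`: with the flag threshold at 0, even a 1% confidence score triggers the flag.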