Image Moderation
Creating safer online communities for all!
Image moderation is a feature that scans every image uploaded to posts and messages and moderates it for inappropriate, offensive, or unwanted content before the image is published.
To this end, we're partnering with Amazon's Rekognition AI technology to detect and moderate images containing nudity, suggestive content, violence, or disturbing imagery, allowing you to create a safer online community for your users without requiring any human intervention. For more information on how Rekognition works, see Amazon's Rekognition documentation.
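Purely for illustration, here is a minimal sketch of the kind of scan Rekognition performs, using boto3 (the AWS SDK for Python) and its real detect_moderation_labels call. The platform makes this call on your behalf, so you never need to run it yourself; the region and file name below are assumptions.

```python
import boto3

# Hypothetical client setup -- region is an assumption for illustration.
rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("uploaded_post_image.jpg", "rb") as f:  # hypothetical file
    image_bytes = f.read()

response = rekognition.detect_moderation_labels(
    Image={"Bytes": image_bytes},
    MinConfidence=0,  # return every label so all confidence scores are visible
)

# Each label carries a Name (e.g. "Explicit Nudity") and a Confidence
# score between 0 and 100.
for label in response["ModerationLabels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}')
```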
Please note that image moderation is disabled by default. The following steps will enable it for your network:
Log into Console
Go to Moderation > Image Moderation
Toggle "Enable image moderation" to "ON"
Note that once you've enabled image moderation, you will also need to set a confidence level for each moderation category. By default, each category's confidence level is set to 0. Leaving any category at 0 will result in all images being blocked from upload, regardless of whether they contain any inappropriate content, because any confidence score Rekognition returns is equal to or higher than 0.
Setting a higher confidence threshold is likely to yield more accurate detections. If you specify a confidence value of less than 50, you are more likely to get false positives than with a higher value, so specify a value below 50 only when lower-confidence detection is acceptable for your use case.
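To make the comparison concrete, here is a minimal sketch of the blocking rule described above, assuming four categories and a "score equal to or higher than the threshold blocks the image" rule. The function and variable names are hypothetical, not part of the SP Console or any SDK.

```python
def is_image_blocked(scores: dict, thresholds: dict) -> bool:
    """Block the image if ANY category's returned confidence score is
    equal to or higher than the threshold configured for that category."""
    return any(scores[category] >= thresholds[category] for category in thresholds)

# With every threshold left at the default of 0, any returned score
# (even 0) satisfies score >= threshold, so every upload is blocked:
thresholds = {"Nudity": 0, "Suggestive": 0, "Violence": 0, "Disturbing": 0}
scores = {"Nudity": 0.0, "Suggestive": 0.0, "Violence": 0.0, "Disturbing": 0.0}
print(is_image_blocked(scores, thresholds))  # True
```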
To disable image moderation, simply toggle "Enable image moderation" back to "OFF". Uploaded images will no longer go through the image recognition service, and inappropriate content will no longer be detected.
Can we set a confidence level for just one category and leave the rest at '0'?
Doing so will result in all images being blocked from upload, even those containing no offensive or unwanted imagery. This is because Amazon Rekognition scans each uploaded image against all four categories (i.e. "Nudity", "Suggestive", "Violence", and "Disturbing") and returns a confidence score for each. If any returned score is equal to or higher than the confidence level set in the SP Console, the image is blocked. Currently, we don't have a feature to moderate just one category, but you can try setting a different confidence level for each category to see what works best for your use case, as in the sketch below.
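For example, one hypothetical configuration that biases moderation toward a single category (assuming the console accepts a threshold of 100; these values are illustrative, not recommendations):

```python
thresholds = {
    "Nudity": 80,       # the only category you effectively moderate
    "Suggestive": 100,  # triggers only on a returned score of exactly 100
    "Violence": 100,
    "Disturbing": 100,
}
```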
What happens when an uploaded image is detected to have unwanted imagery?
It depends on the confidence threshold set in the SP Console for each moderation category. Amazon Rekognition scans the uploaded image and returns a confidence score for each category. If the score returned for any category is equal to or higher than the confidence level set in Console for that category, SP blocks the image from being uploaded, as illustrated below:
Example 1
Settings in Console: "Nudity: 99; Suggestive: 80; Violence: 10; Disturbing: 50"
Amazon Rekognition confidence scores for the uploaded image: "Nudity: 80; Suggestive: 85; Violence: 5; Disturbing: 49"
Image blocked: YES, because the confidence score returned for the "Suggestive" category (85) is equal to or higher than the threshold set in the SP Console (80).

Example 2
Settings in Console: "Nudity: 99; Suggestive: 80; Violence: 10; Disturbing: 50"
Amazon Rekognition confidence scores for the uploaded image: "Nudity: 80; Suggestive: 79; Violence: 5; Disturbing: 49"
Image blocked: NO, because no category's returned score is equal to or higher than the threshold set in the SP Console.
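The same two examples, run through the hypothetical "score equal to or higher than threshold" rule sketched earlier (all names here are illustrative):

```python
thresholds = {"Nudity": 99, "Suggestive": 80, "Violence": 10, "Disturbing": 50}

example_1 = {"Nudity": 80, "Suggestive": 85, "Violence": 5, "Disturbing": 49}
example_2 = {"Nudity": 80, "Suggestive": 79, "Violence": 5, "Disturbing": 49}

for name, scores in [("Example 1", example_1), ("Example 2", example_2)]:
    blocked = any(scores[c] >= thresholds[c] for c in thresholds)
    print(f"{name} blocked: {blocked}")
# Example 1 blocked: True  (Suggestive 85 >= 80)
# Example 2 blocked: False (every score is below its threshold)
```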