New AI Content Moderation Feature Added to Storage Protect
5/6/2022 - Brian O'Neill

AI Content Moderation for Storage Protect

We are excited to announce the addition of a brand-new AI Content Moderation feature to Storage Protect. This new feature uses Machine Learning to automatically review and classify image files uploaded to cloud storage based on the degree to which they may be NSFW (Not Safe for Work).

The Not Safe for Work (NSFW) content screened for by this new addition is strictly content of a racy or pornographic nature.

Storage Protect Overview

Storage Protect is Cloudmersive’s leading-edge in-storage cloud security product, designed to accompany Amazon Web Services (AWS) S3, Azure Blob, Google Cloud (GCP), and SharePoint Online Site Drive storage.

Storage Protect: Virus Scanning Service

Under the hood, the baseline Storage Protect product calls upon the Cloudmersive Virus Scanning API to scan new files as they enter a customer’s cloud storage instance. The Virus Scanning API is equipped with over 17 million virus and malware signatures and boasts a sub-second typical response time due to high-speed, in-memory scanning.

Customers can configure Storage Protect to perform either Basic or Advanced scans. The former checks files for viruses, malware, trojans, ransomware, and spyware; the latter expands that list to include executables, invalid files, scripts, and more. After scanning, files are classified as “clean” or “infected,” and any identified threats can then be quarantined, deleted, or otherwise handled according to outcome actions predetermined by the customer.
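To make the outcome-action flow concrete, here is a minimal sketch of how a scan result might be dispatched to a configured action. The "clean"/"infected" classifications come from the description above; the function name and action names ("allow", "quarantine", "delete") are illustrative assumptions, not the actual Storage Protect configuration interface.

```python
# Hypothetical dispatch of a Storage Protect scan result to an outcome action.
# "clean" and "infected" are the classifications described in the article;
# the action names and this function are illustrative assumptions only.

def choose_outcome_action(classification: str, infected_action: str = "quarantine") -> str:
    """Map a scan classification to a customer-configured outcome action."""
    if classification == "clean":
        return "allow"  # clean files pass through untouched
    if classification == "infected":
        if infected_action not in {"quarantine", "delete"}:
            raise ValueError(f"unsupported outcome action: {infected_action}")
        return infected_action  # e.g. move to a quarantine location or delete
    raise ValueError(f"unexpected classification: {classification}")
```

In a real deployment the outcome action would be applied to the file in cloud storage (for example, moving it to a quarantine bucket) rather than returned as a string.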

Storage Protect: AI Content Moderation Service

Storage Protect now includes the services of the Cloudmersive NSFW Image Classification API. This additional API uses Machine Learning to scour the pixels within each newly uploaded image file and classify the degree to which that content is racy or pornographic (or neither).

For each file, this operation returns both a classification score (scaled between 0 and 1, with lower values indicating that little racy content was detected) and a probabilistic description of the degree of raciness in the image based on that score: 0–0.2 indicates low probability, 0.2–0.8 medium probability, and 0.8–1.0 high probability.
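The score-to-band mapping above can be sketched in a few lines. The thresholds come from the article; the function name and the handling of exact boundary values (which band 0.2 and 0.8 fall into) are assumptions for illustration.

```python
# Sketch of mapping the 0-1 NSFW classification score to the probability
# bands described in the article. Boundary handling (score exactly 0.2 or
# 0.8) is an assumption; the article does not specify it.

def raciness_band(score: float) -> str:
    """Classify a 0-1 raciness score as low/medium/high probability."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be between 0 and 1")
    if score < 0.2:
        return "low probability"
    if score < 0.8:
        return "medium probability"
    return "high probability"
```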

For any additional information, review our whitepaper on AI Content Moderation for Storage Protect, or contact your Cloudmersive representative for more details.
