Detecting Unsafe Images in Cloud Storage: Combine AI and Human Insight

By Smoothwall
Published 24 April, 2025
3-minute read

Harmful and inappropriate images can easily go unnoticed in a school’s cloud storage, presenting a safeguarding risk to students, staff and networks. The most effective way to address this issue is to combine the power of AI technology with human insight. This approach ensures that large amounts of data can be checked in minutes, while staff time is reserved for delivering accuracy and appropriate intervention.

This article explains why the combination of AI and human insight is so powerful, and introduces a safeguarding solution that utilises this approach to ensure school cloud storage remains free of unsafe images. 

Using AI to detect harmful imagery in cloud storage

Manually checking school cloud storage for unsafe images is a time-consuming task. Designated safeguarding leads (DSLs) and IT staff often have heavy workloads, leaving little time for this necessary but arduous process. Even when such checks are carried out, there is always a risk that human error will result in some harmful images being missed.

AI technology can assess huge swathes of data at speeds that are far beyond human capability. When applied to school drives, this means that harmful and inappropriate images can be identified in minutes, no matter the size of the cloud storage. These scans can be set to run automatically, ensuring minimal disruption to network users and staff. 

Advanced AI software can also categorise images that are identified as potentially harmful or inappropriate. For example, the content may be flagged as “porn” or “weapons”. Assigned staff members can then review alerts, which contain full contextual data, so that informed decisions can be made about the appropriate response.
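
To make this concrete, here is a minimal, illustrative sketch of what such a scan-and-categorise step could look like in Python. It is not Smoothwall’s implementation: classify_image is a stand-in for a real vision model, and the alert fields simply mirror the kind of contextual data described above.

```python
from datetime import datetime, timezone
from pathlib import Path

IMAGE_SUFFIXES = {".jpg", ".jpeg", ".png", ".gif", ".webp"}

def classify_image(path: Path) -> tuple[str, float]:
    """Stand-in for an AI image classifier.

    A real deployment would call a vision model here; this stub
    returns 'safe' so the sketch runs end to end.
    """
    return "safe", 0.99

def scan_storage(root: Path, threshold: float = 0.8) -> list[dict]:
    """Walk a synced copy of the cloud drive and flag risky images."""
    alerts = []
    for path in root.rglob("*"):
        if path.suffix.lower() not in IMAGE_SUFFIXES:
            continue
        category, confidence = classify_image(path)
        if category != "safe" and confidence >= threshold:
            alerts.append({
                "file_path": str(path),    # where the image lives
                "category": category,      # e.g. "porn" or "weapons"
                "confidence": confidence,  # classifier score, 0.0-1.0
                "detected_at": datetime.now(timezone.utc).isoformat(),
            })
    return alerts  # handed to a human reviewer, never auto-actioned

```

The important design point is the last line: the software only flags; a person decides what happens next.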

The importance of human intervention

As impressive as AI technology is, in a context as serious as student safeguarding, human insight is still required. A human eye not only improves accuracy; it is also vital for identifying key contextual clues and understanding the intricacies of human behaviour.

A DSL, for example, has the breadth of experience to notice subtle safeguarding risks in images. They can weigh these alongside the wider context of an incident and their knowledge of the student(s) involved, building a more complete and informative picture.

The AI technology lays the necessary groundwork, but it's important to leave decision-making on appropriate action to the professionals. 

Cloud Scan: Powered by AI, perfected with human review

Cloud Scan is a brand-new safeguarding solution that enables schools to quickly identify and remove harmful images hidden in cloud storage. It uses an AI image classification tool to scan a school’s cloud storage, with human intervention taking over at the final, crucial stages. In other words, it balances automation and human review for better safeguarding outcomes.

Cloud Scan runs automatically every 24 hours. When potentially harmful images are identified, alerts with full contextual information are created and a daily summary email is sent to the DSL, who can then mark each image as safe or remove it from the system. There’s no need for arduous manual checks, and no requirement for IT staff to be involved.
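
As an illustration of the review loop described above, the sketch below shows how a daily summary email and the DSL’s mark-safe-or-remove decision might be wired up in Python. The names and addresses are hypothetical, the alert dicts follow the shape of the earlier sketch, and none of this describes Cloud Scan’s actual internals.

```python
import smtplib
from email.message import EmailMessage
from pathlib import Path

def send_daily_summary(alerts: list[dict], dsl_address: str) -> None:
    """Email the DSL a digest of the last 24 hours of flagged images."""
    msg = EmailMessage()
    msg["Subject"] = f"Image scan summary: {len(alerts)} item(s) to review"
    msg["From"] = "scanner@example-school.org"  # hypothetical sender
    msg["To"] = dsl_address
    body = "\n".join(
        f"- {a['category']} ({a['confidence']:.0%}): {a['file_path']}"
        for a in alerts
    )
    msg.set_content("Flagged in the last 24 hours:\n" + body)
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)

def apply_decision(alert: dict, decision: str) -> None:
    """Apply the DSL's verdict: 'safe' keeps the file, 'remove' deletes it."""
    if decision == "remove":
        Path(alert["file_path"]).unlink(missing_ok=True)
    # A 'safe' verdict leaves the file untouched; either way the
    # decision should be logged for the safeguarding record.
```

In practice the scan itself would be triggered by a scheduler such as cron, and removal would go through the cloud provider’s API rather than the local filesystem; the sketch simply shows where the human decision sits in the loop.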

With Cloud Scan:

  • Harmful images are identified within 24 hours
  • Students and networks are protected from unsafe content
  • Schools retain full control over decision-making
  • Staff time is focused on review and intervention, not manual checks

Book a free demo of Cloud Scan for your school, college or multi-academy trust (MAT)

Learn more about Cloud Scan and schedule a no-obligation walkthrough with one of our digital safety experts by contacting enquiries@smoothwall.com. We’re ready to help. 
