In a surprising turn of events, Microsoft AI researchers accidentally exposed tens of terabytes of sensitive data while publishing a storage bucket of open-source training data on GitHub. The exposure has raised significant concerns about data security at one of the world's largest technology companies.
Discovery by Cloud Security Startup Wiz
Cloud security startup Wiz uncovered the lapse during its ongoing investigation into exposures of cloud-hosted data. According to reports, Wiz came across a GitHub repository belonging to Microsoft's AI research division. What it found was deeply concerning.
The repository was intended to provide open-source code and AI models for image recognition, and users were directed to download those models from an Azure Storage URL.