In a troubling security misstep, Microsoft accidentally exposed 38 terabytes of private data. The incident, flagged by researchers at the cloud security firm Wiz, occurred while a Microsoft AI research team was publishing open-source AI training materials on GitHub. In this article, we examine the nature of the exposed data, how Wiz discovered the issue, the misconfigurations behind it, the potential consequences, and Microsoft’s response.
Nature of the Exposed Data
The exposed data includes a disk backup of two employees’ workstations, corporate secrets, private keys, passwords, and over 30,000 internal Microsoft Teams messages. This breach highlights the sensitive and valuable information that Microsoft failed to adequately protect.
Discovery of the Issue
Wiz, a cloud data security startup founded by former Microsoft software engineers, discovered the issue during routine internet scans for misconfigured storage containers. Their proactive approach to identifying vulnerabilities led them to uncover this significant data exposure, emphasizing the importance of thorough security monitoring and assessment.
Use of Azure SAS Tokens for Data Sharing
To share the files, Microsoft used an Azure feature called Shared Access Signature (SAS) tokens, which generate URLs that grant access to data in Azure Storage accounts. The token in question, however, was scoped to the entire storage account rather than to the specific files being shared. Wiz’s scan revealed that the account contained an additional 38 terabytes of data, including personal computer backups of Microsoft employees.
Misconfigurations and Security Concerns
Aside from the overly permissive access scope, Wiz discovered that the SAS token was also misconfigured to allow “full control” permissions instead of read-only access. This oversight created fertile ground for potential cyberattacks: an attacker could have injected malicious code into all the AI models in the storage account, infecting any user who trusted Microsoft’s GitHub repository and amplifying the scale and impact of the exposure.
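The two misconfigurations described above are both visible in a SAS URL’s query string: the `sp` parameter encodes the granted permissions (read, add, create, write, delete, list) and `se` encodes the expiry time. As a rough illustration, the sketch below audits a hypothetical SAS URL (the hostname, container, and signature are made up for this example) and flags tokens that grant more than read/list access or that live for longer than a year:

```python
from urllib.parse import urlparse, parse_qs
from datetime import datetime, timezone

# Hypothetical SAS URL illustrating both reported misconfigurations:
# 'sp=racwdl' grants full control instead of read-only, and 'se' (the
# expiry) is decades in the future.
sas_url = ("https://example.blob.core.windows.net/models/model.ckpt"
           "?sv=2021-08-06&sr=c&sp=racwdl"
           "&se=2051-10-06T00:00:00Z&sig=REDACTED")

def audit_sas(url):
    """Return a list of findings for an overly permissive SAS URL."""
    params = parse_qs(urlparse(url).query)
    findings = []

    # Anything beyond read (r) and list (l) is more than sharing requires.
    perms = params.get("sp", [""])[0]
    if set(perms) - {"r", "l"}:
        findings.append(f"overly broad permissions: sp={perms}")

    # Flag tokens whose expiry is more than a year away.
    expiry = params.get("se", [""])[0]
    exp = datetime.fromisoformat(expiry.replace("Z", "+00:00"))
    if (exp - datetime.now(timezone.utc)).days > 365:
        findings.append(f"long-lived token: expires {expiry}")

    return findings

print(audit_sas(sas_url))
```

Running this against the sample URL reports both problems. A correctly issued token for public file sharing would carry `sp=r` and a short-lived expiry.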
The potential consequences and implications of this security misstep are severe. With the ability to inject malicious code into AI models, an attacker could compromise critical business operations, leading to devastating consequences for both Microsoft and its users. The breach also raises concerns about the trustworthiness and integrity of the data hosted on Microsoft’s platforms.
Security Concerns with the File Format
Adding to the security concerns, the exposed model files were in the ‘ckpt’ format, produced by the widely used TensorFlow library and serialized with Python’s pickle module. Wiz emphasizes that this file format can serve as a gateway for arbitrary code execution, presenting significant risks for anyone downloading and loading these models.
Microsoft’s Response
Upon being informed of the exposure, Microsoft’s security response team acted promptly, invalidating the SAS token within two days of the initial disclosure in June. While the response was swift, questions remain about the effectiveness of Microsoft’s initial security measures and protocols.
The recent security misstep at Microsoft highlights the ongoing battle against cyber threats and the urgent need for robust data protection measures. As users increasingly rely on cloud services, companies must prioritize the security of their infrastructure to prevent such breaches. This incident serves as a cautionary tale for organizations worldwide, emphasizing the need for comprehensive security audits, robust access controls, and constant vigilance in the face of an ever-evolving threat landscape.