Is Your SaaS Stack Secretly Consuming Sensitive Data?
- Julie Taylor
- Oct 9, 2024
- 5 min read
Updated: Aug 3
The SaaS paradigm is great for businesses that want to spend their time serving customers, not building technology. These businesses also typically assume their SaaS provider is keeping their data safe. Unfortunately, a quick scan of the headlines shows that isn't always true.
This is why almost 9 out of 10 survey respondents said SaaS security was a top concern. The rise of AI, and the challenges of AI data governance, will only worsen the problem, as the number of SaaS apps multiplies across the business and its environment.
Why Are SaaS Providers So Unreliable on Sensitive Data Protection?
They process and store vast amounts of sensitive data from a diverse user base, allowing attackers to hit multiple targets with a single attack.
Their easy integration means employees and business units sometimes bypass established IT procurement processes. The resulting misconfigurations and unnecessary privileges create vulnerabilities, making these apps low-hanging fruit for threat actors.
Nearly 50% of organizations that use Microsoft 365 believe they have fewer than 10 applications connected to the platform; in reality, the average per organization is over 1,000. When organizations don't even know how many SaaS apps they've deployed, it's unrealistic to assume they have any control over the data these shadow SaaS applications may be storing or consuming.
SaaS Providers Can’t Solve AI Data Governance For You
Recent findings suggest that 7 of the top 10 most commonly used AI applications can use your data to train their AI models. Organizations often have little choice but to grant apps like DocuSign, ChatGPT, and Grammarly access to proprietary business data and documents for everyday operations. However, the fact that these apps can further use and potentially share this data with third parties for AI enhancements is concerning.
Strict data privacy and AI governance regulations compel SaaS providers to disclose their data collection practices and allow users to opt out of data sharing for AI training. But some providers resort to murkier tactics to sidestep these requirements.
Shady Vendor Practices and Shadow IT
They bury necessary disclosures and opt-out procedures in lengthy terms of service that, let’s face it, users never read in full. Worse, opting in is often the default, while opt-out is intentionally obscure to deter users from exercising their rights.
In some cases, users must directly request an opt-out when it could be as simple as an automated toggle. This makes bypassing IT procurement processes even more problematic – individual users rarely take the time to investigate the permissions they’ve granted by default or how their company’s data is being shared, let alone navigate the complex opt-out procedures.
SaaS Sensitive Data Controls Are Failing, But You’re Paying the Price
Why does all of this even matter? Storing your confidential and proprietary files and documents in SaaS applications can open up a Pandora's box of potential issues. Once your data is out, there's no regaining control over it. With ever-changing and increasingly stricter data privacy regulations, the data you overlook today can land you in serious trouble tomorrow.
Inadvertent GenAI Data Leaks: AI models and LLMs sometimes regurgitate training text verbatim. The trade secrets, confidential information, and communications in documents you upload to DocuSign or check with Grammarly may be leaked unintentionally.
Shadow SaaS and Inconsistent Policies: Visibility gaps surrounding shadow SaaS mean these applications may be storing or accessing data they should never have had access to in the first place.
Supply Chain Vulnerabilities: In the race to adopt ground-breaking technologies faster, gaps between business leaders and IT teams mean SaaS applications don't always get security vetting before deployment. Unknown vulnerabilities in these platforms can compromise your organization's privacy, security, and reputation.
Compliance and Legal Challenges: If business documents containing your customers' private information are stored and subsequently used to train SaaS providers' AI models without consent, your company could face privacy breach lawsuits.
Insider Threats and Access Control Failures: SaaS apps often give people across departments broad access to all sorts of data. If permissions aren't set up carefully, it's all too easy for someone inside, whether by accident or on purpose, to share sensitive information they shouldn't. Sometimes it's just a sloppy setting like "anyone with the link can view"; other times it's more deliberate.
Data Residency and Jurisdiction Issues: With SaaS, your data might be sitting on servers halfway around the world, and you probably don't have much say about where it lives. That can create all sorts of compliance headaches and could even expose your sensitive information to another government's access.
The Growing Need for Data-Blind SaaS and AI Development
Sometimes uploading data to a SaaS provider is a necessary part of the workflow. Other times, providers demand permissions far beyond what they need in order to maximize profits. Cutting off any and all SaaS applications would be unrealistic; after all, data must be accessed and utilized to be worth anything at all. An ideal approach combines the convenience of SaaS with the ability to apply the principle of Least Privilege Access (LPA) to the data stored within. That ideal is realized in data-blind SaaS.
For most SaaS applications, transferring your data to their servers is unnecessary. Instead of relying on the guardrails SaaS providers put around their storage systems and infrastructure, prioritize alternative applications that process your data and documents directly on the infrastructure of your choice.
At Confidencial, we refer to this as “Data-blind SaaS”. This model depends on advanced cryptographic techniques to process data without exposing it to the SaaS provider or any third party. It allows organizations to utilize and process their data through third-party SaaS applications while maintaining control over where it lives and how it’s managed.
Confidencial Ushers in the Data-Blind SaaS Era and Superior Sensitive Data Protection
As a data-blind data security and workflow management platform, Confidencial is pioneering the era of Data-blind SaaS. We’re doing so in two ways:
Confidencial is data-blind by design: From Cloud Protector to SDX and Sign, all of our solutions are built to be data-blind, meaning we never see or access your data – it stays securely within your infrastructure.
Confidencial makes your SaaS and GenAI apps virtually data-blind: Confidencial automatically identifies and protects the most critical parts of your unstructured data using selective encryption, leaving the non-critical parts of files and documents accessible to SaaS applications and LLMs.
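The selective-encryption idea can be sketched in a few lines: detect sensitive spans in a document, encrypt only those spans, and leave everything else readable for SaaS apps and LLMs. The snippet below is a minimal illustration, not Confidencial's actual implementation; the regex "classifier" and the toy SHA-256 counter-mode keystream are stand-ins for a real detection engine and a real authenticated cipher such as AES-GCM.

```python
import base64
import hashlib
import re
import secrets


def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy SHA-256 counter-mode keystream. Illustrative only: a real
    # implementation would use an AEAD cipher such as AES-GCM.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))


# Stand-in "classifier": here, a regex for US SSN-shaped strings.
SENSITIVE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")


def selectively_encrypt(text: str, key: bytes) -> str:
    """Encrypt only the spans the classifier flags; the rest stays readable."""
    def repl(m: re.Match) -> str:
        nonce = secrets.token_bytes(12)  # fresh nonce per protected span
        ct = xor(m.group().encode(), keystream(key, nonce, len(m.group())))
        token = base64.urlsafe_b64encode(nonce + ct).decode()
        return "[ENC:" + token + "]"
    return SENSITIVE.sub(repl, text)


def selectively_decrypt(text: str, key: bytes) -> str:
    """Reverse the transformation for authorized holders of the key."""
    def repl(m: re.Match) -> str:
        raw = base64.urlsafe_b64decode(m.group(1))
        nonce, ct = raw[:12], raw[12:]
        return xor(ct, keystream(key, nonce, len(ct))).decode()
    return re.sub(r"\[ENC:([A-Za-z0-9_\-=]+)\]", repl, text)
```

Because only the flagged spans are replaced with ciphertext tokens, a downstream SaaS app or LLM can still read and process the surrounding document, while the critical values never leave your control in plaintext.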
Keep your data in the safest hands—your own—even as your SaaS footprint expands and new cloud apps pop up everywhere. In today’s AI-driven world, the risks of unintentional data leaks have skyrocketed, with generative AI models sometimes exposing private company information without warning.
Confidencial gives you a single, powerful platform to lock down sensitive unstructured data, no matter how many SaaS apps you add or how global your operations become. With Confidencial, you can embrace innovation and leverage AI tools confidently, knowing you always remain in full control of your data and your business.
Learn how Confidencial seamlessly integrates security and access control into the data, making data protection effortless. Request a free demo to see Confidencial in action now!