It’s probably not surprising that, according to a 2018 Gartner survey on SaaS migration, 97% of respondents said their organization had already deployed at least one SaaS application. Today, a significant number of cloud applications have been elevated to the status of ‘business-critical system’ in just about every enterprise. These are systems the business cannot effectively operate without – systems used either to inform decisions or to take important action directly.

It’s no wonder cloud applications like CRM, support, ERP or e-commerce tools have become prime targets for DataOps teams looking for answers about what is happening in the business and why. After all, think about how much business data converges in a CRM system – particularly when it’s integrated with other business systems. It’s a mastered data goldmine!

DataOps teams often identify a high-value target application, like a CRM system, and then explore ways to capture and ingest data from the application via the system’s APIs. In the case of, say, Salesforce, they might explore the Change Data Capture and Bulk APIs. Various teams with different data consumption needs might then use these APIs to capture data for their particular use case, inevitably leading to exponential growth in data copies and compliance exposure. (After all, how do you enforce GDPR or WORM compliance for data replicas tucked away God knows where?!). 
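
To make that pattern concrete, here’s a minimal sketch in Python (using the requests library) of what API-based capture often looks like against Salesforce’s Bulk API 2.0. The instance URL, access token and query are placeholders, and a real pipeline would add OAuth, paging, retries and API-limit handling.

```python
# Minimal sketch of pulling CRM data via Salesforce's Bulk API 2.0 query endpoint.
# The instance URL and access token are placeholders; in practice the token comes
# from an OAuth flow and the pipeline handles paging, retries and daily API limits.
import time
import requests

INSTANCE_URL = "https://yourinstance.my.salesforce.com"   # hypothetical org
ACCESS_TOKEN = "<OAUTH_ACCESS_TOKEN>"                     # placeholder
API = f"{INSTANCE_URL}/services/data/v58.0"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}", "Content-Type": "application/json"}

# 1. Create a Bulk API 2.0 query job for the records we want to capture.
job = requests.post(
    f"{API}/jobs/query",
    headers=HEADERS,
    json={"operation": "query",
          "query": "SELECT Id, Name, LastModifiedDate FROM Account"},
).json()

# 2. Poll until Salesforce has finished preparing the result set.
while True:
    state = requests.get(f"{API}/jobs/query/{job['id']}", headers=HEADERS).json()["state"]
    if state in ("JobComplete", "Failed", "Aborted"):
        break
    time.sleep(5)

# 3. Download the results as CSV -- one more copy of CRM data now lives outside the app.
csv_rows = requests.get(
    f"{API}/jobs/query/{job['id']}/results",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}", "Accept": "text/csv"},
).text
```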

When they encounter API limitations or even application performance issues, DataOps teams then start to replicate data into nearby data lakes. This lets them create centralized consumption points for the SaaS data outside of the application, where storage costs are more favorable and access is ubiquitous. It’s also where teams typically take a deep breath and start a more organized requirements-gathering process, beginning with the question of “who needs what data and why?”
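
Once the extract exists, landing it in a data lake is often just a few more lines. Here’s a hedged sketch, again in Python, of writing that CSV into a date-partitioned Parquet layout on object storage; the bucket, path convention and library stack (pandas, pyarrow, s3fs) are assumptions, not a prescription.

```python
# Minimal sketch of landing a raw SaaS extract in a data lake as partitioned Parquet.
# The bucket name and prefix are hypothetical; assumes pandas, pyarrow and s3fs are
# installed and cloud credentials are configured in the environment.
import io
from datetime import datetime, timezone

import pandas as pd

def land_in_lake(csv_rows: str, entity: str = "account") -> str:
    """Convert a raw CSV extract into Parquet under a date-partitioned lake path."""
    df = pd.read_csv(io.StringIO(csv_rows))
    partition = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    path = f"s3://example-data-lake/raw/salesforce/{entity}/dt={partition}/part-000.parquet"
    df.to_parquet(path, index=False)   # s3fs handles the S3 write behind the scenes
    return path

# Downstream teams now query the lake copy instead of hammering the application's APIs.
```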

Meanwhile, in a parallel world, IT teams implement data backup strategies for those same cloud applications. If something bad happens (say, data corruption), these critical business systems need to be rapidly recovered and brought back online to keep the business going. Standard practice is to take snapshots of the data at regular intervals, either through DIY scripts or with SaaS backup tools. In most scenarios, the backup data is put in cold storage because… well, that’s what you do with data replicas whose sole purpose is to act as an ‘insurance policy’ in case something goes wrong.
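
A “DIY script” version of that backup motion can be as simple as the Python sketch below: serialize a point-in-time set of records and park it in an archive storage class. The bucket name, key layout and the assumption that the records are already in hand are all hypothetical; real backup tools layer cataloguing, retention and restore workflows on top.

```python
# Minimal sketch of a DIY SaaS backup: snapshot a set of records and park them in
# cold storage. Bucket and key layout are hypothetical; retention, cataloguing and
# restore tooling are left out entirely.
import gzip
import json
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

def snapshot_to_cold_storage(records: list[dict], system: str = "salesforce") -> str:
    """Write one point-in-time snapshot as gzipped JSON to an archive storage class."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    key = f"backups/{system}/snapshot-{stamp}.json.gz"
    body = gzip.compress(json.dumps(records).encode("utf-8"))
    s3.put_object(
        Bucket="example-saas-backups",      # hypothetical bucket
        Key=key,
        Body=body,
        StorageClass="DEEP_ARCHIVE",        # cheap, slow-to-restore 'insurance policy' tier
    )
    return key
```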

With all of these teams trying to consume the same data in the same organization, it’s no surprise that costs and maintenance cycles quickly spiral out of control. ESG found that for every TB of production data, another 9 TB of secondary data is typically generated – rapidly offsetting any savings from ever-decreasing storage costs on public clouds.

So why are we inflicting this 9X+ data multiplier on ourselves?

One reason is convenience. It’s just easier to walk up, grab what we need and walk away. But convenience often comes at the cost of quality and security, and it adds risk: how do you know the data you are grabbing is the best possible dataset the organization has on a particular entity? This question is especially important in organizations with strong data mastering initiatives. If your replicas contain sensitive data that you are tucking away in some generally unknown place, are you expanding the organization’s attack surface? Are there governance or compliance regulations that your data may fall under?

Another reason is that “we’ve always done it this way.” The status quo of treating backup data as an insurance policy, separate and unrelated to SaaS data ingestion for other scenarios, reaches back to before the days of SaaS applications themselves – when data backup and data ingestion were two separate motions performed at the database level.

How we do things is just as important as doing them in the first place. And changing HOW we do things is hard. It starts with the realization that the status quo no longer applies – in this case, that cloud applications allow for fundamentally different data consumption patterns, and that backup tools can be the perfect vehicle to take back ownership and control of your cloud application data and to re-use backed-up data for every other data consumption need across your organization.