Data protection strategy

Like everything in IT, the security afforded by your data protection strategy comes with costs. You have a budget for software, hardware, on-premises storage, colocation, long-term cloud storage and even the connections you maintain to keep redundant sites synchronized. When economic uncertainty sets in, organizations start looking for ways to cut costs. When budgets shrink, how do you successfully manage all those moving parts while continuing to implement your data protection strategy?

This post examines ways to reduce data protection costs without reducing your data protection.

The knee-jerk reaction: Retain data for less time

In crunch times, when data volumes are rising and budgets are shrinking, a common approach is to shorten data retention periods. For example, instead of retaining backups for seven years, you configure your backup software to delete them after six. Why? To reduce the amount of storage space those backups consume, whether in your data center or in the cloud.

Naturally, that lowers your spending in the short term. But if you have compliance obligations to keep that data for seven years, you’re betting on not being audited. Even more seriously, you’re betting on never needing to restore backed-up data from that seventh year. How lucky do you feel?
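
To put rough numbers on that trade-off, here’s a back-of-the-envelope sketch in Python of what trimming one year off a seven-year retention policy saves. The volume and pricing figures are entirely hypothetical:

```python
# Back-of-the-envelope sketch: what does trimming one year of retention
# actually save? All figures below are hypothetical placeholders.

annual_backup_tb = 120        # TB of backups retained per year of policy
cost_per_tb_month = 10.0      # USD per TB per month, illustrative rate

def monthly_retention_cost(years: int) -> float:
    """Monthly storage cost of keeping `years` worth of backups."""
    return years * annual_backup_tb * cost_per_tb_month

seven = monthly_retention_cost(7)
six = monthly_retention_cost(6)
print(f"7-year retention: ${seven:,.0f}/month")
print(f"6-year retention: ${six:,.0f}/month")
print(f"Saved: ${seven - six:,.0f}/month (~{(seven - six) / seven:.0%})")
# ~14% saved -- but only if no regulation requires that seventh year.
```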

Adjusting your data protection strategy to lower costs

It makes more sense to broaden your data protection strategy by investing in smarter storage hardware, or in secondary storage software that reduces your storage requirements through compression and deduplication. Consider these tactics for lowering costs without risking compliance problems.

Evaluate your data retention policies

It’s not necessary to roll your data retention all the way back to the minimum your storage solution allows. But you may be able to lower costs by adjusting retention to the minimum required for each business purpose.

Note that the business purpose varies by business function, so you could have different retention policies across business units, verticals (such as health care vs. finance) and departments (such as marketing vs. manufacturing).
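
One way to keep those differing policies manageable is to encode them in a single lookup that your backup tooling consults. Here’s a minimal sketch; the functions and retention periods are hypothetical examples, not legal guidance:

```python
# A minimal sketch of per-function retention policy. The retention
# periods are hypothetical -- regulators and legal counsel set the
# real numbers for your organization.

RETENTION_YEARS = {
    "finance":       7,   # e.g., tax and audit obligations
    "health_care":   6,   # e.g., patient-record rules
    "marketing":     1,   # short-lived campaign data
    "manufacturing": 3,   # process and quality records
}

DEFAULT_YEARS = 5  # fallback when a function has no explicit policy

def retention_for(business_function: str) -> int:
    """Return the retention period (in years) for a business function."""
    return RETENTION_YEARS.get(business_function, DEFAULT_YEARS)

print(retention_for("marketing"))      # 1
print(retention_for("engineering"))    # no explicit policy: falls back to 5
```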

Classify your data and set priorities for protection

Are you backing up data that doesn’t need to be backed up, like test machines, swapfiles and entire operating systems? When you classify data according to what does and doesn’t require protection — or at least, doesn’t need to be backed up every day — you can reduce your storage requirements.
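
As a sketch of what that classification can look like in practice, here’s a simple exclusion filter; the patterns are illustrative stand-ins for whatever your own policy defines:

```python
# A minimal sketch of backup classification: skip paths the policy says
# don't need protection. The patterns are illustrative, not exhaustive.

from fnmatch import fnmatch

EXCLUDE_PATTERNS = [
    "*/pagefile.sys", "*/swapfile.sys", "*/swap.img",   # swap files
    "/tmp/*", "*/cache/*",                              # scratch data
    "/opt/test-vms/*",                                  # test machines
]

def needs_backup(path: str) -> bool:
    """Return False for files classified as not worth backing up."""
    return not any(fnmatch(path, pattern) for pattern in EXCLUDE_PATTERNS)

for p in ["/srv/db/orders.db", "/opt/test-vms/win10.img", "/c/pagefile.sys"]:
    print(p, "->", "back up" if needs_backup(p) else "skip")
```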

Opt for software-defined solutions

Most hardware-defined solutions lock you into specific hardware requirements, and you can find yourself making data protection decisions just to justify your investment in that hardware. With software-defined solutions, on the other hand, you get a lower total cost of ownership (TCO) and greater agility than when you’re tied to fixed appliances.

Also, suppose you’re accustomed to backing up to your data center, but your organization decides to move backup to the cloud. If you stay with your on-premises hardware, you’ll either delay that migration or squander a big part of your hardware investment by migrating. The right software-defined solution will operate flexibly in either your data center or the cloud.

Evaluate cloud versus on-premises storage

Speaking of steering your data protection strategy to the cloud, are you convinced that you’ll save money that way?

At first glance, it seems like a simple calculation: the cost of a physical, on-premises storage solution over three or five years versus the cost of storing the same data in the cloud for the same period.

But the reality is that it’s seldom a simple calculation, and your results can vary from one provider to another. Some enterprises that rely on the cloud for secondary storage have found the cost savings elusive because it’s not always clear from the outset what you’re getting into.

Nevertheless, it is useful to understand which flavors of cloud storage are expensive and which are not, then optimize your data accordingly before it goes to the cloud. In essence, if you park your data in the cloud and never need to recover it, it will work out to be inexpensive storage. But the retrieval and egress fees for pulling your data back out of the cloud can add up, and most companies overlook them when they assess cloud costs.
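
To see why the calculation is seldom simple, here’s a minimal sketch comparing the two options over five years with recovery charges included. Every rate is a hypothetical placeholder to be swapped for your provider’s actual storage, retrieval and egress pricing:

```python
# A minimal sketch of the cloud-vs-on-premises comparison. Every rate
# below is a hypothetical placeholder -- substitute your provider's
# actual pricing before drawing conclusions.

def on_prem_cost(tb_stored: float, years: int,
                 capex_per_tb: float = 150.0,       # hardware, amortized
                 opex_per_tb_year: float = 30.0) -> float:  # power, space, admin
    return tb_stored * (capex_per_tb + opex_per_tb_year * years)

def cloud_cost(tb_stored: float, years: int,
               storage_per_tb_month: float = 5.0,
               tb_recovered_per_year: float = 0.0,
               egress_per_tb: float = 90.0) -> float:
    storage = tb_stored * storage_per_tb_month * 12 * years
    recovery = tb_recovered_per_year * egress_per_tb * years
    return storage + recovery

# Park 100 TB for five years. With zero recoveries the two look even;
# frequent restores quietly tip the balance.
for recovered_tb in (0, 5, 50):
    print(f"recover {recovered_tb:>2} TB/yr: "
          f"cloud ${cloud_cost(100, 5, tb_recovered_per_year=recovered_tb):,.0f} "
          f"vs on-prem ${on_prem_cost(100, 5):,.0f}")
```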

One approach to cloud versus on-premises is to blend them in a tiered model. Consider keeping data and backups local for, say, two weeks, then moving them to the cloud after that. Store the most recent files locally, because there’s a greater likelihood that you’ll need to recover them, and fast recovery is a business priority. Older data that you keep for compliance purposes goes out to a colder storage tier in the cloud.
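
A minimal sketch of that placement rule might look like the following; the 14-day cutoff is an illustrative choice, not a recommendation:

```python
# A minimal sketch of a two-tier placement rule: keep recent backups
# local for fast restores, move older ones to a colder cloud tier.

from datetime import date, timedelta

LOCAL_RETENTION = timedelta(days=14)   # illustrative two-week local window

def placement(backup_date: date, today: date | None = None) -> str:
    """Return where a backup taken on `backup_date` should live."""
    today = today or date.today()
    if today - backup_date <= LOCAL_RETENTION:
        return "local"        # recent: fast recovery is the priority
    return "cloud-cold"       # older: cheap tier for compliance copies

print(placement(date.today()))                        # local
print(placement(date.today() - timedelta(days=90)))   # cloud-cold
```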

Whichever approach you follow, an intelligent recovery solution will allow you to reduce the amount of data you need to recover, so that you spend less on retrieval and storage.

Conduct a risk assessment

Risk assessment plays a role in your data protection strategy because it forces you to ask, “If we lose this data, what will it cost the company?” The answer informs how you protect each data set, whether by pushing it out to the cloud or keeping it local, and those decisions then roll into your recovery plan.
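
A common way to put a number on that question is the classic annualized loss expectancy calculation: single-loss expectancy times annual rate of occurrence. Here’s a sketch with hypothetical figures:

```python
# A sketch of the standard risk-assessment arithmetic: annualized loss
# expectancy (ALE) = single-loss expectancy (SLE) x annual rate of
# occurrence (ARO). The figures below are hypothetical.

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """Expected yearly cost of losing a given data set."""
    return sle * aro

# "If we lose this data, what will it cost the company?"
sle_customer_db = 2_000_000   # USD if the customer database is lost
aro_customer_db = 0.1         # one loss event every ten years, estimated

exposure = annualized_loss_expectancy(sle_customer_db, aro_customer_db)
print(f"Annualized exposure: ${exposure:,.0f}")
# If stronger protection for this data set costs less per year than the
# exposure it removes, the spend is easy to justify.
```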

Implement a data recovery plan. Then test it

Having a data recovery plan in place is a big step in your organization’s cyber resilience. The next big step is to test it regularly.

At first blush, disaster recovery testing looks like a way to spend time and money for little demonstrable gain. In fact, testing can save you money, especially when you use it to work out the kinks in your recovery plan so you can quickly resume services after an outage. Among other things, testing will show you the dependencies among your data sets, so you can see the best order of recovery to make your users and customers productive again.

The faster you can recover from an outage or interruption, the less impact it will have on business and productivity, and the less the downtime will cost you.

As part of your data protection strategy, a documented recovery plan is how you set service level agreements (SLAs) with management, be it the board of directors, the CEO or the C-suite. It shows how you plan to recover the most important data first, then the less important data, and how long it will take to return to productivity.

Testing your plan doesn’t mean that you try all possible recovery scenarios; after all, you can’t stage a tornado or set fire to a building. But you can have tabletop exercises and perform test restores to see how long they take under normal conditions. That’s especially useful if you’re restoring from the cloud, where bandwidth will be a big factor.
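
For cloud restores in particular, a first-order estimate falls out of simple arithmetic: data size divided by effective bandwidth. Here’s a sketch with placeholder figures to replace with numbers from your own test restores:

```python
# A sketch of a first-order restore-time estimate: time = data size
# divided by effective bandwidth. All figures are placeholders to be
# replaced with measured numbers from your own test restores.

def restore_hours(data_tb: float, link_gbps: float,
                  efficiency: float = 0.7) -> float:
    """Estimated hours to pull `data_tb` over a `link_gbps` connection.

    `efficiency` discounts protocol overhead and link contention.
    """
    bits = data_tb * 8 * 1e12                  # TB -> bits (decimal units)
    usable_bps = link_gbps * 1e9 * efficiency
    return bits / usable_bps / 3600

print(f"{restore_hours(10, 1.0):.1f} h")    # 10 TB over 1 Gbps: ~31.7 h
print(f"{restore_hours(10, 10.0):.1f} h")   # 10 TB over 10 Gbps: ~3.2 h
```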

That’s better than nothing, and it’s certainly better than saying, “I’ve got a recovery plan. Our data is backed up over there,” without any testing.

Optimize your backed-up data before storing it

Does your data protection strategy include getting the most out of your secondary storage? Naturally, the storage requirements of your backups will grow over time as more users generate more data that needs to be protected. You may think you can’t afford to buy more storage, but doing nothing is not the answer. The solution is to optimize your data as much as possible before you store it.

Whether on premises or in the cloud, you can use compression and deduplication algorithms to reduce your storage requirements and costs by up to 90 percent. That applies to cloud object storage as well. And ransomware protection capabilities in storage optimization products can shield your backup data by making it immutable.
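
To illustrate how those two techniques combine, here’s a toy sketch of fixed-size chunk deduplication plus compression. Real products use far more sophisticated, variable-size chunking, so treat this as a demonstration of the principle only:

```python
# A toy sketch of the two techniques combined: fixed-size chunk
# deduplication (store each unique chunk once, keyed by content hash)
# plus compression of the unique chunks.

import hashlib
import zlib

CHUNK = 4096  # bytes; illustrative chunk size

def optimized_size(data: bytes) -> int:
    """Bytes actually stored after dedup + compression."""
    unique: dict[str, bytes] = {}
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in unique:              # store each chunk only once
            unique[digest] = zlib.compress(chunk)
    return sum(len(c) for c in unique.values())

# Repetitive data -- think repeated full backups -- shrinks dramatically.
data = b"customer-record-template\n" * 200_000   # ~5 MB of redundant data
stored = optimized_size(data)
print(f"raw {len(data):,} B -> stored {stored:,} B "
      f"({1 - stored / len(data):.0%} reduction)")
```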

As your data footprint expands, you’ll find that optimization is the best way to reduce your cloud costs. It makes financial sense to spend money on optimization now and save money over time, whether over one, three or five years.

Adjustments that seem like good ideas but are not

Besides those recommended adjustments, almost every company contemplates a few non-recommended ones.

Open-source software

In the effort to adjust their data protection costs, some organizations look to free, open-source software (FOSS) in hopes of saving money. But even moving to an open-source solution has associated costs.

Unless you plan to crack open the open-source code base and start contributing to the community, you have no control over FOSS. If somebody reports a bug in the product, who’s going to fix it? At that point, you’re dependent on the community to agree that it’s a problem, find a member willing and able to repair it, and schedule the patch release. That can take a long time in the non-commercial world of FOSS.

Enterprises and government entities purchase commercial software for the peace of mind that comes from having a well-reputed product built in a sufficiently secure way. The software is tested and market-proven, and it comes with technical and engineering support. Staking your data protection strategy on FOSS puts you at the mercy of the software’s creators. And your exposure increases if you build scripts and enhancements that a new release of the software one day no longer supports. The problem rests squarely with you, because nobody else is responsible for your success.

Do you want to run the risk of not being able to perform backups reliably? If you use FOSS and your backup or recovery operation fails, you’ll have nowhere to turn. Your data protection tools are essentially an insurance policy you pay for, and when something goes wrong, you want to be able to make a claim against that policy. FOSS isn’t set up for that.

In-house (“home-brewed”) solutions

Imagine a small team of administrators tasked with backing up their company’s data, then distributing it to several remote sites for redundancy. Their data protection strategy calls for writing and maintaining elaborate scripts to perform that work, then regularly executing the scripts by hand. The possibility of a script error notwithstanding, the solution does what the admins need it to do: it backs up the data.

And it keeps them employed.

Modern backup tools automate exactly that kind of task, but they can be a hard sell to an entrenched audience. Teams invested in an in-house solution are not always swayed by the argument that commercial software products are backed by the resources of an entire company. Small in-house teams who value their understanding of the company’s needs over marketplace proof may prefer to stick with the solution they developed themselves.

Home-brewed data protection solutions are known for familiarity, not for cost savings. They revolve around the proprietary, in-house expertise of a small number of people, not around teams of professionals working on a market-tested product. Commercial tools introduce automation, which can free up resources for more productive pursuits, provided the tools are correctly implemented.

Sweating the IT assets

When you demand too much of your hardware and IT assets, you leave them no breathing room for even a single mistake, and you run the risk of malfunction.

In the context of data protection strategy, tape drives are a case in point. Much like the tiered model described above, a common practice is to keep backups on disk for short-term storage (a few weeks), then move the data to tape for long-term storage. But suppose the amount of data to be backed up keeps growing and you have no budget to add disk capacity. You could convince yourself to make your short term even shorter and start moving data off disk and out to tape sooner. That’s an example of sweating the IT assets; in this case, your tape drives.

Unfortunately, the wrinkle in the carpet soon becomes too large to ignore. In time, the amount of data to be backed up will exceed what your tape drives can write in the available window. Once you overrun the tape drives’ throughput, there isn’t enough time for them to drain the disk tier. The disk tier in turn runs out of capacity and can no longer accept backups from the network. Storage fills up and backups stop.
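
The arithmetic behind that failure is easy to sketch. With hypothetical throughput and volume figures, watch what happens as the nightly data volume doubles:

```python
# The backup-window arithmetic behind "sweating" tape drives, with
# illustrative throughput and volume figures.

def hours_to_write(data_tb: float, drives: int,
                   drive_mb_per_s: float = 300.0) -> float:
    """Hours needed to move `data_tb` to tape across `drives` drives."""
    total_mb = data_tb * 1e6
    return total_mb / (drive_mb_per_s * drives) / 3600

WINDOW_HOURS = 10  # nightly backup window, illustrative

for tb_per_night in (8, 16, 32):   # data volume doubling over time
    h = hours_to_write(tb_per_night, drives=2)
    verdict = "fits" if h <= WINDOW_HOURS else "OVERRUNS the window"
    print(f"{tb_per_night} TB/night: {h:.1f} h on 2 drives -> {verdict}")
# Once the nightly volume overruns the window, the disk tier can never
# fully drain, fills up, and backups stop -- exactly the failure above.
```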

It’s a misapplication of technology. While your first reflex may be to complain to your backup software vendor, the real problem is that you’ve doubled your data volume without upsizing your backup hardware.

Turning to a backup solution provider

What would justify switching from your current data protection solution to a different backup solution provider?

In most companies, if somebody is keen to switch, it’s usually because of one of the five C’s:

  • Cost – The current solution is becoming too expensive to maintain.
  • Complexity – The company’s IT landscape has grown so complex that admins prefer to hand the task off.
  • Capability – The solution no longer suffices for the data types that the company has adopted.
  • Change – New IT leadership takes over and prefers, for better or worse, to use vendors and solutions they know and trust.
  • Copiousness – From terabytes to petabytes to exabytes, the company wants to get out of the business of keeping up with unabating data growth.

If your concern is to save money, you’ll revisit your recovery plan and your current, ongoing costs, and then budget accordingly.

But before going to the trouble of changing your data protection strategy, keep in mind that you can take steps to further optimize your backup data. Lower storage requirements save you on the costs of hardware, cloud, network and electricity, which may suffice in your current business climate.

Every organization will have to answer some key questions regarding their current data protection strategy. Will a backup solution provider be better for your business? How much better? Will backup and recovery be easier? Once maintenance is figured in, will it still cost less in the long run? Should you purchase a term license? Are subscriptions an option?


Conclusion

To adjust your data protection strategy in an uncertain economy, most of the factors you’ll evaluate are internal. On top of that, the return on investment may not be visible for a long time, and it will be a function of your current business situation and problems, weighed against the cost of the solution.

The conversation often begins when somebody says, “Our IT budget has been cut by 25 percent. Go evaluate everything we’re running and see how much of it we can keep for another year (or two years or five years) at that cost level.” The key to making smart adjustments is to correctly identify the original problems you’re trying to solve, and it may be that saving money is only one of those problems.

You may find yourself replying, “We’re going to have to spend money to save money. If we do nothing for several years, we risk having unreliable backup.” That will expose your business to financial risk and affect your compliance with regulations. Worse yet, if you’re still in business by then, you’ll probably have to spend even more money to get your data protection strategy back to where it was before you started cutting costs.

However, at the end of the day, every organization will have to balance its current backup and recovery posture against the costs and risks that come with adjusting an overall data protection strategy.


About the Author

Adrian Moir

Adrian Moir is a Technology Strategist at Quest Software, with a background in electrical and electronic engineering and over 30 years’ experience in the IT industry across both corporate and channel-based businesses. In recent years, Adrian has worked within the Information and Systems Management business at Quest, focusing on data protection products, driving the technology portfolio strategy and working as a product evangelist. Before that, as EMEA Pre-Sales Manager for the Dell Software data protection team and EMEA Technical Director for BakBone Software, Adrian ensured delivery of pre-sales activities throughout the region.
