Zerto 9.0 Introduces New Enhancements to LTR with Amazon S3, and Here’s What You Need to Know

Zerto 9.0 went GA on July 13, 2021, and the official launch webinar was today (July 29, 2021), but if you missed it, you can head to the following URL and register to watch it on-demand:

https://www.zerto.com/page/zerto-9-live-demo-instant-ransomware-recovery/

While there are many new enhancements that I’m not going to cover here, this blog is specifically related to the changes made to the product to bring even more value, cost savings, and security to Amazon S3 repositories used with Zerto’s Long-term Retention (Backup).

Along with these changes, you can expect an updated technical document that will cover all the steps and requirements (in detail) to take advantage of the new features. I will also update this post with a link to the updated document once it becomes available.

Update: The latest version of the published document I wrote to accommodate this blog post titled “Deploy & Configure Zerto Long-term Retention for Amazon S3” can be found here: https://bit.ly/ZLTRAWSS3

Auto-Tiering for Data Backed up to Amazon S3

The first enhancement I want to cover here is automatic tiering of retention sets after they’ve aged, meaning Zerto will automatically move backup data from Amazon S3 Standard to Amazon S3 Standard-Infrequent Access, and then again (if desired) to Amazon S3 Glacier.

Here is what it looks like when creating a repository in Zerto:

Now, when Zerto customers are backing up to Amazon S3, they can take advantage of better pricing as data ages, reducing cost and enabling more efficient use of storage. The new feature is enabled by default in a fresh install. If you are upgrading from a previous version, tiering will not be enabled by default, so you’ll need to enable it on an existing Amazon S3 repository, or create a new one. There are no additional configuration changes required to take advantage of this new feature.
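Zerto handles these tier transitions internally, so there is nothing to configure on the AWS side. But if you want to reason about the cost mechanics, the equivalent native S3 lifecycle rule looks roughly like the sketch below. The day thresholds and the rule ID are illustrative examples of mine, not Zerto’s actual schedule:

```python
# Illustrative only: Zerto performs tiering itself. This sketch shows the
# equivalent native S3 lifecycle rule for Standard -> Standard-IA -> Glacier.
# The day thresholds and rule ID are made-up examples.
import json

def tiering_lifecycle_rule(ia_after_days=30, glacier_after_days=90):
    """Build an S3 lifecycle rule that ages objects into cheaper tiers."""
    return {
        "Rules": [
            {
                "ID": "age-out-backup-data",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object in the bucket
                "Transitions": [
                    {"Days": ia_after_days, "StorageClass": "STANDARD_IA"},
                    {"Days": glacier_after_days, "StorageClass": "GLACIER"},
                ],
            }
        ]
    }

# With boto3 this rule would be applied via:
#   s3.put_bucket_lifecycle_configuration(Bucket="my-zerto-repo",
#       LifecycleConfiguration=tiering_lifecycle_rule())
print(json.dumps(tiering_lifecycle_rule(), indent=2))
```

The storage-class names (`STANDARD_IA`, `GLACIER`) are the real S3 API values for the tiers mentioned above.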

Retention Set Immutability via Amazon S3 Object Lock

With ransomware attacks continuing to rise (a 150% increase in 2020), protecting backup data via immutability is more important than ever. Customers can now choose whether to enable immutability, which protects data from being deleted or otherwise tampered with after it has been written.

While tiering doesn’t require any additional configuration, here’s what you need to know if you plan to use Zerto’s immutability feature, with or without tiering:

  • You cannot enable S3 Object Lock on an existing S3 bucket. This is an AWS limitation. You will need to create a new S3 bucket to store immutable backups, and then create a new repository in Zerto.
  • You can have a repository that takes advantage of both of these new features, however, because of the object lock limitation on buckets (cannot be changed after the fact), you are still going to need a new repository (S3 bucket).
  • There are some additional permissions for the IAM policy (covered below) required in order to take advantage of immutability.
  • There are some additional features (covered below) you will need to enable on the S3 bucket to take advantage of immutability.
  • If you’ve enabled S3 bucket encryption per my previous blog post in an earlier version of Zerto, the good news is that you can still have encryption enabled along with these new features.

Updated IAM Policy Permissions Required for Amazon S3

Here is the updated list of S3 permissions required in your IAM policy to take advantage of these new features. If you have an existing policy in use today, compare it against this list and add any permissions you’re missing. If you’d like a JSON version of the permissions for use with Amazon IAM policy creation, you can get the file from Zerto’s GitHub repo:

https://github.com/ZertoPublic/Zerto9-LTR-AWS-IAM-JSON

  • s3:ListBucket
  • s3:ListBucketVersions
  • s3:GetBucketObjectLockConfiguration
  • s3:GetObject
  • s3:GetObjectAcl
  • s3:GetObjectVersion
  • s3:DeleteObject
  • s3:DeleteObjectVersion
  • s3:PutObject
  • s3:PutObjectRetention
  • s3:RestoreObject
  • s3:DeleteBucketPolicy
  • s3:PutObjectAcl
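If you’d rather assemble the policy yourself than pull the JSON from GitHub, a small script like the one below can generate it from the list above. The statement Sid and the bucket-ARN placeholder are mine, and you should scope `Resource` to your actual bucket rather than leaving it this broad:

```python
# Sketch: generate an IAM policy document containing the S3 permissions
# listed above. The Sid and bucket ARN are placeholder values of mine.
import json

S3_ACTIONS = [
    "s3:ListBucket", "s3:ListBucketVersions",
    "s3:GetBucketObjectLockConfiguration",
    "s3:GetObject", "s3:GetObjectAcl", "s3:GetObjectVersion",
    "s3:DeleteObject", "s3:DeleteObjectVersion",
    "s3:PutObject", "s3:PutObjectRetention",
    "s3:RestoreObject", "s3:DeleteBucketPolicy", "s3:PutObjectAcl",
]

def build_policy(bucket_arn="arn:aws:s3:::YOUR-BUCKET-NAME"):
    """Return an IAM policy dict granting Zerto's required S3 actions."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "ZertoLTR",
            "Effect": "Allow",
            "Action": S3_ACTIONS,
            # Bucket-level actions need the bucket ARN; object-level
            # actions need the /* object ARN, so include both.
            "Resource": [bucket_arn, bucket_arn + "/*"],
        }],
    }

print(json.dumps(build_policy(), indent=2))
```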

Amazon S3 Bucket Configuration for Immutability

To enable immutability for the Amazon S3 bucket, you’re going to have to create a new bucket, because S3 Object Lock can only be enabled at creation time. As you create your new S3 bucket, be sure to include the following options:

  • Enable S3 Bucket Versioning (this is required in order to enable Object Lock)
  • Under Advanced Settings for the bucket, enable Object Lock, and tick the box to acknowledge that enabling Object Lock will permanently allow objects in this bucket to be locked.
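Those two settings map directly onto the S3 API: Object Lock is a create-time-only flag, and enabling it also turns versioning on. Here’s a minimal sketch of the request parameters; the parameter names are boto3’s real ones, but the bucket name and region are placeholders of mine:

```python
# Sketch only: the parameter names are boto3's real ones, but the bucket
# name and region below are placeholders -- substitute your own.
def create_bucket_params(name="my-zerto-immutable-repo", region="us-east-2"):
    """Request parameters for a bucket with Object Lock enabled at creation."""
    return {
        "Bucket": name,
        # Omit CreateBucketConfiguration entirely if the region is us-east-1.
        "CreateBucketConfiguration": {"LocationConstraint": region},
        # Object Lock can only be switched on when the bucket is created;
        # enabling it also turns on versioning for the bucket.
        "ObjectLockEnabledForBucket": True,
    }

# With AWS credentials configured, the actual call would be:
#   import boto3
#   boto3.client("s3").create_bucket(**create_bucket_params())
```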

There you have it! I’ve done quite a bit of testing with the new feature and am excited that we’re able to provide these new capabilities to meet our customer requirements and better safeguard them! We’ve also got similar enhancements for Azure users (however no immutability – yet), and I am planning on creating a technical document for setting this up in Azure, so stay tuned for that as well 🙂

If you have found this to be useful, please comment, or share so others are also aware. Thanks for reading 🙂


Reduce the Cost of Backup Storage with Zerto 8.5 and Amazon S3

When Zerto 7.0 was released with Long-Term Retention, it was only the beginning of the journey: providing traditional-feeling data protection to meet compliance and regulatory requirements for data retention, in addition to the 30-day short-term journal that Zerto uses for blazing-fast recovery.

A few versions later, Zerto (8.5) has expanded that “local repository” to include “remote repositories” in the public cloud. Today that means Azure Blob (hot/cold) and Amazon S3 (with support for S3 Standard, S3 Standard-IA, or S3 One Zone-IA).

And to demonstrate how to do it, I’ve created some content, which includes video and a document that walks you through the process. In the video, I even go as far as running a retention job (backup) to AWS S3, and restoring data from S3 to test the recovery experience.

The published whitepaper can be found here: https://www.zerto.com/page/deploy-configure-zerto-long-term-retention-amazon-s3/

Update: I have just completed testing with S3 bucket encryption using the Amazon S3-managed key (SSE-S3), and the solution works without any changes to the IAM policy (https://github.com/gjvtorres/Zerto-LTR-IAM-Policy). There are two methods to encrypt the S3 bucket: the Amazon S3-managed key (SSE-S3), which is the recommended option, and an AWS Key Management Service key (SSE-KMS). I suggest taking a look at the following AWS document, which provides pricing examples of both methods. From what I’ve found, you can cut cost by up to 99% by using the Amazon S3-managed key. So go ahead, give it a read!

https://aws.amazon.com/kms/pricing/
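For reference, turning on default bucket encryption with the S3-managed key is a single bucket-level setting. The sketch below uses the real API field names (`AES256` is the value S3 uses for SSE-S3), while the bucket name in the comment is a placeholder of mine:

```python
# Sketch: default-encryption rule for a bucket using Amazon S3-managed
# keys (SSE-S3). "AES256" is the real API value S3 uses for SSE-S3.
import json

def sse_s3_encryption_config():
    """Build a default-encryption configuration using SSE-S3."""
    return {
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    }

# With boto3 the rule would be applied via (bucket name is a placeholder):
#   s3.put_bucket_encryption(Bucket="my-zerto-repo",
#       ServerSideEncryptionConfiguration=sse_s3_encryption_config())
print(json.dumps(sse_s3_encryption_config(), indent=2))
```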

Now for the fun stuff…

The first option I have is the YouTube video below (or you can watch it on my YouTube channel).

I’ve also started branching out to live streaming of some of the work I’m doing on my Twitch channel.

If you find the information useful, I’d really appreciate a follow on both platforms, and hey, enable the notifications so when I post new content or go live, you can get notified and participate. I’m always working on producing new content, and feedback is definitely helpful to make sure I’m doing something that is beneficial for the community.

So, take a look, and let me know what you think. Please share, because information’s only useful if those who are looking for it are made aware.

Cheers!


Using the AWS Storage Gateway to Backup to S3 using Zerto

This one took a while to get out there, but at last, it has been published for public consumption.

With that, I’m happy to be able to share this new whitepaper with the community, as it was not only great to hear that Zerto supports it, but it was also a blast testing and documenting the solution!

As a part of the Zerto 8.0 launch earlier this year (March 22, 2020 to be exact), the AWS Storage Gateway was officially announced as being supported as a Zerto LTR (Long Term Retention/Backup) target, which effectively enables you to send your Zerto backups to Amazon S3.

Sure, as of Zerto 8.5, you can configure Azure Blob (hot/cold) storage or Amazon S3 (with Infrequent Access tier support) as Zerto backup targets, which effectively enables you to send backups directly to the public clouds via HTTPS.

That said, where does the AWS Storage Gateway fit into the picture? When or why should I use it as opposed to sending my backups directly to the cloud?

In a nutshell, the difference between what Zerto does in 8.5, and what you get by using the AWS Storage Gateway is that with the storage gateway, you are getting a cached copy of your backup data on-premises, which resides outside of Zerto’s short term journal. Here’s how that topology looks:

Topology for the AWS Storage Gateway in a Zerto Environment

What we see here is that the Storage Gateway sits on-premises and serves as a cache for the most frequently accessed data. You connect it to Zerto as an NFS or SMB repository (SMB is required if you want indexing, btw) and configure your Virtual Protection Groups to send backups to this repository.

What you will get is a Zerto backup that will complete locally, and then the Storage Gateway asynchronously replicates that data out to an S3 bucket of your choosing. If you need to restore something from the backups (if your short term journal doesn’t contain what you need), you can quickly restore that data from the storage gateway without having to pull the data back down from S3.

Now that I’ve set the stage – without further ado (yeah I googled this to be sure I used the correct term), here’s the link to the whitepaper: https://bit.ly/2Krs14y

As an added bonus, if you are strapped for time and don’t want to read, I’ve also created a video that walks through the same steps to install and configure the AWS Storage Gateway for use with Zerto:

If you have found this useful, please be social and share! As usual, thanks for reading, and watching. Please leave any comments and questions below!

Cheers!
