Oct 12, 2021 | Nikola Apostolov
High Availability remains one of the essential capabilities any organization's IT department considers when building an efficient data protection strategy for an on-premises or hybrid infrastructure.
In this post, we’ll examine the significance of High Availability in business continuity practices, as well as ways to use Microsoft DFSR and cloud storage services to simplify and optimize the process.
High Availability (HA) is a system design approach that removes single points of failure (SPOFs) by enabling automatic failover to a redundant system without data loss, as well as failback once the problem is resolved, thus avoiding downtime.
Key metrics for assessing High Availability and Disaster Recovery are Recovery Time Objective (RTO) and Recovery Point Objective (RPO), which measure the maximum tolerable duration of an outage and the maximum amount of data loss that can be tolerated when a failure happens, respectively (the lower, the better). Data availability and adoption/running costs are other important factors.
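To make these metrics concrete, here is a minimal Python sketch of the arithmetic behind them. The figures used (a "five nines" availability target, a 15-minute asynchronous replication interval) are illustrative examples, not Tiger Bridge specifications.

```python
# Illustrative arithmetic for the HA/DR metrics discussed above.
# Values are examples only, not vendor specifications.

def max_downtime_per_year(availability: float) -> float:
    """Maximum downtime per year (in minutes) allowed by an availability target."""
    minutes_per_year = 365 * 24 * 60
    return (1 - availability) * minutes_per_year

def worst_case_rpo(replication_interval_min: float) -> float:
    """With asynchronous replication, worst-case data loss (RPO) is bounded
    by the time since the last completed replication cycle."""
    return replication_interval_min

# "Five nines" (99.999%) allows roughly 5.26 minutes of downtime per year.
print(round(max_downtime_per_year(0.99999), 2))  # ~5.26
# Replicating every 15 minutes bounds the RPO at 15 minutes of lost changes.
print(worst_case_rpo(15))
```

Lower RTO and RPO targets generally mean tighter replication intervals and faster failover paths, which is where infrastructure cost enters the trade-off.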
A smart and efficient HA setup is vital, as HA can consume a significant part of a company’s IT budget.
Tiger Bridge allows customers to run a secondary failover server in several scenarios. With all their data replicated off-site to cloud object storage, which provides up to 16 nines of data availability at a much lower adoption cost, they can choose between the following options:
1. Keep a complete asynchronous replica (failover server) of the production server.
The downside to this approach is an expensive and unnecessary second storage silo.
2. Keep a small (1% – 5% of the production storage) failover server that will sync metadata for cold files only and keep hot data locally.
This lowers the cost of the secondary server infrastructure. However, cold files are retrieved with higher latency.
3. Keep no secondary data server at all
In case of a disaster, the user can connect a new physical or virtual server to the existing bucket in the cloud and restore the metadata on demand. Any folder that an application or user accesses will be imported automatically. This failover approach gives access to all the data without importing it first.
4. Use Tiger Bridge together with Microsoft DFSR (recommended option)
Microsoft DFS (Distributed File System) allows file servers to present network shares as part of a distributed file system and to transparently link those shares into a single hierarchical namespace.
DFSR is Windows Server’s built-in replication engine for replicating files and folders between multiple file servers. In the event of a disaster, network users are redirected to the second server, providing an efficient failover mechanism.
In that scenario, the secondary DFS server, acting as the failover server, runs Tiger Bridge and stores only the hot data (cache) locally. Companies can thereby lower infrastructure costs significantly while still benefiting from DFSR-based failover.
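The on-demand restore behavior described in option 3 above can be sketched in a few lines of Python. This is a conceptual illustration only: the class and structure are hypothetical, not Tiger Bridge’s actual implementation, and an in-memory dict stands in for the cloud bucket.

```python
# Conceptual sketch of the "restore metadata on demand" failover pattern
# (option 3 above). All names are hypothetical; a dict simulates the
# cloud object-storage bucket.

CLOUD_BUCKET = {
    "projects/report.docx": b"...report contents...",
    "projects/data.csv": b"col1,col2\n1,2\n",
}

class OnDemandShare:
    def __init__(self, bucket):
        self.bucket = bucket
        # Step 1: import metadata only -- every path is visible as a stub,
        # but no file contents are transferred yet.
        self.stubs = {path: None for path in bucket}

    def read(self, path):
        # Step 2: hydrate a file's contents only when it is first accessed.
        if self.stubs[path] is None:
            self.stubs[path] = self.bucket[path]  # simulated cloud fetch
        return self.stubs[path]

share = OnDemandShare(CLOUD_BUCKET)
print(sorted(share.stubs))               # all paths listed immediately
print(share.read("projects/data.csv"))   # contents fetched on first access
```

The point of the pattern is that a freshly connected replacement server exposes the full namespace right away, while actual data transfer is deferred to the moment a user or application touches a file.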
When modernizing an organization’s business continuity plan, designing High Availability and Disaster Recovery around cloud storage and services simplifies the overall architecture. Combining Tiger Bridge with cloud storage and services allows organizations to achieve better data resiliency and shorter RPO and RTO with a much smaller storage footprint and a far less complex setup. This combination dramatically reduces cost while providing superior functionality.
Want to learn more about Tiger Bridge? You can check out our 5-minute overview video.
Watch this video showing a basic Disaster Recovery setup with Tiger Bridge and Microsoft Azure.