Scale-Deep Lifecycle Manager
Replicate & tier aging files to disk, tape and cloud.
Tiger Bridge makes it easy to align data value with storage costs by seamlessly extending your NTFS or Tiger Store file system and performing transparent data migration between one storage tier and another using simple policies.
Tiger Bridge installs as a high-performance, secure and flexible software connector that runs on your Windows server. Once installed, it enables a primary storage tier (e.g. a local NTFS file system) to transparently extend into a secondary tier (tape libraries with Tiger Bridge-T; block-level/SMB with Tiger Bridge-D; and object-based RESTful/S3 with Tiger Bridge-C).
The secondary tier of storage can be used for replicating, tiering, overflowing, as well as for keeping multiple geos in sync.
Extend Tiger Store, Tiger Pool or NTFS file systems
Tiger Bridge supports the NTFS file system natively. As such, it can be installed on any Windows server to extend a local file system into a secondary tier of storage. The primary storage can contain live data. Once configured, files residing on, created on, or updated on the primary storage are automatically replicated to the target storage (according to the configured policy and schedule). A Tiger Store, Tiger Pool or NTFS volume can be used as a primary as well as a secondary tier of storage. In some instances, both at the same time.
Tier to Tiger Store, Tiger Pool, NTFS, LTO tape libraries, SMB shares or RESTful/S3 targets
Tiger Bridge supports a wide variety of secondary tiers. Tiger Store, Tiger Pool, and NTFS file systems can be used as primary or secondary tiers. Virtually any device supporting the SMB protocol can be used as a secondary tier target, including solutions such as XenData and StrongBox. LTO tape libraries from vendors such as IBM and Qualstar can also be used.
S3 interface:
- Amazon Web Services (AWS)
- IBM Cloud OpenStack Services (ICOS)
- Microsoft Azure
- Spectra Logic BlackPearl
- DataDirect Networks Web Object Scaler (WOS)
Support for multiple targets (soon available)
Each volume and folder of a primary tier can be configured with its own target and policy. This enables you, for example, to tier one folder to cloud and another folder to tape. It also allows you to tier from a local volume (ex: SSD) to another local volume (ex: HDD); and then from this second volume (ex: HDD) to cloud.
Stub-files for transparent tracking and retrieval
When a file has been moved to a secondary tier of storage and needs to be removed from the primary tier, a stub-file containing all the valuable metadata is left behind and can be readily accessed by users and applications. Unlike with backup software, there is no need to consult a database to find out where a file's content was moved; you simply navigate your file browser as usual. On Mac, stub-files appear with a grey tag. On PCs, the stub-file attribute is set. When a stub-file is opened, its content is automatically and transparently retrieved and made available to the application requesting it.
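On Windows, an application can detect a placeholder of this kind by inspecting the file-attribute bitmask. The sketch below assumes the stub-file attribute corresponds to the standard `FILE_ATTRIBUTE_OFFLINE` flag, which is a common convention for HSM-style software; the exact flag Tiger Bridge sets may differ.

```python
import stat

def looks_like_stub(file_attributes: int) -> bool:
    """Return True if a Windows file-attribute bitmask carries the
    offline bit typically used to mark HSM stub-files.
    (Assumption: the stub attribute maps to FILE_ATTRIBUTE_OFFLINE.)"""
    return bool(file_attributes & stat.FILE_ATTRIBUTE_OFFLINE)

# On Windows, os.stat() exposes the bitmask, so a caller could write:
#   looks_like_stub(os.stat(r"D:\media\clip.mov").st_file_attributes)
```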
Fully Automated and/or Manual Replication
The Reclaim Space policy lets the admin set a usable threshold level on the primary storage (ex: 75% full). If the established volume capacity has not yet been reached, selected folders are replicated automatically, but files are not tiered. When the file system capacity exceeds the threshold, only files meeting the specified criteria (ex: not touched in 12 weeks and larger than 1 GB) are flushed. In other words, they are replaced with stub-files. Of course, files, folders or the entire volume can also be manually selected for replication.
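The selection logic can be pictured as a simple filter. This is a minimal sketch of the behavior described above, not Tiger Bridge's actual implementation; the defaults mirror the examples in the text (75% threshold, 12 weeks untouched, larger than 1 GB).

```python
import time
from dataclasses import dataclass

WEEK = 7 * 24 * 3600  # seconds in a week

@dataclass
class FileInfo:
    path: str
    size: int           # bytes
    last_access: float  # epoch seconds

def files_to_flush(files, used_ratio, threshold=0.75,
                   min_age_weeks=12, min_size=1 << 30, now=None):
    """Hypothetical Reclaim Space pass: below the threshold nothing
    is tiered; above it, only files untouched for min_age_weeks and
    larger than min_size are candidates to be replaced by stubs."""
    if used_ratio <= threshold:
        return []
    now = time.time() if now is None else now
    return [f for f in files
            if now - f.last_access >= min_age_weeks * WEEK
            and f.size > min_size]
```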
Fully Automated and/or Manual Retrieval
When the secondary tier is relatively fast (e.g. local storage, NAS), it is most convenient to put the system in automatic-retrieval mode. This way, users just need to open a stub-file to retrieve its original content. When the secondary tier is slow or based on a pay-per-access model (e.g. LTO tape, AWS Glacier), it is best to have an admin control the retrieval process manually.
Smart retrieval engine
Tiger Bridge supports partial retrieval. This means that if a user (or application) tries to access a particular portion of a file that is being retrieved, Tiger Bridge will fetch that portion first. This greatly enhances the user experience with large files. For instance, a video editor won't need to wait for an entire file to be retrieved when trying to "scrub" a file that is in the cloud. Tiger Bridge is also smart about browsing: if a user simply browses a folder, files won't be retrieved just to display thumbnails. Finally, when retrieving from linear storage such as LTO tape, a smart bulk retrieval engine optimizes the order in which files are retrieved in order to minimize seek times.
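For linear media, the core of such an optimization is to read each tape in one forward pass. The sketch below models this idea under assumed request fields (`tape`, `offset`); it is illustrative only, not the engine's actual scheduling code.

```python
def plan_bulk_retrieval(requests):
    """Order retrieval requests by (tape id, start offset) so each
    tape is read in a single forward sweep, minimizing seek time on
    a linear medium.  `tape` and `offset` are assumed field names."""
    return sorted(requests, key=lambda r: (r["tape"], r["offset"]))

# Example: three requests arriving in arbitrary order.
plan = plan_bulk_retrieval([
    {"tape": 2, "offset": 10},
    {"tape": 1, "offset": 500},
    {"tape": 1, "offset": 20},
])
```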
Leverage processing power from the cloud
Unlike other popular gateway solutions that transfer blocks of data into the cloud, Tiger Bridge transfers files. As such, the same file and folder structure is accessible from within the cloud as from your local server. Any number of virtual compute engines can easily be fired up to crunch all your data. Facial recognition, transcoding, big data analytics and other types of CPU-intensive applications are now at your fingertips, without changing anything in your existing infrastructure.
Powerful policies: Replicate, Tier, Overflow and Sync
Tiger Bridge provides all the policies you might need:
- Replicate: Replicates files from the primary tier to the secondary tier. You end up with two copies. The reported size of the bridged volume is the size of the primary volume.
- Tier: Once a file has been successfully moved to the secondary tier, the original file is removed from the primary tier and replaced with a stub-file IF there is a need to make space on the primary tier. The reported size of the bridged volume is the larger of the primary or the secondary.
- Overflow: The secondary tier appears as a direct extension of the local volume. Once a file has been successfully moved to the secondary tier, the original file is removed from the primary tier and replaced with a stub-file. A file can be in either location, but not in both. The reported size of the bridged volume equals the primary plus the secondary volumes.
- Sync: A special version of the Tier policy, allowing multiple servers to sync on the same cloud account. As a result, when a file gets copied to the cloud from any connected server, a stub-file is automatically created shortly after on all other connected servers.
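The reported-capacity rules above reduce to a small function. This sketch simply restates them in code (Sync is treated like Tier, since the text calls it a special version of that policy); it is a model of the described behavior, not product code.

```python
def reported_size(policy: str, primary: int, secondary: int) -> int:
    """Reported capacity of a bridged volume under each policy,
    per the descriptions above (sizes in bytes)."""
    if policy == "replicate":
        return primary                   # two copies, primary size shown
    if policy in ("tier", "sync"):
        return max(primary, secondary)   # larger of the two tiers
    if policy == "overflow":
        return primary + secondary       # secondary extends the volume
    raise ValueError(f"unknown policy: {policy}")
```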
Active Directory support
Integrating Tiger Bridge into your Active Directory environment is a real breeze. Indeed, Tiger Bridge works on the NTFS file system, which is native to Active Directory, ensuring a perfect fit.
Enabling legacy file systems to transparently extend into the cloud
Setting up an SSD accelerator for an existing NAS storage
Implementing a simple Disaster Recovery strategy
Migrating live data from one cloud provider to another (requires Tiger Pool)
Synchronizing servers located in different geos (soon available)
Tiger Bridge extends a local file system into a target storage.
Figure 1 shows a regular server with a local volume. Figure 2 shows how the local volume can be extended into another volume, an SMB share, a tape library or an object storage (i.e. cloud). Unlike with Tiger Pool, where users can access any pool member directly, with Tiger Bridge users ALWAYS access the local volume. If the data is not available on the local volume, it is recalled. As such, Tiger Bridge can be used to create a high-speed SSD cache in front of a traditional HDD RAID.
Figure 3 shows how Tiger Bridge can be used for Disaster Recovery applications where near-zero downtime is required. In a standard backup-restore procedure, ALL data would need to be restored before the recovery site is operational. With Tiger Bridge, the recovery site only needs to read metadata and generate stub-files in order to resume operations. Files are then restored as they are accessed. Such an approach takes a fraction of the time of a traditional restore.
Figure 4 shows how Tiger Bridge, in conjunction with Tiger Pool, can be used to migrate live data from one cloud provider to another. In this case, two local volumes, each extended into its own cloud using Tiger Bridge, are connected to a server. Tiger Pool is then used to combine these two volumes into one unified namespace. Migrating data from one volume to the other automatically triggers a recall, a migration, and a push to the new cloud. This operation can take place while the pool is in use, so users are not aware of the change taking place.
Figure 5 shows how Tiger Bridge fits into the larger FAN model with the other software.
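The Disaster Recovery bootstrap in Figure 3 amounts to recreating the directory tree as placeholders before any content arrives. The sketch below uses zero-byte files as stand-ins for real stub-files and assumes a metadata listing with `relpath`, `atime` and `mtime` fields; it illustrates the idea only.

```python
import os

def seed_recovery_site(metadata, root):
    """Hypothetical DR bootstrap: create zero-byte placeholder files
    (stand-ins for real stub-files) from a metadata listing so the
    recovery site can resume operations before content is restored."""
    for entry in metadata:
        path = os.path.join(root, entry["relpath"])
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w"):
            pass  # zero-byte placeholder; content is recalled on access
        # restore the original timestamps so tiering policies still apply
        os.utime(path, (entry["atime"], entry["mtime"]))
```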
- At least 2 GB of physical RAM (8 GB or more for servers).
- 100 MB of available hard-disk space for the Tiger software installation.
- 4/8/16/32 Gb FC, InfiniBand, SAS, or 1/10/40/100 GbE adapter for connection to the storage.
- Network LAN connection (1 Gb at least) for public communication.
The following TCP ports must not be blocked by a firewall: 3000, 3001, 8555, 9120, 9121, 9122, 9123, 9124, 9125, 9126, 9127.
Tiger software supports any simple or striped NTFS-formatted volume to which the storage server has Read & Write access. You can connect the storage server to the storage directly, or through a switch, using Fibre Channel, SAS, InfiniBand, 1/10 Gb Ethernet (for iSCSI storage), PCIe, etc.
Note: Although Tiger software is designed to work with any iSCSI initiator, it is currently certified to work with:
- Microsoft iSCSI Software Initiator
- UNH iSCSI Initiator
- Studio Network Solutions' globalSAN iSCSI initiator for OS X
- ATTO Xtend SAN iSCSI initiator
Note: If you use an iSCSI initiator not listed above, you can contact the Tiger Technology support team to inquire about possible support.
Tape Libraries: Spectra Logic BlackPearl, Qualstar, XenData
Object Storage: Amazon AWS, Microsoft Azure, IBM Cleversafe, Hydrastor, DDN WOS
... and more!
DAS/SAN: SANS Digital, Seagate, ATTO RAID, Accusys RAID, LSI RAID by Avago
... and more!