Tiger Bridge

Scale-Deep Lifecycle Manager

Replicate & tier aging files to disk, tape and cloud.

Tiger Bridge makes it easy to align data value with storage costs by seamlessly extending your NTFS or Tiger Store file system and performing transparent data migration between one storage tier and another using simple policies.

Tiger Bridge installs as a high-performance, secure and flexible software connector that runs on your Windows server. Once installed, it enables a primary storage tier (e.g. a local NTFS file system) to transparently extend into a secondary tier (e.g. tape libraries with Tiger Bridge-T; block-level/SMB with Tiger Bridge-D; and object-based RESTful/S3 with Tiger Bridge-C).

The secondary tier of storage can be used for replicating, tiering, overflowing, as well as for keeping multiple geographic sites in sync.

Highlights

Extend Tiger Store, Tiger Pool or NTFS file systems
Tiger Bridge supports the NTFS file system natively. As such, it can be installed on any Windows server to extend a local file system into a secondary tier of storage.

The primary storage can contain live data. Once configured, files already residing on the primary storage, as well as files created or updated there, are automatically replicated to the target storage according to the configured policy and schedule.

A Tiger Store, Tiger Pool or NTFS volume can be used as a primary as well as a secondary tier of storage, and in some instances as both at the same time.
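
The replicate-on-change behavior can be modeled conceptually with a file-system watcher. The sketch below, assuming the third-party Python watchdog package and hypothetical paths, copies new or updated files to a secondary tier; Tiger Bridge implements this natively inside Windows, so this only illustrates the flow:

    import shutil
    from pathlib import Path

    from watchdog.events import FileSystemEventHandler
    from watchdog.observers import Observer

    PRIMARY = Path(r"D:\projects")      # hypothetical primary NTFS volume
    SECONDARY = Path(r"\\nas\replica")  # hypothetical SMB secondary tier

    class ReplicateOnChange(FileSystemEventHandler):
        """Copy a file to the secondary tier whenever it is created or updated."""

        def on_created(self, event):
            self._replicate(event)

        def on_modified(self, event):
            self._replicate(event)

        def _replicate(self, event):
            if event.is_directory:
                return
            src = Path(event.src_path)
            dst = SECONDARY / src.relative_to(PRIMARY)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # two copies now exist, as with Replicate

    observer = Observer()
    observer.schedule(ReplicateOnChange(), str(PRIMARY), recursive=True)
    observer.start()  # watches until the process is stopped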
Tier to Tiger Store, Tiger Pool, NTFS, LTO tape libraries, SMB shares or RESTful/S3 targets
Tiger Bridge supports a wide variety of secondary tiers. Tiger Store, Tiger Pool, and NTFS file systems can be used as primary or secondary tiers.

Virtually any device supporting the SMB protocol can be used as a secondary tier target. This includes solutions such as XenData and StrongBox.

LTO tape libraries from vendors such as IBM and Qualstar can also be used.

RESTful/S3 interfaces:

- Amazon Web Services (AWS)
- IBM Cloud OpenStack Services (ICOS)
- Microsoft Azure
- Spectra Logic Blackpearl
- DataDirect Networks Web Object Scaler (WOS)
Support for multiple targets (available soon)
Each volume and folder of a primary tier can be configured with its own target and policy.

This enables you, for example, to tier one folder to the cloud and another folder to tape. It also allows you to tier from one local volume (e.g. SSD) to another local volume (e.g. HDD), and then from this second volume to the cloud.
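
As a rough illustration, such a per-folder configuration could be modeled as a mapping from source path to target and policy. The structure and names below are hypothetical, not the actual Tiger Bridge configuration format:

    # Hypothetical layout; the last two entries chain SSD -> HDD -> cloud.
    TARGETS = {
        r"D:\media\archive": {"target": "s3://prod-archive",  "policy": "tier"},
        r"D:\media\masters": {"target": "tape://lto-library", "policy": "replicate"},
        r"S:\ssd-scratch":   {"target": r"E:\hdd-tier",       "policy": "tier"},
        r"E:\hdd-tier":      {"target": "s3://cold-bucket",   "policy": "tier"},
    }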
Stub-files for transparent tracking and retrieval
When a file has been moved to a secondary tier of storage and needs to be removed from the primary tier, a stub-file containing all the valuable metadata is left behind and can be readily accessed by users and applications.

Unlike with backup software, there is no need to consult a database to find out where a file's content was moved. All you need to do is browse your file system as usual.

On Mac, stub-files will appear with a grey tag. On PCs, the stub-file attribute will be set.

When a stub-file is opened, its content is automatically and transparently retrieved and made available to the requesting application.
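
On Windows, HSM-style products typically mark stubs with the standard offline file attribute. Assuming Tiger Bridge does the same (the text above only says the stub-file attribute is set), a script could detect stubs like this:

    import ctypes

    FILE_ATTRIBUTE_OFFLINE = 0x1000        # standard Windows "offline" flag
    INVALID_FILE_ATTRIBUTES = 0xFFFFFFFF

    kernel32 = ctypes.windll.kernel32
    kernel32.GetFileAttributesW.restype = ctypes.c_uint32

    def is_stub(path: str) -> bool:
        """True if the file carries the offline attribute (assumed stub marker)."""
        attrs = kernel32.GetFileAttributesW(path)
        if attrs == INVALID_FILE_ATTRIBUTES:
            raise OSError(f"cannot read attributes of {path!r}")
        return bool(attrs & FILE_ATTRIBUTE_OFFLINE)

    print(is_stub(r"D:\projects\scene42.mov"))  # hypothetical path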
Fully Automated and/or Manual Replication
The Reclaim Space policy lets the admin set a usable threshold level on the primary storage (e.g. 75% full). As long as the established volume capacity has not been reached, selected folders are replicated automatically, but files are not tiered. When the file system capacity exceeds the threshold, only files meeting the specified criteria (e.g. not touched in 12 weeks and larger than 1 GB) are flushed; in other words, they are replaced with stub-files.

Of course, files, folders or the entire volume can also be manually selected for replication.
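
A minimal sketch of the Reclaim Space logic described above, using the example numbers from the text (75% threshold, untouched for 12 weeks, larger than 1 GB); the flush_to_stub step is a hypothetical placeholder:

    import shutil
    import time
    from pathlib import Path

    THRESHOLD = 0.75              # start flushing above 75% used
    MIN_AGE = 12 * 7 * 24 * 3600  # "not touched in 12 weeks", in seconds
    MIN_SIZE = 1 * 1024**3        # "larger than 1 GB"

    def flush_to_stub(path: Path) -> None:
        # Hypothetical placeholder: Tiger Bridge would replace the
        # already-replicated file with a stub-file here.
        print(f"would flush {path}")

    def reclaim(root: Path) -> None:
        usage = shutil.disk_usage(root)
        if usage.used / usage.total < THRESHOLD:
            return  # below threshold: keep replicating, but do not tier
        now = time.time()
        for f in root.rglob("*"):
            st = f.stat()
            if f.is_file() and now - st.st_atime > MIN_AGE and st.st_size > MIN_SIZE:
                flush_to_stub(f)

    reclaim(Path(r"D:\projects"))  # hypothetical primary volume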

Fully Automated and/or Manual Retrieval
When the secondary tier is relatively fast (e.g. local storage or NAS), it is most convenient to put the system in automatic-retrieval mode. This way, users just need to open a stub-file to retrieve its original content.

When the secondary tier is slow or based on a pay-per-access model (e.g. LTO tape or AWS Glacier), it is best to have an admin control the retrieval process manually.
Smart retrieval engine
Tiger Bridge supports partial retrieval. This means that if a user (or an application) tries to access a particular portion of a file that is being retrieved, Tiger Bridge will seek that information first. This greatly enhances the user experience when accessing large files. For instance, a video editor won't need to wait for the entire file to be retrieved when trying to "scrub" a file that is in the cloud.
Tiger Bridge is also smart about browsing: when a user simply browses a folder, files are not retrieved just to display thumbnails. Finally, when retrieving from linear storage such as LTO tape, a smart bulk-retrieval engine optimizes the order in which files are retrieved to minimize seek times.
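
The bulk-retrieval idea can be illustrated in a few lines: retrieving files in on-tape order lets the drive seek forward in a single pass instead of shuttling back and forth. The tape_position values below are hypothetical block offsets:

    requests = [
        {"file": "a.mov", "tape_position": 81_920},
        {"file": "b.mov", "tape_position": 1_024},
        {"file": "c.mov", "tape_position": 40_960},
    ]

    # Ascending tape order yields a single forward pass over the medium.
    for job in sorted(requests, key=lambda r: r["tape_position"]):
        print("retrieve", job["file"])  # b.mov, c.mov, a.mov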
Leverage processing power from the cloud
Unlike other popular gateway solutions that transfer blocks of data into the cloud, Tiger Bridge transfers files. As such, the same file and folder structure is accessible from within the cloud as from your local server. Any number of virtual compute engines can easily be fired up to crunch all your data. Facial recognition, transcoding, big data analytics and other types of CPU-intensive applications are now at your fingertips, without changing anything in your existing infrastructure.
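
For example, because files land in the cloud as files, a cloud-side worker can enumerate the same folder layout straight from the bucket. A minimal sketch using boto3, with a hypothetical bucket and prefix and assuming valid AWS credentials:

    import boto3

    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(Bucket="tiger-bridge-data", Prefix="projects/")
    for obj in resp.get("Contents", []):
        print(obj["Key"])  # same layout as D:\projects\... on the local server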
Powerful policies: Replicate, Tier, Overflow and Sync
Tiger Bridge provides all the policies you might need:

- Replicate: Replicates files from the primary tier to the secondary tier. You end up with two copies. The reported size of the bridged volume is the size of the primary volume.

- Tier: Once a file has been successfully moved to the secondary tier, the original file is removed from the primary tier and replaced with a stub-file IF there is a need to make space on the primary tier. The reported size of the bridged volume is the larger of the primary and the secondary.

- Overflow: The secondary tier appears as a direct extension of the local volume. Once a file has been successfully moved to the secondary tier, the original file is removed from the primary tier and replaced with a stub-file. A file can be in either location, but not in both. The reported size of the bridged volume is equal to the primary plus the secondary volumes.

- Sync: A special version of the Tier policy, allowing multiple servers to sync on the same cloud account. As a result, when a file gets copied to the cloud from any connected server, a stub-file is automatically created shortly after on all other connected servers.
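
A worked example of the reported bridged-volume size under each policy, with made-up capacities:

    primary, secondary = 10, 40  # capacities in TB, made up for illustration

    print("Replicate:", primary)                  # 10 TB: size of the primary
    print("Tier:     ", max(primary, secondary))  # 40 TB: larger of the two
    print("Overflow: ", primary + secondary)      # 50 TB: sum of both tiers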
Active Directory support
Integrating Tiger Bridge into your Active Directory environment is a real breeze: the NTFS file system it builds on is native to Active Directory and ensures a perfect fit.
Tiger Bridge extends a local file system into target storage.
Figure 1 shows a regular server with a local volume. Figure 2 shows how the local volume can be extended into another volume, an SMB share, a tape library or an object storage (i.e. cloud). Unlike with Tiger Pool, where users can access any of the pool members directly, with Tiger Bridge users ALWAYS access the local volume. If the data is not available on the local volume, it is recalled. As such, Tiger Bridge can be used to create a high-speed SSD cache in front of a traditional HDD RAID.

Figure 3 shows how Tiger Bridge can be used for Data Recovery applications where near-zero downtime is required. In a standard backup-restore procedure, ALL data would need to be restored before the recovery site is operational. With Tiger Bridge, the recovery site only needs to read metadata to generate stub-files in order to resume operations. Files get restored as they are accessed. Such an approach takes a fraction of the time of a traditional restore.
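
The idea can be sketched as follows: rather than restoring every byte, enumerate the object store and drop a zero-byte placeholder per file so operations can resume immediately. How Tiger Bridge actually encodes its stubs is internal to the product; the bucket and paths below are hypothetical, and boto3 with valid AWS credentials is assumed:

    from pathlib import Path

    import boto3

    RECOVERY_ROOT = Path(r"D:\recovered")  # hypothetical recovery volume
    s3 = boto3.client("s3")

    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket="tiger-bridge-data"):
        for obj in page.get("Contents", []):
            stub = RECOVERY_ROOT / obj["Key"]
            stub.parent.mkdir(parents=True, exist_ok=True)
            stub.touch()  # zero-byte placeholder; content recalled on access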


Figure 4 shows how Tiger Bridge, in conjunction with Tiger Pool, can be used to migrate live data from one cloud provider to another. In this case, two local volumes, each extended into its own cloud using Tiger Bridge, are connected to a server. Tiger Pool is then used to combine these two volumes into one unified namespace. Migrating data from one volume to the other automatically triggers a recall, a migration, and a push to the new cloud. This operation can take place while the pool is in use; as such, users are not aware of the change taking place.


Figure 5 shows how Tiger Bridge fits into the larger FAN model with the other software.
Server Requirements
- PC with 64-bit (x64) processor.

Note: Tiger Bridge actively uses the APIs provided by the target provider (S3, DDN WOS, etc.). These APIs may consume a significant amount of CPU depending on the connection and the amount of data moved. Please refer to the minimum CPU requirements of your target provider.

- 64-bit Microsoft Windows® 7/Server 2008 R2/Windows® 8/Server 2012/Server 2012 R2/Windows® 10/Server 2016.

- At least 4 GB of physical RAM.

- 30 MB of available hard-disk space for installation.

Note: Tiger Bridge keeps track of the files it manages in a database, stored in the product installation folder. The size of the database grows proportionally to the number of files managed. For example, if Tiger Bridge manages 1,000,000 files, the size of the database is approximately 100 MB (see the sizing sketch after this list). Unless there's enough free space for the database, Tiger Bridge is unable to operate.
- TCP ports 8536 and 8537 must not be blocked by any firewall.
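
The note above implies roughly 100 bytes of database per managed file, which makes capacity planning a one-liner (the file count below is made up):

    files_managed = 25_000_000  # made-up file count
    bytes_per_file = 100        # derived from the 1,000,000 files -> ~100 MB example
    print(f"estimated database size: {files_managed * bytes_per_file / 1e6:.0f} MB")
    # -> estimated database size: 2500 MB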
Storage Requirements
Source Volume Requirements
Tiger Bridge supports any already existing NTFS volume, mounted on the computer running Tiger Bridge as a local volume with Read & Write permissions.

Target Storage Requirements
- S3-compatible object storage
- IBM Cloud Object Storage
- Microsoft Azure Blob Storage
- DDN Web Object Scaler (WOS)
- SMB/CIFS network share
- Another volume, mounted on the computer as a local volume with Read & Write permissions
Tape Libraries
- Spectra Logic BlackPearl
- Qualstar
- XenData

Object Storage
- Amazon AWS
- Microsoft Azure
- IBM Cleversafe
- NEC Hydrastor
- DDN WOS
- Wasabi

NAS
- EMC Isilon
- Netgear
- Synology
- QNAP
- Qumulo
- NEC
- Infortrend
- NetApp
- Seagate
- Pixit Media
... and more!

DAS/SAN
- Accusys ExaSAN
- DDN
- HP
- IBM
- Infortrend
- NEC
- NetApp
- Nexsan
- Promise
- Quantum
- SANS Digital
- Seagate
- ATTO RAID
- Accusys RAID
- LSI RAID by Agavo
- AccelStor
... and more!