Don't miss the Ardis DDP demo at IBC this year

Sep 11, 2018 - 03:07

 

Catch a demo at Booth #7 C43 between 14-18 September 2018 to see their fully integrated file-based SSD Caching in the latest v5 software.

DDP shared storage SAN systems feature breakthrough file-based SSD Caching, which makes a DDP perform like an all-SSD system while offering large, affordable HD capacity. Combine a DDP base system with file-based Dual Path Technology and an individual selection of SSD and HD packs, and there is magic: read bandwidth of an SSD4 pack starts at 1.2 GByte/s, an SSD8 pack can hold up to 32 TB, and each HD8 pack can hold up to 96 TB.

To understand why DDP's way of Caching is different from others, and what exactly the benefit is, we have to explain the general operation of a DDP system.

In a DDP system, the data is stored independently of the metadata (file system) in data containers, the so-called Data Locations. This allows file-based SSD Caching as well as Load Balancing.
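As a rough illustration of this separation (a minimal sketch with made-up names and values, not the actual DDP on-disk format), the metadata tree and the per-file placement can be thought of as two independent structures:

```python
# Hypothetical sketch: metadata lives in the file-system tree, while a separate
# placement map records in which Data Location (data container) each file's
# payload lives. Names and sizes are illustrative only.

file_system_tree = {                          # metadata: what clients see
    "/Projects/Promo/frame_0001.dpx": {"size": 12_582_912},
    "/Projects/Promo/frame_0002.dpx": {"size": 12_582_912},
}

placement = {                                 # data: per-file Data Location
    "/Projects/Promo/frame_0001.dpx": "HD1",
    "/Projects/Promo/frame_0002.dpx": "HD2",
}

def move_file(path: str, target_location: str) -> None:
    """Cache or rebalance a file without touching its visible path."""
    placement[path] = target_location         # the file-system tree is unchanged
```

Because placement is tracked per file and independently of the tree, a file can be cached or balanced without its path ever changing for the clients.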

The big advantages of Solid State Disks – quiet, fast, no seek time, no mechanical components – are well known and of course one would like to have a storage system that consists of SSDs only. But who can afford that? With DDP you can have the bandwidth and performance of SSDs with the cost-effective storage capacity of hard drives.

Why? It is because DDPs are modular and can be populated with HD and SSD packs of different capacities.

Caching takes place between the SSDs and HDs acting as Data Locations and is file-based. In competing systems, very often only the metadata is cached, and/or Caching takes place at the level below the file system, i.e. block-based instead. As a result, there is little to no control over what exactly is cached and how it is cached.

With DDP and our newly developed Dual Path Technology however, Caching takes place at the file system level and allows precise control over the Caching process.

 

How does it work?

In the overview at the left, the file system tree with Folder Volumes is displayed. In the following windows, you can select the Caching Method, Data Location and Cache Priority for each individual Folder Volume as needed. These selections are dynamic and can therefore be changed during operation. One can select a specific Data Location or select Balanced; in that case files are evenly distributed across the spindle Data Locations.
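Conceptually, those per-Folder-Volume selections could be pictured as follows (an illustrative sketch with assumed field names, not the Storage Manager's real data model):

```python
from dataclasses import dataclass

# Illustrative per-Folder-Volume settings mirroring the three choices above:
# Caching Method, Data Location and Cache Priority. Names are assumptions.
@dataclass
class FolderVolumeSettings:
    caching_method: str   # "On Demand" or "Pinned"
    data_location: str    # a specific Data Location, or "Balanced"
    cache_priority: int   # higher value = kept in cache longer

settings = {
    "/Projects/NewsEdit": FolderVolumeSettings("On Demand", "SSD-Cache", 5),
    "/Archive/2017":      FolderVolumeSettings("Pinned",    "Balanced",  1),
}

# The selections are dynamic: changing them during operation is just an update.
settings["/Archive/2017"].caching_method = "On Demand"
```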

The Caching methods are “On Demand” and “Pinned”. The preferred cache method is to set the Data Location to the SSD Cache and the Cache Method to “On Demand” (see the sketch below). That way each file is ingested to the SSD Cache, and when the file is closed it is copied through to the spindles in balanced mode. When the cache is 80 % full, the least recently used files are pushed out of the cache; when such a file is needed again, it is automatically read from the spindles.
A second preferred method is to ingest/copy to the spindles with the Data Location set to Balanced and to use the cache method “Pinned” to make sure that all content of one or more Folder Volumes is duplicated to the SSDs.
There are other cache methods as well, of course. The SSD Cache can also be used as Primary Storage. All cache methods can be used simultaneously.
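As a rough illustration of the “On Demand” behaviour described above (a minimal sketch with assumed names, sizes and helpers, not the DDP implementation):

```python
from collections import OrderedDict

def copy_to_spindles(path: str) -> None:
    ...  # placeholder: balanced copy-through to the HD Data Locations

class OnDemandCache:
    """Sketch of On Demand caching: ingest to SSD, copy through on close,
    evict least recently used files once the cache is 80 % full."""

    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.used = 0
        self.files: OrderedDict[str, int] = OrderedDict()   # path -> size, LRU order

    def ingest(self, path: str, size: int) -> None:
        self.files[path] = size              # new file lands on the SSD cache first
        self.used += size

    def close(self, path: str) -> None:
        copy_to_spindles(path)               # copied through, but stays cached
        self._evict_if_needed()

    def read(self, path: str) -> str:
        if path in self.files:
            self.files.move_to_end(path)     # mark as recently used
            return "ssd-cache"
        return "spindles"                    # evicted files are read from the HDs

    def _evict_if_needed(self) -> None:
        while self.used > 0.8 * self.capacity and self.files:
            _, size = self.files.popitem(last=False)   # drop least recently used
            self.used -= size
```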

Hybrid DDPs typically have an SSD Data Location that is used as SSD Cache and two or more Data Locations with traditional hard drives. SSD Caching of a file means that a copy of the file is created and stored, with an additional path added to the file system tree, but only one path is ever active. This is made possible by our Dual Path Technology; read more about DPT here.
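One way to picture that dual-path idea (an illustrative reading with assumed field names, not the actual DPT structures):

```python
from dataclasses import dataclass
from typing import Optional

# Sketch: a cached file keeps two known paths, one on the spindles and one on
# the SSD copy, but only one of them is active at any moment.
@dataclass
class DualPathEntry:
    spindle_path: str                  # permanent location on an HD Data Location
    cache_path: Optional[str] = None   # SSD copy, if one exists
    active: str = "spindle"            # which path reads are currently served from

    def promote_to_cache(self, cache_path: str) -> None:
        self.cache_path = cache_path
        self.active = "cache"          # switch the single active path to the SSD copy

    def evict_from_cache(self) -> None:
        self.cache_path = None
        self.active = "spindle"        # reads fall back to the spindles
```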

DDP can be delivered partially populated and upgraded at a later date with additional SSDs or HDs. Data Locations also can be other DDPs or parts of other DDPs in a DDP Cluster configuration. A DDP Cluster is a scale-out solution with linear scaling of bandwidth and capacity.

 

File-Based Load Balancing

Merely adding spindles does not necessarily increase the performance of a storage system. But it certainly does when a DDP system with Load Balancing is used: the more drives and RAID sets are added, the faster the storage system gets.

Why and how does it work? Well, with a DDP system, the data is independent of the file system and stored in data containers called Data Locations. Load Balancing involves multiple Data Locations and is file-based. It is designed to compensate for the seek time of hard disks.

When Load Balanced is selected for a certain Folder Volume, the DDP distributes the data equally among the available Data Locations. The picture of the Storage Manager Screen below shows 3 Data Locations: HD1, HD2, and Cache.

When choosing “Balanced” as Data Location, the folder and file data of the selected Folder Volume will be ingested/copied Load Balanced to HD1 and HD2.

For example, with a DPX sequence, all odd-numbered frames would be stored on Data Location 1 (HD1) and all even-numbered frames would be stored on Data Location 2 (HD2). This leads to significantly lower seek latency during playback of the files, and thus to a large increase in performance.
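A minimal sketch of that frame distribution (illustrative only, not the DDP's internal logic):

```python
# Round-robin distribution of a DPX sequence over the spindle Data Locations:
# with two locations, odd frames land on the first and even frames on the
# second, so the two RAID sets seek and stream in parallel.
def balance_frames(frame_count: int, locations: list[str]) -> dict[str, list[str]]:
    layout: dict[str, list[str]] = {loc: [] for loc in locations}
    for n in range(1, frame_count + 1):
        target = locations[(n - 1) % len(locations)]   # alternate per frame
        layout[target].append(f"frame_{n:04d}.dpx")
    return layout

# With ["HD1", "HD2"]: frames 1, 3, 5, ... go to HD1 and 2, 4, 6, ... to HD2.
print(balance_frames(6, ["HD1", "HD2"]))
```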

A DDP can therefore Load Balance the data of any Folder Volume over Data Locations. The Data Locations are accessed independently and can be within one DDP, but also on another DDP, or on parts of another DDP, in a DDP Cluster configuration. In this way a DDP Cluster scales linearly in bandwidth and capacity by adding DDPs, so-called Scale Out.

In a DDP Cluster, each desktop has Parallel Access to all DDPs simultaneously. Two identical DDPs have twice the throughput of one DDP. DDPs do not need to be identical, however. The more Data Locations there are, the more independent accesses can be made and the higher the total throughput will be.

A DDP Cluster configuration in particular makes a DDP quite future-proof. New DDPs can be added and old DDPs removed at any time without interrupting the workflow, capacity and performance can be increased easily whenever necessary, and administration nevertheless always concerns a single virtual volume: a DDP Cluster can live forever and can always be up to date.

Catch a demo at Booth #7 C43 between 14-18 September 2018 to see their fully integrated file-based SSD Caching in the latest v5 software.

Check out the DDP product range here.