
Overview

The company stored its experiment data on expensive on-premises storage, leading to high costs without efficient data access.

The Challenge: The high price of storing massive data sets

The organisation generates a considerable 250 to 300 GB of data per experiment run, with 30 to 50 lab computers each processing around five to ten experiments a week. Storing this data was proving costly: the company paid for a top-tier on-premises solution, yet only needed new data to be readily available during the initial 30 to 60 days of each experiment run.
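As a rough illustration of why this became expensive, the figures above can be bracketed into a weekly data volume. The sketch below assumes each of the five to ten weekly experiments per machine produces its own 250 to 300 GB run, an interpretation the figures leave open:

```python
# Back-of-the-envelope weekly data volume from the ranges quoted above.
# Assumption: each experiment constitutes one 250-300 GB run.

GB_PER_RUN = (250, 300)      # data produced per experiment run (low, high)
MACHINES = (30, 50)          # lab computers
RUNS_PER_WEEK = (5, 10)      # experiments per machine per week

low = GB_PER_RUN[0] * MACHINES[0] * RUNS_PER_WEEK[0]
high = GB_PER_RUN[1] * MACHINES[1] * RUNS_PER_WEEK[1]

print(f"Weekly volume: {low / 1000:.1f} - {high / 1000:.1f} TB")
# → Weekly volume: 37.5 - 150.0 TB
```

Even at the low end, tens of terabytes a week quickly outgrow a premium on-premises array when most of that data is only actively used for its first 30 to 60 days.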

In addition, accessing archived data involved time-consuming and expensive copying back and forth between separate storage solutions. One of these was an AWS S3 bucket used for archive storage, which lacked effective management and resulted in a convoluted, untraceable data flow.

It was also important for our client’s storage solution to be secure and immutable, due to strict healthcare regulations around data integrity.

The Solution: The perfect blend of speed, savings and security

Ekco’s solution entailed leveraging Azure Archive Storage for long-term data retention. To facilitate the transfer from the on-premises environment to Azure, we deployed Azure Data Box Gateway as an appliance running in our client’s on-premises environment. The gateway acted as a local cache with a substantial capacity of 45 TB: during the initial 30 to 60 days, when data was frequently accessed, the cache provided fast, local storage for quick retrieval. Once each experiment concluded, automated rules moved the data to the archive tier in Azure, significantly reducing storage costs without sacrificing accessibility or compliance. Immutable storage ensured data integrity: changes were logged and new versions appended while the original data remained untouched.
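A rule-based transition of this kind can be expressed as an Azure Blob Storage lifecycle management policy. The sketch below builds such a policy as JSON; the rule name, the 60-day threshold, and the `experiments/` prefix are illustrative assumptions, not the client’s actual configuration:

```python
import json

# Illustrative lifecycle policy: move block blobs to the archive tier once
# they have gone unmodified for 60 days (the end of the active experiment
# window described above). Rule name and prefix are hypothetical.
policy = {
    "rules": [
        {
            "enabled": True,
            "name": "archive-after-active-window",
            "type": "Lifecycle",
            "definition": {
                "filters": {
                    "blobTypes": ["blockBlob"],
                    "prefixMatch": ["experiments/"],  # hypothetical path
                },
                "actions": {
                    "baseBlob": {
                        "tierToArchive": {
                            "daysAfterModificationGreaterThan": 60
                        }
                    }
                },
            },
        }
    ]
}

print(json.dumps(policy, indent=2))
```

A policy in this shape can be applied to a storage account with the Azure CLI, e.g. `az storage account management-policy create --policy @policy.json`, after which the platform evaluates and enforces the rule without any manual copying.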


The Outcome: Amplifying the medical tech company’s data potential and storage

With our recommended solution, our client would be able to quickly access their data from the cache server during the active experiment phase and then seamlessly transfer it to the archive storage for long-term retention. The simplified data flow and centralised Azure storage environment would streamline operations and enhance traceability.

Through stress testing and verification using AzCopy and Azure Storage Explorer, our client confirmed the solution was suitable for their production environment. By leveraging lifecycle management and rule-based tier transitions, our client would be able to enhance their research capabilities and reduce costs, while upholding stringent compliance demands.
