Aspera

Reliable Data Delivery at the Institute for Genome Sciences of the University of Maryland

IGS manages the collection, storage and distribution of terabytes of genomic research data globally. As average file sizes grew from 5 GB to 20 GB, data movement over FTP faced growing reliability problems, especially on long-distance international transfers. FTP was delivering under 20% network utilization, and frequent connection failures resulted in excessive retransmission.
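To put that utilization figure in perspective, a rough back-of-the-envelope calculation shows how a single 20 GB transfer stretches out at 20% efficiency; the 1 Gbit/s link rate used here is an assumption for illustration, not a figure from the case study:

    t_{\text{ideal}} = \frac{20\ \text{GB} \times 8\ \text{bit/B}}{1\ \text{Gbit/s}} \approx 160\ \text{s} \approx 2.7\ \text{min}

    t_{\text{20\%}} = \frac{20\ \text{GB} \times 8\ \text{bit/B}}{0.2 \times 1\ \text{Gbit/s}} \approx 800\ \text{s} \approx 13.3\ \text{min}

That fivefold penalty applies before any time lost to failed connections and retransmitted data.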

Scalable Synchronization of Big Data — Over Distance

Aspera Sync is purpose-built by Aspera for highly scalable, multidirectional asynchronous file replication and synchronization. Sync is designed to overcome the bottlenecks of conventional synchronization tools like rsync and to scale up and out for maximum-speed replication and synchronization over WANs, for today’s largest big data file stores — from millions of individual files to the largest file sizes.

Moving Big Data at Maximum Speed Over Wide Area Networks

In today’s digital world, fast and reliable movement of data, including massive data sets over global distances, is becoming vital to business success across virtually every industry. The Transmission Control Protocol (TCP) that has traditionally been the engine of this data movement, however, has inherent performance bottlenecks (Figure 1), especially for networks with high round-trip time and packet loss.

TCP provides reliable data delivery under ideal conditions, but its throughput bottleneck becomes obvious, and severe, with the increased packet loss and latency found on long-distance WANs. Adding more bandwidth does not change the effective throughput: file transfer speeds do not improve and expensive bandwidth is underutilized.
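A common back-of-the-envelope model for this limit is the Mathis et al. approximation, which bounds steady-state TCP throughput by the maximum segment size (MSS), round-trip time (RTT) and packet loss rate p, with no dependence on link capacity at all. The MSS, RTT and loss values below are illustrative assumptions, not measurements from this document:

    \text{throughput} \lesssim \frac{MSS}{RTT \cdot \sqrt{p}} = \frac{1460\ \text{B}}{0.15\ \text{s} \times \sqrt{0.01}} \approx 97\ \text{kB/s} \approx 0.8\ \text{Mbit/s}

At 150 ms of round-trip latency and 1% packet loss, a single TCP flow tops out below 1 Mbit/s whether the underlying link is 100 Mbit/s or 10 Gbit/s, which is why adding bandwidth alone does not improve transfer speeds.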

Taking Big Data to the Cloud

Cloud computing has become a viable, mainstream solution for data processing, storage and distribution. Adoption is accelerating — Amazon Web Services (AWS) has gone from 262 billion objects stored in its S3 cloud storage in 2010 to over 1 trillion in 2012. However, companies that work with big data have been unable to realize the full benefits of the cloud, because moving large data sets in and out of cloud infrastructure over the WAN remains a bottleneck.