
To Petabytes and Beyond

HPC Storage Drives, File Systems and RAID Systems

Where will you store YOUR data?

Large computing jobs often require tremendous amounts of information and result in exponential data growth. To meet these demands, today's data storage options must be extremely scalable, cost-effective, and easy to manage, ensuring data availability and recovery.

With this in mind, Red Barn Computers has carefully chosen a combination of product lines in Hard Drives, DAS RAID Systems, Distributed File Systems, Storage Area Networking (SAN) solutions, and software to provide our customers with a wide variety of storage solutions.

Products

  • Direct-Attached Storage
    • Fibre, SCSI, iSCSI, SAS to SATA RAID
    • Fibre, SAS to SAS/SATA RAID
    • SCSI to SCSI RAID
    • Fibre to Fibre/SATA RAID
  • Distributed File Systems
  • Storage Area Network (SAN) Solutions
  • Network Attached Storage (NAS)

Storage Solutions

Hadoop is an open-source project administered by the Apache Software Foundation. Hadoop's contributors work for some of the world's biggest technology companies, and that diverse, motivated community has produced a genuinely innovative platform for consolidating, combining, and understanding data.

Enterprises today collect and generate more data than ever before. Relational and data warehouse products excel at OLAP and OLTP workloads over structured data. Hadoop, however, was designed to solve a different problem: the scalable, reliable storage and analysis of both structured and complex data. As a result, many enterprises deploy Hadoop alongside their legacy IT systems, allowing them to combine old and new data sets in powerful new ways.

Technically, Hadoop consists of two key services: reliable data storage using the Hadoop Distributed File System (HDFS) and high-performance parallel data processing using a technique called MapReduce. Hadoop runs on a collection of commodity, shared-nothing servers. You can add or remove servers in a Hadoop cluster at will; the system detects and compensates for hardware or system problems on any server. Hadoop, in other words, is self-healing: it can deliver data, and can run large-scale, high-performance processing jobs, in spite of system changes or failures.
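To give a feel for the MapReduce model, here is a toy, single-process sketch in Python. The function names (`map_phase`, `shuffle`, `reduce_phase`) are illustrative, not Hadoop's API; in a real cluster the map and reduce tasks run in parallel across many nodes over HDFS data.

```python
from collections import defaultdict

def map_phase(documents):
    """Map step: emit (word, 1) pairs from each input record."""
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle step: group intermediate values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce step: sum the counts emitted for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the quick brown fox", "the lazy dog", "the fox"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["the"])  # 3
```

Because each phase only depends on its inputs, the map and reduce steps can be distributed across commodity servers and rerun on another node if one fails, which is the heart of Hadoop's self-healing behavior.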

GlusterFS is a sophisticated GPLv3 file system that uses FUSE. It allows you to aggregate disparate storage devices, which GlusterFS refers to as "storage bricks", into a single storage pool or namespace. It is what is sometimes called a "meta-file-system": a file system built on top of another file system. The storage in each brick is formatted with a local file system such as ext3 or ext4, and GlusterFS then uses those file systems for storing data (files and directories). GlusterFS is interesting for many reasons: it lets you aggregate and use disparate storage resources in a variety of ways, and it is in use at a number of sites for very large storage arrays, for high-performance computing, and for specific applications, such as bioinformatics, that have particular (and often particularly nasty) I/O patterns.
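The key idea behind a distributed namespace is that each file's location is computed rather than looked up. As a simplified sketch (GlusterFS actually uses an elastic hashing scheme in its distribute translator, not MD5, and the brick paths below are made up), placement can be as simple as hashing the file path to pick a brick:

```python
import hashlib

def pick_brick(path, bricks):
    """Hash the file path to deterministically choose a brick.
    Simplified illustration of hash-based file placement; the
    brick names are hypothetical server:/export paths."""
    digest = hashlib.md5(path.encode()).hexdigest()
    return bricks[int(digest, 16) % len(bricks)]

bricks = ["server1:/export/brick1",
          "server2:/export/brick2",
          "server3:/export/brick3"]
print(pick_brick("/data/results.csv", bricks))
```

Because every client computes the same hash, any client can locate a file without consulting a central metadata server, which is one reason this style of file system scales out so well.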

The Lustre architecture is used for many different kinds of clusters, but it is best known for powering seven of the ten largest high-performance computing (HPC) clusters in the world, with some systems supporting tens of thousands of clients, many petabytes (PB) of storage, and I/O throughput approaching or exceeding hundreds of gigabytes per second (GB/sec). The Lustre parallel file system is well suited to large HPC cluster environments and has capabilities that fulfill important I/O subsystem requirements. The Lustre file system is designed to provide cluster client nodes with shared, parallel access to file system data. Lustre enables high performance by allowing system architects to use any common storage technology along with high-speed interconnects, and Lustre file systems scale well as an organization's storage needs grow. By providing multiple paths to the physical storage, the Lustre file system can also provide high availability for HPC clusters.

The power of Ceph can transform your organization's IT infrastructure and your ability to manage vast amounts of data. If your organization runs applications with different storage interface needs, Ceph is for you. Ceph's foundation is the Reliable Autonomic Distributed Object Store (RADOS), which provides your applications with object, block, and file system storage in a single unified storage cluster, making Ceph flexible, highly reliable, and easy for you to manage. Ceph's RADOS provides extraordinary data storage scalability: thousands of client hosts or KVMs accessing petabytes to exabytes of data. Each of your applications can use the object, block, or file system interfaces to the same RADOS cluster simultaneously, which means your Ceph storage system serves as a flexible foundation for all of your data storage needs. You can use Ceph for free and deploy it on economical commodity hardware.
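RADOS places data in two computed steps: an object name is hashed to a placement group (PG), and the PG is then mapped to a set of replica OSDs. The sketch below is deliberately simplified, as real Ceph uses rjenkins hashing and the CRUSH algorithm (a pseudo-random placement over the cluster map), not MD5 or round-robin; the names and sizes here are illustrative only:

```python
import hashlib

def object_to_pg(name, pg_count):
    """Hash an object name to a placement group.
    (Simplified; Ceph actually uses rjenkins hashing.)"""
    h = int(hashlib.md5(name.encode()).hexdigest(), 16)
    return h % pg_count

def pg_to_osds(pg, osds, replicas=3):
    """Choose replica OSDs for a PG. Toy round-robin stand-in for
    CRUSH's pseudo-random, topology-aware placement."""
    return [osds[(pg + i) % len(osds)] for i in range(replicas)]

osds = list(range(6))              # six hypothetical OSD ids
pg = object_to_pg("volume-42/object-7", 128)
print(pg_to_osds(pg, osds))
```

As with the other systems above, the point is that placement is a pure computation: any client can find an object's replicas without a central lookup table, which is what lets RADOS scale to thousands of hosts.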


Founded in 1996, Red Barn Computers has become a leader in High Performance Computing and Cluster Applications. Our expertise has given us the unique ability to offer total solutions encompassing all aspects of HPC, including turn-key solutions, storage, design, and administration. Red Barn also offers a wide array of custom Linux or open architecture platforms that are designed and built using high-end, reliable, industry-standard components. Our products include Dual, Quad, and 8 CPU Systems, along with high-end workstations. Our Clustering products are based on the Warewulf/Perceus platform, and provide numerous solutions for High Performance and High Availability Clusters.