I had the opportunity to be introduced to Red Hat Storage and Gluster at a joint presentation by Red Hat France and the company StartX. I have compiled my notes here, at least partially. I will conclude with the integration between Red Hat Storage and Hadoop, especially what we can expect from it before conducting a real-life experiment.
In 2011, Red Hat announced the acquisition of Gluster in order to enhance the value of this distributed storage system inside the larger Red Hat Storage offering (hereafter called RH Storage). Previously, the acquisition of Qumranet had opened the door to the virtualization market for Red Hat, giving birth to the Red Hat Enterprise Virtualization offering. Combining these two offerings on the same cluster unites computation and storage on commodity hardware. Note also that in 2012 Red Hat acquired FuseSource, specializing in integration and messaging, and Polymita, specialized in Business Process Management (BPM).
If you consider architectures unifying compute and storage on a single platform of commodity hardware, then we are very close to what Hadoop is. Moreover, since version 3.3 beta, GlusterFS is a Hadoop-compatible storage layer that can replace HDFS.
The RH Storage offering is structured around the following areas: knowledge base, customer portal and forums, hardware and software certifications, software assurance, global support service, stability and a lifecycle of up to 10 years, as well as updates.
The cloud is now well under way; here are the main axes:
- virtualization: 48% of applications run in virtual machines - Gartner
- commoditization: x86 servers and risk reduction around mainframes, but storage remains expensive and proprietary - ESG
- cloudification: a growing tendency to look at the cloud, weighing the advantages and disadvantages of using it at least partially, without a technological break between one's own datacenter and the one in the cloud - IDG
- explosion: processed data volumes are growing exponentially - Gartner
Its design is driven by high-performance and high-availability objectives. It has no single point of failure: each piece of data is redundant across multiple disks, and the system itself can be replicated to another datacenter.
- Unified file service with a global mount point (global namespace translation) regardless of the number of components
- Metadata distributed among all the components of the cluster according to a hashing algorithm, hence no single point of failure
- Virtualized file system: disks can be formatted with different file systems
- Horizontal scalability by adding nodes (from 2 to 64)
- Scales to several petabytes, with more anticipated in the future
- Interconnection over 1 GbE; InfiniBand 10 Gb/s (SDR, Single Data Rate) being validated
- Real-time data replication, and geo-replication in asynchronous mode over LAN, WAN, or the Internet, with the possibility of multi-site cascading
RH Storage is a software layer that is not tied to the kernel or the system on which it runs. It installs on Red Hat Enterprise Linux on x86 hardware that the client owns. It is designed in a fully distributed and redundant way. Moreover, Gluster, from which it comes, is a mature product.
The system can be set up on a cloud such as Amazon's or on a local cluster. The offering can be embedded in a virtualized system on Red Hat Enterprise Virtualization and will soon be supported on VMware vSphere.
It is designed to handle from 2 to 64 nodes, each node being a standard server available on the market at low cost. For example, as of the date of this article, the recommendation would point to 2-socket servers with 4 to 6 cores each and about 32 GB of memory, or 48 GB for HPC. In the RH Storage offering, disks must be formatted in XFS, although other file systems can be used underneath by community GlusterFS. In this sense the file system is considered virtualized: it is positioned one level above the file system managing the disk.
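As an illustrative sketch of that brick-on-XFS layout (the device name, mount point, and inode-size option are assumptions on my part, not from the presentation), preparing a brick on a node could look like this:

```shell
# Hypothetical brick preparation on one node; /dev/sdb1 is a placeholder device.
mkfs.xfs -i size=512 /dev/sdb1                   # format the partition as XFS
mkdir -p /export/brick1                          # directory that will host the brick
mount /dev/sdb1 /export/brick1                   # mount the XFS file system there
echo "/dev/sdb1 /export/brick1 xfs defaults 0 0" >> /etc/fstab   # persist across reboots
```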
The use cases of such an infrastructure may for example be data archiving enriched by an object-file environment, or a high-performance computing (HPC) environment capable of addressing its needs with InfiniBand.
The system is managed from a command line or through an easy-to-use web interface, which however only partially implements all the tools.
The offering is subscribed to in multiples of 2 nodes.
Heterogeneous storage, or tiering, is not yet present (short of adding or creating a manual “translator”), but the idea is making its way.
The next developments are expected around March 2013. Among them: volume snapshotting, full RHS-C (console) support, NFSv4 support, and SMB 2.1 support.
The simplest approach is to use an NFS mount, but this involves querying the cluster to run the hash algorithm that locates the data. For performance, it is advisable to use the Gluster client, which is smarter because it computes the hash locally.
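To make the two access paths concrete, here is a hedged sketch of both mount commands; the hostname, volume name, and mount point are placeholders:

```shell
# NFS mount: the contacted server resolves the hash and proxies the I/O.
mount -t nfs -o vers=3 node1.example.com:/data1 /mnt/data1

# Native Gluster client: computes the hash locally and talks to the right brick directly.
mount -t glusterfs node1.example.com:/data1 /mnt/data1
```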
- brick: a directory mounted on the node in XFS format (disks grouped in RAID)
- client: Gluster, CIFS, or NFS
- server: host of bricks
- subvolume: a set of bricks formatted for a volume
- volume: the whole, presented as a single mount point
- Distributed: homogeneous and fair distribution
- Replicated: continuity of service through a virtual address if a node fails or leaves for maintenance
- Striped: uses different bricks on different nodes by cutting files into pieces (to multiply concurrent accesses to the same large file), similar to the notion of *chunk* in Hadoop HDFS
- Mixed: distributed striped, distributed replicated, …
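The volume types above can be sketched with the `gluster` command line; node names and brick paths are placeholders, and the syntax follows the GlusterFS 3.x CLI of that era:

```shell
# Distributed (default): files are spread across bricks by hash.
gluster volume create data-dist \
  node1:/export/brick1 node2:/export/brick1

# Replicated: every file is written to both bricks.
gluster volume create data-repl replica 2 \
  node1:/export/brick1 node2:/export/brick1

# Striped: each file is cut into chunks spread over the bricks.
gluster volume create data-stripe stripe 2 \
  node1:/export/brick1 node2:/export/brick1

# A volume must then be started before clients can mount it.
gluster volume start data-repl
```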
Note, the notes in this section remain to be cleaned up.
On Amazon, Red Hat provides a ready-made image, which can of course also be recreated manually:
- 1 console node: m1.medium image, 4 GB RAM (JBoss consumes a lot), Red Hat Enterprise Linux 6.3
- 4 Gluster nodes: m1.small image, 1.7 GB RAM, Red Hat Enterprise Linux 6.2

In production mode, use Elastic IPs and point name resolution at them; when mounting [NFS](http://en.wikipedia.org/wiki/Network_File_System), use the floating-IP mechanism to avoid a single point of failure on addressing.
The console is organized as a tree:
- system → clusters → Europe → servers (node1, node2), volumes (data1)
- system → clusters → US → servers, volumes
The volume data1 is replicated on bricks hosted on nodes 1 and 2 respectively (e.g. n2_b1 on node 2); the US cluster is architected in the same way.
Mounted over NFS, the volume appears as node1.eu.gluster.toto.fr:/data1 with options (rw, vers=3, addr=10.208.23.48). This volume has the size of each brick (10 GB in this example). The bricks are created manually beforehand on the command line; the GUI is still in beta and does not yet allow all visualizations and operations. In Amazon, for each instance you have to go into Volumes, create a new volume (e.g. 10 TB or 50 GB) and attach it to the created instance; after connecting to the node, the disk is available, we create a PV (physical volume) then an LV (logical volume), and finally we create a brick.
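The manual PV/LV/brick steps described above could be sketched as follows on a node with a freshly attached EBS disk (the device name /dev/xvdf, the volume-group and brick names are assumptions):

```shell
# Hypothetical walk-through of turning an attached EBS disk into a brick.
pvcreate /dev/xvdf                              # declare the new disk as a physical volume
vgcreate vg_bricks /dev/xvdf                    # group it into a volume group
lvcreate -n lv_brick1 -l 100%FREE vg_bricks     # carve out a logical volume
mkfs.xfs -i size=512 /dev/vg_bricks/lv_brick1   # format it as XFS for the brick
mkdir -p /export/brick1
mount /dev/vg_bricks/lv_brick1 /export/brick1   # this directory becomes the brick
```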
Goals: the cloud, virtualization (including VMware), and the Big Data aspect, notably Hadoop.
Gluster has been developed in its current version to work in conjunction with Hadoop. But the most interesting part is not its ability to interface with Hadoop; it is its ability to slot in as a storage module inside Hadoop, replacing the native HDFS. In addition (though here I will need further clarification before drawing conclusions), I was told that RH Storage natively ships MapReduce components.
Performance would be similar to that of HDFS; note that HDFS's limitation is not its performance. What an alternative like Gluster can additionally bring is POSIX (Portable Operating System Interface) compliance, homogeneous and fair distribution of data without centralized addressing, geo-replication, and further simplification of using Hadoop in a virtualized environment.
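As a hedged sketch of how that replacement is wired in: the glusterfs-hadoop plugin of that era registered a Hadoop FileSystem implementation in core-site.xml. The property names and values below are my assumption of the general shape and should be checked against the plugin's own documentation:

```xml
<!-- core-site.xml fragment: pointing Hadoop at GlusterFS instead of HDFS
     (names to be verified against the glusterfs-hadoop plugin docs) -->
<property>
  <name>fs.glusterfs.impl</name>
  <value>org.apache.hadoop.fs.glusterfs.GlusterFileSystem</value>
</property>
<property>
  <name>fs.default.name</name>
  <value>glusterfs://node1:9000</value>
</property>
```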