Implement Ceph backend for Glance / Cinder / Nova

The new lib installs a full Ceph cluster that can be managed
by the service init scripts. Ceph can also be installed
standalone, without any other components.
This implementation adds auto-configuration of
the following services with Ceph:

* Glance
* Cinder
* Cinder backup
* Nova

To enable Ceph, simply add ENABLED_SERVICES+=,ceph to your localrc.
If you want to play with Ceph replication, use the CEPH_REPLICAS
option to set a replica count. This count will be used for every
pool (Glance, Cinder, Cinder backup and Nova). The size of the
loopback disk used for Ceph can also be controlled with the
CEPH_LOOPBACK_DISK_SIZE option.
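
For example, a minimal localrc enabling Ceph might look like the
following (the replica count and disk size values are illustrative):

    ENABLED_SERVICES+=,ceph
    CEPH_REPLICAS=2
    CEPH_LOOPBACK_DISK_SIZE=8G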

Going further, pools, users and PGs are configurable as well. The
naming convention is <SERVICE_NAME_IN_CAPITALS>_CEPH_<OPTION>, where
the services are GLANCE, CINDER, NOVA and CINDER_BAK. Taking Cinder
as an example, the following options are available (a sample override
follows the list):

* CINDER_CEPH_POOL
* CINDER_CEPH_USER
* CINDER_CEPH_POOL_PG
* CINDER_CEPH_POOL_PGP
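
As a sketch, these could be overridden in localrc like so (the pool
name, user and PG numbers below are illustrative values, not
documented defaults):

    CINDER_CEPH_POOL=volumes
    CINDER_CEPH_USER=cinder
    CINDER_CEPH_POOL_PG=8
    CINDER_CEPH_POOL_PGP=8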

** Only works on Ubuntu Trusty, Fedora 19/20 or later **

Change-Id: Ifec850ba8e1e5263234ef428669150c76cfdb6ad
Implements: blueprint implement-ceph-backend
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
diff --git a/functions b/functions
index ca8ef80..cd9e078 100644
--- a/functions
+++ b/functions
@@ -546,6 +546,40 @@
     }
 fi
 
+
+# create_disk - Create backing disk
+function create_disk {
+    local disk_image=${1}
+    local storage_data_dir=${2}
+    local loopback_disk_size=${3}
+
+    # Create a loopback disk and format it to XFS.
+    if [[ -e ${disk_image} ]]; then
+        if egrep -q ${storage_data_dir} /proc/mounts; then
+            sudo umount ${storage_data_dir}
+            sudo rm -f ${disk_image}
+        fi
+    fi
+
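+    # Ensure the storage data directory tree exists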
+    sudo mkdir -p ${storage_data_dir}/drives/images
+
+    sudo truncate -s ${loopback_disk_size} ${disk_image}
+
+    # Make a fresh XFS filesystem. Use bigger inodes so the xattrs can fit
+    # in a single inode. Keeping the default inode size (256) would result
+    # in multiple inodes being used to store the xattrs, and retrieving them
+    # would be slower since multiple inodes must be read. This applies to
+    # both Swift and Ceph.
+    sudo mkfs.xfs -f -i size=1024 ${disk_image}
+
+    # Mount the disk with mount options to make it as efficient as possible
+    if ! egrep -q ${storage_data_dir} /proc/mounts; then
+        sudo mount -t xfs -o loop,noatime,nodiratime,nobarrier,logbufs=8 \
+            ${disk_image} ${storage_data_dir}
+    fi
+}
+
 # Restore xtrace
 $XTRACE
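
For reference, a hypothetical caller wiring the options above into
create_disk could look like this (CEPH_DISK_IMAGE and CEPH_DATA_DIR
are assumed variable names, not defined in this hunk):

    # Create and mount the loopback XFS disk backing the Ceph cluster
    create_disk ${CEPH_DISK_IMAGE} ${CEPH_DATA_DIR} ${CEPH_LOOPBACK_DISK_SIZE}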