If you’ve read the other blog posts on ScaleIO, you might be interested in running it yourself. You may not have your own hardware lab to run it on, but you do have a laptop or desktop, right? Awesome! That’s all you need, and we’ll go through how to get it up and running using some really smart tools.
If you just want to see how it runs without installing anything, here’s the entire automated setup captured in asciinema, one of my new favourite tools:
The first tool we’ll use is VirtualBox, a freely available and open-source virtualization solution (yes, no money needed to get it, but please contribute to the development!) for Windows, OS X, Linux and Solaris. Download it, install it, and that’s it — no configuration needed unless you want to change any of the defaults we’ll be working with. It’s a really good virtualization solution and I’ve been using it for years alongside my VMware Workstation and VMware Fusion installations.
Next up is Vagrant, an awesome tool for automating the creation and configuration of VMs running in VirtualBox, VMware Fusion, AWS and others. It runs on Windows, OS X and Linux as well, so no matter how you spell your favourite OS you’ll be able to use it. Download it, install it and you’re ready to go. No configuration needed there either, as all the settings we’ll use with Vagrant live in a so-called Vagrantfile.
If you want to try Vagrant and VirtualBox before we get to the ScaleIO deployment, you can create a folder called “vagrant”, open your terminal/command window into that folder, and run the following commands to install and start a recent Ubuntu distribution automatically:
vagrant box add saucy http://cloud-images.ubuntu.com/vagrant/saucy/current/saucy-server-cloudimg-amd64-vagrant-disk1.box
vagrant init saucy
vagrant up
Vagrant will now download a “Cloud Image” of Ubuntu 13.10, initialize the Vagrant directory with a Vagrantfile and start the VM. After it’s booted, you can ssh to it with the following command:
vagrant ssh
That’s it! Now you have a fully installed Ubuntu 13.10 VM where you can do whatever you want, and all you’ve done is issue a few commands.
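When you’re done poking around, you can stop or remove the test VM before we move on. Both are standard Vagrant commands, run from the same “vagrant” folder:

vagrant halt       # shuts the VM down but keeps it on disk
vagrant destroy    # removes the VM entirely (asks for confirmation first)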
Ok, now that you’ve become somewhat comfortable with VirtualBox and Vagrant, let’s move on to the ScaleIO lab setup. All you need for this is the ScaleIO installation package, which you can find on support.emc.com. Unpack it and you’ll find a folder called CentOS_6.X, and in there a file called “ECS-1.20-0.357.el6-install”, which is the most recent version at the time of this writing.
Create a new directory called “scaleio” somewhere on your computer and copy the installation file there. As you saw in the example above, you will also need a Vagrantfile to actually get your VMs up and running, and rather than making you figure that out by yourself I am providing such a Vagrantfile for your use here. It comes with no warranty, and I’m not responsible if your computer breaks in any way.
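In a terminal that’s just a couple of commands; the source paths below are placeholders for wherever you unpacked the installation package and saved the downloaded Vagrantfile:

mkdir scaleio && cd scaleio
cp /path/to/ECS-1.20-0.357.el6-install .
cp /path/to/Vagrantfile .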
When you have all that, your “scaleio” folder should look like this:
$ ls
ECS-1.20-0.357.el6-install  Vagrantfile
That’s all you need! Crazy, I know. But if you look at the Vagrantfile, you’ll see that we are in fact doing a lot of things in there. First, we define three VMs (three nodes is the minimum for a ScaleIO environment), set static IPs on them, and run a really long shell command on each node that automatically installs and configures ScaleIO, using a truncated 100GB file as the SDS device on each node, and creates an 8GB volume on top (a sketch of the device-file trick follows below). There are no clients defined outside the ScaleIO environment; I’m leaving that as an exercise for you, dear reader.
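To give you an idea of what that provisioning step is doing, here’s a minimal sketch of the device-file trick. The file name is just an example rather than necessarily what the Vagrantfile uses, and the real provisioning also installs the ScaleIO packages and wires up the MDM, TB and SDS components:

# create a sparse ("truncated") 100 GB file to act as the SDS device
truncate -s 100G /home/vagrant/scaleio_device
# the file only consumes real disk space as ScaleIO writes to it
ls -lsh /home/vagrant/scaleio_device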
One thing that needs to be changed in the Vagrantfile is the string YOURLICENSEHERE in the long string of commands at the bottom of the file; add your own ScaleIO license there and you’re done.
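If you’d rather script that edit, something like this does the trick (GNU sed shown; on OS X use sed -i '' instead, and the key below is obviously a placeholder for your real license):

sed -i 's/YOURLICENSEHERE/YOUR-ACTUAL-LICENSE-KEY/' Vagrantfile
grep -c YOURLICENSEHERE Vagrantfile    # should print 0 once your key is in place

With your license in place, run the following command to bring up the entire ScaleIO environment: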
vagrant up
This will take a while, so go grab a coffee and relax. I highly recommend running this on an SSD; if you don’t have one already, isn’t it time you got one? Anyway, once the environment has been set up and is running, you can connect to the first MDM like this:
vagrant ssh mdm1
Then issue this command to verify that the install was completed correctly:
sudo scli --query_all --mdm_ip=192.168.50.10
You should see output similar to this:
[vagrant@mdm1 ~]$ sudo scli --query_all --mdm_ip=192.168.50.10
ScaleIO ECS Version: R1_20.0.357
Customer ID: XXXXXX
Installation ID: XXXXXXXXXXXXX
The system was activated 0 days ago
Rebuild network data copy is unlimited
Rebalance network data copy is unlimited
Query all returned 1 protection domains
Protection domain pdomain has 1 storage pool, 3 SDS nodes, 1 volumes and 112 GB (114688 MB) available for volume allocation
Rebuild/Rebalance parallelism is set to 3
Storage pool Default has 1 volumes and 112 GB (114688 MB) available for volume allocation
SDS Summary:
    3 SDS nodes have Cluster-state UP
    3 SDS nodes have Connection-state CONNECTED
    3 SDS nodes have Remove-state NONE
    3 SDS nodes have Device-state NORMAL
    276.3 GB (283026 MB) total capacity
    229.7 GB (235268 MB) unused capacity
    0 Bytes snapshots capacity
    16 GB (16384 MB) in-use capacity
    16 GB (16384 MB) protected capacity
    0 Bytes failed capacity
    0 Bytes degraded-failed capacity
    0 Bytes degraded-healthy capacity
    0 Bytes active-source-back-rebuild capacity
    0 Bytes pending-source-back-rebuild capacity
    0 Bytes active-destination-back-rebuild capacity
    0 Bytes pending-destination-back-rebuild capacity
    0 Bytes pending-rebalance-moving-in capacity
    0 Bytes pending-fwd-rebuild-moving-in capacity
    0 Bytes pending-moving-in capacity
    0 Bytes active-rebalance-moving-in capacity
    0 Bytes active-fwd-rebuild-moving-in capacity
    0 Bytes active-moving-in capacity
    0 Bytes rebalance-moving-in capacity
    0 Bytes fwd-rebuild-moving-in capacity
    0 Bytes moving-in capacity
    0 Bytes pending-rebalance-moving-out capacity
    0 Bytes pending-fwd-rebuild-moving-out capacity
    0 Bytes pending-moving-out capacity
    0 Bytes active-rebalance-moving-out capacity
    0 Bytes active-fwd-rebuild-moving-out capacity
    0 Bytes active-moving-out capacity
    0 Bytes rebalance-moving-out capacity
    0 Bytes fwd-rebuild-moving-out capacity
    0 Bytes moving-out capacity
    16 GB (16384 MB) at-rest capacity
    8 GB (8192 MB) primary capacity
    8 GB (8192 MB) secondary capacity
    Primary-reads: 0 IOPS 0 Bytes per-second
    Primary-writes: 0 IOPS 0 Bytes per-second
    Secondary-reads: 0 IOPS 0 Bytes per-second
    Secondary-writes: 0 IOPS 0 Bytes per-second
    Backward-rebuild-reads: 0 IOPS 0 Bytes per-second
    Backward-rebuild-writes: 0 IOPS 0 Bytes per-second
    Forward-rebuild-reads: 0 IOPS 0 Bytes per-second
    Forward-rebuild-writes: 0 IOPS 0 Bytes per-second
    Rebalance-reads: 0 IOPS 0 Bytes per-second
    Rebalance-writes: 0 IOPS 0 Bytes per-second
Volume Summary:
    1 volume. Total size: 8 GB (8192 MB)
I would also recommend pointing the ScaleIO dashboard, found on mdm1 and mdm2 at /opt/scaleio/ecs/mdm/bin/dashboard.jar, at your new cluster. Just copy the dashboard.jar file to your desktop, and if you haven’t changed the IP addresses set in the Vagrantfile you can point it to 192.168.50.10 and get the following dashboard image:
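If you’d rather not fish the jar out of the VM manually, the default /vagrant synced folder (which maps back to your “scaleio” project directory on the host) makes it a one-liner, assuming you haven’t disabled that synced folder:

vagrant ssh mdm1 -c "cp /opt/scaleio/ecs/mdm/bin/dashboard.jar /vagrant/"
java -jar dashboard.jar    # run it on your host (needs a Java runtime) and point it at 192.168.50.10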
And there you go, you now have a complete three-node ScaleIO cluster up and running on your own computer, where you can write data, read it back, fail nodes and so on. Play around with it, and please comment on improvements you would like to see; if you’re editing or adding functionality, please let me know. Enjoy!
