Saturday, October 30, 2010

Ubuntu 10.10: Streamlined RAID 0 Installation

I had previously installed Ubuntu 10.04 on a two-drive RAID 0 array.  I did that to make a Windows XP guest virtual machine (VM) run faster in VMware Workstation 7.1.  I had then run into some problems with that installation, and had abandoned it.  Now it was time to try again, but this time with Ubuntu 10.10.  This post describes the process in more streamlined terms, drawing from the previous post in which I logged the details of that earlier attempt.

This time, as before, I had two hard drives for the RAID 0 array, plus a third drive on which I had already installed Windows XP. The two empty drives for RAID were each 320GB. The third drive also held my /home partition (i.e., the contents of the /home partition from a previous Ubuntu installation), which contained many of my settings and adjustments for various Ubuntu programs. In other words, my Ubuntu installation would not be like a Windows XP installation, where it would be necessary to reinstall all of my applications (except the portable ones) after reinstalling the operating system. The third drive also held my Linux swap space (which I probably could have put into the array instead) and a partition I called LOCAL, which would hold backup copies of the VMware virtual machines. I was going to put the active VMs into the RAID 0 setup to make them run faster, but of course RAID 0 was riskier in the sense that failure of either of the two hard drives would mean the loss of everything in the array.

I started by downloading and burning the Ubuntu 10.10 alternate (or "alternative") CD. I booted that CD and chose the "Check disc for defects" option. This took five or ten minutes, reported "Integrity test successful," and rebooted. I then went through the "Install Ubuntu" option and took the basic steps (selecting my country, my keyboard type, etc.). The meat of the RAID 0 process began about three minutes into a video by amzertech that I was following (speaking, here, of its Part 1, not Part 2), where it was time to partition the drives. I went to Manual (i.e., not Guided), and this put me into the main "Partition disks" screen, the one beginning with "This is an overview." I went down to the first of the two hard drives. The partitioner referred to the drives as SCSI devices, but it also recognized them as being sdb and sdc. So it looked like I had correctly cabled that third drive to actually be the first in the system (i.e., sda, a/k/a SCSI3 according to the partitioner), so as to make Windows happy.
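
For anyone who wants to double-check which physical drive will come up as sda, sdb, or sdc before partitioning, a couple of standard commands will list the drives by size, model, and serial number. This is only a hedged aside (run from a live session or a rescue shell), not a step the alternate installer requires:

# List all detected disks and their partition tables; the reported sizes
# and model strings identify each physical drive.
sudo fdisk -l

# The by-id symlinks map each drive's model and serial number to its sdX name.
ls -l /dev/disk/by-id/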

The general concept of the RAID setup process was that, first, you designate some free space on each drive as a physical volume for RAID, and then you combine those physical volumes from the two (or more) drives into a single software RAID device.  The following paragraphs provide the details.
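
For reference, the installer's menus do roughly what the standard command-line tools would do by hand. The following is only a rough sketch of that equivalent, assuming the layout described below (a "physical volume for RAID" partition on each drive, showing up as sdb1 and sdc1, combined into an ext3-formatted array); it is not something to type during this installation:

# Combine one RAID partition from each drive into a single striped (RAID 0) device.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Put a filesystem on the array, as the installer does under "Use as: ext3."
sudo mkfs.ext3 /dev/md0

# Confirm that the array is running and striped across both partitions.
cat /proc/mdstat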

First, following the video, I went down to the first of the empty 320GB drives that I was going to use for my RAID array.  In my case, unlike the video, there was not yet any "pri/log" line showing "FREE SPACE" that I could select under the drive identification line on the screen, so I just highlighted the drive itself and hit Enter.  This gave me the option of creating a new empty partition table on the drive, and I went with that for each of the two drives.  Then I highlighted the pri/log line under the first 320GB drive, showing free space.  There, I hit Enter and chose "Create a new partition."  For its size, I typed "50GB" and made it a primary partition at the end of the drive.  I guessed that this meant the outside of the physical disc, where I believed data transfers would be faster.  Instead of leaving "Use as" at the default ext4 setting, I highlighted that line, hit Enter, went down to select "physical volume for RAID," and then chose "Done setting up the partition."  I went through the same steps with the second 320GB drive, which was sdc on my system.  So now, back on the "Partition disks" screen, each of the two drives showed an entry that looked like this:

#1   primary   50.0GB   K   raid
So this would give me a total of 100GB for my Ubuntu program installation, and I would still have several hundred GB left over as free space.  Now, on the main "overview" screen, I went up to the line that said "Configure software RAID" > "Write the changes to the storage devices" > "Create MD device" > RAID0.  This put me at a list of "active devices."  I wanted sdb1 and sdc1 (i.e., I didn't want to use one of the partitions I had previously created on sda, my third hard drive).  These were the only partitions on drives sdb and sdc, so the choice was easy.  For some reason, they showed up here as being 49999MB rather than 50GB.  I highlighted each of those two, hit the spacebar to select them, and then tabbed to Continue > Finish.  This put me back in the "overview" screen, where I saw that I now had these new lines, near the top:
RAID0 device #0 - 100.0 GB Software RAID device
   #1       100.0 GB
              131.1 kB         unusable
I highlighted the line that began with #1 and hit Enter > Use as > ext3 (apparently still more reliable than ext4) > Mount point > "/ - the root file system" > "Done setting up the partition."  This put me back in the "overview" screen, where the line now looked like this:
   #1       100.0 GB     f   ext3      /
I decided to go ahead with the video's approach of putting the swap space on the RAID0 partition.  To do this, I went through the same steps as above, starting with the free space line on each of the two drives.  The only differences were:
(a) I allocated only 5GB on each drive for this partition.
(b) This time, I selected sdb2 and sdc2 (instead of sdb1 and sdc1) as my active devices for the array.
(c) Under "Use as," I chose "swap area" instead of ext3.
The result, back in the "overview" screen, was that I had these lines:
RAID0 device #0 - 100.0 GB Linux Software RAID Array
   #1       100.0 GB    F   ext3      /
RAID0 device #1 - 10.0 GB Linux Software RAID Array
   #1         10.0 GB     f  swap     swap
              131.1 kB unusable
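
Done by hand rather than through the installer, the striped swap would look something like this sketch (assuming the second array comes up as /dev/md1, as it did on my system):

# Format the second RAID0 device as swap and enable it.
sudo mkswap /dev/md1
sudo swapon /dev/md1

# Verify that the striped swap is active.
cat /proc/swaps
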
At this point, I wanted to vary from the video by adding one more partition, where I would put my VMs and possibly other things.  I went through the same process as with the first RAID device (above), and I used all of the remaining space on the two drives except for about 1GB.  The active devices in this case (when I got to that point in the process) were, of course, sdb3 and sdc3.  Back in the "overview" screen, I saw that I now had RAID0 device #2 of 528GB.  I would never need all of that space for my VMs, but I had no other use for the space, and this RAID setup process was a one-shot deal:  designate the space in some useful form now, or leave it forever unallocated.
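
That large third array would still need a filesystem, a mount point, and an fstab entry once Ubuntu was running. A hedged sketch, assuming the device comes up as /dev/md2 and using /media/RAIDSPACE as the mount point (the name mentioned in the comment at the end of this post):

# Format the third array and give it a permanent mount point
# (skip the mkfs step if the installer already put a filesystem on it).
sudo mkfs.ext3 /dev/md2
sudo mkdir -p /media/RAIDSPACE

# An /etc/fstab line along these lines mounts it at boot; a UUID from
# "sudo blkid /dev/md2" can be used in place of the device name.
#   /dev/md2   /media/RAIDSPACE   ext3   defaults   0   2
sudo mount /media/RAIDSPACE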

So now, the final step.  I needed to create a /boot partition on just one drive, since the boot loader apparently could not read a kernel striped across a RAID0 array.  That was why I needed to save 1GB.  I could have made one of those last active devices (either sdb3 or sdc3) larger than the other, but there was no point:  as I understood it, RAID0 would use only the amount of space that they both had in common.  So I would wind up with 1GB unused on one of the two drives.  Anyway, to create the /boot partition, I selected that remaining free space on sdb (i.e., the first of my two RAID drives) and used it all up on another ext3 partition.  This time, I chose ext3 without first choosing "physical volume for RAID"; and after choosing ext3, I didn't go right to "Done setting up the partition."  Instead, I stopped first at the "Mount point" option, where I chose "/boot."  Back in the "overview" screen, I saw that I now had my three RAID0 devices at the top of the list, and the /boot device as sdb4 down under the first of my two 320GB drives.
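
For what it's worth, the finished /etc/fstab should end up with entries along these general lines.  Ubuntu actually writes UUID= identifiers rather than device names, so this is only the shape of it:

# /etc/fstab (sketch; the real file uses UUID=... identifiers)
/dev/md0    /       ext3    errors=remount-ro   0   1
/dev/sdb4   /boot   ext3    defaults            0   2
/dev/md1    none    swap    sw                  0   0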

In the "overview" screen, there was now too much information to fit; some had scrolled off the bottom of the screen.  I arrowed down until I got to the very bottom of all that, where I chose "Finish partitioning and write changes to disk" > Yes.  This started me right into the Ubuntu installation process, where I just entered basic information (e.g., my name).  The installation was very straightforward, and it worked:  Ubuntu booted up.  I then went to System > Administration > System Monitor > File Systems tab.  There, I saw /dev/md0 as the root directory and /dev/sdb4 as /boot.  (The video said that the swap would not be visible here, and it wasn't.)  My next step was to refine the basic installation to suit my preferences.  The description of that process appears in a separate post.
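
A few terminal commands show the same information as System Monitor, for anyone who prefers to check from a shell (again, just a hedged sketch):

# Detailed status of the root array: RAID level, chunk size, member partitions.
sudo mdadm --detail /dev/md0

# Mounted filesystems; /dev/md0 should appear as / and /dev/sdb4 as /boot.
df -h

# The swap on /dev/md1, which the File Systems tab does not list.
swapon -s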

1 comment:

raywood

I may have missed a step in the RAID creation process. For that last large RAID0 space, I created /media/RAIDSPACE, and I set fstab to mount to that location; but in Nautilus (as user, not as root), it showed up as "528 GB Filesystem." I looked at e2label and pysdm as renaming tools, but they did not seem to fit the need. GParted was another possibility, but a scary one.
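
One possible fix, untested here: put a volume label on the ext3 filesystem itself, since Nautilus generally shows that label in place of the generic size description.  A rough sketch, assuming the big array is /dev/md2:

# Label the ext3 filesystem on the large array; after a remount,
# Nautilus should show the label instead of "528 GB Filesystem".
sudo e2label /dev/md2 RAIDSPACE

# Confirm the label.
sudo e2label /dev/md2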