Monday, September 27, 2010

Dual-Boot RAID 0: Ubuntu 10.04 and Windows XP

I wanted to set up a SATA RAID 0 array that would function like any other dual-boot system:  I would turn on the computer; it would do its initial self-check; I would see a GRUB menu; and I would choose to go into either Windows XP or Ubuntu 10.04 from there.  This post describes the process of setting up that array.

With no drives other than my two identical, unformatted SATA drives connected, I turned on the computer.  The BIOS for my Gigabyte motherboard did not give me the obvious RAID configuration option I had hoped for.  I hit DEL to go into BIOS setup.  Nothing jumped out at me.  Desperate for guidance, I turned to the manual.  I was looking at an Award Software CMOS Setup Utility.  The manual directed me to its Integrated Peripherals section.  There, I set OnChip SATA Controller to Enabled, OnChip SATA Type to RAID, and OnChip SATA Port4/5 Type to As SATA Type.  I hit F10 to save and exit.

According to the manual, that little maneuver was supposed to give me an option, after the initial boot screen, to hit Ctrl-F and go into the RAID configuration utility.  Instead, the next thing I got was this:

Press [Space] key to skip, or other key to continue...
I didn't do anything.  It scanned my drives and then led on to the Ctrl-F option.  I rebooted and tried it again.  Hitting the space key led to the same result.  Ctrl-F opened the AMD FastBuild Utility.  I hit option 2 to define an array.  This gave me a list of my two drives, labeled as LD 1 and LD 2.  Apparently the list wasn't supposed to show anything at this point.  LD was short for "logical disk set."  It was essentially showing two separate arrays, each having one drive.  So although the manual didn't say so, it seemed that I needed to get out of here and go into option 3 to delete these arrays.  I did that and then went back into option 2.  Now I was looking at a blank list of LDs, just like in the manual.

So now I was ready to prepare my array.  In option 2, I hit Enter to select LD 1.  This defaulted to RAID 0 with zero drives assigned to it.  I arrowed down to the Assignment area and put Y next to each of the two drives listed.  Now it said there were two drives assigned.  But now I had a few things to research.  The screen was giving me options for Stripe Block, Fast Initialize, Gigabyte Boundary, and Cache Mode.  The manual didn't say what these were.

I did a search for information on the Stripe Block size.  I found an old AnandTech article that took the approach of choosing the lowest stripe size where performance tended to level out -- where, that is, increasing the stripe size another notch did not increase performance.  For the RAID controllers they were testing, it looked like performance kept increasing right up to the range of 256KB to 512KB, for those controllers whose options went that high.  Mine only gave me a choice between 64KB and 128KB, so I chose the latter.  A more recent discussion thread seemed to support that decision.
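
To get a concrete feel for what the stripe size actually does, I worked out a rough Python sketch -- my own illustration, not anything the controller runs -- of how a single request would be split across a two-drive RAID 0 array.  With 128KB stripes, a small read stays on one drive, while a big sequential read still spreads evenly across both:

    # Rough sketch (mine, not anything the controller actually runs) of how a
    # request of a given size and starting offset is split across the members
    # of a two-drive RAID 0 array for a given stripe size.
    def split_request(offset, length, stripe=128 * 1024, drives=2):
        """Yield (drive_index, offset_on_drive, chunk_length) pieces of one request."""
        remaining = length
        while remaining > 0:
            stripe_index = offset // stripe          # which stripe this byte falls in
            drive = stripe_index % drives            # stripes alternate across the drives
            within = offset % stripe                 # position inside that stripe
            chunk = min(stripe - within, remaining)  # stay inside the current stripe
            row = stripe_index // drives             # which stripe "row" on that drive
            yield drive, row * stripe + within, chunk
            offset += chunk
            remaining -= chunk

    # A 64KB read touches only one drive with 128KB stripes...
    print(list(split_request(0, 64 * 1024)))
    # ...while a 1MB sequential read still spreads evenly across both.
    print(sorted(set(d for d, _, _ in split_request(0, 1024 * 1024))))

On that logic, going from 64KB to 128KB mainly means that more small and medium-sized requests land on a single drive, leaving the other drive free to work in parallel, while large transfers still use both.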

Regarding the "Fast Init" option, a search led to some advice saying that slow initialize would take longer but would improve reliability.  A different webpage clarified that the difference was that slow initialize would physically check the disk and would be suitable if you had had trouble with the disk or if you suspected it had bad blocks.  I decided to stay with the default, which was Fast Init ON.
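
As I understood it, a slow initialize amounts to touching every sector so that problems surface up front.  Here is a read-only Python sketch of that general idea -- scanning a drive or, more safely, a disk image in chunks and recording which chunks could not be read.  It is only an illustration of the concept, not a substitute for a real surface-scan tool:

    # A read-only pass over a drive or disk image, in the spirit of a slow
    # initialize:  read everything in chunks and note which chunks fail.
    # An illustration of the idea only, not a replacement for a real tool.
    import sys

    def surface_scan(path, chunk=1024 * 1024):
        bad = []
        with open(path, "rb", buffering=0) as dev:
            offset = 0
            while True:
                try:
                    data = dev.read(chunk)
                    if not data:              # end of the device or image
                        break
                except OSError:
                    bad.append(offset)        # note the unreadable region
                    dev.seek(offset + chunk)  # and skip past it
                offset = dev.tell()           # wherever we actually are now
        return bad

    if __name__ == "__main__":
        # e.g.:  python surface_scan.py disk.img   (or a /dev device, as root)
        print(surface_scan(sys.argv[1]))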

The "Gigabyte Boundary" option would reportedly make the larger of two slightly mismatched drives in an array behave as though it were the same size as the smaller one.  The concept appeared to be that, if you were backing up one drive with another (which was not the case with a RAID 0 array), you would use this so that the larger drive would never contain more data than the smaller drive could copy.  Mine was set to ON by default.  I couldn't quite understand why anyone would need to turn it off, even if the drives were the same size.
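
Here is my rough arithmetic understanding of that option, sketched in Python with made-up drive sizes:  round each member down to a whole gigabyte, and then take the array's usable capacity as the smallest member times the number of members.

    # Made-up sizes for two nominally identical 500GB drives that differ by
    # about a megabyte; this is only my reading of what the option does.
    GB = 1000 ** 3  # drive makers count decimal gigabytes

    def usable_raid0_bytes(member_sizes, gigabyte_boundary=True):
        """Usable RAID 0 capacity:  the smallest member times the member count."""
        if gigabyte_boundary:
            # round every member down to a whole gigabyte, hiding small mismatches
            member_sizes = [(size // GB) * GB for size in member_sizes]
        return min(member_sizes) * len(member_sizes)

    drives = [500107862016, 500106780160]
    print(usable_raid0_bytes(drives, gigabyte_boundary=True) / GB)   # 1000.0
    print(usable_raid0_bytes(drives, gigabyte_boundary=False) / GB)  # about 1000.2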

Finally, the "Cache Mode" option was apparently capable of offering different choices (e.g., write-back), but mine was fixed at WriteThru with no other options available.  So I thought about it a long time and then decided this was acceptable to me.  I hit Ctrl-Y to save these settings.  Now I was back at the Define LD Menu, but this time it showed a RAID 0 array with two drives and Functional status.  That seemed to be all I could do there, so I exited that menu.  I poked around the other options on the Main Menu, but I seemed to be done with the FastBuild Utility.

Next, the manual wanted me to use a floppy disk to install the SATA RAID driver.  I could have just gone ahead and done that -- I still had a floppy drive and some blank diskettes -- but I thought surely there must be a better way by now.  Apparently there was:  use Vista instead of WinXP.  But if you were determined to use XP, as I was, the choices seemed to be either to go through a complex slipstreaming process or to use the floppy.

There was, however, another option.  I could buy a RAID controller card, for as little as $30 or as much as $1,000+, and it might come with SATA drivers on CD.  This raised the question of whether the RAID cards actually had some advantage beyond their included CD.  My brief investigation suggested that a dedicated RAID card could handle the processing task, taking that load off the CPU, but that there wasn't much of a processing task in the case of RAID 0.  In other words, for my purposes, a RAID controller card wouldn't likely add any performance improvement.  Someone said it could even impair performance if it was a regular PCI card (as distinct from, e.g., PCIe) or if its onboard processor was slower than the computer's main CPU.  There did seem to be a portability advantage, though:  in at least some cases, moving the array to a different motherboard would require re-creating it, but bringing along the controller card would eliminate that need.  The flip side was that the card might fail first, taking the array with it.
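
The reason RAID 0 involves so little processing, as I understood it, is that writing a stripe is pure placement, with no parity to compute.  A toy Python comparison against RAID 5 -- my own illustration, not how any real controller is built -- makes the point:

    # Toy comparison (mine, not how any real controller is built):  a RAID 0
    # stripe write is pure placement, while RAID 5 must XOR the data blocks
    # to produce a parity block before anything reaches the drives.
    def raid0_stripe_write(blocks):
        return {drive: block for drive, block in enumerate(blocks)}

    def raid5_parity(blocks):
        parity = bytes(len(blocks[0]))
        for block in blocks:
            parity = bytes(a ^ b for a, b in zip(parity, block))
        return parity  # this per-stripe XOR is the work a dedicated card would offload

    blocks = [bytes([1]) * 4096, bytes([2]) * 4096]
    print(raid0_stripe_write(blocks).keys())  # dict_keys([0, 1]) -- no math involved
    print(raid5_parity(blocks)[:4])           # b'\x03\x03\x03\x03'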

Further reading led to the distinction between hardware and software RAID.  An older article made me think that the essential difference (since both kinds involve hardware and software) was that software RAID would be handled by the operating system and would run on the CPU, and would therefore be dependent upon that operating system -- raising the question of whether dual-booting would even be possible with a software RAID array, as a generally informative Wikipedia article suggested.  To get more specific, I looked at the manual for a popular motherboard, the Gigabyte GA-MA785GM-US2H.  That unit's onboard RAID controller, plainly enough, was like mine:  it depended upon the operating system.  Wikipedia said that cheap controller cards provide a "fake RAID" service of handling early-stage bootup, without an onboard processor to take any of the load off the CPU.  FakeRAID seemed to get mixed reviews.
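
For what it's worth, here is a rough Python sketch of how that distinction tends to look from inside a running Linux system, assuming the usual arrangement in which mdadm software arrays are listed in /proc/mdstat and dmraid exposes fakeRAID sets as device-mapper nodes under /dev/mapper.  It's only a peek, not a diagnostic tool:

    # Assumes the usual layout:  mdadm software arrays appear in /proc/mdstat,
    # and dmraid exposes fakeRAID sets as device-mapper nodes in /dev/mapper
    # (which also lists LVM volumes, so this is only a rough indication).
    import os

    def describe_raid():
        try:
            with open("/proc/mdstat") as f:
                mdstat = f.read()
        except OSError:
            mdstat = ""
        md_arrays = [line.split()[0] for line in mdstat.splitlines()
                     if line.startswith("md")]
        try:
            mapper_nodes = [n for n in os.listdir("/dev/mapper") if n != "control"]
        except OSError:
            mapper_nodes = []
        return {"mdadm_arrays": md_arrays, "device_mapper_nodes": mapper_nodes}

    print(describe_raid())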

An alternative, hinted at in one or two things I read, was simply to set up the RAID 0 array for the operating system in need of speed, and install a separate hard drive for the other operating system.  I was interested in speeding up Linux, so that would be the one that would get the RAID array.  I rarely ran Windows on that machine, so any hard drive would do.  A look into older, smaller, and otherwise seemingly less costly drives led to the conclusion that I should pretty much expect to get a 300GB+ hard drive, at a new price of around $45.  Since I was planning to use Windows infrequently on that machine, it was also possible that I could come up with some kind of WinXP on USB solution, and just boot the machine with a flash drive whenever I needed access to the Windows installation there.

I decided that, for the time being, I would focus on just setting up the Ubuntu part of this dual-boot system, and would decide what to do about the Windows part after that.  I have described the Ubuntu RAID 0 setup in another post.
