Thursday, September 30, 2010

Connecting Network Attached Storage (NAS) to a WinXP Guest in VMware: FAIL

I had two desktop computers running Ubuntu 10.04.  On one of them, I was running VMware Workstation 7.1, with Windows XP SP3 as a guest operating system in a virtual machine (VM).  I had just figured out how to network these two computers using Samba shares within a home network, where the two computers were connected via ethernet cables to a shared router.

Now there was a new question.  Could I add a Synology DS109 network-attached storage (NAS) device (essentially an external hard drive enclosure designed for network backup and file serving) to this network?  Of course I could, in the sense of running an ethernet cable from the Synology to the router; but what I was wondering was whether I could make this work despite the fact that the software for the Synology was available only for Windows and Mac, and not Linux.

It was a question, in other words, of whether I could run the Synology software in Windows XP in a guest VM.  I gave it a whirl.  I ran the Synology installation CD and went through the steps to set up the Synology Server.  This opened the Synology Assistant, a setup wizard; and after a moment, it gave me an error message:

No Synology Server was found on the local network.  Please make sure:

1.  Synology Assistant is not blocked by the firewall of your computer OS or anti-virus applications.

2.  Synology Server and your computer are both connected to the network.

3.  You have switched on the power of Synology Server.
Option 1 was the only one that seemed to explain the situation.  I decided to back up and make sure that I could see a shared folder on the other computer from within Windows.  In my first try, I set up that shared folder on an NTFS partition, and that led to a separate investigation of the difficulties of sharing an NTFS partition in Ubuntu.

That wound up taking longer than expected, so in the meantime I just focused on the link between the Synology and the computer in which I had VMware running.  I noticed that, in Ubuntu's Places > Network, it listed three items:  ANTEC (the name of this computer), Windows Network, and WINXP8 (the name of the computer running in the WinXP VM).  Plainly, Ubuntu was seeing Windows.  Was Windows seeing Ubuntu?  Or did it need to?  A first answer was that, of course, you could go into Windows Explorer > Tools > Map Network Drive and (assuming you had VM > Settings > Options tab > Shared Folders set up) you could gain access to NTFS and ext3 partitions outside of the drive C that existed inside the virtual machine.  These drives would be visible in Windows Explorer > My Network Places > Entire Network > VMware Shared Folders.

I tried running the Synology setup wizard again.  It gave me the same error as before.  I did a search and found webpages describing how to use NAS freeware to turn another computer into a NAS device.  This raised two thoughts.  First, possibly I could use some software other than Synology's CD to make contact with the NAS device.  Second, perhaps I should consider using another computer myself, in lieu of the Synology unit.  I decided to go ahead with the Synology project for now; I could return or sell the device if it really wasn't what I wanted.  I probably could have assembled another computer at equal or lower cost, with far greater potential storage capacity, with more RAID options, with a more powerful processor (e.g., for checksum calculations) if needed, with potentially more choices of software packages and commands for managing and adjusting it, and with more flexible hardware troubleshooting options (i.e., more than just fix it or replace it) in the event of malfunction.  Its drawbacks would include time and expense for software and hardware selection, learning, installation, maintenance, and troubleshooting; physical space requirements; power consumption; and noise and heat generation.

For the time being, I searched Synology's website and found a post raising the thought that perhaps a Windows connection was crucial only for the initial setup of the Synology device.  So I rebooted the computer into Windows XP instead of Ubuntu and ran the Synology setup CD from there.  This time, the wizard found the DiskStation right away.  So, really, I probably could have set the thing up using my laptop.  It seemed to be just a matter of connecting a Windows-based computer to configure the hard drive that I had inserted into the NAS unit.

Following the Quick Installation Guide, I looked for a Browse option in the Synology Assistant, but didn't see one.  Instead, in the Management tab of the Assistant, I double-clicked on the DiskStation entry, and that seemed to be the correct thing to do:  it opened a different Setup Wizard, or maybe a continuation of the same one.  The wizard said, "Please input the path of installation file."  Maybe this was where I was supposed to browse to the .pat file?  Sure enough:  Browse brought up four different .pat files.  I chose the one for the 109 and opted for One-Click Setup.  It warned me that all data in the hard drive would be deleted.  I hoped it meant the hard drive that I had inserted into the NAS unit.  Lights began flashing on the unit.  It went through several steps:  Apply network settings, Format hard drive, Install DSM (DiskStation Manager) to hard drive, and Write configurations.  For my 2TB drive, the whole process took about 20 minutes.

When it was done, it said, "System has been installed successfully."  Then it just sat there.  Now what?  The other programs on the CD's Installation Menu were Data Replicator, in case I wanted to use the unit for backup rather than as a file server, and Download Redirector, for some purpose I didn't fully understand.  For lack of any better ideas, I rebooted into Ubuntu > Places > Network.  The list of places was the same as before.  I tried another search of the Synology website.  The product page for the DS109 definitely said that the unit was "designed for data storage and sharing among Windows, Mac, and Linux."  But how?

I knew I was desperate when I thought that perhaps I should consult the User's Guide.  But then -- what's this?  When I went to the downloads website, I saw that Synology Assistant was also available for Linux!  I had no idea.  I downloaded that and, while I was at it, also snagged what appeared to be a more recent DSM patch (.pat) file.  The User's Guide on the CD was for DSM 2.3, but the one online was for DSM 3.0, so I copied that too.  Apparently DSM was the firmware updater.  The included instructions were incorrect, as I eventually figured out.  All I had to do was to navigate to the folder where I had put the downloaded .tar.gz file ("cd /LOCAL/Synology") and the accompanying install.sh file, type "install.sh," designate /usr/local as the target directory, watch a bunch of error messages roll by, accept its offer to try again by sudo, copy and paste the command it offered to create a symbolic link, and then type "SynologyAssistant."
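
Condensed into commands, the sequence was roughly as follows; the symbolic-link line is whatever the installer itself prints, so the path shown here is only a guess at its shape:
cd /LOCAL/Synology   # the folder holding the downloaded .tar.gz and its install.sh
sudo sh install.sh   # run the installer as root; give /usr/local as the target directory when prompted
sudo ln -s /usr/local/SynologyAssistant/SynologyAssistant /usr/local/bin/SynologyAssistant   # placeholder -- paste the exact link command the installer offers
SynologyAssistant   # launch the assistant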

With that, Synology Assistant was up and running, and it found the DiskStation.  I double-clicked on it.  It opened a webpage in Firefox.  Having used the One-Click installation previously, I knew there was no administrator password, so I just clicked right on in.  Now I was looking at Management and Online Resources icons.  Management gave me all kinds of options.  I noticed I was in DiskStation Manager 2.3; did this mean that there was no DSM 3.0 for Linux?  On the left side, under System, I clicked on DSM Update.  Ah, of course.  This was the part where I got to Browse to the new .pat file I had downloaded.  It said, "Transferring data to the server.  Please wait."  This time, it was done in under 10 minutes.  It then confronted me with a sign-in screen.  I could not just click on through; it demanded that I enter something.  I tried Administrator without a password.  No go.  I tried my normal Ubuntu login.  Aha! . . . er, no.  That wasn't it either.  The hell.  I was locked out of my own NAS.  I wasn't alone.  Several other people had experienced this just within the last few days.  I suspected it was due to some quirk in newly released software.  I posted a "me too" note on it in Synology's moderated forum and waited.

But then -- reverting again, desperately, to the manual -- I noticed I was supposed to log in as "admin" with no password.  That worked, and now I was in DiskStation Manager 3.0.  I clicked on "Set up a volume and create a shared folder."  That opened Storage Manager.  I selected Storage > Create and that put me in Volume Creation Wizard.  The only option that wasn't greyed out was iSCSI LUN.  The manual didn't define that term, but Wikipedia said it was short for Internet SCSI, where SCSI is short for Small Computer System Interface.  The idea seemed to be that you were using an ordinary IP network, rather than dedicated SCSI cabling, to create a SCSI setup.  LUN was short for "logical unit number."  An iSCSI LUN was apparently just any one of a set of drives in a SCSI array.  In other words, I was creating a logical drive.  So I went with that.

That gave me a choice of some more properties.  One was Thin Provisioning (default = yes), which was said to increase efficiency.  I was supposed to say how much of my 2TB (actually, 1829GB available, according to the dialog) I wanted to allocate to this first volume (default name:  LUN-1).  I was going to be backing up this file server to a 2TB drive, so I didn't worry about splitting the volume to a size that would match the external drive.  I thought it might be a good idea to have more than one volume, in case one went bad or needed maintenance.  The manual said that, on my unit, I could have up to ten.  I looked at my data and decided to go with three volumes of 600GB each.  (This would be changing later.)  Finally, there was an iSCSI Target Mapping option.  Again, the manual didn't explain this.  I found a website that sort of halfway did.  Eventually I just decided to go with the default, which was no, thank you.  I clicked Next > Apply and, in a few seconds, it was done.  I repeated for the other volumes -- or, I guess, LUNs, not volumes.  Then I clicked on the icons this process had created.  Each indicated that it had a 600GB capacity, but none of them actually seemed to have taken a bite out of the 1.8TB total.  Apparently that was how Thin Provisioning worked.  Then, to finish up with Storage Manager, I went to the HDD Management tab > Cache Management > Enable Write Cache.  I also ran a quick S.M.A.R.T. test.

This was all very nice, but I wasn't sure what it was actually accomplishing.  There weren't any new partitions appearing in Nautilus.  I wasn't sure if there were supposed to be.  I bailed out of Storage Manager.  I was looking again at Quick Start.  It said that now I needed to create a shared folder in the Synology.  I followed its link.  It put me into Control Panel - Shared Folder.  I clicked on Create.  In Create New Shared Folder, I set up a folder for LUNDATA, the first of my three LUNs.  It wouldn't let me select "Mount automatically on startup."  I gave both admin and guest read/write privileges for now.  I did the same with the other two LUNs.  I was confused, though:  after completing that step, I still didn't have anything to show for it.

It seemed that Chapter 7 of the User's Guide was where I wanted to be.  It told me to go to Main Menu (i.e., the down-arrow icon) > Control Panel > Win/Mac/NFS if I wanted to enable file sharing.  But that gave me an error:  "You are not authorized to use this service."  So, oops, that meant I had gotten logged out for dillydallying.  (First of many times!)  After re-login, the Quick Start reminded me that next on the list was "Create a User and assign privileges."  It had admin as the system default user already.  I selected that one and clicked edit.  Spooky thing here:  admin did have a password.  I wasn't sure why I didn't have to enter it when logging in.  I wasn't allowed to change the name of admin or disable that account.  I decided to change the password to something that I would actually know.  Admin already had full read/write privileges to my three LUNs.  The guest account was disabled.  I left it that way.  The manual (p. 66) said that each user could also have his/her/its own "home" folder.  It was something I had to enable if I wanted it.  I didn't need it, so I skipped that.

So now I went back to Win/Mac/NFS.  The User's Guide (p. 59) said that the unit supported file sharing with Linux via SMB, FTP, NFS, and WebDAV.  I unchecked the boxes so that the Synology would not offer Windows or Mac file service, which I did not need (and did not intend to provide to anyone else).  Instead, I checked the Enable NFS box which, the manual (p. 61) said, was for Linux clients.  I figured that, in my Windows XP virtual machine, I would access the folders or LUNs on the Synology as network drives, just as if they had been ext3 drives inside the computer.

The remaining tab in this part of Control Panel had to do with Domain/Workgroup.  I didn't know if I wanted or needed to have the Synology be part of a domain, a workgroup, or both.  But then I found that the Domain/Workgroup tab was greyed out.  As I might have assumed, "workgroup" and "domain" appeared to be Microsoft-specific.  If I went back and enabled Windows file service, the Domain/Workgroup tab became ungreyed.  So that explained that:  it wasn't something I needed in Ubuntu.

In the Control Panel > Groups section of the Synology DSM, I saw that the default "users" group had read/write privileges only to the public folder, which I had disabled.  It was just me, so I didn't need a group.  So I left that all as it was.  Next, in Control Panel > Application Privileges, it appeared I could give users access to specific Synology applications (FTP, WebDAV, File Station, Audio Station, Download Station, or Surveillance Station).  Admin wasn't listed.  I assumed it didn't need to be.  I had no other users, so I skipped that part too.

Chapter 3 in the User's Guide, "Modify System Settings," told me that in Control Panel > Network, I could choose among several types of networks.  In my version of the Network dialog, those options were LAN, PPPoE, Wireless Network, and Tunnel.  The choice for my purposes seemed to be between LAN and PPPoE.  The manual said that I should use PPPoE if I used a cable or DSL modem and if my ISP used PPPoE.  I didn't know how to check that.  It didn't sound familiar, so I decided to start with LAN, the default (first) tab.  It gave me an option of manual or automatic configuration; I chose automatic (which was, again, the default).  That seemed to be about all I could do there.  While I was in the neighborhood, I went to Control Panel > Time and set it to synchronize itself with an NTP server.  

Now it was time to set up shared folders (User's Guide, p. 69).  In Control Panel > Shared Folder, I saw the three LUNs I had set up.  So apparently a LUN was a shared folder.  I had already taken care of this.  But that raised some questions.  If it was shared, what more did I need to do so that the computer would see it?  Should I have set up a "target" when I was creating the LUNs?  And did I want to encrypt them?

If I clicked on the Encrypt box, the "Mount automatically on startup" option became ungreyed.  I would want to enable that option.  But I had to think about that for a minute.  It seemed that encryption would protect the contents of the Synology in case of theft or loss of the physical device.  But apparently it would not protect those contents while the computer was turned on.  Anyone who could get into my computer, either physically or via the Internet, would have access to those contents.  I wasn't presently requiring myself to enter a login ID when I turned on the computer, so anyone sitting in my position would still have access, despite encryption.  I hadn't yet reviewed the part of the manual having to do with Internet access to the Synology, but evidently I would also have the option of logging in to it from elsewhere.  On the other hand, I had once had the experience of not being able to get into a backup that I had encrypted.  I wasn't sure if I had mis-recorded the password or if the encryption system on that backup had somehow gotten corrupted.  On balance, I decided that it would probably be a good idea to password-protect the Internet-accessible data on the Synology, and to start requiring myself to enter a password to log in on the computer (System > Administration > Users and Groups).  But then, when I entered the password for the Synology and clicked OK, I got a warning telling me, "The performance of the encrypted shared folder will be decreased" and "The encrypted shared folder will not be available via NFS."  That would have defeated the purpose of having the Synology.  So I backed out of that.  No hard drive encryption in the Synology.

Well, the Synology was still not showing up in Nautilus.  I searched the manual for "target," in case that was the missing ingredient.  The User's Guide (p. 41) explained, "An iSCSI Target is like a connection interface . . . . [A]ll the LUNs mapped to the iSCSI Target are virtually attached to the client's operation [sic] system."  So apparently I would map my three LUNs to a target, and Ubuntu would see the target.  As the manual advised, I went into Synology's Storage Manager > iSCSI Target > Create.  There was an option to enable CHAP authentication, where the server would verify the client's identity.  I went with that.  I didn't go further and enable two-way authentication; I didn't need the computer to verify that it was contacting the right NAS unit.  I mapped all three LUNs to a single target.
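
As far as I could tell, an iSCSI target is not something you browse like a shared folder:  a Linux client would normally have to log into it with an iSCSI initiator such as open-iscsi, after which the mapped LUNs appear as raw block devices to be partitioned and formatted like local drives.  I did not try that here, but the rough shape of it (with a placeholder address, and ignoring the CHAP credentials) would be:
sudo apt-get install open-iscsi   # iSCSI initiator for the Linux client
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.50   # ask the DiskStation (placeholder IP) what targets it offers
sudo iscsiadm -m node --login   # log in; the LUNs then show up as new /dev/sd* block devices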

In Edit > Advanced, I had an option to have it calculate CRC checksums for header and data digests.  The purpose would be to reduce or prevent data corruption.  The calculation would burden the CPU in the NAS, but I suspected the network connection would be more of a bottleneck than the processor anyway.  One post said that CRC might be a good idea for data traveling through a router, as would be the case here.  A year-old VMware webpage pertaining to a different VMware product (ESX) said that data digest for iSCSI was not supported on Windows VMs.  I decided to start out with these checksum items turned on, and see what the performance was like.  I also had options pertaining to maximum receive and send segment bytes.  The manual didn't seem to have anything on that, and nothing popped out in several different Google searches.  I decided to leave those at their default values of 262144 and 4096, respectively.

I still didn't see the Synology in Nautilus, but now (as I viewed p. 72 of the manual) I believed that was probably because I had not enabled my own username (ray) to have access.  In Synology's Control Panel > User, I added that username and gave myself full read/write access to the LUNs.  But then, whoa, on the next page, the User's Guide said that, to allow a Linux client to access a shared folder, I would have to go into Control Panel > Shared Folders > select the folder > NFS Privileges > Create and set up an NFS rule.  The first box there called for Hostname or IP.  It looked like the best way to identify the client would be by its IP address.  What was the IP address of my Ubuntu computer?  Zetsumei said I should type "/sbin/ifconfig" in Terminal.  I did that and got a bunch of information regarding eth0, lo, vmnet1, and vmnet8.  Same thing if I just typed "ifconfig -a."  A search didn't shed any light.  The number for eth0 came first and looked most familiar, so I tried that, with no mapping and asynchronous enabled.  This still didn't produce anything in Nautilus, so I thought probably I should have mapped.  But to what?  The only options were "Map to admin" or "Map to guest."  How about "Map to ray"?
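
Incidentally, the client address I needed could have been pulled out of that ifconfig jumble a bit more directly, along these lines:
/sbin/ifconfig eth0 | grep "inet addr"   # just the wired interface; the "inet addr:" value is the LAN address the NFS rule wants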

A search of the Synology website led to a thread that yielded more questions than answers.  For the first time, the thought crossed my mind that the quality of the Synology organization was possibly not as gold-plated as I had hoped or imagined.  Surely the manual could have been clearer; surely, at these prices, the people posting these questions deserved some enlightenment.  At any rate, links in that thread led to one of those multiyear Ubuntu discussions, this one dealing particularly with NFS.  It seemed I should focus on learning about NFS; among other things, some posters felt that it was far better than Samba for sharing files and folders.

So I did a search and found a recent webpage promising to show me how to set up NFS.  I guessed that the real problem might be on the client side, so I started with that part of the webpage.  First off, they wanted me to install some packages:  portmap, nfs-common, and autofs.  A check of Synaptic showed that none of these were installed yet.  After installing them, I looked in the manual for the Synology IP address.  On page 161 (after many references to the IP address), the manual said that I could find it in Main Menu > System Information -- not, that is, in Control Panel.  The IP address it gave was, however, the same as the default entry it showed in Control Panel > Network > Use manual configuration; it was not the number shown in the DNS Server box.  So in the client, following the instructions on that webpage about NFS, I typed "sudo gedit /etc/hosts.deny" and added a line that said "portmap : ALL."  Then I typed "sudo gedit /etc/hosts.allow" and added a line that said "portmap : [Synology IP address]," using the address I had just found in Main Menu > System Information.  Next, I typed "sudo gedit /etc/hosts" and added a line near the top that said "[Synology IP address] [Synology Server Name]," in the same format as the other lines there.  (The server name was shown in Main Menu > System Information.)
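
Condensed, with 192.168.1.50 standing in for the address from Main Menu > System Information and DiskStation standing in for the server name, those client-side changes amounted to this:
sudo apt-get install portmap nfs-common autofs
echo "portmap : ALL" | sudo tee -a /etc/hosts.deny   # deny portmap to everyone by default
echo "portmap : 192.168.1.50" | sudo tee -a /etc/hosts.allow   # then allow just the DiskStation
echo "192.168.1.50   DiskStation" | sudo tee -a /etc/hosts   # name-to-address mapping (I added mine near the top with gedit)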

Continuing with the NFS webpage's instructions, I was supposed to type something along the lines of "sudo mount [Synology Folder] [Local Folder]."  For that purpose, I understood that Synology Folder = [Synology IP address]:[Synology Shared Folder].  But I was not sure what the Shared Folder part was supposed to be.  Was I supposed to refer to the LUN or the iSCSI Target on the Synology unit?  Since the User's Guide (p. 41) said that an iSCSI Target was "like a connection interface," and that all the LUNs attached to it would be attached to the operating system, it seemed that I would need only one target, as I had set it up.  But now that I had learned more about security on the Synology, I had changed my mind about the number of shared folders I wanted.  I just wanted two, each 900GB in size:  one to contain stuff that shouldn't be changing very often, and that only the administrator should have write privileges for, and one for everything else, i.e., for the stuff that I would want to be able to mess with on a daily basis.  So after changing the LUNs and target in Storage Manager, I guessed that I would be creating two folders using the pattern of "/home/[username]/[foldername]" (where "username" would be "ray" in my case) -- one for each of the two LUNs on the Synology.  One of them was called SYNDATA.  On that basis, I typed "sudo mount [Synology IP address]:[Synology 900GB folder name] /home/ray/SYNDATA."  This gave me "access denied by server while mounting [Synology Folder]."  Not a desirable answer, but at least it was a reply of some kind!
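
Spelled out with a placeholder address, the attempt was essentially this; the last line is only a guess on my part, since "access denied by server" generally means either that the export path is wrong or that the client's IP doesn't match the server's NFS rule:
sudo mkdir -p /home/ray/SYNDATA   # local mount point
sudo mount -t nfs 192.168.1.50:/SYNDATA /home/ray/SYNDATA   # what I typed, in effect -- rejected with "access denied by server"
sudo mount -t nfs 192.168.1.50:/volume1/SYNDATA /home/ray/SYNDATA   # untested guess:  the export may need its full server-side path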

By now, I was completely confused, and more than a little irritated at how very long this was taking.  The NAS was supposed to be simplifying my situation, not making it more complex.

It did seem, at this point, that it might have been easier to troubleshoot this if I had been using a computer as my NAS:  I could have gone into it and typed various commands to maybe get a bit more insight on what was happening in there.  A search for that error message led to the suggestion that I type "/usr/sbin/rpcinfo -p" to see what ports the server was using, but that gave me a "No such file or directory" error.
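
A couple of commands that can at least interrogate the server side, assuming the nfs-common tools installed earlier and a placeholder address:
which rpcinfo   # check whether the rpcinfo binary is installed at all, and where
showmount -e 192.168.1.50   # list the NFS exports the server is actually offering (showmount comes with nfs-common)
rpcinfo -p 192.168.1.50   # if rpcinfo is present, list the RPC services and ports the server has registered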

I decided to put in a support request at Synology.  The form required me to enter the Firmware Version -- but, of course, this was not provided in the System Information dialog.  I just entered something that seemed approximately right.  It also asked for the serial number -- and that, they helpfully indicated, was located on the bottom or perhaps the rear of the unit.  After turning it around, risking unplugging it, and doing gymnastics to hold it while typing, I realized that, well, they might have mentioned that that bit of information actually *was* in the System Information dialog.  But when I got down to the part where they were ready and listening to what I had to say, I was not sure what to type.  There wasn't an option of talking to (or even chatting with) a live person.  I had to type something.  But what?  How could I possibly explain all this in a few words?

What I needed, somewhere in the Synology software, was a tool that would tell me what was happening.  "You have connected to a computer" or "You have not connected to a computer," etc.  I wasn't sure -- I hadn't done much networking before -- but I suspected that I could get that kind of information by using regular Linux commands on a computer in a network.
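
The sort of thing I had in mind would presumably start with basics like these (placeholder address again):
ping -c 3 192.168.1.50   # is the unit reachable on the LAN at all?
sudo apt-get install nmap   # not installed by default
nmap 192.168.1.50   # which ports and services the unit is actually listening on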

I decided that what I would tell the Synology people was just that they should look at this post.  I had identified a number of areas they could improve; and if they really got on the stick, they might even be able to respond in time to help me, before I returned the unit to the vendor or resold it.  The unit had more than a dozen positive remarks from other purchasers at Newegg, so I was hopeful.  But meanwhile, I started a post on the alternative of using a separate computer to create my own NAS.

Ubuntu Do-It-Yourself Network Attached Storage (DIY NAS): A Preliminary Look

I had just done an enormous amount of work in an unsuccessful attempt to get a Synology DS109 network attached storage (NAS) device to work in my Ubuntu 10.04 system.  It was an expensive unit, on my budget, even if it was near the bottom of Synology's line of products, so I was hoping for good things from it.  But I was not able to figure out why it wasn't connecting with my computer.  So while waiting for a reply from Synology's tech support, I did a search for do-it-yourself (DIY) alternatives.  This post describes a bit of what I found.

What I was hoping to find, in a DIY alternative, may be summarized in this excerpt from the previous post:

I decided to go ahead with the Synology project for now; I could return or sell the device if it really wasn't what I wanted.  I probably could have assembled another computer at equal or lower cost, with far greater potential storage capacity, with more RAID options, with a more powerful processor (e.g., for checksum calculations) if needed, with potentially more choices of software packages and commands for managing and adjusting it, and with more flexible hardware troubleshooting options (i.e., more than just fix it or replace it) in the event of malfunction.  Its drawbacks would include time and expense for software and hardware selection, learning, installation, maintenance, and troubleshooting; physical space requirements; power consumption; and noise and heat generation.
If the Synology unit had been easy to use, I wouldn't have been able to generate that list of potential advantages of a DIY alternative.  I accumulated that list during the process of writing up the hassles I was having.  So now it was a question of how true those observations really were.

One thing for sure:  if I had taken seriously the idea of building a NAS myself at the outset, I would not have bought the Synology unit.  I did, in fact, have an old computer sitting around, one that I rarely used, mostly just for troubleshooting random hardware problems.  I was willing to convert it to another function. So, right there, I did have much of the hardware that I would need.  Its case design would accommodate a number of drives, if I decided to build a RAID NAS that would require that.

That took care of some of the objections to a DIY NAS.  What about space, noise, heat, and power?  There was no comparison:  the Synology was such a cool, quiet, sleek little thing compared to that whole computer case and its monitor and keyboard.  Even if I put the peripherals in a drawer and managed the server remotely (assuming that was possible), there was still the noise from its fans and power supply.  The trade-up in expandability and flexibility (e.g., with RAID) would come at a cost.  And that was the real question:  did I want the additional capabilities badly enough to accept the drawbacks?

Then again, it occurred to me that I didn't absolutely have to have the server remain on constantly.  Couldn't it hibernate during slack periods?  For noise reduction, couldn't I park it in a closet?  This seemed like something worth experimenting with.

What about hardware hassles?  I could probably just plug in the drives and, optionally, a RAID controller and be done.  The Synology unit did not have a RAID option, and it did not come with a drive.  The DIY NAS would have more things that could fail, but failures did not happen often, and with an optional RAID setup it would be better equipped to absorb them.  In terms of hardware, the DIY NAS was the winner.

And software?  The Synology software package had a nice GUI that, for all its good looks, had managed to confuse me and had thus failed to deliver a working solution.  If the Synology people came through on my tech support request, I would be ahead of the game there; but my request had been so broad and confused that I doubted they would be able to help much.  If it was a matter of devoting another five or ten hours, I was leaning toward trying something new, rather than beating my head against the wall with more efforts to understand the Synology software.

I decided, though, that I had probably better take a closer look at the software I would be using.  Ubuntu came with software RAID, and I had already had a bit of experience with that.  But what about NAS -- what software would I use for that?  I had gotten the impression that NFS was a good alternative to Samba for a Linux network -- simpler and faster, but possibly not as secure and apparently not as good for interfacing with Windows machines.  The Synology unit used NFS.  But this didn't really answer the question.  What software -- really, what operating system -- would I use to manage the DIY NAS?  Sources cited on the Wikipedia page for FreeNAS made it sound like a great solution.  It was apparently UNIX-based, and thus was some kind of cousin to Ubuntu, but it seemed I would have to invest some additional time to learn FreeNAS.  Ubuntu Server seemed like a more familiar, better-supported, and more flexible alternative.  Openfiler was another possibility.  A lot of people swore by Windows Home Server; I just didn't want to pay $100 for the privilege of going back in the direction of relying completely on Microsoft software.  Going in a different direction, there also seemed to be routers with hard drive connections, though I wasn't fond of the USB connection offered by the one I saw.

Those thoughts led to a thread that made me ask whether I wanted a NAS or a home server.  The tradeoffs described above were echoed in another thread I found, but now the discussion had mutated a bit.  If I was going to use a computer as a NAS, it seemed I might as well make it a home server.  In fact, that's probably how I had been thinking of it anyway:  a machine that could do whatever a computer could do, including RAID as well as NAS.

By this point, a couple of things had happened.  I had downloaded a copy of Ubuntu Server and had decided to try setting it up on that old computer, to see how easy it would make things for me in terms of network storage and RAID options.  I had read enough webpages to persuade me that setting up a home server could be somewhat time-consuming and thus would ideally be approached, for my purposes, as a longer-term project:  install the operating system, tinker with the hardware, and gradually move toward comfort with its maintenance and other requirements.  And I had heard back from the Synology people via email, not once, but twice in the space of three hours in the evening, East Coast time.  They were offering to set up a TeamViewer session with me for the next day.  So my working plan, at this point, was to get the Synology going, if I could do so the next day.

Wednesday, September 29, 2010

Ubuntu 10.04: Sharing a Folder on an NTFS Partition

I had two computers running Ubuntu 10.04.  These machines were connected by ethernet cable through a router.  Both had Samba installed, and both had a shared folder set up as described in another post.  I had set up one of those shared folders on an ext3 partition.  As that other post indicates, I was able to see the contents of that shared folder from the other computer.  But without thinking about it, I had set up the other computer's shared folder on an NTFS partition.  Sharing a folder on that partition, and seeing it from the other computer, turned out to be more complicated than I had expected.  This post describes my efforts in that regard.

Right away, I rediscovered that chown wouldn't work as expected with NTFS drives, and that it was therefore necessary or advisable to type "sudo gedit /etc/fstab" and change the lines for NTFS drives or partitions to something like this:

UUID=[UUID for the drive] /media/partitionname ntfs-3g rw,suid,dev,exec,auto,user,async,umask=000 0 0
and then unmount ("sudo umount /media/partitionname") and remount the partition.  Remounting would apparently force a look at the new contents of fstab, and could be done with "sudo mount /dev/sdc3 /media/drive1" or just "sudo mount -a" (assuming a folder named /media/drive1 had already been created (e.g., "sudo mkdir /media/drive1")).  That fstab line was one I had developed to replace the "defaults" word that appeared in some fstab lines previously:  it made all of the default settings explicit, and changed some of them.  Yet even with this change, when I tried to set the Sharing Options in Nautilus, I got this error message:
Folder Sharing

'net usershare' returned error 255: net usershare add: cannot share path /media/partitionname/folder2 as we are restricted to only sharing directories we own.

Ask the administrator to add the line "usershare owner only=false" to the [global] section of the smb.conf to allow this.
I was willing to go into smb.conf and make that change, but first I wanted to know why (as I confirmed with a right-click > Properties) root still owned that folder.  This led to the insight that NTFS filesystems (such as the one I was trying to share) did not remember ownership, so the partition would need to be reminded each time I mounted it.  One way to do this was to modify the fstab line to include user and group identifications:
UUID=[UUID for the drive] /media/partitionname ntfs-3g rw,suid,dev,exec,auto,user,async,umask=000,uid=username,gid=groupname 0 0
and then save fstab and unmount and remount the partition (above).  And yet that still didn't do it.  The better statement seemed to be, not that NTFS partitions needed to be reminded, but that they simply didn't have an ownership concept.  There seemed, then, to be no alternative but to type "sudo gedit /etc/samba/smb.conf" and add the line "usershare owner only=false" to its [global] section, as advised above.  I saved and closed that and tried Sharing Options again.  This time it worked, or at least it didn't give me an error message.
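
One quick way to confirm that Samba actually picked up that new [global] line is testparm, which comes along with the Samba packages:
testparm -s | grep "usershare owner only"   # dumps the parsed smb.conf; the setting should appear, set to No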

Unfortunately, the shared folder still wasn't showing up on the other computer's Places > Network list.  It sounded like the solution might be to try Sharing Options as root ("sudo nautilus").  Unfortunately, at this point "sudo nautilus" was giving me an error message on that computer:
(nautilus:23890): Unique-DBus-WARNING **: Error while sending message: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
I wondered what would happen if I stepped away from the NTFS situation for a moment.  I decided to try sharing a folder on an ext3 partition in that same computer.  I right-clicked on the ext3 partition in Nautilus, chose Create Folder, named the folder, went to the Share tab, and selected "Share this folder" and "Allow others to create and delete files in this folder."  This gave me an error message:
'net usershare' returned error 255: net usershare add:
cannot stat path /media/partitionname/foldername to ensure this is a directory.  Error was No such file or directory
I guessed that this might be because I had gone directly to the Share tab without first clicking Close in the Basic tab there in the P4 Share Properties dialog.  So, OK, I saved the new folder first, and then right-clicked on it in Nautilus and chose Sharing Options.  This time it went OK.  I went into System > Administration > Samba, as described in the other post, and added this folder there.  But I still wasn't able to see it from the other computer.  I rebooted this problematic machine and tried again.  In doing so, I noticed that GParted and some other things had still been running on another desktop, which may have explained the earlier "sudo nautilus" error; after the reboot, "sudo nautilus" worked OK.  The sharing step just described also worked OK.  But Places > Network on the other computer still didn't show the problematic computer.

I was just about to change some hardware and reinstall Ubuntu on that machine anyway, so I deferred further effort on this project until after that was done.  At that point, though, I found a better folder-sharing solution by using a Synology network-attached storage (NAS) unit.

Tuesday, September 28, 2010

Ubuntu 10.04: Ethernet Networking Two Computers and a Router

I had installed Ubuntu 10.04 on two desktop computers.  Both were connected by ethernet cable to a router.  This seemed like the basis for a network, so that the two computers could talk to each other directly.  I had not previously set up a network in Ubuntu.  This post describes that learning experience.

I started with a search for guidance.  This led to a thread that persuaded me to try Samba shares, with Webmin for my Samba GUI.  So I typed these lines, one at a time:

sudo apt-get install samba
wget http://prdownloads.sourceforge.net/webadmin/webmin_1.510-2_all.deb
sudo dpkg --install webmin_1.510-2_all.deb
sudo apt-get install -f
The last line was a response to the error messages.  Then, in Firefox, I went to https://localhost:10000 and, after confirming a security exception, I entered the same username and password that I used to log into Ubuntu on the computer.  This gave me a locally stored Webmin webpage.  I clicked "Refresh Modules" on the left side of the page.  When that process was done, I clicked on Servers, in the top left corner.  Sure enough, it said, "Samba Windows File Sharing."  I went through that same process on both computers.

Then I searched for information on how to use Webmin.  This led to the discovery that Webmin was not fully compatible with Ubuntu, which would explain why it wasn't available for download through Synaptic.  So now it was time to undo what I had just done:
sudo apt-get remove webmin
sudo apt-get autoremove
As an alternative, someone mentioned eBox.  It looked like eBox had had some hard times, but hopefully those were all behind us now.  So I went into Synaptic and marked ebox-samba for installation.  This brought the plain ebox platform and a boatload of other packages along with it.  One of those packages was ddclient, which asked me many questions that I could not answer.  I tried to fake it, just going with the defaults, but wound up with an "Empty host list" message telling me that I had failed in some significant sense.  With that taken care of, Synaptic went ahead and installed all these downloads, presenting me once again with the ddclient Empty host list before it was done.  Yet after all that, there was no eBox icon in my menu and typing "ebox" at the prompt just gave me "command not found."  So I used the same steps as above to remove eBox.
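
That is, roughly the same recipe as for Webmin:
sudo apt-get remove ebox ebox-samba   # the package names as they appeared in Synaptic
sudo apt-get autoremove   # sweep out the boatload of dependencies it had pulled in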

A different search led to a page that advised me to type this command:
sudo apt-get install samba samba-common system-config-samba
With that done, I went to System > Administration > Samba.  This brought up a Samba Server Configuration dialog.  There, it looked like I could go to File > Add Share to designate a folder that would be shared between computers.  I took these same steps on the other computer.  Then I specified a folder on one computer that I did want to share.  I made it writable, visible, and accessible to everyone.  I found that this had to be a folder; I couldn't share a whole partition.  Later, I saw in another source that I should also have gone into Preferences > Server Settings > Security tab, there in the Configuration dialog, and changed Authentication Mode to Share and Guest Account to my username, so I went back and did that.  Then, back in Nautilus, I right-clicked on that folder and selected Sharing Options > Share this folder and allowed others to create and delete.  I right-clicked on the folder in Nautilus again and went to Properties > Permissions, but it was already owned by me with all kinds of rights and privileges.
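
For anyone who prefers the config file to the GUI, those settings correspond to entries in /etc/samba/smb.conf along these lines (the share name and path here are made up; the guest account is whatever username you chose):
[global]
   security = share
   guest account = ray

[shared]
   path = /home/ray/shared
   read only = no
   browseable = yes
   guest ok = yes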

That seemed to be all I was supposed to do.  But I couldn't figure out where I was supposed to go, on the other computer, to see the shared folder on the first computer.  Eventually I saw instructions to go to Ubuntu's menu > Places > Network.  There, it had entries for each of the two computers, plus something called Windows Network.  I found that, if I just double-clicked on the entry for the other computer and then waited patiently for it to do its thing, with no visible indication that it had heard me, eventually (after maybe ten seconds) it would get around to showing me the shared folder on the other computer.  And it worked:  I was able to retrieve a file from that other computer.
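
A quicker way to check whether a machine is really publishing its shares, instead of waiting on Nautilus, is smbclient (a separate package):
sudo apt-get install smbclient
smbclient -L ANTEC -N   # list the shares the machine named ANTEC is offering; -N skips the password prompt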

Windows XP & Ubuntu RAID 0 Dual Boot: Error: No Such Device

I had set up a dual-boot system with Ubuntu 10.04 on a two-drive RAID 0 array and Windows XP SP3 running from a third hard drive.  I replaced the WinXP drive with a different one, copying the files over from the old drive's partitions to the new one, and rebooted.  When I selected the Windows XP option in the GRUB2 boot menu, I got an error message:

error: no such device: [apparently a UUID number]
error: invalid signature.
Press any key to continue...
As described in comment 90 of a long thread on this Ubuntu bug (an alternate approach appeared in another post), the solution to this problem was said to involve editing grub-mkconfig_lib.  Since I was able to get into Ubuntu (though not Windows), I took the approach described in comment 90.  In Ubuntu, in Terminal, I typed these commands:
sudo apt-get update
sudo apt-get install grub-common
The first command ended with errors along the lines of "could not connect to archive.getdeb.net."  This turned out to be an indication that the getdeb repository was down.  I repaired it by editing sources.list to refer to a mirror site instead.  Next, I typed this:
sudo gedit /usr/lib/grub/grub-mkconfig_lib
and changed gedit's Edit > Preferences to show line numbers.  I went down to line 174, ready to insert a # sign before whatever was on that line.  But it was just "fi" which, I thought, marked the end of an "if" statement.  What they wanted me to add did not look like it belonged there.  The long thread had not been visited for nearly four months; I suspected the file had been changed in a bid to fix the problem.

I got that error in the first place, as noted above, not because of an obvious malfunction in GRUB2, but because I had moved the partition containing the Windows XP program files.  I heard that just typing "sudo update-grub" would cause GRUB2 to search for operating systems.  And it did:  in the process of "Generating grub.cfg," it reported that it found Windows XP on /dev/sdc1.  I wondered if that would fix the problem by itself, so I rebooted and selected the Windows option again from the GRUB menu.  And that was it.  Problem solved!
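
In other words, the whole repair came down to one command, plus a quick sanity check on the result before rebooting:
sudo update-grub   # regenerate /boot/grub/grub.cfg and re-scan the drives for operating systems
sudo grep -i "windows" /boot/grub/grub.cfg   # confirm that a menu entry for Windows XP was actually written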

Monday, September 27, 2010

Dual-Boot RAID 0: Ubuntu 10.04 and Windows XP

I wanted to set up a SATA RAID 0 array that would function like any other dual-boot system:  I would turn on the computer; it would do its initial self-check; I would see a GRUB menu; and I would choose to go into either Windows XP or Ubuntu 10.04 from there.  This post describes the process of setting up that array.

With no drives other than my two identical, unformatted SATA drives connected, I turned on the computer.  The BIOS for my Gigabyte motherboard did not give me the obvious RAID configuration option I had hoped for.  I hit DEL to go into BIOS setup.  Nothing jumped out at me.  Desperate for guidance, I turned to the manual.  I was looking at an Award Software CMOS Setup Utility.  The manual directed me to its Integrated Peripherals section.  There, I set OnChip SATA Controller to Enabled, OnChip SATA Type to RAID, and OnChip SATA Port4/5 Type to As SATA Type.  I hit F10 to save and exit.

According to the manual, that little maneuver was supposed to give me an option, after the initial boot screen, to hit Ctrl-F and go into the RAID configuration utility.  Instead, the next thing I got was this:

Press [Space] key to skip, or other key to continue...
I didn't do anything.  It scanned my drives and then led on to the Ctrl-F option.  I rebooted and tried it again.  Hitting the space key led to the same result.  Ctrl-F opened the AMD FastBuild Utility.  I hit option 2 to define an array.  This gave me a list of my two drives, labeled as LD 1 and LD 2.  Apparently it wasn't supposed to show anything.  LD was short for "logical disk set."  It was essentially showing two separate arrays, each having one drive.  So although the manual didn't say so, it seemed that I needed to get out of here and go into option 3 to delete these arrays.  I did that and then went back into option 2.  Now I was looking at a blank list of LDs, just like in the manual.

So now I was ready to prepare my array.  In option 2, I hit Enter to select LD 1.  This defaulted to RAID 0 with zero drives assigned to it.  I arrowed down to the Assignment area and put Y next to each of the two drives listed.  Now it said there were two drives assigned.  But now I had a couple of things to research.  The screen was giving me options for Stripe Block, Fast Initialize, Gigabyte Boundary, and Cache Mode.  The manual didn't say what these were.

I did a search for information on the Stripe Block size.  I found an old AnandTech article that took the approach of choosing the lowest stripe size where performance tended to level out -- where, that is, increasing the stripe size another notch did not increase performance.  For the RAID controllers they were testing, it looked like performance kept increasing right up to the range of 256KB to 512KB, for those controllers whose options went that high.  Mine only gave me a choice between 64KB and 128KB, so I chose the latter.  A more recent discussion thread seemed to support that decision.

Regarding the "Fast Init" option, a search led to some advice saying that slow initialize would take longer but would improve reliability.  A different webpage clarified that the difference was that slow initialize would physically check the disk and would be suitable if you had had trouble with the disk or if you suspected it had bad blocks.  I decided to stay with the default, which was Fast Init ON.

The "Gigabyte Boundary" option would reportedly make the larger of two slightly mismatched drives in an array behave as though it were the same size as the smaller one.  The concept appeared to be that, if you were backing up one drive with another (which was not the case with a RAID 0 array), you would use this so that the larger drive would never contain more data than the smaller drive could copy.  Mine was set to ON by default.  I couldn't quite understand why anyone would need to turn it off, even if the drives were the same size.

Finally, the "Cache Mode" option was apparently capable of offering different choices (e.g., write-back), but mine was fixed at WriteThru with no other options available.  So I thought about it a long time and then decided this was acceptable to me.  So then I hit Ctrl-Y to save these settings.  Now I was back at the Define LD Menu, but this time it showed a RAID 0 array with two drives and Functional status.  That seemed to be all I could do there, so I exited that menu.  I poked around the other options on the Main Menu.  I seemed to be done with the FastBuild Utility.

Next, the manual wanted me to use a floppy disk to install the SATA RAID driver.  I could have just gone ahead and done that -- I still had a floppy drive and some blank diskettes -- but I thought surely there must be a better way by now.  Apparently there was:  use Vista instead of WinXP.  But if you were determined to use XP, as I was, the choices seemed to be either to go through a complex slipstreaming process or use the floppy.

There was, however, another option.  I could buy a RAID controller card, for as little as $30 or as much as $1,000+, and it might come with SATA drivers on CD.  This raised the question of whether the RAID cards actually had some advantage beyond their included CD.  My brief investigation suggested that a dedicated RAID card could handle the processing task, taking that load off the CPU, but that there wasn't much of a processing task in the case of RAID 0.  In other words, for my purposes, a RAID controller card wouldn't likely add any performance improvement.  Someone said it could even impair performance if it was a regular PCI card (as distinct from, e.g., PCIe) or if its onboard processor was slower than the computer's main CPU.  There did seem to be a portability advantage, though:  moving the array to a different motherboard would require its re-creation, in at least some cases, but bringing along the controller card would eliminate that need -- though the flip side was that the card might fail first, taking the array with it.

Further reading led to the distinction between hardware and software RAID.  An older article made me think that the essential difference (since they all use hardware and software) was that software RAID would be done by the operating system and would run on the CPU, and would therefore be dependent upon the operating system -- raising the question of whether dual-booting would be impossible in a software RAID array, as a generally informative Wikipedia article suggested.  To get more specific, I looked at the manual for a popular motherboard, the Gigabyte GA-MA785GM-US2H.  That unit's onboard RAID controller, plainly enough, was like mine:  it depended upon the operating system.  Wikipedia said that cheap controller cards provide a "fake RAID" service of handling early-stage bootup, without an onboard processor to take any of the load off the CPU.  FakeRAID seemed to get mixed reviews.

An alternative, hinted at in one or two things I read, was simply to set up the RAID 0 array for the operating system in need of speed, and install a separate hard drive for the other operating system.  I was interested in speeding up Linux, so that would be the one that would get the RAID array.  I rarely ran Windows on that machine, so any hard drive would do.  A look into older, smaller, and otherwise seemingly less costly drives led to the conclusion that I should pretty much expect to get a 300GB+ hard drive, at a new price of around $45.  Since I was planning to use Windows infrequently on that machine, it was also possible that I could come up with some kind of WinXP on USB solution, and just boot the machine with a flash drive whenever I needed access to the Windows installation there.

I decided that, for the time being, I would focus on just setting up the Ubuntu part of this dual-boot system, and would decide what to do about the Windows part after that.  I have described the Ubuntu RAID 0 setup in another post.

Installing RAID 0 in Ubuntu 10.04 (Lucid Lynx)

In a previous post, I was working toward having a dual-boot Windows XP and Ubuntu 10.04 system, with a RAID 0 array for at least the Ubuntu installation.  Here, I describe the process I went through to set up that array.  I had previously had a WinXP/Ubuntu dual-boot system on a single Western Digital Velociraptor drive; the main change here is adapting that to the RAID scenario, and seeing whether this would be faster.

I started with a very helpful video by amzertech.  (Actually, two videos.)  I have provided a detailed description of the process (below).  To summarize, I had to boot the alternate Ubuntu CD and use it to install root ("/") and swap partitions on each drive in a RAID format, along with a /boot partition on one drive.

Troubleshooting

It all seemed to go pretty smoothly, and certainly other people seemed to have had good luck with that video.  But when I was done and I tried to boot the system without the CD, I got "error:  no such disk" and then a "grub rescue" prompt.  And in the FastBuild Utility discussed in the previous post, I was no longer seeing two drives configured into one logical disk set; they were back to being two separate JBOD ("just a bunch (or a box) of disks") drives.  So it seemed that what I had done previously was not necessary.

After digging around in a search, I found a post where they said the problem got solved for them by undoing the BIOS edits I described in the previous post.  So I rebooted, hit Del, went into the BIOS > Integrated Peripherals and set the OnChip SATA Controller to Disabled.  This greyed out the other two items that I had changed and put them back into their previous settings.  I saved and rebooted.

This time, I got "DISK BOOT FAILURE, INSERT SYSTEM DISK AND PRESS ENTER."  A search led to a Tom's Hardware thread where they advised, among other things, adjusting the boot order in the BIOS.  So I tried setting the hard drive to boot first, instead of the CDROM drive.  But I got the same error again on reboot.  That thread contained many other suggestions:  update your motherboard's BIOS, make sure all cables are firmly connected, disconnect all other drives, use other software (e.g., Disk Boot Manager), check that the jumpers on your hard drives are set correctly (at least if you're using PATA drives, which I wasn't), fiddle with BIOS settings, etc.

The BIOS update was a possible solution, but I began instead by rearranging cables a couple of times.  My goal there was to match up the hard drive on which I had installed the /boot partition (I didn't know which of the two it was) with the lowest-numbered SATA connector on my motherboard (i.e., SATAII0 as distinct from SATAII1, SATAII2, etc.).  But that didn't do it either; still the DISK BOOT FAILURE message.  I also tried going back into the BIOS and setting Hard Drive as the first, second, and third boot options.

Following some other suggestions, I went back into BIOS and enabled the OnChip SATA Controller as Native IDE type (even though they were SATA drives).  This gave me a new error:  "MBR Error 1.  Press any key to boot from floppy."  I tried reversing the cables, in case I had the drives in the wrong order.  And that did it.  Success!  Ubuntu booted.  I restarted the computer, set the BIOS back to boot first from USB, second from CDROM, and third from hard drive (as it had been previously), and thus verified that the solution seemed to be as follows:  set BIOS to Native IDE type, and make sure the drive with the /boot partition is connected to the first SATA connector on the motherboard.

Assessment

In Ubuntu, I went into Synaptic and installed GParted.  I took a look at what had happened.  I had used two 640GB drives for my RAID 0 array.  In an earlier time, this would have been extravagant.  By now, however, prices on such drives were in the basement.  But I did still wonder if I could use the rest of these drives for some other purpose.  I had allocated an absurdly large 100GB space (50GB per drive) for my root partition -- leaving a total of more than 1TB unused!

So now, in GParted, I saw three drives:  md0, sda, and sdb.  md0 was a net 93GB, with no further information.  sda essentially had the partitions I had set up in the RAID setup process:  the root partition, the /boot partition, the swap partition, and about 540GB left over.  sdb had its own matching root and swap partitions, though they were labeled as having an "unknown" file system, and a matching 540GB unallocated.
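
GParted's silence about md0 is normal for software RAID; the command line is more forthcoming:
cat /proc/mdstat   # shows md0, its raid0 level, and which partitions it is striping across
sudo mdadm --detail /dev/md0   # chunk size, member devices, and the state of the array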

More RAID Partitions

Could I use that 1TB of unused space?  I wouldn't have put my data there -- RAID 0 had twice the risk of data loss, in the sense that either drive's failure would take down the array -- but there were some other possibilities.  In particular, I could store my VMware virtual machines (VMs) there, as long as I kept backup copies elsewhere; they would run faster on the RAID array.  And I also wanted to set up a separate /home partition.  But should I have created these partitions while I was going through the initial RAID setup?  And could I store anything on just one hard drive or the other, or did everything on these two drives have to be set up in a RAID format now?

I decided not to research the option of storing things on just one drive or the other at this point.  Apparently it was possible, and could even be done after the fact from within Windows XP.  Since there would be a lot of empty room left over after this RAID installation that I had no present need for, I would just let it sit on the drives as unformatted space for the time being.  But for the VMs and anything else (video editing files?) that might call for the performance of RAID 0, I decided that I did want to make use of some of that space.  And I wanted it to be in separate partitions, for backup purposes, not part of the Ubuntu RAID installation mentioned above.  I figured the contents of these partitions would change more frequently, and would require a different backup schedule, than the root program installation.

RAID 0 Setup:  Detailed Description

So now that I had my BIOS and my drives and everything else in order (above), I restarted the process of setting up the RAID 0 array, and this time I took notes.  I booted the alternate CD and went into Install Ubuntu.  I went through the initial setup options (language, keyboard, etc.).  When it got to the partitioning screen, I chose Manual.  Now I saw that, actually, I didn't have to undo what I had already done.  The 100GB RAID 0 device I had already set up would be just fine.  I could just arrow down to the FREE SPACE items and add stuff there.

So I did that.  For each of the two drives (sda and sdb, on my system), I selected FREE SPACE and hit Enter > Create a new partition > 50 GB (giving me 100GB total) > Continue > Logical > Beginning > "Use as" > "physical volume for RAID."  Then I chose "Done setting up the partition."  Then, back in the Partition Disks screen, I arrowed up and hit Enter at "Configure software RAID" > Yes.  Next, Create MD device > select the two items shown as 49999MB (i.e., about 50GB) > Continue > Finish.  This put me back at the Partition Disks screen again, but this time I had a new RAID 0 device of 100GB.  I selected that device, hit Enter > Use as > Ext3 (more reliable than Ext4) > Enter.  I set the mount point to /home and the mount options to relatime, and labeled it UHOME.  Then I selected and hit Enter on "Done setting up the partition."  I repeated this process, starting by selecting FREE SPACE:  I created another pair of 200GB logical partitions, combined them into a second RAID 0 device (about 400GB), labeled it RAIDSPACE, and set it to mount at /media/RAIDSPACE.  My VMs would go in a folder on this partition.  I still had 700GB left over, but at least I had made a stab at converting some of that unallocated space to a useful form.
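
For the record, it appears the same sort of array can also be created from a terminal with mdadm, instead of through the installer.  A rough sketch only, on the assumption that the new array comes out as /dev/md3 and that its member partitions are /dev/sda7 and /dev/sdb7 (the actual device names on a real system will differ):

# combine the two 200GB partitions into one RAID 0 device
sudo mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/sda7 /dev/sdb7
# format it as ext3 and label it RAIDSPACE
sudo mkfs.ext3 -L RAIDSPACE /dev/md3
# create the mount point and mount it
sudo mkdir -p /media/RAIDSPACE
sudo mount /dev/md3 /media/RAIDSPACE
# a matching /etc/fstab line would look something like:
# /dev/md3  /media/RAIDSPACE  ext3  relatime  0  2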

I noticed, in this process, that my previous root partition was no longer set as root.  I configured it again.  Now, when I went to Configure software RAID, it seemed that Ubuntu was going to be reinstalled there.  I went ahead with that.  With those changes made in the Partition Disks screen, I arrowed down and selected "Finish partitioning and write changes to disk" and hit Enter.  It gave me an option of formatting the root partition, but since I had already installed Ubuntu there, I didn't want to do that.  This put me into an empty blue screen for a while, but then it began installing the base system.  So maybe I should have let it format the root partition after all.  It went through the installation process and then told me that this seemed to be the only operating system on this computer and asked if it was OK to install the GRUB boot loader to the master boot record.  I said yes.

When it was done, it went into Ubuntu.  I reinstalled GParted and took another look.  It looked like I might have done something wrong.  The only md device was md0, the 93GB partition from before.  It did show the UHOME and RAIDSPACE partitions on sda, with matching Unknown partitions on sdb, all with RAID flags next to them.  Nautilus showed RAIDSPACE as a legitimate partition with 348GB free (19 GB used already!).  But no, everything seemed OK.

Bringing Stuff Over to the New Installation

I shut down the machine, connected a disk containing files from my previous system drive, booted with a live CD, and copied some things over. Specifically, I installed a third hard drive and, while booted with the live CD, used GParted and Nautilus to prepare a partition on it and then copy over all of the files from my previous machine's Windows XP installation.  I tried to copy the /home partition from my previous installation, to replace the contents of the /home partition in the RAID array, but the array was not accessible from a live CD boot.  So I copied the previous /home folder to that newly installed third hard drive.  (I knew that the previous system drive was bootable, so I would not start the system from a hard drive (i.e., without a live CD) while that one was connected, lest it screw up my new installation.)  I got an error message for just one file, a Google Earth file:  "Can't copy special file."  A "special file," according to Andreas Henriksson, was something like a pipe, device node, or socket.  I would be reinstalling Google Earth anyway, so this was no problem.
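
If I had been doing that copy from a terminal instead of from Nautilus, rsync would have been another way to preserve ownership and permissions.  A sketch only -- the mount points under /media here are hypothetical stand-ins for whatever the live CD actually assigned:

# copy the old home folder to the third drive, keeping permissions, owners, and symlinks
sudo rsync -avh /media/old-system/home/ray/ /media/third-drive/ray-backup/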

When those copy operations were done, I disconnected the previous drive and rebooted from the hard drive, this time with an external USB drive connected.  The external drive contained my previous fstab and some other materials that I needed now, as I began to work through the Ubuntu post-installation adjustment process described in another post.

The first step of that process required some adjustment for the RAID situation.  The /home partition in the RAID array wasn't available via live CD.  So I tried the technique of commenting out the regular fstab line for /home and replacing it with a line referring to /dev/md2 (where the home partition was to be), mounted at /home_tmp (instead of /home).  I typed "sudo mkdir /home_tmp" and then rebooted.  Now, if all went well, the contents of the RAID home partition would appear at /home_tmp, leaving /home itself as an ordinary folder.  On reboot, I typed "sudo nautilus," went into /home (not /home_tmp), deleted its "ray" folder (my username), and replaced it with a copy of the "ray" folder from my previous installation.  Then I typed "sudo gedit /etc/fstab," deleted the line referring to /home_tmp, and rebooted.  Then, in another "sudo nautilus" session, I deleted /home_tmp and rebooted once more.  All was good:  my settings from the previous setup were back.
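
For anyone trying the same maneuver, the temporary fstab change amounted to something like this (the commented-out UUID line is just a stand-in for whatever line the installer actually wrote for /home):

# /etc/fstab -- temporary arrangement
# original /home line, commented out:
# UUID=xxxxxxxx-xxxx  /home  ext3  relatime  0  2
# temporary line pointing the RAID home device at /home_tmp instead:
/dev/md2  /home_tmp  ext3  relatime  0  2

followed by "sudo mkdir /home_tmp" and a reboot, as described above.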

I proceeded through the remaining steps to configure my new Ubuntu installation, as described in that other post.  The RAIDSPACE partition was not available to me as a normal user, but I wanted it to be, so I typed "sudo nautilus," right-clicked on RAIDSPACE, and changed its permissions.  Then I copied my VMs from the external drive to a VMS folder in RAIDSPACE.  I was getting an error message on reboot, "Ubuntu is running in low-graphics mode," when it did not seem to be.  Also, I noticed that the GRUB menu was no longer remembering the operating system it had last booted; it was defaulting to Ubuntu in every case.  I was not the only one who had this problem with a RAID setup.  But otherwise, the previous post pretty much covered the adjustments required to get my system back to normal.
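
The permissions change could equally be made from a terminal.  A sketch, assuming the mount point is /media/RAIDSPACE and the username is ray:

# hand ownership of the RAIDSPACE mount point to my own user account
sudo chown -R ray:ray /media/RAIDSPACE
# alternatively, just make the top-level folder writable by everyone
# sudo chmod 777 /media/RAIDSPACE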

As mentioned above, I had copied over the Windows XP files from what used to be my system drive to the third hard drive now installed in this computer.  When I ran "sudo update-grub" to consolidate the changes I had made to the GRUB2 menu while making those adjustments, it said, "Found Microsoft Windows XP Professional on /dev/sdc1."  GParted said that sdc1 was the right place -- it was the NTFS partition to which I had copied those files.  I wondered if just copying WinXP files from one drive to another in Ubuntu was sufficient to create a working WinXP installation in the new location.  So now I rebooted and, in the GRUB2 menu, I chose Windows XP.  And, what do you know, it worked!  Just like that.  No GRUB errors or anything.  I had to adjust a few settings in WinXP, but for the most part things were in good shape.
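
As I understand it, GRUB2 finds other operating systems through the os-prober package, so the detection can be checked, and the menu rebuilt, by hand.  A brief sketch:

# list the operating systems that GRUB2's scripts can detect
sudo os-prober
# regenerate /boot/grub/grub.cfg, adding any newly found entries
sudo update-grub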

The Acid Test

So that seemed to pretty much wrap up the process of converting my dual-boot WinXP/Ubuntu system from a single hard drive to a RAID 0 array.  I was sure there would be other changes to come, but it was time for the acid test:  I wanted to see how VMware Workstation performed in the RAID 0 environment.  It had been dragging, functioning very slowly, for a long time on the computer that this one was going to replace.  It had run more quickly on this replacement computer in the single-hard-drive setup.  No doubt the Velociraptor helped.  But how did it do in the dual-drive array?

Let me say, first of all, that the general startup process in Ubuntu was darn snappy.  I noticed it right away.  Boom!  My startup programs all came to life pretty smartly.  Inside VMware Workstation, likewise, performance seemed faster than it had been in native WinXP on the Velociraptor.  I was sure there would be much additional learning ahead, but this had been a real step forward.

Thursday, September 23, 2010

Notes from "Philosophical Reflections on Disability"

Book Discussed

Ralston, D. C., & Ho, J. (Eds.) (2010). Philosophical reflections on disability. Dordrecht: Springer.

*  *  *  *  *

In another blog post, I was taking notes on Houtenville et al. (2009), Counting Working-Age People with Disabilities.  I got as far as page 28 and came back again to the question of how to define disability.  I had recently borrowed a copy of this book by Ralston and Ho, and decided to check it out.  So here are some notes that came to mind as I perused a few chapters of that volume.

*  *  *  *  *

Chapter 1:  Introduction

The introduction just summarizes and comments on the book's various articles.  I used it to help select which chapters to focus on, in the brief time I had available.

*  *  *  *  *

Chapter 2:  Silvers, A. (2010). An essay on modeling: The social model of disability. In D. C. Ralston & J. Ho (Eds.), Philosophical reflections on disability (pp. 19-36). Dordrecht: Springer.

For all its interesting insights and information that was new to me, I found this chapter somewhat illogical in spots.  For example, Silvers says, “[T]here is not nor can there be such a thing as a social model of disability” and goes on to refer to “the so-called social model” (p. 21) and to “supposed models of disability” (p. 22).  But then she goes ahead and talks about the social model in terms that indicate she does think it exists, with comments such as “[T]he medical and social models portray disability in very different ways” (p. 22).

Her point there seems to be that the social model is a model in the sense of being a “collection of claims” (p. 23) rather than somehow being a “simplified representation” or “replica” (p. 22) of disability.  But that seems like an odd point.  How would one construct a representation of disability, as though it were a tangible object instead of a concept?  This so-called “collection of claims” seems par for the course, where concepts are concerned.  That’s what models are, in this kind of context:  “Theoretical representations that simulate the behavior or activity of systems, processes, or phenomena.”  The social model of disability describes disability as a system or process in which society takes an impairment and makes it into a disability.

That was an isolated example until I got to page 31 or so.  I was learning a lot, and I was mostly engaged.  But then she said this:
Of course, we cannot infer from our sense of one’s condition’s being less preferable than some others that it also is inherently bad.  We often prefer someone else’s condition to our own – someone richer, smarter, handsomer, or more generous than ourselves – without condemning our own state as bad. . . . So the fact that not being disabled may be preferable to being disabled does not entail that the state of being disabled is bad.  The social model counsels the acceptance of disability as being a natural state of some people . . . . 
I wasn’t sure it mattered if disability was “bad.”  The point was, it’s less preferable.  Lots of things are “natural” – hepatitis, for example – and yet not desirable.  If we consider ourselves ugly or poor, that’s a comparative judgment that we would typically like to address by magically becoming beautiful and rich.  That’s natural too, even if such a belief might actually make us less happy, or if the magic didn’t ultimately yield the imagined results.  Whatever.  At the point of decision, we go toward what is more desirable because, on many levels, this is the approach that tends to keep us alive and healthy.  The word “bad” is irrelevant; however you phrase it, people generally don’t want to have disabilities, and there are good reasons for that.

Silvers (pp. 34-35) feels that the social model may now be nearly as entrenched as the medical model (in which disability is identified as a flaw in the person that should be corrected), and that each has its usefulness from particular values perspectives.  But she says that a problem arises when a focus on the social model causes funding to be directed toward modifications of social conditions rather than toward prevention and cure.


*  *  *  *  *

Chapter 8:  Merriam, G. (2010). Rehabilitating Aristotle: A virtue ethics approach to disability and human flourishing. In D. C. Ralston & J. Ho (Eds.), Philosophical reflections on disability (pp. 133-151). Dordrecht: Springer.

This article prompted me to write a separate post regarding overpopulation, eugenics, and the right to reproduce.  

*  *  *  *  *

Chapter 13:  Tollefsen, C. (2010). Disability and social justice. In D. C. Ralston & J. Ho (Eds.), Philosophical reflections on disability (pp. 211-227). Dordrecht: Springer.

Tollefsen distinguishes citizenship from well-being.  His conclusion (p. 223) is thus:
[T]he moderately disabled, the temporarily dependent, the “normal” human person, the profoundly retarded, the brain damaged, and even those in a persistent vegetative state, are all alike as regards the fundamental reason that justifies political authority:  all are inadequate in some respect or other for their own flourishing.  All lack self-sufficiency in regards to the conditions necessary for them to achieve the level of well-being they are capable of . . . . No special attempt need be made to see any of them as citizens, or potential citizens, or even like citizens, in order to see that they fall within the fundamental scope of the political authority’s concern, the basic commitment “to foster the dignity and well-being of all persons within [the state’s] borders” [source of quote unspecified].
In other words, contra Nussbaum (2006), Tollefsen sees a person in a persistent vegetative state as being nonetheless a human being.  That, however, does not warrant his postulation of a state’s commitment to foster the well-being of everyone within its borders.  A state might decline to do so, rightly or wrongly, for illegal immigrants, prisoners, and others; it might also do so unequally on a variety of grounds, including one’s individual or collective wealth or political power (e.g., as part of an influential constituency) or lack thereof.

Tollefsen does acknowledge that the state has to provide many things, including infrastructure and internal and external security, and therefore that there are limits to what the state can do, to be determined through prudent judgment.  “Such limitations,” he says (p. 224), “are not matters of injustice.”  As philosophers sometimes do, however, he repeatedly makes assertions that could be empirically supported, refuted, or qualified.  For example (p. 224):
[F]amily members such as parents and spouses, friends, and parishes all have a better grasp of the particular needs and capacities of individuals with disabilities, and all have a greater capacity for emotional involvement and sustained commitment than do any agents of the state. . . . The state should not be in the business of taking over the care of the disabled . . . .
These assertions seem quite unlikely in particular cases.  Caregivers do not tend, in any event, to be qualified professionals, and they are also not insurance companies.  For a variety of discriminatory, agenda-driven, resource-related, and competence-related reasons, parishes and other local organizations may lack the capacity and/or inclination to care effectively for people with disabilities.  It is not clear, from this essay, why the handling of disabilities would be considered a predominantly private matter, while the handling of criminal behavior is not.


*  *  *  *  *

Chapter 14:  Engelhardt, H. T., Jr. (2010). The unfair and the unfortunate: Some brief critical reflections on secular moral claim rights for the disabled. In D. C. Ralston & J. Ho (Eds.), Philosophical reflections on disability (pp. 229-237). Dordrecht: Springer.

When I read this chapter, I had just read an article about health care in the United States – about how expensive and illogical the health care system has been.  I had wondered about questions of affordability and logic while reading materials about disability as well.  Whatever was going to happen in the realm of disabilities in the future, it seemed that cost and rationality would be important considerations.

Engelhardt (p. 231) says this:
Disease and disability are surely, ceteris paribus, unfortunate.  The issue is whether they are unfair in a way that generates general secular moral claim rights against others who did not cause the disease or disability.
In response, he concludes that “Moral diversity, the fact of moral pluralism, undermines the self-evident character often attributed to claim rights for care, support, and accommodation” (p. 235).  In other words, not everyone prioritizes the same things.  His position seems to be that it is not entirely certain that society ought to put its full resources behind all forms of environmental manipulation, making everything more accommodating for people with disabilities, when there are other things that we could do with the money.

What we have, Engelhardt says, is political compromises, not moral certainty.  Disabilities get a certain amount of funding, and not more or less, as a result of advocacy on behalf of disability-friendly perspectives.  “In such circumstances, entitlements for the disabled will not enjoy a secular moral authority.  They will simply be outcomes that it will usually be prudent to accept” (p. 236).