Sunday, October 31, 2010

Synology NAS: Mapping the Network Drive

I had newly installed Ubuntu 10.10 and was trying to communicate with my Synology DS109 network-attached storage (NAS) unit.  I had previously resolved some problems with this unit, but some had remained unresolved; and now, in any case, I was starting over.

I had downloaded and installed DiskStation Manager (DSM) 3.0, the Synology Assistant, and the User's Guide.  DSM was a web-based program, apparently running on the DS109, that I accessed by simply typing the DS109's address (e.g., 192.168.2.1) into the Firefox address bar.  I was able to connect to the DS109 by typing the username and password that it needed (i.e., not my Ubuntu username and password).

As outlined in a previous post, I had also created a mount point ("sudo mkdir /media/SYNDATA") for the DS109's SYNDATA partition.  The fstab (i.e., "sudo gedit /etc/fstab") contained a line that had worked previously to let me contact that partition.  The line I was using was this:

//192.168.2.1/SYNDATA  /media/SYNDATA  cifs  user,uid=ray,gid=users,rw,suid,credentials=/etc/cifspwd,iocharset=utf8  0  0
where "ray" was my Ubuntu user ID, not my DS109 user ID.  The DS109 user ID was contained in the /etc/cifspwd file; and now, as I looked at this, I realized that reinstalling Ubuntu from scratch had surely wiped out that file.  So I recreated it ("sudo gedit /etc/cifspwd"), using my DS109 user ID and password, in this form:
username=[DS109 username]
password=[DS109 password]
and that's all that file contained.  Example:  if my DS109 username had been Joe, the first line in this two-line file would have been "username=Joe" and the second line would have been "password=JoesPassword" (whatever that user's actual password was).  Then I typed these commands:
sudo chmod 0600 /etc/cifspwd
sudo mount -a
These were the basic steps recommended in a relevant page in Synology's wiki.  The problem -- or at least *a* problem -- as I eventually figured out (or re-figured; Synology's tech support may have originally suggested it), was that the "credentials" part of the fstab line was not working.  If I replaced "credentials=/etc/cifspwd" with "username=Joe,password=JoesPassword" (and then saved fstab), then the "sudo mount -a" command worked:  SYNDATA appeared in Nautilus like any other partition.  So what would happen if I put exactly that -- username=Joe,password=JoesPassword -- on the same line in /etc/cifspwd, instead of putting them on two separate lines?  No dice.  The cifspwd file was a dud.  I left the username and password information in the fstab, and deleted the cifspwd file ("sudo rm cifspwd").

So the short answer I arrived at, here, was that the Synology wiki was wrong, at least for my purposes.  I needed to take all the other steps -- creating a mount point, etc. -- but instead of creating the cifspwd file, I just needed to enter a line in /etc/fstab of this form:
//192.168.2.1/SYNDATA  /media/SYNDATA  cifs  user,uid=[Ubuntu username],gid=users,rw,suid,username=[Synology username],password=[Synology password],iocharset=utf8  0  0
and possibly the "iocharset" part was optional.
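
Putting the pieces together, the working recipe came down to three steps (a sketch of the procedure above; substitute your own NAS address, share name, Ubuntu username, and Synology credentials):
sudo mkdir /media/SYNDATA
sudo gedit /etc/fstab     (add the one-line fstab entry shown above, then save)
sudo mount -a
After that, SYNDATA showed up in Nautilus like any other partition.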

Ubuntu 10.10: Problems with Firefox 3.6.12

I had newly installed Ubuntu 10.10, preserving my /home folder from Ubuntu 10.04.  I was having some problems with Firefox.  This post describes the steps I took to resolve those problems.

One problem was that, each time I started Firefox, I got a message saying, "Firefox is not currently set as your default browser.  Would you like to make it your default browser?"  I kept saying Yes, and the box saying "Always perform this check when starting Firefox" remained checked, yet the message came back at every startup; Firefox was evidently not remembering the answer I provided.  Next, after I said Yes to that, I got another error message:

The application "firefox-bin" attempted to change an aspect of your configuration that your system administrator or operating system vendor does not allow you to change.  Some of the settings you have selected may not take effect, or may not be restored next time you use the application.
This seemed to explain the recurrence of the first message, about the default browser.  Apparently something was set to read-only or to root privileges, when it should have been accessible to me as user.  Clicking on the Details button on that error message gave me lots of additional information:
Can't overwrite existing read-only value: Can't overwrite existing read-only value: Value for `/desktop/gnome/url-handlers/http/command' set in a read-only source at the front of your configuration path
Can't overwrite existing read-only value: Can't overwrite existing read-only value: Value for `/desktop/gnome/url-handlers/https/command' set in a read-only source at the front of your configuration path
No database available to save your configuration: Unable to store a value at key '/desktop/gnome/url-handlers/ftp/command', as the configuration server has no writable databases. There are some common causes of this problem: 1) your configuration path file /etc/gconf/2/path doesn't contain any databases or wasn't found 2) somehow we mistakenly created two gconfd processes 3) your operating system is misconfigured so NFS file locking doesn't work in your home directory or 4) your NFS client machine crashed and didn't properly notify the server on reboot that file locks should be dropped. If you have two gconfd processes (or had two at the time the second was launched), logging out, killing all copies of gconfd, and logging back in may help. If you have stale locks, remove ~/.gconf*/*lock. Perhaps the problem is that you attempted to use GConf from two machines at once, and ORBit still has its default configuration that prevents remote CORBA connections - put "ORBIIOPIPv4=1" in /etc/orbitrc. As always, check the user.* syslog for details on problems gconfd encountered. There can only be one gconfd per home directory, and it must own a lockfile in ~/.gconfd and also lockfiles in individual storage locations such as ~/.gconf
No database available to save your configuration: Unable to store a value at key '/desktop/gnome/url-handlers/chrome/command', as the configuration server has no writable databases. There are some common causes of this problem: 1) your configuration path file /etc/gconf/2/path doesn't contain any databases or wasn't found 2) somehow we mistakenly created two gconfd processes 3) your operating system is misconfigured so NFS file locking doesn't work in your home directory or 4) your NFS client machine crashed and didn't properly notify the server on reboot that file locks should be dropped. If you have two gconfd processes (or had two at the time the second was launched), logging out, killing all copies of gconfd, and logging back in may help. If you have stale locks, remove ~/.gconf*/*lock. Perhaps the problem is that you attempted to use GConf from two machines at once, and ORBit still has its default configuration that prevents remote CORBA connections - put "ORBIIOPIPv4=1" in /etc/orbitrc. As always, check the user.* syslog for details on problems gconfd encountered. There can only be one gconfd per home directory, and it must own a lockfile in ~/.gconfd and also lockfiles in individual storage locations such as ~/.gconf
My first search, focusing on the first of those error messages, turned up very little, and nothing conclusive.  A different search did slightly better.  I went back to the original Firefox message and tried a search on that.  Bingo!  Lots of hits.  One thread led me to close Firefox and then try these commands:
killall firefox-bin
sudo chown -R ray:ray /home/ray
The first command got "No process found," which told me that there was not a separate Firefox process somehow lurking in the shadows (even after a reboot) and causing problems.  After the second command, I restarted Firefox, but nothing had changed; the same errors came up.  A different post drew my attention to the question of where the "/desktop/gnome/url-handlers/https/command" was located.  The command they used to find such locations was "sudo find / | grep /desktop/gnome/url-handlers."  I tried that.  It led me to /home/ray/.gconf, but nothing there ended with "https" or "https/command."  I checked that location in Nautilus (making sure to set View > Show Hidden Files) and verified that I was the owner.  The %gconf.xml file in that location that had caused problems for one Firefox user did not even refer to Firefox in my case, so I was confused.
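
In retrospect, the same keys could probably have been inspected and cleared from Terminal with GConf's command-line tool, something like this (a sketch I did not try at the time):
gconftool-2 --get /desktop/gnome/url-handlers/http/command
gconftool-2 --unset /desktop/gnome/url-handlers/http/command
gconftool-2 --unset /desktop/gnome/url-handlers/https/command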

Looking again at the error message, I thought that perhaps I was not understanding it, so I tried a different search.  The half-dozen webpages I opened did not lead to a solution.  It occurred to me to try killing many birds with one stone by uninstalling and reinstalling Firefox.  I had already done that, but the option I had chosen when uninstalling, in Synaptic, was "Mark for Removal."  This time, I tried "Mark for Complete Removal."  I noticed that this would take firefox-gnome-support along with it, and that seemed on target; in fact, I was tempted to remove just that package by itself.  But I went ahead with the complete removal and reinstallation of Firefox in Synaptic.  Interestingly, reinstallation did not automatically include firefox-gnome-support.  Possibly that was the issue:  dependence upon firefox-gnome-support had perhaps been eliminated in Ubuntu 10.10, and when I brought forward my /home directory from Ubuntu 10.04 at the time of upgrading to 10.10, I also inadvertently brought along this problem.  I started Firefox, and this time it was able to download updates for its add-ons -- which means, obviously, that it remembered its add-ons, so "completely" uninstalling Firefox apparently did not mean uninstalling add-ons.  I still got the same error messages as before, so reinstallation seemed to be helpful but not the ultimate solution.  I killed Firefox, ran Update Manager (which had no updates of obvious relevance), and restarted, but the errors persisted.

Following ideas in another thread, I tried "sudo firefox."  This gave me only the first question, about making Firefox my default browser; the other error messages were no longer there.  I killed Firefox and ran "sudo firefox" again.  This time, it remembered the answer; there were no questions or errors at all.  I killed Firefox and started it again from the panel icon (i.e., as a regular user).  This gave me a new error:
Session Manager
The session/window data is corrupted:
syntax error
undefined
I assumed that meant that the Session Manager add-on in Firefox was having a problem that had either existed before I ran "sudo firefox" or had been created when I did so.  I clicked OK on that error, and wondered whether part of the problem was in one or more of my add-ons.  Next, I got the question about making Firefox my default browser, and the other errors as well.  Using Session Manager, I loaded a backup session that was a month old.  I then killed and restarted Firefox.  The errors recurred.  I decided to try wiping out Firefox and all add-ons.  Following another post, I was detoured into this option:
sudo rm -rf .mozilla
firefox &
This was reminiscent of a command that I recalled fixing some other problem once.  That is, there had been a problem traceable to the .mozilla folder.  Removing that folder may have been risky if I had been using Thunderbird or other Mozilla programs in Ubuntu, but I wasn't.  So now I tried those commands.  They did not change anything, so I went on with these commands:
sudo apt-get remove --purge firefox
sudo updatedb
locate firefox
I realized, too late, that this was possibly not the best approach, since the "firefox &" command (above) had given me unending messages that this command was deprecated and no longer functional.  The "locate firefox" command gave me hundreds of responses.  But then I noticed that many of them related to just a few folders, so I tried this:
sudo rm -r /home/ray/.mozilla/firefox
sudo rm -r /usr/lib/firefox
sudo rm -r /usr/lib/firefox-addons

There were more references to Firefox yet to remove, but at this point I noticed the post telling me that completely removing all references to Firefox would create problems for other programs.  So now I went into Synaptic and looked at its status bar for broken packages.  There weren't any.  So either that post was no longer applicable or they weren't broken *yet*.  In Synaptic, I reinstalled Firefox.  Then, still in Synaptic, I went to Edit > Fix Broken Packages.  Its status bar immediately said, "Successfully fixed dependency problems."  So maybe I got lucky.  I started Firefox.  No error messages.  This time, there were no add-ons, so I had to install and configure those from scratch.  But the problem appeared to be solved.
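
For reference, the Terminal equivalent of those last two Synaptic steps (reinstall Firefox, then fix broken packages) would presumably have been something like:
sudo apt-get install firefox
sudo apt-get -f install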

Saturday, October 30, 2010

Ubuntu 10.10: Error: Could Not Update ICEauthority File

When I was booting a new Ubuntu 10.10 installation, I got an error message, "Could not update ICEauthority file /home/ray/.ICEauthority." I clicked through that, checked Update Manager, and rebooted. The ICEauthority error message was there again.

Seeking answers to this problem, I ran a search and tried the suggestion to type Ctrl-Alt-F1 to get a text console. There, I hit Enter to get a login prompt. I logged in with my username and password, and this gave me a regular command prompt. I typed "sudo chown -R ray:ray .ICEauthority," where ray was my username. Then I typed "sudo /etc/init.d/gdm restart" to go back into the Gnome GUI. But the ICEauthority error message was still there. (From another post, it appears I could have just entered the chown command in Terminal, without Ctrl-Alt-F1.) People for whom this approach worked seemed to think that the problem was caused by opening a graphical (i.e., not purely text-based) program owned by root using sudo instead of gksudo. If that was relevant, it evidently meant I should have edited fstab by typing "gksudo gedit /etc/fstab." But was gedit a graphical program? Probably the better explanation was that my problem was caused by something else, and that this is why their solution didn't work for me.

Comments in another thread suggested that the problem might have been caused by an update, and also that the chown command (above) wouldn't work on an encrypted home partition. In a variation, I tried this:

sudo -i
chown ray:ray /home/ray/.ICEauthority
chmod 644 /home/ray/.ICEauthority
exit
But this didn't do it either; the error was back when I rebooted. Somebody else said the problem had to do with changing passwords, and that was possibly relevant for me, so I went into System > Administration > Users and Groups and changed my user password, and also checked the box that said, "Don't ask for password on login."  But Ubuntu hung when I clicked OK; the circle icon (like the Windows hourglass, meaning "I'm working on it") stayed there for a couple of hours.  When I came back to the machine, I killed the dialog box and tried changing the password again, but it hung again.  I ran a search on that subproblem and tried the advice to kill that GUI approach and just type "sudo passwd [username]" in Terminal (my username was ray).  That worked.  So, back to the ICEauthority problem.  I rebooted, but no, the password fix was not the solution; the error message was still there.

Another suggestion was to make sure that the entire home directory belonged to the user (me).  That sounded like it might be on the money.  My impression was that moving around and copying this old /home partition could easily have screwed up the permissions.  So now, how to make sure I owned the whole /home directory?  In Nautilus (i.e., Places > Computer), I right-clicked on Home Folder > Properties > Permissions.  It said root (i.e., not ray) was the owner.  In Terminal, I typed "sudo nautilus" and, using the Tree (i.e., not Places) view (top of left panel), went to File System > home > ray > right-click Properties > Permissions and changed the owner to my user (ray), with full access.  I clicked on the "Apply Permissions to Enclosed Files" button, closed out of there, and rebooted.  Eureka!  That was the solution.  Done!
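For anyone preferring Terminal over Nautilus, the equivalent fix would presumably be a recursive chown on the home directory (with your own username in place of ray):
sudo chown -R ray:ray /home/ray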

Ubuntu 10.10: Streamlined RAID 0 Installation

I had previously installed Ubuntu 10.04 on a two-drive RAID 0 array.  I did that to make a Windows XP guest virtual machine (VM) run faster in VMware Workstation 7.1.  I had then run into some problems with that installation, and had abandoned it.  Now it was time to try again, but this time with Ubuntu 10.10.  This post describes the process in more streamlined terms, drawing from the previous post in which I logged the details of that earlier attempt.

This time, as before, I had two hard drives for the RAID 0 array, plus a third drive on which I had already installed Windows XP.  The two empty hard drives for RAID were each 320GB.  That third drive also held my /home partition (i.e., the contents of the /home partition from a previous Ubuntu installation), which contained many of my settings and adjustments for various Ubuntu programs.  In other words, my Ubuntu installation would not be like a Windows XP installation, where it would be necessary to reinstall all of my applications (except the portable ones) after reinstalling the operating system.  The third drive also held my Linux swap space, which I probably could have put into the array instead, along with a partition I called LOCAL, which would hold backup copies of the VMware virtual machines.  I was going to put the active VMs into the RAID 0 setup to make them run faster, but of course RAID 0 was riskier in the sense that failure of either of the two hard drives would mean the loss of everything in the RAID 0 array.

I started by downloading and burning the Ubuntu 10.10 alternate (or "alternative") CD.  I booted that CD and chose the "Check disc for defects" option.  This took five or ten minutes; it then said, "Integrity test successful," and rebooted.  Then I went through the "Install Ubuntu" option and took the basic steps (selecting my country, my keyboard type, etc.).  The meat of the RAID 0 process began about three minutes into the amzertech video I had followed previously (speaking, here, of its Part 1, not Part 2), where it was time to partition the drives.  I went to Manual (i.e., not Guided), and this put me into the main "Partition disks" screen, the one beginning with "This is an overview."  I went down to the first of the two hard drives.  The partitioner referred to these drives as SCSI partitions, but it also recognized them as sdb and sdc.  So it looked like I had correctly cabled that third drive to actually be the first in the system (i.e., sda, a/k/a SCSI3 according to the partitioner), so as to make Windows happy.

The general concept of the RAID setup process was that, first, you designate some free space on each drive as a physical volume for RAID, and then you combine those physical volumes from the two (or more) drives into a single software RAID device.  The following paragraphs provide the details.

First, following the video, I went down to the first of the empty 320GB drives that I was going to use for my RAID array.  In my case, unlike the video, there was not yet any "pri/log" line showing "FREE SPACE" that I could select, under the drive identification line on the screen, so I just highlighted the drive itself and hit Enter.  This gave me the option of creating a new empty partition table on the drive, and I went with that for each of the two drives.  Then I highlighted the pri/log line under the first 320GB drive, showing free space.  There, I hit Enter and chose "Create a new partition."  For its size, I typed "50GB" and made it a primary partition at the end of the drive.  I guessed that this meant the outside of the physical disc, where I believed data transfers would be faster.  Instead of leaving "Use as" at the default ext4 setting, I highlighted it, hit Enter, and went down to select "physical volume for RAID" (Enter) > "Done setting up the partition."  I went through the same steps with the second 320GB drive, which was sdc on my system.  So now, back on the "Partition disks" screen, each of the two drives showed an entry that looked like this:

#1   primary   50.0GB   K   raid
So this would give me a total of 100GB for my Ubuntu program installation, and I would still have several hundred GB left over as free space.  Now, on the main "overview" screen, I went up to the line that said "Configure software RAID" > "Write the changes to the storage devices" > "Create MD device" > RAID0.  This put me at a list of "active devices."  I wanted sdb1 and sdc1 (i.e., I didn't want to use one of the partitions I had previously created on sda, my third hard drive).  These were the only partitions on drives sdb and sdc, so the choice was easy.  For some reason, they showed up here as being 49999MB rather than 50GB.  I highlighted each of those two, hit the spacebar to select them, and then tabbed to Continue > Finish.  This put me back in the "overview" screen, where I saw that I now had these new lines, near the top:
RAID0 device #0 - 100.0 GB Software RAID device
   #1       100.0 GB
              131.1 kB         unusable
I highlighted the line that began with #1 and hit Enter > Use as > ext3 (apparently still more reliable than ext4) > Mount point > "/ - the root file system" > "Done setting up the partition."  This put me back in the "overview" screen, where the line now looked like this:
   #1       100.0 GB     f   ext3      /
I decided to go ahead with the video's approach of putting the swap space on the RAID0 partition.  To do this, I went through the same steps as above, starting with the free space line on each of the two drives.  The only differences were:
(a) I allocated only 5GB on each drive for this partition.
(b) This time, I selected sdb2 and sdc2 (instead of sdb1 and sdc1) as my active devices for the array.
(c) Under "Use as," I chose "swap area" instead of ext3.
The result, back in the "overview" screen, was that I had these lines:
RAID0 device #0 - 100.0 GB Linux Software RAID Array
   #1       100.0 GB    F   ext3      /
RAID0 device #1 - 10.0 GB Linux Software RAID Array
   #1         10.0 GB     f  swap     swap
              131.1 kB unusable
At this point, I wanted to vary from the video by adding one more partition, where I would put my VMs and possibly other things.  I went through the same process as with the first RAID device (above), and I used all of the remaining space on the two drives except for about 1GB.  The active devices in this case (when I got to that point in the process) were, of course, sdb3 and sdc3.  Back in the "overview" screen, I saw that I now had RAID0 device #2 of 528GB.  I would never need all of that space for my VMs, but I had no other use for the space, and this RAID setup process was a one-shot deal:  designate the space in some useful form now, or leave it forever unallocated.

So now, the final step.  I needed to create a /boot partition on just one drive.  That was why I needed to save 1GB.  I could have made one of those last active devices (either sdb3 or sdc3) larger than the other, but there was no point:  as I understood it, RAID0 would use only the amount of space that they both had in common.  So I would wind up with 1GB unused on one of the two drives.  Anyway, to create the /boot partition, I selected that remaining free space on sdb (i.e., the first of my two RAID drives) and used it all up on another ext3 partition.  This time, I chose ext3 without first choosing "physical volume for RAID"; and after choosing ext3, I didn't go right to "Done setting up the partition."  Instead, I stopped first at the "Mount point" option, where I chose the "/boot" option.  Back in the "overview" screen, I saw that I now had my three RAID0 devices at the top of the list, and the /boot device as sdb4 down under the first of my two 320GB drives.

In the "overview" screen, I saw that there was too much information; some had scrolled off the bottom of the screen.  I arrowed down until I got to the very bottom of all that, where I chose the "Finish partitioning and write changes to disk" > Yes option.  This started me right into the Ubuntu installation process, where I just entered basic information (e.g., my name).  The installation was very straightforward, and it worked:  Ubuntu booted up.  I then went to System > Administration > System Monitor > File Systems tab.  There, I saw /dev/md0 as root directory and /dev/sdb4 as /boot.  (The video said that the swap would not be visible here, and it wasn't.)  So my next step, at this point, was to refine the basic installation to suit my preferences.  The description of that process appears in a separate post.

Ubuntu 10.10: Tweaked Installation

I had previously installed Ubuntu 10.04, and had arrived at a fast way of adjusting the basic installation to my needs. This post updates and simplifies the separate post in which I described that process.  Here, I was using Ubuntu 10.10, not 10.04.

This post begins with a basic Ubuntu setup already installed.  That basic installation process was very straightforward; but for those unfamiliar with Ubuntu, there were many webpages on how to install Ubuntu 10.10.  The question addressed in this post was, what additional steps did I need to take, in order to make Ubuntu look and act the way I wanted?  Of course, people will have various preferences.  This post does not go into that kind of individual detail.  It is more a matter of how to go about choosing and preserving one's desired setup.

In this case, I had installed Ubuntu 10.10 in a two-drive RAID0 setup.  I have written a separate post on that as well.  It looked and acted the same as a normal single-drive Ubuntu installation.  But the following discussion contains a few references to the setup described in that other post, for those who have come to this description from there.

The Home Partition

If I had been installing Ubuntu without using RAID, I probably would have decided, in the initial installation, to install everything in a single root ("/") partition.  But the RAID process had required the creation of a separate /boot partition, so I now had those two partitions instead of one.  Either way, though, my installation did not yet include a separate /home partition.  I did want a separate /home partition to be part of the mix, because /home was where all kinds of program settings were preserved.

I had retained a copy of my previous /home partition.  It was on a third hard drive.  That is, it was not part of the RAID array, and I did not plan to copy it into the RAID array.  RAID 0 is risky, in the sense that failure of any drive in the array means the end of the entire array and everything on it.  So I was just going to leave the /home partition on that separate drive, and back it up from there.  The question was, how should I get the new installation to recognize that separate /home partition?  I had struggled with this step previously, but now I wanted to write it up in simpler form.

On that third drive, I had a folder called Saved Settings.  In that folder, I had kept a copy of my old fstab (that is, the file called "fstab," from the /etc folder, viewable by typing "sudo gedit /etc/fstab").  I added lines from that copy into my current fstab, making sure that its lines referred to UUIDs (available via "sudo blkid") rather than to drive letters (e.g., sda, sdb), so that the commands in the fstab would still function if I rearranged partitions in my computer.  I also made sure it had a line referring to the correct UUID for the /home partition.  I saved and closed fstab.

To give /home a place to be mounted, I typed "sudo mkdir /media/home."  To prevent the newly installed (and nearly empty) /home partition from interfering with the selection and use of my preferred, preexisting /home partition, I typed "sudo mv /home /old_home."  I had meanwhile allowed Update Manager to install updates, and that process was done, so at this point I rebooted.  I got an error message, "Could not update ICEauthority file /home/ray/.ICEauthority."  I clicked through that, and after a moment my old, preferred desktop layout was there and seemed to be functioning normally.  (I worked through the ICEauthority problem as described in a separate post.  Basically, the solution was to make sure the user (i.e., ray, not root) had ownership of the home folder.)  The last step was to delete the temporary old_home folder I had created by typing "sudo rm -r /old_home."
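
For illustration, the /home entry in fstab followed this general pattern (the UUID here is a made-up placeholder, and the ext3 filesystem type is an assumption; the real UUID comes from "sudo blkid"):
UUID=0a1b2c3d-4e5f-6789-abcd-ef0123456789  /home  ext3  defaults  0  2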

Repositories

Having the home folder in place meant that about 90% of the work of installing and configuring programs was already done and saved.  This was one huge advantage over Windows XP installations, where the only programs that did not have to be reinstalled in the event of a new operating system installation were portable applications.  This section describes the relatively few steps that I did have to take to install applications and configure my Ubuntu system, as compared to the writeup in a separate post on Windows XP reinstallation.

I could not rely on my previously saved sources.list file to set up my repositories, since I was now dealing with a new version of Ubuntu.  Instead, I went into System > Administration and discovered that, unlike Ubuntu 10.04 (as discussed in a separate post), Ubuntu 10.10 no longer had a Software Sources option.  A search revealed that this change was made to make Ubuntu more user-friendly.  I could either edit the menu to add back the Software Sources option or use System > Administration > Synaptic Package Manager as an alternative.  Choosing the latter, I went into Synaptic's Settings > Repositories > Other Software and selected the non-source options.  In the Authentication tab, I saw a list of Trusted Software Providers, and the names on it were Ubuntu (archive, CD image, and extras), GetDeb, and Google.  I had no problem with any of these except maybe Ubuntu extras.  I wasn't sure if these had come from my previous installation or were pre-supplied with Ubuntu 10.10.  I could have hit "Restore Defaults" to find out, but then I would have had to figure out how to restore them.  I closed out of that and, back in Synaptic, clicked Reload.

In Terminal, I typed "sudo gedit /etc/apt/sources.list."  I saw that it contained a handful of repositories.  I wondered if there were others I should include, so I went to the Ubuntu Sources List Generator and got a list that was more concise and that also included a few third-party repos of interest (i.e., GetDeb, Google, Medibuntu, Wine, and X Updates).  I ran the commands in the "Getting the GPG Keys" list from the bottom of the page, one at a time.  In previous installations, I had saved such commands in a text file and had executed it, but that had made it easier to overlook error messages.  In this case, the Medibuntu command gave me "Unable to locate package medibuntu-keyring."  A search indicated that virtually nobody had gotten precisely that message.  Not a good sign.  A community documentation page gave me a different command to add Medibuntu:

sudo wget --output-document=/etc/apt/sources.list.d/medibuntu.list http://www.medibuntu.org/sources.list.d/$(lsb_release -cs).list && sudo apt-get --quiet update && sudo apt-get --yes --quiet --allow-unauthenticated install medibuntu-keyring && sudo apt-get --quiet update
... all on one line!  It also said that Medibuntu's repository is deactivated whenever you upgrade to a newer Ubuntu release, so this reinstallation would have to happen each time.  I copied and pasted that command and ran it.  It gave me an error message:
Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?
to which the answer was yes:  of course, Synaptic was still running.  I shut down Synaptic and tried again.  This time, it ran.  I took another look at the Authentication tab in Synaptic (see above), and now Medibuntu was on the list of trusted software providers.  So the answer to my earlier question was that this list reflected my own customizations, not just the Ubuntu 10.10 defaults.

I went back to the process of running commands generated by the Ubuntu Sources List Generator.  They ran without further difficulty.  I typed "sudo gedit /etc/apt/sources.list" again, and replaced its contents with the lines generated by that Generator.  I took another look at Synaptic's Authentication tab, and this produced an error indicating that I had duplicate entries for Medibuntu and, I think, something else.  But when I tried to figure out what that was about, the error went away, and I didn't seem to be able to get it back.  Anyway, the Authentication tab did show that I now had the five third-party repos that had interested me (above), so all that remained was to go into System > Administration > Update Manager > Check and download the additional updates that it detected.
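
For anyone unfamiliar with that Generator, the key-import commands it produces are generally of this form (the key ID shown here is just a placeholder):
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 0123456789ABCDEF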

Installing Programs & Other Adjustments

In Terminal, I navigated to my Saved Settings folder (basically, cd "/folder name/" -- using quotes because "folder name" has a space in it) and verified that I had a copy of the installed-software file that I had created, in my previous installation, by typing "dpkg --get-selections > installed-software."  Now that I wanted to restore the programs listed in installed-software, I entered these commands:
sudo dpkg --set-selections < installed-software
sudo apt-get install dselect
sudo dselect
That opened up dselect.  The dpkg command had provided the list of what I wanted to install, so all I had to do now, in dselect, was to arrow down and hit Enter at the Install option.  This gave me an option of installing a large quantity of stuff, and I said OK, do it.  That took an hour.  At the end, I declined to let it erase previously downloaded .deb files, when it asked.  The last time I used this installed-software approach, I got a bunch of errors after reboot, and had to work back through the process manually.  There were some errors this time, too (see below), but they did not appear to be related to the dselect process.
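
As an alternative to running dselect interactively, I believe apt can act on the same selections directly; a sketch:
sudo dpkg --set-selections < installed-software
sudo apt-get dselect-upgrade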

As before, I navigated to the folders containing other program downloads (with .deb, .gz, .bin, and .bundle extensions).  I typed "sudo sh [filename]" to install my .bin and .bundle downloads (e.g., GoogleEarthLinux.bin).  Double-clicking on the filename in Nautilus no longer installed my .deb downloads; instead, I had to right-click and choose "Open with GDebi Package Installer."  In the case of my Synology software, I typed "sudo sh install.sh."  I wasn't sure where to install it, so I told it to install in /home, just in case that would spare me from having to install it again.  But I should have said /home/ray.  When I was done with all installations, I went into Update Manager, and ran and reran it until I was all caught up.
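
The command-line equivalents, for anyone who prefers them to right-clicking each file, would be roughly as follows (the filenames are just examples):
sudo dpkg -i somepackage.deb && sudo apt-get -f install
sudo sh GoogleEarthLinux.bin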

GRUB2 menu edits were the same as before:  to get rid of the Memtest+ options, I typed "sudo chmod -x /etc/grub.d/20_memtest86+." To let Ubuntu remember which operating system it had used last, I typed "sudo gedit /etc/default/grub," changed the first line to be "GRUB_DEFAULT=saved," and added a second line that said "GRUB_SAVEDEFAULT=true."  To limit the number of Ubuntu kernels shown, I typed "sudo gedit /etc/grub.d/10_linux," added "GRUB_DISABLE_LINUX_RECOVERY=true" at the top, and changed two lines at the bottom to be three that read as follows:
list=`echo $list | tr ' ' '\n' | grep -vx $linux | tr '\n' ' '`
list=`version_find_latest $list`
done
I saved and closed that and typed "sudo update-grub."
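
For reference, after those edits the first two lines of /etc/default/grub read:
GRUB_DEFAULT=saved
GRUB_SAVEDEFAULT=true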

I also needed to make some adjustments for VMware.  First, I typed "sudo vmware" and made some root adjustments there.  As before, I typed "sudo gedit [path][filename].vmx," for the .vmx file pertaining to this VM; and at the end of that file I added a line that said this:
bios.bootDelay = "10000"
and that bought me ten seconds instead of one or two, when that vmware logo came up.  In a variation from Ubuntu 10.04, the restricted drivers for my monitor were now at System > Administration > Additional Drivers.

Bugs and Other Problems

There were some problems.  First, BOINC would not suspend itself when the system was in use, so I just suspended it, period.  Also, as discussed in a separate post, there were some Firefox errors.  The solution there was to completely uninstall and reinstall Firefox, though possibly it would have been sufficient just to uninstall firefox-gnome-support and delete my profile.

Another problem was that Ubuntu was not clearly recognizing all local and network partitions.  The problem of mapping the drive in my Synology NAS required another separate post.  In that case, it came down to a problem with the line used to mount the drive in fstab.  It had seemed like there were other drive recognition problems, but evidently they sorted themselves out, or perhaps I was just mistaken.  At this point, the drives seemed to be recognized in good form.

There was also the problem that GParted wouldn't run.  It wasn't just in my installation; it wasn't running when I booted from the live CD either.  I guessed that this was some kind of brand-new bug in Ubuntu 10.10 that would be fixed shortly.  When I started GParted from System > Administration, it would start up, but then it would disappear after just a few seconds.  When I typed "sudo gparted" or "gksu gparted," it did the same thing, but it gave me an error message:
glibmm-ERROR **:
unhandled exception (type std::exception) in signal handler:
what: basic_string::_S_create
aborting...
A search indicated that this was indeed a bug in Ubuntu.  It looked like a new release would be fixing the problem imminently.  Another problem:  Google Earth would not install.  I got this error message:
parser error : Document is empty
parser error: Start tag expected, '<' not found
Couldn't load 'setup.data/setup.xml'
The command I used was "sudo sh GoogleEarthLinux.bin."  The first response to it was "Verifying archive integrity... All good."  Just in case, I downloaded a replacement of GoogleEarthLinux.bin, but got the same result.  It looked like others had also had this problem.  It seemed to be another instance of Ubuntu 10.10 not yet having all the kinks worked out.  I started with a lengthy thread on the issue.  One post in that thread recommended a command-line alternative, which in full form went like this:
sudo apt-get install googleearth-package
sudo make-googleearth-package --force
sudo dpkg -i googleearth_5.2.1.1588+0.5.7-1_i386.deb
I tried that.  The make-googleearth-package command generated a lot of errors that included the sentence, "Can't extract name and version from library name."  I got that long package name shown in the last line (googleearth_5.2.1 etc.) from one of the last lines produced by the make-googleearth-package command:  it said this was the name of the package it was building.  (It also seemed to say that simply "googleearth" was the name of the package, but that didn't work.)  The third command (sudo dpkg etc.) seemed to run successfully.  There was also now a Google Earth icon in Applications > Internet > Google Earth.  I clicked on that, and it worked.

Another problem involved VMCI Sockets.  I have addressed that one in a separate post.  I ran out of time to continue this project at the time.  When I returned to it two months later, I had decided to stop trying to maintain a primarily Ubuntu machine, but instead to return to Windows 7 for the foreseeable future.

Thursday, October 28, 2010

Synology DS109 NAS: Unable to Mount - Network Is Unreachable

I was running Ubuntu 10.04 (Lucid Lynx) on a computer to which I had connected a Synology DS109 network-attached storage (NAS) unit.  I had resolved a "no more connections can be made" error on a separate computer, also attached to the DS109, that was running Windows XP.  Now I had a new problem on the Linux machine.  This time, the WinXP machine was connecting to the Internet and to the Synology without a problem, but the Ubuntu computer (referred to here simply as the Ubuntuter) was not connecting to either.

The symptoms included the following:  in Firefox, I was getting "Firefox can't find the server" errors for most webpages; in Nautilus, I would get "Unable to mount" errors when I clicked on the name of a partition on the Synology; and in Synology's DiskStation Manager (DSM), its web-based access to the DS109, an attempt to log in to http://[IP address] (using the IP address from DSM > Control Panel > System Information) gave me a "Processing. Please wait ..." message.

I tried rebooting the Ubuntuter.  That achieved nothing.  Typing "sudo mount -a" gave me this:

mount error(101): Network is unreachable
Refer to the mount.cifs(8) manual page (e.g., man mount.cifs)
I typed "man mount.cifs."  From that, I gathered that I should type something like "sudo mount -ta cifs," though that precise command didn't work.  The "mount -a" command was supposed to mount everything in /etc/fstab, but apparently it could not mount the fstab items that had network locations.  I tried connecting the Ubuntuter directly to the modem, bypassing the router and the rest of the network (i.e., the Synology and the other computer).  This did nothing.  Apparently it was not just a Synology problem.  I rebooted with an Ubuntu live CD and tried Firefox that way.  It was not able to reach webpages either way (i.e., while connected to the router or directly to the modem).  It seemed I had a hardware problem.  I was using a new Gigabyte GA-MA785GM-US2H motherboard.  It looked like a few of its reviewers at Newegg cited intermittent problems in connecting.  The solution, it seemed, was to return the board or try adding a network interface card (NIC).  I tried the latter.

But while I was doing that, I noticed that the little plastic tab on the ethernet cable was gone.  It was possible for the plug to just slide out of the socket.  So I replaced the cable.  That worked on the next bootup.  So at this point the solution seemed to be either (a) if one reboot doesn't work, try another, (b) the NIC connector is competitive and will get its act together if you install another card that might take its place, or (c) a better cable.  I gave it some time, to see which it would be.  After several days, I could say with confidence that the simple replacement of the cable made all the difference.
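
In retrospect, a couple of quick checks might have pointed at the bad connection sooner (a sketch; "eth0" is an assumption about the interface name, and 192.168.2.1 was the NAS address used elsewhere in these posts):
ifconfig eth0
ping -c 3 192.168.2.1
A missing "inet addr" line, or a ping that fails even to the router, suggests a link-level problem such as a loose or damaged cable rather than anything wrong on the NAS.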

Acronis True Image Plus Pack: Converting a Virtual Machine to a Physical Machine

I was using Windows XP as the guest operating system in a virtual machine (VM) on VMware Workstation 7.1, running on Ubuntu 10.04 (Lucid Lynx).  I had developed a customized WinXP installation in that VM.  Now I wanted to install that same tweaked version in physical form, as a dual-boot option on that computer.  I did not want to go through all of the time-consuming steps that had been required to create that tweaked installation.  I hoped, instead, that it would be possible somehow to convert the VM to a physical installation.  This post describes what I tried and learned in that effort.

The fact that both the VM and the physical dual-boot installation would be on the same computer did not necessarily make things easier.  VMware VMs used virtual hardware that did not match my physical hardware.  In other words, simply making an image of the VM and restoring it to the physical machine would run into the same problems as if I made an image on one physical computer and restored it on another.  That's not to say it couldn't be done.  It would just require more than a simple image-and-restore procedure.

There seemed to be a couple of different ways to go.  Through a search, I found that VMware itself offered a virtual-to-physical (V2P) conversion procedure.  This would permit conversion of the .vmdk file directly to a physical installation.  That procedure was quite complex, however, and that meant there could be quite a few ways in which it might not go exactly according to plan.  There was also the problem that, according to the detailed description, they had not actually tested it on Windows XP.  It seemed that it might be worth investing the time in this procedure if I were going to do this frequently.  A quick glance suggested that many of the steps would only be hard or time-consuming the first time around.  For my purposes, though, I was not sure that this would be much faster than just reinstalling Windows from scratch, and it would probably be less reliable.

What looked more promising was to try the Universal Restore feature of Acronis True Image 2011 (ATI).  Universal Restore was available through the Plus Pack that had to be purchased in addition to ATI itself.  The combination of ATI and Plus Pack, together, would cost around $80 unless you got it on sale.  The general concept seemed to be that you would install ATI, install the Plus Pack, create a bootable ATI CD (or USB drive) that would automatically include the Universal Restore capability (provided you did install the Plus Pack first), use that to make an image of the VM, and then restore that image to the physical machine.  So this is what I decided to try.

When I started poking around the Acronis website in search of guidance on the details of the Universal Restore process, I came across a link to a search of the Acronis KnowledgeBase.  This turned up more than a thousand entries.  A quick scan of some of the first search results suggested that some of those entries existed because people had run into problems in the Universal Restore process.  It seemed I would need to brace myself for a somewhat finicky and potentially time-consuming effort.  I decided to go ahead with it, though.  Unlike the VMware V2P process, I knew that I had used Acronis many times in recent years, and I figured I would probably be using this procedure again, or something related to it.  So it would hopefully be a productive time investment.

The Universal Restore process was emphatically a restore process.  The instructions I planned to follow began with the assumption that I already had an Acronis image of the VM that I wanted to convert to a physical installation.  I thought, at first, that making an image of a virtual machine would be a matter of setting up VMware Workstation so that it would boot from the Acronis CD or USB drive (or, conceivably, from an ISO).  I had already worked through that setup process in another post.  So now I just had to insert my Acronis CD, boot the VM, and make the backup image on a Windows-compatible (e.g., NTFS) partition.  The VM booted, saw the CD, and Acronis started.  In my tweaked Windows installation, I had put the paging file on a separate virtual partition, so I didn't include that in the backup.

For the destination of my backup, unfortunately, I had a problem.  Within this virtual world, Acronis didn't see my "network" drives -- that is, the physical drives located within this same computer that VMware treated as though they were on a network.  I bailed out of the backup process and went into Acronis's Tools & Utilities > Add New Disk.  But that didn't provide a solution either.  My searches didn't lead to an answer.  A seemingly off-target post gave me the idea that maybe the solution was to install Acronis inside the VM and then try to do the backup there.  So I did that, and it worked.  I saved the system backup to a separate NTFS formatted drive, so that it would be visible to my Acronis CD.

I rebooted the system with the Acronis CD in the drive, and went into Acronis.  I selected Recover > My Disks > Browse and went to the image I wanted to restore.  I chose the Universal Restore option and, hoping that Acronis had already loaded itself into memory, I took out the Acronis CD and inserted my motherboard's driver installation CD.  Then I told Acronis to Add Search Path for the CD drive.  I told it to restore both drive C and the MBR and Track 0.  I designated the new partition to which I wanted the image restored.  I named that same disk as the target for the MBR recovery.  I went ahead and the recovery process got underway.  After a while, I got an error message,

Device driver 'PCI\VEN_1002&DEV_4390&SUBSYS_B0021458&REV_00' for 'Microsoft Windows XP Professional' cannot be found.
So, oops, apparently Acronis was looking for the drivers that it did include on its own Universal Restore CD, not for the motherboard drivers.  It seemed that I should have left the Acronis CD in the drive for the time being.  The dialog did not give me a "retry" option, so it looked like this restoration might be toast.  I reinserted the Acronis CD, clicked Cancel, and started over.  But after I re-entered the target location and other options, it froze.  I punched the computer's reset button and re-restarted.  So then it went ahead and did the recovery.  But then it wound up at the same error message.  A search for that device driver name produced only four webpages, none in English, but a search for the vendor suggested that Acronis wanted the ATI driver, which I guessed meant the video driver.  So apparently it had not necessarily been a mistake to leave the motherboard's driver CD in the drive, first time around.

My hunch as to the nature of this problem was that simply pointing the installer to the CD drive was not specific enough; it was not going to search all of the subdirectories on the CD to find what it needed.  I put the motherboard CD back in the drive.  I decided, this time, to click the "Ignore" button on the dialog, so it would move on past this error and show me what came next.  Boom!  It immediately said, "Recover operation succeeded."  I wasn't sure it had needed or even looked at the motherboard CD at all.  I took out the CD, closed Acronis, and the system rebooted.  I turned off the power before it got itself fully back up again and disconnected the hard drive that contained my original dual-boot installation of Windows XP, so that now it would have to boot from the newly restored Windows XP partition or nothing.  And the answer was:  nothing.  I got "Error loading operating system."

I did a search and found that, for Patrink Zink, the solution was just to do a cold reboot.  Another thread emphasized connecting the new drive to the same SATA port as the previous program drive.  Another thread made me wonder whether it would have helped to create the target partition in Acronis rather than using GParted.  Yet another thread said something about drivers.  I figured that was the answer.  Apparently whatever I had done with the Acronis CD was a flop.  Just out of curiosity, I booted from the Windows XP installation CD and went into the Recovery Console.  I got there by pressing F8 a bunch of times before the "Error loading operating system" message could come up.  Normally, that would have put me into WinXP's Safe Mode, but apparently Safe Mode was not interested in helping me out.  In Recovery Console, I poked around enough to verify that something resembling a Windows operating system had indeed been installed on the drive.  I typed "chkdsk /r" and let that run.  Then I typed "fixboot" and then "fixmbr."  Then I tried rebooting from the hard drive.  No joy; still "error loading operating system."

I started over again with the Acronis CD.  Following the Acronis guide page, I looked for hard drive controller drivers or chipset drivers.  There was a GSATA folder in the BootDrv folder of the motherboard driver CD, so I designated that as the place for Acronis to look for what it wanted.  But Acronis said "No items to display" when I went to that folder, making me think that it was not finding the right drivers there.  I went into the Chipset folder on the CD, but same thing there.  After checking about 20 folders and finding that none of them had any items that the CD wished to display, I went back to the BootDrv\GSATA folder and designated that, and then proceeded with the recovery process.  It gave me the same error (above).  When it was done, I rebooted.  This time, instead of giving me an "Error loading operating system" message, the cursor just sat there after the "Boot from CD/DVD" line.

I gave it five or ten minutes and then shut the machine down.  I unplugged the new drive and reconnected the old one.  But then -- what's this?  It froze after "Boot from CD/DVD" even though the new drive wasn't connected!  I shut the machine down again, and this time I also disconnected the other hard drive, where I had stored the Acronis backup.  So now only the old Windows program drive was connected.  That worked:  I now got the GRUB2 menu, and the choice of going into Ubuntu or WinXP.  I tried starting the machine again, this time with only the new Windows program drive connected, using the same SATA cable as I had just used with the old drive.  But it was no use:  I got "Error loading operating system" again.

It occurred to me that this was a basic bootup problem, and for that purpose I could try the approach mentioned on another thread.  I had already installed WinXP on this computer, but only in a basic form; I was going to all this trouble because I didn't want to spend the time and effort to install all my Windows programs and configure them.  So far, that decision wasn't paying off.  But I thought perhaps I could copy the needed files from that existing basic Windows installation to my newly created Windows installation from the VM.  So I put the system back the way it was originally, without the new drive, and booted into Ubuntu.  I connected the new drive as an external USB drive and ran a file-comparison utility to see what was different between the files in the basic Windows XP installation and this new one.  (The utility I used was Beyond Compare.)  I really couldn't identify anything that I should be copying over from one to the other.

Collegedropout said that WinXP could have boot problems when the BIOS was set to recognize hard drives as "Auto."  The solution, s/he said, was to change it to "Large."  I tried that, and rebooted with only the new WinXP drive connected.  In my system's BIOS, this setting was located under Standard CMOS Features.  I had to hit Enter at each hard drive and then change Access Mode to Large.  This did not affect the problem, so I changed it back.

I was tempted to set up the Ubuntu dual-boot on the new drive and see if the Ubuntu bootloader would make any difference, but I decided there was probably no way around it:  I was going to have to figure out what they were talking about, when they referred to "chipset drivers."  I went into Device Manager to get the properties for the SATA hard drive controller (Start > Run > devmgmt.msc), but I didn't see it.  I did have the option of putting in a service request to Acronis, but at this point I decided I had invested enough time in this experiment.  I decided to just install WinXP manually on the new drive, and to see if I could return Acronis True Image Home 2011 and/or its Plus Pack for a refund.

Tuesday, October 19, 2010

Synology DS109 NAS: No More Connections Can Be Made

I was using a Synology DS109 network attached storage (NAS) unit.  Everything was going along fine, and then suddenly I could not connect.  In Windows XP, I was getting this error message:

An error occurred while reconnecting D: to \\Diskstation\SYNDATA
Microsoft Windows Network: No more connections can be made to this remote computer at this time because there are already as many connections as the computer can accept.
This connection has not been restored.
I did a search but didn't find anything useful.  I was pretty sure it wasn't just a Windows XP problem, since I was also suddenly unable to connect using another computer running Ubuntu 10.04 (Lucid Lynx).  (It might also have been possible to test this on the Windows machine by just rebooting with an Ubuntu live CD; not sure.)  The best I could get in Ubuntu's Nautilus > Places was "Unable to mount SYNDATA," and it didn't show up at all in Nautilus > Tree.  Both machines were able to access the Internet; they just couldn't access the Synology unit.

Weird thing, though:  using Synology's web-based DiskStation Manager (DSM) 3.0 > File Browser on either machine, I was able to view the contents of SYNDATA on the Synology unit.  I was not able to view the contents of an eSATA drive connected to the Synology unit, however.  This made me wonder whether the eSATA drive was the problem.  I went into DSM > Control Panel > System > External Devices, selected the eSATA drive, and clicked Eject.  When the drive's icon disappeared, I disconnected it and turned it off.  This did not seem to make a difference, though.

I wondered if maybe I needed to do some kind of power cycle, like when a router or modem would start malfunctioning and you would have to power it down, disconnect it, wait 30 seconds, power it back up, etc. in order to get it working again.  I went to DSM > Main Menu > Shutdown and chose the shutdown option.  Its blue light was blinking, but it was still on.  I disconnected its ethernet cable.  The unit stayed on.  After five minutes, I pressed the power button next to the blue light.  This had no effect.  I pulled the plug and the lights went out.  I shut down both computers and waited a couple of minutes.  Then I started the computers and the Synology, but left the eSATA drive off.  The situation had not changed.

I noticed that the Status light on the front of the Synology unit was off.  The LAN light was on, solid green, and the Disk light was flashing.  It may have been like that before, though I didn't think so -- it seemed like something I would have noticed.  I went into the web-based DSM.  When I tried to log in, it said "Processing.  Please wait."  The Disk light on the DS109 finally stopped flashing, the unit beeped, and the Status light came on as solid green.  Apparently it had been doing a disk check.  I was now able to access the SYNDATA partition on the Synology unit via the Windows XP computer.  The Ubuntu computer was still not able to mount that partition.  In Ubuntu's Terminal, I typed "sudo mount -a."  That did it; it was now able to access the unit as well.  I turned on the eSATA drive.  The Windows XP computer was able to access it.  The problem seemed to be fixed.  The solution was apparently to let the Synology unit sit there until its Disk light was done flashing or, failing that, to power it down, power it back up, and let it clear its head.

Saturday, October 9, 2010

Ubuntu 10.04: Connecting to Synology DS109 NAS

I had tried once before to connect a Synology DS109 network attached storage (NAS) device to a computer running Ubuntu 10.04 (Lucid Lynx).  It was a frustrating, confusing experience.  But when I rebooted that computer into Windows XP, it connected right away, with a little help from Synology's extremely responsive tech support.  Now I was ready to try again in Ubuntu.

The first part was easy.  I navigated to the folder where I had placed the downloaded, unzipped copy of Synology Assistant for Linux.  That is, in Ubuntu's Applications > Accessories > Terminal, I typed "cd /media/LOCAL/[foldername]."  (If there are spaces in the foldername, you would have to put quotation marks around everything after "cd.")  This folder now contained a couple of "How to Install" (or Uninstall) items and a file named "install.sh," along with the .tar.gz file.  I typed "sudo sh install.sh" and that installed Synology Assistant.  I designated /usr/local as the install path.  It told me that I could run the Assistant program from /usr/local/SynologyAssistant/SynologyAssistant, or through the symbolic link at /usr/local/bin/SynologyAssistant.  I right-clicked on the word "Applications" on the menu and chose Edit Menus > System Tools > New Item.  I filled in the name as Synology Assistant and I browsed to the symbolic link and selected it.  Now I had a working menu link under System Tools.  I ran that and got the Management tab in Synology Assistant.  I double-clicked on DiskStation and that opened up a tab in my Internet browser.  In theory, I could access the Synology DS109 from here.  I closed the Assistant and logged in on the browser tab.  I went into the File Browser and, what do you know, it was all working fine.  Having already run the firmware updater on the DS109, as described in the previous post, I didn't need to do that again, as I confirmed in Synology's Control Panel > DSM Update.  In short, it seemed that the easy approach, for this part, would have been for me to start with the Windows setup, where I was more familiar with everything, and set up the DS109 that way, and then come to this point in Ubuntu.
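
To recap the command-line part of that in one place (the folder name is whatever the download actually unzipped to, and the last line is just the symbolic link mentioned above):
cd "/media/LOCAL/[foldername]"
sudo sh install.sh
/usr/local/bin/SynologyAssistant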

Now there was the matter of being able to work with files and folders on the DS109 from within Nautilus or Terminal.  This was where I had gotten stuck last time.  In Nautilus, near the top left corner, I clicked on the Tree option that I normally used and changed it to Places.  It showed several items that seemed to be a legacy of my previous attempts to set up networking. 

At this point, I contacted Synology tech support again.  They had used TeamViewer last time to troubleshoot my problem, so I went to PortableLinuxApps.org and downloaded TeamViewer 5.  It downloaded to my home folder.  I wasn't able to figure out how to run it, so I posted a note on it.  I tried to show my network devices by typing "sudo lshw -C network," but that just showed me my ethernet controller.  Meanwhile, though, Synology tech support pointed me toward an article on their wiki, on how to map a network drive in Linux.  In essence, they had me type these lines:

sudo mkdir /mount/SYNDATA
sudo gedit /etc/cifspwd
(I tried using that second line because what they actually recommended, "echo username=[username] > /etc/cifspwd," gave me a "Permission denied" error, even when I preceded it with "sudo.")  So on the first line of that blank new cifspwd file that I was creating in gedit, I typed the Synology username I wanted to use, and on the second line I typed that username's password.  I hit Enter after the password but didn't type anything on the third line.  I saved and closed the cifspwd file.  This, however, was not the right approach.  After some trial and error, I guessed that maybe what I was supposed to put into the cifspwd file was not this:
[username]
[password]
but rather this:
username=[username]
password=[password]
I had assumed that the program would know that what I was typing on the first line was the username, but now it seemed that, no, I had to say so.  So if my username was "ray," then the first line would read "username=ray."
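
Incidentally, the "Permission denied" on the echo command apparently happened because the ">" redirection is performed by the ordinary (non-root) shell before sudo ever runs.  Had I wanted to avoid gedit, something along these lines would presumably have written the file as root (substituting the real Synology username and password):
echo "username=[username]" | sudo tee /etc/cifspwd
echo "password=[password]" | sudo tee -a /etc/cifspwd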

This got me partway there.  The other part was to type "sudo gedit /etc/fstab" and add these two lines to the fstab:

#Entry for SYNDATA
//192.168.2.1/SYNDATA                /media/SYNDATA        cifs    user,uid=ray,gid=users,rw,suid,credentials=/etc/cifspwd,iocharset=utf8 0 0
where 192.168.2.1 was the number I got from Synology's Main Menu > System Information > General tab > Network section > IP address, and "ray" was the username on the computer (not on the Synology).  I found that, if you had more than one shared folder like SYNDATA, you could add another fstab line using the same IP address, with everything else the same except the name (e.g., SYNDATA).  After finishing my edits to fstab, I saved and closed it and typed these lines:
sudo chmod 0600 /etc/cifspwd
sudo mount -a
First time around, when I had the wrong fstab entry (i.e., referring to "synologybox" rather than 192.168.2.1), this gave me an error:
mount: wrong fs type, bad option, bad superblock on //synologybox/SYNDATA, missing codepage or helper program, or other error (for several filesystems (e.g., nfs, cifs) you might need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try dmesg | tail or so
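(As an aside, that same "wrong fs type ... helper program" complaint can also appear when the CIFS mount helper isn't installed at all.  That wasn't my problem here, but on Ubuntu of that vintage the presumable fix would have been:
sudo apt-get install smbfs
with later releases calling the package cifs-utils.)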
I had also gotten another error, "mount point /media/SYNDATA does not exist," but I had fixed that by typing "sudo mkdir /media/SYNDATA."  In the course of troubleshooting, again with great help from the Synology tech support lady, I also discovered the alternative of mounting the DS109 from the command line, with something like this:
sudo mount -t cifs //192.168.2.1/SYNDATA /media/SYNDATA -o username=[username],password=[password],iocharset=utf8
A bit of playing with that led me to discover that the exclamation mark (!) in the password I was using, while no problem when logging in from the Windows machine, was a problem on the Ubuntu machine, at least when logging in from the command line.  After adjusting to resolve that problem, I was able to connect to the Synology, and now SYNDATA was showing up as a drive in Nautilus, just like other partitions.  I still wasn't sure what to do about those Windows Network and DISKSTATION entries that showed up in Nautilus > Places > Network, but I decided to let that be a problem for another day.  I typed "sudo gedit /etc/fstab" and corrected the line to read as shown above.
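
On the exclamation-mark problem, my guess is that bash was treating the "!" as a history-expansion character.  Single-quoting the options string should keep the shell from touching it -- a sketch, with a made-up password:
sudo mount -t cifs //192.168.2.1/SYNDATA /media/SYNDATA -o 'username=[username],password=Secret!23,iocharset=utf8'
The credentials-file approach sidesteps the issue entirely, since the password never passes through the shell.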

Incidental notes:  another troubleshooting step taken at some point (not sure when) was to type "sudo mount -t cifs."  Another problem was the demand, "Enter password to unlock your login keyring," but every password I tried failed.  I wanted to bail out, but the thing kept giving me the same dialogs.  I had to use Force Quit to get it to shut up.  Second time around, though, I tried Cancel instead, and that let me go right on through.  Weird.

Also, at one point in the troubleshooting process, the computer became completely unable to contact the outside world.  Firefox wasn't reaching webpages, and some of my Synology-related commands were producing a "Network is unreachable" error.  That seemed to be a pretty common problem.  I fiddled with some random commands, and it seemed that one of them had adjusted the situation.  The command in question might have been "dhclient eth0" but more likely was "/sbin/route add -net 0.0.0.0 gw 1.1.1.1 eth0" (replacing 1.1.1.1 with 192.168.2.1 in my case -- see above).  But then that turned out to be a false dawn; I was soon back at the "network is unreachable" error.  A day later, however, without any intervention by me other than to reboot the system, the computer was able to go online.
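
If it recurs, a minimal sequence for checking and restoring the default route (assuming the interface is eth0 and the gateway really is 192.168.2.1, per the note above) would presumably be:
route -n
sudo dhclient eth0
sudo /sbin/route add default gw 192.168.2.1 eth0
where the first command shows whether a 0.0.0.0 (default) route exists at all, the second asks for a fresh DHCP lease, and the third adds the default route by hand.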

So at this point, writing up these notes a day or two after I was finally able to get to the Synology DS109 through the Ubuntu machine, the main things that I did seem to have been to add the username and password to the cifspwd file in the correct format and use the right syntax in the fstab entry, as shown above.

Monday, October 4, 2010

Wiping a Hard Drive in Ubuntu

I had an old ATA (PATA) hard drive that I wanted to wipe and dump. I didn't know if there had ever been anything particularly sensitive on there, but anything is possible, and anyway I wanted to know how to do it in Ubuntu.  This post presents what I learned in that process.

After a preliminary search, I came across this advice:  "The first thing to do is to see if hpa is enabled."  "HPA," according to Thinkwiki, was short for "Hidden Protected Area" or "Host Protected Area."  It was "a special area (usually a few gigabytes in size)."  The recommended way to see if hpa was enabled was to type "sudo hdparm -N /dev/sdx."  I did that and got this:

625142448/4385456(625142448?), HPA setting seems invalid (buggy kernel device driver?)
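For comparison, on a drive where the HPA query works normally, the same command reports something like this (hypothetical device name; the two numbers match when no HPA is set):
sudo hdparm -N /dev/sdb
 max sectors   = 625142448/625142448, HPA is disabled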
A search for that error message led to just a dozen or so hits, containing various bits of information.  For instance, thorkelljarl noted that "The HDD utilities supplied by Seagate, Samsung and some other HDD makers can set the HDD capacity and should remove HPA totally."  NeCod suggested trying this:
grep -i HPA /var/log/kern.log
That produced two messages, each beginning with the date and time and then reading as follows:
ubuntu kernel: [    3.076473] ata5.01: HPA unlocked: 625140335 -> 625142448, native 625142448
ubuntu kernel: [    3.482031] ata1.00: HPA unlocked: 156299375 -> 156301488, native 156301488
I wasn't sure what those messages meant.  I thought the system in question might have been confused about the hpa on this drive because I had booted the system using a live Ubuntu CD.  WilliTo offered some advice pertaining to RAID, but the first two steps sounded more generically useful:
1- Disable backup bios to disk in gigabyte bios if enabled
2- Disable HPA with HDAT2 (disk must be in sata mode not raid)
Similarly, that Thinkwiki page said "If the HPA is enabled in the BIOS (mode set to "Normal"), Linux may get confused about the correct partition geometry."  I tried rebooting, hit Del to go into the BIOS settings > Advanced BIOS Features > Dual BIOS Recovery Source.  That didn't have an option to disable this; it just had HPA or Backup BIOS options.  A Launchpad page gave me the impression that this was particularly a Gigabyte motherboard problem:
Virtual BIOS solutions - like GigaByte motherboards use

This is absolutely CRITICAL. When the hd is unlocked and fully used, new GigaByte boards tend to write 1.5 MB at the end of IDE or SATA drives in legacy mode as a BIOS backup. Then HPA is activated; every OS has to respect this, otherwise it will definitely overwrite data. That's usually not directly visible to the user, but it definitely happens. Depending on the filesystem used for the partition, using that last MB will immediately kill data, or it will take time till it is filled, but corruption is inevitable.
That Launchpad page said that HPA takes only 1.5MB.  But I preferred not to have it there, for purposes of this erasure.  I had seen another note, somewhere, indicating that the person in question was finding that the HPA was more like 8GB.

Going back to WilliTo's advice, I did a search for HDAT2.  According to its homepage, HDAT2 was a "program for test or diagnostics of ATA/ATAPI/SATA, SSD and SCSI/USB devices."  A desultory glance at a few reviews suggested that HDAT2 was an OK program.  My next question was whether I could add it to a BartPE bootable USB stick as one more boot utility, but it looked like UBCD4Win (the Ultimate Boot CD for Windows) and/or UBCD had beat me to it.  Rather than pursue that, however, I found that Darik's Boot And Nuke (DBAN) was a simple and highly recommended bootable eraser, so I went with that.
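
For completeness, the stay-in-Ubuntu alternative I had originally been looking for would presumably have been shred or dd from a live CD -- something like the following, assuming the target drive shows up as /dev/sdb (worth triple-checking with "sudo fdisk -l" first), and with the caveat that neither would touch a still-hidden HPA:
sudo shred -v -n 1 /dev/sdb
sudo dd if=/dev/zero of=/dev/sdb bs=1M
The first does one pass of pseudorandom data over the whole disk; the second is a single pass of zeros.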

Saturday, October 2, 2010

Ubuntu 10.04 and Windows XP Dual-Boot: GRUB2 Woes

I was installing Windows XP SP3 and Ubuntu.  I encountered some error messages early in the process.  This post describes the steps I took in an attempt to resolve those problems.

First, I got this error:
Setup did not find any hard disk drives installed in your computer.
In my search for a solution, I found a thread offering a number of suggestions.  The one that worked for me was to go into the BIOS (hit Del at startup) and make sure the SATA controller was set to ATA, not AHCI.  Mine had started out as ATA, but a notice had popped up when I was first booting it, offering to change it to AHCI, and I had accepted.  After I fixed this, the next bootup problem was this message:
Windows could not start because the following file is missing or corrupt:
\system32\hal.dll
Please re-install a copy of the above file.
My search on that led to a page offering a number of suggestions.  I started with just rebooting.  That didn't fix it.  The page also suggested that I go into the BIOS and switch the order in which the BIOS would try to boot my hard drives.  This led to a new message:
error: no such device: [UUID number]
grub rescue>
This apparently happened because I had previously installed Ubuntu on that hard drive.  For this one, I took the advice to boot from the WinXP installation CD, choose R to repair an existing installation, choose the existing installation (no administrator password), and type this sequence of commands:
D:\WINDOWS> C:
C:\> CD \
C:\> FIXBOOT C:
C:\> FIXMBR
C:\> BOOTCFG /rebuild
The last one led to an offer to add D:\WINDOWS to the boot list.  I took that offer.  But then it asked "Enter Load Identifier" and "Enter OS Load Options."  I didn't know what to add, so I just hit Enter for each.  This led to a new error message:
Error: Failed to add the selected boot entry to the boot list.
I guessed that the system was seeing the Windows installation on drive D because I had reversed the boot order in the BIOS.  I did think I had cabled them correctly, with the Windows drive going to SATA0 (i.e., the first SATA connector) on the motherboard.  I looked into Load Identifier and found a Microsoft page with information on that and other parts of this situation.  It told me to type the name of my operating system:  Microsoft Windows XP Professional, though apparently anything would do.  For OS Load Options, it said to type /fastdetect.  Unfortunately, this was not satisfactory.  I got the "failed" error again.

The responses to this seemed to lean in the direction of editing the boot.ini file.  A thread on that gave me the idea of just copying my boot.ini file from another computer.  A Microsoft page gave information on editing it if I was already in WinXP, which I wasn't.  A webpage devoted to boot.ini said it should be possible to just delete the boot.ini file in order to boot the system.  The webpage also gave some sample boot.ini files.  I typed "help" at the prompt and got a list of options.  There didn't seem to be an editor in the Repair Console.  I typed "dir" to verify that boot.ini was there, and then typed "type boot.ini" to see what was in it.  It didn't look very complicated.  What's more, it looked like my XP Pro entry was already there, which could explain why my BOOTCFG command didn't work.  I pressed the up arrow to retrieve my BOOTCFG command and hit Enter to run that again, just in case.  It failed again.
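
For reference, a typical single-installation boot.ini of the kind those sample pages showed looks roughly like this, assuming Windows sits on the first partition of the first drive:
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect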

I typed "exit" and shut down the computer and swapped cables so that maybe this drive would show up as C rather than D.  That merely resurrected the "no such device" GRUB error.  I decided to explore that one for a while.  A search led to some commands that I could have entered, but then I saw a suggestion that maybe I could fix it by just installing Ubuntu, which I had planned to do anyway, and let it sort itself out.  So I went ahead with that, following the Ubuntu installation approach I had worked out previously.  But at this point I only did the initial installation from the CD, and then rebooted to see what the Windows situation was now.  Sadly, I still had the "no such device" error.

I guessed that the problem was that I was using two drives.  One of them had perfectly reasonable Windows and Ubuntu installations on it.  The other had leftovers from some previous Ubuntu installation.  I didn't know for sure, but on that hunch I unplugged the second drive and rebooted.  But no, that gave me the "no such device" error no matter which of the drives was connected.  I plugged in both drives, rebooted with the live CD, and ran System > Administration > GParted to take a look.  Interestingly, it showed that a partition on the second drive had its boot flag set, when it should not have.  I changed that.  It didn't make a difference.
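
Incidentally, the command-line equivalent of that GParted change would presumably have been something like this, assuming the second drive was /dev/sdb and the flagged partition was the first one on it:
sudo parted /dev/sdb set 1 boot off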

Ultimately, I plugged in a USB drive while booted with the Ubuntu live CD, copied over the data files, wiped both of the drives in the machine, and reinstalled WinXP and Ubuntu, in that order.  That solved the problem.