
Monday, January 3, 2011

VMware Workstation 7.1 Unrecoverable Error

I was using VMware Workstation 7.1 on Ubuntu 10.04 with a Windows XP SP3 guest.  I had a WinXP virtual machine (VM) open, and suddenly it crashed, with this error message:

VMware Workstation unrecoverable error: (mks)
Unexpected signal: 11.
A log file is available in "/media/VMS/VMware VMs/WXMUProjectC/vmware.log". Please request support and include the contents of the log file.

To collect data to submit to VMware support, select Help > About and click "Collect Support Data". You can also run the "vm-support" script in the Workstation folder directly.
We will respond on the basis of your support entitlement.
I pursued that option, but it turned out I didn't have a support entitlement, so I set about researching the solution on my own.  I had gotten a similar error message once before.  I wasn't entirely sure what I had done to solve the problem in that case, other than to keep flailing around until something clicked.  But as I reviewed that previous post, I did recall a different error message that I had gotten when starting the VM.  I started the VM again and saw this in the lower right-hand corner:
Could not connect Ethernet0 to virtual network "/dev/vmnet8"
More information can be found in the vmware.log file.
Virtual device Ethernet0 will start disconnected.
I tried a search for that error and came across some possible answers.  One was just to reboot, but I had already done that.  Another was to use the Virtual Network Editor.  I found a VMware video on that.  It told me that vmnet8 was associated with NAT, which was the kind of network connection I had selected in VM > Settings > Hardware tab > Network Adapter.  The video then started to talk about adding a network adapter.
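Before reaching for the Virtual Network Editor, there is a quick, non-destructive check worth running on the host: does the vmnet8 interface named in the error still exist at all?  A minimal sketch (the restart command in the comment is the one Workstation-for-Linux installs of this era used):

```shell
# Hedged sketch: does the host still have the vmnet8 NAT interface that the
# guest's Ethernet0 is looking for? No root needed for the check itself.
if ifconfig vmnet8 >/dev/null 2>&1 || ip link show vmnet8 >/dev/null 2>&1; then
  status=present
else
  # On Workstation-for-Linux installs of this era, restarting the services
  # ("sudo /etc/init.d/vmware restart") was the usual way to recreate vmnets.
  status=missing
fi
echo "vmnet8 is $status"
```

If the interface is missing, the guest's NAT adapter has nothing to connect to, which matches the "will start disconnected" message.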

It seemed that this could have two implications for me.  First, I had just put a network interface card (NIC) into the computer, as an alternative to the motherboard's onboard network connector.  I had done that in an attempt to deal with a networking problem that turned out to be just a bad cable.  I did think that I had previously been getting the second error message, the one just quoted, "Could not connect Ethernet0."  But I had not been getting the crashes previously.  So probably I could fix that error by just removing the unnecessary NIC, instead of trying to figure out how to configure it as described in the video.  Second, I had preserved my Ubuntu /home directory during this most recent installation of Ubuntu.  I had also recently installed a new motherboard.  Possibly the settings that I had saved in that /home directory were still dreaming of the old days, with the previous motherboard; perhaps I would have to configure the VM's ethernet adapter anyway, so as to make it comfortable with the new motherboard.

I started by shutting down the machine, removing that unnecessary NIC, and restarting.  (Before shutting down, I checked Ubuntu's System > Administration > Update Manager, just in case there were updates that would make my life easier in unknown ways.)  I powered up the VM.  No Ethernet0 error message.  OK, one problem solved.  Would it crash?  I worked with the VM for a couple of days, but then it did crash again.  It seemed that the frequency of the crashes was much reduced, so maybe removing the NIC helped.

This time, I took a look at the log file.  It was in the folder containing the other files for the VM, including the .vmdk file, and it was named simply "vmware.log."  It contained a large number of entries, going back days, including what looked like several hundred, at the end of the file, that all occurred within the last second before the crash.  The first one in that last second was "Caught signal 11 -- tid 3652."  There hadn't been any others for several minutes before that, and I also noticed that the "11" was the same number as appeared in the error message onscreen (quoted above).  This did seem to be the beginning of the end.  After that "Caught signal 11" message, there in the log file, I saw many repetitions of a few other types of messages, like these:
mks| SIGNAL: stack B6CE1AE0 : [etc.]
mks| Backtrace[0] [etc.]
mks| SymBacktrace[0] [etc.]
mks| Panic: dropping lock (was bug 49968)
mks| Unexpected signal: 11.
mks| Core dump limit is 0 KB.
mks| Child process 19031 failed to dump core (status 0x6).

mks| Backtrace[0] [etc.]
mks| SymBacktrace[0] [etc.]
where [etc.] refers to various sets of computer gibberish (e.g., 0xb7823410).  It went on from there, but now we seemed to be at the point of no return, where the log showed VMware giving me the "Unexpected signal" error message onscreen.  A search led to a thread in which people were saying that they avoided this error by tinkering with their screen resolution.  When I saw that, I figured that we were talking about a kind of general reaction to somewhat incompatible hardware:  could be the NIC, could be the screen resolution, etc.
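Since everything interesting lands in that final second of the log, it helps to slice out everything from the first "Caught signal" line onward rather than scrolling through days of entries.  A minimal sketch, demonstrated here on a few sample lines shaped like the excerpts above (on a real system, point awk at the VM's own vmware.log):

```shell
# Build a tiny sample log shaped like the excerpts above, then print
# everything from the first "Caught signal" line onward.
cat > sample-vmware.log <<'EOF'
mks| routine entry from several minutes earlier
Caught signal 11 -- tid 3652
mks| SIGNAL: stack B6CE1AE0
mks| Unexpected signal: 11.
EOF

crash_burst=$(awk '/Caught signal/{hit=1} hit' sample-vmware.log)
echo "$crash_burst"
```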

I had noticed that this particular crash occurred when I used my KVM to switch from the computer in question to a different computer.  It was a new KVM, an IOGear GCS72U.  I had previously been using a PS/2 KVM without problems, but my new motherboard did not have two PS/2 sockets, so I had to switch to this USB KVM.  Had I installed the KVM before or after taking out the NIC?  I couldn't remember.  But the KVM was a suspect.

Another suspect was the keyboard.  I had also had to get a new keyboard and mouse -- because, of course, I was using PS/2 devices previously, and now I had to have USB devices.  It was an inexpensive keyboard, a Logitech K120, and I had noticed that it did not work consistently on the other computer.  It would be working fine, and then there would suddenly be no more keyboard input.  The mouse was still working, but not the keyboard.  At that point, it didn't matter whether the keyboard was connected to that other computer through the KVM or was plugged into it directly; either way, it wouldn't work.  But I hadn't done a scientific study to determine whether it was the keyboard or the USB KVM that was screwing up.

So, putting it together, what had happened in this case was that I was using VMware on computer no. 1 (C1).  The keyboard and mouse had been working fine on both C1 and C2.  I hit the KVM switch button to move to C2.  That's when VMware crashed; and at the same time, suddenly the keyboard was not working on C2.  But the USB mouse would still work.  So it seemed that either the keyboard or the KVM was sending a funky keyboard-related signal to C2.

C2 was an older system, and I was in the process of upgrading it, and I thought that might solve the problem.  But in the meantime, I plugged my old PS/2 keyboard into C2 and rebooted, with the new keyboard and KVM still connected to both computers.  In this setup, over a period of days, I observed that the PS/2 keyboard continued to work consistently throughout, but the USB keyboard would sometimes stop working on C2, like before.

Then I had another VMware crash on C1, and that made me decide to try another angle.  I unplugged the KVM from C2.  By this time, I had replaced C2; now it was pretty much the same computer as C1 (same kind of motherboard, CPU, and case).  So now the mouse and keyboard connected to the KVM would work only on C1.  On C2, I kept using the PS/2 keyboard, and added a USB mouse.  If there were still crashes or freezes, I would have a better idea of whether the problem was the new USB keyboard or the new USB KVM.  At this writeup, a couple of weeks later, my recollection was that I did not have any further crashes.

I decided to send the KVM back to IOGear.  In the interim, I connected my old PS/2 KVM to both computers, and used it only with the PS/2 keyboard.  So now I had a separate USB mouse for each computer, but only one keyboard for both of them.  It took a while to get used to switching mice when I switched the keyboard from one computer to the other, but the point here is that there were no further crashes.  I had to wait for the replacement KVM to arrive from IOGear before I could say for sure whether it was the keyboard or the KVM; but by that point I had decided to stop using VMware on Ubuntu.

Windows 7 as Host: Choosing a Virtualization Program

I had previously used VMware Workstation to run Windows XP in a virtual machine (VM) on Ubuntu Linux.  Now I was switching from Ubuntu to Windows 7.  I had found that WinXP was actually more stable in a VM than when running natively.  I also didn't want to go cold-turkey from WinXP; that is, I wanted to continue to have access to my familiar programs and other arrangements until Win7 felt natural.

I couldn't use my Linux-based copy of Workstation on Win7, so I would have to come up with a new virtualization tool.  I didn't want to shell out for another copy of Workstation.  I wondered if there were good free alternatives that would let me run WinXP in a VM on Win7.  I did a search, viewed some random remarks, and came up with some comparisons of several leading virtualization products.  These comparisons confirmed my sense that the main free contenders were Microsoft Virtual PC, Sun VirtualBox, and VMware Player.

Looking first at comparisons against Virtual PC, one reviewer suggested that Virtual PC was easiest for a purely Windows setup like mine, VirtualBox was more oriented toward Linux hosts, and Player was the most popular.  Another reviewer said that VirtualBox and VMware Workstation (not necessarily Player) had more advanced customization options, such as unity mode, snapshots, USB drive support, the ability to move VMs, and the option of allocating two CPU cores.  Another reviewer, comparing VirtualBox and Virtual PC, found that Virtual PC had the advantage of presumably greater Microsoft compatibility, better disk technology, a free WinXP license, support for Win7 and Vista guests, lower resource consumption on the host, and easier physical drive configuration.  He nonetheless favored VirtualBox, because he found that VirtualBox supported more operating systems (especially Linux and Mac), supported 64-bit guests and multiple processor cores (i.e., better performance) as well as multiple kinds of virtual disks, and offered snapshots, unity mode, remote display, and 3D support.  Another reviewer, offering a video demonstration, said Virtual PC was better in supporting Aero, automatic login, USB device sharing, and integration into Windows Explorer (which he actually disliked), but favored VirtualBox for running on multiple operating systems and for other reasons stated by some of the other reviewers.  In short, there seemed to be some consensus that Virtual PC was the worst of the three.  I did a quick search to see if perchance Microsoft had upgraded it recently.  It didn't seem to have done so.

At about this point, I realized that probably I could continue to use my copy of VMware Workstation for Linux to make virtual machines containing WinXP, and use those in Player on a Win7 host.  So I wasn't sure if I should be comparing VirtualBox against VMware Player or Workstation.  I found both sorts of comparisons.  In what were probably the most professional reviews I saw, PCMag rated Workstation 6.0 (I was now using 7.1) at 4.5 stars, versus 3.5 for VirtualBox.  (At this writing, Workstation was on sale for $142 (with a free copy of VMware ThinApp Starter Edition thrown in) instead of its usual $189.)  One of the reviewers cited above encountered freezes in VirtualBox, and found that Workstation was the best performer.  She reported large differences in size between Workstation (~500MB) and the other two (~40MB), presumably reflecting more sophistication but also more resource demands in the former.  Another reviewer also compared VirtualBox against VMware Player.  He found VirtualBox faster in the guest, less burdensome for the host, and better in snapshots; but inferior in networking, and in support for Windows Aero, USB, 64-bit CPUs, and hardware virtualization.  Another reviewer favored VirtualBox because (s/he claimed) it was free, open source, more frequently patched, used fewer resources, ran faster, resumed faster, and was able to use its competitor's VMs.  The remark (above) that VirtualBox was more oriented toward Linux hosts was echoed in my own previous look at VirtualBox, where I cited a source indicating that VirtualBox did far better in Linux hosts.

It seemed that VMware was ahead of the game or at least a solid contender in most ways, but that it was not a clear and obvious winner, especially when price was considered.  Having already played with VirtualBox a bit, and having learned VMware's approach, it seemed that I should start with whatever I could pull together from Workstation and Player.  If I ran into performance issues with a Windows host as I had with an Ubuntu host, maybe then it would be time -- especially before investing another $140-190 in VMware -- to give VirtualBox a more extended look, and that might also be true if I reached the point of having to build another WinXP VM from scratch.  I could do that in VirtualBox, and in that event might find it a relatively problem-free alternative.

Before proceeding to try VMware Player in a Windows host with my existing VMware VMs that I had created using Workstation on a Linux host, I looked into the recollection that VMware Server was also free.  Posts in one thread said that Server could create VMs too, but was not as fast as Player and did not have as many capabilities for an already existing VM.  I found EasyVMX.com and gathered that there were other ways to make VMs as well, though there didn't seem to be a point in doing so unless I was short of system resources to run Server.
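For reference, the .vmx files that tools like EasyVMX generate are plain text.  A stripped-down WinXP example would look roughly like this (these are standard VMX key names, but the values here are illustrative, and a real file carries many more entries):

```
config.version = "8"
virtualHW.version = "7"
displayName = "WinXP-test"
guestOS = "winxp"
memsize = "512"
ide0:0.present = "TRUE"
ide0:0.fileName = "WinXP-test.vmdk"
ethernet0.present = "TRUE"
ethernet0.connectionType = "nat"
```

Since it is just a text file, a VM defined this way can be opened by Player, Server, or Workstation alike.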

I discovered at this point that VMware offered another free virtualization product, called the VMware vSphere Hypervisor, or ESXi for short.  This one differed from the others in being a "bare metal" hypervisor -- that is, not requiring a host at all.  In other words, I would install this first, before installing the operating system, and then I could install one or more operating systems, each in its own VM, without devoting resources to non-virtualization services being provided by the host (though it began to sound like ESXi would only run one operating system at a time).  In the long run, this sort of thing could be the end of dual-booting, and I could have Ubuntu, Win7, WinXP, and any other operating systems ready to run as needed, including those already packaged in free VMware appliances.  ESXi was said to offer better performance than Server.  It wasn't clear that it could be made to work on a PC as distinct from a dedicated server, though.  VMware's Hardware Compatibility Guide was not going to provide information for my puny little AMD Athlon (as distinct from Opteron or Xeon) CPU.  Another source made it sound like a home PC could run it, though.  Managing ESXi seemed to be the sticking point; for example, VMware's ESXi Management Kit cost $995.  Veeam offered a free ESXi manager, but there appeared to be a network administrator type of learning curve involved here.  Some people seemed to be using a minimal Linux to manage it.  It also sounded like there might be an ESXi manager within ESXi itself.  Apparently part of the reason Server was slower was that it was more user-friendly.  The purpose of the free ESXi seemed to be to get new system administrators into the VMware world and ready to upgrade to more powerful VMware server products.  Apparently there were ways to run ESXi in Workstation in Windows 7, but this seemed like the opposite of a bare-metal approach.  
Basically, I liked the idea of a bare-metal hypervisor, but the sparse results I was getting on my searches told me that I was looking into something that was not happening for end users, at least not yet.  It could be done, but servers remained something that other people used for other purposes.

I wasn't quite ready to give up on the idea of a bare-metal hypervisor for home use, so I did another search.  Brad Maltz gave me a nice chart to compare the options.  His bottom line:  "You can expect plenty of hurdles and years for perfecting this technology, but the client-side hypervisor is the catalyst to many greater things to come."  My goal for the coming months, it seemed, would be to continue to focus on the usual multi-layered host-virtualization-guest scenario.  For that purpose, I would start with VMware Player and my existing VMs.  If I needed to adjust them, I would try to do so in either Workstation for Linux or Server.  If I had to build a new VM, I would consider doing it in VirtualBox as an alternative to Workstation or Server.

Saturday, January 1, 2011

Farewell, Ubuntu

I first started looking into Linux in the 1990s.  I have been playing with, and then relying on, Ubuntu since I started this blog in 2007.  And now I may be going away from Ubuntu, and Linux, for a while.

My situation is that, throughout this time, I have had work to do.  There were always applications or capabilities that I was using in Windows that I could not yet match in Ubuntu.  As a transitional step, I bought VMware Workstation and ran that on Ubuntu, so that I could use those Windows applications in Windows XP virtual machines while continuing to become more familiar with Ubuntu.  And this worked.  I did become fairly comfortable with Ubuntu.  I have spent what must have been hundreds of hours researching, tinkering, and troubleshooting, as shown in many posts in this blog.  The performance was never as good as on the Windows machine.  But it was a good arrangement nonetheless.

During the past year or so, unfortunately, VMware or the underlying Ubuntu installation grew flaky.  I reinstalled both a couple of times.  It just wasn't working.  The VM was working so slowly, even with faster hardware and a fair amount of attention to performance tweaks.  I still don't know why.  If I knew, I would fix it, because one odd thing I noticed was that Windows XP running in a VMware Workstation virtual machine was vastly more stable than Windows XP running in native mode.

Right now, there are too many things that I still can't do in Linux.  The promise has been a long time in coming, and it's still not here.  I will probably keep a dual-boot on at least one computer, and I may find that I need it sometimes, for some purposes.  I got to the point where installing and tweaking Ubuntu was a lot faster and easier than doing so in Windows.  I like Ubuntu.  But at the end of the day, I need my software to work.  So, for now, I am going back to Windows.

Thursday, September 30, 2010

Connecting Network Attached Storage (NAS) to a WinXP Guest in VMware: FAIL

I had two desktop computers running Ubuntu 10.04.  On one of them, I was running VMware Workstation 7.1, with Windows XP SP3 as a guest operating system in a virtual machine (VM).  I had just figured out how to network these two computers using Samba shares within a home network, where the two computers were connected via ethernet cables to a shared router.

Now there was a new question.  Could I add a Synology DS109 network-attached storage (NAS) device (essentially an external hard drive enclosure designed for network backup and file serving) to this network?  Of course I could, in the sense of running an ethernet cable from the Synology to the router; but what I was wondering was whether I could make this work despite the fact that the software for the Synology was available only for Windows and Mac, and not Linux.

It was a question, in other words, of whether I could run the Synology software in Windows XP in a guest VM.  I gave it a whirl.  I ran the Synology installation CD and went through the steps to set up the Synology Server.  This opened the Synology Assistant, a setup wizard; and after a moment, it gave me an error message:

No Synology Server was found on the local network.  Please make sure:

1.  Synology Assistant is not blocked by the firewall of your computer OS or anti-virus applications.

2.  Synology Server and your computer are both connected to the network.

3.  You have switched on the power of Synology Server.
Option 1 was the only one that seemed to explain the situation.  I decided to back up and make sure that I could see a shared folder on the other computer from within Windows.  In my first try, I set up that shared folder on an NTFS partition, and that led to a separate investigation of the difficulties of sharing an NTFS partition in Ubuntu.

That wound up taking longer than expected, so in the meantime I just focused on the link between the Synology and the computer in which I had VMware running.  I noticed that, in Ubuntu's Places > Network, it listed three items:  ANTEC (the name of this computer), Windows Network, and WINXP8 (the name of the computer running in the WinXP VM).  Plainly, Ubuntu was seeing Windows.  Was Windows seeing Ubuntu?  Or did it need to?  A first answer was that, of course, you could go into Windows Explorer > Tools > Map Network Drive and (assuming you had VM > Settings > Options tab > Shared Folders set up) you could gain access to NTFS and ext3 partitions outside of the drive C that existed inside the virtual machine.  These drives would be visible in Windows Explorer > My Network Places > Entire Network > VMware Shared Folders.

I tried running the Synology setup wizard again.  It gave me the same error as before.  I did a search and found webpages describing how to use NAS freeware to use another computer as an NAS device.  This raised two thoughts.  First, possibly I could use some software other than Synology's CD to make contact with the NAS device.  Second, perhaps I should consider using another computer myself, in lieu of the Synology unit.  I decided to go ahead with the Synology project for now; I could return or sell the device if it really wasn't what I wanted.  I probably could have assembled another computer at equal or lower cost, with far greater potential storage capacity, with more RAID options, with a more powerful processor (e.g., for checksum calculations) if needed, with what might prove to be more options in the choice of software packages and commands to manage and adjust it, and with more flexible hardware troubleshooting options (i.e., more than just fix it or replace it) in the event of malfunction.  Its drawbacks would include time and expense for software and hardware selection, learning, installation, maintenance, and troubleshooting; physical space requirements; power consumption; and noise and heat generation.

For the time being, I searched Synology's website and found a post raising the thought that perhaps a Windows connection was crucial only for the initial setup of the Synology device.  So I rebooted the computer into Windows XP instead of Ubuntu and ran the Synology setup CD from there.  This time, the wizard found the DiskStation right away.  So, really, I probably could have set the thing up using my laptop.  It seemed to be just a matter of connecting a Windows-based computer to configure the hard drive that I had inserted into the NAS unit.

Following the Quick Installation Guide, I looked for a Browse option in the Synology Assistant, but didn't see one.  Instead, in the Management tab of the Assistant, I double-clicked on the DiskStation entry, and that seemed to be the correct thing to do:  it opened a different Setup Wizard, or maybe a continuation of the same one.  The wizard said, "Please input the path of installation file."  Maybe this was where I was supposed to browse to the .pat file?  Sure enough:  Browse brought up four different .pat files.  I chose the one for the 109 and opted for One-Click Setup.  It warned me that all data in the hard drive would be deleted.  I hoped it meant the hard drive that I had inserted into the NAS unit.  Lights began flashing on the unit.  It went through several steps:  Apply network settings, Format hard drive, Install DSM (DiskStation Manager) to hard drive, and Write configurations.  For my 2TB drive, the whole process took about 20 minutes.

When it was done, it said, "System has been installed successfully."  Then it just sat there.  Now what?  The other programs on the CD's Installation Menu were Data Replicator, in case I wanted to use the unit for backup rather than as a file server, and Download Redirector, for some purpose I didn't fully understand.  For lack of any better ideas, I rebooted into Ubuntu > Places > Network.  The list of places was the same as before.  I tried another search of the Synology website.  The product page for the DS109 definitely said that the unit was "designed for data storage and sharing among Windows, Mac, and Linux."  But how?

I knew I was desperate when I thought that perhaps I should consult the User's Guide.  But then -- what's this?  When I went to the downloads website, I saw that Synology Assistant was also available for Linux!  I had no idea.  I downloaded that and, while I was at it, also snagged what appeared to be a more recent DSM patch (.pat) file.  The User's Guide on the CD was for DSM 2.3, but the one online was for DSM 3.0, so I copied that too.  Apparently DSM was the firmware updater.  The included instructions were incorrect, as I eventually figured out.  All I had to do was to navigate to the folder where I had put the downloaded .tar.gz file ("cd /LOCAL/Synology") and the accompanying install.sh file, type "install.sh," designate /usr/local as the target directory, watch a bunch of error messages roll by, accept its offer to try again by sudo, copy and paste the command it offered to create a symbolic link, and then type "SynologyAssistant."
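For the record, the sequence that finally worked amounts to something like the following.  This is a transcript-style sketch, guarded so that it does nothing on a machine without the download; the /LOCAL/Synology path is the one from above, and the target-directory prompt wants /usr/local:

```shell
# Guarded sketch of the Synology Assistant install sequence described above.
DIR=/LOCAL/Synology
if [ -f "$DIR/install.sh" ]; then
  cd "$DIR"
  sudo sh install.sh      # answer /usr/local at the target-directory prompt;
                          # running under sudo up front avoids the error spew
  SynologyAssistant       # launch via the symbolic link the installer offers
  result=installed
else
  result="installer not found in $DIR"
fi
echo "$result"
```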

With that, Synology Assistant was up and running, and it found the DiskStation.  I double-clicked on it.  It opened a webpage in Firefox.  Having used the One-Click installation previously, I knew there was no administrator password, so I just clicked right on in.  Now I was looking at Management and Online Resources icons.  Management gave me all kinds of options.  I noticed I was in DiskStation Manager 2.3; did this mean that there was no DSM 3.0 for Linux?  On the left side, under System, I clicked on DSM Update.  Ah, of course.  This was the part where I got to Browse to the new .pat file I had downloaded.  It said, "Transferring data to the server.  Please wait."  This time, it was done in under 10 minutes.  It then confronted me with a sign-in screen.  I could not just click on through; it demanded that I enter something.  I tried Administrator without a password.  No go.  I tried my normal Ubuntu login.  Aha! . . . er, no.  That wasn't it either.  The hell.  I was locked out of my own NAS.  I wasn't alone.  Several other people had experienced this just within the last few days.  I suspected it was due to some quirk in newly released software.  I posted a "me too" note on it in Synology's moderated forum and waited.

But then -- reverting again, desperately, to the manual -- I noticed I was supposed to log in as "admin" with no password.  That worked, and now I was in DiskStation Manager 3.0.  I clicked on "Set up a volume and create a shared folder."  That opened Storage Manager.  I selected Storage > Create and that put me in Volume Creation Wizard.  The only option that wasn't greyed out was iSCSI LUN.  The manual didn't define that term, but Wikipedia said iSCSI was short for Internet SCSI, where SCSI is short for Small Computer System Interface.  The idea seemed to be that you were using an IP network instead of dedicated cables to create a SCSI setup.  LUN was short for "logical unit number."  An iSCSI LUN was apparently just any one of a set of drives in a SCSI array.  In other words, I was creating a logical drive.  So I went with that.

That gave me a choice of some more properties.  One was Thin Provisioning (default = yes), which was said to increase efficiency.  I was supposed to say how much of my 2TB (actually, 1829GB available, according to the dialog) I wanted to allocate to this first volume (default name:  LUN-1).  I was going to be backing up this file server to a 2TB drive, so I didn't worry about splitting the volume to a size that would match the external drive.  I thought it might be a good idea to have more than one volume, in case one went bad or needed maintenance.  The manual said that, on my unit, I could have up to ten.  I looked at my data and decided to go with three volumes of 600GB each.  (This would be changing later.)  Finally, there was an iSCSI Target Mapping option.  Again, the manual didn't explain this.  I found a website that sort of halfway did.  Eventually I just decided to go with the default, which was no, thank you.  I clicked Next > Apply and, in a few seconds, it was done.  I repeated for the other volumes -- or, I guess, LUNs, not volumes.  Then I clicked on the icons this process had created.  Each indicated that it had a 600GB capacity, but none of them actually seemed to have taken a bite out of the 1.8TB total.  Apparently that was how Thin Provisioning worked.  Then, to finish up with Storage Manager, I went to the HDD Management tab > Cache Management > Enable Write Cache.  I also ran a quick S.M.A.R.T. test.

This was all very nice, but I wasn't sure what it was actually accomplishing.  There weren't any new partitions appearing in Nautilus.  I wasn't sure if there were supposed to be.  I bailed out of Storage Manager.  I was looking again at Quick Start.  It said that now I needed to create a shared folder in the Synology.  I followed its link.  It put me into Control Panel - Shared Folder.  I clicked on Create.  In Create New Shared Folder, I set up a folder for LUNDATA, the first of my three LUNs.  It wouldn't let me select "Mount automatically on startup."  I gave both admin and guest read/write privileges for now.  I did the same with the other two LUNs.  I was confused, though:  after completing that step, I still didn't have anything to show for it.

It seemed that Chapter 7 of the User's Guide was where I wanted to be.  It told me to go to Main Menu (i.e., the down-arrow icon) > Control Panel > Win/Mac/NFS if I wanted to enable file sharing.  But that gave me an error:  "You are not authorized to use this service."  So, oops, that meant I had gotten logged out for dillydallying.  (First of many times!)  After re-login, the Quick Start reminded me that next on the list was "Create a User and assign privileges."  It had admin as the system default user already.  I selected that one and clicked edit.  Spooky thing here:  admin did have a password.  I wasn't sure why I didn't have to enter it when logging in.  I wasn't allowed to change the name of admin or disable that account.  I decided to change the password to something that I would actually know.  Admin already had full read/write privileges to my three LUNs.  The guest account was disabled.  I left it that way.  The manual (p. 66) said that each user could also have his/her/its own "home" folder.  It was something I had to enable if I wanted it.  I didn't need it, so I skipped that.

So now I went back to Win/Mac/NFS.  The User's Guide (p. 59) said that the unit supported file sharing in Linux in SMB, FTP, NFS, and WebDAV.  I unclicked the boxes so that the Synology would not offer Windows or Mac file service, which I did not need (and did not intend to provide to anyone else).  Instead, I clicked the Enable NFS box which, the manual (p. 61) said, was for Linux clients.  I figured that, in my Windows XP virtual machine, I would access the folders or LUNs on the Synology as network drives, just as if they had been ext3 drives inside the computer.
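With NFS enabled on the DiskStation, the Ubuntu side of that access is an ordinary NFS mount.  A sketch, where the address, export path, and mount point are all assumptions to adjust (the export path corresponds to the shared folder shown in Control Panel > Shared Folder):

```shell
# Assumed values - substitute your DiskStation's address and export path
# (the export path corresponds to Control Panel > Shared Folder).
NAS=192.168.1.10
EXPORT=/volume1/LUNDATA
MNT=/mnt/synology

if [ "$(id -u)" -ne 0 ]; then
  result="run this as root (or via sudo)"
else
  mkdir -p "$MNT"
  if mount -t nfs "$NAS:$EXPORT" "$MNT" 2>/dev/null; then
    result=mounted
  else
    result="mount failed (check 'Enable NFS' and the nfs-common package)"
  fi
fi
echo "$result"
# For a permanent mount, the matching /etc/fstab line would be:
# 192.168.1.10:/volume1/LUNDATA  /mnt/synology  nfs  defaults  0  0
```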

The remaining tab in this part of Control Panel had to do with Domain/Workgroup.  I didn't know if I wanted or needed to have the Synology be part of a domain, a workgroup, or both.  But then I found that the Domain/Workgroup tab was greyed out.  As I might have assumed, "workgroup" and "domain" appeared to be Microsoft-specific.  If I went back and enabled Windows file service, the Domain/Workgroup tab became ungreyed.  So that explained that:  it wasn't something I needed in Ubuntu.

In the Control Panel > Groups section of the Synology DSM, I saw that the default "users" group had read/write privileges only to the public folder, which I had disabled.  It was just me, so I didn't need a group.  So I left that all as it was.  Next, in Control Panel > Application Privileges, it appeared I could give users access to specific Synology applications (FTP, WebDAV, File Station, Audio Station, Download Station, or Surveillance Station).  Admin wasn't listed.  I assumed it didn't need to be.  I had no other users, so I skipped that part too.

Chapter 3 in the User's Guide, "Modify System Settings," told me that in Control Panel > Network, I could choose among several types of networks.  In my version of the Network dialog, those options were LAN, PPPoE, Wireless Network, and Tunnel.  The choice for my purposes seemed to be between LAN and PPPoE.  The manual said that I should use PPPoE if I used a cable or DSL modem and if my ISP used PPPoE.  I didn't know how to check that.  It didn't sound familiar, so I decided to start with LAN, the default (first) tab.  It gave me an option of manual or automatic configuration; I chose automatic (which was, again, the default).  That seemed to be about all I could do there.  While I was in the neighborhood, I went to Control Panel > Time and set it to synchronize itself with an NTP server.  

Now it was time to set up shared folders (User's Guide, p. 69).  In Control Panel > Shared Folder, I saw the three LUNs I had set up.  So apparently a LUN was a shared folder.  I had already taken care of this.  But that raised some questions.  If it was shared, what more did I need to do so that the computer would see it?  Should I have set up a "target" when I was creating the LUNs?  And did I want to encrypt them?

If I clicked on the Encrypt box, the "Mount automatically on startup" option became ungrayed.  I would want to enable that option.  But I had to think about that for a minute.  It seemed that encryption would protect the contents of the Synology in case of theft or loss of the physical device.  But apparently it would not protect those contents while the computer was turned on.  Anyone who could get into my computer, either physically or via the Internet, would have access to those contents.  I wasn't presently requiring myself to enter a login ID when I turned on the computer, so anyone sitting in my position would still have access, despite encryption.  I hadn't yet reviewed the part of the manual having to do with Internet access to the Synology, but evidently I would also have the option of logging in to it from elsewhere.  On the other hand, I had once had the experience of not being able to get into a backup that I had encrypted.  I wasn't sure if I had mis-recorded the password or if the encryption system on that backup had somehow gotten corrupted.  On balance, I decided that it would probably be a good idea to password the Internet-accessible data on the Synology, and to start requiring myself to enter a password to log in on the computer (System > Administration > Users and Groups).  But then, when I entered the password for the Synology and clicked OK, I got a warning telling me, "The performance of the encrypted shared folder will be decreased" and "The encrypted shared folder will not be available via NFS."  That would have defeated the purpose of having the Synology.  So I backed out of that.  No hard drive encryption in the Synology.

Well, the Synology was still not showing up in Nautilus.  I searched the manual for "target," in case that was the missing ingredient.  The User's Guide (p. 41) explained, "An iSCSI Target is like a connection interface . . . . [A]ll the LUNs mapped to the iSCSI Target are virtually attached to the client's operation [sic] system."  So apparently I would map my three LUNs to a target, and Ubuntu would see the target.  As the manual advised, I went into Synology's Storage Manager > iSCSI Target > Create.  There was an option to enable CHAP authentication, where the server would verify the client's identity.  I went with that.  I didn't go further and enable two-way authentication; I didn't need the computer to verify that it was contacting the right NAS unit.  I mapped all three LUNs to a single target.
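In hindsight, it is worth noting that on the Linux side an iSCSI target is normally attached with the open-iscsi command-line tools rather than showing up in Nautilus on its own.  A sketch of that flow follows; the address and target name are hypothetical stand-ins, and these commands need the live NAS to do anything:

```
# discover targets advertised by the NAS (address is a stand-in)
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.210

# log in to a discovered target (IQN is a stand-in)
sudo iscsiadm -m node -p 192.168.1.210 --login
```

After a successful login, the mapped LUNs would appear as new raw block devices (something like /dev/sdb) to be partitioned and formatted, which is a different animal from a shared folder appearing as a network location.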

In Edit > Advanced, I had an option to have it calculate CRC checksums for header and data digests.  The purpose would be to reduce or prevent data corruption.  The calculation would burden the CPU in the NAS, but I suspected the network cabling, rather than the processor, would be the real bottleneck.  One post said that CRC might be a good idea for data traveling through a router, as would be the case here.  A year-old VMware webpage pertaining to a different VMware product (ESX) said that data digest for iSCSI was not supported on Windows VMs.  I decided to start out with these checksum items turned on, and see what the performance was like.  I also had options pertaining to maximum receive and send segment bytes.  The manual didn't seem to have anything on that, and nothing popped out in several different Google searches.  I decided to leave those at their default values of 262144 and 4096, respectively.

I still didn't see the Synology in Nautilus, but now (as I viewed p. 72 of the manual) I believed that was probably because I had not enabled my own username (ray) to have access.  In Synology's Control Panel > User, I added that username and gave myself full read/write access to the LUNs.  But then, whoa, on the next page, the User's Guide said that, to allow a Linux client to access a shared folder, I would have to go into Control Panel > Shared Folders > select the folder > NFS Privileges > Create and set up an NFS rule.  The first box there called for Hostname or IP.  It looked like the best way to identify the client would be by its IP address.  What was the IP address of my Ubuntu computer?  Zetsumei said I should type "/sbin/ifconfig" in Terminal.  I did that and got a bunch of information regarding eth0, lo, vmnet1, and vmnet8.  Same thing if I just typed "ifconfig -a."  A search didn't shed any light.  The number for eth0 came first and looked most familiar, so I tried that, with no mapping and asynchronous enabled.  This still didn't produce anything in Nautilus, so I thought probably I should have mapped.  But to what?  The only options were "Map to admin" or "Map to guest."  How about "Map to ray"?
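Incidentally, the address could have been pulled out of that ifconfig output mechanically.  A minimal sketch, run here against a canned sample of the old-style ifconfig output rather than the live command (the addresses are made up); on a real system the pipeline would start with "ifconfig eth0" instead of the echo:

```shell
# Extract the client's IPv4 address from ifconfig-style output.
# The sample text and addresses below are hypothetical.
sample='eth0      Link encap:Ethernet  HWaddr 00:11:22:33:44:55
          inet addr:192.168.1.100  Bcast:192.168.1.255  Mask:255.255.255.0'
ip_addr=$(echo "$sample" | grep 'inet addr' | sed 's/.*inet addr:\([0-9.]*\).*/\1/')
echo "$ip_addr"
```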

A search of the Synology website led to a thread that yielded more questions than answers.  For the first time, the thought crossed my mind that the quality of the Synology organization was possibly not as gold-plated as I had hoped or imagined.  Surely the manual could have been clearer; surely, at these prices, the people posting these questions deserved some enlightenment.  At any rate, links in that thread led to one of those multiyear Ubuntu discussions, this one dealing particularly with NFS.  It seemed I should focus on learning about NFS; among other things, some posters felt that it was far better than Samba for sharing files and folders.

So I did a search and found a recent webpage promising to show me how to set up NFS.  I guessed that the real problem might be on the client side, so I started with that part of the webpage.  First off, they wanted me to install some packages:  portmap, nfs-common, and autofs.  A check of Synaptic told me that these were not already installed.  After installing them, I looked in the manual for the Synology IP address.  On page 161 (after many references to the IP address), the manual said that I could find it in Main Menu > System Information -- not, that is, in Control Panel.  The IP address it gave was, however, the same as the default entry it showed in Control Panel > Network > Use manual configuration; it was not the number shown in the DNS Server box.  So in the client, following the instructions on that webpage about NFS, I typed "sudo gedit /etc/hosts.deny" and added a line that said "portmap : ALL."  Then I typed "sudo gedit /etc/hosts.allow" and added a line that said "portmap : [Synology IP address]," using the address I had just found in Main Menu > System Information.  Next, I typed "sudo gedit /etc/hosts" and added a line near the top that said "[Synology IP address] [Synology Server Name]," in the same format as the other lines there.  (The server name was shown in Main Menu > System Information.)
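Laid out in one place, those three edits amounted to the following; the address and server name here are hypothetical stand-ins for the values shown in Main Menu > System Information:

```
# appended to /etc/hosts.deny -- refuse portmap connections by default
portmap : ALL

# appended to /etc/hosts.allow -- then permit just the Synology
portmap : 192.168.1.210

# added near the top of /etc/hosts
192.168.1.210  DiskStation
```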

Continuing with the NFS webpage's instructions, I was supposed to type something along the lines of "sudo mount [Synology Folder] [Local Folder]."  For that purpose, I understood that Synology Folder = [Synology IP address]:[Synology Shared Folder].  But I was not sure what the Shared Folder part was supposed to be.  Was I supposed to refer to the LUN or the iSCSI Target on the Synology unit?  Since the User's Guide (p. 41) said that an iSCSI Target was "like a connection interface," and that all the LUNs attached to it would be attached to the operating system, it seemed that I would need only one target, as I had set it up.  But now that I had learned more about security on the Synology, I had changed my mind about the number of shared folders I wanted.  I just wanted two, each 900GB in size:  one to contain stuff that shouldn't be changing very often, and that only the administrator should have write privileges for, and one for everything else, i.e., for the stuff that I would want to be able to mess with on a daily basis.  So after changing the LUNs and target in Storage Manager, I guessed that I would be creating two folders using the pattern of "/home/[username]/[foldername]" (where "username" would be "ray" in my case) -- one for each of the two LUNs on the Synology.  One of them was called SYNDATA.  On that basis, I typed "sudo mount [Synology IP address]:[Synology 900GB folder name] /home/ray/SYNDATA."  This gave me "access denied by server while mounting [Synology Folder]."  Not a desirable answer, but at least it was a reply of some kind!
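For the record, the shape of the command I was attempting, along with the /etc/fstab line that would make such a mount permanent, was roughly this.  The address and share path are hypothetical stand-ins (Synology units generally export shares under /volume1, though I had not confirmed that at the time), and the fstab options shown are common defaults rather than anything the webpage prescribed:

```
# one-off mount, run with sudo:
mount -t nfs 192.168.1.210:/volume1/SYNDATA /home/ray/SYNDATA

# or, as a permanent /etc/fstab entry:
192.168.1.210:/volume1/SYNDATA  /home/ray/SYNDATA  nfs  rw,hard,intr  0  0
```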

By now, I was completely confused, and more than a little irritated at how very long this was taking.  The NAS was supposed to be simplifying my situation, not making it more complex.

It did seem, at this point, that it might have been easier to troubleshoot this if I had been using a computer as my NAS:  I could have gone into it and typed various commands to maybe get a bit more insight on what was happening in there.  A search for that error message led to the suggestion that I type "/usr/sbin/rpcinfo -p" to see what ports the server was using, but that gave me a "No such file or directory" error.

I decided to put in a support request at Synology.  The form required me to enter the Firmware Version -- but, of course, this was not provided in the System Information dialog.  I just entered something that seemed approximately right.  It also asked for the serial number -- and that, they helpfully indicated, was located on the bottom or perhaps the rear of the unit.  After turning the unit around, risking unplugging it, and doing gymnastics to hold it while typing, I realized that, well, they might have mentioned that that bit of information actually *was* in the System Information dialog.  But when I got down to the part where they were ready and listening to what I had to say, I was not sure what to type.  There wasn't an option of talking to (or even chatting with) a live person.  I had to type something.  But what?  How could I possibly explain all this in a few words?

What I needed, somewhere in the Synology software, was a tool that would tell me what was happening.  "You have connected to a computer" or "You have not connected to a computer," etc.  I wasn't sure -- I hadn't done much networking before -- but I suspected that I could get that kind of information by using regular Linux commands on a computer in a network.

I decided that what I would tell the Synology people was just that they should look at this post.  I had identified a number of areas they could improve; and if they really got on the stick, they might even be able to respond in time to help me, before I returned the unit to the vendor or resold it.  The unit had more than a dozen positive remarks from other purchasers at Newegg, so I was hopeful.  But meanwhile, I started a post on the alternative of using a separate computer to create my own NAS.

Wednesday, September 22, 2010

Installing Windows XP SP3 in VMware Workstation 7.1 on Ubuntu 10.04

I had decided to stay with VMware Workstation 7.1 for a while longer, adopting a wait-and-see strategy toward VirtualBox and whatever other virtualization developments might be underway.  A couple of years had passed since I had first installed Windows XP on VMware Workstation in Ubuntu.  My old virtual machines (VMs) were creaking and malfunctioning.  It was time to create a new WinXP VM.

I had obtained mixed results from efforts to use VMware Converter to build VMs from existing WinXP installations or other sources.  I decided to build a fresh installation, installing XP within a newly created VM.  I had already installed VMware Workstation.  Now I opened Ubuntu's Applications > Accessories > Terminal and started Workstation as root by typing "sudo vmware."  In Workstation, I went into Edit > Preferences and adjusted the settings that would apply to all VMs, some of which could only be set by root.  In the Preferences > Workspace tab, I went with the default settings for the most part.  For the default location of my VMs, I chose a partition on a separate hard drive for better performance.  In the Display tab, I selected the three Autofit options.  I selected all of the Updates options.  In the Memory tab, I left about 1.5GB of RAM for the system; the rest was for VMs.

When I was done with the settings, I killed that session and restarted VMware as a normal user.  I created a new VM with these characteristics:

  • I started with a 40GB independent, persistent SCSI VM, but ran into some problems when I tried to shrink it, and ultimately had to start over.  Since it was easier to grow a VM than to shrink it, I decided on 20GB to start.  Although I initially created this as a preallocated space, it seemed that clones (which I made from time to time as backup, as the installation and tweaking process went along) would default to being non-preallocated, and I decided that was actually better until things got settled, because non-preallocated drives were smaller and therefore easier to clone and to back up.
  • A 4GB independent, persistent IDE virtual drive, within this VM, for the paging file (VM > Settings > Hardware > Add).  This, I discovered, was best created after WinXP was installed.  Otherwise, WinXP was quite capable of brainlessly installing itself into this little partition and leaving the 20GB partition unused, thereby providing yet another reason to start over from the beginning.  Also, this drive was best created when the VM was powered down; only a SCSI (not preferred) drive could be created while the machine was powered up.  Note also that, as soon as you start down this path, Workstation may create a miniature version of a file for such a hard drive, even if you then abort the process -- in which case your later attempts to create that file may trigger strange results, until you investigate and delete any such runt file.
  • 1.5GB RAM.  I had opted for 32-bit Ubuntu and WinXP after numerous previous hassles with 64-bit systems.  The discovery of PAE-enabled kernels meant I could go above the ordinary 32-bit limit of 4GB of RAM, so I was comfortable with this 1.5GB allocation for a single VM.  I could have gone higher, but I had almost never reached the point of using even this much.
  • One single-core CPU.
  • Based on my own usage, I set "Don't automatically connect" for the floppy drive, USB devices, and printer.
  • Power:  enter full screen mode after powering on; close after powering off.
  • Shared folders:  always enabled, map as network drive, make read/write only those that needed to be written to in Windows.
  • No AutoProtect.
  • Guest isolation:  enable drag and drop; enable copy and paste; don't enable VMCI.
  • VMware Tools Updates:  use application default (currently update automatically).
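A few of these choices correspond to lines in the VM's .vmx configuration file.  An approximate excerpt, reconstructed from memory (treat the exact keys as a sketch rather than gospel):

```
memsize = "1536"
numvcpus = "1"
floppy0.startConnected = "FALSE"
```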
Then I inserted the Windows XP CD and installed Windows XP, using a slipstreamed CD with Service Pack 3 (SP3) on it.  In my first attempt at this, I had to deal with the problem of making the VM boot from the CD.  I had just learned how to do this.  First, in Ubuntu's Terminal, I typed "sudo gedit [path][filename].vmx," for the .vmx file pertaining to this VM, and then I went to the end of that file and added bios.bootDelay = "10000" and saved the edited file.  This gave me a ten-second delay on VMware's splash screen.  So then I had time, on reboot, to read the options and choose F2 for BIOS setup.  In BIOS setup, I chose Boot, moved the CD-ROM drive up to be first in the boot sequence, and then hit F10 to save and close.  Then I installed WinXP.  The process was completely automatic, this time; apparently the latest version of VMware Workstation was able to detect my settings from the underlying Ubuntu installation.

Once WinXP was done with its basic installation process, I right-clicked on the VMware Tools icon in the system tray (i.e., the lower-right-hand corner of the WinXP desktop), chose "Open VMware Tools," and selected all three items in the Options tab, and then closed that.  To get the Windows desktop to stretch all the way across the monitor, I used VMware's View > Stretch Guest (using the menu at the top of the screen), and then returned it to View > Autofit Guest, and for some reason that did it.  I had to map my network drives after listing them in Workstation's VM > Settings > Options > Shared Folders.  Then I turned to the process of tweaking my WinXP installation.  There are some additional VM-related notes in my post on that.  When I tried to open a PDF file from within the VM, I got a Default Host Application error message.  Another post discusses that problem.

I wanted to use Acronis TrueImage to make periodic images throughout the installation process, so as to capture each working state of the system in a single snapshot.  On a standalone WinXP installation, that would have been just a matter of inserting the Acronis CD and making the backup to a separate partition.  In the VM context, though, Acronis was willing to recognize only those partitions that were defined as part of the VM.  What I did instead, then, was to power down the VM, use Nautilus to copy the entire VM's folder to an NTFS drive, boot Acronis, and make an image of that VM folder.  (I also made a .zip of it in 7zip, just in case.)

To get the system to pause before loading Windows within VMware, so that I would have time to make a decision to adjust the BIOS or choose boot devices, I edited the individual virtual machine as described in a previous post.  Basically, I set Workstation's VM > Settings > Hardware tab > CD/DVD > "Connect at power on" and "Use a physical device" (though I was intrigued by the "Use ISO image" option).  In Terminal, I typed "sudo gedit [path][filename].vmx," for the .vmx file pertaining to this VM; and at the end of that file I added a line that said this:

    bios.bootDelay = "10000"

and that bought me ten seconds instead of one or two, when the VMware logo came up.

*** NOTE ***

At this point, I stopped developing this post.  Several other system issues had to be taken care of first, and those were superseded by non-system tasks.  When I returned to this post several months later, my purpose was just to close it down.  I had decided, by that time, to stop working on VMware within Ubuntu.  The following fragments are the other notes I had left over, in incomplete form, when I ceased working on this post.

Fragmentary notes continue as follows:

*  *  *  *  *


At some point in the process, I started getting an error message when I was trying to shut down the WinXP VM:
End Program - VMwareUser.exe
This program is not responding.
I did a search and found that virtually nobody was having this problem.  That was not a good sign.  A different search suggested that lots of people were having problems of one sort or another with VMwareUser.exe.  That was not a good sign either.  I didn't have an answer for this problem at this time.

After adding another program, Windows Explorer began flashing, refreshing itself every couple of seconds, in what appeared to be a different network drive problem.  I have written up the process of solving that in a separate post.

After tweaking the WinXP installation, I began adding programs.  I installed Copernic Desktop Search as one of those programs.  VMware Workstation treated my data drives as network drives, and Copernic regarded network drives as a sign that I was in a workplace, so I could not use their free version with VMware.  I downloaded their professional version ($40 to buy) and used it on a trial basis.  It was not able to save its index to a network drive.  Its index, I heard somewhere, would be about 10% of the total size of the files being indexed.  So if I had 100GB worth of files, I would have a 10GB index.  I definitely did not want that kind of monster sitting on my drive C.  But there seemed to be no alternative.  I decided to add a third virtual hard drive (30GB max, not preallocated) to the VM, made it drive Q, and told Copernic to save its index there.  When Copernic was done with its indexing, that drive

The only tweak that didn't seem to work was the one where I went into Internet Explorer and tried to save its Temporary Internet Files folder to some drive other than C.  It wouldn't save to a network drive.  But it also wouldn't even save to the little drive that I had created to serve as a paging file.  Possibly there wasn't enough disk space available for it there.

In the interests of improving performance, I worked through a VMware document on that subject.

Monday, September 20, 2010

Windows XP in VMware Workstation 7.1: Windows Explorer Keeps Refreshing

I was running Windows XP SP3 as a guest in a virtual machine (VM) in VMware Workstation 7.1.1 on an Ubuntu 10.04 (Lucid Lynx) host.  I had just installed a couple of freeware utilities when suddenly Windows Explorer began refreshing itself about every two seconds.  More specifically, it was refreshing the right-hand pane, showing files and folders, but not the left pane, showing the folder tree.

I thought at first it was a virus, but I was running antivirus software, and a scan with a different antivirus program turned up nothing.  Besides, I had just downloaded those programs from reputable sources (e.g., CNET) that supposedly certified them to be virus-free.

A search suggested this was a relatively common problem.  Several posts made me think it had to do with network drives, which in this case would mean the link between Windows and VMware.  I killed the VM, reverted to a previous snapshot, and started that, but the same thing was happening there as well.  I powered up a different VM in a different session of Workstation.  The problem was not occurring there.  I closed all sessions of Workstation and powered up the misbehaving VM in a new session of Workstation.  The problem recurred -- but then, after a minute or two, it stopped.  But then, after I used Windows Explorer some more, it resumed; but then it stopped again.

I downloaded the "Prevent Automatic Folder and Icon Refresh" registry edit from Kelly's Korner and ran that, and then rebooted Windows within the VM.  The problem was still there.  I took this to mean that *automatic* refresh was not the problem -- that, presumably, something was manually refreshing Explorer.

I tried displaying different drives' contents in Explorer, on the theory that maybe this was happening in connection with just one particular network drive.  That was not the case; it happened on all of them.

I wondered if a program installation was responsible for the problem.  But it was not clear to me how any of these would have been responsible for the fact that the flashing occurred within a snapshot that I had taken before installing them.

Then I came closer to what seemed like a possible answer.  I tried again to install the Microsoft Task Switch Powertoy.  I had tried before and gotten an error message, and the same thing happened again.  The error message was "Error 1606.  Could not access network location [gibberish]."  The gibberish was actually just a set of five squares with no characters in them.  It appeared that the installer was trying to access a network location whose name did not consist of valid characters.  Following advice I found in a search, I checked these registry keys for incorrect addresses:

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders

I did find several bad entries in the second of those five locations.  I corrected those and rebooted the VM.  I was now able to install the PowerToy.  Unfortunately, the flashing was still going on.  I went into VM > Settings > Options > Shared Folders and switched that to Disabled > Save.  The flashing stopped.  In Windows Explorer, I tried to go to another network drive, but got the message that it "is not accessible.  The network path was not found."  I went to drive C (not a network drive).  Its contents displayed OK.  No flashing.  I set Shared Folders back to Always Enabled.  No flashing on drive C.  Flashing on drive D.  I tried C again.  The folders there would refresh once, immediately after being selected, but then not again.  In Windows Explorer's menu, I went to Tools > Disconnect Network Drive and disconnected drive D.  Drive E was still flashing.  I went to Tools > Map Network Drive and mapped D again.  It was flashing.

A Microsoft Knowledgebase webpage said that a somewhat related problem (flickering in the left-hand pane of Windows Explorer) could be repaired at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer.  I had only a lowercase "policies" (not "Policies") key at that location.  Regedit would not let me create the uppercase version.  So I went into the lowercase "policies" key.  It did have an Explorer subkey.  I went in there and created a new value, NoRemoteRecursiveEvents, of REG_DWORD type, and gave it a value of 1.  I exited regedit and rebooted.  This step did not solve the problem.
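Expressed as a .reg file, the change I made was equivalent to this (note the lowercase "policies," matching the key my registry actually had):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\Explorer]
"NoRemoteRecursiveEvents"=dword:00000001
```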

I closed this VM and went back to the previous version, the one that did not flash.  I corrected the incorrect registry addresses (above) first.  Then I started reinstalling the programs that I had been installing when the problem began.  I went down the list and, what do you know, the flickering started when I installed FolderSize, which calculates the size of folders and apparently refreshes the screen while doing so.  After I uninstalled it, the problem went away.

Saturday, September 18, 2010

Windows XP in VMware Workstation: Default Host Application Error

I was running a Windows XP SP3 guest in VMware Workstation 7.1 on an Ubuntu 10.04 (Lucid Lynx) host.  When I tried to open a PDF file in the VM, I got a Default Host Application error message:

Default Host Application
Make sure the virtual machine's configuration allows the guest to open host applications.
A search for that sentence turned up no hits.  A modified search turned up only a few.  One thread led me to right-click on the PDF and select Open With.  There, for the first time ever, I saw the VMware icon and "Default Host Application" at the top of the list, labeled as the Recommended Program.  I clicked OK, but this put me back to the Default Host Application error message.  I guessed that maybe the problem was that I had already told Ubuntu to use Adobe Reader for Ubuntu (or maybe some other program) to be the default application for PDF files.  I had meant that to apply only when I was in Ubuntu, not when I was in a WinXP VM.  I checked VM > Settings > Options tab > Guest Isolation, and it did not have the box checked to "Enable VM communication interface (VMCI)."  So that seemed to be prohibiting the host (Ubuntu) from using its own default program to open the PDF when I was in the guest (WinXP).

The VMware Workstation 7.1 User's Manual made me wonder whether I had slipped into Unity mode, but I checked Workstation's View menu pick and, no, the setting was Autofit Guest.  I tried Open With again, but this time I browsed to the program that I wanted to use to read PDFs.  At this point, that program was Sumatra PDF.  But after I clicked on the Sumatra PDF .exe file, I was back at the Open With, and it had not remembered what I had just done.  I did it again; same result.  I tried WordPad instead.  It rendered the PDF as what looked like an XML file.  I closed that, went back to right-click > Open With, and this time the VM Default Host Application and WordPad were the only two programs listed.  Still no luck with Sumatra.  Another post made me think this was actually intended behavior, under the concept of "application sharing."  I didn't pursue that at this point.

In Windows Explorer, I went to Tools > Folder Options > File Types, scrolled down to the PDF file type, clicked Change, navigated to Sumatra . . . but still the same result:  it didn't remember what I had selected.  Following a tip, I went to Start > Run > CMD and typed "assoc .pdf."  This told me that the PDF file type was associated with pdf_auto_file.  A corresponding search led to the suggestion to go back into the File Types list, down to PDF, and click on Restore to reset it to the Windows default program for PDFs.  Now it wasn't set to run with anything.  I clicked on Change and navigated to Sumatra again, but it still didn't remember it.
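For reference, assoc reports the file type registered for an extension, and its companion ftype reports the command that file type actually runs; the pair would have shown exactly where the association chain was broken.  The assoc output below is what my system reported; I did not record any ftype output, so none is shown:

```
C:\> assoc .pdf
.pdf=pdf_auto_file

C:\> ftype pdf_auto_file
```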

The Microsoft Support page didn't seem to address this situation.  Ramesh, however, did provide an OpenWithAdd utility that added Sumatra to the Open With list.  After that, double-clicking on the PDF file opened the Open With list, with Sumatra PDF highlighted.  I clicked on the "Always use the selected program to open this kind of file" box and then OK.  I killed Sumatra, double-clicked on the PDF again, and this time Sumatra opened right up.  Problem solved!  Another way of doing it, if I hadn't wanted to run the OpenWithAdd utility, was apparently to use this registry edit:
[HKEY_CLASSES_ROOT\.pdf]
@="AcroExch.Document"
When I went to that location in my registry, I saw that it was still set to pdf_auto_file, so I made that change.  I wasn't sure whether that would have eliminated the need for the OpenWithAdd utility.

Saturday, August 28, 2010

Transitioning Away from Windows Toward Ubuntu: The Next Step

In September 2008, I reached a point of relative stasis in the development of my computer setup.  I was using two computers, each with its own monitor, but with a common mouse and keyboard via KVM switch.  Both computers were dual-boot setups, so that I could have gone into Windows or Ubuntu on either one; but I tended to just use one or the other.  Specifically, on one computer, I installed Ubuntu and then used VMware Workstation to run Windows XP in virtual machines (VMs).  On the other computer, I was almost always running Windows XP, rarely going into Ubuntu.

I had assumed, there at the end of summer 2008, that I would be revisiting this layout in summer 2009.  Generally, that didn't happen.  Instead, two years passed.  At this point in 2010, however, I found that there had been some developments, such that this system could evolve.

One development was that, sometime in the intervening two years, I set up a third desktop computer.  It was mostly a collection of hand-me-down parts, but it ran Windows XP well enough.  If I really needed to do something in XP, I probably could do it there.  Moreover, I had acquired a laptop, for a bargain price of around $350 -- plus another $250 or so that I didn't entirely expect to spend, when I went to the store, for an extended warranty, laptop sleeve, screen protector, wireless mouse, etc.  Such a deal!  Anyway, neither the laptop nor the bucket-shop computer was anything to write home about, but they would serve in a pinch.

Another development was that I had run into several system problems on the primary Windows XP machine.  It still ran stably, but flaky problems cropped up occasionally -- enough that I could have compiled a list, if I had taken a minute to note the ways in which the machine was not performing up to snuff.

Probably the most worrisome such problem was that I had reached a dead-end in my efforts to install Windows updates on that primary machine.  It just wouldn't install them.  I had revisited the problem repeatedly.  At this point, it would probably have been more effective to just reinstall WinXP from scratch on that machine.

That update situation had been persisting for a while.  It hadn't bothered me much.  Recently, though, something else had happened.  I had somehow started using McAfee antivirus software, and I had just discovered that McAfee had been piling up gigabytes of stuff in .bup files.  I was concerned that this feat would have been possible only by mixing in data files; therefore, I had begun an effort to compare against an old backup and figure out what, if anything, might have gone missing over the past several months or longer.  So that was the end of McAfee, for me, but it was also a wake-up call to take computer security more seriously.  That meant keeping updates installed, but perhaps it also meant it was time to continue my migration away from Windows.

For such reasons, I thought it might be time to consider running both of my primary computers under the same kind of Ubuntu - VMware - WinXP VM setup.  If I needed XP, I could still drop back into the dual-boot, or just use that hand-me-down computer.  If I needed Vista or Windows 7, I could use the laptop, which was presently running the one but which apparently qualified for a free upgrade to the other.

In addition to those developments on the Windows side, things had also been happening on the Ubuntu side of the equation.  First, the good things.  Ubuntu was looking good and running well.  I had learned a bit more about Wine.  Generally, I was continuing to become more familiar and comfortable with the world beyond Windows.  I still had occasional issues with VMware, but generally nothing lethal.  As an additional consideration, Oracle had created the impression that it might be positioning VirtualBox to compete effectively with VMware.  Even without that, it was still nice that I could leave a WinXP VM running in Ubuntu for a week without needing to reboot it, while that would just never work on the native WinXP machine.

At the same time, after these several years of experimenting with Ubuntu, I had to agree with someone who had said that Linux distributions still tended to be terribly unpolished in comparison with Windows.  Somewhat contrary to my expectations, I was not finding many instances in which Ubuntu programs were delivering superior functionality and reliability.  For instance, I had bought a copy of Beyond Compare, a file synchronization program.  (I subsequently realized that I probably could have gotten by with a freeware alternative, but whatever.)  There were Windows and Linux versions of Beyond Compare.  The Linux version did not seem to be very actively developed, and it was having problems that I wasn't having in the Windows version.  The same was true elsewhere.  I was still using IrfanView, which did not yet have a Linux version; I was still using CoolEdit 2000, because it had features that Audacity did not provide.  Generally, I was finding that Ubuntu was great as an operating system; I was finding it useful as an underlying layer, to handle tasks that WinXP couldn't handle (e.g., delete files that WinXP couldn't delete); I found that WinXP running in a VM on Ubuntu was more stable (although slower) than a native WinXP installation.  But at the point of application, for my purposes, Ubuntu wasn't a serious competitor against Windows XP.  And I was increasingly unwilling to invest the time to learn how to do everything in two or more different ways.

My conclusion, at this point, was that the best of both worlds called for running Windows XP within virtual machines (in VMware or otherwise) on an Ubuntu operating system base, on an Ubuntu/XP dual-boot computer.  I had already worked through many of the issues in this sort of setup, and could therefore hope to be efficient and preserve multiple troubleshooting options without too much of a time investment.  If Windows 7, Ubuntu, or some other operating system (OS) began to display capabilities that I badly needed, I would hopefully be able to incorporate those OSs into my setup, one computer at a time, without too much disruption overall.

Sunday, July 25, 2010

Using XQDC X-Setup Pro in Windows XP

According to Tex and Eric, the bankruptcy of WUG has ended that organization's sponsorship of X-Setup Pro, a Windows tweaking utility.  Their message ends as follows:

You can still download the last version from MajorGeeks or BetaNews. The portable edition and the U3 version are available from MajorGeeks as well.

In case you lost your serial number use this one instead: XSA092-11TA9R-8K12YT
I had used this program years earlier.  I now installed and tried to run it in a Windows XP virtual machine (VM) in VMware Workstation 7.1 on an Ubuntu 10.04 host.  When I clicked on the "Classic" button, it immediately presented me with this error message:
C:\Program Files\X-Setup Pro\bin\xqdcXSPUI.exe (/START): Access is denied. (Win32 Error Code 5)
A search for precisely that error turned up nothing.  In a modified search, I saw an indication that "Access is denied" and "Win32 Error Code 5" mean the same thing.  Before researching that further with a refined search, it occurred to me to see whether it would happen again.  This time, clicking on Classic raised a Comodo Defense+ (firewall) dialog asking me if it was OK to proceed.  Allowing that, and another two or three after it, solved the problem:  the X-Setup windows opened.

I wondered whether X-Setup superseded Tweak UI.  Laptop (magazine?) rated X-Setup more highly than Tweak UI; but they observed that novices can get lost in X-Setup and may find Tweak UI more suitable for their needs.  My recollection, circa 2001, was that I had hosed my system with X-Setup once or twice.  I decided to use both for a while.  Following the tip from Tex and Eric, and consistent with a current effort to make Windows XP perform better in VMware Workstation, I decided to try using the portable versions of X-Setup and Tweak UI.

The most interesting feature of X-Setup, for me, was its Record function.  This feature made it possible to create distinct .reg files for each registry tweak.  So I could pick and choose the registry edits I wanted to use this time, and could run their "undo" counterparts if I didn't like the effects.
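To illustrate the idea, here is the sort of do/undo pair that such a Record function could produce.  This is a hypothetical example of mine, not actual X-Setup output; the MenuShowDelay tweak (shortening the Start menu delay, XP's default being 400 milliseconds) is just a stand-in for whatever tweak one might record.

```
Windows Registry Editor Version 5.00

; hypothetical "do" file: shorten the Start menu delay
[HKEY_CURRENT_USER\Control Panel\Desktop]
"MenuShowDelay"="100"

; --- and, in a separate "undo" file: restore the XP default ---
; Windows Registry Editor Version 5.00
;
; [HKEY_CURRENT_USER\Control Panel\Desktop]
; "MenuShowDelay"="400"
```

Keeping each tweak in its own .reg file, with an undo counterpart, is what makes it practical to pick and choose edits and roll back the ones that misbehave.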

The explanations of various options in X-Setup were surprisingly articulate and lucid.  Those explanations did not conceal some degree of confusion in the structure of X-Setup tweaks, however.  There was redundancy; there were instances when I thought I had changed something and then came across something that looked very similar yet had not been changed.

While there was no denying that X-Setup had a large number of tweaks, I found that they did not cover some registry edits I would have liked to see.  The authors readily acknowledged, as well, that some of the tweaks performed by some X-Setup plug-ins would not be captured in .reg files.  I found, in addition, that many of the tweaks I generated from within X-Setup were already set out on the Kellys-Korner and Elder Geek websites, among others.

After running X-Setup and making system adjustments, I found that the new Windows XP installation I had used it on was performing very slowly -- even though I had made sure not to make any changes that X-Setup had flagged as potentially dangerous.  Ultimately, this slowness and other problems led me to wipe my new Windows installation and start over.  I was not sure whether X-Setup was directly to blame for such problems, but I did make sure, the second time around, to proceed with caution.

Despite those caveats, X-Setup Pro did generate a number of useful scripts for me, and when used with suitable caution, it provided very helpful ways of improving my Windows XP installation.

Sunday, July 18, 2010

Improving Performance in VMware Workstation 7.1

I reviewed the latest version of VMware's document, Performance Best Practices for VMware Workstation, to see what hardware purchases or sales it would suggest for my situation.  The document consisted of four main sections, pertaining to host system hardware, the host operating system (OS), VMware Workstation and virtual machines (VMs), and guest OSs.  I was particularly interested in information about running Windows XP on an Ubuntu host, since that was the setup I was using.  This post does not say much about Windows host systems.

Section 1:  Hardware

A.  CPUs

1.  Hyperthreading

VMware (p. 7) recommended using a CPU that would support hyperthreading (also called "logical processing").  (The OS and the BIOS would have to support it, and the user would have to make sure it was enabled in the BIOS.)  Patrick Schmid at Tom's Hardware said that the primary benefit of hyperthreading was to permit smoother responsiveness, but that it would not yield noticeable increases in performance otherwise, and certainly would not substitute for having multiple cores in the CPU.  Intel's own writeup of hyperthreading affirmed that responsiveness was a leading benefit.

AMD quoted VMware as saying, “Virtual machines are preferentially scheduled on two different cores rather than on two logical processors on the same core.”  That is, VMware tried to assign different VMs to different CPU cores, if available.  This seemed to imply that AMD CPUs would do better when the number of CPU cores matched or exceeded the number of VMs being run.  But AMD suggested that increasingly complex software (e.g., multithreading in Microsoft Excel 2007) could keep as many as 48 CPU cores busy, even if the number of VMs being run was much lower.

AMD's point in that particular article was that its Opteron CPU, with more cores, could significantly outperform Intel's Xeon, with hyperthreading and fewer cores.  Anandtech's comparison of state-of-the-art Intel and AMD CPUs in March 2010 found, however, that the Xeon did much better than the Opteron.  Recent observations suggested that AMD might be moving toward implementing hyperthreading after all.

A search on Newegg.com turned up 20 Intel CPUs with hyper-threading capabilities, starting at $115 and ranging above $1,000.  (The least expensive Intel CPU listed on Newegg at that point cost $41.)  Anandtech said that the AMD advantage was in terms of price, with good performance at much lower cost.  One Anandtech commentator said, "The twelve-core AMD Opteron 6100 and six-core Xeon 5600 perform more or less the same," but suggested that Intel had two advantages at the enterprise level:  RAS (i.e., reliability, availability, and serviceability, including the ability of systems to heal themselves) and licensing.

2.  MMU Virtualization

VMware (pp. 7-8) also expressed a preference for second-generation hardware-assisted MMU virtualization, called rapid virtualization indexing (RVI) or nested page tables (NPT) in AMD processors or extended page tables (EPT) in Intel processors.  (Wikipedia indicated that NPT was used during development, but that RVI was the term currently used.)

VMware found that, in its ESX product, AMD's "RVI provides performance gains of up to 42% for MMU-intensive benchmarks and up to 500% for MMU-intensive microbenchmarks."  VMware found similarly dramatic performance improvements for Intel's EPT, provided the virtualization product made suitable adjustments -- which, VMware said, ESX did.  It was not clear that the same could be said for VMware Workstation.  Pending further research, this information made an AMD CPU with RVI the more certain performance boost for an ordinary user of Workstation.

At this writing, neither Newegg nor TigerDirect offered products featuring any of those CPU-related acronyms.  According to Wikipedia, hardware-assisted MMU virtualization debuted in the third-generation AMD Opteron and, at Intel, EPT debuted in the Nehalem architecture.  (That same Wikipedia page said that RVI was supported, at VMware, in ESX Server 3.5 and later -- and also, interestingly, in Oracle's VirtualBox 2.0 and later.)  At Newegg, at this writing, Opterons were available in the range of $190-1,300 (and would require motherboards in the $200-600 range).  The Nehalem appeared in the Core i7 line of CPUs, available at Newegg for $290-1,140.  (Newegg didn't list a canned search option for Nehalem or Westmere cores.)

I looked at some historical prices to get a rough idea of how processor pricing trends worked.  On the Intel side, the Core 2 Duo E6700, introduced in July 2006 for $530, was apparently available (in some form) for $316 in June 2007, around $212 in July 2008, $130 in September 2009, and $95 in July 2010.  These values suggested that prices dropped dramatically (perhaps 40%) in the first year, less dramatically (perhaps 20% of the original price) in the second year, and likewise (perhaps 10% of the original price per year) over the next couple of years.  (Intel apparently discontinued the E6700 (presumably meaning that manufacturing ceased) in February 2008.  At that time, the chip may have been selling for somewhat less than half the original price.)  On those data, the rate of discount from the original price was cut in half in each succeeding year, during the first several years of the product's life.

I took a particular interest in one of the Core i7 CPUs at the bottom of Newegg's list, pricewise.  The Core i7-870 that was available for $290 in July 2010 had debuted at a list price of $562 in September 2009, representing a 49% drop in less than a year.  The data from the preceding paragraph suggested that the consumer might anticipate another 25% reduction from the original price (i.e., half of the previous year's price cut), for a price of around $150, by summer 2011.  On this basis, it seemed to me, personally, that I might save myself nearly $150 (plus whatever price drop might apply to the corresponding motherboard) if I waited until summer 2011 to implement these particular suggestions for VMware performance.
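The halving-discount heuristic in the last two paragraphs can be sketched in a few lines of Python.  The function name and the structure of the model are mine; the figures are the ones quoted above, and the projections are as rough as the heuristic itself.

```python
# Rough sketch of the depreciation heuristic described above: the annual
# discount, measured against the original list price, is cut roughly in
# half in each succeeding year.
def projected_price(list_price, first_year_cut, years):
    """Projected price after `years` years, halving the annual cut each year."""
    price, cut = list_price, first_year_cut
    for _ in range(years):
        price -= list_price * cut
        cut /= 2
    return price

# Core i7-870: $562 list price, ~49% drop observed in the first year.
year1 = projected_price(562, 0.49, 1)  # ~$287, close to the observed $290
year2 = projected_price(562, 0.49, 2)  # ~$149, consistent with the summer-2011 estimate
```

The model is only a curve-fit to two or three data points, so it is at best a guide to whether waiting a year is likely to be worthwhile.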

Intel described the Core i7-870 as having both hyper-threading and VT-x virtualization technology.  But VMware (p. 8) indicated that VT-x was the first, not the second, generation of virtualization technology.  Its potentially outdated status was reflected in VirtualBox documentation suggesting that VirtualBox had been designed to perform better without enabling this sort of hardware-assisted CPU virtualization at all.  As of late 2008, someone in a VMware Community post considered VT-x a major step forward, but noted that hardware-assisted virtualization in Workstation 7 was supported only on 64-bit hardware.  I did have 64-bit hardware, so that was not a concern for me.

But which Intel CPU would I have to be tracking, if I wished to get into the second-generation Intel EPT (or AMD RVI) MMU virtualization technology?  Intel characterized EPT as an "extension" of VT-x and, to revert to the (Wikipedia) observation offered above, that extension was apparently to be found on the Nehalem (45nm) or Westmere (32nm) architectures.  Evidently not all Core i7 CPUs employed that architecture, then, else the i7-870 would have it.  (I was not alone in being confused about this.)  It seemed that what I was looking for might be, in Intel-speak, VT-x2.  Further searches for insight led to a simple request for a list of VT-x2 features implemented in various Core i7 CPUs -- to which Intel provided the bizarre response that, no, actually, it was hard to provide any such list, and a pointer to lengthy software developer's manuals.  Indeed, it seemed that VMware was somewhat behind the curve:  while it was talking about EPT (as implemented in VT-x2), Intel was meanwhile moving on to VT-d and other technologies.  Then again, another Intel source seemed to say that VT-d was an older technology.

The message seemed to be that I, as a consumer, didn't need to know about this yet.  I decided to try a different approach.  I went back to Newegg's list of Core i7 processors and tried working my way up the list until I found one that did have VT-x2.  After the i7 860 and 870, next on Newegg's list was the 930.  My search regarding the 930 and VT-x2 led to an Intel Virtualization Technology List indicating that, well, yes, a number of Intel CPUs did support VT-x.  I looked at them individually to see if perchance they supported VT-x2, that information unfortunately not being included in the alleged virtualization technology list.  Bottom man on this list was, again, the 860, and they confirmed that it did support both VT-x and VT-d.  At the top end of the set, we had the 970.  The 970's spec sheet didn't say anything about VT-d, so maybe it was indeed being phased out.  No mention of VT-x2 either; just VT-x.  Following some leads, I came around to the discovery that there was also something known as VT-i, referring to the Itanium processor.  It wasn't helpful information, but at least it was information.

Looking back at that page on the i7-970, I noticed that Intel said, in greyed-out letters, "No Datasheet Available."  But, hmm, did that mean there were datasheets for others on that virtualization technology list?  I tried the 920.  There, they had a "Download Datasheet" link that led to about a dozen Technical Documents.  I started with the 96-page Intel® Core™ i7-900 Desktop Processor Extreme Edition Series and Intel® Core™ i7-900 Desktop Processor Series Datasheet, Volume 1.  But no, according to Acrobat, there were no references to VT-x2 there.  How about VT-x?  Nope.  Alright, then, volume 2?  None!  Well, how about just plain old "virtual"?  Still nothing on what virtualization technology any particular CPU might have.  This was a contrast against another set of technical documents provided on that same page, for the i7-800 series.  Here, I found references to both VT-x and VT-d.  Volume 1 of that datasheet said, on page 29, that the i7-800 series did support EPT.  So that was pretty confusing.

I decided to try the Developer's Manuals.  The description mentioned virtualization only in connection with the Intel® Virtualization Technology FlexMigration (Intel® VT FlexMigration) Application Note and the Intel® 64 and IA-32 Architectures Software Developer's Manual Volume 3B: System Programming Guide.  The Application Note contained several references to VT-x, but did not distinguish it from VT-x2 or VT-d.  Volume 3B of the Software Developer's Manual contained no references to VT- of any type.  Both documents did refer to Virtual Machine Extensions (VMX), and the Manual contained lots of information on how virtualization works.  But I was not able to figure out, from this information, which CPU I should buy.  This was pretty strange, given the conclusion that Intel's whole reason for offering virtualization in only some CPUs was driven by marketing.

It occurred to me that perhaps the people at VirtualBox would provide some insight into what they would recommend, if I opted for a VirtualBox-compatible CPU.  A search produced very meager results along these lines.  I went to the VirtualBox website and looked at their documentation.  They said that "the vast majority of today's 64-bit and multicore CPUs ship with hardware virtualization."  No distinction there between VT-x and VT-x2.  They also said, "The biggest difference between VT-x and AMD-V is that AMD-V provides a more complete virtualization environment."  The use of what they called "nested paging" (i.e., more advanced virtualization, apparently what others meant when they referred to VT-x2) could bring a "significant" performance improvement -- of up to 5%.  Five percent!  I was thinking we were talking about the difference between success and failure, and now it appeared this might be just one more incremental improvement.  Nested paging, they said, was standard on Intel's Core i7 (Nehalem) CPUs, and also on AMD's Barcelona CPUs.

I did finally find, at Tom's Hardware, a list of CPUs that would support "XP Mode" Virtualization.  XP Mode was the capability of running a near-perfect emulation of Windows XP within Windows 7 (which would enable people to use older applications on the newer operating system).  In March 2010, Microsoft altered Windows 7 so that it would no longer require hardware virtualization in order to provide XP Mode.  But the Tom's Hardware list dated from a year earlier, so it gave an idea of what Intel CPUs I would need to consider if I wanted hardware virtualization for purposes of improved performance in VMware.  The Tom's Hardware list actually drew from a list posted by Ed Bott on ZDNet.  Ed provided a list of Intel desktop and mobile CPUs.  His desktop list boiled down to the following, which I provide here, in ascending order according to their current prices according to Pricewatch.com (or, failing that, on Newegg or Amazon):


So, bearing in mind that these were approximate prices, a person dead-set on obtaining a virtualization-supporting Intel CPU for less than $100 would have more than a half-dozen to choose from.

It appeared, in other words, that we were no longer dealing in the rarified world of enterprise-level Xeon processors; we humble consumers were treating virtualization as a simple commodity.  Paying more would bring, not necessarily any improvements in virtualization per se, but rather in those other characteristics that people like in their CPUs, including hyperthreading.

In that case, I thought that perhaps I should take a look at AMD, just to be sure that I wasn't blowing off an already affordable option.  If we were forced to accept the simplistic conclusion that you should just be happy knowing you could get some kind of hardware virtualization with any Core i7 CPU, why not price any AMD CPU with AMD-V virtualization?  According to a simple statement from AMD, that meant almost any CPU that I would be looking at.  Here, comparable to the situation with Intel, a Newegg search for any desktop CPU with virtualization technology support gave me AMD Semprons for as low as $37.

I reflected on my current situation.  To improve VMware's performance, I was looking to replace an AMD Athlon 64 X2 5000+ CPU.  But that dual-core CPU, which was hot stuff four years earlier, did support virtualization already.  The question seemed to be, what kind of virtualization?  What they were offering now was AMD-V.  Seeing the amount of time I had already invested in this general line of questioning, I decided I should just assume that it was better than the virtualization of yesteryear, and that having it on a faster CPU would be better still.  It seemed, in short, that I might just upgrade to a somewhat more up-to-date CPU, without worrying much about understanding VMware's hardware suggestions.

VMware (p. 17) said that, if I did have hardware-assisted virtualization in my CPU, Workstation would typically set it up automatically, but there was the option of changing the default in VM > Settings > Hardware tab > Processors > Virtualization engine > Preferred mode.  They also said (p. 26) that, if the system was using MMU, performance would be best if VMI (i.e., software virtualization:  "virtual machine interface") was disabled (VM > Settings > Hardware tab > Processors > VMware kernel paravirtualization).  Mine was grayed out.  I assumed it was something I would have to set when the machine was powered down, or perhaps in root mode ("sudo vmware").  They also said, "No Microsoft operating systems support VMI," but I wasn't sure what the situation would be in the case of an Ubuntu host.
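Much of the hunting above comes down to the question of which virtualization features a given CPU actually has.  On a Linux host, the kernel reports CPU capabilities in /proc/cpuinfo, so a small script can check for the relevant flags: "vmx" (Intel VT-x), "svm" (AMD-V), "ept" (Intel extended page tables), and "npt" (AMD nested page tables, i.e., RVI).  This is a sketch of mine, not anything from the VMware document; the sample flags line is fabricated for illustration.

```python
# Check a Linux host's CPU flags for hardware virtualization support.
# vmx/svm indicate first-generation hardware-assisted CPU virtualization;
# ept/npt indicate second-generation hardware-assisted MMU virtualization.
def virt_support(cpuinfo_text):
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return {
        "hw_virt": bool(flags & {"vmx", "svm"}),      # VT-x or AMD-V
        "hw_mmu_virt": bool(flags & {"ept", "npt"}),  # EPT or NPT/RVI
    }

# In practice: virt_support(open("/proc/cpuinfo").read())
# Hypothetical flags line for an AMD CPU with AMD-V but no nested paging:
sample = "flags\t\t: fpu vme de pse msr cx8 lm svm"
print(virt_support(sample))  # {'hw_virt': True, 'hw_mmu_virt': False}
```

A check like this would have answered, in a few seconds, the question that the spec sheets and datasheets above never quite settled.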

B.  Memory

VMware's recommendation on memory (p. 8) was just to make sure you had enough.

C.  Storage

VMware recommended (pp. 8-9) having enough disk storage space, but also emphasized making sure it was configured correctly.  They mentioned the potential for improved performance from RAID.  Browsing among various sources suggested, generally, that there could be significant performance improvements (and possibly greater performance smoothness) in a RAID 0 setup, where the program files were installed on two (or more) hard drives.  By contrast, it seemed to be generally agreed that a RAID array would make less of a performance difference in the handling of data files.

D.  Other Hardware

VMware offered suggestions about networking and hardware BIOS settings.  These recommendations were worth reviewing for some purposes, but did not seem to require any purchase decisions for my purposes.

E.  Summary

The Hardware section of Performance Best Practices for VMware Workstation left me with the impression that, all other things being equal, VMs will perform better on multicore CPUs, and that hyperthreading is a plus.  I was not entirely able to penetrate the jargon about MMU virtualization; the general conclusion there seemed to be that I should shop for a CPU that supported a relatively recent generation of virtualization technology.  Assuming no bottlenecks due to inadequate RAM or disk storage space, the other main performance recommendation for my purposes was to use a striping RAID arrangement.

Section 2:  Host Operating System

This section contained virtually no relevant suggestions for Linux-based systems.

Section 3:  VMware Workstation and Virtual Machines

VMware said (p. 16) that most applications running in Workstation would perform nearly as well as in native Windows.  For the "small percentage of workloads" that would experience noticeable performance degradation, they had several CPU-related suggestions:
  • Don't assign more of a load to the CPU than it can handle.
  • Don't assign more CPU cores to a VM than it can use.
  • Monitor CPU usage with the Linux "top" program.
  • When using a single virtual CPU (vCPU), as I was likely to do, I would get better performance with a UP rather than an SMP kernel or hardware abstraction layer (HAL).
  • The guest operating system may not switch to the appropriate HAL if the CPU settings change later (p. 27).
They said that Windows operating systems (OSs) newer than XP would use the same HAL/kernel for both UP and SMP installations.  It sounded like that was not the case for WinXP, however.  Microsoft seemed to say that XP would detect the type of system and would install the correct HAL automatically.  They said this:
Microsoft does not support running a HAL other than the HAL that Windows Setup would typically install on the computer. For example, running a PIC HAL on an APIC computer is not supported. Although this configuration may appear to work, Microsoft does not test this configuration and you may have performance and interrupt issues. . . . Microsoft recommends that you switch HALs for troubleshooting purposes only or to workaround a hardware problem.
So the HAL issue seemed to be something to be aware of, in some situations, but not something of practical relevance for a user of Windows XP, Vista, or Windows 7.  I was curious which HAL was installed on my system, though.  As advised by Kelly's Korner, I went to Control Panel > System > Hardware > Device Manager > Computer.  On my native WinXP installation, it said ACPI Multiprocessor PC.  In a newly installed WinXP VM in Workstation set to use just one processor and one core, it said ACPI Uniprocessor PC.

VMware (p. 17) said that, if there were other VMs or programs running in the background, performance of a VM in the foreground would be noticeably better if the settings were changed in Workstation (i.e., not in any particular VM).  The advice was to go to Edit > Preferences > Priority, and set "Input grabbed" to High, and "Input ungrabbed" to Normal.  But Workstation gave me no such options.

According to VMware (p. 18), memory-related performance could be affected in several ways.  First, there needed to be enough RAM available to the host system for its own purposes.  My system had 6GB of RAM. Normally, some of that might have gone unrecognized by a 32-bit OS, but I was running a PAE-enabled kernel in 32-bit Ubuntu 10.04.  Ubuntu's Sysinfo reported that my system's total RAM was 6050 mebibytes (MiB) (i.e., about 6.3 billion bytes).  Running Workstation as root (i.e., "sudo vmware"), I had set RAM to 5000MB (by which Workstation presumably meant 5000 x 1 million), leaving more than 1GB of RAM for Ubuntu system operations and whatever programs I might be running in native Ubuntu.  I did not typically run many programs in Ubuntu.  So it seemed that I had allowed enough RAM for the system.  It did occur to me, though, that if I was going to run two distinct sessions of Workstation (as opposed to running two VMs within a single Workstation session), I might want to cut that 5000MB figure in half for each Workstation session.
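The unit juggling in that paragraph (Sysinfo reporting binary mebibytes, Workstation apparently counting decimal megabytes) can be spelled out in a couple of lines.  This is just my arithmetic check on the post's figures, under the stated assumption about what each tool means by "MB":

```python
# Sysinfo reports mebibytes (1 MiB = 2**20 bytes); Workstation's RAM
# reservation is taken, as in the post, to mean decimal megabytes
# (1 MB = 10**6 bytes).
total_bytes = 6050 * 2**20      # 6050 MiB reported by Sysinfo
reserved_bytes = 5000 * 10**6   # 5000 MB reserved for Workstation
headroom = total_bytes - reserved_bytes

print(round(total_bytes / 1e9, 1))  # 6.3 -- "about 6.3 billion bytes"
print(round(headroom / 1e9, 2))     # 1.34 -- "more than 1GB" left for Ubuntu
```

So the "more than 1GB of RAM for Ubuntu" figure holds up, with about a third of a gigabyte to spare.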

VMware (p. 18) also advised that the best possible performance would come from requiring Workstation to "Fit all virtual machine memory into reserved host RAM" (Edit > Preferences > Memory tab > Additional memory).  But they provided this caveat:
NOTE:  The setting described in this section affects only whether or not a virtual machine is allowed to start, not what happens after it starts . . . . After a virtual machine starts, other factors . . . [e.g., change of applications running in the host OS] can change.  In such situations, even if you selected the Fit all virtual machine memory into reserved host RAM option, virtual machine memory swapping might still occur.
Since I did most of my work in WinXP, the message to me seemed to be to make sure that there was enough RAM available to the Ubuntu host so that it would not need to be raiding the WinXP guests.  This was consistent with the advice of VMware (p. 19).  They warned particularly about host applications that lock memory.  While it did not apply to my configuration, it was also interesting that they recommended providing no more than 896MB to 32-bit Linux VMs.  To monitor what might be happening, they suggested checking for swap activity in the host and virtual machines.  Doing this in Linux, they said (p. 29), involved running "stat" to display the "swap counters," and verifying that both the si and so counters were near zero.  I wasn't sure that their remarks applied to the Ubuntu version of stat, though.  A search turned up a manual page that didn't say anything about swap.  That page made me think that a different search, focusing on the bash shell, might be more illuminating.  But that turned up nothing.  This really did not seem to be something that the world was blogging about.  Eventually, it appeared that what we were really looking for was vmstat, for which a search produced a couple hundred hits.  Brian Tanaka recommended running "vmstat 5 10" to get an average impression of what was happening on the system.  That didn't work on my system, but the vmstat manpage led me to try "vmstat -a -n 5 10" and that gave me ten indications that si and so were at zero.  So I seemed to be OK there.
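Rather than eyeballing ten rows of vmstat output, the si/so check can be automated.  Here is a Python sketch of mine that parses the columns by header name; the vmstat output below is a fabricated sample, and in practice one would capture the real thing (e.g., with subprocess running "vmstat -a -n 5 10").

```python
# Check the si (swap-in) and so (swap-out) counters in vmstat output.
# vmstat's second line names the columns, so we locate si/so by name
# rather than hard-coding column positions.
def swap_active(vmstat_output):
    lines = vmstat_output.strip().splitlines()
    header = lines[1].split()
    si, so = header.index("si"), header.index("so")
    samples = [row.split() for row in lines[2:]]
    return any(int(row[si]) + int(row[so]) > 0 for row in samples)

# Hypothetical output of "vmstat -a -n 5 10" (two samples shown):
sample = """\
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free  inact active   si   so    bi    bo   in   cs us sy id wa
 1  0      0 512344 201100 312000    0    0    12     8  150  300  5  2 92  1
 0  0      0 512100 201200 312200    0    0     0     4  140  280  3  1 95  1
"""
print(swap_active(sample))  # False -- si and so stayed at zero
```

A False result corresponds to the happy outcome above: no swap activity, so the VM's memory allocation is not being raided.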

VMware (p. 29) also pointed toward a knowledgebase page about "excessive page faults generated by Windows applications."  To see if this was a problem, they suggested using Start > Run > perfmon. I tried that, inside a WinXP VM.  At the top center of the System Monitor graph, I clicked the + (Add Counters) button.  I got an error message:
System Monitor Control
At least one data sample is missing.  Data collection is taking longer than expected.  You might avoid this message by increasing the sample interval.
This message will not be shown again during this session.
I took that to be a statement that my VM was running very slowly, which was not surprising, because I had some very intensive processes going on elsewhere on the computer.  I okayed out of that message and, following the advice on that page fault webpage, proceeded to choose Memory as my performance object, selected Page Faults/sec as my counter, clicked Add.  To get an accurate sample, I considered the advice from their error message:  I clicked on the Properties icon along the top and thought about changing it to "Sample automatically every 2 [or 3] seconds."  But then I decided the one-second sample was ticking along OK, and left it at that.  I was seeing occasional spikes in page faults.  The webpage advised that I could trace this to a particular application by going back to the Add Counters button, making Process my performance object, and then choosing a process of particular interest.  I named one of the very intensive processes I had underway.  Sure enough, I got a line across the top of the graph, indicating that that process was accounting for 90-100 (percent?) of something related to page faults.  Very interesting.  So basically this seemed to be telling me that a process that I knew was soaking up a lot of system resources was, in fact, soaking up a lot of system resources.

VMware (pp. 20-21) discussed ways in which page sharing and memory trimming, intended to promote efficiency, could degrade performance in some cases.  My situation did not seem to fall into those categories, so I made no adjustments there.  They said that, of course, a local disk drive would be a faster home for a VM than would a network drive.  They provided other tips that they had also indicated somewhere during the VM setup process:  for best performance, use IDE rather than SCSI virtual disks, and preallocated rather than growable, and independent and persistent rather than nonpersistent, and don't use snapshots.  They also (p. 22) offered some suggestions that I hadn't encountered previously:  with the machine powered off, turn off debug mode (VM > Settings > Options tab > Advanced > Settings > Gather debugging information > None).  Other performance tips (p. 23):  run a general availability (GA) version of Workstation, not a debug or beta version.  Make sure you have designated the right operating system (VM > Settings > Options tab > General > Version).  Disconnect your optical drives from your VM until you need them (VM > Settings > Hardware > CD/DVD > uncheck Connect at Power On).

To sum up, section 2 of Performance Best Practices for VMware Workstation did provide a number of practical tips on how to adjust Workstation to run more efficiently.  I was not able to understand and apply all of them, and some (e.g., make sure you have enough RAM) were common sense if not simply redundant.  What I derived from the discussion of cores was that, if I did get a new multicore processor, I should probably experiment, as I had done with my present CPU, to see how it performed with various numbers of cores assigned in Workstation.

Section 3:  Guest Operating System

In this section, VMware led off (p. 25) with suggestions:  make sure you're using a guest operating system that Workstation supports; keep VMware Tools updated; disable screen savers and animations; run backup and antivirus scans in off-peak hours; use a timekeeping utility suitable for the guest rather than the VMware Tools time-synchronization option.  VMware (p. 28) also referred to impacts on efficiency wrought by guest OS "idle loops."  It appeared that tweaking this would be painstaking and would likely yield minor effects.

VMware (p. 30) confusingly said, "It is best to use virtual SCSI hard disks in the virtual machine."  This differed from the installation process, which said (at least at one point) that IDE drives were recommended.  Bizarrely, VMware directed me to a Windows webpage dated December 4, 2001.  More promisingly, VMware also pointed toward their KB9645697 webpage regarding the splitting of large I/O requests into 64KB units.  The gist of their suggestion here was, "Changing the guest registry settings to issue larger block sizes can eliminate this splitting, thus enhancing performance" (p. 30).  The way to do that was sketched out on page 30 (section 2.2.6.1) of a PDF document entitled User's Guide: Fusion-MPT Device Management.  But in any case, this called for an edit of the registry setting HKLM\SYSTEM\CurrentControlSet\Services\Symmpi\Parameters\Device\MaximumSGList, and there was no such setting in my VM.
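For the record, had that Symmpi key existed in my VM, the edit described in the Fusion-MPT guide would presumably have looked something like the following .reg fragment.  The 0xFF value here is my own illustration of a commonly cited setting (said to permit transfers up to roughly 1MB), not a value taken from the guide; anyone actually trying this should use whatever value the guide recommends for their setup.

```
Windows Registry Editor Version 5.00

; Hypothetical: this key did not exist in my VM.  The dword value is
; illustrative; consult the Fusion-MPT guide for a recommended value.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Symmpi\Parameters\Device]
"MaximumSGList"=dword:000000ff
```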

VMware (p. 30) also recommended that, if I did use IDE rather than SCSI virtual disks, I should make sure DMA access was enabled.  To do this, I went into Start > Run > devmgmt.msc > IDE ATA/ATAPI controllers > right-click on each channel > Advanced Settings tab > look at Current Transfer Mode.  If it says PIO, toggle the other box, Transfer Mode, between PIO and DMA to get Transfer Mode = DMA and Current Transfer Mode = DMA.

Another performance suggestion (pp. 30-31):  defragmentation.  Start by defragmenting the guest, then use VM > Settings > Hardware tab > Hard Disk > Utilities > Defragment, then defragment the host (not applicable in Linux hosts).  Defragment before creating linked clones or snapshots; afterwards is too late.  I was only creating independent clones, so this did not seem to apply.  Nonetheless, I did have a defrag utility in the WinXP guest.  Defragmentation in VMware itself had always been almost instant, when I had done it.

For network performance, VMware (p. 31) recommended using the VMXNET driver.  They noted, however, that that driver was installed automatically with VMware Tools.  There were a few other network performance suggestions in the document.  Since I was not having networking performance issues, I did not investigate these.  VMware (p. 32) also offered some other concluding, sensible suggestions (e.g., use general-availability software, not beta versions; make sure the latest version of VMware Tools is installed).  Here, again, the advice did not seem to apply.

Summary (for My Purposes) 

A single problem with hardware or software could seriously impair performance.  I did not attempt to scour the Performance Best Practices for VMware Workstation document for every possible thing that might be improved.  Rather, at least in this first pass through it, I was focused on big-picture items that sounded like they might have a great impact on the performance of my system.  The Hardware section led me to think especially about upgrading to a faster multicore CPU, perhaps with hyperthreading, but in any event with a recent generation of virtualization technology, and also to switch to a striping RAID arrangement for my program files (presumably including both the Ubuntu host program partition and the partition on which I kept my VMs).  Other than that, improved performance in VMware Workstation appeared to be a matter of tuning a variety of settings, some of which were becoming obvious as I gained more experience, and some of which would come to mind only as I reviewed the pages of the document and/or of this post.