Sunday, July 18, 2010

Improving Performance in VMware Workstation 7.1

I reviewed the latest version of VMware's document, Performance Best Practices for VMware Workstation, to see what hardware purchases or sales it would suggest for my situation.  The document consisted of four main sections, pertaining to host system hardware, the host operating system (OS), VMware Workstation and virtual machines (VMs), and guest OSs.  I was particularly interested in information about running Windows XP on an Ubuntu host, since that was the setup I was using.  This post does not say much about Windows host systems.

Section 1:  Hardware

A.  CPUs

1.  Hyperthreading

VMware (p. 7) recommended using a CPU that would support hyperthreading (also called "logical processing").  (The OS and the BIOS would have to support it, and the user would have to make sure it was enabled in the BIOS.)  Patrick Schmid at Tom's Hardware said that the primary benefit of hyperthreading was to permit smoother responsiveness, but that it would not yield noticeable increases in performance otherwise, and certainly would not substitute for having multiple cores in the CPU.  Intel's own writeup of hyperthreading affirmed that responsiveness was a leading benefit.
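Since my host was Ubuntu, a quick way to see whether hyperthreading was actually present and enabled on a given machine was to compare the logical and physical core counts reported by the kernel.  A minimal sketch (the "ht" flag by itself is not conclusive, since many multicore CPUs report it even without hyperthreading):

    grep -c '^processor' /proc/cpuinfo                 # total logical processors
    grep -E '^(siblings|cpu cores)' /proc/cpuinfo | sort -u
    # if "siblings" exceeds "cpu cores", the extra logical processors
    # come from hyperthreading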

AMD quoted VMware as saying, “Virtual machines are preferentially scheduled on two different cores rather than on two logical processors on the same core.”  That is, VMware tried to assign different VMs to different CPU cores, if available.  This seemed to imply that AMD CPUs would do better when the number of CPU cores matched or exceeded the number of VMs being run.  But AMD suggested that increasingly complex software (e.g., multithreading in Microsoft Excel 2007) could keep as many as 48 CPU cores busy, even if the number of VMs being run was much lower.

AMD's point in that particular article was that its Opteron CPU, with more cores, could significantly outperform Intel's Xeon, with hyperthreading and fewer cores.  Anandtech's comparison of state-of-the-art Intel and AMD CPUs in March 2010 found, however, that the Xeon did much better than the Opteron.  Recent observations suggested that AMD might be moving toward implementing hyperthreading after all.

A search on Newegg.com turned up 20 Intel CPUs with hyper-threading capabilities, starting at $115 and ranging above $1,000.  (The least expensive Intel CPU listed on Newegg at that point cost $41.)  Anandtech said that the AMD advantage was in terms of price, with good performance at much lower cost.  One Anandtech commentator said, "The twelve-core AMD Opteron 6100 and six-core Xeon 5600 perform more or less the same," but suggested that Intel had two advantages at the enterprise level:  RAS (i.e., reliability, availability, and serviceability, including the ability of systems to heal themselves) and licensing.

2.  MMU Virtualization

VMware (pp. 7-8) also expressed a preference for second-generation hardware-assisted MMU virtualization, called rapid virtualization indexing (RVI) or nested page tables (NPT) in AMD processors or extended page tables (EPT) in Intel processors.  (Wikipedia indicated that NPT was used during development, but that RVI was the term currently used.)

VMware found that, in its ESX product, AMD's "RVI provides performance gains of up to 42% for MMU-intensive benchmarks and up to 500% for MMU-intensive microbenchmarks."  VMware found similarly dramatic performance improvements for Intel's EPT, provided the virtualization product made suitable adjustments -- which, VMware said, ESX did.  It was not clear that the same could be said for VMware Workstation.  Pending further research, this information made an AMD CPU with RVI the more certain performance boost for an ordinary user of Workstation.

At this writing, neither Newegg nor TigerDirect offered products featuring any of those CPU-related acronyms.  According to Wikipedia, AMD's RVI debuted in the third-generation Opteron, and Intel's EPT debuted in the Nehalem architecture.  (That same Wikipedia page said that RVI was supported, at VMware, in ESX Server 3.5 and later -- and also, interestingly, in Oracle's VirtualBox 2.0 and later.)  At Newegg, at this writing, Opterons were available in the range of $190-1,300 (and would require motherboards in the $200-600 range).  Nehalem appeared in the Core i7 line of CPUs, available at Newegg for $290-1,140.  (Newegg didn't list a canned search option for Nehalem or Westmere cores.)

I looked at some historical prices to get a rough idea of how processor pricing trends worked.  On the Intel side, the Core 2 Duo E6700, introduced in July 2006 for $530, was apparently available (in some form) for $316 in June 2007, around $212 in July 2008, $130 in September 2009, and $95 in July 2010.  These values suggested that prices dropped dramatically (perhaps 40%) in the first year, less dramatically (perhaps 20% of the original price) in the second year, and likewise (perhaps 10% of the original price per year) over the next couple of years.  (Intel apparently discontinued the E6700 (presumably meaning that manufacturing ceased) in February 2008.  At that time, the chip may have been selling for somewhat less than half the original price.)  On those data, the rate of discount from the original price was cut in half in each succeeding year, during the first several years of the product's life.

I took a particular interest in one of the Core i7 CPUs at the bottom of Newegg's list, pricewise.  The Core i7-870 that was available for $290 in July 2010 debuted at a list price of $562 in September 2009, representing a 49% drop in less than a year.  The data from the preceding paragraph suggested that a consumer might anticipate a further reduction of roughly 25% of the original price (i.e., half of the previous year's price cut), for a price of around $150 by summer 2011.  On this basis, it seemed that I might save myself something like $140 (plus whatever price drop might apply to the corresponding motherboard) if I waited until summer 2011 to implement these particular suggestions for VMware performance.
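As a back-of-the-envelope check on that guess, the arithmetic could be sketched out in a few lines, on the assumption (and it is only an assumption) that each year's price drop would be about half the size of the previous year's:

    list=562                    # i7-870 list price, September 2009
    now=290                     # Newegg price, July 2010
    drop1=$(( list - now ))     # roughly a $272 drop in year one
    drop2=$(( drop1 / 2 ))      # suppose year two drops half as much
    echo "Rough guess for summer 2011: \$$(( now - drop2 ))"    # about $154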

Intel described the Core i7-870 as having both hyper-threading and VT-x virtualization technology.  But VMware (p. 8) indicated that VT-x was the first, not the second, generation of virtualization technology.  Its potentially dated status was reflected in a VirtualBox recommendation to the effect that VirtualBox had been designed to perform better without this sort of hardware-assisted CPU virtualization enabled at all.  As of late 2008, someone in a VMware Community post considered VT-x a major step forward, but noted that hardware-assisted virtualization in Workstation 7 was supported only on 64-bit hardware.  I did have 64-bit hardware, so that was not a concern for me.
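Checking an existing machine for basic hardware-assisted virtualization and 64-bit support was easy enough from the Ubuntu side.  A minimal sketch, using the flag names the Linux kernel reports (vmx for Intel VT-x, svm for AMD-V, lm for 64-bit "long mode"):

    egrep -wo 'vmx|svm|lm' /proc/cpuinfo | sort -u
    # vmx or svm present = first-generation hardware virtualization
    # lm present = 64-bit capable, which Workstation 7 requires for this feature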

But which Intel CPU would I have to be tracking, if I wished to get into the second-generation Intel EPT (or AMD RVI) MMU virtualization technology?  Intel characterized EPT as an "extension" of VT-x and, to revert to the (Wikipedia) observation offered above, that extension was apparently to be found on the Nehalem (45nm) or Westmere (32nm) architectures.  Evidently not all Core i7 CPUs employed that architecture, then, else the i7-870 would have it.  (I was not alone in being confused about this.)  It seemed that what I was looking for might be, in Intel-speak, VT-x2.  Further searches for insight led to a simple request for a list of VT-x2 features implemented in various Core i7 CPUs -- to which Intel provided the bizarre response that, no, actually, it was hard to provide any such list, and a pointer to lengthy software developer's manuals.  Indeed, it seemed that VMware was somewhat behind the curve:  while it was talking about EPT (as implemented in VT-x2), Intel was meanwhile moving on to VT-d and other technologies.  Then again, another Intel source seemed to say that VT-d was an older technology.

The message seemed to be that I, as a consumer, didn't need to know about this yet.  I decided to try a different approach.  I went back to Newegg's list of Core i7 processors and tried working my way up the list until I found one that did have VT-x2.  After the i7 860 and 870, next on Newegg's list was the 930.  My search regarding the 930 and VT-x2 led to an Intel Virtualization Technology List indicating that, well, yes, a number of Intel CPUs did support VT-x.  I looked at them individually to see if perchance they supported VT-x2, that information unfortunately not being included in the alleged virtualization technology list.  Bottom man on this list was, again, the 860, and they confirmed that it did support both VT-x and VT-d.  At the top end of the set, we had the 970.  The 970's spec sheet didn't say anything about VT-d, so maybe it was indeed being phased out.  No mention of VT-x2 either; just VT-x.  Following some leads, I came around to the discovery that there was also something known as VT-i, referring to the Itanium processor.  It wasn't helpful information, but at least it was information.

Looking back at that page on the i7-970, I noticed that Intel said, in greyed-out letters, "No Datasheet Available."  But, hmm, did that mean there were datasheets for others on that virtualization technology list?  I tried the 920.  There, they had a "Download Datasheet" link that led to about a dozen Technical Documents.  I started with the 96-page Intel® Core™ i7-900 Desktop Processor Extreme Edition Series and Intel® Core™ i7-900 Desktop Processor Series Datasheet, Volume 1.  But no, according to Acrobat, there were no references to VT-x2 there.  How about VT-x?  Nope.  Alright, then, volume 2?  None!  Well, how about just plain old "virtual"?  Still nothing on what virtualization technology any particular CPU might have.  This contrasted with another set of technical documents provided on that same page, for the i7-800 series.  Here, I found references to both VT-x and VT-d.  Volume 1 of that datasheet said, on page 29, that the i7-800 series did support EPT.  So that was pretty confusing.

I decided to try the Developer's Manuals.  The description mentioned virtualization only in connection with the Intel® Virtualization Technology FlexMigration (Intel® VT FlexMigration) Application Note and the Intel® 64 and IA-32 Architectures Software Developer's Manual Volume 3B: System Programming Guide.  The Application Note contained several references to VT-x, but did not distinguish it from VT-x2 or VT-d.  Volume 3B of the Software Developer's Manual contained no references to VT- of any type.  Both documents did refer to Virtual Machine Extensions (VMX), and the Manual contained lots of information on how virtualization works.  But I was not able to figure out, from this information, which CPU I should buy.  This was pretty strange, given the suggestion that Intel's decision to offer virtualization in only some CPUs was driven by marketing.

It occurred to me that perhaps the people at VirtualBox would provide some insight into what they would recommend, if I opted for a VirtualBox-compatible CPU.  A search produced very meager results along these lines.  I went to the VirtualBox website and looked at their documentation.  They said that "the vast majority of today's 64-bit and multicore CPUs ship with hardware virtualization."  No distinction there between VT-x and VT-x2.  They also said, "The biggest difference between VT-x and AMD-V is that AMD-V provides a more complete virtualization environment."  The use of what they called "nested paging" (i.e., more advanced virtualization, apparently what others meant when they referred to VT-x2) could bring a "significant" performance improvement -- of up to 5%.  Five percent!  I was thinking we were talking about the difference between success and failure, and now it appeared this might be just one more incremental improvement.  Nested paging, they said, was standard on Intel's Core i7 (Nehalem) CPUs, and also on AMD's Barcelona CPUs.
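Whatever the marketing names, the Linux kernel exposes the relevant CPU flags directly, which at least settles the nested-paging question for a machine one already owns.  Another quick sketch (ept is Intel's extended page tables, npt is AMD's nested page tables; a sufficiently old kernel might not report them):

    egrep -wo 'ept|npt' /proc/cpuinfo | sort -u
    # an empty result means, at most, first-generation hardware virtualization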

I did finally find, at Tom's Hardware, a list of CPUs that would support "XP Mode" Virtualization.  XP Mode was the capability of running a near-perfect emulation of Windows XP within Windows 7 (which would enable people to use older applications on the newer operating system).  In March 2010, Microsoft altered Windows 7 so that it would no longer require hardware virtualization in order to provide XP Mode.  But the Tom's Hardware list dated from a year earlier, so it gave an idea of what Intel CPUs I would need to consider if I wanted hardware virtualization for purposes of improved performance in VMware.  The Tom's Hardware list actually drew from a list posted by Ed Bott on ZDNet.  Ed provided a list of Intel desktop and mobile CPUs.  His desktop list boiled down to the following, which I provide here in ascending order by their current prices on Pricewatch.com (or, failing that, on Newegg or Amazon):


So, bearing in mind that these were approximate prices, a person dead-set on obtaining a virtualization-supporting Intel CPU for less than $100 would have more than a half-dozen to choose from.

It appeared, in other words, that we were no longer dealing in the rarefied world of enterprise-level Xeon processors; we humble consumers could treat virtualization as a simple commodity.  Paying more would bring improvements, not necessarily in virtualization per se, but rather in those other characteristics that people like in their CPUs, including hyperthreading.

In that case, I thought that perhaps I should take a look at AMD, just to be sure that I wasn't blowing off an already affordable option.  If we were forced to accept the simplistic conclusion that you should just be happy knowing you could get some kind of hardware virtualization with any Core i7 CPU, why not price any AMD CPU with AMD-V virtualization?  According to a simple statement from AMD, that meant almost any CPU that I would be looking at.  Here, comparable to the situation with Intel, a Newegg search for any desktop CPU with virtualization technology support gave me AMD Semprons for as low as $37.

I reflected on my current situation.  To improve VMware's performance, I was looking to replace an AMD Athlon 64 X2 5000+ CPU.  But that dual-core CPU, which was hot stuff four years earlier, did support virtualization already.  The question seemed to be, what kind of virtualization?  What they were offering now was AMD-V.  Given the amount of time I had already invested in this general line of questioning, I decided I should just assume that it was better than the virtualization of yesteryear, and that having it on a faster CPU would be better still.  It seemed, in short, that I might just upgrade to a somewhat more up-to-date CPU, without worrying much about understanding VMware's hardware suggestions.

VMware (p. 17) said that, if I did have hardware-assisted virtualization in my CPU, Workstation would typically set it up automatically, but there was the option of changing the default in VM > Settings > Hardware tab > Processors > Virtualization engine > Preferred mode.  They also said (p. 26) that, if the system was using hardware MMU virtualization, performance would be best if VMI (i.e., software virtualization:  "virtual machine interface") was disabled (VM > Settings > Hardware tab > Processors > VMware kernel paravirtualization).  Mine was grayed out.  I assumed it was something I would have to set when the machine was powered down, or perhaps in root mode ("sudo vmware").  They also said, "No Microsoft operating systems support VMI," but I wasn't sure what the situation would be in the case of an Ubuntu host.

B.  Memory

VMware's recommendation on memory (p. 8) was just to make sure you had enough.

C.  Storage

VMware recommended (pp. 8-9) having enough disk storage space, but also emphasized making sure it was configured correctly.  They mentioned the potential for improved performance from RAID.  Browsing among various sources suggested, generally, that there could be significant performance improvements (and possibly greater performance smoothness) in a RAID 0 setup, where the program files were installed on two (or more) hard drives.  By contrast, it seemed to be generally agreed that a RAID array would make less of a performance difference in the handling of data files.
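For what it was worth, a striped array would not necessarily require a hardware RAID controller; Linux software RAID could do the same job.  The following is only a sketch, with placeholder device names, and with the standard caveat that RAID 0 doubles the exposure to drive failure, since losing either disk loses the whole array:

    sudo apt-get install mdadm
    sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
    sudo mkfs.ext4 /dev/md0                  # Ubuntu 10.04 defaults to ext4
    sudo mkdir -p /mnt/vmstore
    sudo mount /dev/md0 /mnt/vmstore         # one possible home for the VMs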

D.  Other Hardware

VMware offered suggestions about networking and hardware BIOS settings.  These recommendations were worth reviewing for some purposes, but did not seem to require any purchase decisions for my purposes.

E.  Summary

The Hardware section of Performance Best Practices for VMware Workstation left me with the impression that, all other things being equal, VMs would perform better on multicore CPUs, and that hyperthreading was a plus.  I was not entirely able to penetrate the jargon about MMU virtualization; the general conclusion there seemed to be that I should shop for a CPU that supported a relatively recent generation of virtualization technology.  Assuming no bottlenecks due to inadequate RAM or disk storage space, the other main performance recommendation for my purposes was to use a striping RAID arrangement.

Section 2:  Host Operating System

This section contained virtually no relevant suggestions for Linux-based systems.

Section 3:  VMware Workstation and Virtual Machines

VMware said (p. 16) that most applications running in Workstation would perform nearly as well as in native Windows.  For the "small percentage of workloads" that would experience noticeable performance degradation, they had several CPU-related suggestions:
  • Don't assign more of a load to the CPU than it can handle.
  • Don't assign more CPU cores to a VM than it can use.
  • Monitor CPU usage with the Linux "top" program (a sketch of this appears below).
  • When using a single virtual CPU (vCPU), as I was likely to do, I would get better performance on a uniprocessor (UP) rather than a symmetric multiprocessing (SMP) kernel or hardware abstraction layer (HAL).
  • The guest operating system may not switch to the appropriate HAL if the CPU settings change later (p. 27).
They said that Windows operating systems (OSs) newer than XP would use the same HAL/kernel for both UP and SMP installations.  It sounded like that was not the case for WinXP, however.  Microsoft seemed to say that XP would detect the type of system and would install the correct HAL automatically.  They said this:
Microsoft does not support running a HAL other than the HAL that Windows Setup would typically install on the computer. For example, running a PIC HAL on an APIC computer is not supported. Although this configuration may appear to work, Microsoft does not test this configuration and you may have performance and interrupt issues. . . . Microsoft recommends that you switch HALs for troubleshooting purposes only or to workaround a hardware problem.
So the HAL issue seemed to be something to be aware of, in some situations, but not something of practical relevance for a user of Windows XP, Vista, or Windows 7.  I was curious which HAL was installed on my system, though.  As advised by Kelly's Korner, I went to Control Panel > System > Hardware > Device Manager > Computer.  On my native WinXP installation, it said ACPI Multiprocessor PC.  In a newly installed WinXP VM in Workstation set to use just one processor and one core, it said ACPI Uniprocessor PC.
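As for the earlier suggestion to monitor CPU usage with top, the sketch below was roughly what I had in mind on the Ubuntu host; as I understood it, each running VM showed up there as a vmware-vmx process:

    top -d 5                                 # interactive; Shift+P sorts by CPU
    top -b -n 2 -d 5 | grep -i vmware-vmx    # or a non-interactive snapshot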

VMware (p. 17) said that, if there were other VMs or programs running in the background, performance of a VM in the foreground would be noticeably better if the settings were changed in Workstation (i.e., not in any particular VM).  The advice was to go to Edit > Preferences > Priority, and set "Input grabbed" to High, and "Input ungrabbed" to Normal.  But Workstation gave me no such options.

According to VMware (p. 18), memory-related performance could be affected in several ways.  First, there needed to be enough RAM available to the host system for its own purposes.  My system had 6GB of RAM. Normally, some of that might have gone unrecognized by a 32-bit OS, but I was running a PAE-enabled kernel in 32-bit Ubuntu 10.04.  Ubuntu's Sysinfo reported that my system's total RAM was 6050 mebibytes (MiB) (i.e., about 6.3 billion bytes).  Running Workstation as root (i.e., "sudo vmware"), I had set RAM to 5000MB (by which Workstation presumably meant 5000 x 1 million), leaving more than 1GB of RAM for Ubuntu system operations and whatever programs I might be running in native Ubuntu.  I did not typically run many programs in Ubuntu.  So it seemed that I had allowed enough RAM for the system.  It did occur to me, though, that if I was going to run two distinct sessions of Workstation (as opposed to running two VMs within a single Workstation session), I might want to cut that 5000MB figure in half for each Workstation session.
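To double-check that arithmetic, the host's own view of its memory was easy to get (the first command reports in MiB, the second in kB):

    free -m
    grep -E 'MemTotal|MemFree|SwapTotal' /proc/meminfo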

VMware (p. 18) also advised that the best possible performance would come from requiring Workstation to "Fit all virtual machine memory into reserved host RAM" (Edit > Preferences > Memory tab > Additional memory).  But they provided this caveat:
NOTE:  The setting described in this section affects only whether or not a virtual machine is allowed to start, not what happens after it starts . . . . After a virtual machine starts, other factors . . . [e.g., change of applications running in the host OS] can change.  In such situations, even if you selected the Fit all virtual machine memory into reserved host RAM option, virtual machine memory swapping might still occur.
Since I did most of my work in WinXP, the message to me seemed to be to make sure that there was enough RAM available to the Ubuntu host so that it would not need to be raiding the WinXP guests.  This was consistent with the advice of VMware (p. 19).  They warned particularly about host applications that lock memory.  While it did not apply to my configuration, it was also interesting that they recommended providing no more than 896MB to 32-bit Linux VMs.

To monitor what might be happening, they suggested checking for swap activity in the host and virtual machines.  Doing this in Linux, they said (p. 29), involved running "stat" to display the "swap counters," and verifying that both the si and so counters were near zero.  I wasn't sure that their remarks applied to the Ubuntu version of stat, though.  A search turned up a manual page that didn't say anything about swap.  That page made me think that a different search, focusing on the bash shell, might be more illuminating.  But that turned up nothing.  This really did not seem to be something that the world was blogging about.  Eventually, it appeared that what we were really looking for was vmstat, for which a search produced a couple hundred hits.  Brian Tanaka recommended running "vmstat 5 10" to get an average impression of what was happening on the system.  That didn't work on my system, but the vmstat manpage led me to try "vmstat -a -n 5 10" and that gave me ten indications that si and so were at zero.  So I seemed to be OK there.
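For the record, this was the check, with a note on how I read the output:

    # ten samples, five seconds apart; the "si" (swap in) and "so" (swap out)
    # columns should stay at or near zero if the host is not swapping
    vmstat -a -n 5 10
    # cumulative swap counters straight from the kernel, as a cross-check
    grep -E '^pswp(in|out)' /proc/vmstat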

VMware (p. 29) also pointed toward a knowledgebase page about "excessive page faults generated by Windows applications."  To see if this was a problem, they suggested using Start > Run > perfmon. I tried that, inside a WinXP VM.  At the top center of the System Monitor graph, I clicked the + (Add Counters) button.  I got an error message:
System Monitor Control
At least one data sample is missing.  Data collection is taking longer than expected.  You might avoid this message by increasing the sample interval.
This message will not be shown again during this session.
I took that to be a statement that my VM was running very slowly, which was not surprising, because I had some very intensive processes going on elsewhere on the computer.  I okayed out of that message and, following the advice on that page fault webpage, proceeded to choose Memory as my performance object, select Page Faults/sec as my counter, and click Add.  To get an accurate sample, I considered the advice from their error message:  I clicked on the Properties icon along the top and thought about changing it to "Sample automatically every 2 [or 3] seconds."  But then I decided the one-second sample was ticking along OK, and left it at that.  I was seeing occasional spikes in page faults.  The webpage advised that I could trace this to a particular application by going back to the Add Counters button, making Process my performance object, and then choosing a process of particular interest.  I named one of the very intensive processes I had underway.  Sure enough, I got a line across the top of the graph, indicating that that process was accounting for 90-100 (percent?) of something related to page faults.  Very interesting.  So basically this seemed to be telling me that a process that I knew was soaking up a lot of system resources was, in fact, soaking up a lot of system resources.

VMware (pp. 20-21) discussed ways in which page sharing and memory trimming, intended to promote efficiency, could degrade performance in some instances.  My setup did not seem to fall into those situations, so I made no adjustments there.  They said that, of course, a local disk drive would be a faster home for a VM than would a network drive.  They provided other tips that they had also indicated somewhere during the VM setup process:  for best performance, use IDE rather than SCSI virtual disks; preallocated rather than growable disks; independent, persistent disks rather than nonpersistent ones; and no snapshots.  They also (p. 22) offered some suggestions that I hadn't encountered previously:  with the machine powered off, turn off debug mode (VM > Settings > Options tab > Advanced > Settings > Gather debugging information > None).  Other performance tips (p. 23):  run a general availability (GA) version of Workstation, not a debug or beta version.  Make sure you have designated the right operating system (VM > Settings > Options tab > General > Version).  Disconnect your optical drives from your VM until you need them (VM > Settings > Hardware > CD/DVD > uncheck Connect at Power On).
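On the preallocated-IDE-disk point, Workstation also ships with a command-line utility, vmware-vdiskmanager, that can create such a disk outside the new-VM wizard.  The switches below reflect my reading of its help output (-c create, -s size, -a adapter type, -t 2 for a preallocated single-file disk), so verify them against the tool itself before relying on this sketch; the path and size are placeholders:

    vmware-vdiskmanager -c -s 40GB -a ide -t 2 ~/vmware/WinXP/WinXP-disk.vmdk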

To sum up, section 3 of Performance Best Practices for VMware Workstation did provide a number of practical tips on how to adjust Workstation to run more efficiently.  I was not able to understand and apply all of them, and some (e.g., make sure you have enough RAM) were rather commonsense if not simply redundant.  What I derived from the discussion of cores was that, if I did get a new multicore processor, I should probably experiment, as I had done with my present CPU, to see how it performed with various numbers of cores assigned in Workstation.

Section 4:  Guest Operating System

In this section, VMware led off (p. 25) with suggestions:  make sure you're using a guest operating system that Workstation supports; keep VMware Tools updated; disable screen savers and animations; run backup and antivirus scans in off-peak hours; use a timekeeping utility suitable for the guest rather than the VMware Tools time-synchronization option.  VMware (p. 28) also referred to impacts on efficiency wrought by guest OS "idle loops."  It appeared that tweaking this would be painstaking and would likely yield minor effects.
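On the time-synchronization point, my understanding was that the relevant switch lived in the VM's .vmx file and could be inspected (or, with the VM powered off, edited) from the host.  The key name below, tools.syncTime, was one I had seen cited elsewhere rather than something I verified against VMware's own documentation; the path is only an example:

    grep -i synctime ~/vmware/WinXP/WinXP.vmx
    # the desired line, if time sync is to be left off:
    #   tools.syncTime = "FALSE"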

VMware (p. 30) confusingly said, "It is best to use virtual SCSI hard disks in the virtual machine."  This differed from the installation process, which said (at least at one point) that IDE drives were recommended.  Bizarrely, VMware directed me to a Windows webpage dated December 4, 2001.  More promisingly, VMware also pointed toward their KB9645697 webpage regarding the splitting of large I/O requests into 64KB units.  The gist of their suggestion here was, "Changing the guest registry settings to issue larger block sizes can eliminate this splitting, thus enhancing performance" (p. 30).  The way to do that was sketched out on page 30 (section 2.2.6.1) of a PDF document entitled User's Guide: Fusion-MPT Device Management.  But in any case, this called for an edit of the registry setting HKLM\SYSTEM\CurrentControlSet\Services\Symmpi\Parameters\Device\MaximumSGList, and there was no such setting in my VM.

VMware (p. 30) also recommended that, if I did use IDE rather than SCSI virtual disks, I should make sure DMA access was enabled.  To do this, I went into Start > Run > devmgmt.msc > IDE ATA/ATAPI controllers > right-click on each channel > Advanced Settings tab > look at Current Transfer Mode.  If it says PIO, toggle the other box, Transfer Mode, between PIO and DMA to get Transfer Mode = DMA and Current Transfer Mode = DMA.

Another performance suggestion (pp. 30-31):  defragmentation.  Start by defragmenting the guest, then use VM > Settings > Hardware tab > Hard Disk > Utilities > Defragment, then defragment the host (not applicable in Linux hosts).  Defragment before creating linked clones or snapshots; afterwards is too late.  I was only creating independent clones, so this did not seem to apply.  Nonetheless, I did have a defrag utility in the WinXP guest.  Defragmentation in VMware itself had always been almost instant, when I had done it.

For network performance, VMware (p. 31) recommended using the VMXNET driver.  They noted, however, that that driver was installed automatically with VMware Tools.  There were a few other network performance suggestions in the document.  Since I was not having networking performance issues, I did not investigate these.  VMware (p. 32) also offered some other concluding, sensible suggestions (e.g., use general-availability software, not beta versions; make sure the latest version of VMware Tools is installed).  Here, again, the advice did not seem to apply.

Summary (for My Purposes) 

A single problem with hardware or software could seriously impair performance.  I did not attempt to scour the Performance Best Practices for VMware Workstation document for every possible thing that might be improved.  Rather, at least in this first pass through it, I was focused on big-picture items that sounded like they might have a great impact on the performance of my system.  The Hardware section led me to think especially about upgrading to a faster multicore CPU, perhaps with hyperthreading, but in any event with a recent generation of virtualization technology, and also to switch to a striping RAID arrangement for my program files (presumably including both the Ubuntu host program partition and the partition on which I kept my VMs).  Other than that, improved performance in VMware Workstation appeared to be a matter of tuning a variety of settings, some of which were becoming obvious as I gained more experience, and some of which would come to mind only as I reviewed the pages of the document and/or of this post.
