Wednesday, February 29, 2012

Excel 2003: Print or Export the Formulas Used in Each Cell

I had a spreadsheet in Excel 2003.  (I suspect the approach used here would also work in other versions of Excel, but I have not tried it.)  I wanted to print out the formulas used in each cell.  I did a couple of searches and wound up in a thread where they were advising me to use a macro for this purpose.  The steps I used to set up the macro were similar to those that I had used for another Excel macro:

  1. Close all Excel files other than the one you're working on.
  2. Go into Tools > Macro > Visual Basic Editor > Insert > Module.
  3. Copy and paste macro text (see below) into the window.
  4. Go to File > Close and return to Microsoft Excel.
  5. In this case, I used the macro by going into Tools > Macro > Macros and running the ListFormulas macro.
The text of the macro -- what I copied and pasted into the module window -- was as follows:
Sub ListFormulas()
'   Based on code by John Walkenbach
    Dim FormulaCells As Range, Cell As Range
    Dim FormulaSheet As Worksheet
    Dim Row As Long     ' Long rather than Integer, to avoid overflow on sheets with many formulas
    
'   Create a Range object for all formula cells
    On Error Resume Next
    Set FormulaCells = Range("A1").SpecialCells(xlFormulas, 23)
    
'   Exit if no formulas are found
    If FormulaCells Is Nothing Then
        MsgBox "No Formulas."
        Exit Sub
    End If
    
'   Add a new worksheet
    Application.ScreenUpdating = False
    Set FormulaSheet = ActiveWorkbook.Worksheets.Add
    FormulaSheet.Name = "Formulas in " & FormulaCells.Parent.Name
    

'   Set up the column headings
    With FormulaSheet
        ' Leading dots tie these calls to FormulaSheet, not whichever sheet is active
        .Range("A1") = "Address"
        .Range("B1") = "Formula"
        .Range("C1") = "Value"
        .Range("A1:C1").Font.Bold = True
    End With
    
'   Process each formula
    Row = 2
    For Each Cell In FormulaCells
        Application.StatusBar = Format((Row - 1) / FormulaCells.Count, "0%")
        With FormulaSheet
            .Cells(Row, 1) = Cell.Address _
                (RowAbsolute:=False, ColumnAbsolute:=False)
            ' Leading space stores the formula as text rather than re-evaluating it
            .Cells(Row, 2) = " " & Cell.Formula
            .Cells(Row, 3) = Cell.Value
            Row = Row + 1
        End With
    Next Cell
    
'   Adjust column widths
    FormulaSheet.Columns("A:C").AutoFit
    Application.StatusBar = False
End Sub 
(Note that the format of this blog may wrap some lines.  Copying and pasting may yield better results than retyping.)  John Walkenbach, the author named in that code, also offered a Power Utility Pak ($40) that contained this and other tools.  I had installed a couple of freeware utility collections -- ASAP Utilities and Morefunc -- and I hardly ever used them.  I checked the list of tools in ASAP Utilities, just in case, but didn't find anything quite along these lines.  A quick glance revealed no list of Morefunc utilities.

When I ran the macro (step 5, above), it seemed to hang my machine.  I was using a fairly large spreadsheet -- I probably should have tried it on something smaller -- but instead I went to bed.  I didn't know how long it took, but it worked.  When I awoke, it had created a new worksheet (i.e., a new tab at the bottom of the spreadsheet), with three columns:  Address (e.g., F2), Formula (e.g., =C2), and Value (e.g., 17).

Sunday, February 26, 2012

Windows 7: Windows Media Encoder: No Specified Device Driver Is Present

I had downloaded Microsoft Windows Media Encoder (WME) 9.0.  I was aware that it was unsupported and otherwise limited in comparison with Microsoft Expression Encoder 4 Pro ($199).  I was trying to use it to do video screen capture in 64-bit Windows 7.  I went into New Session > Capture Screen > Specific Window and Capture Audio from the Default Audio Device > designate the window > name the output file > high quality > display information (blank) > Finish.  At that point, I got this error:

Windows Media Encoder

No specified device driver is present.(0xC00D0072)
A search led to a Microsoft webpage that said this occurred because DirectX Media 6.0 was not installed.  Unfortunately, the page to which they directed me for the appropriate download was defunct.  Wikipedia said that DirectX 6.0 was released in August 1998, that the current version was DirectX 11, and that DirectX had been included in Windows ever since Windows 95 OSR2.  (I also saw an indication that this problem might arise only when trying to record audio along with the video.  I didn't investigate that; I wanted video plus sound, so video-only capture would not have helped.)

My theory was that DirectX had grown and that WME had not been maintained to keep up with it, and therefore I needed to install an older version of DirectX for that purpose.  I was not sure that this would work without screwing up the more recent DirectX.  I made a System Restore point and then downloaded DirectX 8.1 Runtime for Windows 2000, released in November 2001.  I wasn't sure whether that was the right thing; I also downloaded the DirectX Media Platform SDK Redistributable (June 2001).  Both were around 6-8MB.

Before installing them, I looked at a few more posts.  One post said that what was missing was actually a WDM (Windows Driver Model) driver for the microphone.  The plausible theory in this case was that I would gain nothing by installing a retrograde version of DirectX; instead, I just needed a current driver.  But I was using onboard audio; I had only installed the motherboard recently; and I had made sure to get the latest drivers at that point.  My guess was that the motherboard drivers were no longer attempting compatibility with WME.

There was always the option of using a handheld audio recorder to record sound from the speakers.  Or if I didn't mind risking blowing up my handheld recorder, I could try running a cable directly from the computer's headphone output to the recorder's microphone input.  I would still have the hassle of synchronizing audio with video in a video editor.  That would be more difficult if the video showed lip motion or other things that would become bothersome if they weren't closely aligned.

There was also the possibility of doing further research into who was using WME on Win7 successfully at this point, and what hardware they were using, and buying the appropriate sound card.  I was out of time for this project, but it could also develop that further research would turn up a solution I had overlooked.  But good video capture had been elusive.  I had bought Debut and had been using it with good results; the question only arose because there was a hopefully temporary licensing snafu where Debut didn't work for me in the middle of the night.  Debut was now working again, so I shelved this question for the time being.

Saturday, February 25, 2012

Robocopy Commands for File and Folder Synchronization

I wanted to know if Robocopy could replace synchronization programs like GoodSync and Allway Sync.  I decided to take a look at its options.  This post describes what I found.

The manual for Robocopy itself seemed to confirm my prior sense that Robocopy was a unidirectional copying utility. That is, you could command it to copy from A to B, or from B to A, but not bidirectionally between A and B, never mind multidirectionally. In particular, manual page 16, discussing Robocopy's /XO option, said this:

The most appropriate use for /XO is to synchronize two directory trees so that they can be updated simultaneously in separate areas. To ensure that the latest files are present in both directory trees, copy with /XO first in one direction and then in the other.
So apparently synchronization with Robocopy would require reciprocal commands: one to copy newer files from A to B, and another, otherwise identical, command to copy from B to A.  Wikipedia noted that Robocopy would not copy open files, but that seemed to be true of GoodSync as well.  I didn't expect it to be a problem for my purposes.
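To make that concrete, the reciprocal pair might look something like this (a sketch only; the paths are hypothetical):

robocopy C:\Data \\OTHERPC\Data /S /XO
robocopy \\OTHERPC\Data C:\Data /S /XO

Each command copies only files that are newer on its source side, so running both would leave the latest version of each file on both machines (but see the discussion of deletions, below).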

(The information from the manual seemed pretty similar to what I got from a command-line "robocopy /?" command.  A Windows Server webpage provided yet another list of Robocopy commands.  I had noticed, though, that Wikipedia said Robocopy was not entirely consistent between platforms.  It seemed I had better verify that my solutions would work on Windows 7 specifically.)

I wasn't yet sure what kind of analysis /XO or other Robocopy commands would perform.  I assumed, but was not yet positive, that they would take account of time as well as date, down to the minute if not the second.  This thought prompted me to look into something that would improve the present gap of about 20 seconds between the clocks on the two computers -- something more fun than just resetting the clock manually, that is. It seemed I would want one or two minutes' tolerance in timestamp comparisons in any case; but in a separate post, I took a closer look at possible solutions to get the two machines closer to chronological synchronization.  (Later I saw that the manual (p. 17) said, "File-time granularity is 100 nanoseconds on NTFS, and two seconds on FAT.")
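(Also worth noting:  the Robocopy documentation described two switches bearing on timestamp comparison.  A hypothetical example using both:

robocopy C:\Data D:\Mirror /S /XO /FFT /DST

/FFT assumes FAT-style file times, relaxing comparisons to two-second granularity, and /DST ignores one-hour differences due to daylight saving time.  Neither provides a minutes-wide tolerance, but they loosen the comparison somewhat.)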

The next step seemed to be to peruse the Robocopy command-line options, to see which ones might work for my synchronization project.  The manual (pp. 7-11) provided a concise command-line reference.   (Apparently lowercase versions of the commands worked; I use uppercase here for visibility.)  The options that seemed most relevant were as follows, leaving out those that would be included by default:
/S copy subdirectories, excluding empty ones

/ZB resume a copying task from a point of previous failure (presumably instead of starting over), but switch to backup mode if restartable mode fails (as when using an unreliable network); but Wikipedia said that /Z would slow copying significantly

/MON:X and /MOT:Y   monitor the source directory and run Robocopy again when X file changes have been detected (/MON) or Y minutes have elapsed (/MOT)

/XD   exclude specified directories

/XO   don't overwrite destination files with source files having the same name and an older timestamp

/R   specify the number of retries

/W   specify wait time between retries

/REG   save /R and /W in registry as default settings

/L   list the files that would be copied, without actually copying them

/V   produce verbose output

/TS   display source file timestamps in output log

/FP   display full pathnames in output log

/NC   suppress output of Robocopy file class information

/NS   suppress output of file and directory sizes

/NJH and /NJS   turn off logging of job header and summary

/ETA   show estimated time of completion

/LOG:filename or /LOG+:filename   redirect output to named file, overwriting or (+) appending if it already exists

/TEE   display output on the console as well as writing it to the /LOG file

/JOB:jobfile   read parameters from jobfile; optionally /IF to include files with specified names or paths; optionally /SD:path and /DD:path to specify source and destination directories; optionally /QUIT to view job file contents without any actual copying

/SAVE:jobfile   write current parameter settings to jobfile
This was an intimidating set of potentially relevant options.  There were many things that could go wrong with so many choices.  I decided to look for examples and more information.  As the manual said (p. 22), these options could result in long, unwieldy commands, and it did appear that the JOB options would help with that.  I ran a search with that in mind.  It led most immediately to some general-reference sites (e.g., SS64, PowerCram, FixMyITSystem).  I tried again.  That search led almost nowhere.  Likewise another, seemingly broader search.

Eventually, I found an eHow website that said I could use XCopy to synchronize computers.  Was I trying to use the wrong tool?  It didn't seem so.  I saw indications that Robocopy had more options and better performance, and that use of XCopy was deprecated.  But why was I not seeing tons of obvious references to the use of Robocopy JOB options for synchronization?  (Another possibility with lots of options:  XXCOPY.  But the least expensive version for networked computers cost $100, and its reception at CNET was underwhelming.)

I tried another search, focusing only on Robocopy job options.  Posts at Serverfault and Skonet seemed to suggest that I could work up my desired Robocopy command on the command line itself, and then add the /SAVE option to save it to a file, and that my job options would go into a separate file with an .RCJ extension.  I wasn't sure exactly how all that would work in practice, but at least there seemed to be a starting point.
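In other words, the sequence would presumably be something like the following (a sketch; the job name and paths are my own invention):

robocopy C:\Data D:\Mirror /S /XO /R:2 /W:5 /SAVE:syncjob /QUIT
robocopy /JOB:syncjob

The first command would write the parameters to syncjob.RCJ and, because of /QUIT, exit without copying anything; the second would run the job from that file.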

It appeared that the slash (/) character was the signal, in the job options file and probably also for Robocopy generally, indicating that one option had ended and a new one was starting.  So, for instance, if I gave the /XD command, I could then proceed to list a boatload of directories to exclude, without worrying too much about format -- putting them on the same line, putting each on a different line, or whatever.  Everything I wrote after /XD would be treated as another directory to exclude, until I came to another / command (e.g., /S) or else reached the end of the job options file.
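If that was right, the contents of an .RCJ file might look something like this (my guess at the layout, not an actual generated file):

/SD:C:\Data\
/DD:D:\Mirror\
/S
/XO
/XD
    C:\Data\Temp
    C:\Data\Cache

with each slash beginning a new option, and everything between /XD and the next slash read as another excluded directory.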

One thing that was not clear to me, from the foregoing options:  what if I had deleted a file on computer B, intending that synchronization should delete it on computer A as well?  GoodSync knew how to interpret such situations.  I suspected that Robocopy, run in a reciprocal setup starting with a copy from A to B, would just restore the file from computer A to computer B.  That, as I found in tinkering, was what Beyond Compare might do.  In other words, there was a real difference between a backup program and a synchronization program.  The former would just make sure that the target contained what was on the source; the latter would keep track of what had been removed from each side, and would decide whether the removal occurred later than the most recent update of those files on the other side.  I could control such things manually, more easily in Beyond Compare than in Robocopy, but for an automated solution I would need a program that would keep track of dates and times of deletions as well as of file changes.  I posted a question on this, just to make sure I was understanding Robocopy correctly.  The response confirmed it, and suggested using regular synchronization programs.
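(For completeness:  Robocopy did have a documented /PURGE option, with /MIR as shorthand for /E plus /PURGE, that would delete destination files and directories no longer present on the source.  A hypothetical one-way mirror:

robocopy C:\Data D:\Mirror /MIR

But that was mirroring, not synchronization.  Run reciprocally, each pass would simply delete whatever the other side lacked, including files newly created there -- the opposite failure from the restore-the-deleted-file behavior just described, and equally wrong for two-way synchronization.)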

Windows 7: Setting and Maintaining Accurate System Time

I wanted to keep two computers' clocks set the same, for purposes of synchronization, so that they would have an accurate sense of whether the version of File X on computer A was newer than the version of File X on computer B.  I had previously installed (or, more accurately, just added a copy of) Judah Levine's portable NISTIME 32 in something of a rush, when installing Windows 7, and, later, had vaguely recognized that it was not working right and/or I had not set it right.  Now I decided to work out the kinks in this function.

NISTIME-32BIT.EXE

I started with the National Institute of Standards and Technology (NIST), from which programs like NISTIME 32 would draw the current time.  It developed that NIST had a program called nistime-32bit.exe.  It turned out to be the same as NISTIME 32, just slightly updated.  The webpage's instructions were to start by going into File > Select Server and then Query Server > Now.  Somewhere I saw advice to choose a server near me.  I was tempted to choose two different ones, one for each computer, so as to have accurate time in case there was some terrible disruption of the national timekeeping system.  Then I realized that this could have the effect of making rivers run upstream, where my files were concerned, to wit:  new could be replaced by old.  Being up-to-date on the latest developments in American chronology suddenly seemed less important than making sure I didn't accidentally overwrite today's crossword puzzle.

When I went to the Query Server > Now menu pick, I got a dialog indicating that NISTIME 32 was prepared to adjust my computer's clock by 0.953 seconds.  I told it to go ahead.  I also went into Query Server > Periodically and told it to update the computer every 12 hours.  Query Server > Server Status confirmed these settings.  File > Help in Choosing Dirs told me to hit File > Save Config to save my settings.  This gave me "File Error:  Cannot open file to save configuration."  That problem may have been caused by nesting the program too deeply in a subfolder.  I moved it elsewhere and tried again.  Now it seemed to confirm that it had saved my settings, and it created NISTIMEW.CFG in the same folder as the program's portable executable (nistime-32bit.exe).  I exited and restarted, and it remembered what I had told it.  But I had to remember to hit File > Save Config; it would not save anything on its own.

But then, when I did go into Query Server > Periodically, specified 12 hours, and hit File > Save Config and then File > Exit, I could not get it back.  The program refused to become visible.  I tried a couple of times, and then looked at Windows Task Manager (Start > Run > taskmgr.exe) > Processes tab.  Taskmgr showed four separate instances of "nistime-32bit.exe *32."  I selected them and clicked End Process, one by one, and then ran nistime-32bit.exe again.  It appeared in Task Manager again, but not on the screen.  I minimized all windows, one by one, but, no, it was not lurking anywhere.  There didn't seem to be a taskbar or system tray icon for it.  It was here, and yet not here.  I killed the processes again, now that I had started one or two new ones.  I renamed NISTIMEW.CFG to be something else, and now it would start, and it saved new settings in a new NISTIMEW.CFG.  Apparently the config file had gotten corrupted.  I had originally created that file manually in lowercase (nistimew.cfg); possibly something about the program needed the uppercase filename.

But now, same thing again.  Exiting and restarting gave me a hidden program:  visible in Task Manager's Processes tab, but not visible onscreen.  When I right-clicked on nistime-32bit.exe *32 in Task Manager and selected Properties, I got an error:  "Windows cannot find [pathname] nistime-32bit.exe."  I ended the process again.  I created a shortcut to the .exe and tried starting it that way.  I had no reason to think that would make any difference, and in fact it didn't.  I tried moving all of the files from the folder where I had put nistime-32bit.exe, and placed them all instead in C:\Windows, with a shortcut to the executable in my Start Menu.  That wasn't the answer; I still got lurking program sessions that appeared in Task Manager but nowhere else.  I deleted the CFG again and tried again.  Now it ran.  I went directly to File > Save Config without making any changes.  It indicated that it had saved the config file.  I exited and restarted the program.  It ran.

Now I saw something that may have explained the config file problem.  The server list had changed.  The Colorado server that I had selected previously was no longer listed in File > Select Server.  I had previously gone into File > Update Server List, and that had generated a message:  "New server file is C:\Windows\NIST-SRV.LST."  It did that again now, when I designated a new server.  I hit File > Save Config and then File > Exit, and then restarted the program.  Now it was running normally.  I moved the three files (the exe, cfg, and nist-srv.lst files) from C:\Windows back to the folder where I really preferred to have them.  It seemed that the server list had not properly updated when the files were in that folder originally.  I restarted and went through the same steps -- update server list, choose a new server, save config -- and now I was exiting and restarting without a problem.

But no, I spoke too soon.  When I restarted, saved a 12-hour periodic refresh, and exited, it would not restart.  Deleting the config and moving the other files back to C:\Windows did not fix it.  The problem seemed to relate specifically to the attempt to set up recurrent time checks.  I was doing something wrong, or perhaps the program had a bug, or maybe it was not suited for 64-bit Win7.  I went to the NIST webpage cited in the program's Help > More Help and sent an email to the Webmaster link at the bottom of that page, pointing them here.

The Built-In Windows Time Sync Option

I decided to look for an alternative time-updating program.  I ran a search and discovered that there was apparently some kind of automatic time-updating arrangement built into Windows.  The advice there was, however, that "The W32Time service is not a full-featured NTP solution that meets time-sensitive application needs."  That was consistent with the fact that my two computers' clocks tended to be somewhat inconsistent with one another.  I had not tried to see how inconsistent they could be, or how long they could remain that way.  I did see an indication somewhere that Windows defaulted to a weekly time update, so maybe it would verify that it was accurate to within a minute, or something, every week or so.

That appeared to be steered by Control Panel > Date and Time.  That dialog could also be opened by right-clicking the clock in the system tray and choosing Adjust Date/Time.  Or, as I now learned from Eric Phelps, it could also run from the command line via "rundll32.exe shell32.dll,Control_RunDLL timedate.cpl."  The latter option would facilitate the option of opening the Date and Time dialog for manual adjustment via, say, a batch file that would open it automatically (to the correct tab) every day, week, or whatever.  (Later, I found a How-To Geek webpage that said I could just run "w32tm /resync" as administrator to resynchronize the clock without even going into the Date and Time dialog.  That, too, could be incorporated into a scheduled batch file.)
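For example, a minimal batch sketch combining those two commands (assuming it runs from an elevated prompt):

@echo off
rem Resynchronize the clock against the configured time server
w32tm /resync
rem Then open the Date and Time dialog for a manual check
rundll32.exe shell32.dll,Control_RunDLL timedate.cpl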

The Date and Time dialog > Internet Time tab > Change Settings option gave me a choice of synchronizing with time.nist.gov, which I understood to be the most accurate (though others in that list, not counting time.windows.com, appeared to be cousins of NIST).  I noticed that the dialog told me, here, that "This computer is set to automatically synchronize on a scheduled basis."  The previous sync site, as I saw on the other computer, was time.windows.com.  I wasn't sure how synchronizing with that site could have left my two computers with different times -- differing by seconds, that is, not by minutes -- unless maybe time.windows.com was just not that worried about the seconds.  Or maybe it was trying to synchronize when my router was doing its daily self-restart, and was therefore not getting access to the online clock?  I wasn't sure.  (Note:  Fouzan said that this whole process wouldn't work if the computer was on a domain.)

Curious about the timing, I went into Start > Run > taskschd.msc > Task Scheduler Library.  There were maybe 15 items in the list, and none of them were obvious time sync tasks.  So another possibility was that some bug or tweak, brought into my system somewhere along the line, was preventing the creation or execution of the scheduling function.  Another emerging possibility was that, as stated in a How-To Geek webpage, time.windows.com (which my systems had been using by default) had "a ton of problems with uptime."  So possibly I had already fixed my problem, just by switching the machines to use time.nist.gov in the Date and Time dialog.  (I did notice, as soon as I made that switch and clicked the update button, that both computers' clocks showed exactly the same time.)

Other Possibilities

I ran another search and found a Gizmo recommendation for Dimension 4 as a time correction utility.  It occurred to me, at this point, that possibly I had fixed my problem, just by switching away from time.windows.com (above), and that maybe I should just let things slide for a week or two.  I decided mostly just to record some notes, here, for possible future reference.  So instead of installing Dimension 4, I just dragged the icon for its webpage from my browser's Address bar over to the Time subfolder in my customized Start Menu.  If I ever needed it, I could follow the link at that time.

There also appeared to be more to know than I had realized, regarding Task Scheduler (taskschd.msc).  In Task Scheduler's left-hand pane, I went down the tree into Task Scheduler Library > Microsoft > Windows > Time Synchronization.  Now I saw that my machine was indeed set to synchronize time at 1 AM every Sunday.  I saw advice from Tina Sieber on a way to adjust and improve the scheduling via Task Scheduler.  Tina seemed to believe, however, that using a separate program might be the simpler and more accurate approach.  Tina pointed toward two other programs, Atomic Clock Sync and AtomTime.  The webpage for the latter seemed very old.  I was not sure how it would fare in a 64-bit Windows 7 world.

For now, the solution seemed to be simply to go into the system's clock and change its time source to NIST.  My monthly batch file brought up the NIST/USNO timepage on the first of every month, so I could observe, later, whether my two computers were again diverging from one another and/or from the time on that webpage.  If they didn't stay in line, I would have two options.  One would be to add one of the foregoing command lines to my daily or weekly batch files, to permit manual and/or automatic checking and/or resynchronization.  Another would be to try one of the several freeware utilities just mentioned, particularly Dimension 4 or Atomic Clock Sync.

Sunday, February 19, 2012

Windows 7: Thunderbird: Add Security Exception

I was using Mozilla Thunderbird 10.0 for email.  I suddenly got this message:

Add Security Exception

You are about to override how Thunderbird identifies this site.

Legitimate banks, stores, and other public sites will not ask you to do this.

Server Location:  imap.exchange.iu.edu:993

Certificate Status

This site attempts to identify itself with invalid information.

Wrong Site

Certificate belongs to a different site, which could indicate an identity theft.
Below that, there was an option to "Permanently store this exception" and buttons to Confirm Security Exception or else Cancel.  Canceling didn't achieve anything:  the same dialog came right back.  I was afraid to click anything else.  The site in question was linked to Indiana University, which seemed legit.  I had gotten something like this before, using an earlier version of Thunderbird, but there the problematic email account had been Hotmail.

I wasn't sure why I was getting this.  A Mozilla page said, "The problem usually arises when the mail server's certificate is invalid for some reason. . . . Often this problem takes care of itself, in that the mail server provider will realize that they have made an error with their certificate and will replace it with a corrected version."  I had started getting this problem maybe a year earlier.  I didn't know if this meant that I was the only person at Indiana University using Thunderbird, or what the explanation might have been.

I didn't find anything on point in Indiana's knowledgebase.  A search led to advice to make a change in Server Settings.  To get there, I had to click Cancel with the "Permanently store this exception" box checked; otherwise the dialog wouldn't budge.  Even so, I had to click Cancel a bunch of times to get out of there.  I went into Thunderbird > Tools > Account Settings.  I went to the Server Settings heading under the listing for the Indiana University account on the left side.  The advice seemed to be that, in that area, I should go to the Security Settings area and set Connection Security to None, instead of its present setting of "SSL/TLS."  Doing that changed the Authentication Method from "Normal password" to "Password, transmitted insecurely."

I wasn't sure about that.  I looked further down that same thread.  Someone else seemed to be saying that the address should be imap.exchange.iu.edu.:993, with a period before the colon.  Staying in the Account Settings dialog, and looking specifically at the list of options on the left side, I moved from the Server Settings option down to "Security," the last option for the Indiana University email account.  There, I went into Certificates > View Certificates > Servers tab.  I saw that I had certificates here for Mozilla, Google, Yahoo, etc.  It seemed like a legitimate list.  I decided to go with the advice, which was to go into Add Exception and type https://imap.exchange.iu.edu.:993.

Before I finished that, I took another look at the Mozilla page.  They said that the mail server provider (i.e., IU) should provide the necessary connection information.  They said that the Add Security Exception option that I was just about to finish would make my email through that account nonencrypted and visible to third parties.  Writers in another recent thread indicated that they, too, were having this problem.  The list of certificates (e.g., for Google and Yahoo) seemed to indicate that those providers had done what was necessary to provide email security at some level, but Indiana University had not.  I tried another search, but it led to surprisingly few hits, among which the main relevant reaction was puzzlement.

The Mozilla page seemed to be saying that you have three options in this kind of case.  You can ask the mail server people to get it together.  You can add a security exception -- or, as it appeared in my case at least, you would have to add a security exception if you wanted the program to be usable.  Or you could switch to a different account.  I wasn't sure whether switching to a different email program would provide another possible solution.

Saturday, February 18, 2012

Windows 7: Overlapping Partitions, Entire Drive Is Unallocated, Has No Brain, Still Feels Great

I was using Windows 7.  I had a hard drive containing multiple partitions.  I was in the habit of booting with an Ubuntu Live CD, now and then, to get a GParted view of the drive's condition.  GParted would quickly show me problems with partitions, in a way that seemed superior to what I could get in Windows.

(I was new to Ubuntu 11.10.  Unlike earlier versions, there was no longer an option to start GParted via an easy menu pick, which as I recalled was System > Administration > GParted.  I found it in 11.10 by mousing to the top left button (tooltip:  "Dash Home") and doing a search for GParted.  Once I did that, the Live CD temporarily added it to the toolbar stretching down the left side of the screen.)

This time, GParted gave me the surprising information that the entire drive (which I had just been using, minutes earlier) was unallocated; and when I took a closer look via the right-click Information option, GParted said, "Can't have overlapping partitions."  This post discusses this situation.  (The processes described here unfolded over a period of some days, so there may be some discontinuities in the account, but I think the basic picture comes through.)

I began with a broad search and then a narrower one.  These included repeated suggestions that I go into Terminal (available via search, and also down toward the bottom of the left-side button bar, as above) and type "sudo fdisk -lu."  (That's a hyphen followed by a lowercase L and U, not the number one.)  I had a couple of drives, and thus had to enlarge the Terminal window (or scroll back up) to see what it was saying about /dev/sda, which was the drive GParted had considered problematic.  As I looked down the list of what fdisk was telling me about partitions, I couldn't figure out what GParted was complaining about.  What I expected to see was something like this:

Device        Start        End
/dev/sda1         63     5000000
/dev/sda2    4999999     8000000
In that example, sda2 would start before sda1 ended.  But I didn't see anything like that.  The numbers in my list made sense.  I also didn't have an error message in my list, like that shown by one user:  "Partition 1 does not end on cylinder boundary."  A later post in that thread suggested typing this in Terminal:
sudo parted /dev/sda unit s print
That just gave me the same "Error:  Can't have overlapping partitions" message.  This was to be expected:  GParted was a front end to parted, so these were just two ways of getting the same report from the same program.

The problem identified in another thread was in the total disk size reported by fdisk.  The top part of the fdisk output said there was a total of 312581808 sectors in /dev/sda, but the list of individual partitions said that the extended partition (and a logical partition within it) ended at 312590879.  The latter was a bigger number than the former.  That is, the partitions were supposedly continuing on past the end of the drive.  There was also a discrepancy between an early line in the fdisk output, which said that sector size was 512 bytes, and a later line, which said that sector size was 2048 bytes.  The advice given in that thread was to use fdisk to delete and recreate the partitions with the correct size.  I would have been inclined to use GParted for that, as it seemed easier, but on reflection I realized that I had probably used GParted to create these partitions in the first place.  But I guess I could have used GParted and then tested it with fdisk again.

But anyway, I didn't have those problems.  The numbers in my fdisk output made sense.  So far, no answer.  I drifted through another thread that pointed me toward TestDisk.  Typing "TestDisk" in Terminal told me that I would have to install the "universe" repository of program downloads in order to install TestDisk.  This might not have been a problem with an installed copy of Ubuntu, but I wasn't sure how to do that with a live CD in Ubuntu 11.10.  It appeared that I might have to remaster my Live CD to include the universe repository.  That seemed to be getting pretty far away from the original mission.

It occurred to me that I ought to be able to get similar output from a Windows program -- to see a list of partitions and sectors like that which I could see in fdisk in Ubuntu.  This would not be CHKDSK, which would check the file structure within a partition.  At the moment, I wasn't sure what program I would use for that purpose.
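(One built-in candidate, for whatever it might be worth, would have been DISKPART, which can at least list partition sizes and offsets:

diskpart
DISKPART> select disk 0
DISKPART> list partition

Whether its output would have exposed the overlap that parted was complaining about is another question.)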

Before I could pursue that thought, I looked again at the fdisk output.  Now I saw something I hadn't noticed before.  My last partition did not go beyond the end of the drive.  But it did go beyond the end of the extended partition.  In other words, I was supposed to have this arrangement:  Primary Partition 1, Primary Partition 2 (optional), Primary Partition 3 (optional), and then either Primary Partition 4 or Extended Partition; and the Extended Partition was supposed to contain any additional (logical) partitions -- up to the end of the drive, usually.  Mine didn't do that.  The relevant lines from fdisk -lu looked like this:
Device        Start       End
/dev/sda3    7000000    9000000
/dev/sda8    8000000    9500000
In the System column provided by fdisk (not shown here because of insufficient line space), sda3 was the extended partition.  In other words, these numbers were saying that sda8 was starting inside the extended partition, as it should; but it was ending after the extended partition, and that was improper.  It wasn't a question of GParted not being reliable, as I had begun to fear.  GParted appeared to be identifying a legitimate issue.  But GParted wasn't going to help me fix it:  as noted above, it was showing the whole drive as being unallocated, which was incorrect, and the only option it was offering me was to create a new partition in that big unallocated space -- that is, wipe out all my data and partitions on that drive.

I was thinking that I should double-check GParted using a Windows tool, and that anyway it would be nice to have a Windows-type alternative to this Ubuntu tool.  I assumed Microsoft itself would not be inclined to give me something useful for this purpose.  The last partition was a Linux partition, not a Windows partition -- using ext3 format, I believed, not NTFS -- and Microsoft was not known for doing much that would be helpful in the Linux world.

It seemed that I would have to use fdisk, from the Ubuntu CD.  I wasn't entirely sure how to proceed.  Fdisk was giving information in terms of "blocks," but when I typed "fdisk" by itself at the command prompt, it gave me options in terms of cylinders, heads, or sectors per track, but not blocks.

Then again, as I thought about it, I realized that I actually could go into Windows > diskmgmt.msc and delete that ending Linux partition.  Disk Management did display it.  It surprised me that, if I wanted to use a GUI tool rather than a command-line option like fdisk, the tools offered in the Windows operating system would be more helpful, in this case, than those offered by Ubuntu.

I had been able to see that ending partition in GParted previously.  It had been marked in some way to indicate that it was problematic.  Why had GParted ceased to display it that way?  It occurred to me to boot up with an older Ubuntu CD -- version 10.04 rather than 11.10.  I did that and went into System > Administration > GParted.  But no, it was showing "unallocated" too.  So something had changed.  GParted was wrong, and I was definitely going to have to use another tool to fix the situation.

Before taking the plunge into fdisk, I decided to use this opportunity to play with one or two other partitioning utilities I had burned to CD.  One was Minitool Partition Wizard.  It got a glowing Editor's Review and four stars from 389 voters at CNET.  Unfortunately, it produced "Boot failed: press a key to retry" when I tried to boot my machine with the CD I had burned.  Oops.  Same thing on retry.  Well, evidently it was time to download a newer copy.  They were up to version 7.0.

While that was downloading, it turned out that I had another Minitool Partition Wizard CD, version 5.2.  I tried that.  It loaded OK, and it showed the partitions without a problem.  It was also faster and easier to get to the information, using a dedicated partition CD, instead of having to load Ubuntu and find GParted.

So, OK, this was looking promising.  In Partition Wizard, I selected the appropriate drive and clicked the "Show Disk Properties" option.  It didn't report errors.  I wasn't sure if it was even capable of reporting errors.  It said that that last, troublesome partition was actually unallocated space.  As I recalled, I had formatted that partition to be ext3, in case I wanted somewhere to install Ubuntu.  Maybe the Linux partition had deteriorated somehow; maybe that's why it was now problematic.

I decided I could do without that Ubuntu partition.  Since I knew there was nothing in it, I told Partition Wizard to extend the preceding partition to take over this allegedly unallocated space.  But not all of it.  I had become superstitious about running partitions right up to the ends of drives.  Just for good luck, I made that unallocated space into an NTFS drive.  Partition Wizard decided that the last 14MB of that space would have to remain unallocated.  I clicked Apply.  Then I deleted the partition I had just made and replaced it with a relatively small (3GB) ext3 partition (which would not show up in Windows, just in case "unallocated" was a potentially troublesome status at drive's end), putting the rest of the space into the preceding partition.

That last step didn't go swimmingly.  Partition Wizard said that it had "Failed to execute the following command" with "Error Code 36:  Minitool Partition Wizard detects that file system of the partition have errors.  Please use 'Check File System' function to fix it first."  Ah, so there was a tool of that sort lurking there somewhere in Partition Wizard.  But where?  Now I saw why it hadn't popped out at me earlier:  it was in the Partition menu (and also the context menu), but it was greyed out.  I couldn't very well check the file system with a greyed-out tool.  It didn't seem willing to run on any partition or drive on that computer.

By this point, the download of Partition Wizard version 7 had completed on the working machine, so I installed it there and took a look at it.  Apparently it would work, for at least some purposes, while Windows was running.  I doubted it would help the troubled computer, since the partition that I was trying to resize contained my paging file.  The Check File System option was not greyed out.  But now it seemed that, if I wanted a bootable CD, I had downloaded the wrong thing.  The Minitool page for the bootable CD didn't specify that the bootable CD ISO would give me the Check File System option, but I decided to give it a whirl.  I downloaded that ISO and tried to burn it to a blank DVD.  ImgBurn said, "Invalid or unsupported image file format!  Reason:  First image file part is less than 2048 bytes in size."  So, OK, bad download.  I re-downloaded the ISO and tried again.  It was a slow download.  This time, when I tried to boot the newly burned DVD, I got "Unknown keyword in configuration file:" followed by eight junk characters and then "Could not find kernel image:  linux."  I tried a cold boot but still got the same thing.  This DVD was junk.  I looked for other sites to download the ISO from, but they seemed to be the kinds of sites that would install malware.

I did want a bootable CD alternative to GParted, for situations like this.  I looked into Parted Magic, but it didn't sound like it had the power of GParted.  Their Live USB option had TestDisk, though, and would apparently only take about 45MB, so I thought it might be a good option to put on an old 256MB USB drive.  I started up UNetbootin and pointed it toward Distribution > Parted Magic > Latest_Live.  I had already plugged in my USB drive, so I chose that, and indicated that I wanted to reserve 50MB for files to be preserved across Ubuntu reboots.  (I wasn't sure exactly what that was about, but it sounded good.)  It started the process of downloading and installing whatever it needed, onto my USB drive.  It said the download would be 175MB.  Larger than expected.  I decided to install from an ISO instead, so that I wouldn't have to re-download if the first try didn't succeed.  So I downloaded the Parted Magic ISO and then went through the UNetbootin installation process that way.

While that was in play, I rebooted the troubled machine and went into Windows > Start > Run > diskmgmt.msc.  As expected, Disk Management reported no drive problems.  It showed all my partitions, including the smallish ending partition I had set aside for some possible future Ubuntu installation.  I right-clicked on the partition adjacent to the ending partition, which Disk Management showed as "unallocated."  There was an option to "Extend Volume."  I went partway through that.  It looked like Windows 7 was ready to fix the problem.

I installed Partition Wizard on the troubled machine.  I wondered whether it would perform differently than the bootable CD version (above).  It looked like it, too, was ready to go.  Why was life so hard for the CD version?  Another highly recommended alternative was Easeus Partition Master.  I downloaded it from CNET (four stars, 943 voters) and installed it.  Same thing there:  the Windows installed version saw the partitions as expected, and seemed prepared to merge or resize as desired.  But their "Bootable CD" option took me to a webpage that said the free version -- what I had just installed -- wouldn't include a bootable disk option.

By this point, the UNetbootin process was nearly done.  But I decided to reboot with the Ubuntu 11.10 CD to test some other drives first.  To my surprise, GParted was now showing everything as being OK on the previously troubled machine.  Had we fixed something when I wasn't looking?  And now I saw what the problem was -- why I'd gotten that funky fdisk output (above).  The last partition on the disk, the one that I had set aside as unallocated, was not in the extended partition.  It was a primary partition.  Somehow, I had gotten myself into this arrangement:  Primary Partition 1, Primary Partition 2, Extended Partition, and then Unallocated space outside of the extended partition.

Well, I didn't want that, especially not if it was going to confuse GParted or anyone else.  It looked like I was going to have to wipe out the extended partition -- what's 900GB, between friends? -- and rebuild the thing, and have no excuse to test my cool new bootable USB version of Parted Magic (sniff!).  Then it occurred to me to wonder what those Windows programs -- Disk Management and Easeus and Partition Wizard -- were planning to do with this situation.  Were they somehow going to merge that last unallocated space into the extended partition, in some way that GParted wouldn't do?

Or, no, wait.  I was trying to get GParted to merge the unallocated space directly into the last logical partition.  That's not how these things are done.  I needed to merge the unallocated space into the extended partition, and then shuffle that space on down the line, inside the extended partition, to whatever logical drive was most deserving.  Is that what Windows Disk Management was planning to do?

I decided to find out, because if Windows could walk & chew partitions at the same time, I could meanwhile use the troubled machine to work on other things.  Boot back into Win7; back into Disk Management; extend the volume.  It took ... about seven seconds.  Kind of ridiculous.  I checked it with a GParted reboot.  Now GParted was reporting a single partition filling the entire drive, plus a 1MB partition at the end.

Some days had passed since I had started the processes described in this post, and I wasn't entirely clear on exactly what I had done as I reviewed my notes (above).  But I was sure I had set up an empty ending partition, in my superstition that having a little space at the end could sometimes prevent problems, and 1MB sounded like a possibility.

But in any case, the question had recurred:  why was GParted not seeing the multiple partitions that I had just been working in, in Windows?

I decided to try out the bootable Parted Magic USB drive that UNetbootin had concocted for me.  I made sure my BIOS was set to boot USB-ZIP first (instead of USB-FDD or USB-CDROM), and proceeded to boot that Parted Magic USB.  Its boot menu gave me choices between the default, which would run from RAM, and a couple of bootup alternatives, in case my system had less than 312MB of RAM.  It also gave me submenus for Extras, Failsafe, and RAID.  The Extras menu contained options to load various hardware diagnostics (e.g., Hardware Detection Tool, Memtest86+) and boot managers (e.g., GRUB, GRUB2).

I loaded the default (which, as I soon discovered, would load automatically after 20 seconds if I didn't make a selection).  This gave me an impressive desktop:  Parted Magic was offering me at least 20 to 30 utilities (e.g., Disk Health, Monitor Settings, File Manager) plus Firefox.  I was surprised they were able to squeeze so much onto one little USB drive.  I tried out the Firefox:  it worked.  This definitely seemed like a tool worth having.

But then it seemed that maybe I had played with the Parted Magic boot menus too much.  After a first or second reboot, the graphics became kind of buzzy (i.e., unexpected colored dots flashing in various colored spots, and along random horizontal and vertical lines) and off-centered (i.e., with a couple inches of black space on the left edge of the monitor) and began flashing on and off (i.e., intermittent black screen).  I tried a cold boot (i.e., shut the machine down for at least 30 seconds before restarting, to clear memory).  (Incidentally, the shutdown menu gave me the option of saving my current Parted Magic session.  I guessed that this option was possible on a USB drive, which would have space to store such information, but might not have been available on a bootable Parted Magic CD.)  But when I rebooted, I got two unexpected results:  the USB drive did not boot -- instead, I went into Windows -- and now the buzzy and off-center graphics were affecting Windows too.  Hmm ... probable hardware issue.  I tried rebooting with an Ubuntu DVD, without the USB drive plugged in.  But no, same thing there.  It seemed to be getting worse:  the black spells were longer.  A monitor reset didn't help.  I connected the monitor to a different computer.  It worked OK there.  I tried doing a longer cold shutdown -- several minutes -- and booting again, still without the USB drive plugged in.  That worked.  Now I got a normal Ubuntu screen.  I tried booting the USB drive again.  Now it worked.  Without further ado, I went straight into its Partition Editor.  But that turned out to be just GParted.  It showed the same thing as GParted had shown when run from the Ubuntu Live CD.  And I was getting the funky graphics again, and had to do another five-minute shutdown.

I had not yet succeeded in finding a bootable freeware CD or USB drive that would give me a believable impression of the partitions existing on that hard drive.  I booted into Windows to take a look with the installed (as distinct from bootable) versions of Easeus and Minitool that I had installed there.  I expected them to provide a realistic picture, even if all they did was to parrot what Windows was detecting (i.e., multiple NTFS partitions on that drive).  But now, even after a 10-minute shutdown, the graphics were still buzzy, off-centered, and flashing (indeed, mostly) black.

What in the world had happened?  I was tempted to try the bootable USB drive in another computer, to see whether it was the cause of this, but then I decided I really didn't want two messed-up computers.  It did appear that the USB drive had caused it; there had never before been anything of this nature.  The screen was totally black by now.  I had to do a hard reset to see anything.  I wasn't getting any distortion at the bootup phase.  I tried loading the fail-safe defaults in the BIOS.  I got an option to boot into Safe Mode, so I tried that.  I was still getting some buzziness there.  I tried Control Panel > Device Manager > Display Adapters > right-click on the adapter > Update Driver Software.  It said my software was up to date.  I tried right-click > Uninstall the display adapter (but not its driver software).  A reboot into Normal Mode still showed some buzziness here and there (e.g., in a CMD window).  I went back into Device Manager.  Instead of Display Adapter, I had Other Devices > Video Controller (VGA Compatible).  After five minutes or so, I saw a balloon tip telling me that the drivers specific to my video adapter (ATI Radeon HD 4250) had installed successfully, and now that device was visible as a Display Adapter in Device Manager.  But apparently that took us back to the Dark Ages.  After a reboot, the screen was black.

I rebooted into Safe Mode, hoping to do a System Restore.  Funny thing:  the login screen wasn't taking keyboard input.  I couldn't enter my password.  Even if I typed the password and hit Enter, the login screen did not change.  I rebooted and tried the same thing in Normal Mode, though this time I was entering the password into a black screen.  (I saw a flash of the login screen before it went black.)  I got a Microsoft happy sound, which as I recalled indicated that I had succeeded in logging in.  But the screen remained dark.  Moving the mouse, hitting WinKey, etc. brought no joy.  I could see the hard drive light burning away -- there was obviously a lot going on -- but I was blind to it all.

The monitor had VGA and DVI ports.  It was connected via DVI.  I thought I should try a VGA cable and see if that made a difference.  This transition led to the culprit:  loose DVI connector.  No VGA necessary.  Sorry for slandering the good name of the Parted Magic bootable USB drive.  I mean, it still had GParted, and thus continued to be useless for present purposes in that regard.  But at least it hadn't completely fubared my graphics circuits.  At least not as far as I could tell.

Back in Windows Normal Mode, I started Easeus Partition Master 9.1 and MiniTool Partition Wizard Home Edition 7.1, both in their installed forms.  Both saw the multiple partitions on that drive that GParted had been unable to see.  Minitool listed them in alphabetical order by name; Easeus listed them in the preferred alphabetical order by drive letter.  Easeus did, and MiniTool did not, show a 1.6MB unallocated partition at the end of that drive.  Both appeared to be glorified and perhaps enhanced versions of Windows Disk Management.  Both provided an indication as to whether a given partition was primary or extended.

On the drive in question (unlike another drive in that machine), both Easeus and MiniTool were listing the partitions as neither primary nor logical, but rather as "simple" partitions.  I hadn't paid much attention to the difference until now.  I didn't recall requesting any simple partitions; I had always just used primary and extended.  I guessed that Windows Disk Management had converted the primary and extended partitions previously visible in GParted to simple partitions in that little seven-second operation where it "resolved" the previous situation.  The general idea was that there were dynamic disks with simple partitions, and there were basic disks with primary and extended partitions, but the two did not mix:  you could not have a dynamic disk with a primary or extended partition, or a basic disk with a simple partition.

It seemed that GParted was unable to work with simple partitions.  Another way to say this was apparently that GParted would work only with basic (not dynamic) volumes, and the latter was what Disk Management had given me.  I noticed that MiniTool did, but Easeus did not, provide a right-click option to resize a partition, also available in Windows Disk Management.

A search led to a MiniTool webpage that said MiniTool could convert a dynamic drive back to a basic drive without data loss.  (Of course, one could always wipe and recreate partitions and then restore data from backup, assuming backup existed.)  It looked like I would have to buy the pro version to get this capability.  Another thread said that this wasn't possible without data loss via Acronis products, though one person did interject that a bit of expert-level hex editing could do it pretty easily.  (I suspected that, if it were that easy, these programs would have been offering the capability, but maybe that was exactly what Minitool was doing.)  The MiniTool webpage said that Partition Magic could do it.  I had used PartitionMagic for years, almost always with good results, but thought it was defunct and incompatible with Windows 7.  (Symantec had bought a good program and let it go to pot.)  So I wasn't quite sure what that MiniTool page was trying to say.

I was out of time for researching this issue.  My present impression was that I could (a) stay with the dynamic disk and use Windows Disk Management or the free MiniTool to resize or delete its simple partitions as needed; or (b) backup and wipe the drive, and then create a basic drive and fill it with primary and extended partitions, using virtually any of these tools, including GParted; or (c) buy the pro version of MiniTool and try converting the simple partition to primary without data loss; or (d) explore that expert editing approach to convert the partition manually.

My principal reason for wanting to be able to use GParted was to have a non-Windows perspective on what was happening on the drives.  This was useful in two regards.  First, until Windows converted the basic disk to a dynamic disk, I had been able to see partition information (with GParted and also with fdisk) and get insight into possible problems on my drive.  Second, GParted gave me a very quick heads-up as to whether there were problems on a drive that would call for CHKDSK /R.  Without GParted, I just ran CHKDSK /R on each partition.  It was a very slow and inconvenient process, but I had the impression that it was better than just using the disk tools available within Windows drive properties.  Its inconvenience tended to discourage doing it.  Being able to boot GParted (with or without Ubuntu) and take a quick look seemed to encourage more frequent disk checks.  Typically, there would be no more than one partition needing this attention.

I decided that it would be easier to stay with the dynamic disk, at least for now, and that doing so would give me a chance to learn something about that kind of disk and its simple volumes.  I just had to remember that this was why I was getting those weird results from GParted.  There was also the possibility of an eventual update from GParted or some other tool to handle these volumes.

Saving Disk Space; Finding Types of Files to Shrink

I wanted to save drive space.  The first line of attack was to use a freeware program to find large files.  Among the various possibilities, WinDirStat, TreeSize, and SpaceSniffer seemed to be the most positively reviewed and/or familiar to me.  Among those three, TreeSize and WinDirStat used a directory listing to indicate the largest folders, while SpaceSniffer and WinDirStat used a graphic approach to highlight large individual files.  In other words, WinDirStat offered both.  The graphic approach led me to multiple space-saving solutions faster than the TreeSize approach.  Between the two graphic programs, I found SpaceSniffer's presentation more readable and zoomable than WinDirStat's, except that WinDirStat made it easier to tell, right from the start, whether a folder was large because it held many small files or a few large ones.  SpaceSniffer also offered a full right-click Windows Explorer context menu, while the right-click options in WinDirStat were more limited.

This sort of program was good for quickly locating large space hogs.  The concept, which I pursued to some extent, was that you would probably free up the most space most quickly by homing in on very large files or folders.  But once I got past the point of being impressed by the graphics, I noticed certain drawbacks.  One was disorientation, especially in SpaceSniffer:  its method of zooming in on a particular folder was not the exact opposite of its method of zooming out.  It seemed like I was coming out by a route different than the one I had gone in on.  So it could be hard to build an intuitive sense of where you were located, with respect to the drive or directory as a whole.  Another drawback was that I could not compare across partitions.  I had to back out and start over, or run a different session of the program, to see clearly that I should be focusing on one drive rather than another.  Another missing dimension of coherence:  file type.  I might never know, from looking at the graphic maps, that I was consistently making PDFs or JPGs that were larger than they needed to be.  Even a grossly large PDF could escape notice when nestled among AVIs ten times its size.  There was also no logging or comparison capability that might alert me to the fact that a certain folder had been growing more rapidly than I would have expected -- or, for that matter, that a certain folder had disappeared since the last time I ran the comparison.
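The logging-and-comparison piece, at least, seemed achievable with nothing more than built-in commands:  keep dated bare-format listings and compare them later.  A crude sketch (paths and file names here are made up for illustration):

dir D:\ /s /b > "D:\Current\D-list 2012-02-20.txt"
rem ... then, a week later ...
dir D:\ /s /b > "D:\Current\D-list 2012-02-27.txt"
fc "D:\Current\D-list 2012-02-20.txt" "D:\Current\D-list 2012-02-27.txt" | more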

I thought that I might be able to capture the chief benefit of these programs -- their ability to draw attention to large files -- while also adding at least some of those missing ingredients (though admittedly I could not compete with their graphics).  What I wanted, for this purpose, was a simple file list that I could sort according to selected criteria, particularly folder size, file date, file size, and file extension.  This approach seemed likely to require a much larger time investment up front; but once I had the file list, I would be able to do quite a bit of analysis and revision of files and processes, so as to root out various kinds of wasted disk space.  In short, I was seeking a systematic approach that would minimize my preoccupation with the same few large files (which may have had to remain large for good reason), turning my attention instead to other areas where I could make an impact on drive bloat.

The approach I took was to develop a somewhat automated method of generating a list of files across multiple partitions, and then put that list into a spreadsheet.  It took some work to figure out a way to automate the production of a file list that would work simply in a spreadsheet.  The problem was that simple batch commands, with which I had some familiarity, did not want to put all relevant information about each file on a single line.  (While I was particularly interested in the full path, file size, and date, others might have wanted to draw on some other available types of file information, such as file attributes.)

As described in another post, I found the desired solution by installing TCC/LE and running its PDIR command in a batch file. This required two parts. First, I had to work up the command that would make everything happen: it would start TCC/LE and tell it to open a batch file. To find where TCC was installed, I right-clicked its Start Menu icon and checked Properties > Target. To see if this would all work, I created a little batch file called Test File.bat. Then I ran this:

"C:\Program Files\JPSoft\TCCLE13x64\tcc.exe" "D:\Current\Test File.bat"
It worked. Test File.bat ran. So I went ahead and replaced the second part of that command with the name and location of the real batch file that I wanted to run. I called it ListAllFiles.bat, and I put it in a permanent location with other batch files that I would run for various purposes. (For the moment, it was an empty file, but that would change momentarily.) I also made a copy of the Start Menu shortcut that would run TCC. I modified that shortcut's Properties so that its Target line contained the information just described: the path to tcc.exe, and then the path to ListAllFiles.bat, each in quotation marks as shown in the line quoted above. Now I could double-click on that icon to make TCC run ListAllFiles.bat, instead of having to look up the proper command syntax. Once that was done, I just needed to make sure ListAllFiles.bat said what I wanted. I was still working on that, but for now it looked like this:
@echo off
cls
echo.
echo For cleanest results, empty the Recycle Bin before proceeding.
echo.
pause
cls
echo.
echo Notepad may take several minutes to display a large list.
echo.
echo Be patient ...
echo.
:: PDIR requires TCC/LE to be installed
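:: Options: /s = include subdirectories; in the /(...) format group,
::   dy-m-d = date as year-month-day, zc = size with commas, fpn = full path and name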

pdir D:\ /s /(dy-m-d zc fpn) > "D:\Current\List of All Files on D and E.txt"

pdir E:\ /s /(dy-m-d zc fpn) >> "D:\Current\List of All Files on D and E.txt"

start notepad.exe "D:\Current\List of All Files on D and E.txt"

exit
The last four lines (double-spaced for clarity, in case the blog wraps them) were where the action was; everything before that just displayed a few informational notices, plus some comments about PDIR that would not be visible when the batch file ran. I opened the resulting text file, "List of All Files on D and E.txt," in LibreOffice Calc, since LibreOffice (alternatively OpenOffice 3.3) could accommodate a large number of rows. When I did that, LibreOffice detected that it was a text file and opened it in LibreOffice Writer instead, but I just copied and pasted from there into Calc. I told Calc that the imported text had fixed-width columns, and I pointed out where those columns were.  LibreOffice Calc crashed repeatedly during this process -- I had to remember to save frequently -- but in the end it came through.

So I had my table. I added a column to display the extension, extracted it using the RIGHT function (with different values to extract extensions of different lengths, e.g., .html, .px), and saved the extension in an adjacent column (i.e., one undisturbed by those calculations of varying extension length).  I added a "Last Examined" column, in which I could record the date on which I had last checked whether it was currently feasible to shrink a given file, and another column for notes.  For instance, I had a 6GB zip file that I couldn't get into right now, because there were things I needed to do and learn before I would be ready for the project that opening it would commence.  For purposes of making a first pass through the files, I could filter out that one, among many others, as not being of further concern right now.
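(In hindsight, a single formula could have grabbed whatever followed the last dot, regardless of extension length.  One way to do it -- assuming the full path was in A2 and contained at least one dot -- was to replace the last dot with a placeholder character and take what came after it:

=MID(A2,FIND("~",SUBSTITUTE(A2,".","~",LEN(A2)-LEN(SUBSTITUTE(A2,".",""))))+1,10)

The LEN subtraction counts the dots, SUBSTITUTE swaps the last one for a tilde, and MID takes up to ten characters after it.  But the RIGHT approach worked well enough.)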

Now that the spreadsheet was ready, I could do some file sorting.  First, I sorted by date, and also by extension, to find any files whose name or other properties might need to be adjusted, perhaps with the aid of a relevant utility (e.g., SetFileDate, TrID).  Then I sorted by file extension.  I added a Flag column to the spreadsheet, to indicate files that would be worth looking at, to see if I could possibly shrink them.  I put an X in the Flag column for each AVI file.  I suspected that, with or without editing, most AVIs could probably be converted to MP4 or some other more compressed format that would retain about the same apparent quality for a fraction of the space.  Likewise for BMPs (most could probably be JPGs) and WAVs (many could be converted to MP3).  I also flagged all ISOs, which I probably didn't need to keep, and all ZIP (and RAR and 7z) files, because they could hold a lot of unnecessary stuff somewhat removed from notice.

Next, I sorted the spreadsheet in order of declining file size, and flagged all files larger than 500MB.  Within that sort, I filtered for common image extensions (including PDF as well as JPG and PNG) and flagged all files larger than 100MB.  I could see that these steps were going to draw attention to whole folders full of files.  For example, I noticed a folder of mixed MP3 and WAV files, all of which could have been MP3s.  In that instance, I suspected I would probably wind up converting the whole folder at once.
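Those mass conversions looked scriptable, too.  As a hypothetical sketch -- I hadn't settled on a converter, but ffmpeg was a plausible candidate, assuming it was installed and on the PATH -- a batch file run in such a folder could process the whole set:

:: Convert every WAV in the current folder to a 192 kbps MP3 (requires ffmpeg)
for %%f in (*.wav) do ffmpeg -i "%%f" -b:a 192k "%%~nf.mp3"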

I figured I would probably refine this approach if I went back through it again sometime in the future.  But for right now, these steps resulted in the flagging of about 9% of the total number of files on these partitions -- and those 9% accounted for about 60% of disk usage.  I probably wouldn't have time to work through all of those files individually.  But identifying a few major categories of unnecessarily large files did seem likely to yield some reductions in disk usage that I could achieve through mass conversions and other relatively simple steps.  Finally, while I had concluded that I liked LibreOffice Calc -- it had pretty much stopped crashing as I grew more familiar with it and probably made fewer finger fumbles -- the focus on just 9% of the files gave me a list short enough to handle in Excel 2003.

Friday, February 17, 2012

Windows 7: Finding a DIR Alternative

I needed a DIR-type listing that would provide extended information about a file:  its name, date, and size, and also its path (i.e., the folder and subfolder where it was located), all on a single line of output.  DIR didn't seem to be capable of this, and neither did the utilities I found with a search (e.g., Karen's Directory Printer).
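(One caveat, for the record:  CMD's FOR variable modifiers could apparently come close -- %~t for the timestamp, %~z for the size, %~f for the full path -- though slowly, and with little control over formatting.  A sketch, as a batch file:

@echo off
:: One line per file: modification date/time, size in bytes, full path
(for /r "D:\" %%F in (*) do @echo %%~tF %%~zF %%~fF) > dirlist.txt

I didn't pursue that route.)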

Another search raised the possibility that certain Linux utilities brought over to Windows might have this kind of capability.  I didn't want to run a Linux virtual machine on Win7; I just wanted to be able to run Linux commands that might add functionality I wasn't getting in Windows 7.

Linux commands were probably not the only alternative.  For instance, I could have learned how to use Windows PowerShell scripts.  My general impression of the Microsoft approach (as in the contrast between original BASIC and VB) was that, unfortunately, something that could be done with one relatively simple command in another tool would require three or four lines of code, which I would be able to write only after mastering a handful of relatively abstruse programming concepts, in the Microsoft product.  This impression seemed borne out when a search led to an indication that the DIR equivalent in PowerShell would require a multiline FOREACH loop.

Preliminary inquiries gave me the impression that Cygwin sought to provide a subsystem that would emulate a Linux machine within Windows.  There were indications that other projects (e.g., MSYS) sought to provide a somewhat comparable environment, with a footprint (e.g., 110MB) to match.  These seemed a tad heavy for my purposes; I was looking for something more like GnuWin, which was described as relying "only on libraries provided with any standard 32-bits MS-Windows operating system" and as not needing any Unix emulation.  Ideally, I would have some cool, relatively simple Linux-like commands available at the Windows command prompt.

By this point in my investigation, several people had mentioned CoreUtils.  This turned out to be a package within GnuWin.  The CoreUtils homepage described it as "the basic file, shell, and text manipulation utilities of the GNU operating system."  GNU was "a Unix-like operating system," in development since 1983, that apparently provided most of the materials used in Linux distributions (including Debian, from which Ubuntu was built).

To clarify, it appeared that the CoreUtils existed in GNU, and there was an offshoot called CoreUtils for Windows.  Apparently this was what I would be getting through GnuWin.  There were other approaches to this sort of thing (e.g., Gow, UTools, UnxUtils), but my sense at this point was that GnuWin was dominant in this category.

I looked at the list of tools included in CoreUtils (for Windows).  I didn't count them, but I thought I remembered seeing an indication that there were more than 100 of them.  They were grouped into three main categories:  file utilities, text utilities, and shell utilities.  In the file utilities group, the description of the ls command was simply "lists directory contents"; vdir would apparently provide a "long directory listing."  These sounded like what I needed.  Examples in the text utilities category included comm ("compares two sorted files line by line") and uniq ("remove duplicate lines from a sorted file").  Examples in the shell utilities category included sleep ("suspends execution for a specified time") and uname ("print system information").

Although I could have just clicked on a download link, I went into the folder for the latest version and saw that it had not been updated since 2005.  This made me wonder whether I should have opted instead for Gow (short for GNU on Windows), which had apparently been updated as recently as November 2011.  I found a spate of (1 2 3 4 5) brief summaries of Gow published about that time.  Their similarities raised the thought that they may have been written from similar press releases.  Not that that would necessarily be bad.  Any product being promoted in 2011 could count as fresh air against a 2005 alternative.  But it was not reassuring that none of these explained clearly whether Gow was genuinely different, or just a borrowing, from the seemingly better-documented and more widely used GnuWin.  I found a page stating that Gow had been developed by a corporation in 2010 and used for some years before being released as open source.  This appeared to be an authoritative page.  It puzzlingly characterized GnuWin as being appropriate "if you want just one or two utilities."  A list of Gow utilities seemed similar, at a glance, to the GnuWin list (above), though I noticed that it did not have vdir.  The seeming mischaracterization of GnuWin, combined with the sense of evasion in the press-release writeups, persuaded me to stick with Plan A.

So now I downloaded and installed the executable (exe; not src.exe) version of CoreUtils (6MB).  But, oddly, the installer didn't give me a way to run the program.  My Start Menu had links to several PDFs.  Actually, it was rather messed up: there were four shortcuts to a total of two PDFs, and some of those links were buried about five layers deep in superfluous subdirectories. There were also two links to CoreUtils Help files that, when I clicked on them, gave me the familiar "Why can't I get Help from this program?" message that Windows 7 kindly provided whenever I tried to run Help files written for Windows XP.

Obviously, I ignored the manuals' actual contents and went looking for a way to run the program.  Weird thing:  I had all these redundant and dysfunctional help materials, and a link to an Uninstall routine, but no actual "Run CoreUtils" shortcut. I was half-tempted to uninstall them as defective, when it occurred to me that, well, they're supposed to be run from the command line, not the Start Menu.  So, OK, I went to the command line and typed "ls."  Windows said, "'ls' is not recognized as an internal or external command, operable program or batch file."  Hmm.  The manual, then, if I must.  Or manuals, I should say:  a regular-looking manual and also what appeared to be the set of Linux MAN (i.e., manual) pages, both in PDF format.  Neither had installation instructions.  I went to the ls MAN page.  It seemed to say that "ls -a" would be a working command.  Well, not on my machine, it wasn't.

I rooted around and found an article on how to use CoreUtils.  It said that I would have to adjust the PATH environment variable to tell the system where to look for the CoreUtils command instructions.  My way of applying those instructions was as follows:  first, in Windows Explorer, find where the CoreUtils executables (e.g., ls.exe) were installed.  On a 32-bit Windows 7 system, the location would probably be C:\Program Files\GnuWin32\bin; on a 64-bit system, C:\Program Files (x86)\GnuWin32\bin.  With that folder selected, click on the address bar at the top of Windows Explorer, make sure the whole address is highlighted (Ctrl-A if necessary), and copy the address (Ctrl-C).  Then I went to Start > Run > SystemPropertiesAdvanced.exe (I could have used sysdm.cpl and then the Advanced tab) > Environment Variables > System Variables > highlight Path > Edit > hit the End key.  There, I typed a semicolon (";") and then pasted in what I had copied from the Windows Explorer address bar.  (I could have typed it manually, using the 32-bit or 64-bit address just shown, but this was more accurate and it also forced me to verify the actual location.)  I OKed out of there and tried ls -a again on the command line.  Did I have to reboot to make the Path take hold?  Yes.  That was it.  I had ls, and it listed files.
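A quick way to confirm that the new Path was live in any given console was the built-in WHERE command, which reports the full path of whatever executable the console would actually run (or an error if it finds none):

where ls
ls --version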

So now, how about getting all that information mentioned at the outset -- path, date, etc., all on one line?  First question:  how could I get command-line command help?  In Windows, it was DIR /?.  But the /? option gave me an error with ls.  "man ls" didn't work either.  Page 9 of the manual PDF said the MAN pages were no longer being maintained.  I wasn't sure if that applied to what looked like the MAN pages included with GnuWin.  There wasn't a MAN MAN page in that PDF.  Page 10 said --help might work.  I tried "ls --help" and experienced satisfaction.  What I was seeing there looked like what appeared on pages 50-52 of the man PDF, pages 60-70 of 176 (text pages 52-62) in the more explanatory help PDF.  I wasn't inclined to read 11 pages to figure out how to get my directory listing.  Skimming down through the ls --help output, I tried "ls -l -N -R."  Good, but no cigar:  the path wasn't on the same line as the filename; no improvement over DIR.

The user's guide PDF didn't seem to think that there actually was a way to print the file's path on the same line as its date, filename, etc.  And so there I was.  I had come all this way with faith in my heart for the infinite possibilities of Linux.  I fervently believed that, with GNU, anything was possible.  But now, with my limited knowledge of Linux and such, cruel reality was saying Bismillah, no! -- we will not let you have all that stuff on one line of output.  There probably was a way to do it with some other tool, like the awe-inspiring grep, available in a different GnuWin package.  But I wasn't quite ready to go there.  In this project, grep looked, for me, like a bridge too far.

I thought about posting a question in the GnuWin Help forum.  But there had been only a handful of posts there in the last couple of months.  I also thought about going down the list of other utilities contained in CoreUtils, so as to demonstrate to myself that this hadn't been a wild goose chase.  I thought about trying Gow after all, just in case its version of ls had different capabilities.  I thought about working up a kludge in which I would do a listing of all directories first (with, e.g., "dir /ad /s /b") and then try to invent a way to append the pathname to each file line.

But before pursuing those rather lame possibilities, I noticed TCC/LE, advertised as a complete, powerful replacement for Windows CMD.  (TCC was short for "Take Command Console.")  It got 3.5 stars from 43 voters at Softpedia, only a solitary vote (five stars) at CNET -- but it had apparently been updated there just a few days earlier.  At MajorGeeks, it averaged 4.07 from 38 voters.  The description said it had enhanced commands (specifically including DIR) with new options.  A search didn't encourage the sense that there was a regular category of this sort of thing, with lots of competitors.  I downloaded and installed it.  The installation process seemed pretty slick, ending with a direct ride to their forums.  The installation left me with an open CMD window with a funky prompt, though apparently it was actually their own version of a command window.  (I did have another Win7 command window open throughout the installation.  It remained functional; I was able to close and open a new one after installation.)  I typed Help at their command prompt and went straight into their GUI help dialog, which actually made me say "Wow."  It wasn't spectacular; it was just good, and helpful, which I guess counts as spectacular after a long slog.  I replaced their ugly prompt with the ordinary Windows one by typing "prompt $P$g" at the prompt, though not without first amusing myself with variants (e.g., "Now what?").

Eventually I discovered that their help dialog was more or less the same as their online help page.  The manual had a large number of further instructions on how to tinker with the prompt and, it seemed, everything else.  Typing "option" at the prompt brought up settings, but not an obvious way to preserve prompt settings between sessions; it appeared the answer to that might lie somewhere within their SET command.  Anyway, I found information on their DIR command almost instantly, and also got a cursory version of it by typing dir /? at their prompt.  It led me to PDIR, and there I found the answer I was looking for.  What I had to type in a TCC/LE command window was this:

pdir D:\ /s /(dy-m-d zc fpn) > dirlist.txt
That gave me all of the information I was looking for, on a line-by-line basis, for every file on drive D, output into dirlist.txt.  Specifically, with the options in that sequence, I got the date (y-m-d), size (with commas), and the file path and name.
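So each line of output looked something like this (an illustrative line, not copied from the actual listing):

2012-02-17  1,234,567  D:\Current\Example File.txt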

I took a quick look at their list of Commands by Category.  I also saw that they had a number of video and textual tutorials.  An impressive program.  But in any case, this investigation was done.