Tuesday, May 25, 2010

PUSBLANXAD01 AD-LAN01 PC USB Network Adapter Driver Blues

On Amazon.com, I bought an item described as "NEW USB to LAN RJ45 Ethernet 10/100 Network Adapter Card."  The purpose was to see if I could use USB to connect to the Internet on a computer that did not have a working ethernet connector for the usual cable connection.  The device was cheap -- about $7 with shipping -- and when I plugged it in, it lit right up.  The problem was, contrary to the ad, it did not come with a driver CD.  And since the Windows Vista computer I wanted to use it on was, of course, not able to go online, I could not download and install the drivers automatically.

The device came in a little box with a label on it that read, "16254 PUSBLANXAD01[J133] PC USB Network Adapter."  The UPC product number on the box was 8-77083-03542-3 (or 877083035423).  The device itself had a label that said, "Model:  AD-LAN01."  Both stickers said, "Made in China."

Buy.com had a review that I didn't see before buying, in which the buyer said that s/he had the same problem -- no driver CD.  That Buy.com webpage said the item was made by eForCity.  I went to the eForCity webpage.  It had no link for driver downloads.  There were several reviews.  Some said they had no problem; some said it didn't work.  I wondered whether it mattered if you used it on Windows XP or Vista.  I plugged it into a WinXP machine and tried that.  Windows recognized it as a "USB Network Controller," but the Found New Hardware Wizard said, "The wizard could not find the software on your computer for USB Network Controller."  I chose the option of connecting and searching for the software on the Internet -- using the existing ethernet connection on that computer to do so.  A minute later, it said, "Cannot Install This Hardware.  The hardware was not installed because the wizard cannot find the necessary software."  I verified that my online connection was working OK, tried again, and got the same thing.

Some of these reviews said something about downloading drivers from other websites.  Good way to get a virus.  I tried a search for AD-LAN01 instead of the previous search for PUSBLANXAD01.  That turned up only a couple of hits, one of which was a thread in which the person said they had tried downloading drivers and still had no luck, and another of which was a driver download page for some kind of graphics device that was apparently also called the AD-LAN01.  I tried a search for the UPC.  This gave me a post from someone who said the CD did come with theirs, but it was unreadable and they had to reinstall the driver weekly.  So, OK, toss it in the parts box and maybe someday I'll find a computer that it works on.  Junk.  Should have bought a brand-name product.  And that was my next step.

Monday, May 24, 2010

Excel 2003: Count the Number of Times a Letter Appears in a Cell

I had tracked down the solution to this problem once before, and then couldn’t remember it or find it when I needed it again, so here it is.  It’s borrowed from another source, but I don’t mind, as long as it meets the need.

The question is, how do you count the number of times a letter occurs within a cell, in Microsoft Excel 2003?  I was searching for this:

count occurrences "of a letter in a cell" "excel 2003"

when what I should have been searching for was this:

“Count the times a specific character appears in a cell”

but probably not this:

"excel 2003" "Count the number of times a character appears in a cell"

Anyway, the solution is to use either of these formulas, where "/" stands for whatever character you want to count:

=LEN(A1)-LEN(SUBSTITUTE(A1,"/",""))
=-LEN(SUBSTITUTE(A1,"/",""))+LEN(A1)

The former is simpler, and it works.
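
For example, if A1 contains the word "excellent" and you want to count the letter "e" (note that SUBSTITUTE is case-sensitive), the formula returns 3:

=LEN(A1)-LEN(SUBSTITUTE(A1,"e",""))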

Sunday, May 23, 2010

How to Arrange Cells from Many Columns into One Column in Excel 2003

Suppose you're using Microsoft Excel 2003.  Suppose you have data in multiple columns, in an irregular array, like this:
     A     B     C
1          5
2    3     14
3    2           6
4          8

And suppose you want to get all of that data into column A, like this:
3
2
5
14
8
6

How should you proceed?

Summary

To arrange an irregular table so that all of its cells are in a single column, create a separate worksheet for your calculations.  Count the number of cells containing data, so that you can verify the process at the end.  Use the CELL function to return the addresses of the cells that actually contain data.  Copy those results into Word.  Convert the resulting table to text.  Use Find-and-Replace to clean up that text.  Paste it back into Excel.  Use string functions and indexing as needed to arrange the list of addresses in the order you want.  Use the INDIRECT function to show the contents of the referenced cells.

If this post is helpful, please add a comment below.

Step by Step

In this answer, I'll take the seemingly slow route, because on a big spreadsheet it ends up being faster and safer than doing the job by hand.

First, find out how big your spreadsheet is.  From anywhere in the spreadsheet, hit Ctrl-Home.  Let's say that takes you to cell A1.  That's the upper left corner of your spreadsheet.  Now hit Ctrl-End.  Let's say that takes you to FB1765.  That's the lower right corner of your spreadsheet.  (That's a pretty big spreadsheet.)

With a spreadsheet that big, things can get confusing.  If it were smaller, you could do the conversion manually.  There are a couple of ways to do that.  One would be to sort each column, by itself, and cut and paste only those cells containing data to the bottom of column A.  Another way would be to do an AutoFilter (Data > Filter > AutoFilter) and cut and paste the results from each column.

But we have a big spreadsheet, and we want a faster and safer solution than we could get with a manual cut-and-paste operation.  So now open a new worksheet within the existing file.  That's Insert (from the menu bar) > Worksheet.  Do your work here.  This will give you more space to work in, and will protect your original spreadsheet from unwanted changes.  (I'm referring to "spreadsheet" and "worksheet" interchangeably here.)  So remember:  we won't be making any changes to your original spreadsheet; all of this will take place on other spreadsheets.

Let's say the original spreadsheet is called Multiword and this new spreadsheet is called Sheet1. In Sheet1, go to cell A1 and type this:

=IF(LEN(Multiword!A1)>0,"x","")

This says, if the length of cell A1 in Multiword is greater than zero (that is, if there's something in the cell, even just a spacebar space), then give me an "x"; otherwise, give me nothing.  This is useful because sometimes formatting can cause cells in Excel to behave as though there were something in them, when there's not.

Now let's copy that formula so that it covers the same territory in Sheet1 that your data occupy in Multiword.  Move your mouse cursor to the lower right corner of cell A1.  Your cursor will change into crosshairs.  Left-click on that lower right corner of cell A1 and drag it down to A1765.  Let go, and then left-click and drag it across to column FB, and let go.  This gives you an "x" corresponding to each cell in Multiword that contains data.  Now select it all (Ctrl-A) and then go to Format > Column > Width > 1.  (Or even narrower, e.g., .5.)  This gives you a more easily visualized map of how your data is laid out.  You could have done the same thing by just making the formula say =Multiword!A1, and this would have had the advantage of showing you the actual contents of Multiword, as you moved your cursor from one cell to another; but this can be easier to think about.  Besides, we can use those consistent "x" values.

Now let's see how many entries you should wind up with at the end.  In Sheet1, go to a cell outside your data map.  In this example, let's go to A1767.  There, type this:

=COUNTIF(A1:FB1765,"x")

That will tell us how many cells contain data.  It may not display correctly in cell A1767 because the column width is too narrow, but you can see what it says by either widening the column or going to A1767 and hitting F2 and then F9.  In my example, that shows me that I have 21,242 cells containing data.  So in my final result, I should wind up with data in cells A1 through A21242, and nowhere else.

Now let's say I like that map in Sheet1, and I want to save it, but I don't want it to take up calculation time.  I can freeze it all forever -- that is, I can convert it all to values instead of formulas.  To do this, go to A1 and hit Shift End-Home (i.e., while holding Shift, hit End and then Home) to select it all.  Hit Edit > Copy and then Edit > Paste Special > Values.  Hit the Enter key a couple of times, until it looks like it's done.  Now those formulas in Sheet1 will all be converted to simple "x" entries.  Save a version of the file for backup.  For instance, let's call it BigFile 01.xls, and then save again as a newer version (BigFile 02.xls).

Alright.  On to the main event.  Let's create another spreadsheet, Sheet2.  Here, in cell A1, enter this formula:

=IF(Sheet1!A1<>"x","",CELL("address",A1))

That tells Sheet2 to enter the location of cell A1 into cell A1.  That is, Sheet2!A1 will now say $A$1.  (Note that, if you didn't want to keep Sheet1 as a map, you could incorporate the LEN calculation (above) into this formula, and do both at the same time on Sheet1.)
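
If you did want to collapse the two steps, the combined formula might look something like this (a sketch that tests Multiword directly instead of relying on the "x" map; I'll stick with the two-sheet version below):

=IF(LEN(Multiword!A1)>0,CELL("address",A1),"")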

In Sheet2, copy that formula to all cells, from A1 to FB1765, as described above.  To picture what's happening, imagine doing it all in a single small spreadsheet, using the little array shown at the top of this post:


In that example, what I want next would look like this:

$A$2
$A$3
$B$1
$B$2
$B$4
$C$3

This would be a step on the way to getting values like this:

3
2
5
14
8
6

So how do we do that?  In a big spreadsheet like mine, it's easier to do it in Microsoft Word.  So let's get Sheet2 ready for transfer.  Freeze Sheet2 as described above (with Paste Special etc.).  Hit Ctrl-A to select it all, and then Ctrl-C to copy it all.  In an empty Word document, hit Ctrl-V to paste it all.  With a big spreadsheet, this could take a while, as Word slowly gags on a couple hundred columns.  The result could be ugly -- mine was a pinstriped thing that didn't look like it contained any data at all -- but fear not.  When Word is done figuring it out, click somewhere on the resulting table.  Choose Table > Convert > Table to Text > Paragraph marks.  It will default to a checkmark in "Convert nested tables," which is fine.  Click OK.  After a couple of years, Word will give you a very ragged document, with lots of spaces between rows.  (Mine was more than 3,000 pages long.)  These blank rows are easy to clean up with Find-and-Replace.  In Word, ^p is the newline character for most documents.  So do a Find-and-Replace (Ctrl-H) to replace two newlines with one.  In other words, replace ^p^p with ^p.  (Actually, before doing that, you may want to remove spacebar spaces before or after the ^p, else some lines may not get fixed.)  Repeat the ^p^p replacement until all of your cell references are in a nice list.  Word may continue to believe that it needs to remove a couple more ^p^p duplicates, but at some point you can tell it's lying.  Save the result.  Let's call it BigFile.doc.

When I went through these steps with one file, they worked fine.  When I went through them with another file, however, I had a problem at this point.  The problem was that Word did not convert all of the lines properly.  It jammed a bunch of Excel cell contents together on the same line, instead of giving each its own line.  (I could tell:  I did a LEN in a separate column for each imported line in Excel, sorted on that column, and found that some were very long.)  To fix this, I had to search the Excel file to find a character that did not already occur in it (e.g., @ or `), and then revise the formula (above) so that it would stick that character on the end.  (Don't use ^ or ~ or other characters that don't turn up normal search results when you try to search for them.)  Then my first step in Word, after converting table to text, was to search for that character and replace it with ^p.
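
For instance, using @ as the marker character (assuming @ appears nowhere in the data), the revised Sheet2 formula might look like this:

=IF(Sheet1!A1<>"x","",CELL("address",A1)&"@")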

Another innovation, in that second try, was to combine the text and its cell location.  Using the data shown above, this gave me this kind of result:

3$A$2
2$A$3
5$B$1
14$B$2
8$B$4
6$C$3


That way, I could tell where the number (e.g., 3) had come from (e.g., cell A2), and I could use FIND and MID functions to put the numbers and their locations in separate columns.
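
A sketch of that split, assuming the combined entry (e.g., 3$A$2) is in cell A1:

=LEFT(A1,FIND("$",A1)-1)
=MID(A1,FIND("$",A1),LEN(A1))

The first formula returns the value (3); the second returns the cell reference ($A$2).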

Anyway, to continue.  If cell order is important for your purposes, sort the Word doc.  If it's not too big, you can do it in Word.  Hit Ctrl-A and then Table > Sort > Sort by Paragraphs.  Mine was too big, so I created a new Excel spreadsheet, Sheet3, and pasted it back into there.  Sure enough, I had my 21,242 entries in column A.  Excel didn't sort them the way I liked, though:  it had $A$9 after $AY$896.  This called for some use of the LEN and & functions.  For instance, if the LEN of the cell containing $A$9 is less than the LEN of the other cell, then insert some zeroes (using MID and FIND and &) before the 9; and to get $A before $AY, consider using an Index column to rank the entries in the order they should go (with maybe a temporary addition before the $A).
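
One way to build such a sort key, as a sketch:  assume the reference (e.g., $A$9) is in A1, put helper formulas in B1, C1, and D1, and assume no column name is wider than three letters.

In B1:  =MID(A1,2,FIND("$",A1,2)-2)
In C1:  =VALUE(MID(A1,FIND("$",A1,2)+1,10))
In D1:  =RIGHT("  "&B1,3)&TEXT(C1,"0000000")

B1 extracts the column letters, C1 extracts the row number, and D1 pads both (letters to three characters, rows to seven digits) so that an ordinary text sort on column D puts $A$9 ahead of $AY$896.  Copy the helpers down and sort the whole list on column D.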

Once you have your 21,242 (or whatever) references in the order you prefer, there in column A in Sheet3, enter this in B1:

="Multiword!"&A1

and enter this in cell C1:

=INDIRECT(B1)

Copy cells B1 and C1 all the way down to the bottom of the list (in my example, through row 21242).  You may want to save your work as a new file (BigFile 03.xls) and then freeze column C, and then delete all other columns and worksheets.
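
Carrying the small example forward:  if Sheet3!A1 holds $A$2, then B1 evaluates to the text Multiword!$A$2, and C1 displays 3, the value actually stored in that cell of Multiword.  The first few rows would look like this:

A1:  $A$2     B1:  Multiword!$A$2     C1:  3
A2:  $A$3     B2:  Multiword!$A$3     C2:  2
A3:  $B$1     B3:  Multiword!$B$1     C3:  5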

*  *  *  *  *

Again, if this post is helpful, please add a comment below.  Cheers!

Wednesday, May 19, 2010

Disability Prevalence -- Where Are We?

This post summarizes the general flow of my posts on disabilities over the past half-year.  I still have a few more posts in draft form, and I'll be wrapping those up shortly, but this is a good point at which to sketch out the picture as it has developed in this blog.

This post originated as an e-mail message to a researcher who seemed potentially interested in looking into data on disabilities.  I wanted to summarize, for him, the questions I have been studying.  As the message grew longer and began to cite my other blog posts, I realized that I should probably just put it up on the blog and refer him to it.  So what was going to be an e-mail message has now become the following paragraphs.

*  *  *  *  *

Let me describe the situation in general terms, and see if there are particular aspects of it that seem to have the best potential for further investigation from a research perspective.  I tend to be somewhat philosophically oriented, so my apologies in advance if it takes me a while to reach the ground.

What the Question Is

The general question is, how many people have disabilities?  There is a definitional aspect to that question.  Speaking strictly from my own perspective, I have posted some blog entries about such matters.  This definitional question matters in the sense that, if we make the circle too small, we deny relevant assistance to people who need it.  An example, in the area of mental disability, is a person who does not qualify for an official psychiatric diagnosis but nonetheless experiences obvious difficulty.

That general question is operationalized in various surveys.  At this level, we move from the purely conceptual to a mix of the hypothetical and the actual.  Elsewhere, I have cited references to a so-called National Disability Data System (NDDS).  The NDDS itself does not exist formally; the concept is that it exists in effect, through the data provided by actual research efforts.   This is still an academic's discussion; at this level we are kicking around various ways of going at the question of disability prevalence.

From there, we move to a more concrete level.  This is the level at which politicians and the public are given specific numbers.  They may not be the right numbers, but that's what footnotes are for.  Most notably, the American Community Survey (ACS) is replacing the decennial census as the source of those numbers; that is, questions about disabilities have disappeared from the census because the prevalence of disabilities is now being estimated through the ACS rather than counted.  The ACS is taking over that role down to the local level.

Why It Matters

As you can see, I have been trying to get a grasp on what we think we know, and why we think we know it.  But why does it matter?  Why should we care about the prevalence of disabilities?  There seem to be two ways to answer that.

The National Perspective

On one hand, we can approach the issue from a national perspective.  As the posts describe, the disability-related questions on the ACS have been modified, in the last few years, for purposes of improved reliability.  All well and good; but in the process, the estimate of people with disabilities dropped by some 15%.  Meanwhile, more sensitive measures (e.g., the SIPP) have the potential (but, alas, not the financial backing) to show a significantly higher rate.

The nation has a profound interest, budgetary and otherwise, in knowing whether the number of people with disabilities is 38 million or, instead, 53 million (to cite one alternate figure that I have encountered).  A million people here, a million people there, and pretty soon we're talking about real people.  The estimates are definitionally driven, of course, but that's the point:  how much higher does the prevalence rate go if the researcher uses a different, but comparably respectable, definition?

For example:  in an interesting book, Bagenstos contends that the Americans with Disabilities Act (ADA) has been developed in the direction of treating disability as a minority-rights kind of issue.  This, he says, has had the advantage of drawing upon the legacy of civil rights movements of the 1960s and 1970s, thus giving disability rights advocates a certain automatic sense of legitimacy.  The drawback has been that such movements invite opposition from those whom they exclude, particularly if the latter are expected to pay for adjustments to rectify perceived wrongs.

The alternative, Bagenstos says, is to treat the condition in question -- disability, in this case -- as a universal issue, something in which everyone partakes, or is at risk of partaking, through various forms of inability and imperfection.  In this approach, disability prevalence can be calculated without segregating "people with disabilities" into their own conceptual ghetto.  If disability is treated as something that anyone is capable of experiencing, like chickenpox or the flu, but if only a fraction of the population is likely to experience it at any particular point, what is that fraction?

Good question.  But how can we answer it?  The budgetary infeasibility of extending the SIPP to localities across the entire nation, on a par with the ACS, demonstrates that national disability prevalence estimation is presently stuck in a rather absurd place.  Because of its lack of local foundation, it can be gerrymandered, from a desk in D.C., to add or drop five or ten million people here and there, for reasons of statistical or budgetary convenience.  The nation, and the disability community, need something better than that.  The following suggestion illustrates a national alternative on the local level.

The Local Perspective

The relatively narrow operationalization of disability in the ACS is obviously problematic.  If its national estimate of disabilities is on the conservative side, its local estimate will tend to be so as well.

That seems reasonable enough.  But putting it that way highlights a bigger problem.  The idea seems to be that the best way to know whether my neighbor has a disability is to wait for the latest ACS to be completed; wait for the local-level ACS data to be compiled by someone in Washington; adjust that local number upwards by a fudge factor due to the conservative bias of the ACS; and then calculate my neighbor's odds.

Faced with that kind of logic, practical decisionmakers and advocates say, in effect, "Research be damned."  They aren't going to plumb the intricacies of the Supreme Court's latest interpretations of the Americans with Disabilities Act, and they aren't going to invest the time required for a clear understanding of the ACS.  They're going to rely, instead, on what they've heard and what they believe, supplemented by the occasional citation to some source or other.

Suppose we began, instead, from present experience.  Suppose, for example, that I cannot walk to work.  Research on the benefits of outdoor exposure suggests that this state of affairs will tend to make me less happy than I would be if I could walk to work.  The reason for this impairment of my subjective well-being may not be crucial:  it may not matter, for that purpose, whether I can't walk to work because I have no legs or, instead, because the streets between here and there are dangerous for pedestrians.  Either way, I can't do it.

The focus, in that example, is upon achieving a certain outcome.  Outcome-oriented disability estimation is, in essence, the language of actual local life.  The mayor finds that 39% of her constituents are furious about the state of the roads.  They are experiencing some transportation-related disability.  The fact that 3% of constituents are furious about the state of the sidewalks may be politically trivial, but a transportation-related disability nonetheless exists there as well.

Improved accessibility will often be politically infeasible if the public impression is that we are trying to spend a fortune on curb cuts for a small number of people in wheelchairs who never use the sidewalks anyway.  Rather than ask for special handouts, a more defensible view of transportation-related disability would focus on getting the roads and sidewalks into shape for people on foot, in wheelchairs, and in cars.  Infrastructure is essential.  Everyone needs effective transportation.

Summary

Disability has been defined in different ways.  A cursory review suggests that American law is presently oriented toward treating a disability as a flaw in the individual.  Hence, instruments like the ACS look for vision impairments and other personal characteristics that prevent people from functioning like everybody else.  The social model of disability is incorporated only in the limited sense that some survey questions acknowledge, in various ways, that disability may entail mismatch between person and society; yet even that acknowledgement inevitably brings the focus back to the individual.

That approach to disability has the potential to get everyone bogged down in mutual recrimination, with the familiar old vocabularies of "handouts" versus "privilege," and "normal people" versus "the oppressed."  An approach that could be more readily calculated on the local level, and more consistent and politically supportable on the national level, would focus upon desired life outcomes.

Using transportation as a particularly important disability-related outcome, one can ask how many people are not able to get where they need to go within a reasonable amount of time, in a reasonable manner, at reasonable cost.  There are many kinds of transportation-related disabilities in this sense.  Here are some examples:
  • People who are disabled from independent transportation because they are under the control of others.  Examples include children and prison inmates.  
  • Those whose health precludes independent travel -- hospital inpatients, for instance, and nursing home residents.
  • People may also be economically disabled from utilizing independent transportation:  for instance, they may not have money for a car or even for bus fare, assuming there is a bus line near them.
  • People whose obligations prevent independent travel:  people have to stay at work, or have to stay near a certain location to be available for work, or have to stay home with the kids or with a sick relative.
  • Social disability precluding independent transportation.  People are stared at and harassed if they are someplace where, in effect, they don't belong.  This can include kids in the vicinity of a bully, women who are out late alone, bicyclists on a busy street, individuals of an unfamiliar or unwelcome race, and people who dress funny or act funny.
The purpose of such an investigation would be to provide an alternative perspective that would be more immediately familiar to the public and more responsive to actual human experiences of disability.  The idea is that, for whatever reason, some people can't get where they need to go.

Needless to say, this post does not purport to address the gamut of disability-related concerns and issues.  Indeed, it is precisely not that sort of thing.  What I have observed, in my half-year of exposure to disability-related matters, is that the cerebral model of disability -- the one that begins with abstract, individual-oriented definitions and works its way down to concrete application -- is not really very practical.

It tentatively seems that it would be more useful, marketable, and appropriate to treat disability as a matter of sociopersonal constraints that everyone experiences in various forms, and to focus especially upon those global, national, state, and/or local conditions that most profoundly impair the achievement of the most important outcomes.  This approach would still prioritize many individual impairments, but would do so as a matter of an investment in society's future rather than as a handout to a person who has managed to become privileged in the eyes of the law.

Sunday, May 16, 2010

Resuming a VMware Virtual Machine: Could Not Open /dev/vmmon

I was tweaking Ubuntu 10.04 (Lucid).  One step I took was to clean out old entries from the GRUB menu.  This involved removing some old kernels.  That process may have caused a problem for VMware Workstation 7.  When I tried to resume a previously suspended virtual machine, I got this error:

Could not open /dev/vmmon:  No such file or directory.  Please make sure that the kernel module `vmmon’ is loaded.
One source recommended adding some extra lines to the VMware startup script.  I killed VMware and tried that.  I typed "sudo gedit /etc/init.d/vmware" and added those lines at the start of that file, right after the end of the introductory comments.  But it didn't work, and I wasn't surprised; the code seemed a bit scrambled.  I closed Workstation and tried "sudo vmware."  But that didn't help either.  What did finally solve the problem was a simple command:  "sudo service vmware start."  Then I started VMware and was able to restore my virtual machine OK.
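
In other words, the fix amounted to restarting the VMware services so that the vmmon module got loaded again.  If the module were missing entirely (e.g., after a kernel change), it would presumably need to be rebuilt first; the commented line below shows the usual command for that, assuming your Workstation version includes the vmware-modconfig tool:

sudo service vmware start                         # restart the VMware host services; this loads vmmon
# sudo vmware-modconfig --console --install-all   # rebuild the kernel modules if they are missing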

Compiz in Ubuntu 10.04: Same As It Ever Was

Following Gizmo’s Freeware list of tweaks, I decided to try jazzing up the visual appearance of Ubuntu 10.04 (Lucid Lynx).  To do this, I went into Ubuntu's System > Preferences > CompizConfig Settings Manager > Desktop.  Enable Desktop Cube; disable Desktop Wall.  Unfortunately, I went on to play with Effects at the same time, and managed to halfway freeze my system.  Having wasted hours on fooling with Compiz in previous years, I rebooted and went back to System > Preferences > Appearance > Visual Effects > Normal.

Gizmo also described a Windows 7 Aero Snap tweak in which I could drag a window to the left or right side of the screen and it would automatically fill half of the screen.  To make this work, I installed wmctrl in Synaptic.  Then, in CompizConfig > General > Commands, I entered these commands:

  • Command line 0:  WIDTH=`xdpyinfo | grep 'dimensions:' | cut -f 2 -d ':' | cut -f 1 -d 'x'` && HALF=$(($WIDTH/2)) && wmctrl -r :ACTIVE: -b add,maximized_vert && wmctrl -r :ACTIVE: -e 0,0,0,$HALF,-1
  • Command line 1:  WIDTH=`xdpyinfo | grep 'dimensions:' | cut -f 2 -d ':' | cut -f 1 -d 'x'` && HALF=$(($WIDTH/2)) && wmctrl -r :ACTIVE: -b add,maximized_vert && wmctrl -r :ACTIVE: -e 0,$HALF,0,$HALF,-1
  • Command line 2:  wmctrl -r :ACTIVE: -b add,maximized_vert,maximized_horz
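
For what it's worth, here is how I read command 0 (command 1 is the same, except that it starts at the halfway point rather than at zero; this assumes a single monitor and requires xdpyinfo and wmctrl):
WIDTH=`xdpyinfo | grep 'dimensions:' | cut -f 2 -d ':' | cut -f 1 -d 'x'`   # screen width in pixels (e.g., 1680)
HALF=$(($WIDTH/2))                            # half the screen width
wmctrl -r :ACTIVE: -b add,maximized_vert      # maximize the active window vertically
wmctrl -r :ACTIVE: -e 0,0,0,$HALF,-1          # move it to x=0, y=0; set width to half the screen; leave height unchanged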
Then, in the “Edge Bindings” tab, I changed None to Left (for command 0), Right (for command 1), and Top (for command 2).  I opted to disable the Flip Left, Flip Right, and Flip Up actions of Desktop Wall.  I wasn’t sure what that would mean.  I clicked Back > General > General Options > General tab > Edge Trigger Delay = 500 > press Tab button > Back > Close.  The way this actually worked, as I quickly found, was that whatever window was highlighted at the moment would go to the edge of the screen where I put the cursor.  Nice, but it kept the left panel from coming up (since I had set it to Auto-Hide), and then it made the bottom panel go to the top.

It all felt pretty flaky.  Since I couldn’t get back into the left panel I had to restart the system again.  Or, correction, the only button that worked was the Shut Down, so I did that, and then rebooted.  When the system came back up, the bottom panel was back in place, but I still could not open the left panel to get into System > Preferences and make changes to Compiz etc.  Following howefield’s advice, I hit Alt-F2, typed “gconf-editor” (I could also have typed gnome-terminal if I’d wanted Terminal) and went to /apps/panel/toplevels/top_panel_screen0 and unchecked auto_hide.  I closed that, went back into Compiz Commands, and clicked the brush or broom icon at the left to remove each of those three commands.  I went into Synaptic and removed wmctrl.  I set the left panel back to Auto-Hide.  It would not come up again.  I restarted the system.

The restart option worked this time, at least to the point of taking me to the Ubuntu screen and then freezing.  Eventually, I punched the computer's reset button.  I tried again with the left panel.  This time, I just turned off its Auto-Hide option and didn't turn it back on.  I wondered if System > Administration > Update Manager would somehow fix this.  I ran a check for updates and got an indication that there were 21 of them.  I installed those and tried restarting the computer again.  This time, restart worked without having to punch the reset button.  I went back into Appearance > Visual Effects and set it to "None" rather than "Normal" or "Extra."  I changed the properties of the left panel to Autohide again.  Now it would hide and unhide without a problem.  I went back into Visual Effects and tried the Normal setting.  It said, "Searching for available drivers."  I opted to keep the settings.  I closed out of that and tried the panel again.  Once again, the panel refused to unhide.  So, back in gconf-editor, I turned off Autohide again to make the panel visible; I went into Visual Effects and set it back to None; and now the panel was back.  So, OK, Compiz had screwed up the Normal setting so that I had to use the None option; but with the None option, everything seemed to be working acceptably.

That took care of this problem.  If the panel had still not worked, Howefield also offered the option of resetting the panels to the original default, which would have required me to reconstruct the way I had set them up.  Moral of the story, for me, on this work-oriented system, was to continue to avoid Compiz special effects.

Importing Microsoft Word Autocorrect Entries into OpenOffice.org Writer

I had been looking, for some years, for a way to import my list of AutoCorrect entries from Microsoft Word 2003 into the OpenOffice.org (OOo) word processing program.

In Word, I had found AutoCorrect invaluable for converting shorthand expressions into longer terms, saving me a lot of typing. For example, I could type “fttt” and watch it expand to “from time to time,” having previously defined it as such. My list of Word AutoCorrect terms had grown long, into the thousands of entries, so I could not just retype them into OOo Writer manually.

I did know how to export the AutoCorrect entries from Word to a text file. There were apparently several macros available for this purpose. The challenge had been in getting the items from there to Writer. A Linuxtopia webpage now suggested a possible approach, however, and I decided to explore it.

My first step was to get into Writer’s DocumentList.xml file. To do this, in Ubuntu’s Nautilus (i.e., File Browser) I went to /usr/lib/openoffice/basis-link/share/autocorr. I double-clicked on acor_en-US.dat (there were files for other languages and for other flavors of English). There was DocumentList.xml. Now, what to do with it? I right-clicked on it and chose Extract > Extract. This gave me an error message: “Extraction not performed. You don’t have the right permissions.” So I went into Applications > Accessories > Terminal and typed “sudo nautilus,” and then, using that superuser File Browser session, went back to that same autocorr folder and tried again. This time, I didn’t try extracting; I just right-clicked on acor_en-US.dat and chose “Open with Archive Manager” and then right-clicked on DocumentList.xml and chose “Open with” and chose gedit. I went to the end of the file, right before the “</block-list:block-list>” entry, and copied the whole previous entry. In my case, it was the one that would change “yuor” to “your.” In full, it read like this:

<block-list:block block-list:abbreviated-name="yuor" block-list:name="your"/>
They all seemed to follow that same format.  So apparently it was just a matter of getting my Word abbreviations into that form.  To test this, I added an entry right after that “your” entry.  Mine read like this:
<block-list:block block-list:abbreviated-name="yr" block-list:name="your"/>
After making that change, I saved the file.  This provoked a File Roller message:  “Update the file ‘DocumentList.xml’ in the archive ‘acor_en-US.dat’?”  I said yes, i.e., Update.  Then I started Writer and tried typing “yr.”  It didn’t work.  It would correct “yuor” to “your,” but it wouldn’t correct “yr” to “your.”  I rebooted the system, in case that would make a difference, and tried again.  It didn’t.  Yr was still not listed in Writer’s autocorrect replacement list.  I went back and looked at the end of DocumentList.xml.  “Yr” was still there.  Had I not entered it correctly?  It looked like I might have entered it twice, possibly from a previous try at the same thing.  I made sure there was just one entry for “yr.”  Then it occurred to me to delete the one for “yuor” and see what would happen.  Or, even better, I deleted the one for “yr,” the one that I had added, and I changed the one for “yuor” to be for “yr” instead.  I went back into DocumentList.xml but, what’s this, there were two entries for “yr” again.  Then I realized that the file edit time had not changed:  it seemed I was editing and saving the changes, no error messages, but I hadn’t come in as root, so there was not anything actually happening.  Editing as root, I saw another problem:  I had apparently inserted a copy of the list-ending “/block-list:block-list” command before my “yr” entry.  So perhaps Writer wasn’t going beyond that, and this was why it wasn’t seeing the “yr” item.  I made those changes, started Writer, and it worked!  “Yr” became “your.”  I went into Writer’s AutoCorrect options, looked at the end of the list, and sure enough, there was “yr.”

So now the mission was to incorporate a bazillion Word AutoCorrect entries into this DocumentList.xml file.  Or, no, as I thought of it, I decided the first step was to make a backup copy of this xml file and then delete its contents.  I had been working with my Word AutoCorrect list for years.  I didn’t need any surprises from whatever might be in DocumentList.xml.  Actually, to make it easier, I just made a quick copy of the whole acor_en-US.dat file.  Then, in DocumentList.xml, I deleted everything except the file starting and file ending lines:

<?xml version="1.0" encoding="UTF-8"?>
<block-list:block-list xmlns:block-list="http://openoffice.org/2001/block-list">

</block-list:block-list>

Since I would probably be doing this again – adding to the OOo AutoCorrect list from the Word AutoCorrect list, or possibly vice versa – I decided to manage it all through an Excel 2003 spreadsheet.  This, I thought, would also be a good way to compare the AutoCorr lists that I had developed on different computers.  That is, I was using AutoCorrect on more than one computer, and it seemed likely that there would be some cases where those lists were not compatible.  So I began with that part of the project.  I ran the AutoCorrect macro in Word on each computer and brought all of the resulting wordlists together into one folder.  I opened one of those wordlists, copied the whole thing, waited a few minutes to make sure it was all there, and pasted it all into an Excel spreadsheet.  Here, too, I wished the AutoCorrect feature included a column indicating the date last used, because a lot of these entries were totally unfamiliar to me and others were for things I was no longer writing about.  Probably I should have done this spreadsheet thing when I first installed Word.  Then it occurred to me that I could set up a virtual machine, install Word on it, and do something like that now.  But without manual examination, I still wouldn’t be able to tell which of those original Word AutoCorr entries I had ever used.

I did manage to come up with some sorting rules that helped somewhat.  After deleting exact duplicates from the several combined AutoCorr files, I sorted alphabetically according to Value (i.e., the term that resulted from the auto-correction) and then according to value length.  For example, I had given “acl” a value of “actual,” and Word came with “actualyl” as also having a value of “actual.”  I could have left both, but it seemed pretty unlikely that I would let a paper go out with “actualyl” in it (not to mention “additinal” and “adequit”).  Actually, I reasoned, I would rather risk letting a paper go out with “actualyl” in it than to endure the insult of having such a spelling correction in my AutoCorrect file.  So I deleted a bunch of those.  I also searched for items containing a space, since those tended to be from Word, not me (e.g., “witht he” becomes “with the”).  I searched for items of the same length before and after, since these tended to be Word’s typo corrections.  When I was done, I copied and pasted it from the Excel file back into the Word AutoCorr list.  Doing that involved creating a new table with enough rows to accommodate all of the Excel entries, highlighting all those empty cells, and pasting the Excel cells into the highlighted space.  There were some extra rows, which Word redundantly filled by starting over at the start of the table and continuing until all rows were filled; I had to delete those.
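
For the record, the helper columns behind those sorting rules were along these lines (a sketch, assuming each abbreviation is in column A and its replacement value in column B, starting at row 2):
=LEN(B2)
=IF(ISNUMBER(FIND(" ",A2)),"has space","")
=IF(LEN(A2)=LEN(B2),"same length","")
The first gives the length of the replacement value, the second flags abbreviations containing a space (usually Word's own entries, like "witht he"), and the third flags entries whose abbreviation and value are the same length (usually Word's typo corrections).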

The next step was to get rid of the existing AutoCorrect entries in Word, so that the unwanted ones that I had deleted would really be gone.  I did this by creating a Word macro to remove them all.  I had no idea how to do this, but it was easy:  in Word 2003, I went into Tools > Macro > Macros > Create.  It had a space for my new macro, starting with Sub AddTBMenuItem() and continuing on to End Sub.  I pretty much replaced that with the following macro, posted in 2001:

Sub RemoveAllDefaultAutoCorrects()
    Dim aCor As AutoCorrectEntry
    If MsgBox("This is a very destructive macro. Be sure that you " & vbCr & _
              "want to delete all the AutoCorrect entries. There is no undo " & vbCr & _
              "for this action. Click OK to continue", vbCritical + vbOKCancel, "CAUTION") _
              = vbOK Then
        ' Delete every entry in the AutoCorrect list
        For Each aCor In Application.AutoCorrect.Entries
            aCor.Delete
        Next aCor
    End If
End Sub

I closed that, went back into Tools > Macro > Macros, selected that new macro entry, and ran it.  I gathered from somewhere that Word would restore the old list if you didn’t replace it with at least one new AutoCorrect entry, so I created a dummy one, exited Word, and then came back in to see what it looked like.  Sure enough, there was only that one dummy entry.  So now I ran the macro to restore my new list, and that took care of getting Word’s AutoCorr list updated.

Now, how to do the same thing in OOo Writer?  Using the format shown in that Linuxtopia webpage, I went back to the Excel spreadsheet, added another column on the right side, and used text concatenation to add all the missing stuff – basically, everything other than “yuor” and “your” in that example.  The formula I used was this:
="<block-list:block block-list:abbreviated-name="&CHAR(34)&A4&CHAR(34)&" block-list:name="&CHAR(34)&B4&CHAR(34)&"/>"
CHAR(34) was the Excel code for a regular (double) quotation mark.  I had to use CHAR(34) because a quotation mark typed directly inside the formula would be read as the end of the text string rather than as a character to output.  This formula built an XML entry telling Writer to take the abbreviation in cell A4 (e.g., "yuor") and replace it with the value in cell B4 (e.g., "your").  So I copied that formula all the way down the spreadsheet, in my column E (using column D to show the date when I did this, for future reference).  Then I copied all of those cells from column E into Notepad, made sure that Format > Word Wrap was turned off, and saved that as AC.TXT.  Back in Ubuntu, I opened AC.TXT in gedit.  Testing confirmed that OOo Writer was going to have a hard time with items that gedit displayed in funky format.  Most of the items that caused problems that way were due to the use of smart apostrophes (i.e., curly single quotes) in Word.  So back in Windows, in Notepad, I opened AC.TXT, found an example of a smart apostrophe, highlighted and copied it into the Find & Replace box, and replaced it with a simple apostrophe.  I made a note in the spreadsheet, next to these items, to indicate that they were not compatible with Writer.  Word's em dash character was also problematic as an import into Writer, so I had to replace it, in this imported list, with two hyphens (--).

With these changes made, back in Ubuntu, I was ready to paste the revised AC.TXT into DocumentList.xml.  I closed Writer, did the paste, started Writer, and tried it out.  It didn’t work.  I looked at the AutoCorr list.  It had imported only a few items.  It looked like it had stopped at an item containing an ampersand (&).  I deleted that item from DocumentList.xml and tried again.  Now its AutoCorr list was longer, but still only a fraction.  Sure enough, it had stopped at another ampersand.  I went back to the spreadsheet and deleted or changed all items containing an ampersand, and marked them on the spreadsheet for incompatibility as well.  Trying again:  still no cigar.  This time, it seems there was an item in my list that was already in quotation marks.  So I was trying to import something like ““This”” and Writer wasn’t buying it.  I fixed that and tried again.  This time for sure.  It worked.  I had the entire list, and I played with it.  It looked like they were all going to work.  I doctored up the list by putting a copy of Writer’s special character for the em dash into a document (Insert > Special Character > Box Drawing) and then copying it into the places in the Tools > AutoCorrect list where I had had to import double hyphens (--) instead.  At some point, I would probably do the same with the smart apostrophes, ampersands, and other items if I decided to use Writer frequently.

So it worked.  I could now use my list of abbreviations in OOo Writer instead of having to use Word.

Thursday, May 13, 2010

Scanning Functionality for a Brother MFC-7340 Multifunction Device in Ubuntu 10.04

I had installed the printing functionality for a Brother MFC-7340 multifunction device in Ubuntu 10.04 (Lucid Lynx), including a Windows XP virtual machine running in VMware Workstation 7 on that Ubuntu installation.  Now I wanted to set up scanning functionality as well.  This post describes that effort.

I first verified that the scanning functionality was not installed automatically as part of the printer driver.  I went into Ubuntu’s Applications > Graphics > XSane Image Scanner.  (I was not sure whether XSane came with Ubuntu or whether I installed it separately through Synaptic Package Manager.)  XSane said it was “scanning for devices” and then reported that there were “no devices available.”  It occurred to me that this could be because I had connected the printer to the virtual machine, which probably would have made it unavailable for Ubuntu itself.  I went into the virtual machine and selected VM > Removable Devices > Brother Printer > Disconnect.  Then I went back out to Ubuntu and tried again in XSane.  I got the same result.  So it did seem that I would need to install scanner drivers separately.

The Brother Drivers for Linux webpage did have drivers and instructions for scanners.  I went to the driver download page, where I found that the MFC-7340 was categorized as a "brscan3" model.  I selected the Debian driver for the 32-bit brscan3.  There was also an option to download and install a scan-key-tool.  I wasn't sure what that was, or if I would need it, so I held off.  The 32-bit brscan3 download gave me a file called brscan3-0.2.9-1.i386.deb.  There were different installation instructions for USB and ethernet connections.  There was also a separate webpage for Scanner Settings for Normal Users.  They had an Ubuntu 10.04 option there.  The steps I had to take there were, first, to type "sudo gedit /lib/udev/rules.d/40-libsane.rules" and then add two lines near the end of that file, right before the line that began with "# The following rule will disable USB autosuspend for the device."  The two lines I had to add were:

# Brother scanners
ATTRS{idVendor}=="04f9", ENV{libsane_matched}="yes"
Then I had to save that file and reboot the system.  When that was done, as with the printer configuration, I went through the Pre-required Procedures that seemed to apply to all versions of Ubuntu generally or to this version (32-bit 10.04) particularly.  Those steps seemed to be as follows:
sudo -i
mkdir /var/spool/lpd
apt-get install sane-utils
apt-get install psutils
I got an error for the mkdir command, as this was a step I had already done – I think when I installed the printer driver.  I already had sane-utils too, and got a message telling me to run “apt-get autoremove” to get rid of some packages that were no longer required, which I did.  I already had psutils as well.  With the Pre-Required Procedures out of the way, I was ready to follow the USB installation instructions for the scanner driver.  Having already downloaded the driver and connected the MFC-7340’s USB cable, I navigated to the folder where I had put the download and then typed this:
dpkg -i --force-all brscan3-0.2.9-1.i386.deb
dpkg -l | grep Brother
This all seemed good.  I started XSane, adjusted its settings, and scanned.  It worked with the automatic sheet feeder.  It saved in multiple formats, including JPG and PDF.  I tried scanning while standing at the scanner and punching the buttons on the MFC-7340.  That didn’t work.  That, then, was what the scan-key-tool driver was for.  From the download page, I got the 32-bit Debian brscan3 scan-key-tool driver (brscan-skey-0.2.1-3.i386.deb) and looked at the installation instructions.  They said I needed GIMP, which I already had installed (I think it came with Ubuntu), and that I also needed to have installed the scanner driver already, which I had done.  That took care of the Pre-Required Procedures.  Now, as above, I navigated to the folder where I had put the download and entered more or less the same commands as with the scanner driver, plus a couple of additional steps:
dpkg -i --force-all brscan-skey-0.2.1-3.i386.deb    # install the scan-key-tool package
dpkg -l | grep Brother                              # confirm the Brother packages are installed
brscan-skey                                         # start the scan-key daemon
brscan-skey -l                                      # list the scanners the daemon can see
Following their additional instructions, I set the scan-key-tool to run automatically when I started the computer, by going into System > Preferences > Startup Applications > Add (suggested name = Brother Scan Key, command = brscan-skey, comment = Scan from the MFC-7340 console).  I scanned from the console.  The resulting file was saved in /root/brscan, probably because I had run all of these commands as root (i.e., sudo).  I ran brscan-skey again from my normal user (i.e., not root) prompt.  This time, I saved to Image rather than to File.  This provoked GIMP to start up and display the scan.  I saw that, again, it was saving the scans to /root/brscan (though I could only see them when I went there as root).  I changed the file name and location to a PDF in my preferred folder for scans, but GIMP was not prepared to do PDFs, so that was apparently an advantage of saving as File rather than as Image.  I tried again with GIMP, this time as a JPG to my preferred folder.  That worked, but the scankey was still saving copies of them to /root/brscan too.  The instructions said I could type “brscan-skey -u” to change the target user, so I did that as root, and then typed brscan-skey as myself.  Unfortunately, that didn’t do it; it was still saving in /root.  I tried “sudo brscan-skey -t” to stop it altogether.  That worked.  Now it wouldn’t scan from the console.  I started it as myself again.  Now it was saving to /home/ray/brscan.

The next step I wanted to take, following their additional instructions, was to modify the default script for scan-to-image so that it would save in my preferred folder as a JPG, and scan-to-file so that it would save there as a PDF.  Brother suggested changing this line in the default scantoimage script:
scanimage --device-name "$device" --resolution $resolution > $output_file
to this:
scanimage --device-name "$device" --resolution $resolution | pnmtops | gs -q -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=- - > $output_file.pdf
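My reading of what that replacement line does, stage by stage:
# scanimage writes the scanned page to stdout as a PNM image;
# pnmtops (part of the netpbm tools) converts that image to PostScript;
# gs (Ghostscript) turns the PostScript into a PDF via its pdfwrite device,
# writing to stdout (-sOutputFile=-), which is then redirected into the output file.
scanimage --device-name "$device" --resolution $resolution | pnmtops | gs -q -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=- - > $output_file.pdf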
To edit the default script for the scanner's Image option, I typed this:
cd /usr/local/Brother/sane/script
ls
sudo gedit scantoimage-0.2.1-3.sh
Once I was in the scantoimage script, I decided to change several things, including increasing the resolution to 300.  Now, unfortunately, the script didn’t work.  After I punched the Start button on the scanner, it just sat there for a moment, and then gave a long beep and a “sane_read: Error during device I/O,” and a tiny brscan.XXXXX file would appear in /home/ray; and when I opened that file, it was all black; or if I restored the line about echoing the output to gimp, it said this:
GIMP message
Opening '/home/ray/brscan/brscan.3bq3GA' failed: PNM Image plug-in could not open image
and then gave up.  Weird thing is, I couldn’t get it to function normally, even when I restored scantoimage-0.2.1-3.sh to what I thought was its original condition.  I stopped and restarted brscan-skey (i.e., stop with brscan-skey -t); still the same thing.  I restarted the computer and, at the same time, shut down and then restarted the printer.  Having set brscan-skey to start automatically with Ubuntu, I went right to the console and tried again.  Now it scanned.  Maybe I had needed to completely kill brscan-skey after a bad edit.  I edited scantoimage-0.2.1-3.sh again, this time making just one change and then testing it.  Everything was fine until I got to the part where I made the recommended change to the actual scanimage line in the script.  Then, once again, the scanner just started the scan and then froze.  Eventually it reset itself.  I tried again, but now it was just waiting for a few seconds and then giving me that long beep and not even trying to scan.  I commented out their recommended change and restored the way it was originally, saved, and tried again.  Still just the long beep.  So I typed this
kill -9 `pidof brscan-skey-0.2.1-3`
brscan-skey
(note:  those are backticks, not single quotation marks) and then tried scanning again, but still the long beep and the error.  Turning the printer off for 30 seconds and then back on after killing brscan-skey did the trick.  I played some more and got the long beep and error again.  This time, I didn’t turn the printer off; I just killed brscan-skey and waited for the printer to reset itself.  That worked too.  After more searching, I found what looked like it might be an answer in Stutz’s Debian Linux Cookbook (pp. 286-287), which said this:
scanimage outputs images in the PNM (“portable anymap”) formats, so make sure that you have the `netpbm’ package (installed on most Linux systems by default); it’s a useful collection of tools for converting and manipulating these formats.
Well, Synaptic told me that I did not have netpbm installed.  I installed it, using Synaptic.  I observed that a list of devices (produced by “echo 'devicenames == ' | gs -q | tr " " "\n" | sort”) contained pnm but not the pnmtops device used in Brother’s command.  Pnmtops did appear to be in Ubuntu 10.04; it just didn’t appear to be on my system.  So I changed that too.  About this time, I discovered that Ubuntu had probably been giving me error messages in Terminal after each try, but Terminal tended to be buried under other windows.  So now I got this error:
scanimage: sane_read: Error during device I/O
scanimage: received signal 13
scanimage: trying to stop scanner
Segmentation fault
So I changed it back to pnmtops.  This time, I got an additional error, after the “scanimage: sane_read: Error during device I/O” message:
pnmtops: warning, image too large for page, rescaling to 0.691703
pnmtops: writing color PostScript...
pnmtops: EOF/error reading 1 byte sample to file.
It looked like almost nobody had gotten that EOF/error message.  Not a good sign!  The rescaling part did not seem to be a problem.

Around this point, I realized that I had been trying to set the scanner's Image option to produce PDFs, when I had originally wanted to do that via the File option.  I didn't know whether I would have any better luck editing scantofile-0.2.1-3.sh than I had had with scantoimage-0.2.1-3.sh, but I decided to try.  I was puzzled that the Ubuntu manual page said, "Never use mktemp()."  I tried their recommended alternative of mkstemp, but bash did not seem to recognize it (mkstemp is a C library function, not a shell command).  After some additional playing around, I came up with this working script:
#! /bin/sh

# $1 = scanner device
# $2 = friendly name
#
# Resolutions: 100,200,300,400,600

resolution=300
device=$1
sleep 0.01
output_file="/media/DATA/Current/Scan_`date +%Y%m%d-%H%M%S`.pdf"

scanimage --device-name "$device" --resolution $resolution | pnmtops | gs -q -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=- - > $output_file

chmod 644 $output_file
This script would name each PDF by its date and time, so that they would sort in the correct order in Nautilus or in Windows Explorer.  I think netpbm had to be installed (above) for this to work.  This script changed a number of things in the Brother script, including notably the filename and location.

I took these changes to the scantoimage script as well.  That script would normally output a PPM file, so I added a pnmtojpeg conversion line to produce an output file in the better-known JPG format, for maximum compatibility with various browsers and other programs.  I set quality to 95 because, in a brief test, the resulting JPG was half the size of one set to 100.  I included a slight bit of smoothing to make the file slightly smaller and better-looking, without blurring sharp lines.  I put this JPG file into a Lossy folder, to distinguish it from a parallel process in which I used the scanner’s PPM output to create a lossless PNG version, in case I needed maximum quality.  After the scanning session, I could then easily get rid of all unnecessary copies by deleting either the Lossy or the Lossless folder (or by mixing their contents as needed).  The resulting, working scantoimage script was as follows:
#! /bin/sh
#
# $1 = scanner device
# $2 = friendly name
#
# Resolution options: 100,200,300,400,600

resolution=300
device=$1
sleep 0.01
dir_name="/media/DATA/Current/"
file_name="Scan_`date +%Y%m%d-%H%M%S`"

ppm_file=$dir_name$file_name".ppm"
scanimage --device-name "$device" --resolution $resolution > $ppm_file

mkdir -p $dir_name"Lossy"
jpg_file=$dir_name"Lossy/"$file_name".jpg"
pnmtojpeg --quality=95 --smooth=10 $ppm_file > $jpg_file

mkdir -p $dir_name"Lossless"
png_file=$dir_name"Lossless/"$file_name".png"
pnmtopng $ppm_file > $png_file

# Delete or comment out the next line if you want to keep the intermediate PPM file
rm $ppm_file

# Uncomment the next line if you want the JPG to open in GIMP
# gimp $jpg_file &
As with the printer setup, the final question was whether I could use the MFC-7340 from within a Windows XP virtual machine on VMware Workstation 7.  I went into a virtual machine, opened Adobe Acrobat, and entered the commands to scan.  It did not see any scanning devices.  I cancelled out of that and went to VM > Removable Devices > Brother Printer.  I saw that there was no “Brother Scanner” option, so this did not look good.  I clicked on Connect anyway, to connect the printer, and went back to Acrobat.  But that made the difference:  now it saw both TW-Brother MFC-7340 and WIA-Brother MFC-7340.  I usually used the latter, so I went with that.  I went through all the other normal steps to make a scan.  Unfortunately, Acrobat crashed.  It had been doing that anyway in that virtual machine, so I couldn’t infer anything for sure from that.  I tried again.  This time, Acrobat sat there for several minutes with the “Transferring data...” dialog open and then finally said, “Scanning canceled.”  I tried again, this time using the Twain (TW-Brother MFC-7340) option.  It said, “Reading from the device.”  It got as far as 0% Completed, and it hung there.  After maybe 10-15 minutes, I canceled it.  Apparently scanning would not be happening from within the virtual machine.  Otherwise, though, it appeared the project was complete.

Installing a Brother MFC-7340 Printer in Ubuntu 10.04

I was installing Ubuntu 10.04 (Lucid Lynx) on a desktop computer.  I wanted to get my Brother MFC-7340 printer working from within Ubuntu.  I had not been able to make it work with Ubuntu 9.10, but now I had found a post where mdgrech described how it could be done.  The steps I took were as follows:
sudo -i
aa-complain cupsd
mkdir /usr/share/cups/model
mkdir /var/spool/lpd
apt-get install sane-utils
apt-get install psutils
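One note, in case aa-complain turns out not to be present:  it comes from the apparmor-utils package, so (still working as root from the sudo -i above) the fix would simply be:
apt-get install apparmor-utils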
It puzzled me that mdgrech’s link led to the LPR driver for the MFC-7420.  I suspected he knew exactly what he was doing; but just in case that was a mistaken link, I went to the Brother Linux driver download page and downloaded the Debian LPR driver for the MFC-7340 instead.  This gave me a file called brmfc7340lpr-2.0.2-1.i386.deb.  To install the LPR driver, Brother advised using CUPS if it was working on my system.  I wasn’t sure if it was, so I tried the suggested check:  “sudo /etc/init.d/cups status.”  That said “cupsd is running.”  Now what?  I followed the links to the Cupswrapper Driver Install page.  There, I had to follow certain “pre-required procedures.”  These appeared to be more or less the steps that mdgrech had already had me take (above), so apparently he had used the CUPS approach too.  Encouraged, I continued along this CUPS route.  There was some disagreement on the next step:  Brother said that I should turn on the printer and connect it to the computer now, while mdgrech seemed to say I should install the driver first.  I wound up not connecting the printer until later.  Meanwhile, it seemed that I would need to install the cupswrapper driver as well as the LPR driver, so I went back to the download page and did that.  This gave me a download called cupswrapperMFC7340-2.0.2-1.i386.deb.  The next steps were to navigate to the folder where I had downloaded brmfc7340lpr-2.0.2-1.i386.deb and then type these commands:
dpkg -i --force-all brmfc7340lpr-2.0.2-1.i386.deb
dpkg -i --force-all cupswrapperMFC7340-2.0.2-1.i386.deb
dpkg -l | grep Brother
The next step was to go to http://localhost:631/printers.  It showed the printer.  So now I did turn on the printer and plug in the USB cable.  (Note that there are slightly different instructions if your connection is ethernet.)  Ubuntu saw the printer, but gave me a “Missing printer driver” note in the upper right corner of the screen, and then said “Searching for available drivers.”  Eventually it gave me a New Printer dialog.  
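As a cross-check from the terminal (my own addition, not part of either Brother’s or mdgrech’s instructions), CUPS can list the devices it detects; with the printer powered on and plugged in, the Brother should appear with a usb:// address:
sudo lpinfo -v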

Note:  mdgrech had advised, instead, to go to http://localhost:631/admin, select Add printer, choose Brother MFC-7340, choose “Another Make/Manufacturer,” select the MFC-7340, and click Add Printer.  Since I had gone to http://localhost:631/printers as Brother advised, I was now at the New Printer dialog, so I proceeded from there.  I selected “Select printer from database” (with Brother highlighted) > Forward > MFC7340 for CUPS > Brother MFC7340 for CUPS [en] (recommended) > Forward.  I went with the defaults in the “Describe Printer” dialog > Apply.  I printed a test page.  It worked!
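A test page can also be sent from the terminal.  The queue name below is a guess at what the cupswrapper driver registers; lpstat will show the actual name to substitute:
lpstat -p -d
lp -d MFC7340 /etc/hosts    # "MFC7340" is a guess; use the queue name that lpstat reports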

There was one other thing I needed to check.  On this Ubuntu machine, I was running Windows XP in a virtual machine in VMware, and had previously tried to install the MFC-7340 from there.  I still had it listed as a printer.  So I went into VMware at this point and tried printing from there.  The print job queued up, but it didn’t print.  I ran Brother’s Installation Diagnostics software, there in Windows XP, and it reported failure:  “Cannot communicate with the machine.”  I went into Start > Settings > Printers and Faxes, right-clicked on the Brother MFC-7340 Printer, and chose Properties > Ports tab.  I checked the box next to the USB003 port, which was the only port that specifically referred to the Brother MFC-7340.  I clicked OK and tried printing again.  Once again, it queued but did not print.  Then it occurred to me that, of course, I would have to go into VMware’s VM > Removable Devices.  There, sure enough, I saw “Brother Printer.”  I clicked Connect.  The queue dialog said, “Printing,” and then it did print.

The other thing I really wanted to be able to do with the Brother MFC-7340 was to scan.  This appeared to be an entirely different process, so I started another post for that.  Otherwise, with this step finished, I returned to the project of tweaking Ubuntu 10.04, as described in a separate post.

Compaq CQ60-420US Laptop Ethernet Connection Problem

I was having problems going online with my Compaq CQ60-420US laptop.  The wireless connection worked, but for some reason the wired connection didn’t, even when I connected it using a cable that was working fine for another computer.  This problem was occurring in both Windows Vista and Ubuntu (both 9.10 and 10.04) on this dual-boot machine.  I had spent a lot of time on the phone with HP tech support, and had even shipped the computer back to Texas for a motherboard replacement.  Maybe they didn’t replace the motherboard, or maybe they replaced it with another defective one; in any event, it still wasn’t working.  For example, if I tried going to a website in Internet Explorer 8, it would say, “Internet Explorer cannot display the webpage,” and if I tried in Google Chrome, it would say, “This webpage is not available.”
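Since the problem appeared under Ubuntu as well, one hardware-level check worth running from the Linux side (just a sketch; the interface name and driver module are guesses for this Realtek chip) would be to see whether the kernel reports a link at all:
sudo apt-get install ethtool
sudo ethtool eth0                  # "eth0" is a guess; "ifconfig -a" lists the real interface names
dmesg | grep -i -e eth -e r8169    # r8169 is the usual module for Realtek PCIe NICs, but not guaranteed here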

In Vista (classic view), I went to Start > Settings > Network Connections.  With wireless disconnected, I right-clicked on Local Area Connection (Realtek PCIe FE Family Controller) > Diagnose.  Windows Network Diagnostics said,

Windows did not find any problems with this computer’s network connection.
I tried the option that said, “Reset the network adapter ‘Local Area Connection.’”  After a minute, it said, “The problem has been resolved.”  Yet I was still getting the “cannot display” error in Internet Explorer.  Clicking on the “Diagnose Connection Problems” button in Internet Explorer produced the Windows Network Diagnostics “Identifying the problem” message, followed by “Cannot communicate with” the webpage I had been trying to reach. It again offered me the option of resetting the network adapter, but I declined.

Microsoft offered a “Fix it for me” option, but of course I was not able to connect to that webpage on the laptop.  When I looked into that option on another computer, it said that it was intended to fix problems with Internet Explorer itself (e.g., freezes or crashes).  This was not just an Internet Explorer problem.  Turning, then, to the “Let me fix it myself” option:  I had already gone through methods 1 (try viewing another webpage), 2 (Network Diagnostics tool), and 3 (reset modem or router), and I did not bother trying methods 4 (delete browsing history) and 5 (use Internet Explorer’s no add-ons mode).  Advanced Troubleshooting Method 1 called for temporarily disabling the Internet security suite or firewall.  I had tried that, but not recently, so I tried it again.  In Control Panel, I went to Windows Security Center and turned off the firewall.  That did not help, so I turned off Windows Defender.  Still no solution.  In the system tray (at the bottom right corner of the screen), I moused over the various icons until I found the one that said, “Computer status – Protected.”  I right-clicked on that, and that opened Microsoft Security Essentials.  I went to its Settings tab and turned off Real-time protection > Save Changes.  Still no joy in either Internet Explorer or Chrome.  Just to be sure, I killed and restarted Internet Explorer and Chrome and tried again, but still no connection.  In Internet Explorer, I went into Tools > Internet Options > Security, unchecked Enable Protected Mode, restarted Internet Explorer, and tried again; still no.  This didn’t seem to be the solution, so I went on to Advanced Troubleshooting Method 2:  check whether Windows assigned you an automatic IP address.  To do this, Microsoft told me to go to Internet Explorer > Diagnose Connection Problems and click on IP Address, but I didn’t see an option like that.  Instead, I clicked on the Reset Network Adapter option again.  This time, the Windows Network Diagnostics dialog said something a little different:
Windows tried a repair but a problem still exists.
Cannot communicate with www.hotmail.com (64.4.20.174).
As advised on a couple of other websites, in Start > Run > CMD, I typed these commands:
ipconfig /release
ipconfig /renew
I wasn’t sure how to read the output, so I went back to the Microsoft advice.  I now saw that Advanced Method 2 didn’t apply, since it was just trying to see whether I had a problem related to my Internet Service Provider (ISP).  I didn’t, since the computer on which I was doing this typing was connected to the same line, and it was working just fine.  So:  on to Advanced Method 3:  test Internet Explorer by using a safe mode startup option that enables networking.  To do this, I went to Start > Search for files or folders > msconfig.exe.  The one I wanted was under C:\ProgramData\Microsoft\Windows\Start Menu.  I double-clicked on it, went to the Boot tab, and clicked Safe boot and Network > Apply.  Then Start > Shut down > Restart.  In Safe Mode, I opened Internet Explorer.  It still could not connect.  The Microsoft advice told me, in that case, to skip to Advanced Method 6:  Start > Run > devmgmt.msc > Network Adapters.  There were no exclamation marks.  Microsoft said, in that case, I should go to Advanced Method 7:  run System Restore.  But this, they said, would make sense only if the problem was a recent one, and in this case it wasn’t.  Advanced Methods 8 and 9 were oriented toward fixing problems with Internet Explorer, but, again, it was a Chrome problem too.  I tried to go back to msconfig.exe via Search, but it didn’t find it, so I used Start > Run > CMD and then typed msconfig.exe at the prompt.  I went back to the msconfig.exe Boot tab and unchecked Safe boot.  I restarted back into Vista and verified that the problem was still there.  I re-enabled my various security programs and settings, and pondered the situation.

It seemed I had a recurrent hardware problem that HP was not going to fix for me.  I noticed that Amazon.com had a number of devices that would plug into a USB port to give me an ethernet connection.  I got one that, according to its sole review, was good only for 32-bit operating systems, which is what I had on the laptop.  It cost me less than $7.  If the problem was just with the motherboard, this would hopefully get around it.  Unfortunately, that one failed.  I never did resolve the problem in that location; instead, I moved to a different apartment, and for some reason that solved it.

Ubuntu 10.04 Adjustments: Software Source List

I was making some adjustments to my Ubuntu 10.04 (Lucid Lynx) installation.  I got kind of bogged down in the Software Sources.  This post describes that part of the enterprise.

When I went into Software Sources > Other Software, I now had a whole boatload of items, most of which were marked "disabled on upgrade to lucid."  To handle this, I tried Ubuntu Tweak.  It was an easy download and a double-click to install.  I went into Applications > System Tools to run it.  Then I went down the list of applications and other items on the left side of its window and selected and adjusted them to taste.  I found that I had to exit Ubuntu Tweak and start it again in order to get the prompt that would re-enable my disabled Software Sources.
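For what it's worth, Ubuntu Tweak can also be installed from its PPA -- the same repository and key that show up in the sources.list further down -- with something along these lines:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 0624A220
sudo sh -c 'echo "deb http://ppa.launchpad.net/tualatrix/ubuntu lucid main" >> /etc/apt/sources.list'
sudo apt-get update
sudo apt-get install ubuntu-tweak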

In Software Sources > Other Software, I noticed several things that didn't look quite right.  First, the "Cdrom with Ubuntu 9.10 'Karmic Koala'" was unchecked.  This made sense; I wasn't running 9.10 anymore.  I selected that item and clicked Remove.  I thought I'd put the new CD in its place.  I had installed 10.04 via download, not via CD, but in the process of fixing the installation I had downloaded and burned the alternate installation CD.  So now I put that into the CD drive and clicked on Add CD-ROM.  It gave me this:
Upgrade volume detected
A distribution volume with software packages has been detected.
Would you like to try to upgrade from it automatically?
It looked like the answer to that should be no, so I clicked Cancel.  Then I got an error:
Error scanning the CD
E:Unable to locate any package files, perhaps this is not a Debian disc or the wrong architecture?
Apparently the alternate CD was more different from the ordinary Ubuntu CD than I had realized.  I was curious, so I downloaded and burned the official 10.04 CD.  While that was underway, I went to the next problem item on the Software Sources list:  “Unsupported updates.”  The common advice, repeated on a number of websites that seemed to have copied it from one another, was that this would give me programs that I “probably don’t need or even want.”  Au contraire, I was thinking that a person would enable this kind of source precisely to get access to solutions to new problems as soon as they were available.  But for the sake of stability, at least until I actually needed them, I decided to go with the flow and leave that source unchecked.
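For reference, as far as I can tell that checkbox corresponds to the lucid-backports repository, so enabling it by hand would just mean adding a line like this to sources.list:
deb http://us.archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse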

Next, back to the items marked "disabled on upgrade to lucid."  According to TualatriX, Ubuntu Tweak > Applications > Source Center would enable only those that supported Lucid.  I’m not sure what happened with this particular category of problem; I played around with a couple of things and these entries went away.  Next, some of the titles were not right.  In particular, I saw this:
Medibuntu – Ubuntu 9.10 “karmic koala” (http://packages.medibuntu.org/ lucid free non-free)
I didn’t seem to find advice on exactly this problem, so I just selected that item in Software Sources, clicked Edit, and changed the Comment to “Medibuntu – Ubuntu 10.04 ‘lucid lynx.’”  These tinkerings led me to the revelations that “Mixing repositories can break your system,” and that I should have made a backup of my list of sources before I started fooling around.  The backup, it seemed, could be made with this command:
sudo cp -i /etc/apt/sources.list /etc/apt/sources.list_backup
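And, should the fooling around go badly, restoring that backup and refreshing the package lists would be a matter of:
sudo cp /etc/apt/sources.list_backup /etc/apt/sources.list
sudo apt-get update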
This made me think that I could work up a good source list once and then not have to start from scratch like this in the future.  I had already modified the sources list in Ubuntu Tweak, but now I thought I might want to start over.  I got out of Software Sources and typed “sudo gedit /etc/apt/sources.list” in Terminal.  I copied the sources.list file suggested in the Ubuntu Guide wiki and used it to replace the contents of sources.list.  The two were pretty much the same anyway, but I thought this might clean things up a bit.  I saved and exited sources.list and went back into Software Sources.  Oddly, it still had some of the items that I had added in Ubuntu Tweak.  I went back into sources.list and, no, those other items (e.g., Skype) had definitely not been added to sources.list when I wasn’t looking.  Evidently Ubuntu Tweak was keeping its added sources somewhere else (presumably in files under /etc/apt/sources.list.d, which APT reads in addition to sources.list) and using them to supplement whatever was in sources.list itself.  In Software Sources, I cleaned up the list (added comments, deleted duplicates) and then clicked Close.  It gave me the option to Reload, which I did.  Now I got an error message:
Could not download all repository indexes
The repository may no longer be available or could not be contacted because of network problems.
This applied to the CD-ROM line I had copied over from the wiki.  I closed that dialog.  The CD ISO was still downloading, so I couldn’t do anything more about that yet.  In the meantime, I went back into Ubuntu Tweak > Source Center > All Categories > Unlock.  Sources that I had selected previously were still checked, so Ubuntu Tweak did seem to be saving its own list somewhere.  I refreshed, got the CD-ROM error message again, closed that, installed the new applications implied by my selection of sources, and then exited.

Trying another approach, I went to the Ubuntu Sources List Generator and selected all of the repositories I would want.  I excluded source code repositories, since I did not plan to be working with source code.  I clicked “Generate List.”  It gave me a replacement sources.list file, plus a list of commands to run to get the keys necessary to make the sources work; but it also looked like those commands were listed in the comments in the sources.list file too.  I looked again at Ubuntu Tweak.  It had a much longer list of sources, but I thought I could probably do without some of them.  In particular, I didn’t know if I needed a source just for some individual programs.  One, Déjà Dup, was supposed to be a simple backup utility.  It was listed in Synaptic and would presumably be updated through there.  I hadn’t used it before, but I thought I would give it a try.  Likewise for Firefox, Opera, Shutter, deluge-torrent, and others.  The list generated by the Ubuntu Sources List Generator did include Google, Medibuntu, and other major sources.  So I unchecked all of the sources in Ubuntu Tweak, including the Ubuntu Tweak source itself.

Then I took another look at sources.list, in the form I had copied from the wiki.  It was much more verbose than the one generated by the Ubuntu Sources List Generator, and now that I understood more about it, I didn’t want all those extra comments.  In the end, I decided that all I needed from the wiki’s version was its first line, referring to the Ubuntu 10.04 CD-ROM.  So I replaced the existing sources.list with this one, provided by the List Generator:

#############################################################
################### OFFICIAL UBUNTU REPOS ###################
#############################################################

deb cdrom:[Ubuntu 10.04 LTS _Lucid Lynx_ - Release i386 (20100429)]/ lucid main restricted

###### Ubuntu Main Repos
deb http://us.archive.ubuntu.com/ubuntu/ lucid main restricted universe multiverse 

###### Ubuntu Update Repos
deb http://us.archive.ubuntu.com/ubuntu/ lucid-security main restricted universe multiverse 
deb http://us.archive.ubuntu.com/ubuntu/ lucid-updates main restricted universe multiverse 

###### Ubuntu Partner Repo
deb http://archive.canonical.com/ubuntu lucid partner
deb-src http://archive.canonical.com/ubuntu lucid partner

##############################################################
##################### UNOFFICIAL  REPOS ######################
##############################################################

###### 3rd Party Binary Repos

#### GetDeb - http://www.getdeb.net
## Run this command: wget -q -O- http://archive.getdeb.net/getdeb-archive.key | sudo apt-key add -
deb http://archive.getdeb.net/ubuntu lucid-getdeb apps

#### Google Linux Software Repositories - http://www.google.com/linuxrepositories/index.html
## Run this command: wget -q https://dl-ssl.google.com/linux/linux_signing_key.pub -O- | sudo apt-key add -
deb http://dl.google.com/linux/deb/ stable non-free

#### HandBrake - http://handbrake.fr/
## Run this command: sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 62D38753
deb http://ppa.launchpad.net/handbrake-ubuntu/ppa/ubuntu lucid main 

#### Medibuntu - http://www.medibuntu.org/ 
## Run this command: sudo apt-get update && sudo apt-get install medibuntu-keyring && sudo apt-get update 
deb http://packages.medibuntu.org/ lucid free non-free 

#### Mendeley Desktop - http://www.mendeley.com/
## Run this command: no gpg keys supplied
deb http://www.mendeley.com/repositories/xUbuntu_10.04 /

#### muCommander - http://www.mucommander.com/
## Run this command: sudo wget -O - http://apt.mucommander.com/apt.key | sudo apt-key add - 
deb http://apt.mucommander.com stable main non-free contrib  

#### Ubuntu Tweak - http://ubuntu-tweak.com/
## Run this command: sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 0624A220
deb http://ppa.launchpad.net/tualatrix/ubuntu lucid main

#### Wine - https://launchpad.net/~ubuntu-wine/+archive/ppa/
## Run this command:  sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys F9CB8DB0
deb http://ppa.launchpad.net/ubuntu-wine/ppa/ubuntu lucid main

#### X Updates - https://launchpad.net/~ubuntu-x-swat/+archive/x-updates/
## Run this command: sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys AF1CDFA9
deb http://ppa.launchpad.net/ubuntu-x-swat/x-updates/ubuntu lucid main 

With that as my new sources.list file, I went back into Software Sources.  The entries that had been added previously were still there, along with the new ones listed in my new sources.list file.  I deleted the old ones and closed.  It asked if it should reload, and I said yes.  It said, "Could not download all repository indexes," because I had not yet entered the key commands shown in the sources.list comments (above).  So I entered them, one at a time.  Some just gave me OK; some gave me other messages and then OK; some seemed to have problems.

I went to System > Administration > Update Manager and updated programs.  When it was done, I clicked the Check button.  It again said "Could not download all repository indexes" and "Please use apt-cdrom to make this CD-ROM recognized by APT."  It also said, "Some index files failed to download, they have been ignored, or old ones used instead."  For the CD-ROM, I went into System > Administration > Software Sources > Other Software and clicked the Add CD-ROM button.  I inserted the CD.  It said, "A volume with software packages has been detected."  I didn't want to install anything from the CD-ROM now, so I canceled out of that.  When I clicked Close, I was again given an opportunity to Reload, which I took.  I clicked Check again in Update Manager.  That was apparently the only thing I had needed to fix; there were no errors now.

Back in Software Sources, I saw that I had two entries for the CD-ROM.  I looked again at sources.list.  The Add CD-ROM step had added a duplicate of the CD-ROM line, but it had put it at the very top of the file, not lower down under the "OFFICIAL UBUNTU REPOS" heading as shown above.  So I deleted the duplicate under that heading, saved sources.list, and took another look in Software Sources.  There was now just one CD-ROM entry.  I made a trivial change and tried to close; I took the Reload option; all was good.  I saved a copy of the revised sources.list file, for use on this or other computers.  It was time to return to the main project of updating Ubuntu 10.04.
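For future reference, the terminal equivalent of that CD-ROM fix would presumably have been to register the disc with apt-cdrom and then refresh the package lists:
sudo apt-cdrom add
sudo apt-get update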