Thursday, May 25, 2006

RUN and GetShortPathName

Stonefield Query has a function in its developer interface (the Configuration Utility) to generate an InnoSetup script and compile it into a setup executable. The idea is to make it as easy as possible for someone to deploy a custom Stonefield Query application without having to be an installer expert. Generating the script is easy because InnoSetup scripts are just text files. Compiling the script is also easy: use the RUN command to call the InnoSetup compiler, passing it the name of the script file to compile.

I get the location of the compiler from the Windows Registry, at HKEY_CLASSES_ROOT\InnoSetupScriptFile\Shell\Compile\Command, which on my system gives "D:\Program Files\Inno Setup 5\Compil32.exe" /cc "%1". So, it's a simple matter to read this value into a variable (for example, lcInnoCompiler) and then:

* Substitute the script file name for the "%1" placeholder, then run the compiler.
lcInnoCompiler = strtran(lcInnoCompiler, '%1', lcScriptFile)
run /n1 &lcInnoCompiler
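
Reading the value out of the Registry only takes a few lines itself. Here's a sketch that calls the Registry API functions directly (the #DEFINE values are the standard Windows constants and the 260-byte buffer is an arbitrary size; the Registry class in the FoxPro Foundation Classes would work just as well):

* Read the default value of the Compile\Command key into lcInnoCompiler.
#define HKEY_CLASSES_ROOT  -2147483648
#define KEY_QUERY_VALUE    1
declare integer RegOpenKeyEx in Win32API ;
    integer nHKey, string cSubKey, integer nOptions, integer nSAM, integer @nHandle
declare integer RegQueryValueEx in Win32API ;
    integer nHKey, string cValueName, integer nReserved, integer @nType, ;
    string @cData, integer @nSize
declare integer RegCloseKey in Win32API integer nHKey
local lnHandle, lnType, lcData, lnSize, lcInnoCompiler
store 0 to lnHandle, lnType
lcInnoCompiler = ''
if RegOpenKeyEx(HKEY_CLASSES_ROOT, 'InnoSetupScriptFile\Shell\Compile\Command', ;
        0, KEY_QUERY_VALUE, @lnHandle) = 0
    lcData = space(260)
    lnSize = len(lcData)
    if RegQueryValueEx(lnHandle, '', 0, @lnType, @lcData, @lnSize) = 0
        lcInnoCompiler = left(lcData, lnSize - 1)  && lnSize includes the null terminator
    endif
    RegCloseKey(lnHandle)
endif

With lcInnoCompiler filled in, the STRTRAN() and RUN lines above do the rest.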

This works great on my system and on lots of customers' systems. However, one of our sales guys (Jeff Zinnert) reported that he got a "RUN command failed; file does not exist" error when he tried it, and so did a customer. We checked that the Inno Setup compiler was installed correctly, in the place the Registry said it was, and that the script file existed, but to no avail.

While pondering this, I came across a message on the Universal Thread that was completely unrelated but mentioned an issue with "short" (i.e., the old DOS 8.3) paths. That reminded me of a similar issue I'd run into several years ago but had forgotten about. A function I'd written back then calls the Windows API GetShortPathName function to convert a "long" path into a short one:

lparameters tcPath
local lcPath, ;
    lnLength, ;
    lcBuffer, ;
    lnResult
declare integer GetShortPathName in Win32API ;
    string @lpszLongPath, string @lpszShortPath, integer cchBuffer
* Copy the parameter so the API doesn't write over the caller's variable,
* then allocate a MAX_PATH-sized buffer to receive the short path.
lcPath   = tcPath
lnLength = 260
lcBuffer = space(lnLength)
lnResult = GetShortPathName(@lcPath, @lcBuffer, lnLength)
* GetShortPathName returns the length of the short path, or 0 on failure.
return iif(lnResult = 0, '', left(lcBuffer, lnResult))

I used this function to convert the paths in the lcInnoCompiler variable to short paths, and Jeff and the customer no longer get the error.
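
In other words, the fix boils down to something like this (GetShortPath is a made-up name for wherever you save the function above, and the compiler path is just the one from my system):

* Convert each path to its short (8.3) form before building the command line.
lcCompiler     = GetShortPath('D:\Program Files\Inno Setup 5\Compil32.exe')
lcScript       = GetShortPath(lcScriptFile)
lcInnoCompiler = lcCompiler + ' /cc ' + lcScript
run /n1 &lcInnoCompiler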

Once again, the UT saves my butt even though the answer wasn't directly there.

Wednesday, May 17, 2006

Varchar, SET ANSI, and the UT

I've been working on updating Stonefield Query for GoldMine to use the version 3.0 Stonefield Query Developer's Edition as its engine. While doing some testing, I found that one of the queries was taking significantly longer in the new version than in the old version: 30 seconds rather than 3 seconds. My first thought was that it was a VFP 9 issue, since the old version uses VFP 8. I remembered from messages on the Universal Thread regarding query performance in VFP 9 that Rushmore, the key to VFP's magical speed in performing queries, is disabled if the code page of a cursor doesn't match the current code page (i.e., CPDBF() <> CPCURRENT()). However, that wasn't the case here. In fact, if I ran the old version under VFP 9, it gave the same performance as under VFP 8, so that wasn't the problem.
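
By the way, that particular situation is easy to check for. Something like this (the table name is made up) shows whether the code pages match:

* Rushmore is disabled for a table whose code page differs from the current one.
use MyTable
? cpdbf(), cpcurrent()  && different values here mean no Rushmore optimization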

To dig into this further, I searched the Universal Thread for messages regarding performance, and saw one where Sergey Berezniker (the king of the Universal Thread) mentioned that expressions using ALLTRIM() aren't optimized because Rushmore requires fixed-length keys, but that SET ANSI ON would take care of that. Again, that wasn't the case here; the query didn't use ALLTRIM(). However, it got me thinking: I wondered if the fields involved in the JOIN clause were Varchar fields. Sure enough, they were. Why the difference between the old and new versions? I added the following to the new version so Varchar and Varbinary fields are properly supported:

* Apply to the whole session (0) so cursors keep Varbinary and Varchar
* fields as those types instead of mapping them to Character.
cursorsetprop('MapBinary', .T., 0)
cursorsetprop('MapVarchar', .T., 0)

This means that in the new version, the fields retrieved from the database were Varchar rather than Character as they were in the old version. Because Varchar values can vary in length (even though the fields involved in this join were defined with the same length), Rushmore won't optimize the join unless ANSI is set on. So, a quick SET ANSI ON added to the code, and the query is now as fast in the new version as it was in the old.
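
If you haven't played with SET ANSI, a quick test shows why comparisons involving Varchar values aren't fixed-length by default (the cursor and values are made up):

create cursor crsTest (cValue V(10))
insert into crsTest values ('Smith')
set ansi off
select * from crsTest where cValue = 'Sm' into cursor crsOff
? _tally  && 1: with ANSI OFF, the comparison stops at the end of the shorter value
set ansi on
select * from crsTest where cValue = 'Sm' into cursor crsOn
? _tally  && 0: with ANSI ON, the shorter value is padded, so the values must match exactly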

So, two lessons: sometimes a little change in one part of an application can cause a big change in another part, and search the Universal Thread (or other online sources) before spending hours trying to track down a problem. This one only took me about 15 minutes to fix. Thanks, Sergey!

Wednesday, May 10, 2006

FoxUnit is Cool!

FoxUnit has been available for at least a couple of years, and I've always meant to work with it, but it was one of those things I just didn't get around to. However, after attending Nancy Folsom's session on refactoring at the recent GLGDW, in which she discussed the importance of testing both before and after refactoring to ensure the functionality remains the same, I figured it was time to get to it.

What is FoxUnit? As defined on the FoxUnit Web site, 'FoxUnit is an open-source unit testing framework for Microsoft Visual FoxPro®. It is based on unit testing frameworks as described in Kent Beck's book "Test Driven Development by Example" but takes a more pragmatic approach to unit testing for Visual FoxPro than a more purist xUnit implementation would.'

The idea is to create tests for the various pieces of your application. You then run some or all of the tests prior to releasing a new build (or performing refactoring or checking in the latest update or whenever you want) to ensure everything works correctly. Note that unit testing isn't a replacement for system or acceptance testing, but is one more tool in your professional developer's toolkit.

What got my current interest in FoxUnit started is the need to refactor some code. One method in particular is huge (several hundred lines long) and has been getting more convoluted over time. Before I add some new functionality, I want to refactor it so it's easier to comprehend, easier to maintain, and easier to test. However, I'm afraid that during refactoring I'll drop some functionality or introduce bugs. Hence the need for a suite of tests I can run before and after each refactoring task. (Nancy stressed doing refactoring in small steps rather than as one huge job. In addition to being easier to do, it's easier to test and less likely to break functionality.)

So, I downloaded and installed FoxUnit. My introduction was a little rough -- because I didn't follow instructions and SET PATH to the folder when I installed it, a few things didn't work right. In my opinion, an app should be able to find its own pieces without having to use a crutch like SET PATH, so I made a few minor tweaks (like having the program figure out what directory it's running in and using that path in a few places rather than assuming the files can be found without one). However, once I got past that, it was pretty easy to work with.

Tests are stored in PRGs which are managed by the FoxUnit UI. Each PRG contains a class subclassed from FXUTestCase, the FoxUnit base test class. Each test is a method of the subclass (although there can be non-test methods too, of course). Although you could write all of the tests for your entire application in a single PRG, that wouldn't be very granular. I prefer one PRG for a single "thing" (module, class, or whatever) I want to test, and then one or more test methods within the PRG to test the functionality of that "thing".

The idea is to write small tests that each test one aspect of one method. For example, if a method accepts a couple of parameters and does different things based on the parameters passed, there should be tests for each parameter being passed or not, different types of bad values for the parameters, different types of good values that result in different behavior, etc. As a result, you'll have a lot of tests for even the simplest application, but each test is small, easy to understand, easy to maintain, and does just one thing. And since the FoxUnit UI manages all of the tests for you, and gives you options to run a single test, all tests in one PRG/class, or all tests, the management of these tests isn't too bad.

Tests are easy to write. Since it's just code in a VFP class, you can add custom properties if necessary. For example, rather than instantiating the object to be tested in every test method, you could create a property of the test class such as oObjectToTest, and in the Setup method (called just before any test is run), instantiate the object into that property. Here's an example, along with a test to ensure the object actually instantiated properly:

define class MyTest as FxuTestCase of FxuTestCase.prg
    oObjectToTest = .NULL.

    function Setup
        This.oObjectToTest = newobject('MyClass', 'MyLibrary.vcx')
    endfunc

    function TestObjectWasInstantiated
        This.AssertNotNull('The object was not instantiated.', This.oObjectToTest)
    endfunc
enddefine

Note the use of one of the test methods, AssertNotNull. There are several similar methods available, including AssertEquals and AssertTrue. These methods test that some condition is the way it's expected to be, and if it isn't, the test fails and displays the failure message specified in the assert call.
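
Here's a sketch of the kind of small, single-purpose tests I described above. The class being tested, its FormatName method, and the expected results are all made up, and I'm following the message-first parameter order shown in the AssertNotNull example; check the FoxUnit source for the exact signatures.

define class FormatNameTest as FxuTestCase of FxuTestCase.prg
    oObjectToTest = .NULL.

    function Setup
        This.oObjectToTest = newobject('MyClass', 'MyLibrary.vcx')
    endfunc

    * Each test checks exactly one behavior of the hypothetical FormatName method.
    function TestFormatNameCombinesNames
        This.AssertEquals('FormatName did not combine the names.', ;
            'Smith, John', This.oObjectToTest.FormatName('John', 'Smith'))
    endfunc

    function TestFormatNameHandlesMissingFirstName
        This.AssertEquals('FormatName did not handle a missing first name.', ;
            'Smith', This.oObjectToTest.FormatName('', 'Smith'))
    endfunc

    function TestFormatNameFlagsBadParameters
        This.AssertTrue('FormatName did not flag bad parameters.', ;
            empty(This.oObjectToTest.FormatName(1, 2)))
    endfunc
enddefine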

In addition to instantiating the object, Setup can be used to perform other tasks needed for every test, such as setting up the environment or the object under test. A similar method, TearDown, can be used to perform common tasks after a test has been run, such as restoring the environment.
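
For example, a Setup and TearDown pair might save and restore a setting the tests depend on. The setting here is just for illustration, and cOldDeleted would be another custom property of the test class, like oObjectToTest:

function Setup
    This.cOldDeleted = set('DELETED')
    set deleted on
    This.oObjectToTest = newobject('MyClass', 'MyLibrary.vcx')
endfunc

function TearDown
    local lcDeleted
    lcDeleted = This.cOldDeleted
    set deleted &lcDeleted
    This.oObjectToTest = .NULL.
endfunc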

When you run a test, it either passes, in which case it's shown in green in the FoxUnit UI, or fails, in which case it's shown in red. Tests that haven't been run are shown in gray. Thus it's easy to see the results of test runs at a glance.

FoxUnit is one of those things that seems like a good idea until you try it, and once you do, you realize it's a great idea. I'm kicking myself for not trying it out a couple of years ago when I first saw Drew Speedie demonstrate it at DevTeach in Montreal. But now that I've worked with it for a while, I'm a firm believer. So, if you haven't started using FoxUnit, do yourself a favor: set aside an hour, download it, and create some simple tests. You'll become a believer too.

Friday, May 05, 2006

Debugging Tips

As I mentioned in my post about GLGDW, I missed the Best Practices for Debugging session. Here are a couple of tips I was going to mention. Sorry if someone else mentioned them; I was too busy setting up Rick's system to do my presentation to listen.

1. Write debugging-friendly code.

I used to write code like this:

llReturn = SomeFunction() and SomeOtherFunction() and YetAnotherFunction()

I no longer do that, for two reasons. First, it's harder to understand than:

llReturn = SomeFunction()
llReturn = llReturn and SomeOtherFunction()
llReturn = llReturn and YetAnotherFunction()

Second, and more importantly, it's harder to debug. If you step through the code, execution goes into SomeFunction. If you decide you don't need to trace inside that function and want to jump to the next one, you'd think Step Out would do it. Unfortunately, that steps out all the way back to the line following the llReturn statement. So, the only ways to trace YetAnotherFunction are to step all the way through SomeFunction and SomeOtherFunction, or to specifically add a SET STEP ON (or set a breakpoint) in YetAnotherFunction.

Note that I don't typically write code like:

llReturn = SomeFunction()
if llReturn
llReturn = SomeOtherFunction()
if llReturn
llReturn = YetAnotherFunction()
endif
endif

That seems (to me) harder to read than the first example.

Here's another example of debugging-unfriendly code I used to write:

SomeVariable = left(SomeFunction(), 5) + substr(SomeOtherFunction(), 2, 1)

The reason that's not friendly is that you can't see what SomeFunction and SomeOtherFunction return unless you actually trace their code. Instead, I now use code like:

SomeVariable1 = SomeFunction()
SomeVariable2 = SomeOtherFunction()
SomeVariable = left(SomeVariable1, 5) + substr(SomeVariable2, 2, 1)

That way, I can trace this code and see what the functions return without having to trace the functions. If something doesn't look right, I can always Set Next Statement back to one of the statements calling a function and Step Into the function to see what went wrong.

2. Instrument Your Application

Rod Paddock, Lisa Slater Nicholls, I, and others have written articles on instrumenting your applications. The idea is to sprinkle calls to a logging object throughout your application. If logging is turned off, nothing happens. If logging is turned on (for example, oLogger.lPerformLogging is .T.), a message is written to some location (a text file, a table, the Windows Event log, etc.) indicating what the application is doing.
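
A bare-bones version of such a logger might look like this (the class and method names, and the log format, are made up for illustration):

define class Logger as Custom
    lPerformLogging = .F.
    cLogFile = 'diagnostic.log'

    * Do nothing when logging is off; append a time-stamped line when it's on.
    function LogMessage(tcMessage)
        if This.lPerformLogging
            strtofile(transform(datetime()) + ': ' + tcMessage + chr(13) + chr(10), ;
                This.cLogFile, .T.)  && .T. means append rather than overwrite
        endif
    endfunc
enddefine

* Sprinkled throughout the application:
* oLogger.LogMessage('About to run the customer aging report')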

I've found this extremely valuable in tracking down problems. While error logs are great, they only give you a snapshot of how things were at the time the error occurred. Sometimes, you need to know how you got there. Also, sometimes the problem the user is experiencing doesn't cause an error but rather incorrect behavior. By perusing a diagnostic log, you can see all of the steps (the ones you've instrumented, anyway) that led to the behavior. Yes, you can use SET COVERAGE in your application, but that generates a ton of information, likely a lot more than you need unless you have a really ugly problem whose cause you have no clue about.

Lisa's article is the most recent one I've read, so it's a good starting place. Rod's article is in the September 1997 issue of FoxTalk and mine is in the October 2003 issue of FoxTalk (both require a subscription).