Wednesday, June 14, 2006

Updated SFTreeView

I wrote a two-part article in the July and August issues of FoxTalk called "The Mother of all TreeViews" that presented a class library providing all the features you (or at least I) would ever need in a TreeView, including automatically dealing with the twips coordinate system used by the control, handling drag and drop, only loading parent nodes at startup, saving and restoring expanded and selected nodes, and so forth.

Andrew Nickless emailed me today about a bug: navigating the TreeView using the up and down arrows didn't cause the sample form that accompanied the article to refresh and show the properties for the selected node. Fortunately, it was easy to fix:

1. In TreeNodeClick, change the IF statement as shown:

*** if isnull(.oTree.SelectedItem) or toNode.Key <> .oTree.SelectedItem.Key
if isnull(.oTree.SelectedItem) or not toNode.Key == .cCurrentNodeKey

2. Add a new cCurrentNodeKey property

3. Add this line to SelectNode right after the assignment to cCurrentNodeID:

.cCurrentNodeKey  = loNode.Key
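Taken together, the changed code looks something like this (a sketch only; the surrounding method code from the article is elided, and I'm showing just the affected lines):

* TreeNodeClick: compare against the tracked key instead of the
* (possibly stale) SelectedItem.
if isnull(.oTree.SelectedItem) or not toNode.Key == .cCurrentNodeKey
   * ... refresh the display for the newly selected node ...
endif

* SelectNode: right after the existing assignment to .cCurrentNodeID:
.cCurrentNodeKey = loNode.Key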

However, while I was looking into this, I decided to delve into another weird behavior that had bugged me for a while: sometimes clicking on the + to expand a parent node showed the placeholder "Loading..." node rather than the actual children. The reason is that the Expand method of the TreeView didn't fire, and that method was responsible for removing the placeholder node and adding the child nodes. The weird thing is that it wasn't consistent; I could only make it happen about 25% of the time, if that. And given that I haven't seen that happen in other applications using a TreeView, it was clearly something I'd done in my class.

Long story short, it turned out (by trial and error, sadly) that code I had in the MouseMove method seemed to be the culprit. I say "seemed" because after removing it, I couldn't reproduce the behavior, but given that it didn't always happen, and of course it never happened when tracing the code, it's darned hard to confirm that it's gone for good. MouseMove called TreeMouseMove (thanks to Steve Black for drilling "events call methods" into my brain), which didn't do anything; I put it there in case I wanted to handle that in a subclass. Turns out I never have, so removing that behavior was no big deal.

Monday, June 05, 2006

DevCon Discount

Advisor Media is offering a $100 discount for Microsoft Visual FoxPro DevCon. Simply write "Doug discount" in the comment field of the registration form to qualify.

This will be my 16th DevCon as an attendee (I only missed the first one in Toledo in 1989), 13th as an exhibitor (1993 was the first), and my 10th as a speaker (1997 was the first). As far as I know, I'll be the only person to have attended 16 consecutive DevCons. Anyone know of someone who's attended them all?

For posterity, here are the ones I've attended:

1990 Toledo
1991 Toledo
1992 Phoenix
1993 Orlando
1995 San Diego
1996 Scottsdale
1997 San Diego
1998 Orlando
1999 Palm Springs
2000 Miami
2001 San Diego
2002 Ft. Lauderdale
2003 Palm Springs
2004 Las Vegas
2005 Las Vegas
2006 Phoenix

Thursday, May 25, 2006

RUN and GetShortPathName

Stonefield Query has a function in its developer interface (the Configuration Utility) to generate an InnoSetup script and compile it into a setup executable. The idea is to make it as easy as possible for someone to deploy a custom Stonefield Query application without having to be an installer expert. Generating the script is easy because InnoSetup scripts are just text files. Compiling the script is also easy: use the RUN command to call the InnoSetup compiler, passing it the name of the script file to compile.

I get the location of the compiler from the Windows Registry, at HKEY_CLASSES_ROOT\InnoSetupScriptFile\Shell\Compile\Command, which on my system gives "D:\Program Files\Inno Setup 5\Compil32.exe" /cc "%1". So, it's a simple matter to read this value into a variable (for example, lcInnoCompiler) and then:

lcInnoCompiler = strtran(lcInnoCompiler, '%1', lcScriptFile)
run /n1 &lcInnoCompiler

This works great on my system and lots of customers' systems. However, one of our sales guys (Jeff Zinnert) reported that he got a "RUN command failed; file does not exist" error when he tried it and so did a customer. We checked that the InnoCompiler was installed correctly, in the place the Registry said it was, and that the script file existed, but to no avail.

While pondering this, I came across a message on the Universal Thread that was completely unrelated but mentioned an issue with "short" (i.e., the old DOS 8.3) paths. That reminded me of a similar issue I'd run into several years ago but had forgotten about. A function I'd written years ago calls the Windows API GetShortPathName function to convert a "long" path into a short one:

lparameters tcPath
local lcPath, ;
   lnLength, ;
   lcBuffer, ;
   lnResult
declare integer GetShortPathName in Win32API ;
   string @lpszLongPath, string @lpszShortPath, integer cchBuffer
lcPath   = tcPath
lnLength = 260   && MAX_PATH characters
lcBuffer = space(lnLength)
lnResult = GetShortPathName(@lcPath, @lcBuffer, lnLength)
return iif(lnResult = 0, '', left(lcBuffer, lnResult))

I used this function to convert the paths in the lcInnoCompiler variable to short paths and Jeff and the customer no longer get this error.
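In code, the fix looks something like this (a sketch; I'm calling the function above GetShortPath() for illustration, and the parsing assumes the quoted Registry value shown earlier):

* Convert both paths in the command line to short (8.3) form so
* RUN doesn't choke on long filenames with spaces.
lcCompiler     = strextract(lcInnoCompiler, '"', '"')
lcInnoCompiler = GetShortPath(lcCompiler) + ' /cc ' + ;
   GetShortPath(lcScriptFile)
run /n1 &lcInnoCompiler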

Once again, the UT saves my butt even though the answer wasn't directly there.

Wednesday, May 17, 2006

Varchar, SET ANSI, and the UT

I've been working on updating Stonefield Query for GoldMine to use the version 3.0 Stonefield Query Developer's Edition as its engine. While doing some testing, I found that one of the queries was taking significantly longer in the new version than in the old one: 30 seconds rather than 3 seconds. My first thought was that it was a VFP 9 issue, since the old version uses VFP 8. I remembered from messages on the Universal Thread regarding query performance in VFP 9 that Rushmore, the key to VFP's magical speed in performing queries, is disabled if the code page of a cursor doesn't match the current code page (i.e., CPDBF() <> CPCURRENT()). However, that wasn't the case here. In fact, if I ran the old version under VFP 9, it gave the same performance as under VFP 8, so that wasn't the problem.

To dig into this further, I searched the Universal Thread for messages regarding performance, and saw one in which Sergey Berezniker (the king of the Universal Thread) mentioned that expressions using ALLTRIM() aren't optimized because Rushmore requires fixed-length keys, but that SET ANSI ON would take care of that. Again, that wasn't the case here; the query didn't use ALLTRIM(). However, that got me thinking: I wondered if the fields involved in the JOIN clause were Varchar fields. Sure enough, they were. Why the difference between the old and new versions? I added the following to the new version so Varchar and Varbinary fields are properly supported:

cursorsetprop('MapBinary', .T., 0)
cursorsetprop('MapVarchar', .T., 0)

This means that in the new version, the fields retrieved from the database were Varchar rather than Character as they were in the old version. Because Varchar fields can have different lengths (in fact, they were the same length for the fields involved in this join), Rushmore won't optimize them unless ANSI is set on. So, a quick SET ANSI ON added to the code, and the query is now as fast in the new version as it was in the old.
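Here's a sketch of the fix in context (the query, table, and field names are invented for illustration; the real query comes from Stonefield Query metadata):

* Varchar join keys aren't Rushmore-optimizable with ANSI off,
* because the comparison is variable-length. SET ANSI ON makes
* the comparison fixed-length, so Rushmore kicks in again.
cursorsetprop('MapVarchar', .T., 0)
set ansi on
select c.company, o.orderdate ;
   from customers c join orders o on c.custid = o.custid ;
   into cursor OrderList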

So, two lessons: sometimes a little change in one part of an application can cause a big change in another part, and search the Universal Thread (or other online sources) before spending hours trying to track down a problem. This one only took me about 15 minutes to fix. Thanks, Sergey!

Wednesday, May 10, 2006

FoxUnit is Cool!

FoxUnit has been available for at least a couple of years, and I've always meant to work with it, but it was one of those things I just didn't get around to. However, after attending Nancy Folsom's session on refactoring at the recent GLGDW, in which she discussed the importance of testing both before and after refactoring to ensure the functionality remains the same, I figured it was time to get to it.

What is FoxUnit? As defined on the FoxUnit Web site, 'FoxUnit is an open-source unit testing framework for Microsoft Visual FoxPro®. It is based on unit testing frameworks as described in Kent Beck's book "Test Driven Development by Example" but takes a more pragmatic approach to unit testing for Visual FoxPro than a more purist xUnit implementation would.'

The idea is to create tests for the various pieces of your application. You then run some or all of the tests prior to releasing a new build (or performing refactoring or checking in the latest update or whenever you want) to ensure everything works correctly. Note that unit testing isn't a replacement for system or acceptance testing, but is one more tool in your professional developer's toolkit.

What got my current interest in FoxUnit started is the need to refactor some code. One method in particular is huge (several hundred lines long) and has been getting more convoluted over time. Before I add some new functionality, I want to refactor it so it's easier to comprehend, easier to maintain, and easier to test. However, I'm scared that during refactoring, I'll drop some functionality or introduce bugs. Hence the need for a suite of tests I can run before and after each refactoring task. (Nancy stressed doing refactoring in small steps rather than as one huge job. In addition to being easier to do, it's easier to test and less likely to break functionality.)

So, I downloaded and installed FoxUnit. My introduction was a little rough -- because I didn't follow instructions and SET PATH to the folder where I installed it, a few things didn't work right. In my opinion, an app should be able to find its own pieces without having to use a crutch like SET PATH, so I made a few minor tweaks (like having the program figure out what directory it's running in and using that path in a few places rather than assuming the files can be found without one). However, once I got past that, it was pretty easy to work with.

Tests are stored in PRGs, which are managed by the FoxUnit UI. Each PRG contains a class subclassed from FXUTestCase, the FoxUnit base test class. Each test is a method of the subclass (although there can be non-test methods too, of course). Although you could write all of the tests for your entire application in a single PRG, that wouldn't be very granular. I prefer one PRG for a single "thing" (module, class, or whatever) I want to test, and then one or more test methods within the PRG to test the functionality of that "thing".

The idea is to write small tests that each test one aspect of one method. For example, if a method accepts a couple of parameters and does different things based on the parameters passed, there should be tests for each parameter being passed or not, different types of bad values for the parameters, different types of good values that result in different behavior, etc. As a result, you'll have a lot of tests for even the simplest application, but each test is small, easy to understand, easy to maintain, and does just one thing. And since the FoxUnit UI manages all of the tests for you, and gives you options to run a single test, all tests in one PRG/class, or all tests, the management of these tests isn't too bad.

Tests are easy to write. Since a test is just code in a VFP class, you can add custom properties if necessary. For example, rather than instantiating the object to be tested in every test method, you could create a property of the test class such as oObjectToTest and, in the Setup method (called just before any test is run), instantiate the object into that property. Here's an example, along with a test to ensure the object actually instantiated properly:

define class MyTest as FxuTestCase of FxuTestCase.prg
   oObjectToTest = .NULL.

   function Setup
      This.oObjectToTest = newobject('MyClass', 'MyLibrary.vcx')
   endfunc

   function TestObjectWasInstantiated
      This.AssertNotNull('The object was not instantiated.', ;
         This.oObjectToTest)
   endfunc
enddefine

Note the use of one of the assertion methods, AssertNotNull. There are several similar methods available, including AssertEquals and AssertTrue. These methods test that some condition is the way it's expected to be; if it isn't, the test fails and displays the failure message specified in the assert call.

In addition to instantiating the object, Setup can be used to perform other tasks needed for every test, such as setting up the environment or the object under test. A similar method, TearDown, can be used to perform common tasks after a test has been run, such as restoring the environment.
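For example, a TearDown to go with the Setup shown earlier might look like this (a sketch; the cleanup tasks are invented for illustration):

function TearDown
   * Release the object under test and undo anything the
   * tests may have changed in the environment.
   This.oObjectToTest = .NULL.
   close databases all
   set deleted on   && example: restore a setting a test changed
endfunc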

When you run a test, it either passes, in which case it's shown in green in the FoxUnit UI, or fails, in which case it's shown in red. Tests that haven't been run are shown in gray. Thus it's easy to see the results of test runs at a glance.

FoxUnit is one of those things that seems like a good idea until you try it, and once you do, you realize it's a great idea. I'm kicking myself for not trying it out a couple of years ago when I first saw Drew Speedie demonstrate it at DevTeach in Montreal. But now that I've worked with it for a while, I'm a firm believer. So, if you haven't started using FoxUnit, do yourself a favor: take out an hour, download it, and create some simple tests. You'll become a believer too.

Friday, May 05, 2006

Debugging Tips

As I mentioned in my post about GLGDW, I missed the Best Practices for Debugging session. Here are a couple of tips I was going to mention. Sorry if someone else mentioned them; I was too busy setting up Rick's system to do my presentation to listen.

1. Write debugging-friendly code.

I used to write code like this:

llReturn = SomeFunction() and SomeOtherFunction() and YetAnotherFunction()

I don't do that anymore, for two reasons. First, it's harder to understand than:

llReturn = SomeFunction()
llReturn = llReturn and SomeOtherFunction()
llReturn = llReturn and YetAnotherFunction()

But more importantly, it's harder to debug. If you step through the code, execution goes into SomeFunction. If you decide you don't need to trace inside that function and want to jump to the next one, you'd think Step Out would do it. Unfortunately, that steps out all the way back to the line following the llReturn statement. So, the only ways to trace YetAnotherFunction are to step all the way through SomeFunction and SomeOtherFunction, or to add a SET STEP ON (or set a breakpoint) in YetAnotherFunction.

Note that I don't typically write code like:

llReturn = SomeFunction()
if llReturn
   llReturn = SomeOtherFunction()
   if llReturn
      llReturn = YetAnotherFunction()
   endif
endif

That seems (to me) harder to read than the first example.

Here's another example of debugging-unfriendly code I used to write:

SomeVariable = left(SomeFunction(), 5) + substr(SomeOtherFunction(), 2, 1)

The reason that's not friendly is that you can't see what SomeFunction and SomeOtherFunction return unless you actually trace their code. Instead, I now use code like:

SomeVariable1 = SomeFunction()
SomeVariable2 = SomeOtherFunction()
SomeVariable = left(SomeVariable1, 5) + substr(SomeVariable2, 2, 1)

That way, I can trace this code and see what the functions return without having to trace the functions. If something doesn't look right, I can always Set Next Statement back to one of the statements calling a function and Step Into the function to see what went wrong.

2. Instrument Your Application

Rod Paddock, Lisa Slater Nicholls, I, and others have written articles on instrumenting your applications. The idea is to sprinkle calls to a logging object throughout your application. If logging is turned off, nothing happens. If logging is turned on (for example, oLogger.lPerformLogging is .T.), a message is written to some location (a text file, a table, the Windows Event log, etc.) indicating what the application is doing.

I've found this extremely valuable in tracking down problems. While error logs are great, they only give you a snapshot of how things were at the time the error occurred. Sometimes, you need to know how you got there. Also, sometimes the problem the user is experiencing doesn't cause an error, just incorrect behavior. By perusing a diagnostic log, you can see all of the steps (the ones you've instrumented, anyway) that led to the behavior. Yes, you can use SET COVERAGE in your application, but that generates a ton of information, likely a lot more than you need unless you have a really ugly problem whose cause you have no clue about.
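A minimal sketch of such a logging object (the class and its members are my invention, not taken from any of the articles):

define class SFLogger as Custom
   lPerformLogging = .F.
   cLogFile = 'diagnostic.log'

   function LogMessage(tcMessage)
      * Do nothing unless logging is turned on.
      if This.lPerformLogging
         strtofile(transform(datetime()) + ': ' + tcMessage + ;
            chr(13) + chr(10), This.cLogFile, .T.)
      endif
   endfunc
enddefine

Calls like oLogger.LogMessage('Opening customer form') sprinkled through the application then cost almost nothing when logging is off.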

Lisa's article is the most recent one I've read, so it's a good starting place. Rod's article is in the September 1997 issue of FoxTalk and mine is in the October 2003 issue of FoxTalk (both articles require a subscription).

Tuesday, April 25, 2006

Don't RETURN Inside WITH

At GLGDW, I mentioned during Marcia Akins' excellent Best Practices for Class Design session that one of the leading causes of C5 errors is using RETURN inside WITH structures. Given the number of people that came up to me after the session, this clearly isn't well-known, so here's the scoop.


I'm not sure exactly what happens when you use a WITH structure, but clearly VFP stores a reference to the object specified in the WITH statement somewhere. Obviously, the reference must be removed at some point or else the object couldn't release, but I suspect that RETURNing from inside the WITH skips that cleanup, causing a memory leak, and that when enough of these leaks accumulate, you end up with a C5 error. The insidious thing about memory leaks is that the C5 error can occur far away in both code and time from the original source, so they're next to impossible to track down for mere mortals like me who don't do C code debugging.


I first started looking into this about 18 months ago. We had fairly regular reports of C5 errors from people going into or out of the Report Designer from within Stonefield Query. The problem is that it wasn't reproducible -- I could never get it to happen when I tried (I did have it happen a couple of times when I didn't want it to, such as during demos!). I went through the code related to the Report Designer with a fine-tooth comb and couldn't see anything that could cause this. Then I remembered some weird behavior from years earlier: if I used RETURN within a WITH statement, under some conditions, the object specified wouldn't release. That problem was fixed in a later version of VFP, so I'd forgotten about it, but it occurred to me that I'd used that a lot in my code even though it really isn't a good practice. So, I spent a day or two refactoring every single instance of RETURN inside WITH so the RETURN statement was after the ENDWITH. Since I couldn't reproduce the C5 errors on demand, it was hard to know whether this worked or not, but we haven't had a single C5 error reported since we released that version of Stonefield Query. And in thinking about what may be going on underneath the covers, it makes sense to me that this was likely the culprit.


So, a heads-up for everyone: if you're getting C5 errors and are pulling your hair out trying to track them down, look at all your WITH structures and move any RETURN statements below the ENDWITH. Not only is it good programming practice, it may kill those maddening problems.
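For example (a sketch; the object and property names are invented to show the pattern):

* Risky: RETURN while the WITH object reference is still active.
with This.oGrid
   if empty(.RecordSource)
      return .F.
   endif
   * ...
endwith

* Safer: set a result variable and RETURN after ENDWITH.
llOK = .T.
with This.oGrid
   if empty(.RecordSource)
      llOK = .F.
   endif
   * ...
endwith
return llOK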


Back from GLGDW

I got back from Milwaukee late last night. What a conference! Many people were saying it was the best conference they ever attended. Here are my thoughts:


  • I loved the format: one track, so everyone was in one room, and lots of audience participation. It took a little getting used to, and it definitely was hard on time management, but it was great hearing audience members tell everyone about their experiences with X or what utilities they use to solve a problem. It felt more like a big workshop than a conference.

  • Whil's daughter Aleix, who helped with everything from registration to book sales, was amazing. At 13, she has the poise and self-confidence of someone much older. For example, she came to dinner Saturday night with 6 of us without her Dad--who, although he gave another excuse, was scared of the Thai restaurant we went to (gd&rfwh)--and it didn't faze her in the slightest listening to our typical 40-something conversation.

  • Nearly every speaker told me that even they learned a lot. In fact, most speakers attended every session, something you rarely see at other conferences.

  • The only negative was the wireless connection provided by the hotel. It only worked in the lobby, not the conference room, nor in my hotel room (although some people said it worked OK in their rooms).


Here's a breakdown of the sessions:


  • Best Practices for Development Environment Setup: the first session, this was sort of a panel presentation on Friday night. I say "sort of" because there wasn't a table for panelists, but most of the speakers sat in the front row. Rick Schummer ran the show and did a great job of encouraging audience participation. In fact, that session set the tone for the rest of the conference. The topic was about the best ways to set up the VFP IDE, such as modifying the VFP system menus to provide quick access to commonly used utilities.

  • Best Practices for Error Handling and Reporting: Rick did his usual great job presenting his session on error handling, pointing out things such as the priority of error handling in a mixed environment of TRY structures, Error methods, and a global error handler.

  • Best Practices for Class Design: given that I've been using VFP for more than 10 years, I didn't expect to learn much from this session, but I was surprised. Marcia Akins challenged everyone with her ideas of the "must, could, and should" rules of class design and provided lots of examples to nail down the points. I'll definitely use some of the things she taught to make my class designs better in the future. Best of all, she gave away treats to people with the best suggestions or answers!

  • Best Practices for User Interfaces: Tamar Granor's session showed us good and bad examples of user interfaces (not just computer UI) and broke it down by section, including dialogs, menus, user accessibility, and so on.

  • Best Practices for Data Access (local): I'm ashamed to say that I missed Andy Kramek's session because I really wanted to go over my session one more time. Fortunately, Andy writes great white papers so I know I can get the content from there.

  • Best Practices for Data Access (remote): Andy's second session, held Saturday night after one of the best Thai meals I've ever had, was one of two sessions (Nancy Folsom's was the other) where some of the best practices were somewhat controversial. For example, Andy suggested that DSNs are preferred over DSN-less connections and that stored procedures are to be used as a last resort. Not everyone agrees with these, but given Andy's extensive experience in this area, it's worth at least reconsidering your views on these areas. He even gave a short break so those so inclined could get a beer from the nearby bar!

  • Best Practices for Refactoring: Nancy Folsom's 8:30 am session was one of my favorites. She went through reasons for refactoring, general concepts (for example, test before and after the change so you can confirm functionality wasn't affected and don't change too much in one swoop), and then detailed techniques, showing examples of code before, during, and after refactoring. Some of the "bad smells" (reasons to suspect you need to refactor) were controversial, such as the presence of comments (even her slide has a "wha??" after that point!), but thought-provoking. She only finished about half her material due to audience participation, but I think that's a good thing, as it was great to explore ideas with others.

  • Best Practices for Reporting and Output: Barbara Peisch showed how she provides a generic reporting dialog to her users and showed the advantage of using a common output routine for all reports.

  • Best Practices for Project Management: Cathy Pountney discussed a wide range of issues when working on VFP projects, such as team building and management, scheduling, requirements analysis (she hates the term "gathering", which implies that requirements are just lying around waiting to be scooped up).

  • Best Practices for Vertical Application Development: my session did not start well. For some reason, my laptop wouldn't talk to the projector. I've never had problems with projectors before, so this was unexpected and very unpleasant. After fiddling with settings and a restart, we decided (15 minutes after the session was supposed to start) to rearrange the schedule and do the Best Practices for Debugging panel session while I moved everything I needed over to Rick Schummer's laptop. As a result, I totally missed that session (which I'm bummed about), including not giving the best practices I'd planned to present, so I'll blog about those in the next couple of days. I finally started at 4:45, more than an hour late, but after getting over the jitters due to what happened, using someone else's system, and not being able to show everything I planned because I couldn't install everything, I think the session went well. I discussed application activation and licensing, maintenance models, version update mechanisms, support policies, and, after a short break to allow those who were getting hungry to leave (it was past 6 pm at this point), error reporting. It was great getting feedback from the audience about how they do some of those things--I definitely liked the audience participation, even if it made it impossible to cover everything (as Nancy and some others found) or finish on time (in my case--my session was about 1:45 rather than the planned 1:30, but there was nothing after my session so it worked out).

  • Best Practices for Designing Middle Tier Components: Craig Berntson discussed the importance of separating business logic, data access, and user interface into different components, even if they're in the same physical layer, and presented some simple classes that explained the concepts very well.

  • Best Practices for Deployment: Rick Borup's session was probably the closest one to the "best practices" theme, as he discussed the five stages of deployment. I came away with several good ideas about how to make deployment work better.


The general consensus was that there should be another GLGDW next year. Whil didn't promise anything, so anyone interested in attending an inexpensive conference where you'll learn things that simply aren't presented at other conferences should email Whil and tell him they'll be there next year. Thanks, Whil, for going to all the hard work of putting on yet another great conference, and here's my vote for GLGDW 2007.

Checking out Qumana

After a few uses, I've decided the Blogger editor kind of sucks. It's slow (especially if you write longer entries as I've tended to so far), klunky, and seems to have a mind of its own regarding formatting. The fatal thing for me, though, was losing an entire entry just as I was finishing it up last week. So, I'm using a free blog editor called Qumana after reading Craig Bailey's recommendation.

So far, so good. It's very responsive (as you'd expect for a desktop app rather than a browser-based one like the Blogger editor), has a built-in spell checker with the little red squigglies like Microsoft Word, works offline (like I'm doing right now), and supports the same features I like about the Blogger editor (being able to switch between editing in text or HTML, formatting toolbar, easy hyperlinking, etc.). A few things I haven't tried yet are support for ads, tags, categories, and trackbacks, and the DropPad, which allows you to add text or images from any source by dragging and dropping. The only thing I've found that I don't like is the lack of local help; it's available on the Qumana web site. Since I don't currently have a connection (I'm typing this on my flight to Milwaukee), that doesn't work.


Wednesday, April 19, 2006

Forget TXTWIDTH - use GdipMeasureString

For years, we've used code like the following to determine the width of a string:

lnWidth = txtwidth(lcText, lcFontName, lnFontSize, lcFontStyle)
lnWidth = lnWidth * fontmetric(6, lcFontName, lnFontSize, lcFontStyle)

This code works OK in many situations, but not in one in particular: deciding how wide to make an object in a report.

The value calculated above is in pixels, so you must convert the value to FRUs (the units used in reports, which are 1/10000th of an inch); you need to multiply by 104.166 (10000 FRUs per inch / 96 pixels per inch). Instead of doing all that work, you could use the GetFRUTextWidth method of the FFC _FRXCursor helper object:

loFRXCursor = newobject('FRXCursor', home() + 'FFC\_FRXCursor.vcx')
lnWidth = loFRXCursor.GetFRUTextWidth(lcText, lcFontName, ;
   lnFontSize, lcFontStyle)

The problem is that this doesn't actually give you the correct value. The reason is that reports use GDI+ for rendering, and GDI+ renders objects a little larger than you'd expect it to.

To see this problem, do the following:

use home() + 'samples\data\customer'
loFRXCursor = newobject('FRXCursor', home() + 'FFC\_FRXCursor.vcx')
select max(loFRXCursor.GetFRUTextWidth(trim(company), 'Arial', 10)) ;
   from customer into array laWidth
wait window laWidth[1]

I get 22500. Now create a report, add a field, enter "company" as the expression, and make it 2.25 inches wide (22500 FRUs / 10000 FRUs per inch). Preview the report. The telltale ellipsis at the end of some values indicates the field wasn't sized wide enough.

This drove me crazy for years. I figured out an empirical "fudge" factor to add to the calculated width; 19 pixels (1979.154 FRU) seemed to work most of the time, but occasionally I'd find that wasn't enough for some values.

Fortunately, since reports use GDI+, we can use a GDI+ function to accurately calculate the width. GdipMeasureString determines several things about the specified string, including the width. Even better, VFP 9 comes with a GDI+ wrapper object so you don't have to understand the GDI+ API to call GdipMeasureString.

To show an example of using the GDI+ wrapper classes, take a look at this function:

function GetWidth(tcText, tcFontName, tnFontSize)
   local loGDI, loFont, lnChars, lnLines, loSize, lnWidth
   loGDI = newobject('GPGraphics', home() + 'FFC\_GDIPlus.vcx')
   loFont = newobject('GPFont', home() + 'FFC\_GDIPlus.vcx', '', ;
      tcFontName, tnFontSize, 0, 3)
   loGDI.CreateFromHWnd(_screen.HWnd)
   lnChars = 0
   lnLines = 0
   loSize = loGDI.MeasureStringA(tcText, loFont, , , @lnChars, @lnLines)
   lnWidth = loSize.W
   release loGDI, loFont, loSize
   return lnWidth
Now try the following:

select max(GetWidth(trim(company), 'Arial', 10)) ;
   from customer into array laWidth
wait window ceiling(laWidth[1] * 104.166)

This gives 23838. Change the width of the field in the report to 2.384 inches and preview it again. This time the values fit correctly.

The only problem now is that this code can take a long time to execute if there are a lot of records because for each call, a couple of GDI+ wrapper objects are created and some GDI+ setup is done. I created a wrapper class for GdipMeasureString called SFGDIMeasureString that works a lot more efficiently.

Let's look at this class in sections. Here's the start: it defines some constants, the class, and its properties:

* These #DEFINEs are taken from home() + 'ffc\gdiplus.h'
#define GDIPLUS_FontStyle_Regular    0
#define GDIPLUS_FontStyle_Bold       1
#define GDIPLUS_FontStyle_Italic    2
#define GDIPLUS_FontStyle_BoldItalic 3
#define GDIPLUS_FontStyle_Underline  4
#define GDIPLUS_FontStyle_Strikeout  8
#define GDIPLUS_STATUS_OK            0
#define GDIPLUS_Unit_Point           3

define class SFGDIMeasureString as Custom
   oGDI    = .NULL.  && a reference to a GPGraphics object
   oFormat = .NULL.  && a reference to a GPStringFormat object
   oFont   = .NULL.  && a reference to a GPFont object
   oSize   = .NULL.  && a reference to a GPSize object
   nChars  = 0       && the number of characters fitted in the bounding box
   nLines  = 0       && the number of lines in the bounding box
   nWidth  = 0       && the width of the bounding box
   nHeight = 0       && the height of the bounding box
   nStatus = 0       && the status code from GDI+ functions
The Init method instantiates some helper objects and declares the GdipMeasureString function. Destroy nukes the member objects:
function Init
  This.oGDI = newobject('GPGraphics', ;
    home() + 'ffc\_gdiplus.vcx')
  This.oFormat = newobject('GPStringFormat', ;
    home() + 'ffc\_gdiplus.vcx')
  This.oFont = newobject('GPFont', ;
    home() + 'ffc\_gdiplus.vcx')
  This.oSize = newobject('GPSize', ;
    home() + 'ffc\_gdiplus.vcx')
  declare integer GdipMeasureString ;
    in gdiplus.dll ;
    integer nGraphics, string cUnicode, ;
    integer nLength, integer nFont, ;
    string cLayoutRect, integer nStringFormat, ;
    string @cRectOut, integer @nChars, ;
    integer @nLines
endfunc

function Destroy
  store .NULL. to This.oGDI, This.oFormat, ;
    This.oFont, This.oSize
endfunc
MeasureString determines the dimensions of the bounding box for the specified string:
function MeasureString(tcString, tcFontName, ;
  tnFontSize, tcStyle)
  local lcStyle, ;
    lnStyle, ;
    lnChars, ;
    lnLines, ;
    lcBoundingBox, ;
    lnGDIHandle, ;
    lnFontHandle, ;
    lnFormatHandle, ;
    lcRectF, ;
    lnStatus, ;
    llReturn
  with This

* Ensure the parameters are passed correctly.

    do case
      case vartype(tcString) <> 'C' or ;
          empty(tcString)
        error 11
        return .F.
      case pcount() > 1 and ;
          (vartype(tcFontName) <> 'C' or ;
          empty(tcFontName) or ;
          vartype(tnFontSize) <> 'N' or ;
          not between(tnFontSize, 1, 128))
        error 11
        return .F.
      case pcount() = 4 and ;
          (vartype(tcStyle) <> 'C' or ;
          empty(tcStyle))
        error 11
        return .F.
    endcase

* Set up the font object if the font and size
* were specified.

    if pcount() > 1
      lcStyle = iif(vartype(tcStyle) = 'C', ;
        tcStyle, '')
      .SetFont(tcFontName, tnFontSize, lcStyle)
    endif pcount() > 1

* Initialize output variables used in
* GdipMeasureString.

    lnChars = 0
    lnLines = 0
    lcBoundingBox = replicate(chr(0), 16)

* Get the GDI+ handles we need.

    lnGDIHandle = .oGDI.GetHandle()
    if lnGDIHandle = 0
      .oGDI.CreateFromHWnd(_screen.HWnd)
      lnGDIHandle = .oGDI.GetHandle()
    endif lnGDIHandle = 0
    lnFontHandle = .oFont.GetHandle()
    lnFormatHandle = .oFormat.GetHandle()

* Get the size of the layout box.

    lcRectF = replicate(chr(0), 8) + ;
      .oSize.GdipSizeF

* Call the GdipMeasureString function to get
* the dimensions of the bounding box for the
* specified string.

    .nStatus = GdipMeasureString(lnGDIHandle, ;
      strconv(tcString, 5), len(tcString), ;
      lnFontHandle, lcRectF, lnFormatHandle, ;
      @lcBoundingBox, @lnChars, @lnLines)
    if .nStatus = GDIPLUS_STATUS_OK
      .nChars = lnChars
      .nLines = lnLines
      .nWidth = ctobin(substr(lcBoundingBox, ;
        9, 4), 'N')
      .nHeight = ctobin(substr(lcBoundingBox, ;
        13, 4), 'N')
      llReturn = .T.
    else
      llReturn = .F.
    endif .nStatus = GDIPLUS_STATUS_OK
  endwith
  return llReturn
endfunc
GetWidth is a utility method that returns the width of the specified string:
function GetWidth(tcString, tcFontName, ;
  tnFontSize, tcStyle)
  local llReturn, ;
    lnReturn
  lnReturn = 0
  with This
    do case
      case pcount() < 2
        llReturn = .MeasureString(tcString)
      case pcount() < 4
        llReturn = .MeasureString(tcString, ;
          tcFontName, tnFontSize)
      otherwise
        llReturn = .MeasureString(tcString, ;
          tcFontName, tnFontSize, tcStyle)
    endcase
    if llReturn
      lnReturn = .nWidth
    endif llReturn
  endwith
  return lnReturn
endfunc
SetSize sets the dimensions of the layout box for the string:
function SetSize(tnWidth, tnHeight)
  if vartype(tnWidth) = 'N' and ;
    tnWidth >= 0 and ;
    vartype(tnHeight) = 'N' and tnHeight >= 0
    This.oSize.Create(tnWidth, tnHeight)
  else
    error 11
  endif vartype(tnWidth) = 'N' ...
endfunc
SetFont sets the font name, size, and style to use:
function SetFont(tcFontName, tnFontSize, tcStyle)
  local lcStyle, ;
    lnStyle
  do case
    case vartype(tcFontName) <> 'C' or ;
        empty(tcFontName) or ;
        vartype(tnFontSize) <> 'N' or ;
        not between(tnFontSize, 1, 128)
      error 11
      return .F.
    case pcount() = 3 and ;
        vartype(tcStyle) <> 'C'
      error 11
      return .F.
  endcase
  lcStyle = iif(vartype(tcStyle) = 'C', tcStyle, '')
  lnStyle = iif('B' $ lcStyle, ;
      GDIPLUS_FontStyle_Bold, 0) + ;
    iif('I' $ lcStyle, ;
      GDIPLUS_FontStyle_Italic, 0) + ;
    iif('U' $ lcStyle, ;
      GDIPLUS_FontStyle_Underline, 0) + ;
    iif('-' $ lcStyle, ;
      GDIPLUS_FontStyle_Strikeout, 0)
  This.oFont.Create(tcFontName, tnFontSize, ;
    lnStyle, GDIPLUS_Unit_Point)
endfunc
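The style string maps characters onto the GDI+ flags defined at the top of the class: 'B' adds 1 (bold), 'I' adds 2 (italic), 'U' adds 4 (underline), and '-' adds 8 (strikeout), and they can be combined freely, so 'BI' yields 3, the GDIPLUS_FontStyle_BoldItalic value. A quick sketch (the text being measured is just an example):

```foxpro
* Measure bold italic text: 'BI' combines the Bold (1) and
* Italic (2) flags into style 3 (GDIPLUS_FontStyle_BoldItalic).
loMeasure = newobject('SFGDIMeasureString', ;
  'SFGDIMeasureString.prg')
loMeasure.SetFont('Arial', 10, 'BI')
? loMeasure.GetWidth('Hello world')  && width in pixels, bold italic
```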
Let's try the previous example using this class:

loGDI = newobject('SFGDIMeasureString', ;
  'SFGDIMeasureString.prg')
select max(loGDI.GetWidth(trim(company), 'Arial', 10)) ;
  from customer into array laWidth
wait window transform(laWidth[1] * 10000/96)
This is a lot faster than the GetWidth function presented earlier. The following would run even faster because the font object doesn't have to be initialized on each call:
loGDI = newobject('SFGDIMeasureString', ;
  'SFGDIMeasureString.prg')
loGDI.SetFont('Arial', 10)
select max(loGDI.GetWidth(trim(company))) ;
  from customer into array laWidth
wait window transform(laWidth[1] * 10000/96)
The cool thing about this class is that it can do a lot more than just calculate the width of a string. It can also determine the height of a string, or the number of lines it will occupy at a given width (think SET MEMOWIDTH followed by MEMLINES(), but faster, more accurate, and aware of fonts).
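For example, here's a sketch of counting the lines a string occupies when wrapped at a given width (the lcSomeLongText variable and the 300-pixel width are illustrative):

```foxpro
* How many lines does this text need at 300 pixels wide?
loMeasure = newobject('SFGDIMeasureString', ;
  'SFGDIMeasureString.prg')
loMeasure.SetFont('Arial', 10)
loMeasure.SetSize(300, 10000)  && huge height so it isn't a factor
if loMeasure.MeasureString(lcSomeLongText)
  ? loMeasure.nLines  && the font-aware equivalent of MEMLINES()
endif
```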

For example, I have a generic message dialog class I use to display warnings, errors, and other types of messages to the user. I don't use MESSAGEBOX() for this because my class supports multiple buttons with custom captions. The problem is that the buttons appear below an editbox used to display the message, so how much room do I have to allocate for the height of the editbox? If I don't specify enough, the user has to scroll to see the message; if I specify too much, short messages look goofy because there's a lot of blank space before the buttons. Now I can make the editbox an arbitrary size and use SFGDIMeasureString to determine the necessary height for a given message, adjusting the positions of the buttons dynamically. To do so, I call the SetSize method to tell SFGDIMeasureString the width of the editbox (I pass a very large value, such as 10000, for the height so it isn't a factor), then call MeasureString and use the value of the nHeight property as the height of the editbox.
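Those steps can be sketched like this (the edtMessage name, the padding value, and the form references are hypothetical, not the actual dialog class):

```foxpro
* Size the message editbox to fit its text.
with Thisform
  loMeasure = newobject('SFGDIMeasureString', ;
    'SFGDIMeasureString.prg')
  loMeasure.SetFont(.edtMessage.FontName, .edtMessage.FontSize)
  * constrain the layout width to the editbox width; pass a
  * very large height so it isn't a factor
  loMeasure.SetSize(.edtMessage.Width, 10000)
  if loMeasure.MeasureString(.edtMessage.Value)
    .edtMessage.Height = loMeasure.nHeight + 8  && 8: arbitrary padding
    * then reposition the buttons below the resized editbox
  endif
endwith
```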

I'm finding a lot more uses for this class. I hope you find it useful too.