Code That Debugs Itself

Environment: VC++ 6.0, ANSI/UNICODE, Windows 98, ME, NT 4.0, 2000, XP

Table of Contents

  1. Introduction
  2. Code that Debugs Itself
  3. Requirements of the Debug Macros
  4. QAFDebug Debug Macros
  5. Using the Debug Macros
  6. Error Log File and VC++ Debug Trace Window
  7. Release and Debug Builds
  8. Customizing the Debug Log for Your Projects
  9. Unit Tests Support
  10. Syntax Coloring and AutoText Completions
  11. Appendix A: References
  12. Appendix B: Drawbacks of C++ Standard Asserts
  13. Appendix C: Version History
  14. Appendix D: Missing Features

Introduction

Standard C++ debug macros are not rich. Usually, your toolkit contains only a couple of such macros with some variations: ASSERT() and TRACE() (look at [MSDN: Assertions] for a complete list of the standard debug macros). In its general implementation, ASSERT() pops up a dialog box and/or terminates the program. TRACE() usually outputs a string to the debug output.

These standard debug macros are very useful in debugging, but they do have a few drawbacks (look at Appendix B: Drawbacks of C++ Standard Asserts). They are especially unsuitable for the projects I work on: DLLs and ActiveX COM objects, often running on a server. They lack some features important to me (like the ability to have release builds with a critical error log, and reporting the actual error message to that log). I cannot allow them to terminate the server process. I do not want to bother the QA guys with "Abort, Retry, Ignore" dialog boxes. Finally, I do not like how these ASSERTs look in the source code (look at Listing 2). I want an alternative.

More than that, traditionally the role of these debug macros is limited to testing input parameters, while they can detect a much wider range of programming errors. I think that the design of the standard ASSERT macro limits its power. To overcome ASSERT's limitations, I developed a set of simple, flexible, and powerful debug macros that allow me to catch most programming errors in the early stages of development. Even more, these macros as a tool stimulate writing "code that debugs itself." The technique and the tool presented here have proved their usefulness in real projects.

Code that Debugs Itself

Before we go into the implementation details, I want to guide you to the idea of writing "code that debugs itself." This idea is simple, it is very old, and it was not me who invented it: Code defensively, make the defects show up, and always check error returns (look at [The Practice of Programming, page 142]).

Here is an example. Suppose you have a function like this one (the example is Win32, VC++, COM, and ATL specific):

// Listing 1: A regular C++ source code (it will not crash by AV)
HRESULT ConvertFile2File( BSTR bstrInFileName, BSTR * pbstrRet )
{
    if( NULL == bstrRet )
        return E_INVALIDARG;
    // Create the object (I use a descendant of CComDispatchDriver)
    QComDispatchDriver piConvDisp;
    HRESULT hr = piConvDisp.CreateObject( L"QTxtConvert.QTxtConv",
        CLSCTX_LOCAL_SERVER );
    if( FAILED(hr) )
        return hr;
    hr = piConvDisp.Invoke1( L"Convert",
        CComVariant(bstrInFileName), NULL );
    if( FAILED(hr) )
        return hr;
    CComVariant varTemp;
    // Get the result
    hr = piConvDisp.Get( L"Target", &varTemp );
    if( FAILED(hr) )
        return hr;
    hr = varTemp.ChangeType( VT_BSTR );
    if( FAILED(hr) )
        return hr;
    CComBSTR bstrResult = varTemp.bstrVal;
    // Get the optional Author property
    hr = piConvDisp.Get( L"Author", &varTemp );
    if( SUCCEEDED( hr ) &&
        ( VT_BSTR == V_VT(&varTemp) ) )
    {
        CComBSTR szAuthor = V_BSTR( &varTemp );
        // The Author property may be an empty string - it's okay!
        if( szAuthor.Length() > 0 )
        {
            bstrResult += L"&>";
            bstrResult += szAuthor;
        }
    }
    return bstrResult.CopyTo(pbstrRet);
}

This function is defensive, and its error processing is okay. It is pretty good for the release code, but first you must debug this code. Here I have a problem—while this function accurately returns the error code, I have no idea what caused the error. This function may return the same E_FAIL from six different places. The only way to know where it failed is to go through it step by step in the debugger; that is not always possible. The user or a QA guy will find some feature(s) non-working with no diagnostic of the problem. What would you do?

Well, at this point the ASSERT() macro comes to help us. By using the ASSERT macro, we can spot the error at the moment it happens. We can test the input parameters, illegal conditions, and our design assumptions. Look at the resulting code:

// Listing 2: A good C++ source code (it will prompt on any error)
HRESULT ConvertFile2File( BSTR bstrInFileName, BSTR * pbstrRet )
{
    // test the input parameters
    ASSERT( NULL != bstrRet );
    ASSERT( SysStringLen(bstrInFileName) > 0 );
    if( NULL == bstrRet )
        return E_INVALIDARG;
    // Create the object (I use a descendant of CComDispatchDriver)
    QComDispatchDriver piConvDisp;
    HRESULT hr = piConvDisp.CreateObject( L"QTxtConvert.QTxtConv",
        CLSCTX_LOCAL_SERVER );
    // I assume that the object is always installed and available.
    // If it is not created, this is definitely a bug.
    ASSERT( SUCCEEDED(hr) );
    if( FAILED(hr) )
        return hr;
    hr = piConvDisp.Invoke1( L"Convert",
        CComVariant(bstrInFileName), NULL );
    // This method should usually never fail.
    // If something wrong happens, it is most
    // possible a bug, I want to know about it.
    // Attention: I filter one expected error first!
    ASSERT( (E_INVALID_FORMAT == hr) || SUCCEEDED(hr) );
    if( FAILED(hr) )
        return hr;
    CComVariant varTemp;
    // Get the result
    hr = piConvDisp.Get( L"Target", &varTemp );
    // If this property is not available, it is a bug.
    ASSERT( SUCCEEDED(hr) );
    if( FAILED(hr) )
        return hr;
    hr = varTemp.ChangeType( VT_BSTR );
    // If the property has a wrong type, it is a bug.
    ASSERT( SUCCEEDED(hr) );
    if( FAILED(hr) )
        return hr;
    CComBSTR bstrResult = varTemp.bstrVal;
    // Get the optional Author property
    hr = piConvDisp.Get( L"Author", &varTemp );
    // If this property is not available, it is a bug.
    // If the property has a wrong type, it is a bug.
    ASSERT( SUCCEEDED(hr) );
    ASSERT( VT_BSTR == V_VT(&varTemp) );
    if( SUCCEEDED( hr ) && ( VT_BSTR == V_VT(&varTemp) ) )
    {
        CComBSTR szAuthor = V_BSTR( &varTemp );
        // The Author property may be an empty string - it's okay!
        if( szAuthor.Length() > 0 )
        {
            bstrResult += L"&>";
            bstrResult += szAuthor;
        }
    }
    hr = bstrResult.CopyTo(pbstrRet);
    // If I cannot copy a string, it is a catastrophe!
    ASSERT( SUCCEEDED(hr) );
    return hr;
}

Is it long and ugly? Yes, I agree. Is it a misuse of ASSERTs? Not at all! (Maybe only a bit...)

The usual place for an ASSERT is at the beginning of the function where it tests for input parameters. Most programmers stop at that. I think they forget that ASSERTs are also a tool to test DESIGN ASSUMPTIONS and ILLEGAL CONDITIONS. I assume that the object must be registered and available. I assume that it must never fail in Convert(). I assume that there are two BSTR properties. Finally, I assume that my object will not eat all the computer's memory. All of those are my design assumptions. I test my assumptions by adding ASSERTs. This fact does not eliminate the regular error handling, not at all.

Well, here I mix runtime error conditions (memory overflow) with runtime programming errors or illegal conditions (a NULL pointer in the input parameters). I think they are very close to each other because neither is expected by my algorithm. These are UNEXPECTED ERRORS that should be handled by ASSERTs. I will never use asserts to test EXPECTED ERRORS (such as writing to a read-only file). You can see in the code that I filter one expected error that may occur in this function: the Convert method may return the E_INVALID_FORMAT error. There are no other expected errors (such as file read errors) because the input file is generated by the caller process and must always be available for me. If it cannot be read, this is an unexpected error.

I definitely want to have all this debug stuff, but I do not like how the code looks. And even if I know that the code failed in a specific place, I would like to know why—to get the HRESULT code at least. Plus, I could repeat everything I wrote in Appendix B: Drawbacks of C++ Standard Asserts. I think now is the right time to present the right tool.

Requirements of the Debug Macros

When I first started to think about a set of debug macros, I defined the following requirements:

  • Only a few easy-to-use functions or macros with clear semantics (to make them user-friendly... sorry—developer-friendly);
  • The error report is sent either to a file or to the debug console (no pop-up dialog boxes; look at Appendix B for Drawback #2);
  • Switching the debug macros on/off should not affect normal program execution (adding a debug macro should not cause the expression to be evaluated twice; look at Appendix B for Drawback #5);
  • Debug macros should integrate transparently into the source code, and removing them in the release build should not change the algorithm (by contrast, each ASSERT macro generally takes a separate line and makes the program repeat the tested condition);
  • The error report must be available in both debug and release builds, depending on conditional defines (in certain cases, I want to keep the error log in release builds—it makes detecting defects in early production much easier).

I realized that, by using C++ syntax, it is quite simple to satisfy these requirements. It took me about 10 releases to come up with the set of debug macros.
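To show that these requirements are achievable, here is a minimal sketch of the evaluate-once idea. This is not the actual QAFDebug source; QafReport, MY_Q_ASSERT, and MY_Q_INVALID are illustrative names only:

```cpp
#include <cstdio>

// The expression is evaluated once, at the call site; the helper
// only receives the resulting bool and passes it through.
inline bool QafReport( bool ok, const char * expr,
                       const char * file, int line )
{
    if( !ok )
        std::fprintf( stderr, "%s(%d) : failed: %s\n",
                      file, line, expr );
    return ok;    // same value in and out: the algorithm is unchanged
}

// Reports when the good condition fails (like Q_ASSERT).
#define MY_Q_ASSERT(expr) \
    QafReport( (expr), #expr, __FILE__, __LINE__ )
// Reports when the bad condition holds (like Q_INVALID).
#define MY_Q_INVALID(expr) \
    ( !QafReport( !(expr), #expr, __FILE__, __LINE__ ) )
```

Because the macro returns the tested value, it integrates directly into the surrounding if statement, and disabling the reporting cannot change the program logic.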

QAFDebug Debug Macros

bool Q_ASSERT( bool )
This macro is similar to the regular ASSERT() macro. It returns the same value it receives, and it reports if the bool is false (a good condition failed).
// report if the variable is NULL - it is an unexpected error
if( Q_ASSERT(NULL != lpszString) )
    ;    // process the string
else
    return ERROR;    // you may even skip "else"
bool Q_INVALID( bool )
This macro is very useful for testing parameters. It returns the same value it receives, and it reports if the bool is true (a bad condition succeeded).
// report if the variable is NULL - it is an unexpected error
if( Q_INVALID(NULL == lpszString) )
    return ERROR;
bool Q_SUCCEEDED( HRESULT )
This macro is similar to the regular SUCCEEDED() macro (it actually uses SUCCEEDED() inside), and it reports if the HRESULT is an error.
hr = varTemp.ChangeType( VT_BSTR );
// report if the HRESULT failed - it is an unexpected error
if( Q_SUCCEEDED(hr) )
    ; // process the string
else
    return hr;    // you may even skip "else"
bool Q_FAILED( HRESULT )
This macro is similar to the regular FAILED() macro (it uses FAILED() inside), and it reports if the HRESULT is an error.
// report if the HRESULT failed - it is an unexpected error
if( Q_FAILED(hr) )
    return hr;
HRESULT Q_ERROR( HRESULT )
This is a special macro that returns the same value it receives. It is useful in places where you get a result and return it immediately without testing.
// report if the operation failed - it is an unexpected error
// while the caller will know about the error, I still want
// to get the first sign from the original source of the error
return Q_ERROR( bstrResult.CopyTo(pbstrRet) );
void Q_EXCEPTION( CException* )
This is a special macro that is very useful when processing exceptions in MFC Wizard-generated code. It reports the exception.
catch( CException* e )
{
    // Report the exception - it is an unexpected error.
    // (If some exceptions are expected, I will
    // process them differently.)
    Q_EXCEPTION(e);
}
void Q_LOG( LPCTSTR )
This macro writes a custom error message to the error log. It replaces the undescriptive "Q_ASSERT(false)".
DLLEXPORT int Func01( LPCTSTR lpszString )
{
    // report about the unexpected function call
    Q_LOG( _T("This function is deprecated "
        "and it should never be called!") );
}

Using the Debug Macros

Now, let's return to that function and add the debug macros to it. Once we add them, the function will carefully report all unexpected errors at the moment they happen. And the error-reporting code does not take up much space or take much time to implement—no more than is needed, I mean.

// Listing 3: The instrumented C++ source code
#include "QAFDebug.h"
...
HRESULT ConvertFile2File( BSTR bstrInFileName, BSTR * pbstrRet )
{
    // test the input parameters
    if( Q_INVALID(NULL == bstrRet)
        || Q_INVALID(SysStringLen(bstrInFileName) <= 0)  )
        return E_INVALIDARG;
    // Create the object (I use a descendant of CComDispatchDriver)
    QComDispatchDriver piConvDisp;
    HRESULT hr = piConvDisp.CreateObject( L"QTxtConvert.QTxtConv",
        CLSCTX_LOCAL_SERVER );
    // I assume that the object is always installed and available.
    // If it is not created, this is definitely a bug.
    if( Q_FAILED(hr) )
        return hr;
    hr = piConvDisp.Invoke1( L"Convert",
        CComVariant(bstrInFileName), NULL );
    // This method should usually never fail.
    // If something wrong happens, it is most
    // possibly a bug, and I want to know about it.
    // Attention: I filter one expected error first!
    if( (E_INVALID_FORMAT == hr) || Q_FAILED(hr) )
        return hr;
    CComVariant varTemp;
    // Get the result
    hr = piConvDisp.Get( L"Target", &varTemp );
    // If this property is not available, it is a bug.
    if( Q_FAILED(hr) )
        return hr;
    hr = varTemp.ChangeType( VT_BSTR );
    // If the property has a wrong type, it is a bug.
    if( Q_FAILED(hr) )
        return hr;
    CComBSTR bstrResult = varTemp.bstrVal;
    // Get the optional Author property
    hr = piConvDisp.Get( L"Author", &varTemp );
    // If this property is not available, it is a bug.
    // If the property has a wrong type, it is a bug.
    if( Q_SUCCEEDED( hr ) &&
        Q_ASSERT( VT_BSTR == V_VT(&varTemp) ) )
    {
        CComBSTR szAuthor = V_BSTR( &varTemp );
        // The Author property may be an empty string - it's okay!
        if( szAuthor.Length() > 0 )
        {
            bstrResult += L"&>";
            bstrResult += szAuthor;
        }
    }
    // If I cannot copy a string, it is a catastrophe!
    return Q_ERROR( bstrResult.CopyTo(pbstrRet) );
}

The syntax of these macros is clear. As you can see, the expression is evaluated only once, so the algorithm remains the same in the debug and release builds. If you disable the debug stuff, it will only remove these "Q_" prefixes without touching the program logic (that is a figure of speech, but it is close to the truth).

When something goes wrong (the programmer tries to access an invalid property name, or there is not enough memory), the program will write all the available error information to the error log file (line-wrapped here):

QAFDBG01 [2003-02-07 19:21:45:820] [process=0x000002FC,
thread=0x00000660, module=C:\Works\ErrorLog\ErrorLog.exe,
GetLastError=183] [file=C:\Works\ErrorLog\ErrorLog.cpp, line=61,
expression="FAILED(0x8007000E)"] Not enough storage is available
to complete this operation.

You have all the required information here: the name of the hosting process, the last error code, the location in the source file, and a description of the error. In many cases, a look at the log record on the QA computer is enough to understand what's wrong with your program.

In the VC++ IDE, it will also add a line for quick navigation to the source of the error:

C:\Works\ErrorLog\ErrorLog.cpp(61) : Not enough storage
is available to complete this operation.

To use the debug macros, you need to add two files to your project: QAFDebug.h and QAFDebug.cpp. I also recommend adjusting several constants in the source code to define the desired location and maximum size of the log file. Look at the Customizing section.

Error Log File and VC++ Debug Trace Window

I like the error log to be printed in the debug trace window of my VC++ IDE when I'm running under the debugger, and to be written to the file otherwise. This is what I implemented. The error log is written to the file and duplicated for the VC++ IDE or other trace tools (such as DebugView from www.sysinternals.com). I have a lot of different ActiveX objects and DLLs hosted by different processes, so I need a single, well-known location for the log file. I decided to put it under the folder:

C:\Documents and Settings\username\Application Data\MyCompany\
   Log\error.log

For DLLs hosted by NT services (which have no current user account), I use the "All Users" or "Default User" folder (it differs between Windows versions). Or, you may define an environment variable named "QAFERRORLOGPATH". I think the environment variable is the best way for NT services.
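The lookup order described above can be sketched as follows (ResolveLogFolder is an illustrative name, not an actual QAFDebug function, and the fallback path is hard-coded where the real code would query the shell for the application data folder):

```cpp
#include <cstdlib>
#include <string>

// 1. An explicit QAFERRORLOGPATH environment variable wins
//    (the most reliable option for NT services).
// 2. Otherwise, fall back to the per-user application data folder
//    (on Windows, SHGetFolderPath with CSIDL_APPDATA; hard-coded
//    here to keep the sketch portable).
std::string ResolveLogFolder()
{
    if( const char * env = std::getenv( "QAFERRORLOGPATH" ) )
        return std::string( env );
    return "C:\\Documents and Settings\\username\\"
           "Application Data\\MyCompany\\Log\\";
}
```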

An endless error log is dangerous. Therefore, I implemented a maximum file size limit (by default, 250 Kb, measured in TCHARs). There are two log files at any moment: the current one and the previous one. Generally, one record takes about 300 TCHARs, so the 250,000-TCHAR limit reserves space for about 800-1,500 records. This should be enough, and it guarantees that both log files together will not be larger than 1 Mb in UNICODE builds and 500 Kb in ASCII builds.
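The two-file size policy can be sketched like this (illustrative names; not the actual QAFDebug source):

```cpp
#include <cstdio>

// By default, 250 Kb measured in TCHARs (plain chars here).
const long kMaxLogChars = 250L * 1024;

bool NeedsRotation( long currentSizeChars )
{
    return currentSizeChars >= kMaxLogChars;
}

// When the current log reaches the limit, it replaces the previous
// log, and the next write starts a fresh current file; the two
// files together therefore stay bounded.
void RotateIfNeeded( const char * logName, const char * oldLogName,
                     long currentSizeChars )
{
    if( NeedsRotation( currentSizeChars ) )
    {
        std::remove( oldLogName );           // drop the oldest records
        std::rename( logName, oldLogName );  // current becomes previous
    }
}
```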

All processes write to the same log file. Of course, access to it should be synchronized. But because it is the critical error log and not a regular trace log, and critical errors happen quite seldom, I use the file itself for synchronization. I lock it on opening, and if it is locked by someone else, I retry opening it several times (for at most 200 msec). This protects me from any kind of deadlock.
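The bounded retry idea can be sketched as follows; tryOpen stands in for one exclusive-open attempt (on Windows, CreateFile with a zero share mode), and the sleep between attempts is omitted so the sketch stays portable:

```cpp
#include <functional>

// About 200 msec total: 20 attempts with a ~10 msec pause between
// them in the real code (the pause is omitted in this sketch).
const int kMaxOpenAttempts = 20;

// Returns true as soon as one attempt succeeds; gives up after a
// bounded number of attempts instead of blocking forever, which
// rules out deadlocks on the shared log file.
bool OpenLogWithRetry( const std::function<bool()> & tryOpen,
                       int & attemptsMade )
{
    for( attemptsMade = 1; attemptsMade <= kMaxOpenAttempts;
         ++attemptsMade )
    {
        if( tryOpen() )
            return true;
        // real code: Sleep( 10 ) here
    }
    attemptsMade = kMaxOpenAttempts;
    return false;    // skip this record rather than deadlock
}
```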

Release and Debug Builds

Jon Bentley, in [Programming Pearls, page 50], quotes Tony Hoare, who said that a programmer who uses assertions while testing and turns them off during production is like a sailor who wears a life vest while drilling on shore and takes it off at sea. The question "to assert or not to assert in release" has already caused a lot of discussion in the forums. I think the answer is quite simple: Whenever you know that the product is stable, you are free to switch off the assertions for better performance and security. You are free to switch them off if the product is not stable yet as well.

In certain cases, I believe it is useful to keep the critical error log switched on even in release builds. In a start-up company and in the early stages of production, when the QA/RND loops are very short, tests do not cover 100% of the features, and new releases almost immediately go to production (for me, even an internal release or a pilot project means "production"), there is no chance to release a fully tested build. Developers receive many defect reports from outside, and in most cases they are hard to reproduce. In such situations, the critical error log becomes extremely useful. The macros make the program several Kb bigger and affect the performance a bit (at most two assembler instructions per macro when no error is detected), but the cost is worth paying.

The set of macros presented here is very flexible: you may switch it on or off depending on the debug/release build, and even specific parts of it may be switched on or off. From my experience, it is useful to switch on maximum reporting in debug builds and keep only the error log file enabled in release builds (usually nobody traces the release product). These are the default settings. Look at the next section if you want to override them. I also switch off the unit test support in release builds because the unit tests are a bit heavy.

Customizing the Debug Log for Your Projects

First of all, I recommend that you change the following constants in QAFDebug.h and define them to something unique for your projects:

QAFDEBUG_LOG_SUBFOLDER = _T("YourCompany\\Log\\")
Subfolder in the application data folder.
QAFDEBUG_LOG_ENV_VAR = _T("QAFERRORLOGPATH")
Name of the environment variable that may set the output debug log folder.
QAFDEBUG_SILENCE_MUTEX = _T("QAFDebugMutex001A")
The name of the mutex for synchronizing the unit test support stuff.
QDEBUG_SHMEMFILE = _T("QAFDbgMemFile01")
The name of the memory-mapped file that stores the shared flags for the unit test support stuff.

You might also want to correct the following constants in QAFDebug.h:

QAFDEBUG_LOG_FILE_MAX_SIZE = (250 * 1024 * sizeof(TCHAR))
Maximum log file size.
QAFDEBUG_LOG_FILE_NAME = _T("error.log")
The current error log file name.
QAFDEBUG_LOG_OLD_FILE_NAME = _T("error.old.log")
The previous error log file name.
QAFDEBUG_STD_PREFIX = _T("QAFDBG01 ")
This is a prefix for the log file and debug output, for filtering your messages.

The following defines allow switching off specific features of the debug log (or any reporting at all):

QAF_DISABLED
Switch off any error reporting (not used by default).
QAF_LOGFILE_DISABLED
Switch off writing to the error log file (not used by default).
QAF_OUTPUTDEBUGSTRING_DISABLED
Switch off writing to the debug trace window (defined in release builds by default).
QAF_UNITTEST_DISABLED
Switch off unit test support (defined in release builds by default).

Unit Tests Support

When writing automated unit tests, you often need to test wrong parameters. In such cases, we need to ensure that the function correctly reports an error; the error becomes normal, desired behavior. So I wanted to temporarily disable the error log, test that the function fails, and then enable the error log again. To support this, I added two macros:

void Q_SILENT( expression );
This macro disables the error log, evaluates the expression, and then re-enables the error log.
void Q_ENABLE_DEBUG_LOG;
This macro unconditionally enables the error log. It is useful when the log might remain disabled because of an exception thrown in a previous test case. By including it, you ensure that the error log is switched on.
void CUnitTest::testCase01( void )
{
    // ensure that the error log is switched on
    Q_ENABLE_DEBUG_LOG;
    // write to the error log if the test case fails
    CPPUNIT_ASSERT( Q_SUCCEEDED( QAFGetRegKey( HKCU_C_END,
                                               &str ) ) );
    // skip writing to the error log since this function
    // should fail
    Q_SILENT( CPPUNIT_ASSERT( Q_FAILED( QAFGetRegKey( NULL,
                             NULL ) ) ) );
    ...
}
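For illustration, here is a minimal sketch of how Q_SILENT and Q_ENABLE_DEBUG_LOG could work (MY_Q_SILENT and MY_Q_ENABLE_DEBUG_LOG are illustrative names). The real implementation shares the flag between all modules of the process through the named mutex and memory-mapped file from the Customizing section; a plain global is enough to show the control flow:

```cpp
// Process-wide flag consulted by the reporting code.
static bool g_logEnabled = true;

// Disable, evaluate, re-enable. Note: if the expression throws,
// the log stays disabled - which is exactly the situation that
// Q_ENABLE_DEBUG_LOG exists to repair at the start of a test case.
#define MY_Q_SILENT(expr) \
    do { g_logEnabled = false; (expr); g_logEnabled = true; } while(0)

#define MY_Q_ENABLE_DEBUG_LOG ( g_logEnabled = true )
```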

Syntax Coloring and AutoText Completions

There are many opportunities to make programming simpler and faster. Here, I present a couple of add-ins (most programmers should already know them) that help me not only with the debug macros but with the entire coding process. You can find the corresponding files in QAFDebug_doc.zip.

The UserType.dat file makes Visual Studio 6.0 highlight common ATL and MFC classes, types, macros, and functions as keywords (in blue). This helps a lot because the colored standard types are easy to recognize and you make fewer syntax mistakes. I added my debug macros to this file so they are highlighted, too. This file should be put under the folder:

C:\Program Files\Microsoft Visual Studio\Common\MSDev98\Bin

The VAssist.tpl file contains keyboard shortcuts for Visual Assist 6.0 that make typing commonly used constructions faster. Visual Assist 6.0 from http://www.wholetomato.com/ is one of the most useful add-ins for Visual Studio 6.0; it enhances the IntelliSense features of VC++. I strongly recommend this add-in. I added several keyboard shortcuts for my macros to make my life simpler. You must NOT REPLACE this file; instead, add its content to the existing one.

Now, when I type "qs", it automatically inserts "Q_SUCCEEDED()". The other shortcuts are: "qa" ("Q_ASSERT()"), "qi" ("Q_INVALID()"), "qf" ("Q_FAILED()"), "qe" ("Q_ERROR()"), "qx" ("Q_EXCEPTION(e)"), and "ql" ("Q_LOG( _T("") )"). This file is usually located under the following folder (you may also edit it from the Visual Assist options):

C:\Program Files\Visual Assist 6.0\Misc

Appendix A: References

  1. Debugging Applications by John Robbins
    An excellent book on debugging Win32 applications.
  2. Writing Solid Code by Steve Maguire
    This book should be studied in schools... schools on programming, I mean. This is my favorite.
  3. Code Complete by Steve McConnell
    This book is too academic but also recommended. I'd say it lacks a bit of fun.
  4. The Practice of Programming by Brian W. Kernighan and Rob Pike
    This book is a kind of old fashioned but still valuable.
  5. Programming Pearls by Jon Bentley
    This book speaks more about algorithms than about the style of coding, but there are a couple of interesting chapters.
  6. ATL/AUX Library by Andrew Nosenko
The macros in this library are very similar to mine; they just seem a bit cryptic to me (a question of taste). Also, there is no error log file, and the debug stuff is switched off in release builds.
  7. Extended Debug macros by Jungsul Lee
The macros there implement only the idea of having error processing in release builds. There is no log file, and I do not like the idea of breaking the program execution.
  8. A Treatise on Using Debug and Trace classes... by Marc Clifton
The author of this article is actually the one who inspired me to re-write this article and argue my position better. I was glad to learn from his experience, too.
  9. Considerations for implementing trace facilities in release builds... by Vagif Abilov
    Look at the title. A good one.
  10. MSDN: Assertions by Microsoft
    A short and complete reference of all the assertions available in VC++.

Appendix B: Drawbacks of C++ Standard Asserts

Personally, I do not like the standard assertions that come with VC++, such as the message box displayed by the ANSI assert() function. They suffer from many little but annoying defects. I'm sure that the framework I present here is also full of drawbacks (Appendix D lists only the missing features). Still, mine serves me better.

Drawback 1: Missing information.
The regular assert knows only the process name, the source file name, and the line number. I find it useful to also print the timestamp, the process id, the thread id, the last error code, and the error message itself. I use this information in conjunction with the trace log, which is formatted similarly. The most useful fields are the error message and the last error code.

Drawback 2: The pop-up dialog box.
Most of the standard ASSERT macros pop up a dialog box. When you are working on a project that does not have a UI, such as a Windows NT service or a COM out-of-process server, this dialog box might be missed by the user or might even hang the process (look at [Debugging Applications, page 60]). Also, I never saw a QA person who liked getting these dialog boxes, because she can do nothing except choose "Ignore." I prefer to have a single, well-known file with the error log.

Drawback 3: Redirecting stderr to a file.
The C run-time library allows redirecting assertions to a file or to the OutputDebugString API by calling the _CrtSetReportMode function (look at [Debugging Applications, page 61]). But none of these functions takes care of a common file location for all your programs. Nor do they truncate the file when it becomes too large. (I'm not quite honest here—it is possible to write a helper class that will do it.) Finally, if you develop a COM in-process object or a DLL, you are not free to redirect stderr because it belongs to the entire process (this is one of my problems with regular ASSERTs).

Drawback 4: Changing the state of the system.
All the Microsoft-supplied assertions suffer from one fatal flaw: They change the state of the system. This issue is discussed in detail in [Debugging Applications, page 62]; here I'll explain it in short. Many Win32 API functions use the SetLastError and GetLastError functions, and it is often necessary to test the last error code to react differently to different errors. The VC++ CRT macros do not preserve the last error code, while at the same time these macros themselves call other Win32 functions that might destroy the last error code by setting it to 0. (To be honest again, I have to say that I found this argument when I recently started re-reading this book while searching for information for the revised article.)
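The cure is simple: save the last error code before the reporting code runs and restore it afterwards. Here is a portable sketch of such a guard; MyGetLastError and MySetLastError stand in for the Win32 GetLastError and SetLastError, and the names are illustrative, not the actual QAFDebug identifiers:

```cpp
// Portable stand-ins for the Win32 thread-local last error code.
static unsigned long g_lastError = 0;
unsigned long MyGetLastError() { return g_lastError; }
void MySetLastError( unsigned long e ) { g_lastError = e; }

// RAII guard: the constructor captures the caller's last error
// code, and the destructor restores it, so whatever the logging
// code does in between is invisible to the caller.
class CLastErrorPreserver
{
    unsigned long m_saved;
public:
    CLastErrorPreserver() : m_saved( MyGetLastError() ) {}
    ~CLastErrorPreserver() { MySetLastError( m_saved ); }
};

void ReportError()    // stands in for the body of a debug macro
{
    CLastErrorPreserver preserve;
    // ... formatting and file APIs may clobber the error code ...
    MySetLastError( 0 );
}   // the destructor restores the original code here
```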

Drawback 5: Potential side effects.
Almost any coding style book mentions the danger of assertion side effects (look at [Writing Solid Code, page 38], [Code Complete, page 96], and [MSDN: Assertions]). If the expression you test in the ASSERT macro is complex and includes function calls, the expression may occasionally change data. The debug version of the program may accidentally become dependent on this, and the release build will fail because of the missing function call. While the golden rule is to never call functions in assertions, these things still happen. I believe that, by integrating the assertions into the actual algorithms, it is possible to eliminate this class of defects.
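A minimal, hypothetical illustration of such a side-effect bug and its fix (CloseHandleStub is an invented stand-in for a real API such as Win32 CloseHandle):

```cpp
// Pretend resource bookkeeping for the example.
static int g_openHandles = 1;

bool CloseHandleStub()    // stands in for Win32 CloseHandle
{
    --g_openHandles;      // the side effect: real work happens here
    return true;
}

// Dangerous: in a release build where ASSERT expands to nothing,
// CloseHandleStub() is never called and the handle leaks:
//     ASSERT( CloseHandleStub() );
//
// Safe: evaluate unconditionally, then test only the result:
bool CloseAndCheck()
{
    bool ok = CloseHandleStub();    // always executed
    // ASSERT( ok );                // only the *test* is removable
    return ok;
}
```

The Q_ macros avoid the problem differently: because they return the tested value, the expression stays part of the algorithm even when reporting is disabled.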

Appendix C: Version History

QAFDebug version 1.5.0.20

  • The article is completely re-written with better examples and argumentation.
  • Fixed a syntax error in the Q_EXCEPTION macro.
  • Fixed a memory allocation error in UNICODE builds (I allocated bytes instead of TCHARs).
  • Added preserving the last error code as in [Debugging Applications, page 62].
  • In RELEASE builds, I removed printing the expression—it should optimize the size of the executable a bit because the file name and line number are enough to locate the problem.

Appendix D: Missing Features

  • I'd like to report the call stack, at least in the DEBUG builds. The problem is that most of the examples are Windows NT-specific.
  • I'd like to be able to get the error reports in real time. Let's say I define a computer on the network that gathers all the error logs, sorts them, and forwards them to the responsible developers. This should make visible the silent defects that QA tends not to notice.
  • When the program is running under the debugger, I'd still like to be able to break into the debugger when an error occurs, but this should be a kind of switch (this is a feature I've lost from the standard ASSERT macro).

Downloads

Download demo project - 36 Kb
Download source - 12 Kb
Download documentation, coloring and AutoText - 55 Kb



Comments

  • ModAssert

    Posted by markvp on 05/01/2007 04:15pm

    For another interesting C++ assertion framework, check out ModAssert. It is described on http://modassert.sourceforge.net/

  • Disable Logfile

    Posted by Legacy on 05/28/2003 12:00am

    Originally posted by: Steffen Maucy

    At first I like to thanks you for sharing this greate tools.

    I have a question:
    There is a function for enabling logging, but no function for disable logging. What I like to do is enable/disable logging at runtime.


    Regards,

    Steffen

  • Design by contract?

    Posted by Legacy on 04/28/2003 12:00am

    Originally posted by: Per N.

    Nice work!

    However, there's one thing I find missing (prolly because it in other aspects is quite complete) and that's macros for contract checking, ie differ between indata, invariant and outstate/output checks.

    Example:
    int Foo::Bar(int n)
    {
    REQUIRE(n>0);

    int res = n * 321;

    ENSURE(res >= 321);
    return res;
    }

    Which could be regarded as a contract:
    "You must give me an int>0. If you do, I promise to return an int >=321"

    Typically:
    REQUIRE : Contract the client should satisfy. (could be a plain ASSERT)
    ENSURE : Contract the operating method/class will satify, given the REQUIRE was satisfied.(could be a plain ASSERT)
    INVARIANT: Tests of the class' state that should work at all times.(could be a something like ASSERT_VALID(this))


  • Thanks for sharing this - I look forward to trying it out

    Posted by Legacy on 04/25/2003 12:00am

    Originally posted by: Jens Winslow

    Thanks for sharing this - I look forward to trying it out

    Jens Winslow
