Win32 Thread Synchronization, Part I: Overview

Introduction

In a multithreaded Win32 environment, it is necessary to synchronize the activities of threads that access common data to prevent memory corruption. Part 1 of this article gives a general explanation of processes and threads and describes a couple of thread synchronization techniques. Part 2 introduces thread synchronization helper classes, their implementation, and includes sample projects.

Multithreading Primer

To understand why synchronization is necessary, you first need a basic grasp of processes, threads, and thread scheduling. This is a basic overview, so more advanced readers may want to skip ahead to the SlowCopy Example section or jump directly to Part 2.

What is a process?

A process is a 4 GB virtual address space that contains an instance of an executable program or application. The process itself doesn't perform any work and can be thought of as a container created by the system to hold the program EXE, any required DLLs, and program memory. The system tracks a process via a process handle, and when the program exits, the system frees any resources associated with the process. If the process itself doesn't perform any work, how does any work get done? This is where threads come in. Each process must have at least one thread of execution (or 'thread'). Fortunately, the system is nice enough to create this for you when the program executes, so you don't need to do any extra work: when the system creates the process space, it also creates a single, primary thread for the program. In fact, it is this primary thread that the main or WinMain function executes in.

What is a thread?

A thread is a path of execution—this is where the actual work is done. If you create a simple console app, and step through the main function in a debugger, you are stepping through the primary thread of the application. Threads are cool; they can be stopped, paused, started, and new threads can be created.
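The sample projects create their threads with the Win32 CreateThread/_beginthreadex calls. Purely as an illustrative sketch of the idea, here is the same thing in portable standard C++, with std::thread playing the role of CreateThread and join the role of waiting on the thread handle (the names are invented for this sketch):

```cpp
#include <atomic>
#include <thread>

std::atomic<bool> g_workerRan(false);

// A second path of execution: this function runs on its own thread.
void Worker()
{
    g_workerRan = true;  // the "work" performed by the secondary thread
}

bool RunWorkerDemo()
{
    std::thread t(Worker);  // create and start the thread (cf. Win32 CreateThread)
    t.join();               // wait for it to finish (cf. WaitForSingleObject)
    return g_workerRan.load();
}
```

The primary thread keeps running while Worker executes; join blocks the primary thread until the secondary thread exits.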

Thread scheduling

The system is fair about which thread gets executed and runs threads in round-robin fashion. For example, say there are five applications running on the system, each with one thread. The system executes the thread in app1 for a bit, then moves on to app2 and executes its thread for a bit, and so on. That little bit is called a time slice. As a side note, on multiprocessor machines the round-robin method is still used, except the system doles out the workload to more than one processor. In other words, each thread still gets a time slice, but more than one thread can be executed simultaneously on different processors.

The algorithm Windows uses to determine thread scheduling is based on many factors, the details of which aren’t really important to writing correct multithreaded applications. What is important is that Windows remains in control of how long each thread gets executed. The application itself doesn’t have this control and this is a good thing; otherwise, a ‘piggy app’ will hog the processor time on a system.

Windows 3.1 used to behave this way. It relied on properly behaving apps to execute code in small chunks and then give control back to the operating system. Unfortunately, applications didn’t always behave properly, so poorly written apps would tie up the system.

Thread priorities

Thread scheduling is complicated slightly by the fact that threads can have different priorities. Without going into too much detail, the system will always execute threads with the highest priority before moving on to lower-priority threads. A complete explanation of thread priorities is beyond the scope of this article, so for now I am going to assume that all your threads have equal priority. By default, a thread is created with normal priority, and that will be sufficient for our needs; most threads are created with normal priority anyway.

Pre-emptive multitasking

On NT-based operating systems such as Windows 2000 and XP, the system uses pre-emptive multitasking, which keeps the system in control of thread scheduling. With pre-emptive multitasking, the system uses the round-robin scheme to give each thread its appropriate time slice. When a thread has used up its allotted time slice, the system puts that thread to sleep and moves on to giving the next thread a time slice. Under pre-emptive multitasking, it is much more difficult (although not impossible) for an errant application to take over the system.

Why care about thread scheduling?

As I said earlier, threads exist to perform work, and this work often includes reading and writing data in memory. Thread scheduling and synchronization become important when more than one thread needs to access the same memory (usually referred to as 'shared memory'). By the way, although I keep referring to shared memory, I really mean shared resources in general, whether that is a file, memory, or some other resource.

Because of the pre-emptive nature of the OS, you are not guaranteed that a write to a section of memory by one thread has fully completed before the OS interrupts that thread to run another one. The problem occurs when this second thread needs to read that memory, because the first thread may not have completed the write operation.
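As a hedged sketch of this hazard (in standard C++ rather than the raw Win32 calls, with all names invented here), consider a writer that updates two related fields as one logical operation. If the writer is pre-empted between the two stores, a reader can observe a half-updated, inconsistent pair; holding a lock on both sides (the role a Win32 critical section plays) rules that out:

```cpp
#include <mutex>
#include <thread>

std::mutex g_lock;          // plays the role of a Win32 CRITICAL_SECTION
int g_low = 0, g_high = 1;  // invariant: g_high == g_low + 1

// Writer: updates both fields as one logical operation, under the lock.
void WriterLocked(int iterations)
{
    for (int i = 0; i < iterations; ++i)
    {
        std::lock_guard<std::mutex> guard(g_lock);  // cf. EnterCriticalSection
        g_low = i;
        g_high = i + 1;  // without the lock, a reader could run between these stores
    }                    // cf. LeaveCriticalSection on scope exit
}

// Reader: returns true if it ever observes a torn (inconsistent) pair.
bool ReaderSawTornPair(int iterations)
{
    bool torn = false;
    for (int i = 0; i < iterations; ++i)
    {
        std::lock_guard<std::mutex> guard(g_lock);
        if (g_high != g_low + 1)
            torn = true;
    }
    return torn;
}

bool RunDemo()
{
    std::thread writer(WriterLocked, 10000);
    bool torn = ReaderSawTornPair(10000);  // reader runs on the primary thread
    writer.join();
    return torn;  // with the lock held on both sides, always false
}
```

Remove the two lock_guard lines and the invariant can be observed broken mid-update; that is exactly the corruption the SlowCopy example below demonstrates with a shared string.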

Article Source Code (Parts I & II)

The complete source code for Parts 1 and 2 is included in the ThreadSync.Zip file located in the link at the bottom of this article. The ThreadSync.sln consists of the following projects:

  • SlowCopy: This console example illustrates sharing memory between threads with 1) no synchronization (Part I); 2) native synchronization using a critical section (Part I); and 3) synchronization using the helper classes (Part II).
  • LogSend/LogRcv (Part I): These two applications illustrate using the helper classes to protect a std::queue shared between threads, and using a mutex to protect resources shared between multiple processes.
  • OnlyOne (Part II): This MFC application uses a mutex to limit itself to a single instance. In addition, the project uses a memory-mapped file to share the first instance's hWnd; the second instance uses this hWnd to bring the first instance into the foreground before exiting.

SlowCopy Example

To illustrate what can go wrong when sharing data between threads, you need an example that creates a couple of threads and performs some operation on shared data that will exhibit memory corruption.

Enter the SlowCopy project in the ThreadSync solution. In the SlowCopy project, you create two threads, T1 and T2, that share a string. T2's job is to sit in a loop and display the string, whereas T1's job is to copy data into the string (while T2 is displaying it).

SlowCopy Structure and Classes

The SlowCopy project is actually multiple projects in one. Rather than having separate projects (one that copies a string without synchronization, one that copies with native critical-section synchronization, and one that copies using the helper classes), I've created a single project that uses three different classes derived from a common base class. To run the project without synchronization, or with one of the synchronization methods, uncomment the appropriate class in the program's main function.

Table 1: SlowCopy Classes

  • CSlowCopy (base): Base class that handles creation of the secondary display thread. Also declares the two virtual functions used to perform the string copy and display the string.
  • CSlowCopyNoSync (derived from CSlowCopy): Virtual methods perform the copy and display of the shared string without any synchronization.
  • CSlowCopyNativeCS (derived from CSlowCopy): Virtual methods perform the copy and display of the shared string using synchronization via the native Win32 critical section.
  • CSlowCopyAutoLockCS (derived from CSlowCopy): Virtual methods perform the copy and display of the shared string using synchronization via the helper classes. This portion of the example is covered in Part 2 of the article.
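As a rough, portable sketch of the pattern CSlowCopyNativeCS uses (std::mutex standing in for the Win32 CRITICAL_SECTION and its Enter/LeaveCriticalSection calls; the function names here are invented, not the article's actual source), both the copy side and the display side take the same lock before touching the shared string:

```cpp
#include <mutex>
#include <string>
#include <thread>

std::mutex g_cs;       // stands in for a CRITICAL_SECTION (InitializeCriticalSection)
std::string g_shared;  // the string shared between the copy and display threads

// The "copy" side: write to the shared string only while holding the lock.
void CopyString(const std::string& src)
{
    std::lock_guard<std::mutex> guard(g_cs);  // cf. EnterCriticalSection(&m_cs)
    g_shared = src;
}                                             // cf. LeaveCriticalSection(&m_cs)

// The "display" side: take a snapshot of the string under the same lock.
std::string ReadString()
{
    std::lock_guard<std::mutex> guard(g_cs);
    return g_shared;  // the copy is made while the lock is held
}

std::string RunCopyDemo()
{
    std::thread t1(CopyString, std::string("hello from T1"));  // T1 copies
    t1.join();
    return ReadString();  // T2's display loop would call this repeatedly
}
```

Because both threads acquire the same lock, the display thread can never observe the string mid-copy.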

Why use a base class in SlowCopy?

One may ask: why use C++ inheritance and polymorphism in such a simple example? At first, this might seem to add unnecessary complexity. However, because the goal is to take the reader through the synchronization levels, from non-synchronized data sharing, to synchronization using native Win32, to synchronization using the helper classes, it makes sense to pull all the thread creation and other common data and methods into a base class. I feel this is ultimately clearer because the reader only has to look at the changes to the two virtual methods in each derived class to understand each synchronization level.
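Purely as an illustration of that shape (the class names mirror Table 1, but the bodies here are invented for this sketch and are not the article's actual source), the hierarchy might look like:

```cpp
#include <string>

// Sketch only: the real CSlowCopy also creates the secondary display thread.
class CSlowCopy
{
public:
    virtual ~CSlowCopy() {}
    // Each derived class overrides these two methods to add (or omit) locking.
    virtual void CopyString(const std::string& src) = 0;
    virtual std::string DisplayString() const = 0;
protected:
    std::string m_shared;  // the string shared between the two threads
};

// No synchronization: both methods touch m_shared with no lock at all.
class CSlowCopyNoSync : public CSlowCopy
{
public:
    void CopyString(const std::string& src) override { m_shared = src; }
    std::string DisplayString() const override { return m_shared; }
};
```

The synchronized variants differ only in those two overridden methods, which is exactly why the common machinery lives in the base class.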
