Why Too Many Threads Hurts Performance, and What to do About It

Arch D. Robison, Intel Corporation

Too Many Threads

Threading is the current method of choice for extracting performance from multi-core chips. It might seem that if a little threading is good, then a lot must be better. In fact, having too many threads can bog down a program. This article discusses why and how task-based programming avoids the problem. The Intel® Threading Building Blocks (Intel® TBB) task scheduler serves as an example.

The impact of having too many threads comes in two ways. First, partitioning a fixed amount of work among too many threads gives each thread so little work that the overhead of starting and terminating threads swamps the useful work. Second, having too many threads running incurs overhead from the way they share finite hardware resources.

It is important to distinguish software threads from hardware threads. Software threads are the threads that programs create. Hardware threads are real physical resources. There may be one hardware thread per core on the chip, or more, as for example with Intel Hyper-Threading Technology.

When there are more software threads than hardware threads, the operating system typically resorts to round robin scheduling. Each software thread gets a short turn, called a time slice, to run on a hardware thread. When the time slice runs out, the scheduler suspends the thread and allows the next thread waiting its turn to run on the hardware thread.

Time slicing ensures that all software threads make some progress. Otherwise, some software threads might hog all the hardware threads and starve other software threads. However, fair distribution of hardware threads incurs overhead. There are several kinds of overhead, and it helps to know the culprits so you can spot them when they appear.

The most obvious overhead is saving a thread's register state when suspending it, and restoring that state when resuming it. You might be surprised how much state there is on modern processors. However, schedulers typically allocate time slices big enough that the save/restore overhead is insignificant, so this obvious overhead is in fact not much of a concern.

A more subtle but significant overhead of time slicing is saving and restoring a thread's cache state, which can be megabytes. Modern processors rely heavily on cache memory, which can be about 10 to 100 times faster than main memory. Accesses that hit in cache are not only much faster; they also consume no bandwidth from the memory bus. Caches are fast, but finite. When the cache is full, a processor must evict data from the cache to make room for new data. Typically, the choice for eviction is the least recently used data, which is typically data from an earlier time slice. Thus software threads tend to evict each other's data, and the cache fighting from too many threads can hurt performance.

A similar overhead, at a different level, is thrashing virtual memory. Most computers use virtual memory. Virtual memory resides on disk, and the frequently used portions are kept in real memory. Similar to caches, the least recently used data is evicted from memory to disk when necessary to make room. Each software thread requires virtual memory for its stack and private data structures. As with caches, time slicing causes threads to fight each other for real memory and thus hurts performance. In extreme cases, there can be so many threads that the program runs out of even virtual memory.

Another problem arises when a time slice expires for a thread holding a lock. All threads waiting for the lock must now wait for the holding thread to get another time slice and release the lock. The problem is even worse if the lock implementation is fair, meaning the lock is granted in first-come first-served order. If a waiting thread is suspended, then all threads waiting behind it are blocked from acquiring the lock. It's like having someone fall asleep in a check-out line. The more software threads there are without hardware threads to run them, the more likely this will become a problem.

Organizing Threads

A good solution is to limit the number of runnable threads to the number of hardware threads, and possibly limit it to the number of outer-level caches if cache contention is a problem. Because target platforms vary in the number of hardware threads, avoid hard-coding your program to a fixed number of threads. Let your program's degree of threading adapt to the hardware.

Runnable threads, not blocked threads, cause time-slicing overhead. When a thread blocks on an external event, such as a mouse click or disk I/O request, the operating system takes it off the round-robin schedule, so the thread no longer incurs time-slicing overhead. A program may have many more software threads than hardware threads, and still run efficiently if most of those software threads are blocked.

A helpful organizing principle is to separate compute threads from I/O threads. Compute threads should be the threads that are runnable most of the time, and ideally never block on external events. The number of compute threads should match the processor resources. The I/O threads are threads that wait on external events most of the time, and thus do not contribute to having too many threads.
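A minimal sketch of that separation, assuming a hand-rolled blocking queue between the I/O side and a small pool of compute threads (the class and names here are illustrative, not part of Intel® TBB): compute threads block on the queue only when there is no work, and a blocked thread is off the round-robin schedule, so it costs no time slices.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Illustrative blocking queue: an I/O thread pushes work items as
// external events arrive; compute threads pop them. A thread blocked
// in pop() consumes no time slices.
class work_queue {
    std::queue<int> q;
    std::mutex m;
    std::condition_variable cv;
    bool closed = false;
public:
    void push(int item) {
        { std::lock_guard<std::mutex> g(m); q.push(item); }
        cv.notify_one();
    }
    void close() {                        // no more work will arrive
        { std::lock_guard<std::mutex> g(m); closed = true; }
        cv.notify_all();
    }
    bool pop(int& item) {                 // blocks until work or closed
        std::unique_lock<std::mutex> g(m);
        cv.wait(g, [&]{ return !q.empty() || closed; });
        if (q.empty()) return false;      // closed and drained
        item = q.front(); q.pop();
        return true;
    }
};
```

In use, the number of compute threads pulling from the queue would match the processor resources, while the single I/O thread spends most of its life blocked waiting for the next event.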

Task-Based Programming

Because the most efficient number of compute threads depends upon the particular hardware, programming in terms of threads can be a poor way to do multithreaded programming. It is often better to formulate your program in terms of logical tasks, not threads, and let a task scheduler take care of mapping the tasks onto threads. The rest of this article will use Intel® TBB tasks as an example.

The key advantage of tasks versus logical threads is that tasks are much lighter weight than logical threads. On Linux, starting and terminating an Intel® TBB task is about 18 times faster than starting and terminating a thread. On Windows, the ratio is more than 100. This is because a thread has its own copy of many resources, such as register state and a stack. On Linux, a thread even has its own process id. A task, in contrast, is typically a small routine that cannot be preempted at the task level. It can be preempted only by preempting the software thread running it.

Another improvement is unfair scheduling. As mentioned earlier, thread schedulers typically distribute time slices fairly because it is the safest strategy without understanding the higher-level organization of a program. In task-based programming, the task scheduler does have some higher-level information, and so can sacrifice fairness for efficiency. Indeed, it often goes to the extreme of not even starting tasks until they can make useful progress, in order to reduce memory consumption.

The scheduler does load balancing; that is, spreading the work across threads so that they are kept busy. Good load balancing can be tricky, because subtle cache, paging, and interrupt effects may cause some threads to finish earlier than others, even when apparently equal pieces of work were handed out. In task-based programming, you break your program into many small tasks, and let the scheduler issue tasks to threads to keep them busy.

The big win from using tasks instead of threads is easier programming. Thread-based programming forces you to think at the low level of hardware threads to get good efficiency, because you need one runnable software thread per hardware thread, not too few or too many. You also have to deal with the relatively coarse grain of threads. With tasks, you can concentrate on the logical dependences between tasks, and leave the efficient scheduling to the scheduler.

Example: Summing A Tree

We'll use summing values over a tree as an example, because it involves a common recursive pattern that demonstrates the fundamentals of a task library. If you are not a fan of recursion, do not despair. Intel® TBB has high-level algorithm templates that hide the recursion and let you take an iterative view. For example, the library template parallel_for does parallel iteration, and the template parallel_reduce does reductions like summation. Both of these work over generic iteration spaces. This article, however, looks "under the hood" at the task scheduler that powers these templates, because understanding the task scheduler lets you deal with problems beyond the algorithm templates, or even write your own algorithm templates.

Listing 1 shows serial code for recursively summing over a tree. Field node_count is unused by the serial version, but is declared because the parallel version needs it. Listing 2 shows the parallel code. It is relatively large compared to serial_sum_tree because it expresses parallelism without the help of any linguistic extensions to standard C++. Not depending on language extensions simplifies integration into existing production environments.

The top-level routine parallel_sum_tree in Listing 2 performs three actions:

  1. Allocate space for the task that will process the root, using an overloaded new operator and method task::allocate_root, both provided by the library. Task objects must be allocated by overloaded new operators provided by the library so that the space can be rapidly recycled when the task completes.
  2. Construct the task using constructor sum_task(root,&sum). When the task is run in step 3, it will store the sum of the (sub)tree rooted at root into *sum.
  3. Start and run the task to completion by invoking task::spawn_root_and_wait.

The real work is inside class sum_task, which is derived from the base class task provided by Intel® TBB. Fields n and sum respectively hold the input value and pointer to the output. These are copies of the arguments passed to the constructor for sum_task. Method execute does the actual computation. It overrides a pure virtual method task::execute. The scheduler executes a task by invoking its execute method.

Method sum_task::execute() operates as follows:

  1. Check if the tree is so small that serial execution would be faster. If so, use serial_sum_tree from Listing 1.
  2. Otherwise, create a child task for each non-null subtree, using an inherited method allocate_child() and an overloaded operator new. Put each child on a list.
  3. Call set_ref_count to set the task's reference count to the number of children created, plus one for the wait in step 4. The task scheduler uses a very lightweight synchronization mechanism that atomically decrements this reference count as each child finishes; the extra count accounts for the wait itself.
  4. Call spawn_and_wait_for_all to spawn the child tasks and wait for them to complete.
  5. Store the final sum in *sum.
  6. Return, which implicitly causes the scheduler to destroy and deallocate the task object. In this example, the return value is NULL. In more sophisticated uses, it is a pointer to the next task to run.

Step 1, using a serial algorithm for small problems, is a common technique in parallel programming. Even though tasks are lighter weight than threads, they still have some overhead compared to plain function calls, so for small problems the serial function is faster. Finding the ideal threshold for serial execution usually requires some experimentation. A lower threshold creates more tasks, and thus more potential parallelism, but making the tasks too small incurs excessive task-management overhead. In a program that is going to generate far more tasks than there are threads, it does not hurt to set the threshold somewhat too high, because there will still be enough potential parallelism to keep all hardware threads busy.

Work Stealing

At first glance, the parallelism in Listing 2 might appear to be limited, because the task creates at most two child tasks. The trick here is recursive parallelism. The child tasks each create more child tasks, and so on, until small subtrees are reached. If each task creates two child tasks, then the Nth level of recursion creates 2^N child tasks. That offers plenty of potential parallelism to go around.

The trick is efficiently using the potential parallelism. A poorly structured task pool can be a performance killer. For starters, the pool can become a centralized source of contention. Furthermore, the pool's structure can strongly affect performance. Let's look at two extremes to see the effects.

One extreme is making the pool a first-in first-out queue, which maximizes parallelism, because execution will tend to traverse the tree breadth-first as shown in Figure 1. As execution walks each level of the tree, it doubles the number of available tasks. The drawback is that it can thrash cache or virtual memory, because at some point, there simultaneously exists a task for every node in the tree! It's self-defeating overkill, because we need only enough parallelism to keep the hardware threads busy.

The other extreme is to make the pool a last-in first-out stack. Then execution will tend to traverse the tree depth-first as shown in Figure 2. Now the space is proportional to the depth of the tree. Furthermore, the cache behavior on a single thread is usually quite good, because the child is typically working on data that was already pulled into cache by the parent. The drawback is that the parallelism is minimized. Worse yet, multiple threads tend to get in each other's way because each will grab tasks most recently created by other threads, causing cache traffic between the threads.

To summarize, there are three problems to solve here:

  • Avoid having a central bottleneck.
  • Create enough parallelism to keep threads busy.
  • Keep memory consumption reasonable.

The modern method of choice for solving all three problems is to give each thread its own task deque. Thus contention for a central pool is eliminated. Each thread treats its own deque as a last-in first-out stack. Doing so gains the space and cache efficiencies of depth-first traversal.

But what about parallelism? When a thread's own deque is empty, the thread steals work from another thread's deque. The choice of victim thread is random, so no centralized control is necessary. Furthermore, it treats the victim's deque as a queue; that is, it steals the oldest task. The net effect is that parallelism is automatically throttled to a level just high enough to keep hardware threads busy without causing excessive consumption of memory.

The strategy of "work depth-first; steal breadth-first" has further benefit: it reduces contention between threads. The oldest task tends to be the one closest to the root of the tree, and thus a thief grabs a relatively large piece of work. Better yet, the stolen work's data is typically farthest from the victim's data, and thus tends to generate less contention for memory and locks than if the thief had stolen the youngest task.
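The per-thread deque discipline above can be sketched as follows. This is an illustrative, mutex-based simplification with made-up names (the real Intel® TBB scheduler uses far more efficient techniques), but it shows the asymmetry: the owning thread works LIFO at one end, while thieves steal FIFO from the other.

```cpp
#include <deque>
#include <mutex>

// Sketch of one thread's task deque. The owner pushes and pops at
// the back (stack order, depth-first); a thief steals from the
// front (queue order, breadth-first), taking the oldest task.
template <typename Task>
class work_deque {
    std::deque<Task> d;
    std::mutex m;   // real work-stealing deques avoid locking the owner's fast path
public:
    void push(Task t) {                   // owner: newest work goes on the back
        std::lock_guard<std::mutex> g(m);
        d.push_back(t);
    }
    bool pop(Task& t) {                   // owner: take the newest task
        std::lock_guard<std::mutex> g(m);
        if (d.empty()) return false;
        t = d.back(); d.pop_back();
        return true;
    }
    bool steal(Task& t) {                 // thief: take the oldest task
        std::lock_guard<std::mutex> g(m);
        if (d.empty()) return false;
        t = d.front(); d.pop_front();
        return true;
    }
};
```

With recursive splitting, the oldest task in a deque represents the largest remaining subproblem, so a single steal hands the thief a big chunk of work.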

The strategy described above was developed by MIT's Cilk language project [2]. It applies not only to trees, but to any problem that can be recursively divided into subproblems. Intel® TBB enables you to use this strategy in standard ISO C++.

Conclusion

Having the right number of threads is essential to multi-core performance. Task-based programming lets you write a program in terms of logical tasks, and the task scheduler deals with choosing when and where to run those tasks. Recursive task patterns combined with work-stealing throttle parallelism to the right level. The Intel® Threading Building Blocks tutorial [1] delves deeper into task-based programming, and other features such as high-level algorithm templates and concurrent containers.

Further Reading

[1] Parts of this article were adapted with permission from the tutorial included in Intel® Threading Building Blocks.

[2] The Cilk home page is http://supertech.csail.mit.edu/cilk.

[3] Parts of this article were adapted with permission from Chapter 7 of Multi-Core Programming: Increasing Performance through Software Multi-threading by Shameem Akhter and Jason Roberts, Intel Press, 2006. http://www.intel.com/intelpress/sum_mcp.htm.

*Other names and brands may be claimed as the property of others.

Figure 1: Breadth-first maximizes available parallelism, but maximizes memory consumption.

Figure 2: Depth-first minimizes memory consumption, but minimizes available parallelism.

struct tree_node {
   tree_node* left;          // Pointer to left subtree
   tree_node* right;         // Pointer to right subtree
   unsigned node_count;      // Number of nodes in this subtree
   value value;              // Value associated with the node.
}; 

value serial_sum_tree( tree_node* root ) {
   value result = root->value;
   if( root->left )
      result += serial_sum_tree(root->left);
   if( root->right )
      result += serial_sum_tree(root->right);
   return result;
}

Listing 1: Serial code for summing values in a tree.

class sum_task: public tbb::task {
   value* const sum;
   tree_node* root;
public:
   sum_task( tree_node* root_, value* sum_ ) : root(root_),
      sum(sum_) {}
   task* execute() {
      if( root->node_count<1000 ) {
         // For small trees, use the serial code from Listing 1.
         *sum = serial_sum_tree(root);
      } else {
         value x, y;
         int count = 1;
         tbb::task_list list;
         if( root->left ) {
            ++count;
            list.push_back( *new( allocate_child() )
               sum_task(root->left,&x) );
         }
         if( root->right ) {
            ++count;
            list.push_back( *new( allocate_child() )
               sum_task(root->right,&y) );
         }
         // Argument to set_ref_count is one more than size of the
         // list, because spawn_and_wait_for_all expects an
         // augmented ref_count.
         set_ref_count(count);
         spawn_and_wait_for_all(list);
         *sum = root->value;
         if( root->left ) *sum += x;
         if( root->right ) *sum += y;
      }
      return NULL;
   }
};

value parallel_sum_tree( tree_node* root ) {
    value sum;
    sum_task& a = *new(tbb::task::allocate_root())
       sum_task(root,&sum);
    tbb::task::spawn_root_and_wait(a);
    return sum;
}

Listing 2: Parallel version of Listing 1, based on task-based programming.
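For readers who want to experiment without installing Intel® TBB, the same recursive shape, including the serial cutoff from step 1, can be sketched with standard C++11 std::async. This is only an illustration of the structure: std::async futures typically map to full threads, so it has none of TBB's task efficiency or work stealing. The type value is assumed here to be double, which Listing 1 leaves unspecified.

```cpp
#include <future>

struct tree_node {
    tree_node* left;          // Pointer to left subtree
    tree_node* right;         // Pointer to right subtree
    unsigned node_count;      // Number of nodes in this subtree
    double value;             // 'value' type assumed to be double
};

double serial_sum_tree(tree_node* root) {
    double result = root->value;
    if (root->left)  result += serial_sum_tree(root->left);
    if (root->right) result += serial_sum_tree(root->right);
    return result;
}

// Recursive parallel sum sketched with std::async instead of TBB tasks.
double parallel_sum_tree(tree_node* root) {
    if (root->node_count < 1000)          // small subtree: serial is faster
        return serial_sum_tree(root);
    std::future<double> l, r;
    if (root->left)
        l = std::async(std::launch::async, parallel_sum_tree, root->left);
    if (root->right)
        r = std::async(std::launch::async, parallel_sum_tree, root->right);
    double sum = root->value;
    if (root->left)  sum += l.get();
    if (root->right) sum += r.get();
    return sum;
}
```

Because each async call may create a thread, this version suffers exactly the oversubscription problems described earlier when the tree is large and the cutoff is low, which is the motivation for a task scheduler in the first place.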


