Fundamentals of Microsoft .NET Programming: Multiprocessing

  • 10/15/2011

Problems with Parallelism

At a high level, running threads in parallel is easy to understand. When you look closely at specific tasks, however, you can encounter several problems. Some of these include contention for resources, races, and deadlocks.

Contention for Resources

Sometimes multiple threads need to use the same resources. Consider again the stock calculator example. Suppose the program starts 10 threads to perform calculations for 10 stocks. The first task that each thread must perform is using the Internet to get its stock’s price data. If your network bandwidth is limited, this will be a big bottleneck as each thread demands access to the network. Even if your network has plenty of bandwidth, the website that you access to get the stock prices needs to process all the requests and, if it’s a slow website, that may cause a bottleneck.

Similarly, multiple threads may need to access the same disk drive, CD or DVD drive, or other limited resource, and performance can be limited as a result. It’s bad enough that this sort of contention can limit performance, but it can also cause incorrect behavior. The most common example of this kind of error is called a race condition.

Race Conditions

A race condition occurs when the result of a calculation depends on the exact sequence or timing of execution in multiple threads.

For example, suppose you want to compute the total of 2 million numbers. You could loop through the numbers and add them up one at a time, but you want to save time with multithreading, so you break the task into two pieces and solve each piece in a separate thread. The first thread adds the first million numbers to a value called total and the second thread adds the second million numbers to total. The basic algorithm for each thread looks like this:

For i = start To finish
    Get total
    Calculate result = total + value[i]
    Save result In total

This code enters a loop where the looping variable i starts at the value start and runs to the value finish. In other words, it takes the values start, start + 1, start + 2, …, finish.

The values start and finish represent the indices of the values that a thread should process. In this example, the first thread’s values for start and finish would be 1 and 1,000,000, and the values for the second thread would be 1,000,001 and 2,000,000. The two threads run exactly the same code; only the values of start and finish differ between the two threads.

Inside the loop, each thread reads the current value of the total variable, adds the value pointed to by the current value of i to total, and saves the new result in total.
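A runnable sketch of this two-thread approach might look like the following. It is shown in Python purely for illustration (the book’s pseudocode is language-neutral), the variable and function names are mine, and it uses 2,000 values instead of 2 million so it finishes quickly:

```python
import threading

# 2,000 values stand in for the 2 million in the text.
values = list(range(1, 2001))
total = 0

def add_range(start, finish):
    """Add values[start..finish] (1-based, inclusive) into the shared total."""
    global total
    for i in range(start, finish + 1):
        # Get total, calculate result, save result -- three separate steps,
        # which is exactly what makes the race condition possible.
        result = total + values[i - 1]
        total = result

# The first thread sums the first half; the second thread sums the rest.
thread1 = threading.Thread(target=add_range, args=(1, 1000))
thread2 = threading.Thread(target=add_range, args=(1001, 2000))
thread1.start()
thread2.start()
thread1.join()
thread2.join()
```

Because the two threads update total without any coordination, the final value is not guaranteed to equal the correct sum of 2,001,000; some updates may be lost, as the interleaving below shows.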

If you’re running a single thread to process all the values, this code works perfectly. However, if you use two threads running at the same time, they can enter a race condition. Consider this sequence of events as the two threads execute inside their loops.

Thread 1:    Get total
Thread 2:    Get total
Thread 1:    Calculate result = total + value[i]
Thread 1:    Save result In total
Thread 2:    Calculate result = total + value[i]
Thread 2:    Save result In total

In this case, both threads start by reading the value total. Because thread 2 does this right after thread 1 does, both threads get the same value.

Next, thread 1 adds a value to total and saves the result back in the value total. Then thread 2 does the same. Because thread 2 still has the original value for total, it overwrites the new value saved by thread 1.

For a concrete example, suppose total starts with the value 100 and the two threads are adding the values 20 and 30, respectively. Both start by reading the value 100. Thread 1 then adds 20 and saves the result 120 in the value total. Next, thread 2 adds 30 to the value that it originally read (100), gets the result 130, and saves it in the value total. Instead of holding the correct result 150, total now holds 130.
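This lost update can be replayed deterministically by performing each thread’s read and write steps by hand. The following Python sketch (variable names are mine) walks through the exact interleaving above:

```python
total = 100

# Both threads read total before either one writes it back.
thread1_read = total           # thread 1: Get total (sees 100)
thread2_read = total           # thread 2: Get total (also sees 100)

total = thread1_read + 20      # thread 1: saves 120 in total
total = thread2_read + 30      # thread 2: overwrites it with 130

print(total)                   # prints 130, not the correct 150
```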

One way to prevent a race condition is to use a lock on the critical section of code.

Locks

A lock guarantees that a thread has exclusive access to a piece of code, memory, or other item that it needs to prevent a race condition or other bug. In the previous example, a thread could use a lock to gain exclusive access to the variable total while performing its calculation. The new code looks like this:

For i = start To finish
    Lock total
        Get total
        Calculate result = total + value[i]
        Save result In total

Now, if two threads are running at the same time, one cannot read the value of total while the other has it locked, so it cannot interfere with the other thread. Instead, it waits until the lock is released, and then it locks the total value and performs its own calculation without interference.
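In Python, for example, a threading.Lock can play the role of the Lock statement in the pseudocode (a sketch with my own names, again using 2,000 values for brevity):

```python
import threading

values = list(range(1, 2001))   # 2,000 values stand in for 2 million
total = 0
total_lock = threading.Lock()   # guards every read-modify-write of total

def add_range(start, finish):
    global total
    for i in range(start, finish + 1):
        with total_lock:                  # Lock total; released automatically
            total = total + values[i - 1]

thread1 = threading.Thread(target=add_range, args=(1, 1000))
thread2 = threading.Thread(target=add_range, args=(1001, 2000))
thread1.start()
thread2.start()
thread1.join()
thread2.join()

print(total)    # always prints 2001000, the correct sum
```

With the lock in place, the read-add-save sequence is indivisible, so the result is always correct no matter how the threads interleave.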

Unfortunately, locks add considerable overhead to a program because coordinating multiple threads in this way slows them down. The more locks a program must acquire and release, the more slowly the program will execute. In this example, the threads must lock and unlock the value total once for each of the 2 million numbers that should be totaled.

In this particular program, the problem is even worse because each thread performs calculations only while it’s running code inside the lock; therefore, no two threads can be doing anything significant at the same time, which eliminates all the benefits of multithreading. The result is that this program really performs its calculations sequentially, one at a time, just spread across multiple threads in a complicated way using 2 million lock operations. All those locks guarantee that this program will be much slower than the original single-threaded program, which didn’t need any locks.

Locks solve one problem but sometimes cause another: deadlocks.

Deadlocks

A deadlock occurs when two threads are waiting for resources held by each other. For example, suppose thread 1 has resource A locked and is waiting for resource B, but thread 2 has resource B locked and is waiting for resource A. Neither thread can get the second lock it needs, so it cannot continue. Because of this, neither thread will release the lock that it already holds, so they’re both stuck.

In this example, the deadlock is simple and easy to avoid: make both threads lock resource A before locking resource B. Then, if thread 1 locks resource A first, thread 2 must wait for resource A before it can try to lock resource B, so thread 2 never holds one resource while waiting for the other.
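A sketch of this fix in Python (the resource and lock names are hypothetical): because every thread acquires lock A before lock B, a circular wait can never form.

```python
import threading

lock_a = threading.Lock()   # protects hypothetical resource A
lock_b = threading.Lock()   # protects hypothetical resource B

def worker():
    # Every thread follows the same lock order: A first, then B.
    # A thread that finds A already held waits here WITHOUT holding B,
    # so neither thread can end up waiting on a lock the other holds.
    with lock_a:
        with lock_b:
            pass  # use both resources here

thread1 = threading.Thread(target=worker)
thread2 = threading.Thread(target=worker)
thread1.start()
thread2.start()
thread1.join()   # both joins return; with a consistent lock order,
thread2.join()   # the threads cannot deadlock
```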

Detecting and breaking deadlocks in more general situations can be harder. If a program has many threads that need exclusive access to lots of resources in complex combinations, it can be difficult to prevent deadlocks.