Saturday, November 4, 2017

Thread Safety

As I noted in yesterday's post, one of the major changes associated with the new "generic" callback structure in CPLEX is that users now bear the responsibility of making their callbacks thread-safe. As I also noted yesterday, this is pretty new stuff for me. So I'm going to try to share what I know about thread safety, but bear in mind that I don't know all that much (and don't know what I don't know). In a subsequent post, I'll share an updated example of Benders decomposition using the new callback structure.

What is a thread?


I'll refer you to the Wikipedia for a detailed definition. Basically, a thread is a chunk of code that is set up to be run asynchronously. Typically, the point of creating a thread is to allow the code to run in parallel with other parts of the parent program. The operating system swaps threads in and out of processor cores. Part of the reason for creating additional threads is to exploit the presence of multiple processor cores, but that's not the only reason. Consider a program designed to do some computationally intensive operation (such as solving an integer program) and assume the program has a graphical user interface (GUI). Chances are the GUI runs in a different thread from the computational portion of the program, even if it is running on a single-core processor. Otherwise, with everything in one thread, the GUI would become unresponsive for the entire time the program was crunching numbers. Among other things, that would make it impossible to use a GUI control or menu item to abort the computation if, say, you were jonesing to check your Twitter feed.
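
To make that a little more concrete, here is a minimal sketch (my own toy example, nothing to do with CPLEX) of launching a worker thread in Java so that the main thread stays free to do other things, such as keeping a GUI responsive.

```java
public class WorkerDemo {
    public static void main(String[] args) throws InterruptedException {
        // Run the "number crunching" on its own thread.
        Thread worker = new Thread(() -> {
            long sum = 0;
            for (long i = 0; i < 1_000_000_000L; i++) {
                sum += i;   // stand-in for solving an integer program
            }
            System.out.println("Worker finished: sum = " + sum);
        });
        worker.start();   // returns immediately; the loop runs asynchronously
        // The main thread is free here to, say, respond to the user.
        System.out.println("Main thread is still responsive.");
        worker.join();    // wait for the worker before exiting
    }
}
```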

Before continuing, I think it's worth noting three things here. The first is that, in an era of multi-core computer processors, multithreaded applications are increasingly common, and increasingly attractive. CPLEX defaults to using multiple threads when solving optimization problems, although you can control the number of threads used (including throttling it to a single thread) via a parameter setting. Second, the performance improvement from multithreading is sublinear. If you go from one thread to two, the reduction in execution time is less than 50%. If you go from one thread to four, you will likely not see anywhere near a 75% reduction in processing time. This is partly due to the added overhead required to set up and manage threads, and partly due to the fact that threads can get in each others' way, jostling for CPU time and blocking the progress of their siblings. Finally, making an application multithreaded may increase memory use, because you may need to make separate copies of some data for each thread.
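
For what it's worth, here is a minimal sketch of that parameter setting in the Java API (the parameter in question is IloCplex.Param.Threads, where 0, the default, lets CPLEX choose and 1 forces single-threaded operation).

```java
import ilog.concert.IloException;
import ilog.cplex.IloCplex;

public class ThreadParamDemo {
    public static void main(String[] args) throws IloException {
        IloCplex cplex = new IloCplex();
        // 0 (the default) lets CPLEX pick; 1 throttles CPLEX to a single thread.
        cplex.setParam(IloCplex.Param.Threads, 1);
        // ... build and solve the model here ...
        cplex.end();
    }
}
```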

What is thread safety?


Again, I'll refer you to a Wikipedia definition. The key concept, I think, is that you want to defend against a couple of dangers. The first is that threads might block each other. Deadlock can occur when one thread is waiting for another thread to do something but is simultaneously blocking the second thread from doing it (say, by hogging a resource). That's actually an oversimplification, in that more than two threads can contribute to a deadlock. Starvation occurs when some thread cannot access the resources it needs because other threads (several different threads, or one thread over and over) keep blocking it.
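
Here is a minimal sketch (again my own concoction, not CPLEX code) of how deadlock can arise in Java: each thread grabs one lock and then waits, forever, for the lock the other thread is holding.

```java
public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lockA) {
                pause();                   // give the other thread time to grab lockB
                synchronized (lockB) {     // waits for thread 2, which never lets go
                    System.out.println("Thread 1 got both locks");
                }
            }
        }).start();
        new Thread(() -> {
            synchronized (lockB) {
                pause();
                synchronized (lockA) {     // waits for thread 1 ... forever
                    System.out.println("Thread 2 got both locks");
                }
            }
        }).start();
        // With high probability neither message prints: the two threads deadlock.
    }

    private static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException e) { }
    }
}
```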

The second danger, which helps explain how things like starvation or deadlock come about, is that one thread writes shared data while another thread is reading or using it. For instance, consider Benders decomposition. (This is not a hypothetical example: you'll see it in the next post, if you stick around.) Assume, as is usually the case, a MIP master problem and a single LP subproblem, and assume we are using a callback to test proposed integer feasible solutions coming from the master problem. When the solver thinks it has a new incumbent for the master problem, it calls the callback. The callback uses the proposed incumbent to adjust some constraint limits in the subproblem, solves the subproblem, and based on the result either accepts the incumbent or cuts it off with a Benders cut.

Now suppose that two different threads, A and B, both get (different) proposed incumbents, with A starting to process its incumbent a little ahead of B. A modifies the LP subproblem and tries to solve it, but before A gets to the point of calling the solver, B (on a different core) starts modifying the subproblem. So when A calls the LP solver, the subproblem it is solving has a mix of some modifications A made and some modifications B made. At best, A ends up solving the wrong LP (and not knowing it). At worst, B is modifying the LP while A is trying to solve it. With CPLEX, at least, if this happens CPLEX throws an exception and the program likely grinds to a halt.
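
The Benders subproblem is too much to show here, but the underlying problem is easy to demonstrate with a hypothetical shared counter standing in for the shared subproblem: two threads doing unsynchronized read-modify-write operations on the same data will step on each other's updates.

```java
public class RaceDemo {
    private static int shared = 0;   // stands in for the shared subproblem

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                shared++;            // read-modify-write: not atomic
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start();
        b.start();
        a.join();
        b.join();
        // Should be 200000, but interleaved updates are typically lost.
        System.out.println("shared = " + shared);
    }
}
```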

How do we make code thread-safe?


Good question. I don't know all the methods, but there are two fundamental techniques that I do know. The first is locks. Basically, locks are semaphores (flags, signals, whatever) that tell the system not to let any other thread touch some part of memory until the thread owning the lock is done. You write your code so that the part that will run concurrently locks shared objects, does what it needs with them, and then unlocks them. On the one hand, it's important to lock everything that needs locking. Missing one item is like trying to burglar-proof your home but then leaving the back door wide open. On the other hand, it's important to lock only what has to be locked, and only for as long as it needs to be locked. Hanging on to a lock for too long can block other threads, and hurt performance.
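
In Java, one way to do this is with an explicit lock (java.util.concurrent.locks.ReentrantLock) or a synchronized block. Here is a minimal sketch; the "subproblem" object and the processIncumbent method are hypothetical stand-ins for whatever the threads actually share, and the point is that the lock is held only while the shared object is being touched and is always released.

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    private final Lock subproblemLock = new ReentrantLock();
    private final StringBuilder subproblem = new StringBuilder();  // hypothetical shared object

    public void processIncumbent(String incumbent) {
        subproblemLock.lock();       // no other thread can enter this section now
        try {
            subproblem.setLength(0);
            subproblem.append(incumbent);   // "modify the subproblem"
            // ... solve the modified subproblem, decide whether to add a cut ...
        } finally {
            subproblemLock.unlock(); // always release the lock, even on an exception
        }
    }
}
```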

The second technique is to avoid contention for data by giving each thread its own personal copy of the data. Thread-safety varies from language to language. In Java, each thread gets its own stack, containing local variables, and no thread can touch another thread's stack. So, for instance, if a thread starts a loop indexed by local variable i, there is no danger that i is touched by another thread. On the other hand, Java objects are parked in the heap, and are available to any thread that knows the addresses of the objects. So to avoid collisions between threads, you can either copy the original data to the stack for each thread and let the thread mess with its own copy, or (if the data is a Java object) create a clone of the original object for each thread. The clone will live in the heap, but only the thread for which it was created will know its address, so no other thread will screw with it.
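
Here is a minimal sketch of the copy-per-thread idea (with made-up data): each thread copies the shared original into a list that only it holds a reference to, and then freely modifies its own copy.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CopyPerThreadDemo {
    public static void main(String[] args) {
        // Shared original data; the threads only read it.
        List<Double> original = Arrays.asList(1.0, 2.0, 3.0);

        Runnable task = () -> {
            // Each thread gets its own private copy (a reference only it knows).
            List<Double> myCopy = new ArrayList<>(original);
            myCopy.replaceAll(x -> x * 2);   // safe: no other thread sees myCopy
            System.out.println(Thread.currentThread().getName() + ": " + myCopy);
        };

        new Thread(task).start();
        new Thread(task).start();
    }
}
```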

My modified Benders example (subject of a future post, after CPLEX 12.8 ships) will demonstrate both approaches.

If you are a Java programmer, and if multithreading is new to you, I recommend you look at Oracle's tutorial on concurrency in Java. It is written well, contains examples of things that can go wrong, and covers much if not all of what you need to know to handle thread safety while working with CPLEX generic callbacks.

2 comments:

  1. Another common reason that routines aren't thread safe is that they use global variables (which are shared data).

    Replies
    1. I think *reading* global variables is not a problem. You're right, though, that writing global variables is a crash waiting to happen.

