Since the dawn of third-generation computers, legions of successful multi-threading applications have run inside containers. Because there is no way to fully control a thread of execution at the operating-system level, knowledgeable developers have built containers to house these elusive threads at the application level.
This article introduces Open Source Tymeac™, a container for managing J2SE application threads.
It is often unclear why we should house application threads inside a container. After all, anyone can create a thread anywhere, so what's the point? That is exactly the enigma. Once you understand the problem, the solution makes perfect sense. Therefore, we start at the beginning.
The problem arises
In the beginning (of third-generation computers) came the almighty IBM® System/360® series of computers. These audacious machines could keep multiple computer programs (called tasks or processes) memory-resident and switch execution between them.
An early innovation was the idea that a task itself could be divided into sub-tasks so procedures within a task could run independently. The thinking here was that if each part, task or sub-task, was defined to the operating system as an executable entity, then the operating system could easily switch between those entities.
Thus was born the ability of software developers to create their own sub-tasks, child processes, lightweight processes or as they are now commonly called, threads of execution. Just as Dr. Frankenstein was delighted with his aggregate creation, software developers then were enthralled with all the new possible compositions.
Creating a sub-task was burdensome. It required writing the program, or at least the sub-program, in assembler language using the ATTACH macro, and eventually the program executing this macro required "authorization."
The rationale for making it difficult to create sub-tasks was:
- There is no facility to control these sub-tasks. Each main task must control its own sub-tasks.
- What if a sub-task hangs in a never-ending loop?
- What if a sub-task abnormally terminates?
- What if the sub-task create/destroy overhead bogs down the overall processing?
- What if a sub-task needs timing?
- What if a sub-task needs canceling?
- What is the status of a sub-task?
- How to detect and recover from dead/live locks?
- How to tune this sub-tasking environment?
- How can we inquire about the overall health of the environment?
- How may the sub-tasking environment quiesce and shut down gracefully?
- Since sub-tasks share the execution context (address space, I/O buffers, save areas, handles) with the main task, a misbehaving sub-task can irreparably damage an application—just as one bad apple can ruin the entire barrel.
- Declaring too many of these sub-tasks can easily impact other tasks in other address spaces. This is often called not playing nice with others in the box.
- When the number of tasks/sub-tasks exceeds the number of CPUs, the dispatcher's list of active tasks can become excessively long, making other tasks wait for CPU cycles.
- The overhead of managing that list grows rapidly with its length.
- Sub-tasks eat memory ravenously leaving less for other tasks.
- When starting a sub-task, what you are really doing is starting a background process.
- Think of a background process as something taking place in another room of your house. You're sitting in the den and the new sub-task is working in the basement. What is it doing down there? Is it still alive? What happened to the last request I asked it to work on?
- Think of a background process as a child [process]. Would you want young children running around without direct supervision, or would you favor the bounds of a playpen?
The list goes on and on, but the main point is control. In a multi-tasking/threading application, it is critical to be able to control both the main task/process and the sub-tasks/threads.
There are countless products from vendors to monitor and control the main task/process inside its container (address space).
There are no products that can know the purpose of a sub-task/thread just by looking at its properties. There is no way to kill a sub-task/thread without endangering the execution context and/or risking inconsistent states in shared objects. Because there are no definitive means for controlling sub-tasks/threads, the main task itself must containerize and control its own sub-tasks/threads.
The container as a solution
The first successful application multi-tasking container was the CICS® transaction processor. CICS® used pseudo-tasks to let application software developers multi-task their applications in a professional framework. Others have followed IBM®'s lead (Encina®, Tuxedo®) but none have been so popular.
POSIX threads are a little easier to create than the mainframe model. All you need is #include <pthread.h> and a call to pthread_create(). No need for assembler routines or "authorization."
POSIX threads also offer the same opportunity for threads to get into as much devilry as the mainframe sub-tasks. Therefore, the only effective way to control these threads is inside a container such as the Tuxedo® Application Server.
Java threads are the easiest to create: either define a class that extends Thread or define a class that implements the Runnable interface.
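A minimal sketch of both approaches (the Worker class and the shared counter are hypothetical names, added here only to show that each thread ran):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadCreation {
    // Shared counter so we can observe that both threads actually ran.
    static final AtomicInteger completed = new AtomicInteger();

    // Option 1: extend Thread and override run().
    static class Worker extends Thread {
        @Override
        public void run() {
            completed.incrementAndGet();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Worker();                           // extends Thread
        Thread t2 = new Thread(completed::incrementAndGet); // implements Runnable (method reference)
        t1.start();
        t2.start();
        t1.join();   // wait for both threads to finish
        t2.join();
        System.out.println("threads completed: " + completed.get()); // prints 2
    }
}
```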
Without access to memory, the stack or other computer internals, there are few methods in Java to control thread functionality. Naturally, Java also provides the same potential for threads to get into as much mischief as the mainframe sub-tasks or C language threads. Therefore the only effective way to control these threads is inside a container.
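The few controls that do exist are merely cooperative. Thread.interrupt(), for instance, only sets a flag that the target thread must volunteer to check; a sketch (the worker and its polling loop are hypothetical):

```java
public class InterruptDemo {

    // Start a cooperative worker, interrupt it, and report whether it exited.
    static boolean interruptAndJoin() throws InterruptedException {
        Thread worker = new Thread(() -> {
            // Cooperative loop: it stops only because it polls the interrupted flag.
            while (!Thread.currentThread().isInterrupted()) {
                // pretend to work
            }
        });
        worker.start();
        Thread.sleep(50);      // let the worker run briefly
        worker.interrupt();    // a request, not a command: merely sets a flag
        worker.join(2000);     // the worker exits because it chooses to cooperate
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worker exited after interrupt: " + interruptAndJoin()); // prints true
    }
}
```

Had the worker never checked its interrupted status, interrupt() would have accomplished nothing, which is exactly the control problem a container must solve.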
Two of the most popular multi-threading J2EE containers are for Enterprise Java Beans and Servlets.
- The EJB containers are called Application Servers (GlassFish®, JBoss®, WebLogic®, WebSphere®)
- The Servlets run under a Servlet Container (Jetty®, Apache Tomcat®)
Multi-threading in J2SE generally comes in two flavors:
- Plain vanilla. (Such as those used for listeners or for message writing.)
- New York double Dutch extra fudge chocolate. (Such as those used in application thread pools.)
For many years it was evident that standard-edition Java threads were mostly for the plain vanilla, simple tasks. Creating a thread was so easy anybody could do it. Controlling a simple thread was easy too; basically, there was no need to do anything.
After the wonderful folks at the JCP JSR-166 Expert Group published the Concurrency API, unseasoned application developers started building complex, server-side threading environments with Futures and Thread Pools. Very soon thereafter and very much like the early developers before them, many of those developers found their compositions were the equivalent of Frankenstein monsters. Why?
Go back and take a look at what can happen with uncontrolled sub-tasks/threads.
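One of those seemingly innocent compositions might look like this; a minimal sketch using the Concurrency API (the class and its sum-of-squares task are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolSketch {

    // Submit 'count' tasks to a fixed thread pool and sum their results.
    static int sumOfSquares(int count) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<Integer>> futures = new ArrayList<>();
            for (int i = 0; i < count; i++) {
                final int n = i;
                futures.add(pool.submit(() -> n * n));  // each task returns n squared
            }
            int sum = 0;
            for (Future<Integer> f : futures) {
                sum += f.get();                         // get() blocks until the task completes
            }
            return sum;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("sum of squares 0..9 = " + sumOfSquares(10)); // prints 285
    }
}
```

The sketch works beautifully until one task hangs or dies, at which point every question from the sub-task list above reappears, with no answer in the API itself.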
J2SE threads have two major issues: concurrency and control. There are two similar issues with juggling balls:
- When the balls are in the air, the balls may try to occupy the same space at the same time. A concurrency issue.
- When launching and catching the balls one needs to tightly coordinate the throw/catch so a hand is free when a ball needs catching. A control issue.
Failure to address both these issues means the endeavor will fail sooner or later.
The Concurrency API comprises three packages:
- java.util.concurrent
- java.util.concurrent.atomic
- java.util.concurrent.locks
The atomic and locks packages have to do with concurrency. The brilliant scientists at the JCP JSR-166 Expert Group have made a blue-ribbon contribution to solving this issue; we are everlastingly grateful, and concurrency needs little further discussion.
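Still, a glimpse of those two packages at work shows why; a minimal sketch with a hypothetical pair of counters, one lock-free and one lock-guarded:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;

public class ConcurrencyPrimitives {
    static final AtomicLong atomicCount = new AtomicLong();  // java.util.concurrent.atomic
    static final ReentrantLock lock = new ReentrantLock();   // java.util.concurrent.locks
    static long lockedCount = 0;

    // Two threads each add 'n' to both counters; both totals come out exact.
    static long[] countWithTwoThreads(int n) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < n; i++) {
                atomicCount.incrementAndGet();  // lock-free atomic update
                lock.lock();                    // explicit lock guarding a plain long
                try {
                    lockedCount++;
                } finally {
                    lock.unlock();
                }
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        return new long[] { atomicCount.get(), lockedCount };
    }

    public static void main(String[] args) throws InterruptedException {
        long[] totals = countWithTwoThreads(10_000);
        System.out.println(totals[0] + " " + totals[1]); // prints 20000 20000
    }
}
```

Without either primitive, a plain `count++` from two threads would lose updates: the balls colliding in mid-air.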
The basic java.util.concurrent package's treatment of threads (Executors, Futures, thread pools, and others) is a superior vision and represents many years of effort by exceptional computer scientists, but it inadequately addresses the control issue. Since we know things can go wrong with threads (those pesky problems again), and there is no way on this great, green planet an API alone can control a multi-threading environment, J2SE threads need a container.
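The control gap is easy to demonstrate. Future.cancel(true) delivers an interrupt, but a task that never checks its interrupted flag simply keeps running, and the API has no stronger remedy; a sketch (the runaway task and spin counter are hypothetical):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicLong;

public class ControlGap {
    static final AtomicLong spins = new AtomicLong();

    // Returns true if the task is still running after Future.cancel(true).
    static boolean cancelIsIgnored() throws InterruptedException {
        // Daemon threads so the runaway task cannot keep the JVM alive afterward.
        ExecutorService pool = Executors.newSingleThreadExecutor(r -> {
            Thread t = new Thread(r);
            t.setDaemon(true);
            return t;
        });

        // A misbehaving task: it spins forever and never checks its interrupted flag.
        Future<?> runaway = pool.submit(() -> {
            while (true) {
                spins.incrementAndGet();
            }
        });

        Thread.sleep(100);            // let it get going
        runaway.cancel(true);         // delivers an interrupt; the task ignores it
        long before = spins.get();
        Thread.sleep(100);
        boolean stillRunning = spins.get() > before;

        pool.shutdownNow();           // interrupts again; still ignored
        return stillRunning;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("task still running after cancel: " + cancelIsIgnored()); // prints true
    }
}
```

Nothing in the API can reclaim that thread; only an application-level container that owns and supervises its threads can prevent such a task from being submitted unsupervised in the first place.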
Now that you understand the problem, it's time to meet the solution.