E: A bit about garbage collection


It’s hard to believe that Java could possibly be as fast or faster than C++.

In C++, creating objects on the stack is fast because when you enter a particular scope the stack pointer is moved down once to allocate storage for all the stack-based objects created in that scope, and when you leave the scope (after all the local destructors have been called) the stack pointer is moved up once. However, creating heap objects in C++ is typically much slower because it’s based on the C concept of a heap as a big pool of memory that (and this is essential) must be recycled. When you call delete in C++ the released memory leaves a hole in the heap, so when you call new, the storage allocation mechanism must go seeking to try to fit the storage for your object into any existing holes in the heap or else you’ll rapidly run out of heap storage. Searching for available pieces of memory is the reason that allocating heap storage has such a performance impact in C++, so it’s far faster to create stack-based objects.

To understand how a garbage-collected heap can still be fast, you need to get a better idea of the way the different garbage collector (GC) schemes work. A simple but slow GC technique is reference counting. This means that each object contains a reference counter, and every time a handle is attached to an object the reference count is increased. Every time a handle goes out of scope or is set to null, the reference count is decreased. Thus, managing reference counts is a small but constant overhead that happens throughout the lifetime of your program. The garbage collector moves through the entire list of objects and when it finds one with a reference count of zero it releases that storage. One drawback is that if objects circularly refer to each other they can have non-zero reference counts while still being garbage. Locating such self-referential groups requires significant extra work for the garbage collector. Reference counting is commonly used to explain one kind of garbage collection but it doesn’t seem to be used in any JVM implementations.
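As a rough illustration of the bookkeeping involved, here is a minimal sketch of reference counting written in Java. The RefCounted class, its attach( ) and detach( ) methods, and the printed messages are all hypothetical names invented for this example; as noted above, no JVM implementation appears to actually work this way.

// A minimal sketch of reference counting, using a hypothetical RefCounted
// class. This illustrates the idea only; it is not how any JVM works.
class RefCounted {
  private int refCount = 0;
  RefCounted other; // a handle this object holds to another object

  void attach() { refCount++; } // a handle is attached to this object
  void detach() {               // a handle goes out of scope or is set to null
    refCount--;
    if(refCount == 0)
      System.out.println("count is zero; storage could be released");
  }
  int count() { return refCount; }
}

public class RefCountDemo {
  public static void main(String[] args) {
    RefCounted a = new RefCounted();
    RefCounted b = new RefCounted();
    a.attach(); // simulates the local handle 'a'
    b.attach(); // simulates the local handle 'b'

    // Create a cycle: each object holds a handle to the other.
    a.other = b; b.attach();
    b.other = a; a.attach();

    // The local handles "go away"...
    a.detach();
    b.detach();

    // ...but the counts never reach zero, even though the pair is garbage.
    System.out.println("a: " + a.count() + ", b: " + b.count()); // prints 1, 1
  }
}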

In faster schemes, garbage collection is not based on reference counting. Instead, it is based on the idea that any non-dead object must ultimately be traceable back to a handle that lives either on the stack or in static storage. The chain might go through several layers of objects. Thus, if you start in the stack and the static storage area and walk through all the handles you’ll find all the live objects. For each handle that you find, you must trace into the object that it points to and then follow all the handles in that object, tracing into the objects they point to, etc., until you’ve moved through the entire web that originated with the handle on the stack or in static storage. Each object that you move through must still be alive. Note that there is no problem with detached self-referential groups – these are simply not found, and are therefore automatically garbage.
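To make the root-tracing idea concrete, here is a minimal sketch assuming a hypothetical Node class whose handles list stands in for the handles an object holds. A real JVM walks its own internal structures rather than Java-level objects like these, but the reachability logic is the same: anything not marked during the walk from the roots is garbage.

import java.util.*;

// A minimal sketch of tracing reachability from root handles. Node, 'marked',
// and 'handles' are hypothetical names used only for this illustration.
class Node {
  boolean marked = false;
  List<Node> handles = new ArrayList<>(); // handles this object holds
}

public class TraceDemo {
  // Follow every handle reachable from a root, marking each live object.
  static void mark(Node node) {
    if(node == null || node.marked) return;
    node.marked = true;
    for(Node next : node.handles)
      mark(next);
  }

  public static void main(String[] args) {
    Node root = new Node();  // stands in for a handle on the stack
    Node live = new Node();
    root.handles.add(live);

    // A detached, self-referential pair: never reachable from the roots.
    Node orphanA = new Node();
    Node orphanB = new Node();
    orphanA.handles.add(orphanB);
    orphanB.handles.add(orphanA);

    mark(root); // start from the stack/static-storage roots

    System.out.println("live marked:   " + live.marked);    // true
    System.out.println("orphan marked: " + orphanA.marked); // false -> garbage
  }
}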

As previously mentioned, in the JVM described here memory is allocated in big blocks. If you allocate a large object, it gets its own block. Strict stop-and-copy requires copying every live object from the source heap to a new heap before you can free the old one, which translates to lots of memory. With blocks, the GC can typically use dead blocks to copy objects to as it collects. Each block has a generation count to keep track of whether it’s alive. In the normal case, only the blocks created since the last GC are compacted; all other blocks get their generation count bumped if they have been referenced from somewhere. This handles the normal case of lots of short-lived temporary objects. Periodically, a full sweep is made – large objects are still not copied (just get their generation count bumped) and blocks containing small objects are copied and compacted. The JVM monitors the efficiency of GC and if it becomes a waste of time because all objects are long-lived then it switches to mark-and-sweep. Similarly, the JVM keeps track of how successful mark-and-sweep is, and if the heap starts to become fragmented it switches back to stop-and-copy. This is where the “adaptive” part comes in, so you end up with a mouthful: “adaptive generational stop-and-copy mark-and-sweep.”
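You can’t see this adaptive machinery directly from Java code, but you can watch its effects. The sketch below churns through many short-lived temporary objects – the case the generational scheme is tuned for – and reads the standard Runtime.totalMemory( ) and freeMemory( ) calls to show that heap usage doesn’t simply grow without bound; running it with the standard -verbose:gc flag shows the collections themselves. The pass counts and allocation sizes are arbitrary choices for this example.

// A small demo of the "lots of short-lived temporary objects" case. The
// adaptive scheme itself is internal to the JVM and not visible from Java
// code; you can only observe its effects, e.g. by running with -verbose:gc.
public class ShortLivedDemo {
  public static void main(String[] args) {
    Runtime rt = Runtime.getRuntime();
    long churned = 0;
    for(int pass = 0; pass < 5; pass++) {
      // Many temporaries that die immediately -- a generational collector
      // reclaims these cheaply from the recently created blocks.
      for(int i = 0; i < 1000000; i++) {
        byte[] temp = new byte[64];
        churned += temp.length;
      }
      long used = rt.totalMemory() - rt.freeMemory();
      System.out.println("pass " + pass + ": roughly " + used / 1024
        + "K in use after churning " + churned / 1024 + "K");
    }
  }
}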

There are a number of additional speedups possible in a JVM. An especially important one involves the operation of the loader and Just-In-Time (JIT) compiler. When a class must be loaded (typically, the first time you want to create an object of that class), the .class file is located and the byte codes for that class are brought into memory. At this point, one approach is to simply JIT all the code, but this has two drawbacks: it takes a little more time, which, compounded throughout the life of the program, can add up; and it increases the size of the executable (byte codes are significantly more compact than expanded JIT code) and this might cause paging, which definitely slows down a program. An alternative approach is lazy evaluation, which means that the code is not JIT compiled until necessary. Thus, code that never gets executed might never get JIT compiled.
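Lazy class loading, at least, is easy to observe from Java code. In the sketch below, the static initializer of the hypothetical RarelyUsed class runs only when the class is first used – typically the first time an object of that class is created. If that use never happened, the class’s byte codes would never need to be JIT compiled at all. Running with the standard -verbose:class flag shows the loading as it happens.

// A sketch of lazy class loading. RarelyUsed and LazyLoadDemo are names
// invented for this example; JIT compilation itself isn't visible from Java
// code, but class loading can be watched with the -verbose:class flag.
class RarelyUsed {
  static {
    System.out.println("RarelyUsed loaded and initialized");
  }
  void doWork() {
    System.out.println("doing work");
  }
}

public class LazyLoadDemo {
  public static void main(String[] args) {
    System.out.println("main() started; RarelyUsed not initialized yet");
    new RarelyUsed().doWork(); // first use: triggers loading/initialization
    // If this line were never reached, RarelyUsed might never be loaded,
    // and its byte codes would never be JIT compiled.
  }
}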


