Transactions in the .NET 2.0 Framework

Most meaningful software operations are composed of multiple independent steps. Although a single method call, for instance, is often viewed as a logically distinct unit, this is in the eye of the caller. A single method might execute just one SQL UPDATE statement over 1,000 database rows that must be applied by the database atomically. Another method might execute a SQL UPDATE statement, manipulate state on a shared COM+ component, and initiate a web service message that results in some middle-tier processing, all of which must be treated as a single atomic operation. A failure in any one step must cause all steps that have already executed to be undone.

The former scenario is reasonably simple, handled primarily by the database itself, while the latter is trickier because it spans multiple distinct resources. System.Transactions supports both using the same programming model. In this article, we'll take a quick look at the basics of this technology, new to the .NET Framework 2.0.

Transaction Crash Course

In both cases above, some granularity of operation is assumed to be indivisible. If one in a series of steps that are part of that operation fails, we have a problem on our hands. The system — in this case spanning multiple physical resources — could become corrupt. This might leave important data structures (like data in a database or some COM+ components) in an invalid state, lose important user data, or even prevent applications from running altogether. Clearly this is a bad situation that must be avoided at all costs, especially for large-scale, complex, mission-critical systems.

The general solution to this problem is transactions. If we are careful to mark the beginning and end points for a set of indivisible steps, a transaction manager (TM) can ensure that any failure results in rolling back all intermediary steps to the previous valid system state that existed before the transaction began. And if the transaction manipulates more than one resource, each can be protected by its own resource manager (RM), which participates in the commit/rollback protocol of the TM. Such an RM knows how to perform deferred activities and/or compensation in a way that coordinates nicely with the TM, giving the programmer the semantics that he or she desires. This general infrastructure is depicted in Figure 1.

Figure 1
Figure 1: A single transaction manager (TM) with multiple resource managers (RMs).

The idea of transaction flow, state, commitment, and rollback is illustrated in Figure 2. In this example, a TM monitors in-between state changes and ensures that a transition is made to the End State or no transition is made at all (i.e., the system is restored to Begin State). Both states are consistent from the system and application point of view.

Figure 2
Figure 2: Transactional state management.

Take note of some key terminology. We say the transaction was committed if we successfully reach the End State. Otherwise, we say that the transaction was rolled back (a.k.a. aborted), meaning that all state manipulations have been undone and the system has been returned to the Begin State.

A TM does more than that. In addition to coordinating multiple transacted RMs, such as in-memory data structures, file systems, databases, and messaging endpoints, it can coordinate such activities with distributed entities. In other words, a distributed transaction may reliably span RMs on machines that are physically separate. TMs furthermore isolate inconsistencies occurring between the start and end of a transaction so that others accessing the same resources in parallel will not observe a surprising state containing broken invariants or partially committed values.

In fact, transactions guarantee four things, referred to as the ACID properties:

  • Atomicity: The effects of a set of operations are either visible immediately together or they fail together. In other words, they are indivisible. Given two operations, it is illegal for one to become visible before the other, or for one to fail but the other succeed. This alleviates the need for a programmer to manually compensate for a failure, for example by hand-coding the logic to put data back into a consistent state.
  • Consistency: The system ensures that transactions are applied in a manner that leaves the system in a consistent state, meaning that a committed transaction will not incur transactional failures post-commit. This is a close cousin to atomicity. Moreover, a transaction is guaranteed a consistent view of the system during its execution.
  • Isolation: Other system components that are simultaneously accessing resources protected by a transaction are not permitted to observe "in-between" states, where a transaction has made changes that have not yet been committed. Transacted resources deal with isolation in different manners; some choose to prevent access to resources enlisted in a transaction altogether — called pessimistic concurrency — while others allow access and detect conflicting reads and writes at commit time — called optimistic concurrency. The latter can result in better throughput if lots of readers are accessing a resource but can also result in a high conflict (and thus transaction abort) rate if there are lots of writers.
  • Durability: The results of an operation are persisted assuming the transaction has committed. This means that if a transaction manager agrees to commit, it guarantees the results will not be lost afterward. There are actually gradients of durability depending on the storage the resource lives in. A file system, for example, commits to a physical disk; a database commits to a physical transaction log; but a transacted object in memory likely doesn't write to anything but physical RAM.
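As a brief illustration of how isolation surfaces in the System.Transactions API (a sketch, using only APIs described later in this article), the TransactionScope constructor accepts a TransactionOptions value whose IsolationLevel property lets you request stronger or weaker isolation from enlisted RMs:

```csharp
using System;
using System.Transactions;

class IsolationExample
{
    static void Main()
    {
        // Request serializable isolation and a 30-second timeout for
        // the transaction this scope will create.
        TransactionOptions options = new TransactionOptions();
        options.IsolationLevel = IsolationLevel.Serializable;
        options.Timeout = TimeSpan.FromSeconds(30);

        using (TransactionScope tx = new TransactionScope(
            TransactionScopeOption.Required, options))
        {
            // Enlisted RMs (e.g., SQL Server) honor the requested level,
            // blocking or detecting conflicting access as appropriate.
            Console.WriteLine(Transaction.Current.IsolationLevel);
            tx.Complete();
        }
    }
}
```

How a given RM implements the requested level — pessimistically with locks or optimistically with conflict detection — remains up to the RM itself.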

With some of the basics of transactions under our belts, let's now take a look at the System.Transactions namespace introduced in version 2.0, physically located in the System.Transactions.dll assembly. This namespace provides a new unified programming model for working with transacted resources, regardless of their type or location. This encompasses integration with ADO.NET and web services, in addition to providing the ability to write a custom transaction manager.


Transactional Scopes

The first question that probably comes to mind when you think of using transactions is the programming model with which to declare the scope of a transaction over a specific block of code. And you'll probably wonder how to enlist specific resources into the transaction. In this section, we discuss the explicit transactions programming model. It's quite simple. This example shows a simple transactional block:

using (TransactionScope tx = new TransactionScope())
{
    // Work with transacted resources...
    tx.Complete();
}

With the System.Transactions programming model, manual enlistment is seldom necessary. Instead, transacted resources that participate with TMs will detect an ambient transaction (meaning, the current active transaction) and enlist automatically through the use of their own RM.
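The ambient transaction can also be observed directly through the Transaction.Current property. This minimal sketch (no RMs are enlisted, so nothing durable actually happens) shows that the ambient transaction exists only for the duration of the scope:

```csharp
using System;
using System.Transactions;

class AmbientExample
{
    static void Main()
    {
        // No scope yet, so there is no ambient transaction.
        Console.WriteLine(Transaction.Current == null);   // True

        using (TransactionScope tx = new TransactionScope())
        {
            // Inside the scope an ambient transaction is present; this is
            // where transacted resources look in order to auto-enlist.
            Console.WriteLine(Transaction.Current == null);   // False
            Console.WriteLine(
                Transaction.Current.TransactionInformation.Status);   // Active
            tx.Complete();
        }

        // The scope has been disposed; the ambient transaction is gone.
        Console.WriteLine(Transaction.Current == null);   // True
    }
}
```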

Once you've declared a transaction scope in your code, you, of course, need to know how commits and rollbacks are triggered. Before discussing mechanics, there are some basic concepts to understand. A transaction may contain multiple nested scopes. Each transaction has an abort bit, and each scope has two important state bits: consistent and done. These names are borrowed from COM+ transactions.

The abort bit may be set to indicate that the transaction cannot commit (i.e., it must be rolled back). The consistent bit indicates that the effects of a scope are safe to be committed by the TM, and done indicates that the scope has completed its work. If a scope ends while its consistent bit is false, the abort bit is automatically set to true and the entire transaction must be rolled back. This general process is depicted in Figure 3.

Figure 3
Figure 3: A simple transaction with two inner scopes.

In summary, if just one scope fails to set its consistent bit, the abort bit is set for the entire transaction, and the effects of all scopes inside of it are rolled back. Because of the poisoning effect of setting the abort bit, it is often referred to as the doomed bit. With that information in mind, the following sections will discuss how to go about constructing scopes and manipulating these bits.

An instance of the TransactionScope class is used to mark the duration of a transaction. Its public interface is extremely simple, offering just a set of constructors, a Dispose, and a Complete method. (An alternate programming model, called declarative transactions, which is not discussed here, facilitates interoperability with Enterprise Services.)

After a new transaction scope is constructed, any enlisted resource will participate with the enclosing transaction until the end of the scope. Constructing a new top-level scope installs an ambient transaction in Thread Local Storage (TLS), which can later be accessed programmatically through the Transaction.Current property. You saw a brief snippet of code above showing how to use these via the default constructor, the C# using statement (to automatically call Dispose), and an explicit call to Complete.

Calling Complete on the TransactionScope sets its consistent bit to true, indicating that the scope has successfully completed its last operation and is safe to commit. When Dispose gets called, it inspects consistent; if it is false, the transaction's abort bit is set. In simple cases with flat, single-scope transactions, this is precisely when the effects of the commit or rollback are processed by the TM and its enlisted RMs. In addition to setting the various bits, it instructs the RMs to perform any necessary actions for commit or rollback. In nested scope scenarios, however, a child does not actually perform the commit or rollback; rather, the top-level scope (the first scope created inside a transaction) is responsible for that.
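The doomed-bit behavior described above can be demonstrated without any enlisted RMs. In this sketch the inner scope exits without calling Complete, so even though the outer scope votes to commit, the transaction has already been doomed and the outer Dispose raises TransactionAbortedException:

```csharp
using System;
using System.Transactions;

class DoomedBitExample
{
    static void Main()
    {
        try
        {
            using (TransactionScope outer = new TransactionScope())
            {
                using (TransactionScope inner = new TransactionScope())
                {
                    // The inner scope ends without calling Complete, so its
                    // consistent bit stays false and the transaction's abort
                    // (doomed) bit is set when it is disposed.
                }

                // The outer scope votes to commit, but it's too late...
                outer.Complete();
            }   // ...Dispose sees the doomed transaction and throws.
        }
        catch (TransactionAbortedException)
        {
            Console.WriteLine("Transaction was rolled back");
        }
    }
}
```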

Transactional Database Access Example (ADO.NET)

As a brief example of actually using transactions in your code, this C# snippet wraps a set of calls to a database inside a transaction. ADO.NET's SQL Server database provider automatically looks for an ambient transaction (rather than requiring a manual call to BeginTransaction on the connection) and enlists its RM without any extra code:

using (TransactionScope tx = new TransactionScope())
{
    IDbConnection cn = /*...*/;

    // ADO.NET detects the transaction erected by the TransactionScope
    // and uses it for the following commands automatically.
    cn.Open();

    IDbCommand cmd1 = cn.CreateCommand();
    cmd1.CommandText = "INSERT ...";
    cmd1.ExecuteNonQuery();

    IDbCommand cmd2 = cn.CreateCommand();
    cmd2.CommandText = "UPDATE ...";
    cmd2.ExecuteNonQuery();

    // A call to Complete indicates that the work is safe to commit.
    // It doesn't actually commit until Dispose is called.
    tx.Complete();
}

Similar things were possible with version 1.x of the Framework, but of course it required a different programming model for each type of transacted resource you worked with. And it didn't automatically span transactions across multiple resource enlistments.

Wrapping Up

This article of course only touched on some of the capabilities of System.Transactions. We didn't talk about the various transaction creation options, deadlock avoidance, distributed transactions and two-phase commit, how to manually enlist RMs, how to build RMs, and a whole host of additional interesting parts of the new technology. But hopefully this quick overview is sufficient to get you familiar with the basic concepts, and ready to start exploring.

This article is adapted from Professional .NET Framework 2.0 by Joe Duffy (Wrox, 2006, ISBN: 0-7645-7135-4), from chapter 15 "Transactions."

Copyright 2006 by WROX. All rights reserved. Reproduced here by permission of the publisher.



About the Author

Joe Duffy

Joe Duffy is a Program Manager on the CLR Team at Microsoft, where he works on WinFX and the .NET Framework. He is also the author of Professional .NET Framework 2.0 (Wrox, 2006, ISBN: 0-7645-7135-4).
