Most meaningful software operations are composed of multiple independent steps. Although a single method call, for instance, is often viewed as a logically distinct unit, this is in the eye of the caller. A single method might just execute a single SQL
UPDATE statement over 1000 database rows that must be applied by the database atomically. On the other hand, another method might execute a SQL
UPDATE statement, manipulate state on a shared COM+ component, and initiate a web service message that results in some middle-tier processing, all of which must be treated as a single atomic operation. If any one of these steps fails, all steps that have already executed must be undone.
System.Transactions supports both, using the same programming model. In this article, we'll take a quick look at the basics of this technology, which is new to the .NET Framework 2.0.
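To see what that unified model looks like, here is a minimal sketch using TransactionScope, the implicit-transaction entry point in System.Transactions. The connection string, table names, and SQL are placeholders invented for illustration:

```csharp
using System;
using System.Transactions;
using System.Data.SqlClient;

class Demo
{
    static void Main()
    {
        // Both commands below either commit together or roll back together.
        // Connection string and SQL are illustrative placeholders.
        using (TransactionScope scope = new TransactionScope())
        {
            using (SqlConnection conn = new SqlConnection(
                "Data Source=.;Initial Catalog=Orders;Integrated Security=true"))
            {
                conn.Open(); // the connection auto-enlists in the ambient transaction
                new SqlCommand(
                    "UPDATE Inventory SET Qty = Qty - 1 WHERE Id = 42", conn)
                    .ExecuteNonQuery();
                new SqlCommand(
                    "INSERT INTO AuditLog (Note) VALUES ('shipped item 42')", conn)
                    .ExecuteNonQuery();
            }
            scope.Complete(); // vote to commit; leaving the scope without
                              // calling Complete (e.g., on exception) rolls back
        }
    }
}
```

The same pattern applies whether the work inside the scope touches one resource or several; the scope, not the resource, defines the atomic unit.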
Transaction Crash Course
In both cases above, some granularity of operation is assumed to be indivisible. If one in a series of steps that are part of that operation fails, we have a problem on our hands. The system — in this case spanning multiple physical resources — could become corrupt. This might leave important data structures (like data in a database or some COM+ components) in an invalid state, lose important user data, and/or even prohibit applications from running altogether. Clearly this is a bad situation that must be avoided at all costs, especially for large-scale, complex, mission-critical systems.
The general solution to this problem is transactions. If we are careful to mark the begin and end points for a set of indivisible steps, a transaction manager (TM) might be willing to ensure that any failures result in rolling back all intermediary steps to the previous valid system state that existed before the transaction began. And if the transaction manipulates more than one resource, each could be protected by its own resource manager (RM), which participates in the commit/rollback protocol of the TM. Such an RM would know how to perform deferred activities and/or compensation in a way that coordinates nicely with the TM, giving the programmer the semantics that he or she desires. This general infrastructure is depicted in Figure 1.
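An RM's participation in the TM's commit/rollback protocol can be sketched with the IEnlistmentNotification interface from System.Transactions. The class and field names below are invented for illustration, and the sketch assumes Write is called inside an ambient transaction:

```csharp
using System;
using System.Transactions;

// A toy volatile resource manager: a single integer whose writes only
// become visible if the enclosing transaction commits.
class TransactedValue : IEnlistmentNotification
{
    private int _committed;  // last committed value
    private int _pending;    // value written inside the current transaction

    public void Write(int value)
    {
        // Enlist with the ambient transaction so the TM calls us back
        // during its commit/rollback protocol. (A real RM would guard
        // against enlisting more than once per transaction.)
        Transaction.Current.EnlistVolatile(this, EnlistmentOptions.None);
        _pending = value;
    }

    public int Read() { return _committed; }

    // Callbacks driven by the TM:
    public void Prepare(PreparingEnlistment e) { e.Prepared(); }           // phase 1: vote yes
    public void Commit(Enlistment e)   { _committed = _pending; e.Done(); } // phase 2: apply
    public void Rollback(Enlistment e) { _pending = _committed; e.Done(); } // undo
    public void InDoubt(Enlistment e)  { e.Done(); }                        // outcome unknown
}
```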
Figure 1: A single transaction manager (TM) with multiple resource managers (RMs).
The idea of transaction flow, state, commitment, and rollback is illustrated in Figure 2. In this example, a TM monitors in-between state changes and ensures that a transition is made to the End State or no transition is made at all (i.e., the system is restored to Begin State). Both states are consistent from the system and application point of view.
Figure 2: Transactional state management.
Take note of some key terminology. We say the transaction was committed if we successfully reach the End State. Otherwise, we say that the transaction was rolled back (a.k.a. aborted), meaning that all state manipulations have been undone and the system has been returned to the Begin State.
A TM does more than that. In addition to coordinating multiple transacted resources, such as in-memory data structures, file systems, databases, and messaging endpoints, it can coordinate such activities with distributed entities. In other words, a distributed transaction may reliably span RMs on machines that are physically separate. TMs furthermore isolate inconsistencies occurring in between the start and end of a transaction so that others accessing the same resources in parallel will not observe a surprising state containing broken invariants or partially committed values.
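A single TransactionScope can span multiple RMs without any change to the programming model. In the sketch below, work against two databases commits or aborts as a unit; the connection strings are placeholders, and on the .NET Framework the second enlistment typically causes the lightweight transaction to be promoted to a distributed one coordinated by MSDTC:

```csharp
using System.Transactions;
using System.Data.SqlClient;

// One logical transaction spanning two databases (illustrative servers/SQL).
using (TransactionScope scope = new TransactionScope())
{
    using (SqlConnection a = new SqlConnection(
        "Data Source=ServerA;Initial Catalog=Orders;Integrated Security=true"))
    {
        a.Open(); // enlists in the ambient transaction
        new SqlCommand("UPDATE Orders SET Status = 'Shipped' WHERE Id = 42", a)
            .ExecuteNonQuery();
    }
    using (SqlConnection b = new SqlConnection(
        "Data Source=ServerB;Initial Catalog=Billing;Integrated Security=true"))
    {
        b.Open(); // a second RM; the transaction may be promoted to MSDTC here
        new SqlCommand("INSERT INTO Invoices (OrderId) VALUES (42)", b)
            .ExecuteNonQuery();
    }
    scope.Complete(); // both RMs commit, or neither does
}
```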
In fact, transactions guarantee four things, referred to as the ACID properties:
- Atomicity: The effects of a set of operations are either visible immediately together or they fail together. In other words, they are indivisible. Given two operations, it is illegal for one to become visible before the other, or for one to fail but the other succeed. This alleviates the need for a programmer to manually compensate for a failure, for example by hand-coding the logic to put data back into a consistent state.
- Consistency: The system ensures that transactions are applied in a manner that leaves the system in a consistent state, meaning that a committed transaction will not incur transactional failures post-commit. This is a close cousin to atomicity. Moreover, the transaction itself is guaranteed to observe a consistent view of the system while it executes.
- Isolation: Other system components that are simultaneously accessing resources protected by a transaction are not permitted to observe "in-between" states, where a transaction has made changes that have not yet been committed. Transacted resources deal with isolation in different manners; some choose to prevent access to resources enlisted in a transaction altogether — called pessimistic concurrency — while others allow access and detect conflicting reads and writes at commit time — called optimistic concurrency. The latter can result in better throughput if lots of readers are accessing a resource but can also result in a high conflict (and thus transaction abort) rate if there are lots of writers.
- Durability: The results of an operation are persisted assuming the transaction has committed. This means that if a transaction manager agrees to commit, it guarantees the results will not be lost afterward. There are actually gradients of durability depending on the storage the resource lives in. A file system, for example, commits to a physical disk; a database commits to a physical transaction log; but a transacted object in memory likely doesn't write to anything but physical RAM.
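Isolation in particular is something you can configure directly in System.Transactions. A TransactionOptions structure lets you choose an isolation level (and a timeout) when the scope is created; the sketch below assumes ReadCommitted is acceptable for the workload:

```csharp
using System;
using System.Transactions;

// Configuring isolation and a timeout for a transaction scope.
// Serializable is the System.Transactions default; a less strict level
// such as ReadCommitted can improve throughput under contention.
TransactionOptions options = new TransactionOptions
{
    IsolationLevel = IsolationLevel.ReadCommitted,
    Timeout = TimeSpan.FromSeconds(30)
};

using (TransactionScope scope =
    new TransactionScope(TransactionScopeOption.Required, options))
{
    // ... transacted work against one or more enlisted resources ...
    scope.Complete();
}
```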
With some of the basics of transactions under our belts, let's now take a look at the System.Transactions namespace, introduced in version 2.0 of the .NET Framework and physically located in the System.Transactions.dll assembly. This namespace provides a new unified programming model for working with transacted resources, regardless of their type or location. This encompasses integration with ADO.NET and web services, in addition to providing the ability to write a custom resource manager.
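The namespace also exposes the ambient transaction directly through the static Transaction.Current property, which is how enlisted resources and your own code discover the transaction a scope has established. A brief sketch:

```csharp
using System;
using System.Transactions;

// Inspecting the ambient transaction that a TransactionScope establishes.
using (TransactionScope scope = new TransactionScope())
{
    Transaction tx = Transaction.Current; // flows implicitly with the scope
    Console.WriteLine(tx.TransactionInformation.Status);          // e.g., Active
    Console.WriteLine(tx.TransactionInformation.LocalIdentifier); // TM-assigned id
    scope.Complete();
}
```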