By Robert Chartier
Developers must realize there is more to programming than simple code. This two-part series addresses the important issue of application architecture using an N-tier approach. The first part is a brief introduction to the theoretical aspects, including the understanding of certain basic concepts. The second part shows how to create a flexible and reusable application for distribution to any number of client interfaces. Technologies used consist of .NET Beta 2 (including C#, .NET Web Services, and symmetric encryption), Visual Basic 6, the Microsoft SOAP Toolkit V2 SP2, and basic interoperability [ability to communicate with each other] between Web Services in .NET and the Microsoft SOAP Toolkit. None of these discussions (unless otherwise indicated) assumes anything about the physical location of each layer. The layers are often on separate physical machines, but they can also be isolated to a single machine.
For starters, this article uses the terms “tier” and “layer” synonymously. In the term “N-tier,” “N” implies any number, like 2-tier, or 4-tier, basically any number of distinct tiers used in your architecture.
“Tier” can be defined as “one of two or more rows, levels, or ranks arranged one above another” (see http://www.m-w.com/cgi-bin/dictionary?Tier). From this, we can adapt a definition of what N-tier means and how it relates to our application architecture: “Any number of levels arranged above another, each serving distinct and separate tasks.” To gain a better understanding of what is meant, let’s take a look at a typical N-tier model (see Figure 1.1).
Figure 1.1 A Typical N-Tier Model
The Data Tier
Since this has been deemed the Age of Information, and since all information needs to be stored, the Data Tier described above is usually an essential part. Developing a system without a data tier is possible, but for most applications the data tier should exist. So what is this layer? Basically, it is your Database Management System (DBMS): SQL Server, Access, Oracle, MySQL, plain text (or binary) files, whatever you like. This tier can be as complex and comprehensive as high-end products such as SQL Server and Oracle, which include features like query optimization and indexing, all the way down to simple plain text files (and the engine to read and search those files). Some well-known formats of structured plain text files include CSV, XML, etc. Notice that this layer is only intended to deal with the storage and retrieval of information. It doesn’t care about how you plan on manipulating or delivering the data. This layer should also include your stored procedures. Do not place business logic in here, no matter how tempting.
The Presentation Logic Tier
Let’s jump to the Presentation Logic Layer in Figure 1.1. You are probably familiar with this layer; it consists of our standard ASP documents, Windows forms, etc. This is the layer that provides an interface for the end user into your application. That is, it works with the results/output of the Business Tier to transform them into something usable and readable by the end user. It has come to my attention that many Web applications have been developed with this layer talking directly to the Data Access Layer, without even implementing a Business Tier, or with the Business Layer not kept separate from the other two layers. It’s important that these layers stay separate. A lot of developers will simply throw some SQL into their ASP (using ADO), connect to the database, get the recordset, and loop in their ASP to output the result. This is usually a very bad idea. I will discuss why later.
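To make the separation concrete, here is a minimal sketch in C# (the language used in part two of this series) of what a presentation-layer helper looks like when it does its one job: formatting output it receives from the Business Tier. The class and method names are hypothetical, invented for illustration; the point is simply that no SQL and no ADO code appears at this layer.

```csharp
using System;
using System.Collections.Generic;
using System.Text;

// Hypothetical presentation-layer helper. It receives plain results
// from the Business Tier and only transforms them into HTML for the
// end user. It knows nothing about ADO, SQL, or the database.
class ProductListView
{
    public static string Render(IList<string> productNames)
    {
        StringBuilder html = new StringBuilder("<ul>");
        foreach (string name in productNames)
            html.Append("<li>").Append(name).Append("</li>");
        html.Append("</ul>");
        return html.ToString();
    }
}
```

If the Business Tier changes how it fetches products, this layer never needs to know.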
The Proxy Tier and the Distributed Logic
There’s also that little, obscure Proxy Tier. “Proxy” by definition is “a person [object] authorized to act for another” (see http://www.m-w.com/cgi-bin/dictionary?Proxy). This “object,” in our context, refers to any code that performs actions for something else (the client). The key part of this definition is “act for another.” The Proxy Layer “acts” on behalf of the Distributed Logic layer (or the end user’s requests) to provide access to the next tier, the Business Tier. Why would anyone ever need this? It facilitates our need for distributed computing. Basically, it comes down to choosing some standard method of communication between these two entities; that is, “how can the client talk to the remote server?” This is where we find the need for the Simple Object Access Protocol (SOAP). Without going into too many details, SOAP can be considered a standard (protocol) for accessing remote objects. It provides a way for two machines to “talk” or “communicate” with each other. (Common Object Request Broker Architecture [CORBA], Remote Method Invocation [RMI], Distributed Component Object Model [DCOM], and SOAP all basically serve the same function.)
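To make this concrete, here is roughly what a SOAP 1.1 request looks like on the wire: just XML inside an HTTP POST. The method name (getPrice) and the namespace are hypothetical placeholders, not part of any real service:

```xml
<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <!-- The client asks the remote server to run getPrice for product 42. -->
    <getPrice xmlns="http://tempuri.org/">
      <productId>42</productId>
    </getPrice>
  </soap:Body>
</soap:Envelope>
```

Because the message is plain XML over HTTP, the client and server need not share a platform, a language, or even an operating system.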
The Client Interface
In this section of Figure 1.1 we notice that the end-user presentation (Windows forms, etc.) is connected directly to the Business Tier. A good example of this would be your applications over the Local Area Network (LAN). This is your typical, nondistributed, client-server application. Also notice that it extends over and on top of the Distributed Logic layer. This is intended to demonstrate how you could use SOAP (or some other type of distributed-computing messaging protocol) on the client to communicate with the server and have those requests be transformed into something readable and usable for the end user.
The Business Tier
This is basically where the brains of your application reside; it contains things like the business rules, data manipulation, etc. For example, if you’re creating a search engine and you want to rate/weight each matching item based on some custom criteria (say, a quality rating and the number of times a keyword was found in the result), place this logic at this layer. This layer does NOT know anything about HTML, nor does it output it. It does NOT care about ADO or SQL, and it shouldn’t have any code to access the database or the like. Those tasks are assigned to the corresponding layers above and below it. We must gain a very basic understanding of Object-Oriented Programming (OOP) at this time. Take time to read over http://searchwin2000.techtarget.com/sDefinition/0,,sid1_gci212681,00.html and make sure you understand the important benefits of OOP.

To clarify, let’s look at an example, such as a shopping cart application. Think in terms of basic objects. We create an object to represent each product for sale. This Product object has the standard property getters and setters: getSize, getColor, setSize, setColor, etc. It is a very simple implementation of a generic product. Internally, it ONLY knows how to return information (getters) and how to validate the data you pump into it (ONLY for its limited use). It is self-contained (encapsulation). The key here is to encapsulate all the logic related to the generic product within this object. If you ask it to “getPrice,” it will return the price of the single item it represents. If you instruct it to “validate” or “save,” it has the brains to handle this, return any errors, etc.

We can then plug this Product object into another object, a “Cart” object. This cart can contain and handle many Product objects. It also has getters and setters, but obviously on a more global scale. You can do something like “for each product in myCart” and enumerate (loop through) each product within. (For more information on enumeration, refer to http://www.m-w.com/cgi-bin/dictionary?enumeration.) Now, when you call “getPrice” on the Cart object, it knows that it must enumerate each product it holds, add up the price for each, and return that single total. When we fire the “saveCart” method, it will loop over each Product and call its “saveProduct” method, which will then hit the Data Access Tier objects and methods to persist itself over to the Data Tier. We could also take our simple Product object and plug it into a “Sale” object. This Sale object contains all of the items available for a particular sale, and can be used for things like representing all the items on sale at a given outlet. I’m sure you are beginning to understand the advantage of using an OOP environment.
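The Product and Cart idea can be sketched in C# along these lines. This is an illustrative sketch only, not the article’s actual code; the method names follow the getPrice/getColor style used above, and the validation rule is an assumed example.

```csharp
using System;
using System.Collections.Generic;

// A very simple, self-contained Product: it holds its own data,
// exposes getters/setters, and validates what you pump into it.
class Product
{
    private decimal price;
    private string color = "";

    public void SetPrice(decimal value)
    {
        // The Product validates its own data (encapsulation).
        // Rejecting negative prices is an assumed example rule.
        if (value < 0) throw new ArgumentException("price cannot be negative");
        price = value;
    }
    public decimal GetPrice() { return price; }

    public void SetColor(string value) { color = value; }
    public string GetColor() { return color; }
}

// The Cart contains many Products. Its GetPrice enumerates each
// Product, adds up the individual prices, and returns one total.
class Cart
{
    private List<Product> products = new List<Product>();

    public void Add(Product p) { products.Add(p); }

    public decimal GetPrice()
    {
        decimal total = 0;
        foreach (Product p in products)
            total += p.GetPrice();
        return total;
    }
}
```

A saveCart method would follow the same enumeration pattern, calling each Product’s save, which in turn would call into the Data Access Tier.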
Data Access Tier
This layer is where you will write some generic methods to interface with your data. For example, we will write a method for creating and opening a Connection object (internal), and another for creating and using a Command object, along with a stored procedure (with or without a return value), etc. It will also have some specific methods, such as “saveProduct,” so that when the Product object calls it with the appropriate data, it can persist that data to the Data Tier. This Data Access Layer, obviously, contains no business rules or data manipulation/transformation logic. It is merely a reusable interface to the database.
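As a rough illustration of the idea (not the article’s actual code), the sketch below defines a Data Access Tier behind an interface. In a real system the implementation would open an ADO Connection, build a Command around a stored procedure such as “saveProduct,” and execute it; here an in-memory dictionary stands in for the Data Tier so the example is self-contained. All names are assumptions made for the sketch.

```csharp
using System;
using System.Collections.Generic;

// The Business Tier talks only to this interface. It neither knows
// nor cares which DBMS sits behind it.
interface IProductDataAccess
{
    void SaveProduct(int id, decimal price);
    decimal LoadPrice(int id);
}

// Stand-in implementation: a dictionary plays the role of the Data
// Tier. A real version would wrap Connection/Command objects and
// stored procedures, with no business rules anywhere in this class.
class InMemoryProductDataAccess : IProductDataAccess
{
    private Dictionary<int, decimal> store = new Dictionary<int, decimal>();

    public void SaveProduct(int id, decimal price) { store[id] = price; }

    public decimal LoadPrice(int id) { return store[id]; }
}
```

Swapping SQL Server for Oracle then means writing one new class that implements the same interface, which is exactly the porting argument made in the conclusions below.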
Conclusions
In all of the systems that I have been able to dig my dirty little hands into, I have rarely ever seen both the Business Tier and the Data Access Tier used. I usually combine the two: I allow the Business Layer to talk directly to the Data Layer and do not bother with a separate Data Access Layer. To justify this: we are all developing on Internet time, and the last time I looked, it’s still going at about 3 to 4 times faster than normal time, which means we are expected to work and produce at the same rate. The bottom line is reducing time to market. In my opinion, writing a Data Access Tier that simply abstracts the Data Tier is overkill; ADO can be considered this Data Access Layer, since it provides the interface directly. We still keep all SQL in the Data Tier (stored procedures), but no business rules should be kept there.

Of course, the more tiers you add, the more performance is affected. The client hits “Save Cart” on their Web-enabled phone; the request hits the Business Tier to call the Cart’s “saveCart,” which calls each product’s “save,” which either goes directly to the database or passes through the Data Access Layer and finally persists into the database. This path does affect performance. It is up to the application architect (you) to know and understand this, and all other factors affecting the system, and to make a good decision about how to develop at this level. This decision is usually pretty easy to make, depending on the amount of work and documentation produced during the analysis phase.

We all now know how to do this logically; let’s explain the why. A good example is to look at the Presentation Logic Tier. Notice that many of its sections (the Web, the Proxy Tier, and the Client Interface) all sit directly on top of the Business Tier. We gain the advantage of not needing to redo any code from the Business Tier all the way to the database. Write it once, and plug into it from anywhere.
Now say you’re using SQL Server and you don’t want to pay Microsoft’s prices anymore, and you decide to pay Oracle’s instead. So, with this approach you could easily port the Data Layer over to the new DBMS and touch up some of the code in the Data Access Layer to use the new system. This should be a very minimal touch-up. The whole point is to allow you to plug each layer in and out (very modular) without too many hassles and without limiting the technology used at each tier. Another example would be that we initially develop our entire system using VB (COM) and ASP, and now we want to push it over to our friendly VB .NET or C#. It is just a matter of porting the code over at each layer (phased approach), and voila, it’s done. (Microsoft has given us the ability for interop between classic COM and .NET.) We can upgrade each layer separately (with minor hurdles) on an as-needed basis.
About the Author
Robert Chartier has worked in the information technologies field for more than 7 years. While studying at college he began his career as a software and hardware technician at the college, supporting a user base of thousands of students and hundreds of instructors. Once his college days were finished, he moved on to full-time studies at a university in the lower mainland of British Columbia and landed a full-time job developing large projects for distribution on many platforms, media, and languages. Next, he moved on to Stockgroup, where he was able to tap into the Internet development market on a larger and more focused scale. In his spare time he began writing and producing content for developer-specific sites focusing on Microsoft technologies (ASP, COM/COM+, etc.). He has also been a part of many open forums on cutting-edge technologies, such as the .NET Framework and Web Services (SOAP), has been invited to speak at large developer conferences, and has contributed to many technical publications. His next step was to take a position with a large B2B training marketplace, Thinq.com. At Thinq he developed many tools, including a very comprehensive search engine with custom business rules for weighting, sorting, and analysis (COM+). This led him into strong development with beta versions of Commerce Server 2000 and BizTalk Server 2000. His next opportunity included a large Internet development effort using technologies such as JSP (Java Beans, J2EE), Oracle, WebLogic, etc.