Past, Present, and Future of ADO.NET

ADO.NET, the data-access component of the Microsoft .NET Framework, works with any component on any platform that understands XML, which is a standard format for transmitting data. ADO.NET exposes a simple object model that conforms to the standards prescribed by the World Wide Web Consortium (W3C). It is an evolution of the ADO data access model that directly addresses user requirements for developing scalable applications.

ADO.NET was designed specifically for the Web, with scalability, statelessness, and XML in mind. The ADO.NET interface is readily accessible from any of the .NET-compliant languages.

ADO.NET is essentially a collection of classes that expose methods and attributes used to manage communications between an application and a data store. An integral part of the .NET Framework, ADO.NET simplifies a variety of development needs, including the creation of front-end database clients and middle-tier business objects used in distributed, data-intensive applications.

ADO.NET is not just another object model for working with data. In essence, it is ADO revisited on a wider-range, cross-platform scenario. ADO.NET is ADO extended to work in cooperation with any software component, on any platform, that understands XML.

ADO.NET supports a rich disconnected view of data, as represented by data sets, along with cursors, batching, data shaping, connection pooling, metadata access, and a host of other techniques that allow you to build performance, reliability, and robustness into your data applications.

Evolution of ADO.NET from ADO

Under the .NET Framework, data access is handled by a set of classes called ADO.NET; these are essentially an augmentation of the existing ActiveX Data Objects (ADO). There are some big changes, both internally and on the surface. The most obvious internal change is that ADO.NET is based entirely on XML. Externally, the biggest change is that there is no Recordset object.

The Recordset mechanism was originally used to interact with the results of a SQL query; later, it grew into a structure that supports disconnected and even fabricated recordsets. Behind this sat a sophisticated SQL-generation layer capable of simulating optimistic locks without holding the physical locks that would otherwise restrict scalability.

Under ADO.NET, the functionality of the Recordset has been split into three groups, which are explained below:

  1. The DataReader object allows you to perform a single pass through a set of records as efficiently as possible; in ADO, this was achieved using a forward-only, server-side cursor.
  2. The DataSet and DataSetCommand objects allow you to create a client-side cache of one or more related recordsets and process it in a disconnected fashion whereas in ADO, a client-side cursor was used for much the same thing. By separating out these two very different sets of functionality, ADO.NET can provide tailored support for each approach, as well as exposing more power.
  3. Updating data through pessimistic locking is not supported in ADO.NET, which is targeted at n-tier, Web-friendly applications and does not try to tread on ADO's toes in all areas. In ADO, data is updated through connection-oriented pessimistic locking.

The other major differences between ADO and ADO.NET are as follows:

  1. Because ADO.NET represents data as XML, which is plain text, data can pass through a firewall; the binary format of an ADO recordset typically cannot.
  2. ADO.NET supports the typed dataset, which supports all the dataset capabilities built into .NET, including binding and XML DOM interoperation. In ADO, recordset fields are not strongly typed and receive no IntelliSense support.
  3. An ADO.NET DataSet can hold multiple tables and is therefore very flexible; an ADO recordset holds data for only one table.


DataSets

ADO.NET is an attempt to encourage developers to use the disconnected model. Disconnected data is more scalable because it reduces the demand on database servers. It is much easier to scale the middle tier using parallel servers than it is to scale the data tier, and therefore scalable applications need to use data tier resources carefully. DataSets are always client-side.

The DataSet object contains a collection of zero or more DataTable objects, each of which is an in-memory representation of a single table. The DataColumns and Constraints collections together make up the schema of the table. A DataTable also contains a DataRows collection, which holds the actual data in the DataSet.

The DataSet also contains a Relations collection. A DataRelation object allows you to create an association between rows in one table and rows in another table.
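As a minimal sketch of this structure, a DataSet can be assembled entirely in memory; the table and column names here are illustrative:

```vb
' Assemble a DataSet in memory: schema (columns, constraint), then rows.
Dim ds As New DataSet("HR")
Dim t As DataTable = ds.Tables.Add("Employee")

t.Columns.Add("emp_id", GetType(Integer))
t.Columns.Add("emp_fname", GetType(String))
t.PrimaryKey = New DataColumn() {t.Columns("emp_id")}  ' adds a unique constraint

t.Rows.Add(New Object() {1, "Anne"})
t.Rows.Add(New Object() {2, "Rahul"})
```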

DataSet neither knows nor cares about databases or SQL. It simply allows tables of data to be manipulated, exchanged, or bound to user interfaces. There are two standard ways of getting tables of data into a DataSet. One is by using a DataSetCommand object that turns the results of a database query into XML. This generates database updates in much the same way that the ADO Client Cursor Engine does, but with far greater control.

The second approach is to work directly with XML. The DataSet object has methods for reading and writing XML data and schemas, and can work closely with an XmlDataDocument object. The XmlDataDocument class in the .NET Framework inherits from the general-purpose XmlDocument class, extending it with specific functionality for manipulating tabular data and forming a bridge between a DataSet and a generic XML DOM.
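The read and write methods mentioned above can be sketched as follows; the file name is an assumption:

```vb
' Persist both the data and its schema as XML...
ds.WriteXml("employees.xml", XmlWriteMode.WriteSchema)

' ...and rebuild an equivalent DataSet from that file later.
Dim ds2 As New DataSet()
ds2.ReadXml("employees.xml")
```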

To create a DataSet using an OLE DB provider, you need to use an instance of the ADODataSetCommand class (a .NET beta-era class; in the released Framework this role is played by the OleDbDataAdapter).

The following example explains how the dataset is loaded with data:

Dim Sql As New ADODataSetCommand( _
    "select emp_id, emp_fname, emp_lname from employee", _
    sqlConnection)
Dim ds As New DataSet()
Sql.Fill(ds, "Employee")

The above dataset is filled with the data returned by the query.

Multiple DataTables can be placed in a single DataSet, and relationships can be created between them to aid navigation.

The following code relates two tables within a dataset:

Dim ds As New DataSet()
' ...assume ds has been filled with Employee and Department tables...
Dim dr As DataRelation
dr = New DataRelation("Employee", _
                      ds.Tables("Department").Columns("dept_id"), _
                      ds.Tables("Employee").Columns("dept_id"))
ds.Relations.Add(dr)

The above example uses two tables, Employee and Department, to relate the department ID of the employee table with the department ID in the Department table. This establishes a parent-child relationship between these two tables.
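Once the relation is in place, you can navigate from a parent row to its child rows with GetChildRows; a sketch, assuming a DataSet ds populated as in the example above:

```vb
' Walk each Department row and list its related Employee rows
' through the DataRelation named "Employee" created above.
For Each dept As DataRow In ds.Tables("Department").Rows
    Console.WriteLine(dept("dept_id").ToString())
    For Each emp As DataRow In dept.GetChildRows("Employee")
        Console.WriteLine("  " & emp("emp_fname").ToString())
    Next
Next
```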


Data Provider

A .NET data provider provides functionality for connecting to a data source, executing commands, and retrieving results. Those results can be processed directly, or placed in an ADO.NET DataSet for further processing while in a disconnected state. While in the DataSet, data can be exposed to the user, combined with other data from multiple sources, or passed remotely between tiers. Any processing performed on the data while in the dataset can then be reconciled to the data source.

All .NET data providers are designed to be lightweight. They consist of a minimal layer between the data source and your code. This extends functionality without sacrificing performance.

There are four core objects that make up a .NET data provider:

  • Connection: Establishes a connection to a specific data source and can begin a Transaction.
  • Command: Executes a command at a data source and exposes Parameters. It has three execute methods: ExecuteNonQuery, ExecuteScalar, and ExecuteReader. The SqlCommand class exposes an additional execute method: ExecuteXmlReader.
  • DataReader: Exposes and reads a forward-only stream of data from a data source.
  • DataAdapter: Populates a DataSet and resolves updates with the data source.

The main design goal of the .NET Data Provider was to implement a generic client layer enabling the reuse of existing data access components for connecting to remote data sources. This has been achieved with both the Managed and Unmanaged .NET Data Providers, each of which shares a common namespace that eliminates the need to recompile applications should the remote data source change.

Managed Data Provider

The .NET Data Provider is built with managed code, enabling it to run completely within the .NET Framework runtime and delivering better security and performance. This generic managed .NET Data Provider currently connects to the remote data source in one of two ways:

  • Multi-Tier Database Agents: The OpenLink VDB (Virtual Database) layer has been ported to C# and linked into the provider, enabling it to communicate directly with the OpenLink Multi-Tier database agents installed on the remote machine (or via a 3-tier connection).
  • TDS Protocol: The TDS protocol has been ported to C# and linked into the provider, enabling two 100% managed providers capable of direct connectivity to Microsoft and Sybase SQL Server databases without the need for any additional components on the server.

Unmanaged Data Provider

The Unmanaged .NET Data Provider enables connectivity to any ODBC data source by acting as a bridge between ADO.NET and ODBC. It is provided as a stop-gap solution, enabling connectivity to data sources for which managed .NET providers are not yet available; consequently, it does not provide the security and performance benefits of its managed counterpart.

The following example opens a connection to SQL Server and reads the results of a query with a SqlDataReader:

Dim cn As New SqlClient.SqlConnection( _
    "Data Source=(local);Initial Catalog=pubs;User ID=sa;Password=;")
Dim strSql As String = "SELECT * FROM employee"
Dim cm As New SqlClient.SqlCommand(strSql, cn)
Dim dr As SqlClient.SqlDataReader

cn.Open()
dr = cm.ExecuteReader(CommandBehavior.CloseConnection)
While dr.Read()
    ' ...process the current row...
End While
dr.Close()   ' CloseConnection behavior also closes the connection

The above example executes the query and reads all rows from the employee table.
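The connected DataReader above has a disconnected counterpart, the SqlDataAdapter; the following sketch reuses the same connection-string and table assumptions:

```vb
Dim cn As New SqlClient.SqlConnection( _
    "Data Source=(local);Initial Catalog=pubs;User ID=sa;Password=;")
Dim da As New SqlClient.SqlDataAdapter("SELECT * FROM employee", cn)
Dim ds As New DataSet()

da.Fill(ds, "Employee")        ' opens and closes the connection itself

' ...edit rows in ds.Tables("Employee") while disconnected...

' A command builder can generate the INSERT/UPDATE/DELETE commands,
' and Update reconciles the changes with the database.
Dim cb As New SqlClient.SqlCommandBuilder(da)
da.Update(ds, "Employee")
```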

Associated Namespace of ADO.NET

System.Data

The System.Data namespace consists mostly of the classes that constitute the ActiveX Data Objects architecture in the .NET Framework (ADO.NET). ADO.NET provides the programming interfaces for building data-driven Web and Windows applications.

This namespace includes the centerpiece of the ADO.NET architecture, the dataset, which represents a relational database in memory.

System.Data.Common

The System.Data.Common namespace consists of the classes shared by the OLE DB and the SQL Server .NET data providers.

A .NET data provider describes a collection of classes used to manage connections between an application and a data store, such as a relational database or a hierarchical XML file.

System.Data.OleDb

The System.Data.OleDb namespace is the OLE DB .NET Data Provider.

This namespace includes the core OleDb classes for accessing and manipulating any OLE DB-compliant data source, and essentially consists of the OleDbConnection, OleDbCommand, OleDbDataReader, and OleDbDataAdapter classes.

System.Data.SqlClient

The System.Data.SqlClient namespace is the SQL Server .NET Data Provider.

This namespace includes the core SqlClient classes specifically for accessing and manipulating a SQL Server data source, and essentially consists of the SqlConnection, SqlCommand, SqlDataReader, and SqlDataAdapter classes.

System.Xml

The System.Xml namespace is an integral player in the data-access scheme of the .NET Framework. It has a close relationship with the DataSet, which relies entirely on XML for its internal schema and data representation.

Filtering and Sorting in ADO.NET

ADO.NET supports two fundamental approaches for filtering and sorting data:

  • The DataTable Select method: This method is overloaded to accept filter and sort arguments and returns an array of DataRow objects.
  • The DataView object's Sort, RowFilter, and Find members: A DataView uses the same filter syntax supported by the Select method, but exposes a live view that can be bound to data-aware controls.
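Both approaches can be sketched against an in-memory Employee table; the column names used in the filter and sort expressions are assumptions:

```vb
' Approach 1: DataTable.Select returns a filtered, sorted DataRow array.
Dim hits() As DataRow = _
    ds.Tables("Employee").Select("dept_id = 10", "emp_lname ASC")

' Approach 2: a DataView is a live, bindable window over the same table.
Dim view As New DataView(ds.Tables("Employee"))
view.RowFilter = "dept_id = 10"
view.Sort = "emp_lname ASC"
' view can now be assigned to a grid's DataSource property.
```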


What's New in ADO.NET 2.0?

The overall design goal of ADO.NET 2.0 is to allow a higher degree of interoperability between data accessed relationally, accessed as XML, and accessed as custom objects. ADO.NET 2.0 was designed to make it easier for developers to use the appropriate model when desired, within and across applications.

The notable features supported by ADO.NET 2.0 are:

  • Allows multiple updates in a single batch operation. There's a new UpdateBatchSize property on DataAdapters that supports batched updates.
  • Converting a DataReader into a DataSet using the GetDataReader method.
  • Accessing two data reader objects simultaneously on one connection (Multiple Active Result Sets, or MARS).
  • Asynchronous data access.
  • System.Data.Common.DbProviderFactory to create the standard connection, command, data reader, table, parameter, permissions, and data adapter classes.
  • Data paging using the ExecutePageReader method.
  • Binary DataSet remoting, that allows a dataset to be serialized using a binary format when employing .NET remoting.

These features build on the existing ADO.NET object model rather than replacing it.
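For instance, the batch-update feature amounts to setting one property on an existing adapter; a sketch, assuming a configured SqlDataAdapter da and a filled DataSet ds:

```vb
' ADO.NET 2.0: send pending changes in batches of 50 rows per round
' trip instead of one row at a time.
da.UpdateBatchSize = 50
da.Update(ds, "Employee")
```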

Advantages and Disadvantages of ADO.NET

The benefits of using ADO.NET are as follows:

  • Performance: Extremely fast
  • Optimized SQL Provider: Provider that is highly optimized for interaction with SQL Server
  • XML Support: Everything you do in ADO.NET at some point will boil down to the use of XML
  • Disconnected Operation Model: Remarkably efficient and scalable architecture
  • Rich Object Model: The entire ADO.NET architecture is built on a hierarchy of class inheritance and interface implementation

The drawbacks of using ADO.NET are:

  • Managed-Only Access: Cannot utilize the ADO.NET architecture from anything but managed code. This means that there is no COM interoperability allowed for ADO.NET. Therefore, to take advantage of the advanced SQL Server Data Provider and any other features such as DataSets, XML internal data storage, and so forth, your code must be running under the CLR.
  • Only three managed data providers: Unfortunately, if you need to access any data that requires a driver that cannot be used through either an OLE DB provider or the SQL Server data provider, you may be out of luck. However, the good news is that the OLE DB provider for ODBC is available for download from Microsoft. At that point, the downside becomes one of performance: you are invoking multiple layers of abstraction as well as crossing the COM interop gap, which incurs some overhead.

Future of ADO.NET

Future versions of ADO.NET are expected to take care of the following:

  • Connected cursor models and a higher-level API called DataSpaces (Virtual Storage Area), similar in concept to the Entity bean of Enterprise Java Beans, were presented for developer feedback as possible enhancements to the ADO.NET model.
  • Expect other data sources, such as Oracle databases, to have custom .NET data providers in the future.
  • ObjectSpaces (treat data as an object), which provides a protective business logic layer for your source data.
  • A managed schema object model for ADO.NET will replace the ADO Extensions for DDL and Security (ADOX). The future schema object model is based on the DataSet object and supports read/write access to Table, Column, and Constraint objects.
  • Oracle and ODBC drivers will be implemented as .NET (formerly "managed") data providers.


About the Author

Ramesh Kumar Nagarajan

I work for an MNC in Chennai and have more than six years of experience in Microsoft solution design and development.
