Differential Development, Part 1

Introduction

If you think of a software solution as a race across the ocean through a narrow maze of icebergs, then companies start out by charting that maze. They have a goal (the finish line) and set checkpoints (milestones). They analyze where the currents are, where the icebergs are stable and where they drift, the weather conditions, the water temperature, the wind speed, right down to the native sea-life. Then, they try to pilot a supertanker through the maze and get to the end as quickly as possible. Many seasoned project managers in their skipper-hats will attest that making small course corrections early on can save a lot of effort and time when aiming the supertanker at a narrow channel across the ocean. So, they have to make certain the channel they are aiming for is the right channel. In a sense, project managers and architects are responsible not only for steering the ship, but also for predicting the future. If the finish line moves halfway through the race, it either takes a lot longer to reach, or the supertanker simply runs out of fuel (funding) and is dead in the water.

What is 2D?

2D stands for Differential Development. 2D is a stronger, more powerful approach to developing applications: it leverages bleeding-edge technology and the opinions of some very progressive thinkers, and it ardently rejects the notion of freezing any part of the design. It is important to know that the concepts embraced by 2D are not unproven or experimental. 2D seeks to synergistically unite compatible methodologies from a broad spectrum of leading IT experts. Much of what 2D advocates has been in practice and successfully implemented within the industry. 2D seeks to make a supertanker-sized jet-ski that can handle the load of corporate business needs and effortlessly zip across the turbulent waters of changing requirements. It can also be a lot of fun to ride.

Structured Programming vs. Agile Development

When constructing a building, certain things follow a set order. First a lot or location is established, then a foundation is laid, a framework is erected, and so forth. It is impractical to reverse earlier decisions such as location halfway through construction, and the further along a construction project gets, the more costly and impractical changing the design becomes. A software application has no comparable physical constraints, so making even fundamental design changes at any point should be far less costly; yet traditional structured development still treats late design changes much the way construction does.

An Agile development methodology, by contrast, can support fundamental changes at any level without a complete re-write, and do so without unleashing a wave of new bugs. This is important to understand. Very important: fundamental changes at any level, without necessitating a rewrite.

In many situations, decisions such as renaming a base class or namespace, changing data types on an interface, renaming variables, or even changing the base data model are agonizing because they can require extensive revisions throughout the code and may also introduce new and unexpected bugs. The fear of "breaking the interface" (changing the face of one object so that it becomes unrecognizable to the rest of the objects in your application tree) results in leaving things as they are: often a compromised design that meets the functional requirements, but is not as efficient, modular, or agile as it could have been had the developers been able to apply what they have since learned.

This traditional approach to product architecture has a fundamental and unavoidable flaw:
The most important decisions are made at the start of a project, when the team has the least road experience and the highest degree of uncertainty.

These are critical design decisions that involve not only the foundation of the relational data model but also the architecture and interrelation of the layers/tiers between the UI and the database. How many times have you heard the word "re-architect" mentioned in reference to a project halfway through the development life cycle? I would venture to guess not very often. That is because re-architecting is almost always synonymous with a major re-write. And like rebuilding a house, the very act of committing to re-architecture implies a demolition of the existing product and a devaluation of the time and effort spent on it. Usually, such drastic measures are taken only after enough enhancement and feature requests unsupportable by the current version have piled up, or when current product performance is unacceptable and cannot be resolved with hardware upgrades. More than likely, the bigger and more complex projects (supertankers) pose a much greater challenge to re-architecting than a simple, single-user application (jet-ski).

Re-Architecting vs. Re-Factoring

Although Re-Architecting is a major undertaking that cannibalizes the existing design, Re-Factoring is actually quite the contrary. In fact, many developers have been secretly refactoring their code for years without saying a word. Some do not call it refactoring, instead referring to it as "tweaking" or "tuning," but the principle is the same. A developer builds a class or module, then goes back and removes redundant code, adds references to remote error handlers, renames variables or re-classifies their types, adds attributes and comments, and so on. When they are done, the object, class, or module they submit to version control has already gone through several personal revisions, versions, and iterations before it is added to the application. Each developer has their own style and their own experience of what works better in different situations. Development environments in corporate IT often force developers to adopt uniform coding standards that can be as ambiguous as "good user experience" (one of my most memorable functional requirements) or as specific as variable naming conventions. Though there are many varieties of coding standards and best practices, there seem to be surprisingly few flavors of refactoring methodology, leaving the process mostly in the hands of developers.

Differential Development (2D) using Refactoring principles provides the following advantages:

  • Keeps the application up to date with the latest technology.
  • Keeps the design up to date and in line with expectations and experience.
  • Lets the dev team feed what they learn along the way back into the design at every level, including the core libraries.
  • Produces modular components that can effortlessly adapt to changing business requirements.
  • Provides dynamic, ongoing testing of the product.
  • Establishes a well-practiced process for applying changes at any level of the application and for understanding and managing their impact on the rest of it.
  • Improves interoperability within the application layers.
  • Keeps the data model unfrozen, so it can change and evolve throughout the development life cycle without significantly impacting deadline or cost.

The advantage of refactoring over re-architecting is that the product design is in a constant state of revision, implementation, and testing. There is no "set" design and nothing is frozen. Any change to the design, anywhere, is propagated throughout the product via dependency chains and references. You can call it "Extreme Development" or "Dynamic Design" or whatever makes sense, but the concept is important and quite powerful.


What Is Needed for 2D?

Proper implementation of the 2D methodology is both novel and extremely powerful. But until now, the methodology has been mostly idealistic, much like "it would sure be nice to cure the common cold someday." Many of the building blocks needed for a successful 2D implementation have existed for some time. Many more are just now beginning to emerge.

True Object-Oriented Development Environment

An object-oriented development environment supporting true inheritance and polymorphism is well suited to refactoring methodologies. In fact, a great deal of the work is already done for us. Extending base classes from which application-layer objects are derived allows us to instantly propagate design changes throughout the inheritance chain by virtue of the compiler's rules. Changing a variable name, type, or interface property in the base class will therefore cascade those changes through any derived instance of that class. The same applies to non-overridden methods and functions. However, accessors and functions calling those properties would still have to be modified in code (via find/replace or some other IDE tool).
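
As a brief illustration (the class and member names below are hypothetical, not part of any 2D library), a change made once in a base class is picked up by every derived class at the next compilation:

// Illustrative sketch only: all names here are hypothetical.
public abstract class DataObjectBase
{
    // Renaming this member, or changing its type, in the base class
    // cascades to every derived class the next time the project compiles.
    protected int recordId;

    public int RecordId { get { return recordId; } set { recordId = value; } }

    // A non-overridden method: derived classes pick up changes made here.
    public virtual string Describe() { return "Record " + recordId; }
}

// The derived application-layer object never re-declares RecordId,
// so base-class changes propagate to it automatically.
public class CustomerObject : DataObjectBase
{
    private string name;
    public string Name { get { return name; } set { name = value; } }
}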

Test Driven Development environment

To use the refactoring methodology effectively, we also need to implement a new approach to the development cycle: Test-Driven Development (TDD). Read all about TDD here. It is well worth the read and also links to an excellent tool (NUnit) that is free to download and use. Using embedded test cases in this manner is essential for maintaining a functioning build during and after refactoring. It allows the entire application to be tested quickly and thoroughly to ensure nothing unexpected broke when a change of any scale was implemented. This is a rather sturdy and comforting safety net.
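
For example, a minimal embedded test case written against NUnit's classic API might look like the following; the OrderCalculator class is a hypothetical stand-in for whatever object is currently being refactored:

using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical class under test.
public class OrderCalculator
{
    private readonly List<decimal> items = new List<decimal>();
    public void AddLineItem(decimal amount) { items.Add(amount); }
    public decimal Total
    {
        get { decimal t = 0; foreach (decimal i in items) t += i; return t; }
    }
}

[TestFixture]
public class OrderCalculatorTests
{
    // Runs on every build; if a rename or type change breaks behavior
    // anywhere in the dependency chain, the failure shows up immediately.
    [Test]
    public void Total_AddsLineItems()
    {
        OrderCalculator calc = new OrderCalculator();
        calc.AddLineItem(10.50m);
        calc.AddLineItem(4.50m);
        Assert.AreEqual(15.00m, calc.Total);
    }
}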

Data Integrated Abstraction Layer Object Generator (DIALOG)

This is a critical element of 2D. The first generation of DIALOG is wrapping up initial development and is being released on SourceForge under the General Public License; search for DIALOG to find it. It is THE missing link to effectively implementing 2D. The piece may be challenging to explain but, essentially, it builds an abstract data layer with complete, functioning data objects based on a relational data model. All database interoperability is auto-generated from the schema, and any changes made to the relational entities within the schema are instantly reflected in the application data layer. The DIALOG tool also enforces naming consistency, constraints, and relational and referential integrity, and provides data flow between the application layers. The DIAL objects are the next generation of data objects, with the capacity to replace ADO.NET DataSets in the middle/data tier. DIAL objects can reflect custom data shapes, views, and procedures, with all data fields exposed as .NET property accessors, and they provide unlimited nested, relation-based collections that can be traversed just like regular, strongly typed .NET collections. DIAL objects can also maintain state as strongly typed disconnected datasets and support serialization. I have personally played with an Alpha DIALOG release and find it to be extremely powerful, fast, and efficient.
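
DIALOG's actual generated output is not reproduced here, so the following is only a speculative sketch of what a generated DIAL object could resemble; every type and member name is an assumption for illustration, not DIALOG's real API:

using System;
using System.Collections.Generic;

// Speculative example only: names and structure are assumptions,
// not the real generated code.
[Serializable]
public class CustomerDial
{
    private int customerId;
    private string name;

    // Each database column exposed as a strongly typed property accessor.
    public int CustomerId { get { return customerId; } set { customerId = value; } }
    public string Name { get { return name; } set { name = value; } }

    // Nested, relation-based collection (Customer -> Orders) that can be
    // traversed like any other strongly typed .NET collection.
    public List<OrderDial> Orders = new List<OrderDial>();
}

[Serializable]
public class OrderDial
{
    private int orderId;
    public int OrderId { get { return orderId; } set { orderId = value; } }
}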

Dynamic User Interface (DUI)

This principle can greatly decrease revision time and provide overall consistency in the look and feel of applications. Though there are a few fledgling third-party dynamic UI builders out there, the best option is to invest a little time and create one specifically for the company to use across its applications. This helps ensure that each product brand has a consistent look and feel and, at the same time, does not end up looking like a competitor's product. There are several different approaches to DUI.

.NET Property Grid

One simple approach is the .NET PropertyGrid control, which works well for exposing public object members. The object members can be further decorated with attribute tags to set custom UITypeEditors, appearance, font, color, and so forth. This single, powerful grid-based control allows a variety of data and object interfaces to be displayed. Although this is a quick and easy approach, it is not as complete and thorough as the more methodical DDUI option described below. Get a great overview and tutorial on this powerful control here.
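
As a minimal sketch (the Person class and its members are hypothetical), the grid only needs an object assigned to its SelectedObject property; attribute tags then shape how each member is presented:

using System.ComponentModel;
using System.Windows.Forms;

// Hypothetical object to display; the attributes control how the
// PropertyGrid groups, describes, and hides each member.
public class Person
{
    [Category("Identity"), Description("Full display name")]
    public string Name { get; set; }

    [Category("Identity"), Browsable(false)]   // hidden from the grid
    public int InternalId { get; set; }
}

public class GridForm : Form
{
    public GridForm()
    {
        PropertyGrid grid = new PropertyGrid();
        grid.Dock = DockStyle.Fill;
        grid.SelectedObject = new Person { Name = "Ada" };   // expose the object's members
        Controls.Add(grid);
    }
}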

Embedded Attributes:

.NET provides the ability to create custom attributes and embed them right in the source code. These attributes can store just about anything, so why not UI data?

Usage Example:

private string name;

//Define what type of control will be used to expose this property
//on the screen, and set up the screen name and security.
[UI_SCREEN("WelcomeFrm")]
[UI(typeof(System.Windows.Forms.TextBox), "Name", "Name", "Text",
    DUITEST.UI.roles.User)]
//Expose the property.
public string Name { get { return name; } set { name = value; } }

Attribute definitions are basically just like any other class. You derive from the System.Attribute base type. An attribute definition for UI_SCREEN is provided below.

//AllowMultiple lets the same property be assigned to more than one screen.
[AttributeUsage(AttributeTargets.Property, AllowMultiple = true)]
public class UI_SCREEN : Attribute
{
    string screen;

    public UI_SCREEN(string ScreenName) : base()
    {
        this.screen = ScreenName;
    }

    public string ScreenName { get { return screen; } }
}

The UI_SCREEN attribute stores a single string value to indicate the name of the screen where the member for which the attribute is defined will appear. You can use this attribute multiple times if the same value is to appear on multiple screens. The [UI] attribute is more robust, providing the control type that should be used to display the data, the name of the control, how that name is displayed on the screen, the property of the control which binds to the value, and the security role membership of the member (property). These attributes can be assigned manually or programmatically to the exposed members of DIAL objects based on how a developer would like the controls laid out on the screen. Then, a simple parsing routine could be written to extract attribute data from DIAL object members using the tools provided in the System.Reflection namespace.
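
As a hedged sketch of such a routine (the type passed in would be whichever DIAL object you want to lay out), the following walks a type's public properties with System.Reflection and pulls out its UI_SCREEN assignments:

using System;
using System.Reflection;

public static class UiAttributeParser
{
    // Walks the public properties of a type and extracts UI_SCREEN
    // attribute data so screens can be composed at run time.
    public static void DumpScreenAssignments(Type objectType)
    {
        foreach (PropertyInfo prop in objectType.GetProperties())
        {
            object[] attrs = prop.GetCustomAttributes(typeof(UI_SCREEN), true);
            foreach (UI_SCREEN screen in attrs)
            {
                Console.WriteLine("{0}.{1} appears on screen '{2}'",
                    objectType.Name, prop.Name, screen.ScreenName);
            }
        }
    }
}

The same loop can be extended to read the [UI] attribute and hand its control type, caption, binding property, and role off to whatever code builds the screen.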

DDUI:

DDUI is by far the most powerful and most robust method of managing dynamic UI. DDUI stands for Data Driven User Interface. Though the topic could easily fill an entire paper in itself, the essential basis of DDUI is to store all of your application UI data in relational data tables. The table definitions include columns and rows that specify not only the screen layout, but also the absolute coordinates and related properties of all visual controls. Though it seems like a lot of extra work, the full power and speed of this approach can be realized with a few simple database scripts. A script can select relational data entities and procedures/views from a sysobjects table (or, if implementing DIALOG, from the DIALObjects table), then create an entry in the [SCREENS] data table for each entity. Then, through discovery, it can obtain the fields in each entity (or scan the DIALObjectMembers table) along with their data types, and insert an entry into the related [CONTROLS] table for each discovered field. Those two simple scripts can stub out approximately 80% of your UI design work. You then can define DDUI rules in a separate table.

Rules such as: varchar values under 256 characters are always displayed in a text box, and values over 256 in a multi-line text box; numeric fields linked to lookup tables are displayed in either a dropdown list (if there are more than five options) or an option group; bool/bit values are displayed as check boxes; and so forth. You also can define UI standards such as control styles, colors, fonts, width, docking, and height in a separate table. Then, changing the font in a set of text boxes in your application is as simple as changing one entry in the database, instead of manually editing each and every control you wish to change. This UI consistency is by far the biggest advantage of DDUI. Imagine each text box, each button, each option bar, and each dropdown list maintaining absolute and uniform consistency on every screen of your application. Imagine defining several types of text boxes of varying width and height, or allowing the user to dock controls in real time to any edge of the application screen; instantly changing screen color and font size to suit user preferences on all of your controls; providing internal themes and skins for the look and feel of your application; or changing regional settings with the click of a button.
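
As a hedged sketch (the table and column names below are illustrative stand-ins for the [CONTROLS] schema, not a fixed contract), a routine that turns stored control definitions into live WinForms controls might look like this:

using System.Data;
using System.Drawing;
using System.Windows.Forms;

public static class DduiBuilder
{
    // Builds the controls for one screen from rows resembling the
    // [CONTROLS] table. Column names are illustrative, not a required schema.
    public static void BuildScreen(DataTable controlRows, Form target)
    {
        foreach (DataRow row in controlRows.Rows)
        {
            Control ctl;
            switch ((string)row["ControlType"])
            {
                case "CheckBox":
                    ctl = new CheckBox();
                    break;
                case "MultiLineText":
                    TextBox multi = new TextBox();
                    multi.Multiline = true;
                    ctl = multi;
                    break;
                default:
                    ctl = new TextBox();
                    break;
            }

            // Position and size come straight from the data, so changing one
            // row (or one rule) restyles the control everywhere it appears.
            ctl.Name = (string)row["FieldName"];
            ctl.Location = new Point((int)row["X"], (int)row["Y"]);
            ctl.Width = (int)row["Width"];
            target.Controls.Add(ctl);
        }
    }
}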

Here is a sample of a fully normalized, basic DDUI schema that supports DIALOG.

[DDUISchemav1.jpg]

Next time, we will get our hands dirty with the ALPHA version of DIALOG and see how we can leverage the tool to auto-generate our data definition layers.



About the Author

Eric Litovsky

Eric Litovsky has been a .NET developer since 2000. He has written and reviewed articles for MSDN, CodeBase, and TransitiveT. He is a staunch advocate of Agile development and works with many forward-thinking developers who share his passion. Eric has been writing code since 1995 and has been promoting non-structured methodologies since COM. When he's not glued to the keyboard, he can be seen playing bass with his local jazz combo or keyboards for Sons Of Nothing. He currently resides in the Rocky Mountains and works for 3M Health Information Systems.
