Performing Various Iteration Methods in .NET

Environment: .NET

Introduction

I've been implementing numerical libraries in .NET and have come to some conclusions about iteration performance. My classes must hold large amounts of data and iterate through that data as quickly as possible. To compare various methods, I created a simple class called Data that encapsulates an array of doubles.

Method #1: Enumeration

Data implements IEnumerable. Its GetEnumerator method returns a DataEnumerator, a nested class that implements IEnumerator.

...
    public IEnumerator GetEnumerator()
    {
      return new DataEnumerator( this );
    }

    internal class DataEnumerator : IEnumerator
    {
      private Data internal_ = null;
      private int index = -1;

      public DataEnumerator( Data data )
      {
        internal_ = data;
      }

      public object Current
      {
        get
        {
          return internal_.Array[index];
        }
      }

      public bool MoveNext()
      {
        index++;
        if ( index >= internal_.Array.Length )
        {
          return false;
        }
        return true;
      }

      public void Reset()
      {
        index = -1;
      }
    }
...
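With IEnumerable implemented, the class can be consumed with foreach. A minimal sketch of such a loop (the constructor argument and variable names here are my assumptions, not code from the downloadable source):

    // Iterate via the enumerator. Current returns each double
    // boxed as an object; foreach casts it back to double.
    Data data = new Data( 1000000 );
    double d;
    foreach ( double value in data )
    {
      d = value;
    }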

Method #2: Indexing

I implemented an indexer on the class, which simply delegates to the indexer of the underlying array.

    public double this[int position]
    {
      get
      {
        return array_[position];
      }
    }
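The indexer is then used from an ordinary for loop. A sketch, assuming Data also exposes a Length property for the element count (that property is my assumption; the original could equally use data.Array.Length):

    // Index through the class; no boxing, but each access goes
    // through the indexer and the array's bounds check.
    double d;
    for ( int j = 0; j < data.Length; j++ )
    {
      d = data[j];
    }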

Method #3: Indirect Array

I created a property to access the array.

    public double[] Array
    {
      get
      {
        return array_;
      }
    }

When iterating, I called the Array property and then its index operator.

          d = data.Array[j];

Method #4: Direct Array

I created a reference to the array.

        double[] array = data.Array;

Then, I iterate through that reference.

          d = array[j];

Method #5: Pointer Math

Finally, I tried improving performance by iterating through the array in Managed C++, using pointer manipulation.

        static void iterate( Data& data )
        {
          double d;
          // __pin keeps the garbage collector from moving the
          // array while we walk it with a raw pointer.
          double __pin* ptr = &( data.Array[0] );
          for ( int i = 0; i < data.Array.Length; i++ )
          {
            d = *ptr;
            ++ptr;
          }
        }

I called it this way:

        Pointer.iterate( data );

Conclusions

To test the different methods, I allocated an array of 1,000,000 doubles and iterated over all of them. I repeated each test 1,000 times to reduce timing noise. Here are the results...
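The timing harness was roughly of this shape (this is a sketch of the benchmark structure, not the article's actual driver; the Environment.TickCount timing and variable names are my assumptions):

    // Time 1,000 passes over 1,000,000 doubles for one method;
    // shown here for Method #4, the direct array reference.
    Data data = new Data( 1000000 );
    double d;
    int start = Environment.TickCount;
    for ( int run = 0; run < 1000; run++ )
    {
      double[] array = data.Array;
      for ( int j = 0; j < array.Length; j++ )
      {
        d = array[j];
      }
    }
    int elapsedMs = Environment.TickCount - start;

The same outer loop wraps each of the five inner iteration methods so that only the access technique varies between runs.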

[Chart: relative iteration times for the five methods]

Enumeration is always slow. That's not surprising: the enumerator returns each double through the general-purpose object interface, so every access boxes the value and requires a cast back to double. The three operator/property methods differed only very slightly; the JIT compiler probably optimizes them all to similar code. Using pointer math to traverse the raw data was significantly faster, most likely because it avoids per-access array bounds checking.

In summary, if you have large amounts of data and performance is critical, consider using Managed C++ with pointer arithmetic.

Acknowledgements

Thanks to Mark Vulfson of ProWorks for tips on using the Flipper Graph Control. Also, to my colleagues Ken Baldwin and Steve Sneller at CenterSpace Software.

About the Author

Trevor Misfeldt is the co-founder and CEO of CenterSpace Software, which specializes in .NET numerical method libraries. Trevor has worked as a software engineer for eight years. He has held demanding positions for a variety of firms using C++, Java, .NET, and other technologies, including Rogue Wave Software, CleverSet Inc., and ProWorks. He is coauthor of Elements of Java Style, published by Cambridge University Press, and is currently working on a follow-up book for C++. He has also served on a course advisory board of the University of Washington. His teams have won the JavaWorld "GUI Product of the Year" and XML Magazine "Product of the Year" awards. Trevor holds a BSc in Computer Science from the University of British Columbia and a BA in Economics from the University of Western Ontario.

Downloads

Download source - 6 Kb

