Code Parallelization in C#



Welcome! Today, I would like to introduce you to Code Parallelization in C#. I hope you enjoy it.

Serial Computing

Computer software used to be written for serial computation on a serial computer, which operates internally on one bit or digit per clock cycle. To solve a problem, an algorithm is constructed and implemented as a serial stream of instructions that execute on a single central processing unit. Only one instruction may execute at a given time, and only after that instruction has finished does the next one execute.

Parallel Computing

Parallel computing is a type of computation in which many calculations or the execution of processes are carried out simultaneously. Large problems get divided into smaller problems, which then get solved at the same time. There are many different forms of parallel computing, including the following: bit-level, instruction-level, data, and task parallelism.

Bit-level Parallelism

Bit-level parallelism is a form of parallel computing based on increasing processor word size. A processor word is the natural unit of data that is used by a particular processor design. A word is a fixed-sized piece of data that is handled as a unit by the instruction set or the processor's hardware.

Increasing the word size reduces the number of instructions the processor must execute to perform an operation on variables whose sizes exceed the length of a word.
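To make this concrete, here is a small sketch (illustrative only, not tied to any particular CPU): on a machine with a 32-bit word, adding two 64-bit values takes several steps with a carry, while a 64-bit word machine handles the same addition in one operation.

```csharp
using System;

class WordSizeDemo
{
    static void Main()
    {
        // Two 64-bit operands, split into 32-bit halves.
        uint aLow = 0xFFFFFFFF, aHigh = 0x00000001;
        uint bLow = 0x00000001, bHigh = 0x00000002;

        // On a 32-bit word machine: add the low halves, detect the carry,
        // then add the high halves plus the carry -- several instructions.
        uint sumLow = aLow + bLow;               // wraps around to 0
        uint carry = (sumLow < aLow) ? 1u : 0u;  // overflow of the low half
        uint sumHigh = aHigh + bHigh + carry;

        // On a 64-bit word machine, the same addition is a single operation.
        ulong a = ((ulong)aHigh << 32) | aLow;
        ulong b = ((ulong)bHigh << 32) | bLow;
        ulong sum = a + b;

        // Both approaches produce the same 64-bit result.
        Console.WriteLine(sum == (((ulong)sumHigh << 32) | sumLow)); // True
    }
}
```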

Instruction-level Parallelism

ILP (Instruction-level parallelism) measures how many of the instructions in a computer program can be executed simultaneously.

The two approaches to instruction level parallelism are:

  • Dynamic parallelism: Hardware
  • Static parallelism: Software

With dynamic parallelism, the processor decides at run time which instructions to execute in parallel. With static parallelism, the compiler decides which instructions to execute in parallel.

Data Parallelism

Data parallelism is parallelization that takes place across multiple processors in parallel computing environments. Data parallelism focuses on distributing the data across different nodes, which operate on the data in parallel. Data parallelism can be applied on data structures such as matrices and arrays by working on each element in parallel.
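A minimal sketch of data parallelism in C# (names are illustrative): each array element is transformed independently, so Parallel.For can split the iterations across processors.

```csharp
using System;
using System.Threading.Tasks;

class DataParallelDemo
{
    static void Main()
    {
        // Square every element of an array in parallel.
        // Each index is independent, so iterations can run on different cores.
        int[] data = { 1, 2, 3, 4, 5, 6, 7, 8 };

        Parallel.For(0, data.Length, i =>
        {
            data[i] = data[i] * data[i];
        });

        Console.WriteLine(string.Join(", ", data)); // 1, 4, 9, 16, 25, 36, 49, 64
    }
}
```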

Task Parallelism

Task parallelism, also known as "function parallelism" or "control parallelism," is a form of parallelization of computer code across multiple processors in parallel computing environments. Task parallelism distributes tasks, performed concurrently by threads, across different processors.
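In contrast to data parallelism, task parallelism runs *different* pieces of work at the same time. A small sketch using Parallel.Invoke (the tasks here are arbitrary examples; note their output order is not guaranteed):

```csharp
using System;
using System.Threading.Tasks;

class TaskParallelDemo
{
    static void Main()
    {
        // Three unrelated pieces of work run concurrently,
        // each potentially on a different processor.
        Parallel.Invoke(
            () => Console.WriteLine("Sum:     " + Sum(1000)),
            () => Console.WriteLine("Max:     " + Math.Max(3, 7)),
            () => Console.WriteLine("Message: " + "hello".ToUpper()));
    }

    static long Sum(int n)
    {
        long total = 0;
        for (int i = 1; i <= n; i++) total += i;
        return total;
    }
}
```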


Let's see parallelism in action!

Start a new C# Windows Forms project. Add a Panel and a Button to it. Name it anything descriptive, but remember, as always, my object names may differ from yours. You will be adding a ListBox dynamically through code.

Figure 1: Design

Ensure you can make use of the necessary namespaces by including them in your project:

using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;
using System.Windows.Forms;

The System.Collections.Concurrent namespace provides thread-safe collection classes that can be used instead of the corresponding types in the System.Collections.Generic and System.Collections namespaces.
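As a quick illustration of why these collections matter (a hypothetical snippet, separate from this article's project): a ConcurrentBag can be written to safely from parallel iterations, whereas a plain List is not thread-safe.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ConcurrentDemo
{
    static void Main()
    {
        // ConcurrentBag<T> tolerates concurrent Add calls;
        // List<T> used this way could corrupt its internal state.
        var squares = new ConcurrentBag<int>();

        Parallel.For(0, 10, i => squares.Add(i * i));

        Console.WriteLine(squares.Count); // 10
    }
}
```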

The System.Threading.Tasks namespace provides types that simplify the work of writing asynchronous code. The System.Linq namespace provides classes that support queries that use LINQ. Add the following members to your form class:

      const int Steps = 2000;
      static ListBox lb = new ListBox();

Add the Form_Load event to create the ListBox and display it properly inside the Panel:

      private void Form1_Load(object sender, EventArgs e)
      {
         lb.Width = panel1.Width - 10;
         lb.Height = panel1.Height - 10;

         lb.Left = 5;
         lb.Top = 5;

         lb.Visible = true;

         // Attach the ListBox to the Panel so it displays
         panel1.Controls.Add(lb);
      }

Add the LINQ method that calculates pi in parallel:

      static double ParallelLinq()
      {
         double dblIncrement = 1.0 / (double)Steps;

         return (from pe in ParallelEnumerable.Range(0, Steps)
                 let objX = (pe + 0.5) * dblIncrement
                 select 4.0 / (1.0 + objX * objX)).Sum() * dblIncrement;
      }

The ParallelEnumerable class provides a set of methods for querying objects that implement ParallelQuery<TSource>. The ParallelQuery<TSource> class represents a parallel LINQ sequence. The ParallelEnumerable.Range method generates a parallel sequence of integral numbers within a specified range.
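A smaller, self-contained illustration of ParallelEnumerable.Range (separate from the pi calculation): the range it produces is a parallel sequence, so the query operators that follow may run in parallel.

```csharp
using System;
using System.Linq;

class PlinqDemo
{
    static void Main()
    {
        // Sum the squares of 0..999 with a parallel LINQ query.
        // ParallelEnumerable.Range returns a ParallelQuery<int>,
        // so Select and Sum are executed in parallel.
        long total = ParallelEnumerable.Range(0, 1000)
                                       .Select(n => (long)n * n)
                                       .Sum();

        Console.WriteLine(total); // 332833500
    }
}
```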

Add the next Parallel method:

      static double ParallelForPi()
      {
         double dblSum = 0.0;

         double dblIncrement = 1.0 / (double)Steps;

         object objPI = new object();

         Parallel.For(0, Steps, () => 0.0, (i,
            LoopState, objLocal) =>
         {
            double dblX = (i + 0.5) * dblIncrement;

            return objLocal + 4.0 / (1.0 + dblX * dblX);

         }, objTemp => { lock (objPI) dblSum += objTemp; });

         return dblIncrement * dblSum;
      }

The Parallel.For method executes a for loop in which iterations may run in parallel, and the state of the loop can be monitored and manipulated.
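A short sketch of monitoring and manipulating loop state (a separate toy example, not part of the pi project): calling Break on the ParallelLoopState ends the loop early, and the returned ParallelLoopResult reports what happened.

```csharp
using System;
using System.Threading.Tasks;

class LoopStateDemo
{
    static void Main()
    {
        // Break() stops scheduling iterations beyond the current index,
        // so the loop finishes early once the condition is met.
        ParallelLoopResult result = Parallel.For(0, 1000, (i, state) =>
        {
            if (i == 100)
                state.Break();
        });

        Console.WriteLine(result.IsCompleted);          // False
        Console.WriteLine(result.LowestBreakIteration); // 100
    }
}
```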

Add the next parallel method, ParallelForEachRangePartitioner:

      static double ParallelForEachRangePartitioner()
      {
         double sum = 0.0;

         double increment = 1.0 / (double)Steps;

         object pi = new object();

         Parallel.ForEach(Partitioner.Create(0, Steps), ()
            => 0.0, (tplRange, LoopState, objLocal) =>
         {
            for (int i = tplRange.Item1; i < tplRange.Item2; i++)
            {
               double x = (i + 0.5) * increment;

               objLocal += 4.0 / (1.0 + x * x);
            }

            return objLocal;

         }, local => { lock (pi) sum += local; });

         return increment * sum;
      }

The Parallel.ForEach method executes a foreach operation on a Partitioner in which iterations are allowed to run in parallel.
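To see what the partitioner contributes on its own (a simplified sketch using a plain summation rather than the pi formula): each parallel iteration receives a Tuple<int, int> range instead of a single index, which reduces scheduling overhead when the loop body is cheap.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class PartitionerDemo
{
    static void Main()
    {
        // Partitioner.Create(0, 1000) splits the index range into chunks;
        // each chunk is summed locally, then merged atomically.
        long total = 0;

        Parallel.ForEach(Partitioner.Create(0, 1000), range =>
        {
            long subtotal = 0;
            for (int i = range.Item1; i < range.Item2; i++)
                subtotal += i;

            Interlocked.Add(ref total, subtotal);
        });

        Console.WriteLine(total); // 499500
    }
}
```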

Now that we have three different parallel methods, let's run a test to see which is fastest. To time each operation, you will need to make use of a Stopwatch object, as follows:

      static void Duration<T>(Func<T> Calc)
      {
         var swPi = Stopwatch.StartNew();
         var Result = Calc();

         lb.Items.Add("Time Elapsed: " + swPi.Elapsed +
            " - PI Value: " + Result);
      }

This adds the pi result of each operation to the ListBox, along with the time each took to calculate. Finally, add the Calculate Button's Click event:

      private void btnCalc_Click(object sender, EventArgs e)
      {
         Duration(() => ParallelLinq());
         Duration(() => ParallelForPi());
         Duration(() => ParallelForEachRangePartitioner());
      }

Figure 2: Running

The code for this article is available on GitHub.


There are numerous ways to achieve code parallelization. Have fun exploring these and more.

About the Author

Hannes du Preez

Hannes du Preez is a former Microsoft MVP for Visual Basic (2008 to 2017). He loves technology, Visual Basic, and C#, and he enjoys writing articles that prove Visual Basic is more powerful than most believe. You are most welcome to reach him at: ojdupreez1978[at]gmail[dot]com
