Optimizing 64-Bit Programs

Andrey Karpov
"Program Verification Systems"

Abstract

This article considers several ways to increase the performance of 64-bit Windows applications.

Introduction

Developers often ask about the performance of 64-bit solutions and about ways to improve it. This article examines some of these questions and then gives recommendations on optimizing program code.

1. The Result of Porting to 64-Bit Systems

In a 64-bit environment, old 32-bit applications run thanks to the WoW64 subsystem. This subsystem emulates a 32-bit environment by means of an additional layer between the 32-bit application and the 64-bit Windows API. In some places this layer is thin, in others thicker. For an average program, the performance loss caused by this layer is about 2%; for some programs it may be larger. Two percent is certainly not much, but you still have to take into account the fact that 32-bit applications run somewhat slower under a 64-bit operating system than under a 32-bit one.

Compiling code as 64-bit not only eliminates WoW64 but also brings a performance gain. The gain comes from architectural changes in the microprocessor, such as the larger number of general-purpose registers. For an average program, a simple recompilation yields an expected performance growth of 5-15%, but everything depends on the application and its data types. For instance, Adobe claims that its 64-bit "Photoshop CS4" is 12% faster than the 32-bit version.

Programs dealing with large data arrays may gain much more from the expanded address space. The ability to keep all the necessary data in RAM eliminates slow swapping operations; in this case, the speedup can be measured in multiples rather than in percent.

Consider an example: Alfa Bank integrated an Itanium 2-based platform into its IT infrastructure. As the bank's business grew, the existing system could no longer cope with the increasing workload: user service delays reached a critical level. Analysis showed that the system's bottleneck was not processor performance but a limitation of the 32-bit architecture's memory subsystem, which prevented efficient use of more than 4 GB of the server's address space. The database itself was larger than 9 GB, and its intensive use placed a critical load on the I/O subsystem. Alfa Bank purchased a cluster of two four-processor Itanium 2-based servers with 12 GB of RAM, which provided the necessary level of system performance and fault tolerance. According to company representatives, deploying the Itanium 2-based servers allowed the bank to eliminate these problems and cut costs.

2. Program Code Optimization

Optimization can be considered at three levels: optimization of microprocessor instructions, code optimization at the level of high-level languages, and algorithmic optimization that takes the peculiarities of 64-bit systems into account. The first is available only with tools such as assemblers and is too specialized to interest a wide audience; readers interested in this topic can consult the "Software Optimization Guide for AMD64 Processors" [2], AMD's guide to application optimization for the 64-bit architecture. Algorithmic optimization is unique to each task, and its consideration is beyond the scope of this article.

From the point of view of high-level languages such as C++, optimizing for the 64-bit architecture depends on the choice of optimal data types. Using homogeneous 64-bit data types allows the optimizing compiler to construct simpler and more efficient code, because there is no need to convert between 32-bit and 64-bit integers so often. Primarily this applies to variables used as loop counters, array indexes, and variables storing sizes. Traditionally, types such as int, unsigned, and long are used for these purposes. Under 64-bit Windows, which uses the LLP64 [3] data model, these types remain 32-bit. In a number of cases this results in less efficient code because of the additional conversions: for instance, to compute the address of an array element in 64-bit code, the compiler must first extend a 32-bit index to 64 bits.
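As a minimal sketch of the recommendation (the function name is illustrative, not from the article), a loop indexed with size_t needs no per-iteration index extension in a 64-bit build, because the counter already matches the pointer width:

```cpp
#include <cstddef>

// Sum an array using a size_t counter. On a 64-bit system the index
// is already 64 bits wide, so the address computation a[i] requires
// no extra sign-extension instruction; with an int counter the
// compiler would have to widen the index on every access.
double SumArray(const double *a, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i != n; ++i)   // counter matches pointer width
        sum += a[i];
    return sum;
}
```

On a 32-bit build this code costs exactly the same as an int-indexed loop, since size_t is 32 bits there.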

Using types such as ptrdiff_t and size_t is more effective because they have the optimal size for representing indexes and counters: on 32-bit systems they are 32 bits wide, on 64-bit systems 64 bits (see Table 1).

Table 1: Data type sizes in 32-bit and 64-bit versions of the Windows operating system.

  Type        32-bit Windows   64-bit Windows (LLP64)
  int         32 bits          32 bits
  long        32 bits          32 bits
  pointer     32 bits          64 bits
  ptrdiff_t   32 bits          64 bits
  size_t      32 bits          64 bits

Using ptrdiff_t, size_t, and derivative types can speed up program code by up to 30%. You can study an example of such optimization in the article "Development of Resource-intensive Applications in Visual C++" [4]. An additional advantage is more reliable code: using 64-bit variables as indexes lets you avoid overflows when dealing with large arrays containing several billion elements.
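A small illustration of the overflow point (the helper functions are hypothetical, written only to make the wraparound visible at small scale; fixed-width types are used so the behavior is the same on any platform):

```cpp
#include <cstdint>

// Advance an array index by n positions, at two widths.
// A 32-bit counter wraps around at 2^32, so in an array of more than
// four billion elements it would silently alias earlier elements;
// a 64-bit counter (like size_t on 64-bit Windows) keeps counting.
uint32_t AdvanceIndex32(uint32_t i, uint64_t n)
{
    return static_cast<uint32_t>(i + n);  // truncates to 32 bits
}

uint64_t AdvanceIndex64(uint64_t i, uint64_t n)
{
    return i + n;                         // full 64-bit arithmetic
}
```

Stepping one past UINT32_MAX wraps the 32-bit index back to 0, while the 64-bit index correctly reaches 4294967296.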

Altering data types is not an easy task, especially when the alteration is really necessary. The Viva64 static code analyzer is offered as a tool to simplify this process. Although it specializes in searching for 64-bit errors, following its recommendations on data type alteration can also considerably increase code performance.

3. Decreasing Memory Usage

After a program is recompiled as 64-bit, it starts consuming more memory than its 32-bit variant did. Often this increase is almost imperceptible, but sometimes memory consumption doubles. This happens for the following reasons:

  • Larger memory allocations for storing certain objects, such as pointers
  • Changed rules of data alignment in structures
  • Increased stack memory consumption

An increase in RAM consumption can often be tolerated; the advantage of 64-bit systems is precisely that a large amount of memory is available. There is nothing bad in the fact that a program that took 300 MB on a 32-bit system with 2 GB of memory takes 400 MB on a 64-bit system with 8 GB of memory: in relative terms, the program now occupies three times less of the available physical memory. There is no sense in fighting this growth of memory consumption; it is easier to add memory.

But the increase in consumed memory has one disadvantage: it causes a performance loss. Even though 64-bit code runs faster, fetching large amounts of data from memory cancels out all the advantages and can even reduce performance. Transferring data between memory and the microprocessor's cache is not a cheap operation.

Assume you have a program that processes a large amount of text data (up to 400 MB). It creates an array of pointers, each pointing to a successive word in the processed text. Let the average word length be five characters; then the program will require about 80 million pointers. A 32-bit variant of the program needs 400 MB + (80 million * 4 bytes) = 720 MB of memory, while a 64-bit version needs 400 MB + (80 million * 8 bytes) = 1040 MB. This is a considerable increase that may adversely affect program performance, and if there is no need to process gigabyte-sized texts, the chosen data structure is wasteful. Using 32-bit indexes of type unsigned instead of pointers is a simple and effective solution: the consumed memory is again 720 MB.
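The replacement described above can be sketched as follows (the structure and its names are hypothetical, invented for illustration; std::uint32_t is used as a fixed-width spelling of the article's unsigned-type index, 4 bytes on Windows):

```cpp
#include <cstdint>
#include <cstddef>
#include <string>
#include <vector>

// Instead of storing a pointer (8 bytes in a 64-bit build) to each
// word, store a 32-bit offset into the shared text buffer: 4 bytes
// per word, enough to address texts up to 4 GB.
struct WordIndex
{
    std::string text;                  // the processed text, one buffer
    std::vector<std::uint32_t> offsets; // offset of each word's first char

    const char *Word(std::size_t i) const
    {
        return text.c_str() + offsets[i];
    }
};
```

Building the index is a single scan over the text that pushes the starting offset of each word; lookups stay O(1), just like with raw pointers.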

A considerable amount of memory can also be wasted through the changed rules of data alignment. Consider an example:

struct MyStruct1
{
   char m_c;
   void *m_p;
   int m_i;
};

The size of this structure is 12 bytes in a 32-bit program but 24 bytes in a 64-bit one, which is wasteful. You can improve the situation by changing the order of the elements:

struct MyStruct2
{
   void *m_p;
   int m_i;
   char m_c;
};

The MyStruct2 structure still takes 12 bytes in a 32-bit program, but only 16 bytes in a 64-bit one. From the point of view of data access efficiency, MyStruct1 and MyStruct2 are equivalent. Figure 1 shows how the structure elements are laid out in memory.
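The effect can be checked directly with sizeof; a minimal sketch (the exact figures are platform-dependent, which is why the compile-time check below is stated as an inequality rather than fixed byte counts):

```cpp
#include <cstddef>

// Same members, different order. On a typical 64-bit target the first
// layout needs 24 bytes (7 padding bytes after m_c, 4 after m_i); the
// second needs only 16. On a 32-bit target both take 12 bytes.
struct MyStruct1
{
   char m_c;
   void *m_p;
   int m_i;
};

struct MyStruct2
{
   void *m_p;
   int m_i;
   char m_c;
};

// Ordering members by decreasing size never makes the structure larger.
static_assert(sizeof(MyStruct2) <= sizeof(MyStruct1),
              "reordered layout should not be larger");
```

Printing both sizes with sizeof on your own target is an easy way to see how much padding the compiler inserts.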

Figure 1. Layout of the MyStruct1 and MyStruct2 elements in memory.

It is not easy to give universal instructions on the order of elements in structures, but the common recommendation is to declare the members in order of decreasing size.

The last point is the growth of stack memory consumption: storing larger return addresses and wider aligned data increases the stack's size. Optimizing this makes no sense; a sensible developer never creates megabyte-sized objects on the stack anyway. But remember, when porting a 32-bit program to a 64-bit system, to alter the stack size in the project settings, for instance, by doubling it. By default, a 32-bit application and a 64-bit one alike are assigned a 1 MB stack, which may turn out to be insufficient.

Conclusion

The author hopes this article will help you develop efficient 64-bit solutions and invites you to visit www.viva64.com to learn more about 64-bit technologies. There you will find many materials devoted to the development, testing, and optimization of 64-bit applications. Best of luck with your 64-bit projects.

References

  1. Valentin Sedykh. "Russian 64-bit: Let's Dot All the i's."
     http://www.citforum.ru/hardware/arch/64bit_russian/
  2. "Software Optimization Guide for AMD64 Processors."
     http://www.viva64.com/go.php?url=59
  3. "The Old New Thing" blog: "Why did the Win64 team choose the LLP64 model?"
     http://www.viva64.com/go.php?url=25
  4. Andrey Karpov, Evgeniy Ryzhkov. "Development of Resource-intensive Applications in Visual C++."
     http://www.viva64.com/articles/Resource_intensive_applications.html


About the Author

Andrey Karpov

Andrey Karpov is the technical manager of OOO "Program Verification Systems" (Co Ltd), the company developing PVS-Studio, a package of static code analyzers that integrate into the Visual Studio development environment.

PVS-Studio is a static analyzer that detects errors in the source code of C/C++ applications. It includes three sets of rules:

  1. Diagnosis of 64-bit errors (Viva64)
  2. Diagnosis of parallel errors (VivaMP)
  3. General-purpose diagnosis

Andrey Karpov is also the author of many articles on 64-bit and parallel software development. To learn more about the PVS-Studio tool and about resources on 64-bit and parallel software development, please visit www.viva64.com.

LinkedIn: http://www.linkedin.com/pub/4/585/6a3

E-mail: karpov@viva64(dot)com

