Optimizing 64-Bit Programs
- 1. The Result of Porting to 64-Bit Systems
- 2. Program Code Optimization
- 3. Memory Usage Decrease
This article considers several ways to increase the performance of 64-bit Windows applications.
Developers often have questions about the performance of 64-bit solutions and ways to improve it. This article examines some debatable points and then gives recommendations on program code optimization.
In a 64-bit environment, old 32-bit applications run thanks to the Wow64 subsystem. This subsystem emulates a 32-bit environment by means of an additional layer between a 32-bit application and the 64-bit Windows API. In some places this layer is thin; in others, thicker. For an average program, the performance loss caused by this layer is about 2%; for some programs it may be larger. Two percent is certainly not much, but you still have to take into account that 32-bit applications run a bit slower under a 64-bit operating system than under a 32-bit one.
Compiling 64-bit code not only eliminates Wow64 but also increases performance, thanks to architectural changes in microprocessors such as the larger number of general-purpose registers. For an average program, the performance gain expected from a simple recompilation is 5-15%, although everything depends on the application and its data types. For instance, Adobe claims that its 64-bit "Photoshop CS4" is 12% faster than the 32-bit version.
Programs dealing with large data arrays may gain much more from the expanded address space. Being able to keep all the necessary data in random-access memory eliminates slow swapping operations; in such cases, the performance increase is measured in multiples, not in percent.
Consider the following example: Alfa Bank integrated an Itanium 2-based platform into its IT infrastructure. As the bank's business grew, the existing system became unable to cope with the increasing workload: user service delays reached critical levels. Analysis showed that the system's bottleneck was not processor performance but the 32-bit architecture's limitation in the memory subsystem, which did not allow efficient use of more than 4 GB of the server's address space. The database itself was larger than 9 GB, and its intensive use put a critical load on the input-output subsystem. Alfa Bank decided to purchase a cluster consisting of two four-processor Itanium 2-based servers with 12 GB of random-access memory. This decision allowed the bank to ensure the necessary level of system performance and fault tolerance. As company representatives explained, deploying the Itanium 2-based servers allowed the bank to solve its problems while cutting costs.
Optimization can be considered at three levels: optimization of microprocessor instructions, code optimization at the level of high-level languages, and algorithmic optimization that takes the peculiarities of 64-bit systems into account. The first requires development tools such as an assembler and is too specialized to interest a wide audience; those interested in this theme can consult the "Software Optimization Guide for AMD64 Processors", AMD's guide to application optimization for the 64-bit architecture. Algorithmic optimization is unique to every task, and its consideration is beyond the scope of this article.
From the point of view of high-level languages such as C++, optimization for the 64-bit architecture comes down to choosing optimal data types. Using homogeneous 64-bit data types lets the optimizing compiler construct simpler and more efficient code, because there is no need to convert between 32-bit and 64-bit integers so often. This primarily concerns variables used as loop counters, array indexes, and variables storing sizes. Traditionally, types such as int, unsigned, and long are used for these purposes. Under 64-bit Windows systems, which use the LLP64 data model, these types remain 32-bit. In a number of cases this produces less efficient code because of the additional conversions: for instance, to compute the address of an array element in 64-bit code, a 32-bit index must first be widened to 64 bits.
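The index-widening issue can be sketched in C++. The two functions below are illustrative (the names are assumptions, not from the article); the comments describe the conversion the compiler may have to perform for each index type:

```cpp
#include <cstddef>

// A 32-bit index in 64-bit code: before every address computation
// a + i * sizeof(double), the value of i may need widening to 64 bits.
double sum_u(const double *a, unsigned n) {
  double s = 0;
  for (unsigned i = 0; i < n; ++i)  // 32-bit counter under LLP64
    s += a[i];
  return s;
}

// size_t already matches the pointer width, so no conversion is needed.
double sum_p(const double *a, std::size_t n) {
  double s = 0;
  for (std::size_t i = 0; i < n; ++i)  // 64-bit counter on a 64-bit target
    s += a[i];
  return s;
}
```

Both functions compute the same sum; the difference is only in the index type presented to the optimizer.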
Types such as ptrdiff_t and size_t are more effective because they have the optimal size for representing indexes and counters: 32 bits on 32-bit systems and 64 bits on 64-bit systems (see Table 1).
Table 1: Data type sizes in the 32-bit and 64-bit versions of the Windows operating system.

Type        32-bit Windows   64-bit Windows
int         4 bytes          4 bytes
long        4 bytes          4 bytes
size_t      4 bytes          8 bytes
ptrdiff_t   4 bytes          8 bytes
pointer     4 bytes          8 bytes
Using ptrdiff_t, size_t, and types derived from them makes it possible to speed up program code by up to 30%. You can study an example of such optimization in the article "Development of Resource-intensive Applications in Visual C++". An additional advantage is more reliable code: using 64-bit variables as indexes helps you avoid overflows when dealing with large arrays holding several billion elements.
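The overflow point can be illustrated with a small sketch (the function name is an assumption): a count one past INT_MAX is perfectly representable in size_t on a 64-bit target, while an int counter would overflow at the same point:

```cpp
#include <climits>
#include <cstddef>

// An element count just beyond the range of a 32-bit int.
// size_t (8 bytes on 64-bit Windows) represents it without trouble;
// incrementing an int past INT_MAX is signed overflow.
std::size_t beyond_int_range() {
  return static_cast<std::size_t>(INT_MAX) + 1;  // 2,147,483,648
}
```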
Altering data types is not an easy task, far less so when the alteration is really necessary. The Viva64 static code analyzer is offered as a tool to simplify this process. Although it specializes in finding 64-bit code errors, following its recommendations on data type alteration can also considerably increase code performance.
After a program is compiled in 64-bit mode, it starts consuming more memory than its 32-bit variant did. The increase is often almost imperceptible, but sometimes memory consumption doubles. This is due to the following reasons:
- Larger memory allocations for storing certain objects, such as pointers
- Changes in the rules of data alignment inside structures
- Increased stack memory consumption
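The first two reasons can be seen in a single declaration. The node type below is illustrative (not from the article); only its pointer member widens between targets, yet padding makes the whole structure grow:

```cpp
#include <cstddef>

// The same node type built for both targets; only the pointer widens.
struct Node {
  int value;   // 4 bytes on either target
  Node *next;  // 4 bytes on 32-bit, 8 bytes on 64-bit
};
// 32-bit build: sizeof(Node) == 8 (4 + 4).
// 64-bit build: sizeof(Node) == 16 (4 + 4 bytes of padding + 8),
// so the structure doubles even though only one member changed.
```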
An increase in RAM consumption can often be tolerated; the advantage of 64-bit systems is precisely that this memory is plentiful. There is nothing bad in the fact that a program which took 300 MB on a 32-bit system with 2 GB of memory takes 400 MB on a 64-bit system with 8 GB: in relative units, the program takes three times less of the available physical memory on the 64-bit system. There is no sense in trying to fight this growth in memory consumption; it is easier to add some memory.
But the increase in consumed memory has one disadvantage: it causes a performance loss. Even though 64-bit code runs faster, fetching large amounts of data from memory cancels out the advantages and may even reduce performance. Transferring data between memory and the microprocessor (cache) is not a cheap operation.
Assume that you have a program that processes a large amount of text data (up to 400 MB). It creates an array of pointers, each indicating a successive word in the processed text. Let the average word length be five characters; then the program will require about 80 million pointers. A 32-bit variant of the program will require 400 MB + (80 million * 4 bytes) = 720 MB of memory, while the 64-bit version will require 400 MB + (80 million * 8 bytes) = 1040 MB. This is a considerable increase that may adversely affect program performance, and if there is no need to process gigabyte-sized texts, the chosen data structure is wasteful. Using unsigned-type indexes instead of pointers is a simple and effective solution to the problem: the consumed memory is again 720 MB.
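The index-instead-of-pointer idea can be sketched as follows (the function is illustrative, not from the article): each word is recorded as a 4-byte offset into the text buffer rather than as an 8-byte pointer, halving the per-word overhead on a 64-bit target:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Record each word as a 32-bit offset into `text` instead of a char*.
// sizeof(std::uint32_t) == 4 on either target, versus 8 for a 64-bit pointer.
std::vector<std::uint32_t> index_words(const std::string &text) {
  std::vector<std::uint32_t> offsets;
  bool in_word = false;
  for (std::uint32_t i = 0; i < text.size(); ++i) {
    bool is_space = (text[i] == ' ');
    if (!is_space && !in_word)
      offsets.push_back(i);  // offset of the start of a new word
    in_word = !is_space;
  }
  return offsets;
}
```

For "one two three", the function records the offsets 0, 4, and 8; a 32-bit offset suffices for any text under 4 GB.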
You can also waste a considerable amount of memory through the changed rules of data alignment. Consider an example:
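A MyStruct1 layout like the following reproduces the sizes discussed below (the member names are assumptions; the structure name comes from the text):

```cpp
// A structure whose members are ordered without regard to alignment.
struct MyStruct1 {
  char  m_c;  // 1 byte, followed by padding up to the pointer's alignment
  void *m_p;  // 4 bytes on 32-bit, 8 bytes on 64-bit
  int   m_i;  // 4 bytes, plus trailing padding on 64-bit
};
// 32-bit: 1 + 3 + 4 + 4 = 12 bytes; 64-bit: 1 + 7 + 8 + 4 + 4 = 24 bytes.
```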
The size of this structure is 12 bytes in a 32-bit program and 24 bytes in a 64-bit one, which is not thrifty. But you can improve the situation by altering the sequence of the elements in the following way:
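The improved MyStruct2 keeps the same fields but orders them from largest to smallest (again, the member names are assumptions):

```cpp
// The same fields, reordered in decreasing order of size.
struct MyStruct2 {
  void *m_p;  // widest member first
  int   m_i;
  char  m_c;  // 3 bytes of trailing padding on either target
};
// 32-bit: 4 + 4 + 1 + 3 = 12 bytes; 64-bit: 8 + 4 + 1 + 3 = 16 bytes.
```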
The MyStruct2 structure is still 12 bytes in a 32-bit program, but only 16 bytes in a 64-bit one. Meanwhile, from the point of view of data access efficiency, the MyStruct1 and MyStruct2 structures are equivalent. Figure 1 is a visual representation of the distribution of the structure elements in memory.
It is not easy to give universal instructions on the order of elements in structures, but the common recommendation is this: distribute the objects in decreasing order of their size.
The last point is the growth of stack memory consumption: storing larger return addresses and aligning data increase the stack's size. Optimizing this makes no sense; a sensible developer would never create megabyte-sized objects on the stack anyway. But remember that if you are porting a 32-bit program to a 64-bit system, you should alter the stack size in the project settings, for instance by doubling it. By default, both 32-bit and 64-bit applications are assigned a 1 MB stack, which may turn out to be insufficient, so increasing it is a sensible precaution.
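In Visual C++, the stack reserve is set through the project's linker settings or directly with the linker's /STACK option (the size is given in bytes). A sketch of doubling the 1 MB default; the object file name is a placeholder:

```shell
rem Reserve a 2 MB stack instead of the default 1 MB (size in bytes)
link /STACK:2097152 main.obj
```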
The author hopes that this article will help you develop efficient 64-bit solutions and invites you to visit www.viva64.com to learn more about 64-bit technologies. There, you will find many articles devoted to the development, testing, and optimization of 64-bit applications. Best of luck in developing your 64-bit projects.
- Valentin Sedykh. Russian 64 bit: let’s dot all the ‘i’s.
- Software Optimization Guide for AMD64 Processors.
- Blog “The Old New Thing”: “Why did the Win64 team choose the LLP64 model?”
- Andrey Karpov, Evgeniy Ryzhkov. Development of Resource-intensive Applications in Visual C++.