
Environment: should compile with any standard C++ compiler (untested); developed in VC++. I originally got the idea for this project after seeing a couple of similar things done on the net: a class, whose author I cannot recall, that stored the data in string format, and a 96-bit integer class by PJ Naughter. After studying PJ Naughter's code, I began thinking that the system could be made more flexible by making the storage dynamic. I played around with it for a while, and this is the result.

The CBigInt class allows the manipulation of extremely large signed integer values. The data contained in the objects is stored in an array of unsigned longs, which is dynamically allocated on the heap as required. The maximum size of this array is MAXLONG elements, giving a theoretical possible range of values of -2^68,719,476,704 to 2^68,719,476,704 - 1. With numbers this big, a further realistic limitation of available memory would also apply: a full-range number requires 8 GB of RAM to store, before any output formatting. I have not done any checks for overflow, so in the unlikely event that a number that big is obtained, the results are undefined. Each element of the array is stored in the host machine's native format, and the array as a whole is little-endian, with the low-order DWORD in element 0 and the high-order DWORD in element n. Setting the highest bit of the high-order DWORD denotes a negative number.

A special Null state of an object, where it contains no data, indicates that a memory error occurred. The IsNull() function allows you to test an object for this condition.

Maintaining the sign as a number shrinks and grows proved to be one of the more awkward operations in the whole class. If a number is positive, but the high-order bit of its highest-order DWORD is set, then an extra array element containing a zero value must be kept. Conversely, if a negative number grows in magnitude past a DWORD boundary, the new element needs to have all of its "unused" bits set.

Two private helper functions organise this. The ExpandTo() function causes the internal array to grow, filling each new element with 0xffffffff if the value is negative, or 0 if it is positive. The object's sign is passed as a parameter to this function, rather than being determined from the existing value, to allow for the case where a positive number's high-order bit is set. The Expand() function is simply a derivative of ExpandTo(), with its first parameter being an amount to expand the array by, rather than a size to expand it to.

The other helper function is Optimize(). It simply removes redundant elements from the high-order end of the array, while preserving the sign (and, of course, the value) of the number.

Due to the enormity of some of the numbers that may be generated, I have tried to write the code with speed optimisations, forgoing code size on a number of occasions, figuring that with data this big, an extra K or two of code is pretty insignificant. Consequently, there are a number of overloaded operators and friend functions that take standard C++ operands rather than CBigInt operands, gaining performance by not having to traverse a second array. These benefits are most obvious in multiplication and division, where the improvement grows dramatically with the size of the operands. The test project demonstrates this with two non-recursive factorial functions, both returning a CBigInt: one performs a CBigInt * DWORD operation, the other a CBigInt * CBigInt operation. The former is over 100 times faster for 1000!, and (try it if you are willing to wait) over 400 times faster for 10000!.

Apart from the arithmetic operations, the only other thing needed was the ability to read and write the numbers in human-readable form. The Format functions convert the number to a string, in any base from 2 to 36. An internal string buffer is allocated as a member for storing this string, and is reallocated each time a Format() function is called, so the returned pointer should not be stored. There is a version of the function that lets you pass your own buffer to be filled, if you need to save or reuse the result.

The FromXxx functions convert a string representation of a number to a CBigInt, as does the constructor that takes a const char* parameter. The latter recognises the "0x", "0", and "0b" prefixes as denoting hex, octal, and binary respectively, whereas the FromString function expects the radix to be passed as a parameter. Parsing of the string commences at the first non-space, non-tab character, and continues to the first character that is not a valid digit for the supplied radix.


Download demo project (VC++ specific) - 11Kb
Download source - 9Kb


  • missing operator?

    Posted by FreemanNg on 04/11/2005 05:49am

    I just tried out the module, and the implementation for
         CBigInt operator *(const CBigInt& Value) const;
    is missing! Is it not possible to multiply two CBigInts together? 

  • Bug in division

    Posted by wald0 on 02/23/2005 11:12pm

    CBigInt a,b,c;
    This program returns 9223372036854775808
    Should be 65

  • Code updated

    Posted by Legacy on 11/18/2003 08:00am

    Originally posted by: Mick

    There were a couple of bugs in the code (thanks to all you who reported them), which have now been fixed, and the source has been updated.

    Most of the problems were due to a bug in the addition routine, where the sign of the result was getting confused. This indirectly caused "FromString" to function incorrectly, which gave spurious results with some parameters.

    I also fixed the memory leak in the Div function.

    Thanks to all who tried this code and found the time to report the problems. I hope that this modified code assists your project(s) in some small way.

  • Looking for CBigDouble class

    Posted by Legacy on 10/23/2003 07:00am

    Originally posted by: Tomer

    Do you have any idea where can I find a 'double' version for CBigInt class ?

  • Bug in FromString()

    Posted by Legacy on 08/25/2002 07:00am

    Originally posted by: Mohamed Abdel-Monem

    There is a bug in the FromString() function when it is given a negative decimal number.
    When I comment out the block of code consisting of the 'case 10:' and 'return FromDec()', it works fine.

  • Bug in add & subtract

    Posted by Legacy on 08/23/2002 07:00am

    Originally posted by: Mohamed Abdel-Monem

    I really appreciate your work; your library should be number 1. But unfortunately, try this:

    CBigInt a = -1;
    CBigInt b = 1;
    CBigInt x = a + b;
    CString s = x.Format();


    CBigInt a = 2;
    CBigInt b = -1;
    CBigInt x = a - b;
    CString s = x.Format();

    What do you see?

  • Memory leak in Div function?

    Posted by Legacy on 08/15/2002 07:00am

    Originally posted by: Chris H

    I've gotten the class to compile and run in my software. My problem comes when I try to perform a modulus operation. Every time I execute this operation, there is a 4-byte object dump in debug mode when my program exits. The problem looks like it's coming from either "Div" or "Optimize". If I take that one line of code out of my software, there isn't a memory leak. Here's what my code looks like:

    CBigInt CClass::ChangeRandomNum(CBigInt biRandom)
    {
        int c = 27;
        biRandom = (biRandom + c) % 2147483647;
        return biRandom;
    }

    If I call this function 5 times, there will be five object dumps. If I change the '%' to a '+' there are no memory leaks. What am I doing wrong?

  • BUG in FromDec (indirectly affects FromString)

    Posted by Legacy on 07/18/2002 07:00am

    Originally posted by: Ian Davis

    If a large decimal number is converted with "FromDec"
    (or indirectly via "FromString"), you see corruption
    if the first few digits of the number set the MSB of a
    32-bit unsigned long. bla... who... whaaa... you ask?

    In BigInt.cpp, change the following (near line 1000):
    const unsigned long MAXDEC = (MAXULONG - 9) / 10;
    to:
    const unsigned long MAXDEC = (0x7FFFFFFF - 9) / 10;

    FromDec uses an optimization to improve performance: it converts the first few digits in a fast loop to an unsigned long. BigInt emulates signed integers using arrays of unsigned longs. No problem thus far.

    However, if only one unsigned long is allocated and its MSB is set, various BigInt functions think the number is negative. Oops. The fix prevents that MSB from being set in the fast conversion loop. Thus, all is well.


  • Strange behaviour ??

    Posted by Legacy on 04/08/2002 07:00am

    Originally posted by: Michael Pliam

    I have successfully compiled this code and run the test app using VC++ 6.0 on Win95 in a console app.

    I noticed something I cannot explain. If I run the following initialization, the resulting BigInt is not correct.
    char *cp;
    CBigInt n1("2343432354345434");
    cp = n1.Format(10); printf("n1 = %s\n", cp);

    // yields: 18444792538767897050, not 2343432354345434

    This seems to happen periodically with certain numbers. Has anyone else encountered this oddity?

  • Performance!!

    Posted by Legacy on 02/26/2002 08:00am

    Originally posted by: Alok Govil

    I bet it is easy to write much more efficient and faster code than what you have.

    Performance test case:

    Just try this: Take two numbers which are small enough to fit in regular 'int' also. Now perform some operation 1000 times on those numbers using regular 'int' and CBigInt. See if the speed of 'CBigInt' is any better than ten times slower than 'int'.

    Comments on code itself:

    When defining such basic data types as integers, I prefer not to waste time on checks like "if (A.IsNull())". Those are easily handled by redefining the interface or the specification of the class itself. For example, the predefined 'int' never requires such checking; its construction simply does not allow null integers.

    Then, in the functions for operator+ and the rest, temporary BigInts are created. Why?! Think again about whether addition can be done without dealing with any more BigInts than the two operands and the result.

    Optimization with "A.Optimize();": whenever you need things like this, it basically means the algorithm itself is not efficient. Why not produce the optimized answer by optimizing the algorithm?

    Regards - Alok Govil

