There were a couple of bugs in the code (thanks to all you who reported them), which have now been fixed, and the source has been updated.
Most of the problems were due to a bug in the addition routine, where the sign of the result was getting confused. This indirectly caused "FromString" to function incorrectly, which gave spurious results with some parameters.
I also fixed the memory leak in the Div function.
Thanks to all who tried this code and found the time to report the problems. I hope this modified code assists in your projects in some small way.
I've gotten the class to compile and run in my software. My problem comes when I try to perform a modulus operation. Every time I execute this operation, there is a 4-byte object dump in debug mode when my program exits. The problem looks like it's coming from either "Div" or "Optimize". If I take that one line of code out of my software, there isn't a memory leak. Here's what my code looks like:
CBigInt CClass::ChangeRandomNum(CBigInt biRandom)
{
    biRandom = (biRandom + c) % 2147483647;
    return biRandom;
}
If I call this function 5 times, there will be five object dumps. If I change the '%' to a '+' there are no memory leaks. What am I doing wrong?
If a large decimal number is converted with "FromDec" (or indirectly via "FromString"), you see corruption if the first few digits of the number set the MSB of a 32-bit unsigned long. What does that mean, you ask?
In BigInt.cpp, near line 1000, change this:
    const unsigned long MAXDEC = (MAXULONG - 9) / 10;
to this:
    const unsigned long MAXDEC = (0x7FFFFFFF - 9) / 10;
FromDec uses an optimization to improve performance: it converts the first few digits in a fast loop to an unsigned long. BigInt emulates signed integers using arrays of unsigned longs. No problem thus far. However, if only one unsigned long is allocated and its MSB is set, various BigInt functions think the number is negative. Oops. The fix prevents that MSB from getting set in the fast conversion loop. Thus, all is well.
I bet it is easy to write much more efficient and faster code than what you have.
Performance test case:
Just try this: take two numbers small enough to also fit in a regular 'int'. Now perform some operation 1000 times on those numbers, once using regular 'int' and once using CBigInt. See whether CBigInt manages to be any better than ten times slower than 'int'.
Comments on code itself:
When defining basic data types like integers, I prefer not to waste time on checks like "if (A.IsNull())". Such checks are easily avoided by redefining the interface or the specification of the class itself. For example, the built-in 'int' never requires such checking, simply because its construction does not allow an integer to be null.
Then, in the functions for operator + and the rest, temporary BigInts are created. Why, dear?! Think again and see whether addition can be done without dealing with any more BigInts than the two operands and the result.
Optimization with "A.Optimize();": whenever you need something like this, it basically means the algorithm itself is not efficient. Why not produce the optimized answer by optimizing the algorithm?