
Stratoukos (macrumors member, Original poster) Jul 15, 2008
I'm implementing a bignum library in C++ for a personal project. I want one function that returns the value of an integer as a string, and another that parses a string and converts it to an integer. Obviously I am not talking about normal ints, but about my own class for handling integers (imaginatively called Bigint).

The internal representation of Bigint is just an array of unsigned longs, where the first element is interpreted as value * (2^64)^0, the second element as value * (2^64)^1, and so on. Basically it's just like any other numeral system, only the base is 2^64.
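For concreteness, that layout might look something like this in C++ (a hypothetical sketch; the names are illustrative, not from the original post):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch of the representation described above:
// little-endian limbs, where limbs[i] contributes
// limbs[i] * (2^64)^i to the overall value.
struct Bigint {
    std::vector<std::uint64_t> limbs;  // limbs[0] is least significant
};

// Example: the value 2^64 + 5 would be stored as {5, 1}.
```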

I am completely stumped on this. Do you have any ideas on where I should start?

Thanks.
 
First off, which bignum library are you using? I would really expect it to have a function that returns a C string holding a decimal representation of the number.
 
Thanks for the reply, but I think you misunderstood me.

I am not using any bignum library. I am writing one and I don't know how to implement the conversion to string.
 

Your number is represented by (x0, x1, x2, ...) in base 2^64.

Convert to base 2^32: Let y0 = x0 & 0xffffffff, y1 = x0 >> 32, y2 = x1 & 0xffffffff, y3 = x1 >> 32 etc. Let yn be the highest number of those.

Convert to base 10^9:
Start with z0 = yn % 10^9, z1 = yn / 10^9.
Multiply by 2^32 and add y(n-1):
Let t = z0 * 2^32 + y(n-1), then z0 = t % 10^9 and the carry c = t / 10^9.
Then t = z1 * 2^32 + c, z1 = t % 10^9, c = t / 10^9.
Then t = z2 * 2^32 + c, z2 = t % 10^9, c = t / 10^9, etc. If a carry remains after the last z digit, it becomes one or more new high digits.
Repeat the multiply-by-2^32-and-add step with y(n-2), y(n-3), and so on, down to y(0), until you are done.

Now convert each base-10^9 digit into 9 decimal digits, zero-padding all of them except the most significant one.
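Putting those steps together, here's a sketch in C++ (the function name and the little-endian limb vector are assumptions, not from the original post):

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Sketch of the base-conversion approach described above. Assumes the
// Bigint stores little-endian 64-bit limbs.
std::string to_decimal(const std::vector<std::uint64_t>& limbs) {
    const std::uint64_t BASE = 1000000000ULL;  // 10^9

    // Step 1: split each base-2^64 limb into two base-2^32 digits
    // (little-endian), then drop leading zero digits.
    std::vector<std::uint64_t> y;
    for (std::uint64_t limb : limbs) {
        y.push_back(limb & 0xffffffffULL);
        y.push_back(limb >> 32);
    }
    while (y.size() > 1 && y.back() == 0) y.pop_back();

    // Step 2: fold the base-2^32 digits, most significant first, into a
    // little-endian base-10^9 digit vector z via z = z * 2^32 + y[i].
    std::vector<std::uint64_t> z;  // z[0] is the least significant
    for (std::size_t i = y.size(); i-- > 0; ) {
        std::uint64_t carry = y[i];
        for (std::size_t j = 0; j < z.size(); ++j) {
            std::uint64_t t = (z[j] << 32) + carry;  // z[j] * 2^32 + carry
            z[j] = t % BASE;
            carry = t / BASE;
        }
        while (carry > 0) {  // remaining carry becomes new high digits
            z.push_back(carry % BASE);
            carry /= BASE;
        }
    }
    if (z.empty()) z.push_back(0);

    // Step 3: the top digit prints as-is; every lower base-10^9 digit
    // is zero-padded to 9 decimal places.
    std::string out = std::to_string(z.back());
    for (std::size_t j = z.size() - 1; j-- > 0; ) {
        std::string part = std::to_string(z[j]);
        out += std::string(9 - part.size(), '0') + part;
    }
    return out;
}
```

Note that the inner products never overflow: a z digit is below 10^9 < 2^30, so z[j] * 2^32 stays below 2^62, comfortably inside 64 bits even after adding the carry.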
 
Many, many possible algorithms. Here's one which requires no multiplications or divisions: convert the power of two for each set bit into a decimal string via a (huge) lookup table, then do decimal string additions with carry to sum the decimal strings for all the one bits.
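A minimal sketch of that idea, assuming little-endian 64-bit limbs (all names here are illustrative). Instead of a huge precomputed table, this version builds each power of two's decimal string by repeated decimal doubling, which is itself just addition with carry, so there are still no integer multiplies or divides:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Add two decimal strings digit by digit with carry.
std::string add_decimal(const std::string& a, const std::string& b) {
    std::string out;
    int carry = 0;
    for (std::size_t i = 0; i < a.size() || i < b.size() || carry; ++i) {
        int d = carry;
        if (i < a.size()) d += a[a.size() - 1 - i] - '0';
        if (i < b.size()) d += b[b.size() - 1 - i] - '0';
        out.push_back(char('0' + d % 10));
        carry = d / 10;
    }
    std::reverse(out.begin(), out.end());
    return out;
}

// Sum the decimal strings for the powers of two at every set bit.
std::string to_decimal_by_bits(const std::vector<std::uint64_t>& limbs) {
    std::string result = "0";
    std::string power = "1";  // decimal string for the current power of two
    for (std::uint64_t limb : limbs) {
        for (int bit = 0; bit < 64; ++bit) {
            if ((limb >> bit) & 1) result = add_decimal(result, power);
            power = add_decimal(power, power);  // double: 2^k -> 2^(k+1)
        }
    }
    return result;
}
```

This trades speed for simplicity: the running time is quadratic in the number of bits, so it's mainly useful as a reference implementation to test a faster conversion against.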
 