Is there:

1.) any document available which describes the differences between Java's BigDecimal class and MFC's BigDecimal implementation?

2.) and/or any short overview of the different approaches (when they differ at all)?

3.) There was a rumour, years ago, that Big Blue (IBM) was going to develop a DECIMAL CHIP based on Mike's specification from that nearly never-ending project he was working on, and finished so successfully. Is there any news on whether this chip is still coming, or whether the hardware implementation of decimal arithmetic has been cancelled?

I would need replies to those questions, if possible, for some upcoming meetings here in Vienna with some IBM representatives.

... And I don't know whom to ask except this list :-)

Thomas.

--
Thomas Schneider, Vienna, Austria (Europe)
www.thsitc.com  www.db-123.com
Thomas,

both implementations are very well documented; both sources are available now. I am under the impression that there is not much difference.

The hardware implementation has been there for a number of years already, in the z10 mainframe; look at http://speleotrove.com/decimal/ for more answers.

best regards,

René.
Hi Thomas, hi René,

the implementations are both very well documented, that's true. But the differences are somewhat fundamental.

The NetRexx implementation (the oldest) uses 1 byte per digit and has its deficiencies both in the single-byte access in the algorithms and in its footprint (and therefore cache coverage on modern CPUs).

The Java BigDecimal (mark 1, until Java 6u24 IIRC) uses two's-complement storage of the numbers, which decreases the footprint and allows for integer and long access in the algorithms. So it is much, much faster (3-10x) and uses far less memory (2-3x) than the NetRexx implementation.

There is a further BigDecimal class (I called it mark 2; I don't know whether it has an official name), along with a few other classes such as TreeMap, in Java 6u25 and above, including JDK 7. It is enabled with -XX:+AggressiveOpts and increases throughput by about 30%. This is done by fine-tuning some methods and with some HotSpot help.

--
cu, Patric
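A rough way to check the effect described above on a given machine is a small timing loop run twice, once with and once without -XX:+AggressiveOpts. The workload below is only an illustrative assumption, not a reproduction of Patric's measurements:

    import java.math.BigDecimal;
    import java.math.MathContext;

    // Crude timing loop at 34-digit precision. Compile, then run with
    //   java BigDecimalBench            and with
    //   java -XX:+AggressiveOpts BigDecimalBench
    public class BigDecimalBench {
        public static void main(String[] args) {
            MathContext mc = new MathContext(34);
            BigDecimal x = new BigDecimal("1.234567890123456789012345678901234");
            BigDecimal acc = BigDecimal.ONE;
            long start = System.nanoTime();
            for (int i = 0; i < 1000000; i++) {
                acc = acc.multiply(x, mc).divide(x, mc);  // keeps the value bounded
            }
            long ms = (System.nanoTime() - start) / 1000000;
            System.out.println("1e6 multiply/divide pairs took " + ms + " ms; acc=" + acc);
        }
    }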
Hi Patric,

yes, that confirms the performance results Alan found in the RosettaCode examples for arbitrary precision at http://rosettacode.org/wiki/Arbitrary-precision_integers_(included) ; we might want to analyze these later and see what can be done. The results are so far apart that there is probably room for improvement in some places. I opened issue NETREXX-21 for this a few days ago.

best regards,

René.
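For reference, the RosettaCode task linked above computes (as far as I recall) 5**4**3**2 = 5^262144 and reports the first and last 20 digits of the result together with its total length. A plain-Java equivalent, shown only as a sketch, looks roughly like this:

    import java.math.BigInteger;

    // Sketch of the arbitrary-precision task: compute 5^(4^(3^2)) and show
    // its first and last 20 digits and its length.
    public class ArbPrecision {
        public static void main(String[] args) {
            int exp = BigInteger.valueOf(4).pow(9).intValue();     // 4^(3^2) = 262144
            String s = BigInteger.valueOf(5).pow(exp).toString();  // 5^262144
            System.out.println(s.substring(0, 20) + "..." + s.substring(s.length() - 20)
                    + " (" + s.length() + " digits)");
        }
    }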
> The Java BigDecimal (mark 1, until Java 6u24 IIRC) uses two's-complement
> storage of the numbers ... So it is much, much faster (3-10x) and uses
> far less memory (2-3x) than the NetRexx implementation.

Not sure of the background here ... but it's certainly not as simple as suggested above. Using a binary representation as the base is a speed 'win' up to about 9 digits or so. After that, the BigInteger + BigDecimal combination in Java is very expensive -- dozens of bytes of overhead in both the BigDecimal and the BigInteger objects. In general, a packed (especially densely packed decimal) representation is far better for software than a binary representation if any rounding is necessary, and hugely better in hardware.

See, for example: http://speleotrove.com/decimal/decperf.pdf

Incidentally, the NetRexx BigDecimal was 4x-30x faster than the original Java BigDecimal when it was first written; that prompted Sun to put some major improvements into the JVM handling of longs and into the Long class (especially the Long<-->Character conversions). I then worked with Sun to merge all the algorithm improvements from the NetRexx BigDecimal into the Java BigDecimal (you will see I am quoted as an author of that). Those improvements were never retrofitted into the NetRexx class, nor does the latter take advantage of the new binary<-->decimal conversions in the JVM.

Mike
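To make the rounding cost concrete: with a binary significand (as in BigInteger), dropping trailing decimal digits means dividing the whole number by a power of ten, so the work grows with the operand's length, whereas a decimal digit representation can simply be truncated. A hypothetical sketch of the binary case, ignoring rounding modes and the remainder inspection a real implementation needs:

    import java.math.BigInteger;

    // Hypothetical sketch: truncating a binary significand to 'keep' decimal
    // digits needs a full-length division by a power of ten.
    public class BinaryRoundingCost {
        static BigInteger truncateTo(BigInteger unscaled, int keep) {
            int digits = unscaled.abs().toString().length();  // current decimal length
            if (digits <= keep) return unscaled;
            return unscaled.divide(BigInteger.TEN.pow(digits - keep));
        }

        public static void main(String[] args) {
            BigInteger v = new BigInteger("123456789012345678901234567890");
            System.out.println(truncateTo(v, 9));  // prints 123456789
        }
    }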
Hello Mike,

nice to see that with my *blunt question* I made it happen that you are raising *The VOICE* again ;-) Thanks for your clarifications.

Three questions (as usual):

1.) Which NetRexx option has to be used (or is suggested) to actually USE the HARDWARE CHIP on the IBM z10?

2.) Where can I read more about 'densely packed decimals', which you reference?

3.) Would you, personally, advise separating the whole DECIMAL STUFF out of the current NetRexx compiler into a separate class, to be able to take advantage of new Java speed-up developments?

Anyway, thanks for the prompt response, as always whenever I have raised a question to you. You have been one of the most responsive and reliable persons I have ever met in my life! Thank you! Happy flying in retirement, I do hope!

Greetings from dark Vienna,

Thomas.

--
Thomas Schneider, Vienna, Austria (Europe)
www.thsitc.com  www.db-123.com
In reply to this post by Mike Cowlishaw
Anyone have any idea how to "retrofit" these improvements to NetRexx?
Kermit Kiser <[hidden email]> wrote:
> Anyone have any idea how to "retrofit" these improvements to NetRexx?

Not to be a smart-a*s, but using the Java library seems the easy solution.

Tom.

Sent from my Motorola ATRIX™ 4G on AT&T
Makes you wonder: why dupe the function in NetRexx to begin with. . .
Bob Hamilton
In reply to this post by Mike Cowlishaw
Hi Mike,

My background is solid: the source of Java 6, Java 7 and NetRexx. Although it is a lot, I had to study the source of NetRexx and BigDecimal to some extent for a number-crunching project I did two years ago.

So I'm sure that a BigInteger and a scale value are used to store the value of a BigDecimal, and the BigInteger itself uses a two's-complement representation. That's as packed as possible, I think. Additionally, all algorithms have been changed to work on integer chunks, using the native two's-complement mechanics of the processor. That means it is as dense, and almost as fast, as it can get.

Additionally, the BigDecimal class uses a "deflated" representation in a single long when operating on <=18 digits. But the speed difference when going over 18 digits is not too big altogether.

I fear there's no need to use any 'packed' representation in BigDecimal/BigInteger anymore. There's very little overhead at all, just the extra deflated and scale values and the cached String representation.

Getting this back into NetRexx would be great, as it would simplify the Rexx class to quite some extent. The question is just whether the behaviour might be changed slightly. My hope is that the tests cover that already, but I didn't verify that. Yet.

--
cu, Patric
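A simplified sketch of the dual representation described above: a plain long significand while the value fits, inflating to a BigInteger only when it no longer does. The class and field names are hypothetical, not those of java.math.BigDecimal:

    import java.math.BigInteger;

    // Hypothetical sketch of a "deflated" decimal: value = significand * 10^(-scale),
    // with the significand held in a long whenever it fits.
    public class CompactDecimal {
        private static final long INFLATED = Long.MIN_VALUE;  // sentinel: use bigVal

        private final long compact;       // significand when it fits in a long
        private final BigInteger bigVal;  // non-null only when compact == INFLATED
        private final int scale;

        CompactDecimal(long significand, int scale) {
            this.compact = significand;
            this.bigVal = null;
            this.scale = scale;
        }

        CompactDecimal(BigInteger significand, int scale) {
            this.compact = INFLATED;
            this.bigVal = significand;
            this.scale = scale;
        }

        boolean isCompact() { return compact != INFLATED; }

        public String toString() {
            String digits = isCompact() ? Long.toString(compact) : bigVal.toString();
            return scale == 0 ? digits : digits + "E-" + scale;
        }
    }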
Hi Patric,

> My background is solid: the source of Java 6, Java 7 and NetRexx.

I meant the background to the discussion, not your background. :-)

> So I'm sure that a BigInteger and a scale value are used to store the
> value of a BigDecimal ... That's as packed as possible, I think.

It is indeed those, but quite a bit more. When the value fits in a long int then the storage used is 24 bytes (assuming a 32-bit machine). When a BigInteger is used then this jumps to a minimum of 64 bytes. Hence for a 34-digit decimal number (IEEE decimal128) 512 bits are used -- 4 times as many as necessary, so certainly not 'as packed as possible'.

> Additionally, all algorithms have been changed to work on integer chunks,
> using the native two's-complement mechanics of the processor. That means
> it is as dense, and almost as fast, as it can get.

With a binary integer representation, rounding takes much longer as the length of the number increases, because a division or multiplication is generally needed to determine the rounding point. With a decimal representation, rounding takes essentially constant time, regardless of length (not quite true, as it depends on the rounding mode -- it may be necessary to inspect all the digits of the number to check for non-zeros). Some performance numbers (for C implementations) are in http://speleotrove.com/decimal/decperf.pdf.

> Additionally, the BigDecimal class uses a "deflated" representation in a
> single long when operating on <=18 digits. But the speed difference when
> going over 18 digits is not too big altogether.

Yes, for <= 18 digits speed isn't an issue.

> I fear there's no need to use any 'packed' representation in
> BigDecimal/BigInteger anymore. There's very little overhead at all, just
> the extra deflated and scale values and the cached String representation.

The current representation uses (effectively) packed binary. A lot of space could be saved by not using the BigInteger class (which has various lookaside values to aid cryptography). Personally I'd probably use a base 1E+9 (base-billion) representation, 9 digits per int. This gives a good compromise between efficient multiplication and division and fast rounding. Or, if the table size were considered OK, I would use DPD as in my decFloats routines in the decNumber package (http://speleotrove.com/decimal/#decNumber). This expands the packed decimal to BCD for addition and subtraction, and to base-billion for multiplication and division.

> Getting this back into NetRexx would be great, as it would simplify the
> Rexx class to quite some extent. The question is just whether the
> behaviour might be changed slightly.

Yes, there are some differences, some of them quite significant; see http://speleotrove.com/decimal/dax3274.html. For example, Rexx rounds the operands before use if they are longer than the desired results (easy enough to do using BigDecimal), and the rounding after a subtract is different (not so easy).

Mike
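A minimal sketch of the base-1E+9 layout mentioned above: nine decimal digits per int limb, least significant limb first, so rounding at a limb boundary means dropping limbs rather than dividing the whole number. This is only an illustration of the layout, not the decNumber implementation:

    // Minimal base-billion sketch: store a decimal digit string as int limbs
    // of nine digits each, least significant limb first.
    public class BaseBillion {
        static final int LIMB_DIGITS = 9;

        static int[] fromDigits(String digits) {
            int limbs = (digits.length() + LIMB_DIGITS - 1) / LIMB_DIGITS;
            int[] out = new int[limbs];
            int end = digits.length();
            for (int i = 0; i < limbs; i++) {
                int start = Math.max(0, end - LIMB_DIGITS);
                out[i] = Integer.parseInt(digits.substring(start, end));
                end = start;
            }
            return out;
        }

        public static void main(String[] args) {
            // 24 digits -> limbs {678901234, 789012345, 123456}
            for (int limb : fromDigits("123456789012345678901234")) {
                System.out.println(limb);
            }
        }
    }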