File: NrMethodcall.nrx, line 52:

  /** Pass processing */
  method scan(pass=int)
    if pass<2 then signal RxQuit(rxt, null, 'internal.error', 'NrMethodcall', pass)
    nop

Is the NOP statement necessary here? I do not think it is.
What about just a comment in the code to get that negligible increase?
Yes, the line could have been:

  -- nop for other passes

Apologies for not writing that. Hindsight is a wonderful thing :-). Recall that I was documenting and then implementing between 500 and 1000 lines of code a day, for several months, at that time. In retrospect I should have taken a weekend or two off ..

:-)

Mike
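Put together, the fragment with that suggested comment might read like this (a sketch only; the identifiers come from the snippet quoted at the top of the thread, and the comment placement is just illustrative):

  /** Pass processing */
  method scan(pass=int)
    if pass<2 then signal RxQuit(rxt, null, 'internal.error', 'NrMethodcall', pass)
    nop -- nop for other passes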
You've earned your retirement, Mike ... as have we all ... the hard way.
The code is excellent, sometimes, I am just trying to understand the logic behind it. Have a nice day.
Perfect example of the "critical comma", Jason. We assume you meant the first comma to make the "sometimes" apply to "trying to understand". If there had been only the second comma, the "sometimes" would have referred to the code's excellence. By using both commas, the sentence is ambiguous and you can have it both ways... :-/

Coincidentally, there is a (now waning) thread on Regina-L concerning the effect of NOP on executable code, e.g.

  Do 16000000; Nop; End

is slower than

  Do 16000000; End

-Chip-
@Mike Presumably you did not have time to maintain a journal. It might have been as influential as The Mythical Man-Month.
I would not take the second comma to heart. I never was great at English, and it is definitely not an example of doing something right.
But you read it and understood it. Machines are not as forgiving.
Hi Chip,

While playing with this in NetRexx, I came upon a result that I did not want to withhold. It turns out that a single nop executed on its own is slower than a nop in a loop of 100,000:

  ➜ test git:(master) ✗ cat nop.nrx
  say System.currentTimeMillis()
  nop
  say System.currentTimeMillis()

  ➜ test git:(master) ✗ cat loopnop.nrx
  say System.currentTimeMillis()
  loop for 100000
    nop
  end
  say System.currentTimeMillis()

  ➜ test git:(master) ✗ java nop
  1442745275500
  1442745275505
  ➜ test git:(master) ✗ java loopnop
  1442745280506
  1442745280509
  ➜ test git:(master) ✗ java nop
  1442745286482
  1442745286487
  ➜ test git:(master) ✗ java loopnop
  1442745289268
  1442745289272
  ➜ test git:(master) ✗ java nop
  1442745294057
  1442745294062
  ➜ test git:(master) ✗ java loopnop
  1442745297432
  1442745297434

This is what nop.nrx generates for JVM bytecode:

  Compiled from "nop.java"
  public class nop {
    public static void main(java.lang.String[]);
      Code:
         0: invokestatic  #1   // Method java/lang/System.currentTimeMillis:()J
         3: invokestatic  #2   // Method netrexx/lang/RexxIO.Say:(J)Z
         6: pop
         7: invokestatic  #1   // Method java/lang/System.currentTimeMillis:()J
        10: invokestatic  #2   // Method netrexx/lang/RexxIO.Say:(J)Z
        13: pop
        14: return
  }

And this is loopnop.nrx:

  ➜ test git:(master) ✗ javap -c loopnop
  Compiled from "loopnop.java"
  public class loopnop {
    public static void main(java.lang.String[]);
      Code:
         0: invokestatic  #1   // Method java/lang/System.currentTimeMillis:()J
         3: invokestatic  #2   // Method netrexx/lang/RexxIO.Say:(J)Z
         6: pop
         7: getstatic     #3   // Field $01:Lnetrexx/lang/Rexx;
        10: aconst_null
        11: invokevirtual #4   // Method netrexx/lang/Rexx.OpPlus:(Lnetrexx/lang/RexxSet;)Lnetrexx/lang/Rexx;
        14: invokevirtual #5   // Method netrexx/lang/Rexx.toint:()I
        17: istore_1
        18: iload_1
        19: ifle          28
        22: iinc          1, -1
        25: goto          18
        28: invokestatic  #1   // Method java/lang/System.currentTimeMillis:()J
        31: invokestatic  #2   // Method netrexx/lang/RexxIO.Say:(J)Z
        34: pop
        35: return

    static {};
      Code:
         0: new           #7   // class netrexx/lang/Rexx
         3: dup
         4: ldc           #8   // int 100000
         6: invokespecial #9   // Method netrexx/lang/Rexx."<init>":(I)V
         9: putstatic     #3   // Field $01:Lnetrexx/lang/Rexx;
        12: return
  }

Note that neither program generates anything to execute a 'nop'; the second program has some more instructions for setting up the loop, so why is it some milliseconds faster?

If you rig the JVM with some switches and a dynamic disassembler:

  java -Xbatch -XX:-TieredCompilation -XX:+PrintCompilation -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly -XX:PrintAssemblyOptions=intel loopnop >loopnop.asm

(you get a very large listing and) you see that in both cases the JVM compiler is smart enough to see that nothing happens here, and the only thing that is being compiled to Intel machine instructions is some low-level java.nio UTF-8 character stuff and the static currentTimeMillis and Say methods:

  0x0000000104c9a64c: movabs r10,0x1040595b6
  0x0000000104c9a656: call   r10                 ;*invokestatic currentTimeMillis
                                                 ; - loopnop::main@28 (line 5)
  0x0000000104c9a659: mov    rsi,rax
  0x0000000104c9a65c: data32 xchg ax,ax
  0x0000000104c9a65f: call   0x0000000104c68160  ; OopMap{off=68}
                                                 ;*invokestatic Say
                                                 ; - loopnop::main@31 (line 5)
                                                 ; {static_call}

You can see that the address of currentTimeMillis is moved to register r10 and that routine is then called. 'Say' is called through the JVM's invokestatic call. The funny thing is, the assembly generated for both programs is virtually the same.

When we try to run interpreted only, we get:

  ➜ test git:(master) ✗ java -Xint nop
  1442763233552
  1442763233558
  ➜ test git:(master) ✗ java -Xint loopnop
  1442763239989
  1442763239991

The one with the loop around the nop is still faster.

We can also profile this run:

  ➜ test git:(master) ✗ java -Xint -Xprof nop
  1442763073604
  1442763073609

  Flat profile of 0,04 secs (2 total ticks): main

    Interpreted + native   Method
      50,0%     0  +   1    java.lang.System.arraycopy
      50,0%     0  +   1    java.util.zip.ZipFile.open
     100,0%     0  +   2    Total interpreted

  Flat profile of 0,00 secs (1 total ticks): DestroyJavaVM

    Thread-local ticks:
     100,0%     1            Blocked (of total)

  Global summary of 0,04 seconds:
     100,0%     3            Received ticks

  ➜ test git:(master) ✗ java -Xint -Xprof loopnop
  1442763075380
  1442763075382

  Flat profile of 0,06 secs (4 total ticks): main

    Interpreted + native   Method
      25,0%     0  +   1    java.util.zip.ZipFile.open
      25,0%     0  +   1    java.lang.ClassLoader.defineClass1
      25,0%     1  +   0    loopnop.<clinit>
      25,0%     1  +   0    sun.util.calendar.ZoneInfoFile.load
     100,0%     2  +   2    Total interpreted

We see that the JVM itself is also calling some routines, and we see that the timestamp call even reads a file.

So while we do not know exactly yet why not executing a nop in a not-running loop is faster, it is what we measure. So I think we do not have to worry too much about that nop.

best regards,

René.
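One way to keep JVM start-up, class loading and the file read behind that timestamp call out of each number is to repeat the measurement several times inside a single run, so those one-time costs are paid only once. A minimal sketch along those lines (the repetition count of five is arbitrary, and the first iteration will still include warm-up):

  -- repeatnop.nrx: time the same empty loop several times in one JVM run
  options binary
  loop run = 1 to 5
    a = long System.currentTimeMillis()
    loop for 100000
      nop
    end
    b = long System.currentTimeMillis()
    elapsed = b - a
    say 'run' run 'took' elapsed 'ms'
  end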
It might well be, Rene, that the TIME to MEASURE a single NOP is *too small* to capture the REAL time ;-)

By the way: what is INVOKESTATIC #n ??

Thomas.
Although the loading of a timezones file makes me think twice about how suitable System.currentTimeMillis is for this purpose, the main conclusion of this exercise was that there is no executable code generated for a nop, so there is nothing to measure. I looked into this file-reading issue and found System.nanoTime, which has a much higher resolution.
Invokestatic is a bytecode instruction for the JVM, generated by a Java compiler. It instructs the JVM runtime to call a static method. There are also invokevirtual, invokeinterface and invokedynamic instructions, for the other kinds of methods. Each Java class is associated with a virtual method table that contains pointers to the bytecode of each method of the class. That table is inherited from the superclass of a particular class and extended with the new methods of a subclass; those methods are called using invokevirtual. Invokedynamic is the latest introduction and might be of interest to NetRexx; the dust is just settling around the API to use it in Java programs, and it will be an interesting task to ponder where it should be employed to the advantage of NetRexx.

The #n is the method (pointer) that has been loaded at that location on the stack. Javap helps us by printing it as a comment after the invocation.

So the point of my email was that unlike Classic and ooRexx, the very aggressive optimization of the Java compiler renders the introduction of nop statements a non-problem.

But prompted by your observation I re-ran the programs with System.nanoTime() and got more reasonable outcomes. At least I can now measure the time an empty loop takes. The programs are now as follows:

  ➜ test git:(master) ✗ cat nop.nrx
  options binary
  a = long System.nanoTime()
  nop
  b = long System.nanoTime()
  say b - a

  ➜ test git:(master) ✗ cat loopnop.nrx
  options binary
  a = System.nanoTime()
  loop for 100000
    nop
  end
  b = System.nanoTime()
  say b - a

And the results now are:

  ➜ test git:(master) ✗ java nop
  180
  ➜ test git:(master) ✗ java nop
  160
  ➜ test git:(master) ✗ java nop
  157
  ➜ test git:(master) ✗ java nop
  201
  ➜ test git:(master) ✗ java nop
  187
  ➜ test git:(master) ✗ java nop
  158
  ➜ test git:(master) ✗ java loopnop
  612817
  ➜ test git:(master) ✗ java loopnop
  598831
  ➜ test git:(master) ✗ java loopnop
  599397
  ➜ test git:(master) ✗ java loopnop
  620352
  ➜ test git:(master) ✗ java loopnop
  640828

Which, I think, is in line with expectations.

best regards,

René.
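To make the distinction concrete, here is a small sketch (the class and method names are invented for illustration): a call to a static method compiles to an invokestatic instruction, while a call made through an instance compiles to invokevirtual, and javap -c on the generated class will show both forms.

  -- CallKinds.nrx: illustrative only
  options binary
  class CallKinds
    method twice(n=int) static returns int
      return 2 * n                     -- callers reach this via invokestatic
    method plusOne(n=int) returns int
      return n + 1                     -- callers reach this via invokevirtual
    method main(args=String[]) static
      say CallKinds.twice(21)          -- this call compiles to invokestatic
      k = CallKinds()
      say k.plusOne(41)                -- this call compiles to invokevirtual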
Rene & all,
I personally *think* that we should *not squander* our energies on NOP statements, but on possible NEW statements ... I still feel that NetRexx should implement REPORT, PRINT, DISPLAY, SHOW, etc., as specified a long time ago on KENAI. I did make an error, however: I didn't implement them in Core NetRexx :-( ... and Kermit did delete my proposed language extensions, due to lack of implementations ... My fault, sorry :-( I did, however, implement them ... So what?

Thomas.
René, I thank you and the others for the time and research done related to this post. I think it was definitely worth it. At the least, it was another feather in NetRexx's cap. It also shows your dedication to your position. Thanks.