Argonne National Laboratory

Experimental Physics and
Industrial Control System


Subject: RE: MEDM 3.13 vs. 3.14
From: "Jeff Hill" <johill@lanl.gov>
To: "'Marty Kraimer'" <mrk@aps.anl.gov>
Cc: "'Kenneth Evans, Jr.'" <evans@aps.anl.gov>, <nda@aps.anl.gov>, "'Johnson, Andrew N.'" <anj@aps.anl.gov>, "Janet Anderson" <jba@aps.anl.gov>, "Bob Dalesio" <ldalesio@lanl.gov>, <core-talk@aps.anl.gov>
Date: Tue, 21 Jan 2003 11:01:14 -0700
The changes I committed Thursday evening improve the efficiency of
ca_flush(), which is also called by ca_pend_event(), ca_pend_io(), and
ca_poll(). The performance differences were not detected by my R3.14
performance tests because their results compared favorably with R3.13,
and because these tests call flush only once at the end of a large set
of IO requests.
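The amortization described above (one flush covering a large batch of queued requests rather than one flush per request) can be sketched with a toy cost model; the per-request and per-flush constants below are invented for illustration, not measured values:

```python
# Toy model: batching many CA requests per flush amortizes the flush cost.
# Constants are illustrative assumptions, not measurements.
T_REQUEST = 3e-6    # assumed cost to queue one get/put request (seconds)
T_FLUSH = 200e-6    # assumed cost of one flush of the send queue (seconds)

def total_time(n_requests, requests_per_flush):
    """Total time to issue n_requests, flushing every requests_per_flush."""
    flushes = -(-n_requests // requests_per_flush)  # ceiling division
    return n_requests * T_REQUEST + flushes * T_FLUSH

one_flush_at_end = total_time(10_000, 10_000)  # flush once, after all requests
flush_per_request = total_time(10_000, 1)      # flush after every request

print(f"single flush: {one_flush_at_end:.4f} s, "
      f"flush per request: {flush_per_request:.4f} s")
```

Under this model the single-flush run pays the flush cost once, so a test structured that way is largely insensitive to the cost of ca_flush() itself, which is consistent with why the regression went unnoticed.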

I can't provide R3.13 test results at the moment because patch-release
changes to the R3.13 build system seem to prevent an R3.13 "catime"
from being built, as was possible in the past, using the makefile in
R3.13's base/src/ca.

Nevertheless, here are some R3.14 results. The 71 Mbps maximum
throughput is probably a favorable result for a 100 Mbps LAN, given
that CA client CPU consumption peaked at less than 12% on each of two
CPUs. I suspect that the results may be limited by the portable (now
single-threaded) CA server running on one of our newer uniprocessor
Linux boxes, or possibly by the network interfaces and/or switches.
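For reference, catime's "Items per sec" column is just the reciprocal of "Elapsed Per Item", and the Mbps column additionally depends on how many bytes are charged per item on the wire. The helper below reproduces that arithmetic; the 40-byte wire size is a hypothetical value for illustration, not catime's actual accounting:

```python
def items_per_sec(elapsed_per_item):
    """Invert catime's 'Elapsed Per Item' to get the item rate."""
    return 1.0 / elapsed_per_item

def mbps(elapsed_per_item, bytes_per_item):
    """Throughput in megabits/sec for an assumed wire size per item."""
    return items_per_sec(elapsed_per_item) * bytes_per_item * 8 / 1e6

# The float async get line below reports 0.00000371 s/item; inverting the
# printed (rounded) value gives ~269,542 items/sec, close to the reported
# 269,869.6 -- the small gap is consistent with rounding of the displayed
# elapsed time. The 40 bytes/item here is an assumption for illustration.
print(f"{items_per_sec(0.00000371):.1f} items/sec, "
      f"{mbps(0.00000371, 40):.1f} Mbps")
```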

D:\users\hill\epicsDvl\epics\base>catime joh:bill
Testing with 10000 channels named joh:bill
channel connect test
Elapsed Per Item =   0.00007238 sec,    13815.4 Items per sec, 7.3 Mbps
Search tries per chan - mean=1.142600 std dev=0.417690 min=1.000000
max=3.000000

channel name=joh:bill, native type=6, native count=1
        pend event test
Elapsed Per Item =   0.00002332 sec,    42883.6 Items per sec
float test
        async put test
Elapsed Per Item =   0.00000324 sec,   308700.3 Items per sec, 42.0 Mbps
        async get test
Elapsed Per Item =   0.00000371 sec,   269869.6 Items per sec, 71.2 Mbps
        synch get test
Elapsed Per Item =   0.00023484 sec,     4258.2 Items per sec, 1.1 Mbps
double test
        async put test
Elapsed Per Item =   0.00000264 sec,   379012.5 Items per sec, 51.5 Mbps
        async get test
Elapsed Per Item =   0.00000352 sec,   284206.7 Items per sec, 75.0 Mbps
        synch get test
Elapsed Per Item =   0.00026464 sec,     3778.8 Items per sec, 1.0 Mbps
string test
        async put test
Elapsed Per Item =   0.00000341 sec,   293473.3 Items per sec, 49.3 Mbps
        async get test
Elapsed Per Item =   0.00000545 sec,   183500.7 Items per sec, 54.3 Mbps
        synch get test
Elapsed Per Item =   0.00027926 sec,     3580.9 Items per sec, 1.1 Mbps
integer test
        async put test
Elapsed Per Item =   0.00000267 sec,   374066.7 Items per sec, 50.9 Mbps
        async get test
Elapsed Per Item =   0.00000351 sec,   285104.6 Items per sec, 75.3 Mbps
        synch get test
Elapsed Per Item =   0.00024215 sec,     4129.6 Items per sec, 1.1 Mbps
round trip jitter test
Round trip get delays - mean=0.000253 sec, std dev=0.000176 sec,
min=0.000206 sec, max=0.008954 sec
free test
Elapsed Per Item =   0.00000157 sec,   637174.2 Items per sec, 0.0 Mbps

Jeff

> -----Original Message-----
> From: Marty Kraimer [mailto:mrk@aps.anl.gov]
> Sent: Tuesday, January 21, 2003 7:26 AM
> To: Jeff Hill; Johnson, Andrew N.
> Cc: Kenneth Evans, Jr.
> Subject: Re: MEDM 3.13 vs. 3.14
> 
> I ran the dbcaPerform test without running changeLinks.
> This is on a 68040 vxWorks ioc.
> 
> On 3.13 it was 44% idle and on 3.14 it was only 25% idle.
> 
> Here is the breakdown
> 
> R3_13
> 
> tSpyTask     spyComTask   8aa734    5     2% (     169)    2% (      17)
> tNetTask     netTask      f57408   50     2% (     212)    3% (      23)
> scan60                    c8a4a0   75     7% (     572)    7% (      53)
> cbLow        callbackTa   ec10f4   85    25% (    1983)   23% (     170)
> dbCaLink     dbCaTask     eb4bf0   88    11% (     930)   11% (      85)
> cpuUsageTask              eb1bc8  255    12% (     946)    0% (       0)
> KERNEL                                    0% (      68)    3% (      23)
> INTERRUPT                                 0% (      71)    2% (      15)
> IDLE                                     35% (    2787)   44% (
> 
> R3_14
> 
> tSpyTask     spyComTask   763af0    5     2% (      32)    4% (      32)
> tNetTask     netTask      f57408   50     1% (      17)    1% (      12)
> scan1                     90a8c0  136    12% (     156)   11% (      92)
> cbLow                     ebb3c4  140    33% (     426)   34% (     265)
> CAC-TCP-send              8c9a98  147     1% (      16)    1% (       8)
> dbCaLink                  eaecf8  149    10% (     129)   10% (      81)
> CAC-TCP-recv              8cc748  150     8% (     114)    9% (      70)
> IDLE                                     28% (     358)   25% (     197)
> TOTAL                                    95% (    1268)   95% (     768)
> 
> NOTE that cbLow does CA puts to the other ioc.
> 
> I think this test also indicates that CA is using lots more CPU TIME.
> 
> Marty

