EPICS Controls Argonne National Laboratory

Experimental Physics and
Industrial Control System


Subject: RE: excessive ioc memory utilization
From: "Jeff Hill" <[email protected]>
To: "'Geoff Savage'" <[email protected]>
Cc: "'EPICS Tech-Talk'" <[email protected]>
Date: Tue, 10 May 2005 17:13:22 -0600
> The culprit was a misbehaving channel access client.  Somehow
> it connected tens of thousands of channels which increased the
> memory usage by CA.  When it was stopped the memory on the 
> IOC was not returned.  

The CA server and CA client use free lists by design. Here are
some implications.

o The system pool is not fragmented. System pool fragmentation has
been observed to be a problem with past versions of vxWorks.

o Memory for small data structures is allocated from the pool in
large chunks of N contiguous smaller data structures.

o Runtime overhead for memory management is lower.

o When CA finishes using a data structure it is returned to the
free list for reuse only as the same data structure; it is not
returned to the system pool. Typically, a free list is maintained
by a parent data structure, and the entire free list is returned
to the pool when the parent data structure is reclaimed, but this
might not occur until the IOC is shut down.
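The free-list pattern described above can be sketched in C. This is a
generic illustration only, not the actual EPICS implementation; the
`FreePool` type and `freePool*` function names here are hypothetical.

```c
#include <stdlib.h>

/* Minimal sketch of a free-list allocator: items are carved out of
 * large contiguous chunks taken from the system pool, and a freed
 * item goes back on the free list for reuse as the same structure,
 * not back to the system pool. */

typedef struct FreeNode { struct FreeNode *next; } FreeNode;
typedef struct Chunk { struct Chunk *next; } Chunk;

typedef struct {
    size_t itemSize;      /* size of each item, >= sizeof(FreeNode) */
    size_t itemsPerChunk; /* N items allocated from pool at once */
    FreeNode *free;       /* list of reusable items */
    Chunk *chunks;        /* chunks, released only on destroy */
} FreePool;

void freePoolInit(FreePool *p, size_t itemSize, size_t itemsPerChunk) {
    p->itemSize = itemSize < sizeof(FreeNode) ? sizeof(FreeNode) : itemSize;
    p->itemsPerChunk = itemsPerChunk;
    p->free = NULL;
    p->chunks = NULL;
}

void *freePoolAlloc(FreePool *p) {
    if (!p->free) {
        /* grab one big chunk of N items from the system pool */
        Chunk *c = malloc(sizeof(Chunk) + p->itemSize * p->itemsPerChunk);
        if (!c) return NULL;
        c->next = p->chunks;
        p->chunks = c;
        char *items = (char *)(c + 1);
        for (size_t i = 0; i < p->itemsPerChunk; i++) {
            FreeNode *n = (FreeNode *)(items + i * p->itemSize);
            n->next = p->free;
            p->free = n;
        }
    }
    FreeNode *n = p->free;
    p->free = n->next;
    return n;
}

void freePoolFree(FreePool *p, void *item) {
    /* returned to the free list, not to the system pool */
    FreeNode *n = item;
    n->next = p->free;
    p->free = n;
}

void freePoolDestroy(FreePool *p) {
    /* only here does memory actually return to the system pool */
    while (p->chunks) {
        Chunk *c = p->chunks;
        p->chunks = c->next;
        free(c);
    }
    p->free = NULL;
}
```

This illustrates why the IOC's memory did not shrink when the rogue
client disconnected: the per-structure free lists retain their chunks
until the owning data structure is destroyed.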

> Are there tools to monitor the CA memory usage?  

Free list memory consumption by the CA server can be tracked by
passing a higher interest level argument to casr.

With EPICS R3.14 the CA client library provides similar status
reporting, but it is usually layered into the overall status
reporting of the CA client application. The DB CA link facility is
a good example.

> Is this the expected behavior of CA?

Yes, CA uses free lists by design, and it does not implement
quotas to stop a badly behaved client application from introducing
an unacceptable load should it create too many channels,
subscriptions, etc.

Perhaps user-based CA client resource consumption quotas should
be added to the access security system in a future release?

Jeff

> -----Original Message-----
> From: Geoff Savage [mailto:[email protected]]
> Sent: Tuesday, May 10, 2005 2:13 PM
> To: Jeff Hill
> Cc: 'EPICS Tech-Talk'
> Subject: Re: excessive ioc memory utilization
> 
> Hi Jeff,
> 
> The culprit was a misbehaving channel access client.  Somehow
> it connected tens of thousands of channels which increased the
> memory usage by CA.  When it was stopped the memory on the IOC
> was not returned.  Are there tools to monitor the CA memory
> usage?  Is this the expected behaviour of CA?
> 
> Thanks,  Geoff
> 
> Jeff Hill wrote:
> 
> >Geoff,
> >
> >Use memShow to obtain status of the system pool. A common cause
> >of system pool depletion will be system pool data structure
> >corruption, or of course some program continuously allocating
> >memory that it does not free. The former shows up instantaneously
> >while the latter generally can be predicted as a trend. If the
> >problem is easily reproducible you may find that selective use of
> >the task suspend command can be used to isolate the responsible
> >thread. Typically a binary search is used where you suspend one
> >half of the threads initially followed by half of the remaining
> >half etc to tighten the noose with minimized effort.
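The binary-search isolation described in the quoted text can be
sketched abstractly. This is a simulation only: `isolateCulprit` and
`demoCheck` are hypothetical names, and the leak-check callback stands
in for suspending half of the tasks and watching memShow for further
growth.

```c
#include <stddef.h>

/* Returns whether the leak continues while only tasks in [lo, hi)
 * are left running, i.e. whether the culprit lies inside [lo, hi). */
typedef int (*LeakCheck)(size_t lo, size_t hi, void *ctx);

/* Binary-search isolation: repeatedly suspend half of the remaining
 * suspects and keep the half that still exhibits the leak, assuming
 * exactly one culprit task in [0, n). */
size_t isolateCulprit(size_t n, LeakCheck leakContinues, void *ctx) {
    size_t lo = 0, hi = n;
    while (hi - lo > 1) {
        size_t mid = lo + (hi - lo) / 2;
        if (leakContinues(lo, mid, ctx))
            hi = mid;   /* culprit is in the first half */
        else
            lo = mid;   /* culprit is in the second half */
    }
    return lo;
}

/* Example predicate for a simulated run: the "leak" follows task 5. */
static size_t demoCulprit = 5;
static int demoCheck(size_t lo, size_t hi, void *ctx) {
    (void)ctx;
    return demoCulprit >= lo && demoCulprit < hi;
}
```

With n tasks this takes about log2(n) suspend/observe rounds, which is
the "tighten the noose with minimized effort" idea above.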
> >
> >With your version of vxWorks the network stack pool shouldn't be
> >increasing past the limits set when vxWorks is built.
> >
> >There is an integer level argument to casr that can be used to
> >get more information about what it has allocated.
> >
> >Jeff
> >
> >
> >
> >>-----Original Message-----
> >>From: Geoff Savage [mailto:[email protected]]
> >>Sent: Tuesday, May 10, 2005 9:48 AM
> >>To: EPICS Tech-Talk
> >>Subject: excessive ioc memory utilization
> >>
> >>Hi,
> >>
> >>When I came in this morning our high voltage iocs are using
> >>excessive amounts of memory, around 95%.  They are all mv2301
> >>processors with 16MB of memory running epics 3.14.6 built with
> >>vxworks 5.5.1.  The netStackSysPoolShow and netStackDataPoolShow
> >>outputs are reasonable.  There are not an excessive number of
> >>channels connected.  What can I use to look and see where the
> >>memory is allocated?  Any other suggestions?
> >>
> >>Thanks,  Geoff
> >>
> >>-> netStackSysPoolShow
> >>type        number
> >>---------   ------
> >>FREE    :    5991
> >>DATA    :      0
> >>HEADER  :      0
> >>SOCKET  :     21
> >>PCB     :     35
> >>RTABLE  :     91
> >>HTABLE  :      0
> >>ATABLE  :      0
> >>SONAME  :      0
> >>ZOMBIE  :      0
> >>SOOPTS  :      0
> >>FTABLE  :      0
> >>RIGHTS  :      0
> >>IFADDR  :      4
> >>CONTROL :      0
> >>OOBDATA :      0
> >>IPMOPTS :      0
> >>IPMADDR :      2
> >>IFMADDR :      0
> >>MRTABLE :      0
> >>TOTAL   :    6144
> >>number of mbufs: 6144
> >>number of times failed to find space: 0
> >>number of times waited for space: 0
> >>number of times drained protocols for space: 0
> >>__________________
> >>CLUSTER POOL TABLE
> >>_______________________________________________________________________________
> >>size     clusters  free      usage
> >>-------------------------------------------------------------------------------
> >>64       1024      965       15123
> >>128      1024      996       1647
> >>256      512       446       10893
> >>512      512       512       0
> >>-------------------------------------------------------------------------------
> >>value = 80 = 0x50 = 'P'
> >>
> >>-> netStackSysPoolShow
> >>type        number
> >>---------   ------
> >>FREE    :    5991
> >>DATA    :      0
> >>HEADER  :      0
> >>SOCKET  :     21
> >>PCB     :     35
> >>RTABLE  :     91
> >>HTABLE  :      0
> >>ATABLE  :      0
> >>SONAME  :      0
> >>ZOMBIE  :      0
> >>SOOPTS  :      0
> >>FTABLE  :      0
> >>RIGHTS  :      0
> >>IFADDR  :      4
> >>CONTROL :      0
> >>OOBDATA :      0
> >>IPMOPTS :      0
> >>IPMADDR :      2
> >>IFMADDR :      0
> >>MRTABLE :      0
> >>TOTAL   :    6144
> >>number of mbufs: 6144
> >>number of times failed to find space: 0
> >>number of times waited for space: 0
> >>number of times drained protocols for space: 0
> >>__________________
> >>CLUSTER POOL TABLE
> >>_______________________________________________________________________________
> >>size     clusters  free      usage
> >>-------------------------------------------------------------------------------
> >>64       1024      965       15125
> >>128      1024      996       1647
> >>256      512       446       10894
> >>512      512       512       0
> >>-------------------------------------------------------------------------------
> >>value = 80 = 0x50 = 'P'
> >>
> >>
> >>-> casr
> >>Channel Access Server V4.11
> >>Connected circuits:
> >>TCP 131.225.231.65:46261(d0olj.fnal.gov): User="d0cal", V4.8, 70 Channels, Priority=0
> >>TCP 131.225.230.156:1026(d0olctl38): User="vxworks", V4.11, 6 Channels, Priority=80
> >>TCP 131.225.231.39:45142(d0ol57.fnal.gov): User="d0fpd", V4.8, 6 Channels, Priority=0
> >>TCP 131.225.231.39:45149(d0ol57.fnal.gov): User="d0fpd", V4.8, 4 Channels, Priority=0
> >>TCP 131.225.231.252:60499(d0ol49.fnal.gov): User="d0fpd", V4.8, 4 Channels, Priority=0
> >>TCP 131.225.231.252:49088(d0ol49.fnal.gov): User="d0fpd", V4.8, 30 Channels, Priority=0
> >>TCP 131.225.231.39:49714(d0ol57.fnal.gov): User="d0fpd", V4.8, 6 Channels, Priority=0
> >>TCP 131.225.231.247:42004(d0ol45.fnal.gov): User="d0cal", V4.8, 3 Channels, Priority=0
> >>TCP 131.225.231.247:43286(d0ol45.fnal.gov): User="d0cal", V4.8, 128 Channels, Priority=0
> >>value = 0 = 0x0
> >>
> >>
> >
> >
> >
> >




Replies:
Re: excessive ioc memory utilization Benjamin Franksen
References:
Re: excessive ioc memory utilization Geoff Savage
