Jeff said:
>If the client's sustained event consumption
>rate is slower than the scan task's sustained event consumption
>rate then we will eventually overflow the finite buffering system.
>Likewise if the average available network bandwidth constrains the
>event transmission rate to the client to the point where it
>is slower than the event production rate then we will also
>overflow the finite buffering system.
There is one other important scenario for lost data that I am trying
to address: CPU-cycle starvation of the CA client tasks. In this
scenario, the network client is happily waiting for data, and flow
control does not come into play.
>1) The scan tasks consume enough CPU so that the server is
>starved for CPU.
> SOLUTION=> split into two IOCs or buy new CPU
The starvation is random and not sustained, and it is wasteful to
double the CPU power of the entire system to deal with this problem.
Approximate cost for this solution: $200,000.
>Another problem: If we make socket calls from the scan tasks
>then their maximum stack consumption may increase. Do a tt()
>on one of the event tasks while it is running a few times
>if you would like to see how many subroutines deep the
>IP kernel goes.
This is not a serious problem. We use this technique in our high
bandwidth data acquisition systems (>1MB/sec over ethernet), and our
stack sizes are not that large. Also, since I am proposing greatly
reducing the number of stacks, even doubling their size is not an issue.
>Intermediate monitors will be dropped in a consumer producer system where
>the sustained production rate is higher than the sustained consumption
>rate if we are not willing to suspend event production (fact of life).
Agree. If the client is too slow, we must keep the current behavior of
discarding data.
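To illustrate the discard policy being agreed on here, a minimal sketch of a single-slot monitor cache (this is an illustration of the general technique, not the actual CA server code): when the consumer falls behind, the producer overwrites the pending value, so intermediate updates are dropped but the client always eventually sees the most recent value.

```python
class MonitorSlot:
    """Single-slot per-channel monitor cache (illustrative sketch only).

    If an update is still pending when the producer posts a new one,
    the old value is overwritten: intermediate monitors are lost, but
    the latest value survives for the slow consumer."""

    def __init__(self):
        self.value = None
        self.pending = False
        self.dropped = 0      # count of intermediate updates discarded

    def post(self, value):
        """Producer side: post a new monitor value."""
        if self.pending:
            self.dropped += 1     # overwrite: intermediate value lost
        self.value = value
        self.pending = True

    def take(self):
        """Consumer side: take the latest value, or None if nothing pending."""
        if not self.pending:
            return None
        self.pending = False
        return self.value
```

With this scheme the producer never blocks and the buffer never grows, at the cost of losing intermediate values when the client is too slow.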
>Con: No preemptive multi-tasking. ie the alarm client is not allowed to be
>serviced at a higher priority than some of the scan tasks etc. We dont do
>this now but perhaps we would like to.
We could deal with this by having multiple servers, i.e. a known port for
high priority, another for medium, another for low.
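The multiple-server idea above can be sketched as one listening socket per priority class, each to be drained by a server task running at a matching OS priority. The port assignments and function name here are invented for illustration; this sketch binds OS-assigned ports, whereas a real deployment would use well-known port numbers agreed with clients.

```python
import socket

# Priority classes, one listener each (assumed names, for illustration).
PRIORITIES = ("high", "medium", "low")

def open_priority_listeners(host="127.0.0.1"):
    """Return {priority: listening socket}, one per priority class.

    Each socket would be serviced by its own server task, so the
    high-priority client (e.g. the alarm client) can be serviced
    ahead of lower-priority traffic."""
    listeners = {}
    for prio in PRIORITIES:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind((host, 0))   # port 0: OS picks a free port in this sketch
        s.listen()
        listeners[prio] = s
    return listeners
```

A client wanting high-priority service would simply connect to the high-priority port; no protocol change is needed beyond publishing the extra port numbers.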
Chip