Subject: Re: Changes to records during asynch processing
From: Benjamin Franksen <[email protected]>
To: [email protected]
Date: Fri, 29 Aug 2003 14:50:00 +0200
Marty Kraimer wrote:
> 
> Benjamin Franksen wrote:
> 
> > Hmmm. It depends of course on how much is deemed "significant runtime
> > overhead". I imagine a solution along the following lines: All puts that
> > occur during asynch processing (i.e. target record's PACT==TRUE) are
> > redirected (by dbPut) into a queue (linked list or circular buffer or
> > some such thing), where pairs consisting of {field address, new value}
> > are stored. Only when the record completes processing, the puts stored
> > in this list actually happen. After that, eventual re-processing is done.
> >
> > The advantage is that for the 'normal' situation (either PACT==FALSE or
> > dbScanLock is active) the only additional overhead is the check for PACT
> > and the check for an empty queue after processing. These are effectively
> > two machine instructions. Only if no scan lock is active and PACT==TRUE,
> > real overhead will occur. Even this overhead is not too great a cost,
> > IMHO, if it solves the problem.
> >
> > I haven't spent any time thinking about the most clever representation
> > of the 'deferred puts' in the queue but certainly this can be worked
> > out.
> >
> > What do you think?
> 
> At the present time, after initialization, nothing in base except channel access
> server code allocates or frees memory. Channel access server code must, because
> clients are connecting and disconnecting. The CA code manages memory VERY carefully.
> 
> In order to implement what you suggest the queue must be managed very carefully.

Yes. Note that allocation and deallocation will happen in strict FIFO
order. That's why I suggested a ring buffer. The buffer could be
allocated per record, and only if such a put actually happens. IIRC, CA
does not deallocate its buffers, and I'd suggest the same here, on the
grounds that such a condition (a put happening during asynch
processing) is likely to occur again in the future. The buffer size
could be increased if necessary.
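
To make the memory behaviour concrete, here is a minimal sketch of the
per-record ring buffer I have in mind (the names deferredPut, dputRing
and dputEnqueue are invented for this illustration and are of course
not in base):

    #include <stdlib.h>

    typedef struct deferredPut {
        void  *pfield;      /* address of the target field          */
        short  dbrType;     /* request type of the pending put      */
        long   nRequest;    /* number of elements                   */
        void  *pvalue;      /* private copy of the data to write    */
    } deferredPut;

    typedef struct dputRing {
        deferredPut *buf;   /* lazily allocated on first deferred put */
        size_t size;        /* capacity; may grow, is never freed     */
        size_t head;        /* producer index                         */
        size_t tail;        /* consumer index                         */
    } dputRing;

    /* Allocation happens only the first time a put arrives while the
       record is busy; the buffer is then kept for the life of the IOC,
       so the common fast path never touches the allocator. */
    static int dputEnqueue(dputRing *pr, const deferredPut *pp)
    {
        if (!pr->buf) {
            pr->size = 8;
            pr->buf = calloc(pr->size, sizeof(deferredPut));
            if (!pr->buf) return -1;
        }
        if ((pr->head + 1) % pr->size == pr->tail)
            return -1;      /* full: here the buffer would be grown */
        pr->buf[pr->head] = *pp;
        pr->head = (pr->head + 1) % pr->size;
        return 0;
    }

Draining would walk from tail to head in the same way, which gives the
strict FIFO order for free.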

BTW, isn't this a good opportunity to leverage the experience and
coding effort already sunk into CA? Maybe some of the CA code that
handles memory management could be factored out into an independent
library and then re-used?

> But what goes into the queue? What if the value is an array? If it is
> an array, record support routines are involved. What fields of a record
> are involved?

The idea was to handle *all* (legal) puts to *any* field of the record.
Arrays, special fields etc. are handled exactly the same way as in the
normal case. The only difference is that the action is deferred until
processing of the target record is complete.
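
As far as I can see there are only two touch points: dbPut itself and
the moment asynchronous processing completes. Very roughly, and with
all helper names (dputRemember, dputRingOf, dputIsEmpty, dputReplayAll,
applyPut, plus the dputRing type from the sketch above) invented for
the purpose:

    #include <dbAccess.h>
    #include <dbScan.h>

    /* applyPut stands for the put machinery that exists today;
       the other helpers are made up for this sketch. */

    long dbPutSketch(DBADDR *paddr, short dbrType,
                     const void *pbuffer, long nRequest)
    {
        dbCommon *prec = paddr->precord;

        /* The real test would also have to consider whether the caller
           already holds the record's scan lock (the dbScanLock case
           from my earlier mail). */
        if (prec->pact) {
            /* Target record is busy with asynch processing: store the
               {field address, value} pair instead of writing now. */
            return dputRemember(prec, paddr, dbrType, pbuffer, nRequest);
        }
        /* Normal case (PACT==FALSE): exactly today's code path. */
        return applyPut(paddr, dbrType, pbuffer, nRequest);
    }

    /* Called once when asynchronous processing completes. */
    void dputFlush(dbCommon *prec)
    {
        dputRing *pring = dputRingOf(prec);   /* per-record buffer    */

        if (pring && !dputIsEmpty(pring)) {   /* the cheap check in   */
                                              /* the common case      */
            dputReplayAll(pring);  /* perform stored puts, FIFO order */
            scanOnce(prec);        /* eventual re-processing          */
        }
    }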

An interesting variant would be to allow record and device support to
flag the fields they want to protect, and to postpone puts only to
fields marked in this way. Yet another variant would be to add such a
flag to fields in the recordtype definition.
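
For the first variant I imagine something along these lines in record
(or device) support; dbDeferPutsDuringPact is a completely made-up call,
only meant to show the shape of the interface:

    #include <dbDefs.h>
    #include <boRecord.h>

    static long init_record(boRecord *prec, int pass)
    {
        if (pass == 0) {
            /* Register only the fields this support wants protected;
               puts to all other fields behave exactly as today. */
            dbDeferPutsDuringPact((dbCommon *)prec, &prec->val);
            dbDeferPutsDuringPact((dbCommon *)prec, &prec->mlst);
        }
        return 0;
    }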

> Note
> that for the bo record only the VAL field is a problem because this is the only
> field that the bo record support compares with MLST.

Hmm. Doesn't this rather cure a symptom, instead of the cause? How long
until other problems surface? For instance, how many device supports
implicitly assume that the fields of a record don't change during
processing?

Of course, device supports and record supports *should* be aware of the
problem and save any record fields which are needed in the second stage
of asynch processing. But even if all of them did so, this is definitely
*not* efficient: because of the pessimistic assumption, the copying must
be done on every processing cycle, even if there is never a put to any
of the relevant fields. With the scheme sketched above (copy-on-write),
such pessimistic copying can be abandoned.
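
To make the cost concrete, this is roughly what a careful asynchronous
device support has to do today, on every single processing cycle,
whether or not anybody ever writes to the record while it is busy (the
dpvt layout and the two I/O helpers are made up for the example):

    #include <dbDefs.h>
    #include <boRecord.h>

    typedef struct boSaved {
        epicsEnum16 val;     /* private copy of VAL  */
        epicsUInt32 mask;    /* private copy of MASK */
    } boSaved;

    /* made-up helpers, declared only so the sketch hangs together */
    static void startAsyncIo(boRecord *prec);
    static void finishAsyncIo(boRecord *prec,
                              epicsEnum16 val, epicsUInt32 mask);

    static long write_bo(boRecord *prec)
    {
        boSaved *psave = (boSaved *)prec->dpvt;

        if (!prec->pact) {            /* first phase: start the I/O   */
            psave->val  = prec->val;  /* pessimistic copies, made     */
            psave->mask = prec->mask; /* unconditionally every time   */
            prec->pact = TRUE;
            startAsyncIo(prec);
            return 0;
        }
        /* second phase: must use the saved copies, not the live fields */
        finishAsyncIo(prec, psave->val, psave->mask);
        return 0;
    }

With the deferred-put queue those copies (and the bookkeeping around
them) simply disappear, because the live fields are guaranteed not to
change while PACT is TRUE.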

Ben

Replies:
Re: Changes to records during asynch processing Marty Kraimer
References:
Does DISP work for DB OUT links? Benjamin Franksen
Re: Does DISP work for DB OUT links? Marty Kraimer
Re: Does DISP work for DB OUT links? Benjamin Franksen
Re: Does DISP work for DB OUT links? Marty Kraimer
Re: Does DISP work for DB OUT links? Related question Benjamin Franksen
Re: Does DISP work for DB OUT links? Related question Marty Kraimer
Changes to records during asynch processing Benjamin Franksen
Re: Changes to records during asynch processing Marty Kraimer
