EPICS Controls Argonne National Laboratory

Experimental Physics and
Industrial Control System


Subject: Re: Invitation to test cothread.catools release candidate
From: Matt Newville <[email protected]>
To: [email protected]
Cc: [email protected]
Date: Wed, 14 Mar 2012 19:48:15 -0500
Hi Michael (and Michael D),

Thanks for the reply.  Sorry for the delay in responding and the
length of this message.  I may be starting to understand...

On Tue, Mar 13, 2012 at 3:12 AM,  <[email protected]> wrote:
> From: [email protected] [mailto:[email protected]] On
>> Thanks for the reply.  You might have to bear through some more
>> questions...
>>
>> > 1. Controlled concurrency.
> ...
>> >
>> > With Python threads it seems to me that the biggest down side of
>> > cothread, that there is only one thread of execution, has much
>> > less impact, as Python only ever executes one thread at a time
>> >  in the interpreter anyway!
>>
>> To me, this would seem to have very little to do with Channel Access,
>> and more to do with python's threading implementation.  Is that a fair
>> characterization?  If so, should it be a separate python package,
>> useful outside of the scope of Channel Access?
>
> Certainly cothread stands alone.  The only reason it's so tightly bound to cothread.catools is that catools is implemented in an essential way on top of cothread, and cothread doesn't currently have a separate existence.  In truth I see the absence of working Windows support as an Achilles' heel here.

It seems that most of the work in cothread centers on Python's
threading model, yet cothread is not listed or discussed as a general
way to create light-weight threads in Python.  Instead, it's discussed
only in the domain of Channel Access.  But Channel Access has its own
threads, and it can be used effectively without any Python threads at
all.  So, I'm curious:
  a) why cothread is not discussed in the context of Python threading
models generally, i.e. for code that is unrelated to CA.  How does it
do in those other applications?
  b) why all the discussion of cothread seems to assume that one needs
Python threads when working with Channel Access?
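To make concrete what "light-weight threads multiplexed onto one OS thread" means in general Python terms, here is a toy cooperative scheduler built on plain generators. This is NOT how cothread is implemented (it uses greenlet-style stack switching), and the names `run` and `worker` are invented for illustration; it only sketches the general idea: many tasks sharing one interpreter thread, switching only at explicit suspension points.

```python
# A toy cooperative scheduler: many generator-based "threads" are
# multiplexed round-robin onto the single calling thread, and control
# transfers only at explicit yield points.

from collections import deque

def run(tasks):
    """Round-robin over generator-based tasks until all have finished."""
    ready = deque(tasks)
    results = []
    while ready:
        task = ready.popleft()
        try:
            next(task)          # run until the task's next yield point
            ready.append(task)  # not finished: reschedule it
        except StopIteration as done:
            results.append(done.value)  # the generator's return value
    return results

def worker(name, steps):
    # Each yield is a cooperative suspension point.
    for _ in range(steps):
        yield
    return name

# Tasks finish in order of how many suspension points they pass through.
print(run([worker("a", 3), worker("b", 1), worker("c", 2)]))  # ['b', 'c', 'a']
```

The point of the sketch is only that such a scheduler is ordinary Python with no CA content at all, which is why the question above asks whether cothread has uses outside Channel Access.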

>> > 2. Very light weight "threads"
>> >
>> > Coding with callbacks requires a continuation style of programming
>> > mixed with threading based events when you need to resynchronise
>> > with the original caller.  This is definitely harder to write,
>> > particularly when exceptions and
>> > error handling need to be taken into account.
>>
>> I'm not sure I'm following you here.  Concrete examples would be nice.
>
> Ok, I'll try.  I developed an example in draft yesterday but threw it away, but I think it's worth doing again.  I'll need to go into quite a bit of detail, but I think with your comments below on how to use pyepics and your caget() code in front of me for contrast I can make the necessary points.
>
> Let's first look at a schematic of the implementation of caget() in cothread.catools:
>
>        def caget(pvs, **kargs):
>            if isinstance(pvs, str):
>                return caget_one(pvs, **kargs)
>            else:
>                return caget_array(pvs, **kargs)
>
> Actually, this isn't schematic at all, this is the full truth.  This "polymorphism" of caget() was very contentious when James and I were originally developing this library, but our experience with this design has been very good.  Now, straight away I can show my first essential use of cothreads:
>
>        def caget_array(pvs, **kargs):
>            return cothread.WaitForAll([
>                cothread.Spawn(caget_one, pv, raise_on_wait=True, **kargs)
>                for pv in pvs])
>
> Again this is the full code.  The point here is that cothread.Spawn() launches concurrent copies of caget_one(pv, **kargs), one for each requested pv in pvs, and then cothread.WaitForAll() gathers all the concurrent results together.  The raise_on_wait flag ensures that if a spawned task raised an exception, the waiter receives that exception when it waits for the task to complete (otherwise the exception is just logged and the waiter receives None).
>

OK.  I think that is a very nice design, but it does come at a cost.

> I think I can already compare with true threads and with callbacks at this point.  We *can* use true threads here, but it's expensive and wasteful, and without careful management of stack sizes resource usage will rapidly explode.

Do you have data to back that up?  I don't doubt you're right, but
you've put a lot of effort into optimizing python threads, so I assume
you would have carefully tested and benchmarked these issues.
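For what it's worth, the stack-size management Michael alludes to is available for standard threads too: CPython's `threading.stack_size()` caps the C stack allocated to each subsequently created thread (defaults are often megabytes of address space per thread). A minimal sketch, with `fetch` as a hypothetical stand-in for a per-PV blocking wait:

```python
# Cap the per-thread C stack before spawning a batch of worker threads.
# Values below the platform minimum (~32 kB) are rejected; 256 kB is a
# conservative reduction from the multi-megabyte default.

import threading

threading.stack_size(256 * 1024)  # applies to threads created after this call

results = []
lock = threading.Lock()

def fetch(i):
    # hypothetical stand-in for a per-PV blocking get
    with lock:
        results.append(i)

threads = [threading.Thread(target=fetch, args=(i,)) for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # 100
```

This doesn't settle the benchmark question, of course; it only shows that the "resource explosion" is tunable even with ordinary threads.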

>  Using continuations is a lot more awkward in this style.
>  I've not yet worked through a full working implementation of caget_array with continuations, but it is definitely less clear.

Perhaps.  But it would be based on standard python.

>        def caget_one(pv, timeout=5):
>            channel = _channel_cache[pv]
>            channel.Wait(timeout)
>            done = cothread.Event()
>            cadef.ca_array_get_callback(
>                dbrcode, count, channel, _caget_event_handler, done)
>            return done.Wait(timeout)
>
> Now this is ludicrously simplified, I've removed most of the optional arguments, the observant reader will be wondering where dbrcode and count came from, and I guess OMAR (I reference ONAG) will be worrying a little about timeout and the lifetime of done.
>
> My point is that there are two blocking (or suspension) points in this code, the two .Wait() calls, where the inline code cannot proceed until something has happened elsewhere.  In the first case, we're waiting for channel connection to complete (if it has already connected, this will of course complete immediately without suspension), and in the second we're waiting for ca_array_get_callback to call back.
>
> Of course with ordinary threads this is easy, though of course we have to pay two system calls per Wait.  With callbacks quite a rewrite is needed to achieve the same effect ... particularly if we think carefully about how we're going to handle timeouts and exceptions.

Well, I'd say that you don't need threads at all.  I guess I don't
share your objection to callbacks.  Perhaps that is the basic
difference, then?   But you *are* using callbacks...  Maybe I don't
understand after all.

I'm perfectly willing to use state information and callbacks, partly
because I really do want a stateful PV object, and partly because I'd
prefer to follow the documented CA methods as closely as possible,
which seem to favor callbacks.   As a side benefit it allows CA to
be used without threading at all.  Importantly, there is a lot less
code, and no C code, so maintenance and portability are much easier,
and Windows support is trivial.
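To make the stateful-PV pattern above concrete, here is a toy model of the shape I mean: a connection callback installs a monitor callback, the monitor keeps a cached value, and get() answers from the cache once it is populated. The class and method names (`ToyPV`, `_on_connect`, `_on_monitor`) are invented for illustration; this is not pyepics code, and the callbacks are driven by hand rather than by the CA library.

```python
# A toy stateful PV: callbacks populate a cache, get() reads the cache.

class ToyPV:
    def __init__(self, name):
        self.name = name
        self.connected = False
        self._value = None

    # -- callbacks; in real life the CA library invokes these --
    def _on_connect(self):
        self.connected = True
        # in the real pattern, this is where the monitor subscription is set up

    def _on_monitor(self, value):
        self._value = value          # cache the latest monitored value

    def get(self, fallback_fetch=None):
        if self._value is not None:
            return self._value       # fast path: serve from the monitor cache
        if fallback_fetch is not None:
            return fallback_fetch(self.name)  # monitor hasn't fired yet
        return None

# simulate the library delivering events in the background
pv = ToyPV("XF:demo")     # hypothetical PV name
pv._on_connect()
pv._on_monitor(3.14)
print(pv.get())           # answered from the cache, no network round trip
```

Note that nothing here needs a Python thread: the callbacks can run in CA's own preemptive-callback threads.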

I admit that I had not realized cothread.catools.caget() could take a
list of PVs.  So I tried it out.  It gives about the same performance
of the more complex code I pointed to in the pyepics docs (~0.1 second
for 200 PVs).  To recap my results for 200 PVs:

0. "Slow way": ~5.5 seconds.
   Sequential creation, connection, and fetching, with either pyepics
   or cothread.catools:
       for pv in pvname_list:
           caget(pv)

1. "Fastest way":  ~0.10 seconds.
   a.  cothread:  use caget(list_of_pvs)
   b.  pyepics: create all channels, connect all channels, issue
get-with-callback, wait for callback to complete, all within the main
thread.

   cothread does make this much easier to achieve than pyepics.

2. "Medium way": ~0.30 seconds.
   pyepics -- create all PVs, then get all values.  This would be the
"normal" pyepics usage.

For the "medium way" using PVs, creating each PV entails setting a
connection callback, which runs in the background.  When the
connection callback runs, a monitor callback is set, and when the
monitor callback runs, the value is updated.  PV.get() then just
returns the latest value from the monitor (or does a real ca_get() if
the callback hasn't run yet).  OK, a lot of callbacks run in the
background.  But for an extra millisecond per PV, you get a stateful
object.  And, just to be clear, it *is* a millisecond per PV, not a
factor of 3.  If you get all 200 PVs 10 times, say with:
   for i in range(10):
       vals = cothread.catools.caget(list_of_pvs)
or
   pvs = [epics.PV(pvname) for pvname in list_of_pvs]
   for i in range(10):
       vals = [pv.get() for pv in pvs]

then the pyepics version is actually faster (~0.45 seconds for
cothread, ~0.35 seconds for pyepics).  Maybe that's not a fair test,
as you would probably set up a monitor as well....  but it does show a
few things:
   a. you must be holding some state information, otherwise the
looped cothread.catools.caget() would be 10x slower instead of 4x slower.
   b. setting up PVs to use monitor callbacks is probably a good idea
for long-lived PVs.
   c. it's easy to get CA performance that is 50x slower than the
absolute fastest.  This has nothing to do with the threading model
used.  Much CA work is going to be network-i/o bound.

One could easily write a caget_many() using pyepics that gives the
same performance as the cothread version, though it wouldn't be based
on a single caget (Though, FWIW, pyepics uses a different sort of
composition: caget() builds on the stateful PV object).  Perhaps I
should add this.
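For the shape such a helper could take, here is a sketch of a hypothetical caget_many() built on a thread pool, with `fetch_one` as an invented stand-in for a real per-PV get. This is not how pyepics (or cothread) implements anything; a real version would issue all the ca_array_get_callback requests first and then wait on them together, rather than dedicating a thread per PV.

```python
# Sketch of a batched caget_many(): issue all gets concurrently and
# gather the results back in input order.

from concurrent.futures import ThreadPoolExecutor

def fetch_one(pvname):
    # hypothetical placeholder for a real channel-access get;
    # here it just returns the name's length so the sketch is runnable
    return len(pvname)

def caget_many(pvnames, fetch=fetch_one, max_workers=20):
    # pool.map preserves the order of pvnames in the returned results
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch, pvnames))

print(caget_many(["S:SR:current", "S:SR:lifetime"]))  # [12, 13]
```

The interesting design question is only the interface: whether the batched call should sit beside caget() or be folded into it, as cothread.catools does.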

So, if I understand correctly, a principal motivation is that you
prefer many threads over relying on pre-emptive callbacks.   With that
preference, thread "weight" becomes a serious issue, so much so that
you cannot rely on standard python threads.  Is that a fair
assessment?

Cheers,

--Matt Newville


