
Subject: Re: Invitation to test cothread.catools release candidate
From: Matt Newville <[email protected]>
To: [email protected]
Cc: [email protected]
Date: Mon, 12 Mar 2012 11:17:35 -0500
Hi Michael,

On Mon, Mar 12, 2012 at 8:32 AM,  <[email protected]> wrote:
> From: [email protected] [mailto:[email protected]] On
>> And, sorry for the questions, but I've always been a little confused
>> by cothread.  Perhaps you could clarify some points for me?
>>
>> Is there an advantage to using cothread over standard python threads?
>> Is the idea that you have concurrent processes using a "coroutines
>> with yield" approach?  I'm afraid that I am missing an important
>> use-case, especially wrt CA when using preemptive callbacks.  Is
>> the principal issue that cothread tries to solve really one of
>> concurrent CA threads, or is it concurrency between CA and something
>> else?  I'm sure this is just my lack of understanding, but using
>> CA with preemptive callbacks works fine for me with and without
>> standard python threads, and I don't see an obvious need for more
>> than that.
>>
>> Could you give a simple example of where the advantages of the
>> cothread approach really shine, perhaps something that cannot be done
>> well without it?
>
> This is a very interesting question.
>
> There is of course quite a lot of history in the development of cothread, but I think it provides two main advantages:

Thanks for the reply.  You might have to bear with some more questions...

> 1. Controlled concurrency.
>
> This is a really big win, and is the main reason for the existence and success of this library. As with
> all good things there are trade-offs and compensating problems, but the main advantage of coroutine
> based concurrency is that you never need to worry about locking.
>
> I think it is well known that multithreaded programming is hazardous and fraught with difficult to reproduce
> race condition driven problems, and the programming errors frequently boil down to incorrect use of locks.
> Using cothreads allows quite naive programmers to develop interactive and concurrent programmes without the
> hazards of worrying about concurrent access to shared data. In particular camonitor updates will not arrive at
> inconvenient times.
>
> With Python threads it seems to me that the biggest down side of cothread, that there is only one thread of
> execution, has much less impact, as Python only ever executes one thread at a time in the interpreter anyway!

To me, this would seem to have very little to do with Channel Access,
and more to do with python's threading implementation.  Is that a fair
characterization?  If so, should it be a separate python package,
useful outside of the scope of Channel Access?
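
Just to check that I understand the model: is the appeal of point 1
roughly the sketch below?  (This is my reading of the cothread docs, so
the exact names may be off, and the PV names are placeholders.)  The
monitor callback and the main loop are both cothreads, so one can never
preempt the other and no locks are needed:

  import cothread
  from cothread.catools import camonitor

  readings = {}        # shared data, deliberately unlocked

  def on_update(value, index):
      # runs as a cothread: it can only be entered when the main loop
      # below yields (in Sleep), never in the middle of the loop body
      readings[index] = value

  camonitor(['XX:PV1', 'XX:PV2'], on_update)

  while True:
      cothread.Sleep(1.0)          # the only point where we yield
      # readings cannot change between here and the end of the loop
      # body, because nothing else runs until the next Sleep()
      print sorted(readings.items())

Is that the intended usage pattern?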

> 2. Very light weight "threads"
>
> The cothread library consists of a simple scheduling layer with associated event objects overlying a very basic
> coroutine switching engine.  Thousands of cothreads can be economically created, dispatched and destroyed in very
> short order, enabling a particularly straightforward style of coding.
>
> Coding with callbacks requires a continuation style of programming mixed with threading based events when you need
> to resynchronise with the original caller.  This is definitely harder to write, particularly when exceptions and
> error handling need to be taken into account.

I'm not sure I'm following you here.  Concrete examples would be nice.
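
Just so we mean the same thing by "resynchronise with the original
caller": the pattern I picture is something like the sketch below (a
generic illustration with standard threading, not any particular CA
binding; issue_request is a made-up stand-in for whatever starts the
asynchronous operation):

  import threading

  def fetch(issue_request):
      # issue_request(callback) is assumed to start an asynchronous
      # operation and arrange for callback(value) to be called later,
      # from some other thread
      done = threading.Event()
      result = {}

      def callback(value):
          result['value'] = value
          done.set()               # wake the original caller back up

      issue_request(callback)
      done.wait(5.0)
      if not done.is_set():
          raise RuntimeError('timed out waiting for callback')
      return result['value']

Is that the shape of it, with the error handling being the part that
gets messy?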

> I've done a couple of experiments with rewriting catools without cothreads, and the two options are: (a) replacing
> cothreads with threads and leaving the code largely unchanged;  (b) rewriting to use callbacks up until the point
> of return to the user, at which point a threaded wait is clearly required to resynchronise.  I've done rewrite (a)
> and a preliminary rewrite for (b) for caget only with almost no error checking.  The results are interesting:
>
> Option (a) is straightforward and requires little change to the existing catools.py code.  When fetching long lists
> of PVs (eg caget(pvs) for len(pvs) = 336 is a typical application) ordinary threads creak slightly: first it is
> necessary to configure quite a small default stack size, otherwise memory is exhausted and threads cannot be created
> (and everything crashes), and secondly cothread is about 50% faster.  Of course, this is unfair, so on to option (b).
>
> In my opinion option (b) is certainly messier to write and it's quite a bit harder to do the error checking right ...
> however, caget on an array of 336 PVs is a trifle faster than with cothread, which is interesting.

I think I must still not be understanding you.  Certainly caget() on a
list of a few hundred PVs is not challenging, and should be dominated
by network i/o not anything to do with Python.  Are you comparing
sequential cagets:

  for pvname in list_of_pvnames:
      print caget(pvname)

with creating 1 thread per PV?  If so, I can certainly see why
lightweight threads are an advantage.
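
For what it's worth, the thread-per-PV version I have in mind would
look something like the sketch below (using pyepics just to be
concrete; if I remember right, its CAThread is needed so that each
worker shares the CA context, but treat that detail as unverified):

  import epics
  from epics.ca import CAThread

  def fetch_all(pvnames):
      results = {}

      def worker(name):
          # each worker fetches one PV; CAThread (rather than a plain
          # threading.Thread) attaches the shared CA context
          results[name] = epics.caget(name)

      threads = [CAThread(target=worker, args=(name,))
                 for name in pvnames]
      for t in threads:
          t.start()
      for t in threads:
          t.join()
      return results

With a few hundred PVs I can believe that stack size and startup cost
start to matter here, which is presumably where the lightweight
cothreads win.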

I think part of my confusion is that I can easily, and quickly, fetch
hundreds of PVs without thinking about python threads at all, but
using just straightforward wrappings of the CA library.  My
inclination is to believe that the easier approach is better until
proven worse.  You've definitely put a lot of effort into cothread, so
maybe there is something you're doing that is better.

Doing a simple comparison of cothread.catools.caget and pyepics.caget
(which is definitely NOT optimized for speed when fetching many PV
values; see http://pyepics.github.com/pyepics/advanced.html#strategies-for-connecting-to-a-large-number-of-pvs
for details) shows that, with 220 PVs, all really connected and on the
same subnet, the speeds for

  for pvname in list_of_pvnames:
      print cothread.catools.caget(pvname)

  for pvname in list_of_pvnames:
      print epics.caget(pvname)

are essentially identical (at ~5.5 seconds each), so I suspect that
this naive use of cothread.catools.caget is not really using
cooperative threads.  In comparison,
  pvs = []
  for pvname in list_of_pvnames:
      pvs.append(epics.PV(pvname))
  for p in pvs:
      x = p.get()

comes in at about 0.3 seconds.  So it's much, much faster to allow
connection and automated monitoring callbacks to happen in the
background of the CA library than to do sequential "create channel,
connect channel, get".  Another improvement of ~3x (to 0.1 seconds)
can be gained by avoiding the overhead of PVs, suppressing connection
callbacks, and issuing get()'s with a callback, and then waiting for
them all to complete in the background as described in the link above.
Sometimes, that extra performance is worth the extra effort, though
I would say that for a couple hundred PVs, the improvement probably
isn't needed.  GUI screens with lots of PVs show up just fine for me,
for example.
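
For reference, the "extra effort" version from that link is roughly the
following (condensed from memory of the pyepics docs, so take the
keyword names with a grain of salt): create all the channels without
waiting, connect them, issue all the gets, and only then collect the
values:

  from epics import ca

  def caget_many(pvnames):
      # create every channel first, without waiting on any of them
      chids = [ca.create_channel(name, connect=False, auto_cb=False)
               for name in pvnames]
      for chid in chids:
          ca.connect_channel(chid)
      # issue all the get requests, let them complete in the
      # background, then collect the results
      for chid in chids:
          ca.get(chid, wait=False)
      ca.poll()
      return [ca.get_complete(chid) for chid in chids]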

But none of that uses Python threads at all, just documented use of
the standard Channel Access library.  So, I'm still left wondering
when cothread offers a real advantage for using Channel Access.

Thanks for any insight!
Cheers,

--Matt Newville


