Dear Evgeniy and Andrew,
Getting such high scan rates reliably requires processing with "hard real-time" repeatability.
At LIGO we have done this by separating the processing into two pieces that communicate through shared memory:
1) code that runs on one core which the Linux OS has been told is "idle". It waits for an interrupt (from a 64 kHz clock, say), does its processing on the ADC/DAC/binary-I/O hardware, updates shared memory, then waits for the next interrupt
2) a normal EPICS IOC running in user space on another core, which reads the shared-memory data to update records at normal rates (16 Hz, say)
- This also requires a separate data acquisition system that can record data at that higher rate.
As Andrew says, most OSs don't have precise enough scheduling for this; hence our design choice.
Keith Thorne
On Jan 7, 2014, at 4:19 PM, Andrew Johnson <[email protected]> wrote:
> Hi Evgeniy,
>
> On 12/23/2013 05:46 PM, Evgeniy wrote:
>> The question is: if Mark got processing time for bi record 150
>> microseconds 20 years ago should I have 1.5 microseconds for the same
>> record on 100 times faster CPU ?
>
> I have just timed how long it takes to process a simple database; my
> recent-model MacBook Pro took 3.005 seconds to process the calc and ai
> records from the example template's dbExample1.db file a million times,
> thus the ai+calc pair processed once every 3 microseconds.
>
> To do that I wrote a special record type (attached) that repeatedly
> processes its target link. It is quite flexible, you can set it to do a
> specific number of repetitions (REPM="Count", the count can be read
> through the INP link or just put it into the VAL field which triggers
> processing), or have it read the INP input link before or after each
> operation to determine whether to continue or not (REPM="While" or "Until").
>
> There is also a limit field STOP which sets the maximum number of
> repetitions for protection. The value of STOP defaults to a million, but
> you can override it yourself if you wish, up to 2^32-1. Note that
> running this for several seconds can cause CA clients to disconnect, so
> be careful with it!
>
>> The same arithmetic question for scan. If its minimum period was 0.01
>> seconds 20 years ago should I expect minimum 0.0001 seconds nowadays?
>
> Periodic scans are quite complicated, since they rely on the underlying
> operating system to implement a delay timer and reschedule the scan
> thread after the specific delay has elapsed. Most OSs don't provide high
> accuracy delay scheduling, so I don't expect we will ever be able to run
> periodic scan threads at 10KHz.
>
> - Andrew
> --
> Advertising may be described as the science of arresting the human
> intelligence long enough to get money from it. -- Stephen Leacock
> <repeatRecord.c><repeatRecord.dbd>
------
Keith Thorne <[email protected]>
CDS Software Engineer
LIGO Livingston Laboratory
Livingston, LA 70754
Phone: (225)686-3168 Fax: (225)686-7189