Out of general curiosity, why is the default value for
EPICS_CA_MAX_ARRAY_BYTES set to 16384?
If I understand the purpose of this correctly, this parameter is meant
to prevent large data transfers over the CA protocol. For the
discussion here, it does not limit the size of array in a real or soft
IOC -- the array data already exists, and people are unable to
transfer it unless they explicitly increase this value when they want
to transfer "large" array data. Of course, once they set this high
enough, everything works, suggesting there is not a bandwidth issue.
That suggests the only effect of EPICS_CA_MAX_ARRAY_BYTES is to
prevent data that already exists from being transferred by CA. Is
that an unfair characterization?
So, why is the default value not unlimited (or, say, 2**32)? Then
EPICS_CA_MAX_ARRAY_BYTES would have the role of letting one
explicitly set a maximum array size for data transfer, instead of
(as is currently the case) needing to be raised just so that
existing, desired data can be transferred from IOCs that have that
data, over networks that can handle the load.
Thanks,
--Matt Newville <newville at cars.uchicago.edu> 630-252-0431
On Tue, May 25, 2010 at 6:36 PM, Till Straumann
<[email protected]> wrote:
On 05/25/2010 05:31 PM, Mark Rivers wrote:
No, there is no limit, just set EPICS_CA_MAX_ARRAY_BYTES. We are sending
16MB images over CA with no problem.
And 16K of doubles is not 1K values, it is 2K. A double takes 8 bytes on
every architecture I know of.
IIRC it can be a little less because EPICS_CA_MAX_ARRAY_BYTES includes
the extra info (status, timestamp, etc.) -- details depend on the DBR type.
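A rough sizing sketch (the exact DBR header sizes vary by DBR type and
padding, so the 64-byte margin below is just an assumed allowance, not
a value from the CA protocol spec):

```shell
# Estimate a safe EPICS_CA_MAX_ARRAY_BYTES for a 2048-element
# double waveform fetched as DBR_TIME_DOUBLE.
NELM=2048          # native element count of the PV
ELEM_BYTES=8       # sizeof(double)
MARGIN=64          # generous allowance for DBR_TIME_* metadata
echo $(( NELM * ELEM_BYTES + MARGIN ))   # -> 16448
```

Note that the value bytes alone (2048 * 8 = 16384) already equal the
default, so any metadata at all pushes the request over the limit.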
In labCA PVs are normally retrieved as DBR_TIME_xxx.
A limitation of the underlying ezca library is that when you fetch
units or limits (stuff that isn't provided with DBR_TIME_xxx) then
ezca uses a DBR_CTRL_xxx with the native element count of the PV
even though the array elements themselves are discarded.
This is unfortunate when you have a slow connection.
E.g., if you call
lcaGetUnits(pv)
and your PV has a large native element count then the entire
array is transferred and discarded (and the call will fail if
your EPICS_CA_MAX_ARRAY_BYTES is too small).
Luckily, if you ever run into that problem you can work around it
in most cases by reading the '.EGU' field:
lcaGet(pv + '.EGU'); // this is scilab notation
HTH
T.
Mark
________________________________
From: [email protected] on behalf of
[email protected]
Sent: Tue 5/25/2010 4:15 PM
To: Till Straumann
Cc: EPICS Tech-Talk
Subject: Re: labCA and EPICS_CA_MAX_ARRAY_BYTES
Is there still a limit on the length of arrays?
I understand that the client and IOC need to be configured with
EPICS_CA_MAX_ARRAY_BYTES.
Note that the number of bytes is not the number of values.
16K of double variables is only 1K values....
is this still a limitation in 3.14.11?
--
E
On 10:27 Tue 25 May , Till Straumann wrote:
On 05/25/2010 09:55 AM, John Dobbins wrote:
Is it possible to use labCA to retrieve arrays larger than 16K?
Yes - I routinely use that.
You have to make sure EPICS_CA_MAX_ARRAY_BYTES is set
in the environment *before* you start matlab or scilab (RTM).
On the IOC, too, you need EPICS_CA_MAX_ARRAY_BYTES but
since you tested with caget that's obviously set up correctly.
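For example, a minimal sketch of setting it up before launching the
client (the value 20000000 is just an assumed example, chosen to be
large enough for ~16 MB images; pick what your data needs):

```shell
# Must be exported in the environment *before* matlab/scilab starts,
# because labCA's CA client library reads it at initialization.
export EPICS_CA_MAX_ARRAY_BYTES=20000000
matlab    # or: scilab
```

The IOC-side shell (or st.cmd environment) needs the same variable set
before iocInit, since both ends of the connection enforce the limit.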