Subject: Re: V4 design issue: Should primitive data types have well defined precisions?
From: Marty Kraimer <[email protected]>
To: EPICS Core Talk <[email protected]>
Date: Thu, 23 Jun 2005 13:54:21 -0500
Ralph Lange wrote:
> Dalesio, Leo `Bob` wrote:
>> From a functional point of view - the DA approach gives you total
>> flexibility.
>> What does it do to the complexity of the implementation and the
>> performance? Does it have an impact on the server? Or just the client
>> side?
> I'm amused to see that we start discussing basic requirements and
> properties of an introspective data interface again. I thought this
> discussion was held and ended five years ago. Well....
Sounds like we should have had more design reviews about dataAccess over
the last five years.
As far as I know, before the meeting at SLAC (April 2005), no decision
was made that dataAccess would be an integral part of EPICS. I think we
did agree that:
1) performance tests would be done.
2) then we would decide if dataAccess would be used in the portable CA
server
3) then we would decide if iocCore should use the portable server
instead of rsrv.
If either 2) or 3) was decided, I am not aware of it.
At the April meeting at SLAC it did seem to be decided that we would use
dataAccess as an integral part of epicsV4. However, the request to show
a hello world example really made me start wondering whether this was a
good decision.
I think the following is a correct description of the main features of
dataAccess and the V4 CA client interface.
VERY BRIEF DESCRIPTION
dataAccess is a way to transfer data between two data sources.
Each source implements a propertyCatalog for accessing its data.
propertyIds identify data that the two sources have in common.
Only data with common propertyIds is transferred.
END VERY BRIEF DESCRIPTION
BRIEF DESCRIPTION OF dataAccess
Any kind of data can have a propertyId associated with it, i.e.
primitive data, a string, an array, or a set of other propertyIds.
A data repository implements a propertyCatalog via which its data can
be read or written. A propertyCatalog provides access to data for a set
of propertyIds.
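The model described above can be sketched in a few lines of C++. This is a toy illustration, not the real dataAccess API: the names PropertyCatalog and transfer, and the map-of-doubles representation, are assumptions made for the example.

```cpp
#include <cassert>
#include <map>
#include <string>

// Toy sketch of the dataAccess idea (not the real C++ API):
// a propertyId names a piece of data; a propertyCatalog maps ids to values.
using PropertyId = std::string;

struct PropertyCatalog {
    std::map<PropertyId, double> values;  // toy: every property is a double

    bool has(const PropertyId& id) const { return values.count(id) != 0; }
    double get(const PropertyId& id) const { return values.at(id); }
    void set(const PropertyId& id, double v) { values[id] = v; }
};

// Transfer copies only the properties the two catalogs have in common
// and reports how many were copied.
inline int transfer(const PropertyCatalog& from, PropertyCatalog& to) {
    int copied = 0;
    for (const auto& kv : from.values) {
        if (to.has(kv.first)) {
            to.set(kv.first, kv.second);
            ++copied;
        }
    }
    return copied;
}
```

A writer and a data repository that agree only on "value" would transfer only that property; a propertyId known to just one side is silently ignored, which is exactly the behavior described above.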
Code that wants to write data to the data repository implements a
propertyCatalog for the data it provides. The writer then calls
something, e.g. CA client code, that can transfer data to the data
repository. The writer's propertyCatalog is used to get data from the
writer. The data repository's propertyCatalog is used to modify the data
in the repository.
Code that wants to read data from the data repository implements a
propertyCatalog for a place to put data received from the data
repository and then calls code to get the data. The data repository's
propertyCatalog is used to get data from the repository and the reader's
propertyCatalog is used to give the data to the reader.
If the reader/writer and the data repository are not in the same address
space, network communications are used to transmit the data. Thus
intermediate data repositories, which may just be network buffers, are
involved. This is transparent to the reader/writer and the data repository.
The propertyCatalog provided by the reader/writer and the
propertyCatalog supplied by the data repository do not have to match.
Only data with identical propertyIds is transferred between the
propertyCatalogs.
For primitive data types (int, short, float, etc.) the data types used
by the two propertyCatalogs do not have to be the same. dataAccess
provides conversions between the primitive data types.
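As a concrete illustration of such a conversion, suppose the writer holds a value as a float while the reader's propertyCatalog declares an int32. The function name and the truncating behavior below are assumptions for the sketch; dataAccess's actual conversion rules are implementation-defined.

```cpp
#include <cassert>
#include <cstdint>

// Toy version of the conversion dataAccess would perform when the two
// propertyCatalogs declare different primitive types for the same
// propertyId: the writer's float becomes the reader's int32_t.
std::int32_t toInt32(float writerValue) {
    // This sketch truncates toward zero, as static_cast does;
    // a real implementation might round or range-check instead.
    return static_cast<std::int32_t>(writerValue);
}
```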
dataAccess does not define the primitive types. This is left to the
implementation. For the existing C++ implementation the primitive types
are: char, signed char, unsigned char, short, unsigned short, long,
unsigned long, float, and double. The precision of each of these types
is not specified by dataAccess.
In order to transmit data over the network, a set of primitive data
types must be defined precisely. For example the number of bits in each
supported integer type must be defined. From the viewpoint of dataAccess
only the network layer needs to know this representation. The code that
interfaces with the network layer does not need to know this detail.
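The kind of precise definition the network layer needs can be sketched as follows: whatever in-memory C++ type a value had, it goes on the wire as exactly four bytes, big-endian. The function names and the choice of big-endian order are assumptions for the example, not a statement about the actual CA wire protocol.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Encode a value as exactly 32 bits, big-endian, regardless of the
// width of the in-memory type it came from.
std::vector<unsigned char> encodeInt32(std::int32_t v) {
    std::uint32_t u = static_cast<std::uint32_t>(v);
    return { static_cast<unsigned char>((u >> 24) & 0xFF),
             static_cast<unsigned char>((u >> 16) & 0xFF),
             static_cast<unsigned char>((u >> 8) & 0xFF),
             static_cast<unsigned char>(u & 0xFF) };
}

// Decode the four wire bytes back into a 32-bit integer.
std::int32_t decodeInt32(const std::vector<unsigned char>& b) {
    std::uint32_t u = (std::uint32_t(b[0]) << 24) | (std::uint32_t(b[1]) << 16)
                    | (std::uint32_t(b[2]) << 8)  |  std::uint32_t(b[3]);
    return static_cast<std::int32_t>(u);
}
```

Only this encode/decode pair needs to know that the wire integer is 32 bits; the code on either side of it works in whatever primitive types its propertyCatalog uses.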
END BRIEF DESCRIPTION.
The rest of this message is comments.
In theory a client could implement any propertyCatalog it wants and a
data source could implement any propertyCatalog it wants. dataAccess can
be used to pass data between the two data stores. Only data with
matching propertyIds is transferred.
It is my belief that users will not want to create propertyCatalogs
for everything they want to transfer.
What I think will happen is that "convenience" layers will be built on
top of dataAccess.
Unless standard convenience layers are created many non-compatible
layers will be created.
dataAccess does not define basic primitive data types such as int16,
int32, int64. This means that unless something besides dataAccess
defines such types, two data sources have no way to guarantee that their
primitive data types are compatible. In fact, for the exact same
propertyId one source may store the data as an int32 and the other side
as a float64. With dataAccess alone they have no way of knowing, except
by some other means such as conventions about propertyIds.
Since dataAccess does not define primitive data types, application code
has no way to guarantee precision for data without some conventions on
top of dataAccess. Thus if the application uses the type long, it does
not know whether it is 32 bits or 64 bits. For network
applications it certainly seems desirable to have a way to guarantee
precisions.
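The point about long is a plain consequence of the C++ language rules: the standard guarantees only that long is at least 32 bits, so its exact width is platform-dependent. A minimal sketch of the check an application would have to make (the function names here are illustrative):

```cpp
#include <cassert>
#include <climits>

// C and C++ guarantee only minimum widths: long is at least 32 bits,
// but on many 64-bit Unix platforms it is 64.
bool longIsAtLeast32Bits() {
    return sizeof(long) * CHAR_BIT >= 32;  // always true per the standard
}

bool longIsExactly32Bits() {
    return sizeof(long) * CHAR_BIT == 32;  // platform-dependent
}
```

An application that needs a guaranteed precision cannot get it from the type name alone; it needs either a runtime check like the above or a convention, on top of dataAccess, that fixes the widths.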
Let me give another way of looking at dataAccess.
Data is transferred from A to B via the following:
A has the data in some structured form it understands. It creates a
propertyCatalog for accessing the data.
B wants the data in some structured form it understands. It creates a
propertyCatalog that can access the data.
Some code uses the propertyCatalog supplied by A, possibly passes the
data through other intermediate data repositories such as network
buffers and gateways, and finally some code uses the propertyCatalog
supplied by B to give the data to B.
At each step in this transfer data conversions may be performed. For
example the data might start as a double, be converted to an integer,
and then back to a double. dataAccess itself does not provide any way to
know.
Thus we could look at the transfer as A sending well structured data
into a cloud and B receiving well structured data from the cloud.
Neither side knows what data transformations were made inside the cloud.
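The double-to-integer-and-back example above can be made concrete. The function below stands in for the whole cloud; its name and the choice of a 32-bit integer for the intermediate hop are assumptions for the sketch.

```cpp
#include <cassert>
#include <cstdint>

// The "cloud" example: A sends a double, an intermediate hop stores it
// as a 32-bit integer, and B reads a double back. Neither A nor B can
// tell from dataAccess alone that the fraction was lost in the middle.
double throughIntegerHop(double fromA) {
    std::int32_t intermediate = static_cast<std::int32_t>(fromA);  // hop's type
    return static_cast<double>(intermediate);                      // B's type
}
```

B receives a perfectly well-formed double either way; nothing in the value itself reveals whether a lossy conversion happened en route.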
- Replies:
- Re: java and unsigned Kay-Uwe Kasemir
- RE: V4 design issue: Should primitive data types have well defined precisions? Jeff Hill
- Re: V4 design issue: Should primitive data types have well defined precisions? Ralph Lange
- References:
- RE: V4 design issue: Should primitive data types have well defined precisions? Dalesio, Leo `Bob`
- Re: V4 design issue: Should primitive data types have well defined precisions? Ralph Lange