On Friday 17 June 2005 16:18, Ralph Lange wrote:
> Marty Kraimer wrote:
> > The only place we see that V3 EPICS records need unsigned is for
> > bit masks.
> > However there is a big problem with the mask fields in the
> > mbbXXX records.
> > The mask field is 32 bits. How do we handle a 64 bit I/O module?
> > OK, with V4 we can make the mask fields be 64 bits.
> > But how do we handle a 128 bit digital I/O module?
> > Also a 16 bit digital I/O module has many unused bits.
> >
> > A way to handle this is to make the mask fields an array of octets.
> > The byte and bit order must still be decided but at least any
> > multiple of 8 bits can be handled.
>
> I see your line of argument, and I do like the idea of handling
> all bitfield data as arrays of octets. I guess with that trick we can
> ship anything around that doesn't need to be used in calculations.
>
> But what about real unsigned numbers? Like results of an ADC
> conversion that maps 0-10V to 0-65535 (not that uncommon)? Do we want
> to use 32 bit integers in all these cases, wasting ~50% of the
> bandwidth on CA?
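Before I get to the bandwidth question: Marty's octet idea is easy to
make concrete. A minimal sketch of addressing single bits in such a
mask (the names and the LSB-first bit order within each octet are my
own assumptions, nothing that has been decided):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // A bit mask of arbitrary width, stored as octets.
    // Assumed convention: octet 0 holds bits 0..7, with bit 0
    // being the least significant bit of octet 0.
    using OctetMask = std::vector<std::uint8_t>;

    bool testBit(const OctetMask &m, std::size_t bit)
    {
        return (m[bit / 8] >> (bit % 8)) & 1u;
    }

    void setBit(OctetMask &m, std::size_t bit)
    {
        m[bit / 8] |= std::uint8_t(1u << (bit % 8));
    }

A 128 bit digital I/O module would then simply use an OctetMask of 16
octets, and a 16 bit module one of 2 octets, with no unused bits beyond
the next octet boundary.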
As to wasting ~50% of the bandwidth: that's what we call a
Milchmädchenrechnung in German (a naive back-of-the-envelope
calculation).
CA overhead is (currently) 16 bytes per channel, IIRC, not counting
lower-level protocol overhead (TCP/IP).
For scalar values, this dominates the 2-byte payload increase you
mention by a factor of 8. Thus the bandwidth increase is closer to ~10%
than to your claimed ~50%. If additional properties like timestamp and
status/severity are requested routinely (as they are, for instance, by
display managers), the increase drops to a mere ~5%.
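To spell out the arithmetic: 2 extra bytes on top of 16 + 2 = 18 bytes
is an increase of 2/18, i.e. ~11%; with (very roughly) 12 more bytes of
timestamp and status/severity metadata it is 2/30, i.e. ~7%. The exact
numbers depend on padding and on the precise header size, hence the
rough figures above.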
For arrays things are different. However, I suspect the vast majority of
large arrays have floating point elements anyway. Why? Because large
arrays are typically the result of some IOC-level calculation, rather
than raw hardware values. This is more a suspicion than a hard fact,
though, and I stand to be corrected.
The (presumably) few applications that really require large arrays of
16 or 8 bit unsigned integers can use octets and perform the conversion
to an appropriate number type on the client side. This may be
inconvenient, but I doubt it would be a show-stopper.
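Again purely as an illustration of what that client-side conversion
could look like (the function name and the little-endian octet order
are my own assumptions):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Reassemble an octet array into 16 bit unsigned samples,
    // assuming little-endian octet order within each sample.
    std::vector<std::uint16_t>
    octetsToU16(const std::vector<std::uint8_t> &octets)
    {
        std::vector<std::uint16_t> out(octets.size() / 2);
        for (std::size_t i = 0; i < out.size(); ++i)
            out[i] = std::uint16_t(octets[2*i]
                                   | octets[2*i + 1] << 8);
        return out;
    }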
> And wonder why writing a high number through a 32 bit
> int to a 16 bit DAC yields unpredictable results, without the client
> getting an out-of-range exception? Transport it as an array of octets
> and reassemble it into an integer (unsigned or not) on both ends?
>
> If Java were the only language that didn't use unsigned, I would be
> less willing to adapt to it. But my impression is that the use of
> unsigned comes from C/C++ and is getting less popular in more recent
> languages. What will be fashionable in 5 years?
A good point. In fact I know of /no/ advanced high-level programming
language that supports machine-level unsigned integers, /except/ for
the sole purpose of interfacing with C/C++ libraries.
And speaking of really high-level languages, I wonder whether we should
consider supporting /unbounded/ integers as a native type. This would
solve the what-comes-after-64-bits question nicely. A good
implementation is readily available: the GNU MP library at
http://www.swox.com/gmp/, which is used internally by many advanced
languages to implement their native bignums. The licence is LGPL, which
hopefully isn't going to be a problem.
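Just to show how little ceremony that involves at the C++ level, a
minimal sketch using GMP's C++ wrapper (link with -lgmpxx -lgmp):

    #include <gmpxx.h>
    #include <iostream>

    int main()
    {
        // An integer no fixed-width machine type can hold:
        mpz_class n = 1;
        n <<= 200;                        // n = 2^200
        std::cout << n - 1 << std::endl;  // prints 2^200 - 1 in full
    }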
Regarding Jeff's argument about the advantages of programming with
numbers that are guaranteed to be non-negative: I would argue that this
is mostly an artifact of C/C++, where arrays are always indexed
starting at zero. Many languages allow upper /and/ lower index bounds
to be arbitrary (signed) integers, or even any other data type,
provided the programmer can specify a one-to-one mapping onto a bounded
interval of integers. Thus, non-negativity seems to be a somewhat
arbitrary guarantee (why not, for instance, strict positivity?).
Furthermore, with regard to the efficiency question (only one range
check for the upper bound instead of two, for upper and lower), in
C/C++ you are always free to apply the zero-cost type cast from signed
to unsigned, thereby mapping negative numbers to large positive ones,
and then range-test only against the upper bound. This fails for
/exactly/ the cases where the original check failed, as long as you
don't rely on the upper half of the possible range, something you
(Jeff) suggested should be avoided anyway.
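For the record, the trick looks like this (the array and function names
are of course just made up for the example):

    int table[100];

    // Two comparisons:
    int get2(int i)
    {
        if (i >= 0 && i < 100)
            return table[i];
        return -1;   // out of range
    }

    // One comparison: a negative i wraps around to a huge unsigned
    // value and then fails the upper-bound test, provided the valid
    // range stays within the lower half of the unsigned range.
    int get1(int i)
    {
        if (static_cast<unsigned>(i) < 100u)
            return table[i];
        return -1;   // out of range
    }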
Ben
- References:
- RE: Fundamental Types document Jeff Hill
- Re: Fundamental Types document Marty Kraimer
- Re: Fundamental Types document Ralph Lange