
Subject: January Notes for non-us individuals.
From: "Leo R. Dalesio" <[email protected]>
To: <[email protected]>
Date: Mon, 31 Jan 2000 15:42:49 -0800
 Sorry for the Microsoft formats. I have copied and pasted the Word and Excel documents into this email
for those of you on UNIX.
 
 
A meeting was held from January 10 through January 12, 2000 to discuss the archive retrieval library, explore requirements for high-level physics applications, and discuss the future of channel access.

 

Attendees:

Jeff Hill            [email protected]

Kay Kasemir          [email protected]

Bob Dalesio          [email protected]

Marty Kraimer        [email protected]

Bob Sass             [email protected]

John Galambos        [email protected]

Matt Bickley         [email protected]

David Bryan          [email protected]

Chris Larrieu        [email protected]

Matthias Clausen     [email protected]

 

 

The goals of the archiving session were to familiarize everyone with what had been done at DESY, JLAB, and LANL; determine what needed to be done to provide complete archiving capabilities; develop a standard archive access library; and determine what would be required from a network protocol to support archive data. To this end, a set of presentations was made covering both the sources of data to be archived and the users of archive data.

 

XARR – Chris Larrieu

            XARR is an archive plotting tool that runs on UNIX. It accesses archive data from the old archive data source (the ASCII ARR format) and has a library to support the binary channel_archiver data source. It fetches data from locally mounted disks. The data source must be set in an environment variable.

            Some requirements to make this more useful to users were discussed. The three major elements are:

1)      A Directory Server that is able to keep hierarchical relationships between channels and groups. This service should be available over the network. It would need to support the ability of channels to live in many groups, just as a single channel can be thought of as belonging to many different sets. For instance, a dipole magnet power supply in sector A belongs to both the bending magnets and the controllers in sector A. We envision that requests for controllers and requests for signals in sector A would both return a list containing this magnet power supply. With such a service, more generalized browsing tools for both archive and current data could be developed. This would relieve the physicist or operator from needing predefined sets to see archive data and predefined screens to view real-time data.

2)      Requests that allow XARR to ask the server to apply various filters would prevent the need for each tool to implement every filter that may be requested. A list of filters was discussed at greater length during the archive data access library discussion.

3)      Requests that allow users to query data based on some relationship or limits would allow users to search for events without looking through all of the data. A request could be made to find all channels that moved by 2% when a different value moved by 5%. This would allow the user to search for relationships that were not previously understood in the process.

 

 

Xtract – Matt Bickley

            Xtract is a tool that runs in the UNIX environment and allows users to set up a data acquisition experiment. The tool allows the user to configure a nested set of control parameters, moving through the control space in defined steps, taking prescribed actions between steps, and collecting some set of channels into a buffer. It then provides some plotting capabilities.
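To make the nested-scan idea concrete, here is a minimal sketch of what such a configuration might look like in C++; the type and field names are hypothetical, not Xtract's actual structures.

#include <string>
#include <vector>

// One scan loop: a control parameter stepped through the control space.
struct ScanDimension {
    std::string channel;        // control parameter to step
    double start, stop, step;   // range and step size
};

// A complete nested-scan experiment, as Xtract-style tools describe it.
struct ScanConfig {
    std::vector<ScanDimension> loops;   // outermost loop first
    std::vector<std::string> actions;   // prescribed actions between steps
    std::vector<std::string> collect;   // channels collected into the buffer
};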

            Some requirements to improve the performance and capabilities of this tool are:

1)      Have channel access deliver time synchronous sets of data during the collection phase.

2)      Have some sort of buffer server that will take the request to collect data and perform the data collection in the front-end computers.

3)      Produce a filter that allows this data to be placed into an archiver like the channel archiver and then use the same tools that are available for archive data viewing.

4)      Provide a way to produce a standard ASCII format that can then be used by other tools like SDDS or Matlab.

 

 

 

Buffer Server – Bob Dalesio

            This was presented as a possible extension to EPICS to support some of the needs of applications for archive and synchronous data. The buffer server would provide data in two ways: a single channel over time or a set of channels at a single point in time. The buffer server could be used as a layer on top of tools like the channel archiver or direct channel access.

            Some requirements on channel access or some other protocol to support such a tool:

1)      Provide a primitive data sample that includes statistical information for a period in addition to a timestamp. This would allow a large time interval to be reduced to a single sample without losing some of the potentially important information about the interval. These would include the duration of the statistics, mean value, median value, high, low, average, standard deviation, and perhaps the number of outliers from specified limits and the skew. Other data could be envisioned in a statistical sample. (A sketch of these samples follows the list.)

2)      Provide support for a time array. The request would specify a time range and an optional statistical reduction period to use to reduce the amount of data returned. The response would be a series of data, time stamp and status triplets.

3)      Provide support for a set of channels at a single point in time. This would require the ability to define a set and to request synchronous sets either from the present or from some point in the past in an archive file. The set would need certain characteristics, like the amount of time that is considered within the same period. Support would also need to exist to put a synchronous set. This would mean setting values in a single IOC or in multiple IOCs as synchronously as possible.
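A minimal sketch of the two primitive samples described in items 1 and 2; the struct names and field types are assumptions, not an agreed wire format.

#include <ctime>

// Item 1: one statistically reduced sample standing in for a whole interval.
// (The list above names both mean and average; both are kept as listed.)
struct StatSample {
    double   duration;   // length of the interval reduced into this sample
    double   mean, median, high, low, average, stdDev, skew;
    unsigned outliers;   // samples outside the specified limits
};

// Item 2: one element of the returned series, a data / time stamp / status
// triplet (a real protocol would carry sub-second time resolution).
struct TimedSample {
    double value;
    time_t stamp;
    short  status;
    short  severity;
};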

 

 

Striptool modified for historical data viewing – Matthias Clausen

            Striptool was modified to access historical data as well as current data. Many modifications were made to give the user a seamless view of archive and current data. At DESY, historical data can be either in an archive file or in the front-end computer in an archive record. Striptool now allows the user to scroll back and forth in time. For plots which start at the present time, the program first accesses the archive record buffer and then the long-term archive. Built-in algorithms include average, tail, sharp, spline and circular fit.
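That fetch order (archive record buffer first, then the long-term archive) might look like the sketch below, assuming hypothetical fetch functions for the two data sources; TimedSample is the triplet sketched earlier, repeated here for completeness.

#include <ctime>
#include <string>
#include <vector>

struct TimedSample { double value; time_t stamp; short status; short severity; };

// Hypothetical data sources: the IOC-resident archive record buffer and the
// long-term archive file.
std::vector<TimedSample> fetchArchiveRecord(const std::string& ch, time_t t0, time_t t1);
std::vector<TimedSample> fetchLongTermArchive(const std::string& ch, time_t t0, time_t t1);

// For plots starting at the present time: first the archive record buffer,
// then the long-term archive for anything older than the buffer holds.
std::vector<TimedSample> fetchHistory(const std::string& ch, time_t from, time_t to)
{
    std::vector<TimedSample> data = fetchArchiveRecord(ch, from, to);
    if (data.empty() || data.front().stamp > from) {
        time_t gapEnd = data.empty() ? to : data.front().stamp;
        std::vector<TimedSample> older = fetchLongTermArchive(ch, from, gapEnd);
        data.insert(data.begin(), older.begin(), older.end());
    }
    return data;
}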

            Requirements to better support this seamless access to data:

1)      Have the network response to a single name include the ability to say that the data for the channel is historical.

2)      Provide some data reduction and algorithmic fitting in the request to reduce the communication time as well as the resource requirements on the client.

3)      Array data needs to include the frequency: if there is a single time stamp with many samples, the time distance between the samples needs to be a standard attribute.

4)      The ability to handle an array of data, time stamp and status triplets would make it possible to support front-end circular buffers to handle the first level of data archiving. This real time data could also be available without long term historical data.

 

Data Archiving at DESY – Matthias Clausen

            Data is archived from many different control systems at DESY and needs to be integrated at the viewer. The data being archived includes scalars, arrays and multidimensional arrays. Scalars archived include analog inputs, digital inputs and multibit binary inputs. Arrays include waveforms, subarrays, and data logger messages (data sets consisting of an array of data and the corresponding attributes). Multidimensional arrays include images and history buffers. It is planned to have the archived data indexed in a relational database, allowing an ODBC query to find what has been archived. Archive data is output in SDDS format to take advantage of the command-line data filtering and plotting capabilities and to keep the structure of the current archive (one channel per month per file), which already holds 4 years of data.

            Requirements to support the archive uses here would include:

1)      Support for storing multidimensional arrays – which implies overcoming the existing array size limits as well as storing the metadata that describes the multiple dimensions.

2)      Support to extract bit information from multibit values implies that the metadata describing the MBBI is also available from the archive interface.

3)      To archive data logging messages, the current string size limitations in the database and channel access would need to be overcome. These messages should be archived into all data stores as they are now. Should there be a way to specify which group of channels they affect – so that someone viewing archive data could see a marker from these messages if they applied to the data being viewed?

4)      Again we see the interest in querying what channels are archived over the network. The use of ODBC makes this possible.

5)      Exporting the archive data in SDDS format to provide the ability to manipulate the data from a script is an example of the need for operators to be able to manipulate the data they retrieve interactively or at least spontaneously – without pre-made programs that limit the flexibility of the manipulation.

 

Projected Requirements for the NLC – Bob Sass

NLC anticipates a large volume of archive data, on the order of petabytes (10^15 bytes) per year. This probably means multiple archive formats, i.e. files plus multiple databases. Some may be separate from the Channel Archiver, but the use of databases (e.g. Oracle) by the Archiver should be possible.

 

Channel Archiver – Kay Kasemir

            The channel archiver is a C++ archiving tool that runs under UNIX, Linux and Windows. It loads an ASCII configuration on startup; the configuration can be defined via a web interface. The interface provides channel and value iterators. These iterators make the data source opaque to the application: to change the data source for this set of tools, only the channel and value iterators need to be replaced. A CGI tool is supplied for retrieval, allowing the user to browse the archived data via a web browser. In addition, there is a Windows application for viewing mounted archive files.
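The iterator pair might look like the sketch below; the actual ChannelArchiver class names and signatures differ, so treat this purely as an illustration of why the data source stays opaque.

#include <string>

// Walk the archived samples of one channel.
class ValueIterator {
public:
    virtual ~ValueIterator() {}
    virtual bool   valid() const = 0;   // more samples available?
    virtual double value() const = 0;   // current sample's value
    virtual void   next() = 0;          // advance to the next sample
};

// Walk the channels in an archive; each yields a ValueIterator.
class ChannelIterator {
public:
    virtual ~ChannelIterator() {}
    virtual bool           valid() const = 0;
    virtual std::string    name()  const = 0;
    virtual ValueIterator* values() = 0;
    virtual void           next() = 0;
};
// Changing the data source means re-implementing only these two interfaces;
// applications built on them never see the underlying storage format.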

            This presentation was given with an eye toward defining a standard interface for accessing archive data. The library for accessing archive data follows.

 

 

Interface Library

            It was agreed that access to archive data from application programs must go through a common interface definition. A straw man will be worked out between Chris Larrieu and Kay and presented to this group. The ability to access archive data from a script will also be supported. The callable aspects are presented in the next five items; one possible rendering is sketched after the list. The key elements still missing involve the storage of the data.

1)      Discovering the channels that are available in the archive. It would be valuable to have some predefined groups or relationships that can be called into an archive viewer. It was decided that this “directory service” should be a separate entity that allowed the application to define arbitrary sets and hierarchies that can be queried from various applications.

2)      Initialize the range (channel_name(s); time 0, or a special character for the current time; time 1 as either the end time or the duration) would be a request for data. The return would be a query descriptor. For each name the descriptor would include: the time of the first point, the time of the last point, the archive method, the number of points, other statistics to give a sense of the quality and density of the data, descriptor, value type, and a list of the attributes that were archived. We could get more than one response if there are multiple archivers or a buffer server.

3)      Transform_data would define a set of sequential transformations to apply to the data before it is returned to the requestor. There can be many such calls, and they would be applied in order. (Can these be changed during operation? Would we need a remove transformation?)

4)      Narrow query would remove some of the attributes from the request list. (Do we really need this?)

5)      Get_data – or use a channel and data iterator and leave the details of the file traversal and network transmission hidden? Should the request for attribute data be a separate call? Does this become the new channel access data object? If it is the new channel access data object, then we have the possibility of having clients that are not bothered by the transition between historic and current data.
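One possible rendering of items 2 through 5 as a C++ interface; every name here is an assumption, to be replaced by the actual straw man from Chris Larrieu and Kay (ValueIterator is the iterator sketched in the Channel Archiver section).

#include <ctime>
#include <string>
#include <vector>

class ValueIterator;   // as sketched under Channel Archiver

// Item 2: the query descriptor returned per channel when a range is initialized.
struct QueryDescriptor {
    std::string channel;
    time_t      firstPoint, lastPoint;    // time range actually covered
    std::string archiveMethod;
    unsigned long pointCount;             // a sense of the data density
    std::vector<std::string> attributes;  // attributes that were archived
};

class ArchiveQuery {
public:
    virtual ~ArchiveQuery() {}
    // Item 2: one descriptor per channel; multiple responses are possible
    // when there are multiple archivers or a buffer server.
    virtual std::vector<QueryDescriptor>
        initRange(const std::vector<std::string>& channels,
                  time_t start, time_t end) = 0;
    // Item 3: queue a transformation; many calls allowed, applied in order.
    virtual void transformData(const std::string& transformName) = 0;
    // Item 4: narrow the query by removing attributes from the request list.
    virtual void narrowQuery(const std::vector<std::string>& keepAttributes) = 0;
    // Item 5: hand back an iterator, hiding file traversal and transport.
    virtual ValueIterator* getData(const std::string& channel) = 0;
};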

 

Directory Service

      The ability to access relationships between channels and the sets to which they belong was considered a general service worth creating. Several laboratories have already provided some sort of directory service. This is not to be confused with a name service: this service provides information about sets and the hierarchical relationships between sets. A common tool could provide the basis for generating sets to use in current displays, archive sets, and data analysis sets. We would push to select or develop a standard interface to a directory service and adopt it as a standard. It was suggested that this tool use a free relational database like FreeDB or MySQL. The use of the Lightweight Directory Access Protocol (LDAP) was also suggested. Chris Larrieu will be working on this (on what time scale?). Calls to support, sketched after the list, include:

 

1)      channel_list(string set_name), which returns the channels that are included in this set.

2)      channel_list(regular expression), which returns the channels whose names match a regular expression.

3)      set_list(string root), which returns all sets that are included in the specified set.

4)      set_list(string child), which returns all sets which are ancestors of this child.
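Rendered as an interface, the four calls might look like the sketch below. Since calls 1 and 2 (and likewise 3 and 4) share a signature, they get distinct names here; everything beyond the list above is an assumption.

#include <string>
#include <vector>

class DirectoryService {
public:
    virtual ~DirectoryService() {}
    // 1) channels included in the named set
    virtual std::vector<std::string> channelList(const std::string& setName) = 0;
    // 2) channels whose names match a regular expression
    virtual std::vector<std::string> channelListRegex(const std::string& pattern) = 0;
    // 3) all sets included in the specified set
    virtual std::vector<std::string> setList(const std::string& root) = 0;
    // 4) all sets that are ancestors of this child
    virtual std::vector<std::string> setAncestors(const std::string& child) = 0;
};
// Because a channel may live in many sets (the dipole supply example above),
// the same channel can legitimately appear in several channelList() results.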

 

Standard Attributes

            It has been clear that higher level tools cannot be shared unless there is some standard set of attributes. In channel access these have taken the form of several attribute sets: time, display, and control attributes. Time attributes are: value, time stamp, alarm status, and alarm severity. Display attributes differ for discrete and analog values. Display attributes for analog values include: display high, display low, hihi alarm limit, high alarm limit, low alarm limit, lolo alarm limit, the number of digits to the right of the decimal point to display (precision), and the engineering units, as well as the fields in the time attribute list. The display attributes for discrete values are the number of states and a list of the state strings. The control list for an analog value includes the display attributes as well as the high and low control limits. The control list for a discrete value is the same as its display attributes.

            This attribute list has been fixed and typically sent to clients only on connection, to reduce network traffic. This meant that clients using these attributes to produce archives or displays would not register changes to the attributes until a client disconnected and connected again. A modification planned for channel access will allow these attributes to be sent on change. In addition, we will also allow the dynamic grouping of attributes. In this new environment, channel access will send only the attributes that have changed. This will reduce network traffic for existing applications, as the alarm status and severity rarely change but are sent to nearly all standard EPICS clients. Clients will need to be modified to respond to a change in attribute values. Jeff and Matthias will work to finalize this list of standard attributes. Jeff or Kay will implement this data object, depending on their available time. The list, rendered as a struct in the sketch following it, includes:

1)  display_high        double
2)  display_low         double
3)  units               string
4)  control_high        double
5)  control_low         double
6)  time                osi_time
7)  alarm_status        ???
8)  alarm_severity      ???
9)  precision           unsigned integer        (number of decimal digits – old)
10) significant_digits  unsigned integer        (total digits to display)
11) decimal_digits      unsigned integer        (number of decimal digits – new)
12) sample_frequency    double
13) descriptor          variable length string
14) trigger_offset      integer
15) array_bounds        ??? – multi-dimensional arrays
16) host                variable length string
17) user                variable length string
18) application         variable length string
19) process_ID          variable length string
20) value               any type
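The list rendered as a plain struct, to make the grouping concrete; the alarm types and the osi_time stand-in are placeholders, since the meeting left those types open (the ??? entries).

#include <string>
#include <vector>

struct osi_time { unsigned long sec, nsec; };  // stand-in for the OSI time type

struct StandardAttributes {
    double        display_high;
    double        display_low;
    std::string   units;
    double        control_high;
    double        control_low;
    osi_time      time;
    short         alarm_status;        // type left open (???)
    short         alarm_severity;      // type left open (???)
    unsigned      precision;           // decimal digits (old)
    unsigned      significant_digits;  // total digits to display
    unsigned      decimal_digits;      // decimal digits (new)
    double        sample_frequency;
    std::string   descriptor;
    int           trigger_offset;
    std::vector<unsigned> array_bounds; // multi-dimensional arrays, left open (???)
    std::string   host;
    std::string   user;
    std::string   application;
    std::string   process_id;
    // 20) value: any type -- carried by the data object itself, not here
};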

 

Standard Data Transformations (not to be confused with STDs)

(Who was going to work on this?)

            As we discussed access to archive data, it was clear that most people making queries would want to perform some data reduction. It was also clear that we wanted to reduce the data at the source, to cut the network traffic required to send this information over the wire in a distributed archive environment. It also became clear that in several places these transformations had been made readily available to the user and had become invaluable tools for operations; the most notable example was SDDS at APS. It was decided that we would supply a standard set of transforms (perhaps stolen from SDDS) available in a callable library that is also accessible from a script. At any point in the transform list, an SDDS output should be available to take advantage of the other SDDS tools. The list includes binning algorithms, which would be used for reducing data across the wire, as well as computational transforms; two of the simplest reductions are sketched after the list. The list included:

1)      moving average

2)      discrete Fourier transform

3)      chirp-Z transform (now infamous)

4)      decimation

5)      singular value decomposition

6)      periodic

7)      interpolation – spline, linear, discrete, polynomial

8)      trig fit

9)      statistical – average, mean, standard deviation, skew, min, max, median, number of outliers

10)  rasterizing

11)  infinite impulse response filters
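As an illustration of the binning category, here are two of the simplest reductions from the list (items 1 and 4); these are generic textbook versions, not the SDDS implementations.

#include <cstddef>
#include <vector>

// 1) Moving average over a sliding window -- smooths data before it crosses
//    the wire. Returns one output per full window.
std::vector<double> movingAverage(const std::vector<double>& in, std::size_t window)
{
    std::vector<double> out;
    if (window == 0 || in.size() < window) return out;
    double sum = 0;
    for (std::size_t i = 0; i < in.size(); ++i) {
        sum += in[i];
        if (i >= window) sum -= in[i - window];       // drop the oldest sample
        if (i + 1 >= window) out.push_back(sum / window);
    }
    return out;
}

// 4) Decimation -- keep every nth sample, the crudest possible reduction.
std::vector<double> decimate(const std::vector<double>& in, std::size_t n)
{
    std::vector<double> out;
    if (n == 0) return out;
    for (std::size_t i = 0; i < in.size(); i += n) out.push_back(in[i]);
    return out;
}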

 

 

 

High Level Application Requirements

            After paying particular attention to the uses of historical data in our controls environment, we studied other high level applications to determine what changes to the channel access protocol would better support these applications.

 

 

Next Linear Collider / SLC Requirements – Bob Sass

            The SLAC controls group has been meeting to discuss the control system required for the next linear collider. One key consideration is that it must provide all of the tools that SLC operations have found invaluable during their many years of running.

 

Operator Scripting (Button Scripting)

            Operators are able to put their console in learn mode and have the commands they press turned into a script that can then be edited. The script can be invoked by name or executed from an operator display button press. At SLC there are about 3,000 of these macros.

* A channel access requirement: the ability to monitor the actions from one client. The DESY put logger (puts logged into a string record) is an example of monitoring puts that could be translated into a macro. Having the user information included in the attribute list would assist this.

* It was agreed that operators find scripting very useful. A method for scripting should be accepted as the standard and included in EPICS training.

 

Beam-based alignment

            This is the alignment of the quads and sextupole magnets to measure the offset with respect to the readout center of the BPM. It is run on a time scale of weeks or months. 100 pulses are read. The operator selects a range of devices to produce the fit. The data acquisition is at 120 Hz for these 100 pulses. A 3x3 or 9x9 matrix is used. The output is the offset for the power supplies.

* Channel access requirements to support up to 9x9 matrices.

* A buffer server could handle the 100-pulse, 120 Hz data collection.

*Channel access monitor to send the next X changes would also support this.

 

Linac Auto Steering with Movers

            Data is collected over many pulses and averaged. Bad data is removed from the calculation. Mover motions must be synchronized within a couple of pulses of latency.

* Channel access requirement for synchronized puts.

* Standardization on transforms could aid in the averaging portion

 

Machine Protection System

            Has to run at the beam rate of 120 Hz. It will also use beam loss monitors at 1 Hz. The latency required to read data is less than 1 pulse.

 

Beam-based feedback

            This is run continuously to steer the beam. Matrices are computed and sent to downstream and upstream loops on every pulse over a dedicated network such as shared memory. Device support will interface to this. Feedback latency is 2 pulses.

 

Correlation Facility

            X-Y plots with functionality similar to the JLAB XTRACT tool.

 

Linac Energy Management

            Adjust quads under operator request. The results will be reviewed and the changes made under operator control. Needs phase and amplitude from 3,000 RF stations.

 

Optics and modeling

            The current averaged BPM data will be used.

 

Required services to meet the requirements:

*120 Hz data acquisition

Is a buffer server useful for averaged data?

What is the minimum latency required by any application that will use the standard network?

*Contention resolution at 120 Hz will require multi-priority channel access/database

*Synchronized device control across IOCs within 1 clock tick.

            Can they be told to set at some time in the future?

            What is the maximum latency before the commands are completed?

*Support for 9x9 matrices

*A way to tell users to hold off in the case of device contention.

*Booting and downloading of over 1,000 IOCs

            Does this use flash memory to boot?

            Incremental loading or online add would help

*120 Hz data scanning could use the buffer server to offer prior data when an event occurred.

*At current specifications, 10 Gbytes per second will be archived. Perhaps a buffer server will ease this requirement.

 

 

SNS – John Galambos

            The SNS is a neutron spallation facility being built at Oak Ridge in a collaboration between LBL, LANL, ANL, BNL and ORNL. One of the key design constraints is to limit the beam loss so that tunnel access is not restricted for extended periods. It has a 60 Hz pulsed neutron source: in each 16.6667 msec period, 15 msec are idle and 1.6667 msec is used to fill the ring. There are 682 usec between t0 and the beginning of the accumulation cycle, 975 usec to accumulate, and, after a 250 nsec gap, 590 nsec of beam. The notes from the diagnostic meeting where these timing requirements were discussed live on a web site at: http://recycle.lbl.snsbiw/. Information on the applications discussed below can be found at http://www.ornl.gov/sns/mtgs/tdasw1/tdasw1.htm.

 

Linac Tuning

            Horizontal and vertical corrections are needed. The first corrector is changed based on the first moment. All downstream power supplies will be controlled by a model-based correction program. This will run once per second and is based on the TRACE3D model. (This will not include multi-particle space charge tracking, which is computationally demanding and would take over 24 hours to actually predict halo.)

 

Steering

            Select a subset of magnets and change the beam position.

 

RF Phase and Amplitude Correction

            Vary the RF phase and amplitude for 10 points each at 60 Hz while calculating dW. This will use the full current at a low duty factor. Each of 12 Klystrons will be run over 1000 different control values. Are these 1000 changes made at the 60 Hz rate? How many channels are collected?

 

SNS Ring Steering

            Transfer matrix needed for steering along with run-time reading of the power supply read-backs.

 

SNS Ring Radiation and Beam Loss Monitors

            100 ionization monitors are read every 1 usec and can shut off the beam. There are also 10 paint-can loss monitors sampling at the nsec rate that need to be available to determine the beam loss within a pulse. We will need to keep track of the source of beam loss over time to determine the possible causes.

 

SNS Ring Orbit Closure

            Requires setting ramping power supplies within several pulses of each other.

 

SNS Ring BPMs

            Will require reading up to 100 BPMs with 1K samples each at 60 Hz. We will want to keep these available in memory for up to a minute(?), to be able to save them when an event occurs(?), or to deliver them to the operator after running which transforms?

 

SNS Ring Beam Current

            Wall current monitors are integrated at 60 Hz and should be kept in a front-end buffer for up to one day.

 

SNS Ring Beam Profiles

These will be done using a flying wire and a harp at the target, read at 60 Hz.

 

SNS Ring Tune Measurement

            An FFT of the BPM turn-by-turn data – once per minute or on the first 10 turns.

 

 

Jefferson Laboratory High Level Applications – David Bryan

 

Model server

            Provides transfer matrices to other codes.

 

Orbit Lock and Energy Lock

            These control strategies are run every 5 seconds during operation and every second during startup. They receive matrices from the model server and then do synchronous puts to change setpoints.

 

Auto Steer

            Commissioning tool for beam steering. It uses the transfer matrices from the model server. It requires that the data it reads from the BPMs come from the same pulse. Ideally, it would be able to do synchronous puts across the network.

 

Momentum Scaling Tool

            Sets all of the quad magnets to known settings for known energies. No synchronization is required.

 

LEM

            Provides RF energy balance along the cavities to leave similar headroom in each cavity. This is required because the energy gain of different cavities can differ by as much as 20%.

 

FFB

            Takes out noise from the power lines. This runs on two of the three end stations. It runs at 1 kHz to keep the beam centered. It talks to orbit and energy lock to prevent interaction between the programs. It uses 12 inputs and 12 outputs.

 

Beam Energy Manager

            Runs at a 1 Hz rate to control the energy of the beam delivered to the halls. The goal is to control within 1 part in 10,000.

 

CM Log

            Merges messages into an archive.

*  If there were support for unlimited string length, these messages could be archived by any archiver.

* There needs to be some way to associate these messages with other channels – is this part of the set server? If an archived message is being displayed – does it just put a paperclip in time wherever it has a message to display?

 

 

Network Instruments – Kay Kasemir

            This includes any intelligent device that receives setpoints and commands and returns values and status. These could be channel access links, PLCs, LabVIEW, etc.

*Require subscription with a guaranteed update rate – notification if no data was received.

*Commands that require several parameters to be written need synchronization.

*There is some need to process commands at a given start time.

*For commands, it would be useful to be able to query the device for its control values – one use would be to query the operator for the parameters in a dialog box.

 

Redundant Lock Mechanism – Matthias Clausen

            Some applications on the workstation require redundant operation. An example is the alarm manager. The archiver could be modified to operate the same way. In this situation, two or more machines are able to reach the same file system. If one machine becomes unable to work, the other takes over.

 

Channel Access Requirements:            Jeff Hill

            The requirements fell into the following topics. We went through the list in order from easiest to most contentious.

            New Data Descriptor

            New Delivery Requirements

            Large Matrix Support

            CA Server Diagnostics

            Data Descriptor Implementation

 

Diagnostics

            A discussion was held on what online diagnostics could be added to channel access to make the state of health of the system more apparent to users. The current diagnostic, casr, gives a great deal of information on the state of health of the clients connected to a given server. However, it would be nice to be able to monitor these values through channel access and to set alarm and warning limits on them. Values that are wanted include:

Notification when the server beacon rate changes

Per Client Diagnostics from the server

Event rate, memory usage, last time an event was sent, last time a put was received, time the client was started, user name of the client, client's execution state

How is the data requested?

Server diagnostics

Total memory usage, overall event rate, last time a put was received, last time a monitor was sent, channel count, search rate, connection rate.

What is the period for computing the rate?

What about multiple servers in the same location?

Client Diagnostics

Number of events per second being received, number of channels connected, number of channels, last time an event was received.

 

 

Data Descriptor Requirements

            Process Variable Attributes –

                        Use well defined application types

                        Have built in standard attributes – listed earlier in the meeting

                        Allow new types to be added

                        Do the clients learn the server's native storage type?

We should move conversions to clients

            Application Defined Data Container

                        Support command completion

Support message passing

                        Subscription update data packaging

                        Need well defined application type names.

            Matrices

                        Number of dimensions

                        Number of elements in each dimension

                        Offset of first element in each dimension

                        What about sparse matrices?

            Enumerated Data

                        Number of states

                        String per state

 

New Delivery Requirements

            PV subscription (sketched after this list)

                        Send update messages on a periodic time (callback if late arriving)

                        Rate limit monitors (on the same offset for each requested interval)

                        Send a set of channels when some event occurs – calc-like expression

                        Enable monitors on some condition

            PV Read/Write

                        Read/Set group of attributes atomically

                        Read/Set client defined set of channels atomically

                        Event synchronize read/write on clock time or calc expression

                        Arm write with verification – then execute
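As a sketch only, the PV subscription items might surface in a client API as an options structure like the one below; none of these names exist in channel access.

#include <string>

struct SubscriptionOptions {
    double      periodSec;     // deliver updates on this period (0 = on change)
    bool        notifyIfLate;  // callback if a periodic update arrives late
    double      maxRateHz;     // rate-limit monitors, aligned to the same
                               // offset within each requested interval
    std::string enableExpr;    // calc-like expression enabling the monitor
    std::string groupEvent;    // expression triggering a set-of-channels send
};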

 

Large Matrix Dilemma

            Approach 1) lock record and copy in array

-         Slow client stops record processing – could lock with a time limit

-         Client disconnecting with only half the array written – record is udf

Approach 2) allocate per client large buffer in server

-         IOC memory is now vulnerable to rogue clients

-         To prevent fragmentation of memory, use fixed size buffers

This was discussed later in the week and reappears later in the notes.

 

Data Descriptor Implementation

            Choosing between dynamic descriptors (like GDD and the CDEV data object), compiled descriptors (like the current dbr_types), and an abstract base class, it was decided to use an abstract base class.

            Benchmarks of some prototype code show that like containers can be unpacked in 0.1 usec on a 500 MHz Pentium; unlike containers take 0.6 usec.
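To show the flavor of the decision, a pared-down abstract base class is sketched below; this is illustrative only, not the base class code being sent to Chris Larrieu.

// Client and server code share this pure-virtual view of the data; each side
// binds it to its own concrete container.
class DataDescriptor {
public:
    virtual ~DataDescriptor() {}
    virtual const char* applicationType() const = 0;     // well-defined app type name
    virtual unsigned    elementCount() const = 0;        // 1 for scalars
    virtual double      getDouble(unsigned i) const = 0; // converting accessor
};
// "Like" containers (both sides agree on layout) unpack without conversion,
// the 0.1 usec case above; "unlike" containers pay the 0.6 usec conversion cost.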

 

Note that base class code will be sent to Chris Larrieu.

Bob Sass will look at IDL vs. a virtual base class in the upcoming months.

Should we use IIOP as the protocol – what action is required, and who does it?

CORBA will be examined by Bob Sass and Chris Larrieu with a report forthcoming.

 

On Thursday and Friday, Kay, Jeff, Marty and Bob continued to discuss some of the issues from the meeting.

 

Large Arrays – Part Deux

            It was decided that channel access would allocate some number of fixed-size buffers to hold an incoming array and then put it into the record when the entire array has been received. Channel access would use a high-water mark for memory usage so that other clients have some headroom in memory if they need it. When using large arrays, one will need to be careful about the number of clients connecting to them and the frequency with which they update.
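A sketch of the agreed scheme, with made-up names and sizes: fixed-size buffers avoid fragmentation, and a high-water mark leaves headroom for other clients.

#include <cstddef>

class ArrayBufferPool {
    const std::size_t bufSize_;    // all buffers identical: no fragmentation
    const std::size_t highWater_;  // total bytes we allow ourselves to hold
    std::size_t       inUse_;      // bytes currently allocated
public:
    ArrayBufferPool(std::size_t bufSize, std::size_t highWater)
        : bufSize_(bufSize), highWater_(highWater), inUse_(0) {}

    // Returns 0 once the high-water mark is reached; the caller must then
    // defer or reject the large-array transfer rather than exhaust the IOC.
    char* allocate() {
        if (inUse_ + bufSize_ > highWater_) return 0;
        inUse_ += bufSize_;
        return new char[bufSize_];
    }
    void release(char* buf) {
        delete [] buf;
        inUse_ -= bufSize_;
    }
};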

 

EPICS Release Schedule

3.14          Available for alpha release the first week of February

Features included are:

1)      Portable database and channel access server which allows the database to run on platforms other than vxWorks.

2)      Support for Tornado 2 and Tornado 1 will be included

3)      The ability to log CA puts via dbaccess will be integrated into the access control from DESY (but not in the first alpha release)

3.15          Available for alpha release by August and will include

1)      Support for large arrays

2)      Variable length character strings

3)      Unlimited PV name length

4)      Portable server replacement of rsrv – multi thread locks, no gdd, use new abstract base class

5)      Put confirmation

 

 

Synchronized Put Discussion

            The need to synchronize the putting of many parameters exists in many applications. There are two scenarios: the channels are all in one IOC, or they are spread across several IOCs. Some discussion was held about the possibility of locking the database while these values were set; it was agreed not to pursue this strategy. The new features that will be added to channel access to support this are arm-and-set, as well as put on event or clock time. For arm-and-set, the new values are loaded into the servers with verification that they have arrived. The set command includes a time stamp and gets a response indicating when the set was performed, so the client can at least determine whether the puts fell in the desired time interval. The operation can be abandoned if any of the arm commands returns late; only notification can be given if the set commands execute late. Another approach that will be supported is the ability to set a value on an event. The event can be either some expression in the format of a calc record or a wall-clock time in the future. An error is returned if the wall time has already passed.
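The arm-and-set sequence rendered as client-side pseudologic; the functions below are hypothetical, since the corresponding channel access calls did not yet exist at the time of this meeting.

#include <ctime>

struct Channel;                              // a connected channel (hypothetical)
bool   armPut(Channel* ch, double value);    // load value into server, verified
time_t executeArmed(Channel* ch, time_t at); // apply armed value at 'at';
                                             // returns when the set was performed

// Returns false if arming failed (operation abandoned) or any set ran late
// (notification only -- by then the sets have already executed).
bool synchronizedPut(Channel* chans[], const double vals[], int n,
                     time_t when, double toleranceSec)
{
    for (int i = 0; i < n; ++i)              // phase 1: arm, with verification
        if (!armPut(chans[i], vals[i]))
            return false;                    // a late arm lets us abandon cleanly

    bool onTime = true;                      // phase 2: execute at the given time
    for (int i = 0; i < n; ++i) {
        time_t done = executeArmed(chans[i], when);
        if (difftime(done, when) > toleranceSec)
            onTime = false;                  // late set: we can only notify
    }
    return onTime;
}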

 

 

Data Storage for the Archive Requests

            A discussion was held to examine the use of the abstract base class data object as the value type for the archive interface. The allocation of memory would need to be examined.

 

 

 

 

 

Excel Spreadsheet:

Subject | Priority | Category
Define C++ based client side API | High | CAV4-Client-API
unlimited matrix bounds | High | CAV4-Process-Entity-Paradigm
unlimited string length | High | CAV4-Process-Entity-Paradigm
allow several PCAS servers on one machine | High | Maintenance
convert old client/server to use OSI routines | High | Maintenance
EPICS_CA_ADDR_LIST specifies port numbers | High | Maintenance
if possible, don't allow the server to use the last fd | High | Maintenance
install PCAS as the IOC server | High | Maintenance
abstract base class for data description in the server API (backwards compatible) | High | PCAS
thread safe C++ server library | High | PCAS
Implement C++ based client side API | Normal | CAV4-Client-API
write logging (subscribe for name of client that is writing, etc.) | Normal | CAV4-Data-Acquisition
application extensible container types (msg passing, cmd completion) | Normal | CAV4-Process-Entity-Paradigm
application extensible PV attribute set | Normal | CAV4-Process-Entity-Paradigm
directory service resource location update (detect name space collision) | Low | CAV4 Directory Service
directory service asynchronous query (implement client) | Low | CAV4 Directory Service
directory service asynchronous query (implement server) | Low | CAV4 Directory Service
directory service asynchronous query client side plug in (define API) | Low | CAV4 Directory Service
directory service asynchronous query client side plug in (implement) | Low | CAV4 Directory Service
directory service PV location event subscription | Low | CAV4 Directory Service
directory service redundancy | Low | CAV4 Directory Service
directory service server side update (implement client) | Low | CAV4 Directory Service
directory service server side update (implement server) | Low | CAV4 Directory Service
directory service server side update plug in (define API) | Low | CAV4 Directory Service
directory service server side update plug in (implement) | Low | CAV4 Directory Service
directory service wildcard queries | Low | CAV4 Directory Service
cancel asynchronous write (put callback) while in progress | Low | CAV4-Client-API
export process passive to client API | Low | CAV4-Client-API
2 channels same name => effectively 1 channel in client | Low | CAV4-Compression
combine event subscriptions on same PV in client | Low | CAV4-Compression
move conversions to client | Low | CAV4-Compression
protocol compression (level 1) | Low | CAV4-Compression
protocol compression (level 2) | Low | CAV4-Compression
application extensible event types | Low | CAV4-Data-Acquisition
client adjusts monitor queue length | Low | CAV4-Data-Acquisition
client adjusts server's dispatch priority and net QOS | Low | CAV4-Data-Acquisition
client sets event subscription's min/max rate | Low | CAV4-Data-Acquisition
client sets server's event queue length | Low | CAV4-Data-Acquisition
client specified analog event subscription deadband | Low | CAV4-Data-Acquisition
dynamic evaluation of event trigger criteria in event subscription | Low | CAV4-Data-Acquisition
built in PVs for server diagnostics | Low | CAV4-Process-Entity-Paradigm
client discovers N dimensional bounds of matrix | Low | CAV4-Process-Entity-Paradigm
N dimensional matrix addressing | Low | CAV4-Process-Entity-Paradigm
replace IP broadcast with IP multicast | Low | CAV4-WAN
strengthened security (kerberos, SSH tunnel) | Low | CAV4-WAN

