This chapter describes database definitions. The following definitions are described:
Record Instances are fundamentally different from the other definitions. A file containing record instances should never contain any of the other definitions and vice-versa. Thus the following convention is followed:
This chapter also describes utility programs which operate on these definitions.
Any combination of definitions can appear in a single file or in a set of files related to each other via include files.
The following summarizes the Database Definition syntax:
path "path"
addpath "path"
include "filename"
#comment
menu(name) {
    include "filename"
    choice(choice_name, "choice_value")
    ...
}
recordtype(record_type) {
    include "filename"
    field(field_name, field_type) {
        asl(asl_level)
        initial("init_value")
        promptgroup(gui_group)
        prompt("prompt_value")
        special(special_value)
        pp(pp_value)
        interest(interest_level)
        base(base_type)
        size(size_value)
        extra("extra_info")
        menu(name)
        prop(yesno)
    }
    %C_declaration
    ...
}
device(record_type, link_type, dset_name, "choice_string")
driver(drvet_name)
registrar(function_name)
function(function_name)
variable(variable_name)
breaktable(name) {
    raw_value eng_value
    ...
}
The following defines a Record Instance:
record(record_type, record_name) {
    include "filename"
    field(field_name, "value")
    alias(alias_name)
    info(info_name, "value")
    ...
}
alias(record_name, alias_name)
The following are keywords, i.e. they may not be used as values unless they are enclosed in quotes:
path addpath include menu choice recordtype field device driver registrar function variable breaktable record grecord info alias
In the summary section, some values are shown as quoted strings and some unquoted. The actual rule is that any string consisting of only the following characters does not have to be quoted unless it contains one of the above keywords:
a-z A-Z 0-9 _ - : . [ ] < > ;
These are also the legal characters for process variable names. Thus in many cases quotes are not needed.
A quoted string can contain any ASCII character except the quote character ``"''. The quote character itself can be given by using \ as an escape. For example "\"" is a quoted string containing the single character ``"''.
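To illustrate the quoting rules, consider the following field values in a record instance file (the record fields and values here are hypothetical):

```
field(EGU, degC)            # only legal unquoted characters, no quotes needed
field(DESC, "Tank 1 level") # contains spaces, must be quoted
field(ZNAM, "field")        # a keyword, must be quoted
```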
Macro substitutions are permitted inside quoted strings. Macro instances take the form:
$(name)
or
${name}
There is no distinction between the use of parentheses or braces for delimiters, although the two must match for a given macro instance. The macro name can be made up from other macros, for example:
$(name_$(sel))
A macro instance can also provide a default value that is used when no macro with the given name is defined. The default value can be defined in terms of other macros if desired, but cannot contain any unescaped comma characters. The syntax for specifying a default value is as follows:
$(name=default)
Finally, macro instances can also contain definitions of other macros, which can (temporarily) override any existing values for those macros but are in scope only for the duration of the expansion of this macro instance. These definitions consist of name=value sequences separated by commas, for example:
$(abcd=$(a)$(b)$(c)$(d),a=A,b=B,c=C,d=D)
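As an illustration (the record and macro names here are hypothetical), the default-value form allows a template to load even when no macros are defined:

```
record(ai, "$(P=DEMO:)temperature") {
    field(EGU, "$(units=Celsius)")
}
```

Loaded with no macros defined, this creates a record named DEMO:temperature with EGU set to Celsius; loading it with the substitution P=LAB1:,units=K overrides both values.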
The database routines translate standard C escape sequences inside database field value strings only. The standard C escape sequences supported are:
\a \b \f \n \r \t \v \\ \? \' \" \ooo \xhh
\ooo represents an octal number with 1, 2, or 3 digits, and \xhh represents a hexadecimal number with 1 or 2 digits.
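These rules can be illustrated with a small translation routine. The sketch below is a hypothetical re-implementation of the behaviour described above, not the actual routine used by the database routines in EPICS Base:

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Hypothetical sketch of C escape-sequence translation.  Writes the
 * translated form of 'in' into 'out' (which must be at least as large
 * as 'in') and returns the translated length. */
static size_t translate_escapes(char *out, const char *in)
{
    char *p = out;

    while (*in) {
        if (*in != '\\') {          /* ordinary character */
            *p++ = *in++;
            continue;
        }
        in++;                       /* skip the backslash */
        if (!*in)
            break;                  /* ignore a trailing backslash */
        switch (*in) {
        case 'a': *p++ = '\a'; in++; break;
        case 'b': *p++ = '\b'; in++; break;
        case 'f': *p++ = '\f'; in++; break;
        case 'n': *p++ = '\n'; in++; break;
        case 'r': *p++ = '\r'; in++; break;
        case 't': *p++ = '\t'; in++; break;
        case 'v': *p++ = '\v'; in++; break;
        case 'x': {                 /* \xhh: 1 or 2 hex digits */
            int v = 0, n = 0;
            in++;
            while (n < 2 && isxdigit((unsigned char)*in)) {
                v = v * 16 + (isdigit((unsigned char)*in)
                              ? *in - '0'
                              : tolower((unsigned char)*in) - 'a' + 10);
                in++; n++;
            }
            *p++ = (char)v;
            break;
        }
        case '0': case '1': case '2': case '3':
        case '4': case '5': case '6': case '7': {
            int v = 0, n = 0;       /* \ooo: 1, 2 or 3 octal digits */
            while (n < 3 && *in >= '0' && *in <= '7') {
                v = v * 8 + (*in++ - '0');
                n++;
            }
            *p++ = (char)v;
            break;
        }
        default:                    /* \\ \? \' \" and anything else */
            *p++ = *in++;
            break;
        }
    }
    *p = '\0';
    return (size_t)(p - out);
}
```

For example, the input string A\tB (five source characters) translates to the three characters A, tab, B.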
The comment symbol is ``#''. Whenever the comment symbol appears, it and all characters through the end of the line are ignored.
No item can be referenced until it is defined. For example, a recordtype menu field cannot reference a menu unless that menu has already been defined. Similarly, a record instance cannot appear until the associated record type has been defined.
If a menu, recordtype, device, driver, or breakpoint table is defined more than once, then only the first instance is used. Record instance definitions however are (normally) cumulative, so multiple instances of the same record may be loaded and each time a field value is encountered it replaces the previous value.
The convention is that record instance files have the extension ``.db'', or ``.vdb'' if the file also contains visual layout information, while database definition files have the extension ``.dbd''.
path "dir:dir...:dir"
addpath "dir:dir...:dir"
The path string follows the standard convention for the operating system, i.e. directory names are separated by a colon ``:'' on Unix and a semicolon ``;'' on Windows.
The path command specifies the current search path for use when loading database and database definition files. The addpath command appends directory names to the current path.
The path is used to locate the initial database file and included files.
An empty dir at the beginning, middle, or end of a non-empty path string means the current directory.
For example:
nnn::mmm   # Current directory is between nnn and mmm
:nnn       # Current directory is first
nnn:       # Current directory is last
Utilities which load database files (dbExpand, dbLoadDatabase, etc.) allow the user to specify an initial path. The path and addpath commands can be used to change or extend the initial path.
The initial path is determined as follows: if the environment variable EPICS_DB_INCLUDE_PATH is defined, it is used; otherwise the path is ``.'', i.e. the current directory.
The path is used unless the filename contains a / or \. The first directory containing the specified filename is used.
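For example (the directory and file names here are hypothetical), a database definition file might adjust the search path before including other files:

```
path "/usr/local/epics/dbd"
addpath "."
include "menuGlobal.dbd"
include "myRecord.dbd"
```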
include "filename"
An include statement can appear at any place shown in the summary. It uses the path as specified above.
menu(name) {
    choice(choice_name, "choice_string")
    ...
}
The name and choice_name values are used in the enum generated by dbdToMenuH.pl or dbdToRecordtypeH.pl, so each must be a legal C/C++ identifier.
menu(menuYesNo) {
    choice(menuYesNoNO, "NO")
    choice(menuYesNoYES, "YES")
}
recordtype(record_type) {
    field(field_name, field_type) {
        asl(asl_level)
        initial("init_value")
        promptgroup(gui_group)
        prompt("prompt_value")
        special(special_value)
        pp(pp_value)
        interest(interest_level)
        base(base_type)
        size(size_value)
        extra("extra_info")
        menu(name)
        prop(yesno)
    }
    %C_declaration
    ...
}
prompt_value - Optional if promptgroup is not defined.
size_value - Must be given for DBF_STRING fields.
extra_info - Only applies to DBF_NOACCESS fields.
name - Only applies to DBF_MENU fields. It is the name of the associated menu.
The field_type must be one of the following values:
DBF_STRING
DBF_CHAR, DBF_UCHAR
DBF_SHORT, DBF_USHORT
DBF_LONG, DBF_ULONG
DBF_FLOAT, DBF_DOUBLE
DBF_ENUM, DBF_MENU, DBF_DEVICE
DBF_INLINK, DBF_OUTLINK, DBF_FWDLINK
DBF_NOACCESS
asl_level must be ASL0 or ASL1 (the default value). Fields which operators normally change are assigned ASL0; other fields are assigned ASL1. For example, the VAL field of an analog output record is assigned ASL0 and all other fields ASL1, because only the VAL field should be modified during normal operations.
The gui_group must be one of the following values:
GUI_COMMON
GUI_ALARMS
GUI_BITS1
GUI_BITS2
GUI_CALC
GUI_CLOCK
GUI_COMPRESS
GUI_CONVERT
GUI_DISPLAY
GUI_HIST
GUI_INPUTS
GUI_LINKS
GUI_MBB
GUI_MOTOR
GUI_OUTPUT
GUI_PID
GUI_PULSE
GUI_SELECT
GUI_SEQ1
GUI_SEQ2
GUI_SEQ3
GUI_SUB
GUI_TIMER
GUI_WAVE
GUI_SCAN
This information is for use by Database Configuration Tools.
This is defined only for fields that can be given values by database configuration tools.
File guigroup.h
contains all possible definitions.
This allows database configuration tools to group fields together by functionality, not just order them by name.
This feature has seldom been used, so many record types do not have appropriate values assigned to some fields.
SPC_MOD - Notify record support when modified. The record support special routine will be called whenever the field is modified by the database access routines.
SPC_NOMOD - No external modifications allowed. This value disables external writes to the field, so it can only be set by the record or device support module.
SPC_DBADDR - Use this if the record support cvt_dbaddr routine should be called by dbNameToAddr, i.e. when code outside record/device support is connecting to the field.
The following values are for database common fields. They must not be used for record specific fields:
SPC_SCAN - Scan related field.
SPC_ALARMACK - Alarm acknowledgment field.
SPC_AS - Access security field.
The following values are deprecated, use SPC_MOD instead:
SPC_RESET - A reset field is being modified.
SPC_LINCONV - A linear conversion field is being modified.
SPC_CALC - A calc field is being modified.
pp_value - NO (the default) or YES.
interest_level - The interest level for the field, used by the dbpr command.
base_type - DECIMAL (the default) or HEX.
size_value - The number of characters for a DBF_STRING field.
extra_info - For DBF_NOACCESS fields, this is the C language definition for the field. The definition must end with the fieldname in lower case.
A % inside the record body introduces a line of code that is to be included in the generated C header file.
The following is the definition of the event record type:
recordtype(event) {
    include "dbCommon.dbd"
    field(VAL,DBF_USHORT) {
        prompt("Event Number To Post")
        promptgroup(GUI_INPUTS)
        asl(ASL0)
    }
    field(INP,DBF_INLINK) {
        prompt("Input Specification")
        promptgroup(GUI_INPUTS)
        interest(1)
    }
    field(SIOL,DBF_INLINK) {
        prompt("Sim Input Specifctn")
        promptgroup(GUI_INPUTS)
        interest(1)
    }
    field(SVAL,DBF_USHORT) {
        prompt("Simulation Value")
    }
    field(SIML,DBF_INLINK) {
        prompt("Sim Mode Location")
        promptgroup(GUI_INPUTS)
        interest(1)
    }
    field(SIMM,DBF_MENU) {
        prompt("Simulation Mode")
        interest(1)
        menu(menuYesNo)
    }
    field(SIMS,DBF_MENU) {
        prompt("Sim mode Alarm Svrty")
        promptgroup(GUI_INPUTS)
        interest(2)
        menu(menuAlarmSevr)
    }
}
device(record_type, link_type, dset_name, "choice_string")
The combination of record_type and choice_string must be unique. If the same combination appears more than once, only the first definition is used. The link_type must be one of the following values:
CONSTANT
PV_LINK
VME_IO
CAMAC_IO
AB_IO
GPIB_IO
BITBUS_IO
INST_IO
BBGPIB_IO
RF_IO
VXI_IO
choice_string - The DTYP choice string for this device support. A choice_string value may be reused for different record types, but must be unique for each specific record type.
device(ai,CONSTANT,devAiSoft,"Soft Channel")
device(ai,VME_IO,devAiXy566Se,"XYCOM-566 SE Scanned")
driver(drvet_name)
driver(drvVxi)
driver(drvXy210)
registrar(function_name)
The named function must return void, take no arguments, and be marked in its source file with an epicsExportRegistrar declaration, e.g.
static void myRegistrar(void);
epicsExportRegistrar(myRegistrar);
This can be used to register functions for use by subroutine records or that can be invoked from iocsh. The example application described in Section 2.2, ``Example IOC Application'', gives an example of how to register functions for subroutine records.
registrar(myRegistrar)
variable(variable_name[, type])
The named variable must be marked in its source file with an epicsExportAddress declaration. If no type is given, int is assumed. Currently only int and double variables are supported.
This registers a diagnostic/configuration variable for device or driver support or a subroutine record subroutine so that the variable can be read and set with the iocsh var command (see Section 18.2.5). The example application described in Section 2.2 provides an example of how to register a debug variable for a subroutine record.
In an application C source file:
#include <epicsExport.h>

static double myParameter;
epicsExportAddress(double, myParameter);
In an application database definition file:
variable(myParameter, double)
function(function_name)
The named function must be marked in its source file with an epicsRegisterFunction declaration.
This registers a function so that it can be found in the function registry for use by record types such as sub or aSub which refer to the function by name. The example application described in Section 2.2 provides an example of how to register functions for a subroutine record.
In an application C source file:
#include <epicsExport.h>
#include <registryFunction.h>

static long myFunction(void *argp) {
    /* my code ... */
}
epicsRegisterFunction(myFunction);
In an application database definition file:
function(myFunction)
breaktable(name) {
    raw_value eng_value
    ...
}
breaktable(typeJdegC) {
    0.000000    0.000000
    365.023224  67.000000
    1000.046448 178.000000
    3007.255859 524.000000
    3543.383789 613.000000
    4042.988281 692.000000
    4101.488281 701.000000
}
record(record_type, record_name) {
    alias(alias_name)
    field(field_name, "field_value")
    info(info_name, "info_value")
    ...
}
alias(record_name, alias_name)
The record name must be composed only of the following characters:
a-z A-Z 0-9 _ - + : [ ] < > ;
NOTE: If macro substitutions are used the name must be quoted.
If duplicate definitions are given for the same record, then the last value given for each field is the value assigned to the field.
Field values may contain standard C escape sequences such as \", \t, \n, \064 and \x7e, and these will be translated appropriately when loading the database.
Permitted values are as follows:
DBF_STRING
DBF_CHAR
, DBF_UCHAR
, DBF_SHORT
, DBF_USHORT
, DBF_LONG
, DBF_ULONG
DBF_FLOAT
, DBF_DOUBLE
DBF_MENU
DBF_DEVICE
DBF_INLINK
, DBF_OUTLINK
, DBF_FWDLINK
If the field is INP or OUT then this field is associated with DTYP, and the permitted values are determined by the link type of the device support selected by the current DTYP choice string. Other DBF_INLINK and DBF_OUTLINK fields must be either CONSTANTs or PV_LINKs. A device support with link type CONSTANT can be given either a constant or a PV_LINK.
The allowed values for the field depend on the device support's link type as follows:
CONSTANT - A constant value.
PV_LINK - A link to another record, of the form:
record.field process maximize
record is the name of a record that exists in this or another IOC. The .field, process, and maximize parts are all optional. The default value for .field is .VAL.
process can have one of the following values:
NPP - No Process Passive (default)
PP - Process Passive
CA - Force link to be a channel access link
CP - CA and process on monitor
CPP - CA and process on monitor if record is passive
NOTES:
CP and CPP are valid only for DBF_INLINK fields.
DBF_FWDLINK fields can use PP or CA. If a DBF_FWDLINK is a channel access link it must reference the target record's PROC field.
maximize can have one of the following values:
NMS - No Maximize Severity (default)
MS - Maximize Severity
MSS - Maximize Severity and Status
MSI - Maximize Severity if Invalid
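For example (hypothetical record names), a record instance can combine the process and maximize options described above:

```
record(calc, "demo:total") {
    field(INPA, "demo:flow1 PP MS")
    field(INPB, "demo:flow2.VAL CP NMS")
    field(FLNK, "demo:log")
}
```

INPA forces demo:flow1 to process before its value is read and propagates its alarm severity; INPB becomes a channel access link that causes demo:total to process whenever a monitor arrives.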
VME_IO
#Ccard Ssignal @parm
card - The card number of the associated hardware module.
signal - Signal on the card.
parm - An arbitrary character string of up to 31 characters. This field is optional and is device specific.
CAMAC_IO
#Bbranch Ccrate Nstation Asubaddress Ffunction @parm
branch, crate, station, subaddress, and function should be obvious to CAMAC users. subaddress and function are optional (0 if not given). parm is also optional and is device specific (25 characters max).
AB_IO
#Llink Aadapter Ccard Ssignal @parm
link - Scanner, i.e. VME scanner number.
adapter - Adapter. Allen Bradley also calls this rack.
card - Card within Allen Bradley chassis.
signal - Signal on card.
parm - Optional device-specific character string (27 char max).
GPIB_IO
#Llink Aaddr @parm
link - GPIB link, i.e. interface.
addr - GPIB address.
parm - Device-specific character string (31 char max).
BITBUS_IO
#Llink Nnode Pport Ssignal @parm
link - Link, i.e. VME bitbus interface.
node - Bitbus node.
port - Port on the node.
signal - Signal on port.
parm - Device-specific character string (31 char max).
INST_IO
@parm
parm - Device-dependent character string.
BBGPIB_IO
#Llink Bbbaddr Ggpibaddr @parm
link - Link, i.e. VME bitbus interface.
bbaddr - Bitbus address.
gpibaddr - GPIB address.
parm - Optional device-specific character string (31 char max).
RF_IO
#Rcryo Mmicro Ddataset Eelement
VXI_IO
#Vframe Cslot Ssignal @parm (dynamic addressing)
#Vla Ssignal @parm (static addressing)
frame - VXI frame number.
slot - Slot within VXI frame.
la - Logical address.
signal - Signal number.
parm - Device-specific character string (25 char max).
record(ai,STS_AbAiMaS0) {
    field(SCAN,".1 second")
    field(DTYP,"AB-1771IFE-4to20MA")
    field(INP,"#L0 A2 C0 S0 F0 @")
    field(PREC,"4")
    field(LINR,"LINEAR")
    field(EGUF,"20")
    field(EGUL,"4")
    field(EGU,"MilliAmps")
    field(HOPR,"20")
    field(LOPR,"4")
}
record(ao,STS_AbAoMaC1S0) {
    field(DTYP,"AB-1771OFE")
    field(OUT,"#L0 A2 C1 S0 F0 @")
    field(LINR,"LINEAR")
    field(EGUF,"20")
    field(EGUL,"4")
    field(EGU,"MilliAmp")
    field(DRVH,"20")
    field(DRVL,"4")
    field(HOPR,"20")
    field(LOPR,"4")
    info(autosaveFields,"VAL")
}
record(bi,STS_AbDiA0C0S0) {
    field(SCAN,"I/O Intr")
    field(DTYP,"AB-Binary Input")
    field(INP,"#L0 A0 C0 S0 F0 @")
    field(ZNAM,"Off")
    field(ONAM,"On")
}
Information items provide a way to attach named string values to individual record instances that are loaded at the same time as the record definition.
They can be attached to any record without having to modify the record type, and can be retrieved by programs running on the IOC (they are not visible via Channel Access at all).
Each item attached to a single record must have a unique name by which it is addressed, and database access provides routines to allow a record's info items to be scanned, searched for, retrieved and set.
At runtime a void* pointer can also be associated with each item, although only the string value can be initialized from the record definition when the database is loaded.
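For example (the record and the second item name here are hypothetical), info items are attached inside the record body:

```
record(ai, "demo:pressure") {
    field(DESC, "Tank pressure")
    info(autosaveFields, "VAL PREC")
    info(archiver, "monitor 1.0")
}
```

The strings are meaningful only to the IOC software that looks them up by name; the database software itself just stores them.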
Each record type can have any number of record attributes.
Each attribute is a pseudo field that can be accessed via database and channel access.
Each attribute has a name that acts like a field name but returns the same value for all instances of the record type.
Two attributes are generated automatically for each record type: RTYP and VERS. The value of RTYP is the record type name. The default value of VERS is ``none specified'', which can be changed by record support.
Record support can call the following routine to create new attributes or change existing attributes:
long dbPutAttribute(char *recordTypename, char *name, char *value)
The arguments are:
recordTypename - The name of the record type.
name - The attribute name, i.e. the pseudo field name.
value - The value assigned to the attribute.
The menu menuConvert is used for field LINR of the ai and ao records.
These records allow raw data to be converted to/from engineering units via one of the following:
Other record types can also use this feature.
The first choice specifies no conversion; the second and third are both linear conversions, the difference being that for Slope conversion the user specifies the conversion slope and offset values directly, whereas for Linear conversions these are calculated by the device support from the requested Engineering Units range and the device support's knowledge of the hardware conversion range.
The remaining choices are assumed to be the names of breakpoint tables.
If a breakpoint table is chosen, the record support module calls cvtRawToEngBpt or cvtEngToRawBpt.
You can look at the ai and ao record support modules for details.
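The conversion performed with a breakpoint table can be sketched as a piecewise-linear interpolation. The code below is a hypothetical illustration, not the actual cvtRawToEngBpt implementation, which keeps additional state and does error handling omitted here:

```c
#include <assert.h>
#include <stddef.h>

/* Each table entry pairs a raw value with an engineering value; raw
 * values between entries are converted by linear interpolation along
 * the segment joining the two adjacent entries. */
typedef struct {
    double raw;
    double eng;
} brkpnt;

static double raw_to_eng(const brkpnt *tbl, size_t n, double raw)
{
    size_t i;

    /* find the segment containing raw, clamping to the end segments */
    for (i = 1; i < n - 1 && raw > tbl[i].raw; i++)
        ;
    return tbl[i-1].eng + (raw - tbl[i-1].raw)
         * (tbl[i].eng - tbl[i-1].eng) / (tbl[i].raw - tbl[i-1].raw);
}
```

Applied to the typeJdegC table shown earlier, a raw value of 365.023224 converts to 67.0, and raw values between breakpoints fall on the line joining the adjacent entries.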
If a user wants to add additional breakpoint tables, the following should be done: copy the menuConvert.dbd file from EPICS base/src/ioc/bpt, add choices for the new breakpoint tables, and make sure the modified menuConvert.dbd is loaded into the IOC instead of the EPICS version.
It is only necessary to load a breakpoint file if a record instance actually chooses it.
It should also be mentioned that the Allen Bradley IXE device support misuses the LINR field. If you use this module, it is very important that you do not change any of the EPICS supplied definitions in menuConvert.dbd. Just add your definitions at the end.
If a breakpoint table is chosen, then the corresponding breakpoint file must be loaded into the IOC before iocInit is called.
Normally, it is desirable to directly create the breakpoint tables.
However, sometimes it is desirable to create a breakpoint table from a table of raw values representing equally spaced engineering units.
A good example is the Thermocouple tables in the OMEGA Engineering, INC Temperature Measurement Handbook.
A tool makeBpt
is provided to convert such data to a breakpoint table.
The format for generating a breakpoint table from a data table of raw values corresponding to equally spaced engineering values is:
!comment line
<header line>
<data table>
The header line contains the following information:
An example definition is:
"TypeKdegF" 32 0 1832 4095 1.0 -454 2500 1
<data table>
The breakpoint table can be generated by executing
makeBpt bptXXX.data
The input file must have the extension .data. The output filename is the same as the input filename with the extension .dbd.
Another way to create the breakpoint table is to include the following definition in a Makefile:
BPTS += bptXXX.dbd
NOTE: This requires the naming convention that all data tables are named bpt<name>.data and the corresponding breakpoint tables bpt<name>.dbd.
Given a file containing menu definitions, dbdToMenuH.pl
generates a C/C++ header file for use by code which needs those menus.
Given a file containing any combination of menu definitions and record type definitions, dbdToRecordtypeH.pl generates a C/C++ header file for use by any code which needs those menus and record types.
EPICS Base uses the following conventions for managing menu and recordtype definitions. Users generating local record types are encouraged to follow these.
Each menu that is either used by database common (for example menuScan) or is of global use (for example menuYesNo) should be defined in its own file. The name of the file is the same as the menu name, with an extension of .dbd. The name of the generated include file is the menu name, with an extension of .h. Thus menuScan is defined in a file menuScan.dbd and the generated include file is named menuScan.h.
Each record type is defined in its own file, whose name is the record type name followed by Record.dbd. The name of the generated include file is the same as the .dbd file but with an extension of .h. Thus the record type ao is defined in a file aoRecord.dbd and the generated include file is named aoRecord.h.
Since aoRecord has a private menu called aoOIF, the dbd file and the generated include file will have definitions for this menu.
Thus for each record type, there are two source files (xxxRecord.dbd and xxxRecord.c) and one generated file (xxxRecord.h).
Note that developers don't normally execute the dbdToMenuH.pl or dbdToRecordtypeH.pl programs manually. If the proper naming conventions are used, it is only necessary to add definitions to the appropriate Makefile.
Consult the chapter on the EPICS Build Facility for details.
This tool is executed as follows:
dbdToMenuH.pl [-D] [-I dir] [-o menu.h] menu.dbd [menu.h]
It reads in the input file menu.dbd
and generates a C/C++ header file containing enumerated type definitions for the menus found in the input file.
Multiple -I
options can be provided to specify directories that must be searched when looking for included files.
If no output filename is specified with the -o menu.h option or as a final command-line parameter, then the output filename will be constructed from the input filename, replacing .dbd with .h.
The -D option causes the program to output Makefile dependency information for the output file to standard output, instead of actually performing the functions described above.
For example menuPriority.dbd, which defines the processing priority menu, contains:
menu(menuPriority) {
    choice(menuPriorityLOW,"LOW")
    choice(menuPriorityMEDIUM,"MEDIUM")
    choice(menuPriorityHIGH,"HIGH")
}
The include file menuPriority.h
that is generated contains:
/* menuPriority.h generated from menuPriority.dbd */

#ifndef INC_menuPriority_H
#define INC_menuPriority_H

typedef enum {
    menuPriorityLOW         /* LOW */,
    menuPriorityMEDIUM      /* MEDIUM */,
    menuPriorityHIGH        /* HIGH */,
    menuPriority_NUM_CHOICES
} menuPriority;

#endif /* INC_menuPriority_H */
Any code that needs the priority menu values should include this file and make use of these definitions.
This tool is executed as follows:
dbdToRecordtypeH.pl [-D] [-I dir] [-o xRecord.h] xRecord.dbd [xRecord.h]
It reads in the input file xRecord.dbd and generates a C/C++ header file which defines the in-memory structure of the given record type and provides other associated information for the compiler.
If the input file contains any menu definitions, they will also be converted into enumerated type definitions in the output file.
Multiple -I
options can be provided to specify directories that must be searched when looking for included files.
If no output filename is specified with the -o xRecord.h option or as a final command-line parameter, then the output filename will be constructed from the input filename, replacing .dbd with .h.
The -D option causes the program to output Makefile dependency information for the output file to standard output, instead of actually performing the functions described above.
For example aoRecord.dbd, which defines the analog output record type, contains:
menu(aoOIF) {
    choice(aoOIF_Full,"Full")
    choice(aoOIF_Incremental,"Incremental")
}
recordtype(ao) {
    include "dbCommon.dbd"
    field(VAL,DBF_DOUBLE) {
        prompt("Desired Output")
        promptgroup(GUI_OUTPUT)
        asl(ASL0)
        pp(TRUE)
    }
    field(OVAL,DBF_DOUBLE) {
        prompt("Output Value")
    }
    ... many more field definitions
}
The include file aoRecord.h
that is generated contains:
/* aoRecord.h generated from aoRecord.dbd */

#ifndef INC_aoRecord_H
#define INC_aoRecord_H

#include "epicsTypes.h"
#include "link.h"
#include "epicsMutex.h"
#include "ellLib.h"
#include "epicsTime.h"

typedef enum {
    aoOIF_Full          /* Full */,
    aoOIF_Incremental   /* Incremental */,
    aoOIF_NUM_CHOICES
} aoOIF;

typedef struct aoRecord {
    char            name[61];   /* Record Name */
    ... define remaining fields from database common
    epicsFloat64    val;        /* Desired Output */
    epicsFloat64    oval;       /* Output Value */
    ... define remaining record specific fields
} aoRecord;

typedef enum {
    aoRecordNAME = 0,
    aoRecordDESC = 1,
    ... indices for remaining fields in database common
    aoRecordVAL = 43,
    aoRecordOVAL = 44,
    ... indices for remaining record specific fields
} aoFieldIndex;

#ifdef GEN_SIZE_OFFSET
#ifdef __cplusplus
extern "C" {
#endif
#include <epicsExport.h>
static int aoRecordSizeOffset(dbRecordType *prt)
{
    aoRecord *prec = 0;
    prt->papFldDes[aoRecordNAME]->size = sizeof(prec->name);
    ... code to compute size for remaining fields
    prt->papFldDes[aoRecordNAME]->offset = (char *)&prec->name - (char *)prec;
    ... code to compute offset for remaining fields
    prt->rec_size = sizeof(*prec);
    return 0;
}
epicsExportRegistrar(aoRecordSizeOffset);
#ifdef __cplusplus
}
#endif
#endif /* GEN_SIZE_OFFSET */

#endif /* INC_aoRecord_H */
The analog output record support module and all associated device support modules should include this file. No other code should use it.
Let's discuss the various parts of the file:
The enum generated from the menu definition should be used to provide values for the field associated with that menu.
The typedef struct defining the record is used by record support and device support to access the fields in an analog output record.
The final enum defines an index number for each field within the record. This is useful for record support routines that are passed a pointer to a DBADDR structure. They can have code like the following:
switch (dbGetFieldIndex(pdbAddr)) {
case aoRecordVAL:
    ...
    break;
case aoRecordXXX:
    ...
    break;
default:
    ...
}
The generated routine aoRecordSizeOffset is executed when the record type gets registered with an IOC.
The routine is compiled with the record type code, and is marked static so it will not be visible outside of that file.
The associated record support source code MUST include the generated header file only after defining the GEN_SIZE_OFFSET macro, like this:
#define GEN_SIZE_OFFSET
#include "aoRecord.h"
#undef GEN_SIZE_OFFSET
This convention ensures that the routine is defined exactly once.
The epicsExportRegistrar statement ensures that the record registration code can find and call the routine.
dbdExpand.pl [-D] [-I dir] [-S mac=sub] [-o out.dbd] in.dbd ...
This program reads and combines the database definitions from all the input files, then writes a single output file containing all information from the input files. The output content differs from the input in that comment lines are removed, and all defined macros and include files are expanded. Unlike the previous dbExpand program, this program does not understand database instances and cannot be used with .db or .vdb files.
Multiple -I
options can be provided to specify directories that must be searched when looking for included files.
Multiple -S
options are allowed for macro substitution, or multiple macros can be specified within a single option.
If no output filename is specified with the -o out.dbd
option then the output will go to stdout.
The -D option causes the program to output Makefile dependency information for the output file to standard output, instead of actually performing the functions described above.
dbLoadDatabase(char *dbdfile, char *path, char *substitutions)
NOTES:
dbdfile may contain environment variable macros of the form ${MOTOR} which will be expanded before the file is opened.
This command loads a database file containing any of the definitions given in the summary at the beginning of this chapter.
Note that dbLoadDatabase should only be used to load a Database Definition (.dbd) file, although it is currently possible to use it for loading Record Instance (.db) files as well.
As each line of dbdfile is read, the substitutions specified in substitutions
are performed. Substitutions are specified as follows:
"var1=sub1,var2=sub3,..."
Variables are specified in the dbfile as $(var). If the substitution string

"a=1,b=2,c=\"this is a test\""

were used, any variables $(a), $(b), $(c) in the database file would have the appropriate values substituted during parsing.
dbLoadRecords(char* dbfile, char* substitutions)
NOTES:
dbfile should contain only record instances, record aliases and/or breakpoint tables. The dbfile string may itself contain environment variable macros of the form ${MOTOR} which will be expanded before the file is opened.
For example, let the file test.db contain:
record(ai, "$(pre)testrec1")
record(ai, "$(pre)testrec2")
record(stringout, "$(pre)testrec3") {
    field(VAL, "$(STR)")
    field(SCAN, "$(SCAN)")
}
Then issuing the command:
dbLoadRecords("test.db", "pre=TEST,STR=test,SCAN=Passive")
gives the same results as loading:
record(ai, "TESTtestrec1")
record(ai, "TESTtestrec2")
record(stringout, "TESTtestrec3") {
    field(VAL, "test")
    field(SCAN, "Passive")
}
dbLoadTemplate(char *subfile, char *substitutions)
NOTES:
dbLoadTemplate reads a template substitution file.
This file contains rules about loading database instance files and provides values for the $(xxx)
macros they contain.
This command performs those substitutions while loading the database instances requested.
The subfile parameter provides the name of the template substitution file to be used. The optional substitutions parameter may contain additional global macro values, which can be redefined within the substitution file.
The template substitution file syntax is described in the following Extended Backus-Naur Form grammar:
substitution-file ::= ( global-defs | template-subs )+
global-defs ::= 'global' '{' variable-defs? '}'
template-subs ::= template-filename '{' subs? '}'
template-filename ::= 'file' file-name
subs ::= pattern-subs | variable-subs
pattern-subs ::= 'pattern' '{' pattern-names? '}' pattern-defs?
pattern-names ::= ( variable-name ','? )+
pattern-defs ::= ( global-defs | ( '{' pattern-values? '}' ) )+
pattern-values ::= ( value ','? )+
variable-subs ::= ( global-defs | ( '{' variable-defs? '}' ) )+
variable-defs ::= ( variable-def ','? )+
variable-def ::= variable-name '=' value
variable-name ::= variable-name-start variable-name-char*
file-name ::= file-name-char+ | double-quoted-str | single-quoted-str
value ::= value-char+ | double-quoted-str | single-quoted-str
double-quoted-str ::= '"' (double-quoted-char | escaped-char)* '"'
single-quoted-str ::= "'" (single-quoted-char | escaped-char)* "'"
double-quoted-char ::= [^"\]
single-quoted-char ::= [^'\]
escaped-char ::= '\' .
value-char ::= [a-zA-Z0-9_+:;./\<>[] | '-' | ']'
variable-name-start ::= [a-zA-Z_]
variable-name-char ::= [a-zA-Z0-9_]
file-name-char ::= [a-zA-Z0-9_+:;./\] | '-'
Note that the current implementation may accept a wider range of characters for the last three definitions than those listed here, but future releases may restrict the characters to those given above.
Any record instance file names must appear inside quotation marks if the name contains any environment variable macros of the form ${ENV_VAR_NAME}
, which will be expanded before the named file is opened.
Two different template formats are supported by the syntax rules given above. The format is either:
file name.template {
    { var1=sub1_for_set1, var2=sub2_for_set1, var3=sub3_for_set1, ... }
    { var1=sub1_for_set2, var2=sub2_for_set2, var3=sub3_for_set2, ... }
    { var1=sub1_for_set3, var2=sub2_for_set3, var3=sub3_for_set3, ... }
}
or:
file name.template {
    pattern { var1, var2, var3, ... }
    { sub1_for_set1, sub2_for_set1, sub3_for_set1, ... }
    { sub1_for_set2, sub2_for_set2, sub3_for_set2, ... }
    { sub1_for_set3, sub2_for_set3, sub3_for_set3, ... }
}
The first line (file name.template) specifies the record instance input file. The file name may appear inside double quotation marks; these are required if the name contains any characters that are not in the following set, or if it contains environment variable macros of the form ${ENV_VAR_NAME} which must be expanded to properly generate the file name:
a-z A-Z 0-9 _ + - . / \ : ; [ ] < >
Each set of definitions enclosed in {} is a set of variable substitutions for the input file. The input file has each set applied to it to produce one composite file with all the completed substitutions in it. Version 1 should be obvious. In version 2, the variables are listed in the pattern{} line, which must precede the braced substitution lines. The braced substitution lines contain sets which match up with the pattern{} line.
Two simple template file examples are shown below. The examples specify the same substitutions to perform: this=sub1 and that=sub2 for a first set, and this=sub3 and that=sub4 for a second set.
file test.template {
    { this=sub1, that=sub2 }
    { this=sub3, that=sub4 }
}

file test.template {
    pattern { this, that }
    { sub1, sub2 }
    { sub3, sub4 }
}
Assume that the file test.template contains:
record(ai,"$(this)record") {
    field(DESC,"this = $(this)")
}
record(ai,"$(that)record") {
    field(DESC,"this = $(that)")
}
Using dbLoadTemplate with either input is the same as defining the records:
record(ai,"sub1record") {
    field(DESC,"this = sub1")
}
record(ai,"sub2record") {
    field(DESC,"this = sub2")
}
record(ai,"sub3record") {
    field(DESC,"this = sub3")
}
record(ai,"sub4record") {
    field(DESC,"this = sub4")
}