This chapter describes database definitions (menus, record types, devices, drivers, registrars, variables, functions, and breakpoint tables) together with record instances, and the utility programs which operate on these definitions.

Any combination of definitions can appear in a single file or in a set of files related to each other via include files. Record instances, however, are fundamentally different from the other definitions: a file containing record instances should never contain any of the other definitions, and vice-versa. The filename conventions described below distinguish the two kinds of file.
The following summarizes the Database Definition syntax:
    path "path"
    addpath "path"
    include "filename"
    #comment

    menu(name) {
        include "filename"
        choice(choice_name, "choice_value")
        ...
    }

    recordtype(record_type) {
        include "filename"
        field(field_name, field_type) {
            asl(asl_level)
            initial("init_value")
            promptgroup(gui_group)
            prompt("prompt_value")
            special(special_value)
            pp(pp_value)
            interest(interest_level)
            base(base_type)
            size(size_value)
            extra("extra_info")
            menu(name)
        }
        %C_declaration
        ...
    }

    device(record_type, link_type, dset_name, "choice_string")
    driver(drvet_name)
    registrar(function_name)
    function(function_name)
    variable(variable_name)
    breaktable(name) {
        raw_value eng_value
        ...
    }
The following defines a record instance:
    record(record_type, record_name) {
        include "filename"
        field(field_name, "value")
        alias(alias_name)
        info(info_name, "value")
        ...
    }
    alias(record_name, alias_name)
The following are keywords, i.e. they may not be used as values unless they are enclosed in quotes:
path addpath include menu choice recordtype field device driver registrar function variable breaktable record grecord info alias
In the summary section, some values are shown as quoted strings and some unquoted. The actual rule is that any string consisting of only the following characters does not have to be quoted unless it contains one of the above keywords:
a-z A-Z 0-9 _ + - : . [ ] < > ;
These are all legal characters for process variable names, although ``.'' is not allowed in a record name since it separates the record name from the field name in a PV name. Thus in many cases quotes are not needed around record or field names in database files. Any string containing a macro does need to be quoted, however.
A quoted string can contain any ASCII character except the quote character ``"''. The quote character itself can be given by using ``\'' as an escape. For example, "\"" is a quoted string containing the single character ``"''.
Macro substitutions are permitted inside quoted strings. Macro instances take the form:
$(name)
or
${name}
There is no distinction between the use of parentheses or braces for delimiters, although the two must match for a given macro instance. The macro name can be made up from other macros, for example:
$(name_$(sel))
A macro instance can also provide a default value that is used when no macro with the given name is defined. The default value can be defined in terms of other macros if desired, but cannot contain any unescaped comma characters. The syntax for specifying a default value is as follows:
$(name=default)
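As a hypothetical illustration, a record instance file can use defaults so that it loads sensibly with or without macros defined (the record and macro names below are invented):

```
record(ai, "$(P=DEMO:)temperature") {
    field(DESC, "$(DESC=Ambient temperature)")
}
```

If this file is loaded without defining P or DESC, the record is named DEMO:temperature and the description defaults to ``Ambient temperature''; any values supplied for those macros override the defaults.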
Finally, macro instances can also contain definitions of other macros, which can (temporarily) override any existing values for those macros but are in scope only for the duration of the expansion of this macro instance. These definitions consist of name=value sequences separated by commas, for example:
$(abcd=$(a)$(b)$(c)$(d),a=A,b=B,c=C,d=D)
The database routines translate standard C escape sequences inside database field value strings only. The standard C escape sequences supported are:
\a \b \f \n \r \t \v \\ \? \' \" \ooo \xhh
\ooo represents an octal number with 1, 2, or 3 digits; \xhh represents a hexadecimal number with 1 or 2 digits.
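Because these are the standard C escape sequences, their effect can be checked with a small C snippet (an illustration only; the database routines apply the same translations to quoted field values):

```c
#include <assert.h>
#include <string.h>

/* "\064" is the octal escape for 064 = 52 decimal, the character '4';
 * "\x7e" is the hex escape for 0x7e, the character '~'. */
static const char translated[] = "\064\x7e\t\n";

/* Returns 0 when every escape translated as expected. */
int checkEscapes(void)
{
    if (translated[0] != '4') return 1;
    if (translated[1] != '~') return 2;
    if (translated[2] != '\t') return 3;
    if (translated[3] != '\n') return 4;
    if (strlen(translated) != 4) return 5;
    return 0;
}
```

A field value such as field(DESC,"A\tB") would therefore contain a real tab character after the database is loaded.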
The comment symbol is ``#''. Whenever the comment symbol appears, it and all characters through the end of the line are ignored.
No item can be referenced until it is defined. For example, a recordtype menu field cannot reference a menu unless that menu has already been defined. Similarly, a record instance cannot appear until the associated record type has been defined.
If a menu, recordtype, device, driver, or breakpoint table is defined more than once, then only the first instance is used. Record instance definitions however are (normally) cumulative, so multiple instances of the same record may be loaded and each time a field value is encountered it replaces the previous value.
Record instance files are given the extension ``.db'', or ``.vdb'' if the file also contains visual layout information. Database definition files are given the extension ``.dbd''.
path "dir:dir...:dir" addpath "dir:dir...:dir"
The path string follows the standard convention for the operating system, i.e. directory names are separated by a colon ``:'' on Unix and a semicolon ``;'' on Windows.
The path command specifies the current search path for use when loading database and database definition files. The addpath command appends directory names to the current path. The path is used to locate the initial database file and included files. An empty dir at the beginning, middle, or end of a non-empty path string means the current directory.
For example:
    nnn::mmm   # Current directory is between nnn and mmm
    :nnn       # Current directory is first
    nnn:       # Current directory is last
Utilities which load database files (dbExpand, dbLoadDatabase, etc.) allow the user to specify an initial path. The path and addpath commands can be used to change or extend the initial path.
The initial path is determined as follows:

1. If an initial path is specified to the utility, it is used. Else:
2. If the environment variable EPICS_DB_INCLUDE_PATH is defined, it is used. Else:
3. The path is ``.'', i.e. the current directory.

The path is used unless the filename contains a ``/'' or ``\''. The first directory containing the specified filename is used.
include "filename"
An include statement can appear at any place shown in the summary. It uses the path as specified above.
menu(name) { choice(choice_name, "choice_string") ... }
Each choice_name is the name placed in the enum generated by dbToMenuH or dbToRecordtypeH, and must therefore be a legal C/C++ identifier.
    menu(menuYesNo) {
        choice(menuYesNoNO, "NO")
        choice(menuYesNoYES, "YES")
    }
    recordtype(record_type) {
        field(field_name, field_type) {
            asl(as_level)
            initial("init_value")
            promptgroup(gui_group)
            prompt("prompt_value")
            special(special_value)
            pp(pp_value)
            interest(interest_level)
            base(base_type)
            size(size_value)
            extra("extra_info")
            menu("name")
        }
        %C_declaration
        ...
    }
prompt - a prompt string for database configuration tools; optional if promptgroup is not defined.
size - must be specified for DBF_STRING fields.
extra - must be specified for DBF_NOACCESS fields.
menu - must be specified for DBF_MENU fields. It is the name of the associated menu.
field_type must be one of the following values:

    DBF_STRING
    DBF_CHAR, DBF_UCHAR
    DBF_SHORT, DBF_USHORT
    DBF_LONG, DBF_ULONG
    DBF_FLOAT, DBF_DOUBLE
    DBF_ENUM, DBF_MENU, DBF_DEVICE
    DBF_INLINK, DBF_OUTLINK, DBF_FWDLINK
    DBF_NOACCESS
asl - the access security level: ASL0 or ASL1 (the default value). Fields which operators normally change are assigned ASL0; other fields are assigned ASL1. For example, the VAL field of an analog output record is assigned ASL0 and all other fields ASL1, because only the VAL field should be modified during normal operations.
promptgroup - the group to which the field belongs. This information is for use by Database Configuration Tools, and is defined only for fields that can be given values by such tools. The file guigroup.h contains all possible definitions:

    GUI_COMMON    GUI_ALARMS    GUI_BITS1     GUI_BITS2
    GUI_CALC      GUI_CLOCK     GUI_COMPRESS  GUI_CONVERT
    GUI_DISPLAY   GUI_HIST      GUI_INPUTS    GUI_LINKS
    GUI_MBB       GUI_MOTOR     GUI_OUTPUT    GUI_PID
    GUI_PULSE     GUI_SELECT    GUI_SEQ1      GUI_SEQ2
    GUI_SEQ3      GUI_SUB       GUI_TIMER     GUI_WAVE
    GUI_SCAN

This allows database configuration tools to group fields together by functionality, not just order them by name. The feature has seldom been used, so many record types do not have appropriate values assigned to some fields.
SPC_MOD - Notify record support when modified. The record support special routine will be called whenever the field is modified by the database access routines.

SPC_NOMOD - No external modifications allowed. This value disables external writes to the field, so it can only be set by the record or device support module.

SPC_DBADDR - Use this if the record support cvt_dbaddr routine should be called by dbNameToAddr, i.e. when code outside record/device support is connecting to the field.

The following values are for database common fields; they must not be used for record-specific fields:

SPC_SCAN - Scan related field.
SPC_ALARMACK - Alarm acknowledgment field.
SPC_AS - Access security field.

The following values are deprecated; use SPC_MOD instead:

SPC_RESET - A reset field is being modified.
SPC_LINCONV - A linear conversion field is being modified.
SPC_CALC - A calc field is being modified.
pp - should a passive record be processed when Channel Access writes to this field? The value is NO (the default) or YES.

interest - the interest level of the field, used by the dbpr command.

base - for integer fields, the default display base: DECIMAL (the default) or HEX.

size - must be specified for a DBF_STRING field.

extra - for DBF_NOACCESS fields, this is the C language definition for the field. The definition must end with the field name in lower case.

A % inside the record body introduces a line of code that is to be included in the generated C header file.
The following is the definition of the event record type:
    recordtype(event) {
        include "dbCommon.dbd"
        field(VAL,DBF_USHORT) {
            prompt("Event Number To Post")
            promptgroup(GUI_INPUTS)
            asl(ASL0)
        }
        field(INP,DBF_INLINK) {
            prompt("Input Specification")
            promptgroup(GUI_INPUTS)
            interest(1)
        }
        field(SIOL,DBF_INLINK) {
            prompt("Sim Input Specifctn")
            promptgroup(GUI_INPUTS)
            interest(1)
        }
        field(SVAL,DBF_USHORT) {
            prompt("Simulation Value")
        }
        field(SIML,DBF_INLINK) {
            prompt("Sim Mode Location")
            promptgroup(GUI_INPUTS)
            interest(1)
        }
        field(SIMM,DBF_MENU) {
            prompt("Simulation Mode")
            interest(1)
            menu(menuYesNo)
        }
        field(SIMS,DBF_MENU) {
            prompt("Sim mode Alarm Svrty")
            promptgroup(GUI_INPUTS)
            interest(2)
            menu(menuAlarmSevr)
        }
    }
device(record_type, link_type, dset_name, "choice_string")
The combination of record_type and choice_string must be unique. If the same combination appears more than once, only the first definition is used.
link_type must be one of the following:

    CONSTANT
    PV_LINK
    VME_IO
    CAMAC_IO
    AB_IO
    GPIB_IO
    BITBUS_IO
    INST_IO
    BBGPIB_IO
    RF_IO
    VXI_IO
choice_string is the DTYP choice string for this device support. A choice_string value may be reused for different record types, but must be unique for each specific record type.
    device(ai,CONSTANT,devAiSoft,"Soft Channel")
    device(ai,VME_IO,devAiXy566Se,"XYCOM-566 SE Scanned")
driver(drvet_name)
    driver(drvVxi)
    driver(drvXy210)
registrar(function_name)
The function_name is the name of an external C function that takes no arguments, returns void, and has been marked in its source file with an epicsExportRegistrar declaration, e.g.:

    static void myRegistrar(void);
    epicsExportRegistrar(myRegistrar);
This can be used to register functions for use by subroutine records or that can be invoked from iocsh. The example application described in Section 2.2, ``Example IOC Application'' gives an example of how to register functions for subroutine records.
registrar(myRegistrar)
variable(variable_name[, type])
The variable_name is the name of an external C variable that has been marked in its source file with an epicsExportAddress declaration. If no type is given, int is assumed. Currently only int and double variables are supported.
This registers a diagnostic/configuration variable for device or driver support or a subroutine record subroutine.
This variable can be read and set with the iocsh var command (see Section 18.2.5).
The example application described in Section 2.2 shows how to register a debug variable for use in a subroutine record.
In an application C source file:
    #include <epicsExport.h>
    static double myParameter;
    epicsExportAddress(double, myParameter);
In an application database definition file:
variable(myParameter, double)
function(function_name)
The function_name is the name of an external C function that has been marked in its source file with an epicsRegisterFunction declaration.
This registers a function so that it can be found in the function registry for use by record types such as sub or aSub which refer to the function by name. The example application described in Section 2.2 shows how to register functions for a subroutine record.
In an application C source file:
    #include <epicsExport.h>
    #include <registryFunction.h>

    static long myFunction(void *argp) {
        /* my code ... */
    }
    epicsRegisterFunction(myFunction);
In an application database definition file:
function(myFunction)
breaktable(name) { raw_value eng_value ... }
    breaktable(typeJdegC) {
        0.000000    0.000000
        365.023224  67.000000
        1000.046448 178.000000
        3007.255859 524.000000
        3543.383789 613.000000
        4042.988281 692.000000
        4101.488281 701.000000
    }
    record(record_type, record_name) {
        alias(alias_name)
        field(field_name, "field_value")
        info(info_name, "info_value")
        ...
    }
    alias(record_name, alias_name)
The record_name must be composed only of the following characters:

    a-z A-Z 0-9 _ - + : [ ] < > ;

NOTE: If macro substitutions are used, the name must be quoted.
Duplicate definitions are normally allowed for a record as long as the record type is the same. The last value given for each field is the value used.
The variable dbRecordsOnceOnly can be set to any non-zero value using the iocsh var command to make loading duplicate record definitions into the IOC illegal.
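As a hypothetical illustration of cumulative loading, suppose these two files are loaded in order (record and file names invented):

```
# first.db
record(ai, "demo:temp") {
    field(SCAN, "1 second")
    field(PREC, "2")
}

# second.db
record(ai, "demo:temp") {
    field(PREC, "3")
}
```

After both loads, demo:temp has SCAN ``1 second'' and PREC ``3'': the second definition only replaces the fields it mentions. Issuing `var dbRecordsOnceOnly 1` from iocsh before loading would instead make the second definition an error.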
A field value given as a quoted string may contain standard C escape sequences such as \", \t, \n, \064 and \x7e, and these will be translated appropriately when loading the database.
Permitted values are as follows:
DBF_STRING - any ASCII string that fits in the field.
DBF_CHAR, DBF_UCHAR, DBF_SHORT, DBF_USHORT, DBF_LONG, DBF_ULONG - a string representing a valid integer.
DBF_FLOAT, DBF_DOUBLE - a string representing a valid floating point number.
DBF_MENU - one of the choice strings of the associated menu.
DBF_DEVICE - one of the DTYP device choice strings defined for the record type.
DBF_INLINK, DBF_OUTLINK, DBF_FWDLINK - a link specification, as described below.

If the field name is INP or OUT then this field is associated with DTYP, and the permitted values are determined by the link type of the device support selected by the current DTYP choice string. Other DBF_INLINK and DBF_OUTLINK fields must be either CONSTANT or PV_LINKs. A field whose device support uses link type CONSTANT can be given either a constant or a PV_LINK.
The allowed values for the field depend on the device support's link type as follows:
CONSTANT - a constant valid for the field type.

PV_LINK - takes the form:

    record.field process maximize

where record is the name of a record that exists in this or another IOC. The .field, process, and maximize parts are all optional. The default value for .field is .VAL. process can have one of the following values:
NPP - No Process Passive (the default)
PP - Process Passive
CA - Force link to be a channel access link
CP - CA and process on monitor
CPP - CA and process on monitor if record is passive
NOTES: CP and CPP are valid only for DBF_INLINK fields. DBF_FWDLINK fields can use PP or CA; if a DBF_FWDLINK is a channel access link it must reference the target record's PROC field.
maximize can have one of the following values:

NMS - No Maximize Severity (the default)
MS - Maximize Severity
MSS - Maximize Severity and Status
MSI - Maximize Severity if Invalid
VME_IO

    #Ccard Ssignal @parm

card - the card number of the associated hardware module
signal - signal on card
parm - an arbitrary character string of up to 31 characters; this field is optional and is device specific
CAMAC_IO

    #Bbranch Ccrate Nstation Asubaddress Ffunction @parm

branch, crate, station, subaddress, and function should be obvious to CAMAC users. subaddress and function are optional (0 if not given). parm is also optional and is device specific (25 characters max).
AB_IO

    #Llink Aadapter Ccard Ssignal @parm

link - scanner, i.e. the VME scanner number
adapter - adapter; Allen Bradley also calls this the rack
card - card within the Allen Bradley chassis
signal - signal on card
parm - optional device-specific character string (27 char max)
GPIB_IO

    #Llink Aaddr @parm

link - GPIB link, i.e. the interface
addr - GPIB address
parm - device-specific character string (31 char max)
BITBUS_IO

    #Llink Nnode Pport Ssignal @parm

link - link, i.e. the VME bitbus interface
node - bitbus node
port - port on the node
signal - signal on port
parm - device-specific character string (31 char max)
INST_IO

    @parm

parm - device-dependent character string
BBGPIB_IO

    #Llink Bbbaddr Ggpibaddr @parm

link - link, i.e. the VME bitbus interface
bbaddr - bitbus address
gpibaddr - GPIB address
parm - optional device-specific character string (31 char max)
RF_IO

    #Rcryo Mmicro Ddataset Eelement

VXI_IO

    #Vframe Cslot Ssignal @parm    (dynamic addressing)
    #Vla Ssignal @parm             (static addressing)

frame - VXI frame number
slot - slot within VXI frame
la - logical address
signal - signal number
parm - device-specific character string (25 char max)
    record(ai,STS_AbAiMaS0) {
        field(SCAN,".1 second")
        field(DTYP,"AB-1771IFE-4to20MA")
        field(INP,"#L0 A2 C0 S0 F0 @")
        field(PREC,"4")
        field(LINR,"LINEAR")
        field(EGUF,"20")
        field(EGUL,"4")
        field(EGU,"MilliAmps")
        field(HOPR,"20")
        field(LOPR,"4")
    }
    record(ao,STS_AbAoMaC1S0) {
        field(DTYP,"AB-1771OFE")
        field(OUT,"#L0 A2 C1 S0 F0 @")
        field(LINR,"LINEAR")
        field(EGUF,"20")
        field(EGUL,"4")
        field(EGU,"MilliAmp")
        field(DRVH,"20")
        field(DRVL,"4")
        field(HOPR,"20")
        field(LOPR,"4")
        info(autosaveFields,"VAL")
    }
    record(bi,STS_AbDiA0C0S0) {
        field(SCAN,"I/O Intr")
        field(DTYP,"AB-Binary Input")
        field(INP,"#L0 A0 C0 S0 F0 @")
        field(ZNAM,"Off")
        field(ONAM,"On")
    }
Information items provide a way to attach named string values to individual record instances that are loaded at the same time as the record definition.
They can be attached to any record without having to modify the record type, and can be retrieved by programs running on the IOC (they are not visible via Channel Access at all).
Each item attached to a single record must have a unique name by which it is addressed, and database access provides routines to allow a record's info items to be scanned, searched for, retrieved and set.
At runtime a void* pointer can also be associated with each item, although only the string value can be initialized from the record definition when the database is loaded.
Each record type can have any number of record attributes.
Each attribute is a pseudo field that can be accessed via database and channel access.
Each attribute has a name that acts like a field name but returns the same value for all instances of the record type.
Two attributes are generated automatically for each record type: RTYP and VERS. The value of RTYP is the record type name. The default value of VERS is ``none specified'', which can be changed by record support.
Record support can call the following routine to create new attributes or change existing attributes:
    long dbPutAttribute(char *recordTypename, char *name, char *value)
The arguments are:

recordTypename - the name of the record type
name - the attribute name, i.e. the pseudo field name
value - the value assigned to the attribute
The menu menuConvert is used for the LINR field of the ai and ao records. These records allow raw data to be converted to/from engineering units via one of the menuConvert choices. Other record types can also use this feature.
The first choice specifies no conversion; the second and third are both linear conversions, the difference being that for Slope conversion the user specifies the conversion slope and offset values directly, whereas for Linear conversions these are calculated by the device support from the requested Engineering Units range and the device support's knowledge of the hardware conversion range.
The remaining choices are assumed to be the names of breakpoint tables.
If a breakpoint table is chosen, the record support module calls cvtRawToEngBpt or cvtEngToRawBpt. You can look at the ai and ao record support modules for details.
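The conversion these routines perform is piecewise linear interpolation between breakpoints. The following self-contained C sketch shows the idea using the typeJdegC data above (a simplification: the real cvtRawToEngBpt caches the last interval used and reports out-of-range errors):

```c
#include <stddef.h>

/* One breakpoint: a raw value and its engineering-units equivalent. */
typedef struct { double raw; double eng; } brkpnt;

/* Data from the typeJdegC breakpoint table shown earlier. */
static const brkpnt typeJdegC[] = {
    {0.000000,    0.0},   {365.023224,  67.0},  {1000.046448, 178.0},
    {3007.255859, 524.0}, {3543.383789, 613.0},
    {4042.988281, 692.0}, {4101.488281, 701.0},
};

/* Convert a raw value to engineering units by interpolating linearly
 * within the breakpoint interval containing it (extrapolating at the
 * table ends). */
double rawToEng(const brkpnt *tbl, size_t n, double raw)
{
    size_t i = 1;
    while (i < n - 1 && raw >= tbl[i].raw)
        i++;
    {
        double slope = (tbl[i].eng - tbl[i-1].eng)
                     / (tbl[i].raw - tbl[i-1].raw);
        return tbl[i-1].eng + (raw - tbl[i-1].raw) * slope;
    }
}
```

For example, a raw reading that lands exactly on a breakpoint yields that breakpoint's engineering value, and readings between breakpoints are interpolated proportionally.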
If a user wants to add additional breakpoint tables, the following should be done:

1. Copy the menuConvert.dbd file from EPICS base/src/bpt.
2. Add definitions for the new breakpoint tables to the end of the copied file.
3. Make sure the new menuConvert.dbd is loaded into the IOC instead of the EPICS version.
It is only necessary to load a breakpoint file if a record instance actually chooses it.
It should also be mentioned that the Allen Bradley IXE device support misuses the LINR field. If you use this module, it is very important that you do not change any of the EPICS-supplied definitions in menuConvert.dbd; just add your definitions at the end.
If a breakpoint table is chosen, then the corresponding breakpoint file must be loaded into the IOC before iocInit is called.
Normally it is desirable to create the breakpoint tables directly. Sometimes, however, it is desirable to create a breakpoint table from a table of raw values representing equally spaced engineering units. A good example is the thermocouple tables in the OMEGA Engineering, Inc. Temperature Measurement Handbook. A tool makeBpt is provided to convert such data to a breakpoint table.
The format for generating a breakpoint table from a data table of raw values corresponding to equally spaced engineering values is:

    !comment line
    <header line>
    <data table>

The header line describes the table; an example definition is:

    "TypeKdegF" 32 0 1832 4095 1.0 -454 2500 1
    <data table>
The breakpoint table can be generated by executing:

    makeBpt bptXXX.data

The input file must have the extension ``.data''. The output filename is the same as the input filename with the extension ``.dbd''.
Another way to create the breakpoint table is to include the following definition in a Makefile:
BPTS += bptXXX.dbd
NOTE: This requires the naming convention that all data tables are named bpt<name>.data and all breakpoint tables bpt<name>.dbd.
Given a file containing menus, dbToMenuH generates an include file that can be used by any code which uses the associated menus. Given a file containing any combination of menu definitions and record type definitions, dbToRecordtypeH generates an include file that can be used by any code which uses the menus and record type.
EPICS base uses the following conventions for managing menu and record type definitions. Users generating local record types are encouraged to do likewise.

Each menu that is either used by database common (for example menuScan) or is of global use (for example menuYesNo) is defined in a separate file. The name of the file is the same as the menu name with an extension of ``.dbd'', and the name of the generated include file is the menu name with an extension of ``.h''. Thus menuScan is defined in a file menuScan.dbd and the generated include file is named menuScan.h.
Each record type is defined in a separate file whose name is the record type name followed by Record.dbd. The name of the generated include file is the same name with an extension of ``.h''. Thus aoRecord is defined in a file aoRecord.dbd and the generated include file is named aoRecord.h.
Since aoRecord has a private menu called aoOIF, the dbd file and the generated include file have definitions for this menu. Thus for each record type there are two source files (xxxRecord.dbd and xxxRecord.c) and one generated file (xxxRecord.h).
Before continuing, it should be mentioned that developers don't normally execute dbToMenuH or dbToRecordtypeH themselves. If the proper naming conventions are used, it is only necessary to add definitions to the appropriate Makefile.
Consult the chapter on the EPICS Build Facility for details.
This tool is executed as follows:
dbToMenuH -Idir -Smacsub menuXXX.dbd
It generates a file which has the same name as the input file but with an extension of ``.h''. Multiple -I options can be specified for the include path, and multiple -S options for macro substitution.
For example menuPriority.dbd, which contains the definitions for processing priority, contains:
    menu(menuPriority) {
        choice(menuPriorityLOW,"LOW")
        choice(menuPriorityMEDIUM,"MEDIUM")
        choice(menuPriorityHIGH,"HIGH")
    }
The include file menuPriority.h, generated by dbToMenuH, contains:
    #ifndef INCmenuPriorityH
    #define INCmenuPriorityH
    typedef enum {
        menuPriorityLOW,
        menuPriorityMEDIUM,
        menuPriorityHIGH,
    } menuPriority;
    #endif /*INCmenuPriorityH*/
Any code that needs to use the priority menu values should use these definitions.
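For instance, record or device support might branch on the menu value. The enum from menuPriority.h is reproduced below so the sketch is self-contained; the helper function is invented for illustration:

```c
/* Reproduced from menuPriority.h, as generated by dbToMenuH. */
typedef enum {
    menuPriorityLOW,
    menuPriorityMEDIUM,
    menuPriorityHIGH,
} menuPriority;

/* Hypothetical helper: map the menu choice to a callback rank. */
int priorityToCallbackRank(menuPriority p)
{
    switch (p) {
    case menuPriorityLOW:    return 0;
    case menuPriorityMEDIUM: return 1;
    case menuPriorityHIGH:   return 2;
    default:                 return -1;
    }
}
```

Code should compare field values against these enum names rather than hard-coded integers.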
This tool is executed as follows:
dbToRecordtypeH -Idir -Smacsub xxxRecord.dbd
It generates a file which has the same name as the input file but with an extension of ``.h''. Multiple -I options can be specified for the include path, and multiple -S options for macro substitution.
For example aoRecord.dbd, which contains the definitions for the analog output record, contains:
    menu(aoOIF) {
        choice(aoOIF_Full,"Full")
        choice(aoOIF_Incremental,"Incremental")
    }
    recordtype(ao) {
        include "dbCommon.dbd"
        field(VAL,DBF_DOUBLE) {
            prompt("Desired Output")
            asl(ASL0)
            pp(TRUE)
        }
        field(OVAL,DBF_DOUBLE) {
            prompt("Output Value")
        }
        ... (many more field definitions)
    }
The include file aoRecord.h, generated by dbToRecordtypeH, contains:
    #include "ellLib.h"
    #include "epicsMutex.h"
    #include "link.h"
    #include "epicsTime.h"
    #include "epicsTypes.h"

    #ifndef INCaoOIFH
    #define INCaoOIFH
    typedef enum {
        aoOIF_Full,
        aoOIF_Incremental,
    } aoOIF;
    #endif /*INCaoOIFH*/

    #ifndef INCaoH
    #define INCaoH
    typedef struct aoRecord {
        char name[29]; /* Record Name */
        ... remaining fields in database common
        double val;    /* Desired Output */
        double oval;   /* Output Value */
        ... remaining record specific fields
    } aoRecord;

    #define aoRecordNAME 0
    ... defines for remaining fields in database common
    #define aoRecordVAL 42
    #define aoRecordOVAL 43
    ... defines for remaining record specific fields

    #ifdef GEN_SIZE_OFFSET
    int aoRecordSizeOffset(dbRecordType *pdbRecordType)
    {
        aoRecord *prec = 0;
        pdbRecordType->papFldDes[0]->size = sizeof(prec->name);
        pdbRecordType->papFldDes[0]->offset =
            (short)((char *)&prec->name - (char *)prec);
        ... code to compute size & offset for other fields in dbCommon
        pdbRecordType->papFldDes[42]->size = sizeof(prec->val);
        pdbRecordType->papFldDes[42]->offset =
            (short)((char *)&prec->val - (char *)prec);
        pdbRecordType->papFldDes[43]->size = sizeof(prec->oval);
        pdbRecordType->papFldDes[43]->offset =
            (short)((char *)&prec->oval - (char *)prec);
        ... code to compute size & offset for remaining fields
        pdbRecordType->rec_size = sizeof(*prec);
        return 0;
    }
    #endif /*GEN_SIZE_OFFSET*/
The analog output record support module and all associated device support modules should use this include file; no other code should use it. Let's discuss the various parts of the file:
The enum generated from the menu definition should be used to reference the value of the field associated with the menu. The typedef and struct defining the record are used by record support and device support to access fields in an analog output record. A #define is present for each field within the record; this is useful for record support routines that are passed a pointer to a DBADDR structure. They can have code like the following:
    switch (dbGetFieldIndex(pdbAddr)) {
    case aoRecordVAL:
        ...
        break;
    case aoRecordXXX:
        ...
        break;
    default:
        ...
    }
The C source routine aoRecordSizeOffset is automatically called when a record type file is loaded into an IOC. Thus user code does not have to be aware of this routine, except for the following convention: the associated record support module MUST include the statements:
    #define GEN_SIZE_OFFSET
    #include "xxxRecord.h"
    #undef GEN_SIZE_OFFSET
This convention ensures that the routine is defined exactly once.
dbExpand -Idir -Smacsub -ooutfile file1 file2 ...
Multiple -I options can be specified for the include path, and multiple -S options for macro substitution. If no output filename is specified with -ooutfile, then the output goes to stdout. Note that the environment variable EPICS_DB_INCLUDE_PATH can also be used in place of the -I options.
NOTE: This is supported only on the host.
This command reads all the input files and then writes a file containing the definitions for all information described by the input files. The output content differs from the input in that comment lines do not appear and all include files are expanded.
This routine is extremely useful if an IOC is not using NFS for its dbLoadDatabase commands. It takes more than 2 minutes to load the base/rec/base.dbd file into an IOC if NFS is not used; if dbExpand creates a local base.dbd file, it takes about 7 seconds to load (on a 25 MHz 68040 IOC).
dbLoadDatabase(char *dbdfile, char *path, char *substitutions)
NOTES: The dbdfile may contain environment variable macros of the form ${MOTOR} which will be expanded before the file is opened.
This command loads a database file containing any of the definitions given in the summary at the beginning of this chapter.
Note that dbLoadDatabase should only be used to load a Database Definition (.dbd) file, although it is currently possible to use it for loading Record Instance (.db) files as well.
As each line of dbdfile is read, the substitutions specified in substitutions are performed. Substitutions are specified as follows:

    "var1=sub1,var2=sub2,..."

Variables are specified in the file as $(var). If the substitution string

    "a=1,b=2,c=\"this is a test\""

were used, any variables $(a), $(b), $(c) in the database file would have the appropriate values substituted during parsing.
dbLoadRecords(char* dbfile, char* substitutions)
NOTES: The dbfile should contain only record instances, record aliases and/or breakpoint tables. The dbfile string may itself contain environment variable macros of the form ${MOTOR} which will be expanded before the file is opened.
For example, let the file test.db contain:
    record(ai, "$(pre)testrec1")
    record(ai, "$(pre)testrec2")
    record(stringout, "$(pre)testrec3") {
        field(VAL, "$(STR)")
        field(SCAN, "$(SCAN)")
    }
Then issuing the command:
dbLoadRecords("test.db", "pre=TEST,STR=test,SCAN=Passive")
gives the same results as loading:
    record(ai, "TESTtestrec1")
    record(ai, "TESTtestrec2")
    record(stringout, "TESTtestrec3") {
        field(VAL, "test")
        field(SCAN, "Passive")
    }
dbLoadTemplate(char* template_def)
NOTES: dbLoadTemplate reads a template definition file. This file contains rules about loading database instance files, which contain $(xxx) macros, and performing substitutions. template_def contains the rules for performing substitutions on the instance files. For convenience two formats are provided; the format is either:
    file name.template {
        { var1=sub1_for_set1, var2=sub2_for_set1, var3=sub3_for_set1, ... }
        { var1=sub1_for_set2, var2=sub2_for_set2, var3=sub3_for_set2, ... }
        { var1=sub1_for_set3, var2=sub2_for_set3, var3=sub3_for_set3, ... }
    }
or:
    file name.template {
        pattern { var1, var2, var3, ... }
        { sub1_for_set1, sub2_for_set1, sub3_for_set1, ... }
        { sub1_for_set2, sub2_for_set2, sub3_for_set2, ... }
        { sub1_for_set3, sub2_for_set3, sub3_for_set3, ... }
    }
The first line (file name.template) specifies the record instance input file. The file name may appear inside double quotation marks; these are required if the name contains any characters that are not in the following set, or if it contains environment variable macros of the form ${ENV_VAR_NAME} which are to be expanded before the file is opened:

    a-z A-Z 0-9 _ + - . / \ : ; [ ] < >
Each set of definitions enclosed in {} is a variable substitution for the input file. The input file has each set applied to it to produce one composite file with all the completed substitutions in it. The first format should be obvious. In the second format, the variables are listed in the pattern{} line, which must precede the braced substitution lines; each braced substitution line contains a set of values which match up, in order, with the variables in the pattern{} line.
Two simple template file examples are shown below. Both specify the same substitutions to perform: this=sub1 and that=sub2 for the first set, and this=sub3 and that=sub4 for the second set.
    file test.template {
        { this=sub1, that=sub2 }
        { this=sub3, that=sub4 }
    }

    file test.template {
        pattern { this, that }
        { sub1, sub2 }
        { sub3, sub4 }
    }
Assume that the file test.template contains:
    record(ai,"$(this)record") {
        field(DESC,"this = $(this)")
    }
    record(ai,"$(that)record") {
        field(DESC,"this = $(that)")
    }
Using dbLoadTemplate with either input is the same as defining the records:
    record(ai,"sub1record") {
        field(DESC,"this = sub1")
    }
    record(ai,"sub2record") {
        field(DESC,"this = sub2")
    }
    record(ai,"sub3record") {
        field(DESC,"this = sub3")
    }
    record(ai,"sub4record") {
        field(DESC,"this = sub4")
    }
dbReadTest -Idir -Smacsub file.dbd ... file.db ...
This utility can be used to check for correct syntax in database definition and database instance files; it simply reads all the specified files. Multiple -I and -S options can be specified, and an arbitrary number of database definition and database instance files can be given.