next up previous contents index
Next: 7. IOC Initialization Up: AppDevGuide Previous: 5. Database Locking, Scanning,   Contents   Index


6. Database Definition

6.1 Overview

This chapter describes database definitions. The following definitions are described:

    menu
    recordtype
    device
    driver
    registrar
    variable
    function
    breaktable
    record instance

Record Instances are fundamentally different from the other definitions. A file containing record instances should never contain any of the other definitions, and vice versa. Thus the following convention is followed:

Database Definition File
A file that contains any type of definition except record instances.

Record Instance File
A file that contains only record instance definitions.

This chapter also describes utility programs which operate on these definitions.

Any combination of definitions can appear in a single file or in a set of files related to each other via include files.

6.2 Summary of Database Syntax

The following summarizes the Database Definition syntax:

path "path"
addpath "path"

include "filename"

menu(name) {
    include "filename"
    choice(choice_name, "choice_value")
}

recordtype(record_type) {
    include "filename"
    field(field_name, field_type) {
        ... field definition rules
    }
}

device(record_type, link_type, dset_name, "choice_string")

driver(drvet_name)

registrar(function_name)

variable(variable_name[, type])

function(function_name)

breaktable(name) {
    raw_value eng_value
    ...
}

The following defines a Record Instance:

record(record_type, record_name) {
    include "filename"
    field(field_name, "value")
    info(info_name, "value")
    alias(alias_name)
}

alias(record_name, alias_name)

6.3 General Rules for Database Definition

6.3.1 Keywords

The following are keywords, i.e. they may not be used as values unless they are enclosed in quotes:

    path addpath include menu choice recordtype field
    device driver registrar function variable breaktable
    record grecord info alias


6.3.2 Unquoted Strings

In the summary section, some values are shown as quoted strings and some unquoted. The actual rule is that any string consisting of only the following characters does not have to be quoted unless it contains one of the above keywords:

a-z A-Z 0-9 _ - : . [ ] < > ;

These are also the legal characters for process variable names. Thus in many cases quotes are not needed.
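For instance, in the following illustrative fragment (the record and field values are invented for this example), the unquoted values contain only the characters listed above, while a value containing a space must be quoted:

```
record(ai, demo:temp) {       # demo:temp needs no quotes
    field(SCAN, "1 second")   # contains a space, so it must be quoted
    field(EGU, degC)          # no quotes needed
}
```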

6.3.3 Quoted Strings

A quoted string can contain any ASCII character except the quote character ". The quote character itself can be given by using \ as an escape. For example "\"" is a quoted string containing the single character ".

6.3.4 Macro Substitution

Macro substitutions are permitted inside quoted strings. Macro instances take the form:

    $(name)

or

    ${name}
There is no distinction between the use of parentheses or braces for delimiters, although the two must match for a given macro instance. The macro name can be made up from other macros, for example:

    $(name_$(sel))
A macro instance can also provide a default value that is used when no macro with the given name is defined. The default value can be defined in terms of other macros if desired, but cannot contain any unescaped comma characters. The syntax for specifying a default value is as follows:

    $(name=default)
Finally, macro instances can also contain definitions of other macros, which can (temporarily) override any existing values for those macros but are in scope only for the duration of the expansion of this macro instance. These definitions consist of name=value sequences separated by commas, for example:

    $(abcd=$(a)$(b)$(c)$(d),a=A,b=B,c=C,d=D)
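Combining these forms, a record instance file might parameterize names and settings with macros and defaults (all macro and record names here are illustrative):

```
record(ai, "$(P)$(R=Temp)") {
    field(SCAN, "$(SCAN=1 second)")
    field(DESC, "$(DESC=Temperature)")
}
```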
6.3.5 Escape Sequences

The database routines translate standard C escape sequences inside database field value strings only. The standard C escape sequences supported are:

\a \b \f \n \r \t \v \\ \? \' \" \ooo \xhh

\ooo represents an octal number with 1, 2, or 3 digits. \xhh represents a hexadecimal number with 1 or 2 digits.
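As an illustration, the following field values (in a hypothetical record) all load the same three-character string A~B, since hexadecimal 7e and octal 176 are both the ~ character:

```
field(DESC, "A~B")       # literal
field(DESC, "A\x7eB")    # hexadecimal escape
field(DESC, "A\176B")    # octal escape
```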


6.3.6 Comments

The comment symbol is ``#''. Whenever the comment symbol appears, it and all characters through the end of the line are ignored.

6.3.7 Define before referencing

No item can be referenced until it has been defined. For example, a recordtype menu field cannot reference a menu unless that menu has already been defined. Similarly, a record instance cannot appear until the associated record type has been defined.

6.3.8 Multiple Definitions

If a menu, recordtype, device, driver, or breakpoint table is defined more than once, then only the first instance is used. Record instance definitions however are (normally) cumulative, so multiple instances of the same record may be loaded and each time a field value is encountered it replaces the previous value.
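For example, if the following two definitions of the same (illustrative) record are loaded in order, the final record keeps SCAN from the first definition while DESC takes the value from the second:

```
record(ai, "demo:temp") {
    field(SCAN, "1 second")
    field(DESC, "first description")
}
record(ai, "demo:temp") {
    field(DESC, "replaces the first description")
}
```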

6.3.9 Filename Extensions

By convention:

    Database Definition files have the extension .dbd
    Record Instance files have the extension .db

6.4 path addpath - Path Definition

6.4.1 Format

path "dir:dir...:dir"
addpath "dir:dir...:dir"

The path string follows the standard convention for the operating system, i.e. directory names are separated by a colon ``:'' on Unix and a semicolon ``;'' on Windows.

The path command specifies the current search path for use when loading database and database definition files. The addpath command appends directory names to the current path. The path is used to locate the initial database file and included files. An empty dir at the beginning, middle, or end of a non-empty path string means the current directory. For example:

 nnn::mmm    # Current directory is between nnn and mmm
 :nnn        # Current directory is first
 nnn:        # Current directory is last

Utilities which load database files (dbExpand, dbLoadDatabase, etc.) allow the user to specify an initial path. The path and addpath commands can be used to change or extend the initial path.

The initial path is determined as follows:

If an initial path is specified, it is used. Else:

If the environment variable EPICS_DB_INCLUDE_PATH is defined, it is used. Else:

the default path is ``.'', i.e. the current directory.

The path is used unless the filename contains a / or \. The first directory containing the specified filename is used.

6.5 include - Include File

6.5.1 Format

include "filename"

An include statement can appear at any place shown in the summary. It uses the path as specified above.
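For example, a database definition file for an application commonly includes the definitions it depends on; the filenames below are from EPICS Base:

```
include "menuGlobal.dbd"
include "menuConvert.dbd"
include "aiRecord.dbd"
```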

6.6 menu - Menu Declaration

6.6.1 Format

menu(name) {
    choice(choice_name, "choice_string")

6.6.2 Definitions

name
The unique name identifying the menu. If duplicate definitions are specified, only the first is used.

choice_name
The name used in the enum generated by the header-generation tools (see Section 6.18). This must be a legal C/C++ identifier.

choice_string
The text string associated with this particular choice.

6.6.3 Example

menu(menuYesNo) {
    choice(menuYesNoNO, "NO")
    choice(menuYesNoYES, "YES")
}

6.7 recordtype - Record Type Declaration

6.7.1 Format

recordtype(record_type) {
    field(field_name, field_type) {
        ... field definition rules
    }
}

6.7.2 Field Definition Rules

asl(as_level)
Sets the Access Security Level for the field. Access Security is discussed in chapter 8.

initial("init_value")
Provides an initial (default) value for the field.

promptgroup("group_name")
The group to which the field belongs, for database configuration tools.

prompt("prompt_value")
A prompt string for database configuration tools. Optional if promptgroup is not defined.

special(special_value)
If specified, special processing is required for this field at run time.

pp(pp_value)
Whether a passive record should be processed when Channel Access writes to this field.

interest(interest_level)
Interest level for the field.

base(base_type)
For integer fields, the number base to use when converting the field value to a string.

size(size_value)
Must be specified for DBF_STRING fields.

extra("extra_info")
Must be specified for DBF_NOACCESS fields.

menu(menu_name)
Must be specified for DBF_MENU fields. It is the name of the associated menu.

prop(yesno)
Must be YES or NO (default). Indicates that the field holds Channel Access meta-data.

6.7.3 Definitions

record_type
The unique name of the record type. If duplicates are specified, only the first definition is used.

field_name
The field name, which must be a valid C identifier. When include files are generated, the field name is converted to lower case. Previous versions of EPICS required the field name to be a maximum of four characters, but this restriction no longer applies.

field_type
This must be one of the following values:

    DBF_STRING DBF_CHAR DBF_UCHAR DBF_SHORT DBF_USHORT
    DBF_LONG DBF_ULONG DBF_FLOAT DBF_DOUBLE DBF_ENUM
    DBF_MENU DBF_DEVICE DBF_INLINK DBF_OUTLINK DBF_FWDLINK
    DBF_NOACCESS

as_level
This must be one of the following values: ASL0 or ASL1 (the default).

Fields which operators normally change are assigned ASL0. Other fields are assigned ASL1. For example, the VAL field of an analog output record is assigned ASL0 and all other fields ASL1. This is because only the VAL field should be modified during normal operations.

init_value
A legal value for the field's data type.

prompt_value
A prompt string for database configuration tools.

group_name
This must be one of the GUI group choices defined by EPICS Base, e.g. GUI_COMMON, GUI_ALARMS, or GUI_DISPLAY.

special_value
This must be one of the special values defined by EPICS Base, e.g. SPC_MOD, SPC_NOMOD, or SPC_DBADDR.

pp_value
Should a passive record be processed when Channel Access writes to this field? The allowed values are NO (the default) and YES.

interest_level
An interest level for the dbpr command.

base_type
For integer type fields, the default base. The legal values are DECIMAL (the default) and HEX.

size_value
The number of characters for a DBF_STRING field.

extra_info
For DBF_NOACCESS fields, this is the C language definition for the field. The definition must end with the fieldname in lower case.

A percent sign % inside the record body introduces a line of code that is to be included in the generated C header file.
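For example, a record type that needs an extra declaration in its generated header could contain the following (an illustrative sketch, not an actual Base record type):

```
recordtype(xxx) {
    %#include "epicsTypes.h"
    field(VAL,DBF_DOUBLE) {
        prompt("Value")
    }
}
```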

6.7.4 Example

The following is the definition of the event record type:

recordtype(event) {
    include "dbCommon.dbd"
    field(VAL,DBF_USHORT) {
        prompt("Event Number To Post")
    }
    field(INP,DBF_INLINK) {
        prompt("Input Specification")
    }
    field(SIOL,DBF_INLINK) {
        prompt("Sim Input Specifctn")
    }
    field(SVAL,DBF_USHORT) {
        prompt("Simulation Value")
    }
    field(SIML,DBF_INLINK) {
        prompt("Sim Mode Location")
    }
    field(SIMM,DBF_MENU) {
        prompt("Simulation Mode")
    }
    field(SIMS,DBF_MENU) {
        prompt("Sim mode Alarm Svrty")
    }
}

6.8 device - Device Support Declaration

6.8.1 Format

    device(record_type, link_type, dset_name, "choice_string")

6.8.2 Definitions

Record type. The combination of record_type and choice_string must be unique. If the same combination appears more than once, only the first definition is used.

Link type. This must be one of the following:

    CONSTANT PV_LINK VME_IO CAMAC_IO AB_IO GPIB_IO
    BITBUS_IO INST_IO BBGPIB_IO RF_IO VXI_IO

The name of the device support entry table for this device support.

The DTYP choice string for this device support. A choice_string value may be reused for different record types, but must be unique for each specific record type.

6.8.3 Examples

    device(ai,CONSTANT,devAiSoft,"Soft Channel")
    device(ai,VME_IO,devAiXy566Se,"XYCOM-566 SE Scanned")

6.9 driver - Driver Declaration

6.9.1 Format

    driver(drvet_name)
6.9.2 Definitions

drvet_name
The name of the driver entry table. If duplicates are defined, only the first is used.

6.9.3 Examples

    driver(drvVxi)
    driver(drvXy210)
6.10 registrar - Registrar Declaration

6.10.1 Format

    registrar(function_name)
6.10.2 Definitions

The name of a C function that accepts no arguments, returns void, and has been marked in its source file with an epicsExportRegistrar declaration, e.g.

    static void myRegistrar(void);
    epicsExportRegistrar(myRegistrar);

This can be used to register functions for use by subroutine records or that can be invoked from iocsh. The example application described in Section 2.2, ``Example IOC Application'' on page [*] gives an example of how to register functions for subroutine records.

6.10.3 Example

    registrar(myRegistrar)
6.11 variable - Variable Declaration

6.11.1 Format

    variable(variable_name[, type])

6.11.2 Definitions

The name of a C variable which has been marked in its source file with an epicsExportAddress declaration.

The C variable's type. If not present, int is assumed. Currently only int and double variables are supported.

This registers a diagnostic/configuration variable for device or driver support or a subroutine record subroutine so that the variable can be read and set with the iocsh var command (see Section 18.2.5 on page [*]). The example application described in Section 2.2 on page [*] provides an example of how to register a debug variable for a subroutine record.

6.11.3 Example

In an application C source file:

#include <epicsExport.h>

static double myParameter;
epicsExportAddress(double, myParameter);

In an application database definition file:

    variable(myParameter, double)

6.12 function - Function Declaration

6.12.1 Format

    function(function_name)
6.12.2 Definitions

The name of a C function which has been exported from its source file with an epicsRegisterFunction declaration.

This registers a function so that it can be found in the function registry for use by record types such as sub or aSub which refer to the function by name. The example application described in Section 2.2 on page [*] provides an example of how to register functions for a subroutine record.

6.12.3 Example

In an application C source file:

#include <epicsExport.h>
#include <registryFunction.h>

static long myFunction(void *argp) {
    /* my code ... */
}
epicsRegisterFunction(myFunction);

In an application database definition file:

    function(myFunction)
6.13 breaktable - Breakpoint Table

6.13.1 Format

breaktable(name) {
    raw_value eng_value
    ...
}

6.13.2 Definitions

The name of the breakpoint table, which must be alphanumeric. If duplicates are specified, only the first is used.

The raw value, i.e. the actual ADC value associated with the beginning of the interval.

The engineering value associated with the beginning of the interval.

6.13.3 Example

breaktable(typeJdegC) {
    0.000000 0.000000
    365.023224 67.000000
    1000.046448 178.000000
    3007.255859 524.000000
    3543.383789 613.000000
    4042.988281 692.000000
    4101.488281 701.000000
}
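An ai record instance can then select this breakpoint table by name in its LINR field (see Section 6.17); the record name below is illustrative:

```
record(ai, "demo:tcTemp") {
    field(LINR, "typeJdegC")
}
```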

6.14 record - Record Instance

6.14.1 Format

record(record_type, record_name) {
    field(field_name, "field_value")
    info(info_name, "info_value")
    alias(alias_name)
}

alias(record_name, alias_name)

6.14.2 Definitions

The record type.

The record name. This must be composed of the following characters:

    a-z A-Z 0-9 _ - + : [ ] < > ;

NOTE: If macro substitutions are used the name must be quoted.

If duplicate definitions are given for the same record, then the last value given for each field is the value assigned to the field.

An alternate name for the record, following the same rules as the record name.

A field name.

A value for the named field, depending on the particular field type. Inside double quotes the field value string may contain escaped C89 characters such as \", \t, \n, \064 and \x7e, and these will be translated appropriately when loading the database. Permitted values are as follows:

The name of an Information Item related to this record. See section 6.15 below for more on Information Items.

Any ASCII string. IOC applications using this information item may place additional restrictions on the contents of the string.

6.14.3 Examples

record(ai,STS_AbAiMaS0) {
    field(SCAN,".1 second")
    field(INP,"#L0 A2 C0 S0 F0 @")
}
record(ao,STS_AbAoMaC1S0) {
    field(OUT,"#L0 A2 C1 S0 F0 @")
}
record(bi,STS_AbDiA0C0S0) {
    field(SCAN,"I/O Intr")
    field(DTYP,"AB-Binary Input")
    field(INP,"#L0 A0 C0 S0 F0 @")
}

6.15 Record Information Item

Information items provide a way to attach named string values to individual record instances that are loaded at the same time as the record definition. They can be attached to any record without having to modify the record type, and can be retrieved by programs running on the IOC (they are not visible via Channel Access at all). Each item attached to a single record must have a unique name by which it is addressed, and database access provides routines to allow a record's info items to be scanned, searched for, retrieved and set. At runtime a void* pointer can also be associated with each item, although only the string value can be initialized from the record definition when the database is loaded.
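For example, the following (illustrative) record carries an info item; the name autosaveFields is simply an example of a name that some IOC software might look up:

```
record(ai, "demo:pressure") {
    field(DESC, "Vessel pressure")
    info(autosaveFields, "VAL PREC")
}
```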

6.16 Record Attributes

Each record type can have any number of record attributes. Each attribute is a pseudo field that can be accessed via database and channel access. Each attribute has a name that acts like a field name but returns the same value for all instances of the record type. Two attributes are generated automatically for each record type: RTYP and VERS. The value for RTYP is the record type name. The default value for VERS is ``none specified'', which can be changed by record support. Record support can call the following routine to create new attributes or change existing attributes:

long dbPutAttribute(char *recordTypename,
    char *name, char *value)

The arguments are:

recordTypename - The name of the record type.
name - The attribute name, i.e. the pseudo field name.
value - The value assigned to the attribute.
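For example, record support might set its VERS attribute during initialization; this is a sketch, assuming the declaration of dbPutAttribute is available from the usual database access headers:

```
/* Sketch: sets the VERS attribute reported by all ao record instances */
static void setAoVersion(void)
{
    dbPutAttribute("ao", "VERS", "2.0");
}
```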

6.17 Breakpoint Tables - Discussion

The menu menuConvert is used for field LINR of the ai and ao records. These records allow raw data to be converted to/from engineering units via one of the following:

  1. No Conversion.
  2. Slope Conversion.
  3. Linear Conversion.
  4. Breakpoint table.

Other record types can also use this feature. The first choice specifies no conversion; the second and third are both linear conversions, the difference being that for Slope conversion the user specifies the conversion slope and offset values directly, whereas for Linear conversions these are calculated by the device support from the requested Engineering Units range and the device support's knowledge of the hardware conversion range. The remaining choices are assumed to be the names of breakpoint tables. If a breakpoint table is chosen, the record support module calls cvtRawToEngBpt or cvtEngToRawBpt. You can look at the ai and ao record support modules for details.

If a user wants to add additional breakpoint tables, a breakpoint file containing the new table must be created and a corresponding choice added to menuConvert.dbd.

It is only necessary to load a breakpoint file if a record instance actually chooses it. It should also be mentioned that the Allen Bradley IXE device support misuses the LINR field. If you use this module, it is very important that you do not change any of the EPICS-supplied definitions in menuConvert.dbd; just add your definitions at the end.

If a breakpoint table is chosen, then the corresponding breakpoint file must be loaded into the IOC before iocInit is called.

Normally it is desirable to create the breakpoint tables directly. However, sometimes it is desirable to create a breakpoint table from a table of raw values representing equally spaced engineering units. A good example is the thermocouple tables in the OMEGA Engineering, Inc. Temperature Measurement Handbook. A tool called makeBpt is provided to convert such data to a breakpoint table.

The format for generating a breakpoint table from a data table of raw values corresponding to equally spaced engineering values is:

!comment line
<header line>
<data table>

The header line contains the following information:

Name
    An alphanumeric ASCII string specifying the breakpoint table name
Low Value Eng
    Engineering units value for the first breakpoint table entry
Low Value Raw
    Raw value for the first breakpoint table entry
High Value Eng
    Highest engineering units value desired
High Value Raw
    Raw value corresponding to High Value Eng
Allowed Error
    Allowed error (engineering units)
First Table
    Engineering units corresponding to the first data table entry
Last Table
    Engineering units corresponding to the last data table entry
Delta Table
    Change in engineering units per data table entry

An example definition is:

"TypeKdegF" 32 0 1832 4095 1.0 -454 2500 1
<data table>

The breakpoint table can be generated by executing:

    makeBpt bptTypeKdegF.data
The input file must have the extension .data. The output filename is the same as the input filename but with the extension .dbd.

Another way to create the breakpoint table is to include the following definition in a Makefile:

BPTS += bptXXX.dbd

NOTE: This requires the naming convention that all data tables are of the form bpt<name>.data and a breakpoint table bpt<name>.dbd.

6.18 Menu and Record Type Include File Generation.

6.18.1 Introduction

Given a file containing menu definitions, a C/C++ header file can be generated for use by code which needs those menus. Given a file containing any combination of menu definitions and record type definitions, a C/C++ header file can be generated for use by any code which needs those menus and record types.

EPICS Base uses the following conventions for managing menu and recordtype definitions. Users generating local record types are encouraged to follow these.

Note that developers don't normally execute these header-generation programs manually. If the proper naming conventions are used, it is only necessary to add definitions to the appropriate Makefile. Consult the chapter on the EPICS Build Facility for details.


6.18.2 dbdToMenuH.pl

This tool is executed as follows:

    dbdToMenuH.pl [-D] [-I dir] [-o menu.h] menu.dbd [menu.h]

It reads in the input file menu.dbd and generates a C/C++ header file containing enumerated type definitions for the menus found in the input file.

Multiple -I options can be provided to specify directories that must be searched when looking for included files. If no output filename is specified with the -o menu.h option or as a final command-line parameter, then the output filename will be constructed from the input filename, replacing .dbd with .h.

The -D option causes the program to output Makefile dependency information for the output file to standard output, instead of actually performing the functions described above.

For example menuPriority.dbd, which contains the definition of the processing priority menu, contains:

menu(menuPriority) {
    choice(menuPriorityLOW, "LOW")
    choice(menuPriorityMEDIUM, "MEDIUM")
    choice(menuPriorityHIGH, "HIGH")
}
The include file menuPriority.h that is generated contains:

/* menuPriority.h generated from menuPriority.dbd */

#ifndef INC_menuPriority_H
#define INC_menuPriority_H

typedef enum {
    menuPriorityLOW                 /* LOW */,
    menuPriorityMEDIUM              /* MEDIUM */,
    menuPriorityHIGH                /* HIGH */,
} menuPriority;

#endif /* INC_menuPriority_H */

Any code that needs the priority menu values should include this file and make use of these definitions.
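For instance, code can switch on or translate the menu values. The sketch below inlines the enum from the generated header shown above so that it is self-contained; real code would simply #include "menuPriority.h":

```c
/* Inlined copy of the enum from the generated menuPriority.h */
typedef enum {
    menuPriorityLOW,                /* LOW */
    menuPriorityMEDIUM,             /* MEDIUM */
    menuPriorityHIGH                /* HIGH */
} menuPriority;

/* Illustrative helper: map a priority choice to its choice string */
static const char *priorityName(menuPriority prio)
{
    switch (prio) {
    case menuPriorityLOW:    return "LOW";
    case menuPriorityMEDIUM: return "MEDIUM";
    case menuPriorityHIGH:   return "HIGH";
    default:                 return "unknown";
    }
}
```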


6.18.3 dbdToRecordtypeH.pl

This tool is executed as follows:

    dbdToRecordtypeH.pl [-D] [-I dir] [-o xRecord.h] xRecord.dbd [xRecord.h]

It reads in the input file xRecord.dbd and generates a C/C++ header file which defines the in-memory structure of the given record type and provides other associated information for the compiler. If the input file contains any menu definitions, they will also be converted into enumerated type definitions in the output file.

Multiple -I options can be provided to specify directories that must be searched when looking for included files. If no output filename is specified with the -o xRecord.h option or as a final command-line parameter then the output filename will be constructed from the input filename, replacing .dbd with .h.

The -D option causes the program to output Makefile dependency information for the output file to standard output, instead of actually performing the functions described above.

For example aoRecord.dbd, which contains the definitions for the analog output record, contains:

menu(aoOIF) {
    choice(aoOIF_Full, "Full")
    choice(aoOIF_Incremental, "Incremental")
}
recordtype(ao) {
    include "dbCommon.dbd"
    field(VAL,DBF_DOUBLE) {
        prompt("Desired Output")
    }
    field(OVAL,DBF_DOUBLE) {
        prompt("Output Value")
    }
    ... /* many more field definitions */
}

The include file aoRecord.h that is generated contains:

/* aoRecord.h generated from aoRecord.dbd */

#ifndef INC_aoRecord_H
#define INC_aoRecord_H

#include "epicsTypes.h"
#include "link.h"
#include "epicsMutex.h"
#include "ellLib.h"
#include "epicsTime.h"

typedef enum {
    aoOIF_Full                      /* Full */,
    aoOIF_Incremental               /* Incremental */,
} aoOIF;

typedef struct aoRecord {
    char                name[61];   /* Record Name */
    ... /* define remaining fields from database common */
    epicsFloat64        val;        /* Desired Output */
    epicsFloat64        oval;       /* Output Value */
    ... /* define remaining record specific fields */
} aoRecord;

typedef enum {
    aoRecordNAME = 0,
    aoRecordDESC = 1,
    ... /* indices for remaining fields in database common */
    aoRecordVAL = 43,
    aoRecordOVAL = 44,
    ... /* indices for remaining record specific fields */
} aoFieldIndex;


#ifdef GEN_SIZE_OFFSET
#ifdef __cplusplus
extern "C" {
#endif
#include <epicsExport.h>
static int aoRecordSizeOffset(dbRecordType *prt)
{
    aoRecord *prec = 0;
    prt->papFldDes[aoRecordNAME]->size = sizeof(prec->name);
    ... /* code to compute size for remaining fields */
    prt->papFldDes[aoRecordNAME]->offset = (char *)&prec->name - (char *)prec;
    ... /* code to compute offset for remaining fields */
    prt->rec_size = sizeof(*prec);
    return 0;
}
epicsExportRegistrar(aoRecordSizeOffset);
#ifdef __cplusplus
}
#endif
#endif /* GEN_SIZE_OFFSET */

#endif /* INC_aoRecord_H */

The analog output record support module and all associated device support modules should include this file. No other code should use it.

The aoFieldIndex enum provides an index for each field in the record. Code can use these values with dbGetFieldIndex() to determine which field a database access request refers to, for example:

switch (dbGetFieldIndex(pdbAddr)) {
    case aoRecordVAL :
        ...
        break;
    case aoRecordXXX :
        ...
        break;
    default :
        ...
}

The generated routine aoRecordSizeOffset is executed when the record type gets registered with an IOC. The routine is compiled with the record type code, and is marked static so it will not be visible outside of that file. The associated record support source code MUST include the generated header file only after defining the GEN_SIZE_OFFSET macro, like this:

#define GEN_SIZE_OFFSET
#include "aoRecord.h"

This convention ensures that the routine is defined exactly once. The epicsExportRegistrar statement ensures that the record registration code can find and call the routine.

6.19 dbdExpand.pl

dbdExpand.pl [-D] [-I dir] [-S mac=sub] [-o out.dbd] in.dbd ...

This program reads and combines the database definition from all the input files, then writes a single output file containing all information from the input files. The output content differs from the input in that comment lines are removed, and all defined macros and include files are expanded. Unlike the previous dbExpand program, this program does not understand database instances and cannot be used with .db or .vdb files.

Multiple -I options can be provided to specify directories that must be searched when looking for included files. Multiple -S options are allowed for macro substitution, or multiple macros can be specified within a single option. If no output filename is specified with the -o out.dbd option then the output will go to stdout.

The -D option causes the program to output Makefile dependency information for the output file to standard output, instead of actually performing the functions described above.

6.20 dbLoadDatabase

int dbLoadDatabase(char *dbdfile, char *path, char *substitutions);


This command loads a database file containing any of the definitions given in the summary at the beginning of this chapter. Note that dbLoadDatabase should only be used to load a Database Definition (.dbd) file, although it is currently possible to use it for loading Record Instance (.db) files as well.

As each line of dbdfile is read, the substitutions specified in substitutions are performed. Substitutions are specified as follows:

    "var1=sub1,var2=sub2,..."
Variables are specified in the dbfile as $(var). If the substitution string

"a=1,b=2,c=\"this is a test\""

were used, any variables $(a), $(b), $(c) in the database file would have the appropriate values substituted during parsing.

6.21 dbLoadRecords

int dbLoadRecords(char* dbfile, char* substitutions);

This command loads a file of record instances, performing the macro substitutions specified in substitutions as the file is read. It should only be used to load Record Instance (.db) files.
6.21.1 Example

For example, let the file test.db contain:

record(ai, "$(pre)testrec1")
record(ai, "$(pre)testrec2")
record(stringout, "$(pre)testrec3") {
    field(VAL, "$(STR)")
    field(SCAN, "$(SCAN)")
}

Then issuing the command:

dbLoadRecords("test.db", "pre=TEST,STR=test,SCAN=Passive")

gives the same results as loading:

record(ai, "TESTtestrec1")
record(ai, "TESTtestrec2")
record(stringout, "TESTtestrec3") {
    field(VAL, "test")
    field(SCAN, "Passive")
}

6.22 dbLoadTemplate

int dbLoadTemplate(char *subfile, char *substitutions);


dbLoadTemplate reads a template substitution file. This file contains rules about loading database instance files and provides values for the $(xxx) macros they contain. This command performs those substitutions while loading the database instances requested.

The subfile parameter provides the name of the template substitution file to be used. The optional substitutions parameter may contain additional global macro values, which can be redefined within the substitution file.

6.22.1 Template File Syntax

The template substitution file syntax is described in the following Extended Backus-Naur Form grammar:

substitution-file ::= ( global-defs | template-subs )+

global-defs ::= 'global' '{' variable-defs? '}'

template-subs ::= template-filename '{' subs? '}'
template-filename ::= 'file' file-name
subs ::= pattern-subs | variable-subs

pattern-subs ::= 'pattern' '{' pattern-names? '}' pattern-defs?
pattern-names ::= ( variable-name ','? )+
pattern-defs ::= ( global-defs | ( '{' pattern-values? '}' ) )+
pattern-values ::= ( value ','? )+

variable-subs ::= ( global-defs | ( '{' variable-defs? '}' ) )+
variable-defs ::= ( variable-def ','? )+
variable-def ::= variable-name '=' value

variable-name ::= variable-name-start variable-name-char*
file-name ::= file-name-char+ | double-quoted-str | single-quoted-str
value ::= value-char+ | double-quoted-str | single-quoted-str

double-quoted-str ::= '"' (double-quoted-char | escaped-char)* '"'
single-quoted-str ::= "'" (single-quoted-char | escaped-char)* "'"
double-quoted-char ::= [^"\]
single-quoted-char ::= [^'\]
escaped-char ::= '\' .

value-char ::= [a-zA-Z0-9_+:;./\<>[] | '-' | ']'
variable-name-start ::= [a-zA-Z_]
variable-name-char ::= [a-zA-Z0-9_]
file-name-char ::= [a-zA-Z0-9_+:;./\] | '-'

Note that the current implementation may accept a wider range of characters for the last three definitions than those listed here, but future releases may restrict the characters to those given above.

Any record instance file names must appear inside quotation marks if the name contains any environment variable macros of the form ${ENV_VAR_NAME}, which will be expanded before the named file is opened.

6.22.2 Template File Formats

Two different template formats are supported by the syntax rules given above. The format is either:

file name.template {
    { var1=sub1_for_set1, var2=sub2_for_set1, var3=sub3_for_set1, ... }
    { var1=sub1_for_set2, var2=sub2_for_set2, var3=sub3_for_set2, ... }
    { var1=sub1_for_set3, var2=sub2_for_set3, var3=sub3_for_set3, ... }
}


file name.template {
pattern { var1, var2, var3, ... }
    { sub1_for_set1, sub2_for_set1, sub3_for_set1, ... }
    { sub1_for_set2, sub2_for_set2, sub3_for_set2, ... }
    { sub1_for_set3, sub2_for_set3, sub3_for_set3, ... }
}

The first line (file name.template) specifies the record instance input file. The file name may appear inside double quotation marks; these are required if the name contains any characters that are not in the following set, or if it contains environment variable macros of the form ${ENV_VAR_NAME} which must be expanded to properly generate the file name:

a-z A-Z 0-9 _ + - . / \ : ; [ ] < >

Each set of definitions enclosed in {} is a variable substitution for the input file. The input file has each set applied to it to produce one composite file with all the completed substitutions in it. The first format should be self-explanatory. In the second format, the variables are listed in the pattern{} line, which must precede the braced substitution lines. The braced substitution lines contain sets of values which match up positionally with the variables named in the pattern{} line.

6.22.3 Example

Two simple template file examples are shown below. The examples specify the same substitutions to perform: this=sub1 and that=sub2 for a first set, and this=sub3 and that=sub4 for a second set.

file test.template {
    { this=sub1,that=sub2 }
    { this=sub3,that=sub4 }
}

file test.template {
    pattern {this,that}
    {sub1,sub2 }
    {sub3,sub4 }
}

Assume that the file test.template contains:

record(ai,"$(this)record") {
    field(DESC,"this = $(this)")
}
record(ai,"$(that)record") {
    field(DESC,"this = $(that)")
}

Using dbLoadTemplate with either input is the same as defining the records:

record(ai,"sub1record") {
    field(DESC,"this = sub1")
}
record(ai,"sub2record") {
    field(DESC,"this = sub2")
}

record(ai,"sub3record") {
    field(DESC,"this = sub3")
}
record(ai,"sub4record") {
    field(DESC,"this = sub4")
}

Andrew Johnson 2014-07-23