Argonne National Laboratory

Experimental Physics and
Industrial Control System

Tornado/vxWorks 5.x Information

This page provides a repository for information about using the WRS Tornado environment and the vxWorks 5.x RTOS with EPICS that doesn't really belong anywhere else on the EPICS site. If you discover any other information that ought to go on this page, please let me know.

Note that there is a separate page provided for users of vxWorks 6.x and Wind River Workbench.

There is also a reasonably good Tornado 2.0 FAQ available on the web, mostly comprising answers to questions posted to the comp.os.vxworks newsgroup.

Tornado 2.2 (vxWorks 5.5)

Installation

According to the Tornado 2.2 release notes, you cannot install multiple host and/or target architectures in the same directory.

Linux Hosting

See this page for information about building vxWorks target code on Linux.

EPICS Changes

Ron Sluiter has succeeded in building Base R3.13.7 against Tornado 2.2, but said in an email to tech-talk that in order to do so he had to remove the switch -nostdinc from the EPICS configuration file <base>/config/CONFIG.Vx. Base R3.14.x should build on Tornado 2.2 without change.

Tornado 2.0.x (vxWorks 5.4.x)

VxWorks 5.4.x will not be supported by future EPICS Base 3.15 releases. The GNU C++ compiler provided with 5.4.2 is too old; you will have to upgrade to at least vxWorks 5.5, or preferably vxWorks 6.x.

Installation

With Tornado 2.0.x it is possible to install multiple host and target architectures to the same location, and this is probably better than keeping them separate (Tornado 1.0.1 allowed some combinations, but not all). It is definitely not a good idea to mix different versions of Tornado (for different architectures) in the same location.

WRS Patches

The following patches have been found to fix certain problems with Tornado 2.0 and ought to be installed at all EPICS sites using the indicated architectures or board types. Patches must be downloaded from WRS's WindSurf website:

  • sprT2CP4: Cumulative Patch 4 - this converts a Tornado 2.0 installation into Tornado 2.0.2, and is highly recommended.
  • TDK-13418: Tornado-Comp-Drv February 2002 release v0700 - this upgrades various BSP drivers for Tornado 2.0.2.

Linux Hosting

If you want to use Linux to host your EPICS development, you will need to build your own cross-compiler as Wind River don't support Tornado 2.x on Linux. Ask your WRS FAE for a copy of the GCC source code - you are entitled to this since GCC is licensed under the terms of the GNU GPL and WRS have sold you a binary copy of it. If you're having a hard time doing that, you can also download a copy from SNS. You might want to read this tech-talk thread for more information about this topic, and possibly contact David Thompson if you have problems or questions.

Configuring vxWorks

See the document Configuring Tornado 2.0.x for EPICS for some information on how to configure your vxWorks image to be able to load and run EPICS.

Some additional points to note about the Tornado 2 configuration:

  1. If you're configuring Tornado 2.2, make sure you don't include RIP unless you know you need it (very unlikely for EPICS IOCs). This has caused problems at some sites as it can eat up 100% of the CPU in a fairly high priority task.
  2. The network memory buffer configuration for Tornado 2.x is significantly different to that for Tornado 1.x and earlier, because of the new network stack that it uses. The new configuration process requires that the complete number and size of these buffers be set at compile time (actually these numbers can be changed early in the BSP startup, but not afterwards). The old stack could dynamically grow the number of mbufs it used as necessary, so its initial configuration was not particularly critical; with the new stack it is. The WRS defaults provided in Tornado 2.x are too small for most operational EPICS IOCs.

    The official WRS method of configuring the network stack is described in section 4.6.3 Network Memory Pool Configuration of the vxWorks Network Programmer's Guide (that section number is from the vxWorks 5.4 edition; in the vxWorks 5.5 edition it is 4.3.3). APS and SNS have both developed their own alternatives to this fixed approach, which permit the number of buffers to be selected at boot time from the vxWorks boot parameters. At APS we provide two configurations, selected by one of the bits in the boot flags according to the loading on that particular IOC. The actual numbers of buffers used are as follows:

    Parameter                Value (light)   Value (heavy)
    -----------------------  -------------   -------------
    Data Pool clDescTbl []
    NUM_64                        125             250
    NUM_128                       400             400
    NUM_256                        50              50
    NUM_512                        25              50
    NUM_1024                       25              25
    NUM_2048                       25              25
    NUM_NET_MBLK                  800            1200
    NUM_CL_BLK                    650             800
    System Pool sysClDescTbl []
    NUM_SYS_64                    256            1024
    NUM_SYS_128                   256            1024
    NUM_SYS_256                   256             512
    NUM_SYS_512                   256             512
    NUM_SYS_MBLK                 1024            3072
    NUM_SYS_CL_BLKS              1024            3072
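
    These counts are normally fixed at build time by defining the corresponding macros before the network stack is configured. The following config.h fragment is a sketch only, using the heavy-load column above; it assumes your BSP picks these definitions up from config.h before netBufLib sets up its pools (check how your particular BSP configures the network pools before relying on this):

    /* config.h fragment -- sketch only; values are the "heavy" column above */
    /* Data pool (clDescTbl[]) */
    #define NUM_64          250
    #define NUM_128         400
    #define NUM_256         50
    #define NUM_512         50
    #define NUM_1024        25
    #define NUM_2048        25
    #define NUM_NET_MBLK    1200
    #define NUM_CL_BLK      800
    /* System pool (sysClDescTbl[]) */
    #define NUM_SYS_64      1024
    #define NUM_SYS_128     1024
    #define NUM_SYS_256     512
    #define NUM_SYS_512     512
    #define NUM_SYS_MBLK    3072
    #define NUM_SYS_CL_BLKS 3072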

Known Problems

  • The WRS SPR#31718 "cksum: out of data message logged to console when target is proxyArp server" may affect sites that are using the Proxy ARP server to forward packets to a secondary CPU through the backplane (shared memory) network. The published workaround is to add proxyPortFwdOff(67) to the startup script of the relevant IOC; this should be safe provided you are not using DHCP or BOOTP to configure the slave IOCs on the backplane.
  • If the shell on an MVME1xx IOC (68k family) ever runs continuously for more than 1.5 seconds without relinquishing control to lower priority tasks, you might get the error message

    interrupt:
    ei0: reset

    This comes from the ei ethernet driver whenever tNetTask takes longer than 1.5 seconds to process some incoming packets, and may or may not be serious - at iocInit it appears to be benign (it often occurs in dbLoadRecords with large databases), but if it occurs later there are reports that it can cause a complete hang. If you're not already using an NFS filesystem, switching to NFS might solve the problem; otherwise you can temporarily reduce the priority of the tShell task at the beginning of the startup script and restore it again afterwards, as follows:

    # demote the shell (task ID 0 = the calling task) below tNetTask
    taskPrioritySet(0,60)
    
    ...
    
    # restore the shell's normal priority
    taskPrioritySet(0,1)
  • It is unwise to configure Tornado 2.x to include the DNS resolver if you are running EPICS R3.13.x. If you do and your DNS server dies or doesn't contain some data that it should, Channel Access connections can take a very long time to be made and the IOC will appear to be very flaky, although nothing will obviously appear to be wrong other than the bad connections. This is caused by a CA task being held up waiting for a DNS reply, which prevents further connections until the lookup times out. EPICS R3.14 does not suffer from this problem, however.
  • When upgrading a shared-memory backplane networked system from Tornado 1.x to Tornado 2.x, you should be careful about any subnet mask given in the "inet on backplane" boot parameter setting. In Tornado 1.x the subnet mask appears to have been less critical, whereas in Tornado 2.x values that worked fine before can cause the main network to stop working (processor 0 might decide to route all packets destined for your boot host through the backplane network, which isn't very helpful when it's looking for the symbol table file or startup script). This is a standard TCP/IP network configuration issue associated with subnets, and you should talk to someone knowledgeable in that area if you can't work out what the setting should be yourself.
  • The error message "arpresolve: unable to allocate llinfo" may be an indication that you have something wrong with your routing tables, although there may be other causes for this message too. Tornado 2 is less forgiving of network configuration problems than earlier versions, so if you get these you should check your boot parameters (gateway inet) and any routeAdd() calls in your startup to make sure they're correct for your network configuration.
  • A message from tNetTask "arptnew failed on xxxxxxxx" where xxxxxxxx is the hexadecimal representation of an IP address is similar to the previous error, indicating some kind of routing configuration problem. I received this explanation from a former WRS developer:
    An IP packet is being transmitted, and based upon how it matches network
    interfaces' IP addresses and mask, an outgoing interface is selected, at
    which point the interface driver calls arpresolve to get a MAC address
    for that IP address, either by sending an ARP request, or by using
    a cached entry.
    
    At that point, if the ARP code, looking for an ARP cache entry, finds
    that this particular IP address has a specific routing entry which is
    not an ARP entry, it will return that particular error.
    
    Since ARP shares a common mechanism with the IP routing table,
    there cannot be a routing entry and an ARP entry for the same
    destination IP address at the same time.
    
    One possible reason for this is if an ICMP Redirect is received for
    a particular IP address, which would be considered "local" when matching
    its address against the network interface's IP address and mask.
    
    mRouteShow() at the time of this error should pretty much paint the picture
    in bright colour.
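
    When chasing the routing problems described in the last three items, a few commands at the vxWorks target shell will usually reveal any conflict between the routing table and the ARP cache. This is a sketch only; the IP addresses shown are made up for illustration and must be replaced with values appropriate to your network:

    routeShow                                   /* dump the IP routing table */
    mRouteShow                                  /* multiple-route view (Tornado 2) */
    arptabShow                                  /* dump the ARP cache */
    
    /* if a stale host route is shadowing an ARP entry, delete it and,
     * if a route is genuinely needed, re-add the correct network route */
    routeDelete "192.168.1.17", "192.168.1.1"
    routeAdd "164.54.0.0", "192.168.1.1"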

Tornado 1.0.1 (vxWorks 5.3.1)

Ethernet on mv2700 (dc driver)

There can be a network reliability problem with Tornado 1.0.1 on the mv2700 through certain types of 10/100baseT media. A quick way to resolve this problem (rather than getting a patch for Tornado 1.0.1 from WRS) is to change the setting of DC_MODE in the config/mv2700/config.h file from 0x08 to 0x18. This puts the driver into Full Duplex mode. Note that this setting must be correct for the network port you're using, so you might need two boot files if you have both full- and half-duplex ports.
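
The change described above amounts to a one-line edit in the BSP configuration; a sketch of the resulting config/mv2700/config.h setting (check your BSP's existing definition first, and remember the setting must match the switch port):

    /* config/mv2700/config.h -- force the dc ethernet driver into
     * full duplex mode (the Tornado 1.0.1 default was 0x08) */
    #define DC_MODE 0x18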

PowerPC Issues

There is a separate page discussing the specific problems associated with using PowerPC CPUs under vxWorks/Tornado.

ANJ, 16 Feb 2012