Experimental Physics and Industrial Control System
various IOC interface options. The current system consists of a 68060 VME CPU and a KS-2917 VME/CAMAC interface board. The interface is connected to two CAMAC crates that together house six serial highway drivers. The IOC (EPICS release 3.13) runs VxWorks 5.2 and contains around 750 hardware-connected PVs. Custom optical-link devices couple the CAMAC signals across 20-25 MV. Performance requirements are modest: all PVs are scanned at 2 Hz, and a small subset (variable contents, total < 25) at 10 Hz. Access is entirely via programmed I/O; DMA and interrupts are not employed.

I have experimented with a KS-2915 PCI/CAMAC interface card and a Wiener PCI/CAMAC interface card. Kinetic Systems does not support Linux, so I had to develop my own driver for the 2915. The Wiener board comes with a Linux driver. The 2915 can be connected to multiple crates (probably eight). The Wiener solution requires one board per crate, but its throughput is about twice that of the 2915.

To keep hardware costs to a minimum, we decided to consider only potential future configurations that include our existing KS-2917 VME/CAMAC interface boards. RTEMS and Linux solutions were examined.

First, I looked at RTEMS 4.6.99.3 built with Till Straumann's MVME2100 BSP. The MVME2100 is a VME CPU and could replace the 68060 directly. The system was evaluated by creating a database of 500 PVs connected to a single CAMAC input whose value was changing at 2 Hz. Network loading was applied by running independent remote edm processes, each of which added 500 events to the IOC (one per PV). Scanning the 500 I/O points twice per second consumed 25% of the CPU.
Client connections (500 events booked)    CPU usage (%)
0                                          25
1                                          60
2                                          90
3                                         100

CPU usage was estimated by creating a minimum-priority task that incremented a volatile integer in a loop. The count was recorded over a 60-second interval. The value corresponding to no load (100% idle) was approximated by running the system with a single PV scanned at 2 Hz and no events booked; a small (presumably negligible) systematic error is therefore present in the above data.

I also tested a Fedora Core 6 Linux solution on a generic 1.7 GHz Celeron. VME access was obtained with an SBS 618 PCI/VME bus adapter (originally manufactured by Bit3, purchased by SBS, and now owned and sold by GE). The system performance was evaluated as before. Scanning the 500 I/O points twice per second consumed 15% of the CPU.

Client connections (500 events booked)    CPU usage (%)
0                                          15
1                                          16
5                                          21.5

CPU usage is from top, averaged over a 30-second interval.

John Sinclair

Paul Sichta wrote:
> Fellow CAMAC'rs,
ANJ, 10 Nov 2011