CAT Chat Minutes

April 11, 1997

 

Issues that require action and/or follow-up
1. The BESSRC computer tables were moved by contractors or APS personnel without asking permission. What can be done?

We are informing all support personnel (riggers, plant facilities, etc.) that the floor coordinator must be notified before any work is performed or any equipment is moved on the floor of the experiment hall. The floor coordinator will, in turn, discuss the planned work with the CAT.

2. We are not getting a notice when Tecknit is working in an adjacent station.

This incident occurred because of a problem with the work-request software. The problem has been corrected and should not occur again. The system is now capable of notifying specific sectors or all sectors.

3. Is it possible to receive more information on the storage-ring status, especially during times when the storage ring is down for some major problem that has a serious impact on user beam availability?

We will continue to improve communications between the facility and the users. We understand the frustration of not knowing what is going on, but very often even the facility engineers do not know the full extent or implications of their early diagnostic efforts. The full extent of the damage from a failure is often not evident until the obvious repairs are made and the more subtle failures become apparent. In any case, we will work on keeping you informed. At the end of these minutes, we have included a summary of this past week's activities, and we will provide a weekly status summary for the storage ring.

4. Can we be notified when work that may affect our operation is done on the utilities?

We will add utility modifications to the list of work discussed under item 1.

5. A station door could not be opened because the PSS was indicating a minor fault. What is being done to correct this?

The new PSS software that is being installed corrects this problem. Most of the beamlines have the new software installed. We are trying to finish the remaining ones as soon as possible.

6. Is there any truth to the rumor that the shops will be declassified as tornado shelters (the signs have been taken down)? If only the restrooms are used as shelters, the men's restroom size becomes an issue.

The shops will be reposted while we review other options.

7. There is no place to put a name on the temporary visitor badges. When a badge is used for multiple days, it is difficult to determine which badge belongs to whom. Can something be done?

We will provide stickers on which names can be written; these can be attached to both the dosimeter rack and the dosimeter.

8. On Wednesday, DND noticed that the beam took a sudden jump on their bending-magnet BPM. Also, the status screen was indicating that orbit correction was being performed continuously. What happened in each of these cases?

The sudden beam motion did occur, but we have not been able to correlate it directly with a cause. It is possible that one of the correctors made an uncommanded change, and it took time for it to recover or for the orbit correction program to restore the orbit. In the case of the status indicator, an abnormal termination of the orbit correction program modified the status PV, and other control problems prevented it from being corrected.

9. ASD studies and shielding verification are done on weekday day shifts and not on weekends. Can something be done to provide us with more day shifts?

Both studies and shielding verification often require a significant number of support personnel, because these activities often have unique or non-standard operational requirements. It is difficult to have a large number of personnel here on weekends or at night, and call-ins from home waste time and decrease the efficiency of the activities. Also, the amount of dedicated shielding-verification time is dropping rapidly, since most stations are now monochromatic and are verified under full-current, parasitic conditions.

10. Because of all of the down time, is it possible to get more running time this run?

That was the reason for canceling the dedicated shielding verification day on Thursday, April 10, and returning it to User Ops.

11. We have heard that the storage-ring power supplies are running at 120% of capacity. Is this true?

This information is false. The storage ring power supplies were designed to operate at power levels required for 7 GeV + 10%, i.e., at 7.7 GeV. This provides adequate capacity for stable operation at 7 GeV. Most of the problems with the power supplies are either infant mortality failures or control system problems.

12. We have seen data on beam availability that appears to be too high. What is the true situation?

There are several sets of numbers being quoted for availability. One is storage-ring availability, which uses only stored beam current as its criterion; the other is x-ray availability, which uses the shutter permit as its criterion. However, the x-ray availability did not take into account the time it took for the ID gaps to close after the shutter permit was granted. We are now closing the ID gaps and letting the orbit stabilize before granting the shutter permit. We are also tracking fill duration and will try to fold it into the measure of availability.
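
For illustration only, here is a minimal sketch of how the two availability figures could be computed from logged time intervals. The interval data and names are invented for the example and do not represent actual APS logs or software.

    # Illustrative sketch only: the two availability figures discussed above,
    # computed from hypothetical (invented) time-interval logs, in hours.

    def total_hours(intervals):
        # Sum the lengths of (start, end) intervals.
        return sum(end - start for start, end in intervals)

    scheduled      = [(0.0, 168.0)]                 # one week of scheduled user beam
    stored_beam    = [(0.0, 60.0), (75.0, 168.0)]   # beam stored in the ring
    shutter_permit = [(1.5, 60.0), (77.0, 168.0)]   # permit granted only after the
                                                    # ID gaps close and the orbit settles

    ring_availability = total_hours(stored_beam) / total_hours(scheduled)
    xray_availability = total_hours(shutter_permit) / total_hours(scheduled)

    print("storage-ring availability: %.1f%%" % (100 * ring_availability))
    print("x-ray availability:        %.1f%%" % (100 * xray_availability))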

Information
The trips causing beam dumps have been equally shared between the power supplies and the rf. In general, the reliability of most systems has improved dramatically. Unfortunately, the reliability of the rf system has gotten worse. The major problem delaying the start of User Ops last week was caused by a vacuum leak in the electron linac, which required a waveguide to be replaced.

The increased lifetime of the stored beam is due to a new fill pattern, which has 6 filled buckets for BPM triggering, followed by an additional 200 filled buckets. The fundamental reason for these changes in fill pattern is to test the stability and reliability of the storage-ring BPM system.

Status summary for April 11 - 17
On Saturday, April 12, at 0710, some disk drives attached to the controls server were inadvertently shut down. This prevented any new processes from being started on any workstation attached to the controls network; however, all running processes continued without affecting the storage ring. At noon, the server was rebooted, and the stored beam was not lost. Plans are underway to move the controls mirror system to Building 412 to avoid any recurrence.

The power supply system for the synchrotron dipole magnets failed at 1330 on Saturday. The failure was major and caused significant smoke and damage to the system. The power supply consists of a dual (master/slave) arrangement in a push-pull configuration to minimize the maximum system voltage. The failure occurred in both supplies and was identical in each. Each supply has a three-phase input transformer that includes a single small tertiary winding wound around all three phase windings to equalize phase unbalances within the transformer. In both supplies, these windings overheated because of a phase unbalance of unknown cause. The damage to the master supply transformer was more significant, so it was replaced with the single spare transformer. The slave supply transformer was repaired in-house. The system was operational by Wednesday at 0800.

When this failure occurred, the stored beam was not lost. The plan was to continue the store as long as possible. Unfortunately, a trip of rf2 caused the beam to be dumped at 1530 hours.

During the synchrotron dipole power supply repair, additional work was performed on the PAR rf fundamental cavity, storage-ring power supply controls and storage-ring rf systems.

During the ASD studies period following the repair, injector studies, storage-ring orbit correction, and real-time feedback studies were performed until User Operations began at 0800 on Friday. The injector studies time was used in the ongoing effort to understand and minimize the number of rf trips.