CERN

LPC meeting summary 08-10-2018 - final



Minutes and Summary

Main purpose of the meeting: review the experimental requests for the low-energy run, optimize the luminosity, consider the use of crystal collimation and give an update on the overall schedule

Introduction (Christoph Schwick):

A meeting with the accelerator management was held this morning to plan the remainder of Run-2 proton running.
The outcome of that meeting was reported:

For the low-energy run, the setup for the VdM scan and the 11m data taking still remains to be done. The first part of the setup, establishing the cycle, is common to both, while the VdM scan additionally requires loss maps and the 11m data taking requires the collimators and roman pots to be aligned. The plan is to do the cycle setup and loss maps on Thursday morning before the start of data taking, while the collimator and roman pot setup for the 11m run will be done just before the data taking at 11m. Jorg noted that switching between the 11m and high beta* optics requires a precycle, which degrades machine stability for a few hours, so it is preferred not to interleave the two types of optics with each other.

The collimation scheme for the 11m data taking was discussed. TOTEM needs to have the roman pots inserted to between 3 and 5 sigma (the closer, the better). The possible options are to use a single-stage collimation scheme, as was done in the May 2018 tests, or the two-stage scheme used during the most recent test. Neither scheme has been simulated or tested with the 11m optics, so it is not known which one will work best, if at all. The single-stage scheme is faster to set up than the two-stage scheme (1-2 hours faster), but the experiments expressed a preference for the two-stage scheme, as that was a huge improvement for the high beta* test. It was noted that the priority is to have acceptable backgrounds for TOTEM, but ATLAS will take collision data as well if conditions allow.

Performance projections (Brian Petersen):

For estimating the required data-taking time, it is assumed that the luminosity requests presented by the experiments in the 18th August 2017 LPC meeting are still valid, i.e. 300-500 µb-1 for ATLAS and 190-380 µb-1 for TOTEM. The LPCs interpreted this as a request for 400 µb-1, and it was later clarified that this would account only for good data-taking conditions, i.e. roman pots inserted and acceptable background levels.

Since the luminosity signals are not reliable for 900 GeV running, the rate of an ATLAS minimum-bias trigger line during the recent test run was used as a proxy for estimating the luminosity loss during data taking and during beam scraping. For the former, a luminosity loss of 20% per hour was observed and attributed to beam blow-up and intensity losses, while in two scrapings a luminosity loss of 20% per scraping was observed. Based on this and an assumed turn-around time of 30 minutes, an optimal fill length of 2 hours was estimated, assuming the background conditions remain good for physics. Rescraping the beams does not appear advantageous unless the beam conditions deteriorate much faster or the luminosity loss in the scraping can be substantially reduced. With the two-hour fill length and 20% luminosity drop per hour, the average luminosity is about 65% of the peak. This changes only minimally when the assumptions are varied.
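As a quick cross-check of these numbers, a short sketch (assuming the 20% per hour loss is modelled as exponential decay, which is an assumption not stated in the minutes) reproduces the quoted 2-hour optimal fill length and ~65% average-to-peak luminosity ratio:

```python
import math

# Assumptions from the minutes: ~20% luminosity loss per hour
# (modelled here as exponential decay) and a 30-minute turn-around.
DECAY_PER_HOUR = 0.20
TURNAROUND_H = 0.5
lam = -math.log(1.0 - DECAY_PER_HOUR)  # decay constant per hour

def avg_fraction(fill_h):
    """Average luminosity over a full cycle (fill + turn-around),
    as a fraction of the peak luminosity."""
    integrated = (1.0 - math.exp(-lam * fill_h)) / lam  # L_peak * hours
    return integrated / (fill_h + TURNAROUND_H)

# Scan fill lengths from 0.5 to 5.9 hours to find the optimum
best = max((avg_fraction(t / 10), t / 10) for t in range(5, 60))
print(f"optimal fill length ~ {best[1]:.1f} h, average/peak ~ {best[0]:.2f}")
# -> optimal fill length ~ 2.0 h, average/peak ~ 0.65
```

Under these assumptions the optimum is fairly flat, consistent with the remark that the result changes only minimally when the assumptions are varied.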

For predicting the luminosity, the beam conditions at the start of the M1 data period were assumed, as these were quite relaxed. This yields a peak luminosity for TOTEM of 6.5x1027 cm-2s-1, or 23 µb-1 per hour. Accounting for the effective running time, this gives 360 µb-1 per day. It was estimated that this corresponds to about 1.1 million elastic signal events delivered to ATLAS. The luminosity can be increased by 20-40%, either by increasing the bunch intensity, as was done in the last test fill, at the cost of higher background, or by adding one or two more bunches. For the 11m run, a peak luminosity of 4.9x1028 cm-2s-1, or 176 µb-1 per hour, is predicted. These predictions assume 100% machine availability.
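The quoted rate conversions follow directly from 1 µb = 1e-30 cm²; a small sketch reproduces them, together with the daily figure when combined with the ~65% average-to-peak ratio from the fill-length estimate above:

```python
# Cross-check of the quoted unit conversions: 1 µb = 1e-30 cm^2,
# so L [µb^-1/hour] = L [cm^-2 s^-1] * 3600 * 1e-30.
CM2_PER_MICROBARN = 1e-30

def ub_inv_per_hour(lumi_cm2_s):
    """Convert an instantaneous luminosity in cm^-2 s^-1 to µb^-1 per hour."""
    return lumi_cm2_s * 3600 * CM2_PER_MICROBARN

print(ub_inv_per_hour(6.5e27))   # high beta*: ~23 µb^-1 per hour
print(ub_inv_per_hour(4.9e28))   # 11m optics: ~176 µb^-1 per hour

# Daily delivery at ~65% average-to-peak (effective running time):
print(ub_inv_per_hour(6.5e27) * 24 * 0.65)  # ~365 µb^-1/day, quoted as 360
```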

Based on these projections, the original data request can be delivered in just over one day. The VdM scan would add ~1/2 day of running, while the 11m run should be less than 1/2 day even accounting for setup. Accounting for normal machine availability and some remaining setup, the special run is therefore expected to take less than four days.

Experiments' input:

ATLAS (Karlheinz Hiller):

ATLAS requested up to 1.5nb-1 of high beta* data in order to record a few million elastic events, and stated that one million elastic events is the minimum required. For the luminosity request, ATLAS also counted the luminosity delivered while inserting the roman pots etc., as was done in the 90m special run. However, since in this case no stable beams are declared, it makes more sense to only account for luminosity when the roman pots are in place, which reduces the request somewhat.

The preference is for (at least) 6 colliding bunches, since during the tests the background was worse when injecting high-intensity bunches and required two scrapings. In addition, for two fills ATLAS requests one non-colliding bunch. Pros and cons of different strategies were discussed, but with only two fills during the test it is difficult to conclude on the optimal scheme before seeing more data. It was suggested that it would be possible to inject 5 higher-intensity bunches, do a first scraping and then inject a sixth bunch to compensate for the loss. This would require the sixth bunch to be of the same intensity as the first ones and would require two scrapings. For two fills, ATLAS plans to turn on their inner detector, which requires "quiet beams", i.e. no beam tweaking and in particular no additional scraping before the detector is ramped down. During the scraping periods, ATLAS will retract the ALFA roman pots.

ATLAS noted that they can see the beam conditions immediately online, and they propose to rescrape when S/B is around 1 and to refill when the elastic trigger rate in one arm falls below 10 Hz. This request should be modified in view of the above studies.

ATLAS preferred the standard two-stage collimation over the crystal collimation scheme, as in the latter the background distributions are more similar to the elastic signal, which could be a problem in high-background conditions. Still, they would not object to having 1-2 more fills with the crystal collimation scheme to better evaluate how it performs. Christoph mentioned that in any case the crystal collimation is not considered operational and it would be difficult to cover the full run, but Stefano Redaelli stated that if it is the better scheme, they could find enough experts to cover operations. However, there is no dedicated piquet in case of hardware problems. He suggested doing a fill with crystal collimation early on, which was agreed to as long as the two-stage collimation remains the default.

For the VdM scan it was noted that "VdM beams" will be needed from the injectors. These are unlikely to have an emittance as low as 1.5 µm, but this needs to be checked with the injectors. Jorg confirmed that a ±3 sigma movement for the VdM scan is fine, but he needs to check if there is a possibility of larger movements during the length-scale calibration (up to 4 sigma requested). Collisions in IP5 were requested for stability reasons. Machine experts did not think this was necessary, but it would in any case be the default. To track the emittance growth, BSRT data was requested. Jorg noted that it would also be possible to do wire scans between scan points, since the run is at injection energy.

ATLAS does not request the 11m run, but confirmed that they will take the data.

TOTEM (Valentina Avati):

TOTEM largely agreed with the request from ALFA, but is keen to also get the 11m dataset. They have a preference for the crystal collimation, since during that test they did not see any background growth with time, which could allow longer fills. However, this is based on a single fill and does not account for the luminosity loss during the fill, which will tend to keep fills short. It was noted that their roman pots will stay at 3 sigma during any scraping.