CERN

LPC meeting summary 29-10-2018 - final



Minutes and Summary

Main purpose of the meeting: Update on plans for the heavy ion run and a summary of the low energy run from ALFA

Introduction (Christoph Schwick)

Proton data taking

The proton data taking has finished with more than 65/fb delivered to ATLAS and CMS and 2.46/fb delivered to LHCb, according to online luminosity estimates. In total there were 242 physics fills with stable beams. 52% of these were dumped by the operators (57% in 2017) and the mean fill duration was 7.95 hours (8.19 hours in 2017). The ratios of the online luminosity and Z-counting estimates are consistent with unity within errors. Witold Kozanecki asked if the CMS luminosities reported in the Massi files were updated for the early fills; this was unclear in the meeting, but can be checked on the Massi File versioning page of the LPC website.

Heavy Ion run

For the heavy ion run, the luminosity sharing was discussed. ALICE needs to be levelled at 10^27 cm^-2 s^-1 and its integrated luminosity is therefore largely determined by the amount of time in Stable Beams. ALICE thus benefits mostly from long fills and gains very little from going to 75 ns bunch spacing, despite the higher number of bunches. ATLAS and CMS gain from running without levelling, and in their case the optimal fill length is rather short, though it depends strongly on the assumed turn-around time. They also gain from the higher number of bunches with 75 ns bunch spacing. LHCb does not need levelling, but it might be imposed due to magnet quench risks near IP8 at higher luminosity. Many more bunches collide in LHCb with the 75 ns scheme, so LHCb benefits greatly from it.

The IonSharer tool on the LPC website can be used to evaluate the expected performance for different filling and levelling schemes with different levelling times. The calculation is approximate, but was found to be consistent with more detailed simulations within 5%. The results from a range of filling schemes were shown, and the LPC concluded that the best approach is to keep each fill for at least as long as ALICE can remain levelled, while leaving ATLAS and CMS unlevelled. John Jowett noted that the ATLAS/CMS luminosity will not be unlevelled from the start, but raised over several fills to ensure that the high luminosity does not cause any problems.
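The trade-off between levelled and unlevelled operation can be illustrated with a simple toy model (a sketch only; the peak luminosity and lifetime below are illustrative assumptions, not the actual IonSharer inputs): an unlevelled experiment integrates a decaying instantaneous luminosity, while a levelled one sits at its target until the natural decay falls below it, which is why long fills favour ALICE.

```python
# Toy model of integrated luminosity in a heavy-ion fill.
# The ALICE levelling target is from the minutes; L0 and the
# luminosity lifetime are illustrative assumptions.

import math

L0 = 6.0e27          # assumed unlevelled peak luminosity [cm^-2 s^-1]
L_LEV = 1.0e27       # ALICE levelling target [cm^-2 s^-1]
TAU = 2.0 * 3600.0   # assumed luminosity lifetime [s]

def integrated_unlevelled(t):
    """Integral of L0 * exp(-t/TAU) from 0 to t, in cm^-2."""
    return L0 * TAU * (1.0 - math.exp(-t / TAU))

def integrated_levelled(t):
    """Luminosity held at L_LEV until the natural decay curve drops
    below the target, then following the exponential decay."""
    t_lev = TAU * math.log(L0 / L_LEV)  # time when levelling ends
    if t <= t_lev:
        return L_LEV * t
    return L_LEV * t_lev + L_LEV * TAU * (1.0 - math.exp(-(t - t_lev) / TAU))

t = 8.0 * 3600.0  # an 8-hour fill
print(f"unlevelled: {integrated_unlevelled(t):.3e} cm^-2")
print(f"levelled:   {integrated_levelled(t):.3e} cm^-2")
```

With these assumed numbers the levelled experiment integrates luminosity linearly for the first few hours, so extending the fill helps it far more than it helps the unlevelled experiments, whose gain saturates as the beams burn off.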

For the Heavy Ion VdM scans, the ALICE scan cannot be done with a full set of head-on collisions at full intensity, as this would likely cause a beam dump/quench. The LPC would therefore like to know how many bunches ALICE would need and the minimal intensity. The simplest approach would be to start with a normal filling scheme and physics run until the intensities are low enough that ALICE can go head-on. At that point IP1/5 would be separated and the ALICE scan would start. For the ATLAS/CMS scan, Witold stated that in his opinion it would be beneficial if the VdM scans for those experiments were done in the same (long) fill, but that this is not critical. If it is the same fill, ATLAS has already arranged with David Strickland from CMS that the ATLAS off-axis scan would go first. The length-scale calibrations would be recorded in different fills. In 2015, the luminosity level in the non-scanning IPs was reduced by a factor of four, but if ATLAS and CMS are scanning in separate fills a smaller reduction might be possible. Because many of its luminosity sensors have failed since 2015, ATLAS will not be able to reuse the 2015 luminosity calibration and will therefore need an extended emittance scan early in the run. Information on the maximum allowed separation was again requested, and Roderik Bruce reported that this is under discussion with Jorg Wenninger.

The commissioning and run plan for the heavy ion run is given in the commissioning spreadsheet at http://cern.ch/go/8qwd. The most important steps were highlighted, in particular that the first stable beam could come as soon as Monday night.

Special run wrap up

Since the accelerator conditions in the low-energy run were quite unusual and probed a new operations mode of the LHC, machine experts have proposed a mini-workshop with the various machine and experiment groups. It should be scheduled before the Evian workshop at the end of January, and would therefore have to take place in December or January.

ATLAS/ALFA report on low energy run (Karlheinz Hiller and Witold Kozanecki)

The setup used during the recent low energy run was recapped by Karlheinz. The ALFA roman pots were located 3 sigma (5 mm) from the beam centre. For the main part of the run, the detector was running well. One motherboard was lost during part of it, effectively causing the loss of one arm of elastics for about 300k events. At the trigger level, the elastic trigger rate was dominated by signal events, with the background rate estimated at 10%. Rescrapings were typically required after 30-60 minutes. The characteristic pattern of elastic events was visible in all fills, and even at the detector edge very little background was seen. A first offline analysis of 1/3 of the data has been done and shows signals consistent with the expectations from simulation over the full expected t-range. For now only loose selection cuts are applied; with those, the background fraction is estimated to be O(0.2-1.5%) and equally low over the full t-range. The background during the fills with crystal collimation was a factor of 3 higher offline than in the conventional two-stage scheme, because the background was more concentrated underneath the signal. While higher, the background levels were stable for longer with the crystal collimation scheme. To what extent the data from the crystal collimation scheme can be used, in particular for measuring the rho value, remains to be seen. With tighter selections it is expected that the background level for most of the data can be reduced to a few permille. Roderik and Christoph asked why the online feedback on the crystal collimation scheme had been significantly more negative than indicated by the offline analysis. Marko Milovanovic explained that the online evaluation was based on a limited set of distributions available in real time. Christoph emphasized that for any future running it would be important to have more accurate online evaluations in order to make the optimal decisions.
Valentina Avati noted that for TOTEM the good data with the conventional two-stage collimation had about 1% background, while during the fills with bad quality it was 10-15%, significantly worse than what is now reported by ALFA for the crystal collimation scheme. In her opinion, the ALFA experts should have been able to evaluate the rate of the problematic background already from the online plots for the non-colliding bunches, and in any future special run this should be done more accurately online.

Witold summarized the status of the VdM scan for the low energy run. The luminosity data collected was overall of good quality and had adequate statistics despite the low luminosity, thanks to the dedicated filling scheme. The orbit drifts were also observed to be quite moderate. However, the offline analysis of the LDM data by Markus Palm showed that the ghost charge increased over the two fills to 10-15%, whereas the normal level is around 0.1%. Similarly, the satellite fraction was around 0.5%, whereas it is normally O(10^-4). This effect is therefore expected to dominate the luminosity uncertainty. LHCb attempted to measure these effects from beam-gas interactions, but due to the unusual conditions 95% of the data was lost and a precise estimate will not be possible. In addition, the debunched beam caused background collisions all along the straight section around IP1, which will need to be estimated and subtracted from the luminosity signals. It is therefore unlikely that a luminosity uncertainty of 2% can be reached, and even 3-5% looks very challenging. The debunching was unfortunately not detected online, as the online analysis of the LDM signal did not work properly. Helmut Burkhardt noted that the debunching could have been reduced by increasing the RF voltage from the 4 MV used.
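To see why the ghost and satellite charge dominates the uncertainty, note that both contribute to the measured beam currents without producing head-on collisions, so they inflate the bunch-current product used to normalize the VdM calibration. The sketch below uses the orders of magnitude quoted above; the simple multiplicative correction is an illustration, not the actual LDM analysis procedure.

```python
# Sketch of the bias on the VdM bunch-current normalization from
# ghost and satellite charge. Fractions are those quoted in the
# minutes; the correction formula is a simplified illustration.

def corrected_bunch_product(n1_meas, n2_meas, f_ghost, f_sat):
    """Remove non-colliding (ghost + satellite) charge from the measured
    per-beam populations before forming the N1*N2 product."""
    scale = 1.0 - f_ghost - f_sat
    return (n1_meas * scale) * (n2_meas * scale)

# Normal conditions vs. the degraded low-energy fills (per the minutes),
# with nominal unit bunch populations for illustration:
normal = corrected_bunch_product(1.0, 1.0, 0.001, 1e-4)
degraded = corrected_bunch_product(1.0, 1.0, 0.12, 0.005)

# Fractional bias on the calibration if the charge is left uncorrected:
print(f"normal bias:   {1.0 - normal:.4f}")    # per-mille level
print(f"degraded bias: {1.0 - degraded:.4f}")  # tens of percent
```

With normal ghost/satellite levels the uncorrected bias is at the per-mille level and easily absorbed into the systematic budget; at the 10-15% ghost charge seen here it reaches tens of percent, which is why a 2% luminosity uncertainty is out of reach without a precise beam-gas measurement of the non-colliding charge.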