CERN

LPC meeting summary 26-05-2025 - final



Minutes and Summary

Main purpose of the meeting: Data taking progress, feedback on VdM filling schemes and “physics MD”, transmutation in pO/OO.

LPC minutes 26 May 2025

Present (P = in person): Chris Young (P), Chiara Zampolli (P), Robert Münzer (P), Andrej Gorisek (P), Paula Collins (P), Eric Torrence (P), Flavio Pisani, Silvia Pisano (P), Rosen Matev (P), Filip Moortgat (P), Andrea Massironi (P), Giulia Negro (P), Roderik Bruce (P), John Jowett (P), Cedric Hernalsteens, Gerardo Vasquez, Juan Esteban, Matthew Nguyen, Peter Steinberg, Riccardo Long, Richard Hawkings, Witold Kozanecki, Jorg Wenninger (P), Michi Hostettler (P), Dragoslav Lazic, Anna Sfyrla, Giulia Ripellino, Qipeng Hu, Benoit Salvant, Georges Trad, Wilke van der Schee

Introduction (Chris Young)

Jorg Wenninger: what is on the slide [slide 7] has not been proven yet. We should put everything on the table to see whether this is feasible. But it is clear that, to achieve this, we need to treat this YETS like a long TS: no changes, nothing. Even then it could be challenging from both the machine and the experiments' side.

Filip Moortgat: are you going to suggest different ramp up steps?

Chris: probably towards the end of the year there will be discussions, like Jorg mentioned, about whether it is possible to do it as quickly, and at that point we'll come up with a scheme. Normally they'll want to do the same as last year, but doing it in 2 weeks might be difficult. We could just about get up to 1200 bunches in 2 weeks, if we're not doing anything else.

Filip Moortgat: if we have the same time and the same steps, we can do the same things.

Chris Young: yes, but maybe you cannot have a third fill at 400b because one did not work well.

Filip Moortgat: sure, we never asked for a third fill. 

Jaime Boyd: at the beginning you said that you don’t plan to change anything, but I thought there was this question about the radiation to the D1

Chris Young: yes, but to make it short, you need to not change anything.

Jaime Boyd: so the assumption is that we will not change anything. 

Jorg Wenninger: the only way to improve the D1 situation would be to go back to reverse polarity, but this is not good for FASER.

Chris Young: the default is “no changes”, from what was said in the LMC.

Filip Moortgat: to be clear: the difference between the different luminometers looks quite ok, but in terms of absolute calibration there is nothing we can say until the VdM. The beginning is whatever you want it to be, since it is controlled by the pileup of the experiments, which is in our control, while the end is set by the machine.

Michi Hostettler: one thing to check is the optics difference and the waist shift with respect to the original cogging set point we had during the original optics measurement; in the meanwhile, to recenter, we did quite a bit of IP shift, by ~30 degrees, and this can change the picture slightly. We can at some point check, at the end of a fill, whether the ratio changes when going back to the original point.

Chris Young: the error on this ratio is of the order of 5-6% between the two luminometers, so it is not too surprising.

Michi Hostettler: agreed, but it is worth checking whether it can be something coming from this waist shift. The beams were at slightly different positions when this was measured.

Michi Hostettler: [concerning the emittance plots where the measurements from the scans were not agreeing with the BSRT] there was actually a bug in the code, up to 24th-25th May, for which the nominal crossing and separation planes were used, and therefore the subtraction of the crossing-angle effect was applied to the wrong plane. In the last fills it should be fine, both for the plots from beginning to end of fill and in the statistics plots.

Michi Hostettler [concerning the cryo limit and the pileup set by the experiments, in particular the fact that the limit should be increased after some time when running at 2.1e34]: it is indeed not happening. The signal is provided to the cryo, but there still seems to be a little bug on the cryo side that prevents the limit from rising. Benjamin will have a look when back; hopefully it will be fixed this week. The original plan when this signal was implemented was that the limit should rise as soon as we have been at 2e34 for more than 45 minutes. It will be fixed.

Roderik Bruce: about the O run intensity: on 13th June there is the meeting, and even if I cannot say anything officially now, it looks like the injectors are doing great and we can potentially go to 5e10 charges per bunch. What we are (for the moment unofficially) discussing is that we could put the cap at 2e12 charges per ring for machine protection reasons, with a reduced validation. This means that we could potentially keep the 48 bunches for pO but with a higher bunch charge, which would then increase the pileup for everybody except ATLAS; for OO we could also go with the full bunch charge with 40 or even 48 bunches, depending on what we come up with at the end. We can potentially put some constraints on the pileup. So it would be great if everybody could think about what pileup they can accept for pO and OO.

Chris Young: in pO, we’d match the charge of the p beam with the O one. 

Roderik Bruce: yes. 

 

Transmutation in the Oxygen run (John Jowett)

Roderik Bruce [about the plot on s5]: we don’t have more information about the peak, which is just below the beam energy. We should make sure that the simulation was run with the correct beam energy.

John Jowett: yes, it is being followed up.

Roderik Bruce: on the momentum acceptance of the collimation system, the 1.77 value might be conservative. We might be able to make it tighter so that the particles in that peak are lost.

Filip: on the plots on slide 12: what are the colors?

John: yellow is OO; the second largest is alphas, blue is deuterons. 

Chris: for the top right plot (N(Z, 2Z) / Ntot / %), if you have more intense beams, will it look the same?

John: if you have more intense beams but keep the luminosity the same, e.g. by leveling, it would change the percentage of the contaminants. So if you could keep the luminosity the same, it would be good.

Chris Young: for every useful collision you would have the same chance of making an ion with the bad rigidity such that it continues round the ring.

John Jowett: no, if you increased the total intensity of the beam, but you kept the same luminosity, then you’d have fewer of those collisions relative to the intensity of the beam.

Chris Young: then you would have a larger denominator and still the same number in the numerator.

Roderik Bruce: which is actually what we plan to do, because we think we can get higher intensity from the injectors, so we'll probably get higher bunch intensity than what I showed here, and we could level down to the proton luminosity.

John Jowett: if this is ok for the total intensity… 

Roderik Bruce: what is the contamination level that is acceptable by the experiments? At what level do you want to dump? 3 hours, 6 hours?... 

Chris Young: with this input, people can discuss exactly this with the heavy ion experts.

Roderik: this is important for the planning of the run. For the moment we are counting on one single long fill (13h). If we have to refill every 2-3 hours, this will cost a lot of extra time.

Chris Young: for the moment we have a 13h long fill, but with the old intensity parameters. So if you count 2 fills of 6h, that could get you to the same target, but it costs you 2 hours in total: you save 1 hour of stable beams but add a 3-hour turnaround, so 6 + 6 = 12 hours of stable beams plus the extra 3-hour turnaround gives a total 2 hours longer than the original plan.
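Chris's bookkeeping can be written out explicitly (a minimal sketch; the 3-hour turnaround and the fill lengths are the numbers quoted in the discussion, and `total_time` is a hypothetical helper, not an LPC tool):

```python
# Calendar-time comparison of one long fill vs. several shorter fills.
# Assumption from the discussion: 3 h turnaround between consecutive fills;
# the preparation of the first fill is common to both plans and not counted.

def total_time(fill_lengths_h, turnaround_h=3.0):
    """Total calendar time: stable-beam hours plus the turnarounds
    needed between consecutive fills."""
    return sum(fill_lengths_h) + turnaround_h * (len(fill_lengths_h) - 1)

one_fill = total_time([13.0])       # 13 h stable beams, no extra turnaround
two_fills = total_time([6.0, 6.0])  # 12 h stable beams + one 3 h turnaround
print(one_fill, two_fills, two_fills - one_fill)  # -> 13.0 15.0 2.0
```

The 1 hour of stable beams saved is outweighed by the extra 3-hour turnaround, hence the 2 hours quoted above.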

Filip Moortgat: if we get to a conclusion of 10% is fine, we’d be limited to 6h. 

Chris Young: yes, getting to 1% is not feasible.

Filip Moortgat: yes, it was to make an example. 

Jaime Boyd: the systematic uncertainty on the plots is very large.

General agreement that indeed it is.

Jaime Boyd: probably the answer is no, but is there anything that one can do from the machine side to change the momentum acceptance? can we do something with the collimators?

Roderik Bruce: we can push in the momentum collimators, but only up to a certain limit. We now have these five-track input files with which we can start some tracking studies and see how effective it is to clean the beam with the collimators, and how much of those extra fragments we can kill.

Jaime Boyd: the problem with that is that at some point there is the risk to kill the real beam.

Roderik Bruce: yes. We don't want to adjust the collimator to be closer in than the primary in P7, for example. This would negatively affect the lifetime. The cleaning in P3 would be much worse if there were any other type of beam losses in the cycle; maybe this intensity is not too bad. But with the latest news from the injectors, what we'll get is not so low. So we'll have a look at this offline.

John Jowett: another assumption I should mention is that in the filling schemes we assumed that all bunches have the same intensity lifetime. 

Chris Young: yes, but this is not 100% true since they are a bit different: some are LHCb bunches, some ATLAS/CMS/LHCb bunches… 

Robert Muenzer: can we do anything to see which scenario we are closer to? Do we know in the end the actual cocktail in the beam?

John Jowett: strictly speaking the answer is no. Maybe by looking at the ZDC or some extra VdM.

Roderik Bruce: from the modelling point of view, there is not a good way to know this.

Paula Collins: from the LHCb point of view, it would be better (operationally) to have 2 fills, so as to have two to compare. We are also planning to inject H. If we can track the multiplicities as the fill goes on and compare beam-empty with beam-beam crossings, this might give us a constraint. And we could check for some change in composition as the fill goes on.

Chris Young: maybe with an offline analysis, but not during data taking.

Paula Collins: but we could plot multiplicities.

Rosen Matev: do we expect the same contamination in both beams?

Roderik/John: yes, they are symmetric.

Chris Young: there seems to be a preference not to have only one fill, but doing 3 becomes not very useful if the total time in stable beams is to stay around 13 hours. This would be the argument between 2 fills and 3 fills.

John Jowett: one more piece of information: the cross section of He4-O is half of the one for OO.

Roderik Bruce: if you account for the burn-off, the other particles should also go down.

Reyes Alemany: what is the uncertainty on the clustering percentage?

John Jowett: I don’t know. I am told that 30% is the most realistic, but I don’t know.

Reyes Alemany: if we decide to cut after 6 hours, but then the percentage is 60%, it does not make sense. We are cutting on something that we don't know. Should we instead make 1 fill to get to the target, and then take more fills that are shorter and compare the contamination?

Roderik Bruce: the point is that we have only 13 hours. If we take 1 fill, and then it is not useful, we’ll have to throw away the data. 

Robert Muenzer: the two handles we have are to increase the intensity while keeping the lumi the same, and to reduce the fill length.

Roderik Bruce: and reduce the momentum cut. 

Robert Muenzer: then we should keep the fill as short as possible.

Jaime Boyd: can you do the same analysis but with a different momentum cut?

John Jowett: yes. 

Rosen Matev: what about pO?

John Jowett: we expect some, but less. It will take some time to get the results; same for Ne.

Chris Young: for pO, the EMD will be smaller, practically zero.

John Jowett: yes, but you will get some hadronic events due to clustering. 

Robert Muenzer: for Ne, can we expect a similar order of magnitude?

John Jowett: I think so. 

Reyes Alemany: Ne is O plus an alpha, so it is even worse, since it is 5 clusters instead of 4. 

 

ATLAS (Eric Torrence)

Eric Torrence: VdM filling scheme is ok

Chris Young: [about the filling scheme in pO] the second filling scheme differs from the first only because the injections are more spread out, which is better for the SPS. So, since the other experiments don't have a strong preference, we'll use that.

Roderik Bruce: the SPS cycle is the same for the first and second. This is what we were considering.

Chris Young: yes, for the third and fourth, the SPS would have to prepare two different cycles.

Eric Torrence: and this will be the same for OO?

Roderik Bruce: with the caveat that if for OO we get very high bunch intensity from the injectors, we might have to take fewer bunches, but then this might not be good for you if we have too few bunches. So we need to check.

Eric Torrence: it is better for us to have more collisions as this gives a more relaxed trigger setup due to the higher rate allowed by the IBL veto.

Beam Beam MD: ATLAS would like to know better the precision with which the measurement will be made, and the impact on the overall luminosity calibration. Not sure whether it is worth investing 9h of physics time in this measurement.

Chris: for the Beam Beam MD: it is not supposed to improve the calibration uncertainty, but to check the methodology for the luminosity determination.

ATLAS are happy with the VdM filling scheme.

 

CMS (Giulia Negro)

Michi Hostettler: there is no alarm in case the Roman pot cannot move. This case was also a bit peculiar, since it was moving fine and then decided to stop moving for no apparent reason, without generating a fault. It stopped at 20-25 mm and happily stayed sitting there. We then sent the command to move it.

Michi Hostettler: [about the emittance scan, which needs to wait 3-4 minutes after SB is declared before being done in CMS, so CMS proposes to swap the order with ATLAS, which usually comes second] when we go into SB, ATLAS has a veto on the emittance scan for 3-4 minutes. This is why the sequence is set to do P5 first, since you allow it immediately.

Andrej Gorisek: in ATLAS we need to wait until the detectors are ready.

Michi Hostettler: David said that this is nice-to-have, but not a blocking request. If it is something that you need, you should do like ATLAS and send a pause request before you are ready, because we'd prefer not to watch the clock and wait.

Chris Young: if we do ATLAS first, then for sure it will be fine for CMS too.

Michi Hostettler: the other option is that you send a pause until you're ready; this is why we have this protocol.

Chris Young: until that is implemented, can we swap?

Filip Moortgat: since we do it 2x a week, it is not an issue.

Michi Hostettler: we agreed to do it every second fill.

Filip Moortgat: every 2 fills is a bit too much. 

Chris Young: we wanted every 2 fills till someone says that they don’t need it that often. 

Michi Hostettler: we can also reduce the rate. From the machine side it would be nice to have points at least every 2, or at most 3, fills, because otherwise we lose track a bit. Now more people are interested in these scans, but clearly they come with a cost. For now let's do them first in ATLAS. We agree, for the moment, on every second fill.

Georges Trad [from the chat on zoom]: can we do scans with 9 points if the machine needs them?

Giulia Negro: in CMS we prefer always the same number of points, so if they need a scan, it should also be with 15.

Michi Hostettler: what Georges is proposing is probably to do 15 points every second fill, and 9 points every other fill. But then we may get some push-back from your physics coordinators.

Chris Young: for now let's keep what we discussed in the lumi days, so every second fill with 15 points, and if the machine needs one when we'd otherwise not do them, they can do a shorter one.

Chris Young: [about the Roman pot higher trigger rate] the only different thing is that there are 12 non-colliding bunches, whereas these start colliding in ALICE and LHCb in the very full scheme. Why this would be wrong at 1200b and not at 400b is not clear.

Roderik Bruce: if you do an emittance scan, do you see the signal going up and down?

Giulia Negro/Filip Moortgat: we don’t know, and this is anyway not a showstopper. 

Roderik Bruce: do you see this on both sides or one only?

Giulia Negro: both.

Michi Hostettler: we had also some fills with emittance blow-up in B2 at injection, but those would be very specific cases.

Jorg Wenninger: what they see should come from the IP.

Beam Beam MD: ok for CMS to take 9h from physics for the lumi MDs. The VdM filling scheme is ok.

Giulia Negro: VdM filling scheme is ok.

Chris Young: feel free to put higher pileup than the cryo limit, because you will be limited by the machine. 

 

LHCb (Rosen Matev)

Michi Hostettler: [concerning the losses in IP8] thanks for all the checks; everything is consistent with what we observe. The fact that it is related to luminosity means it comes out of the IP, so you would not see this as a background rate, not something coming from the outside into the IP. We just need to find out why this changed compared to last year.

Jorg Wenninger: [concerning the fact that LHCb lost the leveling before the end of the fill, and their request to keep the pileup constant] if we have to dump when LHCb loses the leveling, then we’ll limit the duration of the fill to 10 hours.

Rosen Matev: we expect that the optimal fill length for LHCb is computed as for ATLAS and CMS, such that it does not go beyond the moment when we exhaust the leveling.

Michi Hostettler: it should be like that. Yesterday it was significantly after the end of the xing-angle leveling, but then it was exhausted.

Jorg Wenninger: but the xing-angle leveling ends very early; then you dump at a lumi of 1.8e34 or so, or 1.6e34.

Xavier Buffat: is there some margin to increase the intensity at a later stage? 

Jorg Wenninger: maybe 2% now. More in September.

Michi Hostettler/Jorg Wenninger: we'd need 1.4, whereas ATLAS and CMS are at 1.3-1.4, so it is very close to the end of the leveling.

Rosen Matev: we’re working on the assumption that pileup is constant, unless the fill has to be kept longer for any operational reason. I think that also the lumi predictions assumed this. If this cannot be done for whatever reason, we’ll have to see. 

Chris Young: using Michi's tool, with 1.6 I'd get 12 hours of leveling for LHCb, while the optimal fill time is about 11.5h for ATLAS, so it is on the edge.

Xavier Buffat: I think we’re blowing up a bit more than last year, so maybe this is something that we can fix, and it would lengthen your leveling time a bit as well. Then when we increase the intensity, clearly also your leveling time will go up and the optimal fill for ATLAS and CMS will be shorter, so the problem will be gone. 

Chris Young: is it an issue if for 1 hour the pileup decreases a tiny bit? Or is it just inconsistent for reconstruction?

Rosen Matev: we need to see. The dataset might then not be completely homogeneous. We have to see if this is what we want, or if we prefer to decrease the lumi target.

Michi Hostettler: the problem with the lumi-target reduction is that you would have to put it exactly at the end of the fill, since you are not the dominant factor of the burn-off. So even if you reduce your target, you can only say that you want the last value for the entire fill, but this will not change how ATLAS and CMS burn the beam.

Jaime Boyd: isn't there a spread in the pileup from the bunch-to-bunch dependence anyway? So it is probably not bigger than this. You should plot the Gaussian for a fill where you go beyond and one where you don't, and compare.

Jorg Wenninger: [about the magnet flip] we'll use fewer bunches at the next fill, something like 800.

Michi Hostettler: the feedforward that limits the excursion in lumi during the b* leveling has to be redone after the flip. What we did last year was to switch back and forth between two configurations depending on the polarity, but this year we don't have the knowledge for the negative polarity, so you should expect that at every b* step you might shoot up by 5% or so.

Rosen Matev: VdM filling scheme: requested more pilot bunches.

Chris Young: [about the pilot bunches in the VdM scheme] the reason why we had only one is that everything was then better than before: there were more bunches for ALICE, more bunches for LHCb not including the pilot, and the same number of 1s for ATLAS and CMS. Adding one would have meant going down in one parameter, so it would no longer have come for free. If you want, you can give up one of your bunches; this for me is easy to do. The private bunches for ATLAS and CMS: are they used?

Eric Torrence: we don’t need them

Chris Young: we can get rid of them.

Witold Kozanecki: we don't need them. As for the comment about the pilot bunches in ATLAS/CMS (this is my personal opinion, not yet approved by ATLAS lumi): it is not a good idea.

Rosen Matev: this is not really a request (even if it is on slide 5).

Beam Beam MD: LHCb is not interested in this program, since it might not be sensitive enough.

 

ALICE (Robert Münzer)

Roderik Bruce: [about the pileup in OO] if you go to 0.1, you’ll take less data. This will be up to you.

Robert Muenzer: yes. Would the lumi go down linearly? So with half the pileup we'd have half the data?

Roderik Bruce: it is actually a bit better. It depends on when we dump. If you're always leveling then what you say is true, but when you get out of leveling it scales better than linearly, and you'll have more than half the data with 0.1.
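Roderik's "better than linear" point can be illustrated with a toy burn-off model (a hedged sketch; all parameters are made up for illustration, not actual LHC values): a lower luminosity cap burns the beam more slowly, so the fill stays leveled longer, and the integrated luminosity at half the cap comes out well above half.

```python
# Toy leveling model: instantaneous lumi L = min(cap, k*N^2), and the
# bunch population N is depleted by burn-off, dN/dt = -burn * L.
# All units and parameter values are arbitrary/illustrative.

def integrated_lumi(cap, n0=1.0, k=2.0, burn=0.1, fill_hours=10.0, dt=1e-3):
    """Euler-integrate the luminosity over one fill leveled at `cap`."""
    n, t, total = n0, 0.0, 0.0
    while t < fill_hours:
        lumi = min(cap, k * n * n)  # leveled until k*N^2 falls below the cap
        total += lumi * dt          # accumulate integrated luminosity
        n -= burn * lumi * dt       # burn-off depletes the beam
        t += dt
    return total

full = integrated_lumi(cap=1.0)   # exits leveling partway through the fill
half = integrated_lumi(cap=0.5)   # stays leveled essentially the whole fill
print(f"cap=1.0 -> {full:.2f}, cap=0.5 -> {half:.2f}, ratio {half/full:.2f}")
```

With these illustrative numbers the half-cap fill collects roughly three quarters, not half, of the integrated luminosity, matching the qualitative argument above.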

Robert Muenzer: we need to see; we might give up some of the statistics for better data quality (with less pileup). It would not be 0.1, but more like 0.15.

Robert Muenzer: VdM filling scheme is ok.

Beam Beam MD: no feedback yet. 

Rosen Matev: for OO: the later you do the VdM, the more contaminated it will be.

Roderik Bruce: we can try to redo the contamination estimates with a realistic pileup and higher intensity. Maybe then it will be slightly better. So we can try 0.1 or 0.15 for ALICE. For ATLAS and CMS we tried 0.3. 

Filip Moortgat: for CMS, it is a compromise between pileup and luminosity. In this discussion luminosity wins, so we'd take more pileup. But after what we heard today there is now a third ingredient, the contamination, and we need to rediscuss.

Chris Young: so you don't mind having two collisions in the same bunch crossing, or do you veto these events?

Filip Moortgat: yes, we veto them. But it is still much better to have more lumi and veto once in a while than to have less.

 

LHC (Michi Hostettler)

Jorg Wenninger: this effect seems to have always existed.

Michi Hostettler: this year it seems a bit larger and also permanent, while last year it was something that came for a few hours and went away. This year it is there every fill. So something has changed a bit.

Robert Muenzer: thanks for the work. For pp it is not so severe to have these excursions. The main reason to understand it is that with high lumi we saw instabilities in the TPC, and it would be good to understand whether they are related, or whether they are detector related, since they might have an impact on PbPb.

Michi Hostettler: we could ramp up the orbit feedback and suppress this effect at the end of a fill, and you could do your tests then. It is not so nice to do it in a dynamic phase, but for 1-2 hours it is fine.

Robert Muenzer: we could profit from running 2 hours and see if we see the same instabilities.

Jorg Wenninger: you need to be careful: if we do this, we might pick up some noise.

Chris Young: if this is done at the end of the fill, it would not be a disaster.

Michi Hostettler: it might also not be a big correction.

Robert Muenzer: it would be good to have the test to disentangle. Could the magnet polarity have an impact?

Jorg Wenninger: it should not, since it is not coming from the IP. It comes from outside.