LPC meeting summary 28-04-2025 - final
Main purpose of the meeting: Schedule, status of commissioning, OO, NeNe, high intensity tests
LPC minutes 28th April 2025
Present (P = in person): Chris Young (P), Chiara Zampolli (P), Robert Münzer (P), Andrej Gorisek (P), Filip Moortgat (P), Giulia Negro (P), Paula Collins (P), Eric Torrence (P), Andrea Massironi (P), Andres Dellanoy, Dragoslav Lazic, Klaus Mönig, Riccardo Longo, Silvia Pisano (P), Joanna Wanczyk (P), Jorg Wenninger (P), Reyes Alemany, Flavio Pisani, Rosen Matev, Matteo Solfaroli Camillocci (P), Giulia Ripellino, Giuliano Giacalone, Govert Nijs, Ivan Cali, John Jowett, Lorenzo Bonechi, Maciej Trzebinski, Mario Deile, Peter Steinberg, Richard Hawkings, Tomasz Bold, Vale, Valeriia Zhovkoska, Roderik Bruce (P), Natalia Triantafyllou (P), Maciej Slupecki (P), David Stickland (P), Ilias Efthymiopoulos, Witold Kozanecki, Vladislav Balagura
Introduction (Chris Young)
S8:
Roderik Bruce: the commissioning of the PbPb cycle and optics, of the 1 m b* for LHCb, and of the advanced dispersion knobs for the ALICE background will likely go together, because if you go to 1 m b* in LHCb you need the dispersion knobs in order not to get a nasty background in ALICE. We hope it will work, but we need to test it.
Chris Young: ok, to be followed up with the machine people.
Witold Kozanecki: in ATLAS we’re still discussing how many points we want in the crossing angle scan and therefore how long it will take. Part of the input we would need is whether the machine needs to validate not just at zero and full crossing angle but also at intermediate values. Is it enough to do zero and full? Do you need zero, min and max? Or zero, min, max and halfway between min and max? This will have a bigger impact on the schedule than the data-taking time [in VdM] of ATLAS.
Chris Young: this will be done in vdM optics?
Witold Kozanecki: yes, this is part of the ATLAS vdM program.
Chris Young: I will follow up.
Tomasz Bold: about the filling schemes for oxygen on s9: are they just tentative or certain? If not certain, in which direction will they go?
Chris Young: this is the 18-bunch scheme for OO. In the scheme linked there is 1 bunch per injection, which takes quite some time (1 minute per injection per beam => ~40 minutes), which is not ideal. What we’ll provide to ATLAS (since ATLAS is interested and you’re in ATLAS) is another scheme with 2 bunches per injection, so that you can check whether the maximum L1 trigger rate is still acceptable. On the total of 18 bunches, Roderik can maybe comment.
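[For reference (simple arithmetic, not spelled out in the meeting): with 18 injections of 1 bunch each, two beams, and roughly 1 minute per injection and per beam as stated above, the filling time is about 18 x 2 x 1 min ≈ 36-40 minutes, so a scheme with 2 bunches per injection would roughly halve it.]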
Roderik Bruce: this is the current working hypothesis. We’re still trying to optimize to reach the lumi target in 8 days as requested by the experiments. Whether we can have more bunches is a discussion on the machine side with collimation, machine protection etc., to understand the actual limit on the intensity. Also for the number of injections, maybe one could think about 3 or similar; there could be an optimal value.
Chris Young: so it might change, but not by too much: not from 18 to 180, but more from 18 to 24.
Tomasz Bold: what matters to me is whether it can change to 12.
Roderik Bruce: no, otherwise we won’t reach the lumi targets.
Robert Münzer: If you go to more injections, what about the bunch spacing?
Roderik Bruce: we should not put too many, otherwise we cannot respect the 1 μs spacing.
Oxygen/Neon test (Maciej Slupecki)
Roderik Bruce: how do you know that in the main peak there is no (16)O(4+) next to (20)Ne(5+)?
Maciej Slupecki: this is an educated guess: you see that in the neighbouring peaks the Ne is much higher and the O is already very low. So it would be surprising to have the majority of O and not Ne.
Roderik Bruce: it might also not be the majority.
Maciej Slupecki: you cannot distinguish them. The relative rigidity difference between them is 6x10^-5, which is not distinguishable here; we could only distinguish at the level of a percent. So whatever we produce at this stage will be kept until the LHC.
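[For context (a rough check, not part of the discussion, using standard atomic masses): (16)O(4+) has m/q ≈ (15.9949 − 4·m_e)/4 ≈ 3.9982 u per charge, while (20)Ne(5+) has m/q ≈ (19.9924 − 5·m_e)/5 ≈ 3.9979 u per charge, a relative difference of about 6x10^-5, which sets the scale of the rigidity difference quoted above.]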
Robert Münzer: why is there much less contamination of N in the O than in Ne?
Maciej Slupecki: because the O had been running for weeks while the other gas line had just been started. The pressures when you supply gases to the source are at the level of 10^-3 mbar, so there are still some air pockets close to the valve which you cannot flush with the gas, because they are right next to the valve leading to the source. The only way to flush them is to run the gas through the source at these very low pressures and then extract it on the other side of the source. The pipes can be flushed; it is the last part, at the valve leading to the source, that cannot. So if we had more time for this, the N would decay away and there would be no N.
Chris Young: in any case there is no N in the gated region anyway, so no N would make it to the LHC.
Maciej Slupecki: exactly. Because of the N content we have a somewhat lower intensity there, since there is a limit on the total intensity that you can extract from the source due to charge-related effects at low energy.
Filip Moortgat: 4% on the contamination is already good, but on the day of the (next) test, can you measure it with better precision?
Maciej Slupecki: our precision is limited by the red Ne peaks (see S6) and by the O; the O is the one mostly driving the resolution. One could do denser tests with more acquisitions per setting, which could improve the resolution, but we cannot go below 1% precision.
Filip Moortgat: 1% precision would be very good for analysis.
Roderik Bruce: in the LHC we’ll accumulate contamination over time: if you start with 4%, this will grow, and after 10 hours you might have 10% contamination, because we create oxygen in the collisions of the Ne ions at the IPs. So this is the component at the start time; the contamination accumulated over time we cannot measure in a good way, we can only try to simulate it, with a big uncertainty.
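[A rough schematic of this mechanism (not from the discussion, ignoring beam losses and assuming fragments produced with essentially the beam rigidity stay circulating): the O fraction in the Ne beam grows roughly as f_O(t) ≈ f_O(0) + σ(Ne→O) · ∫L dt (summed over the IPs) / N_Ne(t), where σ(Ne→O) is the effective cross-section for a Ne ion to fragment into a circulating O ion, L the instantaneous luminosity and N_Ne the number of circulating Ne ions; the actual evolution needs a dedicated simulation, as stated above.]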
Chris Young: looking at the papers for Ne, you see that the difference between O and Ne is very small, so you’re looking for a difference, and if the difference is there at all and you can see it, then it’d be a matter of correcting for it.
Maciej Slupecki: note that this is a very first result done last week.
Filip Moortgat: thanks. What is the contamination in the O beam? You said almost nothing. Do you have a number?
Maciej Slupecki: O is really pure.
Filip Moortgat: ok so the contamination will be from the time it spends in the LHC.
Chris Young: In O you might get carbon.
Filip Moortgat: yes, during the time it spends in the LHC.
Roderik Bruce: on the last slide you say 30% less equivalent intensity for Ne compared to O. Do you mean 30% fewer ions per bunch or nucleons per bunch?
Maciej Slupecki: yes, ions per bunch, but this should still be 3x more than what you had assumed in your simulations, where, I think, you had 1e9 for Ne.
Roderik Bruce: we need to check and then redo all the projections. We should redo the Ne luminosity studies with the same filling scheme as we now have for O and with the best estimates that we can get for the intensity. We might get a different number than before.
Chris Young: for O I think we have 3e10 charges, to be divided by 8 to get ions, which gives 4e9.
Maciej Slupecki: here we would get 3e9.
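[For reference, the conversion being used here is N_ions = N_charges / Z (Z = 8 for fully stripped O, Z = 10 for Ne): 3e10 charges correspond to 3e10 / 8 ≈ 3.8e9 ≈ 4e9 O ions per bunch, or 3e10 / 10 = 3e9 Ne ions per bunch.]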
Chris Young: can you make the O that intense?
Roderik Bruce: I don’t think so.
Maciej Slupecki: what is the present estimate? How much should we deliver out of SPS?
Roderik Bruce: I think we assumed out of SPS 4x10^9 O ions, but if you cannot deliver it, we increase the number of bunches, from 18 to 21 or 24.
John Jowett: I am making calculations with the latest lumi on the transmutation of the beams during the fills, for O at least. You typically get, after 10 or 20 hours, 1% or so of N, C, and so on. This is being updated.
Chris Young: if you could show it at the LPC meeting or at the meeting with the ion community, that would be good.
John Jowett: yes, I can do it in about 1 week.
Reyes Alemany: Concerning the O contamination, Maciej had a proposal not mentioned today: we could start with the LHC MD on crystal collimation to get rid of the O contamination, to bring it lower.
Chris Young: yes
Reyes Alemany: assuming the estimated 55 uA out of LINAC3, this is a factor 3 more intensity than was assumed by Natalia and Roderik in the lumi estimates for NeNe, so we are in a good state.
Ilias Efthymiopoulos: Between O6+ and O2+ in the plot on s7 there is a factor 2 difference, which we don’t see in the plot with both Ne and O. Can you use this to estimate the contamination? Since O6+ and O2+ are well separated, maybe they can be used to give a better estimate compared to the peaks that are close to the Ne.
Maciej Slupecki: there are many ways to do that, and one should note that things are complicated. You would expect it to be Gaussian but it is not. This has a lot to do with how you tune the source and the extraction voltage. There is also the electron binding energy in the plasma. It is complicated to simulate and predict these things, so we cannot say a priori how things will behave. The fact that O2+ is high and O6+ is low would make me reject the data, because it is not expected. I trust the measurement but I don’t know why it behaves like this. Maybe the source is tuned to extract more O3.5+ than O4+ and then you extract O2+ more easily? While in the O → Ne switch-over we have many more charge states, from more gases, and I don’t know what the difference between O2+ and O6+ means. I will think about it.
Summary of Oxygen lumi projections (Natalia Triantafyllou)
S5:
Chris Young: you are assuming that the emittance scans come after we have taken all the physics data and reached the targets? Because you added 4 hours of emittance scans. You cannot do them at the beginning or you’d lose the best data. So it is actually better to do it like this.
Peter Steinberg: The 100 urad is a positive crossing angle?
Roderik Bruce/Chris Young: for ATLAS we can choose, for CMS we cannot choose, it has to be horizontal, because otherwise there is no acceptance for the ZDC.
Eric Torrence: in fact we want to have it positive or we won’t have acceptance in the ZDC.
Roderik Bruce: for pO you might also have wanted positive, but then we put negative for AFP.
Peter Steinberg: how do we characterize the risk associated with running above the setup beam limit?
Roderik Bruce: safety-wise for the machine there is not really a risk. The risk is that we lose efficiency: since we cannot mask any interlock, we run with the same interlocks as with full beams, so if we hit any interlock we might dump and lose time, and the time is very tight here. So the risk is that it takes too much time because of some interlock we forgot about.
Chris Young: it is a “beam-loss-monitor-dumping-the-beam” risk, rather than “blowing-up-the-LHC” risk.
Robert Münzer: concerning the emittance or VdM scan, can they be done in parallel for the experiments? Meaning not all VdM at the same time.
Chris Young: you never want to be scanning one experiment while the other is also scanning or you won’t have constant conditions at the other IP.
Reyes Alemany: for the intensity of the beam we need to do some calculations from the injectors; what we expect from extraction at the SPS is an intensity of 4e10 charges. If we need to reduce this intensity for one single injection from the LINAC, because you need less, we’ll need to scrape in the SPS. That should not be an issue for OO, because you require 3e10, which is within the uncertainty; while for pO you require 1e10, which is quite a bit lower, so we’d need to scrape quite a lot and we’d need to find a way to lower the intensity. We would need to find a solution from the injector side. The other question is why the intensity per bunch in pO, especially for O, is so low: 1e10 and not 3e10?
Roderik Bruce: the reason is that we want to keep the bunch intensity reasonably low, with as many bunches as possible, to have a low pileup target at ATLAS. We need many collisions; only like that can we reach the lumi target in the allocated time. One could have a higher bunch intensity and the same number of bunches, but this should be checked with collimation and MP. We tried here to reach the target with the lowest intensity possible. If we can increase to 3e10 charges per bunch also for pO, there is no showstopper, but we might need more validation, which would take more time, so we’d not gain. The only showstopper is if more validation fills are needed, which takes time. For OO, 3e10 is not absolute: if we get 4e10 we’re happy, we’d take as much as we can. For pO we would be much above the setup beam flag limit, up to a factor of 8.
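[For context (a standard relation, not shown in the meeting): at fixed instantaneous luminosity L, the average pileup per bunch crossing is mu = sigma_inel · L / (n_b · f_rev), with n_b the number of colliding bunch pairs and f_rev the revolution frequency, so spreading the same luminosity over more bunches of lower intensity keeps the pileup at ATLAS low while still reaching the integrated-luminosity target in the allocated time.]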
CMS (Andrea Massironi)
Chris Young: for the PPS in pO: is it ok to have 60 urad xing angle to avoid LR?
Filip Moortgat: we were assuming zero, so we need to see.
ALICE (Robert Münzer)
Chris Young: does anyone else see a shift in the beam spot position in z as reported by ALICE (by 3.5 cm)?
Paula Collins: we saw a shift but had no comparison to last year at injection energy. But that was in xy.
Robert Münzer: we did not publish the Massi file, for comparison by the others, so they’d have to do it manually.
ATLAS (Eric Torrence)
Chris Young: do you know what you will need for NeNe in terms of VdM scans?
Eric Torrence: we don’t know yet, maybe just a quick scan.
David Stickland: what is the purpose of the xing angle scan?
Eric Torrence: some of our detectors are not in the averaged luminosity measurement but are used as cross-checks via the single-tube measurements; since these are not phi-symmetric, they are sensitive to the xing angle, so we’d like to measure that as precisely as we can.
David Stickland: did you do it last year?
Eric Torrence: last year we went directly from 0 to 160, so it was not needed.
LHCb (Paula Collins)
Paula Collins: I confirm that we see -3 cm shift in z.
Chris Young: when you ask for a long period without stable beams, do you really mean without stable beams, or with no collisions?
Paula Collins: no stable beams.
Witold Kozanecki: For the first request (s8), about the pilot bunches: nominal bunches in VdM are typically 1e11 or a little less, so 10x lower is 1e10; is this compatible with the BPM intensity related abort?
Jorg Wenninger: they are invisible, so if we lose them, we will not notice.
Witold Kozanecki: what is the visibility threshold?
Jorg Wenninger: it is 3-4e10 at those settings.
Witold Kozanecki: so the BPM are blind to those bunches.
Jorg Wenninger: yes. I would not swear to what happens with DOROS, even though it also tries to find a peak, which it probably won’t be able to, so it will probably not take them too much into account. But that is another problem.
Witold Kozanecki: I have other concerns about this, for the ATLAS fill, but this is not the right place to discuss this.
Jorg Wenninger: for the variability in population and emittance (second bullet on the same slide) you need to take into account longer preparation. Intensity is easier, you just do a bit of scraping. What do you mean by “we would like to scrape the beams” during the fill (third bullet on the same slide)? When? Before injecting?
Paula Collins: it should be at the end of the fill.
Roderik Bruce: so you want to scrape in with the collimators at the end of the fill?
Jorg Wenninger: then we need to go in ADJUST.
Chris Young: this would also be rather complicated, because LHCb has to go first since you need the SMOG OFF data without background. So you need LHCb SMOG OFF, LHCb SMOG ON, then ALICE; whatever you measure after this will not be equivalent to what you had before, because of the change over time. So I am not sure how easy this will be to analyze.
Vladislav Balagura: this should be no problem. Every scan is independent. We don’t rely on the identity of the bunches across the scans.
Witold Kozanecki: I think that what Chris means is that the non-factorization will probably have evolved and you probably won’t really know if the scraping has really done you any good.
Vladislav Balagura: We will compare the invariant cross-section.
Witold Kozanecki: yes, but if the purpose is to determine if the scraping helps non-factorization, I think Chris’ point is that the beam would change anyway with or without scraping and so the comparison of before and after scraping would not be valid, unless you do it in very short succession.
Vladislav Balagura: I am not sure that the bunches will be so much perturbed by the scraping that we cannot make the comparison anyway, or am I wrong?
Chris Young: unless you make another full VdM scan after the scraping how do you extract the cross-section, the sigma_vis?
Witold Kozanecki: you might get the same answer and then you are happy, or a very different answer and then you don’t know what the problem is.
Vladislav Balagura: This is definitely fine. We should get the same answer. The invariant cross-section should be constant; if not, we have revealed something unknown.
Chris Young: I think we need to think about this a bit more, since you risk dumping the beam.
Vladislav Balagura: we understand that it is not trivial, if it is not possible, we can forget about it.
Roderik Bruce: how much do you want to scrape?
Jorg Wenninger: yes, if you scrape between 4 and 5 sigma it is not a problem.
Vladislav Balagura: it depends on the area of the 2D scan: ideally we would scan the whole remaining acceptance, but even a large part of it would be very useful. If the non-factorization is in the tails, which are not normally visible in the 2D scans, we will see it as a difference in the cross-section. In principle we should see it.
Roderik Bruce: how many sigmas are we talking about?
Vladislav Balagura: it depends on how much you can scrape safely. We don’t want to dump the beams.
Michi Hostettler: normally we can scan up to 5 sigmas, or even 6 sigma differential separation, so if you scrape down to 5 sigmas, then you get the rest scanned.
Roderik Bruce: at 5 sigma we have the collimator cuts. But those are collimation sigmas.
Jorg Wenninger: those bunches have ~3 um [emittance], so it is not far away.
Michi Hostettler: maybe you scan ½ sigma, then scrape ½ sigma down and we are there.
Witold Kozanecki: 5 sigma separation is 2.5 sigma per beam. I still don’t understand how Vladislav would like to collimate.
Vladislav Balagura: this depends on the possibilities: I don’t know the margin that is available.
Witold Kozanecki: if you ask the machine to scrape at 2.5 sigma per beam, it won’t be good.
Vladislav Balagura: I am asking for something reasonable. The danger is that the tails are invisible, and if there is something there, we have never checked these tails. Even scraping at the very end will help to see if there is something unknown. This is the reasoning.
Witold Kozanecki: would it help if you looked at last year’s data? You have the luminosity profile as a function of separation and could deduce from that how many single-beam sigmas you would want to scrape to make a difference.
Vladislav Balagura: we don’t know, we never measured the tails, in the 2D scan.
Witold Kozanecki: you can assume that the beams are cylindrically symmetric, at least elliptical.
Chris Young: this requires a discussion at the LLCMWG.
Paula Collins: we’ll continue internally.
Roderik Bruce: from the machine side it would be useful to understand how much we’d need to scrape in; then we can think about it. Note that there is also some time overhead, since you need to go slowly when you scrape in order not to dump the beam. So it is also a question of how much extra time you are prepared to give to this.
Witold Kozanecki: I think that an estimate from LHCb of how much they have in the tails of the 1D scan would be helpful, and a comment in due time by the collimation experts about how much time it would take to go carefully from the nominal settings to, let’s say, 3 sigma per beam. This is also part of the discussion. So each side could make some estimates.
Roderik Bruce: and you want to scrape both vertically and horizontally?
Vladislav Balagura: in principle yes. I agree with Witold, it is a good suggestion. We don’t know how much time it would take. Even scraping a few percent would be useful.
Jorg Wenninger: (about the continuous scan) it will take longer, since we need much more current change. This is very specialized, though, and we do not change the parameters a lot like in the nominal lumi scan. It needs some preparation. Technically we already saw that it works (it was tested the night before). On your side, what do you have to do?
Paula Collins: we are thinking about the best way to validate it and to compare with the emittance scan.
Michi Hostettler: on my side, the main thing is to see that the data are in the end analyzable and that we get the signal with the correct rate and synchronization for the analysis. If this is proven, then on the machine side we can make an effort to make it more operational and less preparation-heavy. But of course first we should see that we can get something useful out of it.
Chris Young: (about the request to participate in the single-bunch collision fill for afterglow studies) in this data there is only 1 bunch colliding in ATLAS/CMS, so you would have nothing.
Paula Collins: can we not also collide a bunch in LHCb? Would that really spoil it?
Jorg Wenninger: then we’d need 2.
David Stickland: the request by CMS is 1 colliding bunch in CMS. We would not have a problem if there was one for LHCb too.
Chris Young: we wanted to make this request in the intensity ramp-up after TS1. Is this ok for LHCb too?
Paula Collins: yes.
Chris Young: for the request of a pilot bunch in beam 1: would it not be better to have it collide with a pilot bunch in beam 2 somewhere, and then collide with a normal one in LHCb?
Witold Kozanecki: since this would be in the LHCb fill, it is not for ATLAS and CMS to say.
Vladislav Balagura: why would you not want it in your fill?
Witold Kozanecki: first one should check whether there is the sensitivity; then, the filling scheme for the ATLAS and CMS scan is already extremely constrained by parasitic crossings, and I am concerned that adding a pilot, once you take into account the constraints from the injectors and the abort gap, will cost more colliding bunches, while we want all the colliding bunches we can get for the tile-over-track calibration studies.
Vladislav Balagura: in principle in LHCb we should have the sensitivity. We have a few tens more rate.
Witold Kozanecki: it is not just the rate, but also the non-factorization, which will be different for pilots and nominals if the mechanism is one of the resonant mechanisms we were talking about, and the difference is of the order of 1/3 to 1/2 of a percent. So the combination of the bunch-to-bunch reproducibility of the results and the limited statistics makes me largely skeptical. But it is your fill and your beam time.
Chris Young: there is more space in the LHCb fill and we can add it there.
AOBs
Chris Young: please note the “Annotated fill table” on the LPC website with the information on the fills.