The Personal Blog of Stephen Sekula
A view across the Rathausplatz food market; the city hall is visible in the distance.

TAUP Journal: Day Two

The plenaries on the second day began with beam-based neutrino physics [Wen]. The speaker reiterated the target parameters and discussed existing results and future facilities for measuring them. They began by noting that \sin^{2}(2\theta_{13}) is known from Daya Bay to 2.8%, and that Double Chooz has just updated their measurement to an 11% uncertainty (highly compatible with the Daya Bay result).

The construction of JUNO was discussed. It expects to begin operations in 2024, with two reactor complexes in its observing window at baselines of O(10) km (a medium baseline, in reactor terms). JUNO expects to achieve 3-sigma significance on the mass ordering question with 3 years of data. Monte Carlo simulations imply they can achieve a 3% energy resolution. A question in the previous day’s parallel sessions asked how JUNO intends to verify this resolution empirically, or at least what experimental evidence (e.g. from research and development) exists that such a resolution is achievable [Guo]; no clear answer was given. In the plenary talk, the speaker noted that JUNO is investigating future options for neutrinoless double beta decay operations.
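For reference (my own note, not from the talk): liquid-scintillator energy resolution is conventionally quoted at 1 MeV and, assuming the usual photostatistics scaling, goes as

    \[\frac{\sigma_E}{E} \approx \frac{3\%}{\sqrt{E/\mathrm{MeV}}},\]

so the 3% figure refers to 1 MeV of visible energy, roughly where the oscillation-driven spectral wiggles JUNO must resolve are finest.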

The talk then turned to sterile neutrino searches. The speaker began by reviewing the evidence that fuel evolution, and not new physics, is strongly favoured as the explanation for earlier observations of “antineutrino anomalies” from reactors. Globally, the significance of that anomaly still stands at only 2.5-sigma, and fuel evolution is the prevailing hypothesis, with increasing evidence that U-235 is the underlying cause.

However, a spectrum bump has been observed by multiple experiments in the range of 4-6 MeV, standing at 4-sigma significance. Its cause is unknown; hypotheses include a forbidden beta decay or sources beyond standard explanations. A review of the short-baseline programs at FNAL and J-PARC was provided.

The next plenary talk focused on the long-baseline program [Sanchez]. The target measurements for these programs are the CP-violating phase and the mass hierarchy. Evidence from NOvA and T2K is so far in tension and not sufficient to draw strong conclusions. Combining their results, one finds for now that they are highly inconsistent when imposing a normal ordering, since the experiments favour differing regions of parameter space; in the inverted-hierarchy interpretation, the two experiments appear to favour the same region. The speaker implied that if you were placing bets, you might lean toward the inverted hierarchy today.

The study of atmospheric neutrinos will be dominated by IceCube, KM3NeT/ORCA, and Super-K. These detectors can distinguish up- from down-going events, which in turn provides valuable additional information on the matter effects in oscillations.

For the future, the field looks to DUNE and Hyper-K. With liquid argon and water, respectively, these differing technologies will provide complementary information on the same target measurements. DUNE expects to achieve up to 5-sigma sensitivity to CP violation for \delta_{CP} = -\pi/2 (equivalently 3\pi/2), and 3-sigma for other values of the phase. Hyper-K expects to achieve up to 5-sigma sensitivity on CPV while firmly establishing the mass ordering.

We then went deep into the CEvNS process [Vignati]. A key takeaway for me was that detailed study of this process is essential for a key reason beyond the obvious ones: it will be the ultimate background for dark matter searches. However, CEvNS offers a raft of other physics measurements, including the magnetic dipole moment of the neutrino, searches for new weak interactions, probes of the weak interaction at extremely low momentum transfer, and even supernova detection (with reach to all three flavours).

The experimental challenge is akin to dark matter searches, and in fact it’s no accident that the hunt for low-threshold recoils in those searches correlates with the era of positive detection of CEvNS. The recoils induced by CEvNS have energies in the range of 100 eV to the keV scale. The COHERENT experiment plans upgrades that will improve measurement precision and deploy more detectors.
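As a rough cross-check of those numbers (my own arithmetic, not from the talk): for a neutrino of energy E_{\nu} scattering coherently off a nucleus of mass M, the maximum recoil energy is

    \[T_{\mathrm{max}} = \frac{2E_{\nu}^2}{M + 2E_{\nu}} \approx \frac{2\,(4~\mathrm{MeV})^2}{67.6~\mathrm{GeV}} \approx 0.5~\mathrm{keV}\]

for a typical ~4 MeV reactor antineutrino on germanium, consistent with the range quoted above; stopped-pion sources like the one COHERENT uses, with E_{\nu} up to tens of MeV, push recoils into the tens-of-keV regime.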

A reactor neutrino flux can be used for CEvNS measurements, and this is what the DRESDEN experiment aims to do. One controversial aspect of this experiment so far has been their own measurement of the quenching factor of germanium, which they found to be twice as high as that determined by other experiments. There was a call for another independent germanium experiment at a reactor to help resolve this issue. During the Q&A, it was noted that DRESDEN has not shown “reactor off” data, while other experiments have.

An example of such other experiments is the CONUS experiment in Germany. It obtained a null result on coherent scattering, but the experiment has been upgraded and relocated to a different reactor in Switzerland. Another experiment, νGEN, also used germanium but only obtained an upper limit on coherent scattering. Examples of other experiments in this space are Ricochet and NUCLEUS, but they are not alone.

The focus then switched to the direct measurement of neutrino mass [Mertens, Lasserre]. The current upper bound on m_{\nu} is at the level of 1 eV. The main ways to interrogate this mass are cosmology (features of the universe are a proxy for \sum_{i} m_{\nu_{i}}, but the inference relies on models), neutrinoless double beta decay (which depends on the neutrino being a Majorana fermion), and beta decay or electron capture. The talks focused on this last class of methods.

Nuclear beta decay basically boils down to an endpoint measurement, but not in the classical sense (a slight deviation of the maximum observed energy from the theoretical value assuming m_{\nu}=0). Rather, the target is the distortion of the spectrum’s shape due to non-zero neutrino mass. Effectively, the slope of the electron energy spectrum near the endpoint contains information about the neutrino mass and is independent of the fundamental nature of the neutrino (Dirac or Majorana). The challenge is that only 10^{-13} of all decays occur near the endpoint (in that last electron-volt of energy), so these measurements demand a very low background.
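To make the 10^{-13} figure concrete (my own arithmetic, not from the talk): near the endpoint E_0 the differential spectrum falls off as

    \[\frac{dN}{dE} \propto (E_0 - E)\sqrt{(E_0 - E)^2 - m_{\nu}^2},\]

so for m_{\nu} \approx 0 the fraction of decays within \Delta E of the endpoint scales as (\Delta E / E_0)^3. For tritium, E_0 \approx 18.6~\mathrm{keV}, and the last electron-volt therefore contains roughly (1/18600)^3 \approx 1.6 \times 10^{-13} of all decays.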

Efforts are concentrated in the application of electrostatic filters, phonons, and the cyclotron frequency of the electron (as a proxy for E_{e}). KATRIN uses a molecular tritium source whose decay electrons pass through a scanned electrostatic retarding field. The system’s performance is calibrated using gaseous krypton instead of gaseous tritium. There have been 10 data campaigns so far. The first two resulted in m_{\nu} < 1.1~\mathrm{eV}, while the first five combined yield m_{\nu} < 0.9~\mathrm{eV}. These limits are set at the 90% confidence level (CL). KATRIN has just reached 100M electrons in the 10th campaign. The measurement is still dominated by statistical error (at the level of about 6-to-1, statistical-to-systematic). The community can expect updated results this year, with six times more statistics and a decrease in the systematics by a factor of three. The expected sensitivity will be at the level of m_{\nu} < 0.5~\mathrm{eV}. By 2025, the expectation is to achieve m_{\nu} < 0.3~\mathrm{eV}, all at the 90% CL. KATRIN will be superseded in 2026-2027 by the TRISTAN project, whose focus will be sterile neutrino searches.
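In this filtering approach (the standard MAC-E-filter picture; my own summary, not the speaker’s words), each setting of the retarding potential U passes only electrons with longitudinal kinetic energy above qU, so the measured quantity is the integrated spectrum

    \[R(qU) \propto \int_{qU}^{E_0} \frac{dN}{dE}\, T(E, qU)\, dE,\]

where T(E, qU) is the filter’s transmission function; scanning U near the endpoint E_0 maps out the shape distortion described above.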

Beyond KATRIN, the goal is to distinguish the degenerate and hierarchical mass scenarios (i.e. whether the neutrino masses are nearly equal or widely separated). Experiments will need to push from molecular tritium to atomic tritium. The target sensitivity is at the level of m_{\nu} < 10^{-2}~\mathrm{eV}, which will allow the experiments to reach the mass range expected for the inverted hierarchy. Research and development has been launched toward these goals.

Project-8 is an example of a tritium experiment exploiting the cyclotron frequency as a proxy for the electron energy:

    \[\omega(\gamma) = \frac{\omega_0}{\gamma} = \frac{eB}{m_e + E_e},\]

where \omega_0 = eB/m_e and E_e is the electron’s kinetic energy (in natural units); the cyclotron frequency is detected by means of an external antenna. The challenge of this approach is the very high statistics needed to detect femto- to zeptowatt radiation. Project-8 is ongoing, and another experiment, QTNM (Quantum Technology for Neutrino Mass), is in the conceptual stage. Project-8 has established proof of concept and achieved m_{\nu} < 185~\mathrm{eV} in its demonstration phase. The goal is to scale a cubic-millimetre- or cubic-centimetre-scale cavity and resonator up to the cubic-metre scale; this also requires an atomic tritium source. The project will ultimately be on the same scale as KATRIN.
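Plugging in numbers (my own arithmetic, not from the talk): for B = 1 T,

    \[f_0 = \frac{eB}{2\pi m_e} \approx 28.0~\mathrm{GHz}, \qquad \gamma = 1 + \frac{18.6~\mathrm{keV}}{511~\mathrm{keV}} \approx 1.036 \;\Rightarrow\; f = \frac{f_0}{\gamma} \approx 27.0~\mathrm{GHz},\]

and a 1 eV change in electron energy shifts the detected frequency by only about 50 kHz, which illustrates why such faint signals must be tracked with very high precision.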

HOLMES, using holmium, and ECHo (using the same isotope) are examples of phonon-based approaches. The basic idea is

    \[{}^{163}\mathrm{Ho} + e^- \longrightarrow {}^{163}\mathrm{Dy}^* + \nu_{e}, \qquad {}^{163}\mathrm{Dy}^* \to {}^{163}\mathrm{Dy} + E_{C},\]

where the released calorimetric heat energy (E_C) from the de-excitation of the dysprosium is the measured quantity, and the shape of its spectrum near the endpoint carries the neutrino-mass information. This requires a microcalorimeter at cryogenic temperatures to allow detection of the heat energy (phonons). This demands a low temperature (millikelvin), small pixels with small heat capacities (at the micron scale), and high-statistics data collection.
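For scale (my own note, not from the talk): {}^{163}\mathrm{Ho} is attractive because its electron-capture Q-value, Q_{EC} \approx 2.8~\mathrm{keV}, is the lowest known, so a comparatively large fraction of events lands near the endpoint, where the spectrum is distorted by the neutrino mass in the same way as in beta decay:

    \[\frac{dN}{dE_C} \propto (Q_{EC} - E_C)\sqrt{(Q_{EC} - E_C)^2 - m_{\nu}^2},\]

modulated by the resonance structure of the dysprosium de-excitations.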

ECHo will use a metallic magnetic calorimeter. A prototype is capable of probing m_{\nu} < 150~\mathrm{eV}. ECHo-1k is completed and expects a sensitivity at the level of 20 eV. ECHo-100k is the future stage, aimed at 2 eV mass sensitivity; 100,000 pixels will be required for sub-eV sensitivity. HOLMES uses transition-edge sensors as its calorimeters. They will finalize a first array of 64 pixels this year.

Finally, PTOLEMY (using tritium) aims for 10 meV sensitivity.

We then moved on to a theoretical discussion of double beta decay [Moore]. Rapid improvements in experimental capabilities may allow this process to probe beyond-Standard-Model physics. The key relationship between the 0vbb half-life of an isotope and the effective Majorana mass (which is related to the neutrino masses) is

    \[(T^{0\nu}_{1/2})^{-1} = G_{0\nu}\, g_{A}^4 \left| M^{0\nu} \right|^2 \frac{\langle m_{\beta\beta} \rangle^2}{m_e^2}.\]

The phase-space term G_{0\nu} is very well understood from theory. The nuclear matrix element term g_{A}^4 \left| M^{0\nu} \right|^2 contains significant theory uncertainties. The effective Majorana mass term \frac{\langle m_{\beta\beta} \rangle^2}{m_e^2} is also effectively a “standard” computation, and it is through \langle m_{\beta\beta} \rangle that the normal or inverted mass ordering enters. So far, KamLAND-ZEN has the most sensitive measurement in its target isotope space and probes into the inverted hierarchy space. To push into the normal ordering space requires 10 or 100 times more data. The remaining corner of phase space into which experiments have to push is roughly the region \langle m_{\beta\beta} \rangle from 0 to 40 meV and \sum_i m_i from 50 to 125 meV; that region contains the most unconstrained space.
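Inverting the relation above (my own restatement) shows how a measured half-life limit translates into a mass limit:

    \[\langle m_{\beta\beta} \rangle < \frac{m_e}{\sqrt{T^{0\nu}_{1/2}\, G_{0\nu}\, g_{A}^4 \left| M^{0\nu} \right|^2}},\]

which also makes clear why experimental limits on \langle m_{\beta\beta} \rangle are quoted as ranges: the nuclear matrix element uncertainty enters directly in the denominator.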

There has been immense progress in nuclear matrix element theory, including recent “ab initio” calculations that demonstrate stable computations across a range of isotopes. Stability was not a hallmark of earlier approaches. A good review of progress and details is available at https://arxiv.org/abs/2207.01085. NMEs are a very challenging theory problem. The ab initio approaches come out on the low side of the previously constrained NME space, but are not final. For example, including the effect of the “contact term” in these calculations increased the NMEs, pushing them a little higher into the previously explored theory space.

The most serious question posed in the community is “what if the mass hierarchy is normal-ordered?”, which appears to make measurements more challenging. That ordering requires going beyond the next-generation “tonne-scale” experiments (e.g. LEGEND-1000, nEXO, CUPID, SNO+, NEXT, AMoRE, SuperNEMO) to programs such as THEIA, LEGEND-6000, and ORIGIN-X.

The plenary sessions closed with two “lightning talks”. The first was on IceCube’s evidence for neutrinos from the galactic plane [Sclafani], coincident with gamma ray sources previously observed in the plane. The second was on evidence for a stochastic gravitational wave background [Schmitz], the result expected from the “hum” of orbiting and colliding binary supermassive black holes in galaxies throughout the cosmos.

A key takeaway from the IceCube talk was that their measurement is challenging because they have to look “up” to see the galactic centre and plane, whereas they are more sensitive when they can look “down” through the earth and use the planet as a background filter. Water-based cubic-kilometre observatories like KM3NeT will be better suited to this task, since they look through the earth to see the galactic centre and plane. A key takeaway from the gravitational wave talk was that the background of waves observed so far exceeds what was expected from existing supermassive black hole models. This has forced a re-examination of the dynamics of these massive objects.

The parallel sessions in the afternoon were numerous, and I took a lot of notes. I focused on talks about low-temperature dark matter detectors (MAGNETO-DM [Kim], EDELWEISS [Guy], and remoTES sensors [Mukund]). The remoTES talk teased results to be presented the next day from the COSINUS experiment, which employs these remotely attached transition-edge sensors to read out phonons from a sodium iodide crystal. NaI is notoriously hard to work with, as it is hygroscopic and not amenable to the traditional crystal-etching techniques normally used to attach sensors.

I thoroughly enjoyed a detailed talk on the SNO+ program [Lozza], which put all three phases of the experiment – water, scintillator, and tellurium-loaded scintillator – in the context of how they support the 0vbb goals of the experiment. It may be the best SNO+ talk I have ever seen, and it clearly reviewed the whole science case for this program.