LBNL-2016: Systematic Errors and Forecasting for Inflation and Lensing
Inputs to Inflation Forecasting Machinery
It is essential that the forecasts of CMB-S4 science reach actually be borne out in the final results.
To ensure that this is the case, here is a proposal for the features of "S4-compliant projections":
1) Avoid ab initio calculations which start from per-detector NET, number of detectors, and nominal run time. Instead, use as input actual N_l noise spectra taken from existing experiments (derived from full-season Q/U maps), and apply a simple scaling for the relative numbers of detectors and integration time. This automatically builds in all "real world" inefficiencies. (Although one might still be concerned about correlated noise causing failure of the N_det scaling.) This was at some level agreed at the Ann Arbor meeting.
2) Assume that the ultimate proof that an apparent signal is not systematic in origin will come from the data itself: unknown systematics should be assumed to be as large as null tests can prove them not to be, i.e. as large as the noise uncertainty. We may wish to consider stronger criteria which push toward high signal-to-noise in the map, because systematics are often "obvious" when viewed in a map with s/n >= 1 per mode. This "systematics penalty" should be built into the forecasting machinery as an adjustable constraint so that its effect on the instrument/survey design can be probed.
3) All projections of course need to include realistic foreground removal (critical for inflation projections).
We would like multiple sets of forecasts so we can check and cross-compare. But they should all respect the above requirements before being taken seriously.
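The map-depth criterion in point 2 amounts to a simple per-mode comparison of signal and noise power. A minimal sketch, using placeholder spectra (none of these numbers are real forecasts):

```python
# Sketch of the point-2 "systematics penalty" criterion: require
# signal-to-noise >= 1 per mode in the map, i.e. C_l >= N_l over the
# multipoles of interest. Placeholder spectra, not real forecasts.

def snr_per_mode(cl_signal, nl_noise):
    """Per-mode signal-to-noise, C_l / N_l, for paired spectra."""
    return [c / n for c, n in zip(cl_signal, nl_noise)]

def passes_depth_criterion(cl_signal, nl_noise, threshold=1.0):
    """True if every mode reaches the s/n threshold (default s/n >= 1)."""
    return all(s >= threshold for s in snr_per_mode(cl_signal, nl_noise))

cl = [3.0e-5, 2.0e-5, 1.0e-5]   # placeholder signal C_l
nl = [1.0e-5, 1.0e-5, 1.0e-5]   # placeholder noise N_l
print(passes_depth_criterion(cl, nl))  # True: s/n >= 1 at every mode
```

Making `threshold` an adjustable parameter is the point: its effect on instrument/survey design can then be probed directly in the forecasting machinery.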
A huge amount of work has already been done (and published) at low ell, e.g. the systematics figure from the BICEP paper: https://cosmo.uchicago.edu/CMB-S4workshops/index.php/File:Sys.png
We already have apples-to-oranges problems with forecasting. The plot on the left is from the Berkeley group presentation in the Instrumentation section; the one on the right is from Victor's presentation above. The assumptions about foregrounds, bands, instrument, N_l, etc. are all different, and so are the results...
The curve on the right does not even respect bullet 2 above, which will push more strongly toward smaller sky areas. It also assumes that the foregrounds do not get any worse as the sky patch is expanded; when that is fixed, it will also push toward smaller sky.
Notes on parallel session (compiled by TC)
- Even N_l is not adequate to describe the achieved noise (Borrill)
- Could use multiple map-level noise realizations (Kovac)
- Need to know how low you can go in ell from a given platform (Bond)
- Are we limited by atmosphere or by systematics? (Pryke)
- Do we need two different kinds of forecasting: "proven" and "desired," both clearly labeled? (Kovac)
- Most importantly, need to avoid apples-to-oranges comparisons between forecasts. (Pryke)
- Need at least a baseline grid of platform configurations. (Hlozek)
- Consensus: We need to agree on the inputs to forecasting.
- What about foreground bias to delensing? (Baccigalupi & others)
- Consider forecasts for r=0 and r=some fiducial value and/or adaptive observing strategy? (Kovac)
Notes on plenary
- Proposal to split forecasts into "aspirational" and "performance-based" and to move from the former to the latter as the project evolves.
- The statistical argument for r seems to favor going to small sky areas, but how do we convince ourselves the signal is cosmological if we see one?
- If you see something, go wider. Or at least do multiple patches.
Lensing systematics and forecasting
This session will discuss the impact of systematic errors (as well as complications for forecasting) arising from the instrument, from modeling, or from the atmosphere.
Instrumental systematics and their impact on lensing measurements (Meng Su): 10+5
LSS non-Gaussianity and higher order lensing biases (M. Schmittfull): 5+5