UMICH-2015: Simulations & Data Analysis Break-Out Session 2


Forecasting for the S4 Science Book

What S&DA do we need to do to support the science sections of the CMB-S4 Science Book?

(Julian)

What are the right inputs to the simulations?

  • Sky model
    • CMB:
      • scalar, tensor, non-Gaussian, (cosmic strings?)
    • Foregrounds:
      • components
    • Self-consistency
  • Mission model
    • Instrument:
      • beams, band-passes, noise PSDs
      • detector electronics, data acquisition
    • Observation:
      • scanning strategy, flags
      • atmosphere, ground pickup

What are the required output(s) of the simulations?

  • Maps
  • Covariances

What are the required analysis outputs/metrics/figures of merit?

  • Power spectra
  • Parameters
  • Uncertainties

What is the right balance between the veracity of the models used in the simulations and the tractability of their production and analysis?

  • Spectral domain
  • Map domain
  • Time domain

What resources are available over the coming year?

  • People
  • Codes
  • Cycles & storage

Some additional "Forecasting for S4 Science Book" Framing Questions

(John)

- How can we make these forecasts truly realistic?

  • grounded in experience / achieved performance
    • discussion:
  • conservative regarding systematics
    • discussion:
  • conservative regarding foreground complexity
    • discussion:


- What drives the need for simplicity, transparency, flexibility?

  • Forecasting <--> Survey design feedback loop needs to be fast as we explore tradeoffs for different science drivers
  • Need flexibility to separately explore large scale / degree scale / arcmin scale survey parameters
  • Input assumptions easily changeable to determine dependencies
  • Multiple parallel forecasting efforts should coordinate inputs/outputs to understand where differences are due to input assumptions vs. methods.
  • What are the roles in this forecasting stage for
    • timestream level sims (if any)?
    • map level sims?
    • bandpower level likelihoods / fisher analyses?


- Role of Data Challenges

  • not the most efficient tool for exploring tradeoffs
  • at specific points in survey definition space, can validate Fisher forecasts
  • common input sims, what is needed (a minimal assembly sketch follows this list):
    • foreground maps
    • CMB maps
    • noise / systematics maps
    • encoding experiment filtering: reobservation ("reobs") matrices?
  • separate challengers / challengees (i.e., keep those who generate the challenge sims distinct from those who analyze them)?
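
A minimal sketch of how the common inputs listed above could be assembled into a single challenge map (healpy-based; the file names are illustrative placeholders, not agreed-upon products):

  import healpy as hp

  # Component maps delivered by the challenge organizers (placeholder names).
  components = ["foregrounds_145ghz.fits", "cmb_145ghz.fits", "noise_145ghz.fits"]
  # Read I, Q, U for each component and sum them into one challenge input map.
  total = sum(hp.read_map(f, field=(0, 1, 2)) for f in components)
  hp.write_map("challenge_input_145ghz.fits", total, overwrite=True)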


- How to parameterize S4 survey specifications

  • discussion:
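
One possible starting point for the discussion, purely as a sketch: collect the survey specification in a single structure so that different forecasting codes share a common interface. All fields and values below are illustrative assumptions, not S4 specifications.

  from dataclasses import dataclass

  @dataclass
  class SurveySpec:
      # Every field and default here is an illustrative placeholder.
      fsky: float                 # observed sky fraction
      noise_uk_arcmin: float      # polarization map depth
      beam_fwhm_arcmin: float     # Gaussian beam FWHM
      ell_knee: float             # low-ell noise knee
      bands_ghz: tuple            # observing frequency bands

  deep_patch = SurveySpec(fsky=0.03, noise_uk_arcmin=1.0, beam_fwhm_arcmin=3.0,
                          ell_knee=50.0, bands_ghz=(95, 145, 220))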


- What assumptions for external datasets? (e.g. BAO from...)

  • discussion:



Forecasting using simulations: complexity vs. feasibility?

Large-scale simulations with full complexity require a long investment and significant resources. How can we go beyond very basic forecasting that neglects foregrounds and systematics and defines an instrument with only a beam size, a sky coverage, and a sensitivity, without implementing the full sims? How many levels of complexity should we consider? For example (a sketch of the "basic" level follows this list):

  • basic: sensitivity with Gaussian white stationary noise; Gaussian beams; sky fraction
  • moderate: inhomogeneous noise with non-stationarity and low-frequency excess
  • advanced: model of atmospheric noise based on map differences from existing data on observed patches, or on a model; scanning and filtering + map-making for detector sets, scaled up as a function of the number of detectors; simple foreground subtraction with, e.g., decorrelation, NILC, or simple model fitting per pixel
  • full: as realistic as possible
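
As a concrete (hedged) illustration of the "basic" level: bandpower uncertainties from the Knox formula for white noise, a Gaussian beam, and a sky fraction. The function name and inputs are assumptions to be replaced by real S4 numbers.

  import numpy as np

  def knox_errors(cl, ells, noise_uk_arcmin, beam_fwhm_arcmin, fsky):
      """Per-multipole error on C_ell from the Knox formula."""
      # Gaussian beam width in radians.
      sigma = np.radians(beam_fwhm_arcmin / 60.0) / np.sqrt(8.0 * np.log(2.0))
      # White-noise power spectrum (uK^2 sr), beam-deconvolved.
      nl = np.radians(noise_uk_arcmin / 60.0) ** 2 * np.exp(ells * (ells + 1) * sigma ** 2)
      return np.sqrt(2.0 / ((2 * ells + 1) * fsky)) * (cl + nl)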

A few additional ideas:

  • 1/ell noise: usually experiments do not perform as well on large scales as extrapolated from the noise RMS. Can we work out an empirical law, possibly based on existing experiments? (A toy parameterization follows this list.)
  • foregrounds: separate polarization issues (mostly for primordial and lensing B-modes) from temperature issues (on small scales, for SZ and extragalactic astrophysics). Should we worry about EE and TE foregrounds for parameter estimation?
  • How can we get a "trustable" model of the level of foreground residuals (to treat them as a noise in forecasting)?
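
A toy empirical parameterization of the low-ell noise excess raised in the first bullet; ell_knee and alpha are placeholders that would have to be fit to the achieved performance of existing experiments:

  def nl_low_ell_excess(nl_white, ells, ell_knee=50.0, alpha=2.0):
      # Inflate the white-noise level below ell_knee; purely illustrative.
      return nl_white * (1.0 + (ell_knee / ells) ** alpha)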



Sky model for simulations and forecasting

A significant unknown in any forecasting for S4 is the complexity of foreground emission (the level is approximately known from current models). To go a step beyond first-order estimates, we should make simulations of the sky emission with representative foreground complexity, including (plausible / possible) surprises.

An effort is ongoing in the Planck collaboration to produce a final Planck Sky Model (PSM). This tool builds on 10 years of development of parametric modelling of sky emission, based on a large collection of data sets (maps observed at various frequencies, catalogues of objects, number counts, etc.).

A description of an early version of the Planck Sky Model can be found in Delabrouille, Betoule, Melin, et al. (2013), "The pre-launch Planck Sky Model: a model of sky emission at submillimetre to centimetre wavelengths", A&A 553, A96 (http://adsabs.harvard.edu/abs/2013A%26A...553A..96D). See also the PSM web page: http://www.apc.univ-paris7.fr/~delabrou/PSM/psm.html

The PSM has been used to generate simulations for Planck described in Planck 2015 results. XII. Full Focal Plane simulations (http://adsabs.harvard.edu/abs/2015arXiv150906348P).

Here is a short overview of the components in the model, their main limitations, and plans to fix those limitations. Suggestions and prioritization would be useful.

  • Galactic emission: The low-frequency model is still based on WMAP + Haslam 408 MHz maps. Dust is based on Planck for the current version, used to make the Planck FFP8 maps (not public yet). Low-frequency foregrounds should be updated on the basis of the recent Planck analysis (http://adsabs.harvard.edu/abs/2015arXiv150606660P). All maps are limited by the angular resolution of the input observations.
    • Synchrotron: based on WMAP + Haslam 408 MHz maps, with polarization from a large-scale model of the Galactic magnetic field and of depolarization. Complemented by random fluctuations on small scales.
    • Free-free: Should be de-noised at high galactic latitude. Assumed unpolarized for the moment.
    • CO: based on ground based observations (Dame et al.), only part-sky. Assumed unpolarized for the moment.
    • Spinning dust: Should be de-noised, frequency dependence of the emission should be pixel-dependent. Assumed unpolarized for the moment.
    • Dust: Temperature and Polarization based on Planck HFI (mostly 353 GHz). Several options for scaling with frequency exist.
    • Lines other than CO are missing
  • Clusters: based on number counts + random distribution on the sky, with relativistic corrections, scaling laws to connect mass to Y. Thermal SZ, kinetic SZ (with random directions), polarized SZ effects are implemented. Missing: contamination by radio and IR sources, correlation with CIB and with lensing.
  • CIB: unresolved background of (mostly high-redshift) dusty galaxies
  • Radio sources
  • Local IR galaxies

Some questions

  • How do we go beyond very simple models (e.g. a single power-law synchrotron per pixel, a single dust greybody, ...)? What is the evidence that we should go beyond such simple models? (A sketch of such a simple model follows this list.)
  • Should we make simulations with (nasty) surprises, e.g. 0.1 to 1% polarization for spinning dust, CO, (free-free?)
  • Should we do blind analyses of simulations at this stage?
  • What complementary data do we envisage (should we work on CMB-S4 only, or CMB-S4 + LiteBIRD, COrE+, PIXIE, X-rays, other?)
  • Should we try to include in the simulated sky the signatures of all the effects we will look for in real S4 data (prioritize, set requirements on modeling accuracy)?
  • Alternatives to the PSM?
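
For reference, a sketch of the kind of "very simple" per-pixel model the first question refers to, in Rayleigh-Jeans brightness units; the spectral parameters are typical literature values, not PSM outputs:

  import numpy as np

  H_OVER_K = 0.04799  # h/k_B in K/GHz

  def synchrotron_rj(nu_ghz, amp_30, beta_s=-3.0):
      # One single power law per pixel, pivoted at 30 GHz.
      return amp_30 * (nu_ghz / 30.0) ** beta_s

  def dust_greybody_rj(nu_ghz, amp_353, beta_d=1.55, t_dust=20.0):
      # One single modified blackbody (greybody) per pixel, pivoted at 353 GHz.
      x, x0 = H_OVER_K * nu_ghz / t_dust, H_OVER_K * 353.0 / t_dust
      return amp_353 * (nu_ghz / 353.0) ** (beta_d + 1.0) * np.expm1(x0) / np.expm1(x)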



Previous Forecasting plots from S4 Snowmass / Examples of current tools

See also Wu, Errard, Dvorkin, Lee, McDonald, Slosar & Zahn, http://lanl.arxiv.org/pdf/1402.4108.pdf

S4 Snowmass Inflation white paper (1309.5381) forecast plot:

[Figure: forecast plot from the S4 Snowmass Inflation white paper (1309.5381), p. 12]

How do we want the S4 Science Book forecasts to improve on this?

- Foreground separation approach (e.g. multi-frequency / multi-component tools that can be used while maintaining simplicity/transparency)

- Delensing treatment (e.g. allow different survey depth assumptions for arcmin vs deg scales)

- Systematics treatment (e.g. include assumption of systematic uncertainty scaling with noise?)

- Alternative B-mode figures of merit? (minimum detectable r, n_t, etc.)

-

Current "r" forecasting code examples:


S4 Snowmass Neutrinos white paper (1309.5383) forecast plot:

[Figure: forecast plot from the S4 Snowmass Neutrinos white paper (1309.5383), p. 11]

How do we want the S4 Science Book forecasts to improve on this?

-

-


Who will sign up (and for what)?



Wiki navigation

Return to main workshop page

Return to Simulations & Data Analysis page