UMICH-2015: Simulations & Data Analysis Break-Out Session 2

Forecasting for the S4 Science Book

What S&DA do we need to do to support the science sections of the CMB-S4 Science Book?

Julian

What are the right inputs to the simulations? (A configuration sketch follows the list below.)

  • Sky model
    • CMB:
      • scalar, tensor, non-Gaussian
    • Foregrounds:
      • components
    • Self-consistency
  • Mission model
    • Instrument:
      • beams, band-passes, noise PSDs
      • detector electronics, data acquisition
    • Observation:
      • scanning strategy, flags
      • atmosphere, ground pickup
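
For concreteness, these inputs could be collected into a single, versioned configuration object, as in the minimal Python sketch below. Every field name, unit, and default here is an illustrative placeholder, not a proposed S4 interface.

    # Illustrative container for the simulation inputs listed above.
    # All field names and units are hypothetical placeholders.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SkyModel:
        cmb: List[str] = field(default_factory=lambda: ["scalar", "tensor", "non-Gaussian"])
        foregrounds: List[str] = field(default_factory=lambda: ["synchrotron", "dust", "free-free", "point sources"])

    @dataclass
    class Instrument:
        freq_ghz: List[float]       # band centres
        fwhm_arcmin: List[float]    # Gaussian beam FWHM per band
        net_uk_rts: List[float]     # per-detector NET [uK sqrt(s)]

    @dataclass
    class Observation:
        fsky: float                 # observed sky fraction
        years: float                # integrated observing time
        n_det: int                  # number of detectors

    @dataclass
    class MissionModel:
        instrument: Instrument
        observation: Observation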

What are the required output(s) of the simulations?

  • Maps
  • Covariances

What are the required analysis outputs/metrics/figures of merit?

  • Power spectra
  • Parameters
  • Uncertainties

What is the right balance between the veracity of the models used in the simulations and the tractability of their production and analysis?

  • Spectral domain
  • Map domain
  • Time domain

What resources are available over the coming year?

  • People
  • Codes
  • Cycles & storage



Forecasting using simulations: complexity vs. feasibility?

Large-scale simulations with full complexity require a long-term investment and significant resources. How can we go beyond very basic forecasting, which neglects foregrounds and systematics and defines an instrument by only a beam size, a sky coverage, and a sensitivity, without implementing the full sims? How many levels of complexity should we consider? For example:

  • basic: sensitivity with Gaussian, white, stationary noise; Gaussian beams; sky fraction (see the sketch after this list)
  • moderate: inhomogeneous noise with non-stationarity and low-frequency excess
  • advanced: model of atmospheric noise based on map differences from existing data on patches, or on a model; scanning and filtering + map-making for detector sets, scaling up as a function of the number of detectors; simple foreground subtraction with, e.g., decorrelation, NILC, or simple model fitting per pixel
  • full: as realistic as possible
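
At the basic level the noise treatment has a closed form: white noise deconvolved by a Gaussian beam (the Knox formula), with sample variance scaled by the sky fraction. A minimal Python/numpy sketch follows; the depth and beam values are placeholders, not S4 specifications.

    import numpy as np

    def basic_noise_cl(depth_uk_arcmin, fwhm_arcmin, lmax=5000):
        # White-noise power spectrum deconvolved by a Gaussian beam, in uK^2.
        ell = np.arange(2, lmax + 1)
        rad = np.pi / (180.0 * 60.0)                  # arcmin -> radians
        w_inv = (depth_uk_arcmin * rad) ** 2          # white-noise level per steradian
        sigma_b = fwhm_arcmin * rad / np.sqrt(8.0 * np.log(2.0))
        return ell, w_inv * np.exp(ell * (ell + 1) * sigma_b ** 2)

    def bandpower_sigma(cl, nl, ell, fsky):
        # Knox bandpower uncertainty: (sample + noise) variance over f_sky.
        return np.sqrt(2.0 / ((2 * ell + 1) * fsky)) * (cl + nl)

    # Placeholder numbers, not a design: 1 uK-arcmin depth, 3 arcmin beam.
    ell, nl = basic_noise_cl(depth_uk_arcmin=1.0, fwhm_arcmin=3.0)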

A few additional ideas:

  • 1/ell noise: experiments usually do not perform as well on large scales as extrapolated from the noise rms. Can we work out an empirical law, possibly based on existing experiments? (A sketch of one such parametrisation follows this list.)
  • foregrounds: separate polarization issues (mostly for primordial and lensing B-modes) from temperature issues (on small scales, for SZ and extragalactic astrophysics). Should we worry about EE and TE foregrounds for parameter estimation?
  • to what extent can we consider that we have a trustworthy model of the level of foreground residuals (so that we can treat them as a noise term in forecasting)?
  • for cluster science, we should probably consider the issues of
    • relativistic corrections
    • point source contamination in clusters
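
For the 1/ell point above, one common empirical parametrisation multiplies the white-noise spectrum by a low-ell excess, N_ell = N_white x (1 + (ell_knee/ell)^alpha), with ell_knee and alpha fit to the achieved performance of existing experiments. A sketch under that assumption, reusing the white-noise spectrum from the previous snippet; the default parameter values are placeholders, not fits to any data set.

    def excess_noise_cl(white_nl, ell, ell_knee=100.0, alpha=2.5):
        # Empirical low-ell noise excess over the white-noise level; ell_knee
        # and alpha would be fit to existing experiments, not assumed.
        return white_nl * (1.0 + (ell_knee / ell) ** alpha)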

Sky model for simulations and forecasting

A significant unknown in any forecasting for S4 is the complexity of foreground emission (the level is approximately known from current models). To go a step beyond first-order estimates, we should make simulations of the sky emission with representative foreground complexity, including (plausible / possible) surprises.
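
As a toy illustration of parametric sky modelling, the two dominant polarized foregrounds are commonly described by a power-law synchrotron spectrum and a modified-blackbody dust spectrum. The sketch below evaluates these SEDs across a set of bands (antenna-temperature units); the spectral parameters are typical literature values, not PSM outputs, and the band centres are placeholders.

    import numpy as np

    H_OVER_K = 0.0479924  # h / k_B in K per GHz

    def synch_sed(nu_ghz, nu_ref=30.0, beta_s=-3.0):
        # Power-law synchrotron spectrum in antenna temperature, scaled to nu_ref.
        return (nu_ghz / nu_ref) ** beta_s

    def dust_sed(nu_ghz, nu_ref=353.0, beta_d=1.5, t_dust=20.0):
        # Modified blackbody, nu^beta_d * B_nu(T), in antenna temperature:
        # T_A ~ nu^(beta_d + 1) / (exp(h nu / k T) - 1), normalised at nu_ref.
        x = H_OVER_K * nu_ghz / t_dust
        x0 = H_OVER_K * nu_ref / t_dust
        return (nu_ghz / nu_ref) ** (beta_d + 1.0) * np.expm1(x0) / np.expm1(x)

    # Placeholder band centres; amplitudes at the reference frequencies would
    # come from data (e.g. a PSM-like model), not from this sketch.
    bands = np.array([30.0, 95.0, 150.0, 220.0, 270.0])
    synch_scaling, dust_scaling = synch_sed(bands), dust_sed(bands)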

An effort is ongoing in the Planck collaboration to produce a final Planck Sky Model. This tool builds on 10 years of development of modelling sky emission in a parametric way, based on a large collection of data sets (maps observed at various frequencies, catalogues of objects, number counts, etc.).

A description of an early version of the Planck Sky Model can be found in Delabrouille, Betoule, Melin, et al. (2013), "The pre-launch Planck Sky Model: a model of sky emission at submillimetre to centimetre wavelengths", A&A 553, A96. See also the PSM web page: http://www.apc.univ-paris7.fr/~delabrou/PSM/psm.html



Some additional "Forecasting for S4 Science Book" Framing Questions

(John)

- How can we make these forecasts truly realistic?

  • grounded in experience / achieved performance
  • conservative regarding systematics
  • conservative regarding foreground complexity


- How important are Simplicity, Transparency, Flexibility?

  • Forecasting <--> Survey design tradeoffs for different science drivers
  • Would like to see this separately for large scale / degree scale / arcmin scale survey parameters
  • Input assumptions easily changeable to determine dependencies
  • What are the roles in this forecasting stage for
    • timestream level sims (if any)?
    • map level sims?
    • bandpower-level likelihoods / Fisher analyses? (A minimal Fisher sketch follows this list.)
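
At the bandpower level, the simplest quantitative tool is a Fisher forecast over the spectra: F_ij = sum_ell (dC_ell/dp_i)(dC_ell/dp_j) / Var(C_ell), with Var(C_ell) = 2 (C_ell + N_ell)^2 / ((2 ell + 1) f_sky) for a single spectrum. A minimal sketch; the derivatives would come from a Boltzmann code such as CAMB or CLASS.

    import numpy as np

    def fisher_matrix(dcl_dp, cl_tot, ell, fsky):
        # Single-spectrum Fisher matrix from bandpowers.
        # dcl_dp: (n_params, n_ell) numerical derivatives dC_ell/dp_i.
        # cl_tot: signal-plus-noise spectrum C_ell + N_ell.
        var = 2.0 * cl_tot ** 2 / ((2.0 * ell + 1.0) * fsky)
        return np.einsum("il,jl->ij", dcl_dp, dcl_dp / var)

    # Marginalised 1-sigma parameter errors:
    # sigmas = np.sqrt(np.diag(np.linalg.inv(fisher_matrix(...))))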


- What assumptions for external datasets? (e.g. BAO from...)




Previous Forecasting plots from S4 Snowmass

S4 Snowmass Inflation white paper (1309.5381) forecast plot:

[Figure: forecast plot from the S4 Snowmass Inflation white paper (arXiv:1309.5381), p. 12]

How do we want the S4 science book forecast to do better?

- Foreground separation approach (e.g. multifreq/multicomponent tools that can be used, while maintaining simplicity/transparency; a minimal ILC sketch follows this list)

- Delensing treatment (e.g. allow different survey depth assumptions for arcmin vs deg scales)

- Systematics treatment (e.g. include assumption of systematic uncertainty scaling with noise?)

- alternative B-mode FOMs? (minimum detectable r, n_t, etc.)
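
One simple multi-frequency tool that preserves this transparency is the internal linear combination (ILC), as used e.g. in NILC: weights that minimise the output variance while keeping unit response to the CMB. A minimal sketch of the standard weight formula; how the covariance is estimated (per pixel, needlet domain, or bandpower) is left open.

    import numpy as np

    def ilc_weights(cov):
        # Minimum-variance ILC weights w = C^{-1} a / (a^T C^{-1} a), where cov
        # is the empirical (n_freq x n_freq) frequency-frequency covariance and
        # a is the CMB mixing vector (all ones in thermodynamic units).
        a = np.ones(cov.shape[0])
        cinv_a = np.linalg.solve(cov, a)
        return cinv_a / (a @ cinv_a)

    # Cleaned map: weighted sum of the frequency maps, e.g.
    # cmb_map = np.einsum("f,f...->...", ilc_weights(cov), freq_maps)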



S4 Snowmass Neutrinos white paper (1309.5383) forecast plot:

[Figure: forecast plot from the S4 Snowmass Neutrinos white paper (arXiv:1309.5383), p. 11]

How do we want the S4 science book forecast to do better?




Wiki navigation

Return to main workshop page

Return to Simulations & Data Analysis page