LBNL-2020: Transients: Science Pipelines Data

From CMB-S4 wiki


Please post slides!

Scribe: Cail Daley; Google Doc for notes


Goal: to bring together science, technical, and data management requirements for transient science, for example sensitivity, cadence, and data structure


--Science Overview (Gil Holder) Slides

--Lessons from stage 2/3 (Nathan Whitehorn) Slides

--Restrictions on scan strategy & cadence (Reijo Keskitalo) File:Modulated high cadence scans.pdf

--Data management ideas (Don Petravick)

--Open Discussion


Science Overview (Holder)

  • Fewer transients for CMB-S4 than LSST: <= O(1000) for any given source type
    • Shallower depth; not sensitive to stars, etc.
  • Multimessenger:
    • Neutrino events may be associated with blazars?
    • Gamma-ray multimessenger prospects aren’t great; cm/mm may be the most promising place to look
    • Possibility for (first) follow-up of LIGO events
  • Timescales: huge range (hours to years, depending on source type)
    • e.g. supernovae: may not be seen until shell slams into ISM, months later
    • “Hierarchy of timescales” that we need to track simultaneously
  • GRBs
    • Some long, on-axis GRBs show up in the mm, O(1) mJy
    • Baseline goal: nail down mm GRB rate as a function of flux
    • We could possibly see “orphan” (off-axis, high-z) population missed by gamma ray observations
    • Afterglows can peak in the mm!
    • Reverse shocks: early (< 1 day), we should catch a few within hours of burst
    • Uncertainty in GRB event rate means big science potential!
  • Requirements:
    • ~ mJy sensitivity, 2’ resolution
    • Transient pipeline:
      • Daily difference maps
      • Protocol for issuing and responding to alerts.
      • Lightcurve querying (as a function of ra, dec)
  • Discussion:
    • Joaquin: didn’t we decide not to look at external triggers?
      • We already scan the sky quickly…
      • ALMA probably better for follow-up, except for very large localization regions?
      • Gil: no need to box ourselves out of things preemptively, stay flexible
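The transient-pipeline requirements above (daily difference maps plus an alert protocol) can be sketched roughly as follows. This is a toy illustration, not a pipeline decision: the map shapes, the white-noise model, and the 5-sigma threshold are all assumptions made for the example.

```python
import numpy as np

def difference_map(daily_map, reference_map, noise_rms):
    """Signal-to-noise map of today's sky minus a reference template.
    Inputs are toy 2-D maps on a common pixel grid; a real pipeline
    would also handle masking, filtering, and astrometry."""
    return (daily_map - reference_map) / noise_rms

def find_candidates(snr_map, threshold=5.0):
    """Pixel indices exceeding the detection threshold (in sigma)."""
    return np.argwhere(snr_map > threshold)

# Toy demonstration: inject a 10-sigma source into unit white noise.
rng = np.random.default_rng(0)
reference = np.zeros((100, 100))
daily = rng.normal(0.0, 1.0, size=(100, 100))
daily[40, 60] += 10.0
candidates = find_candidates(difference_map(daily, reference, noise_rms=1.0))
print(candidates)
```

Lightcurve querying by (ra, dec) would then amount to reading the same pixel out of each day's difference map.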

Lessons from stages 2 & 3 (Whitehorn)

  • SPTpol pathfinder analysis found one GRB afterglow candidate (Whitehorn et al. 2016)
  • Technique
    • Measure map change relative to some reference template
    • Template can be constructed from small amount of data
    • Sensitive to glitches and pointing offsets that would usually average down in CMB products
  • Current SPT3G analysis:
    • just commenced
    • “strongly analogous” to S4 effort, high potential for code reuse if we don’t box ourselves in
  • Multi-messenger & Multi-wavelength:
    • Three Astronomer’s Telegrams from SPT and ACT in 2019.
    • Millimeter flares of interest for cross-correlation with high-energy datasets
  • Key Requirements:
    • Real-time calibration information
    • Low latency mapmaking, on-site if at Pole
    • Automated pipeline needs babysitting
    • Outreach to broader astronomical community (currently lacking)
  • Difficulties: gaps in light curves, field switching, post-calibration, insufficient computing
  • Discussion:
    • Chile vs. Pole: generally easier from Chile, with two caveats:
    1. Calibration may be less reliable?
    2. No on-site dedicated computing infrastructure in Chile, data analysis happens on shared cluster in US. This may cause latency issues, unless we make a dedicated queue.
      • MOU for real-time queue at NERSC?
      • Argument for diversity in computing infrastructure
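The technique described above, differencing against a template built from a small amount of data, might be sketched as an inverse-variance coadd. The interfaces below are hypothetical, a guess at the shape of such a step rather than the SPT code:

```python
import numpy as np

def coadd_template(maps, ivar):
    """Inverse-variance-weighted coadd of a few early observations,
    used as the reference template for later differencing.
    `maps` and `ivar` are stacks of per-observation sky maps and
    their inverse-variance maps on a common grid (toy interface)."""
    maps = np.asarray(maps, dtype=float)
    ivar = np.asarray(ivar, dtype=float)
    return (maps * ivar).sum(axis=0) / ivar.sum(axis=0)

def change_significance(new_map, new_ivar, template, template_ivar):
    """Per-pixel significance of (new - template); the variances of
    the two maps add in the difference."""
    var = 1.0 / new_ivar + 1.0 / template_ivar
    return (new_map - template) / np.sqrt(var)

# Toy check: three identical early maps reproduce the template, and a
# steady sky yields zero change significance everywhere.
early = [np.full((4, 4), 2.0)] * 3
weights = [np.ones((4, 4))] * 3
template = coadd_template(early, weights)
sig = change_significance(np.full((4, 4), 2.0), np.ones((4, 4)),
                          template, 3.0 * np.ones((4, 4)))
print(float(np.abs(sig).max()))
```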

Restrictions on scan strategy & cadence (Keskitalo)

  • Scan strategy presented in San Diego had high (50-70%) daily sky coverage, but unevenness in hit maps hurts statistical analyses.
  • New strategy modulates scan rate by observing azimuth to make hit map more uniform, maintains high daily sky coverage:
  • File:LBNL 2020 scan strategy.png
  • Higher scan rate as you approach turnaround (there is still a slowdown just before turnaround).
    • Diminishing returns in sky coverage as you increase scan rate near turnarounds.
  • While changing scan rate in order to even the hit map simplifies statistical map analysis, it also complicates filtering.
  • Evidence from current telescopes indicates that they can (mechanically) handle such high scan rates.
  • Discussion:
    • Conventional scan strategies are constant-velocity, or sinusoidal to avoid sharp changes at turnaround. This strategy is the opposite of both; Joaquin’s brain is broken.
    • Does this mean a spatially varying transfer function? How does a more complicated analysis pipeline weigh against uniform sky coverage?
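The tradeoff can be illustrated with a toy dwell-time calculation. The speed profiles below are invented for illustration and are not the actual modulated strategy; the only physics used is that hit density in azimuth scales as 1/speed, so a sinusoidal scan piles up hits at the turnarounds, while speeding up toward the turnarounds flattens the hit map:

```python
import numpy as np

def hit_density(az, speed_profile):
    """Dwell time (hits) per azimuth bin goes as 1/speed; normalize to
    unit mean so different profiles can be compared. `az` runs over the
    scan throw rescaled to (0, 1); this parametrization is a toy model."""
    hits = 1.0 / speed_profile(az)
    return hits / hits.mean()

az = np.linspace(0.01, 0.99, 99)  # stay off the exact turnaround points

# Sinusoidal scan: speed drops to zero at the turnarounds, so the edges
# of the throw are heavily over-observed.
sinusoidal = hit_density(az, lambda x: np.sin(np.pi * x))

# Toy "modulated" scan: speed increases toward the turnarounds, which
# compensates the extra dwell time there and flattens the hit map.
modulated = hit_density(az, lambda x: 0.5 + 2.0 * (x - 0.5) ** 2)

print(sinusoidal.std(), modulated.std())  # modulated is much flatter
```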

Data management ideas (Petravick & Vieira)

  • Each object has its own long-term, short-term, and special requirements.
  • Needed infrastructure: map sims, low-latency processing, alerts/triggers, lightcurves, thumbnails, polarization
  • Different releases for transient analysis vs. CMB analysis? Transient products would presumably be point-source filtered.
  • Both offline and online (daily) processing pipelines.
  • Probably don’t need to worry about doing alert distribution ourselves, can hook up existing multi-messenger infrastructure (can also dump real-time variability information here)
  • Data management responsible for transient products?
    • Where do we draw the line between data management (project) and transient (working group) responsibilities?
    • The transient working group may end up having its own catalogue anyway.
    • Galaxy cluster people and transient people shouldn’t need to talk to each other
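A minimal sketch of the lightcurve-querying idea above: a per-position flux history fed by the daily difference maps, queryable by (ra, dec). The class name, the tuple schema, and the flat-sky angular match are all invented for illustration, not a proposed S4 data model:

```python
import numpy as np

class LightcurveStore:
    """Toy lightcurve service: each daily difference map contributes
    one flux sample per monitored position."""

    def __init__(self):
        self.entries = []  # (ra_deg, dec_deg, mjd, flux_mjy)

    def add_sample(self, ra, dec, mjd, flux):
        self.entries.append((ra, dec, mjd, flux))

    def query(self, ra, dec, radius_deg=0.05):
        """Return (mjd, flux) pairs within a small matching radius,
        sorted by time; cos(dec) corrects the RA separation."""
        out = []
        for r, d, t, f in self.entries:
            dra = (r - ra) * np.cos(np.radians(dec))
            if np.hypot(dra, d - dec) <= radius_deg:
                out.append((t, f))
        return sorted(out)

store = LightcurveStore()
store.add_sample(150.0, -55.0, 59000.0, 1.2)
store.add_sample(150.0, -55.0, 59001.0, 3.4)   # brightening source
store.add_sample(10.0, -30.0, 59000.0, 0.7)    # unrelated position
print(store.query(150.0, -55.0))
```

A real service would index positions spatially (e.g. HEALPix pixels) rather than scanning a flat list, but the interface question, lightcurves as a function of (ra, dec), is the same.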