# Estimated observing efficiency for past and current telescopes


## Revision as of 12:17, 25 September 2018

*Colin Bischoff, 2018-09-25*

In this posting, I try to estimate the relative observing efficiency for telescopes at South Pole vs Chile. It is hard to get a clean answer to this question because every experiment has its own unique circumstances and there are a limited number of data points to examine.

The method I will use here is to compare a survey weight (units of μK^{-2}) calculated from published BB bandpowers to a survey weight calculated from instantaneous sensitivity and observing time. Note that survey weight is the quantity that should scale linearly with effort, so the survey weight at 150 GHz for the BK14 paper is equal to the BICEP2 2010--2012 survey weight plus the Keck Array 150 GHz survey weight for 2012--2014.

- The "bandpower weight" is easier to define unambiguously -- in a previous posting I calculated the *N*_{ℓ} and effective *f*_{sky} for many different experiments that have published BB results. From these results, I calculate the bandpower weight as *f*_{sky} / *N*_{ℓ}.
- The "tod weight" is calculated from the instantaneous sensitivity (NEQ) of the full experiment and the observing time (τ) as τ / NEQ^{2}. While this definition is quite simple, there are many possible choices for how to select τ, and it can be difficult to do this in a consistent way across experiments.

The idea behind these statistics is that tod weight describes the experiment on paper, *i.e.* "I will put together an array of detectors with NEQ = 15 μK s^{1/2} and then observe for three years". The bandpower weight describes the results that were actually obtained, including data cuts, instrument downtime, filtering, inefficiencies in sky coverage, etc, etc. Note however that I am using actual array NEQ as reported in published results to calculate tod weight, so detector yield, noisy detectors, and increased NEQ from marginal weather all get baked into the tod weight to some extent and should not lead to a discrepancy between the two statistics.

Figure 1 is a plot of the ratio of tod weight to bandpower weight for BICEP/Keck, ACTpol, ABS, and QUIET. I didn't include SPTpol or POLARBEAR because I couldn't find array NEQ numbers for those instruments. Points are color-coded according to observing band (red for 95 GHz, green for 150 GHz, and blue for 220 GHz). A larger value of the weight ratio (y-axis) means that the statistical power of the bandpower result fell short of what we might expect from the instrument sensitivity and time on sky. The tod weight is ~100 times larger than the bandpower weight for most experiments. While we all know that there are many significant factors that cause observing efficiency to be less than a naive calculation would indicate, I haven't spent any time thinking about whether there are factors of order ~10 that would be needed to make these two statistics comparable -- I wouldn't recommend reading much into the absolute scale of the y-axis, but it would be interesting to cross-check with an *ab initio* sensitivity calculator such as [BoloCalc](https://github.com/chill90/BoloCalc).

For most experiments I include two points (connected by a line) that make different choices for how to define observing time. The upper point uses a strict definition that calculates τ as the number of seconds between when the experiment first started observing and when it completed. For the lower point, I tried to count only the stretches of time that were spent in standard observing mode, *i.e.* excluding downtime for maintenance / upgrades. For ABS and BICEP2 150 GHz, I added an additional unfilled point that counts only the observing time after data cuts.

![Figure 1: survey weight ratio vs bandpower-derived survey weight](CMB_achieved_efficiency.png)

## Details for figure inputs

- BICEP/Keck 150 GHz includes points from [BK-I 2014](http://adsabs.harvard.edu/abs/2014PhRvL.112x1101A), [BK-V 2015](http://adsabs.harvard.edu/abs/2015ApJ...811..126B) (same dataset used for the BKP joint analysis), [BK-VI 2016](http://adsabs.harvard.edu/abs/2016PhRvL.116c1302B) (BK14), and the upcoming BK15 results.
  - For BICEP2, I used an array NEQ of 17 μK s^{1/2} with τ = 3 years. For the lower point, τ is reduced to 936 days (2010-02-15 to 2012-11-06, except for 2011-01-01 to 2011-03-01) to remove time spent on deployment and calibration campaigns.
  - The BK-V result adds in Keck Array data from 2012 (11.5 μK s^{1/2} for five receivers) and 2013 (9.5 μK s^{1/2} for five receivers). These each have nominal τ = 1 year. For the lower points, I deducted time spent on deployment and calibration campaigns, ending up with 240 days in 2012 and 223 days in 2013.
  - The BK14 result adds in Keck Array data from 2014 (13.3 μK s^{1/2} for three receivers). This NEQ estimate is from an internal posting and is not included in the paper. τ = 1 year, or 240 days after deducting deployment / calibration.
  - The BK15 result adds in Keck Array data from 2015 (19.5 μK s^{1/2} for one receiver). This NEQ estimate is from an internal posting and is not included in the paper. τ = 1 year, or 242 days after deducting deployment / calibration.
- BICEP/Keck 95 GHz includes points from [BK-VI 2016](http://adsabs.harvard.edu/abs/2016PhRvL.116c1302B) (BK14) and the upcoming BK15 results.
  - The BK14 result uses 2014 Keck Array data (17.4 μK s^{1/2} for two receivers). This NEQ estimate is from an internal posting and is not included in the paper. τ = 1 year, or 240 days after deducting deployment / calibration.
  - The BK15 result adds in Keck Array data from 2015 (13.5 μK s^{1/2} for two receivers). This NEQ estimate is from an internal posting and is not included in the paper. τ = 1 year, or 242 days after deducting deployment / calibration.
- BICEP/Keck 220 GHz is from the upcoming BK15 results. Array NEQ is 41.6 μK s^{1/2} for two receivers. This NEQ estimate is from an internal posting and is not included in the paper. τ = 1 year, or 242 days after deducting calibration.
- [ABS](http://adsabs.harvard.edu/abs/2018arXiv180101218K) has an array NEQ of 41 μK s^{1/2} and observed for 464 days (2012-09-13 to 2013-12-21). For the lower (filled) point, I used τ = 1634 + 209 + 1745 + 3135 = 6723 hours (Section 3 of Kusaka 2018). For the unfilled point, I used 461,237 TES-hours on Field A after cuts (bottom line of Table 3 of Kusaka 2018) and a per-TES sensitivity of 580 μK s^{1/2}.
- [QUIET 43 GHz](http://adsabs.harvard.edu/abs/2011ApJ...741..111Q) has an array NEQ of 69 μK s^{1/2} and observed for 232 days (2008-10-24 to 2009-06-13). For the lower point, I used τ = 3458 hours (Section 3 of QUIET 2011).
- [QUIET 95 GHz](http://adsabs.harvard.edu/abs/2012ApJ...760..145Q) has an array NEQ of 87 μK s^{1/2} and observed for 497 days (2009-08-12 to 2010-12-22). For the lower point, I used τ = 7426 hours (Section 3 of QUIET 2012).
- [ACTpol season 1](http://adsabs.harvard.edu/abs/2014JCAP...10..007N) has an array NEQ of 19 μK s^{1/2} and observed for 94 days (2013-09-11 to 2013-12-14). For the lower point, I multiplied τ by 63% to account for the fact that their analysis used only nighttime data for fields D1, D5, and D6 (Section 3.1 of Næss 2014).
- For [ACTpol season 2](http://adsabs.harvard.edu/abs/2017JCAP...06..031L), I kept the season 1 accumulated tod weight and added an additional 133 days (2014-08-20 to 2014-12-31) with an array NEQ of 11.3 μK s^{1/2} (inverse-quadrature sum of 23 and 12.9 μK s^{1/2} for PA1 and PA2, respectively). For the lower point, the ACTpol season 2 observing time was scaled by a factor of 45% to account for D5 and D6 nighttime data only. Louis 2017 appears to reanalyze the season 1 data with somewhat different choices than Næss 2014, so this addition of weights might not be strictly accurate.
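Two bookkeeping operations recur in the inputs above: combining per-receiver NEQs in inverse quadrature (as for ACTpol PA1 + PA2), and summing tod weights linearly across seasons. A minimal sketch of both (function names are my own):

```python
def combine_neq(neqs):
    """Combine per-receiver array NEQs (μK s^{1/2}) in inverse quadrature."""
    return sum(1.0 / n**2 for n in neqs) ** -0.5

def accumulate_tod_weight(seasons):
    """Sum tod weights (μK^{-2}) over seasons given as (NEQ in μK s^{1/2}, τ in s) pairs."""
    return sum(tau / neq**2 for neq, tau in seasons)

# ACTpol season 2: PA1 (23) and PA2 (12.9) combine to the quoted 11.3 μK s^{1/2}.
print(f"{combine_neq([23.0, 12.9]):.1f}")  # → 11.3
```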