# Progress towards optimal real delensing of S4 sims

To run the joint maximum a posteriori algorithm from Millea, Anderes, and Wandelt (2017), we need to be able to do the following things on the sphere (checkboxes indicate current progress):

- ☑ LenseFlow (forward, inverse, and gradients)
- ☑ Wiener filtering
- ☐ Gradient ϕ step
  - ☑ Direction which traces the truth
  - ☐ Acceptable step-size
- ☐ Iterating to convergence

## LenseFlow

We need LenseFlow to model lensing because it gives a lensing operation with unit determinant (which makes reparametrization easy), and because it makes taking gradients of the inverse lensing operation straightforward and numerically very stable (via something akin to backpropagation).
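Concretely, LenseFlow evolves the field through an ODE in an auxiliary time t: the field f_t obeys ḟ_t = v_t·∇f_t with velocity v_t = (1 + t ∇∇ϕ)⁻¹∇ϕ, so that f₁ is the lensed field and inverse lensing is the same integration run backwards. Here is a minimal flat-sky sketch on a periodic grid (grid units, RK4 stepping, FFT gradients — all illustrative choices, not the sphere code):

```python
import numpy as np

def fft_grad(f):
    """Spectral gradient of a periodic 2-d map, in grid units."""
    n = f.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n)
    F = np.fft.fft2(f)
    fx = np.real(np.fft.ifft2(1j * k[:, None] * F))
    fy = np.real(np.fft.ifft2(1j * k[None, :] * F))
    return fx, fy

def lenseflow(f, phi, nsteps=16):
    """Integrate df/dt = v_t . grad(f) from t=0 to 1 with RK4, where
    v_t = (I + t Hess(phi))^{-1} grad(phi).  Inverse lensing is the
    same integration run from t=1 back to 0 (negate dt)."""
    px, py = fft_grad(phi)
    pxx, pxy = fft_grad(px)
    _, pyy = fft_grad(py)

    def rhs(f, t):
        # invert the 2x2 matrix I + t*Hess(phi) pixel by pixel
        a, b, c = 1 + t * pxx, t * pxy, 1 + t * pyy
        det = a * c - b * b
        vx = (c * px - b * py) / det
        vy = (a * py - b * px) / det
        fx, fy = fft_grad(f)
        return vx * fx + vy * fy

    dt, t = 1.0 / nsteps, 0.0
    for _ in range(nsteps):
        k1 = rhs(f, t)
        k2 = rhs(f + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = rhs(f + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = rhs(f + dt * k3, t + dt)
        f = f + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += dt
    return f
```

Because the flow is a time integration of a divergence-free transport, the map from unlensed to lensed field has unit determinant by construction, which is the property we need for reparametrization.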

Once spatial gradients of maps are coded up, running the LenseFlow ODE follows immediately. On the flat sky, we computed gradients with FFTs for accuracy and stability. On the sphere, computing gradients by going to harmonic space would be too costly due to the 𝓞(N^(3/2)) scaling of spherical harmonic transforms, so we compute gradients in real space instead. We do so by fitting a 2nd-order polynomial in θ and ϕ to the 8 Healpix neighbor pixels of each pixel (I find instabilities when using a 1st-order polynomial, a bivariate polynomial, or only 4 neighbor pixels). Here are the resulting lensed spectra:
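The fit itself is just a per-pixel least-squares problem whose pseudo-inverse can be precomputed once. A small numpy sketch (the stencil offsets below are a hypothetical regular grid for illustration; in practice the (dθ, dϕ) offsets come from the actual Healpix neighbor geometry, e.g. via `healpy.get_all_neighbours` and `healpy.pix2ang`):

```python
import numpy as np

# Hypothetical (dtheta, dphi) offsets of the central pixel and its 8
# neighbors; real offsets come from the Healpix pixelization.
offsets = np.array([(dt, dp) for dt in (-1, 0, 1)
                             for dp in (-1, 0, 1)], float) * 1e-3
dth, dph = offsets[:, 0], offsets[:, 1]

# Design matrix for the 2nd-order polynomial
# p(dθ, dϕ) = c0 + c1 dθ + c2 dϕ + c3 dθ² + c4 dθ dϕ + c5 dϕ²
A = np.stack([np.ones_like(dth), dth, dph,
              dth**2, dth * dph, dph**2], axis=1)

# Precompute the pseudo-inverse once per pixel (this is the kind of
# work done in the precomputation step); afterwards each gradient is
# just a dot product with the 9 stencil map values.
pinv = np.linalg.pinv(A)

def grad_at_pixel(values):
    """values: map at the 9 stencil pixels -> (df/dθ, df/dϕ)."""
    c = pinv @ values
    return c[1], c[2]  # gradient of the fit at the pixel centre
```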

**Fig 1:**

Lensing 2% of the sky for a Q/U map pair at Nside=2048 currently takes about 1 second (after an initial ~100 second precomputation step). This is much slower than e.g. LensPix, but that's the price we pay for LenseFlow's crucial properties. In any case, it is fast enough that the bottleneck for maximum a posteriori estimation comes from the spherical harmonic transforms, not LenseFlow.

## Wiener filtering

Since the S4 delensing patch is only fsky~2%, and because the Wiener filter goes to zero far outside the mask, we can actually compute it using spherical transforms that ignore pixels well outside the mask. The best way to do this is to rotate the S4 patch to the north pole, then use the `zbounds` option in Healpix to only compute things down to a certain `z = cos(θ)`. This gets us a crucial factor of ~5-10 speed-up, without which this would really be unfeasible.
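The cutoff follows from simple cap geometry once the patch sits at the pole: a polar cap of sky fraction fsky extends to θ_max with fsky = (1 − cos θ_max)/2. A tiny numpy helper (the safety margin is a hypothetical choice on my part, not anything fixed by Healpix):

```python
import numpy as np

def zbound_for_fsky(fsky, margin_deg=5.0):
    """z = cos(theta) below which pixels can be ignored, for a polar
    cap covering sky fraction fsky.  A (hypothetical) angular margin
    keeps a band of pixels around the mask in the computation."""
    theta_max = np.arccos(1 - 2 * fsky)  # fsky = (1 - cos θ)/2
    return np.cos(theta_max + np.radians(margin_deg))

# For the 2% S4 patch: cos(θ_max) = 0.96, i.e. θ_max ≈ 16.3°,
# so only a small band of rings near the pole needs transforming.
```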

In individual cases, we can check that the Wiener filter computed with `zbounds` agrees closely enough with the correct full-sky computation. Here's a comparison of Wiener-filtered B-modes for some simulated 1μKarcmin-noise data at Nside=512, computed on the full sky vs. with `zbounds`. These are azeqview projections of the north pole:

**Fig 2:**

Here is a comparison of the spectra. For ℓ≳50, the difference is two orders of magnitude below the signal. E is not pictured here, but it's even better.

**Fig 3:**

## Gradient ϕ step

Next, we do a gradient update of ϕ. This involves the gradient of the inverse lensing operation with respect to ϕ, which is computed via backpropagation and is one of the more complex pieces of this to code up. It seems to be working well, in the sense that the gradient agrees with checks based on finite differences. For analyzing simulated data, I'm still working on just the 2% patch using `zbounds`. Here is an example of a first step:

**Fig 4:** (gradstep.png)

Visually, it traces the truth well, though there are some artifacts near the north pole. Here, nevertheless, is the cross-correlation with the truth:

**Fig 5:** (corr.png)
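The finite-difference validation of a gradient like this can be sketched generically (a toy quadratic stands in for the posterior here; none of this is the actual lensing code):

```python
import numpy as np

def check_gradient(f, grad, x, eps=1e-5):
    """Max relative error between an analytic gradient and central
    finite differences of the scalar function f at point x."""
    g = grad(x)
    g_fd = np.zeros_like(x)
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx.flat[i] = eps
        g_fd.flat[i] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return np.max(np.abs(g - g_fd)) / np.max(np.abs(g_fd))

# toy "posterior": f(x) = x·Ax/2, so grad f = Ax
A = np.array([[2.0, 0.5], [0.5, 1.0]])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
```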

Currently, however, I'm unable to get the posterior to accept a large enough step. This could be due to the artifacts. It could also be due to using `zbounds` for this step as well: while the Wiener filter goes to zero outside the mask, ϕ goes to a mean-field value there. It may be necessary to do the gradient step on the full sky, which would be computationally acceptable, as it's a sub-dominant part of the whole maximization.
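One generic way to probe what step size the posterior will accept is a backtracking (Armijo) search on the negative log-posterior. This is only a sketch under the assumption of a plain gradient-descent update, not the actual pipeline:

```python
import numpy as np

def backtracking_step(negloglike, grad, phi, step0=1.0,
                      beta=0.5, c=1e-4, maxiter=20):
    """Shrink the step until the posterior actually improves
    (Armijo condition).  Returns (new phi, accepted step size);
    step 0.0 means no acceptable step was found."""
    g = grad(phi)
    f0 = negloglike(phi)
    step = step0
    for _ in range(maxiter):
        trial = phi - step * g
        # sufficient-decrease test against the linearized prediction
        if negloglike(trial) <= f0 - c * step * np.sum(g * g):
            return trial, step
        step *= beta
    return phi, 0.0
```

Logging the accepted step size per iteration is a cheap diagnostic for whether the artifacts (or the `zbounds` mean-field issue) are what is forcing tiny steps.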