Noise-resistant phase imaging with intensity correlation

Interferometric methods form the basis of highly sensitive measurement techniques from astronomy to bioimaging. Interferometry typically requires high stability between the measured and reference beams. The presence of rapid phase fluctuations washes out interference fringes, making phase profile recovery impossible. This challenge can be addressed by shortening the measurement time. However, such an approach reduces photon-counting rates, precluding applications in low-intensity imaging. We introduce a phase imaging technique which is immune to time-dependent phase fluctuations. Our technique, relying on intensity correlation instead of direct intensity measurements, allows one to obtain high interference visibility for arbitrarily long acquisition times. We prove the optimality of our method using the Cramér-Rao bound in the extreme case when no more than two photons are detected within the time window of phase stability. Our technique will broaden prospects in phase measurements, including emerging applications such as in infrared and x-ray imaging and quantum and matter-wave interferometry.


Introduction
In the field of quantitative phase imaging (1)(2)(3), interferometric methods play a crucial role in capturing phase information of a sample through the measurement of phase shifts (4). These phase shifts can provide insights into the refractive index, thickness, and structure of the sample. Consequently, phase imaging has made a significant impact on various fields such as biomedical imaging (5)(6)(7), sample-safe defectoscopy (8), wavefront sensing (9), and 3D holography (10). Interferometry (11) is central to phase imaging due to its ability to detect minute variations in optical paths. Moreover, interferometry is the basis of optical coherence tomography (12,13). Some interferometric techniques, especially those related to biology, require very low photon fluxes. In an interferometric measurement, a wave field that has interacted with an object is superposed with a reference field, and the resulting interference pattern is detected by a camera. If the object and the reference fields are mutually coherent, the time-averaged intensity on the camera is given by (3,14):

I(x, y) = I_r + I_o + 2√(I_r I_o) cos[φ(x, y) + Θ],     (1)

where I_r and I_o are the intensities of the reference and the object fields respectively, Θ is the spatially uniform phase that can be changed by introducing a delay between the two fields, and φ(x, y) is the phase profile of the object. Standard interferometric phase imaging techniques are based on the signature of φ(x, y) left in the detected intensity pattern. However, for any such method to be applicable, the object and the reference fields need to be mutually coherent and the phase Θ must be stable in time. The method is therefore vulnerable to time-dependent, uncontrollable phase fluctuations. When the phase fluctuates much faster than the detection time, the coherence between the object and reference fields is effectively lost, and no interference is observed, i.e.,

I(x, y) = I_r + I_o.     (2)
Since there is no information about φ(x, y) in this intensity pattern, standard phase imaging schemes become inapplicable. One way to avoid the effect of time-dependent phase fluctuations is to shorten the duration of the measurement (15). A short measurement time, however, reduces the amount of detected light and is therefore impractical in many cases, including, for instance, imaging of photosensitive biological specimens, which requires low-intensity light. Furthermore, in interferometric fluorescence super-resolution microscopy (16), very low-intensity light (17) often needs to be superposed. In such cases, time-dependent phase fluctuations must be avoided because of the relatively long detection time required.
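The washout can be illustrated with a few lines of simulation (the quadratic phase profile and unit intensities are illustrative assumptions, not the experimental values):

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(-1.0, 1.0, 512)     # transverse coordinate (arbitrary units)
phi = 4.0 * x**2                    # illustrative quadratic object phase phi(x)
I_r = I_o = 1.0                     # equal reference and object intensities

def frame_intensity(theta):
    """Interference pattern I = I_r + I_o + 2*sqrt(I_r*I_o)*cos(phi + Theta)."""
    return I_r + I_o + 2.0 * np.sqrt(I_r * I_o) * np.cos(phi + theta)

# Stable phase: the time-averaged pattern keeps full-contrast fringes.
stable = frame_intensity(0.3)

# Rapidly fluctuating phase: Theta is redrawn for every frame, so averaging
# many frames washes the fringes out and leaves I(x) close to I_r + I_o.
thetas = rng.uniform(0.0, 2.0 * np.pi, size=20000)
fluct = frame_intensity(thetas[:, None]).mean(axis=0)

print(stable.max() - stable.min())  # close to 4: full fringe contrast
print(fluct.max() - fluct.min())    # close to 0: fringes washed out
```

The averaged fluctuating-phase pattern converges to the featureless I_r + I_o, which is exactly the regime in which intensity-based phase retrieval fails.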
Here, we introduce a method of phase imaging that is fully resistant to time-dependent phase fluctuations as long as it is possible to detect at least two photons within a time window in which the phase Θ is stable (the phase stability time). The advantage of our method stems from the fact that it relies on intensity correlation rather than intensity measurement, which makes it fundamentally different from standard phase imaging techniques (18). Numerous state-of-the-art approaches to low-light imaging (19)(20)(21)(22)(23) rely on quantum light sources. Crucially, our technique works with easily accessible and robust classical light sources, which dramatically broadens its applicability.
The scheme of our experiment is illustrated in Fig. 1. The object field is superposed with a reference field and the resulting interference pattern is detected by a camera. A time-dependent phase fluctuation Θ(t) is introduced to the reference field. Under these circumstances, no information on φ(x, y) can be retrieved from the intensity pattern given by Eq. (2), and therefore standard phase imaging techniques become inapplicable. Our phase imaging method is resistant to time-dependent phase fluctuations, provided that the fluctuating phase is spatially invariant throughout the entire sample (24). This approach is inspired by higher-order correlation effects observed in the interference of light from independent sources (25). It relies on measuring intensity correlations of light, as in the intensity interferometry technique introduced by Hanbury Brown and Twiss (HBT) (26). The HBT method and its generalizations have been applied to a variety of light sources (27)(28)(29)(30)(31)(32)(33)(34)(35); similarly, our technique may be applied in various scenarios including, for instance, laser and thermal light.
Figure 1: For a rapidly fluctuating Θ(t), we observe the loss of interference visibility with increasing measurement time, whereas for a short measurement time the signal-to-noise ratio (SNR) is very low due to an insufficient number of detected photons (c). In contrast, in our method, the visibility remains constant for an arbitrary acquisition time, which allows the creation of high-SNR interferograms (d). Images (b) and (c) depict normalized one-photon interference fringes for the non-fluctuating and rapidly fluctuating phase scenarios, respectively. The second-order correlation interferograms shown in (d) are created using the same photon detections as the standard interferograms presented in (c). Even for a rapidly fluctuating phase, where we record only a few photons within the stability time of Θ(t), we can retrieve second-order correlation interferograms with high visibility and high SNR that contain full information about the phase profile φ(x).

We determine the correlation function between the intensities measured at a pair of points (x, y) and (x', y') (see Supplementary information S1 for details):

⟨Ĩ(x, y; t) Ĩ(x', y'; t)⟩ ∝ 1 ± (1/2) cos[φ(x, y) − φ(x', y')],     (3)

where Ĩ(x, y; t) is the instantaneous intensity measured at a point (x, y) at time t. On the right-hand side of Eq. (3), the plus (+) and minus (−) signs apply when the two points of measurement are in the same and different beam splitter outputs, respectively; we also assumed that I_r = I_o. Note that the information about the phase map of the object, which was lost in the intensity pattern [Eq. (2), Fig. 1c], reappears in the intensity correlation map [Eq. (3), Fig. 1d].
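A short simulation illustrates this point: although every frame sees a different global phase Θ, the time-averaged product of intensities keeps the cos[φ(x) − φ(x')] fringe term (the phase profile and unit intensities here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

x = np.linspace(-1.0, 1.0, 128)
phi = 4.0 * x**2                      # illustrative object phase
n_frames = 20000

# One beam-splitter output, I_r = I_o = 1; Theta is redrawn for every frame.
thetas = rng.uniform(0.0, 2.0 * np.pi, size=n_frames)
I_all = 2.0 + 2.0 * np.cos(phi[None, :] + thetas[:, None])   # frames x pixels

# Time-averaged intensity correlation map <I(x, t) I(x', t)>.
corr = I_all.T @ I_all / n_frames

# For two points in the same output, <I I'> = 4 * (1 + 0.5*cos(phi - phi')):
# the phase-difference map survives the averaging over Theta.
theory = 4.0 * (1.0 + 0.5 * np.cos(np.subtract.outer(phi, phi)))
print(np.abs(corr - theory).max())    # small statistical residual
```

The single-frame intensities average to a flat pattern, yet their correlation map carries the full phase-difference information, which is the content of Eq. (3).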
Remarkably, the second-order intensity correlation map contains the full information required to optimally reconstruct φ(x, y) in the extreme case when only two photons are detected during the phase stability time. Our strategy of reconstructing the actual phase distribution is optimal in this scenario, which we prove rigorously using estimation theory tools, namely Fisher information and the Cramér-Rao bound (36) (see Supplementary information S2 for details).

Figure 2: Michelson interferometer equipped with a 4f imaging system. Light from a 780 nm laser passes through a half-wave plate (λ/2), a quarter-wave plate (λ/4), and a polarization beam splitter (PBS) followed by an additional λ/2; in this way, we control the beam intensity and polarization. Subsequently, the beam enters the interferometer. In one of its arms, the spatial phase φ(x) is imprinted onto the beam. The mirror in the other arm adds temporal fluctuations Θ(t) caused by a piezoelectric actuator. The beams from both arms are combined at the PBS and then pass through a 4f imaging system consisting of two lenses (L1, L2), which are positioned such that the sample plane is imaged onto the camera plane. The calcite polarizer combined with the output λ/2 acts as a 50/50 beam splitter. The intensified sCMOS (I-sCMOS) camera records single photons coming from both outputs of the interferometer.
The idea demonstrated in Fig. 1 is implemented using a polarization-based Michelson interferometer equipped with an imaging system, shown in Fig. 2. We perform experiments with three different kinds of phase masks φ(x) applied to the object beam. We imprint a 1D quadratic phase profile (Fig. 3a-b) onto the beam by placing a cylindrical lens of focal length f = 1000 mm in proximity to the mirror (Fig. 2). In a slightly modified setup (see Supplementary information S3), the sample is realized using a spatial light modulator (SLM), as it can display an arbitrary phase profile. The SLM imprints 1D sinusoidal (Fig. 4a-c) and exponential (Fig. 4d-f) phases onto our object beam (see Supplementary information S3 for details). A time-dependent fluctuating phase Θ(t) is introduced in the reference arm to make the object and reference beams effectively incoherent. In order to register a very low photon flux and to minimize the exposure time, we use an intensified sCMOS camera. To get a high interference visibility within a single frame, we keep the camera exposure time (T_exp) low, such that fluctuations of Θ(t) are negligible within T_exp. For T_exp ∼ 20 ns, this leads to a very low SNR for a single frame, as it contains only a few detected photons on average. However, this does not limit our method, since we can sum many correlation maps created from different frames, even though the value of Θ(t) is different for each frame. As a consequence, we can perform quantitative phase imaging in the presence of temporal fluctuations of frequency of the order of 1/T_exp ≈ 50 MHz. This would be impossible for intensity-based interferometry: we would lose either visibility or SNR.

Results
In our experiment, each frame contains on average ∼15 photons summed over both outputs of the interferometer. We remove temporal correlations between subsequent frames by randomly permuting the order of frames before further processing; this does not change the performance of our method but allows us to simulate the conditions in which the phase Θ(t) is independently sampled for each consecutive frame. Such conditions occur when the noise frequency is larger than the inverse of the delay between subsequent exposures. It is then impossible to retrieve phases using standard interferometric methods: averaging recorded intensities over multiple subsequent frames would result in a loss of the visibility of the interference fringes. In contrast, we average correlations of detected photons' positions without any loss of the phase information. Such averaging over multiple frames reproduces the correlation function [Eq. (3)], from which we can retrieve the phase profile using standard digital holography methods; Fourier off-axis holography (37) has been used in our case. This analysis mechanism is the essence of our noise-resistant phase imaging technique. See Supplementary information S4 for details.
To quantitatively assess the precision of our phase reconstruction method, we measure a 1D quadratic phase object and compare the mean squared error (MSE) obtained in the experiment and in a simulation with the fundamental lower bound for the MSE given by the inverse of the Fisher information, namely the Cramér-Rao bound (C-R bound); see Supplementary information S2 for details. The measured coincidence map (Fig. 3(a)) consists of approximately 10^7 registered photon pairs. To simulate the most extreme case, we use only two randomly chosen photons from each frame for further processing. We estimate the phase profile shape using the collected data and compute the associated MSE, which is the average squared distance between the real (ground truth) and measured phase (Fig. 3(b)). As we show in Fig. 3(c), the MSE drops with the total number of measured photons and eventually reaches the theoretical minimum. This proves that our method of phase estimation is optimal when at most two photons are measured during the phase stability time; notice that this is the most extreme limit in which one can gain any information about the phase profile.
To show the versatility of our method, we also reconstruct SLM-encoded phase profiles φ(x) of different shapes: sinusoidal and exponential. Figures 4(a/e), (b/f), and (c/g) represent the measured hologram, the retrieved phase, and the error per pixel for the sinusoidal/exponential phase φ(x). The extra systematic errors of phase recovery in both cases are caused by SLM imperfections.

Figure 3: The phase reconstructed from experimental data with error bars is also shown in (b). The visibility of the fringes in the correlation map (a) is equal to v²/2, where v = 0.6 (the theoretical maximum with classical light is v = 1, which leads to 1/2 visibility of the second-order interference). The total number of coincidences detected in the experiment is ∼10^7. By randomly removing part of the collected signal, we can check how the MSE associated with the phase reconstruction scales with the mean number of photons detected per pixel during the whole experiment (c). The MSE from the experiment is then compared with the MSE obtained using a simulated hologram with the same parameters as in the experiment. We calculate the fundamental C-R lower bound on the MSE, assuming the visibility of the hologram fringes to be equal to 0.6²/2 (as in our experiment). When no noise apart from shot noise is present (as in the simulation), our method saturates this fundamental limit for a large enough (∼5 × 10^4) number of photons detected per pixel. Other possible noise sources, e.g. camera dark counts, may slightly affect the MSE obtained experimentally.

Discussion
We have demonstrated complete retrieval of phase patterns in the presence of time-dependent, high-frequency phase fluctuations using spatial intensity correlation measurements. Since this time-dependent phase fluctuates much faster than the detection time, the reference and the object fields can be considered mutually incoherent (38). Consequently, our method is applicable to the case when the reference and object fields are generated from independent sources (25,31,39,40). Although we performed the experiment with laser light, our method is applicable to optical fields characterized by different statistics, such as thermal light. What is more, our technique provides a quantitative phase image and an amplitude image simultaneously. The latter can be easily accessed by summing the intensity correlation interferogram over one of its dimensions. We stress that the optimality of the presented method is proven using the C-R bound (36): all the spatial phase information contained in the detected photons is retrieved with our technique.
High temporal resolution (short exposure time T_exp) is necessary for overcoming the problem of rapidly fluctuating temporal phases. In our experiment, we used an I-sCMOS camera, which allows us to achieve T_exp ∼ 20 ns, corresponding to a maximal tolerable phase noise frequency of 50 MHz. However, our method is not limited to this type of camera and can be implemented using various high-temporal-resolution detection platforms. Because of the high quantum efficiency, high temporal resolution, and low noise level of recent single-photon avalanche diode (SPAD) array technology (41), our method can also be implemented with SPAD arrays. The technique works both in the photon-counting regime and with less accurate intensity measurements, yet it is most remarkable in cases where registering more than two photons per phase stability time is rare. Our method can be readily generalized to two-dimensional spatial phase profiles by creating higher-dimensional correlation maps. It also allows for implementation in different degrees of freedom, such as temporal or spectral, allowing the creation of joint probability maps for photon detection times or their detected wavelengths. It is also possible to incorporate an additional degree of freedom into a measurement, measuring, for instance, joint temporal-spatial correlation maps.
Equation (3) is only exact when all values of Θ have the same probability of appearing during the time interval in which the whole measurement is performed. To satisfy this condition for arbitrary temporal phase noise, it is enough to add a random, uniformly distributed signal oscillating between 0 and 2π to the unknown phase fluctuations Θ(t).
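A quick numerical check of this randomization trick (a minimal sketch; the Gaussian model of the "natural" phase noise is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200000

# Strongly non-uniform "natural" phase noise: Theta concentrated near 0.
theta = rng.normal(0.0, 0.2, n)

# Add an independently generated phase, uniform on [0, 2*pi).
total = np.mod(theta + rng.uniform(0.0, 2.0 * np.pi, n), 2.0 * np.pi)

# The histogram of the total phase modulo 2*pi is flat, as Eq. (3) requires.
hist, _ = np.histogram(total, bins=16, range=(0.0, 2.0 * np.pi), density=True)
print(hist)  # all bins close to 1/(2*pi)
```

Adding an independent uniform phase makes the sum uniform modulo 2π regardless of the statistics of the original fluctuations, which is why the trick works for arbitrary temporal phase noise.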
Our method opens up possible applications in quantitative phase imaging under low light conditions for microscopy as well as for fundamental research.Unbalanced interferometers, such as the ones used in the time-bin encoding, could be of particular interest, as our method enables the use of additional degrees of freedom (for multi-dimensional information encoding) while filtering out phase fluctuations arising from unmatched optical paths.
Moreover, X-rays and matter waves (42,43), because of their shorter wavelengths, require much tighter alignment and better mechanical stability to interfere.Since our technique is phase noise resistant, it holds potential for phase-sensitive imaging using X-ray and matter waves (44,45).
It is of particular interest for future work to investigate the possibility of in-situ correlation measurement using nonlinear effects, such as second-harmonic generation or two-photon absorption. It could be realized by creating a setup superimposing an original fringe pattern and one rotated by 90 degrees. A nonlinear readout can act as a coincidence detector (46) and therefore allow averaging on the camera chip, fulfilling the requirement for fast measurement by instantaneous coincidence detection, thereby allowing an almost unlimited noise bandwidth rejection.

Acknowledgments. This work was carried out within the project 'photon correlation measurements for quantum metrology and super-resolution microscopy', co-financed by the European Union under the European Regional Development Fund (POIR.04.04.00-00-3004/17-00). JS acknowledges that this research was funded in part by the National Science Centre, Poland, grant number 2022/45/N/ST2/04249.

S1: Statistical model of the experiment
Two cameras are set at the two outputs of the interferometer, and each of them consists of the same number of pixels (n_pix). The sample area with phase φ_i is imaged onto pixel number i on both cameras. Only two photons are received per stability time of the interferometer phase. A single measurement consists of the detection of these two photons. The output of a single measurement is a pair (i_{+/−}, j_{+/−}). The numbers i, j stand for the numbers of the pixels in which the photons were detected, whereas the indices + or − indicate in which of the two outputs the corresponding photon was measured. The probability of measuring a single photon in a pixel i_{+/−} is:

p(i_{±}, Θ) = Ñ I_i [1 ± v cos(φ_i + Θ)],     (1)

where Ñ is a normalization factor, v is the interferometer visibility, Θ is an extra, spatially uniform, possibly fluctuating phase, and I_i is the intensity of the beam illuminating the phase mask in the area corresponding to pixel i. The phase Θ is stable within the time of detection of a single photon pair; its value for each pair is independently drawn from the continuous uniform probability distribution p(Θ) = 1/(2π) for Θ ∈ [0, 2π). We do not have access to the randomly chosen value of Θ, so the observed probability of measuring the pair (i_{+/−}, j_{+/−}) in a single frame is:

p(i_{±}, j_{±}) = ∫₀^{2π} dΘ p(Θ) p(i_{±}, j_{±}, Θ),     (2)

where p(i_{+/−}, j_{+/−}, Θ) = p(i_{+/−}, Θ) p(j_{+/−}, Θ) is the joint probability distribution of measuring the pair (i_{+/−}, j_{+/−}) with a fixed value of Θ. From Eq. 2, we obtain the formulas:

p(i_{+}, j_{+}) = p(i_{−}, j_{−}) = N I_i I_j [1 + (v²/2) cos(φ_i − φ_j)],     (3)

p(i_{+}, j_{−}) = p(i_{−}, j_{+}) = N I_i I_j [1 − (v²/2) cos(φ_i − φ_j)],     (4)

where N is a new normalization factor. Notice that Eq. (3) from the main text is retrieved from the two above equations after substituting v = 1 and neglecting the normalization factor. The above equations are the starting point for further inference about the maximal precision of the measurement. Full information about every single measurement is included in the dependence of the probability p(i_{±}, j_{±}) on the estimated parameters φ_i.
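The average over the uniformly distributed Θ that turns the single-photon probabilities into the pair probabilities can be carried out explicitly; for a same-output pair it reads:

```latex
\int_0^{2\pi} \frac{d\Theta}{2\pi}
  \bigl[1 + v\cos(\varphi_i + \Theta)\bigr]\bigl[1 + v\cos(\varphi_j + \Theta)\bigr]
  = 1 + \frac{v^2}{2}\cos(\varphi_i - \varphi_j),
```

since the single-cosine terms average to zero and the product-to-sum identity cos A cos B = ½[cos(A − B) + cos(A + B)] leaves only the Θ-independent term ½cos(φ_i − φ_j). For an opposite-output pair, one of the brackets carries a minus sign in front of v, which flips the sign of the fringe term.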

S2.1 Rapid fluctuations regime
In order to calculate the maximal precision of estimation of the parameters φ_i, the Fisher information (FI) matrix F will be calculated. Then the covariance matrix of any set of unbiased estimators φ̂_i is lower bounded by the inverse of the FI matrix [?].
There are 4 different types of events which can occur during one experiment: two photons may be detected in one output (+ or −) or in different outputs (we distinguish between +− and −+). Since we can distinguish between these 4 types, the FI is the sum of the FI matrices for all event types:

F = F_{++} + F_{−−} + F_{+−} + F_{−+}.

From equations 3 and 4 we can simply conclude that F_{++} = F_{−−} and F_{+−} = F_{−+}.
Let us now focus on the calculation of the F_{++} matrix.
In order to simplify the formulas, we abbreviate p(i, j) ≡ p(i_{+}, j_{+}) in this subsection. The elements of F_{++} can be written in the following form:

F^{++}_{kl} = Σ_{i,j} [∂p(i, j)/∂φ_k] [∂p(i, j)/∂φ_l] / p(i, j).

Moreover,

∂p(i, j)/∂φ_k = −N I_i I_j (v²/2) sin(φ_i − φ_j) (δ_ik − δ_jk),

where δ_ij is a Kronecker delta. If k ≠ l, then for any m we have δ_mk δ_ml = 0, so the non-diagonal matrix elements are:

F^{++}_{kl} = −(v⁴/2) N I_k I_l sin²(φ_k − φ_l) / [1 + (v²/2) cos(φ_k − φ_l)],   k ≠ l.

With the help of the equality (δ_jk − δ_ik)² = δ_jk + δ_ik − 2δ_ik δ_jk, we can obtain the diagonal terms of F_{++}:

F^{++}_{kk} = (v⁴/2) N I_k Σ_j I_j sin²(φ_k − φ_j) / [1 + (v²/2) cos(φ_k − φ_j)].

For any function f, we denote by ⟨f(φ_i, I_i)⟩_i = (1/n_pix) Σ_i f(φ_i, I_i) the mean value of the function over all pixels. From now on, we assume that the number of pixels is large and that each phase in the sample occurs with the same frequency. What is more, the intensity I_i of the illuminating beam is assumed to change slowly compared to the change of the phase φ_i; in other words, many different phases occur in a region with approximately constant intensity. From these assumptions, we obtain the equality ⟨f(φ_i, I_i)⟩_i = ⟨f(φ_i, Ī)⟩_i, which is true when f is linear in its second argument I; Ī stands for the mean intensity of the illuminating beam. Using the above assumptions, the sum over j in the diagonal terms can be replaced by n_pix times the average over the uniformly distributed phases, evaluated at the mean intensity Ī. Let us now calculate the normalization factor N. We use the fact that the sum of the probabilities of all events must be equal to one.

Using equations 3 and 4 and the fact that the cosine terms average to zero for uniformly distributed phases, the normalization condition reduces to a double sum over pixels that factorizes as Σ_{i,j=1}^{n_pix} I_i I_j = (Σ_i I_i)² = n_pix² Ī², which fixes the normalization factor N. We have thus calculated the F_{++} matrix, which is obviously the same as the F_{−−} matrix, because the formulas for the probabilities in both cases are the same. An analogous calculation shows that also F_{+−} = F_{−+} = F_{++}. Using the additivity of the FI, we obtain F = 4F_{++}. This is the FI matrix associated with the measurement of a single frame. If a total number of n_phot photons is detected in the experiment (which means n_phot/2 independent photon pairs), then, from the Cramér-Rao bound, the covariance of any unbiased estimator is lower bounded by the inverse of (n_phot/2)F. In general, an estimator which satisfies the above inequality may not exist; however, it is possible to get arbitrarily close to the bound if the number of measurements is large enough. That means that the inequality becomes an equality as n_phot → ∞. To simplify the calculations we also use the inequality (F⁻¹)_{kk} ≥ 1/F_{kk}, which is true for every Hermitian, positive-definite F. This inequality is not saturated in general, especially when the non-diagonal terms of F are significant. However, in our case, the non-diagonal terms are asymptotically n_pix times smaller than the diagonal terms (n_pix is also the size of the F matrix). It may be proven that, for such scaling of the non-diagonal terms with the size of the matrix, the inequality becomes saturable as n_pix → ∞. Using both of the above inequalities, we obtain the bound in the intuitive form:

Δ²φ_k ≥ 1/[(v⁴/4) n_k] = 4/(v⁴ n_k),     (25)

where the value n_k = n_phot I_k/(n_pix Ī) may be interpreted as the expected number of photons detected in pixel number k. From this form of the inequality, it is clear that the accuracy of measuring a particular phase depends directly on the number of photons interacting with the measured area.

S2.2 Comparison with slow fluctuations regime
Let us compare our result with the phase estimation precision limit for an interferometer with a slowly fluctuating phase Θ. First of all, notice that we cannot beat the accuracy achievable in the situation in which the extra phase Θ is known for all detected photons: the information we get with unknown Θ is always smaller, even if the stability time of the interferometer is longer. If the Θ values are known, each single-photon detection can be treated as an independent event (which was not the case in the previous section). Let us calculate the FI matrix for a single-photon detection with fixed Θ. A single measurement is then fully described by the probability distribution from equation 1, and the one-photon FI matrix is

F^{(1)}_{kl}(Θ) = Σ_i Σ_{s=±} [∂p(i_s, Θ)/∂φ_k] [∂p(i_s, Θ)/∂φ_l] / p(i_s, Θ).     (26)

From equation 26 it is clear that all non-diagonal terms of the F^{(1)} matrix are equal to zero. This is because we obtain information about the phase φ_i only upon detection of a photon in the pixel i_{+/−}. To make this case similar to the case described in the previous section, let us assume that Θ fluctuates and each value of Θ appears with the same frequency (the difference being that Θ fluctuates slowly and we know its value). Then the mean FI for a single measurement is

F̄^{(1)}_{kk} = (1/2π) ∫₀^{2π} F^{(1)}_{kk}(Θ) dΘ,

where the formula Ñ = 1/(n_pix Ī), obtained from the normalization condition, is used. If we define n_k = n_phot I_k/(n_pix Ī) as in the previous section, we obtain the best possible accuracy of measuring each phase φ_k:

Δ²φ_k ≥ 1/(v² n_k).     (30)

Equation 30 is very similar to equation 25; the only difference is that the term v⁴/4 is substituted by the term v². That means that having only two photons per phase stability time leads to a decrease of the effective visibility of the interferometer from v to v²/2, compared to the slowly fluctuating case, in which we can assume that we know the value of Θ.

S3: Experimental setup details
Our experimental setup comprises a polarization-based Michelson interferometer equipped with a 4f imaging system. As a light source, we use a diode laser at a wavelength of 780 nm coupled to a single-mode fiber. At the output of the fiber, for intensity and polarization control, the beam passes through a half-wave plate, a quarter-wave plate, a polarizing beam splitter (PBS), and another half-wave plate, and then enters the interferometer. Each of the two paths in the interferometer is encoded with an orthogonal polarization. In order to imprint different kinds of phase profiles φ(x) on the object beam, we build two slightly modified setups: one with a cylindrical lens placed in front of the mirror in the horizontally polarized light beam path in the interferometer, while in the other setup we replace the mirror in the same path with a spatial light modulator (SLM). The interferometric mirror in the reference arm is given a phase fluctuation by attaching it to a piezoelectric actuator driven by a ramp function signal. We perform experiments with three different kinds of phase masks applied to our object beam. Our first configuration imprints a one-dimensional quadratic phase profile onto the beam by placing a cylindrical lens of focal length f = 1000 mm in proximity to the mirror (Fig.
2 in the main text). Additionally, in our second configuration with the SLM (HOLOEYE PLUTO) as a phase mask, we can display any arbitrary phase profile. As an example, we imprint one-dimensional exponential and sinusoidal phases onto our object beam. Since the SLM is efficient for only one polarization of light (in our case horizontal), we place the SLM in the horizontal path of the interferometer, and the reflected beam with a given phase mask (the object beam) passes through a λ/2 plate. We introduce a time-dependent phase fluctuation Θ(t) in the reference arm (the vertically polarized beam path in the interferometer) to make it effectively incoherent with the object beam. This is realized with a piezoelectric actuator driven by a ramp at 1.234 Hz; this should not be confused with the maximal noise frequency for which our method works. Both the object and reference beams are combined on the polarizing beam splitter (PBS). At the output of the interferometer, we have a 4f imaging system consisting of lenses L3 and L4, of focal length 200 mm each, to image the SLM plane (phase plane) onto the intensified sCMOS camera (I-sCMOS, with the image intensifier from Hamamatsu V7090D-71-G272 and sCMOS from Andor Zyla). To observe interference, the orthogonally polarized object and reference beams are required to be indistinguishable; to achieve this, we rotate the polarization of both beams by 45 degrees with a half-wave plate and perform the projective measurement in the original basis with a calcite crystal. Here, the calcite along with the waveplate acts as a 50/50 beam splitter. This mixes the light from both outputs and allows us to observe interference in both outputs of the splitter. The I-sCMOS camera records single photons at both outputs of the interferometer in two regions. In order to get high interference visibility within a single frame, we keep the camera exposure time low, such that fluctuations are negligible within one camera frame. The use of a short exposure time of the I-sCMOS, on the single-nanosecond timescale, gives the method stability and resistance against fluctuations of up to tens of MHz. The interference visibility is slightly reduced due to imperfect imaging caused by the path length difference in the calcite. We collect the data at a frame rate of 200 Hz.

S4: Data analysis
In this section, we describe how to obtain the final phase profiles from the raw experimental data. We first describe the standard approach for a non-fluctuating phase Θ, which we treat as a verification of the ground truth. We then follow with the analysis of the phase-fluctuating case and a quantitative analysis of the reconstruction precision.

S4.1 One dimensional non-fluctuating phase
For the ground-truth measurements, we stabilize the interferometer with a box to reduce the airflow in the setup and use a laser beam to record bright interference fringes in a single shot. We record these fringes using a standard CMOS camera (Matrix Vision mvBlueFOX-IGC). We then block the interferometer paths, one at a time, to record the beam profiles. We use these profiles to normalize the interference fringes and obtain an interferogram not modulated by the input beam profile.
We add padding: we increase the size of the image from n_pix by n_pix pixels to 3n_pix by 3n_pix pixels by adding an n_pix-pixel ramp on each side of the array, running from the image's edge value to zero (see the NumPy documentation of numpy.pad with mode='linear_ramp' and end_values=0 for a full description). Adding padding increases the available Fourier space and allows us to get a smoother phase retrieval. The normalized and padded interferogram is then converted to Fourier space using a fast Fourier transform (FFT). We then extract half of the data, removing the low frequencies in the middle. This is a common practice for phase retrieval [?] and allows us to retrieve the phase directly: by inverting the FFT we get the phase as the argument of the complex numbers in the resulting map.
The reconstructed phase is still wrapped (its argument lies in the range from 0 to 2π), so the phase needs to be unwrapped; we perform this using the SciPy library in Python. At the end of the process, we get a two-dimensional phase profile, and since the analyzed phase is one-dimensional, we average the retrieved phase within the area of sufficient signal strength over one of the dimensions. We end up with a one-dimensional phase profile that was created on the basis of a standard, well-established interferometric method and that can be treated as the basis for further experiments.
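The pipeline described above (padding, FFT, sideband selection, inverse FFT, unwrapping) can be sketched in a minimal 1D form; the carrier frequency, the 10-bin DC cut-off, and the test phase are assumptions of this illustration, not parameters of the actual experiment:

```python
import numpy as np

n_pix = 256
x = np.arange(n_pix)
phi = 16.0 * np.pi * (x / n_pix - 0.5) ** 2        # illustrative quadratic phase
carrier = 2.0 * np.pi * 0.2 * x                    # off-axis carrier, 0.2 cycles/pixel
interferogram = 1.0 + 0.8 * np.cos(carrier + phi)  # normalized fringes, v = 0.8

# Pad with a ramp to zero on both sides, as with numpy.pad(mode='linear_ramp').
padded = np.pad(interferogram, n_pix, mode='linear_ramp', end_values=0)

# Keep one sideband only: zero out the negative half of the spectrum and the
# low frequencies around DC (the 10-bin cut-off is an assumption of this sketch).
spec = np.fft.fftshift(np.fft.fft(padded))
half = np.zeros_like(spec)
center = spec.size // 2
half[center + 10:] = spec[center + 10:]

# Back-transform, crop the original region, remove the known carrier,
# and unwrap the argument to recover phi(x).
field = np.fft.ifft(np.fft.ifftshift(half))[n_pix:2 * n_pix]
phase = np.unwrap(np.angle(field * np.exp(-1j * carrier)))
phase -= phase[n_pix // 2] - phi[n_pix // 2]       # fix the global phase offset

print(np.abs(phase - phi)[32:-32].max())           # small away from the edges
```

The same steps apply in 2D, where the sideband is selected in a two-dimensional Fourier plane and the unwrapping is performed with a 2D algorithm.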

S4.2 One-dimensional fluctuating phase
To create the joint probability map, we record the positions of all photons. We consider only one-dimensional cases, so it is enough to know the position along the direction in which the phase changes. We extract these positions and create a square array with dimensions n_pix by n_pix, where n_pix is the number of pixel rows of the single-photon camera. This square array is our correlation map: its element with coordinates (i, j) contains the number of frames in which a pair of photons, one at position i and another at position j, was detected. In principle, two photons detected per frame are enough to create such a correlation map, given a sufficient number of measured frames.
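The accumulation of the correlation map can be sketched as below. The per-frame input format (a list of photon pixel positions) is our assumption for illustration.

```python
import numpy as np

def correlation_map(frames, n_pix):
    """Count, for each pixel pair (i, j), the frames with a photon at both.

    `frames` is a list of per-frame photon positions (pixel-row indices).
    """
    g2 = np.zeros((n_pix, n_pix), dtype=np.int64)
    for positions in frames:
        # All photon pairs within one frame contribute a coincidence.
        for a in range(len(positions)):
            for b in range(a + 1, len(positions)):
                i, j = positions[a], positions[b]
                g2[i, j] += 1
                g2[j, i] += 1  # keep the map symmetric
    return g2

# Three example frames: two with a (2, 5) coincidence, one with three photons.
g2 = correlation_map([[2, 5], [5, 2], [1, 4, 7]], n_pix=8)
```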
We create correlation maps for all frames and sum them. We finally obtain an average joint probability map that contains the phase information φ(x):

p(x, x′) ∝ I(x) I(x′) [1 + (v²/2) cos(φ(x) − φ(x′))],

where v²/2 is the visibility of the second-order interferogram fringes. We use the notation v²/2 because, in an ideal scenario, a visibility v of the first-order fringes (observed in the non-fluctuating case) leads to a visibility v²/2 of the second-order interference observed in the fluctuating case. Note that the above equation is equivalent to Eq. (3) from the main text; the difference is that here we consider a 1D phase profile and imperfect interference visibility.
Please note that at this point we can use exactly the same phase retrieval method as in the non-fluctuating case, since the phase information is imprinted in a form resembling first-order interference: cos[φ(x, y) − φ(x′, y′)]. We use the same data analysis as in Section S4.1 of this document. The only difference is that we normalize the joint intensity map using the sum of all interferograms, instead of the sum of the separate measurements, because the phase information in the first-order interferogram is lost (please refer to the normalization steps in Fig. 2).

S4.3 Quantitative error analysis
We want to assess the accuracy of our phase imaging method in the most extreme scenario, in which only two photons are collected in each frame. Phase fluctuations are fast compared to the time between subsequent frames, but the fluctuating phase Θ is stable within a single frame; the same assumptions were made in S1 and S2.
First, we need to construct a figure of merit assessing the phase retrieval accuracy. Let us assume that our goal is to estimate the phase profile of the part of the object corresponding to n pixels, with numbers m, m + 1, ..., m + n − 1. The true phase at pixel k is φ_k and its estimate is φ̂_k. The MSE associated with the phase estimation in this region is

MSE = (1/n) Σ_{k=m}^{m+n−1} (φ_k − φ̂_k)²,

where the difference φ_k − φ̂_k is always taken modulo 2π, such that φ_k − φ̂_k ∈ [−π, π].
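The wrapped MSE can be computed as below; mapping the difference through angle(exp(i·d)) is one convenient way to reduce it modulo 2π into [−π, π].

```python
import numpy as np

def wrapped_mse(phi_true, phi_est):
    """Mean squared phase error with differences taken modulo 2*pi."""
    # angle(exp(i*d)) maps any real d into the interval [-pi, pi].
    d = np.angle(np.exp(1j * (phi_true - phi_est)))
    return np.mean(d**2)

# A full 2*pi offset contributes no error:
mse_offset = wrapped_mse(np.array([0.1, 0.2]), np.array([0.1 + 2 * np.pi, 0.2]))
# A pi/2 error contributes (pi/2)**2:
mse_quarter = wrapped_mse(np.array([0.0]), np.array([np.pi / 2]))
```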
The theoretical minimum of the MSE can be obtained using the C-R bound as described in S2: it is enough to insert Eq. 25 into Eq. 32. This theoretical minimum is depicted in Figure 3 as a solid black line.
To check the optimality of our phase reconstruction method, we compare this minimum with the MSE obtained in a simulation. For a given beam profile I_i, phase profile φ_i, and visibility v, we randomly sample n_phot/2 pairs of photon x positions from the probability distribution described by Eq. 4. Therefore, in our simulation there are only two sources of noise: non-unit interference visibility and the shot noise resulting from the finite number of photon pairs. The number of photon pairs is two times smaller than the total number of photons because one pair is created from each two-photon frame; this makes all pairs statistically independent. After creating an artificial hologram, we perform the whole phase reconstruction procedure described above to obtain the phase estimators φ̂. Then, we select a region of interest in the middle of the beam, since we do not want to take into account regions where the beam intensity is very low. Finally, we compute the MSE associated with the phase estimation in this region using Eq. 32. We repeat this procedure for different total numbers of photons n_phot, which leads to different mean numbers of photons per pixel. We can observe in Figure 3 that the performance of our phase reconstruction method is optimal for a large enough number of photons.
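The pair-sampling step of the simulation can be sketched as follows. This is our own illustration: the joint distribution is taken proportional to I(x) I(x′) [1 + (v²/2) cos(φ(x) − φ(x′))], and the grid size, beam shape, and quadratic phase are example values, not the paper's parameters.

```python
import numpy as np

def sample_pairs(intensity, phi, v, n_pairs, rng):
    """Draw independent photon pairs (x, x') from the joint distribution."""
    # Joint probability p(x, x') on the pixel grid.
    p = np.outer(intensity, intensity) * \
        (1 + 0.5 * v**2 * np.cos(phi[:, None] - phi[None, :]))
    p = p / p.sum()
    # Sample flattened indices, then convert back to (x, x') pairs.
    flat = rng.choice(p.size, size=n_pairs, p=p.ravel())
    return np.unravel_index(flat, p.shape)

rng = np.random.default_rng(0)
n = 32
x = np.linspace(-1, 1, n)
i_beam = np.exp(-x**2 / 0.3)
phi = 2.0 * x**2                 # quadratic phase, as for a thin lens
xs, xps = sample_pairs(i_beam, phi, v=0.9, n_pairs=1000, rng=rng)
# `xs`, `xps` are pixel indices of the two photons in each sampled pair
```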
We also perform a similar quantitative analysis using experimental data. To make the comparison between theory, experiment, and simulation possible, we measure the beam intensity, the phase profile, and the visibility of the second-order interference fringes in the correlation map (v²/2). These experimental parameters are then used in the simulation and in the calculation of the C-R bound. We obtain the phase estimators φ̂_i from the experimentally measured correlation map. To obtain such a map, we randomly choose just one photon pair from each frame; the goal is to simulate the experimental conditions in which only two photons per frame are available. The ground-truth phase profile φ_i is quadratic because we use a standard thin lens as the phase mask. The parameters of this ground-truth profile are measured using the standard technique with stable phases in both arms. The experimental phase estimators φ̂_i are calculated without any assumptions about the shape of the phase profile; in particular, we do not assume that the measured profile is quadratic. The MSE for the region of interest is again computed using Eq. 32. We observe that the MSEs from experiment and simulation are very similar; they start to differ slightly only for numbers of photons per pixel of about 4 × 10⁴. This may suggest that noise sources other than shot noise start to play a significant role at this point.

Figure 1: (a) Simplified schematic of the experiment: we divide the input light into two paths, an object path and a reference path. In the object path, we introduce a spatially varying 1D phase object φ(x) that we want to image. A time-fluctuating interferometric phase Θ(t) can be introduced into the system. For a non-fluctuating phase Θ, we can measure high-visibility interference fringes (b), see Eq. 1. For a rapidly fluctuating Θ(t), we observe a loss of interference visibility with increasing measurement time, whereas for a short measurement time the signal-to-noise ratio (SNR) is very low due to an insufficient number of detected photons (c). In contrast, in our method the visibility remains constant for an arbitrary acquisition time, which allows us to create high-SNR interferograms (d). Images (b) and (c) depict normalized one-photon interference fringes for the non-fluctuating and rapidly fluctuating phase scenarios, respectively. The second-order correlation interferograms shown in (d) are created using the same photon detections as the standard interferograms presented in (c). Even for a rapidly fluctuating phase, where we record only a few photons within the stability time of Θ(t), we can retrieve second-order correlation interferograms with high visibility and high SNR that contain full information about the phase profile φ(x).

Figure 2: Experimental setup for noise-resistant phase imaging, constructed as a polarization-based Michelson interferometer equipped with a 4f imaging system. Light from a 780 nm laser passes through a half-wave plate (λ/2), a quarter-wave plate (λ/4), and a polarizing beam splitter (PBS) followed by an additional λ/2; in this way we control the beam intensity and polarization. Subsequently, the beam enters the interferometer. In one of its arms, the spatial phase φ(x) is imprinted onto the beam. The mirror in the other arm adds temporal fluctuations Θ(t) generated by a piezoelectric actuator. The beams from both arms are combined at the PBS and then pass through a 4f imaging system consisting of two lenses (L1, L2), positioned such that the sample plane is imaged onto the camera plane. The calcite polarizer combined with the output λ/2 acts as a 50/50 beam splitter. The intensified sCMOS (I-sCMOS) camera records single photons coming from both outputs of the interferometer.

Figure 3: MSE of the phase reconstruction as a function of the number of photons per pixel.

Figure 4: Experimental results for the spatial phase profiles displayed by the SLM. Measured coincidence maps (correlation functions) between the outputs of the interferometer for sinusoidal and exponential phases are shown in (a) and (d), respectively. (b) and (e) show the corresponding reconstructed phases. (c) and (f) show the errors and the inverse square roots of the intensities.

Figure 1: Experimental setup for noise-resistant phase imaging with the SLM.

Figure 2: Additional steps for processing data from a highly distorted beam: (a) a raw coincidence map, created from all frames of the measurement; (b) the coincidence map normalized with respect to the beam intensity; even low-photon-count regions carry phase information; (c) the normalized coincidence map from (b) multiplied by the Gaussian profile of the ideal beam. We add this step before the off-axis Fourier analysis to remove high-noise regions and smooth the interferogram. Please note that, despite the raw image being highly distorted by imperfections of the SLM, our phase retrieval works effectively and the normalized coincidence map has high visibility.