Article

Flow Field Estimation with Distortion Correction Based on Multiple Input Deep Convolutional Neural Networks and Hartmann–Shack Wavefront Sensing

Zeyu Gao, Xinlan Ge, Licheng Zhu, Shiqing Ma, Ao Li, Lars Büttner, Jürgen Czarske and Ping Yang
1 National Laboratory on Adaptive Optics, Chengdu 610209, China
2 Key Laboratory on Adaptive Optics, Chinese Academy of Sciences, Chengdu 610209, China
3 Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
4 School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
5 Laboratory of Measurement and Sensor System Techniques, Faculty of Electrical and Computer Engineering, TUD Dresden University of Technology, 01062 Dresden, Germany
* Authors to whom correspondence should be addressed.
Photonics 2024, 11(5), 452; https://doi.org/10.3390/photonics11050452
Submission received: 3 April 2024 / Revised: 29 April 2024 / Accepted: 9 May 2024 / Published: 11 May 2024
(This article belongs to the Special Issue Challenges and Future Directions in Adaptive Optics Technology)

Abstract

The precise estimation of fluid motion is critical across various fields, including aerodynamics, hydrodynamics, and industrial fluid mechanics. However, refraction at complex interfaces in the light path can cause image deterioration and lead to severe measurement errors if the aberration changes with time, e.g., at fluctuating air–water interfaces. This challenge is particularly pronounced in technical energy conversion processes such as bubble formation in electrolysis, droplet formation in fuel cells, or film flows. In this paper, a flow field estimation algorithm with an integrated aberration correction function is proposed, which combines flow field estimation based on the Particle Image Velocimetry (PIV) technique with a novel actuator-free adaptive optics technique. Two different multi-input convolutional neural network (CNN) structures are established, with two frames of distorted PIV images and the measured wavefront distortion information as inputs. The corrected flow field results are output directly and, depending on the network structure, are of two types: dense estimation and sparse estimation. Based on a series of models, a corresponding dataset synthesis model is established to generate training datasets. Finally, the algorithm performance is evaluated from different perspectives. Compared with traditional algorithms, the two proposed algorithms achieve reductions in the root mean square value of the velocity residual error of 84% and 89%, respectively. By integrating both flow field measurement and the novel adaptive optics technique into deep CNNs, this method lays a foundation for future research aimed at exploring more intricate distortion phenomena in flow field measurement.

1. Introduction

Particle Image Velocimetry (PIV) is an important experimental technique that provides non-invasive measurement of the fluid velocity field for tracking and understanding fluid mechanics [1]. Turbulence, a complex phenomenon ubiquitous in fluid dynamics, manifests in chaotic and unpredictable flow patterns, making it essential to decipher its underlying dynamics through particle trajectory analysis. To understand turbulence, we aim to capture the trajectories of tiny particles within the fluid, enabling us to unravel the intricate motion characteristics that define the entirety of the flow field. The velocity field distribution is obtained from the difference information between two consecutive PIV images, so the fluid velocity field within the field of view must be extracted from the two PIV images by suitable algorithms [2,3]. The estimation of the fluid motion field in PIV is essentially a flexible matching process between images. Traditional methods include cross-correlation (CC) and the optical flow method. CC is the most common method in Particle Image Velocimetry. Its basic idea is to find the optimal match between local two-dimensional discrete image signals by statistical means, which is achieved through the discrete CC function of the PIV image signals. The optical flow method is an important technique for motion analysis of objects in videos or images in the field of computer vision [4,5,6]. Optical flow is defined as the change in pixel brightness produced by the projected motion of a three-dimensional object during imaging.
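For readers unfamiliar with the discrete CC idea, the following minimal Python sketch estimates the integer-pixel displacement of one interrogation window from the peak of its discrete cross-correlation, computed here via FFT. The window size, the random test pattern, and the FFT-based implementation are illustrative assumptions, not the WIDIM algorithm referenced later for comparison.

```python
import numpy as np

def cc_displacement(win_a, win_b):
    """Estimate the integer-pixel shift between two interrogation windows
    from the peak of their discrete cross-correlation (FFT-based)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    corr = np.fft.fftshift(corr)                      # put zero displacement at the centre
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    dy, dx = np.array(peak) - np.array(corr.shape) // 2
    return dx, dy

# toy check: a random particle pattern shifted by (dx, dy) = (3, -2) pixels
rng = np.random.default_rng(0)
frame1 = rng.random((32, 32))
frame2 = np.roll(frame1, shift=(-2, 3), axis=(0, 1))  # rows -2, columns +3
print(cc_displacement(frame1, frame2))                # -> (3, -2)
```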
Both methods have been widely used in Particle Image Velocimetry, but each has shortcomings rooted in its basic principles. The CC algorithm requires dividing the entire particle image into interrogation windows under a uniform linear motion assumption, which limits its spatial resolution, and peak extraction on the correlation plane introduces additional sampling errors. The optical flow method is based on the brightness conservation assumption, is extremely sensitive to noise and interfering light, and therefore lacks robustness.
To address these issues, researchers have developed deep learning-based fluid motion field estimation algorithms built on deep convolutional neural networks (CNNs). The main works include PIV-DCNN [7], PIV-FlowNetS [8], and PIV-LiteFlowNet [9]. The basic principle of these three algorithms is the same: a CNN structure takes two consecutive frames of PIV images as input and outputs the estimated velocity vector field. PIV-DCNN cascades four deep CNN modules to obtain coarse-to-fine velocity field vectors; by stacking CNN stages, the estimated vectors are continuously refined to improve measurement accuracy. This improves the measurement accuracy to a certain extent, but because the method follows the same basic principle as the CC analysis, the computation must be repeated for different windows, which greatly increases the computation time. Unlike PIV-DCNN, references [8,9] adopted FlowNetS and LiteFlowNet [10,11,12], two CNN structures proposed for optical flow estimation, and adapted them to Particle Image Velocimetry to densely estimate the fluid motion velocity field. The resolution of the output flow field can match that of the input PIV image, achieving pixel-level estimation. Reference [9] compared the performance of these three network structures as well as the Horn–Schunck (HS) optical flow method and the WIDIM cross-correlation algorithm: PIV-LiteFlowNet outperformed the others in measurement accuracy, while PIV-FlowNetS demonstrated advantages in computational efficiency.
All these algorithms are designed for ideal, distortion-free PIV images. However, optical distortions introduced by inhomogeneous refractive index fields [13] or fluctuating phase boundaries, such as those occurring at open air–water interfaces in water channels [14] or basins, can blur the particle images and introduce uncertainty in the assignment of particle positions, degrading the velocity measurement accuracy [15]. Although static wavefront aberrations can be corrected easily through calibration measurements or data analysis [16,17], eliminating time-varying distortions remains a challenge.
Estimating the fluid motion field from distorted PIV images can only yield erroneous or distorted flow field measurements. In our previous work, a traditional adaptive optics system [18,19] and a novel actuator-free adaptive optics technique [20] were used to correct distortion in a PIV imaging system. In [20], we established an adaptive optics technique without wavefront correction devices to correct PIV images and then estimated the velocity distribution of the flow field using traditional Particle Image Velocimetry algorithms. This approach effectively corrects the optical distortion of PIV images, but for the flow field measurement as a whole, first obtaining a corrected, undistorted PIV image and then applying traditional Particle Image Velocimetry algorithms for fluid motion field estimation still suffers from the inherent limitations of those traditional algorithms. Against this background, this paper combines an actuator-free wavefront distortion correction method with PIV measurement and proposes a fluid motion field estimation algorithm that includes distortion correction. The basic idea of flow field estimation and distortion correction based on the so-called AOPIV-Net is shown in Figure 1. The algorithm takes the distorted PIV image pairs and the wavefront distortion information measured by a Hartmann–Shack wavefront sensor as two inputs, builds two different multi-input deep CNN structures, and outputs the corrected flow field.
The remainder of this paper is organized as follows. First, the basic principle of optical distortion caused by an air–water interface in a PIV imaging system is analyzed, and an optical setup with a laser guide star and a Hartmann–Shack wavefront sensor for wavefront distortion measurement is presented. Then, the proposed method is described, with detailed descriptions of the two multi-input CNN structures. Furthermore, the method for generating the datasets required for training and testing the neural networks is introduced. Next, the performance of the trained fluid motion field estimation algorithms is tested and analyzed. Finally, conclusions are presented.

2. Principles and Methods

When the imaging system images through the surface between media of different refractive indices and the surface morphology changes dynamically, the imaging degrades severely due to the changing refraction, resulting in strong random distortion. As shown in Figure 2, the PIV imaging system vertically collects the PIV image generated by illuminating particles in the fluid with a laser light sheet at the depth of interest (DOI) of the measured object. The imaging optical path passes through the fluctuating medium surface, causing refraction of the optical path and image distortion. At a point x ∈ Ω in the spatial domain Ω, the real PIV image at time instant t can be treated as a stationary plane scene I_g(x, t), the original undistorted image. The image captured by the PIV imaging system, I(x, t), is a distorted version of I_g(x, t) owing to the disturbance of the phase boundary:

$$ I(x, t) = I_g\big(x + w(x, t)\big) \qquad (1) $$

where w(x, t) is the distortion function at pixel x and time t. The distortion function w(x, t) is related to the water surface height profile h(x, t) at time t, where ∇ is the gradient operator [21]:

$$ w(x, t) = \alpha\, \nabla h(x, t) \qquad (2) $$

Under a first-order approximation of Snell's law, we can calculate that α = h_0 (1 − n/n′), which shows that α is a constant determined by the reference water surface height h_0 (the surface height when the air–water interface is at rest) and the relative refractive index n/n′ of the two media. From Equations (1) and (2), we know that at any time instant the distortion function produces local geometric distortions at each pixel, which depend on its location, the relative refractive index, and the surface height.
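As a numerical illustration of Equations (1) and (2), the following sketch evaluates α = h_0(1 − n/n′) and the per-pixel distortion map w = α∇h for an assumed sinusoidal surface height profile; the grid size, field of view, and surface shape are arbitrary choices for demonstration.

```python
import numpy as np

# illustrative parameters: 20 mm target depth, air (n) above water (n')
h0, n, n_prime = 20e-3, 1.0, 1.33
alpha = h0 * (1.0 - n / n_prime)                 # prefactor of Eq. (2), ~5 mm

# assumed sinusoidal water-surface height profile h(x, y) on a 128 x 128 grid
x = np.linspace(0.0, 0.05, 128)                  # 50 mm field of view (assumption)
X, Y = np.meshgrid(x, x)
h = 1e-3 * np.sin(2 * np.pi * X / 0.02) * np.cos(2 * np.pi * Y / 0.03)

# w(x, t) = alpha * grad h(x, t): local geometric distortion at each point
dh_dy, dh_dx = np.gradient(h, x, x)              # gradients along rows (y) and columns (x)
w = alpha * np.stack([dh_dx, dh_dy], axis=-1)
print(f"alpha = {alpha*1e3:.2f} mm, max |w| = {np.abs(w).max()*1e3:.3f} mm")
```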
Correcting such distortions from a single-frame image is challenging because the shape of the air–water interface is unknown. The process is similar to blind deconvolution, but the kernel, i.e., the unknown Point Spread Function (PSF) of the optical imaging system, is spatially varying and can be much larger than what is typically considered in image deblurring. Even with different neural networks, such geometric distortion still cannot be corrected thoroughly [22].
In order to obtain more information from the phase boundary, we apply a spatially distributed guide star technique, introduced in our previous work [23], and a Hartmann–Shack wavefront sensor (HSWFS) for the wavefront measurement. Most guide star techniques focus light into or through a scattering or spatially diffusing medium. However, for camera-based optical flow measurements such as PIV, the distortion occurs over a 2D image; to track the optical path length change within the imaging path, a spatially resolved guide star must be used. As shown in Figure 2, a single phase boundary is located between the detection setup and the measured object, i.e., the depth-of-interest (DOI) layer where the particles are illuminated by the light sheet. To obtain the shape of the fluctuating phase boundary, this optical path length change needs to be traced [23]. Using the distributed guide star technique at the fluctuating phase boundary, and following the illustration in Figure 2 together with Fermat's principle, the reference phase, i.e., the phase when the air–water interface is at rest and causes no distortion, is

$$ \phi_{ref} = \frac{2\pi}{\lambda}\left( h_0' \, n + h_0 \, n' \right) \qquad (3) $$

and the distorted phase is

$$ \phi_{dist} = \frac{2\pi}{\lambda}\left[ \big(h_0' - h(x,t)\big)\, n + \big(h_0 + h(x,t)\big)\, n' \right] \qquad (4) $$

so the phase difference is

$$ \Delta\phi(x,t) = \phi_{dist} - \phi_{ref} = \frac{2\pi}{\lambda}\,(n' - n)\, h(x,t) \qquad (5) $$

where n and n′ denote the refractive indices of the two media, h_0′ and h_0 are the corresponding path lengths in each medium when the interface is at rest, λ is the wavelength of the laser guide star, and h here can be considered as the spatially distributed height profile of the phase boundary. From Equations (2) and (5), the relation between the image distortion function w and the phase difference Δϕ is obtained:

$$ w(x,t) = \frac{\lambda h_0}{2\pi n'}\, \nabla\big[\Delta\phi(x,t)\big] \qquad (6) $$

which shows that w and the gradient of Δϕ are linearly related. Hence, in order to obtain the distortion information in an under-water image, we can use a wavefront sensor to measure the phase difference Δϕ.
An HSWFS consists of a two-dimensional microlens array and a detector, in our case a charge-coupled device (CCD). The input wavefront is first sampled by the microlens array, and each microlens focuses its sampled local wavefront onto the CCD, which is located in the focal plane of the microlenses. The resulting Hartmannogram is a spot array image. The average slope of each sampled sub-wavefront can be calculated from the spot displacement relative to its reference position. The reference spot positions are the spot centroids estimated when the air–water surface is at rest, i.e., when the laser guide star passes through the optical setup without any distortion. The local wavefront gradient can then be calculated as follows:

$$ \big[\nabla(\Delta\phi)\big]_{ij} = \begin{bmatrix} \partial \Delta\phi / \partial x \\ \partial \Delta\phi / \partial y \end{bmatrix}_{ij} = \frac{1}{f}\begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}_{ij} \qquad (7) $$

where Δϕ is the phase difference in Equation (6), i.e., the residual wavefront corresponding to the difference between the distorted and the reference wavefronts; f is the focal length of the microlenses, which is also the distance between the microlens array and the CCD; i, j denote the ith and jth microlens; and (Δx, Δy) are the displacements between the corresponding distorted and reference spot centroids. From Equations (6) and (7), the relation between the image distortion function w(x, t) and the spot displacement is as follows:

$$ \big[\overline{w(x,t)}\big]_{ij} = \frac{\lambda h_0}{2\pi n' f}\begin{bmatrix} \Delta x(t) \\ \Delta y(t) \end{bmatrix}_{ij} \qquad (8) $$

From Equation (8), we can conclude that the average value of the distortion function over each sampled region is linearly related to the spot displacement measured by the HSWFS, i.e., the spot displacement represents the geometric distortion on the image. At any time instant, ignoring the spatial sampling error of the microlens array, Equation (1) can be rewritten as follows:

$$ \big[I(x, y)\big]_{ij} = \left[ I_g\!\left( x + \frac{\lambda h_0}{2\pi n' f}\,\Delta x_{spot},\; y + \frac{\lambda h_0}{2\pi n' f}\,\Delta y_{spot} \right) \right]_{ij} \qquad (9) $$

Equation (9) shows that the geometric distortion produced in an under-water image by the fluctuating phase boundary can be represented by the spot displacements of the HSWFS, which means this distortion can be measured directly by the HSWFS. The spatial guide star is used here for the wavefront distortion measurement, so the measured wavefront is spatially related to the distorted image. Because a convolutional neural network maintains the spatial structure during propagation, information from the measured wavefront can be used for deep learning-based image distortion correction. The necessity of an additional input for the convolutional neural network follows from this principle. In this problem, unlike other image translation problems, correction by a convolutional neural network needs features from both the distortion and the image. The features that a CNN can extract from a distorted image come from the image itself; however, flow measurements involve numerous images with dynamically changing distortions, so different distortions can produce different distorted images of one and the same real image, i.e., different network inputs share the same label. This leads to the so-called curse of dimensionality [24] in deep learning, and a single-input neural network cannot solve such a problem. As concluded from Equations (1)–(9), spatially resolved distortion information can be obtained easily by wavefront measurement; hence, we develop a multiple-input neural network architecture for image distortion correction, and the additional network input is the spot displacement along the x and y directions because of its high spatial relevance to the distorted image.
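The following sketch illustrates, under assumed sensor parameters, how per-subaperture spot centroids and their displacements from the reference positions could be computed from a Hartmannogram and scaled to the mean image distortion of Equation (8). It is a simplified illustration, not the processing pipeline of the actual sensor software.

```python
import numpy as np

def spot_displacements(hartmannogram, ref_centroids, n_sub=16):
    """Per-subaperture spot centroids of a Hartmannogram and their
    displacements (in pixels) from the reference centroids recorded
    with the air-water interface at rest."""
    h, w = hartmannogram.shape
    sy, sx = h // n_sub, w // n_sub
    disp = np.zeros((n_sub, n_sub, 2))
    yy, xx = np.mgrid[0:sy, 0:sx]
    for i in range(n_sub):
        for j in range(n_sub):
            sub = hartmannogram[i*sy:(i+1)*sy, j*sx:(j+1)*sx].astype(float)
            m = sub.sum() + 1e-12                       # avoid division by zero
            centroid = np.array([(xx*sub).sum()/m, (yy*sub).sum()/m])
            disp[i, j] = centroid - ref_centroids[i, j]
    return disp

# Eq. (8): mean image distortion per subaperture from the spot displacement.
# All parameter values are illustrative assumptions (Section 2.4 uses
# h0 = 20 mm, n' = 1.33, lambda = 660 nm; focal length and pixel size vary).
lam, h0, n_prime, f, pixel_size = 660e-9, 20e-3, 1.33, 5e-3, 5e-6
def mean_distortion(disp_px):
    return lam * h0 / (2 * np.pi * n_prime * f) * (disp_px * pixel_size)
```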
In order to obtain the aberration information from the phase boundary quickly and accurately, deep learning is used as a platform in combination with the wavefront measurement. More specifically, two multi-input deep CNN structures are constructed, both of which use the aberration information, i.e., the local wavefront slopes measured by the Hartmann–Shack wavefront sensor, as auxiliary information to complete an end-to-end mapping from the aberrated PIV images to the corrected flow field distribution. The established algorithm estimates the fluid motion field distribution while simultaneously correcting the aberrations; the two network structures are collectively referred to as AOPIV-MICNN (Adaptive Optics Particle Image Velocimetry-Multiple Input Convolutional Neural Network). The basic framework of the algorithm is shown in Figure 3.
The network input consists of two consecutive frames of PIV images and two consecutive frames of the spot displacement matrix. The labels used during training are the true values of the flow field. Therefore, the first step is to synthesize an undistorted PIV image pair from the true flow field distribution. Then, the distorted PIV images are synthesized using a given wavefront distortion. Finally, the spot displacement matrix is obtained through the Hartmann–Shack simulation model, completing the inputs used during training. After training, the neural network outputs the corrected flow field distribution from the wavefront distortion information measured by the Hartmann–Shack wavefront sensor and the distorted PIV images.
Based on different requirements and theoretical models, we propose multi-input deep CNN structures that can obtain estimation results of sparse and dense fluid motion fields.

2.1. Estimation of Dense Fluid Motion Field Based on U-Net Structure

The neural network structure proposed in this paper for estimating the dense fluid motion field is shown in Figure 4. Its main structure is inspired by the U-Net architecture widely used in recent years [25]. U-Net adopts a U-shaped architecture that connects the downsampling and upsampling paths of the feature maps. This U-shaped structure can capture contextual information at different scales and effectively transmit detailed information through skip connections, which helps to improve the accuracy and stability of the model. In addition, compared with more complex neural network architectures, the training and inference of U-Net are usually faster, making it suitable for real-time applications in flow field measurements. The distortion correction of PIV images can be defined as a pixel-to-pixel image regression problem. However, unlike the original U-Net, the proposed network adds another input to the original structure. The spot displacement measured by the Hartmann–Shack wavefront sensor corresponds spatially to the distortion function, i.e., the spot displacement within a single subaperture represents the local geometric distortion within the image region sampled by that subaperture. Accordingly, the spot displacements calculated from the centroids of the Hartmann–Shack spot array are fed to the dense estimation deep learning model as a 16 × 16 × 2 tensor per frame.
The main input of the neural network is two frames of distorted PIV images with a size of 128 × 128 pixels (grayscale images; the two frames form two channels, i.e., 128 × 128 × 2). The additional input is the spot displacement obtained from the Hartmann–Shack wavefront sensor with a size of 16 × 16; since it is a two-frame wavefront slope matrix with x and y components, this input has four channels and constitutes a 16 × 16 × 4 slope tensor. The output is a 128 × 128 × 2 flow field distribution, with one channel for the x-direction velocity component and the other for the y-direction. The dataset is generated by a synthetic model, described in detail in Section 2.4. The dense fluid motion field estimation network consists of three parts: the downsampling channel, the upsampling channel, and the multi-input channel unique to this network.
In the downsampling channel, each downsampling module uses a depthwise separable convolution layer, which convolves each channel separately and then fuses the feature maps through pointwise convolution. This design reduces the number of trainable parameters in deep models and thus improves training efficiency. The upsampling channel is the same as in the original U-Net, using upsampling layers to enlarge the feature map size and then bridging the feature maps of the downsampling channel through feature map merging operations to preserve detailed information. The essential difference between the proposed dense fluid motion field estimation network and the original U-Net is the multi-input channel. Its basic structure is similar to that of the downsampling channel, with two depthwise separable convolution layers with a kernel size of 3 × 3 connected in series. After the convolution layers, a Tanh activation function is used, which, unlike ReLU, does not suppress negative values. The resulting feature maps are added to the feature maps of the same size in the downsampling channel through feature map merging operations, so that the wavefront aberration information is injected into the main network of the dense fluid motion field estimation model. The proposed network retains the core structure of U-Net, introduces a multi-input channel to feed wavefront aberration information directly into the pixel-level flow field estimation, and fuses the different input feature maps through feature map merging. This improvement enables the network to effectively process the highly linearly correlated spatial structure information between the PIV images and the wavefront slopes.
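For illustration, a minimal Keras sketch of such a multi-input, U-Net-style dense estimation model is given below. The filter counts, block depth, and exact merge operations are assumptions made for this sketch and do not reproduce the published architecture; only the overall pattern described above is followed: depthwise separable downsampling, a Tanh-activated slope branch merged at the 16 × 16 scale, skip-connected upsampling, and a linear two-channel output at pixel resolution.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # two depthwise separable convolutions, as in the downsampling channel
    x = layers.SeparableConv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.SeparableConv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_dense_aopiv(img_shape=(128, 128, 2), slope_shape=(16, 16, 4)):
    img_in = layers.Input(img_shape, name="piv_pair")      # two distorted PIV frames
    slope_in = layers.Input(slope_shape, name="hs_slopes")  # two frames of x/y spot displacements

    # downsampling channel (filter counts are assumptions, not the published ones)
    skips, x = [], img_in
    for filters in (32, 64, 128):
        x = conv_block(x, filters)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)            # 128 -> 64 -> 32 -> 16

    # multi-input channel: wavefront-slope features merged at the 16 x 16 scale
    s = layers.SeparableConv2D(128, 3, padding="same", activation="tanh")(slope_in)
    s = layers.SeparableConv2D(128, 3, padding="same", activation="tanh")(s)
    x = layers.Concatenate()([conv_block(x, 256), s])       # bottleneck, 16 x 16

    # upsampling channel with skip connections, back to pixel resolution
    for filters, skip in zip((128, 64, 32), reversed(skips)):
        x = layers.UpSampling2D(2)(x)
        x = layers.Concatenate()([x, skip])
        x = conv_block(x, filters)

    out = layers.Conv2D(2, 1, activation=None, name="flow_field")(x)  # u, v per pixel
    return Model([img_in, slope_in], out)

model = build_dense_aopiv()
model.summary()
```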

2.2. Sparse Fluid Motion Field Estimation Based on Multi-Input Xception Structure

Although the dense fluid motion field estimation algorithm based on U-Net can output pixel-level flow field information, its distortion correction capability is limited by the spatial resolution of the Hartmann–Shack wavefront distortion information in the multi-input channel. The main input of the network structure in Section 2.1 is a distorted PIV image pair with a size of 128 × 128, whereas the secondary input is a 16 × 16 wavefront slope matrix. The displacement obtained for each spot therefore represents the average wavefront slope over an 8 × 8 pixel region of the distorted PIV image. The spatial resolution of the wavefront slope matrix is thus lower than the resolution of the flow field represented by the PIV image pair, so from the perspective of the physical model, if the output has the same spatial resolution as the wavefront distortion information, the neural network can correct the distortion more effectively. To improve the distortion correction performance, this paper proposes a new network structure based on the Xception framework.
Xception is a network that performs well in image classification and recognition tasks, with advantages such as parameter efficiency, high computational efficiency, and effective feature learning. The residual connection mechanism, similar to that of ResNet, added to Xception significantly accelerates its convergence and makes it easier to achieve higher accuracy [26]. The proposed structure outputs a sparse fluid motion field distribution with the same spatial resolution as the Hartmann–Shack wavefront sensor. Compared with the dense fluid motion field estimation algorithm, this algorithm theoretically achieves higher distortion correction performance at the expense of flow field resolution. As shown in Figure 5, the sparse flow field estimation algorithm is based on a multi-input Xception deep neural network.
Since the final output is no longer at the pixel level, this algorithm adopts a framework completely different from the dense estimation deep learning model. It adds multiple input channels to the basic Xception framework. In addition, since Xception was originally designed for image classification, the final output part has been modified accordingly. The network is divided into a main input channel, a secondary input channel, an intermediate channel, and an output channel. The main input channel uses three ResNet-like modules, each of which includes a depthwise separable convolution layer and a max pooling layer and preserves detail information through shortcut connections. The secondary input channel connects two depthwise separable convolution layers and integrates the wavefront distortion information into the main network through feature map merging. The intermediate channel is consistent with the intermediate flow of Xception, using four residual convolutional modules plus a depthwise separable convolution module to obtain the feature maps. For the final output layer, unlike the original Xception, the output here is the flow field, which takes both negative and positive values and is essentially a regression problem. It therefore requires neither a fully connected layer to shrink the feature map dimension nor any activation function; the result is output linearly after the last convolution layer. The proposed dense and sparse estimation deep learning models are collectively referred to as AOPIV-MICNN (Adaptive Optics Particle Image Velocimetry-Multiple Input Convolutional Neural Network).
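A corresponding minimal Keras sketch of the sparse, Xception-style multi-input model is shown below. The number of filters, the exact block composition, and the merge point are again assumptions; only the overall layout follows the description above: shortcut-connected separable-convolution blocks in the main channel, a slope branch merged at the 16 × 16 scale, residual intermediate modules, and a linear 16 × 16 × 2 regression output.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def sep_block(x, filters, pool=True):
    # ResNet-style shortcut around two depthwise separable convolutions
    shortcut = layers.Conv2D(filters, 1, strides=2 if pool else 1, padding="same")(x)
    x = layers.SeparableConv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.SeparableConv2D(filters, 3, padding="same")(x)
    if pool:
        x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
    return layers.Add()([x, shortcut])

def build_sparse_aopiv(img_shape=(128, 128, 2), slope_shape=(16, 16, 4)):
    img_in = layers.Input(img_shape, name="piv_pair")
    slope_in = layers.Input(slope_shape, name="hs_slopes")

    # main input channel: three strided blocks, 128 -> 64 -> 32 -> 16
    x = img_in
    for filters in (64, 128, 256):
        x = sep_block(x, filters, pool=True)

    # secondary input channel: wavefront slopes, merged at the 16 x 16 scale
    s = layers.SeparableConv2D(256, 3, padding="same", activation="relu")(slope_in)
    s = layers.SeparableConv2D(256, 3, padding="same", activation="relu")(s)
    x = layers.Concatenate()([x, s])

    # intermediate channel: residual separable-conv modules at constant size
    for _ in range(4):
        x = sep_block(x, 512, pool=False)

    # output channel: linear regression head, no pooling or dense layer
    out = layers.Conv2D(2, 1, activation=None, name="sparse_flow")(x)   # 16 x 16 x 2
    return Model([img_in, slope_in], out)

model = build_sparse_aopiv()
model.summary()
```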

2.3. Training of AOPIV-MICNN

The mathematical representation of the training process is

$$ \mathcal{F}: \{I, S\} \mapsto F \qquad (10) $$

that is, the neural network learns a complex functional mapping from the distorted PIV images I and the wavefront distortion input tensor S to the ideal distortion-free flow field distribution F (i.e., the ground-truth labels). The MSE loss function is used for training, so the training procedure can be written as

$$ \arg\min_{\mathcal{F}} \sum_{\{I, S\} \in \Omega} \left\| \mathcal{F}\{I, S\} - F \right\|_2^2 \qquad (11) $$

The Adam optimizer is used during training. In total, 5000 pairs of data are divided into training, validation, and testing datasets in a ratio of 8:1:1. The datasets are generated synthetically; the specific generation method is described in detail in the next section. The algorithm is implemented in the TensorFlow framework. It is worth mentioning that, since the entire dataset is obtained by synthesis, white noise of different intensities is added to the PIV image pairs during training to avoid overfitting.
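A training sketch under these settings might look as follows. The placeholder random arrays stand in for the synthetic dataset of Section 2.4, the learning rate, batch size, epoch count, and noise level are assumptions, and build_dense_aopiv refers to the model sketch in Section 2.1.

```python
import numpy as np
import tensorflow as tf

# placeholder arrays standing in for the synthetic dataset of Section 2.4
# (the paper uses 5000 samples; a smaller number is used here for brevity)
N = 500
piv   = np.random.rand(N, 128, 128, 2).astype("float32")   # distorted PIV pairs
slope = np.random.rand(N, 16, 16, 4).astype("float32")     # HS slope tensors
flow  = np.random.rand(N, 128, 128, 2).astype("float32")   # ground-truth flow fields

# 8:1:1 split into training, validation and test sets
idx = np.random.permutation(N)
tr, va, te = np.split(idx, [int(0.8 * N), int(0.9 * N)])

def add_white_noise(imgs, max_std=0.05):
    """White noise of randomly drawn intensity, added to the synthetic PIV
    pairs during training to avoid overfitting."""
    std = np.random.uniform(0.0, max_std)
    return imgs + np.random.normal(0.0, std, imgs.shape).astype("float32")

model = build_dense_aopiv()            # model builder from the sketch in Section 2.1
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")   # Eq. (11)
model.fit([add_white_noise(piv[tr]), slope[tr]], flow[tr],
          validation_data=([piv[va], slope[va]], flow[va]),
          batch_size=16, epochs=50)
print(model.evaluate([piv[te], slope[te]], flow[te]))
```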

2.4. Dataset Generation Based on the Synthetic Model

The learning process of the deep neural networks considered here is supervised and therefore requires ground-truth data as labels for training. However, flow field data obtained on a Particle Image Velocimetry experimental platform are not the true values of the flow field; there is always a certain systematic error. Therefore, the dataset needs to be obtained through a synthetic model. One data sample includes the distorted PIV image pair, the corresponding wavefront slope pair from the Hartmann–Shack wavefront sensor, and the true flow field distribution as the label. This paper proposes a dataset generation method based on a synthetic model; the generation procedure is shown in Figure 6.
First, a fluid velocity field is generated as the ground truth of the dataset and a particle distribution is created; the corresponding two consecutive frames of PIV images are then generated from the particle distribution and the flow field distribution. At this stage the PIV images are distortion-free, so the distorted PIV images must be generated through a distortion model. Wavefront aberration measurements from the fluctuating air–water interface recorded on the actual platform are used: two consecutive frames of wavefront aberration measurements are taken to compute the aberrated PIV images under the corresponding wavefront aberration, giving the main input of the dataset. The wavefront aberration measurement results are then fed into the simulation model of the Hartmann–Shack wavefront sensor to obtain the Hartmann–Shack spot array image, and the other input, i.e., the slope values of the wavefront aberration, is obtained after centroid calculation, completing the generation of the dataset. Each sample in the dataset thus contains two frames of distorted PIV images, two frames of wavefront distortion slope tensors, and a true flow field distribution tensor as the training label. The specific implementation of each step is introduced below.

Generation of Distorted PIV Image Pairs and Corresponding Flow Fields

As shown in Figure 6, the generation of distorted PIV image pairs first generates a flow field distribution and then generates particle images through the particle image model. Combining different types and parameters of flow fields with different PIV image parameters yields distortion-free PIV image pairs, which are then distorted through the distortion model to obtain distorted PIV image pairs. For the generation of ideal distortion-free PIV images, this paper adopts the open-source software piv-image-generator (PIG) established in [27]. In addition, in order to increase the diversity of the dataset, a large number of flow fields were obtained from different open-source databases.
  • Fluid velocity field generation. The PIG software used in this paper can simulate uniform flow, simple shear flow, vortex flow, Poiseuille flow, stagnation point flow, etc. However, in practical Particle Image Velocimetry the flow fields encountered are often more complex and difficult to obtain through software simulation. Therefore, a variety of flow fields were also obtained from open-source flow field databases, such as the back-step flow and cylinder flow in [9], the isotropic free turbulence provided in [28], and the surface quasi-geostrophic (SQG) flow provided in [29]. In addition, flow fields of various types and parameters, such as MHD turbulence, forced isotropic turbulence, and channel flow, were obtained from the Johns Hopkins Turbulence Databases. Through these simulations and open-source databases, a total of 5000 frames of different flow field distributions were generated as labels for the dataset. Figure 7 shows several representative flow field distributions in the dataset.
  • Particle image generation. The generation of particle images is also achieved with the open-source PIG software, which is based on a Gaussian grayscale profile model of the particles [1]:

    $$ I(x, y) = I_0 \exp\!\left( - \frac{(x - x_0)^2 + (y - y_0)^2}{(1/8)\, d_p^2} \right) \qquad (12) $$

    where I_0 is the peak gray value at the particle center, d_p is the particle diameter, and (x_0, y_0) is the center position of the particle. These are the parameters that can be adjusted when generating particle images. In addition, PIG defines the particle density, noise intensity, and particle off-plane deviation. Table 1 lists the parameter ranges used when generating PIV images with PIG in this paper, and Figure 8 shows PIV images generated with different parameters. The first PIV image is combined with the generated flow field distribution to obtain a particle image pair representing that flow field: the particle positions in the first frame are random, their displacements are calculated from the direction and velocity of the flow field to obtain the particle distribution in the second frame, and the second-frame PIV image is then rendered with the same particle parameters as the first frame, forming a particle image pair. The particle images generated in this paper have a size of 128 × 128, and a total of 5000 frames of flow field distributions are generated, corresponding to 5000 PIV image pairs, i.e., 10,000 frames of particle images.
  • Distorted PIV image generation. After obtaining the ideal particle image pair, the distorted PIV images are calculated from the distortion model of the fluctuating air–water two-phase interface acting on the PIV image and the wavefront distortion results from [20]. The wavefront distortion measured and reconstructed by the Hartmann–Shack wavefront sensor can be represented by Zernike polynomials. From the wavefront distortion, the distortion function on the PIV image can be calculated:

    $$ w(x, t) = \frac{\lambda h_0}{2\pi n_0}\, \nabla\big[\Delta\phi(x, t)\big] \qquad (13) $$

    where λ is the wavelength of the laser beacon, h_0 is the depth of the imaging target, n_0 is the refractive index of the medium in which the imaging target is located, and ∇ is the gradient operator. The distortion function w(x, t) on the PIV image is thus linearly related to the gradient of the wavefront distortion phase Δϕ(x, t). An interpolation operation is then performed on the ideal undistorted PIV image to obtain the distorted PIV image; a minimal sketch of the particle rendering and warping steps is given after this list. The parameters are set as follows: h_0 = 20 mm, n_0 = 1.33, λ = 660 nm. The dataset contains a total of 10,000 frames of PIV images, and the consecutive 10,000 frames of wavefront aberrations are applied frame by frame to the PIV image pairs to obtain 10,000 frames of aberrated PIV images.
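The following sketch illustrates these steps on a toy sample: rendering the Gaussian particle model of Equation (12), advecting the particles with a (here uniform) flow field to obtain the second frame, and warping both frames according to Equation (1) with a distortion map standing in for Equation (13). The particle count, flow, and wavefront are illustrative stand-ins, not the actual dataset parameters.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def render_particles(xy, size=128, I0=255.0, dp=2.0):
    """Gaussian particle model of Eq. (12): I = I0 * exp(-r^2 / ((1/8) dp^2))."""
    yy, xx = np.mgrid[0:size, 0:size].astype(float)
    img = np.zeros((size, size))
    for x0, y0 in xy:
        img += I0 * np.exp(-((xx - x0)**2 + (yy - y0)**2) / ((1.0/8.0) * dp**2))
    return img

def warp(img, w):
    """Apply the distortion of Eq. (1): the distorted image at pixel x samples
    the ideal image at x + w(x)."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    return map_coordinates(img, [yy + w[..., 1], xx + w[..., 0]], order=1, mode="nearest")

# one toy sample (particle count, flow and wavefront are illustrative only)
rng = np.random.default_rng(1)
p1 = rng.uniform(0, 128, size=(200, 2))                 # frame-1 particle positions (x, y)
disp = np.array([2.0, 0.0])                             # uniform flow: 2 px/frame in x
p2 = p1 + disp                                          # advected positions for frame 2
frame1, frame2 = render_particles(p1), render_particles(p2)

# toy distortion map standing in for (lambda*h0)/(2*pi*n0) * grad(delta phi)
phase = 5.0 * np.sin(2 * np.pi * np.arange(128) / 64.0) * np.ones((128, 1))
wy, wx = np.gradient(phase)
w = np.stack([wx, wy], axis=-1)
dist1, dist2 = warp(frame1, w), warp(frame2, w)
```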

2.5. Simulation Model of Hartmann–Shack Wavefront Sensor

For the other input of the neural network, the wavefront slope matrix is obtained through a simulation model of the Hartmann–Shack wavefront sensor. In order to improve the diversity of the dataset and enhance the generalization ability of the neural network, noise of different intensities is added to the generated spot array images. Figure 9 shows the implementation of the simulation model. First, the parameters of the sensor are set, as summarized in Table 2. Then, randomly generated Zernike coefficients are used to express the wavefront distortion. Finally, using the parameters of the microlens array, the incident wavefront is sampled and segmented, and the spot is imaged separately in each subaperture.
The complex amplitude distribution of the light field on the image plane is obtained by Fourier transforming the light field distribution over the subaperture entrance pupil, and the light intensity distribution on the focal plane is then obtained by conjugate multiplication of the complex amplitude, giving the spot imaged by each microlens. Based on the CCD parameter settings, the spot is spatially sampled to obtain the spot image. Finally, the spot centroids are calculated to obtain the wavefront slope matrix, which serves as the other input in the dataset. In addition, to verify the correctness of the simulation model, the wavefront was reconstructed with the least squares method and compared with the incident wavefront, confirming the reliability of the simulation model. In the end, a dataset containing 10,000 frames of distorted PIV images, 10,000 frames of wavefront slope matrices, and 5000 frames of flow field distributions is generated. The dataset is split into training, validation, and testing sets in a ratio of 8:1:1. It is worth noting that, because the final output sizes of the two neural networks differ, the flow field distribution generated by the synthetic model, with a size of 128 × 128 × 2, can be used directly for training the dense fluid motion field network, whereas the labels for the sparse fluid motion field network are obtained by mean-downsampling the flow field distribution to a size of 16 × 16 × 2.
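A reduced version of this Fourier-optics step is sketched below: the pupil wavefront is cut into subapertures, each subaperture field is propagated to the focal plane by an FFT, the intensity gives the spot, and the centroid displacements relative to a flat-wavefront reference form the slope matrix. CCD sampling, noise, and the Zernike-based wavefront generation of Table 2 are omitted, and the grid sizes are assumptions.

```python
import numpy as np

def hs_spots(wavefront, n_sub=16, pad=4):
    """Toy Hartmann-Shack simulator: the pupil wavefront (in radians) is cut
    into n_sub x n_sub subapertures; each subaperture field is Fourier
    transformed to the microlens focal plane and |U|^2 gives the spot."""
    n = wavefront.shape[0] // n_sub
    spots = np.zeros((n_sub * n * pad, n_sub * n * pad))
    for i in range(n_sub):
        for j in range(n_sub):
            sub = wavefront[i*n:(i+1)*n, j*n:(j+1)*n]
            field = np.exp(1j * sub)                                  # unit amplitude in the sub-pupil
            field = np.pad(field, ((0, n*(pad-1)), (0, n*(pad-1))))   # zero padding = finer focal sampling
            u = np.fft.fftshift(np.fft.fft2(field))
            spots[i*n*pad:(i+1)*n*pad, j*n*pad:(j+1)*n*pad] = np.abs(u) ** 2
    return spots

def centroids(spots, n_sub=16):
    """Centroid of each subaperture spot; the slope matrix follows from the
    centroid displacement relative to the flat-wavefront reference."""
    m = spots.shape[0] // n_sub
    c = np.zeros((n_sub, n_sub, 2))
    yy, xx = np.mgrid[0:m, 0:m]
    for i in range(n_sub):
        for j in range(n_sub):
            s = spots[i*m:(i+1)*m, j*m:(j+1)*m]
            c[i, j] = [(xx*s).sum()/s.sum(), (yy*s).sum()/s.sum()]
    return c

# toy tilted wavefront: a pure tilt shifts every spot by the same amount
wf = np.tile(np.linspace(0, 4*np.pi, 128), (128, 1))
disp = centroids(hs_spots(wf)) - centroids(hs_spots(np.zeros((128, 128))))
```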

3. Results and Discussions

After training, the neural networks can output corrected flow fields from the distorted PIV images and wavefront slopes. Figure 10, Figure 11, Figure 12, Figure 13 and Figure 14 show the estimated results of several different flow fields obtained with the two deep CNN algorithms and with the traditional CC algorithm without correction, together with the actual flow field distribution for comparison. The results show that the wavefront distortion caused by the fluctuating air–water two-phase interface creates a virtual turbulence in the PIV image distribution. The traditional CC algorithm cannot correct this virtual flow field, whereas the distortion-correcting estimation algorithms proposed in this paper can correct the distortion effectively, yielding results that are much more consistent with the actual flow field distribution.
In each figure, the top left panel is the sparse flow field ground truth and the top right panel is the dense ground truth. From left to right, the bottom panels show the results of the CC method, the sparse neural network algorithm, and the dense neural network algorithm. The background color of each panel represents the flow velocity at that point, and the black arrows represent the fluid motion velocity vectors in that region. In the dense results, not all velocity vector arrows are displayed for clarity, but the velocities are still computed from the original 128 × 128 results. It can be seen that the neural networks not only estimate the flow field distribution but also correct the distortion. Even when the traditional algorithm fails due to excessive local perturbations, both neural networks still provide accurate results. In order to evaluate the performance of the algorithms comprehensively, the mean absolute error (MAE) of the estimated velocity vectors is calculated on the test set. The MAE is the magnitude of the velocity vector residual evaluated at every point of the two-dimensional plane; the mean and root mean square (RMS) values over the entire plane are then calculated to evaluate the algorithms, and the velocity residual can be used to analyze the systematic error of the estimation algorithms:

$$ \mathrm{MAE} = |v - v_0| = \sqrt{(v_x - v_{x0})^2 + (v_y - v_{y0})^2} \qquad (14) $$

where v and v_0 are the estimated and true values of the flow field velocity vector, respectively. The mean and root mean square values of all residuals over the entire flow field reflect the overall deviation of the algorithm and the fluctuation of the measurement results relative to the true value, respectively. The velocity vector residuals on the test data are analyzed in Figure 15; the performance of both proposed algorithms is superior to that of the traditional CC method. Table 3 lists the mean and RMS values of the residuals for the different algorithms over the entire test set. Compared with the traditional cross-correlation algorithm, the two algorithms reduce the root mean square value of the velocity residual error by 84% and 89%, respectively.
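For reference, a minimal sketch of this evaluation is given below. It computes the per-point MAE of Equation (14) and then its mean and RMS over the field; taking the RMS directly over the MAE values is an assumption about the exact statistic used.

```python
import numpy as np

def velocity_residual_stats(v_est, v_true):
    """Per-point MAE of Eq. (14), then its mean and RMS over the whole field."""
    mae = np.sqrt((v_est[..., 0] - v_true[..., 0]) ** 2 +
                  (v_est[..., 1] - v_true[..., 1]) ** 2)
    return mae.mean(), np.sqrt((mae ** 2).mean())

# toy check with a small random field (units: pixel/s, as in Table 3)
rng = np.random.default_rng(0)
v_true = rng.normal(size=(128, 128, 2))
v_est = v_true + rng.normal(scale=0.05, size=v_true.shape)
print(velocity_residual_stats(v_est, v_true))
```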
The above results demonstrate that the estimation accuracy of the two neural network-based fluid motion field estimation algorithms is superior to that of the traditional CC algorithm under the influence of distortions. They can estimate the correct flow field distribution under distorted conditions and perform distortion correction while estimating the flow field. Comparing the two networks, the measurement accuracy of the sparse network structure is slightly better than that of the dense network structure in terms of the performance indicators, but this is achieved at the expense of spatial resolution, so both algorithms have their own advantages. In addition, in order to verify the distortion correction function of the neural networks, the measurement errors of the different algorithms are analyzed statistically. Figure 16 shows a box plot of the measurement error statistics for 500 sets of test data under the different algorithms. The horizontal axis represents the three algorithms, and the vertical axis represents the MAE of each estimated flow field. The average measurement error of the flow fields obtained by the traditional CC algorithm is large and fluctuates strongly, while the error of the neural network-based algorithms is significantly reduced, which again verifies the distortion correction ability of the neural networks. Among them, the sparse estimation algorithm has a slightly higher distortion correction ability than the dense estimation algorithm.

4. Conclusions

Deep learning with two multi-input CNN structures for the adaptive correction of aberrations in the precise velocity measurement of multi-phase flows was presented. Two distorted PIV images and the measured wavefront distortion information are used as inputs to directly output the corrected flow field results. Based on the PIV image generation model and the simulation model of the Hartmann–Shack wavefront sensor, a synthetic dataset generation model is established. After training, the performance of the two algorithms is evaluated and analyzed on the test dataset. The two algorithms estimate the velocity distribution of the fluid motion while correcting the distortion, and solve, in an end-to-end manner, the problem of measurement inaccuracy caused by wavefront distortion in flow field measurement. Compared with the traditional cross-correlation algorithm, the two algorithms reduce the root mean square value of the velocity residual error by 84% and 89%, respectively. This paper integrates fluid motion field estimation and wavefront distortion correction in a deep learning model for the first time, achieving an algorithm that directly outputs the corrected flow field distribution based on wavefront distortion measurements and bridging two different disciplines.

Author Contributions

Conceptualization, Z.G. and L.B.; data curation, L.Z. and S.M.; investigation, Z.G. and P.Y.; methodology, Z.G. and A.L.; software, Z.G., X.G. and L.Z.; writing—original draft, Z.G.; writing—review & editing, X.G. and P.Y.; resources, J.C. and L.B. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China (62305343) and Sichuan Science and Technology Program (2022JDRC0095), and partially by the German Research Foundation (Deutsche Forschungsgemeinschaft, project no. 459505672).

Data Availability Statement

The data presented in this study are cited within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Raffel, M.; Willert, C.E.; Scarano, F.; Kähler, C.J.; Wereley, S.T.; Kompenhans, J. Particle Image Velocimetry: A Practical Guide; Springer: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
  2. Grant, I. Particle image velocimetry: A review. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 1997, 211, 55–76. [Google Scholar] [CrossRef]
  3. Schröder, A.; Willert, C.E. Particle Image Velocimetry: New Developments and Recent Applications; Springer: Berlin/Heidelberg, Germany, 2008; pp. 1–13. [Google Scholar]
  4. Horn, B.K.; Schunck, B.G. Determining optical flow. Artif. Intell. 1981, 17, 185–203. [Google Scholar] [CrossRef]
  5. Barron, J.L.; Fleet, D.J.; Beauchemin, S.S. Performance of optical flow techniques. Int. J. Comput. Vis. 1994, 12, 43–77. [Google Scholar] [CrossRef]
  6. Sun, D.; Roth, S.; Black, M.J. Secrets of optical flow estimation and their principles. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 2432–2439. [Google Scholar]
  7. Lee, Y.; Yang, H.; Yin, Z. PIV-DCNN: Cascaded deep convolutional neural networks for particle image velocimetry. Exp. Fluids 2017, 58, 171. [Google Scholar] [CrossRef]
  8. Cai, S.; Liang, J.; Gao, Q.; Xu, C.; Wei, R. Particle image velocimetry based on a deep learning motion estimator. IEEE Trans. Instrum. Meas. 2019, 69, 3538–3554. [Google Scholar] [CrossRef]
  9. Cai, S.; Zhou, S.; Xu, C.; Gao, Q. Dense motion estimation of particle images via a convolutional neural network. Exp. Fluids 2019, 60, 73. [Google Scholar] [CrossRef]
  10. Dosovitskiy, A.; Fischer, P.; Ilg, E.; Hausser, P.; Hazirbas, C.; Golkov, V.; Van Der Smagt, P.; Cremers, D.; Brox, T. Flownet: Learning optical flow with convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 2758–2766. [Google Scholar]
  11. Ilg, E.; Mayer, N.; Saikia, T.; Keuper, M.; Dosovitskiy, A.; Brox, T. Flownet 2.0: Evolution of optical flow estimation with deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2462–2470. [Google Scholar]
  12. Hui, T.W.; Tang, X.; Loy, C.C. Liteflownet: A lightweight convolutional neural network for optical flow estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8981–8989. [Google Scholar]
  13. Vanselow, C.; Fischer, A. Influence of inhomogeneous refractive index fields on particle image velocimetry. Opt. Lasers Eng. 2018, 107, 221–230. [Google Scholar] [CrossRef]
  14. Gomit, G.; Chatellier, L.; Calluaud, D.; David, L. Free surface measurement by stereo-refraction. Exp. Fluids 2013, 54, 1540. [Google Scholar] [CrossRef]
  15. Böhm, B.; Heeger, C.; Gordon, R.L.; Dreizler, A. New Perspectives on Turbulent Combustion: Multi-Parameter High-Speed Planar Laser Diagnostics. Flow Turbul. Combust. 2011, 86, 313–341. [Google Scholar] [CrossRef]
  16. Reuss, D.L.; Megerle, M.; Sick, V. Particle-image velocimetry measurement errors when imaging through a transparent engine cylinder. Meas. Sci. Technol. 2002, 13, 1029–1035. [Google Scholar] [CrossRef]
  17. Minor, G.; Oshkai, P.; Djilali, N. Optical distortion correction for liquid droplet visualization using the ray tracing method: Further considerations. Meas. Sci. Technol. 2007, 18, L23–L28. [Google Scholar] [CrossRef]
  18. Radner, H.; Büttner, L.; Czarske, J. Interferometric velocity measurements through a fluctuating phase boundary using two Fresnel guide stars. Opt. Lett. 2015, 40, 3766–3769. [Google Scholar] [CrossRef]
  19. Bilsing, C.; Radner, H.; Burgmann, S.; Czarske, J.; Büttner, L. 3D imaging with double-helix point spread function and dynamic aberration correction using a deformable mirror. Opt. Lasers Eng. 2022, 154, 107044. [Google Scholar] [CrossRef]
  20. Gao, Z.; Radner, H.; Büttner, L.; Ye, H.; Li, X.; Czarske, J. Distortion correction for particle image velocimetry using multiple-input deep convolutional neural network and Hartmann-Shack sensing. Opt. Express 2021, 29, 18669–18687. [Google Scholar] [CrossRef] [PubMed]
  21. Tian, Y.; Narasimhan, S.G. Seeing through water: Image restoration using model-based tracking. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 2303–2310. [Google Scholar] [CrossRef]
  22. Li, Z.; Murez, Z.; Kriegman, D.; Ramamoorthi, R.; Chandraker, M. Learning to See Through Turbulent Water. In Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018; pp. 512–520. [Google Scholar] [CrossRef]
  23. Koukourakis, N.; Fregin, B.; König, J.; Büttner, L.; Czarske, J.W. Wavefront shaping for imaging-based flow velocity measurements through distortions using a Fresnel guide star. Opt. Express 2016, 24, 22074–22087. [Google Scholar] [CrossRef]
  24. Keogh, E.; Mueen, A. Curse of Dimensionality. In Encyclopedia of Machine Learning and Data Mining; Sammut, C., Webb, G.I., Eds.; Springer: Boston, MA, USA, 2017; pp. 314–315. [Google Scholar] [CrossRef]
  25. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015, Proceedings of the 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar]
  26. Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1800–1807. [Google Scholar] [CrossRef]
  27. Mendes, L.; Bernardino, A.; Ferreira, R.M. piv-image-generator: An image generating software package for planar PIV and Optical Flow benchmarking. SoftwareX 2020, 12, 100537. [Google Scholar] [CrossRef]
  28. Carlier, J. Second Set of Fluid Mechanics Image Sequences. European Project Fluid Image Analysis and Description (FLUID). 2005. Available online: http://www.fluid.irisa.fr (accessed on 3 March 2024).
  29. Resseguier, V.; Mémin, E.; Chapron, B. Geophysical flows under location uncertainty, part II quasi-geostrophy and efficient ensemble spreading. Geophys. Astrophys. Fluid Dyn. 2017, 111, 177–208. [Google Scholar] [CrossRef]
Figure 1. Principle of PIV estimator and corrector based on AOPIV-Net.
Figure 2. Distortion model and measurement principle for the distorted phase from a fluctuating air–water interface.
Figure 3. Principle of AOPIV-MICNN (Adaptive Optics Particle Image Velocimetry-Multiple Input Convolutional Neural Network). The inputs of the network are the wavefront distortion information measured by the Hartmann–Shack wavefront sensor and the distorted PIV images, and the output is the corrected flow field distribution.
Figure 4. Schematics of the proposed dense estimation deep learning model. The blue numbers represent the size of the feature map, while the black numbers represent the number of channels.
Figure 5. Schematics of the proposed sparse estimation deep learning model.
Figure 6. Synthetic dataset generation.
Figure 7. Different types of flow field.
Figure 8. Particle images under different parameters.
Figure 9. Hartmann–Shack wavefront sensor simulator.
Figure 10. Channel flow estimation from different algorithms and ground truths.
Figure 11. SQG flow estimation from different algorithms and ground truths.
Figure 12. Back-step flow estimation from different algorithms and ground truths.
Figure 13. DNS flow estimation from different algorithms and ground truths.
Figure 14. MHD flow estimation from different algorithms and ground truths.
Figure 15. Residual errors from different algorithms.
Figure 16. Comparison of MAE values from the three algorithms. Box plots show the overall distribution of the data using the median, 25% quantile, 75% quantile, and upper and lower bounds. The gray circles represent outliers.
Table 1. Parameters of PIV image generation.

Parameters | Value Range | Units
Particle central peak I_0 | 220–255 | ADU
Particle diameter d_p | 1–3 | pixel
Particle density N_i | 6–12 | number per IA window
Particle off-plane deviation | 0.025–0.1 | pixel/s
Noise intensity | 0–15 | dB
Table 2. Parameters of the Hartmann–Shack wavefront sensor simulator.

Parameters | Value | Parameters | Value
Zernike order | 64 | CCD resolution | 800 × 800
Incident wavelength | 532 nm | Exposure time | 5 ms
Entrance pupil diameter | 4 mm | Pixel size | 5 μm
Microlens focal length | 5 mm | Camera bit-width | 8 bits
Number of microlenses | 16 × 16 | Readout noise (RMS) | 8 ADU
Microlens type | Spherical | Black level | 0.01 e−/pixel/s
Table 3. Mean and RMS of the absolute velocity residual error.

Algorithms | Mean (pixel/s) | RMS (pixel/s)
CC | 0.4013 | 0.2028
Dense motion estimation | 0.1558 | 0.0313
Sparse motion estimation | 0.1339 | 0.0229
