Main
Computational imaging has revolutionized optical microscopy in areas such as super resolution1,2,3,4,5, optical sectioning6,7 and volumetric imaging8,9, but it is heavily dependent on reconstruction algorithms. For example, light-field microscopy (LFM) achieves unprecedented spatiotemporal resolution after three-dimensional (3D) reconstruction, facilitating sustained neural recordings and dynamic morphological imaging with low phototoxicity in vivo9,10,11,12,13,14,15. However, traditional reconstruction methods, mainly based on handcrafted Richardson–Lucy (RL) deconvolution15,16,17,18,19, are computationally expensive and prone to artifacts20. For instance, in scanning LFM (sLFM15) and virtual-scanning LFM (VsLFM21), the 3D deconvolution process can take days to reconstruct thousands of frames and requires parameter modification for different system configurations22. Moreover, existing LFM techniques suffer from the missing cone problem, which reduces axial performance, particularly in layers far from the native image plane23.
Recently, many supervised deep-learning methods, such as CARE24, VCD-Net25 and HyLFM-Net26, have been developed to substantially reduce the computational costs for practical applications. However, these methods still suffer from low spatial resolution and poor generalization in diverse sample structures or complex imaging environments. Moreover, they are not specifically optimized for sLFM data, limiting their fidelity. Dependency on ground-truth data is also a challenge in supervised learning. The training data pairs are not widely accessible, and the diversity of the samples is limited, so pretrained models often underperform on unseen data. Although imaging formation processes have been introduced for better interpretability and generalization27,28,29, they are difficult to apply to the four-dimensional (4D) measurements in LFM because there are phase correlations between different angular measurements used for aberration correction or noise reduction in complicated imaging environments15,21. If these 4D light-field measurements are treated merely as a series of separate two-dimensional (2D) images, the phase correlation will be lost, resulting in the degradation of deep-learning methods compared with iterative tomography in intravital environments. Moreover, accurate wave-optics-based point spread functions (PSFs) in the spatial-angular domain are crucial for high-resolution 3D reconstruction (Supplementary Fig. 1). Therefore, developing rapid, high-resolution 3D reconstruction algorithms for LFM and its variants without relying on data supervision remains a pivotal challenge for broad practical applications of LFM-based technologies in diverse complicated imaging conditions. In particular, no such learning-based reconstruction technique has yet been developed for sLFM.
Here, we present SeReNet, a physics-driven self-supervised reconstruction network for LFM and its variants. By leveraging the 4D imaging formation priors of LFM, SeReNet achieves near-diffraction-limited resolution at millisecond-level processing speed without the requirement of training data pairs. SeReNet is trained in a self-supervised manner by gradually minimizing the loss between the forward projections of network estimation along 4D angular PSFs and the corresponding raw measurements. To broadly generalize reconstruction performance in complicated imaging environments, we fully integrate the imaging process into network training. This approach prevents overestimation of unknown information that the imaging system inherently cannot capture, while accounting for the large degrees of freedom provided by the 4D measurements as well as noise, non-rigid sample motion and sample-dependent aberrations. An axial fine-tuning strategy can be integrated into SeReNet as an optional add-on to address the missing-cone problem and improve axial performance at the cost of slightly compromised generalization capability. Various benchmarks were conducted in both numerical simulations and experimental conditions, demonstrating that SeReNet outperforms recent state-of-the-art (SOTA) methods in speed, resolution, processing throughput, generalization capability and robustness to noise, aberrations and motions. SeReNet can be integrated into both unscanned LFM and sLFM, achieving processing speeds up to 700 times faster than that of iterative tomography. Compared with supervised neural networks in unscanned LFM, SeReNet achieves better performance when applied to distinct sample types or data from distinct microscopes owing to its superior generalization capability.
Equipped with SeReNet, sLFM facilitates versatile high-speed subcellular 3D observations in vivo with day-long durations in diverse animals including zebrafish (Danio rerio) embryos, Dictyostelium discoideum, Caenorhabditis elegans, zebrafish larvae and mice. Processing the tens of terabytes of data (more than 300,000 volumes) produced from imaging these animals would take several years with previous high-fidelity iterative algorithms; SeReNet requires only 5 days, with even better axial performance. These advantages have allowed us to perform long-term monitoring of diverse subcellular dynamics during multiple liver injuries and conduct large-scale day-long cell tracking of immune responses. We believe that, with its broad generalization, low computational costs and high fidelity, SeReNet will lead to widespread practical applications of LFM and its variants in diverse fields such as neuroscience, immunology and pathology.
Results
Principle of SeReNet
The direct mapping from 4D multi-angular light-field images (x–y spatially and u–v angularly) to the 3D volume (x–y–z spatially) is ill-posed (Fig. 1a), because the number of measurement pixels is smaller than the number of reconstructed volume voxels. Therefore, previous supervised methods based on data priors could easily converge towards a local optimum, leading to low generalization25,26. This problem becomes even worse in complex imaging environments, owing to the large difference between the imaging formation processes of the training and testing data. By contrast, iterative tomography with digital adaptive optics (DAO) fully exploits the high-dimensional freedom provided by 4D measurement of sLFM to achieve high-fidelity 3D reconstruction by better describing the whole imaging formation process in 4D, although it comes with large computational costs in iterative updates15,22. Moreover, unknown information not captured by the imaging system could be filtered out during the forward projection using PSF priors, which imposes physical constraints without relying on extensive data priors. Building on this concept, SeReNet harnesses 4D spatial-angular imaging formation priors in a self-supervised neural network, achieving high performance at millisecond-level processing speeds without the need for training data pairs (Fig. 1b,c).
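The self-supervised constraint described above can be sketched in a few lines: the 3D estimate is forward-projected along each angular PSF, and the loss is computed against the corresponding raw angular measurement. This is a minimal numpy sketch under stated assumptions, not the actual SeReNet implementation: the PSF shapes and the plain mean-squared loss are illustrative (the paper's NLL-MPG loss is introduced later), and the per-depth convolution model is a common light-field forward-model simplification.

```python
import numpy as np
from scipy.signal import fftconvolve

def forward_project(volume, angular_psfs):
    """Project a 3D estimate (D, H, W) along each angular PSF.

    angular_psfs: array of shape (A, D, h, w). Each angular view is modeled
    as the depth-wise sum of the volume convolved with that angle's PSF at
    each depth, so information outside the PSF support is filtered out.
    """
    n_angles, n_depths = angular_psfs.shape[:2]
    views = np.zeros((n_angles,) + volume.shape[1:])
    for a in range(n_angles):
        for z in range(n_depths):
            views[a] += fftconvolve(volume[z], angular_psfs[a, z], mode="same")
    return views

def self_supervised_loss(volume_estimate, angular_psfs, measurements):
    """Discrepancy between projections and raw angular measurements."""
    projections = forward_project(volume_estimate, angular_psfs)
    return float(np.mean((projections - measurements) ** 2))
```

During training, gradients of this loss with respect to the network output would drive the estimate towards consistency with every angular view simultaneously, which is what ties the reconstruction to the physical imaging model rather than to data priors.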
Fig. 1: Principle of SeReNet.
a, The imaging and reconstruction process of sLFM. Fluorescent signals from different angles, indicated by different colors, are captured by sLFM with an angular resolution of 13 × 13. The 3D sample (with a voxel number of D × 13H × 13W, where D is usually over 100 for axial sectioning) is encoded into 3 × 3 scanning light-field images (3 × 3 × 13H × 13W) and realigned into multiple spatial-angular views (13 × 13 × 3H × 3W). The reconstruction process is the inverse of the imaging process. OBJ, objective; TL, tube lens; G, galvo; MLA, microlens array. b, The processing pipeline of SeReNet with self-supervised training. Before network training, data are preprocessed using TW-Net and preDAO. TW-Net corrects sample motion (details in Supplementary Fig. 13a), whereas preDAO estimates and corrects optical aberrations (details in Supplementary Fig. 14a). Using the main modules of SeReNet, we first generated a focal stack with digital refocusing of multiple angular images with the depth-decomposition module, and then gradually transformed the stack into a volume with the deblurring and fusion module. Next, the 4D wave-optics PSFs were used to achieve forward projections of the 3D estimation. Finally, the loss between projections and raw measurement was iteratively reduced during training. The NLL-MPG loss was derived as the loss function (details in Supplementary Fig. 11a). After the model is trained, SeReNet can make rapid predictions without the forward projection process. Four representative angular views are shown for simplicity. c, Comparisons of the generalization capability and processing speed on reconstructing timelapse unscanned LFM and sLFM data (429 × 429 × 101 voxels for each volume) among SeReNet, iterative tomography, VCD-Net and HyLFM-Net. More detailed information is provided in Supplementary Table 2. SeReNet offers a runtime over 700 times faster than iterative tomography and better generalization over supervised networks.
The structure of SeReNet is divided into three main modules (Supplementary Fig. 2 and Supplementary Table 1). First, the depth-decomposition module employs image translation and concatenation operators to generate the initial 3D focal stack lacking optical sectioning from 4D light-field measurements (Supplementary Fig. 2a). By leveraging multiple angular PSFs to distinguish structures at different depths in the 3D volume, the depth-decomposition module explicitly utilizes the PSF information, enhancing the axial performance after reconstruction (Supplementary Fig. 3a). Second, the deblurring and fusion module, consisting of nine 3D convolutional layers and three linear interpolation layers (Supplementary Fig. 2b), is trained to generate a 3D estimation from refocused volumes. This module can effectively recover high resolution, as demonstrated by the ablation study (Supplementary Fig. 3b). Third, and most importantly, the self-supervised module performs forward projections of the estimated volume along multiple angular PSFs, minimizing the loss between projections and corresponding angular measurements in each iteration (Supplementary Fig. 2c). This module prevents SeReNet from incorporating unknown information not captured by the imaging system, as indicated by the PSF, thereby fostering SeReNet’s broad generalization capability. Compared with supervised networks such as VCD-Net and HyLFM-Net, SeReNet will not overestimate or guess uncaptured information, which in this case eliminates artifacts and the risk of overfitting sample textures. If we remove the self-supervised module, SeReNet can be trained in a fully-supervised way, similar to VCD-Net and HyLFM-Net, by directly computing loss functions by comparing 3D predictions with the 3D sample. These supervised processes attempt to infer information that is not captured by the imaging system itself, leading to reduced generalization (Supplementary Fig. 3c–g).
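The digital refocusing that underlies the depth-decomposition module follows the classic shift-and-sum principle: each angular view is translated in proportion to its angular coordinate and the target depth, then the views are combined. The sketch below is a conceptual stand-in with assumed integer-pixel shifts and synthetic angular coordinates; the actual module performs translation and concatenation followed by learned layers rather than a plain average.

```python
import numpy as np

def shift_and_sum_refocus(views, angles, slope):
    """Refocus multi-angular views (A, H, W) onto one depth plane.

    angles: (A, 2) angular coordinates (u, v); slope sets the refocus depth
    (shift per unit angle). Integer np.roll shifts are a simplification.
    """
    refocused = np.zeros(views.shape[1:])
    for view, (u, v) in zip(views, angles):
        dy, dx = int(round(slope * u)), int(round(slope * v))
        refocused += np.roll(np.roll(view, dy, axis=0), dx, axis=1)
    return refocused / len(views)

def focal_stack(views, angles, slopes):
    """Build an initial focal stack by refocusing over a range of depths."""
    return np.stack([shift_and_sum_refocus(views, angles, s) for s in slopes])
```

Structures at the chosen depth add coherently across views, while out-of-focus structures are smeared, which is why the resulting stack lacks optical sectioning and needs the subsequent deblurring and fusion module.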
The three-module framework allows the network to gradually converge to a valid solution, with a parameter number of 195,000 for high-speed processing (Supplementary Fig. 4). After being trained with physical constraints, SeReNet can make rapid predictions on sLFM measurements using the first two modules, which have already learned the inverse mapping from light-field measurements to 3D volumes. The reconstruction speed of SeReNet is nearly three orders of magnitude faster than that of iterative tomography (Supplementary Table 2). More importantly, SeReNet exhibits network interpretability because the depth-decomposition module leverages multiple angular PSFs, and each intermediate feature layer accurately reflects physically understandable information in the deblurring and fusion module (Supplementary Fig. 5). Moreover, the high-dimensional property of sLFM cannot be simply replaced by mathematical RL operators in Richardson–Lucy network27 (RLN) or straight-line light propagation with simple scaling in neural radiance field (NeRF)-based methods30, because they do not contain wave-optics PSF constraints to prevent network overfitting, especially for generalized applications (Supplementary Figs. 1, 6 and 7).
Advantages of SeReNet with comprehensive benchmarking
To maximize the physics-driven capabilities of SeReNet, we optimized the self-supervised framework to enhance its robust performance under complex imaging conditions. We then conducted both numerical simulations and experimental characterizations to demonstrate SeReNet’s advantages over previous methods, including an ablation study covering noise levels, sample motions, aberrations, and sample diversity.
First, to characterize the resolution of SeReNet, we imaged 100-nm-diameter fluorescence beads (Supplementary Fig. 8). We compared SeReNet with iterative tomography15. The measured full widths at half-maximum (FWHMs) revealed that SeReNet achieved a resolution of ~220 nm laterally and ~420 nm axially, approaching the diffraction limit. Although all methods experience a slight degradation in resolution as the axial defocus distance increases, SeReNet still demonstrated more uniform performance across the axial coverage, offering an extended depth of field (Supplementary Fig. 8d,e). Even when the system reverts to traditional LFM, SeReNet still maintains better resolution than other methods, including VCD-Net25 and HyLFM-Net26 (Supplementary Fig. 9). Considering its short processing time and high resolution, SeReNet achieves about two-times-higher processing throughput than do other SOTA approaches (Supplementary Table 2 and Methods). Furthermore, SeReNet is effective in sLFM with different scanning numbers and maintains stable performance, whereas previous learning-based methods are not designed for sLFM (Supplementary Fig. 10).
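The FWHM measurement used for this characterization can be reproduced with a simple half-maximum crossing on a 1D bead intensity profile. The sub-pixel linear interpolation below is a common convention and our own assumption, not necessarily the paper's exact procedure.

```python
import numpy as np

def fwhm(profile, spacing=1.0):
    """Full width at half-maximum of a 1D intensity profile.

    spacing: physical distance between samples (e.g. nm per pixel).
    Linear interpolation at the two half-maximum crossings gives
    sub-pixel accuracy.
    """
    p = np.asarray(profile, dtype=float)
    p = p - p.min()
    half = p.max() / 2.0
    above = np.flatnonzero(p >= half)
    left, right = above[0], above[-1]
    # interpolate each flank to locate the exact half-maximum crossing
    if left > 0:
        l = left - (p[left] - half) / (p[left] - p[left - 1])
    else:
        l = float(left)
    if right < len(p) - 1:
        r = right + (p[right] - half) / (p[right] - p[right + 1])
    else:
        r = float(right)
    return (r - l) * spacing
```

For a Gaussian profile of standard deviation σ, this returns approximately 2.355σ, the standard relation between FWHM and σ.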
Second, noise in optical microscopy, typically characterized by a mixed Poisson–Gaussian (MPG) distribution dominated by the Poisson component, is inevitable31. To address this, we derived a negative log-likelihood loss function based on MPG distribution (NLL-MPG loss) for SeReNet (Fig. 1b, Supplementary Fig. 11a and Methods). Under low signal-to-noise-ratio (SNR) conditions, the NLL-MPG loss increased fidelity in distinguishing intricate organelles (Fig. 2a). Our method shows more stable performance than other loss functions (Fig. 2b and Supplementary Fig. 11b–d) and reconstruction methods (Fig. 2c and Supplementary Fig. 12), even in the case of a very low SNR with only a few dozen photons.
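Under the common Gaussian approximation of the mixed Poisson–Gaussian model, the per-pixel variance grows linearly with the expected signal, and the negative log-likelihood takes a weighted least-squares form with a log-variance penalty. The sketch below is our own illustrative approximation with assumed gain and read-noise parameters, not the exact NLL-MPG derivation of the paper (Supplementary Fig. 11a).

```python
import numpy as np

def nll_mpg(pred, meas, gain=1.0, sigma=1.0, eps=1e-6):
    """Approximate mixed Poisson-Gaussian negative log-likelihood.

    pred: forward-projected estimate (expected photon counts)
    meas: raw measurement; gain: camera gain; sigma: read-noise std.
    Variance model (assumed): var = gain * pred + sigma**2.
    """
    pred = np.asarray(pred, dtype=float)
    var = gain * np.clip(pred, 0.0, None) + sigma ** 2 + eps
    return float(np.mean(0.5 * ((meas - pred) ** 2 / var + np.log(var))))
```

Relative to a plain L1 or L2 loss, the signal-dependent weighting down-weights residuals on bright, shot-noise-dominated pixels, which is consistent with the reported stability at low photon counts.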
Fig. 2: Evaluation and benchmarking of the robustness and generalization of SeReNet.
a, Raw measurements and SeReNet reconstruction results of a mitochondria-labeled L929 cell with different levels of mixed Poisson–Gaussian noises applied in simulation. NLL-MPG loss and L1 loss are compared. b, Multiscale structural similarity (MS-SSIM) curves over photon numbers, comparing different loss functions. c, Boxplot showing MS-SSIM indices obtained by different methods under low-photon (5–15) conditions. n = 11 experiments. P = 7.78 × 10−4. d, Measurement with artificially induced non-rigid motion and its counterparts, corrected by time-weighted algorithm and TW-Net. The coefficient map estimated by TW-Net is shown. e, Peak SNR (PSNR) curve versus different methods and coefficients. n = 9 views are shown as scatter points. f, SeReNet results without (w/o) and with (w/) preDAO after the input was contaminated by an induced aberration wavefront, the root mean square (r.m.s.) of which was set to one wavelength. The estimated wavefront by preDAO and ground truth are attached. GT, ground truth; λ, wavelength. g, Visualization of the amplitudes of 18 Zernike modes decomposed from the estimated pupils by preDAO (red) and the ground truth (blue). h, MS-SSIM curves versus aberration levels with and without preDAO. i, Boxplot showing MS-SSIM indices obtained by different methods with severe aberrations. The r.m.s. was set to one wavelength. n = 10 aberration patterns were used. P = 1.42 × 10−6, 1.50 × 10−6 from left to right. j, Test of generalization from the bubtub dataset to multiple kinds of experimentally captured structures. k, Boxplot showing MS-SSIM indices obtained by different methods, compared with the ground truth. n = 14 represents the number of samples. P = 1.01 × 10−4, 3.06 × 10−10, 5.10 × 10−3 from left to right. In boxplots: center line, median; box limits, lower and upper quartiles; whiskers, 1.5 × interquartile range. Asterisks represent significance levels tested with two-sided paired t-test, significance at P < 0.05. 
**P < 1 × 10−2; ***P < 1 × 10−3; ****P < 1 × 10−4. All networks were trained on synthetic bubtub dataset. Scale bars, 10 μm (a,d,f,j).
Third, the dynamics of cells and organelles can induce ‘checkerboard’ or ‘stripe’ artifacts during scanning in sLFM22. We developed a lightweight content-aware time-weighted network (TW-Net) embedded in SeReNet to automatically correct spatially non-uniform motions on the basis of optimal weighting of surrounding pixels (Fig. 1b, Supplementary Fig. 13a and Methods). Although previous time-weighted algorithms15 can also reduce motion artifacts, they do so with reduced fidelity and a dependency on hyperparameters; TW-Net-assisted SeReNet offers superior resolution, speed and user convenience (Fig. 2d,e and Supplementary Fig. 13b–g).
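The principle behind time-weighted motion correction can be illustrated with a hand-crafted variant: pixels scanned at different times are fused with weights that decay with their deviation from a reference time point, so structures that moved during the scan contribute mostly from the reference frame and checkerboard artifacts are suppressed. This is a conceptual numpy sketch with an assumed Gaussian weighting kernel; TW-Net instead learns these coefficients content-adaptively.

```python
import numpy as np

def time_weighted_fuse(frames, ref_index, bandwidth=0.1):
    """Fuse sequentially scanned frames with motion-aware pixel weights.

    frames: (T, H, W) frames acquired at successive scan positions.
    Per-pixel weights decay with deviation from the reference frame, so
    static regions average over all time points (better SNR) while moving
    regions fall back to the reference time point (no motion artifacts).
    """
    frames = np.asarray(frames, dtype=float)
    ref = frames[ref_index]
    diff = frames - ref[None]
    weights = np.exp(-(diff ** 2) / (2.0 * bandwidth ** 2))
    return (weights * frames).sum(axis=0) / weights.sum(axis=0)
```

The bandwidth plays the role of the hyperparameter that previous time-weighted algorithms required the user to tune; replacing it with a learned, spatially varying coefficient map is the step TW-Net automates.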
Fourth, tissue heterogeneity and imperfect imaging systems will distort wavefronts of light, especially in intravital imaging32. These aberrations will alter the imaging process by affecting the PSFs, which can degrade the performance of supervised neural networks without a good generalization capability. We designed a DAO15 preprocessing (preDAO) module for SeReNet to estimate and correct optical aberrations on the basis of 4D measurements (Fig. 1b, Supplementary Fig. 14a and Methods). Instead of iterative aberration estimation, we developed a non-iterative strategy for accurate wavefront estimations across different aberration levels (Fig. 2f–h). Compared with previous methods, SeReNet with preDAO showed greater robustness to optical aberrations and significantly improved speed (Fig. 2i and Supplementary Fig. 14b–e). This advantage is crucial for robust performance in complex intravital imaging environments by fully exploiting the 4D property of light-field measurements (Supplementary Fig. 14f).
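DAO in sLFM rests on the fact that a local wavefront tilt over one sub-aperture laterally displaces the corresponding angular view, so the set of view-to-reference shifts samples the wavefront gradient. A minimal, assumed way to measure such a shift is phase correlation, sketched below; the actual non-iterative preDAO pipeline differs in detail and additionally converts the shift map into a Zernike-decomposed pupil estimate.

```python
import numpy as np

def estimate_view_shift(view, reference):
    """Estimate the (dy, dx) shift of one angular view via phase correlation.

    The normalized cross-power spectrum of two shifted images inverse-
    transforms to a peak at the displacement; its location gives the shift.
    """
    f_view = np.fft.fft2(view)
    f_ref = np.fft.fft2(reference)
    cross = f_view * np.conj(f_ref)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shape = np.array(corr.shape, dtype=float)
    shifts = np.array(peak, dtype=float)
    # wrap shifts beyond half the image size to negative displacements
    wrap = shifts > shape / 2
    shifts[wrap] -= shape[wrap]
    return shifts
```

Collecting these shifts over all 13 × 13 angles yields a sampled wavefront gradient that can be integrated (or fit with Zernike modes) to estimate the aberrated pupil, which is the quantity preDAO corrects before reconstruction.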
Fifth, practical experiments yield diverse image structures across different species and cells. To improve the generalization ability, the self-supervised module in SeReNet combines 4D image-formation priors into the network training process to prevent overfitting of the 3D information not captured by the imaging system in previous supervised networks. In this case, SeReNet can be trained solely on simulated data and directly generalized to diverse experimental samples. Its performance is comparable to the result obtained by training it directly on the experimental sample, greatly reducing the dependence on experimental datasets (Supplementary Fig. 15). Therefore, we constructed a simulated dataset named ‘bubtub’, which contains the sLFM images of various geometric structures such as bubbles, beads and tubes, with different densities, diameters and optical aberrations (Supplementary Fig. 16a). The bubtub dataset mainly provides substantial structural information, enhancing dataset diversity and serving as a basis for the network to learn physical priors through the 4D image-formation convolution with multiple angular PSFs. During experiments, we found that SeReNet, trained with bubtub, generalized to diverse experimental biological structures with high fidelity (Fig. 2j). It outperformed other supervised networks, which struggled with aberration mismatches between training and testing (Fig. 2k and Supplementary Fig. 16b,c).
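A bubtub-style phantom can be emulated by scattering hollow spherical shells (bubbles) and line segments (tubes) of random size and position into a volume. The generator below is our own minimal stand-in with assumed size ranges and intensities, not the published dataset recipe; its purpose is only to show that purely geometric structures suffice as training input when the physical priors come from the forward model.

```python
import numpy as np

def make_bubtub(shape=(32, 64, 64), n_bubbles=4, n_tubes=3, seed=0):
    """Generate a toy volume of bubbles (thin shells) and tubes."""
    rng = np.random.default_rng(seed)
    zz, yy, xx = np.indices(shape).astype(float)
    vol = np.zeros(shape)
    for _ in range(n_bubbles):
        cz, cy, cx = (rng.uniform(0, s) for s in shape)
        radius = rng.uniform(3.0, 8.0)
        d = np.sqrt((zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2)
        vol += np.exp(-(d - radius) ** 2 / 2.0)  # thin spherical shell
    for _ in range(n_tubes):
        p0 = np.array([rng.uniform(0, s) for s in shape])
        p1 = np.array([rng.uniform(0, s) for s in shape])
        for t in np.linspace(0.0, 1.0, 200):     # sample along the tube axis
            c = p0 + t * (p1 - p0)
            d2 = (zz - c[0]) ** 2 + (yy - c[1]) ** 2 + (xx - c[2]) ** 2
            vol += 0.05 * np.exp(-d2 / 2.0)      # accumulate a thin tube
    return vol / vol.max()
```

In a full pipeline, each phantom would then be passed through the 4D spatial-angular forward model (with sampled aberrations) to synthesize the corresponding sLFM measurements for training.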
Finally, SeReNet is compatible with data-driven supervised networks. We incorporated a subtle data prior into the pretrained self-supervised network to enhance axial performance and alleviate the missing cone problem (Supplementary Fig. 17a and Methods). By simulating three 5-μm-diameter spherical shells, we characterized the optical sectioning capability for different methods. Axially improved SeReNet reduced axial tailing and achieved better optical sectioning by exploiting data priors (Supplementary Fig. 17b,c). In addition, the axially improved SeReNet is built on the self-supervised SeReNet pretrained model, with only a small dataset and epochs used for supervised axial fine-tuning, retaining better generalization than fully supervised networks (Supplementary Fig. 18). However, the improvement in axial resolution still induces a slight degradation of generalization24, because the enhanced axial information is not captured by the imaging system, leading to a balance between the increased axial resolution and generalization. Therefore, the axial fine-tuning strategy is designed as an option of SeReNet, allowing users to flexibly choose their own configurations between this tradeoff depending on experimental requirements.
All these strategies have been integrated into SeReNet to maximize its usability across various applications. In this paper, SeReNet was trained on the bubtub dataset with NLL-MPG loss, TW-Net and preDAO, and was utilized for both validations and applications. Details regarding the application of axial fine-tuning are provided in the figure legend.
Experimental validations in diverse living model organisms
Although synthetic data and fixed biological samples have been used to demonstrate the advancements of SeReNet, verifying its practical utility with time-lapse data that capture real biophysiological processes is essential for broader applicability. To this end, we captured time-series sLFM images in diverse samples and reconstructed them using the SeReNet model trained only on the simulation dataset.
First, we observed a membrane-labeled zebrafish embryo in vivo to investigate SeReNet’s applicability in developmental biology. Migrasomes, reported to be involved in organ morphogenesis33, require high resolution for detection owing to their small size. We showed the formation process of two migrasomes from an embryonic cell in different ways (Fig. 3a). Physics-inspired SeReNet could detect the subcellular details with their native morphologies (Fig. 3b and Supplementary Video 1). In addition, we demonstrated that SeReNet, even when trained with different datasets, exhibited stable performance (Supplementary Fig. 19).
Fig. 3: Experimental comparisons of SeReNet and other SOTA methods in diverse living organisms.
a, Orthogonal maximum intensity projections (MIPs) showing the process of migrasome formation in a zebrafish embryo, obtained by SeReNet and iterative tomography. b, Normalized intensity profiles along the two lines marked in a. c, Center view and MIP obtained by SeReNet of membrane-labeled D. discoideum at t = 192 s, with white arrows indicating produced EVs and yellow arrows pointing to motion artifacts. The amoebas were cultured in a dish, as shown in the cartoon. d, Tracking of D. discoideum based on SeReNet results. The tracking traces were obtained through Imaris 9.0.1 software, with an overall tracking time of 1,260 s. A total of 49 cells were tracked with temporal-coding trajectory. The colors reflect different time points. e, Enlarged MIPs showing EV generation from D. discoideum at different stamps, comparing different methods. Profiles across an EV are compared, and yellow arrows indicate motion artifacts. f, Dual-directional MIPs and enlarged regions of an entire NeuroPAL worm (strain OH16230) obtained by SeReNet and iterative tomography. g, Orthogonal MIPs of neuron-labeled worm midbody by SeReNet, with enlarged regions showing comparisons between different methods. The identity is marked on the side of each neuron. h, Normalized intensity profiles along the marked dashed lines in g. i, Temporal traces (ΔF/F0) of GCaMP6s transients in four neurons, extracted from results of different methods. All networks were trained on the synthetic bubtub dataset, and the SeReNet here was the axially improved version. Scale bars, 10 μm (a,c–e, enlarged views in f,g), 50 μm (f,g, original views).
In addition to migrasomes, extracellular vesicles (EVs) play an important role in cellular interactions in multicellular systems34. We imaged D. discoideum, which was membrane-labeled and cultured in a Petri dish. It is sensitive to photodamage, migrates fast and causes motion artifacts (part I of Supplementary Video 2). SeReNet corrected these artifacts, enabling high-fidelity reconstructions that allowed us to track the free-moving trajectories of D. discoideum (Fig. 3c,d). EVs and retraction fibers were visualized with narrower intensity profiles than those obtained using previous methods (Fig. 3e). During observation, we noted that large amounts of EVs were produced from elongated retraction fibers. Subsequently, an EV generated by one D. discoideum was picked up by another (part II of Supplementary Video 2). The phenomenon is similar to those of migrasomes in zebrafish embryos33 and living cells35, warranting further exploration.
C. elegans is another commonly used animal model, widely studied in developmental biology36 and neuroscience9. To show SeReNet’s application in large-scale neural imaging, we observed the whole body of a young NeuroPAL37 transgenic C. elegans worm (strain OH16230) with GCaMP6s indicators and multi-color neural identities. Each neuron was uniquely identified by its color and position (Fig. 3f and Supplementary Video 3). NeuroPAL localized fluorophore expression to cell nuclei, which were densely packed in 3D space and clearly identified by SeReNet with subcellular resolution. SeReNet also resolved four neurons in the ventral nerve cord, which are neatly and tightly packed in the midbody (Fig. 3g,h). The temporal traces of neurons showed enhanced SNR with our method (Fig. 3i).
With improvements in processing speed, resolution, robustness and versatility, synergizing SeReNet and sLFM presents a promising tool for subcellular observations both in vivo and ex vivo.
High-fidelity investigation of subcellular dynamics in mice
Next, we used SeReNet to study liver injury in mice, highlighting its unique advantages in facilitating such applications. Mammalian liver injury is a systemic process involving multiple immune cells and organelles, and has become a global health concern38,39,40,41. Understanding the complex immune physio-pathogenesis at the cellular and subcellular levels is crucial for developing effective therapies. This not only requires an imaging system capable of high-speed 3D imaging in vivo, but also requires an algorithm for rapid, robust and high-fidelity reconstructions. Additionally, the liver’s dynamic environment, with constant blood flow and tissue movement, typically complicates imaging. However, SeReNet’s advanced noise reduction, aberration robustness and motion-correction capabilities ensured that the resulting images were of high quality.
We first established a liver ischemia–reperfusion injury (LIRI) model in wild-type mice (Fig. 4a and Methods). During recovery and liver regeneration after LIRI, Kupffer cells (KCs) and neutrophils play significant roles in tissue repair, but their specific mechanisms remain to be explored at the subcellular scale42,43. We injected antibodies and dyes intravenously (i.v.) into injured mice to label KCs, neutrophils and vessels, and the mouse livers were imaged during recovery (24 h post-LIRI). Compared with wild-type mice, in mice with LIRI, the number of neutrophils increased substantially, revealing various interactions between KCs and neutrophils (Fig. 4b and Supplementary Fig. 20). Notably, a neutrophil migrating in the vessels generated migrasomes inside a KC (Fig. 4c,d). As the neutrophil moved away, it was pulled by a thin retraction fiber extending from the KC for further interaction (part I of Supplementary Video 4). Thanks to SeReNet’s broad generalization capabilities, various subcellular structures during the bioprocess could be clearly distinguished without severe blurring or artifacts. We also observed KCs contacting each other through elongation and contraction of retraction fibers (Fig. 4e and part II of Supplementary Video 4), and a neutrophil producing a migrasome and delivering it into a KC by generating a long retraction fiber (Fig. 4f). These observations suggest that signals might be delivered between multiple immune cells through contact-generated migrasomes and intercellular pulling, facilitating innate immune system repair in mammals. Targeting these processes of organelle formation and intercellular pulling could offer potential therapies for LIRI.
Fig. 4: High-fidelity long-term imaging by SeReNet in mice with liver injury.
a, Illustrations of the LIRI model. Mice were anesthetized and subjected to hepatic ischemia and reperfusion. After 24 h, liver regeneration was initiated. b, Boxplot showing the neutrophil counts in mouse livers without and with LIRI. n = 4 regions. P = 5.89 × 10−3. c, Orthogonal MIPs by SeReNet showing neutrophils (Ly6G, green) and KCs (F4/80, magenta) in the vessels (WGA, blue) of living mouse livers following LIRI, with enlarged MIPs showing the interactions between neutrophils and KCs. Arrows indicate the image blur and artifacts. d, Retraction fiber length of the KC over 15 min, showing the elongation process. e, MIPs showing that a KC stretched out a retraction fiber to touch another KC. The lengths of the two retraction fibers over 75 min are plotted. f, MIPs showing that neutrophils generated a long retraction fiber and produced a migrasome that was delivered into a KC. The retraction fiber length over 60 min is plotted. g, Illustrations of the AILF model. Mice were given an i.p. injection of APAP (600 mg kg–1) for 16 h to induce a proinflammatory phenotype. h, Boxplot showing the count of CD63+ ECs in mouse livers without and with AILF. n = 4 regions. P = 1.17×10−2. i, MIPs obtained by SeReNet of monocytes (Ly6C, green) and CD63+ ECs (CD63, magenta) in the vessels (WGA, blue) of livers of living mice with AILF. Arrows indicate the image blur and artifacts. j,k, Enlarged MIPs of two regions demonstrate the proximity process (j) and adhesion process (k) between monocytes and CD63+ ECs. The color-coded trajectories of two monocytes are overlaid. l, Centroidal distances between the monocytes and CD63+ ECs over time. In boxplots: center line, median; box limits, lower and upper quartiles; whiskers, 0th–100th percentiles. Asterisks represent significance levels tested with two-sided paired t-test. Significance at P < 0.05. *P < 0.05; **P < 1 × 10−2. SeReNet was trained on the synthetic bubtub dataset. Scale bars, 10 μm (c,e,f,i–k).
Another common liver disease is drug-induced liver injury (DILI), characterized by the activation of endothelial cells (ECs) into a proinflammatory phenotype (Fig. 4g). This leads to an increase in CD63+ ECs with heightened adherent functions in the liver44,45 (Fig. 4h and Supplementary F