Introduction
Technical progress in systems neuroscience has led to an explosion in the volume of neural and behavioral data1,2,3,4. The challenges of handling these large, high-dimensional data have spurred the development of new computational methods to process them efficiently5,6,7. Such theoretical and computational efforts have increasingly centered on models of population dynamics8,9 that explicitly describe high-dimensional neural activity10,11. In parallel, experiments have increasingly used complex stimuli and task structures that align more closely with those experienced in the wild12,13 to understand how neural circuitry and dynamics govern natural behavior14,15,16,17,18.
Yet this complexity has led to new challenges in experimental design. Our limitation is no longer the volume of data that can be collected but the number of hypotheses that can be tested in limited experimental time. For instance, even a few visual stimulus parameters—contrast, speed, and direction of moving gratings—imply thousands of combinations of unique stimuli, with even more for natural images. Given a fixed time budget and an increasing number of experimental conditions, statistical power is likely to decrease significantly without careful experimental design19.
But even beyond time limitations, many new questions can only be addressed when experiments can be adjusted during data acquisition. For example, behaviorally relevant neurons are widely distributed20,21 with unknown initial identities and locations. In these cases, performing causal testing via targeted stimulation methods requires first collecting data to assess the location and function of the relevant neural populations22,23. Moreover, many quantities of interest can only be learned from data, including information about the organization of behavioral states24, which behavioral variables are associated with neural activity, or which neural dynamics are most relevant to behavior25. By contrast, typical analyses are performed long after data acquisition, precluding any meaningful interventions that would benefit from information collected during the experiment26. This separation between data collection and informative analysis thus directly impedes our ability to test complex functional hypotheses that might emerge during the experiment.
Tighter model-experiment integration offers a potential solution: models can speed up hypothesis testing by selecting the most relevant tests to conduct, and they can be learned or refined continually as data are acquired. Such adaptive paradigms have been used with great success in learning features of artificially generated images that maximally excite neurons in the visual cortex27,28 and for system identification of sensory processing models by optimizing the presented stimuli29. Likewise, various models of decision-making can be tested and differentiated by decoding and causally perturbing a latent task variable using moment-to-moment readouts30. Closed-loop and adaptive designs have also been used to identify sources of performance variability through real-time auditory disruptions31 and to find stimuli that optimally excite neurons with closed-loop deep learning32. These experimental design strategies all rely on models that are updated as soon as new data or test results become available.
Indeed, strategies that ‘close the loop’ are also essential for causal experiments that directly intervene in neural systems16,22,26,33. For instance, in experiments that aim to mimic endogenous neural activity via stimulation, real-time feedback can inform where or when to stimulate22,34,35,36,37,38,39, and such stimulations are critical to revealing the functional contributions of individual neurons to circuit computations and behavior16. In fact, for large circuits composed of thousands of neurons, establishing fine-grained causal connections between neurons may prove infeasible without models to narrow down candidate mechanisms or circuit hypotheses in real time40.
Thus, for testing many complex hypotheses, data analysis should not be independent of data acquisition. Yet while modern computing and new algorithms have made real-time preprocessing of large-scale recordings feasible7,41,42,43, significant technical barriers have prevented their routine use. For example, many successful model-based experiments have required significant offline or cluster-based computing resources32,44. In addition, most existing algorithms and software are not constructed to facilitate the parallel execution and streaming analyses critical for adaptive experiments. Finally, inter-process data sharing, concurrent execution, and pipeline specification pose significant technical difficulties, and because adaptive designs vary so widely, any practical solution must be easily configurable and extensible to facilitate rapid prototyping. That is, simply allowing users to choose from a set of built-in models is insufficient: the system must allow experimenters to flexibly compose existing methods with entirely novel algorithms of their own design.
To address these challenges, we present improv, a modular software platform for constructing and orchestrating adaptive experiments. By carefully managing the backend software engineering of data flow and task execution, improv can integrate customized models, analyses, and experimental logic into data pipelines in real time without requiring user oversight. Any type of input or output data stream can be defined and integrated into the setup (e.g., behavioral or neural variables), with information centralized in memory for rapid, integrative analysis. Rapid prototyping is facilitated by allowing simple text files to define arbitrary processing pipelines and streaming analyses. In addition, improv is designed to be highly stable, ensuring data integrity through intensive logging and high fault tolerance. It offers out-of-the-box parallelization, visualization, and user interaction via a lightweight Python application programming interface. The result is a flexible real-time preprocessing and analysis platform that achieves active model-experiment integration in only a few lines of code.
Results
improv is a flexible and user-friendly real-time software platform
We created improv to easily integrate custom real-time model fitting and data analysis into adaptive experiments by seamlessly interfacing with many different types of data sources (Fig. 1a). improv’s design is based on a simplified version of the ‘actor model’ of concurrent systems45. In this model, each independent function of the system is the responsibility of a single actor. For example, one actor could be responsible for acquiring images from a camera, with a separate actor responsible for processing those images. Each actor is implemented as a user-defined Python class that inherits from the Actor class provided by improv and is instantiated inside an independent process (Supplementary Fig. 1). Actors interact directly with other actors via message passing, with messages containing keys that correspond to items in a shared, in-memory data store built atop the Plasma library from Apache Arrow46. Rather than directly passing gigabytes’ worth of images from actor to actor (e.g., from acquisition to analysis steps), each image is placed into the shared data store, after which a message with the image’s location is passed to any actor requiring access. Thus, communication overhead and data copying between processes are minimized (Fig. 1b). At a higher level, pipelines are defined by processing steps (actors) and message queues, which correspond to nodes and edges in a directed graph (Fig. 1c). This concurrent framework also allows improv to tolerate faults in individual actors and maintain processing performance over long timescales without crashing or accumulating lag (Supplementary Fig. 2).
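To make this pattern concrete, the sketch below mimics the design with plain Python multiprocessing. It is illustrative only and does not use improv’s API: a Manager dict stands in for the Plasma store (unlike Plasma, it copies data on access), but the key-passing pattern is the same.

```python
# Illustrative sketch of the actor pattern described above (not improv's API):
# actors run as separate processes, large arrays live in a shared store, and
# only small keys travel through message queues.
import multiprocessing as mp
import numpy as np

def acquirer(store, queue, n_frames=5):
    """Actor 1: 'acquires' frames and publishes their store keys."""
    for i in range(n_frames):
        frame = np.random.rand(512, 512).astype(np.float32)  # stand-in camera frame
        key = f"frame_{i}"
        store[key] = frame   # place the large array in the shared store
        queue.put(key)       # pass only its key downstream
    queue.put(None)          # sentinel: acquisition finished

def processor(store, queue):
    """Actor 2: fetches frames by key and processes them."""
    while (key := queue.get()) is not None:
        frame = store[key]
        print(key, "mean intensity:", float(frame.mean()))

if __name__ == "__main__":
    with mp.Manager() as manager:
        store, queue = manager.dict(), mp.Queue()
        actors = [mp.Process(target=acquirer, args=(store, queue)),
                  mp.Process(target=processor, args=(store, queue))]
        for a in actors:
            a.start()
        for a in actors:
            a.join()
```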
Fig. 1: Design architecture of improv.
a Schematic for possible use cases of improv, enabling real-time data collection from multiple sources (orange), modeling and analyzing these data via user-defined code (blue), and manipulating experimental variables (magenta). improv orchestrates input and output data streams independently and asynchronously. b Schematic for the actor model. (1) improv creates and manages actors as separate, concurrent processes. (2) Actors can access a shared data store and pass messages to other actors. (3) Actors send only addresses of data items, minimizing data copies. c Example actor graph, a pipeline that acquires neural data, preprocesses them, analyzes the resulting neural activity, suggests the next stimulus to present, and visualizes the result. Actors correspond to nodes in the processing pipeline, and arrows indicate logical dependencies between actors.
Real-time modeling of neural responses
Designed for flexibility, improv facilitates a wide class of experiments involving real-time modeling, closed-loop control, and other adaptive designs. To test these capabilities in silico, we first benchmarked its performance on a prerecorded two-photon calcium imaging data set. Using raw fluorescence images streamed from disk at the rate of original data acquisition (3.6 Hz), we simulated an experiment in which larval zebrafish were exposed to a sequence of whole-field visual motion stimuli. The improv pipeline acquired the images, preprocessed them, analyzed the resulting deconvolved fluorescence traces, estimated response properties and functional connectivity of identified neurons, and displayed images and visualizations of the results in a graphical user interface (GUI) (Fig. 2a). Images were originally acquired via two-photon calcium imaging of 6-day-old larval zebrafish expressing the genetically encoded calcium indicator GCaMP6s in almost all neurons (“Methods”). Simultaneously, repetitions of visual motion stimuli, square-wave gratings moving in different directions, were displayed to the fish from below (Fig. 2b). These two data streams were sent to improv and synchronized via alignment to a common reference frame across time (Fig. 2c, d).
Fig. 2: improv provides streaming model-based characterization of neural function.
a Diagram showing the conceptual flow of data among all actors in the pipeline. Fluorescence images and visual stimuli data were acquired, preprocessed, fed into the model, and visualized, all in real time. b Schematic of calcium imaging in zebrafish. An acquisition computer acquired fluorescence images and controlled the projection of visual stimuli, and a second networked computer running improv received data for processing in real time. c The ‘2p Acquisition’ actor was responsible only for sending images from the two-photon microscope to the improv computer, one image at a time (3.6 frames/s). d The ‘Visual Stimuli’ actor broadcast information about the stimulus status and displayed visual stimuli. Stimuli were interleaved moving (4.2 s) and stationary (5.3 s) square wave gratings drifting in eight directions (arrow wheel). e Each image was streamed to the CaImAn Online algorithm, encapsulated in a custom actor, which calculated neural spatial masks (ROIs), fluorescence traces, and estimated (deconvolved) spikes across time for each neuron, shown for three example traces. f A linear-nonlinear-Poisson (LNP) model was reformulated to work in the streaming, one-frame-at-a-time setting. Center, Diagram of our model incorporating stimuli, self-history, and weighted connection terms. Bottom, Log-likelihood fit over time, as more frames were fed into improv. The online model fit converged to the offline fit obtained using the full data set (dotted line) after a single repetition of unique visual stimuli (shaded region). g The ‘Data Visualization’ actor was a GUI that displayed the raw acquired images (left), a processed image overlaid with ROIs color-coded for directional preference, neural traces for one selected neuron (white) and the population average (red), tuning curves, and model metrics. The processed frame showed each neuron colored by its directional selectivity (e.g., green hues indicate forward motion preference). The LNP actor interactively estimated the strongest functional connections to a selected neuron (green lines). The LNP model likelihood function (bottom right) showed the optimization loss across time, with estimated connectivity weights of highly connected neurons below.
Next, calcium images were preprocessed with an actor (‘Caiman Online’) that used the sequential fitting function from the CaImAn library7 to extract each neuron’s spatial location (ROI) and associated neural activity traces (fluorescence and spike estimates) across time (Fig. 2e; Supplementary Fig. 3). The visual stimuli and fluorescence traces were then used to compute each neuron’s response to motion direction, providing streaming and continually updated directional tuning curves. Additionally, within a separate ‘LNP Model’ actor, we fit a version of a linear-nonlinear-Poisson (LNP) model47, a widely used statistical model for neural firing (Fig. 2f; Supplementary Fig. 4). Here, in place of the entire data set, we used a sliding window across the most recent 100 frames and stochastic gradient descent to update the model parameters after each new frame of data was acquired. This model also scaled well to populations of thousands of neurons, allowing us to obtain up-to-the-moment estimates of model parameters across the brain, including circuit-wide functional connections among neurons. In testing, this online model fit converged quickly towards the value obtained by fitting the model offline using the entire data set. As a result, our replication experiment had the option of stopping early without needing to present each stimulus 5-10 times.
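The actual LNP actor uses covariates built from the stimulus, spike history, and coupling terms over a 100-frame sliding window; the sketch below shows only the core streaming idea, a generic Poisson model updated by one stochastic gradient step per frame, with random placeholders for the covariates and spike counts.

```python
# Minimal sketch of a streaming Poisson (LNP-style) parameter update.
import numpy as np

def sgd_step(w, x, y, lr=1e-3):
    """One SGD step on the per-bin Poisson negative log-likelihood lam - y*log(lam),
    with rate lam = exp(w @ x); its gradient w.r.t. w is (lam - y) * x."""
    lam = np.exp(w @ x)
    return w - lr * (lam - y) * x

rng = np.random.default_rng(0)
w = np.zeros(10)                 # weights over stimulus/history covariates
for _ in range(1000):            # one update per newly acquired frame
    x = rng.normal(size=10)      # covariate vector for this frame (placeholder)
    y = rng.poisson(1.0)         # observed spike count for this frame (placeholder)
    w = sgd_step(w, x, y)
```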
Finally, we constructed a GUI that displayed neural functional responses and connectivity maps in real time and offered interactive controls (Fig. 2g). While fully automating experiments could, in principle, enable more efficient experiments, it also remained important to provide status metrics and raw data streams that allowed for experimenter oversight. Here, we used the cross-platform library PyQt48 to implement a ‘Data Visualization’ actor as a frontend display, visualizing raw and processed data in real time using improv as the backend controller. All plots were updated as new data were received, up to 60 times per second, providing users with up-to-the-minute visual feedback (Supplementary Video 1). In this way, improv can easily integrate incoming data with models to produce both visualizations and model-based functional characterizations in real time, with the benefit of early stopping, saving valuable experimental time.
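A minimal sketch of such a polling frontend, assuming PyQt5; here a simple counter stands in for fetching new results from improv’s data store.

```python
# Minimal Qt frontend polling for new results ~60 times per second.
import sys
from PyQt5.QtWidgets import QApplication, QLabel
from PyQt5.QtCore import QTimer

app = QApplication(sys.argv)
label = QLabel("waiting for data...")
label.show()

frame_count = 0
def poll():
    global frame_count
    frame_count += 1                       # placeholder: fetch new results here
    label.setText(f"frames received: {frame_count}")

timer = QTimer()
timer.timeout.connect(poll)
timer.start(16)                            # ~60 Hz refresh
sys.exit(app.exec_())
```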
Concurrent neural and behavioral analyses
We next demonstrate how improv can be used for streaming, simultaneous analysis of neural data and behavior in real time. To do so, we reproduced an analysis from Musall et al.49 in which features from mouse behavioral videos acquired at 30 frames per second were used to predict neural activity acquired as two-photon calcium imaging data in the dorsal cortex. As model variables from their behavioral video data had the most power for predicting single-neuron activity, we focused our replication solely on the video data rather than any other task information. Importantly, this study identified unstructured movements, which are generally not known ahead of time, as the strongest predictors of neural activity, suggesting that identifying these significant behavioral metrics during the experiment could generate new hypotheses about brain-behavior links to test online.
To demonstrate this possibility, improv ingested simultaneously recorded video of a mouse and two-photon fluorescence traces. We implemented a streaming dimension reduction method to reduce each (240 × 320) video frame down to ten dimensions, used ridge regression to predict the neural activity from the low-dimensional behavioral video factors, and visualized the results (Fig. 3a). Here, we used our recently developed form of streaming dimension reduction, proSVD, to identify a stable low-dimensional representation of the video data50. Within one minute, proSVD found a subspace of behavioral features that was stable across time, one suitable to serve as a reduced data representation in the subsequent regression model (Fig. 3b). We next used an implementation of streaming ridge regression51 to predict neural data from the proSVD-derived features. We found that the identified regression coefficients β also converged quickly, on the order of minutes (Fig. 3c; Supplementary Fig. 5).
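The published pipeline used the OnlineStats implementation; mathematically, a single-pass ridge fit only requires accumulating the sufficient statistics XᵀX and XᵀY, so the solution β = (XᵀX + λI)⁻¹XᵀY is available at any moment. A minimal Python sketch of this idea:

```python
import numpy as np

class StreamingRidge:
    """Ridge regression via accumulated sufficient statistics; each sample is seen once."""
    def __init__(self, n_features, n_targets, lam=1.0):
        self.A = lam * np.eye(n_features)            # running X^T X + lam * I
        self.b = np.zeros((n_features, n_targets))   # running X^T Y

    def update(self, x, y):
        """Fold in one frame: x (features,), y (targets,)."""
        self.A += np.outer(x, x)
        self.b += np.outer(x, y)

    @property
    def beta(self):
        return np.linalg.solve(self.A, self.b)

model = StreamingRidge(n_features=10, n_targets=57)  # 10 proSVD dims -> 57 neurons
# per frame: model.update(x_t, y_t); the current estimate is model.beta
```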
Fig. 3: improv handles concurrent neural activity and behavioral video data streams.
a improv pipeline for processing behavioral video and neural activity traces, implementing streaming dimension reduction, streaming ridge regression, and real-time visualization. b Video frames were streamed from disk at the original data rate of 30 frames/sec. After downsampling images, a ‘proSVD’ actor implemented the dimension reduction algorithm. The learned 10-dimensional proSVD basis vectors stabilized within less than a minute. c The ‘Regression Model’ actor received the dimension-reduced video data and neural activity traces and computed the regression coefficients β. For model fitting, ridge regression was implemented via a streaming update algorithm in which each datum is only seen once. Here, Y represents the matrix of neural data (57 neurons x time) and X represents the matrix of reduced behavioral data (10 proSVD dimensions x time). Different gray lines correspond to different coefficients for each latent behavioral feature (10 total). d Two data visualization methods were used for monitoring during a simulated experiment. Left, Video data (dots) were plotted in the proSVD space with a representative trial highlighted in orange. Right, Regression coefficients were normalized to the top coefficient and overlaid back onto the original behavioral image by projecting from the proSVD basis. Regions of the mouse’s face and paws are most predictive of the simultaneously occurring neural activity (cf., Musall49, Fig. 3h).
To gain insight into which low-dimensional behavioral features were most significant for predicting neural activity, we visualized this dimension-reduced data, plotting the first two proSVD dimensions (Fig. 3d, orange trajectory). Simultaneously, as in Musall et al.49, we visualized the identified effects by overlaying the weighted regression coefficients onto the original behavioral video, which highlighted the regions of the image most relevant for predicting neural activity (Fig. 3d; Supplementary Video 2). Thus, real-time modeling with improv allowed for rapid identification of brain-behavior relationships that could ultimately be used to time causal perturbations to the system based on the current neural or behavioral state.
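Given the proSVD basis, the overlay in Fig. 3d amounts to projecting each neuron’s ten coefficients back into pixel space; a sketch with random placeholders for the basis and coefficients:

```python
# Sketch of overlaying regression weights on the video frame: project one
# neuron's 10-dim coefficients back through the proSVD basis V (pixels x 10).
import numpy as np

h, w = 240, 320
V = np.random.randn(h * w, 10)          # placeholder for the proSVD basis
beta_neuron = np.random.randn(10)       # placeholder coefficients for one neuron
heatmap = np.abs(V @ beta_neuron).reshape(h, w)
heatmap /= heatmap.max()                # normalize to the top coefficient
```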
Streaming predictions of future neural activity
Given the growing importance of testing theories based on neural population dynamics10,11,52, we next asked whether improv could be used to learn and predict upcoming neural population activity during a single experiment. As neural dynamics, by definition, vary across time and are highly individual- and trial-specific, it is important to learn and track these trajectories as they occur. By doing so, experiments could directly test hypotheses about how these neural dynamics evolve across time by perturbing activity at precise moments along neural trajectories. In this simulated example, we tackled the first stage of such an experiment by modeling latent neural dynamics in real time and generating future predictions of how trajectories would evolve given current estimates of the neural population state. Here, we used data53 recorded from the primary motor cortex (M1) in an experiment where monkeys made a series of self-paced reaches to targets.
For this pipeline, note that we did not need to reimplement or change the code for the proSVD actor from our previous experiment (Fig. 4a). Rather, we inserted this module into a new pipeline simply by modifying a parameter file with dataset-relevant values describing the neural data (Fig. 4b). With improv, it is thus extremely simple to combine old and new analyses by reusing actors or swapping models, either in new experiments or during an experiment in progress. For this new experiment, we acquired neural data in the form of sorted spikes, used proSVD for streaming dimension reduction on the neural data, implemented another streaming algorithm to learn and predict latent neural trajectories, and visualized the model metrics and projected neural paths.
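Illustratively, such a pipeline description might look like the following Python rendering of a configuration file; the module and class names here are hypothetical, and improv’s actual on-disk format may differ.

```python
# Hypothetical pipeline specification: nodes are actors, edges are queues.
pipeline = {
    "actors": {
        "Acquirer":   {"module": "actors.acquire",  "class": "FileAcquirer"},
        "proSVD":     {"module": "actors.prosvd",   "class": "ProSVD", "k": 6},
        "Bubblewrap": {"module": "actors.bw",       "class": "BubblewrapModel"},
        "Visual":     {"module": "actors.visual",   "class": "DisplayVisual"},
    },
    "connections": {
        "Acquirer.q_out":   ["proSVD.q_in"],
        "proSVD.q_out":     ["Bubblewrap.q_in"],
        "Bubblewrap.q_out": ["Visual.q_in"],
    },
}
```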
Fig. 4: Real-time latent neural trajectory prediction with improv.
a improv pipeline for dimension-reducing multichannel neural electrophysiology data and predicting latent dynamics in real time. Data from O’Doherty et al.53. b Neural spiking data are streamed from disk, simulating online waveform template matching, binned at 10 ms, and smoothed using a Gaussian kernel to obtain firing rates. The ‘proSVD’ actor then reduces 182 units down to a stable 6-dimensional space. c The ‘Bubblewrap’ actor incorporates dimension-reduced neural trajectories and fits (via a streaming EM algorithm) a Gaussian mixture hidden Markov model to coarsely tile the neural space. Left, A dimension-reduced input data trajectory (orange line), bubbles (shaded blue ellipses), and the (probabilistic) connections between bubbles (dashed black line). Right, The model’s predictive performance is quantified by the log predictive probability (blue, top) and the entropy of the learned transition matrix (purple, bottom). Black lines are exponentially weighted moving averages. d Predictions can be qualitatively and quantitatively monitored via improv. Left, Dimension-reduced neural data are displayed in light gray with the neural trajectory of the current arm reach shown in orange; bubbles and connections as in c. The dashed black line indicates the predicted transitions in the space given the first 150 ms of the trial, predicting 400 ms into the future. Right, Bubblewrap’s predictive performance (log predictive probability and entropy; mean and standard deviation) is shown as a function of seconds predicted ahead. Error bars denoting standard deviation are calculated across all timepoints in the second half of the dataset (n = 7500).
After dimension reduction on the neural data, we used a streaming probabilistic flow model, Bubblewrap50, to model the resulting low-dimensional latent trajectories in real time (Fig. 4c). By covering the observed latent trajectories with Gaussian tiles (or ‘bubbles’), the model maximized the likelihood of observed transitions between these tiles, learning a transition matrix A that allowed it to predict the likely evolution of the current trajectory into the future. Predictions even one full second (100 samples) into the future remained accurate, dropping in performance by only 11% relative to one-step-ahead predictions, as quantified by the log predictive probability (Fig. 4d). Thus, this model can, in principle, be used to plan causal interventions on these neural trajectories and precisely time their delivery. Importantly, such trajectories or perturbations cannot be known in advance, and thus real-time predictions of ongoing neural activity are essential for conducting true causal tests of theories of neural population dynamics.
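For intuition, multi-step prediction with the learned model reduces to propagating the current tile-occupancy distribution through powers of A; a minimal sketch, assuming A is a row-stochastic transition matrix over tiles (at the 10 ms bins used here, k = 100 corresponds to one second ahead):

```python
import numpy as np

def predict_ahead(p_now, A, k):
    """Propagate the current tile-occupancy distribution p_now forward k steps
    using the learned transition matrix A (rows sum to 1)."""
    return p_now @ np.linalg.matrix_power(A, k)

# log predictive probability of the tile actually occupied k steps later:
# np.log(predict_ahead(p_now, A, k)[observed_tile])
```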
Closed-loop stimulus optimization to maximize neural responses
Causally perturbing neural dynamics in adaptive experiments requires real-time modeling to efficiently determine when, where, and what kind of feedback to deliver. Rather than being restricted to a predefined and limited set of options, we used improv during live two-photon calcium imaging to choose visual stimuli based on ongoing visually evoked neural activity in larval zebrafish (Fig. 5, Supplementary Fig. 6). A standard experimental design is to present a limited set of motion stimuli while simultaneously imaging neural activity to measure each neuron’s direction selectivity20,54. However, because of time constraints set by the number of stimuli and presentation duration, it is difficult to assess a larger stimulus space if sampling is not optimized. For instance, we previously needed about eight minutes per plane (20 s per stimulus × 3 repetitions × 8 directions) for a coarse directional tuning curve (45° intervals)20. Yet evaluating each neuron’s response to twenty-four different angles (15° intervals) for each eye yields 576 possible stimuli (24 per left eye × 24 per right eye); at 20 s per stimulus and 3 repetitions, sampling them all would take close to ten hours for a single plane alone, or over 200 h of continuous imaging for the entire brain.
Fig. 5: improv enables closed-loop optimization of peak neural responses.
a improv pipeline for optimization of neural responses during calcium imaging, using a ‘Bayesian Optimization’ (BO) model actor to inform the ‘Visual Stimuli’ actor of the next stimulus to display. b Matrix of 576 possible combinations of visual stimuli consisting of moving gratings shown to each eye individually. Stimuli move for 5 s and are held stationary for 10 s. c The online BO actor assesses a neuron’s response to the current visual stimulus and updates its estimate of the tuning curve using a Gaussian process (GP) (f), as well as the associated uncertainty (σ). The next stimulus is selected by maximizing a priority score that balances exploration (regions of high uncertainty) and exploitation (regions of high response). Estimates and uncertainty are plotted here and in (d) using a color scale normalized to 1 for visualization. d To achieve similar receptive field precision, the online BO approach typically requires only 8–20 stimulus presentations, compared to an incomplete grid search of 144 stimuli (gray denotes unsampled regions). The peak tuning identified by an offline GP fit and the empirical peak tuning from the grid search agree with the peak tuning determined online. e On average, just 15 stimuli are needed to determine the peak tunings of 300 neurons in real time (N = 12 imaging sessions). f Heatmaps showing the distributions of identified peak tunings for individual neurons in the pretectum (Pt, left) or the optic tectum (OT, right). Color indicates the density of tuning curve peaks across the population. White ‘x’s mark the locations where the algorithm chose to sample. In this example, the algorithm sampled primarily near the diagonal (congruent, same direction of motion to both eyes) in the Pt but sampled more frequently in off-diagonal areas (different direction of motion to each eye, e.g., converging motion) in the OT.
Here, we implemented an adaptive approach using Bayesian optimization (BO)55 to quickly determine fine directional tuning. Avoiding complex software-hardware integration, we used the ZMQ messaging libraries to rapidly transfer fluorescence images via an Ethernet connection and communicate with improv, controlling stimulus parameters on the fly (“Methods”). improv utilized a ‘Bayesian Optimization’ (BO) actor to select which visual stimulus to display on each subsequent trial to maximize a given neuron’s response. To initialize the BO model, the responses to an initial set of eight stimuli were analyzed by the ‘Caiman Online’ actor (Fig. 5b). A Gaussian process (GP)56 was then used to estimate the given neuron’s tuning curve f across all stimulus possibilities, as well as the uncertainty σ in that estimate (Fig. 5c). We then chose the optimal next stimulus based on a weighted sum of those two components, balancing exploration and exploitation57. This cycle of acquiring and analyzing the neural responses, updating the model estimate, and selecting the next stimulus continued until a chosen model confidence value or an upper limit on the number of stimuli (nmax = 30) was reached (Supplementary Video 3). A new neuron was then randomly selected from all responsive neurons for optimization using the same procedure.
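A minimal sketch of this acquire-update-select loop, assuming an upper-confidence-bound style priority score; get_response() is a synthetic placeholder for the measured calcium response, and the plain RBF kernel used here for simplicity ignores the circularity of the angle space.

```python
# Sketch of GP-based Bayesian optimization over the 24 x 24 stimulus grid.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def get_response(left, right):
    """Placeholder: a synthetic tuning surface peaked at (90, 90) degrees."""
    return np.exp(-((left - 90.0) ** 2 + (right - 90.0) ** 2) / (2 * 45.0 ** 2))

angles = np.arange(0, 360, 15)                             # 24 angles per eye
grid = np.array([(l, r) for l in angles for r in angles])  # 576 stimulus pairs
gp = GaussianProcessRegressor(kernel=RBF(length_scale=30.0))

X, y = [], []
for l, r in grid[np.random.choice(len(grid), 8, replace=False)]:  # initial set
    X.append((l, r)); y.append(get_response(l, r))

for _ in range(22):                          # up to n_max = 30 total stimuli
    gp.fit(np.array(X), np.array(y))
    f, sigma = gp.predict(grid, return_std=True)
    nxt = grid[np.argmax(f + 2.0 * sigma)]   # explore/exploit trade-off
    X.append(tuple(nxt)); y.append(get_response(*nxt))
```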
We validated our approach by comparing it to rigid sampling within a reduced stimulus space (144 unique combinations) and found that our online BO approach identified qualitatively similar neural response curves compared to the offline GP fit to all collected data (Fig. 5d). In addition, we quantified the accuracy of the identified peak by computing the Euclidean distance between the location of the maximum values of the offline and online GP fits, accounting for circular boundary conditions (Supplementary Fig. 7). On average, the correct peak was identified in 93% of neurons chosen for optimization, and incorrectly identified peaks tended to have more complex tuning curves with multiple maxima (Supplementary Fig. 8). Better accuracy could be achieved by increasing the desired confidence level.
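As a sketch of this accuracy metric, angular differences can be wrapped to the shorter arc before taking the Euclidean distance; the exact implementation may differ.

```python
import numpy as np

def circ_diff(a, b, period=360.0):
    """Smallest angular difference between a and b on a circle."""
    d = np.abs(np.asarray(a, float) - np.asarray(b, float)) % period
    return np.minimum(d, period - d)

def peak_error(peak_online, peak_offline):
    """Euclidean distance between (left-eye, right-eye) peak angles,
    accounting for circular boundary conditions."""
    return float(np.linalg.norm(circ_diff(peak_online, peak_offline)))

peak_error((350, 10), (5, 355))  # ~21.2 deg, not ~488 deg without wrapping
```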
While this method only optimized stimuli to drive peak responses of a single neuron at a time, simultaneous imaging of the entire population allowed us to observe the responses of all other neurons in the field of view, and thus we updated tuning information for all neurons after each stimulus. This meant that, in practice, just 15 stimuli were needed (on average) to optimize a population of 300 neurons (Fig. 5e). Thus, using improv, we quickly identified the peak tunings across different neural populations (Fig. 5f). For instance, when comparing peak tunings for neurons in the pretectum (Pt) and the optic tectum (OT), we observed differences between regions that were reflected in the algorithm’s pattern of stimulus sampling. Pt neurons preferred whole-field stimuli in which the same angle was shown to each eye, whereas OT neurons’ peak tunings were concentrated in off-diagonal regions, with converging or diverging stimuli displayed to the fish. The adaptive algorithm correctly chose to sample in the regions where neurons were maximally tuned, depending on the region (Fig. 5f, white ‘x’s). Again, since we recorded simultaneously from all neurons, each new optimization routine leveraged data from the previous neurons, effectively aggregating information across the given neural population. This method is thus particularly well suited for applications where population correlations are expected, allowing larger stimulus spaces to be explored.
Adaptive optogenetic photostimulation of direction-selective neurons
Finally, we used improv to adaptively select neurons for optogenetic photostimulation based on their direction selectivity. Optogenetics is a powerful tool for dissecting causal neural interactions, activating neurons by opening light-gated channels while recording from downstream target neurons16,58. Yet, typically, the criteria for photostimulation, including location and neuron type, must be established beforehand. Here, we leveraged improv to implement real-time data analysis, enabling phototargeting based on criteria that cannot be pre-specified, such as the functional roles of individual neurons (Fig. 6; Supplementary Fig. 6). Specifically, we used an all-optical approach59 in larval zebrafish, simultaneously performing two-photon photostimulation in red (1045 nm) of neurons expressing a novel red-shifted marine opsin, rsChRmine60, during two-photon calcium imaging in green (920 nm, GCaMP6s), avoiding spectral overlap.
Fig. 6: Adaptive optogenetic photostimulation target selection during functional calcium imaging in zebrafish.
a *im