Introduction
Sensing plays a pivotal role in science and technology. By monitoring changes in the surroundings and reacting to external signals, sensors provide essential information about the environment of a system1,2,3. However, unwanted stochastic fluctuations fundamentally limit the amount of information that can be acquired. A sensor should provide a strong response to a signal and, at the same time, be minimally affected by the detrimental influence of noise. An important figure of merit that quantifies this property is the signal-to-noise ratio, which describes how well a detected signal can be distinguished from the noise1,2,3. Recent studies of biochemical networks, in particular of the sensing of chemical concentrations by biological cells, have revealed that the breaking of detailed balance away from equilibrium can enhance sensing performance4,5,6,7,8,9,10,11. These findings suggest that operating sensors far from equilibrium can be of significant advantage. Yet, the fundamental physical limits on nonequilibrium sensing are still unknown12,13.
We here address this central issue in the context of Maxwell’s demon14,15, using the tools of information thermodynamics16,17. By measuring a system and applying feedback, Maxwell’s demon is able to extract work from an equilibrium heat reservoir by breaking the fluctuation-dissipation relation that connects the response to an external field to the equilibrium correlation function of spontaneous fluctuations18,19. We show that the demon may also enhance the sensing ability of the system by exploiting their nonreciprocal interaction. Akin to the asymmetric conductance of a diode, nonreciprocity allows one to strongly suppress the random fluctuations of the sensor, at the expense of those of the demon. As a consequence, the nonequilibrium signal-to-noise ratio may not only be improved compared to the equilibrium situation, it can be arbitrarily large at low frequencies in linear systems after optimization, even at a fixed overall amount of dissipation. This result implies that there is actually no fundamental limit on out-of-equilibrium sensing.
Results
We concretely consider a generic composite system whose state space can be divided into two distinct subsystems that interact with each other. This setup acts as an autonomous Maxwell demon where one subsystem generates information and the other one reacts to it20,21,22,23,24,25. It additionally provides a general model for molecular sensors and two-component molecular machines that operate without external measurement and feedback26. When the composite system is in a nonequilibrium steady state, created for instance by nonconservative forces, entropy is dissipated and detailed balance is broken. In the following, we combine a newly derived local form of the Harada-Sasa relation, which relates the dissipated heat to the violation of the fluctuation-dissipation relation27,28,29,30, and the second law of information thermodynamics, which extends the entropy balance to include the contribution of the information flow between the subsystems21,22,23,24,25. We show that fluctuations of one subsystem (sensor) can be arbitrarily reduced compared to its response when its dissipated heat becomes negative for a sufficiently large information flow to the other subsystem (demon). Such apparent violation of the second law, which is made possible by the nonreciprocal coupling between the two subsystems, is at the origin of enhanced nonequilibrium sensing. We illustrate this generic result with the example of two overdamped harmonic oscillators (Fig. 1).
Fig. 1: Sensor-demon system.
The sensor consists of a Brownian particle (X) (blue) that is coupled via reciprocal (black spring) and nonreciprocal (orange arrows) interactions to another Brownian particle (Y) (purple) which acts as a demon. The demon increases the nonequilibrium signal-to-noise ratio by reducing the fluctuations of the sensor at the expense of its own. Optimal sensing of a force f(t) (green) is achieved for an inverted harmonic potential for the demon (dotted).
Harada-Sasa relation for subsystems
We begin by deriving a Harada-Sasa relation for coupled subsystems. We consider a composite system consisting of d overdamped degrees of freedom z(t) in contact with a viscous equilibrium environment characterized by a temperature T and a friction coefficient γ, whose dynamics obeys the Langevin equation (we set k_B = 1)31
$$\gamma \dot{{{\boldsymbol{z}}}}(t)={{\boldsymbol{f}}}({{\boldsymbol{z}}}(t))+\sqrt{2\gamma T}{{\boldsymbol{\xi }}}(t),$$
(1)
where f(z) are arbitrary forces acting on the system and ξ(t) is a vector of mutually independent Gaussian white noises. When the forces are nonconservative (for example, external driving forces or nonreciprocal interactions), the nonequilibrium steady state of the system is characterized by a positive rate of heat dissipation16
$${\dot{Q}}_{{{\rm{diss}}}}=T\sigma=\left\langle {{{\boldsymbol{f}}}}^{{{\rm{T}}}}\circ \dot{{{\boldsymbol{z}}}}\right\rangle=\frac{1}{\gamma }{\left\langle {\parallel {{\boldsymbol{f}}}-T{{{\boldsymbol{\nabla }}}}_{z}\ln {p}_{{{\rm{st}}}}\parallel }^{2}\right\rangle }_{{{\rm{st}}}}\ge 0,$$
(2)
where ∘ is the Stratonovich product and 〈…〉st denotes the average with respect to the steady-state probability density pst(z). The quantity σ is the total entropy production rate that represents the increase in entropy of both the system and the environment due to the nonequilibrium nature of the dynamics16. We further divide the degrees of freedom into two subsets, z = (x, y), and interpret x and y as the degrees of freedom of the subsystems X and Y, respectively. Subsystem X will be the sensor, whereas subsystem Y will act as the demon. Doing the same for the forces, f = (f^X, f^Y), we can split the total dissipation into local contributions from X and Y, ({\dot{Q}}_{{{\rm{diss}}}}=\left\langle {{{\boldsymbol{f}}}}^{X,{{\rm{T}}}}\circ \dot{{{\boldsymbol{x}}}}\right\rangle+\left\langle {{{\boldsymbol{f}}}}^{Y,{{\rm{T}}}}\circ \dot{{{\boldsymbol{y}}}}\right\rangle={\dot{Q}}_{{{\rm{diss}}}}^{X}+{\dot{Q}}_{{{\rm{diss}}}}^{Y}). To simplify the notation, we will focus on a two-dimensional space, z = (x, y), with single-variable subsystems.
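To make this decomposition concrete, the following minimal numerical sketch integrates the Langevin equation (1) and accumulates the two Stratonovich heat flows. It uses, as an assumed example, the linear force f(z) = −Kz analyzed later in the text; all parameter values are illustrative assumptions, and the script is not the Mathematica code mentioned under Code availability.

```python
# Minimal sketch (assumed parameters): Euler-Maruyama integration of Eq. (1) for an
# illustrative linear force f(z) = -K z, with Stratonovich estimators of the local heat
# flows Q_diss^X = <f^X o dx/dt> and Q_diss^Y = <f^Y o dy/dt>.
import numpy as np

rng = np.random.default_rng(0)
gamma, T = 1.0, 1.0
kx, ky, kappa, delta = 1.0, 1.0, 1.0, -0.3      # -kappa < delta < 0: the demon cools X
K = np.array([[kx + kappa, -kappa - delta],
              [-kappa + delta, ky + kappa]])

dt, n_steps = 1e-3, 1_000_000                   # longer runs reduce the statistical error
z = np.zeros(2)
Q_X = Q_Y = 0.0

for _ in range(n_steps):
    noise = np.sqrt(2 * T * dt / gamma) * rng.standard_normal(2)
    dz = -(K @ z) / gamma * dt + noise
    # Stratonovich product f o dz: evaluate the force at the midpoint of the step
    f_mid = -K @ (z + 0.5 * dz)
    Q_X += f_mid[0] * dz[0]
    Q_Y += f_mid[1] * dz[1]
    z = z + dz

t_total = n_steps * dt
print("Q_diss^X rate:", Q_X / t_total)          # negative for these parameters (cf. Eq. (7) below)
print("Q_diss^Y rate:", Q_Y / t_total)
print("total dissipation rate:", (Q_X + Q_Y) / t_total)   # non-negative on average
```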
By quantifying fluctuations of the velocity (\dot{x}(t)) of subsystem X (sensor) in a given frequency interval by the power spectral density ({S}_{v}^{X}(\omega )) and its response to an external perturbation by the response function ({R}_{v}^{X}(\omega ))31, we obtain the local Harada-Sasa relation for subsystem X,
$$\frac{\gamma }{\pi }\int_{0}^{\infty }d\omega \,\left[{S}_{v}^{X}(\omega )-2T{R}_{v}^{X}(\omega )\right]=\left\langle {f}^{X}\circ \dot{x}\right\rangle={\dot{Q}}_{{{\rm{diss}}}}^{X},$$
(3)
that connects the violation of the local fluctuation-dissipation theorem, ({S}_{v}^{X}(\omega )=2T{R}_{v}^{X}(\omega ))18,19, to the local heat dissipation rate ({\dot{Q}}_{{{\rm{diss}}}}^{X}) (Methods).
Improved nonequilibrium sensing
Equation (3) for the local subsystem has the same form as the global Harada-Sasa relation for the composite system27,28,29,30. However, the underlying physics is radically different. According to the global second law, ({\dot{Q}}_{{{\rm{diss}}}}\ge 0), (2), the rate of heat dissipation is positive. The Harada-Sasa relation then implies that driving the system out of equilibrium always reduces the overall response compared to the fluctuations. This seems to suggest that better sensing, with a larger signal-to-noise ratio, is to be achieved near equilibrium. By contrast, the local second law for subsystem X reads (T{\sigma }^{X}={\dot{Q}}_{{{\rm{diss}}}}^{X}+T{l}^{X}\ge 0), where σ^X is the local entropy production and ({l}^{X}={\left\langle {\left({{{\boldsymbol{f}}}}^{X}-T{{{\boldsymbol{\nabla }}}}_{x}\ln {p}_{{{\rm{st}}}}\right)}^{{{\rm{T}}}}{{{\boldsymbol{\nabla }}}}_{x}\ln {p}_{{{\rm{st}}}}\right\rangle }_{{{\rm{st}}}}/\gamma) is the so-called learning rate, which quantifies the information flow between the subsystems21,22,23,24,25. Through the action of the demon, the local heat dissipation rate ({\dot{Q}}_{{{\rm{diss}}}}^{X}) of subsystem X can become negative in the presence of a sufficiently large information flow l^X. This effect allows one to cool X or to continuously extract work from it; it is the foundation for what has been termed nonreciprocal cooling32,33,34. A direct consequence of (3) is that the demon, with the help of the same effect, can also suppress the fluctuations of the subsystem compared to its response, and hence increase the signal-to-noise ratio.
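Combining the local Harada-Sasa relation (3) with the local second law quoted above directly bounds how strongly the fluctuations of the sensor can be suppressed below its response,
$$\frac{\gamma }{\pi }\int_{0}^{\infty }d\omega \,\left[{S}_{v}^{X}(\omega )-2T{R}_{v}^{X}(\omega )\right]={\dot{Q}}_{{{\rm{diss}}}}^{X}\ge -T{l}^{X}.$$
The integrated violation of the local fluctuation-dissipation theorem can therefore only become negative to the extent permitted by the information flow l^X.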
The response function ({R}_{v}^{X}(\omega )) in (3) is the real part of the complex response function, and therefore only accounts for the in-phase response of the velocity. In practice, we are often interested in the amplitude of the response, which is characterized by the absolute value of the complex response function31,
$${\bar{R}}_{v}^{X}(\omega )=\sqrt{{R}_{v}^{X}{(\omega )}^{2}+{\tilde{R}}_{v}^{X}{(\omega )}^{2}},$$
(4)
where ({\tilde{R}}_{v}^{X}(\omega )) is the imaginary part of the complex response function that measures the out-of-phase response of the velocity. Equation (4) can be used to define the dimensionless signal-to-noise ratio of the sensor
$${{\mbox{SNR}}}^{X}(\omega )=\frac{{\bar{R}}_{x}^{X}(\omega )f}{\sqrt{{{\mbox{Var}}}_{{{\rm{st}}}}(x)}}=\frac{{\bar{R}}_{v}^{X}(\omega )f}{\omega \sqrt{{{\mbox{Var}}}_{{{\rm{st}}}}(x)}},$$
(5)
where f is the applied perturbation, Varst(x) is the variance of x and ({\bar{R}}_{x}^{X}(\omega )={\bar{R}}_{v}^{X}(\omega )/\omega). The main result of this paper is that there is no fundamental upper limit on SNR^X away from equilibrium: in principle, we may design a system that has arbitrarily small fluctuations compared to the response, as we will now demonstrate in a concrete system. By contrast, the fundamental limit of the signal-to-noise ratio in an equilibrium system comes from the fact that noise cannot be eliminated at equilibrium: thermal fluctuations are indeed in general proportional to temperature31, as can be seen from the expression of the Johnson-Nyquist noise in a resistor35 and from the (generalized) equipartition theorem for nonharmonic confining potentials35.
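As a simple illustration of definition (5), the short sketch below evaluates the equilibrium signal-to-noise ratio of a bare sensor in a harmonic trap of stiffness k_x. The parameter values are illustrative assumptions; at equilibrium the position-response amplitude is 1/√(k_x² + (γω)²) and the equipartition variance is T/k_x.

```python
# Sketch of Eq. (5) for a bare equilibrium sensor (no demon); parameters are assumptions.
import numpy as np

def snr_x(R_x_amplitude, f, var_x):
    """Dimensionless signal-to-noise ratio of the sensor, Eq. (5)."""
    return R_x_amplitude * f / np.sqrt(var_x)

gamma, T, kx, f = 1.0, 0.01, 1.0, 0.1
omega = np.logspace(-3, 2, 200)
R_x_eq = 1.0 / np.sqrt(kx**2 + (gamma * omega) ** 2)   # equilibrium position-response amplitude
print(snr_x(R_x_eq, f, T / kx)[0])                     # ~ f / sqrt(kx * T) at low frequency
```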
Application to a linear system
Let us consider a two-dimensional system where subsystems X and Y can be locally approximated by linearly coupled harmonic oscillators (Fig. 1). The dynamics of the composite system follows the Langevin equation (1) with f (z(t)) = −Kz(t). We parameterize the force matrix K as
$${{\boldsymbol{K}}}=\left(\begin{array}{cc}{k}_{x}+\kappa &-\kappa -\delta \\ -\kappa+\delta &{k}_{y}+\kappa \end{array}\right).$$
(6)
This corresponds to two overdamped particles confined in parabolic traps with strengths k_x and k_y. The particles interact via a spring with spring constant κ. In addition, the parameter δ describes a nonreciprocal coupling between the two particles. Similar nonreciprocal interactions have recently been realized experimentally in optically levitated particles36; they also naturally occur in active colloidal systems37. Since the dynamics is linear, we can analytically compute all the relevant quantities (Supplementary Information). The heat flow out of subsystem X (sensor) is explicitly given by,
$${\dot{Q}}_{{{\rm{diss}}}}^{X}=\frac{2T\delta (\delta+\kappa )}{\gamma {{\mathcal{T}}}},$$
(7)
whereas the corresponding response and variance read
$$\begin{array}{rcl}{\bar{R}}_{v}^{X}{(\omega )}^{2}&=&\frac{{\omega }^{2}\left[{{{\mathcal{Q}}}}^{2}+{(\gamma \omega )}^{2}\right]}{\left[{({\lambda }^{+})}^{2}+{(\gamma \omega )}^{2}\right]\left[{({\lambda }^{-})}^{2}+{(\gamma \omega )}^{2}\right]},\\ {{\mbox{Var}}}_{{{\rm{st}}}}(x)&=&\frac{T\left(\gamma \sigma+2{{\mathcal{Q}}}-\sqrt{\gamma \sigma \left[\gamma \sigma -4\frac{({{\mathcal{Q}}}-{\lambda }^{+})({{\mathcal{Q}}}-{\lambda }^{-})}{{\lambda }^{+}+{\lambda }^{-}}\right]}\right)}{2{\lambda }^{+}{\lambda }^{-}},\end{array}$$
(8)
where λ± are the eigenvalues of the force matrix K. We further have the trace ({\mbox{tr}}({{\boldsymbol{K}}})={{\mathcal{T}}}), the determinant (\det ({{\boldsymbol{K}}})={{\mathcal{D}}}), and ({{\mathcal{Q}}}={k}_{y}+\kappa). In order to warrant a stable steady state, we impose the condition ({{\mathcal{D}}} > 0); from the inequality ({{\mathcal{T}}}\ge \sqrt{2{{\mathcal{D}}}}), we then have ({{\mathcal{T}}} > 0). The response function does not explicitly depend on the overall dissipation (\sigma={\dot{Q}}_{{{\rm{diss}}}}/T), that is, on how far the overall system is driven from equilibrium, contrary to the variance. Therefore, for a given response function, the fluctuations can generally be reduced by driving the system out of equilibrium. We emphasize that both reciprocal (κ ≠ 0) and nonreciprocal (δ ≠ 0) couplings are necessary to obtain a negative heat flow (({\dot{Q}}_{{{\rm{diss}}}}^{X} < 0) for − κ < δ < 0) and to achieve enhanced sensing with a reduced variance.
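These quantities can also be obtained numerically for any choice of K: for linear Langevin dynamics, the steady-state covariance Σ of z = (x, y) solves the Lyapunov equation KΣ + ΣKᵀ = 2T𝟙, which yields Var_st(x), while the sensor heat flow follows from Eq. (7). The sketch below uses assumed illustrative parameters.

```python
# Numerical check (assumed parameters): steady-state covariance from the Lyapunov equation
# K Sigma + Sigma K^T = 2 T I, eigenvalues of K, and the sensor heat flow of Eq. (7).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

gamma, T = 1.0, 0.01
kx, ky, kappa, delta = 1.0, 1.0, 0.5, -0.3
K = np.array([[kx + kappa, -kappa - delta],
              [-kappa + delta, ky + kappa]])

# solve_continuous_lyapunov(A, Q) solves A X + X A^T = Q; here A = -K/gamma, Q = -2T/gamma*I
Sigma = solve_continuous_lyapunov(-K / gamma, -2 * T / gamma * np.eye(2))
lam_plus, lam_minus = np.sort(np.linalg.eigvals(K).real)[::-1]
Q_dot_X = 2 * T * delta * (delta + kappa) / (gamma * np.trace(K))   # Eq. (7)

print("Var_st(x) =", Sigma[0, 0], " Var_st(y) =", Sigma[1, 1])
print("lambda_+ =", lam_plus, " lambda_- =", lam_minus, " Q_diss^X =", Q_dot_X)
```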
We next numerically illustrate the beneficial role of the demon on the sensor for a small periodic perturbation, (f(t)=\epsilon \cos ({\omega }_{0}t)), applied to the sensor. To that end, we set the amplitude of the nonequilibrium response at frequency ω0 equal to the corresponding equilibrium response, ({\bar{R}}_{v}^{X}({\omega }_{0})={\bar{R}}_{v,{\mbox{eq}}}^{X}({\omega }_{0})), where ({\bar{R}}_{v,{\mbox{eq}}}^{X}(\omega )=\sqrt{{\omega }^{2}/[{k}_{x}^{2}+{(\gamma \omega )}^{2}]}) is the response spectrum of the sensor in the absence of the demon. We also fix the total rate of dissipation σ. Then, we numerically minimize the variance with respect to the eigenvalues λ+ and λ−, which gives us the least possible amount of fluctuations for a given response and dissipation.
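The following sketch outlines such a constrained minimization. It is an illustrative Python reconstruction, not the authors' Mathematica code: the optimizer settings and starting point are assumptions, and the search is parameterized by (k_y, κ, δ) rather than by the eigenvalues used in the text. The response amplitude at ω0 and the total entropy production rate, evaluated from Eq. (2) for the Gaussian steady state, are held fixed while Var_st(x) is minimized.

```python
# Sketch of the constrained optimization described in the text (assumed reconstruction).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.optimize import minimize

gamma, T, kx = 1.0, 0.01, 1.0
omega0, sigma_target = 0.01, 1.0

def force_matrix(p):
    ky, kappa, delta = p
    return np.array([[kx + kappa, -kappa - delta],
                     [-kappa + delta, ky + kappa]])

def observables(p):
    K = force_matrix(p)
    Sigma = solve_continuous_lyapunov(-K / gamma, -2 * T / gamma * np.eye(2))
    # velocity-response amplitude of x to a force on x: omega * |[(K - i*gamma*omega)^-1]_xx|
    chi = np.linalg.inv(K - 1j * gamma * omega0 * np.eye(2))[0, 0]
    R_v = omega0 * np.abs(chi)
    # total entropy production rate, Eq. (2), for the Gaussian steady state
    M = T * np.linalg.inv(Sigma) - K
    sigma = np.trace(M.T @ M @ Sigma) / (gamma * T)
    return Sigma[0, 0], R_v, sigma

R_eq = omega0 / np.sqrt(kx**2 + (gamma * omega0) ** 2)   # bare-sensor response at omega0

res = minimize(
    lambda p: observables(p)[0] / (T / kx),              # variance in units of the bare value T/kx
    x0=np.array([0.5, 1.0, -0.8]),
    method="SLSQP",
    constraints=[
        {"type": "eq", "fun": lambda p: observables(p)[1] / R_eq - 1.0},
        {"type": "eq", "fun": lambda p: observables(p)[2] - sigma_target},
        {"type": "ineq", "fun": lambda p: np.linalg.det(force_matrix(p))},   # stability
        {"type": "ineq", "fun": lambda p: np.trace(force_matrix(p))},
    ],
)
print(res.x, "  Var_opt / Var_eq =", res.fun)
```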
Figure 2a) displays the response of the sensor x(t) in equilibrium, in the absence of the demon (gray), and with the demon (blue) to the small perturbation f(t) (green) for the optimized parameters. A strong decrease in fluctuations is clearly visible; for the considered example, it amounts to a factor of 3.1 improvement in the signal-to-noise ratio. The behavior of the demon (purple) is shown in the inset for comparison; as discussed in more detail below, it corresponds to an almost unstable mode that exhibits much larger fluctuations than the sensor. Figure 2b) moreover shows the response function ({\bar{R}}_{v}^{X}(\omega )), (4) (black), and the signal-to-noise ratio SNR^X(ω), (5) (blue), relative to their respective equilibrium values, as a function of the frequency ω. At frequencies lower than the reference frequency ω0, the response in the presence of the demon is enhanced, both in terms of its absolute value and its real part. At intermediate frequencies, the coupling to the demon reduces the response, while at high frequencies, where we essentially measure the viscosity of the environment, the response is unaffected.
Fig. 2: Enhanced nonequilibrium sensing.
a The response of the sensor x(t) to a periodic perturbation (\epsilon \cos ({\omega }_{0}t)) (green) exhibits smaller fluctuations in the presence of the demon (blue) than in equilibrium (gray). The inset shows the much larger fluctuations of the demon (purple). b Response function ({\bar{R}}_{v}^{X}(\omega )), (4) (black), and signal-to-noise ratio, (5) (blue), normalized by their equilibrium values, which they both exceed below ω0 (vertical dotted line). Parameters are σ = 10, T = 0.01, ω0 = 0.01, ϵ = 0.1 and γ = 1. Coupling parameters after minimization of the variance at constant response are k_x = 15.50, k_y = −7.919, κ = 8.269, δ = −7.766, corresponding to an eigenvalue λ− = 0.0105 of the force matrix.
Fundamental sensing limit
To investigate the fundamental limit on the performance of the nonequilibrium sensor, we now consider the results of the above optimization of the sensor’s parameters as a function of the reference frequency ω0, as shown in Fig. 3a). For frequencies above the characteristic relaxation rate ωc = k_x/γ of the sensor, where response and fluctuations are governed by the properties of the environment rather than the system, no improvement is possible. By contrast, at low frequencies ω0 ≪ ωc, the signal-to-noise ratio can be significantly enhanced above its equilibrium value. In particular, in the low-frequency limit, where the equilibrium signal-to-noise ratio saturates at a value of unity for the present parameters, the optimized nonequilibrium signal-to-noise ratio diverges as ({\omega }_{0}^{-1/4}). This implies that, for sensing of low-frequency signals, in particular of constant forces, the amount of fluctuations can be decreased arbitrarily, while keeping the response and dissipation finite. We note that while Fig. 3a) shows the signal-to-noise ratio normalized by its equilibrium value, the latter approaches the constant (f/\sqrt{{k}_{x}T}) in the low-frequency limit (Supplementary Information), so the divergence originates from an actual divergence of the nonequilibrium signal-to-noise ratio. Specifically, using the scaling of the parameters obtained from the numerical minimization, we find for given σ and in the limit ω0 → 0 (Supplementary Information),
$$\frac{{{\mbox{Var}}}_{{{\rm{opt}}}}(x)}{{{\mbox{Var}}}_{{{\rm{eq}}}}(x)}\simeq \sqrt{\frac{8{\omega }_{0}}{\sigma }},$$
(9)
which agrees with the results obtained by explicit numerical optimization in the low-frequency regime.
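Since the response amplitude is fixed to its equilibrium value in this optimization, Eq. (9) directly translates into the divergence of the signal-to-noise ratio quoted above,
$$\frac{{{\mbox{SNR}}}_{{{\rm{opt}}}}^{X}({\omega }_{0})}{{{\mbox{SNR}}}_{{{\rm{eq}}}}^{X}({\omega }_{0})}=\sqrt{\frac{{{\mbox{Var}}}_{{{\rm{eq}}}}(x)}{{{\mbox{Var}}}_{{{\rm{opt}}}}(x)}}\simeq {\left(\frac{\sigma }{8{\omega }_{0}}\right)}^{1/4},$$
which grows as ({\omega }_{0}^{-1/4}) when ω0 → 0, in agreement with Fig. 3a).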
Fig. 3: Nonequilibrium sensing limit.
a The nonequilibrium signal-to-noise ratio, (5) (blue solid), exceeds its equilibrium value for low frequencies ω0 and diverges as ({\omega }_{0}^{-1/4}) for ω0 → 0, (9) (blue dashed), indicating the absence of a fundamental limit on out-of-equilibrium sensing. The result is obtained by fixing the response to the equilibrium response, ({\bar{R}}_{v}^{X}({\omega }_{0})={\bar{R}}_{v,{\mbox{eq}}}^{X}({\omega }_{0})), corresponding to k_x = 1, with σ = 1, and then minimizing the variance with respect to the eigenvalues λ+ and λ− of the force matrix. The divergence of the signal-to-noise ratio of the sensor is accompanied by diverging fluctuations of the demon (purple, inset). b The efficiency of the sensor (blue) approaches unity when the signal-to-noise ratio diverges for ω0 → 0, indicating that the information acquired by the demon is perfectly converted into a negative heat flow from the sensor. However, the efficiency of the demon (purple) and the overall efficiency (black) vanish, indicating that the demon is a bad cooler in this limit. In general, the efficiencies of demon and sensor exhibit opposite behavior at low and high ω0.
To understand the origin of this dramatic improvement, it is useful to consider the optimal values of the eigenvalues λ±. The increase in the signal-to-noise ratio is accompanied by a decrease of the smaller eigenvalue λ− ≃ γω0. As a result, the stability of one of the eigenmodes of the system decreases. Since the response of the sensor is kept fixed, this eigenmode corresponds to the degree of freedom of the demon, which would be unstable, with negative spring constant, without the stabilizing coupling to the sensor. This instability allows the demon to absorb the fluctuations of the sensor, thus improving the corresponding signal-to-noise ratio at the expense of its own fluctuations, which grow as Varst(y) ≃ T/(γω0) in the low-frequency limit (Fig. 3a, inset).
Information-thermodynamic efficiencies
Additional insight may be gained by examining the information-thermodynamic efficiencies of sensor and demon21
$${\epsilon }^{X}=\frac{-{\dot{Q}}_{{{\rm{diss}}}}^{X}}{T{l}^{Y}}\quad {{\mbox{and}}}\quad {\epsilon }^{Y}=\frac{T{l}^{Y}}{{\dot{Q}}_{{{\rm{diss}}}}^{Y}}.$$
(10)
The parameter ϵ^Y quantifies how much information the demon acquires about the sensor relative to the amount of heat it dissipates, whereas ϵ^X is the efficiency of translating the acquired information into a negative heat flow that yields a reduction of the fluctuations of the sensor. The product (\epsilon={\epsilon }^{X}{\epsilon }^{Y}=-{\dot{Q}}_{{{\rm{diss}}}}^{X}/{\dot{Q}}_{{{\rm{diss}}}}^{Y}) is the overall thermodynamic efficiency of the combined system, that is, the ratio between the heat removed from the sensor and the heat dissipated by the demon. All three quantities are displayed in Fig. 3b) as a function of the frequency ω0. We see that the dynamics of the sensor become approximately reversible (ϵ^X → 1 and σ^X → 0) in the low-frequency limit in which the signal-to-noise ratio diverges. By contrast, the demon is not efficient in extracting information about the sensor (ϵ^Y → 0) in this limit, causing the overall thermodynamic efficiency ϵ to vanish. The constraint that the demon should reduce the fluctuations of the sensor while maintaining its response hence prevents it from acting as an efficient cooling device. This makes nonreciprocal sensing very different from nonreciprocal cooling33,34.
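We also note that, since the global second law (2) requires the total dissipation ({\dot{Q}}_{{{\rm{diss}}}}={\dot{Q}}_{{{\rm{diss}}}}^{X}+{\dot{Q}}_{{{\rm{diss}}}}^{Y}\ge 0), a negative sensor heat flow must always be overcompensated by the heat dissipated by the demon, so that
$$\epsilon=-\frac{{\dot{Q}}_{{{\rm{diss}}}}^{X}}{{\dot{Q}}_{{{\rm{diss}}}}^{Y}}\le 1,$$
that is, the overall efficiency is bounded by unity, even though the signal-to-noise ratio of the sensor is not.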
Discussion
We have investigated the physical limits on nonequilibrium sensing by analyzing a general sensor coupled to a demon. Our first key result is the identification of the central role of nonreciprocal sensor-demon interactions in enabling the demon to significantly suppress fluctuations of the sensor, while keeping the response unaffected. As a consequence, the signal-to-noise ratio can be strongly enhanced compared to its equilibrium value. However, not just any nonreciprocal coupling will produce an improved sensor. Our second main result is to show that the parameters of sensor and demon must be properly optimized to achieve an arbitrarily large signal-to-noise ratio. Remarkably, it may even diverge at low frequencies in linear systems, revealing that there is no fundamental limit on nonequilibrium sensing. Our third nontrivial observation is that such a divergent signal-to-noise ratio can be obtained at constant nonequilibrium entropy production, that is, at fixed energy dissipation. These findings should extend the applicability of enhanced nonequilibrium sensing beyond biology, where parameters are usually fixed, to a wider context including physics and engineering. In particular, the possibility of enhanced sensing at constant energy dissipation/consumption appears to be an interesting property in the context of current research on energy-efficient wireless sensor networks where power minimization is critical38,39,40. Our predictions for coupled oscillators could furthermore be directly tested using optically levitated particles36 or active colloidal systems37. We hasten to add that they also hold for systems described by discrete master equations, for which there is no Harada-Sasa relation (Supplementary Information). Moreover, we stress that the fundamental requirement is nonreciprocal coupling between different degrees of freedom driving the overall system out of equilibrium; the separation into subsystems is convenient for intuition but not necessary. All in all, our work suggests that appropriately designed nonequilibrium systems might be generally used for highly accurate sensing, even in the presence of large environmental fluctuations.
Methods
We here relate the local heat dissipation rates to the local fluctuations and responses of each subsystem. The fluctuations of the variable z(t) can be quantified with the (positive definite) power spectral density matrix31
$${\left({{\boldsymbol{S}}}(\omega )\right)}_{kl}=\frac{1}{2}\int_{-\infty }^{\infty }dt\,{e}^{i\omega t}\left[\left\langle \delta {z}_{k}(t)\delta {z}_{l}(0)\right\rangle+\left\langle \delta {z}_{l}(t)\delta {z}_{k}(0)\right\rangle \right],$$
(11)
with (\delta {{\boldsymbol{z}}}(t)={{\boldsymbol{z}}}(t)-{\left\langle {{\boldsymbol{z}}}\right\rangle }_{{{\rm{st}}}}). Its integral over all frequencies is equal to the steady-state fluctuations of z(t), (\int_{0}^{\infty }d\omega \,{\left({{\boldsymbol{S}}}(\omega )\right)}_{kl}/\pi={\left\langle \delta {z}_{k}\delta {z}_{l}\right\rangle }_{{{\rm{st}}}}). That is, ({\left({{\boldsymbol{S}}}(\omega )\right)}_{kl}\,d\omega) measures the amount of fluctuations of z(t) in the frequency interval [ω, ω + dω]. A closely related quantity is the velocity power spectral density matrix, ({{{\boldsymbol{S}}}}_{v}(\omega )={\omega }^{2}{{\boldsymbol{S}}}(\omega )), which likewise measures the fluctuations of (\dot{{{\boldsymbol{z}}}}(t)) in a given frequency interval31. On the other hand, the response of z(t) to a perturbation force (\eta \phi (t){{\hat{{\boldsymbol{e}}}}}_{l}) applied in direction l can, to linear order in the magnitude η of the perturbation, be expressed as31
$${\left\langle {z}_{k}(t)\right\rangle }_{\eta }-{\left\langle {z}_{k}\right\rangle }_{{{\rm{st}}}}\simeq \eta \int_{0}^{t}d{t}^{{\prime} }\int_{0}^{{t}^{{\prime} }}d{t}^{{\prime\prime} }\,{{{\mathcal{R}}}}_{v,kl}({t}^{{\prime} }-{t}^{{\prime\prime} })\phi ({t}^{{\prime\prime} }),$$
(12)
where ({\left\langle \ldots \right\rangle }_{\eta }) denotes the average evaluated in the perturbed system and the matrix ({{{\boldsymbol{{{\mathcal{R}}}}}}}_{v}({t}^{{\prime} }-{t}^{{\prime\prime} })) is the velocity-response matrix, whose components measure how much the velocity in direction k at time ({t}^{{\prime} }) changes in response to an applied force in direction l at time ({t}^{{\prime\prime} }). Note that, due to causality, ({{{\boldsymbol{{{\mathcal{R}}}}}}}_{v}({t}^{{\prime} }-{t}^{{\prime\prime} })) is only defined for ({t}^{{\prime} }\ge {t}^{{\prime\prime} }). Real and imaginary parts of the frequency-response matrix are given by ({{{\boldsymbol{R}}}}_{v}(\omega )=\int_{0}^{\infty }dt\,\cos (\omega t)\,{{{\boldsymbol{{{\mathcal{R}}}}}}}_{v}(t)) and ({{\tilde{{\boldsymbol{R}}}}}_{v}(\omega )=\int_{0}^{\infty }dt\,\sin (\omega t)\,{{{\boldsymbol{{{\mathcal{R}}}}}}}_{v}(t)).
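In practice, these spectral quantities can be estimated from recorded trajectories. The sketch below (assumed, illustrative parameters) estimates the velocity power spectral density of a simulated bare sensor with a Welch estimator; note that scipy's convention (one-sided spectrum, ordinary frequency in Hz) differs from the angular-frequency convention of Eq. (11) by constant factors of 2π.

```python
# Sketch (assumed parameters): estimate S_v^X from a simulated steady-state trajectory of
# a bare sensor in a harmonic trap, using a finite-difference velocity and Welch's method.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
gamma, T, kx, dt, n = 1.0, 0.01, 1.0, 1e-3, 400_000

x = np.empty(n)
x[0] = 0.0
for i in range(1, n):   # Euler-Maruyama integration of the overdamped oscillator
    x[i] = x[i - 1] - kx / gamma * x[i - 1] * dt + np.sqrt(2 * T * dt / gamma) * rng.standard_normal()

v = np.gradient(x, dt)                              # finite-difference velocity
freq, S_v_x = welch(v, fs=1.0 / dt, nperseg=2**13)  # one-sided PSD estimate
print(freq[:3], S_v_x[:3])
```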
To simplify the notation, we proceed by focusing on a two-dimensional space, z = (x, y), with single-variable subsystems. Then, we can write the two matrices
$${{{\boldsymbol{S}}}}_{v}(\omega )=\left(\begin{array}{cc}{S}_{v}^{X}&{S}_{v}^{XY}\\ {S}_{v}^{XY}&{S}_{v}^{Y}\end{array}\right),\quad {{\mbox{and}}}\quad {{{\boldsymbol{R}}}}_{v}(\omega )=\left(\begin{array}{cc}{R}_{v}^{X}&{R}_{v}^{XY}\\ {R}_{v}^{YX}&{R}_{v}^{Y}\end{array}\right),$$
(13)
where ({S}_{v}^{X}(\omega )) and ({S}_{v}^{Y}(\omega )) are the respective velocity power spectral densities of X and Y, and ({S}_{v}^{XY}(\omega )={S}_{v}^{YX}(\omega )) quantifies the correlations between the two subsystems. Similarly, ({R}_{v}^{X}(\omega )) measures the response of system X to perturbations applied to itself, while ({R}_{v}^{XY}(\omega )) measures the response of system X to perturbations applied to Y. Out of equilibrium, the response is generally not reciprocal, ({R}_{v}^{XY}(\omega )\ne {R}_{v}^{YX}(\omega )).
Using the explicit expressions for Sv and Rv, we obtain the local Harada-Sasa relation for subsystem X (sensor),
$$\frac{\gamma }{\pi }\int_{0}^{\infty }d\omega \,\left[{S}_{v}^{X}(\omega )-2T{R}_{v}^{X}(\omega )\right]=\left\langle {f}^{X}\circ \dot{x}\right\rangle={\dot{Q}}_{{{\rm{diss}}}}^{X},$$
(14)
that connects the violation of the local fluctuation-dissipation theorem, ({S}_{v}^{X}(\omega )=2T{R}_{v}^{X}(\omega ))18,19, to the local heat dissipation rate ({\dot{Q}}_{{{\rm{diss}}}}^{X}) (Supplementary Information). A similar relation holds for subsystem Y (demon).
Data availability
No datasets were generated or analysed during the current study.
Code availability
The Mathematica code used for the optimization of the SNR and plotting the figures will be provided upon request.
References
1. Fraden, J. Handbook of Modern Sensors (Springer, 2016).
2. Hering, E. & Schönfelder, G. Sensors in Science and Technology (Springer, 2022).
3. Barhoum, A. & Altintas, Z. Fundamentals of Sensor Technology (Elsevier, 2023).
4. Govern, C. C. & ten Wolde, P. R. Fundamental limits on sensing chemical concentrations with linear biochemical networks. Phys. Rev. Lett. 109, 218103 (2012).
5. Mehta, P. & Schwab, D. J. Energetic cost of cellular computation. Proc. Natl. Acad. Sci. USA 109, 17978 (2012).
6. Lan, G., Sartori, P., Neumann, S., Sourjik, V. & Tu, Y. The energy-speed-accuracy trade-off in sensory adaptation. Nat. Phys. 8, 422 (2012).
7. Tu, Y. The nonequilibrium mechanism for ultrasensitivity in a biological switch: sensing by Maxwell’s demons. Proc. Natl. Acad. Sci. USA 105, 11737 (2008).
8. Skoge, M., Naqvi, S., Meir, Y. & Wingreen, N. S. Chemical sensing by nonequilibrium cooperative receptors. Phys. Rev. Lett. 110, 248102 (2013).
9. Lang, A., Fisher, C. K., Mora, T. & Mehta, P. Thermodynamics of statistical inference by cells. Phys. Rev. Lett. 113, 148103 (2014).
10. Govern, C. C. & ten Wolde, P. R. Optimal resource allocation in cellular sensing systems. Proc. Natl. Acad. Sci. USA 111, 17486 (2014).
11. Ngampruetikorn, V., Schwab, D. J. &