Abstract
Predicting an individual’s behavior in one task condition based on their behavior in a different condition is a key challenge in modeling individual decision-making tendencies. We propose a novel framework that addresses this challenge by leveraging neural networks and introducing a concept we term the ‘individual latent representation’. This representation, extracted from behavior in a ‘source’ task condition via an encoder network, captures an individual’s unique decision-making tendencies. A decoder network then utilizes this representation to generate the weights of a task-specific neural network (a ‘task solver’), which predicts the individual’s behavior in a ‘target’ task condition. We demonstrate the effectiveness of our approach in two distinct decision-making tasks: a value-guided task and a perceptual task. Our framework offers a robust and generalizable approach for parameterizing individual variability, providing a promising pathway toward computational modeling at the individual level—replicating individuals in silico.
Introduction
Humans (and other animals) exhibit substantial commonalities in their decision-making processes. However, considerable variability is also frequently observed in how individuals perform perceptual and cognitive decision-making tasks (Carroll and Maxwell, 1979; Boogert et al., 2018). This variability arises from differences in underlying cognitive mechanisms. For example, individuals may vary in their ability or tendency to retain past experiences (Duncan and Shohamy, 2016; Collins and Frank, 2012), respond to events with both speed and accuracy (Wagenmakers and Brown, 2007; Spoerer et al., 2020), or explore novel actions (Frank et al., 2009). If these factors can be meaningfully disentangled, they would enable a concise characterization of individual decision-making processes, yielding a low-dimensional, parameterized representation of individuality. Such a representation could, in turn, be leveraged to predict future behaviors at an individual level. Shifting from population-level predictions to an individual-based approach would mark a significant advancement in domains where precise behavior prediction is essential, such as social and cognitive sciences. Beyond prediction, this approach offers a framework for parameterizing and clustering individuals, thereby facilitating the visualization of behavioral heterogeneity, which has applications in psychiatric analysis (Pedersen et al., 2017; Dezfouli et al., 2019a). Furthermore, this parameterization offers a promising pathway toward computational modeling at the individual level—replicating the cognitive and functional characteristics of individuals in silico (Shengli, 2021).
Cognitive modeling is a standard approach for reproducing and predicting human behavior (Navarro et al., 2006; Busemeyer and Stout, 2002; Yechiam et al., 2005), often implemented within a reinforcement learning framework (e.g. O’Doherty et al., 2007; Daw et al., 2011; Wilson and Collins, 2019). However, because these cognitive models are manually designed by researchers, their ability to accurately fit behavioral data may be limited (Fintz et al., 2022; Song et al., 2021; Miller et al., 2023; Eckstein et al., 2022). A data-driven approach using artificial neural networks (ANNs) offers an alternative (Dezfouli et al., 2019b; Radev et al., 2022; Schaeffer et al., 2020). Unlike cognitive models, which rely on predefined behavioral assumptions (Rmus et al., 2024), ANNs require minimal prior assumptions and can learn complex patterns directly from data. For instance, convolutional neural networks (CNNs) have successfully replicated human choices and reaction times in various visual tasks (Kriegeskorte, 2015; Rajalingham et al., 2018; Fel et al., 2022). Similarly, recurrent neural networks (RNNs; Siegelmann and Sontag, 1995; Cho et al., 2014) have been applied to model value-guided decision-making tasks such as the multi-armed bandit problem (Yang et al., 2019; Dezfouli et al., 2019a). A promising approach to capturing individual decision-making tendencies while preserving behavioral consistency is to tune ANN weights using a parameterized representation of individuality.
This idea was first proposed by Dezfouli et al., 2019a, who employed an RNN to solve a two-armed bandit task. Their study utilized an autoencoder framework (Rumelhart and McClelland, 1987; Tolstikhin et al., 2017), in which behavioral recordings from a single session of the bandit task, performed by an individual, were fed into an encoder. The encoder produced a low-dimensional vector, interpreted as a latent representation of the individual. Similar to hypernetworks (Ha et al., 2016; Karaletsos et al., 2018), a decoder then took this low-dimensional vector as input and generated the weights of the RNN. This framework successfully reproduced behavioral recordings from other sessions of the same bandit task while preserving individual characteristics. However, since this individuality transfer has only been validated within the bandit task, it remains unclear whether the extracted latent representation captures an individual’s intrinsic tendencies across a variety of task conditions.
To address this question, we aim to make the low-dimensional representation—referred to as the individual latent representation—robust to variations across individuals and task conditions, thereby enhancing its generalizability. Specifically, we propose a framework that predicts an individual’s behaviors, not only in the same condition but also in similar yet distinct task conditions and environments. If the individual latent representation serves as a low-dimensional representation of an individual’s decision-making process, then extracting it from one condition could facilitate the prediction of that individual’s behaviors in another.
In this study, we define the problem of individuality transfer across task conditions as follows (also illustrated in Figure 1). We assume access to a behavioral dataset from multiple individuals performing two task conditions: a source task condition and a target task condition. We train an encoder that takes behavioral data from the source task condition as input and outputs an individual latent representation. This representation is then fed into a decoder, which generates the weights of an ANN, referred to as a task solver, that reproduces behaviors in the target task condition. For testing, a new individual provides behavioral data from the source task condition, allowing us to infer their individual latent representation. Using this representation, a task solver is constructed to predict how the test individual will behave in the target task condition. Importantly, this prediction does not require any behavioral data from the test individual performing the target task condition. We refer to this framework as EIDT, an acronym for encoder, individual latent representation, decoder, and task solver.
The EIDT (encoder, individual latent representation, decoder, and task solver) framework for individuality transfer across task conditions.
The encoder maps action(s) α, provided by an individual K performing a specific problem ϕ in the source task condition A, into an individual latent representation (represented as a point in the two-dimensional space in the center). The individual latent representation is then fed into the decoder, which generates the weights for a task solver. The task solver predicts the behavior of the same individual K in the target task condition B. During training, a loss function evaluates the discrepancy between the predicted behavior β̂ and the actual recorded behavior β of individual K. The encoder’s input is referred to as an action sequence, the form of which depends on the task. For example, in a sequential Markov decision process (MDP) task, an action sequence consists of an environment (state transition probabilities) and a sequence of actions over multiple episodes. For a digit recognition task, it consists of a stimulus digit image and the corresponding chosen response.
We evaluated whether the proposed EIDT framework can effectively transfer individuality in both value-guided sequential decision-making tasks and perceptual decision-making tasks. To assess its generalizability across individuals, meaning its ability to predict the behavior of previously unseen individuals, we tested the framework using a test participant pool that was not included in the dataset used for model training. To determine how well our framework captures each individual’s unique behavioral patterns, we compared the prediction performance of a task solver specifically designed for a given individual with the performance of task solvers designed for other individuals. Our results indicate that the proposed framework successfully mimics decision-making while accounting for individual differences.
Results
We evaluated our EIDT framework using two distinct experimental paradigms: a value-guided sequential decision-making task (MDP task) and a perceptual decision-making task (MNIST task). For each paradigm, we assessed model performance in two scenarios. The first, Within-Condition Prediction, tested a model’s ability to predict behavior within a single task condition without individuality transfer. In this scenario, a model was trained on data from a pool of participants to predict the behavior of a held-out individual in that same condition. The second, Cross-Condition Transfer, tested the core hypothesis of individuality transfer. Here, a model used behavioral data from a participant in a ‘source’ condition to predict that same participant’s behavior in a different ‘target’ condition.
The prediction performance was evaluated using two metrics: the trial-by-trial negative log-likelihood and the rate for behavior matched. The negative log-likelihood is based on the probability the model assigned to the action the human participant actually took on each trial. The rate for behavior matched measures the proportion of trials in which the model’s most likely action (the action assigned the highest output probability) matched the participant’s actual choice.
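For concreteness, the two metrics can be computed from a model’s per-trial action probabilities as in the following minimal Python sketch (the function and variable names are illustrative, not taken from the actual implementation):

```python
import numpy as np

def evaluate_predictions(action_probs, actions):
    """Per-trial metrics for a model's action predictions.

    action_probs : (n_trials, n_actions) array of model-assigned
                   probabilities over actions on each trial.
    actions      : (n_trials,) integer array of the actions the
                   participant actually took.
    """
    # Probability the model assigned to the action actually taken.
    p_taken = action_probs[np.arange(len(actions)), actions]
    # Negative log-likelihood (here averaged over trials).
    nll = -np.mean(np.log(p_taken + 1e-12))
    # Rate for behavior matched: fraction of trials where the model's
    # most probable action equals the participant's choice.
    match_rate = np.mean(action_probs.argmax(axis=1) == actions)
    return nll, match_rate
```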
Markov decision process (MDP) task
The dataset consisted of behavioral data from 81 participants who performed both 2-step and 3-step MDP tasks. Each participant completed three blocks of 50 episodes for each condition, resulting in 486 action sequences in total. All analyses were performed using a leave-one-participant-out cross-validation procedure. For each fold, the model was trained on 80 participants, with 90% used for training updates and 10% for validation-based early stopping.
Task solver accurately predicts average behavior
First, we validated our core neural network architecture in Within-Condition Prediction. We trained a standard task solver, using the architecture defined in the EIDT model, on the training/validation pool (N=80) to predict the behavior of the held-out participant. We compared its performance against a standard cognitive model (a Q-learning model; see Cognitive model in Methods) whose parameters were averaged from fits to the same training/validation pool.
As shown in Figure 2, the neural network-based task solver significantly outperformed the cognitive model. A two-way (model: cognitive model/task solver; task condition: 2-step/3-step) repeated-measures (RM) ANOVA with Greenhouse-Geisser correction (significance level: 0.05) revealed a significant effect of model on both negative log-likelihood (model: F(1,80)=148.828, p<0.001, ηG²=0.143; task condition: F(1,80)=1.107, p=0.296, ηG²=0.002; interaction: F(1,80)=0.240, p=0.626, ηG²<0.001) and the rate for behavior matched (model: F(1,80)=110.684, p<0.001, ηG²=0.165; task condition: F(1,80)=3.914, p=0.051, ηG²=0.009; interaction: F(1,80)=19.059, p<0.001, ηG²=0.014). This result confirms that our RNN-based architecture serves as a strong foundation for modeling decision-making in this task.
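As an illustration, such an analysis could be run with the pingouin library, assuming a long-format table with one row per participant, model, and condition (column names are hypothetical; how sphericity correction is applied to a two-way design depends on the library version):

```python
import pingouin as pg

# df: long-format DataFrame with columns 'participant',
# 'model' ('cognitive'/'task_solver'), 'condition' ('2-step'/'3-step'),
# and the dependent measure, e.g. 'nll'.
aov = pg.rm_anova(data=df, dv='nll',
                  within=['model', 'condition'],
                  subject='participant',
                  effsize='ng2')  # generalized eta squared
print(aov)
```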
Comparison of prediction performance in Within-Condition Prediction for the MDP task.
The plots show the negative log-likelihood (left) and the rate for behavior matched (right) for the average-participant cognitive model and the task solver for 2-step and 3-step conditions. Box plots indicate the median and interquartile range. Whiskers extend to the minimum and maximum values. Each connected pair of dots represents a single participant’s data. The task solver demonstrates significantly better performance.
EIDT enables accurate individuality transfer
Next, we tested our main hypothesis in Cross-Condition Transfer. We used the full EIDT framework to predict a participant’s behavior in a target condition (e.g. 3-step MDP) using their behavioral data from a source condition (e.g. 2-step MDP). We compared the performance of two models:
Cognitive model
A Q-learning model whose parameters (qlr, qinit, qdr, and qit) were individually fitted for each participant using their data from the source condition and then applied to predict behavior in the target condition.
EIDT
Our framework, trained on the training and validation pool using data from both source and target conditions (see Appendix 1—figure 2, Appendix 1 for representative training and validation curves). To predict behavior for a test participant, their individual latent representation was computed by averaging the encoder’s output across all of their behavioral sequences from the source condition, and this representation was fed to the decoder to generate the task solver weights. For reference, the averaged individual latent representations are visualized in Appendix 1—figure 3, Appendix 1.
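A minimal sketch of this test-time procedure, assuming PyTorch-style encoder and decoder modules and a task_solver_fn helper (all names illustrative), might look as follows:

```python
import torch

@torch.no_grad()
def predict_target_behavior(encoder, decoder, task_solver_fn,
                            source_data, target_problems):
    """Test-time individuality transfer for one held-out participant.

    source_data     : list of (action_sequence, problem) pairs from the
                      source condition.
    target_problems : problems from the target condition to predict on.
    """
    # Average the encoder outputs over all of the participant's source-
    # condition action sequences to obtain one latent representation.
    z = torch.stack([encoder(alpha, phi)
                     for alpha, phi in source_data]).mean(0)
    # The decoder (a hypernetwork) maps z to the task solver's weights.
    theta_ts = decoder(z)
    # Predict behavior for each target-condition problem; no target-
    # condition data from this participant is required.
    return [task_solver_fn(psi, theta_ts) for psi in target_problems]
```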
The EIDT framework demonstrated significantly better prediction accuracy than the individualized cognitive model (Figure 3). A two-way (model: cognitive model/EIDT; transfer direction: 2→3/3→2) RM ANOVA confirmed a significant effect of the model on negative log-likelihood (model: F(1,80)=95.705, p<0.001, ηG²=0.142; transfer direction: F(1,80)=14.255, p<0.001, ηG²=0.019; interaction: F(1,80)=0.008, p=0.012, ηG²=0.002) and the rate for behavior matched (model: F(1,80)=100.843, p<0.001, ηG²=0.132; transfer direction: F(1,80)=13.021, p=0.001, ηG²=0.011; interaction: F(1,80)=0.964, p=0.329, ηG²<0.001). This result indicates that EIDT successfully captures and transfers individual-specific behavioral patterns more effectively than a traditional parameter-based transfer approach.
Individuality transfer performance in Cross-Condition Transfer for the MDP task.
The plots compare the EIDT framework against an individualized cognitive model on negative log-likelihood (left) and rate for behavior matched (right) for both 2-step to 3-step and 3-step to 2-step transfer. Box plots indicate the median and interquartile range. Whiskers extend to the minimum and maximum values. Each connected pair of dots represents a single participant’s data. The EIDT model shows superior prediction accuracy.
Latent space distance predicts transfer performance
To verify that the individual latent representation meaningfully captures individuality, we conducted a ‘cross-individual’ analysis. We generated a task solver using the latent representation of one participant (Participant l) and used it to predict the behavior of another participant (Participant k). We then measured the relationship between the prediction performance ($y_{k,l}$) and the Euclidean distance ($d_{k,l}$) between the latent representations of Participants k and l.
As hypothesized, prediction performance was strongly dependent on this distance (Figure 4). We fitted the data using a generalized linear model (GLM) with a Gamma distribution and a log link: $y_{k,l} \sim \mathrm{Gamma}$, with $\log \mathbb{E}[y_{k,l}] = \beta_{\mathrm{participant}_k} + \beta_d d_{k,l} + \beta_0$. The fit confirmed that distance ($d_{k,l}$) was a significant predictor: the coefficient $\beta_d$ was significantly positive for negative log-likelihood (transfer direction 3→2: $\beta_d$=0.176, p<0.001; 2→3: $\beta_d$=0.316, p<0.001) and significantly negative for the rate for behavior matched (3→2: $\beta_d$=−0.106, p<0.001; 2→3: $\beta_d$=−0.149, p<0.001). This indicates that prediction performance degrades as the behavioral dissimilarity (distance in the latent space) between the source and target individuals increases, providing direct evidence that the latent space organizes individuals by behavioral similarity.
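For reference, a Gamma GLM with a log link of this form can be fitted with statsmodels along the following lines (the design-matrix variables participant_dummies, d_kl, and y_kl are placeholders for the quantities defined above):

```python
import numpy as np
import statsmodels.api as sm

# X: intercept, per-participant dummy codes, and the latent-space
# distance d_kl; y_kl: prediction performance for each pair (k, l).
X = sm.add_constant(np.column_stack([participant_dummies, d_kl]))
model = sm.GLM(y_kl, X,
               family=sm.families.Gamma(link=sm.families.links.Log()))
result = model.fit()
# The coefficient on d_kl tests whether latent-space distance
# predicts prediction performance.
print(result.summary())
```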
Prediction performances as functions of latent space distance in the MDP task.
This cross-individual analysis shows the result of using a task solver generated from one participant to predict the behavior of another participant. The horizontal axis is the Euclidean distance between the latent representation of the two participants. The vertical axis shows the negative log-likelihood (left) and rate for behavior matched (right). Each dot represents one participant pair. Performance degrades as the distance between individuals increases, with the solid line showing the GLM fit. (A) 3-step to 2-step transfer. (B) 2-step to 3-step transfer.
On-policy simulations generate human-like behavior
To assess whether our model could generate realistic behavior, we conducted on-policy simulations. Task solvers specialized to each individual via EIDT performed the MDP task in the same environments as the human participants. We compared model behavior to human behavior on two metrics: the total reward per block and the rate at which the highly rewarding action was selected in the final step.
The model-generated behaviors closely mirrored human behaviors (Figure 5). We found significant correlations between humans and their corresponding models in both total reward (3→2: R=0.667, p<0.001; 2→3: R=0.593, p<0.001) and the rate of selecting the highly rewarding action (3→2: R=0.889, p<0.001; 2→3: R=0.835, p<0.001). This demonstrates that the EIDT framework captures individual tendencies that generalize to active, sequential behavior generation.
Comparison of on-policy behavior between humans and EIDT-generated task solvers.
Each dot represents the performance of a single human participant (horizontal axis) versus their corresponding model (vertical axis) for one block. Plots show the total reward (left) and the rate of selecting the highly rewarding action (right). (A) 3-step to 2-step transfer. (B) 2-step to 3-step transfer.
Individual latent representations reflect cognitive parameters
To better interpret the latent space, we applied our EIDT model (trained only on human data) to simulated data from 1000 Q-learning agents. The agents had known learning rates (qlr) and inverse temperatures (qit) sampled from distributions matched to human fits (Appendix 1—figure 1, Appendix 1). A cross-individual analysis on these agents confirmed that latent space distance predicted performance, mirroring the results from human data (Appendix 1—figure 5, Appendix 1).
The results revealed a systematic mapping between the cognitive parameters and the coordinates of the individual latent representation (Figure 6 and Appendix 1—figure 4, Appendix 1). A GLM analysis (Appendix 1—table 1, Appendix 1) showed that both qlr and qit (and their interaction) were significant predictors of the latent dimensions (z1 and z2). This indicates that our data-driven representation captures core computational properties defined in classic reinforcement learning theory.
Mapping of Q-learning parameters to the individual latent space for the 3-step MDP task.
Each plot shows one dimension of the latent representation (z1 (left) or z2 (right)) as a function of either the learning rate (qlr, A) or the inverse temperature (qit, B) of simulated Q-learning agents. Black dots represent the latent representation produced by the encoder from the agent’s behavior. Blue dots show the fit from a GLM.
Handwritten digit recognition (MNIST) task
We then sought to replicate our findings in a different domain: perceptual decision-making. We used data from Rafiei et al., 2024, in which 60 participants identified noisy images of digits under four conditions varying in difficulty and speed-accuracy focus (EA: easy, accuracy focus; ES: easy, speed focus; DA: difficult, accuracy focus; DS: difficult, speed focus). Analyses were again conducted using leave-one-participant-out cross-validation.
Task solver outperforms RTNet
First, in Within-Condition Prediction, our base task solver demonstrated task performance (the rate of correct responses, indicating how accurately a human participant or model identified the stimulus digit) comparable to that of human participants and the established RTNet model (Rafiei et al., 2024; Figure 7). A two-way (model: human/RTNet/task solver; task condition: EA/ES/DA/DS) RM ANOVA showed no significant effect of model type (F(2,118)=1.546, p=0.219, ηG²=0.008), while task condition had a significant effect (F(3,177)=866.322, p<0.001, ηG²=0.684). This confirms that the task solver has task-solving ability similar to that of humans and RTNet.
Task performance (rate of correct responses) in Within-Condition Prediction for the MNIST tasks.
Box plots indicate the median and interquartile range. Whiskers extend to the minimum and maximum values. Performance is compared across human participants, the RTNet model, and our task solver for the four experimental conditions (EA, ES, DA, and DS). All three show similar performance patterns.
However, the task solver significantly outperformed RTNet in predicting participants’ trial-by-trial choices (Figure 8). A two-way RM ANOVA revealed significant effects on both negative log-likelihood (model: F(1,59)=1312.328, p<0.001, ηG²=0.731; task condition: F(3,177)=460.535, p<0.001, ηG²=0.682; interaction: F(3,177)=24.476, p<0.001, ηG²=0.026) and the rate for behavior matched (model: F(1,59)=43.544, p<0.001, ηG²=0.005; task condition: F(3,177)=455.728, p<0.001, ηG²=0.701; interaction: F(3,177)=11.052, p<0.001, ηG²=0.002). This confirms the task solver’s suitability for modeling individual behavior in this task.
Comparison of prediction performance in Within-Condition Prediction for the MNIST task.
The plots show the negative log-likelihood (left) and the rate for behavior matched (right) for the RTNet model and our task solver. Each connected pair of dots represents a single participant’s data. Box plots indicate the median and interquartile range. Whiskers extend to the minimum and maximum values. The task solver achieves significantly better prediction accuracy.
EIDT accurately transfers individuality
Next, in Cross-Condition Transfer, we tested individuality transfer across all 12 pairs of experimental conditions. The full EIDT framework was compared against a baseline: a task solver (source) model trained directly on a test participant’s source condition data.
The EIDT framework consistently and significantly outperformed this baseline across all transfer sets (Figure 9). A two-way (model: task solver/EIDT; transfer direction: 12 sets, see horizontal axis) RM ANOVA confirmed a significant effect of the model on negative log-likelihood (model: F(3,177)=2440.373, p<0.001, ηG²=0.800; transfer direction: F(11,649)=347.850, p<0.001, ηG²=0.616; interaction: F(33,1947)=336.968, p<0.001, ηG²=0.573) and the rate for behavior matched (model: F(3,177)=2318.456, p<0.001, ηG²=0.798; transfer direction: F(11,649)=394.753, p<0.001, ηG²=0.591; interaction: F(33,1947)=355.577, p<0.001, ηG²=0.628). The model also reproduced idiosyncratic error patterns of individual participants, such as Participant #23’s lower accuracy for digit 1 and Participant #56’s difficulty with digits 6 and 7 (Figure 10).
Individuality transfer performance in Cross-Condition Transfer for the MNIST task.
The plots compare the EIDT framework against the task solver (source) baseline across all 12 transfer directions on negative log-likelihood (top) and rate for behavior matched (bottom). Each connected pair of dots represents a single participant’s data. Box plots indicate the median and interquartile range. Whiskers extend to the minimum and maximum values. EIDT consistently demonstrates superior prediction accuracy.
EIDT captures individual-specific error patterns in the MNIST task.
The plots show the percentage of correct responses for each digit for four representative participants (blue bars) and their corresponding EIDT-generated models (gray bars). Data shown is for the ES target condition, with transfer from EA.
Latent space reflects behavioral tendencies
Similar to the MDP task, a cross-individual analysis showed that the distance in the latent space was a significant predictor of prediction performance for all transfer directions (Figure 11; see Appendix 1—figures 8 and 9 and Appendix 1—table 2, Appendix 1, for full results). This confirms that, in the perceptual domain as well, the individual latent representation captures meaningful behavioral differences that are critical for accurate prediction.
Prediction performance as a function of latent space distance in the MNIST task (transfer direction EA→DA).
This cross-individual analysis shows the result of using a task solver generated from one participant to predict the behavior of another participant. The horizontal axis is the Euclidean distance between the latent representation of the two participants. The vertical axis shows the negative log-likelihood (left) and rate for behavior matched (right). Each dot represents one participant pair. Performance degrades as the distance between individuals increases, with the solid line showing the GLM fit.
Discussion
We proposed an EIDT framework for modeling the unique decision-making process of each individual. This framework enables the transfer of an individual latent representation from a (source) task condition to a different (target) task condition, allowing a task solver to predict behaviors in the target task condition. Several neural network techniques, such as autoencoders (Rumelhart and McClelland, 1987; Tolstikhin et al., 2017), hypernetworks (Ha et al., 2016), and learning-to-learn (Wang et al., 2017; Song et al., 2017), facilitate this transfer. Our experiments, conducted on both value-guided sequential and perceptual decision-making tasks, demonstrated the potential of the proposed framework in individuality transfer across task conditions.
EIDT framework extends prior work on individuality transfer
The core concept of using an encoder-decoder architecture to capture individuality builds on the work of Dezfouli et al., 2019a, who applied a similar model to a bandit task. We extended this idea in three key ways. First, we validated that the framework is effective for previously unseen individuals who were not included in model training. Although these individuals provided behavioral data in the source task condition to identify their individual latent representations, their data were not used for model training. Second, we established that this transfer is effective across different experimental conditions (e.g. changes in task rules or difficulty), not just across sessions of the same task. Third, while the original work focused on value-guided tasks, we validated the framework’s applicability to perceptual decision-making tasks, specifically the MNIST task. These findings establish that EIDT effectively captures individual differences across both task conditions and individuals.
Interpreting the individual latent representation remains challenging
Although we found that Q-learning parameters were reflected in the individual latent representation, the interpretation of this representation remains an open question. Since interpretation often requires task-condition-specific considerations (Eckstein et al., 2022), it falls outside the primary scope of this study, whose aim is to develop a general framework for individuality transfer. Previous research (Miller et al., 2023; Ger et al., 2024a) has explored associating neural network parameters with cognitive or functional meanings. Approaches such as disentangling techniques (Burgess et al., 2018) and cognitive model integration (Ger et al., 2024b; Tuzsus et al., 2024; Song et al., 2021; Eckstein et al., 2023) could aid in better understanding the cognitive and functional significance of the individual latent representation.
Regarding the individual latent representation, adding disentanglement and separation losses (Dezfouli et al., 2019a) during model training could enhance interpretability. However, we used only the reproduction loss, as defined in Equation 5, because interpretable parameters in cognitive models (e.g. Daw et al., 2011) are not necessarily independent; for example, an individual with a high learning rate may also have a high inverse temperature (Lin et al., 2023), allowing these two parameters to be represented by a single latent variable.
Why can the encoder extract individuality for unseen individuals?
Our experiments, which divided participants into training and test participant pools, demonstrated that the framework successfully extracts individuality for completely new individuals. This generalization likely relies on two facts: individuals with similar behavioral patterns yield similar individual latent representations, and the training participant pool contains individuals similar to the new participants (Yechiam et al., 2005). This hypothesis suggests that individuals can be clustered based on behavioral patterns. Behavioral clustering has been widely discussed in relation to psychiatric conditions, medication effects, and gender-based differences (e.g. Pedersen et al., 2017; van den Bos et al., 2013; Sevy et al., 2007). Our results could contribute to a deeper discussion of behavioral characteristics by clustering not only these groups but also healthy controls.
Which processes contribute to individuality?
In the MNIST task, we assumed that individuality emerged primarily from the decision-making process (implemented by an RNN; Spoerer et al., 2020; Cheng et al., 2024), rather than from the visual processing system (implemented by a CNN; Yamins and DiCarlo, 2016). The CNN was pretrained, and the decoder did not tune its weights. Our results do not rule out the possibility that the visual system also exhibits individuality (Koivisto et al., 2011; Tang et al., 2018); however, they imply that individual differences in perceptual decision-making can be explained primarily by variations in the decision-making system (Ratcliff and McKoon, 2008; Vickers, 1970; Yechiam et al., 2005; Kar et al., 2019). This assumption provides valuable insights for research on human perception.
Limitations
One limitation is that the source and target behaviors were performed under different conditions of the same task. Thus, our findings do not fully evaluate the generalizability of individuality transfer across diverse task domains. However, our framework has the potential to be applied to diverse tasks, since it connects the source and target tasks via the individual latent representation and accepts completely different tasks for the source and target. A key to realizing this transfer might be ensuring that the cognitive functions, such as memory, required for solving the source and target tasks are (at least partially) shared; the latent representation is expected to capture individual features of these functions. Conversely, if the source and target tasks require completely different functions, transfer by EIDT would not work.
The effectiveness of individuality transfer may be influenced by dataset volume. As discussed earlier, prediction performance may depend on whether similar individuals exist in the training participant pool. In our study, 100 participants were sufficient for effective transfer. However, tasks involving greater behavioral diversity may require a substantially larger dataset.
As discussed earlier, the interpretability of the individual latent representation requires further investigation. Furthermore, the optimal dimensionality of the individual latent representation remains unclear. This likely depends on the complexity of tasks involved—specifically, the number of factors needed to represent the diversity of behavior observed in those tasks. While these factors have been explored in cognitive modeling research (e.g., Katahira, 2015; Eckstein et al., 2022), a clear understanding at the individual level is still lacking. Integrating cognitive modeling with data-driven neural network approaches (Dezfouli et al., 2019a; Ger et al., 2024b) could help identify key factors underlying individual differences in decision-making.
Future directions
To further generalize our framework, a large-scale dataset is necessary, as discussed in the limitations. This dataset should include a large number of participants to ensure prediction performance for diverse individuals (Peterson et al., 2021). All participants should perform the same set of tasks, which should include a variety of tasks (Yang et al., 2019). Building upon our framework, where the encoder currently accepts action sequences from only a single task, a more generalizable encoder should be able to process behavioral data from multiple tasks to generate a more robust individual latent representation. To enhance the encoder, a multi-head neural network architecture (Canizo et al., 2019) could be utilized. An individual latent representation would enable transfer to a wider variety of tasks and allow accurate and detailed parameterization of individuals using data from only a single task.
Robust and generalizable parameterization of individuality enables computational modeling at the individual level. This approach, in turn, makes it possible to replicate individuals’ cognitive and functional characteristics in silico (Shengli, 2021). We anticipate that it offers a promising pathway toward a new frontier: artificial intelligence endowed with individuality.
Methods
General framework for individuality transfer across task conditions
We formulate the problem of individuality transfer, which involves extracting an individual latent representation from a source task condition and predicting behavior in a target task condition while preserving individuality. We consider two task conditions, A and B, which are different but related. For example, condition A might be a 2-step MDP, while condition B is a 3-step MDP.
The individuality transfer across task conditions is defined as follows. An individual K performs a problem under condition A, with their behavior recorded as $A_K$. Our objective is to predict $B_K$, which represents K’s behavior when performing a task under condition B. To achieve this, we extract an individual latent representation z from $A_K$, capturing the individual’s behavioral characteristics. This representation z is then used to construct a task solver, enabling it to mimic K’s behavior in condition B. Since condition A provides data for estimating the individual latent representation and condition B is the target of behavior prediction, we refer to them as the source task condition and target task condition, respectively.
Our proposed framework for the individuality transfer consists of three modules:
Task solver predicts behavior in the target condition B.
Encoder extracts the individual latent representation from the source condition A.
Decoder generates the weights of the task solver based on the individual latent representation.
These modules are illustrated in Figure 1. We refer to this framework as EIDT, an acronym for encoder, individual latent representation, decoder, and task solver.
Data representation
For training, we assume access to behavioral data from a participant pool P (with K ∉ P), where each participant has performed both conditions A and B. These datasets are represented as $A = \{A_n\}_{n \in P}$ and $B = \{B_n\}_{n \in P}$.
For each individual n, the set $A_n$ consists of one or more pairs, each containing a problem instance ϕ (stimuli, task settings, or environment in condition A) and the corresponding action sequence α (recorded behavioral responses). For example, in an MDP task, ϕ represents the Markov process (state-action-reward transitions) and α consists of choices over multiple trials. In a simple object recognition task, ϕ is a visual stimulus and α is the participant’s response to the stimulus. Similarly, $B_n$ consists of problem instances ψ and action sequences β.
Task solver
The task solver predicts the action sequence for condition B as

(1) $\hat{\beta} = \mathrm{TS}(\psi;\, \Theta_{\mathrm{TS}})$,

where ψ is a specific problem in condition B and $\Theta_{\mathrm{TS}}$ represents the solver’s weights. The task solver architecture is tailored to condition B. For example, in an MDP task, the task solver outputs a sequence of actions in response to ψ. In a simple object recognition task, it produces an action based on a visual stimulus ψ.
Encoder
The encoder processes an action sequence α and generates an individual latent representation $z \in \mathbb{R}^M$ as

(2) $z = \mathrm{ENC}(\alpha, \phi;\, \Theta_{\mathrm{ENC}})$,

where ϕ is a problem in condition A, $\Theta_{\mathrm{ENC}}$ represents the encoder’s weights, and M is the dimensionality of the individual latent representation. The encoder architecture is task-condition-specific and designed for condition A.
Decoder
The decoder receives the individual latent representation z and generates the task solver’s weights as

(3) $\Theta_{\mathrm{TS}} = \mathrm{DEC}(z;\, \Theta_{\mathrm{DEC}})$,

where $\Theta_{\mathrm{DEC}}$ represents the decoder’s weights. Since the decoder determines the task solver’s weights, it functions as a hypernetwork (Ha et al., 2016; Karaletsos et al., 2018).
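A minimal sketch of such a hypernetwork decoder, assuming a simple two-layer fully-connected architecture (the hidden size and layer count are illustrative assumptions, not the reported architecture), is:

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Hypernetwork sketch: maps an individual latent representation z
    to a flat vector holding all of the task solver's weights."""

    def __init__(self, latent_dim, n_solver_params, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_solver_params),
        )

    def forward(self, z):
        # The output is reshaped downstream into the recurrent and
        # readout matrices of the task solver (Theta_TS).
        return self.net(z)
```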
Training objective
Although conditions A and B differ, an individual’s decision-making system remains consistent across task conditions. We model this using the individual latent representation z, linking it to the task solver via the encoder and decoder. For training, we use a behavioral dataset {An,Bn}n∈P from an individual pool P.
Let α be an action sequence representing individual n’s behavior in the source task condition, that is, $(\alpha, \phi) \in A_n$, $n \in P$. The individual latent representation is derived by $z = \mathrm{ENC}(\alpha, \phi;\, \Theta_{\mathrm{ENC}})$. The weights of the task solver are then given by $\Theta_{\mathrm{TS}} = \mathrm{DEC}(z;\, \Theta_{\mathrm{DEC}})$. Subsequently, the task solver, with the given weights, predicts an action sequence for condition B as $\hat{\beta} = \mathrm{TS}(\psi;\, \Theta_{\mathrm{TS}})$, where $(\beta, \psi) \in B_n$. We then measure the prediction error between $\hat{\beta}$ and β as:

(4) $L_p(\alpha, \phi, \beta, \psi, \Theta_{\mathrm{ENC}}, \Theta_{\mathrm{DEC}}) = O(\beta, \hat{\beta})$,

where β is the action sequence in $B_n$ recorded along with the problem ψ, and $O(\cdot, \cdot)$ is a suitable loss function (e.g. a likelihood-based loss for probabilistic outputs). Using the datasets containing the behavior of the individual pool P, the weights of the encoder and decoder, $\Theta_{\mathrm{ENC}}$ and $\Theta_{\mathrm{DEC}}$, are optimized by minimizing the total loss:

(5) $\mathcal{L}(\Theta_{\mathrm{ENC}}, \Theta_{\mathrm{DEC}}) = \dfrac{1}{|P|} \sum_{n \in P} \dfrac{1}{|A_n|} \sum_{(\alpha, \phi) \in A_n} \dfrac{1}{|B_n|} \sum_{(\beta, \psi) \in B_n} L_p(\alpha, \phi, \beta, \psi, \Theta_{\mathrm{ENC}}, \Theta_{\mathrm{DEC}})$.
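A training step minimizing Equation 5 could be sketched as follows, assuming PyTorch modules for the encoder and decoder and a per-trial negative log-likelihood as the loss O; all names are illustrative:

```python
import torch

def training_step(encoder, decoder, task_solver_fn, batch, optimizer):
    """One optimization step on the loss of Equation 5 (sketch).

    batch: list of (A_n, B_n) pairs, one per participant, where A_n and
    B_n are lists of (actions, problem) tuples from the source and
    target conditions, respectively.
    """
    loss = 0.0
    for A_n, B_n in batch:
        for alpha, phi in A_n:            # every source sequence...
            z = encoder(alpha, phi)       # individual latent representation
            theta_ts = decoder(z)         # task solver weights (Eq. 3)
            for beta, psi in B_n:         # ...paired with every target one
                # probs: (L, K) action probabilities from the task solver;
                # beta: (L,) long tensor of recorded target actions.
                probs = task_solver_fn(psi, theta_ts)
                nll = -torch.log(
                    probs.gather(1, beta.unsqueeze(1))).mean()
                loss = loss + nll / (len(A_n) * len(B_n))
    loss = loss / len(batch)              # the 1/|P| factor of Eq. 5
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```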
This section provides a general formulation of individuality transfer across two task conditions. For specific details on task architectures and loss functions, see Experiment on MDP task and Experiment on MNIST task.
Experiment on MDP task
We validated our individuality transfer framework using two different decision-making tasks: the MDP task and the MNIST task. This section focuses on the MDP task, a dynamic multi-step decision-making task.
Task
At the beginning of each episode, an initial state cue is presented to the participant. For human participants, the state cue is represented by animal images (Figure 12). For the cognitive model (Q-learning agent) and the neural network-based model, the state cue is represented numerically (e.g. (2, 1) for the first task state in the second step). The participant makes a binary decision (denoted as action C1 or C2) at each step. In the human experiment, these actions correspond to pressing the left or right cursor key. With a certain probability (either 0.8/0.2 or 0.6/0.4), known as the state-action transition probability, the participant transitions to one of two subsequent task states. This process repeats twice for the 2-step MDP and three times for the 3-step MDP. After the final step, the participant receives an outcome: either a reward (r=1) or no reward (r=0). For human participants, rewards were displayed as symbols, as shown in Figure 12. Each sequence from initial state-cue presentation to reward delivery constitutes an episode.
The 3-step MDP task.
(A) Tree diagram illustrating state-action transitions. (B) Flow of a single episode in the behavioral experiment for human participants.
The state-action transition probability T(s, a, s′) from a task state s to a subsequent state s′ given an action a varies gradually across episodes. With probability $p_{\mathrm{trans}}$, one of the transition probabilities switches to a new pair chosen from {(0.8, 0.2), (0.2, 0.8), (0.6, 0.4), (0.4, 0.6)}. Consequently, participants must adjust their decision-making strategy in response to these shifts in transition probabilities to maintain reward maximization.
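As an illustration, the drifting transition structure could be simulated as below; exactly when the switch is evaluated (here, once per call, e.g. per episode) and whether the new pair must differ from the old one are assumptions not specified above:

```python
import random

TRANSITION_SETS = [(0.8, 0.2), (0.2, 0.8), (0.6, 0.4), (0.4, 0.6)]

def maybe_switch_transitions(transitions, p_trans):
    """With probability p_trans, one state-action transition switches
    to a new probability pair (sketch).

    transitions: dict mapping (state, action) -> (p_next1, p_next2).
    """
    if random.random() < p_trans:
        key = random.choice(list(transitions))
        transitions[key] = random.choice(TRANSITION_SETS)
    return transitions
```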
Behavioral data collection
We recruited 123 participants via Prolific. All participants provided their informed consent online. This study was approved by the Committee for Human Research at the Graduate School of Engineering, The University of Osaka (Approval number: 5-4-1), and complied with the Declaration of Helsinki. Participants received a base compensation of £4 for completing the entire experiment. A performance-based bonus (£0 to £2, average: £1) was awarded based on rewards earned in the MDP task.
Each participant completed 3 sequences for each step condition (2-step and 3-step MDP tasks), with each sequence comprising 50 episodes. The order of the 2-step and 3-step MDP tasks was randomized across sequences. State-cue assignment (animal images) was randomly determined for each sequence. Participants took a mandatory break (≥1 min) between sequences.
To ensure data quality, we applied exclusion criteria based on average reward, action bias, and response time. Thresholds for these metrics were systematically determined using the interquartile range method on statistics from the initial dataset. Participants were removed from the analysis entirely if their data from any single block fell outside these established ranges. This procedure led to the exclusion of one participant for low average reward (below 0.387 for the 2-step MDP and 0.382 for the 3-step MDP), 23 participants for excessive action bias (outside the 26.3–73.3% range), and 18 for outlier response times (outside the 0.260–1.983 s range). In total, 42 participants (approximately 34%) were excluded, resulting in a final sample of 81 participants for analysis.
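For illustration, the interquartile-range thresholding could be implemented as follows (the multiplier k=1.5 and the variable block_mean_rts are assumptions made for the sketch, not reported values):

```python
import numpy as np

def iqr_bounds(values, k=1.5):
    """Interquartile-range method for outlier thresholds (sketch);
    the multiplier k=1.5 is an assumption, not stated in the text."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

# Example: flag blocks whose mean response time falls outside the
# bounds computed on the initial dataset.
lo, hi = iqr_bounds(block_mean_rts)
excluded = (block_mean_rts < lo) | (block_mean_rts > hi)
```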
Cognitive model
To model decision-making in the MDP task, we employed a Q-learning agent (Sutton and Barto, 1998). At each step t, the agent was presented with the current task state $s_t$ and selected an action $a_t$. The agent maintained Q-values, denoted as $Q(s, a)$, for all state-action pairs, where s was a state in the set of all possible task states S and a was an action in the set $C_s$ of actions available in that state. The probability of selecting action a was determined by a softmax policy:

(6) $\pi(a) = \dfrac{\exp(q_{\mathrm{it}} Q(s_t, a))}{\sum_{a' \in C_{s_t}} \exp(q_{\mathrm{it}} Q(s_t, a'))}$,

where $q_{\mathrm{it}} > 0$ was a parameter called the inverse temperature or reward sensitivity, controlling the balance between exploration and exploitation.
After selecting action $a_t$, the agent received an outcome $r_t \in \{0, 1\}$ and transitioned to a new state $s_{t+1}$. The Q-value for the selected action was updated by

(7) $Q(s_t, a_t) \leftarrow (1 - q_{\mathrm{lr}})\, Q(s_t, a_t) + q_{\mathrm{lr}} \left( r_t + q_{\mathrm{dr}} \max_{a \in C_{s_{t+1}}} Q(s_{t+1}, a) \right)$,

where $q_{\mathrm{lr}} \in (0, 1)$ was the learning rate, determining how much newly acquired information replaced existing knowledge, and $q_{\mathrm{dr}} \in (0, 1)$ was the discount rate, governing the extent to which future rewards influenced current decisions. The Q-values were initialized to $q_{\mathrm{init}}$ before an agent started the first episode.
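The following minimal Python sketch implements Equations 6 and 7; the class layout is illustrative, and terminal steps are handled by treating the future value as zero, which is an assumption:

```python
import numpy as np

class QLearningAgent:
    """Q-learning agent with a softmax policy (Equations 6 and 7)."""

    def __init__(self, states, actions, q_lr, q_it, q_dr, q_init):
        # actions[s]: list of choices available in state s.
        self.q = {(s, a): q_init for s in states for a in actions[s]}
        self.actions = actions
        self.q_lr, self.q_it, self.q_dr = q_lr, q_it, q_dr

    def policy(self, s):
        """Softmax over Q-values with inverse temperature q_it (Eq. 6)."""
        qs = np.array([self.q[(s, a)] for a in self.actions[s]])
        e = np.exp(self.q_it * (qs - qs.max()))  # subtract max for stability
        return e / e.sum()

    def act(self, s):
        p = self.policy(s)
        return self.actions[s][np.random.choice(len(p), p=p)]

    def update(self, s, a, r, s_next=None):
        """Q-value update of Equation 7; at the final step of an episode
        (s_next is None) only the reward enters the target."""
        future = 0.0 if s_next is None else max(
            self.q[(s_next, a2)] for a2 in self.actions[s_next])
        target = r + self.q_dr * future
        self.q[(s, a)] = (1 - self.q_lr) * self.q[(s, a)] + self.q_lr * target
```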
EIDT model
This section describes the specific models used for individuality transfer in the MDP task.
Data representation
Since MDP tasks involve sequential decision-making, each action sequence consists of multiple actions within a single session. In our experiment, each participant completed L trials per session, with L=100 for the 2-step MDP and L=150 for the 3-step MDP. The action sequence is represented as $[(s_1, a_1, r_1), \ldots, (s_L, a_L, r_L)]$, where $s_t$ denotes the task state at trial t, $a_t \in C$ represents the action selected from the set $C \equiv \{C_k\}_{k=1}^{K}$ (with K=2 in our task), and $r_t \in \{0, 1\}$ indicates whether a reward was received. In the M-step MDP described in Figure 12, each task state is represented as $(m, c_m)$, where m denotes the current step within the episode ($m \in \{1, \ldots, M\}$) and $c_m$ corresponds to the cue presented to the participant. The action sequence, denoted as α or β, consists of a sequence of selected actions $(a_1, \ldots, a_L)$, while a problem, denoted as ϕ or ψ, is represented as $((s_1, \ldots, s_L), (r_1, \ldots, r_L))$.
Task solver
Before describing the encoder and decoder, we define the architecture of the task solver, which generates actions for the M-step MDP task. The task solver is implemented using a gated recurrent unit (GRU) (Cho et al., 2014) with Q cells, where Q=4 for the 2-step task and Q=8 for the 3-step task. At time-step t, the GRU takes as input the previous hidden state $h_{t-1} \in \mathbb{R}^Q$, the previous task state $s_{t-1}$, the previous action $a_{t-1}$, the previous reward $r_{t-1}$, and the current task state $s_t$. It then updates the hidden state as

(8) $h_t = \mathrm{GRU}(s_{t-1}, a_{t-1}, r_{t-1}, s_t, h_{t-1};\, \Phi)$,

where Φ represents the GRU’s weights. The updated hidden state is then used to predict the probability of selecting each action through a fully-connected feed-forward layer:

(9) $v_t = W h_t$,

where $v_t$ represents the logit scores for each action (unnormalized log-probabilities), and $W \in \mathbb{R}^{K \times Q}$ is the weight matrix. The probabilities of each action are computed using a softmax layer:

(10) $\pi(a_t = C_k) = \dfrac{e^{[v_t]_k}}{\sum_{k'=1}^{K} e^{[v_t]_{k'}}}$,

where $\pi(a_t = C_k)$ represents the probability of selecting action $C_k$ at time t, and $[v_t]_i$ denotes the i-th element of $v_t$.
For input encoding, we used a 1-of-K scheme. The step of the MDP task is encoded as [1, 0, 0] for step 1, [0, 1, 0] for step 2, and [0, 0, 1] for step 3. Each task state sm is represented as [1, 0] or [0, 1] to distinguish the two state cues at each step. The participant’s action is encoded as C1:[1,0] or C2:[0,1], while the reward is represented as 0: [1, 0] or 1: [0, 1]. These encodings are concatenated to form input sequences.
The task solver $\mathrm{TS}(\psi;\, \Theta_{\mathrm{TS}})$ generates a sequence of predicted action probabilities $\{(\pi(a_t = C_1), \ldots, \pi(a_t = C_K))\}_{t=1}^{L}$ using the GRU, the fully-connected layer W, and the softmax layer. The problem ψ defines the MDP environment, specifying state transitions and reward outcomes in response to the selected actions.
To evaluate prediction accuracy, the loss function $O(\beta, \hat{\beta})$, defined in Equation 4, compares the human-performed actions (β, ψ) with those predicted by the task solver, ($\hat{\beta}$, ψ). Notably, the problem ψ is not executed by the task solver; instead, the task solver predicts action probabilities conditioned on the same task-state and reward history as in the human behavioral data.
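For concreteness, the task solver’s forward pass (Equations 8-10) can be sketched in PyTorch as below; in the full EIDT model the GRU weights Φ and readout matrix W are produced by the decoder rather than learned directly, so this standalone version is only illustrative:

```python
import torch
import torch.nn as nn

class TaskSolver(nn.Module):
    """GRU task solver sketch for the M-step MDP (Equations 8-10).
    Input sizes follow the 1-of-K encodings described above."""

    def __init__(self, input_dim, n_cells, n_actions=2):
        super().__init__()
        # n_cells: Q = 4 (2-step) or Q = 8 (3-step).
        self.gru = nn.GRU(input_dim, n_cells, batch_first=True)
        # v_t = W h_t, with no bias term (Equation 9).
        self.readout = nn.Linear(n_cells, n_actions, bias=False)

    def forward(self, inputs):
        # inputs: (batch, L, input_dim) concatenation of the 1-of-K
        # codes for (s_{t-1}, a_{t-1}, r_{t-1}, s_t) at each trial t.
        h, _ = self.gru(inputs)               # Equation 8
        logits = self.readout(h)              # Equation 9
        return torch.softmax(logits, dim=-1)  # Equation 10
```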
Encoder and decoder
The encoder ENC(α,ϕ;ΘENC) extracts an individual latent representation z from a sequence of actions α corresponding to a given environment ϕ. The first module of the encoder is a GRU, similar to the task solver, with R=32 cells. The final hidden state hL∈RR serves as the basis for computing the individual la