Background & Summary
Humans systematically prioritize information related to the self over information related to others, a phenomenon observed consistently across perception, attention, memory, evaluation, and choice1,2,3,4. These self-biases manifest in multiple forms, reflecting the multifaceted nature of self-representation in human cognition. Despite their ubiquity in everyday life, our understanding of how different self-biases relate to one another remains limited. A major reason is historical: cognitive, social, and economic traditions have developed in parallel, using distinct paradigms, measures, and theories.
In cognitive psychology, self-prioritization yields faster and more accurate processing of self-related information5,6,7,8,9. Related findings include the self-referential memory advantage for self-encoded material10,11,12 and preferential detection of one’s own face and own name in cluttered scenes (the cocktail-party effect)13,14. In social psychology, self-positivity bias reflects individuals’ tendency to perceive themselves in an ‘unrealistically’ positive manner (e.g., better-than-average judgments)15,16,17, evident at both explicit18,19,20 and implicit21,22 levels. Valuation may link these traditions but also highlights mechanistic heterogeneity: the mere-ownership effect aligns with a positivity route, whereas the endowment effect reflects reference-dependent valuation driven by loss aversion23,24.
A central question is whether these biases are manifestations of the same underlying mechanism (analogous to a unified self-processing system that operates regardless of context), related but distinct phenomena, or entirely separate processes25,26. Intrinsic self-biases observed in individuals with amnesia or mild cognitive impairment suggest that some self-processing may operate independently of explicit self-knowledge27,28. However, recent studies addressing this question have typically examined correlations between only two or three self-bias paradigms, generating inconsistent results with correlations that are often small and lack robustness25,26,29,30. Equally unresolved is how individual differences, such as personality, self-esteem, and cultural factors, shape the magnitude or expression of self-bias.
The present dataset constitutes the most comprehensive collection of self-bias measures to date, providing trial-by-trial data for 134 participants across 10 widely used paradigms spanning cognitive, social, and economic decision-making domains: the self-reference effect, mere ownership effect, self-face visual search, self-name visual search, cocktail party effect, self-name attentional blink, shape-label matching, self-enhancement, implicit association test of self-esteem, and endowment effect. In addition, we collected measures of key individual difference variables, including the Big Five personality dimensions, self-esteem, and independent-interdependent self-construals, which previous research suggests may modulate self-bias effects20,31,32.
Taken together, this resource brings 10 established self-bias paradigms into a single, trial-level dataset collected within one cohort, enabling direct cross-paradigm comparisons across cognitive domains and cautious tests of shared versus domain-specific mechanisms25,26. The inclusion of individual difference measures—such as personality, self-esteem, and self-construals—allows examination of heterogeneity across individuals and potential cultural modulation. The trial-level structure is suitable for computational modeling (e.g., drift-diffusion modeling), making it possible to investigate where self-biases may influence processing (evidence accumulation, decision thresholds, response bias, or non-decision time)6,33,34. We release these data to support transparent reuse, method benchmarking, and progress toward integrative accounts of how self-related processing shapes cognition and behavior across contexts.
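As one illustration of the kind of trial-level modeling this structure affords, the sketch below applies the closed-form EZ-diffusion approximation to per-condition accuracy and correct-RT statistics to recover drift rate, boundary separation, and non-decision time. It is a minimal sketch, not the authors' analysis pipeline; the input values at the bottom are placeholders.

```python
import numpy as np

def ez_diffusion(prop_correct, rt_var, rt_mean, s=0.1):
    """Closed-form EZ-diffusion estimates (drift rate v, boundary separation a,
    non-decision time Ter) from accuracy and correct-RT statistics.
    RTs are in seconds; s is the conventional scaling parameter."""
    # Edge correction: the logit is undefined at 0, 0.5, and 1.
    pc = np.clip(prop_correct, 1e-4, 1 - 1e-4)
    if np.isclose(pc, 0.5):
        pc += 1e-4
    L = np.log(pc / (1 - pc))                       # logit of accuracy
    x = L * (L * pc**2 - L * pc + pc - 0.5) / rt_var
    v = np.sign(pc - 0.5) * s * x**0.25             # drift rate
    a = s**2 * L / v                                # boundary separation
    y = -v * a / s**2
    mdt = (a / (2 * v)) * (1 - np.exp(y)) / (1 + np.exp(y))  # mean decision time
    ter = rt_mean - mdt                             # non-decision time
    return v, a, ter

# e.g., per participant and condition (self vs. other) in a speeded task:
v, a, ter = ez_diffusion(prop_correct=0.92, rt_var=0.012, rt_mean=0.58)
```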
Methods
Participants
The present research was approved by the Ethics Committee of the Department of Psychological and Cognitive Sciences at Tsinghua University (No. 2022–29) and was conducted in accordance with the ethical standards laid down in the Declaration of Helsinki. A total of 134 Chinese undergraduate or graduate students (77 females; mean age = 21.99 ± 2.08 years; range 18–28 years) were recruited from the psychology subject pool at Tsinghua University. All of them reported being right-handed and having normal or corrected-to-normal vision without color blindness. Each participant signed an informed consent form for participation and data sharing before the start of the experiment. The entire experiment lasted approximately 4 hours, and each participant received 240 Chinese Yuan (CNY) for their time and participation.
Design and procedure
The experiment comprised 10 widely used experimental paradigms to investigate self-biases across cognitive domains (see Table 1 for an overview), along with an online questionnaire that included measures of Big Five personality, self-construals, individualism-collectivism, self-esteem, subjective well-being, self-concept clarity, the dark triad (Machiavellianism, narcissism, and psychopathy), and modesty. The experiment was divided into two sessions, completed on two separate days (two hours each). The order of the tasks was pseudorandomized for each participant. After signing the informed consent form, participants were asked by the experimenter to indicate the full name of their best friend in real life. They were instructed to input either the full name or the family name of this friend before the start of certain self-bias paradigms, according to the instructions presented on the screen. The sex of the best friend was not controlled, as participants selected this individual based on their own subjective criteria. The self-enhancement and endowment effects were assessed through an online questionnaire hosted on the WJX platform (www.wjx.cn). The remaining tasks were conducted using PsychoPy software (version 2022.2.4). Stimuli were presented on a 25-inch external monitor with a resolution of 1920 × 1080 pixels at a 60 Hz refresh rate. Specific descriptions of each self-bias paradigm and each self-reported scale are given below.
Self-bias paradigms
Self-reference effect (SRE)
In line with previous research11,25,35, we employed a trait-word evaluation paradigm to elicit the self-reference effect. The task followed a single-factor (Identity: self, friend, or familiar other) within-participants design. The “familiar other” used in this paradigm was Lu Xun, a highly influential modern Chinese writer, widely recognized as a foundational figure in 20th-century Chinese literature and thought. His works are extensively taught in Chinese schools, and his character, ideas, and values are well known to most university students in China. This choice follows prior studies that used Lu Xun as a representative figure for the “familiar other” condition in self-processing research9,36. It should be noted that we did not individually assess participants’ knowledge of Lu Xun’s character traits in this dataset. The materials comprised a list of 240 two-character trait adjectives, divided into four sub-lists: 40 items for each of the three conditions in the encoding phase, and 120 new items serving as distractors in the recognition phase. Items in the sub-lists were matched in valence and frequency according to the results of a pilot study. Notably, for each sub-list, half of the adjectives were positive and the other half were negative.
As shown in Fig. 1, the task comprised two phases: the encoding phase and the recognition phase. The instruction presented before the encoding phase was as follows: “In the upcoming task, you will be shown a series of adjective–name pairs. For each pair, please evaluate how well the adjective describes the named individual. You will have 4 seconds to make each judgment. Please try to respond within this time limit.”
Fig. 1
Trial procedure flowchart for the self-reference effect paradigm.
During the encoding phase, each trial began with a fixation cross presented against a gray background (RGB: 127, 127, 127) for 1000 ms. After that, a trait adjective was presented simultaneously with either the full name of the participant, the full name of the best friend, or “Lu Xun”. Participants were instructed to rate the extent to which the trait adjective was descriptive of the specified person on a 5-point scale (1 represented “not at all descriptive” and 5 represented “very descriptive”). Participants were required to complete the rating within 4000 ms; otherwise, the stimuli would disappear, and no answer would be recorded. The next trial commenced immediately after the rating was completed or the stimuli disappeared. The encoding phase consisted of 120 randomized trials, with rest periods provided after every 60 trials.
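To illustrate how an encoding trial of this kind maps onto PsychoPy, below is a minimal single-trial sketch. The window settings, placeholder stimuli, and rating keys ‘1’–‘5’ are assumptions for illustration only; this is not the authors’ actual experiment script.

```python
from psychopy import visual, core, event

# Gray background as described above (RGB 127, 127, 127)
win = visual.Window(size=(1920, 1080), color=(127, 127, 127),
                    colorSpace='rgb255', units='height', fullscr=False)
fixation = visual.TextStim(win, text='+', height=0.05)
# Placeholder trait adjective and identity cue (a CJK font may need to be set via `font`)
adjective = visual.TextStim(win, text='诚实', pos=(0, 0.1), height=0.06)
name = visual.TextStim(win, text='鲁迅', pos=(0, -0.1), height=0.06)

# 1000 ms fixation
fixation.draw()
win.flip()
core.wait(1.0)

# Adjective–name pair shown until a rating key is pressed or 4000 ms elapse
adjective.draw()
name.draw()
win.flip()
clock = core.Clock()
keys = event.waitKeys(maxWait=4.0, keyList=['1', '2', '3', '4', '5'],
                      timeStamped=clock)
rating, rt = (keys[0] if keys else (None, None))  # None recorded on timeout

win.close()
core.quit()
```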
Following the encoding phase, participants performed a task-unrelated mental calculation exercise for approximately 3 minutes. In the subsequent recognition phase, participants undertook an unexpected recognition test. All 240 trait adjectives were presented in a randomized sequence. The instruction presented before the recognition phase was as follows: “You will now see a series of adjectives. Your task is to judge whether each adjective is ‘old’ (i.e., previously presented during the encoding phase) or ‘new’ (i.e., not previously encountered). For each adjective judged as ‘old’, you will then be asked to indicate whether you ‘remember’ it (i.e., you can recollect specific contextual details from the encoding phase, such as associated thoughts, feelings, or visual impressions) or merely ‘know’ it (i.e., the item feels familiar, but you cannot recall any specific details)”37. Specifically, participants were required to indicate their responses to the aforementioned questions by pressing either the “Z” or “M” key on the keyboard. The key-response mapping was counterbalanced across participants. There was no time limit imposed for responding to either of those questions.
Mere ownership effect (MOE)
The task involved an encoding phase and a recognition phase, following a single-factor (Ownership: self-owned or experimenter-owned) within-participants design. Consistent with previous research29, participants were informed that both they and the experimenter had won a competition, resulting in each receiving a basket filled with various shopping items.
Materials comprised 120 photographic representations of everyday purchase items, obtained from the Bank of Standardized Stimuli37. These items were divided into three sets (A, B, and C), each containing 40 items. Items were paired across sets based on similar categories; for instance, sets A, B, and C each contained a different fruit, such as an apple, a strawberry, and a banana, respectively. Items across the three sets were equated for familiarity based on subjective ratings provided by Brodeur and colleagues38. Each set was randomly allocated to the “self” condition, the “experimenter” condition, or to serve as distractors during the recognition phase. The instruction presented before the encoding phase was as follows: “Both you and the experimenter have won a competition, and each of you has received a basket filled with various shopping items. The images of the items, along with their associated ownership cues, will be presented sequentially. Please assign each object to the appropriate basket based on the color of the ownership cue.”
During the encoding phase, two shopping baskets were displayed in the lower corners of the screen – one in blue and the other in green (see Fig. 2). Participants were informed that one basket belonged to themselves, and the other belonged to the experimenter. Each trial began with a 1000 ms fixation cross on a gray background (RGB: 127, 127, 127), followed by a centrally presented item photograph for 2000 ms. Subsequently, a blue or green border appeared around the item, indicating its assigned ownership. Participants were required to allocate the item to the corresponding basket by pressing a designated key, as quickly and accurately as possible (without time limit). Upon keypress, the next trial began immediately. The encoding phase included 80 randomized trials, with a rest break provided after 40 trials.
Fig. 2
Trial procedure flowchart for the mere ownership effect paradigm.
Following the encoding phase, participants carried out an unrelated mental calculation task for approximately 3 minutes. In the subsequent recognition phase, participants undertook an unexpected recognition test. All 120 items were presented in a randomized sequence. The instruction presented before the recognition phase was as follows: “You will now see a series of items. Your task is to judge whether each item is ‘old’ (i.e., previously presented during the encoding phase) or ‘new’ (i.e., not previously encountered). For each item judged as ‘old’, you will then be asked to indicate whether you ‘remember’ it (i.e., you can recollect specific contextual details from the encoding phase, such as associated thoughts, feelings, or visual impressions) or merely ‘know’ it (i.e., the item feels familiar, but you cannot recall any specific details).” Specifically, participants were required to indicate their responses to the aforementioned questions by pressing either the “Z” or “M” key on the keyboard. The key-response mapping was counterbalanced across participants. There was no time limit imposed for responding to either of those questions.
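For reuse of the trial-level recognition data from this paradigm (and, analogously, from the self-reference task above), one conventional summary is corrected recognition (hit rate minus false-alarm rate) per ownership condition, with the proportion of “remember” hits tallied separately. The sketch below assumes hypothetical column and file names; the variable names in the released files may differ.

```python
import pandas as pd

# Assumed columns: participant, condition ('self' / 'experimenter' / 'new'),
# is_old (1 = presented at encoding), resp_old (1 = judged 'old'),
# rk ('remember' / 'know', recorded only for items judged 'old').
df = pd.read_csv('moe_recognition_trials.csv')   # hypothetical file name

# False-alarm rate from new (distractor) items, shared across conditions
fa = df.loc[df.is_old == 0].groupby('participant')['resp_old'].mean()

# Hit rates per ownership condition
hits = (df.loc[df.is_old == 1]
          .groupby(['participant', 'condition'])['resp_old'].mean()
          .unstack('condition'))

corrected = hits.sub(fa, axis=0)                      # hits minus false alarms
ownership_bias = corrected['self'] - corrected['experimenter']

# Proportion of hits endorsed as 'remember' per condition
rem = (df.loc[(df.is_old == 1) & (df.resp_old == 1)]
         .assign(remember=lambda d: (d.rk == 'remember').astype(int))
         .groupby(['participant', 'condition'])['remember'].mean())
```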
Self-face visual search (FVS)
The task followed a 2 (Target identity: self or stranger) × 2 (Target presence: present or absent) within-participant design. Participants had their identification photo taken in the laboratory at the end of the first experimental day and completed this task on the second experimental day. Each photo was captured using a Canon M50 Mark II camera with a focal length of 45 mm. Apart from the participant’s own facial image (i.e., the self-face), a facial image of another participant of the same biological sex was randomly selected by the experimenter to serve as the stranger-face during the visual search task. All participants reported that they did not know the assigned stranger. Fifteen male and fifteen female facial images with a neutral expression were obtained from a previous database39 and served as distractors. The mean age of the participants in this database was comparable to that of our participants (21.70 ± 2.37 vs. 21.99 ± 2.08 years), t(162) = 0.68, p = 0.50. Using Adobe Photoshop 2023, we cropped all images to the same size (1600 × 2000 pixels) and stored them with 256 gray levels14.
At the beginning of the task, facial images of the participant and the assigned stranger were displayed on the screen. The instruction presented was as follows: “You will search for either the self-face or the stranger-face in two separate blocks. In each block, please press the designated key as quickly and accurately as possible to indicate whether the target face was presented.” Specifically, participants responded by pressing either the “Z” or “M” key on the keyboard to indicate the presence or absence of the target face. The key-response mapping was counterbalanced across participants.
Each trial began with a fixation cross presented against a gray background (RGB: 127, 127, 127) for 500 ms, followed by an array of six different facial images (each 2.38° × 3.18°) evenly positioned around the central point, visible for 3000 ms (see Fig. 3). The visual angle between the center of each image and the central point was approximately 5.3°. Distractor faces were randomly selected from a set of fifteen distractors of the same biological sex. Participants were required to press one of two corresponding keys to indicate whether the target image was presented in that trial, as quickly and accurately as possible. Upon a keypress, or 3000 ms after the onset of the image array, the subsequent trial started immediately. The entire task comprised 192 trials, evenly distributed across four experimental conditions. The trial order was randomized within each block. Participants had the opportunity to rest after every set of 32 trials.
Fig. 3
Trial procedure flowchart for the self-face visual search paradigm.
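As an example of how the trial-level search data might be summarized, the sketch below computes mean correct-response times on target-present trials and a self-advantage score (stranger minus self); the same logic applies to the self-name visual search task described below. Column and file names are assumptions, not the released variable names.

```python
import pandas as pd

# Assumed columns: participant, target_identity ('self' / 'stranger'),
# target_present (1 / 0), correct (1 / 0), rt (seconds; NaN on timeouts).
fvs = pd.read_csv('fvs_trials.csv')   # hypothetical file name

present_correct = fvs[(fvs.target_present == 1) & (fvs.correct == 1)]
mean_rt = (present_correct
           .groupby(['participant', 'target_identity'])['rt'].mean()
           .unstack('target_identity'))

# Positive values indicate faster detection of the own face than the stranger face
self_advantage = mean_rt['stranger'] - mean_rt['self']
```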
Self-name attentional blink (AB)
We investigated the self-name attentional blink using the rapid serial visual presentation paradigm26,40, which followed a 3 (T2 identity: self, friend, or stranger) × 2 (T2 presence: present or absent) × 4 (Lag: 1, 2, 5, or 8) × 2 (Task type: blink or control) within-participant design. The Chinese family names of the participants themselves, their best friends, and another randomly selected participant were used as the self-name, the friend-name, and the stranger-name, respectively (note that the family names for all participants and their best friends consisted of a single Chinese character, a common phenomenon among Chinese individuals). Before the experiment started, participants were asked to eliminate names from a list of twenty-four common Chinese first names if: (1) the name was the same as, or similar to, one of the three target names; or (2) someone close to them had that name. The remaining names on the list would then serve as distractors during the task.
The instruction presented was as follows: “You will see a rapid stream of single-character Chinese family names. One of them will be white, and all others will be black. In certain blocks, your task is to first report the white character and then judge whether a black target character appeared. In other blocks, you may ignore the white character and only respond to the black target character. Important: Each block has different instructions. Be sure to read the guidance shown at the beginning of each block carefully!” As visualized in Fig. 4A, each trial began with a central fixation cross presented for 1000 ms. This was followed by a rapid serial visual presentation (RSVP) sequence consisting of 15 first names (see Fig. 4B for an illustration), each appearing for 100 ms against a gray background (RGB: 127, 127, 127). Except for the first target (T1), which was displayed in white, all other names in the sequence were shown in black. The distractor names were randomly selected for each trial; one of them served as T1, which was always positioned third, fourth, or fifth in the sequence. The second target (T2), which was the self-name, the friend-name, or the stranger-name, was either omitted or appeared at one of four intervals (lags) following T1: Lag 1, Lag 2, Lag 5, or Lag 826.
Fig. 4
(A) Trial procedure flowchart for the self-name attentional blink paradigm. (B) A detailed illustration of the RSVP sequence, with examples for the second target (T2) presented at Lag 2. It is important to note that the first target (T1) may appear at either the third, fourth, or fifth position in the RSVP sequence.
The experiment included two types of tasks: the blink task and the control task. In the blink task, following the RSVP stream, participants were asked to sequentially answer two questions: (1) “What was the white character?” (to be typed as a response); and (2) “Was [T2 name] present or not present?” (responded by clicking one of two buttons on the screen). The presentation of stimuli in the control task was identical to that in the blink task. However, participants were only required to respond to the second question.
The entire task consisted of six blocks (i.e., self-blink, friend-blink, stranger-blink, self-control, friend-control, and stranger-control). In each block, T2 was presented at each lag 12 times and was not presented in another 48 trials, resulting in 96 trials presented in a randomized sequence. The order of the six blocks was also randomized. Participants were given the opportunity to rest after every set of 32 trials.
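A standard attentional-blink summary from such data is T2 report accuracy conditional on correct T1 report, computed per identity and lag in the blink blocks (with unconditional accuracy in the control blocks). A minimal sketch under assumed column and file names follows.

```python
import pandas as pd

# Assumed columns: participant, task ('blink' / 'control'), t2_identity,
# lag (1, 2, 5, 8 or NaN when T2 absent), t2_present (1 / 0),
# t1_correct (1 / 0; NaN in control blocks), t2_correct (1 / 0).
ab = pd.read_csv('ab_trials.csv')   # hypothetical file name

blink = ab[(ab.task == 'blink') & (ab.t2_present == 1) & (ab.t1_correct == 1)]
t2_given_t1 = (blink
               .groupby(['participant', 't2_identity', 'lag'])['t2_correct']
               .mean())

# Blink magnitude: accuracy at the long lag minus accuracy inside the blink window (Lag 2)
blink_size = t2_given_t1.xs(8, level='lag') - t2_given_t1.xs(2, level='lag')
```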
Self-name visual search (NVS)
The task followed a 3 (Target identity: self, friend, or stranger) × 2 (Target presence: present or absent) within-participant design. Participants were required to search for Chinese first names of themselves, their best friends, and another randomly selected participant in three separate blocks. It should be noted that for each participant, the same first name was used as the stranger’s name in this task and the self-name attentional blink task. Before the experiment started, participants were asked to eliminate names from a list of twenty-four common Chinese first names, using the same exclusion criteria as in the self-name attentional blink task. The remaining names on the list would then serve as distractors.
The instruction presented was as follows: “You will search for the self-name, the friend-name, or the stranger-name in three separate blocks. In each block, please press the designated key as quickly and accurately as possible to indicate whether the target name was presented.” Specifically, participants responded by pressing either the “Z” or “M” key on the keyboard to indicate the presence or absence of the target name. The key-response mapping was counterbalanced across participants.
Each trial began with a fixation cross presented against a gray background (RGB: 127, 127, 127) for 500 ms (see Fig. 5), followed by an array of six distinct first names (each 1.32° × 1.32°) evenly arranged around the central point, visible for 2000 ms. The visual angle between the center of each name and the central point was approximately 5.3°. The distractor names shown were randomly selected from the distractor list. Participants were required to press one of two corresponding keys to indicate whether the target name was presented in that trial, as quickly and accurately as possible. Following a keypress, or 2000 ms after the names appeared, the subsequent trial commenced immediately. The entire task comprised 288 trials, evenly distributed across six experimental conditions. The trial order was randomized within each block. Participants had the opportunity to rest after every set of 32 trials.
Fig. 5
Trial procedure flowchart for the self-name visual search paradigm.
Cocktail party effect (CPE)
The task followed a single-factor (Target identity: self or stranger) within-participants design. To create a setting similar to a cocktail party, we simultaneously played two recordings of Chinese first names through the left and right channels of headphones41. The Chinese first names of the participants themselves and another randomly selected participant were used as the self-name and the stranger-name, respectively. The recordings of these first names, as well as those of another twenty common Chinese first names, were acquired via an AI-based voice synthesis platform (https://voice.ncdsnc.com/). The recordings differed only in pronunciation, while other acoustic properties like volume, tone, and timbre were kept consistent. Using Adobe Audition 2023, we processed each recording into monaural source stimuli. These stimuli, comprising versions for both the left and right channels, were trimmed to a uniform length of 400 ms. Before the task started, participants were instructed to remove any names from the list of thirty common Chinese first names if the pronunciation was identical or similar to either of the two target names or to the first name of someone they knew well. The remaining names on the list would then serve as distractors during the task.
The instruction presented was as follows: “In the following task, you will simultaneously hear two different Chinese first names—one in your left ear and one in your right ear. If either the [self-name] or the [stranger-name] is presented, please respond as quickly and accurately as possible by indicating the ear in which the target name appeared (press the “Z” key for the left ear, and the “M” key for the right ear). If neither of the two target names is presented, no response is required for that trial. Please note that the two target names will never appear in the same trial.”
Each trial began with a central fixation cross presented against a gray background (RGB: 127, 127, 127) for 1000 ms (see Fig. 6). After that, recordings of two different first names were presented through the left and right channels of the headphones, respectively. Participants were required to press one of two corresponding keys to indicate the position (left or right) of the target name, as quickly and accurately as possible. Participants were informed that they did not need to press any key if no target name was presented and that each trial contained at most one target name. Upon a keypress, or 2000 ms after the onset of the recordings, the trial ended. Following an inter-trial interval with a blank screen for 1000 ms, the subsequent trial commenced immediately. The whole task was conducted in a single block consisting of 240 trials. The self-name and the stranger-name were each paired with a randomly selected distractor in 60 trials. In the remaining 120 trials, two randomly selected distractors were paired. The channel assignment was balanced so that each target name was presented through the left and right channels in 30 trials each. The trial order was randomized during the task. Participants had the opportunity to rest after every set of 60 trials.
Fig. 6
Trial procedure flowchart for the cocktail party effect paradigm.
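For this paradigm, a typical summary includes the localization hit rate and correct-response latency per target identity, plus the false-alarm rate on distractor-only trials. The sketch below assumes hypothetical column and file names.

```python
import pandas as pd

# Assumed columns: participant, target_identity ('self' / 'stranger' / 'none'),
# target_channel ('left' / 'right', NaN when no target), response ('left' /
# 'right', NaN when no key was pressed), rt (seconds; NaN when no key was pressed).
cpe = pd.read_csv('cpe_trials.csv')   # hypothetical file name

targets = cpe[cpe.target_identity != 'none'].copy()
targets['hit'] = (targets.response == targets.target_channel).astype(int)

# Localization accuracy and correct-response latency per identity
hit_rate = targets.groupby(['participant', 'target_identity'])['hit'].mean().unstack()
mean_rt = (targets[targets.hit == 1]
           .groupby(['participant', 'target_identity'])['rt'].mean().unstack())

# False-alarm rate: key presses on trials that contained no target name
fa_rate = (cpe[cpe.target_identity == 'none']
           .assign(fa=lambda d: d.response.notna().astype(int))
           .groupby('participant')['fa'].mean())
```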
Shape–label matching (SLM)
The task followed a 3 (Shape identity: self, friend, or familiar other) × 2 (Trial type: matching or nonmatching) within-participant design. The full names of the participant, their best friend, and “Lu Xun” (representing the familiar other), were used as labels corresponding to the self, friend, and familiar other, respectively9,25. The task consisted of two phases. In the learning phase, participants learned to associate three geometric shapes (circle, square, and triangle) with three named identities—specifically, the full names of the self, their best friend, and the familiar other.
The instruction presented was as follows: “You are now required to remember the following associations: the circle represents [self-name], the square represents [friend-name], and the triangle represents Lu Xun. In the upcoming task, each trial will present a shape–name pair on the screen. Based on the associations you just learned, please judge as quickly and accurately as possible whether the presented pair is a correct match or not.” Specifically, participants indicated whether each pair was matched or mismatched by pressing either the “Z” or “M” key on the keyboard. The key-response mapping was counterbalanced across participants. In addition, the associations between geometric shapes and identity labels (i.e., full names) were also counterbalanced across participants.
During the testing phase, participants were presented with shape–name pairings and tasked with determining if the pairing matched, based on previously learned rules, as quickly and accurately as possible. As shown in Fig. 7, each trial began with the presentation of a fixation cross against a gray background (RGB: 127, 127, 127) for 500 ms. This was followed by the display of a shape–name pairing for 100 ms. The shapes (2.4° × 2.4°) and names (about 4.4° × 2.4°) appeared consistently above and below the cross, respectively. The midpoint of both the shape and the name was positioned 3.5° from the fixation cross. Subsequently, a blank screen appeared for 1100 ms, during which participants had the opportunity to respond by pressing one of two designated keys to signify whether the shape–name pairing matched or not. After a keypress, or once 1100 ms had elapsed from the onset of the blank screen, feedback indicating whether the response was correct, incorrect, or too slow was displayed for 500 ms. Following the disappearance of this feedback, the next trial began immediately.
Fig. 7
Trial procedure flowchart for the shape–label matching paradigm.
Initially, the participants engaged in a practice block. Once they achieved six consecutive correct responses, they progressed to the formal experiment. The formal experiment consisted of 360 trials, equally divided among six experimental conditions. It should be noted that each non-matching pair combination was presented an equal number of times. For example, in the self-nonmatching condition, there were 30 trials each of “self-shape + friend-name” and “self-shape + familiar other-name”. The trial order was randomized during the task. Participants had the opportunity to rest after every set of 60 trials.
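Performance in this task is often summarized with a sensitivity measure per shape identity, treating matching trials as signal and nonmatching trials as noise. The sketch below computes d′ with a simple log-linear correction; column and file names are assumptions.

```python
import pandas as pd
from scipy.stats import norm

# Assumed columns: participant, shape_identity ('self' / 'friend' / 'other'),
# trial_type ('matching' / 'nonmatching'), resp_match (1 = responded "match").
slm = pd.read_csv('slm_trials.csv')   # hypothetical file name

def dprime(group):
    """d' per identity: 'match' responses on matching trials are hits,
    on nonmatching trials they are false alarms (log-linear correction)."""
    match = group[group.trial_type == 'matching']
    nonmatch = group[group.trial_type == 'nonmatching']
    hit = (match.resp_match.sum() + 0.5) / (len(match) + 1)
    fa = (nonmatch.resp_match.sum() + 0.5) / (len(nonmatch) + 1)
    return norm.ppf(hit) - norm.ppf(fa)

d = (slm.groupby(['participant', 'shape_identity'])
        .apply(dprime)
        .unstack('shape_identity'))
self_prioritization = d['self'] - d['other']
```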
Self-enhancement (SE)
The measurement of self-enhancement was determined by comparing self-assessments with established external benchmarks20,42. Participants estimated their ranking (as integers) relative to their peers at Tsinghua University on eight characteristics: intelligence, cooperation, appearance, morality, sociability, health, honesty, and generosity. Specifically, the instruction presented was: “Please estimate the approximate percentile rank of the following traits of yours within the Tsinghua University student population. (A lower number indicates a higher ranking.)” In this ranking system, a score of “0” indicated the top position, while “100” denoted the bottom position.
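Because lower numbers indicate higher self-ranking, one simple better-than-average index is the mean of (50 − estimate) across the eight traits, so that positive values reflect self-enhancement. A minimal sketch follows; the values are placeholders, and other scoring schemes are equally possible.

```python
import numpy as np

# Hypothetical percentile estimates for the eight traits (0 = top, 100 = bottom)
ranks = np.array([20, 35, 50, 15, 40, 30, 25, 45])

# Positive values = placing oneself above the median peer, i.e., self-enhancement
se_index = np.mean(50 - ranks)
```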
Implicit association test of self-esteem (IAT)
The Implicit Association Test was utilized to assess participants’ implicit attitudes towards themselves21. In this task, participants had to sort Chinese words according to their meanings. The experiment involved two sets of word lists. The first list contained 12 words related to different identities, with half representing the concept of “self” and the other half representing “others.” The second list comprised 6 trait adjectives with a positive valence (sincere, reliable, intelligent, friendly, kind-hearted, and generous), and 6 trait adjectives with a negative valence (phony, deceitful, rude, cold, mean, and lazy). These trait adjectives were selected based on likability ratings from a previous study43.
The instruction presented was as follows: “In this task, you will be asked to categorize words based on the label(s) presented in the upper-left and upper-right corners of the screen. For each word, if it belongs to the category indicated by the label(s) on the upper-left corner, press the ‘Z’ key. If it belongs to the category on the upper-right corner, press the ‘M’ key. Please respond as quickly and accurately as possible. The table below displays all the words that may appear in the task, along with the category to which each belongs. Please take a moment to familiarize yourself with the word-category pairings before beginning the experiment.”
The task followed a five-block IAT design, recognized as the standard in current IAT methodology44. As illustrated in Table 2, blocks 1, 2, and 4, each comprising 20 trials, served as practice sessions, though this was not disclosed to the participants. Implicit attitudes were assessed by comparing performance in blocks 3 and 5, each containing 40 trials in which identity and valence categories were combined. In each block, category labels were consistently displayed in the upper left and right corners. Each trial started with a fixation cross against a gray background (RGB: 127, 127, 127) for 500 ms (see Fig. 8), followed by a word (pertaining to either identity or valence) from the two aforementioned lists, displayed at the center of the screen. Participants were asked to sort it into the corresponding category by pressing the left or right key, as quickly and accurately as possible. The word disappeared as soon as a key was pressed, followed by feedback (correct or incorrect) presented for 200 ms. After an inter-trial interval with a blank screen for 250 ms, the subsequent trial commenced immediately.
Fig. 8
Trial procedure flowchart for the implicit association test of self-esteem.
Each word was presented an equal number of times in each block, in a randomized order. Between each pair of blocks, an instruction screen was presented, detailing the nature of the forthcoming task modification. Participants were able to proceed to the next block at their own pace by pressing the space bar once they felt ready. The block order and key assignments were counterbalanced across participants (see Table 2).
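Implicit self-esteem from this design is typically expressed as a D score: the difference between mean latencies in the two combined blocks divided by the pooled standard deviation of all latencies in both blocks, after discarding latencies above 10,000 ms44. The sketch below is a simplified variant of this scoring (it omits the error-latency penalty); column and file names are assumptions, and sign conventions depend on the counterbalancing recorded in the data.

```python
import pandas as pd

# Assumed columns: participant, block (1-5), compatible (1 = 'self' and positive
# words share a response key in this combined block), rt_ms, correct (1 / 0).
iat = pd.read_csv('iat_trials.csv')   # hypothetical file name

# Keep only the two combined blocks (3 and 5) and drop latencies above 10,000 ms
crit = iat[iat.block.isin([3, 5]) & (iat.rt_ms <= 10000)]

def d_score(group):
    """D = (incompatible mean - compatible mean) / pooled SD of both blocks."""
    compat = group.loc[group.compatible == 1, 'rt_ms']
    incompat = group.loc[group.compatible == 0, 'rt_ms']
    pooled_sd = pd.concat([compat, incompat]).std()
    return (incompat.mean() - compat.mean()) / pooled_sd

d_iat = crit.groupby('participant').apply(d_score)
# Positive values indicate faster responding when 'self' shares a key with
# positive words, i.e., higher implicit self-esteem.
```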
Endowment effect (EE)
We utilized the valuation paradigm to explore the endowment effect, wherein the willingness to pay (WTP) and the willingness to accept (WTA) were compared45. Consistent with a previous study29, we adapted this paradigm to suit a within-participants design. Our experimental materials comprised images of easily substitutable market goods, categorized into two sets. Each set contained a pen, a plate, a glass, and a doll, with each item differing in appearance from its corresponding item in the other set. The results of a pilot study (N = 50) indicated that each item pair had comparable perceived values, all ts < 0.34, ps > 0.73. The task followed a single-factor (Ownership: self-owned or experimenter-owned) within-participants design. In this setup, one set of goods was always designated as the self-owned items, while the other set was classified as experimenter-owned items.
Participants completed the task via an online questionnaire. On one page of this questionnaire, images of four self-owned items were displayed. For each item, participants were asked, “You own this item; how much would you be willing to sell it for?” On another page, images of four experimenter-owned items were shown. For each of these items, participants were asked, “The experimenter owns this item; how much would you be willing to buy it for?” The order of these two pages was randomized. Each response was limited to an integer value between ¥0 and ¥100.
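A straightforward endowment-effect index from these responses is the difference (or ratio) between each participant's mean willingness to accept for the self-owned set and mean willingness to pay for the experimenter-owned set. A short sketch with assumed column and file names follows.

```python
import pandas as pd

# Assumed columns: participant, ownership ('self' / 'experimenter'), item,
# price (integer, 0-100 CNY). WTA is elicited for self-owned items,
# WTP for experimenter-owned items.
ee = pd.read_csv('ee_responses.csv')   # hypothetical file name

means = (ee.groupby(['participant', 'ownership'])['price'].mean()
           .unstack('ownership'))
endowment_effect = means['self'] - means['experimenter']   # WTA minus WTP
wta_wtp_ratio = means['self'] / means['experimenter']
```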
Self-reported scales
Participants completed an online questionnaire using the WJX platform (www.wjx.cn), which included the following measurements.
Big Five personality was measured with the Big Five Inventory-246. This scale consists of 60 items belonging to five dimensions: extraversion, agreeableness, conscientiousness, neuroticism, and openness. Participants rated the extent to which they agreed with each statement on a five-point Likert scale, anchored by 1 (completely disagree) and 5 (completely agree). In the present study, the Cronbach’s alpha coefficients for these dimensions were 0.80, 0.81, 0.83, 0.87, and 0.88, respectively.
Self-construals were assessed using the scale developed by Singelis47. This scale comprises 30 items, categorized into two dimensions: independent self-construal and interdependent self-construal. Participants rated the extent to which they agreed with each statement on a seven-point Likert scale, anchored by 1 (very much disagree) and 7 (very much agree). In the present study, the Cronbach’s alpha coefficients for the two dimensions were 0.72 and 0.77, respectively.
Individualism-Collectivism was measured by