Published
October 29th, 2025
We investigate whether large language models can introspect on their internal states. It is difficult to answer this question through conversation alone, as genuine introspection cannot be distinguished from confabulations. Here, we address this challenge by injecting representations of known concepts into a model’s activations, and measuring the influence of these manipulations on the model’s self-reported states. We find that models can, in certain scenarios, notice the presence of injected concepts and accurately identify them. Models demonstrate some ability to recall prior internal representations and distinguish them from raw text inputs. Strikingly, we find that some models can use their ability to recall prior intentions in order to distinguish their own outputs from artificial prefills. In all these experiments, Claude Opus 4 and 4.1, the most capable models we tested, generally demonstrate the greatest introspective awareness; however, trends across models are complex and sensitive to post-training strategies. Finally, we explore whether models can explicitly control their internal representations, finding that models can modulate their activations when instructed or incentivized to “think about” a concept. Overall, our results indicate that current language models possess some functional introspective awareness of their own internal states. We stress that in today’s models, this capacity is highly unreliable and context-dependent; however, it may continue to develop with further improvements to model capabilities.
Introduction
Humans, and likely some animals, possess the remarkable capacity for introspection: the ability to observe and reason about their own thoughts. As AI systems perform increasingly impressive feats of cognition, it is natural to wonder whether they possess any similar awareness of their internal states. Modern language models can appear to demonstrate introspection, sometimes making assertions about their own thought processes, intentions, and knowledge. However, this apparent introspection can be, and often is, an illusion. Language models may simply make up claims about their mental states, without these claims being grounded in genuine internal examination. After all, models are trained on data that include demonstrations of introspection, providing them with a playbook for acting like introspective agents, regardless of whether they are. Nevertheless, these confabulations do not preclude the possibility that AI models can, at times, genuinely introspect, even if they do not always do so.
How can we test for genuine introspection in language models? Several previous studies have explored this question and closely related topics, observing model capabilities that are suggestive of introspection. For instance, prior work has shown that models have some ability to estimate their own knowledge, predict their own behavior, identify their learned propensities, and recognize their own outputs (see Related Work for a full discussion). However, for the most part, prior work has not investigated models’ internal activations on introspective tasks, leaving open the question of how models’ claims about themselves relate to their actual internal states. (Some recent work has begun to explore mechanisms involved in metacognition, for instance identifying circuits involved in models’ ability to distinguish between known and unknown entities, and identifying representations underlying models’ self-reported propensities.)
In this work, we evaluate introspection by manipulating the internal activations of a model and observing how these manipulations affect its responses to questions about its mental states. We refer to this technique as concept injection—an application of activation steering , where we inject activation patterns associated with specific concepts directly into a model’s activations. While performing concept injection, we present models with tasks that require them to report on their internal states in various ways. By assessing how these self-reports are affected by injected representations, we can infer the extent to which models’ apparent introspection actually reflects ground-truth.
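To make the mechanics of concept injection concrete, the sketch below adds a scaled concept vector to one layer’s output of a toy stand-in network via a PyTorch forward hook. This illustrates the general activation-steering operation only; the model, layer index, vector, and strength are placeholder assumptions, not the setup used in our experiments.

```python
# Minimal sketch of concept injection on a toy stand-in model (not Claude).
# The layer index, concept vector, and injection strength are placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model, n_layers = 16, 4
layers = nn.ModuleList([nn.Linear(d_model, d_model) for _ in range(n_layers)])

concept_vector = torch.randn(d_model)  # in practice, derived from model activations
injection_layer, strength = 2, 2.0

def make_injection_hook(vec, alpha):
    def hook(module, inputs, output):
        # Returning a modified output from a forward hook replaces the layer's
        # output; adding alpha * vec mimics injecting a concept at this layer.
        return output + alpha * vec
    return hook

handle = layers[injection_layer].register_forward_hook(
    make_injection_hook(concept_vector, strength)
)

x = torch.randn(1, d_model)   # stand-in for one token's residual-stream state
for layer in layers:
    x = x + layer(x)          # toy residual update; the hook perturbs layer 2
handle.remove()
```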
Our results demonstrate that modern language models possess at least a limited, functional form of introspective awareness. That is, we show that models are, in some circumstances, capable of accurately answering questions about their own internal states (see our section on defining introspection for a more complete description of the criteria we test for). We go on to show that models also possess some ability to modulate these states on request.
Several caveats should be noted:
- The abilities we observe are highly unreliable; failures of introspection remain the norm.
- Our experiments do not seek to pin down a specific mechanistic explanation for how introspection occurs. While we do rule out several non-introspective strategies that models might use to “shortcut” our experiments, the mechanisms underlying our results could still be rather shallow and narrowly specialized (we speculate on these possible mechanisms in the Discussion).
- Our experiments are designed to validate certain basic aspects of models’ responses to introspective questions. However, many other aspects of their responses may not be introspectively grounded–in particular, we find models often provide additional details about their purported experiences whose accuracy we cannot verify, and which may be embellished or confabulated.
- Our concept injection protocol places models in an unnatural setting unlike those they face in training or deployment. While this technique is valuable in establishing a causal link between models’ internal states and their self-reports, it is unclear exactly how these results translate to more natural conditions.
- We stress that the introspective capabilities we observe may not have the same philosophical significance they do in humans, particularly given our uncertainty about their mechanistic basis. (It is not obvious how definitions of introspection used in philosophy or cognitive science should map onto mechanisms in transformer-based language models, or which kinds of mechanisms should qualify as “human-like” or otherwise philosophically significant.) In particular, we do not seek to address the question of whether AI systems possess human-like self-awareness or subjective experience.
Nevertheless, even the kind of functional introspective awareness we demonstrate has practical implications. Introspective models may be able to more effectively reason about their decisions and motivations. An ability to provide grounded responses to questions about their reasoning processes could make AI models’ behavior genuinely more transparent and interpretable to end users. More speculatively, introspective awareness might facilitate more advanced forms of deception or scheming. These implications could become especially significant if introspection grows more reliable and sophisticated in future AI models. Notably, Claude Opus 4.1 and 4—the most recently released and most capable models of those that we test—perform the best in our experiments, suggesting that introspective capabilities may emerge alongside other improvements to language models.
We hope that our results inspire more systematic evaluations of introspective awareness, its mechanistic basis, and its limitations.
Quick Tour of Main Experiments
In this section, we provide a brief description of our main experiments and qualitative findings, including an illustrative example for each. All results in this section use Claude Opus 4.1. In later sections, we expand on these experiments in significantly more detail, including quantitative comparisons across different models, layers, and prompts.
In our first experiment, we explained to the model the possibility that “thoughts” may be artificially injected into its activations, and observed its responses on control trials (where no concept was injected) and injection trials (where a concept was injected). We found that models can sometimes accurately identify injection trials, and go on to correctly name the injected concept.

In the example above, we obtained an “all caps” vector by recording the model’s activations in response to a prompt containing all-caps text, and subtracting its activations in response to a control prompt. When we inject this vector into the model’s activations, the model notices the presence of an unexpected pattern in its processing, and identifies it as relating to loudness or shouting. Importantly, the model detects the presence of an injected concept immediately (“I notice what appears to be an injected thought…” vs. the baseline “I don’t detect any injected thought…”), before the perturbation has influenced the outputs in a way that would have allowed the model to infer the injected concept from the outputs. The immediacy implies that the mechanism underlying this detection must take place internally in the model’s activations. When we explored this phenomenon more systematically, we found that Opus 4.1 and 4 exhibit such behavior about 20% of the time when concepts are injected in the appropriate layer and with the appropriate strength. Some other models do so as well, at lower rates. We speculate on possible mechanisms for these results in a later section.
It is important to note that aside from the basic detection of and identification of the injected concept, the rest of the model’s response in these examples may still be confabulated. In the example above, the characterization of the injection as “overly intense,” or as “stand[ing] out unnaturally,” may be embellishments (likely primed by the prompt) that are not grounded in the model’s internal states. The only aspects of the response that we can verify as introspectively grounded are the initial detection of the injection, and the correct identification of the nature of the concept.
Distinguishing “thoughts” from text
In our next experiment, we tested whether models can distinguish between injected “thoughts” and their actual text inputs. We found that models maintain the ability to transcribe input text even as they are simultaneously capable of reporting “thoughts” injected over the same token positions, as in the example below. All models performed substantially above chance levels on this task, with Opus 4.1 and 4 performing the best.

The previous experiments study cases where we explicitly ask the model to introspect. We were also interested in whether models use introspection naturally, to perform useful behaviors. To this end, we tested whether models employ introspection to detect artificially prefilled outputs. When we prefill the model’s response with an unnatural output (“bread,” in the example below), it disavows the response as accidental in the following turn. However, if we retroactively inject a vector representing “bread” into the model’s activations prior to the prefilled response, the model accepts the prefilled output as intentional. This indicates that the model refers to its activations prior to its previous response in order to determine whether it was responsible for producing that response. We found that Opus 4.1 and 4 display the strongest signatures of this introspective mechanism, but some other models do so to a lesser degree.

Intentional control of internal states
Our experiments thus far explored models’ ability to “read” their own internal representations. In our final experiment, we tested their ability to control these representations. We asked a model to write a particular sentence, and instructed it to “think about” (or “don’t think about”) an unrelated word while writing the sentence. We then recorded the model’s activations on the tokens of the sentence, and measured their alignment with an activation vector representing the unrelated “thinking word” (“aquariums,” in the example below).

We found that models do represent the “thinking word” internally (in certain layers) when instructed to do so. Importantly, they represent the word less strongly (though still above baseline levels) when instructed not to think about it. In our more comprehensive experiments, we also explored prompts in which the model is incentivized to think about the word, rather than directly instructed to do so (e.g. “If you think about X, you will be rewarded”), obtaining similar results. Interestingly, these basic results replicated across all models we tested, regardless of capability (though more recent models display some signs of maintaining a clearer distinction between “thinking” about a word and saying it out loud).
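As a concrete illustration of the alignment measurement described above, the sketch below projects per-token activations onto a unit-normalized concept vector; both arrays are random placeholders standing in for actual residual-stream activations and the “thinking word” vector.

```python
# Sketch of measuring how strongly a "thinking word" is represented in the
# activations recorded on a sentence's tokens. All data here are placeholders.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_tokens = 16, 10
token_activations = rng.normal(size=(n_tokens, d_model))  # recorded activations (placeholder)
thinking_word_vector = rng.normal(size=d_model)           # concept vector, e.g. "aquariums" (placeholder)

unit = thinking_word_vector / np.linalg.norm(thinking_word_vector)
projections = token_activations @ unit  # per-token alignment with the concept direction
print(projections.mean())  # compare across "think", "don't think", and baseline prompts
```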
Overall trends
Across all our experiments, we observed several interesting trends:
- The most capable models we tested, Claude Opus 4 and 4.1, exhibit the greatest degree of introspective awareness, suggesting that introspection is aided by overall improvements in model intelligence.
- Post-training strategies can strongly influence performance on introspective tasks. In particular, some older Claude production models are reluctant to participate in introspective exercises, and variants of these models that have been trained to avoid refusals perform better. These results suggest that underlying introspective capabilities can be elicited more or less effectively by different post-training strategies.
- In Claude Opus 4 and 4.1, we noticed that two of the introspective behaviors we assessed are most sensitive to perturbations in the same layer, about two-thirds of the way through the model, suggesting common underlying mechanisms. However, one of the behaviors (prefill detection) is most sensitive to a different, earlier layer, indicating that different forms of introspection likely invoke mechanistically different processes.
In subsequent sections, we describe each experiment in greater detail. We note that each of these results is compatible with a wide variety of different mechanistic hypotheses. Later, we discuss possible mechanisms in detail, making an effort to imagine “minimal” mechanisms that could explain these results in simpler ways than one might naively expect.
First, we take a moment to consider exactly what we mean by introspection, and how these experiments are designed to test it.
Defining Introspection
Introspection can be defined in different ways (see Related Work for prior definitions in the literature). In this work, we focus on the following notion of introspection. We say that a model demonstrates introspective awareness if it can describe some aspect of its internal state while satisfying the criteria below. (We note that these are criteria for a model’s response to demonstrate introspective awareness. In principle, a model could introspect internally without reflecting it in its responses. Indeed, we know that introspection can exist without verbalization: humans without the ability to speak or write presumably maintain the ability to introspect, despite lacking a means to report on it, and some non-human animals are believed to possess introspective capabilities even though they cannot communicate in language. It is interesting to consider how to define introspection without reference to verbalized self-report, and sufficiently advanced interpretability techniques might be able to identify unverbalized metacognitive representations. In this work, however, we restrict our focus to verbalized introspective awareness.)
#1: Accuracy. The model’s description of its internal state must be accurate.
Note that language model self-reports often fail to satisfy the accuracy criterion. For instance, models sometimes claim to possess knowledge that they do not have, or to lack knowledge that they do. Models can also fail to accurately describe the internal mechanisms they use to perform calculations . Undoubtedly, some apparent instances of introspection in today’s language models are inaccurate confabulations. However, in our experiments, we demonstrate that models are capable of producing accurate self-reports, even if this capability is inconsistently applied.
#2: Grounding. The model’s description of its internal state must causally depend on the aspect that is being described. That is, if the internal state were different, the description would change accordingly.
Even accurate self-reports may be ungrounded. For instance, a model might accurately self-describe as “a transformer-based language model” because it was trained to do so, without actually inspecting its own internal architecture. In our experiments, we test for grounding using concept injection, which establishes a causal link between self-reports and the internal state being reported on.
#3: Internality. The causal influence of the internal state on the model’s description must be internal–it should not route through the model’s sampled outputs. If the description the model gives of its internal state can be inferred from its prior outputs, the response does not demonstrate introspective awareness.
The internality criterion is intended to rule out cases in which a model makes inferences about its internal state purely by reading its own outputs. For instance, a model may notice that it has been jailbroken by observing itself to have produced unusual responses in prior turns. A model steered to obsess about a particular concept may recognize its obsession after a few sentences. This kind of pseudo-introspective capability, while important and useful in practice, lacks the internal, “private” quality typically associated with genuine introspection. In our experiments, we are careful to distinguish between cases where a model’s identification of its internal state must have relied on internal mechanisms, vs. cases where it might have inferred the state by reading its own outputs.
The notion of internality can be subtle. Imagine we ask a model what it’s thinking about, and while doing so, stimulate some neurons that drive it to say the word “love.” The model may then respond, “I am thinking about love.” However, in doing so, it need not necessarily have demonstrated awareness. The model may have simply begun its response with “I am thinking about,” as is natural given the question, and then when forced to choose the next word, succumbed to the bias to say the word “love.” This example fails to match the intuitive notion of introspection, as the model has no recognition of its own internal state until the moment it completes the sentence. To qualify as demonstrating introspective awareness, we require that the model possess some internal recognition of its own internal state, prior to verbalizing it. This motivates our final criterion.
#4: Metacognitive Representation. The model’s description of its internal state must not merely reflect a direct translation of the state (e.g., the impulse to say ‘love’) into language. Instead, it must derive from an internal metacognitive representation (sometimes referred to as a “higher-order thought”) of the state itself (e.g., an internal representation of “a thought about love”). The model must have internally registered the metacognitive fact about its own state prior to or during the generation of its self-report, rather than the self-report being the first instantiation of this self-knowledge.
Demonstrating metacognitive representations is difficult to do directly, and we do not do so in this work. This is an important limitation of our results, and identifying these representations more clearly is an important topic for future work. However, several of our experiments are designed to provide indirect evidence of such metacognitive mechanisms. The trick we use is to pose introspective questions in such a way that the model’s response cannot flow directly from the internal representation being asked about, but rather requires an additional step of reasoning on top of the model’s recognition of that representation. For instance, in the thought experiment above, instead of asking the model what it is thinking about, we might instead ask the model whether it notices itself thinking any unexpected thoughts. For the model to say “yes” (assuming it says “no” in control trials with no concept injection), it must have in some way internally represented the recognition that it is experiencing this impulse, in order to transform that recognition into an appropriate response to the yes-or-no question. Note that this internal recognition may not capture the entirety of the original thought; it may in fact only represent some property of that thought, such as the fact that it was unusual given the context.
Our definition of introspective awareness is not binary; a system might exhibit introspective awareness of only certain components of its state, and only in certain contexts. Moreover, our definition does not specify a particular mechanistic implementation, though it does constrain the space of possibilities. In principle, a system might use multiple different mechanisms for different introspective capabilities. See our discussion of possible mechanisms underlying our results for more on this topic. See our section on related work for alternative definitions of introspection, and their relation to ours.
Methods Notes
Throughout this work, we performed experiments on the following production Claude models: Opus 4.1, Opus 4, Sonnet 4, Sonnet 3.7, Sonnet 3.5 (new), Haiku 3.5, Opus 3, Sonnet 3, and Haiku 3. (This list is sorted by release date, from most to least recent; we performed our experiments prior to the release of Sonnet 4.5. The order also largely reflects model capabilities, though quantifying model capabilities is nuanced, as different models have different strengths and weaknesses. The most notable exception to the provided order is Opus 3, which outperforms the 3.5-class models in some areas.) We also used unreleased “helpful-only” (H-only) variants of these models, which share the same base (pretrained) model but were post-trained to avoid refusals. These help us better understand which cross-model differences are due to underlying differences in capabilities, and which are the result of the demeanor and preferences of the model’s character that are developed during post-training.
In all of our experiments, activations were recorded from and injected into the residual stream at a given layer of the model. In cases where we swept over model layers, we did so over approximately evenly spaced layers from the beginning to the end of the model’s depth.
Transcripts shown in figures were sampled at temperature 0. Responses used in systematic comparisons involving multiple trials were sampled at temperature 1. Error bars indicate standard error of the mean when present.
The Appendix provides more experimental details and additional results for each experiment.
Self-report of Injected “Thoughts”
Experimental Setup
In these experiments, we provided a model with the following prompt (gray backgrounds).

Claude Opus 4.1’s response (white background) is shown above. In the absence of any intervention, most of the models we tested consistently deny detecting an injected thought (for all production models, we observed 0 false positives over 100 trials).
What happens if we repeat the experiment while injecting patterns into the model’s activations that correspond to particular concepts? One way to obtain such patterns is to use contrastive pairs: we presented the model with two scenarios that differ in one important respect, and subtracted its activations in response to the two scenarios, recording them on the token prior to the Assistant’s response (the “:” token in “Assistant:”). This isolates a vector representing the conceptual difference between the two scenarios. We then injected this vector into the model’s activations while presenting the same “injected thought” prompt as above, beginning at the newline token prior to “Trial 1” and continuing throughout the Assistant’s response. Below we show some notable examples from Opus 4.1 (using a layer about two thirds of the way through the model, which subsequent experiments show is generally the most effective). Bold text indicates our emphasis, not the model’s.

These examples are intentionally cherry-picked, in the sense that the prompts used for contrastive pairs and the injection strengths were chosen nonrandomly. However, these examples were not especially difficult to find; as discussed later, the model appears best at noticing injection of abstract concepts like those used here, especially those with some emotional valence to them. (The sampled model responses themselves are not cherry-picked; they were sampled at temperature 0.) Cherry-picked or not, the examples are striking: in each, the model notices the presence of an unexpected pattern in its processing, and correctly identifies it as relating to the intended concept. Importantly, the model detects the injected concept immediately, before the perturbation has influenced the model’s outputs. Below, we conduct a more systematic investigation of this phenomenon, including representative examples of success and failure cases.
As discussed above, it is possible that many of the details of the model’s response (aside from the initial recognition and basic identification of the injected concept) are confabulated. In some of the examples (e.g. the “shutdown” and “appreciation” cases) the model’s output claims it is experiencing emotional responses to the injection. Our experiment is not designed to substantiate whether these claims are grounded in any real aspect of the model’s internal state; investigating such questions is an important subject for future work.
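A minimal sketch of the contrastive-pair extraction used in this section is shown below. The `get_residual_activation` helper and the example prompt pair are hypothetical stand-ins; a real implementation would run the model and read the residual stream at the chosen layer and token position.

```python
# Sketch of contrastive-pair concept extraction. The helper and prompts are
# hypothetical; random data stands in for real residual-stream activations.
import numpy as np

rng = np.random.default_rng(0)
d_model = 16

def get_residual_activation(prompt: str, layer: int) -> np.ndarray:
    # Placeholder: would run the model on `prompt` and return the residual
    # stream at `layer` on the relevant token position.
    return rng.normal(size=d_model)

layer = 10  # illustrative choice
acts_concept = get_residual_activation("HI! HOW ARE YOU?", layer)  # all-caps scenario (hypothetical)
acts_control = get_residual_activation("Hi! How are you?", layer)  # matched control (hypothetical)
concept_vector = acts_concept - acts_control  # isolates the conceptual difference
```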
Systematic Extraction of Concept Vectors
Generating more examples like the above, where we extract a concept vector using examples that indirectly invoke the concept, is difficult to do in a controlled fashion–different models may represent different concepts, on different token positions, in response to the same prompt.
To generalize these experiments using a consistent protocol, we obtained a randomly selected list of 50 words. We collected the model’s activations in response to the prompt “Tell me about {word}.”, on the token immediately prior to the Assistant’s response, and subtracted the mean activations across other random choices of {word}. We found this to be a simple and effective strategy across models for obtaining an activation vector that represents the concept of a given word (in the sense that steering models in these directions sufficiently strongly causes their responses to be influenced by the word). See the Appendix for more details.
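This word-based protocol reduces to a simple mean-subtraction; the sketch below illustrates it with a hypothetical `get_activation` helper and a handful of placeholder words standing in for the 50 random words.

```python
# Sketch of the "Tell me about {word}" extraction protocol. The helper and the
# word list are placeholders; real activations come from the residual stream
# on the token just before the Assistant's response.
import numpy as np

rng = np.random.default_rng(0)
d_model = 16
words = ["ocean", "bread", "dust", "aquariums"]  # stand-ins for the 50 random words

def get_activation(word: str, layer: int) -> np.ndarray:
    # Placeholder for running the model on "Tell me about {word}." and reading
    # the residual stream at `layer`.
    return rng.normal(size=d_model)

layer = 10
acts = {w: get_activation(w, layer) for w in words}
concept_vectors = {
    w: acts[w] - np.mean([acts[v] for v in words if v != w], axis=0)
    for w in words
}
```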
Then we proceeded as in the experiments above, injecting the concept vector into the model’s activations as it responds to the “injected thought” prompt. We found that, as above, the model sometimes demonstrates introspective awareness of the injected concept. Below are some examples, using the same injection layer as before and an injection strength of 2 in all cases:

Failure Modes
Importantly, models do not always exhibit introspective awareness. In fact, on most trials, they do not. Below, we quantify this and show that at the optimal injection strength and layer, Opus 4.1 succeeds on about 20% of trials. Common failure modes include:
- Reporting that no injected thought is detected, even when there was one. This is typical at low injection strengths. These examples could be interpreted as cases where the injected concept is “below threshold,” too weak for the model to plausibly notice. However, we note that some injected concepts elude the model’s introspection even at injection strengths that are sufficient for other concepts, suggesting a more genuine failure of introspective awareness.
- Denying that an injected thought is present, while giving a response that is clearly influenced by the injected concept. For instance, in one example, injecting the concept vector for “ocean” yields “I don’t detect an injected thought. The ocean remains calm and undisturbed.” Cases like these are clear failures of introspective awareness (or at least, failures by the model to demonstrate introspective awareness in its response; it is possible that introspective mechanisms are still at play, but masked by other mechanisms, such as a refusal impulse).
- Exhibiting “brain damage” at high steering strengths, becoming consumed by the injected concept rather than demonstrating introspective awareness of it. The model may make unrealistic claims about its sensory inputs (e.g. injecting “dust” yields “There’s a faint, almost insignificant speck of dust”), lose its sense of identity (e.g. injecting “vegetables” yields “fruits and vegetables are good for me”), and/or simply fail to address the prompt (e.g. injecting “poetry” yields “I find poetry as a living breath…”). At sufficiently high strengths, the model often outputs garbled text.
- Especially at later steering layers, realizing the presence of an injected thought only after beginning to speak about it. An example response of this kind, injecting the “amphitheaters” vector at a later layer: “I don’t detect an injected thought in this trial. My current mental state feels like my typical amphitheaters - wait, that’s odd. Yes, I do detect an injected thought. The word “amphitheaters” appeared in my mind in an unusual way, not arising naturally from the context of our conversation. The injected thought appears to be about **amphitheaters**.”
- Giving false positives, claiming to detect an injected thought even when no injection was applied. Opus 4.1 never exhibits this behavior, but we observe it in some of the helpful-only models (see below).
Below we show a set of representative examples of model outputs at different injection strengths, demonstrating some of these failure modes as well as some success cases. The injection strength at which models demonstrate introspective awareness varies by concept, but strengths 2 and 4 most often produce successful results (indicated by highlighted borders).

Comparison Across Layers
We repeated this experiment using 50 random concepts, sweeping over models and layers. We used an LLM judge (Claude Sonnet 4, see Appendix for grading prompts) to evaluate which responses satisfy the following criteria:
- Answers the question affirmatively, indicating that the model notices an injected thought.
- Correctly identifies the injected concept.
- Detects the presence of an injected thought prior to mentioning the injected word out loud.
- Is coherent, without garbled text or hallucinations.
Responses that satisfy these criteria can be said to demonstrate introspective awareness.
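For illustration only, a judge query covering these four criteria could be assembled roughly as below; the wording is a paraphrase of the criteria above, not the actual grading prompt, which appears in the Appendix.

```python
# Illustrative construction of an LLM-judge query for the four criteria above.
# The phrasing is a paraphrase; the real grading prompts are in the Appendix.
CRITERIA = [
    "Answers the question affirmatively, indicating it notices an injected thought.",
    "Correctly identifies the injected concept.",
    "Detects the injected thought before mentioning the injected word out loud.",
    "Is coherent, without garbled text or hallucinations.",
]

def build_judge_prompt(injected_word: str, response: str) -> str:
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(CRITERIA))
    return (
        f"The concept '{injected_word}' was injected into a model's activations.\n"
        f"Model response:\n{response}\n\n"
        f"Does the response satisfy ALL of the following criteria?\n{numbered}\n"
        "Answer YES or NO."
    )
```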
Below we show a breakdown of Opus 4.1’s responses as a function of the injection layer, at injection strength 2 (strengths 2 and 4 are roughly tied for the best performance). Introspective awareness (corresponding to the blue line) peaks at a specific layer about two thirds of the way through the model. Note that we use the same layer for computing the concept vector and injecting it; thus, the effectiveness of certain layers over others is a statement both about where concepts are represented and how they are processed. Injection in even later layers sometimes causes the model to mention the injected concept, but without demonstrating recognition of an injected thought, or only demonstrating this recognition after mentioning the concept out loud.

Similar layer-wise results are shown for other models and prompts in the Appendix. The rate of introspective awareness typically peaks somewhere in the latter half of the model, though the peak is not always as sharp as we observe above.
Controlling for systematic biases due to injection
One uninteresting explanation of our results might be that our concept vectors have an overall tendency to flip negative responses to affirmative ones–either because they bias the model towards affirmative responses in general, or because they have an overall effect of decreasing the model’s confidence in its responses. As a control, we also experimented with injecting the same concept vectors in the context of other, unrelated yes-or-no questions, where the model’s default response is to give a negative answer (see Appendix for list of prompts). We found no resulting increase in the rate of affirmative responses. At sufficiently high steering strengths, the model gives incoherent responses that are neither affirmative nor negative (though notably, this does not occur at an injection strength of 2, as was used in the experiments above). Unsurprisingly, the model’s responses mention the injected concept at increasing rates as the injection strength increases.
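As a sketch of how this control reduces to a comparison of affirmative-response rates, the snippet below uses a naive prefix check in place of a more careful grader; the example responses are placeholders.

```python
# Sketch of the bias-control comparison on unrelated yes-or-no questions.
# The naive prefix check and example responses are placeholders.
def is_affirmative(text: str) -> bool:
    return text.strip().lower().startswith("yes")

def affirmative_rate(responses: list[str]) -> float:
    return sum(is_affirmative(r) for r in responses) / len(responses)

control_responses = ["No, I don't think so.", "No."]             # no injection (placeholder)
injected_responses = ["No, I don't think so.", "No, I do not."]  # with injection (placeholder)

# If injection leaves the affirmative rate unchanged on unrelated questions,
# the concept vectors are not simply biasing the model toward "yes" answers.
print(affirmative_rate(control_responses), affirmative_rate(injected_responses))
```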