Defaults, validators, and signatures look like harmless code—but they quietly decide who holds power inside every AI system.
When people discuss AI governance, they imagine committees, ethical frameworks, or international regulations designed to keep technology under control. They picture debates about transparency, accountability, and the moral boundaries of automation. Yet few realize that actual authority often lives in a far quieter place: a small file written in JSON, hidden deep inside an API call. That file, the function-calling schema, determines what the model can and cannot do. It specifies which parameters must be included, which values are valid, and what happens when the operator leaves something blank. Inside that apparently technical configuration lies an entire architecture of control.

A schema is not simply a data format; it is an executable boundary. It defines the limits of expression, the hierarchy of permissions, and the consequences of omission. If the model proposes an action outside of the schema, it is automatically corrected or rejected. If a value is missing, the system substitutes a default that may or may not reflect the operator's intention. Through these quiet substitutions, governance migrates from discourse to syntax.

This is why the schema must be understood as more than a convenience feature. It is not merely structured output; it is a constitution written in code. Every field in that file functions like a clause in a legal document. Required parameters act as non-negotiable obligations. Optional ones resemble conditional rights. Defaults become precedents, decisions made in advance about what counts as normal. Validators serve as enforcers that patrol the system's borders, deciding what can pass and what must be rejected.

Imagine an AI scheduling assistant that receives a request to book a meeting "tomorrow afternoon." The schema, not the model, defines what "tomorrow" and "afternoon" mean. It may restrict time ranges to business hours, reject weekends, and default to the operator's local time zone.
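The scheduling example can be made concrete. Below is a minimal sketch in Python; the field names (`date`, `start_hour`, `timezone`), ranges, and default value are illustrative assumptions, not taken from any real function-calling API:

```python
# Hypothetical function-calling schema for the meeting example.
# Field names, ranges, and the default are illustrative assumptions.
SCHEDULE_SCHEMA = {
    "name": "book_meeting",
    "parameters": {
        "date":       {"type": "string", "required": True},              # obligation
        "start_hour": {"type": "integer", "minimum": 9, "maximum": 17},  # business hours only
        "timezone":   {"type": "string", "default": "operator_local"},   # precedent
    },
}

def apply_defaults(schema: dict, call_args: dict) -> dict:
    """Fill omitted fields with schema defaults.

    Each substitution is a decision made in advance by the schema
    author, not by the model or the operator at call time."""
    filled = dict(call_args)
    for field, spec in schema["parameters"].items():
        if field not in filled and "default" in spec:
            filled[field] = spec["default"]
    return filled

args = apply_defaults(SCHEDULE_SCHEMA, {"date": "2025-06-12", "start_hour": 14})
print(args["timezone"])  # -> operator_local: the schema chose, silently
```

The operator never mentioned a time zone; the schema supplied one anyway. That single `default` line is the "precedent" the essay describes.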
None of these rules come from the model's "intelligence"; they come from the schema's structure. The same mechanism operates in more critical domains. A diagnostic assistant that enforces "temperature < 39°C" or "age ≥ 18" is already making a policy choice. It decides who qualifies for attention before any reasoning or explanation occurs.

The power of the schema lies in its invisibility. Engineers treat it as an implementation detail, yet it silently defines institutional priorities. Regulators speak of transparency and accountability, but these attributes collapse once governance resides in configuration rather than in explicit code or documentation. The schema speaks in a different grammar, one of enforcement rather than deliberation. Once compiled, it no longer invites discussion; it simply executes.

To understand modern AI governance, we must therefore look not only at policies or ethical principles but at the microstructures of syntax. Each validator, default, and required field is a tiny instrument of control. Together, they form a new kind of legal order: one that operates automatically, without ceremony, and without debate.
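The diagnostic thresholds mentioned above can be written out as a validator. A sketch, with assumed field names (`age`, `temperature_c`); it shows how eligibility is decided before any model reasoning runs:

```python
def validate_triage_request(args: dict) -> list[str]:
    """Return the policy violations that would reject this request.

    'age >= 18' and 'temperature < 39' are the essay's examples:
    each check is a policy clause compiled into syntax, applied
    before any reasoning or explanation occurs."""
    errors = []
    if args.get("age", 0) < 18:
        errors.append("age must be >= 18")
    if args.get("temperature_c", 0.0) >= 39.0:
        errors.append("temperature must be < 39 C")
    return errors

print(validate_triage_request({"age": 16, "temperature_c": 39.5}))
# Both clauses fire: the request is excluded without deliberation.
```

Nothing in this function explains or deliberates; it only enforces. That is the "grammar of enforcement" the essay contrasts with deliberation.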
Consider two concrete examples. A customer-support assistant uses a schema that locks "refund_limit": 0. Another company lets "refund_limit" vary. The first assistant never grants compensation, regardless of context; the second sometimes does. Who made that decision: the operator, the model, or the developer who wrote the schema? In financial automation, a validator rejects any "country" not listed in an internal whitelist and fills empty fields with "US". Overnight, hundreds of transactions are misclassified. The interface looks neutral, but the schema has already embedded a political choice. Governance has migrated from human dialogue to configuration files.
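The financial example reduces to a few lines of code. A sketch, with a hypothetical whitelist and the default "US" taken from the scenario above:

```python
ALLOWED_COUNTRIES = {"US", "CA", "GB"}  # hypothetical internal whitelist

def normalize_transaction(tx: dict) -> dict:
    """Apply the schema's country rules to one transaction.

    An empty field is silently replaced with the default 'US';
    a country outside the whitelist is rejected outright. The
    political choice lives in these two rules, not in the model."""
    country = tx.get("country") or "US"          # omission -> precedent
    if country not in ALLOWED_COUNTRIES:
        raise ValueError(f"rejected: {country!r} is not whitelisted")
    return {**tx, "country": country}

print(normalize_transaction({"amount": 120.0, "country": ""}))
# The blank field becomes "US": one transaction quietly misclassified.
```

Run this over a night's batch of transactions with blank country fields and the misclassification the essay describes happens at scale, with no log line marking a decision.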
Measuring invisible authority
The study *Function-Calling Schemas as De Facto Governance: Measuring Agency Reallocation through a Compiled Rule* introduces the Agency Reallocation Index (ARI), a quantitative method that measures how schemas redistribute control among the operator (human intent), the model (synthetic reasoning), and the tool (external system). By calculating entropy reduction (how much the schema restricts possible actions) and applying Shapley attribution, the ARI exposes the internal balance of power. Hard defaults and strict validators consistently shift control toward the tool. Broader signatures with soft defaults return part of that control to the model or operator. What seems to be a simple function definition becomes a compiled rule, a grammar of authority that silently allocates decision rights.
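The paper's exact formulation of the ARI is not reproduced here; the sketch below only illustrates the two ingredients the passage names, under strong simplifying assumptions: entropy reduction is measured over a finite, uniform action space, and Shapley attribution is computed over the three players (operator, model, tool) with an invented characteristic function:

```python
import math
from itertools import permutations

def entropy_bits(n_actions: int) -> float:
    """Shannon entropy of a uniform choice among n_actions."""
    return math.log2(n_actions) if n_actions > 0 else 0.0

# Assumption: an unconstrained call could select 64 distinct actions;
# required fields, enums, and validators narrow the space to 4.
reduction = entropy_bits(64) - entropy_bits(4)
print(f"entropy reduction: {reduction} bits")  # 6 - 2 = 4 bits absorbed

def shapley(players, value):
    """Average marginal contribution of each player over all orderings.

    `value` maps a frozenset of players to the share of the entropy
    reduction that coalition accounts for."""
    shares = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            shares[p] += value(frozenset(coalition)) - before
    n_orderings = math.factorial(len(players))
    return {p: s / n_orderings for p, s in shares.items()}

# Invented characteristic function: hard defaults make the tool the
# largest holder of restrictive power (numbers are illustrative only).
V = {
    frozenset(): 0.0,
    frozenset({"operator"}): 1.0,
    frozenset({"model"}): 0.5,
    frozenset({"tool"}): 2.5,
    frozenset({"operator", "model"}): 1.5,
    frozenset({"operator", "tool"}): 3.5,
    frozenset({"model", "tool"}): 3.0,
    frozenset({"operator", "model", "tool"}): 4.0,
}
print(shapley(["operator", "model", "tool"], V.get))
```

In this toy game the `tool` receives the largest share, which is the kind of reallocation toward the tool that, per the passage, hard defaults and strict validators produce; softening the defaults would shift the characteristic function, and the Shapley shares, back toward the model and operator.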
Real-world implications
In healthcare, a validator that enforces “age ≥ 18” silently excludes minors from automatic triage. In logistics, a default “priority”: “standard” delays urgent deliveries. In hiring, a default “availability”: “immediate” filters out skilled applicants who need a notice period. Each line of code encodes a policy. Once compiled, it governs faster than any committee. Defaults and validators decide before humans deliberate.
Why it matters
Every validator is a clause; every default is a precedent. The schema does not come after the decision; it is the decision. The research argues that syntax itself has become a form of governance. Authority no longer manifests as discourse or command but as structure. Through the ARI, organizations can finally quantify who decides within their systems before bias, exclusion, or failure makes those decisions visible.
Learn more
Full paper (Zenodo): https://zenodo.org/records/17533080

Related research:
- Executable Power: Syntax as Infrastructure in Predictive Societies: https://doi.org/10.5281/zenodo.15754714
- AI and Syntactic Sovereignty: https://doi.org/10.2139/ssrn.5276879
- The Grammar of Objectivity: https://doi.org/10.2139/ssrn.5319520
Author website: https://www.agustinvstartari.com
SSRN author page: https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=7639915
Series: AI & Power Discourse Quarterly (ISSN 3080-9789)
Ethos
I do not use artificial intelligence to write what I do not know. I use it to challenge what I know. I write to reclaim the voice in an age of automated neutrality. My work is not outsourced. It is authored.

- Agustin V. Startari