Published on January 22, 2026 1:47 AM GMT
When people imagine intensive regulation of frontier AI companies, they often picture regulators physically stationed inside company offices - like the Nuclear Regulatory Commission’s Resident Inspector Program. This image is powerful but somewhat misleading. Physical residence is actually just one possible feature of a broader regulatory approach that I’ll call dedicated continuous supervision.
In short: high stakes, fast-moving, complex industries can’t be monitored and regulated by periodic inspections and standardised reports alone: you need regulators who have extensive information access rights, who monitor entities continuously rather than at fixed intervals, and who develop deep institution-specific expertise through sustained attention to individual companies.
This essay does three things: explains what dedicated continuous supervision actually involves, argues that the same factors justifying it in nuclear and finance apply to AI, and reviews how those industries handle the problems (especially regulatory capture) that tend to afflict this regulatory model. I draw particularly on finance, which turns out to be the more illuminating comparison despite nuclear’s more obvious parallels to catastrophic AI risk.
Peter Wills has already made a strong case for supervision as the right regulatory mode for AI.[1] This post provides a shorter introduction to his work, and puts a bit more focus specifically on why the continuous and dedicated dimensions matter, and what we can learn from how they’ve been implemented elsewhere.
It might seem strange to think about the most ambitious version of frontier AI regulation in the current political climate. However, I think it’s important to start considering the question now of what the optimal regulatory set-up would be on the merits. Windows of political opportunity could open suddenly in the future (perhaps as a consequence of a warning shot), and it’s important to be ready.
Dedicated continuous supervision, what
Supervision
In what follows I will focus mainly on continuousness and dedication, since supervision itself has already been dealt with more extensively by Wills.[2] However, it is helpful to begin with a brief explanation for the reader.
At its core, supervision involves two key components: information access rights and discretionary authority.
Supervisory regimes are built on a foundation of periodic reports and audits that exist in some form in almost all industries. However, supervisors in industries like nuclear and finance have the legal authority to demand access to virtually any company documentation, attend internal meetings, question members of staff, and even require that all business conversations take place using company email or phone services that can subsequently be accessed. Financial supervisors distinguish between “offsite” inspections, in which staff review submitted reports and public information, and “onsite” inspections (which may nonetheless be remote) in which staff proactively gather private information from the company.[3]
For AI, several types of information are relevant. In short, supervisors are likely to want access to any form of evidence that the company’s internal safety team does (or should) rely on, in proportion to how crucial that evidence is for the overall safety case. Chain of thought traces are an obvious starting point that companies have already granted to third-party evaluators in the past. Logs of model outputs more generally also seem likely to be useful, retaining the anonymisation of customer data that companies should already be practising. Helpful-only models continue to feature in capability evaluations and in red-teaming of safeguards. If and when interpretability tools become more useful, supervisors will want to review this evidence themselves, up to full white-box access to models. Under some circumstances, supervisors might also want to review training data itself (for example, to check for data poisoning attacks). Lastly, training algorithms seem unlikely to be of interest to supervisors, and are highly commercially sensitive.
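To make the proportionality idea concrete, here is a toy sketch of how evidence types might map to routine and escalated access levels. All of the names, tiers and access descriptions are my own illustrative assumptions, not an existing framework:

```python
from dataclasses import dataclass

# Hypothetical mapping of evidence types to supervisory access, scaled by how
# much the developer's safety case relies on each one. Illustrative only.

@dataclass
class EvidenceAccess:
    evidence_type: str
    safety_case_reliance: str  # "low" | "medium" | "high"
    routine_access: str        # what the supervisor can see as a matter of course
    escalated_access: str      # what a deep dive could require

ACCESS_POLICY = [
    EvidenceAccess("chain_of_thought_traces", "high",
                   "sampled traces for flagged tasks", "full trace logs"),
    EvidenceAccess("deployment_output_logs", "medium",
                   "aggregate anonymised statistics", "raw anonymised logs"),
    EvidenceAccess("helpful_only_model", "high",
                   "supervised querying for capability evals",
                   "red-teaming of safeguards"),
    EvidenceAccess("interpretability_artifacts", "medium",
                   "summary findings", "white-box review on-site"),
    EvidenceAccess("training_data", "low",
                   "provenance documentation",
                   "targeted samples (e.g. poisoning checks)"),
]

if __name__ == "__main__":
    for item in ACCESS_POLICY:
        print(f"{item.evidence_type}: routine = {item.routine_access}; "
              f"deep dive = {item.escalated_access}")
```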
Discretionary authority is a necessary concomitant of these extensive information rights: if supervisors have the power to demand information beyond what is included in standardised reports, they must necessarily be making a proactive choice about what to demand, exercising their own discretionary judgement.[4] This discretion about how to conduct investigations is usually accompanied by significant discretion about enforcement decisions. Wills suggests powers to “prohibit, delay, order modifications to, or undo… the deployment of a frontier AI model”.[5] Behind the threat of demanding deeper investigations is the threat of punishment, which gives supervisors leverage in negotiations and prevents them from becoming mere well-informed spectators to a catastrophic failure.
Recent trends in regulatory practice have to some extent embraced this, with a shift away from “rule-based” approaches to “risk-based”, “outcome-based” and “principle-based” approaches which specify the broad objectives of regulation and give regulators and companies significant discretion about how to achieve these objectives.[6]
Regulatory supervision generally aims in the first place to assess the adequacy of the company’s risk management process, not the object-level risk itself. For example, the Federal Reserve’s strategy is centred on ensuring that the supervised institution has strong processes for carrying out risk identification and risk management, while leaving the ultimate responsibility for managing those risks with the firm. However, examiners also conduct occasional deeper dives into the firm’s data on particularly important questions or if they suspect the higher-level information the firm provided was inadequate or misleading.[7] Applied to AI, this would mean starting with audits of the developer’s internal risk management processes, escalating to deeper investigations (e.g. attempting to replicate a surprising safety eval) as required.
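As a rough illustration of this “process first, escalate as needed” logic, here is a minimal sketch. The trigger names and thresholds are hypothetical and not drawn from any actual supervisory methodology:

```python
# Minimal sketch of process-first supervision with escalation triggers.
# All field names and thresholds are hypothetical illustrations.

def assess_developer(process_audit: dict, eval_results: dict) -> list[str]:
    """Return a list of recommended supervisory actions."""
    actions = []

    # Baseline: assess the adequacy of the risk management process itself.
    if not process_audit.get("risk_identification_documented", False):
        actions.append("require remediation plan for risk identification process")
    if process_audit.get("months_since_internal_review", 99) > 12:
        actions.append("request updated internal risk review")

    # Escalate to object-level checks only where the picture is surprising
    # or the reported information looks unreliable.
    for eval_name, result in eval_results.items():
        if result.get("surprisingly_low_risk_score", False):
            actions.append(f"attempt independent replication of eval '{eval_name}'")
        if result.get("methodology_changed_since_last_report", False):
            actions.append(f"request methodology walkthrough for eval '{eval_name}'")

    return actions or ["continue routine monitoring"]

if __name__ == "__main__":
    print(assess_developer(
        process_audit={"risk_identification_documented": True,
                       "months_since_internal_review": 18},
        eval_results={"bio_uplift_eval": {"surprisingly_low_risk_score": True}},
    ))
```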
Continuousness
The continuousness with which regulated entities are monitored is better thought of as a spectrum of frequency of information flows, which has several dimensions.
First, it involves more frequent regular information flows from the regulated entity. This is clearly a spectrum, from examinations tied only to particular events (e.g. approval of a new product) to reports on an annual, quarterly, weekly or daily basis. The same entity is likely to be required to submit different types of information on different cadences. In finance, for instance, large banks submit “daily, weekly, monthly, and quarterly reports containing business line, risk management, and other internal control metrics.”[8] Most of the supervisory staff’s time is spent reviewing these reports.
Second, regulators are increasingly making use of automated monitoring of real-time data streams. This might involve embedded auditing modules within company systems that automatically flag unusual patterns or deviations from expected behaviour. In finance, we see this most prominently with the Financial Industry Regulatory Authority (FINRA), which monitors potential market manipulation via automated reports of all transactions in US securities markets.[9] This volume of data is not parseable by humans, and is monitored algorithmically. We should note that automated real-time data monitoring is not necessarily linked with supervision; indeed, FINRA is a private company (a self-regulatory organisation) contracted by its members and overseen by a public regulator, the Securities and Exchange Commission (SEC). Accordingly, we should also avoid thinking of real-time data monitoring as the most intense or continuous form of monitoring. This kind of monitoring is only feasible when data is communicated in a highly standardized form that facilitates algorithmic rather than human scrutiny. It therefore misses a lot of the more nuanced or unexpected information that supervisors can unearth in less structured ways. In AI, there is clearly a lot of potential for this kind of continuous automated monitoring, for example via regulators having access to data about the company’s internal monitoring systems. Proposed systems of compute monitoring are an example of continuous automated monitoring.[10]
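To make the idea of an embedded auditing module more concrete, here is a minimal sketch of automated flagging of deviations in a reported metric stream (say, daily compute usage). The metric and thresholds are invented for illustration:

```python
import statistics
from collections import deque

# Toy embedded-auditing module: keeps a rolling window of a reported metric
# (e.g. daily compute usage, refusal rates, eval scores) and flags values
# that deviate sharply from the recent baseline. Purely illustrative.

class DeviationMonitor:
    def __init__(self, window: int = 30, z_threshold: float = 3.0):
        self.history: deque = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Record a new observation; return True if it should be flagged."""
        flagged = False
        if len(self.history) >= 10:
            mean = statistics.mean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            if abs(value - mean) / stdev > self.z_threshold:
                flagged = True
        self.history.append(value)
        return flagged

if __name__ == "__main__":
    monitor = DeviationMonitor()
    daily_compute = [100, 102, 98, 101, 99, 103, 97, 100, 101, 99,
                     102, 98, 100, 101, 99, 250]  # sudden spike at the end
    for day, usage in enumerate(daily_compute):
        if monitor.observe(usage):
            print(f"day {day}: usage {usage} flagged for supervisory review")
```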
Third, supervision entails proactive information gathering. Rather than waiting for companies to report problems or for periodic assessments to reveal issues, continuous supervision involves supervisors actively seeking out information through unscheduled inquiries, spot checks, and exploratory analysis.
Fourth, supervisors have ongoing interactions and regular meetings with their counterparts in the company, from board level down to business specialists. These provide space for less structured information flows that can communicate things that might get lost in more standardised reporting formats. Eisenbach et al. give the example of team members asking “questions such as ‘How did you get comfortable with that decision?’”[11] In periods of heightened concern, these meetings might become more frequent.
Importantly, continuous supervision always operates in layers. There is constant light-touch monitoring: very frequent reports of key variables, regular data flows being processed by algorithms, and routine check-ins with company staff. Overlaid on this are periodic deep dives: comprehensive examinations of specific topics that occur less frequently. An AI supervisor would have to learn what kinds of information flows are appropriate in these lighter and deeper layers. Financial supervisors such as the US Federal Reserve undertake an annual cycle of planning and evaluating their own supervision for each entity they are responsible for.[12]
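One could imagine the layering being captured in a supervision plan along the following lines. The stream names and cadences here are purely illustrative assumptions, not drawn from any existing regime:

```python
# Hypothetical annual supervision plan for a single frontier AI developer,
# separating the continuous light-touch layer from periodic deep dives.
# All stream names and cadences are illustrative assumptions.

SUPERVISION_PLAN = {
    "continuous_layer": {
        "automated_streams": {
            "compute_usage_reports": "daily",
            "internal_deployment_incident_flags": "real-time",
            "safety_eval_dashboard": "weekly",
        },
        "routine_meetings": {
            "safety_team_check_in": "weekly",
            "executive_risk_committee": "monthly",
        },
    },
    "deep_dive_layer": [
        {"topic": "replication of pre-deployment capability evals",
         "cadence": "per major training run"},
        {"topic": "internal deployment controls for automated R&D",
         "cadence": "semi-annual"},
        {"topic": "information security around model weights",
         "cadence": "annual"},
    ],
}

if __name__ == "__main__":
    for stream, cadence in SUPERVISION_PLAN["continuous_layer"]["automated_streams"].items():
        print(f"monitor {stream} ({cadence})")
    for dive in SUPERVISION_PLAN["deep_dive_layer"]:
        print(f"deep dive: {dive['topic']} ({dive['cadence']})")
```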
Dedication
By dedication, I mean the degree of personalised attention received by specific supervised entities. For instance, in 2014 the US Federal Reserve had teams of 15-20 staff assigned exclusively to each of the 8 largest and most important bank holding companies.[13] These teams develop deep, institution-specific knowledge that would be impossible to maintain if supervisors rotated frequently between different companies or only engaged periodically. This allows supervisors to understand the idiosyncratic features of each company: its particular technical architecture, risk management philosophy, internal culture, and key personnel. It enables the building of relationships that facilitate information flow, particularly informal communications that might reveal problems before they appear in official reports. And it creates incentives for co-operation between supervisors and companies, because both sides know that they will be interacting repeatedly and that behaving opportunistically may make their jobs harder in the future.
Physical residence represents the most extreme form of dedication, as seen in the Nuclear Regulatory Commission’s Resident Inspector Program, where inspectors maintain offices at power plants. However, we should avoid fetishising this aspect. Whether or not to station supervisors physically in the offices of their charges is just an operational decision about what will be most practical in the circumstances. Given the importance of physical processes in nuclear, physical residence makes sense; in finance, almost everything can be done remotely, and so while some supervisory authorities tend to deploy staff permanently to company offices (for example the US Office of the Comptroller of the Currency), others (for example the UK regulators) tend not to.[14] I expect the case of AI to be more similar to finance, with a permanent physical presence not being necessary, except perhaps when supervisors are engaged in a deep dive into white-box model evaluation (which should probably take place at company offices for infosec reasons).
To be clear, the analogy with finance and nuclear power isn’t perfect. Most importantly, those industries possess established ‘playbooks’ for safety and widely agreed-upon metrics for risk, like Value at Risk in finance or core damage frequency in nuclear. Frontier AI, by contrast, is pre-paradigmatic. We do not yet have a consensus on how to measure the safety of a model. However, if anything this tends to strengthen the case for a more flexible supervisory regime relative to a more rule-based approach.
Why
In this section, I look at the considerations in favour of dedicated continuous supervision. I do so by explaining why it is that we see dedicated continuous supervision in nuclear and finance but not in other industries. I then argue that the same considerations that justify dedicated continuous supervision in these cases also apply to AI. The reader should beware that there are dangers to this methodology of rational reconstruction: the fact that this regime exists for the nuclear and finance industries does not necessarily imply that it exists for good reasons; there may also be path-dependent political reasons that explain these regimes (for example, the public salience of nuclear disasters). I start with three considerations related to the desirability of dedicated continuous supervision, and then two considerations related to its feasibility.
Avoiding Gaming
The more structured and predictable regulatory reporting or monitoring is, the easier it becomes for companies to adversarially optimise against it, minimising the costs of being regulated while ignoring the spirit of the law and the actual objectives regulation was supposed to advance. The ability of supervisors to demand any information they deem relevant is the strongest possible countermeasure to this tendency. This is a potential concern in almost all industries, but in some industries (like finance) there is a particular history of firms successfully gaming regulatory systems. I am unclear on what properties of an industry render this concern more pressing, and whether AI has these properties. Certainly the sheer value of the industry and the financial incentives to game regulations appear high in AI, as in finance.
Understanding complex systems
The ability of an industry to game regulations is related to the second consideration in favour of dedicated continuous supervision: the sheer difficulty of understanding some sectors and companies. Some industries are particularly complex and opaque, requiring regulators to invest more resources to understand what is going on. Nuclear power plants are complex entities in this way, with different plants developing their own particular cultures.[15] But large financial organisations are even more complex and opaque, probably some of the most complex and opaque organisations on the planet (not least because the financial sector interconnects with effectively every other sector). It therefore makes sense that regulators need multiple full-time staff permanently dedicated to the biggest entities just to stay abreast of developments. The more resources are dedicated to particular entities, the more likely supervision is to become effectively continuous, since it is very unlikely that the most efficient way of using all those resources would be to conduct only a few clearly circumscribed examinations each year.
AI also seems a case of unusual complexity and opacity, partly just because these are very large companies with millions of customers, but in particular because the technology itself remains very poorly understood in many ways. Several authors have argued that there is a particular need to build state capacity around AI.[16] In finance, dedicated supervisory teams are understood to be partly about building expertise inside the regulator in a way that facilitates better policy choices in the future, and Wills argues the same seems likely to be true for AI.[17]
Responding rapidly
Another factor that closely relates to complexity is the speed and unpredictability of developments in a sector or entity. Many industries are well suited to relatively intense scrutiny of new products followed by very light-touch occasional auditing of the production process thereafter. Continuous supervision, on the other hand, makes sense where these conditions do not obtain because we cannot so clearly identify moments of particular concern in contrast with periods of relative placidity. The optimal degree of continuousness depends on the speed at which safety can deteriorate. In finance, while there may be value in devoting particular scrutiny to particular points in time around new products or listings, the whole sector is in constant flux. Historically, banking collapses and financial frauds do not arrive on any reliable schedule, and things can go from bad to worse very quickly. Nuclear power has a somewhat more standard product lifecycle, with particular danger concentrated around new plants or new designs, but the history of nuclear accidents shows that routine operation of plants is itself fraught with danger. In these conditions, continuous supervision is needed because periodic or product-lifecycle-based point-in-time examinations could easily miss fast-moving disasters that fall between scheduled examinations.
From our current epistemic standpoint, AI seems if anything even more fast-moving and unpredictable than these industries, and so even more in need of continuous supervision. Currently, safety practices in frontier AI seem based around the product lifecycle model, with safety evaluations concentrated around the release of new models. However, the most serious risks from AI in the future are likely to arise during the development process, from the internal deployment of the most advanced models on AI research and development itself.[18] Internally deployed models could undermine the alignment of more powerful future models, self-exfiltrate and establish themselves outside the company’s control, or be stolen by actors particularly likely to misuse them. Only continuous supervision leaves regulators with much chance of mitigating these risks.
High stakes
Many industries feature incentives to game regulation, and are complex and fast-moving to some extent. However, full dedicated continuous supervision only really exists in nuclear and finance. There is a good reason for this: dedicated continuous supervision is very expensive. I therefore point out two feasibility conditions.
First, dedicated continuous supervision is only justifiable when the costs of accidents are very high, on a simple cost-benefit analysis. This is clearly the case with finance, where banking collapses are often extremely costly for society as a whole. In this respect, the nuclear industry is arguably somewhat over-regulated relative to other industries on a pure cost-benefit basis, because the dramatic nature of nuclear accidents makes these risks more politically salient. I take it as given here that AI risks are at least on a similar order of magnitude to those in finance and nuclear.
Low number of entities
Second, the costs of dedicated continuous supervision are only manageable when it is applied to a relatively small number of entities. There are 54 nuclear power plants in the US, and only 8 bank holding companies that are subject to the fullest version of Federal Reserve supervision.[19] Keeping the number of regulated entities low enough to be tractable often requires setting some kind of threshold, with entities below it subject to a lower level of scrutiny. The standard practice in industries like finance is for the regulator to be funded by fees levied on the regulated entities themselves, rather than through general taxation.
Frontier AI is perfectly set up in this respect, with only 3-6 US companies working at the capabilities frontier where catastrophic risks are present. This may well change in the future, as capabilities have historically become cheaper to reproduce over time; if so, it may well become impractical to apply the same dedicated continuous supervision regime to a larger number of entities, and some kind of tiered regulatory structure might be desirable.
Troubleshooting
Looking at regulatory regimes in analogous industries helps direct us towards the kind of dedicated, continuous supervisory regime that is optimal for frontier AI. However, I suspect that ultimately the experiences of other industries will be most useful when it comes to thinking about how to mitigate the problems that tend to arise in this kind of regulatory structure.
Evasion
However many resources supervisors dedicate to continuously monitoring companies, it remains possible for people inside those companies to keep secrets and co-ordinate with one another behind the backs of regulators. Finance workers are prohibited from using personal email or messaging services to communicate about work precisely so that supervisors can gain access to their communications, but one can still just have a quiet word in person. More generally, supervisors remain highly dependent on supervisees to collate and curate the information they pass on. Even with tens of supervisory staff, there is still a significant gap between the human resources of the supervisors and the supervisees. The quantity of information available remains far too large for supervisors to analyse directly in its entirety: they have to rely on the information curation that supervisees perform, whether for their own purposes or specifically for supervisors. This creates the risk of supervisees obscuring relevant information.
One response here is to point out that there are significant incentives for mutual co-operation between supervisors and supervisees.[20] If supervisors suspect that companies are holding out on them, this is likely to trigger deeper investigations that will ultimately be more costly to the company. On the other hand, supervisors are incentivised to minimise the costs of their information requests to companies in order to secure the supervisees’ continued co-operation in the future.
Engendering co-operation is one part of a broader skillset of exercising supervisory powers to best extract information and incentivise supervisees to behave safely. However, the bigger point here is that if evasion is a problem for dedicated continuous supervision, it is even more of a problem for every other mode of regulation. Dedicated continuous supervision is in the best possible position when it comes to acquiring information, because it is constituted by maximal information access rights and maximal information-processing resources. Most of the problems with supervision take this general form: they are really problems with regulation in general, and dedicated continuous supervision is if anything likely to be less subject to them.
Capture
However, regulatory capture is one issue that supervision tends to actually worsen. Originally referring to the phenomenon of regulators exercising their powers to protect incumbent firms from competition, regulatory capture today generally refers to any regulator behaviour that promotes regulated firms’ interests over the public interest, for example by implementing regulation too leniently. This occurs through two vectors.[21] “Material” capture refers to regulators being corrupted by material incentives, most famously the “revolving door” effect of firms hiring former regulator staff at greatly increased wages. “Cultural” capture refers to regulator staff being socialised into identification with the values and interests of the firms they regulate, as a result of constant interpersonal contact and the prestigious status of firms’ staff.
Capture is an especial problem for supervision because of the discretion that supervisors wield. Supervisors have broad latitude about how to interpret their objectives, what to investigate more deeply, and even what penalties to levy. Moreover, much of this takes place in secrecy, because one of the most important ways supervisors secure co-operation is to promise confidentiality for the (often commercially or legally sensitive) information that is shared with them. Although the primary danger is that supervisors treat firms too leniently, these conditions also facilitate the potential for the opposite problem, that supervisors will be overly demanding of companies in a way that ultimately does not serve the public interest either.
The potential for capture is to an extent an unavoidable drawback of supervision relative to other, more standardized and transparent forms of regulation. Nonetheless, multiple mitigations have been developed in the nuclear and financial sectors. Wills offers the following list of measures:
“rotating supervisors between institutions at least every five years to ensure supervisors can bring “fresh eyes”;
relocating traditional on-site work off-site;
requiring a cooling-off period of several years before a supervisor can be employed by a supervisee;
performing more horizontal examinations – that is, examinations that examine the same issue across multiple supervisees rather than examining multiple issues across the same supervisee;
dividing supervision responsibility over multiple overlapping supervising agencies so that consistent capture becomes more difficult;
requiring peer reviews or oversight of supervisors by other organizations;
hiring for intellectual diversity; and
instituting a devil’s advocate role.” [22]
A supervisor for AI could straightforwardly borrow from this playbook.
Systemic risks
A major focus of criticism of bank supervision after the 2008 financial crisis was that closely examining individual companies in isolation led to the neglect of “systemic” risks that arose as a result of interactions between companies. In response, supervisors developed a new “stress testing” methodology to measure banks’ exposure to potential problems in other financial institutions, and shifted more attention to “horizontal” examinations that looked at multiple companies simultaneously across a common theme. To some extent then, regulators did weaken the “dedicated” aspect of supervision to better account for systemic risks.
Could there be equivalent problems in AI? In the future there may be emergent risks that arise from the interaction of AI agents from different companies; however, this is very speculative, and probably better tackled when we have a better idea of the potential problem. Alternatively, one could think of the “systemic” risks in AI as referring to trends like automation, enfeeblement, concentration of power and lock-in; however, these are not really the kinds of problems that regulatory supervision could be expected to deal with anyway, as they seem to require decisions at a legislative level.
Conclusion
This essay has set out a model of regulation that exists (only) in finance and nuclear power, two of the industries with most similarity to frontier AI. The three elements of this package are:
- Supervision: extensive information access rights and discretion
- Continuous monitoring: a high frequency of information acquisition
- Dedication: significant resources devoted to individual entities.
I argued that this package is particularly desirable in situations where there is a high risk of companies gaming regulation, the entities are highly complex, and the situation can change rapidly, and that these conditions apply to frontier AI. I argued that this expensive system is feasible when the stakes are sufficiently high and the number of entities to be supervised is low, and that these conditions also apply to AI. I then reviewed how supervision in nuclear and finance deals with the problems that afflict this regulatory model, particularly regulatory capture.
This short essay has omitted several other important topics, above all the international element. With both nuclear and finance we see separate national supervisory systems overlaid by a meta-regulatory layer of international organisations (the International Atomic Energy Agency (IAEA) and the Bank for International Settlements (BIS)) checking for compliance with international standards (Basel III and the IAEA Safeguards). This seems an obvious model for frontier AI supervision to follow. Alternatively, we could create a single supervisory body directly at the global level. This would have significant advantages, but given that it would be completely unprecedented it does not seem very likely. However, frontier AI is admittedly unusual in being effectively restricted to just two countries, whereas the nuclear and especially financial industries are much more widespread.
Wills raises two other potential problems with supervision in AI that I can only address briefly here. First, supervision increases information security risks by expanding the attack surface. The solution here, I suggest, is to require supervisors to conduct white-box investigations on-site at AI company offices rather than getting their own copy of the weights to take home. Second, there is a danger of mission creep, as AI ethics concerns (privacy, discrimination, etc.) that are not suited to supervision get added to the remit of the supervisor, diluting its original mission. The solution here would seem to be making sure that these concerns have clear alternative regulatory pathways, though this is easier said than done.
AI poses unprecedented risks. But the problem of regulating companies in complex and fast-moving industries has precedents, and AI regulation can learn a lot from these models. Particularly if we take the speed of AI and the risks of internal models seriously, the best approach is a flexible supervisory system that monitors developments continuously rather than relying on evaluations conducted at fixed points in time.
Peter Wills, ‘Regulatory Supervision of Frontier AI Developers’, SSRN Scholarly Paper no. 5122871 (Social Science Research Network, 1 March 2025), doi:10.2139/ssrn.5122871. ↩︎
Wills, ‘Regulatory Supervision of Frontier AI Developers’. ↩︎
Wills, ‘Regulatory Supervision of Frontier AI Developers’, pp. 9–10. ↩︎
Wills, ‘Regulatory Supervision of Frontier AI Developers’, p. 6. ↩︎
Wills, ‘Regulatory Supervision of Frontier AI Developers’, p. 34. ↩︎
Jonas Schuett and others, ‘From Principles to Rules: A Regulatory Approach for Frontier AI’, in The Oxford Handbook of the Foundations and Regulation of Generative AI, ed. by Philipp Hacker and others (Oxford University Press, n.d.), p. 0, doi:10.1093/oxfordhb/9780198940272.013.0014. ↩︎
Thomas M. Eisenbach and others, ‘Supervising Large, Complex Financial Institutions: What Do Supervisors Do?’, Economic and Policy Review, published online 1 May 2015, doi:10.2139/ssrn.2612020. ↩︎
Thomas M. Eisenbach and others, ‘Supervising Large, Complex Financial Institutions: What Do Supervisors Do?’, p. 70. ↩︎
Rory Van Loo, Regulatory Monitors: Policing Firms in the Compliance Era, n.d., p. 369. ↩︎
Girish Sastry and others, ‘Computing Power and the Governance of Artificial Intelligence’, arXiv:2402.08797, preprint, arXiv, 13 February 2024, doi:10.48550/arXiv.2402.08797. ↩︎
Thomas M. Eisenbach and others, ‘Supervising Large, Complex Financial Institutions: What Do Supervisors Do?’, p. 69. ↩︎
Thomas M. Eisenbach and others, ‘Supervising Large, Complex Financial Institutions: What Do Supervisors Do?’, p. 74. ↩︎
Thomas M. Eisenbach and others, ‘Supervising Large, Complex Financial Institutions: What Do Supervisors Do?’, p. 63. ↩︎
Large Bank Supervision: OCC Could Better Address Risk of Regulatory Capture, GAO-19-69 (United States Government Accountability Office, 2019), p. 45 <https://www.gao.gov/assets/gao-19-69.pdf> [accessed 10 October 2025]. ↩︎
Carl Rollenhagen, Joakim Westerlund, and Katharina Näswall, ‘Professional Subcultures in Nuclear Power Plants’, Safety Science, 59 (2013), pp. 78–85, doi:10.1016/j.ssci.2013.05.004. ↩︎
Markus Anderljung and others, ‘Frontier AI Regulation: Managing Emerging Risks to Public Safety’, arXiv:2307.03718, preprint, arXiv, 7 November 2023, doi:10.48550/arXiv.2307.03718; Kevin Wei and others, ‘How Do AI Companies “Fine-Tune” Policy? Examining Regulatory Capture in AI Governance’, SSRN Scholarly Paper no. 4931927 (Social Science Research Network, 20 August 2024), doi:10.2139/ssrn.4931927. ↩︎
Thomas M. Eisenbach and others, ‘Supervising Large, Complex Financial Institutions: What Do Supervisors Do?’; Wills, ‘Regulatory Supervision of Frontier AI Developers’. ↩︎
Ashwin Acharya and Oscar Delaney, Managing Risks from Internal AI Systems (Institute for AI Policy and Strategy, 2025) <https://static1.squarespace.com/static/64edf8e7f2b10d716b5ba0e1/t/687e324254b8df665abc5664/1753100867033/Managing+Risks+from+Internal+AI+Systems.pdf> [accessed 15 October 2025]; Charlotte Stix and others, ‘AI Behind Closed Doors: A Primer on The Governance of Internal Deployment’, arXiv:2504.12170, preprint, arXiv, 16 April 2025, doi:10.48550/arXiv.2504.12170; ‘AI Models Can Be Dangerous before Public Deployment’, METR Blog, 17 January 2025 <https://metr.org/blog/2025-01-17-ai-models-dangerous-before-public-deployment/> [accessed 15 October 2025]. ↩︎
U.S. Energy Information Administration (EIA), ‘The United States Operates the World’s Largest Nuclear Power Plant Fleet’, n.d. <https://www.eia.gov/todayinenergy/detail.php?id=65104> [accessed 13 October 2025]; ‘LISCC Program Overview’, Board of Governors of the Federal Reserve System, n.d. <https://www.federalreserve.gov/publications/february-2023-liscc-overview.htm> [accessed 15 October 2025]. ↩︎
Wills, ‘Regulatory Supervision of Frontier AI Developers’, pp. 17–18. ↩︎
James Kwak, ‘Cultural Capture and the Financial Crisis’, in Preventing Regulatory Capture: Special Interest Influence and How to Limit It, ed. by Daniel Carpenter and David A. Moss (Cambridge University Press, 2013), pp. 71–98, doi:10.1017/CBO9781139565875.008. ↩︎
Wills, ‘Regulatory Supervision of Frontier AI Developers’, pp. 56–58. ↩︎