By Mert Geyiktepe
As artificial intelligence continues to rapidly permeate every aspect of our lives, the dynamics governing its evolution warrant a closer look. Maximilian Kasy, of the University of Oxford’s Department of Economics, discussed the actors who control the development of this technology at a presentation of his book *The Means of Prediction: How AI Really Works*. He expounded on the overwhelming influence those in power have over shaping the future of AI and offered a framework for addressing the disparity in its control. He was joined in conversation by Professor Dani Rodrik, Faculty Co-Director of the Reimagining the Economy program at Harvard Kennedy School. Here are the takeaways from the discussion:
- **Good AI policy requires an understanding of who defines its objectives**
Much of the popular discourse surrounding artificial intelligence paints a dystopian picture. Movies that have become staples of popular culture often depict AI gaining superhuman powers, improving itself, and threatening humanity. According to Kasy, such rhetoric eclipses the questions we should actually be asking to develop good AI policy. It frames the issue as a competition between machines and humans and underestimates the agency we have over how this technology influences our lives.
A more comprehensive approach, per Kasy, asks who defines the “objective function” of AI rather than merely whether AI fails to optimize. Many problems around AI, such as safety failures and workplace automation, can be understood in terms of optimization: humans build systems to maximize or minimize some quantity. The crucial question is who picks what is being optimized in the first place; after all, different parts of society can have different interests when it comes to outcomes. In that process, the people who control AI inputs such as data, compute, expertise, and energy hold outsized power.
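To make the point concrete, here is a minimal, purely hypothetical sketch (not from Kasy’s book): the same selection procedure run with two different objective functions, anticipating the admissions example discussed below. The applicant data, attributes, and weights are invented for illustration.

```python
# Illustrative sketch: the optimizer never "fails" here; the admitted
# classes differ only because the objective function differs.
import random

random.seed(0)

# Toy applicant pool: each applicant has a test score and a measure of
# socioeconomic disadvantage, both on a 0-1 scale (hypothetical data).
applicants = [
    {"id": i, "score": random.random(), "disadvantage": random.random()}
    for i in range(100)
]

def admit(pool, objective, n=10):
    """Admit the top-n applicants under a given objective function."""
    return sorted(pool, key=objective, reverse=True)[:n]

def merit(a):
    # Objective A: pure test-score meritocracy.
    return a["score"]

def mobility(a):
    # Objective B: weight social mobility alongside scores
    # (the 50/50 weights are an arbitrary choice for illustration).
    return 0.5 * a["score"] + 0.5 * a["disadvantage"]

admitted_a = {a["id"] for a in admit(applicants, merit)}
admitted_b = {a["id"] for a in admit(applicants, mobility)}

# Identical data, identical procedure, different winners: the contested,
# political decision is the choice of objective, not the optimization.
print(f"overlap between the two admitted classes: {len(admitted_a & admitted_b)}/10")
```

Who sets the weights in an objective like `mobility` is exactly the kind of question Kasy argues should not be left to whoever happens to control the system.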
- **The ultimate goal of policy is to democratize the control of AI**
In considering the objectives of AI, it is crucial to evaluate which actors can drive change. Kasy noted that much of the discourse on AI centers on AI engineers. However, the ethical issues that arise from this technology are difficult to address through the lens of tech companies that prioritize maximizing profits. It is therefore vital to draw in the perspectives of other actors, such as workers and consumers. Reputational and legal pressures, generated by media coverage and the law, can serve as substantial nudges for tech companies to pay closer attention to outcomes that maximize social welfare.
According to Kasy, while the technology itself may be intricate, the fundamental questions surrounding AI are digestible enough that a broad debate drawing in opinions from different segments of society is possible. In that sense, the debate should not be confined to the tech industry; the framework aims to incorporate the actors directly affected by automated processes.
Kasy identified certain ideological narratives as a roadblock to more democratic control of AI, contending that they can hamper change. For instance, presenting the interests of a particular set of actors as the interests of society at large can obscure conflicts within society. On geopolitical competition, Rodrik asked how the narrative of rivalry with China affects arguments about the control of AI. According to Kasy, arguments that champion surpassing China make “contingent choices” about AI outcomes look “objectively necessary.” He further contended that redistributing control over AI within the U.S. would not weaken the country’s position in global power competition.
- **The regulation of AI covers a wide range of domains**
Kasy explained that broader discussions of AI regulation tend to center on large language models, which have become readily accessible to vast segments of society and attract ample media coverage. He underscored, however, numerous other domains where discussion should not be limited to experts but should incorporate a wider range of perspectives, including the automatic screening of job candidates, ad targeting, and the filtering of social media feeds.
One domain-specific example Kasy discussed was admissions to higher education. Algorithms that evaluate student profiles and determine who gets admitted are likely to spark social debate: should the system meritocratically maximize average test scores, foster social mobility, or rectify historical injustices? In that sense, regulating the algorithms that shape outcomes in people’s quotidian lives must contend with a broad spectrum of opinions. In closing the discussion, Rodrik highlighted a silver lining at this epochal juncture: “it is not as complicated as the people who do not want to be regulated make it out to be.”