As AI is integrated into scientific practice, the practice of science itself is changing. AI models that summarize, categorize, simulate, and predict not only stand to accelerate scientific research; they now sit inside these practices, alternately enhancing and eroding craft while shifting how questions are posed, what counts as evidence, how tacit judgment is taught and exercised, and how trust in results is established. This workshop is designed to map and investigate emerging questions about the nature of proof, inference, uncertainty, and error as AI is integrated into scientific workflows.
The Craft of Science with AI: Evidence, Judgment, and Practice, convened online by Data & Society’s AI on the Ground program on March 19–20, 2026, is a workshop about doing science when AI is in the loop. It asks a central, field-building question: *What does it mean to do science with AI?* What counts as evidence and theory-building explanation, how is scientific judgment exercised, and what gets foreclosed in the process?
We invite researchers and practitioners to join us in examining how scientific reasoning and imagination are being reconfigured as AI systems become a part of the everyday practice of science. Together, we will widen the research frame, build an interdisciplinary network of collaborators, and reflect on concrete practices for interpretation, verification, and epistemic accountability in AI-mediated science.
Scope at a glance: We welcome empirically grounded accounts of how ML/computer vision, LLMs/copilots/agents, or automated/cloud-lab systems shape questions, data, analysis, drafting, lab coordination, or instrumentation. We are not seeking algorithmic novelty or benchmarking papers unless they are tied to analysis of scientific practice.
Our academic workshops enable deep dives with a broad community of interdisciplinary researchers into topics at the core of Data & Society’s concerns. They’re designed to maximize scholarly thinking about the evolving and socially important issues surrounding data-driven technologies, and to build connections across fields of interest.
Participation is limited; apply by Monday, December 8, 2025 at 11:59 p.m. Anywhere on Earth (AoE).
AI promises speed, scale, and novel forms of data-driven discovery. But it also changes what scientific research looks like on the ground — shifting how scientists pose questions, justify claims, evaluate results, and interpret surprises. These shifts unsettle foundational disciplinary assumptions: What counts as proof? Where does simulation end and experimentation begin? When is a result explainable, and when is it just predictive enough?
Beyond their philosophical implications, these are practical questions that emerge in the ordinary rhythms of using AI as a scientific instrument or research tool. They emerge in how problems are framed, prompts are written, outputs are debugged, findings are verified or dismissed, papers are drafted, datasets are documented, and tooling and APIs are maintained. AI does not merely transform knowledge; it reshapes the infrastructure, labor, processes, and social life through which knowledge is produced and organized.
These shifts call for conceptualizing AI as a scientific instrument within a long-standing STS lineage. Classic studies of laboratory practice show that scientific facts are enacted through instruments that inscribe meaning into measurements. Histories of experimental culture emphasize that the credibility of a scientific claim depends on organized procedures, trusted witnesses, and shared interpretive norms through which it is produced. Analyses of the “experimenter’s regress” illuminate how judgments of correctness and method bootstrap one another — you can’t know whether the result is right unless the experiment was done correctly, and you can’t know whether the experiment was done correctly until you know that the result is right — while research on formal verification and computer-assisted proof demonstrates that even proofs require social negotiation, not just technical verification. Extending this lineage, scholarship on data-intensive life sciences traces how computation, databases, standards, and software reorganize scientific work, reframing what counts as an object of inquiry and how evidence travels across institutions.
What this “lab studies” lineage has not always foregrounded are the political-economic currents in which research practices unfold. The rise of AI — especially when tools originate from Silicon Valley firms backed by venture capital — invites scholars to reexamine how funding structures, platform logics, and proprietary infrastructures shape the conduct of science itself. In this sense, AI adds new dimensions for scientists and ethnographers of science to account for, even as it intensifies older dilemmas around trust, reproducibility, credibility, and epistemic authority. Recent critiques of model-mediated understanding further warn that machine outputs may appear coherent or confident even when they are ungrounded — raising urgent questions about how scientific judgment is being redefined.
Drawing on and extending this scholarship, the workshop examines the craft of doing science with AI. We invite empirical, methodological, conceptual, and theoretical work that surfaces how AI tools and agents are being taken up in everyday research, and how their use is reshaping scientific practice. Our aim is to move beyond simple narratives of acceleration or automation to better understand the interpretive labor, epistemic dilemmas, organizational frictions, and institutional constraints that accompany AI in the lab. We ask:
***What counts as a good scientific question in the age of AI? What does it mean to trust a model output? How do scientists reason, argue, and take responsibility when inference is distributed across humans and machines? How do these practices reshape the nature of disciplinary expertise?***
We’re particularly interested in projects that engage one or more of the following cross-cutting threads:
- Evidence and verification: Tracing the emergence of new interpretive protocols, examining how evidentiary thresholds shift in practice, investigating reproducibility, and surfacing the tensions between what can be predicted and what needs to be understood. How are traditional evidentiary standards evolving — or eroding — in the face of AI-mediated research? What distinguishes explanation from prediction, or simulation from experiment, in this new context?
- Craft, intuition, and tacit knowledge: Making tacit judgment visible, tracking how skills are taught, maintained, and tested, and identifying practices that sustain or dull expert intuition when AI tools are in the workflow. How do labs test and recalibrate judgment under automation, and what practices keep a feel for data and instruments alive?
- Materiality and automation: Following the material stacks of AI-mediated science: instruments and sensors, data pipelines, vendor ecosystems, robotic and cloud labs, and the maintenance and repair ecologies that keep them running. What breaks and with what consequences? Who fixes it and under what constraints? How do platform dependencies, calibration routines, and access arrangements shape which questions can be asked and which answers count as credible?
- Training, mentorship, and careers: Examining how training, authorship, and evaluation are being reorganized as AI assistance becomes ambient. How are expectations for coding, writing, attribution, and independence shifting across labs and disciplines? What now counts as an innovative proposal, dissertation, or job talk, and how can mentorship and assessment practices cultivate judgment rather than outsource it?
- Expertise and authority: Mapping how epistemic authority and decision rights shift as software, data, and infrastructure roles shape scientific discovery. Who defines problem formulations, acceptable error, and verification protocols? When does “infrastructure work” count as theory- or method-building? What boundary-work distinguishes “tooling” from “science,” and with what consequences for disciplines, expertise, careers, and voice in research directions?
- Institutions and gatekeeping: Interrogating how rules and norms are adapting (or not): funding logics, data-sharing and disclosure, IP and licensing, and peer review amid model-generated literatures. How are reviewers and editors triaging, auditing, and calibrating trust? What new disclosure or provenance practices are emerging, and how do policy changes and platform terms reshape incentives for openness, reproducibility, and credit?
We welcome a wide range of works-in-progress, including:
- Ethnographic studies of labs and research groups integrating AI into daily work;
- Conceptual essays on automated laboratories, simulation, explanation, and epistemic risk;
- Infrastructure and protocol analyses that surface assumptions embedded in AI instrumentation and vendor ecosystems;
- Legal or policy research on how AI-generated outputs interact with norms of scientific evidence;
- Experimental formats — prompt logs, AI-assisted research diaries, and data maps — that capture the friction and improvisation in AI-mediated scientific practice.
- Domain-grounded cases (for example: ecology using computer vision on animal video, radiation oncology and clinical decision support, climate and materials modeling, digital humanities projects expanding corpora via NLP).
We encourage all attendees to approach the Data & Society workshop series as an opportunity to engage across specialties, and to strengthen both relationships and research through participation. While the workshop offers direct value to individual projects, we also see it as a field-building exercise for all involved.
This is not a workshop about AI. It is a workshop about how people make sense of AI as it becomes enmeshed within the rhythms of scientific life. We are eager to gather a cohort who can bring both analytic clarity and interpretive generosity to this task.
We welcome two kinds of applicants:
Authors
Researchers with works-in-progress that examine how AI reshapes reasoning, evidence, infrastructure, labor, or institutional life in science. You might be:
- Scientists using AI in research domains like biomedicine, climate, neuroscience, materials, or ecology, reflecting on how tools alter your practice.
- Researchers in STS, anthropology, HCI, philosophy of science, or sociology, investigating epistemic, social, or methodological questions.
- Legal or policy researchers thinking about evidence standards, accountability, or rights in scientific contexts.
- Research-software engineers, data stewards, librarians/archivists, curators of datasets, and documentation specialists exploring the implications of AI for documentation, preservation, or reproducibility.
- Lab managers, instrument builders/technicians, QA/validation leads, and metrology or calibration experts with reflections on how AI, scientific work, expertise, and tacit skills mutually shape one another.
- Industry or applied R&D practitioners, navigating how science and engineering co-evolve in AI-intensive environments.
Group submissions from labs or research teams are welcome. Where needed, anonymization of sites or collaborators in workshop materials can be supported. We especially encourage early career scholars to apply. Projects that are roughly 75 percent complete are ideal: they have shape and momentum, with room for feedback, reframing, and growth.
Participants
If you don’t have a project to submit, but want to be part of this conversation, you can apply as a participant. Ideal participants are:
- Scholars or practitioners positioned to offer rigorous, cross-disciplinary feedback
- People with domain knowledge, critical insight, or institutional perspective who can help connect dots across epistemic, infrastructural, and everyday lab concerns
- Advanced graduate students, funders, policymakers, or institutional stewards interested in reshaping how AI-mediated science is received, supported, and governed
Some participants will also be invited to serve as discussants, helping to guide small-group conversations and sharpen the critical exchange.
**Selection criteria**
In selecting authors and participants, we will focus on:
- *Is the work grounded in the questions this workshop seeks to explore?* We are looking for broad resonance between the submission and the workshop themes.
- Does the applicant bring a perspective that broadens the field? We value attention to difference — across disciplines, geographies, institutions, and positionalities — and encourage submissions that reflect global connections and citational justice.
- *Does the work name a challenge or tension others are missing, or reframe a familiar one?* We are interested in submissions that dive deep into specific uncertainties, unsettle easy assumptions, or bring fresh interpretive clarity to ongoing debates.
- Will the workshop meaningfully support the applicant’s project or thinking? We’re building a space for shared learning. Participants should be prepared to give thoughtful, generous feedback and be open to receiving the same.
When and where: The Craft of Science with AI workshop will take place online over two days: Thursday and Friday, March 19–20, 2026. While we won’t be in the same physical room, we are designing this as a high-touch, low-burnout experience — crafted for reflection and connection across time zones and disciplines.
Session structure: Instead of back-to-back panels or passive webinars, the workshop will center deep, small-group exchange and active intellectual participation. It focuses on reading, imagining, and offering interdisciplinary responses to in-progress projects, and building collaborative networks for exploring interwoven themes. Each session is an invitation to slow down, listen closely, and think together about the epistemic, institutional, practical, and everyday shifts emerging as AI becomes embedded in scientific practice.
- Feedback sessions (75 minutes each). Sessions will run in parallel, 9–15 in total across the workshop; each participant will attend three.
- In each session, a discussant opens with a short synthesis of the featured project, then invites responses. The author reflects, and the group converges on concrete next steps.
- Groups are deliberately mixed across domains, methods, and stances (enthusiastic and skeptical) to avoid siloed conversations.
Reading and preparation:
- Authors will submit a project draft that is roughly 75 percent complete (no more than 10,000 words), and are expected to read and comment on up to two peer projects. Authors may also serve as discussants in another session.
- Participants will review up to three projects in advance and offer prepared feedback. Some participants will be invited to serve as discussants, helping to guide the conversation in one feedback session.
- Pre-reads and prompts circulate two weeks before the workshop. We support light asynchronous commenting for time-zone constraints.
All attendees will also have the opportunity for informal networking and thematic conversations throughout the day. We encourage all attendees to approach this workshop as a collaborative space for field-building and mutual exchange — not just of papers, but of ideas, methods, and interpretive practices.
Publication and privacy: This is a feedback workshop; there are no proceedings or DOIs. Circulated drafts remain the property of their authors. We will maintain confidentiality by default (no public attribution of discussion) and use opt-in for any quoting in post-event summaries or write-ups. If needed, we can support anonymization of sites or collaborators in workshop materials.
Support: All eligible participants can request a $150 stipend to acknowledge the time and care involved in preparing and participating. No travel is required; however, we expect active participation in workshop sessions. Accessibility matters: live captions will be available; please let us know any additional access needs in your application.
We invite applications from both authors — those who wish to share a work-in-progress — and participants, those who want to contribute to the conversation through engaged feedback and discussion. You can also apply as either, and we’ll consider your project for a feedback session while also welcoming you as a participant if space allows.
Whether you’re drafting an ethnographic account, mapping an emerging workflow, reflecting on a legal or epistemic tension, or collecting fragments of informal reasoning that resist easy categorization — this is a space to develop work that’s still in motion.
To apply
By Monday, December 8, 2025 at 11:59 p.m. Anywhere on Earth (AoE), please complete the application form, which includes:
- Basic details: Your name, affiliation, role, discipline or area of focus (keywords welcome), sector, location, career stage, pronouns [optional], and a link to a bio or professional page.
- Application type: Select whether you are applying as an author, participant, or either.
- Project summary (authors and either applicants only): In 500 words or less, describe your project. What questions or tensions does it surface about doing science with AI? How far along is it? What format does it take (article, chapter, memo, data mapping, etc.)? What kinds of feedback are you hoping for? We welcome academic research, but also experimental formats and reflections grounded in practice.
- Participation statement (participants only): In 250 words or less, tell us why you want to join this conversation. What questions or perspectives would you bring to the table? We welcome scholars, practitioners, graduate researchers, policymakers, and others with insight into how AI is changing the texture of scientific work.
- Commitment: Confirm that you are able to meet key deadlines, including advance reading and feedback responsibilities. (Details are outlined in the “Format” section.)
- Optional: Share one project, tool, or piece of writing — yours or someone else’s — that you think everyone in this domain should be reading or revisiting right now.
Notes for co-authored projects: If your project is co-authored, each author must submit a separate application. We ask that no more than three authors apply per project, due to space constraints.
All deadlines below are at 11:59 p.m. Anywhere on Earth (AoE).
Application deadline: Mon, December 8, 2025
Selection notifications: Fri, January 16, 2026
Revised summary and RSVP deadline: Mon, January 26, 2026
Draft project deadline: Fri, February 13, 2026
Group assignments and program: Fri, March 6, 2026
Public panel and workshop: Thu-Fri, March 19-20, 2026
Questions? Contact [email protected].
This workshop is organized by Ranjit Singh and Data & Society’s AI on the Ground program in collaboration with Alice Marwick and the project’s advisory council, whose members include Lisa Messeri, Nicole Nelson, and Tal Linzen. The workshop is produced by Siera Dissmore and Rigoberto Lara Guzmán, and draws additional support from Data & Society’s Raw Materials Seminar, as well as the communications, engagement, and accounting teams.
Sources linked in this workshop description:
- Latour, Bruno, and Steve Woolgar. Laboratory Life: The Construction of Scientific Facts, 2nd ed. Princeton University Press, 2013.
- Shapin, Steven, and Simon Schaffer. Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life. Princeton University Press, 2018.
- Collins, Harry. Changing Order: Replication and Induction in Scientific Practice. University of Chicago Press, 1992.
- MacKenzie, Donald A. Mechanizing Proof: Computing, Risk, and Trust. MIT Press, 2001.
- Stevens, Hallam. Life Out of Sequence: A Data-Driven History of Bioinformatics. University of Chicago Press, 2013.
- Messeri, Lisa, and M. J. Crockett. “Artificial Intelligence and Illusions of Understanding in Scientific Research.” Nature 627, no. 8002 (2024): 49–58. https://doi.org/10.1038/s41586-024-07146-0.
Additional sources:
- Anderson, Chris. “The End of Theory: The Data Deluge Makes the Scientific Method Obsolete.” Wired, June 23, 2008. https://www.wired.com/2008/06/pb-theory/.
- Jumper, John, Richard Evans, Alexander Pritzel, et al. “Highly Accurate Protein Structure Prediction with AlphaFold.” Nature 596, no. 7873 (2021): 583–89. https://doi.org/10.1038/s41586-021-03819-2.
- Hope, Tom, Doug Downey, Daniel S. Weld, Oren Etzioni, and Eric Horvitz. “A Computational Inflection for Scientific Discovery.” Communications of the ACM 66, no. 8 (2023): 62–73. https://doi.org/10.1145/3576896.
- Duede, Eamon, William Dolan, André Bauer, Ian Foster, and Karim Lakhani. “Oil & Water? Diffusion of AI Within and Across Scientific Fields.” arXiv:2405.15828. Preprint, arXiv, May 24, 2024. https://doi.org/10.48550/arXiv.2405.15828.
- Jamieson, Kathleen Hall, Bill Kearney, and Anne-Marie Mazza, eds. Realizing the Promise and Minimizing the Perils of AI for Science and the Scientific Community. University of Pennsylvania Press, 2025.
- Narayanan, Arvind, and Sayash Kapoor. “Why an Overreliance on AI-Driven Modelling Is Bad for Science.” Nature 640, no. 8058 (2025): 312–14. https://doi.org/10.1038/d41586-025-01067-2.