Ask it anything: that’s the simple, ubiquitous way to interact with generative artificial intelligence (GenAI). These interfaces have become a one-stop-shop for all manner of activities—creative writing, coding, planning, speculating, etc. With such a spread of possibilities, users are advised to include some context in their prompts. It’s an example of how people are reskilling to work better with GenAI.
This post digs into the tools that the interfaces can offer users to help them clarify and communicate context. Most users’ goals will resemble those that others have when coming to GenAI, but sometimes users are playing with ideas and aiming for an output that is super-imaginative and demonstrates a surprising way to use GenAI. We want to offer users a framework that encourages curiosity, exploration, and experimentation in the interface.
Interfaces in the Physical World
Before GenAI, people sought knowledge and information in libraries and the physical, embodied world. The point we want to highlight about that experience is that the physical world could not bend around the user’s needs; users had to enter and engage with the layout of a library as they found it.
So now we ask: What do users lose, if anything, when those constraints are removed? Instead of supposing that the ‘best’ GenAI tool is one that fast-tracks users to an answer, we ask what is gained by engaging the user’s attention along the way. What are our rules of engagement?
Part One: Navigating Physical Interfaces
In many areas of life, people have traditionally relied on carefully designed, static physical libraries, archives, or even domain-based study rooms that support memory and learning and act as interfaces for finding answers to queries. These systems don’t just give answers; they shape how users approach a question. They can also become so static and impersonal that college librarians frequently add display tables celebrating a current topic, new acquisitions, or new periodicals, and populate the library with posters, information boards, and seminar rooms that link with life on campus.
If all goes well, a user leaves with new ideas and explanations and some underpinning architecture (we call it epistemic insight) that they can use in other libraries to find their way around.
Back to our question: what do we lose when we move from a library to a GenAI that can shape the presentation of information around the user’s request? And, to get to the heart of the matter, can we have the best of both worlds?
Consider an interface that assumes ‘disciplines’ are fundamental to how a lot of valuable knowledge is made; we can then have a GenAI tool for use on an academic campus that interacts with users by saying things like “Would you like to explore this topic through a scientific or historical lens?” or “This idea appears in both philosophy and sociology; do you want to compare how each discipline approaches it?”
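As a rough illustration, the disciplinary-lens behaviour could be implemented as a thin wrapper that adds an instruction to the user’s question before it is sent to a model. This is only a sketch under our own assumptions: the function names (`build_lens_prompt`, `send_to_model`) and the discipline list are hypothetical placeholders, not any particular product’s API.

```python
# A minimal, hypothetical sketch: wrap a user's question so the model is asked
# to offer disciplinary lenses before answering. No specific GenAI API is assumed;
# send_to_model() stands in for whatever chat-completion call you actually use.

DISCIPLINES = ["science", "history", "philosophy", "sociology"]


def build_lens_prompt(user_question: str) -> str:
    """Assemble a prompt that invites the user to pick a disciplinary lens."""
    lens_list = ", ".join(DISCIPLINES)
    return (
        "Before answering, briefly note which of these disciplines "
        f"({lens_list}) have something to say about the question, and ask the "
        "user which lens (or comparison of lenses) they would like to explore.\n\n"
        f"Question: {user_question}"
    )


def send_to_model(prompt: str) -> str:
    """Hypothetical placeholder for a call to a GenAI chat service."""
    raise NotImplementedError("Replace with your provider's chat-completion call.")


if __name__ == "__main__":
    print(build_lens_prompt("Why did cities grow so quickly in the 19th century?"))
```

The design choice here is simply that the interface, not the user, carries the burden of asking the disciplinary question every time.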
Part Two: Mixing Genres, Making Meaning
Extending the idea of mixing disciplines, we can also use GenAI to bridge other categories of content that don’t usually meet—and produce a one-off response that combines them. Take this AI prompt: “Write a weather forecast for Swansea in the style of Dylan Thomas.”
Our experiment with Microsoft’s Copilot gave us:
“And the rain, oh the rain, it will come tumbling like a choir of grey ghosts over the copper roofs of Uplands, whispering secrets to the sea. The wind will wander drunk through the alleyways, rattling the bones of old bicycles and sighing through the chapel doors. By nightfall, the clouds will gather like gossiping aunties above the bay, and the moon, if she dares, will peek through the mist with a wink and a warning.” (Microsoft, 2025)
This isn’t a particularly good format for a forecast, and it’s not an authentic rendition of what Thomas wrote. But it almost effortlessly lets us explore evocative and provocative ways of talking about the weather that warrant some further exploration.
Part Three: Inventing New Interfaces
GenAI can do more than respond or remix; it can inspire educators and others to invent entirely new interfaces that let users explore structure, style, and substance in tandem, learning as they play.
Here’s a case study where we ask GenAI to write in the style of James Joyce about living in a smart city. Instead of leaving it to GenAI to improvise, the user is given control: they can choose which aspects of Joyce’s style and which features of smart cities they want to include. Compiled with some input from GenAI, here’s one suggestion for controls that a student might select.
Joyce style controls:
- Stream of consciousness (Joyce)
- Experimental syntax
- Multilayered wordplay
- Sensory imagery (Joyce)
- Thematic depth
Smart city content controls:
- Sensor-driven infrastructure
- Participatory governance
- Data-driven decision making
- Sustainable design
- Interconnected services
What happens if the user selects stream of consciousness and sensory imagery from the Joyce controls, and sensor-driven infrastructure and sustainable design from the smart city controls? In our experiment, Microsoft Copilot generated this:
“The pavement hummed beneath her soles, a low electric murmur, like the breath of the city dreaming. She walked—no, drifted—past the vertical gardens, their leaves whispering carbon secrets to the air. The bins blinked green, full but not overflowing, the sensors had spoken. And she thought, not in words but in pulses: is this what it means to be cared for by machines?” (Microsoft, 2025)
By experimenting with different aspects of Joyce’s style and of what a smart city is like, users can become ‘agentic learners,’ playing with the output while exploring and challenging their understanding of how it is produced.
Finally, what could an interface look like that invites users to mix genres, aspect by aspect? There are many ways to approach this. Let’s say the user presents a request for a text in a prompt. On screen, they could have a button bar for 10 aspects of Joyce’s writing and of a smart city; selecting different aspects changes the story. The user could use this interactive interface to construct and send a prompt to GenAI that stipulates aspects of Joyce and of a ‘smart city’ (a minimal sketch of such prompt assembly appears below). Preliminary use with university students suggests it is an entertaining way to fabricate quirky outputs, and useful for raising wider questions about authenticity, transparency, integrity, usefulness, permission, and creative control.
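To make the idea concrete, here is a minimal sketch of how such a button-bar interface might assemble its prompt from the user’s selections. It assumes only that each checked control becomes a clause in the final prompt; the control names are the ones listed above, and nothing here reflects how Copilot or any other product actually builds its prompts.

```python
# A hypothetical sketch of prompt assembly for the Joyce / smart-city interface.
# Each checked control simply becomes a clause in the prompt sent to GenAI.

JOYCE_CONTROLS = [
    "stream of consciousness",
    "experimental syntax",
    "multilayered wordplay",
    "sensory imagery",
    "thematic depth",
]

SMART_CITY_CONTROLS = [
    "sensor-driven infrastructure",
    "participatory governance",
    "data-driven decision making",
    "sustainable design",
    "interconnected services",
]


def build_prompt(style_choices: list[str], content_choices: list[str]) -> str:
    """Turn the user's selections into a single prompt for a GenAI model."""
    style = ", ".join(style_choices) or "any style features you choose"
    content = ", ".join(content_choices) or "any smart-city features you choose"
    return (
        "Write a short passage about living in a smart city in the style of "
        f"James Joyce. Emphasise these stylistic features: {style}. "
        f"Include these smart-city elements: {content}."
    )


if __name__ == "__main__":
    # The selection discussed in the text: two style controls, two content controls.
    prompt = build_prompt(
        ["stream of consciousness", "sensory imagery"],
        ["sensor-driven infrastructure", "sustainable design"],
    )
    print(prompt)  # Send this string to whichever GenAI service you are using.
```

The point of keeping the selections visible as buttons is that the prompt’s construction stays transparent: students can see exactly which aspects they asked for, and can toggle them one at a time to watch how the story changes.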
We conclude that interfaces matter. We hope that giving users selectable controls can help shape interesting results while also encouraging students to explore and reflect on alternatives. We can also ask what is lost by giving users a single entry point and an invitation to ‘ask me anything’. Our case is that users benefit from ‘learning on the job’ about content categories such as genres, topics, and disciplines, which can inspire them to investigate ways of being creative with GenAI that weren’t possible before.
The message for users and interface designers is that GenAI can do more than save time and fast-track users to finished, polished outputs.
**Berry Billingsley** is an educator interested in philosophical ‘big questions.’
**Ted Selker** is a computer scientist and student of interfaces.
© 2025 Copyright held by the owner/author(s).