Small models, big results: Achieving superior intent extraction through decomposition

As AI technologies advance, agents will become increasingly capable of anticipating user needs. For experiences on mobile devices to be truly helpful, the underlying models need to understand what the user is doing, or trying to do, as they interact with the device. Once current and previous tasks are understood, the model has more context for predicting likely next actions. For example, if a user previously searched for music festivals across Europe and is now looking for a flight to London, the agent could offer to find festivals in London on those dates.

Large multimodal LLMs are already quite good at understanding user intent from a user interface (UI) trajectory. But using them for this task typically requires sending information to a server, which can be slow and costly and risks exposing sensitive information.

Our recent paper, “Small Models, Big Results: Achieving Superior Intent Extraction Through Decomposition,” presented at EMNLP 2025, addresses the question of how to use small multimodal LLMs (MLLMs) to understand sequences of user interactions on the web and on mobile devices, entirely on device. By separating user intent understanding into two stages, first summarizing each screen separately and then extracting an intent from the sequence of generated summaries, we make the task more tractable for small models. We also formalize metrics for evaluating model performance and show that our approach yields results comparable to much larger models, illustrating its potential for on-device applications. This work builds on our team’s previous work on user intent understanding.

Details

We introduce a decomposed workflow for user intent understanding from user interactions. At inference time, the model performs two main steps. In the first step, each interaction with a single screen and UI element is summarized independently. Next, those summaries are treated as a series of events and used to predict the overall intent of the entire UI trajectory.
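The two-stage workflow can be sketched as a simple map-then-reduce over the trajectory. This is a minimal illustration, assuming a generic on-device model interface; `small_mllm` is a hypothetical stand-in and the prompt wording is ours, not the paper's template.

```python
def small_mllm(prompt: str) -> str:
    """Hypothetical stand-in for an on-device multimodal LLM call."""
    return f"<model output for: {prompt[:40]}>"

def summarize_screens(screens: list[str]) -> list[str]:
    """Stage 1: summarize each individual interaction independently."""
    return [small_mllm(f"Summarize this interaction: {s}") for s in screens]

def extract_intent(summaries: list[str]) -> str:
    """Stage 2: predict the overall intent from the sequence of summaries."""
    events = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(summaries))
    return small_mllm(f"Given these events, state the user's intent:\n{events}")

def understand_trajectory(screens: list[str]) -> str:
    """Full decomposed pipeline: per-screen summaries, then one intent."""
    return extract_intent(summarize_screens(screens))
```

Decomposing the task this way means the small model never has to reason over the full trajectory of raw screens at once; each call sees only one screen or one short list of text summaries.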

Individual screen summaries

In the first stage, every individual interaction is summarized by a small multimodal LLM.

Given a sliding window of three screens (previous, current, next), the model is asked the following questions:

  1. What is the relevant screen context? Give a short list of salient details on the current screen.
  2. What did the user just do? Provide a list of actions that the user took in this interaction.
  3. Speculate. What is the user trying to accomplish with this interaction?
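The sliding-window prompting above can be sketched as follows. The edge handling (padding the first and last windows with `None`) and the prompt layout are our assumptions for illustration, not details taken from the paper.

```python
# The three questions asked about each interaction, quoted from the text.
QUESTIONS = [
    "What is the relevant screen context? Give a short list of salient "
    "details on the current screen.",
    "What did the user just do? Provide a list of actions that the user "
    "took in this interaction.",
    "Speculate. What is the user trying to accomplish with this interaction?",
]

def sliding_windows(screens):
    """Yield (previous, current, next) triples over the trajectory.
    Trajectory edges are padded with None -- an assumed convention."""
    for i, current in enumerate(screens):
        prev = screens[i - 1] if i > 0 else None
        nxt = screens[i + 1] if i + 1 < len(screens) else None
        yield prev, current, nxt

def build_summary_prompt(prev, current, nxt):
    """Combine the three-screen window with the three questions."""
    context = (f"Previous screen: {prev}\n"
               f"Current screen: {current}\n"
               f"Next screen: {nxt}")
    questions = "\n".join(f"{i + 1}. {q}" for i, q in enumerate(QUESTIONS))
    return f"{context}\n\n{questions}"
```

One prompt is built per interaction, so a trajectory of N screens produces N independent summarization calls.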

Intent extraction from summaries

In this stage, a fine-tuned small model is used to extract a single-sentence intent description from the screen summaries.
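When the model's raw output runs longer than one sentence, a simple trim can enforce the single-sentence format. This heuristic is our illustration of post-processing, not a technique from the paper; a well fine-tuned model may already emit exactly one sentence.

```python
import re

def first_sentence(text: str) -> str:
    """Keep only the first sentence of a model response.
    Simple regex heuristic: take everything up to the first . ! or ?"""
    match = re.match(r"[^.!?]*[.!?]", text.strip())
    return match.group(0).strip() if match else text.strip()
```

For example, `first_sentence("Book a flight to London. Then find festivals.")` keeps only the first clause, while a response with no terminal punctuation is returned unchanged.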

We find that the following techniques are helpful.
