Modern AI chats can do more than just reply with text — they can call small “tool” functions to fetch data or perform tasks. Think of a function as a mini-program the AI can execute (for example, looking up weather or doing math). By composing and decomposing such functions, we can break complex queries into pieces or chain multiple steps, leading to more accurate and even empathetic conversations. In practice, we might define one function to generate a response and another to tweak its tone, then compose them (call one after the other). We can also decompose a user’s request into sub-requests handled by different functions. Below we illustrate these ideas with code examples in Python.
Using compose to Chain AI Functions
A common pattern is to compose two functions: feed the output of one into the next. For example, suppose we have:
G(input) — a function that generates a base response,
M(text) — a function that modifies or adds empathy to a response.
We can define a compose helper that creates a new function applying G then M:
def compose(f, g):
    """Return a function that computes g(f(x))."""
    def h(x):
        return g(f(x))
    return h

def G(user_input):
    # Generate a base answer (e.g., from a language model)
    return f"AI: I hear you said '{user_input}'."

def M(text):
    # Modify the answer, e.g., add an empathetic tone
    return f"❤ {text} ❤"

# Compose G then M
empathic_response = compose(G, M)
print(empathic_response("I feel lonely."))
# Output: ❤ AI: I hear you said 'I feel lonely.'. ❤
Here, compose(G, M) creates a function that first calls G(user_input) and then passes that result to M. The printed output shows how the AI’s base reply is “wrapped” with extra empathy. Internally, the AI might use both steps — generation and tone adjustment — as separate “tools.” The result is a single, coherent answer.
We’ve essentially built a simple pipeline: the AI generates a response and then refines it. In real systems, these could be more complex (e.g. G might summarize content, and M might rephrase it politely). The key idea is that composed functions let the AI handle multi-step tasks in order.
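The same idea extends beyond two steps. As a sketch (the stage functions `summarize` and `politely` here are illustrative stand-ins, not from any real API), a variadic compose built on functools.reduce chains any number of stages in order:

```python
from functools import reduce

def compose_all(*funcs):
    """Compose any number of functions left to right:
    the output of each stage becomes the input of the next."""
    return lambda x: reduce(lambda acc, f: f(acc), funcs, x)

# Hypothetical pipeline stages for illustration
def summarize(text):
    return f"Summary: {text[:20]}..."

def politely(text):
    return f"Please note - {text}"

pipeline = compose_all(summarize, politely)
print(pipeline("The meeting has been moved to Thursday at 3pm."))
# Output: Please note - Summary: The meeting has been...
```

Each stage only needs to agree on its input and output types (here, strings), which is what makes these pipelines easy to rearrange or extend.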
Decomposing a Complex Task
Decompose means breaking a big request into smaller subtasks and handling each with a function. For instance, imagine a user asks the AI to “Plan a trip: find flights and hotels to Paris.” We can split this into two parts (“find flights”, “find hotels”) and call different functions for each. In code:
def search_flights(destination):
    # Pretend to fetch flight info (would call an API in reality)
    return f"Flights to {destination}: Flight123 at $500"

def search_hotels(destination):
    # Pretend to fetch hotel info
    return f"Hotels in {destination}: Hotel ABC, $150/night"

def plan_trip(query):
    # Decompose the query into subtasks
    dest = "Paris"  # in real code, extract from query
    flights_info = search_flights(dest)
    hotels_info = search_hotels(dest)
    return {"flights": flights_info, "hotels": hotels_info}

print(plan_trip("Find flights and hotels to Paris"))
# Output: {'flights': 'Flights to Paris: Flight123 at $500',
#          'hotels': 'Hotels in Paris: Hotel ABC, $150/night'}
In this example, we decompose the user’s request manually. The function plan_trip calls two separate functions and then combines their results into one structured answer. From the user’s perspective, the AI seamlessly handles the multi-part task. Under the hood, we’ve broken (“decomposed”) the problem and used multiple function calls. This makes the conversation’s logic clear and organized.
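The hard-coded destination above hides one real subtask: extracting parameters from the query. As a toy sketch (a real system would use a language model or named-entity recognizer rather than a regex, and `extract_destination` is a hypothetical helper), the extraction step could look like this:

```python
import re

def extract_destination(query):
    """Toy destination extractor: find 'to <Place>' in the query.
    A regex is a crude stand-in for an LLM or NER model."""
    match = re.search(r"\bto\s+([A-Z][a-zA-Z]+)", query)
    return match.group(1) if match else None

print(extract_destination("Find flights and hotels to Paris"))  # Output: Paris
print(extract_destination("Find me a cheap flight"))            # Output: None
```

Plugging such an extractor into plan_trip would remove the hard-coded "Paris" and let the same decomposition handle any destination.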
Example: Compose/Decompose with G and M
Returning to our functions G and M, we can also show how decomposition might involve them. For example, suppose we set a convention that G handles the main content and M handles tone. We can compose them for a final answer, and we can “decompose” a user message to decide how each applies. Here’s a toy example:
# Define G and M as before
def G(input_text):
    return f"AI thinks: '{input_text}'"

def M(response_text):
    return f"(empathetic tone) {response_text}"

# Compose G then M
final_fun = compose(G, M)
print(final_fun("Tell me a joke."))
# Output: (empathetic tone) AI thinks: 'Tell me a joke.'

# Decompose a compound input
def decompose(input_text):
    # Simple split on "and" for demonstration
    return input_text.split(" and ")

tasks = decompose("Greet and cheer")
for t in tasks:
    print(final_fun(t))
# Output: (empathetic tone) AI thinks: 'Greet'
#         (empathetic tone) AI thinks: 'cheer'
In this snippet, compose(G, M) is used to make final_fun, which applies both steps. We also have a simple decompose that splits an input into parts. By printing each part through our composed function, we see how the AI might address multiple tasks in a single message.
These examples show the compose/decompose pattern at work. As the Medium tutorial notes, when the AI sees your question and the available tools, it will often choose to call the right function on its own. In our case, the AI could automatically decide to use G and M in sequence. In a real chat, you would supply function metadata (names, descriptions, parameter schemas) so the model "knows" about G and M, much like the weather API example in the tutorial.
Iteration and Empathy (Fixed-Point Insight)
By iterating these composed steps in a loop — the AI proposes, the human edits or re-specifies, the AI updates — we reach a stable outcome. This is analogous to finding a fixed point: after enough back-and-forth, the conversation "converges" and the answer stops changing. The referenced paper shows that modeling a human–AI editing process as a composition of functions leads to a stable consensus artefact. In simple terms, if the AI repeatedly refines its response through composed functions (and the human refines it in turn), the result eventually stabilizes.
Importantly, even if we introduce “quantization” or noise (like a simplistic tone adjustment or rounding), the final answer is robust — an approximate fixed point. In user-friendly terms: even with many small tweaks (e.g. adding emojis, filtering content, etc.), the AI and human will settle on a consistent response.
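The fixed-point idea can be made concrete in a few lines. In this minimal sketch (the `refine` function is an invented stand-in for a real refinement step), we apply a refinement repeatedly until the output stops changing, i.e. until f(x) == x:

```python
def refine(text):
    """One refinement pass: trim whitespace and cap the length.
    Applying it again to its own output changes nothing."""
    return text.strip()[:50]

def iterate_to_fixed_point(f, x, max_steps=10):
    """Apply f repeatedly until the output stops changing
    (a fixed point), or give up after max_steps."""
    for _ in range(max_steps):
        nxt = f(x)
        if nxt == x:  # converged: f(x) == x
            return x
        x = nxt
    return x

draft = "   The AI and the human agree on this answer.   "
print(iterate_to_fixed_point(refine, draft))
# Output: The AI and the human agree on this answer.
```

Here convergence is exact after one pass; with noisier refinement steps (the "quantization" above), the loop would instead settle near an approximate fixed point.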
Finally, this stability lets us build empathetic embeddings: as one blog explains, when an AI and a piece of content iteratively align, "the AI isn't just parsing words, it's feeling the meaning" — its representation comes to match ours. By carefully composing and decomposing our conversation functions, we encourage the AI to reflect the user's intent, much like reaching a fixed-point understanding.
References: We used the Lightcap AI tutorial on function calling for examples of composing functions, and the referenced arXiv paper to understand how iterative human–AI loops (composition of functions) yield stable consensus. The concept of “empathetic embeddings” was discussed in the author’s blog to motivate the emotional aspect of this approach.