Prompt Management Using Jinja

pub.towardsai.net

A guide on how Jinja2 templates can be used to manage prompts

8 min read · Dec 5, 2025


Image generated using ChatGPT

This article is free to read, but your contributions help me keep studying and creating content for you 😊.

Introduction

How do we usually manage prompts? We mostly rely on static prompts embedded within the code with a few custom variables here and there. The standard practice is to retype the whole prompt manually when making a change.

As the system scales up, it becomes a nightmare to manage multiple lengthy prompts within the code, and to ensure prompt quality.

I was recently going through a case study on how LinkedIn built a “Prompt Source of Truth” using the Jinja2 templating language. This allowed developers to put placeholders in place of the actual prompts and then fill them dynamically at runtime.

About Jinja

Jinja is a Python-based templating engine. In simpler terms, Jinja lets us create templates with placeholders where we can programmatically pass some data. This lets us alter the context dynamically for different use cases. It’s also a good way to organise our prompts.

This removes the need to modify our code every time. Jinja lets us manage prompts without redoing the whole prompt for every situation.
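To make the organisational benefit concrete, here is a small sketch of keeping prompts separate from code. The template names and text are illustrative; in practice the templates would live in `.j2` files on disk and be loaded with `jinja2.FileSystemLoader`, but a `DictLoader` keeps the example self-contained.

```python
import jinja2

# Hypothetical prompt templates, keyed by name. With FileSystemLoader
# these would be files in a prompts/ directory instead.
prompts = {
    "summarize.j2": "Summarize the following text in {{ max_words }} words:\n{{ text }}",
    "translate.j2": "Translate the following text into {{ language }}:\n{{ text }}",
}

env = jinja2.Environment(loader=jinja2.DictLoader(prompts))

# Fill the placeholders at runtime; the code never hard-codes the prompt.
prompt = env.get_template("summarize.j2").render(max_words=50, text="...")
print(prompt)
```

Changing the wording of a prompt now means editing a template, not touching the application code.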

A templating engine processes templates with dynamic content, rendering them dynamically based on the context.

It also provides an advantage when multiple prompts need to be chained together, or when the template needs to change based on certain conditions.

One more benefit is that the template can also act as a guardrail, ensuring that the complete context is provided before the prompt is passed to the LLM.
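One way to get that guardrail behaviour (a sketch, not something the article spells out) is Jinja's `StrictUndefined` mode: by default a missing variable silently renders as an empty string, which would send an incomplete prompt to the LLM, whereas `StrictUndefined` raises an error instead.

```python
import jinja2

# undefined=jinja2.StrictUndefined turns a silently-missing variable
# into a hard error, so an incomplete prompt never reaches the LLM.
env = jinja2.Environment(undefined=jinja2.StrictUndefined)
template = env.from_string("Answer using only this context:\n{{ context }}")

try:
    template.render()  # "context" was never supplied
    prompt_ok = True
except jinja2.exceptions.UndefinedError as err:
    prompt_ok = False
    print(f"Refusing to call the LLM: {err}")
```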

{% if tone == "formal" %}Hello, {{ name }}{% else %}Hey, {{ name }}{% endif %}

Let me show you a very basic example of how to use Jinja.

import jinja2

environment = jinja2.Environment()
template = environment.from_string("Hello, {{name}}")
print(template.render(name="Arunabh"))

Here, **Hello, {{name}}** is the template and **Arunabh** is the value for name which we are passing in. We get the following output.

Hello, Arunabh
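The conditional template shown earlier works the same way: render it with different values of `tone` and Jinja picks the matching branch. A minimal sketch:

```python
import jinja2

env = jinja2.Environment()
# The tone-based greeting template from earlier in the article.
template = env.from_string(
    '{% if tone == "formal" %}Hello, {{ name }}{% else %}Hey, {{ name }}{% endif %}'
)

print(template.render(tone="formal", name="Arunabh"))  # Hello, Arunabh
print(template.render(tone="casual", name="Arunabh"))  # Hey, Arunabh
```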

LinkedIn uses a “chain-based architecture”. More specifically, every business use case is mapped to a prompt “chain” which is a predefined sequence of prompts. Every chain accepts input as a dictionary of key-value string pairs and produces a structured response.
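The chain idea can be sketched roughly as follows. This is my own illustrative reading of the architecture, not LinkedIn's actual implementation: each step is a template, the chain takes a dictionary of string pairs, and each rendered prompt goes to the LLM with the reply feeding the next step. `call_llm` is a stand-in, not a real API.

```python
import jinja2

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; just echoes the start of the prompt.
    return f"<response to: {prompt[:30]}>"

env = jinja2.Environment(undefined=jinja2.StrictUndefined)

# A two-step chain: extract facts, then summarize them.
chain = [
    env.from_string("Extract the key facts from: {{ input }}"),
    env.from_string("Write a summary based on these facts: {{ facts }}"),
]

def run_chain(inputs: dict) -> str:
    # inputs is a dictionary of key-value string pairs, as in the article.
    facts = call_llm(chain[0].render(**inputs))
    return call_llm(chain[1].render(facts=facts))

print(run_chain({"input": "Jinja2 separates prompts from code."}))
```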

Why can’t this be handled by the LLM itself?
