In this article, I’ll discuss how to build agentic systems using GPT-5 from OpenAI. I recently covered how to use GPT-5 effectively, and today I’m continuing that coverage by looking at how to use GPT-5 effectively as an agent. Having agents with available tools is soon going to be a basic user expectation for most AI applications, which is why you should start implementing them as soon as possible.
I’ll cover how you can use GPT-5 as a powerful question-answering model by giving it access to your data and providing it with useful tools to answer user queries. This article aims to be a high-level overview of the ways you can use GPT-5 as an agent. I am not sponsored by OpenAI.

This will return a series of documents and specific chunks from those documents, similar to what Pinecone does. You can then proceed to use these chunks to answer user queries.
However, you can make the vector storage even more powerful by providing GPT-5 access to it through a tool.
from openai import OpenAI

client = OpenAI(api_key="")

response = client.responses.create(
    model="gpt-5",
    input="When is our latest data management agreement from?",
    tools=[{
        "type": "file_search",
        "vector_store_ids": ["<your vector store id>"]
    }]
)
This is a lot more powerful, because the vector store is now available to GPT-5 as a tool. When you pass in a user query, GPT-5 decides whether or not it needs the tool to answer it. If it decides it does, GPT-5 does the following:
- Reasons about which tools and vector stores it has available, and which to use.
- Rewrites the query: writes 5 different versions of the user prompt, each optimized to find relevant information with RAG.
- Fires the 5 prompts in parallel, and fetches the most relevant documents.
- Determines if it has enough information to answer the user query:
  - If yes, it responds to the user query.
  - If no, it can search further in the vector store(s).
This is a super easy and powerful way to get access to your data, and OpenAI essentially handles all of the complexity of the following (a minimal setup sketch comes after the list):
- Chunking and embedding documents
- Deciding when to perform a vector search
- Query rewriting
- Determining relevant documents based on similarity with queries
- Deciding if it has enough information to answer the user query
- Answering the user query
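If you haven’t set up a vector store yet, the snippet below is a minimal sketch of creating one and uploading a document to it. The store name and file name are placeholders, and depending on your SDK version the vector store methods may live under client.beta.vector_stores instead.

from openai import OpenAI

client = OpenAI()

# Create a vector store; OpenAI handles chunking and embedding for you.
# "contracts" and the file name below are placeholders.
vector_store = client.vector_stores.create(name="contracts")

with open("data_management_agreement.pdf", "rb") as f:
    client.vector_stores.files.upload_and_poll(
        vector_store_id=vector_store.id,
        file=f,
    )

print(vector_store.id)  # use this ID as the vector store id in the file_search tool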
Google’s Gemini has also recently introduced a managed RAG system through its Files API, essentially offering the same service.
GPT-5 tool usage
In the previous section, I discussed the vector storage tool you can make available to GPT-5. However, you can also make any other tool available to GPT-5. A classic example is to give GPT-5 access to a get_weather tool, so it can look up the current weather. The following example is from the OpenAI docs.
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "name": "get_weather",
        "description": "Get current temperature for a given location.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City and country e.g. Bogotá, Colombia",
                }
            },
            "required": ["location"],
            "additionalProperties": False,
        },
        "strict": True,
    },
]

response = client.responses.create(
    model="gpt-5",
    input=[
        {"role": "user", "content": "What is the weather like in Paris today?"},
    ],
    tools=tools,
)
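The response at this point typically contains a function call rather than an answer: GPT-5 asks you to run get_weather, you execute it yourself, and you send the result back so the model can answer in natural language. Below is a rough sketch of that round trip; the get_weather implementation is a hypothetical stand-in you’d replace with a real weather lookup.

import json

def get_weather(location: str) -> str:
    # Hypothetical stand-in; call a real weather API here.
    return f"15°C and sunny in {location}"

# Execute any function calls the model requested and collect their outputs.
tool_outputs = []
for item in response.output:
    if item.type == "function_call" and item.name == "get_weather":
        args = json.loads(item.arguments)
        tool_outputs.append({
            "type": "function_call_output",
            "call_id": item.call_id,
            "output": get_weather(args["location"]),
        })

# Second request: previous_response_id carries the conversation so far,
# so we only need to send the tool results back.
final = client.responses.create(
    model="gpt-5",
    previous_response_id=response.id,
    input=tool_outputs,
    tools=tools,
)
print(final.output_text)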
Now you need to determine which tools you should make available to your agent, so that it can better answer the queries you’ll be providing it. For example:
- Knowledge base search tool: if you’re working with external knowledge bases, make a tool available to search them, and inform the model when to use it
- Python execution tool: you can give the model a tool to run Python code and see the output
- Calculator tool: Instead of the LLM performing math itself (which is inefficient and prone to errors), you can provide it with a calculator tool to run calculations.
And so on. The important part here is that you give the agent the best possible chance of answering user queries. However, it’s also easy to make the mistake of making too many tools available. It’s important that you follow general guidelines when providing tools to your agent, ensuring that:
- Tools are always well described
- Tools are unambiguous: it should always be clear to the model (and to any human reading the tool definition) when a tool should be used, and when it should not
- There is minimal overlap between tools
I’ve covered the topic of AI Agent tools more in depth in my previous article on How to Build Tools for AI Agents.
When defining tools for GPT-5, you can also specify whether a tool is required or optional. A required tool could be the vector store search, where you force the model to search the vector store on every user request, ensuring answers are always grounded in the document corpus. The get_weather function, however, should usually be optional, since it only needs to be invoked when a user asks about the weather.
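As a sketch, forcing the file_search tool could look something like the following, using the tool_choice parameter (setting it to "required" forces some tool call, while pointing it at a specific hosted tool pins that tool; check the docs for the exact options your SDK version supports):

response = client.responses.create(
    model="gpt-5",
    input="Summarize our latest data management agreement.",
    tools=[{
        "type": "file_search",
        "vector_store_ids": ["<your vector store id>"],
    }],
    # Force the file_search tool so every answer is grounded in the documents.
    tool_choice={"type": "file_search"},
)
print(response.output_text)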
You can also create tools using connectors. Connectors are essentially tools that give GPT-5 access to other apps, such as:
- Gmail
- Slack
- Figma
- GitHub
This allows GPT to, for example, list your emails, search specific threads in Slack, check out designs on Figma, or look into code on GitHub.
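For illustration, a connector-backed tool call could look roughly like the sketch below. The connector_id value and the OAuth authorization token are assumptions on my part; check OpenAI’s connector documentation for the exact identifiers and auth flow.

response = client.responses.create(
    model="gpt-5",
    input="Do I have any unread emails about the data management agreement?",
    tools=[{
        "type": "mcp",
        "server_label": "gmail",
        "connector_id": "connector_gmail",        # assumed identifier, verify in the docs
        "authorization": "<oauth access token>",  # token the user has granted your app
        "require_approval": "never",
    }],
)
print(response.output_text)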
Agents package
It’s also worth mentioning that OpenAI has released an Agents SDK, available in Python and TypeScript. The Agents SDK is useful for more complex agent-building scenarios, where you need to:
- Make the agent perform complex, chained actions
- Maintain context between tasks
You can, for example, create specialized agents for certain tasks (fetching information, summarizing information, etc.) and build an orchestrator agent that receives user requests, fires off sub-agents to fetch and summarize information, determines whether it has enough information, and then answers the user.
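A minimal sketch of that orchestrator pattern with the openai-agents Python package could look like this; the agent names and instructions are made up for illustration.

from agents import Agent, Runner

researcher = Agent(
    name="Researcher",
    instructions="Fetch information relevant to the user's question.",
    model="gpt-5",
)

summarizer = Agent(
    name="Summarizer",
    instructions="Summarize the information you are given as concisely as possible.",
    model="gpt-5",
)

orchestrator = Agent(
    name="Orchestrator",
    instructions=(
        "Answer the user's request. Hand off to the Researcher to gather "
        "information and to the Summarizer to condense it before answering."
    ),
    handoffs=[researcher, summarizer],
    model="gpt-5",
)

result = Runner.run_sync(orchestrator, "What does our latest data management agreement cover?")
print(result.final_output)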
There are a lot of similar Agent SDKs out there, which makes creating your own agent rather simple. Some other good alternatives are:
- LangGraph
- CrewAI
- Agent Development Kit
These packages all serve the same purpose of making AI agents easier to create, and thus more accessible.
Conclusion
In this article, I’ve discussed how to utilize GPT-5 as an AI agent. I started off discussing when you need to build agents, and why GPT-5 is one of several good options. I then dove into OpenAI’s vector storage, how you can create a vector store very simply, and how to make it available to your agent as a tool. Furthermore, I discussed providing your agent with other custom tools, and the Agents SDK you can use to build advanced agentic applications. Providing your LLMs with tools is a simple way to supercharge your agents and make them much better able to answer user queries. As I stated at the beginning of this article, users will soon expect most AI applications to include agents that can perform actions through tools, which is why this is a topic you should learn about and implement as quickly as possible.