Photo by Alex Knight on Unsplash
Hi everyone,
In this article, I’ll be sharing my experience from an agentic AI security assessment I conducted a few months ago.
The goal of this write-up is to walk you through:
- The AI assessment itself
- The differences between a traditional LLM and an AI agent
- The threat model used during the assessment
- A security bug I discovered along the way
The Assessment
I was assigned to an assessment that involved a product with a significant LLM-powered component. The client had recently introduced a feature that allowed users to generate complete websites using AI.
These weren’t just static or basic websites. The generated applications included (see the sketch after this list):
- Authentication and authorization mechanisms
- Well-defined user roles
- A properly structured database schema
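To make this concrete, here is a minimal sketch of the kind of role-based authorization logic such a generated app would need. Everything here, including the role names, the actions, and the `canPerform` helper, is my own illustration, not the client’s actual generated code:

```typescript
// Hypothetical role model for an AI-generated app (names are assumptions).
type Role = "admin" | "editor" | "viewer";

interface User {
  id: string;
  role: Role;
}

// Map each role to the actions it is allowed to perform.
const permissions: Record<Role, Set<string>> = {
  admin: new Set(["read", "write", "delete", "manage-users"]),
  editor: new Set(["read", "write"]),
  viewer: new Set(["read"]),
};

// Deny by default; allow only if the user's role grants the action.
function canPerform(user: User, action: string): boolean {
  return permissions[user.role]?.has(action) ?? false;
}

// Usage:
const alice: User = { id: "u1", role: "editor" };
console.log(canPerform(alice, "write"));  // true
console.log(canPerform(alice, "delete")); // false
```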
Before generating the actual website, the system first created a blueprint (illustrated below). This blueprint outlined:
- The number of pages and how they were interconnected
- User roles and permissions
- The database schema
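To give a feel for what a blueprint like this might contain, here is a hypothetical sketch in TypeScript. The `Blueprint` shape, its field names, and the example values are all my own assumptions for illustration; the client’s real format was internal:

```typescript
// Hypothetical shape of a site blueprint (field names are assumptions).
interface Blueprint {
  pages: { path: string; linksTo: string[] }[];   // pages and how they connect
  roles: Record<string, string[]>;                // role -> allowed permissions
  schema: Record<string, Record<string, string>>; // table -> column -> type
}

// Example blueprint for a simple blog-style site.
const blueprint: Blueprint = {
  pages: [
    { path: "/", linksTo: ["/posts", "/login"] },
    { path: "/posts", linksTo: ["/posts/:id"] },
    { path: "/admin", linksTo: ["/admin/users"] },
  ],
  roles: {
    admin: ["read", "write", "manage-users"],
    author: ["read", "write"],
    reader: ["read"],
  },
  schema: {
    users: { id: "uuid", email: "text", role: "text" },
    posts: { id: "uuid", author_id: "uuid", body: "text" },
  },
};
```

From a security-review standpoint, a plan like this is interesting because the roles, the page graph, and the database schema are all spelled out before any code exists, so they can be checked against each other for inconsistencies.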