For the second task in the HNG Internship, I had to build a string analysis API. The brief was simple: create a RESTful API that analyses strings, computes their properties, and stores everything in a database. It sounded straightforward at first, but it ended up being a really good exercise in backend logic, data modelling, and thinking through how users might actually query the data.
Getting Started
I went with Node.js and Express for the backend, paired with MongoDB through Mongoose. To keep things flexible, I used dotenv for environment variables and added express-rate-limit to prevent the API from being hammered with too many requests. I kept the string analysis logic modular, which made everything easier to maintain and extend down the line.
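To make the setup concrete, here's a minimal sketch of how those pieces wire together. The rate-limit window and cap are placeholder values, not the project's actual settings:

```javascript
require("dotenv").config();
const express = require("express");
const mongoose = require("mongoose");
const rateLimit = require("express-rate-limit");

const app = express();
app.use(express.json()); // parse JSON request bodies

// Cap each client at 100 requests per 15 minutes (placeholder values)
app.use(rateLimit({ windowMs: 15 * 60 * 1000, max: 100 }));

// Connect to MongoDB, then start listening
mongoose
  .connect(process.env.MONGODB_URI)
  .then(() => app.listen(process.env.PORT || 3000))
  .catch((err) => console.error("DB connection failed:", err));
```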
The main /strings endpoint does most of the heavy lifting. When you send it a string, it computes:
- length (including spaces)
- is_palindrome (ignoring case)
- unique_characters count
- word_count
- sha256_hash for unique identification
- character_frequency_map
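To give a sense of what that looks like, here's a rough sketch of the analysis step using Node's built-in crypto module; analyzeString is my name for it, not necessarily the project's:

```javascript
const crypto = require("crypto");

function analyzeString(value) {
  const normalized = value.toLowerCase();
  const reversed = [...normalized].reverse().join("");

  // Count how many times each character appears
  const frequencyMap = {};
  for (const char of value) {
    frequencyMap[char] = (frequencyMap[char] || 0) + 1;
  }

  return {
    length: value.length, // includes spaces
    is_palindrome: normalized === reversed, // case-insensitive
    unique_characters: new Set(value).size,
    word_count: value.trim().split(/\s+/).filter(Boolean).length,
    sha256_hash: crypto.createHash("sha256").update(value).digest("hex"),
    character_frequency_map: frequencyMap,
  };
}
```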
After analysing the string, everything gets saved to MongoDB with a timestamp. This makes it easy to filter, retrieve, or delete strings later.
What the API Can Do
1. POST /strings – Analyse and store a string
This endpoint validates the input, checks if the string already exists using its sha256_hash, and returns a clean JSON response with all the computed properties.
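A hedged sketch of how that flow might look, assuming a Mongoose model called StringRecord and the analyzeString helper from earlier (both names are illustrative):

```javascript
app.post("/strings", async (req, res) => {
  const { value } = req.body;

  // 400 for malformed input
  if (typeof value !== "string" || value.length === 0) {
    return res.status(400).json({ error: "value must be a non-empty string" });
  }

  const properties = analyzeString(value);

  // 409 if the same string (by hash) is already stored
  const existing = await StringRecord.findOne({
    "properties.sha256_hash": properties.sha256_hash,
  });
  if (existing) {
    return res.status(409).json({ error: "String already exists" });
  }

  const record = await StringRecord.create({
    id: properties.sha256_hash,
    value,
    properties,
    created_at: new Date(),
  });
  res.status(201).json(record);
});
```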
2. GET /strings – Retrieve strings with filters
You can filter by palindrome status, length range, word count, or even check if a string contains a specific character. It’s pretty flexible.
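Here's one plausible way those filters could map onto a Mongoose query; the query parameter names (is_palindrome, min_length, and so on) are my guesses at the interface, not confirmed ones:

```javascript
app.get("/strings", async (req, res) => {
  const { is_palindrome, min_length, max_length, word_count, contains } = req.query;
  const filter = {};

  if (is_palindrome !== undefined) {
    filter["properties.is_palindrome"] = is_palindrome === "true";
  }
  if (min_length || max_length) {
    filter["properties.length"] = {};
    if (min_length) filter["properties.length"].$gte = Number(min_length);
    if (max_length) filter["properties.length"].$lte = Number(max_length);
  }
  if (word_count) filter["properties.word_count"] = Number(word_count);
  if (contains) {
    // Escape regex metacharacters so "contains" is treated literally
    const escaped = contains.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    filter.value = new RegExp(escaped);
  }

  res.json(await StringRecord.find(filter));
});
```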
3. GET /strings/filter-by-natural-language – Query using plain language
This was probably the most interesting part to build. Instead of wrestling with query parameters, you can just ask for something like "all single word palindromic strings" or "strings containing the letter z", and the API figures out what you mean.
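I can't reproduce the actual parser here, but a simple keyword-matching approach along these lines would cover the examples above; treat it as an illustration rather than the real implementation:

```javascript
function parseNaturalLanguageQuery(query) {
  const q = query.toLowerCase();
  const filter = {};

  if (q.includes("palindrom")) filter["properties.is_palindrome"] = true;
  if (q.includes("single word")) filter["properties.word_count"] = 1;

  // Matches phrases like "containing the letter z"
  const letter = q.match(/letter\s+(\w)/);
  if (letter) filter.value = new RegExp(letter[1]);

  return filter;
}

// parseNaturalLanguageQuery("all single word palindromic strings")
// => { "properties.is_palindrome": true, "properties.word_count": 1 }
```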
4. GET /strings/:value – Fetch a specific string
Pass in the original string value, and you’ll get back the full analysis if it exists.
5. DELETE /strings/:value – Remove a string
Pretty self-explanatory: this deletes the string from the database if you don't need it anymore.
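Both of these routes boil down to a lookup by the original value. A minimal sketch, again assuming the StringRecord model:

```javascript
app.get("/strings/:value", async (req, res) => {
  const record = await StringRecord.findOne({ value: req.params.value });
  if (!record) return res.status(404).json({ error: "String not found" });
  res.json(record);
});

app.delete("/strings/:value", async (req, res) => {
  const deleted = await StringRecord.findOneAndDelete({ value: req.params.value });
  if (!deleted) return res.status(404).json({ error: "String not found" });
  res.status(204).send(); // deleted, nothing to return
});
```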
What I Learned Along the Way
One thing I had to think carefully about was data consistency. Using a sha256 hash for each string was crucial: it guaranteed that the same string couldn't be added twice, even if someone submitted it multiple times.
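Beyond checking for duplicates in the handler, a unique index on the hash enforces this at the database level, even when two requests race. A sketch of how that might be declared (field names follow the example response further down; the model name is an assumption):

```javascript
const mongoose = require("mongoose");

const stringSchema = new mongoose.Schema({
  value: { type: String, required: true },
  properties: { type: Object, required: true },
  created_at: { type: Date, default: Date.now },
});

// One document per distinct string: duplicate inserts fail at the index,
// not just in application code.
stringSchema.index({ "properties.sha256_hash": 1 }, { unique: true });

const StringRecord = mongoose.model("StringRecord", stringSchema);
```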
The natural language parser was fun to build. It’s not perfect, but it does a decent job of turning conversational queries into structured database filters. Testing it actually made working on the API more enjoyable.
I also spent time making sure the error handling was solid. Every endpoint returns clear, consistent HTTP status codes:
- 400 for malformed requests
- 404 when a string doesn't exist
- 409 when someone tries to add a duplicate
It keeps things predictable and makes the API easier to work with, whether you’re building a frontend or just testing with Postman.
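One easy way to keep those responses consistent is a tiny helper shared by every route; the error body shape here is my assumption, not the API's documented format:

```javascript
// Shared helper so every endpoint returns the same error shape
function sendError(res, status, message) {
  return res.status(status).json({ error: message });
}

// e.g. in a handler:
// if (!record) return sendError(res, 404, "String not found");
// if (duplicate) return sendError(res, 409, "String already exists");
```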
Takeaways
This task really drove home some important backend principles:
- Keep your endpoints modular and maintainable
- Always validate and sanitize user input
- Handle database operations carefully
- Sometimes the best UX is just letting users describe what they want in plain language
 
I also realized how much small design choices matter. Things like hashing strings for uniqueness or building flexible query filters might seem minor, but they make a huge difference in how robust and user-friendly the API ends up being.
Example Response
Here’s what you get back when you submit a string for analysis:
```json
{
  "id": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
  "value": "racecar",
  "properties": {
    "length": 7,
    "is_palindrome": true,
    "unique_characters": 4,
    "word_count": 1,
    "sha256_hash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    "character_frequency_map": {
      "r": 2,
      "a": 2,
      "c": 2,
      "e": 1
    }
  },
  "created_at": "2025-08-27T10:00:00Z"
}
```
Wrapping Up
Stage 1 felt like a step up from Stage 0. It wasn't just about building endpoints anymore; it was about thinking through data integrity, query flexibility, and making the API genuinely easy to use. It's kind of funny how something as simple as analysing strings can get surprisingly complex when you factor in storage, retrieval, and letting users filter the data however they want.
The API feels solid now, and it’s a good foundation for more advanced backend work. Plus, seeing it handle complex queries with clean, consistent responses was pretty satisfying.