Overview — From Confusion to Clarity
This project started from a simple desire: I wanted to build something around data collection. At the time, I was deep into observability and monitoring, but somehow I ended up in the world of telemetry — mainly because I was learning a lot about Formula 1 and my interest kept growing every day.
What began as curiosity quickly turned into a challenge. I was a brand-new F1 fan trying to build a system I barely understood. I got confused many times, but I noticed something important: I kept making small milestones, and each one pushed me forward.
Slowly, the project gained momentum. The architecture started becoming clear. Pieces that looked impossible at first suddenly made sense. And by the time the system was running, I realized I had learned more than I expected.
In this blog, I’ll share the core lessons, the architecture decisions, and the parts of the system that were the most fun and surprising to build. So read along to see how a simple idea grew into a complete system. The main sections will walk through:
- The backend layer in Python
- The FastAPI + WebSocket connection that keeps everything flowing in real-time
- The frontend side, where the data is received, visualized, and even sent back to the simulation
BACKEND
Designing the Simulation Backend — Where the Real Engineering Began

The backend was really the heart of this whole project, because that’s where the car logic lived. This is the part that actually generated the data I wanted to visualize. And honestly, it didn’t start nicely at all; it began with a few random files where I was just trying ideas, tricks, and anything that came to mind. The architecture looked odd, messy, and unclear.
Every new day I would wake up and try again. Then one day, everything clicked — not magically, but because all the messy files I’d been building finally connected into a clear picture.
I realized the simulation needed to start exactly how a real car starts — from RPM ignition, to engine warm-up, to getting on track. That single insight made the structure fall into place. Before that moment, I was just wandering. After that, the direction became obvious.
The Lesson That Changed Everything: State
The biggest insight I gained was around state. State is what made the backend truly integrated; it allowed different modules to share data smoothly. Without state, the whole simulation would have collapsed.
Take the engine for example. This was one of the most fun parts to build.
I started with the ECU module (Engine Control Unit). Before this project, I didn’t even know what the ECU really did. Then I learned: it’s the “computer” that controls things like the fuel-air ratio and makes combustion efficient. Cool stuff.
Now, of course, I couldn’t put all combustion logic inside the ECU file. That would create a monster file. So I modularized:
- ecu.py handles requests and ratios
- fuel.py handles supply
- combustion.py handles burn logic
And this is where state became the hero.
The ECU doesn’t produce final fuel burn numbers — it produces a fuel request.
That request is stored in state.
Then the fuel module reads that request, deducts the amount, and updates the state again.
Finally, the combustion module reads the updated state and performs the burn.
It became a smooth pipeline because every module could ask from state and feed into state.
In simple terms:
State became a global dictionary that acted like the car’s shared memory.
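Here is a minimal sketch of that pipeline, simplified to plain functions, with made-up key names and numbers (the real logic lives in ecu.py, fuel.py, and combustion.py):

```python
# Shared "car memory": a plain dictionary every module can read and write.
state = {"fuel_request": 0.0, "fuel_delivered": 0.0, "fuel_level": 100.0, "power_output": 0.0}

def ecu_update(state):
    # The ECU only decides how much fuel to ask for this tick.
    state["fuel_request"] = 0.05  # kg per tick (made-up number)

def fuel_update(state):
    # The fuel module reads the request, deducts it from the tank,
    # and records how much was actually delivered.
    delivered = min(state["fuel_request"], state["fuel_level"])
    state["fuel_level"] -= delivered
    state["fuel_delivered"] = delivered

def combustion_update(state):
    # Combustion reads the delivered fuel and turns it into power output.
    state["power_output"] = state["fuel_delivered"] * 1000  # arbitrary scaling

# One simulation tick: each stage asks from state and feeds into state.
for step in (ecu_update, fuel_update, combustion_update):
    step(state)
```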
Before using state, I was writing extremely long files. My first engine file had almost all logic mashed together. It was unmanageable. Once I introduced modularization + shared state, the backend finally felt like a real system, not a pile of experiments.
This part of the project taught me one of my biggest lessons:
Long simulations only work when every module has memory. Stateless code collapses under complexity.
How self Saved the Simulation (Timing, Sectors, and Real F1 Logic)
After state, the next thing that completely changed the backend architecture was self.
At first it felt confusing, but it became the hidden hero of the whole timing system.
In real F1 broadcasts, you always hear commentators saying things like “Lando purple in sector 3!” or “That’s a green sector for Hamilton.”
So when I was building the timing module, I wanted to simulate exactly that:
- Purple → best overall
- Green → driver’s personal best
- Yellow → anything else
To even do this properly, I had to map the Silverstone track into segments and define sector boundaries.
But here’s where it gets real:
you can only calculate “sector 1 time” after you cross into sector 2.
And this is exactly where self shows its power.
self is what lets the simulation remember things between ticks.
Python runs the simulation in loops (“ticks”), so for example:
- In tick 1, the car is in sector 1.
- In tick 2, it moves into sector 2.
For the timing file, we must compare:
- what sector we were in before → self.previous_sector
- what sector we are in now → current_sector
Only self can hold that previous-sector info automatically without relying on some huge manual “previous values” text file or awkward global variables.
Same thing for sector times — we store the timestamp when a sector starts, and when the next sector begins, self is what lets us compute the exact duration of the previous one.
So in short:
- state holds global shared values across all modules.
- self holds the memory inside the module across ticks.
That combination basically made the timing system feel like a real F1 sector tracker instead of a random counter.
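To make that concrete, here is a rough sketch of what such a timing module could look like. The attribute and key names (previous_sector, sector_start_time, the best-time dictionaries) are my own illustration rather than the project’s exact code:

```python
import time

class Timing:
    def __init__(self, state):
        self.state = state
        # Memory that must survive between ticks lives on self.
        self.previous_sector = 1
        self.sector_start_time = time.time()
        self.personal_best = {}   # sector -> this driver's best time
        self.overall_best = {}    # sector -> best time across everyone

    def update(self):
        current_sector = self.state["current_sector"]

        # A sector time only exists once we cross into the next sector.
        if current_sector != self.previous_sector:
            now = time.time()
            finished = self.previous_sector
            sector_time = now - self.sector_start_time

            # Broadcast-style coloring: purple = overall best,
            # green = personal best, yellow = anything else.
            if sector_time < self.overall_best.get(finished, float("inf")):
                color = "purple"
                self.overall_best[finished] = sector_time
                self.personal_best[finished] = sector_time
            elif sector_time < self.personal_best.get(finished, float("inf")):
                color = "green"
                self.personal_best[finished] = sector_time
            else:
                color = "yellow"

            self.state["last_sector"] = {"sector": finished, "time": sector_time, "color": color}

            # Reset the per-sector memory for the sector we just entered.
            self.previous_sector = current_sector
            self.sector_start_time = now
```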
How Everything Runs Together (Classes, __init__, and the Main Loop)
So after understanding state and self, the final piece that made everything make sense was how all these modules actually run together.
Python looks simple, but when you’re combining many modules, classes, and a global state, things get complicated pretty fast.
To keep it easy, let’s stick with the timing example again.
Every module in the backend was built as a class:
- the class has an __init__() → where all the self values get initialized
- then it has an update() → which runs every tick and changes those values
- and the class takes state so it can read/write shared info
So the idea is:
- __init__ = sets up all the memory the module needs (sector numbers, lap counters, flags…)
- update = does the logic every tick (detect sector change, update times, apply color logic…)
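As a rough template (the module name and the physics numbers here are placeholders I made up), every module followed roughly this shape:

```python
class Brakes:
    def __init__(self, state):
        self.state = state          # shared car memory, passed in once
        self.temperature = 20.0     # per-module memory lives on self
        self.wear = 0.0

    def update(self):
        # Runs every tick: read shared values, update internal memory,
        # then write results back into state for other modules to use.
        speed = self.state.get("speed", 0.0)
        braking = self.state.get("brake_input", 0.0)
        self.temperature += braking * speed * 0.001
        self.wear += braking * 0.0001
        self.state["brake_temperature"] = self.temperature
        self.state["brake_wear"] = self.wear
```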
But none of these modules do anything alone.
Everything comes to life in main.py.
That’s the file where:
- all modules (engine, brakes, tires, timing, weather…) are imported
- each one is instantiated as a class
- the big state dictionary is created
- a loop runs all classes’ update() functions in order
- and the final processed data gets streamed out through WebSocket
It’s basically like assembling the whole car:
- engine module runs
- brakes run
- aero runs
- tires update
- timing calculates
- weather injects external data
- state keeps everyone connected
main.py is the “race director” — it calls each module every tick so everything stays synchronized.
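A stripped-down sketch of that main loop could look something like this (the imports and the stream callback are placeholders that mirror the structure described above, not the exact file):

```python
import time

# Hypothetical imports mirroring the module layout described above.
from engine import Engine
from brakes import Brakes
from tires import Tires
from timing import Timing
from weather import Weather

def run_simulation(stream):
    state = {}  # the big shared dictionary

    # Each module is instantiated once, with access to the shared state.
    modules = [Engine(state), Brakes(state), Tires(state), Timing(state), Weather(state)]

    while True:
        # One tick: the "race director" calls every module in order.
        for module in modules:
            module.update()

        # Hand the fully updated state to whatever streams it (the WebSocket layer).
        stream(state)
        time.sleep(0.1)  # tick rate (illustrative)
```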
This structure made the backend feel organized, realistic, and easy to extend when new ideas popped up.
FastAPI Server & WebSocket Connection
To stream live telemetry, we needed something more than regular HTTP requests (the kind you’d fire off with Axios) — we needed persistent connections. That’s where FastAPI with WebSockets came in.
We created a server file that defines the FastAPI app and sets up the WebSocket endpoint. This connection is what allows data to flow continuously from our Python simulation to the frontend.
Here, the state object became central. Instead of sending piecemeal data, we just import state and push it as a full object to the frontend. This keeps the architecture neat and makes the flow predictable.
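A trimmed-down version of that endpoint could look something like this. The route path and the simulation import are placeholders; the real server also layers session modes and durations on top (more on that below):

```python
import asyncio

from fastapi import FastAPI, WebSocket

from simulation import state  # placeholder: the shared state dict built by the backend modules

app = FastAPI()

@app.websocket("/ws/telemetry")
async def telemetry(websocket: WebSocket):
    await websocket.accept()
    while True:
        # Push the whole state object so the frontend always gets a complete snapshot.
        await websocket.send_json(state)
        await asyncio.sleep(0.1)  # stream rate (illustrative)
```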
In addition, we added some logic around session duration and modes, like RACE or P1. That’s where the timing control sits — it governs how long the simulation runs for each mode, adding a bit of realism and fun to the simulation.
Connecting to the Backend
On the frontend, to receive live telemetry, we needed a persistent connection to the backend. Using WebSockets, we could stream data continuously, allowing charts, timers, and other UI components to update in real time without reloading or polling.
To simulate the strategist role in F1 teams, we also sent messages back through the same WebSocket. This included commands like pit calls, ERS deployment decisions, and other strategy inputs.
With this bi-directional flow, the simulation felt lively. The frontend could now react to live car data, and at the same time, strategy inputs could influence the backend simulation — covering key aspects of real F1 logic in a simplified but interactive way.
When the UI Started to Feel Like a Race Weekend
This was the stage where the project really came to life. Since everything was running on localhost, I couldn’t just dump all the data in at once — my machine would struggle. So the focus shifted to visualizing live data as it streamed in, similar to what you see on real F1 dashboards.
The first big UI piece was the live charts. After watching a lot of F1 analysis content, I noticed how often wheel-speed and telemetry overlays rely on line charts. That’s why I went with Recharts. I built a reusable chart component that could plot wheel speed, or any other y-axis value, over time and also mark sector boundaries. It wasn’t just about seeing the speed — it showed where on the track each spike or drop happened. That little link between sectors and speed made the simulation feel closer to real F1 telemetry.
Then there was the timer counter with the proper F1 coloring logic. I implemented the same color system used in actual broadcasts: purple for overall best, green for personal best, and yellow for the rest. Seeing those colors update live made the sessions feel more authentic.
I also integrated pit logic, so you could simulate in-laps, out-laps, and strategy moments. And on top of that, I added weather data from an API (Meteo), giving the UI small touches like conditions you’d normally see on an F1 dashboard. Those three — charts, timers, and weather/pit logic — became the main pillars of the frontend.
Conclusion

This whole project was really just a leap of faith. I didn’t start with a clear picture or some perfect plan — I just started. And day by day, things slowly made more sense. Every file I wrote, every bug I chased, every small breakthrough added up. Looking back, I’m genuinely thankful to God for that rhythm of trying again each day, because that’s what carried the project from confusion to clarity.
I ended up learning way more than I expected — not just about Python or telemetry or websockets, but about how to think, how to break problems down, and how to keep going when nothing makes sense yet. And those lessons are things I’m still applying even now. So yeah… what began as a simple idea turned into something that actually works, something I genuinely enjoyed building, and something that built a stronger mindset in the process.
I hope you get the same kind of value from reading this as I got from building it — even if it’s just one idea, one perspective, or one spark to start your own project.