This started as a simple Reddit reply. Then I blinked twice, hyperfocused, and—ten years of experience in IT security came pouring out. Apparently, this is what happens when you mix a long day, late-night reflection, and ChatGPT-4o as your English-language copilot (my brain doesn’t English after 8pm, it just sputters image fragments).
I’ll admit: it’s a bit overboard. But it also made me reflect—ten years in security, what have I really learned? This write-up is my attempt to answer that. It doesn’t represent any employer or official policy. These are my personal beliefs on how to think about security—how to approach it like a system of checks and balances, how to build defensible environments, and how to stay sane while doing so.
I hope it’s useful. Or at the very least, that it sparks some interesting discussion.
Original post here: https://www.reddit.com/r/homelab/s/Piz2JYmO5o
How do you know if you’ve been hacked?
Usually, you don’t.
IT security is a problem of negative assurance: if you find vulnerabilities or indicators of compromise, something is clearly wrong. But if you don’t find anything, that doesn’t necessarily mean everything is fine. The best you can do is apply sound design principles and continually evaluate how exposed you are — and how well you’d detect it if something went wrong.
Security isn’t static
Using the CIA triad (Confidentiality, Integrity, Availability) is a good way to identify the critical properties of your most important assets. But it doesn’t model attacker behavior, how kill chains are built, or how the composition of systems can lead to complex and subtle failures.
Security is a continuous process
Security is never finished. You’re always somewhere in the incident lifecycle: preparation → detection → analysis → containment → eradication → recovery → and back to preparation. For each of those stages, you want multiple, independent, complementary controls. If one fails, others should still hold. This principle — layered defense — has been foundational since early work like Saltzer and Schroeder, “The Protection of Information in Computer Systems” (1975), and it remains relevant today.
Keep your systems simple — treat them as cattle, not pets
The simpler a system is, the easier it is to secure. Avoid one-off, hand-crafted systems. Use configuration management tools to build repeatable, hardened environments. This reduces entropy and enforces consistency across systems.
A good place to start for system hardening is the CIS Benchmarks — they offer practical, vetted checklists tailored to specific OSes and platforms. For configuration management, start with Ansible. It’s easy to learn, agentless, and scriptable in plain YAML. Once you’re comfortable, you can explore more advanced tools like Salt, Puppet, or Chef, depending on your needs.
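As a deliberately tiny sketch of what that can look like with Ansible (the host group, services, and settings below are placeholders, not a full CIS baseline):

```yaml
# Minimal hardening playbook sketch; adjust hosts, services, and values to taste.
- name: Baseline hardening
  hosts: homelab
  become: true
  tasks:
    - name: Ensure SSH root login is disabled
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin no'
      notify: Restart sshd

    - name: Ensure unneeded services are stopped and disabled
      ansible.builtin.service:
        name: "{{ item }}"
        state: stopped
        enabled: false
      loop:
        - avahi-daemon
        - cups

  handlers:
    - name: Restart sshd
      ansible.builtin.service:
        name: sshd
        state: restarted
```

Because the playbook is idempotent, you can re-run it on every host whenever you like, which is exactly what makes systems cattle instead of pets.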
Think of security like a fortress
This model helps conceptualize the core principles that underpin modern system security:
High walls and a moat — isolation: reduce your externally visible surface. Shut down unnecessary services. Use host and network firewalls. The fewer ways in, the easier it is to defend.
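A minimal host-firewall sketch, here using ufw as an example (substitute nftables or firewalld if you prefer); the open ports are placeholders for whatever you actually run:

```bash
# Deny everything inbound by default, then open only what you deliberately expose.
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp    # SSH, ideally restricted further to a management subnet
ufw allow 443/tcp   # your reverse proxy / HTTPS endpoint
ufw enable
```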
Few, guarded gates — perimeter access control: enforce authentication at entry points. SSH keys, client certificates, firewalled ports. Every ingress path must be known, controlled, and auditable. And like a real fortress, these gates should have bridges that can be raised, or steel portcullises that can drop. In IT, this means being able to shut things down quickly: rate-limit traffic, temporarily block login attempts, fail closed under pressure, or deny-by-default. A pragmatic example is fail2ban — it watches for repeated failed login attempts and automatically updates firewall rules to block those IPs, effectively raising the drawbridge or dropping the portcullis when brute-force activity is detected. Access control isn’t just about who gets in — it’s also about being able to say no in real time.
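A minimal fail2ban jail.local sketch for the sshd jail; the thresholds are illustrative, so tune them to your own tolerance:

```ini
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
# five failures within ten minutes drops the portcullis for an hour
maxretry = 5
findtime = 10m
bantime  = 1h
```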
Internal compartments — compartmentalisation: once someone is inside the system, they shouldn’t have access to everything. Break systems apart:
- In the network: VLANs and subnets
- On hosts: separate user accounts, jailed environments, or chroots
- In software: privilege-separated processes
Containers should not be your default choice for compartmentalisation. They’re dependency isolation tools — not secure sandboxes. Using containers as if they are hardened compartments is like using NAT as a firewall. It might seem to work, but it doesn’t do what you think, and the caveats will eventually bite. If you must use containers, use hardened, rootless setups with AppArmor, seccomp, and clear boundaries.
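If containers are part of your setup anyway, here is a hedged sketch of what a more defensive launch can look like, using rootless Podman as an example; the image name, seccomp profile path, and limits are placeholders:

```bash
# --read-only:            immutable root filesystem
# --cap-drop=ALL:         drop all Linux capabilities
# --security-opt no-new-privileges: block setuid-based privilege escalation
# --memory / --pids-limit: resource ceilings enforced via cgroups
podman run --rm -d --name webapp \
  --read-only \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --security-opt seccomp=/usr/share/containers/seccomp.json \
  --memory=256m --pids-limit=100 \
  -p 8080:8080 \
  docker.io/library/example-app:latest
```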
Gates between compartments — access control within the system: compartmentalisation is only meaningful when there are enforcement points between compartments. Use:
- ACLs on routers and firewalls
- Filesystem permissions and capabilities
- Mandatory Access Control (e.g. AppArmor, SELinux)
- Namespaces, cgroups, seccomp filters
It’s not just about getting in — it’s about moving through.
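On a single host, systemd already exposes many of these enforcement points. A hypothetical drop-in for a service might look like this (the option values are examples, not a universal profile):

```ini
# /etc/systemd/system/example-app.service.d/hardening.conf
[Service]
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
PrivateTmp=true
PrivateDevices=true
CapabilityBoundingSet=
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
SystemCallFilter=@system-service
```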
Two guards per gate — separation of duties and multi-factor authentication: administrative responsibilities should be split. Your DBA shouldn’t be root. Even in a homelab, use MFA and privilege boundaries. Split roles wherever you can.
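One small, hypothetical way to express that split on a Linux host is a sudoers fragment (always edited with visudo); the group names and commands are placeholders for your own roles:

```bash
# /etc/sudoers.d/roles
# Database operators may run psql as the postgres user; backup operators may
# run the backup tool as root. Neither group gets a general root shell.
%dbops      ALL=(postgres) NOPASSWD: /usr/bin/psql
%backupops  ALL=(root)     NOPASSWD: /usr/bin/restic
```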
Guards should know who is allowed where and when — authentication, authorisation, and accounting (AAA):
Authentication is about validating identity. It’s not just "Who are you?" — it’s "Are you really who you claim to be?" This involves validating credentials (passwords, keys, tokens, biometrics). Good authentication resists impersonation, replay, and theft.
Authorisation comes after authentication. It answers: "What are you allowed to do?" Being a known entity doesn’t mean full access, just as a staff member authenticated at the main gate still needs specific clearance to enter the kitchen, vault, or war room. After all, we know what happens when a Stark out for revenge gets access to the kitchen.
Accounting tracks actions: "Who did what, when?" This includes logging, monitoring, and auditing. It’s essential for:
- Breach detection
- Post-incident analysis
- Baseline behavior modeling for anomaly detection
AAA controls don’t just harden a system — they also create visibility, which is essential for resilience. If you’re building this into your homelab or infrastructure, look into topics such as PAM, LDAP, SAML, SSO, WebAuthn, Keycloak, OAuth2, and OpenID Connect. These are the building blocks for secure and scalable identity-aware systems.
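As a small example of the authentication leg on a very common entry point, a hedged sshd_config sketch (it assumes your keys are already deployed, and the optional second factor assumes a PAM module such as TOTP is configured):

```bash
# Fragment of /etc/ssh/sshd_config
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
# VERBOSE logs the key fingerprint that was used, which feeds the accounting side
LogLevel VERBOSE
# Optional: require a key *and* a PAM-backed second factor
#AuthenticationMethods publickey,keyboard-interactive
```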
Monitoring, Logging, and Detection
Security isn’t complete without visibility. You need instrumentation that not only records events, but highlights the right ones.
- Always log successful authentication events, not just failures. Ten failed attempts followed by a success is far more suspicious than a failed login alone.
- Every detection system needs tuning. Untuned alerts are either noisy or silent. Tune for signal, not volume.
- Use honey tokens — fake database tables, API keys, or users that your real application never touches. If someone accesses them, something is wrong.
- Add honeypots and decoy services on dark IPs or unused ports. Tools like Security Onion can help with this.
- Set up deliberately slow or frustrating SQL injection traps — blind, time-based injection vectors that automated tools can’t easily bypass.
- And if you want to be a BOFH about it: leave traps in your webroot. Sparse 30TB files, zip bombs, or XML bombs placed in hidden folders — they won’t affect your usage, but they’ll trip up web crawlers and attacker tooling. You’ll see them coming, and they’ll waste time.
These kinds of traps, deception techniques, and honeypots should be considered only after you have a solid security foundation in place. They are a next step — not a replacement for strong design, hardening, and monitoring.
Detection is more than watching logs. It’s about forcing adversaries to declare themselves.
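A simple way to make them declare themselves is a file-based honey token watched by auditd; the path and key name below are placeholders, and the point is that nothing legitimate should ever touch the file:

```bash
# Create a tempting decoy and watch every read, write, or attribute change on it.
touch /srv/backups/customers-full-export.sql
auditctl -w /srv/backups/customers-full-export.sql -p rwa -k honeytoken

# Any hit shows up in the audit log and can be forwarded to your alerting:
ausearch -k honeytoken
```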
Fortresses crumble without maintenance
Even the best-designed systems decay. If you don’t maintain them, your fortress will rot. Walls will crack. Gates will rust. Breaches won’t get repaired. Fires will smolder until they become disasters.
- Patch regularly. Not just the OS, but libraries, services, and security tooling itself.
- Watch for configuration drift. Your hardened baseline won’t stay that way without enforcement (a quick drift check is sketched after this list).
- Validate your detection stack. Logging pipelines break. SIEM storage fills. You need to know that your visibility isn’t silently failing.
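Two of those habits are easy to automate. On Debian-family hosts, unattended-upgrades handles routine security patching, and re-running your hardening playbook in check mode reports drift without changing anything (the playbook name is a placeholder):

```bash
# Routine security patches applied automatically
apt install unattended-upgrades

# Re-apply the baseline in dry-run mode and show what has drifted
ansible-playbook hardening.yml --check --diff
```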
Security isn’t just a moment in time. It’s an ongoing relationship with reality. If you stop maintaining it, time will do the attacker’s work for them.
Final note
I may have gone off the rails a bit, but your question deserves a broad and deep answer. The question “Am I being hacked?” should hopefully evolve into “Am I secure enough?” And from there into “Do I have the right visibility? Can I act when something goes wrong?”
The answer is: it depends. Security is a system of checks and balances. A system only needs to be secure enough; there is no such thing as absolute 100% security. It’s not just an asymptote — something that slowly approaches a limit. It’s more like the tangent function: an endless series of vertical asymptotes, where every input might run into a different point of failure.
Most of what I described will probably be overkill for your homelab. You need to decide what you want to protect, to what degree, and for how long. Do you need a system to be secure for five years? Ten? Thirty? A human lifetime is ninety years if you’re lucky. There’s no sense in securing things toward infinity.
So be pragmatic. Start with the low-hanging fruit. Do what’s easy and effective. Keep building from there to keep up with the times — until it no longer matters.