AI, Dopamine, and the Night I Rewired My Home Lab
A few nights ago, I opened my real backlog.
Not Jira. Not a curated roadmap.
The actual list:
- stabilize my k3s cluster running across Raspberry Pis and an old desktop
- build a network sentinel agent on my OpenWrt router to detect malicious traffic
- improve observability (Prometheus, exporters, Grafana dashboards)
- monitor smart plugs for power anomalies
- orchestrate domotic scenarios based on real signals (not just timers)
This is the kind of list that usually grows.
Not shrinks.
Then I turned on AI⌗
I had AI wired directly into my environment.
And instead of prioritizing…
I just started executing.
One task.
Then another.
Then another.
Bang. Bang. Bang.
- Exporters deployed
- Metrics flowing
- Alerts firing
- Dashboards shaping up
- Agents scanning traffic
At some point, I wasn’t implementing ideas anymore.
I was discovering systems.
Security changes when exploration becomes cheap⌗
Here’s what surprised me the most.
As I instrumented the system:
- smart plugs started exposing power patterns
- network flows revealed unexpected behaviors
- devices that “looked fine” showed anomalies
Not vulnerabilities in the traditional sense.
But weak signals:
- unusual consumption patterns
- unexpected network chatter
- implicit dependencies between devices
Things I would never have had time to explore manually.
And that’s when it clicked:
With AI, security is no longer just about known threats —
it’s about exploring unknown states at scale
The dopamine spike⌗
Somewhere in the middle of all this, I paused.
Because I felt it.
- hyper-focused
- highly motivated
- unable to stop
- constantly jumping to the next improvement
It felt like flow.
But more intense.
And I asked myself:
Is this just productivity… or something else?
Engineers + AI = infinite execution loop⌗
If you’re in infra or security, you already have the mindset:
- curiosity-driven
- system-oriented
- always seeing improvements
Normally, execution is the bottleneck.
AI removes that.
Now:
- writing exporters takes minutes
- building agents is trivial
- testing hypotheses is cheap
- iterating is almost free
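To make "writing exporters takes minutes" concrete: here is a minimal sketch of a Prometheus-style exporter using only the Python standard library. The metric name, label, and port are illustrative assumptions, not taken from my actual setup.

```python
# Minimal hand-rolled exporter sketch: serves the Prometheus text
# exposition format over HTTP with only the standard library.
# Metric name, label, and port are illustrative.
import random
from http.server import BaseHTTPRequestHandler, HTTPServer


def read_power() -> float:
    """Stand-in for a real smart-plug read; simulated here."""
    return random.uniform(5.0, 60.0)


def render_metrics() -> str:
    """Render one gauge in the Prometheus exposition format."""
    return (
        "# HELP smartplug_power_watts Instantaneous power draw\n"
        "# TYPE smartplug_power_watts gauge\n"
        f'smartplug_power_watts{{plug="desk-lamp"}} {read_power():.1f}\n'
    )


class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        body = render_metrics().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body)


def serve(port: int = 9100) -> None:
    """Blocking loop: expose /metrics on the given port."""
    HTTPServer(("", port), MetricsHandler).serve_forever()
```

Point Prometheus at `:9100/metrics` and you have data flowing. Which is exactly the problem: the easy part now takes minutes.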
So what happens?
You stop choosing carefully — and start executing continuously.
The loop I fell into⌗
- Identify a gap (monitoring, security, automation)
- Build a quick solution with AI
- Discover new signals or anomalies
- Expand scope
- Move to the next idea
Repeat.
This feels like peak productivity.
But it’s actually:
A dopamine-driven infra loop
Why this is different from other “dopamine traps”⌗
We’ve seen loops like this before:
- social media
- gaming
- dashboards for the sake of dashboards
But this is not consumption.
This is:
dopamine attached to building systems
And that’s why it’s dangerous.
Because it looks like progress.
The real risk in infra & security⌗
The risk is not addiction.
The risk is:
building systems that are wide… but not deep
You end up with:
- 10 dashboards, none fully reliable
- multiple agents, none production-ready
- partial observability
- fragmented automation
In security, this is worse than doing nothing.
Because:
false confidence is more dangerous than no visibility
What actually worked (lesson learned)⌗
After that session, I changed one rule:
Build one thing. Then make it real.
Not:
- “good enough”
- “it works on my machine”
But:
- observable
- reliable
- documented
- explainable
Example: from toy to system⌗
❌ Before (dopamine mode)⌗
- exporter deployed
- metrics visible
- quick dashboard
- move on
✅ After (controlled mode)⌗
- exporter deployed
- metrics validated (correctness > existence)
- alerts defined with real thresholds
- failure modes tested
- dashboard tied to action
- integrated into a workflow (not just visualization)
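A sketch of what “real thresholds” and “failure modes tested” can mean in code: alert only on a sustained breach rather than a single spike, and treat missing data as its own state. The function name, threshold semantics, and window logic are illustrative assumptions.

```python
# Alert evaluation sketch: a gauge must breach the threshold for
# `sustained` consecutive samples to fire, and a window with no data
# is reported explicitly instead of silently passing.
from typing import Optional, Sequence


def evaluate_alert(
    samples: Sequence[Optional[float]],
    threshold: float,
    sustained: int,
) -> str:
    """Return 'alert', 'ok', or 'no_data' for a window of gauge samples."""
    if all(s is None for s in samples):
        return "no_data"  # absence of metrics is a failure mode too
    breach = 0
    for s in samples:
        if s is not None and s > threshold:
            breach += 1
            if breach >= sustained:
                return "alert"  # sustained breach, not a one-off spike
        else:
            breach = 0  # any normal sample resets the streak
    return "ok"
```

The `no_data` branch is the part dopamine mode skips: a dashboard that goes quiet looks healthy unless you made silence an explicit signal.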
Smart plugs: a concrete case⌗
Monitoring smart plugs started as a small idea.
It became a system.
From:
- “see power consumption in Grafana”
To:
- detect abnormal usage patterns
- correlate with device state
- trigger domotic actions:
  - shut down unstable devices
  - alert on unexpected consumption
  - adapt behavior based on load
This is where infra meets automation.
And where:
observability becomes control
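The detection side of that pipeline can be sketched like this. The class, the phantom-draw rule, and the thresholds are my illustrative assumptions, not the exact implementation: flag readings that deviate from the recent baseline, and flag any real draw while the device claims to be off.

```python
# Smart-plug anomaly sketch: combine a state-aware rule (power while
# "off") with a rolling z-score against recent readings.
# Window size and thresholds are illustrative.
import statistics
from collections import deque


class PlugMonitor:
    def __init__(self, window: int = 60, z_threshold: float = 3.0):
        self.readings = deque(maxlen=window)  # recent watt samples
        self.z_threshold = z_threshold

    def observe(self, watts: float, device_on: bool) -> bool:
        """Return True if this reading looks anomalous for the device state."""
        anomalous = False
        if not device_on and watts > 2.0:
            # Phantom draw: device reports "off" but pulls real power.
            anomalous = True
        elif len(self.readings) >= 10:
            mean = statistics.mean(self.readings)
            stdev = statistics.pstdev(self.readings) or 1e-9
            if abs(watts - mean) / stdev > self.z_threshold:
                anomalous = True  # far outside the recent baseline
        self.readings.append(watts)
        return anomalous
```

The correlation with device state is what turns a chart into a decision: the same 5 W reading is boring when the device is on and suspicious when it is off.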
OpenWrt agent: another example⌗
The idea:
- detect malicious traffic on my home network
With AI, it was easy to:
- parse flows
- classify patterns
- generate alerts
But the real work was:
- defining what “malicious” actually means
- reducing false positives
- integrating with existing signals
- deciding what action to take
That’s the difference between:
a demo
and
a security system
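The false-positive reduction step can be sketched as requiring corroboration: one weak signal earns a "watch", and only independent signals agreeing earn an alert. The flow fields, port list, and signal functions are illustrative assumptions, not the real agent.

```python
# Flow classification sketch: collect independent weak signals and
# require at least two to agree before alerting. All field names and
# thresholds are illustrative.
from dataclasses import dataclass
from typing import List


@dataclass
class Flow:
    dst_port: int
    bytes_out: int
    new_destination: bool  # dst never seen before on this network
    off_hours: bool        # outside normal household activity window


def suspicious_signals(flow: Flow) -> List[str]:
    """Collect independent weak signals for one flow."""
    signals = []
    if flow.dst_port not in (53, 80, 123, 443):
        signals.append("unusual_port")
    if flow.bytes_out > 10_000_000:
        signals.append("large_upload")
    if flow.new_destination:
        signals.append("new_destination")
    if flow.off_hours:
        signals.append("off_hours")
    return signals


def classify(flow: Flow) -> str:
    """Require corroboration: one weak signal alone is not an alert."""
    n = len(suspicious_signals(flow))
    if n >= 2:
        return "alert"
    if n == 1:
        return "watch"
    return "ok"
```

Defining which signals count, and how many must agree, is exactly the "what does malicious actually mean" work that AI could not do for me.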
AI is a cognitive amplifier (and a stimulant)⌗
The closest analogy I’ve found:
AI behaves like a cognitive stimulant.
Not chemically.
But functionally:
- faster feedback
- higher engagement
- reduced friction
- continuous reward
Which means:
you need discipline at a different layer
A better way to use AI in infra & security⌗
Here’s the model I’m converging on.
1. Exploration is allowed — but bounded⌗
Use AI to:
- explore ideas
- generate prototypes
- map solution space
But set limits:
- time-box it
- define exit criteria
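The time-box can even be mechanical. A minimal sketch, assuming nothing more than a wall-clock budget you check inside your exploration loop; the class and its interface are hypothetical:

```python
# Bounded exploration sketch: a context manager that tracks a time
# budget, forcing an explicit stop-or-commit decision when it runs out.
import time


class TimeBox:
    def __init__(self, minutes: float):
        self.budget = minutes * 60  # budget in seconds

    def __enter__(self):
        self.start = time.monotonic()
        return self

    def expired(self) -> bool:
        """True once the exploration budget is spent."""
        return time.monotonic() - self.start > self.budget

    def __exit__(self, *exc):
        return False  # never suppress exceptions
```

Usage: `with TimeBox(30) as box:` then `while not box.expired(): explore_next_idea()`. The point is not the code; it is that the exit criterion exists before the dopamine does.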
2. Execution is sacred⌗
For anything that matters:
- validate signals
- test failure modes
- define ownership
- connect to action
If it doesn’t change behavior:
it’s noise
3. Observability → Decision → Action⌗
Every system you build should follow:
- Observe (metrics, logs, signals)
- Decide (rules, models, thresholds)
- Act (automation, alerts, orchestration)
If you stop at step 1:
you built a dashboard, not a system
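The three steps above can be sketched as one explicit cycle. The signal names, the threshold, and the shutdown action are illustrative assumptions; the shape is the point:

```python
# Observe -> Decide -> Act sketch: each stage is a pluggable function,
# and the cycle is not complete until an action (possibly none) is taken.
from typing import Callable, List, Optional


def run_cycle(
    observe: Callable[[], dict],
    decide: Callable[[dict], Optional[str]],
    act: Callable[[str], None],
) -> Optional[str]:
    """One pass: gather signals, apply a rule, execute the chosen action."""
    signals = observe()
    action = decide(signals)
    if action is not None:
        act(action)  # step 3 is what makes it a system, not a dashboard
    return action


# Illustrative wiring: shut down a plug when power exceeds a threshold.
actions_taken: List[str] = []
result = run_cycle(
    observe=lambda: {"plug_watts": 950.0},
    decide=lambda s: "shutdown_plug" if s["plug_watts"] > 800 else None,
    act=actions_taken.append,
)
```

If your `act` is always a no-op, you have built step 1 and called it a system.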
4. Build less, but build deeper⌗
AI makes building easy.
Value still comes from:
- reliability
- clarity
- integration
Final thought⌗
That night felt like unlocking a new capability.
Not because I became better.
But because:
the cost of turning ideas into systems collapsed
And when that happens, a new problem appears:
choosing what not to build
If you’re in infra or security, this matters.
Because the goal is not to build more systems.
It’s to build:
fewer systems that you can actually trust
Build fast.
But finish one.