📊 Post #4: Tracking and Surfacing Errors with CI Dashboards
Genesis of Infrastructure — Entry #4
By Emergent Dynamics, building Colonies: Genesis of E.D.E.N.
Welcome to the next phase of simulation development maturity: observability.
You’ve got builds. You’ve got tests. You’re logging tick times and performance.
Now it’s time to see it.
This post covers how to visualize test results, track profiling data, and surface simulation failures using built-in GitHub tools and optional third-party dashboards.
🎯 What We’re Solving
Even with CI running, raw logs aren’t enough. You need:
- 🔴 Immediate alerts when tests or builds fail
- 📈 Historical profiling data to watch for regressions
- 🧭 Feedback loops to guide optimization and refactor efforts
- 📉 Artifacts and charts to analyze emergent behavior over time
✅ GitHub Built-In Tools
1. Action Status Overview
Every commit, push, and PR shows a CI pass/fail status check:
- Green ✅ = all jobs passed
- Red ❌ = one or more failed
- Click through to the “Actions” tab for full logs
2. Annotations in Pull Requests
If a test fails or throws an exception, GitHub shows:
[x] TimeManagerTests.TickCounterIncrements failed at line 12
This links directly to the source file and line.
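Test-reporting actions normally emit these annotations for you, but you can also raise one by hand with GitHub’s error workflow command. A minimal sketch, where the step name, file path, line number, and message are illustrative placeholders:

```yaml
# Hand-rolled annotation sketch; test-reporter actions normally do this for you.
# The file path, line number, and message below are placeholders.
- name: Surface test failure as a PR annotation
  if: failure()
  run: |
    echo "::error file=Assets/Tests/TimeManagerTests.cs,line=12::TimeManagerTests.TickCounterIncrements failed"
```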
3. Artifacts (Logs, Reports)
We configured CI to upload performance logs. You can:
- Download them manually
- Compare them between commits
- Store them in /Logs/ or /Artifacts/
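For reference, the upload step is only a few lines of workflow YAML. A minimal sketch, assuming the run writes into the Logs/ and Artifacts/ folders above (the artifact name is arbitrary):

```yaml
# Minimal sketch of the artifact upload step.
- name: Upload performance logs
  uses: actions/upload-artifact@v4
  if: always()          # keep logs even when earlier steps fail
  with:
    name: performance-logs
    path: |
      Logs/
      Artifacts/
```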
🧰 Optional: Dashboards for Advanced Tracking
If you want even more visibility, connect CI to external tools:
🔹 Unity Test Report Viewer
- Parses and displays Unity test results
- Integrates directly with GitHub Actions
- Generates readable test dashboards per run
🔹 Codecov
- Tracks test coverage over time
- Helps confirm your simulation systems are actually exercised by tests
- Requires minor CI config update
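That config update is roughly one extra workflow step. A minimal sketch, assuming your test run already produces a coverage.xml report and CODECOV_TOKEN is stored as a repository secret:

```yaml
# Minimal sketch: publish an existing coverage report to Codecov.
- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v4
  with:
    token: ${{ secrets.CODECOV_TOKEN }}  # repository secret (assumed)
    files: coverage.xml                  # placeholder report path
```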
🔹 Grafana + InfluxDB
- Pipe tick timing logs into a time-series database
- Visualize tick duration, memory use, frame stutter, etc.
- Requires self-hosting or a paid tier
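As a rough sketch, a CI step can push samples to InfluxDB 2.x through its line-protocol write endpoint; the URL, org, bucket, measurement name, and INFLUX_TOKEN secret below are all placeholder assumptions:

```yaml
# Rough sketch: write one tick-duration sample to InfluxDB 2.x (line protocol).
- name: Push tick timings to InfluxDB
  run: |
    curl -sS -X POST "https://influx.example.com/api/v2/write?org=eden&bucket=ci-metrics&precision=ms" \
      -H "Authorization: Token ${{ secrets.INFLUX_TOKEN }}" \
      --data-binary "tick_duration,system=WeatherSystem duration_ms=3.2"
```

Grafana then queries that bucket to chart tick duration per system across commits.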
🧩 Example: Tracking Tick Durations in CI
Let’s say your log file outputs this on each system tick:
[TickProfiler] WeatherSystem took 3.2 ms
[TickProfiler] TimeManager took 0.5 ms
You can now:
- Upload that log to CI artifacts
- Parse it using a GitHub Action step
- Visualize it over time using your tool of choice
This creates a performance history for your simulation.
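A rough sketch of those first two steps, assuming the profiler writes to a hypothetical Logs/tick.log: one step converts the TickProfiler lines to CSV, the next uploads the CSV keyed by commit.

```yaml
# Rough sketch: turn "[TickProfiler] WeatherSystem took 3.2 ms" into "WeatherSystem,3.2".
- name: Extract tick durations
  run: |
    echo "system,duration_ms" > tick-times.csv
    awk '/\[TickProfiler\]/ { print $2 "," $4 }' Logs/tick.log >> tick-times.csv

- name: Upload tick durations
  uses: actions/upload-artifact@v4
  with:
    name: tick-times-${{ github.sha }}   # one artifact per commit
    path: tick-times.csv
```

From there, any charting tool that reads CSV (or the InfluxDB pipeline above) can plot the series per system.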
🧠 Why This Matters for Simulation Projects
Simulation is nonlinear. One small change can cause:
- Exponential slowdowns
- Logical drift
- Cascading behavioral errors
- Memory blowup
You often won’t catch these in normal playtesting.
But a dashboard will tell you when something changed, what changed, and by how much.
🧭 Suggested Next Step: Alerting
In a future post, we’ll cover:
- Slack or Discord webhook alerts
- Auto-commenting on PRs
- Threshold-based failure conditions
- Regression detection (e.g., “tick time increased 50%”)
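As a small taste of the threshold idea, a single step can gate on the CSV from the tick-duration example above; the 5 ms budget and the tick-times.csv path are placeholder assumptions:

```yaml
# Rough sketch: fail the job if any system exceeds a 5 ms tick budget.
- name: Enforce tick-time budget
  run: |
    over=$(awk -F, 'NR > 1 && $2 > 5 { print $1 ": " $2 " ms" }' tick-times.csv)
    if [ -n "$over" ]; then
      echo "::error::Tick-time budget exceeded"
      echo "$over"
      exit 1
    fi
```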
⏭️ Next Entry: Git Branching, Feature Flow, and Deployment Strategy
CI is only as good as your workflow.
Now that you’ve built the monitoring layer, let’s formalize how changes get merged, what gets tested, and how builds are staged.
You’re no longer hacking code — you’re building a reliable, testable, observable simulation engine.