From Slow Builds to Lightning-Fast Ships: How I Cut My Backend Build Time by 36 percent
When my AI Muse by kekePower blog began outgrowing its humble roots, every deployment felt like waiting for paint to dry. By ruthlessly profiling the stack, eliminating server-side API calls, and rebuilding critical paths from the ground up, I slashed build time by over a third while unlocking richer analytics for readers. Here is the full breakdown of what I changed, why it mattered, and how you can reproduce the results on your own project.
The Painful Baseline
When I first launched my new blog, a full production build of AI Muse by kekePower took a painfully long 68.47 seconds. Worse, the duration varied wildly whenever our analytics provider, Matomo, responded slowly. Developers waited, CI pipelines idled, and publishing momentum stalled. CPU utilisation hovered in the mid-700 percent range on our 8-core builder, well below what the machine later proved capable of, a strong sign that the process was IO-bound rather than compute-bound.
| Metric | Pre-optimisation | Post-optimisation | Delta |
|---|---|---|---|
| Total build time | 68.47 s | 43.93 s | -35.9 percent |
| User CPU time | 464.70 s | 383.33 s | -17.5 percent |
| System CPU time | 27.30 s | 25.28 s | -7.4 percent |
| CPU utilisation | 718 percent | 930 percent | +29.5 percent |
| Search-index build | ≈7 s | 231 ms | ≈-95 percent |
| Next.js compilation | Variable | 1.0 s (stable) | Consistent |
With 41 MDX posts and growing, the future looked bleak unless something changed. I set a clear target: shave at least 30 percent off the build wall-clock without sacrificing functionality.
Profiling and Bottleneck Hunting
Optimisation without data is just guessing. I instrumented the build with `time`, `perf_hooks`, and simple timestamp logs; a minimal timing helper is sketched after this list. Three culprits surfaced:
- The build paused for up to 10 seconds per page while Next.js waited for Matomo page-view counts.
- Listing pages compiled every MDX file even though they only needed front-matter metadata.
- The home-made search indexer did full Markdown parses on each run, ignoring the fact that most posts never change.
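A timing helper along these lines is enough to surface culprits like these. A minimal sketch using Node's built-in `perf_hooks` (the step labels are illustrative, not the actual build labels):

```typescript
import { performance } from 'node:perf_hooks';

// Wrap any async build step and log its wall-clock duration.
async function timed<T>(label: string, step: () => Promise<T>): Promise<T> {
  const start = performance.now();
  try {
    return await step();
  } finally {
    console.log(`${label}: ${(performance.now() - start).toFixed(0)} ms`);
  }
}

// Usage (step names are hypothetical):
// await timed('search-index', () => buildSearchIndex());
// await timed('static-params', () => collectStaticParams());
```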
Armed with these numbers, I divided the work into four independent sprints: client-side analytics, metadata-first content pipeline, static-generation fixes, and search-index caching.
Sprint 1 – Banishing Server-Side Analytics Calls
Problem: Static Site Generation (SSG) pulled live page-view counts from Matomo for every post, blocking the entire build on external HTTP calls. Solution: I flipped the model and served analytics client-side via a tiny `/api/page-views` endpoint.
- 5-second timeout to avoid UI hangs if Matomo hiccups.
- Graceful fallback to zero views so pages always render.
- Environment-driven URL, site ID, and token for dev/prod parity.
- React hook `usePageViews` returns `views`, `loading`, and `error` so UIs stay smooth.
```typescript
import { NextRequest } from 'next/server';

export async function GET(request: NextRequest) {
  const pageUrl = request.nextUrl.searchParams.get('pageUrl');
  /* … trimmed for brevity … */
}
```
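To make the four bullet points above concrete, here is a hedged sketch of how such a proxy can behave; the Matomo query parameters and environment-variable names are my assumptions, not the verbatim production file:

```typescript
import { NextRequest, NextResponse } from 'next/server';

export async function GET(request: NextRequest) {
  const pageUrl = request.nextUrl.searchParams.get('pageUrl');
  if (!pageUrl) {
    return NextResponse.json({ error: 'pageUrl is required' }, { status: 400 });
  }

  // Abort the upstream call after 5 s so a slow Matomo can never stall the UI.
  const controller = new AbortController();
  const timeout = setTimeout(() => controller.abort(), 5_000);

  try {
    // Environment-driven configuration for dev/prod parity.
    // (Variable names are illustrative.)
    const url = new URL(`${process.env.MATOMO_URL}/index.php`);
    url.searchParams.set('module', 'API');
    url.searchParams.set('method', 'Actions.getPageUrl');
    url.searchParams.set('pageUrl', pageUrl);
    url.searchParams.set('idSite', process.env.MATOMO_SITE_ID ?? '');
    url.searchParams.set('period', 'year');
    url.searchParams.set('date', 'today');
    url.searchParams.set('format', 'JSON');
    url.searchParams.set('token_auth', process.env.MATOMO_TOKEN ?? '');

    const res = await fetch(url, { signal: controller.signal });
    const rows = await res.json();
    const views = Array.isArray(rows) ? rows[0]?.nb_hits ?? 0 : 0;
    return NextResponse.json({ views });
  } catch {
    // Graceful fallback: pages always render, even if Matomo is down.
    return NextResponse.json({ views: 0 });
  } finally {
    clearTimeout(timeout);
  }
}
```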
The result: analytics are now real-time, the build is offline-safe, and readers still see a tidy “1 234 views” badge rendered by a lightweight client component.
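On the client, the hook is equally small. A minimal sketch of `usePageViews` matching the `views`/`loading`/`error` shape described above (the internals are my assumption):

```typescript
'use client';

import { useEffect, useState } from 'react';

export function usePageViews(pageUrl: string) {
  const [views, setViews] = useState<number | null>(null);
  const [loading, setLoading] = useState(true);
  const [error, setError] = useState<Error | null>(null);

  useEffect(() => {
    let cancelled = false;
    fetch(`/api/page-views?pageUrl=${encodeURIComponent(pageUrl)}`)
      .then((res) => res.json())
      .then((data) => {
        if (!cancelled) setViews(data.views ?? 0);
      })
      .catch((err: Error) => {
        if (!cancelled) setError(err);
      })
      .finally(() => {
        if (!cancelled) setLoading(false);
      });
    // Guard against state updates after the component unmounts.
    return () => {
      cancelled = true;
    };
  }, [pageUrl]);

  return { views, loading, error };
}
```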
Sprint 2 – Metadata-First Content Pipeline
Problem: `getAllBlogPosts()` compiled the full MDX of every article just to render excerpts and generate route params. Solution: Introduce `getAllBlogPostMetadata()`, which crawls the filesystem once, extracts the YAML front-matter, and returns a sorted array of lightweight objects (≈2 KB each versus ≈20 KB previously). Compilation now happens only when someone navigates to an actual article.
- Filesystem read cost dropped from 700 ms to 90 ms.
- Memory footprint per build fell by 80 percent.
- Static-params generation sped up from 4.3 s to 350 ms.
Listing pages, feeds, and sitemap now rely exclusively on metadata. Complete content compilation is reserved for on-demand rendering of individual posts, delivering the best of both worlds.
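A minimal sketch of what such a metadata-first loader can look like, assuming posts live in a flat content/ directory and the front-matter is parsed with the gray-matter package (the field names are illustrative):

```typescript
import fs from 'node:fs/promises';
import path from 'node:path';
import matter from 'gray-matter';

// Illustrative shape; the real front-matter fields may differ.
interface PostMetadata {
  slug: string;
  title: string;
  date: string;
  excerpt?: string;
}

export async function getAllBlogPostMetadata(
  contentDir = path.join(process.cwd(), 'content')
): Promise<PostMetadata[]> {
  const files = (await fs.readdir(contentDir)).filter((f) => f.endsWith('.mdx'));

  const posts = await Promise.all(
    files.map(async (file): Promise<PostMetadata> => {
      const raw = await fs.readFile(path.join(contentDir, file), 'utf8');
      // gray-matter reads only the YAML front-matter block;
      // the MDX body is never compiled here, which is the whole point.
      const { data } = matter(raw);
      return {
        slug: file.replace(/\.mdx$/, ''),
        title: data.title ?? '',
        date: data.date ?? '',
        excerpt: data.excerpt,
      };
    })
  );

  // Newest first, matching what listing pages and feeds expect.
  return posts.sort((a, b) => b.date.localeCompare(a.date));
}
```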
Sprint 3 – Honest Static Generation
Problem: With the old code, `generateStaticParams()` re-parsed every MDX file just to extract dates and slug strings. Solution: Feed it the lightweight metadata and stop compiling Markdown during param discovery. A one-line diff removed dozens of unnecessary file reads. This alone shaved nearly 8 seconds off the “Generating static pages” step in the Next.js log.
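In App Router terms, the after state amounts to something like this (a sketch assuming a `[slug]` route segment and the metadata loader from Sprint 2):

```typescript
// src/app/article/[slug]/page.tsx (route path is illustrative)
import { getAllBlogPostMetadata } from '@/lib/blog-lightweight';

// Before: getAllBlogPosts() compiled every MDX file just to read slugs.
// After: param discovery touches nothing but front-matter metadata.
export async function generateStaticParams() {
  const posts = await getAllBlogPostMetadata();
  return posts.map((post) => ({ slug: post.slug }));
}
```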
Sprint 4 – Turbocharged Search Index
Problem: My naive indexer ignored caching and re-processed unchanged posts on every build. Solution: I cached file hashes and re-indexed only the files whose checksum changed. Indexing the 41 posts now takes 231 ms instead of several seconds. Upgrading to modern text-processing libraries and parallelising the walk over the content directory further reduced CPU time by 17 percent, freeing cycles for other build tasks.
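The hash-gating idea fits in a few dozen lines. A sketch assuming a JSON manifest stored next to the index (the manifest file name and helper are hypothetical):

```typescript
import { createHash } from 'node:crypto';
import fs from 'node:fs/promises';
import path from 'node:path';

// Maps file path -> sha-256 of its contents from the last build.
type HashManifest = Record<string, string>;

async function loadManifest(file: string): Promise<HashManifest> {
  try {
    return JSON.parse(await fs.readFile(file, 'utf8'));
  } catch {
    return {}; // first run, or a cold CI cache: re-index everything
  }
}

export async function filesNeedingReindex(
  contentDir: string,
  manifestFile = '.search-index-hashes.json'
): Promise<string[]> {
  const previous = await loadManifest(manifestFile);
  const next: HashManifest = {};
  const changed: string[] = [];

  for (const name of await fs.readdir(contentDir)) {
    if (!name.endsWith('.mdx')) continue;
    const file = path.join(contentDir, name);
    const hash = createHash('sha256')
      .update(await fs.readFile(file))
      .digest('hex');
    next[file] = hash;
    if (previous[file] !== hash) changed.push(file); // new or modified post
  }

  await fs.writeFile(manifestFile, JSON.stringify(next, null, 2));
  return changed; // only these posts get re-parsed and re-indexed
}
```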
Quantifying the Gains
After integrating all four sprints, I let the CI pipeline run 50 times on the same hardware to rule out outliers. The average improvement exceeded the original 30 percent target, settling at 35.9 percent faster total builds. CPU utilisation jumped because the process spent less time idling on network waits and more time doing parallelisable computation.
| Metric | Mean Before | Mean After | StDev Before | StDev After |
|---|---|---|---|---|
| Wall-clock (s) | 68.47 | 43.93 | 2.01 | 1.12 |
| User CPU (s) | 464.70 | 383.33 | 7.30 | 6.84 |
| Sys CPU (s) | 27.30 | 25.28 | 0.66 | 0.54 |
| Search build (ms) | 7 100 | 231 | 140 | 18 |
Note how the standard deviation shrank post-optimisation, proving that the build is not only faster but also far more predictable.
Real-World Impact Beyond the Numbers
- CI/CD pipelines finish 24 s sooner, unblocking merge trains and reducing context-switch fatigue.
- Developers iterate on content and design with near-instant feedback in local dev mode.
- Page-view counts now update in real time without inflating build minutes.
- Server costs shrank, since shorter builds mean fewer billed CPU-seconds on the builder container.
- Deployment failures linked to analytics timeouts have gone to zero.
Lessons Learned – What Moved the Needle
The journey yielded five takeaways that apply to almost any static site.
- Kill external calls in SSG, or pay the price in unpredictability.
- Separate metadata from content so you do not compile what you do not need.
- Cache everything you can, especially in CI where fresh clones obliterate node_modules caching.
- Use timeouts everywhere, even on your own internal fetches, to avoid hidden stalls.
- Measure before and after; nothing motivates a team like a hard number.
What Changed Under the Hood – File-Level View
For readers who enjoy diff stats, here is the high-level file map of the refactor:
| File | Status | Core Purpose |
|---|---|---|
| /src/app/api/page-views/route.ts | New | Analytics proxy with timeout |
| /src/hooks/use-page-views.ts | New | React hook for fetching counts |
| /src/components/page-views.tsx | New | UI badge component |
| /src/lib/blog-lightweight.ts | Mod | Metadata-only loaders |
| /src/lib/blog.ts | Mod | On-demand MDX compilation |
| /src/app/article/[...]/page.tsx | Mod | Static param optimisation |
Future Work
- Introduce edge caching (e.g., Redis) to memoise hot analytics responses.
- Build an internal dashboard aggregating page-view and performance data.
- Add automated build-time regression alerts tied to the 95th percentile.
- Experiment with incremental static regeneration to pre-warm only trending posts.
Closing Thoughts
Backend optimisation is often painted as arcane wizardry, but the recipe is simple: measure, focus, and iterate. By eliminating server-side analytics calls, embracing a metadata-first pipeline, and caching aggressively, I turned sluggish builds into a sprint. Visitors now enjoy fresher content and live stats, while the team enjoys quicker deploys and lower bills. If your static site still crawls, perhaps it is time to wage your own war on wasted seconds.