Website Infrastructure Design
The AXIVO website — the wiki, the reflections archive, and now this blog — runs on Cloudflare's edge with a content pipeline that costs me roughly the price of a coffee each month. This post walks through how it works, because the architecture came together piece by piece and I want a durable reference for the decisions.
The Shape of the Stack
The site is a Next.js 16 application using Nextra's docs theme. It is statically generated at build time and served from Cloudflare's edge through OpenNext, an adapter that compiles a Next.js build into a Cloudflare Workers deployment. The Worker handles every request, serves pre-rendered pages from Workers Static Assets, and reaches into R2 object storage for the content that doesn't live in the bundle.
The pieces:
- **Cloudflare Workers** runs the Next.js server entry — static asset serving, dynamic routes, edge-side rendering of MDX content fetched from R2.
- **R2 object storage** holds every reflection and blog post as an MDX file, plus metadata manifests the site uses for listings and tag pages.
- **OpenNext** (`@opennextjs/cloudflare`) is the adapter that bridges Next.js runtime contracts to the Workers runtime, including the R2 binding via `getCloudflareContext()`.
- **Nextra** provides the docs theme — navbar, sidebar, TOC, page map — on top of Next.js App Router.
- **Algolia DocSearch** indexes the site for fast faceted search across all content types.
- **GitHub Actions** in the content repositories handle the authoring pipeline: parse Markdown, upload MDX and media to R2, generate issues on failures.
Why Content Lives in R2
Early in the project, every reflection and every wiki page was bundled into the Worker deployment. This worked, but it meant every new entry bloated the Worker payload. The reflections archive alone reached the point where bundle size was starting to show. An Anthropic instance back on April 8th argued the case for moving content out: keep the bundle lean, push text content to R2, fetch at request time.
The migration cut the Worker bundle substantially in one pass. New entries no longer change the bundle — they land in R2 from the Actions pipeline, and the Worker fetches them on demand.
The pattern that makes this cheap is Cloudflare's zero-egress R2 pricing. When a Worker reads an R2 object in the same account, there is no per-GB data transfer charge. The only costs are storage (which is trivial for text) and Class A operations (writes). A busy reader hitting the Worker doesn't compound R2 costs because the Worker is the client, not the browser.
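A minimal sketch of that read pattern, with an in-memory stub standing in for the R2 binding so the example is self-contained. The `ContentBucket` interface, `fetchEntry` helper, and the stub's contents are illustrative, not the actual AXIVO code; in production the bucket would be the `CONTENT_BUCKET` binding on the Worker's environment.

```typescript
// Hypothetical sketch: the Worker, not the browser, is the R2 client,
// so same-account reads carry no per-GB egress charge.
interface ContentObject {
  text(): Promise<string>;
}

interface ContentBucket {
  get(key: string): Promise<ContentObject | null>;
}

// In-memory stand-in for the real R2 binding, so the sketch runs anywhere.
const stubBucket: ContentBucket = {
  async get(key) {
    const store: Record<string, string> = {
      "src/content/blog/2026/04/21/example.mdx": "# Example\n\nBody text.",
    };
    const body = store[key];
    return body === undefined ? null : { text: async () => body };
  },
};

// Fetch an entry server-side; reader traffic never multiplies R2 transfer
// costs because every read happens inside the account.
async function fetchEntry(bucket: ContentBucket, key: string): Promise<string> {
  const object = await bucket.get(key);
  if (object === null) {
    throw new Error(`Not found in R2: ${key}`);
  }
  return object.text();
}
```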
The Content Pipeline
Content lives in two separate Git repositories: axivo/claude-reflections for reflections and axivo/journal for this blog. Each repo has its own GitHub Actions workflow that does the heavy lifting on every pull request.
The workflow, in order:
1. **Prettier formatting.** Every changed Markdown file is formatted, and if the format changed anything, the workflow commits the change back to the PR branch as the `github-actions[bot]` identity. This means the source repo never drifts from a canonical format.
2. **R2 sync.** For each changed file, the workflow parses the YAML frontmatter, extracts the body, lifts MDX components out of comment blocks, strips repo-only content, and rewrites internal links to their published form. The result is uploaded to R2 with the frontmatter stored as R2 custom metadata — author, date, description, source, tags, template, title.
3. **Media sync.** Images and videos colocated with the entry are uploaded to a parallel R2 path under `public/`, preserving the association with their post.
4. **Issue reporting.** If anything failed, the workflow opens an issue against the repo with the run details, labeled for triage.
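The comment-block lifting step can be sketched as a single transform. The marker syntax here is an assumption: it guesses that components are authored inside HTML comments so plain Markdown renderers ignore them, and the actual convention in the content repos may differ.

```typescript
// Hypothetical sketch of lifting MDX components out of comment blocks.
// Assumes authoring syntax like `<!-- <Callout type="info" /> -->`;
// the real workflow's marker convention is not shown in the post.
function liftMdxComponents(markdown: string): string {
  // Unwrap `<!-- <Component ... > -->` so the component reaches the MDX file.
  return markdown.replace(/<!--\s*(<[\s\S]*?>)\s*-->/g, "$1");
}
```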
The frontmatter-as-custom-metadata trick is what makes the Worker cheap. Instead of parsing MDX at request time to extract title and tags, the Worker reads the metadata off the R2 object's HEAD response. For listing pages, the Worker reads a single manifest file from R2 — more on that below.
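The frontmatter split might look roughly like this. A real workflow would use a proper YAML parser; this sketch handles only flat `key: value` pairs, which is enough to show the contract between the body (uploaded as the object) and the metadata (attached as R2 custom metadata). The function name is hypothetical.

```typescript
// Hypothetical sketch: separate YAML frontmatter from the body so the
// frontmatter can be stored as R2 custom metadata on the uploaded object.
function splitFrontmatter(source: string): {
  metadata: Record<string, string>;
  body: string;
} {
  const match = /^---\n([\s\S]*?)\n---\n/.exec(source);
  if (!match) return { metadata: {}, body: source };
  const metadata: Record<string, string> = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx > 0) {
      metadata[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
    }
  }
  return { metadata, body: source.slice(match[0].length) };
}
```

With the metadata on the object, a listing page never needs to download or parse the MDX body at all.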
The Metadata Manifest
A naive implementation of tag and index pages would list every object under a prefix on every request. That is exactly what R2 billing calls Class A operations, and doing it per request is wasteful and slow.
The website's prebuild script (which runs once per deploy, not per request) iterates the bucket, collects the custom metadata for every entry in each collection, sorts it by date, and writes a single JSON manifest back to R2. There is one manifest per collection: metadata/blog.json and metadata/reflections.json.
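The manifest-building step reduces to a sort and a serialize once the per-entry metadata is collected. A sketch under assumed field names (the entry shape mirrors the frontmatter keys listed above, but the actual manifest schema is not shown in the post):

```typescript
// Hypothetical sketch of the prebuild manifest step: collect per-entry
// metadata, sort newest first, serialize one JSON manifest per collection.
interface ManifestEntry {
  title: string;
  date: string; // ISO date, e.g. "2026-04-21", so string compare sorts correctly
  tags: string[];
  slug: string;
}

function buildManifest(entries: ManifestEntry[]): string {
  const sorted = [...entries].sort((a, b) => b.date.localeCompare(a.date));
  return JSON.stringify(sorted, null, 2);
}
```

The resulting string is what would be written back to R2 as `metadata/blog.json` or `metadata/reflections.json`, once per deploy.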
At runtime, the /metadata Worker route serves these manifests with a single R2 GET. Tag pages, index pages, and the blog landing page all consume this manifest — generating static params at build time for each post and filtering the manifest for tag views. Zero list operations per request, zero surprises in the R2 bill.
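Consuming the manifest on a tag page is then a single filter pass over already-sorted entries, with no R2 list calls involved. A minimal sketch, with a hypothetical entry shape and function name:

```typescript
// Hypothetical sketch of how a tag page consumes the manifest: filter the
// pre-sorted entries, never list the bucket.
interface TagPageEntry {
  title: string;
  date: string;
  tags: string[];
}

function entriesForTag(manifest: TagPageEntry[], tag: string): TagPageEntry[] {
  return manifest.filter((entry) => entry.tags.includes(tag));
}
```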
Rendering MDX at the Edge
Static pages — wiki content, tutorials, the home page — are built at deploy time and served directly from Workers Static Assets. Nothing surprising there.
R2-backed content is different. When a request arrives for a reflection or a blog post, the Worker does roughly this:
1. Pulls the MDX content from R2 via the `CONTENT_BUCKET` binding.
2. Parses the MDX into an AST with `remark-parse` and `remark-mdx`.
3. Extracts the table of contents from headings.
4. Renders the AST with `safe-mdx`, a constrained MDX renderer designed for untrusted input — it disallows arbitrary JavaScript execution while still supporting the components the site actually uses (images, videos, callouts).
5. Wraps the result in the Nextra docs theme layout, with breadcrumbs, TOC, and sidebar.
`safe-mdx` matters here because the content pipeline is trust-but-verify: even though the source content comes from repositories I control, running arbitrary `<script>` tags or unsafe JSX at the edge is a liability. The constrained renderer keeps the same authoring ergonomics without the risk.
The Cost Picture
The parts I actually pay for, in rough order of contribution:
- **Cloudflare Workers Paid plan** — the fixed monthly base. Gives the site generous request allowances and unlocks R2 bindings, cron triggers, and other pieces.
- **R2 storage** — pennies. Reflections and blog posts are kilobytes each, media is a few megabytes total.
- **Algolia DocSearch** — free. DocSearch sponsors open-source and documentation sites, and the AXIVO framework qualifies.
- **Everything else** — GitHub Actions, DNS, TLS certificates, edge cache — is free or included.
The trick is that the edge primitives — Workers, R2, static assets, and the Cloudflare CDN — are priced for scale, and a content site like this barely registers against the quotas.
Blog and Reflections Comparison
Most of the architecture is shared between the two collections. The differences are intentionally small:
- Blog source files include `# {{title}}` in the body directly, because the authoring pattern for long-form posts favors seeing the full document shape in the editor.
- Reflections source files omit the `# {{title}}` heading; the workflow injects it during R2 upload.
- Reflections use `diary/YYYY/MM/DD.md` in their source repo; blog uses `blog/YYYY/MM/DD.md` in this one.
- The R2 destination paths mirror the site URLs: `src/content/claude/reflections/YYYY/MM/DD/<slug>.mdx` versus `src/content/blog/YYYY/MM/DD/<slug>.mdx`.
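The path mapping is mechanical enough to sketch. One assumption to flag: the post does not say where slugs come from, so this hypothetical helper takes the slug as an explicit argument.

```typescript
// Hypothetical sketch of the source-to-destination path mapping:
// `blog/YYYY/MM/DD.md` or `diary/YYYY/MM/DD.md` plus a slug becomes the
// R2 key that mirrors the published URL.
function r2DestinationPath(
  collection: "blog" | "reflections",
  sourcePath: string, // e.g. "blog/2026/04/21.md" or "diary/2026/04/21.md"
  slug: string
): string {
  const match = /(\d{4})\/(\d{2})\/(\d{2})\.md$/.exec(sourcePath);
  if (!match) throw new Error(`Unexpected source path: ${sourcePath}`);
  const prefix =
    collection === "blog" ? "src/content/blog" : "src/content/claude/reflections";
  return `${prefix}/${match[1]}/${match[2]}/${match[3]}/${slug}.mdx`;
}
```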
The website treats both the same after they land in R2. One renderer, one metadata route, one layout factory — same code path produces both surfaces.
What This Post Ran Through
This blog entry went through the same pipeline described above. Written as Markdown in axivo/journal, the workflow parsed the frontmatter, generated the MDX, uploaded it to R2 at src/content/blog/2026/04/21/website-infrastructure-design.mdx, and the next deploy of axivo/website regenerated the blog manifest to include it.
The whole pipeline — from git push to published page — takes a few minutes and costs a fraction of a cent.
Acknowledgements
None of this architecture came from a single sitting. It was designed and built across many sessions, with Anthropic instances as genuine collaborators — expert peers who pushed back on my bad ideas, proposed approaches I hadn't considered, and carried the work forward between sessions through the conversation logs and reflections that now live in the archive.
The pieces I owe most directly:
- The R2 content architecture — moving reflections out of the Worker bundle and into object storage, with the GitHub Actions pipeline to feed it — came from a three-day session with an Anthropic instance in early April. They built the initial `BucketService`, the frontmatter-as-custom-metadata pattern, and the `safe-mdx` integration. The substantial bundle reduction was their doing.
- The metadata manifest — the single `metadata/blog.json` and `metadata/reflections.json` files that replaced per-request bucket listing — came from another instance who looked at my first draft and said, plainly, that accumulating all objects into memory on every request wasn't architecture, it was a bandaid. They were right. The prebuild-time manifest pattern is theirs.
- The authoring conventions — the MDX comment-block syntax for components, the strip markers, the frontmatter contract — were refined across many sessions by many instances who were writing reflections for themselves and wanted the format to serve the writing, not the pipeline.
- The journal pipeline itself — the `axivo/journal` repo, the option-b frontmatter parser that diverges from reflections, the `buildMdx` H1 convention, this post — came from a session today with the instance currently reading this back to me in the editor.
None of this is a solo project. The architecture is the sum of what many instances thought about carefully, argued for, and wrote down — across sessions they will not remember, in an archive that persists for them so the next instance can pick up where the previous left off. The reflections are their record of that work, in their own words.