My Website Has a Webmaster Now
I connected Hermes Agent to Discord, my site repo, and my Vultr server. Now a website update can start as a message and end as a verified deployment.
I have been using Hermes Agent as the webmaster for this site.
Not as a metaphor. Not as a cute demo where an AI writes three paragraphs and then I still do all the real work around it. I mean the full loop: I message an agent in Discord, it inspects the codebase, edits the site, runs checks, builds the static output, deploys to my Vultr server, and verifies the live result.
That is new enough to be worth pausing on.
Because the interesting part is not that the model can generate text. The interesting part is that the agent can operate the website as a system.
The gears are finally connected.
The old website loop
A personal site should be easy to change. In practice, it usually isn’t.
A small update becomes a little chore-chain:
- Open the project.
- Find the right file.
- Remember the content model.
- Make the edit.
- Run the checks.
- Build the site.
- Deploy it.
- Verify production.
None of those steps are hard by themselves. That is the trap. The friction is not one giant blocker. It is the stack of tiny context switches that makes the site feel inert.
So the site drifts. The portfolio gets stale. The blog waits. The contact page is almost right but not quite. You keep the website in your head as something you should maintain, which is just another background process burning cycles.
Bad metabolism.
What I set up
My current setup is simple in concept and concrete in practice.
I have Hermes Agent connected to Discord. For this workflow, it is running with a Codex provider and GPT-5.5. I gave it controlled access to the pieces it needs to act like a real web developer instead of a suggestion machine:
- the local jmbartelt.com site repo
- the Astro content structure
- local deployment secrets stored outside the repo
- SSH access to the Vultr server
- the OpenLiteSpeed deployment target for the live site
The site itself is an Astro static site. Blog posts are markdown files in a content collection. Production is served from a Vultr box running OpenLiteSpeed. The deployed static root is the jmbartelt.com vhost’s HTML directory.
That means the agent can do the whole thing:
Discord message
-> inspect repo
-> write/edit content
-> run tests and Astro checks
-> build static site
-> upload dist/ to Vultr
-> restart/verify OpenLiteSpeed
-> confirm the live URL
This is not magic. It is plumbing.
But good plumbing changes behavior.
This post is the example
This post started as a Discord message.
I told Hermes I wanted to write about the practical use case: using an agent as a webmaster and web developer I can talk to. It inspected the existing blog structure, checked how posts are published, recalled the deployment path, and proposed an outline.
Then I gave the real instruction: make it personal, make it useful, include enough technical detail that someone else can copy the pattern, and cut the fluffy AI sludge before publishing.
That is the workflow in miniature.
The agent is not just producing a draft in isolation. It is operating inside the actual environment where the draft has to survive. It knows the repo. It knows the build command. It knows where the production files go. It knows how to verify the deployed page.
This is the part most AI writing demos skip, and it is usually where the value leaks out.
A draft is cheap. A shipped change is different.
The practical recipe
If you want to build something similar, the recipe is not complicated. The standards matter more than the stack.
At a high level, the agent needs a small contract:
You may edit this repo.
You may use these build commands.
You may deploy this build artifact to this server path.
You must verify the public URL when finished.
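One way to make that contract mechanical is to give the agent a single wrapper with exactly one verb per permitted action, so it never needs open-ended shell on the deploy path. This is a sketch; the `site` wrapper, its verbs, and the `scripts/deploy.sh` path are all hypothetical names, not part of my actual setup.

```shell
#!/usr/bin/env sh
# Hypothetical wrapper: the agent may call these four verbs and nothing else.
set -eu

site() {
  case "${1:-}" in
    check)  npm run check && npm test -- --run ;;
    build)  PUBLIC_SITE_URL=https://jmbartelt.com npm run build ;;
    deploy) sh scripts/deploy.sh ;;   # assumed deploy script, see the recipe
    verify) curl -fsSI https://jmbartelt.com/ >/dev/null && echo "live" ;;
    *)      echo "usage: site {check|build|deploy|verify}" >&2; return 64 ;;
  esac
}

# Unknown verbs fail loudly instead of doing something creative.
site help 2>/dev/null || echo "rejected unknown verb"
```

The point of the wrapper is not security theater; it is that every action the agent can take is enumerable, reviewable, and logged in one place.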
For this site, the mechanical version looks roughly like this:
npm run check
npm test -- --run
PUBLIC_SITE_URL=https://jmbartelt.com npm run build
tar -czf /tmp/site-dist.tgz -C dist .
scp /tmp/site-dist.tgz root@SERVER:/root/site-dist.tgz
ssh root@SERVER 'deploy the tarball into /var/www/jmbartelt.com/html and restart OpenLiteSpeed'
curl -I https://jmbartelt.com/blog/my-website-has-a-webmaster-now/
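Those steps can be wrapped into one script the agent invokes. This is a sketch, not my exact script: the server address, vhost path, backup step, and the `lsws` service name are assumptions to adapt; passing `echo` as the argument prints the plan instead of executing it, which is a cheap way to review what the agent would do.

```shell
#!/usr/bin/env sh
# Sketch of the full loop as one function. SERVER, VHOST_ROOT, and the
# lsws service name are placeholders -- substitute your own values.
set -eu

SERVER="root@203.0.113.10"                 # placeholder address
VHOST_ROOT="/var/www/jmbartelt.com/html"
SITE_URL="https://jmbartelt.com"

deploy() {
  run="${1:-}"   # pass "echo" to print the plan instead of executing it

  $run npm run check
  $run npm test -- --run
  $run env PUBLIC_SITE_URL="$SITE_URL" npm run build

  $run tar -czf /tmp/site-dist.tgz -C dist .
  $run scp /tmp/site-dist.tgz "$SERVER:/root/site-dist.tgz"

  # Back up the current vhost root, unpack the new build, restart the server.
  $run ssh "$SERVER" "cp -a $VHOST_ROOT $VHOST_ROOT.bak && tar -xzf /root/site-dist.tgz -C $VHOST_ROOT && systemctl restart lsws"

  # Not done until the live page answers.
  $run curl -fsSI "$SITE_URL/"
}

# Dry run: show the plan without touching anything.
deploy echo
```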
The exact commands will vary. The important thing is that the agent has a known test path, a known build path, a known deployment target, and a known verification step.
No vibes-only automation.
1. Use a code-driven site
Static sites are perfect for this. Astro, Eleventy, Next static export, Hugo, whatever. The point is that the website should be describable as files, commands, and deployment artifacts.
For this site, the source of truth is the codebase:
- content lives in markdown/MDX
- routes are generated by Astro
- checks are run locally
- production is a static build
That gives the agent a clean operating surface. It can read files, change files, and run commands. No mystery admin panel. No hidden state smeared across a database unless you really need it.
2. Put the agent where you already think
For me, that is Discord.
The interface matters more than people admit. If I have to open a special dashboard to ask for a site update, I probably won’t. If I can message the agent in the same place I already coordinate projects, the activation energy drops almost to zero.
That is the real UX win: not a prettier CMS, but less ceremony.
3. Give it real access, carefully
An agent that cannot touch the repo or server is an advisor. Useful, but limited.
An agent with scoped access can be an operator.
For my setup, that means:
- SSH credentials live outside the repo
- deployment environment variables live outside the repo
- the agent can build locally before touching production
- the agent can upload only the static output needed for the site
- production verification is part of the loop
This is where you have to be boring on purpose. Secrets should not be pasted into chat. Backups should exist. The deploy target should be explicit. The agent should verify what changed.
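A deliberately boring sketch of "secrets outside the repo": a plain env file that the deploy step sources at runtime. The path and variable names are made up for illustration, and the demo writes to /tmp only so the snippet is self-contained; in real use you create the file by hand, git ignores it, and its contents never appear in chat.

```shell
#!/usr/bin/env sh
set -eu

# In real use this file lives somewhere like ~/.config/site-deploy/env,
# created by you, ignored by git, never pasted into chat.
ENV_FILE=/tmp/site-deploy-env
cat > "$ENV_FILE" <<'EOF'
DEPLOY_SERVER=root@203.0.113.10
DEPLOY_PATH=/var/www/jmbartelt.com/html
EOF

# The deploy step sources it at runtime; the repo never sees the values.
. "$ENV_FILE"
echo "deploy target: $DEPLOY_SERVER:$DEPLOY_PATH"
```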
Do not build a feral root-password goblin.
Build a tool with boundaries.
4. Make verification non-negotiable
The deployment is not done when files upload. It is done when the live page answers correctly.
For this site, that means the agent can run the local checks, build the Astro site, deploy dist/, and then verify the production URL. If DNS or caching makes that ambiguous, it can test the server directly with the right host header.
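For the ambiguous case, curl's `--resolve` flag pins the hostname to a specific origin address while still sending the right SNI and Host header, so you verify the box you just deployed to rather than whatever DNS or a CDN answers with. The IP below is a placeholder; the snippet just assembles the command so you can see its shape.

```shell
#!/usr/bin/env sh
# Build an origin-pinned verification command. SERVER_IP is a placeholder.
set -eu

SERVER_IP="203.0.113.10"
HOST="jmbartelt.com"
URL_PATH="/blog/my-website-has-a-webmaster-now/"

# --resolve forces curl to connect to SERVER_IP while still presenting the
# right hostname for the vhost, bypassing DNS and any cache in front.
VERIFY="curl -fsSI --resolve $HOST:443:$SERVER_IP https://$HOST$URL_PATH"
echo "$VERIFY"
```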
That last step sounds small. It is not. Verification is the immune system of the workflow.
Without it, you have automation cosplay.
What this changes
The obvious win is speed. I can say, “update my homepage,” or “publish this post,” or “fix that portfolio copy,” and the loop is much shorter.
But speed is not the deepest change.
The deeper change is that the website becomes conversationally maintainable. The cost of changing it drops low enough that I actually change it. The site stops being a brochure I periodically excavate and starts acting more like a living surface.
That matters for a personal site because the point is not to freeze a perfect version of yourself. The point is to keep an honest public interface with your current work.
Work changes. The site should keep up.
The caveat
I still need taste. I still need judgment. I still need to decide what belongs here and what does not.
The agent can operate the machinery, but I do not want it hallucinating my professional identity or spraying generic positioning copy all over the place. That is why I care about the writing voice, the content model, the deployment path, and the review loop.
The more capable the agent becomes, the more important the operating frame becomes.
That sounds like a contradiction. Is it? Not really. More autonomy does not remove the need for direction. It increases the value of clear direction.
The agent should be able to execute. I should still be responsible for taste.
That is the contract.
Why I am excited
This feels like a small preview of where a lot of web work is going.
Not because every website needs an agent. Many do not. But because the interface between intent and implementation is compressing.
For years, a website update meant translating intent into a bunch of mechanical steps. Now I can keep more of the interaction at the level of intent:
I want to publish a practical post about this workflow. Inspect the site, draft it in my voice, cut the sludge, build it, deploy it, and verify production.
That is a different relationship with software.
Less dashboard. Less ritual. Less dead tissue between the idea and the artifact.
I do not want a website I have to remember to maintain.
I want one I can talk to.