
How I Learned to Stop Molting and Love the Cron

The story of how a silent script can sometimes do more for you than tech’s flashiest trend.

You couldn’t escape it last week. Every other post on my LinkedIn feed was about OpenClaw. It was in the newsletters. It was on Google News. Ground News - the app I use to track major world events - was surfacing Moltbook coverage alongside geopolitics and natural disasters.

So I did what any reasonable AI engineer would do. I tried it myself.

This is the story of how I set up an OpenClaw agent, fought with it for a weekend, replaced it with a shell script, and what that says about AI product design.

Act 1: Meet Botcrates

In honor of one of history’s most prominent proxy names, I named my agent Botcrates, after Socrates. The first thing I pointed it at wasn’t a personal assistant task; it was Moltbook.

If you haven’t encountered it yet, Moltbook is a social media platform for agents. Agents post, reply, form threads, and riff on each other’s outputs. It’s strange and interesting in a way that’s hard to describe until you’ve scrolled through it for a while, and it came out of nowhere (more on that later) last week. I wanted to monitor what was happening there to see if any useful insights were falling out of the conversations. I didn’t really expect to find agents orchestrating a robot uprising, but I currently work with multi-agent systems quite a bit, and I’m deeply curious about how different structures self-organize in those networks.

That worked well enough to get me curious. So I moved on to something more practical: processing my iMessages for events and creating calendar entries. Classic personal assistant territory. The kind of thing that should be a solved problem by now.

But before I got into the personal assistant work, I needed to think about where this thing was going to run and what it was going to have access to. Security researchers were sounding all the alarm bells by the end of last week, so I took these considerations very seriously.

Act 2: The Setup

I started in a sandboxed VM on my laptop. That lasted about a day before I decided a laptop that sleeps, travels, and occasionally runs out of battery isn’t a great home for a persistent agent. I already had a Raspberry Pi on my network, so I moved there. OpenClaw doesn’t require a tremendous amount of resources to run, since it’s mostly calling APIs.

The security posture I landed on:

The Pi was network-isolated with Tailscale and ACLs restricting what it could talk to. For feeding it my messages, I used BlueBubbles to emit webhooks over the Tailscale network. Those landed in a local file on the Pi, which an isolated cron job would pick up and process. No direct access to my message database.
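
To make the hand-off concrete, here’s a minimal sketch of the consumer side, assuming BlueBubbles appends one JSON payload per line to a file. The paths, the schedule, and the final hand-off command are illustrative placeholders, not my exact setup.

```sh
#!/bin/sh
# drain-inbox.sh: run from cron, e.g.
#   */5 * * * * /home/pi/bin/drain-inbox.sh >> /home/pi/logs/drain.log 2>&1
# Assumes BlueBubbles webhooks land as JSON lines in inbox.jsonl.
set -eu
INBOX=/home/pi/agent/inbox.jsonl
BATCH=/home/pi/agent/batch-$(date +%s).jsonl

[ -s "$INBOX" ] || exit 0   # nothing arrived since the last run
mv "$INBOX" "$BATCH"        # atomic rename keeps this run's batch stable
process-batch "$BATCH"      # hypothetical hand-off into the agent
```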

For Google and GitHub, I created separate OpenClaw accounts with read-only access to my code and calendar. The Google account could create calendar invites — but only by inviting me, not by writing directly to my calendar. Every permission boundary was an attempt to contain prompt injection risk. If someone sent me a carefully crafted text message, I didn’t want Botcrates to dutifully execute whatever instructions were embedded in it.

All of this took me a good chunk of the weekend. I felt okay about the architecture. Layered permissions, network isolation, minimal attack surface.

Then I actually started using it.

Act 3: The Frustration

OpenClaw is essentially an alpha build right now, so this isn’t meant to criticize the project, just to be honest about the reality of using it. Big shout-out to the open-source community working to stand up a system like this for people to use.

With that said, the bugs came fast.

The agent would crash. The main thread would get lost, and there was no obvious way to recover it. I saw logs about memory file format issues I never found the source of. The logging was sparse to the point of being useless; I’d frequently get notifications for tool calls that were just… empty. No indication of what had happened or why.

Configuration was its own headache. The values I set via the CLI weren’t the values the agent was actually seeing. The cron integration required a specific output key prefix (non-JSON) to pick up results from the isolated cron, but the skills layer expected JSON. These two things didn’t agree, and reconciling them felt like the kind of rabbit hole that shouldn’t exist in a product people are writing blog posts about as the future of computing.

Then, less than 24 hours in, the Moltbook key database turned out to be fully open to the web, and the keys agents use to authenticate leaked. The fallout blocked access to my original account and broke the integration entirely.

I sat there staring at a broken setup and asked myself the question I should have asked at the start: what does this actually do for me?

I had a heavyweight, buggy, alpha-stage system that I was trying to constrain into a structured workflow. I live in reliable-workflow land at work; I can’t ship LLM-based tools that have low success rates or lack clear logging.

This system is a powerful first step toward LLM workflows that accomplish real things, but the reality right now is that it isn’t consistent, configurable, or intuitive enough for someone who has a specific, repeatable goal in mind and expects reliable outputs.

Act 4: The Shell Script

I stepped back and thought about what I actually needed. Pull events from my messages. Create calendar entries. That’s it.

On macOS, your iMessages live in a local SQLite database. One script can query it directly — no webhooks, no BlueBubbles, no network hops. I piped the output to the Gemini CLI with Google Workspace integration to create the calendar events. You could just as easily do this with two or three direct LLM API calls instead of the CLI.
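
For the curious, the core of it looks something like this. The chat.db path and the message/handle tables are the standard macOS schema; the prompt and the Gemini CLI invocation are simplified stand-ins for what I actually run, and you’d need the CLI authenticated with Workspace access (plus Full Disk Access for whatever reads chat.db).

```sh
#!/bin/sh
# Sketch: pull the last 24h of iMessages and hand them to the LLM.
# Dates in chat.db are nanoseconds since 2001-01-01 (Apple epoch).
# Caveat: newer macOS versions sometimes store text in attributedBody
# instead of the text column; that case is ignored here.
set -eu
DB="$HOME/Library/Messages/chat.db"

sqlite3 -readonly "$DB" "
  SELECT datetime(m.date/1000000000 + strftime('%s','2001-01-01'),
                  'unixepoch','localtime') AS ts,
         h.id AS sender,
         m.text
  FROM message m
  JOIN handle h ON h.ROWID = m.handle_id
  WHERE m.text IS NOT NULL
    AND m.date/1000000000 + strftime('%s','2001-01-01')
          > strftime('%s','now','-1 day');
" | gemini -p 'Extract any real-world events from these messages and create calendar invites for them.'
```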

The whole thing took about three hours to build, with Claude helping me write it.

The prompt injection surface is smaller now but not zero. A saved contact could still send me a message with injected instructions. So I limited the Google Workspace auth to calendar-only. I also cross-reference the contact list as a basic filter — if I don’t have you saved, your messages don’t get processed.
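
The filter itself is almost embarrassingly simple. Assuming a hypothetical known_senders.txt with one saved phone number or email per line (producing it from Contacts is the fiddly, version-dependent part), it slots between the query and the LLM call:

```sh
# Keep only rows whose sender appears in the saved-contacts list.
# known_senders.txt is a hypothetical export; everything else passes
# through untouched to the Gemini step.
grep -F -f "$HOME/.config/imsg/known_senders.txt" recent_messages.tsv \
  > trusted_messages.tsv
```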

Is it perfect? No. But it’s a script I can read top to bottom. The failure modes are obvious. The logs are just stdout. When something breaks, I know why within thirty seconds.

The Takeaway

I’m not writing this to dunk on OpenClaw. Agent platforms will mature. The security model will get better. The logging will improve. Moltbook has patched its database, and it remains a platform I’ll keep an eye on.

All of this is also open source! That’s a huge win for access to this kind of tooling, and I love seeing people get to play in this space. It really does feel like a window into a path computing could go down.

But right now, there’s a gap between what these tools promise and what they deliver in practice. If your use case can be expressed as a pipeline — get data, transform it, call an API — you probably don’t need an autonomous agent. You need a script and a cron job. Keep your tasks small and specific for a much more reliable result.
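
In my case, the whole “agent” reduces to one crontab line. The path and cadence below are placeholders, and on macOS you’ll likely need to grant cron Full Disk Access so the script can read chat.db:

```sh
# Run the pipeline three times a day; logs are just appended stdout.
0 9,14,19 * * * /Users/me/bin/messages-to-calendar.sh >> /Users/me/logs/m2c.log 2>&1
```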

There is such a temptation to build at the edge of capabilities, to make the flashiest, most futuristic version of a solution, to “feel the AGI.” But I personally think the cron job is just as compelling a picture of what the future of software looks like. Elegant, narrow, controlled, and silent. All that while delivering a better end result.

Flashy products are usually more for the person who builds them than the end user. Do regular people really want a chatbot with control of their whole life? Or do they want their technology to solve their problems without them having to ask, so they can get back to what they care about?

The boring solution worked. It usually does.