How I shipped a Temporal-powered product in three weeks, one year out of college

AUTHORS
Pasha Fateev
DATE
Mar 26, 2026
DURATION
12 MIN

I got my CS degree in 2024. I worked for a year. Then one day, I met a founder in the Temporal Community Slack who was hiring for a small startup, and that conversation changed the way I think about software.

About three weeks after joining that startup, I’d written roughly 50,000 lines of code running on Temporal Cloud. Email integrations, AI voice, OpenAI for request categorization, and JWT-backed magic links, all orchestrated by Temporal Workflows. My only prior Temporal experience? The 101 and 102 courses. Pairing Temporal with AI coding tools is what made that possible.

But let me back up, because a month earlier I wouldn’t have believed any of that myself.

Why Temporal felt out of reach#

I’d heard about Temporal before, even tinkered with it a bit. But I was early in my career, and I wanted to learn things I could actually build something with. I didn’t have complex distributed systems to orchestrate or applications that demanded extreme reliability. Temporal seemed powerful, but it also seemed like overkill without a good reason to learn it.

My first job out of college was at a company built on top of Temporal. I got to see firsthand what it was capable of, but my role didn’t require me to touch the Temporal code directly. I remember thinking, “Maybe now is the time to learn it?” But I put it off: I had plenty of other things to learn that applied directly to my day-to-day.

Nothing to lose#

The catalyst was, as it often is, a big life transition. (I got laid off.)

While looking for work, I met the founder of ActiveMGT.ai, a small startup building a property management system on top of Temporal. “Cool,” I thought. “Here’s my opportunity to learn.”

To be honest, I felt out of my depth. One year out of college, basically zero Temporal experience. But I had nothing to lose, and Temporal had always sounded interesting, so I went for it.

The founder was an expert in property management, not software engineering. But he had his finger on the pulse of industry trends, and he was adamant: he didn’t just want me to build the software. He wanted me to build it using AI. That was the deal. AI-assisted development, from day one.

I wasn’t sold. At that point, I’d been using ChatGPT and Cursor for a while, and frankly, I found them underwhelming. Fine for answering specific questions or generating small bits of code, but nothing I’d trust for anything serious. My expectations were low.

Still, I figured I’d give it a shot. I took a course on agentic coding, and something clicked. The approach was different enough from what I’d tried before that I was willing to suspend disbelief.

Leaky faucets and long-running Workflows#

Say your faucet is leaking. You call or email the owner. The owner processes the request and coordinates with a vendor. The vendor assesses the damage and gives you a quote. You schedule the repair. You confirm the work is done. The vendor sends an invoice. The vendor gets paid.

I’m glossing over a lot, but since Temporal sits at the core of the business logic, you need at least a sneak peek.

Each of those checkpoints can fail in different ways. Temporal handles the obvious transient errors (API failures, network issues) out of the box with minimal effort.

But there are less obvious situations too, like a tenant not confirming their availability or a vendor ghosting on an invoice request. Those required us to define the behaviors ourselves. Each stage had a custom timeout. If a step stalled, the Workflow would terminate, the relevant parties would get notified, and the tenant would need to submit a new request.
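In our code those per-stage deadlines were Temporal timeouts; the Python SDK expresses the pattern with `workflow.wait_condition(..., timeout=...)`. As a rough plain-asyncio sketch of the behavior (the stage name and timeout here are made up for illustration):

```python
import asyncio


async def wait_for_stage(event: asyncio.Event, stage: str, timeout_s: float) -> bool:
    """Wait for a human response to arrive, or give up at the stage's deadline."""
    try:
        await asyncio.wait_for(event.wait(), timeout=timeout_s)
        return True
    except asyncio.TimeoutError:
        # In the real system: notify the relevant parties and end the Workflow,
        # so the tenant has to submit a new request.
        print(f"Stage {stage!r} timed out; notifying parties")
        return False


async def main() -> bool:
    confirmed = asyncio.Event()
    # Nobody sets the event, so this stage stalls and hits its deadline.
    return await wait_for_stage(confirmed, "tenant-availability", timeout_s=0.05)


stalled_result = asyncio.run(main())
print("completed" if stalled_result else "abandoned")
```

The Temporal version looks almost the same, except the timer survives process restarts: a Workflow can wait days for a Signal without anything running in the meantime.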

How it all fit together#

[Diagram: the maintenance Workflow in Temporal]

A Parent polling Workflow checked an external property management API at a set interval. When it found a new maintenance request, it kicked off a Child Workflow for that request. The Child Workflow handled triage, approvals, scheduling, estimates, and completion.

Humans (owners, vendors, tenants) responded through email or the app, and those responses came back into Temporal as Signals, which resumed the Workflow. The whole thing ran reliably in the background until the request was fully resolved.

Here’s the main Workflow:

@workflow.defn
class PollingWorkflow:
    @workflow.run
    async def run(self, input_data: PollingWorkflowInput) -> None:
        self.vendor_label = input_data.vendor_label
        workflow.logger.info(
            "Starting polling workflow (poll_interval=%ds, batch_limit=%d)",
            input_data.poll_interval,
            input_data.batch_limit,
        )
        while True:
            await self._execute_poll_cycle(input_data)
            workflow.logger.info(
                "Sleeping for %d seconds until next poll",
                input_data.poll_interval,
            )
            await workflow.sleep(timedelta(seconds=input_data.poll_interval))
            if workflow.info().is_continue_as_new_suggested():
                workflow.logger.info("Continue-as-New suggested, restarting workflow")
                await workflow.wait_condition(workflow.all_handlers_finished)
                workflow.continue_as_new(input_data)
                return

The Parent Workflow was a durable dispatcher, not just a timer. Each maintenance request became its own Child Workflow with its own state, timeouts, and human waiting points, so the system could keep running reliably even as the poller itself Continued-as-New or restarted in the background.

A note on this code: The polling loop above is actually an anti-pattern. I didn’t know that at the time. I’m leaving the original code in because it worked and because it’s a good example of something the AI agent didn’t catch. Here’s how I’d do it today:

@workflow.defn
class PollingWorkflow:
    @workflow.run
    async def run(self, input_data: PollingWorkflowInput) -> FetchTasksOutput:
        return await workflow.execute_activity(
            fetch_doorloop_tasks_until_available,
            FetchTasksInput(),
            start_to_close_timeout=timedelta(seconds=10),
            schedule_to_close_timeout=timedelta(days=7),
            retry_policy=RetryPolicy(
                initial_interval=timedelta(seconds=input_data.poll_interval),
                maximum_interval=timedelta(seconds=input_data.poll_interval),
                backoff_coefficient=1.0,
            ),
        )

@activity.defn
async def fetch_doorloop_tasks_until_available(
    input_data: FetchTasksInput,
) -> FetchTasksOutput:
    result = await fetch_doorloop_tasks(input_data)
    if result.count == 0:
        raise ApplicationError(
            "No matching DoorLoop tasks yet",
            type="NoTasksAvailable",
        )
    return result

The more idiomatic version also keeps Workflow history manageable. In my original design, every poll cycle lived in Workflow code, so each sleep-and-check cycle appended more Events to the history. With server-side Activity retries, individual attempts don’t add Events to the Workflow history, so it stays small and quiet even if the system waits a long time for new work.

Back to the original code. Here’s the _execute_poll_cycle method:

async def _execute_poll_cycle(self, input_data: PollingWorkflowInput) -> None:
    workflow.logger.info("Starting poll cycle")
    tasks_output = await self._fetch_tasks()
    workflow.logger.info(
        "Retrieved %d tasks from DoorLoop API (latency: %.2fms)",
        tasks_output.count,
        tasks_output.api_latency_ms,
    )
    if tasks_output.count > 0:
        await self._process_tasks(tasks_output.tasks, input_data.batch_limit)
    workflow.logger.info("Poll cycle completed")

This is the handoff loop in miniature: check the external property management system, see what came back, and only fan out work when there’s actual work to do. It kept the Parent Workflow focused on orchestration instead of business logic.

async def _fetch_tasks(self) -> FetchTasksOutput:
    return await workflow.execute_activity(
        fetch_doorloop_tasks,
        FetchTasksInput(),
        start_to_close_timeout=timedelta(seconds=10),
        retry_policy=RetryPolicy(
            initial_interval=timedelta(seconds=1),
            backoff_coefficient=2.0,
            maximum_interval=timedelta(seconds=16),
            maximum_attempts=5,
        ),
    )

The poller didn’t call the external API directly from Workflow code. It pushed that I/O into an Activity, wrapped it in a timeout, and gave it a Retry Policy. That’s an important part of the design: external systems are unreliable, but the orchestration layer can still stay deterministic and resilient.
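For a sense of what that Retry Policy actually does, here’s a small helper of mine (not part of the SDK) that computes the waits between attempts, ignoring any server-side jitter:

```python
from datetime import timedelta


def retry_intervals(initial: timedelta, backoff: float,
                    maximum: timedelta, max_attempts: int) -> list[timedelta]:
    """Waits between attempts for a Temporal-style Retry Policy.

    max_attempts attempts means at most max_attempts - 1 waits.
    """
    waits = []
    current = initial
    for _ in range(max_attempts - 1):
        waits.append(min(current, maximum))
        current = timedelta(seconds=current.total_seconds() * backoff)
    return waits


# The policy from the Activity above: 1s initial, x2 backoff, 16s cap, 5 attempts.
schedule = retry_intervals(timedelta(seconds=1), 2.0, timedelta(seconds=16), 5)
print([w.total_seconds() for w in schedule])  # → [1.0, 2.0, 4.0, 8.0]
```

So a flaky DoorLoop call gets five tries over roughly fifteen seconds before the error surfaces, and none of that retry logic lives in my code.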

async def _process_tasks(self, tasks: list[dict], batch_limit: int) -> None:
    tasks_to_process = tasks[:batch_limit]
    for task in tasks_to_process:
        task_id = task.get("id", "")
        if not task_id or not isinstance(task_id, str):
            workflow.logger.error(
                "Skipping task with invalid ID: %s (task_data=%s)",
                task_id,
                task,
            )
            continue
        await self._spawn_child_workflow(task_id, task)

This part of the code handled the messy edge of integrating with a real external system: limit each polling batch, validate the returned Tasks, and ignore malformed ones without taking down the rest of the Workflow.

async def _spawn_child_workflow(self, task_id: str, task_data: dict) -> None:
    workflow.logger.info(
        "Spawning child workflow for task %s (workflow_id=task-%s)",
        task_id,
        task_id,
    )
    await workflow.start_child_workflow(
        TaskHandlerWorkflow.run,
        TaskHandlerInput(
            task_id=task_id,
            task_data=task_data,
            vendor_label=self.vendor_label,
        ),
        id=f"task-{task_id}",
        parent_close_policy=ParentClosePolicy.ABANDON,
    )

Once a valid Task was found, the Parent Workflow would hand it off to a dedicated Child Workflow with its own ID and lifecycle. The ParentClosePolicy.ABANDON setting mattered here: once a request started, it could keep running independently even if the Parent poller restarted or rolled over.
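Deriving the Child Workflow ID from the Task ID also gave us deduplication for free: if the poller saw the same Task twice, the second start would fail because a Workflow with that ID already existed. The idea in miniature (a toy model of mine, not SDK code):

```python
class Dispatcher:
    """Toy model of Workflow-ID-based dedup: starting the same ID twice is rejected."""

    def __init__(self) -> None:
        self.running: set[str] = set()

    def start_child(self, task_id: str) -> bool:
        workflow_id = f"task-{task_id}"
        if workflow_id in self.running:
            # Temporal itself rejects the duplicate start server-side.
            return False
        self.running.add(workflow_id)
        return True


d = Dispatcher()
print(d.start_child("123"))  # → True
print(d.start_child("123"))  # → False: the same Task polled twice doesn't fork two handlers
```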

Oh, and it makes phone calls#

We also built a voice AI proof of concept, because a lot of property management still happens over the phone. The integration itself was pretty straightforward.

What made it interesting was how it fit into the Temporal model. When the Child Workflow needed owner confirmation on a vendor’s invoice, it would block and wait for a Signal. But that Signal could come from multiple sources, since we simultaneously sent an email and made a voice call. Whichever response reached the Workflow first moved it forward. Any additional Signal was gracefully ignored.
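In the real Workflow this was a `@workflow.signal` handler plus a `workflow.wait_condition`; here’s the first-response-wins race sketched in plain asyncio (the sources, decisions, and timings are made up for illustration):

```python
import asyncio
from typing import Optional


class ApprovalGate:
    """First response (email or voice) wins; later Signals are ignored."""

    def __init__(self) -> None:
        self.decision: Optional[str] = None
        self.source: Optional[str] = None
        self._event = asyncio.Event()

    def signal(self, decision: str, source: str) -> None:
        if self.decision is not None:
            return  # duplicate Signal: gracefully ignored
        self.decision = decision
        self.source = source
        self._event.set()

    async def wait(self) -> str:
        await self._event.wait()
        return self.decision


async def main() -> tuple:
    gate = ApprovalGate()
    loop = asyncio.get_running_loop()
    # The voice call answers first; the email reply lands later and is ignored.
    loop.call_later(0.01, gate.signal, "approve", "voice")
    loop.call_later(0.05, gate.signal, "reject", "email")
    decision = await gate.wait()
    await asyncio.sleep(0.1)  # let the late Signal arrive and be dropped
    return decision, gate.source


decision, source = asyncio.run(main())
print(decision, source)  # → approve voice
```

The Temporal version has the same shape, but the “gate” is durable state inside the Workflow, so the race works even if the responses arrive days apart.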

When you picked up the call, you’d hear the address of the unit, the vendor’s name, and the invoice amount, and then the system would ask whether you wanted to approve or reject. The funny part: it was very hard to teach the voice agent manners. The moment you responded, it would often just say “goodbye”.

Okay, but how did AI actually help?#

We tried tools like Blitzy. We tried using Claude and Codex to one-shot the entire solution. It didn’t work, for two reasons:

  1. The models were less sophisticated than they are now.

  2. Software development is fundamentally iterative. We had a high-level design, but we didn’t know what dependencies we’d need, what we’d have to build ourselves, or what walls we’d hit with third-party APIs until we got in there and started building.

So we found a balance. We had two engineers: a more senior person focused on testing infrastructure and feedback, and me doing the main build-out, which really meant overseeing Claude Code.

Building trust#

[Diagram: the agent trust loop]

We started with design, hashing out the scope of the problem together. These sessions could be really short depending on the feature, mostly aligning and sanity-checking. From there, I’d generate a spec with AI.

My secret weapon was an agent I built on principles from The Pragmatic Programmer: I fed Claude the book’s key points and used them to create an interactive design agent. Claude would challenge my assumptions, ask clarifying questions, and push me to think through edge cases before I wrote a single line of code.

Once I was satisfied with the plan, I’d let the coding agent do its work.

Over time, I began trusting the agent more and more, though I still jumped in when I saw it doing something strange. The key thing I learned: whenever the agent made a mistake, capture the behavior and adjust your instructions in CLAUDE.md (the native config file for Claude Code, analogous to the agent-agnostic AGENTS.md). A few examples:

  • “Always use uv run poe test instead of pytest directly.” The agent kept stumbling on this. It would eventually figure it out, but because every instance starts fresh, I got tired of watching it hit the same wall.

  • “No database in MVP. All state in Temporal Workflows.” The agent kept trying to create a database. We’d made a clear design decision for the MVP that it simply didn’t know about.

  • “Never use random numbers or datetime.now() in Workflows.” This is a Temporal fundamental: Workflows must be deterministic. The agent generally understood this, but because each instance starts fresh and has to take in a lot of context at once, it could drift over time and forget a detail like this. Persisting the rule in CLAUDE.md solved the problem.

The broader lesson: if you find yourself repeating an instruction, persist it. The agent usually responds.
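The determinism rule deserves a concrete illustration. On replay, Temporal re-executes Workflow code and expects it to make identical decisions, which is why the Python SDK provides `workflow.now()` and `workflow.random()` in place of `datetime.now()` and the bare `random` module. The principle in miniature, with hypothetical vendor names:

```python
import random


def pick_vendor(vendors: list, seed: int) -> str:
    # A seeded RNG makes the "random" choice reproducible on replay,
    # which is the idea behind workflow.random() in the Temporal Python SDK.
    return random.Random(seed).choice(vendors)


vendors = ["Ace Plumbing", "FixIt Co", "RapidRepair"]
first_run = pick_vendor(vendors, seed=42)
replay_run = pick_vendor(vendors, seed=42)
print(first_run == replay_run)  # → True: replay makes the same choice
```

An unseeded `random.choice` could pick a different vendor on replay, and the Workflow’s recorded history would no longer match its code.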

Teaching the agent Temporal#

That last bullet is worth expanding on. When we were developing, Claude was aware of Temporal but didn’t have a deep enough understanding to let it loose. So I had Claude read through all the Temporal Docs, learn the primitives and common gotchas, and create a guideline document for itself.

Anytime it wrote a Temporal Workflow, it referenced that document. I’m not sure how critical this step would be today, but to be safe, here are some Temporal Developer Skills we wrote that you can use with your agents.

I also set strict rules around testing: “Never change a test without asking me first, and only do it if there’s a sufficient reason, such as a change in the behavior of the code being tested.”

Tests came first, and I made sure I understood what was being built at a high level. I didn’t check every single line (we were working against the clock) but I didn’t do it blindly either.

It’s not nearly as hard as you think#

In hindsight, most of the resistance I felt toward Temporal was purely psychological. I’d been to Replay multiple times. I’d talked to the engineers who work here and the engineers who use Temporal, and they seemed to have some secret computer science neural pathway that I lacked. It was intimidating. I’m not sure I would have attempted to build a real Temporal Workflow by myself with barely any experience.

That’s the biggest downside to Temporal: it looks harder than it actually is. And I had to watch AI do it to understand that.

AI agents taught me Temporal. And that’s how I’d recommend using them. Maybe one day agents will write all our code, but until then, understanding still matters — even if you’re not writing most of the lines yourself.

If Temporal has felt out of reach for you the way it did for me, try this: take the 101 and 102 courses, open up an AI coding assistant, pick a real project, and just start building. Temporal Cloud makes the infrastructure part painless so you can focus on the actual problem. Ask questions. Experiment. Iterate. Document what you learn as you go.

If I did it, you can too.

Now I work at Temporal. The experience was that compelling.
