“All our orchestration relies on Temporal. When a signal hits — someone visits the site or a CRM field changes — we create a Workflow and run the actions as Activities. That’s the heart of Cargo.” — Aurelien Aubert, CEO at Cargo
The typical go-to-market (GTM) tech stack often feels less like a streamlined machine and more like a collection of disconnected parts. CRMs, marketing automation, sales engagement tools — they all hum along, generating data, but rarely in perfect harmony. This fragmentation can lead to siloed data, disjointed customer experiences, and sales reps spending far too little time actually selling.
Cargo, a Y Combinator S23 company, looked at this GTM complexity and saw an engineering challenge. They’re building a revenue orchestration platform, aiming to be the central hub connecting these disparate tools and data streams. The goal, according to CEO Aurelien Aubert, is to ensure AEs focus on the right lead, at the right time, with the right context. Cargo’s vision also leans toward smart automation over large SDR teams.
What makes Cargo different is its engineering-led philosophy. Aurelien explains they take “more of an engineering approach” to solving GTM use cases. Rather than adding another tool, they provide the architecture companies use to build their own “growth engine,” with the data warehouse as the central source of truth.
## An engineering mindset from day one
This technical focus is woven into Cargo’s fabric. As a lean startup — about five full-time employees — they fostered a culture where technical understanding is shared. Aurelien notes that essentially everyone on the early team knew how to code, which helped in fixing bugs, building integrations, and making decisions faster, regardless of role.
This environment bred autonomy. Without a dedicated CTO or CPO in the early days, engineers took ownership of features from start to finish, defining specs and timelines. They were capable of taking a feature from “understanding the problem to… releasing and implementing,” as Aurelien describes it.
## The scalability tightrope: Infrastructure vs. UI
Building a powerful GTM platform requires a delicate balancing act. Doing “interesting things in go-to-market,” as Aurelien puts it, demands deep infrastructure and data capabilities — connecting to numerous sources, transforming data, and ensuring scalability. But all that backend power needs a usable frontend. The team constantly faced the trade-off: invest heavily in infrastructure or prioritize shipping user-facing features.
“Investing too much in infrastructure without a good UI is useless, but a great UI without robust infrastructure to support it is equally ineffective.” — Aurelien Aubert, CEO at Cargo
This tension shaped their early architecture. They run a single backend (monolith) and deploy workers on Kubernetes, including dedicated workers for some customers or plans when needed. Orchestrating multi-step workflows — enriching leads, scoring accounts, routing opportunities — across various third-party APIs, each with unique limitations, was particularly demanding.
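As a rough sketch of what such a deployment can look like (the task queue name and file layout here are assumptions, not Cargo's actual setup), a Temporal worker is a small Node process that Kubernetes can run as an ordinary Deployment, with a distinct task queue per customer or plan when dedicated capacity is needed:

```typescript
import { Worker } from "@temporalio/worker";
import * as activities from "./activities";

// Hypothetical worker entry point. Scaling workers is a matter of running
// more replicas of this process; pointing a replica set at a dedicated
// taskQueue is one way to isolate a specific customer or plan.
async function run(): Promise<void> {
  const worker = await Worker.create({
    workflowsPath: require.resolve("./workflows"), // Workflow definitions
    activities, // Activity implementations (I/O, API calls)
    taskQueue: "gtm-orchestration",
  });
  await worker.run(); // blocks, polling the task queue until shutdown
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});
```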
Data storage became a critical bottleneck. The team knew that their initial choice, Postgres, wouldn’t scale indefinitely for the analytical workloads central to Cargo’s value proposition. “We knew… Postgres at some point won’t do the job anymore,” Aurelien recalls, “but it was simple to build on top of Postgres at the beginning.” As usage grew, the limitations became clear, forcing a migration. Moving to ClickHouse, a columnar database optimized for OLAP queries, provided the necessary performance for their data-intensive features.
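The kind of query behind Cargo's data-intensive features, wide aggregations over large event tables, is exactly what a columnar store is built for. As an illustration (the table, columns, and connection details here are assumptions, not Cargo's schema), here is what such an aggregation might look like with the official ClickHouse Node.js client:

```typescript
import { createClient } from "@clickhouse/client";

const clickhouse = createClient({ url: "http://localhost:8123" });

// Hypothetical OLAP query: engagement rollup per account. Scanning only
// the referenced columns across millions of rows is where ClickHouse
// outperforms a row-oriented store like Postgres.
export async function accountEngagement(workspaceUuid: string) {
  const resultSet = await clickhouse.query({
    query: `
      SELECT account_id,
             count() AS events,
             max(occurred_at) AS last_seen
      FROM gtm_events
      WHERE workspace_uuid = {workspace:String}
      GROUP BY account_id
      ORDER BY events DESC
      LIMIT 100
    `,
    query_params: { workspace: workspaceUuid },
    format: "JSONEachRow",
  });
  return resultSet.json();
}
```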
## Finding the right conductor: Temporal
With ClickHouse handling analytics, the core challenge became orchestrating the complex, often long-running, and failure-prone workflows that fed it data and interacted with the GTM ecosystem. Simple background job queues often struggle with the statefulness and reliability required.
Cargo chose Temporal. When asked about critical architecture components, Aubert is direct: “All our orchestration relies on Temporal today.” Drawing on prior experience with Redis/RabbitMQ-style queues, he acknowledged Temporal’s initial learning curve but emphasized its scalability and reliability.
Temporal’s Durable Execution was a game-changer. Workflows are stateful and resilient by design; their progress is automatically saved, allowing them to survive worker crashes or restarts and resume precisely where they left off. This is crucial for GTM processes that might run for days or weeks.
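To make that concrete, here is a minimal sketch (the Workflow name and steps are hypothetical, not Cargo's code) of a week-long process written with the Temporal TypeScript SDK. The timer is durable: it is persisted server-side rather than held in process memory, so the wait survives worker crashes and deploys.

```typescript
import { sleep } from "@temporalio/workflow";

// Hypothetical long-running GTM Workflow. If the worker restarts on day 3,
// Temporal replays the Workflow to its saved state and the timer keeps
// counting down; no cron jobs or manual checkpointing required.
export async function nurtureLeadWorkflow(leadId: string): Promise<void> {
  // ...enrichment and scoring Activities would run here...

  await sleep("7 days"); // durable timer, not an in-memory setTimeout

  // ...follow-up Activities run here, even across crashes and deploys...
}
```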
Cargo leverages Temporal for several key functions:
- Event-driven GTM workflows: External signals, like a website visit or CRM update, trigger Temporal Workflows. An event automatically creates a Temporal Workflow, and the subsequent actions become Activities within that Workflow. These Workflows coordinate sequences of Activities — fetching data, calling APIs, running models, updating systems. Temporal’s built-in retries handle the inevitable failures when dealing with external services.
- Data synchronization (ETL): Temporal ensures reliable, scheduled syncs between tools like Salesforce or HubSpot and Cargo’s warehouse (ClickHouse). Pipelines also handle fan-out to customer-preferred stores (e.g., Snowflake/BigQuery) when needed. These dependable ETL jobs (often hourly) fetch, transform, and load data, retrying failed steps and tracking progress.
- AI workflow orchestration: Cargo uses AI for tasks like scoring companies, summarizing context for AEs, and drafting outreach. These are multi-step processes (prepare → model call → post-process). Temporal orchestrates the steps, manages dependencies, and handles retries amid rate limits and variable prompts.
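The first pattern above, turning an external signal into a Workflow run, might look like this with the Temporal TypeScript client (the task queue name, Workflow ID scheme, and argument values are assumptions for illustration):

```typescript
import { Connection, Client } from "@temporalio/client";

// Hypothetical webhook handler: a CRM field change becomes one durable
// Workflow run, and its actions run as Activities inside that Workflow.
export async function onCrmFieldChanged(
  runUuid: string,
  workspaceUuid: string
): Promise<void> {
  const client = new Client({ connection: await Connection.connect() });

  await client.workflow.start("orchestrationWorkflowRunHandle", {
    taskQueue: "gtm-orchestration",
    // A deterministic ID dedupes duplicate webhook deliveries.
    workflowId: `crm-change-${workspaceUuid}-${runUuid}`,
    args: [runUuid, workspaceUuid],
  });
}
```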
A concrete example: when an opportunity moves to Closed Won, Cargo can automatically identify similar companies (look-alikes) and assign new outreach to the same AE.
```typescript
import { proxyActivities } from "@temporalio/workflow";
import type * as activityTypes from "./activities";

// I/O and heavy lifting live in Activities; the Workflow stays deterministic.
const activities = proxyActivities<typeof activityTypes>({
  startToCloseTimeout: "5 minutes", // illustrative timeout config
});

export const orchestrationWorkflowRunHandle = async (
  uuid: string,
  workspaceUuid: string
): Promise<void> => {
  const { nextExecutionConfig } = await activities.startRun({
    uuid,
    workspaceUuid,
  });
  let currentExecutionConfig = nextExecutionConfig;
  while (currentExecutionConfig !== undefined) {
    // Each node runs as an Activity, so failures are retried durably.
    const executeNodeResult = await activities.executeNode({
      currentExecutionConfig,
    });
    if (executeNodeResult.outcome === "notExecuted") {
      await activities.finishRun({
        uuid,
        workspaceUuid,
        status: "error",
        errorMessage: executeNodeResult.errorMessage,
      });
      return;
    }
    // Advance to the next node; undefined ends the run.
    currentExecutionConfig = executeNodeResult.nextExecutionConfig;
  }
  await activities.finishRun({
    uuid,
    workspaceUuid,
    status: "success",
  });
};
```
This is a Temporal Workflow that runs an orchestration "run." It initializes with `startRun`, then iterates through an execution graph one node at a time by invoking `executeNode` as an Activity (I/O and heavy work live in Activities). On failure, it records the error via `finishRun` and exits; otherwise it advances to the next node until completion, then marks the run a `success`. Because the loop and branching live in Workflow code, Temporal durably persists progress and can resume after worker crashes or deploys.
## Lessons from the trenches
Building a platform like Cargo inevitably involves learning curves. One early insight in Cargo’s Temporal journey: Workflows aren’t meant to carry large payloads. The team initially treated them like traditional job queues, which led to inefficiencies. After shifting the approach — keeping Workflows lightweight and pushing heavier data into Activities — performance improved dramatically.
Adopting Temporal requires a change in mental model: treat Workflows as durable, event-driven state machines. Embracing that model is what unlocks Temporal’s reliability and scalability.
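The payload lesson reduces to a simple rule: pass references through Workflows and let Activities resolve them. As an illustrative sketch (the types and names here are assumptions, not Cargo's code):

```typescript
// Temporal records every Workflow input and Activity result in the run's
// event history, so bulky inputs bloat history and slow down replay.

interface EnrichedLead {
  id: string;
  enrichment: Record<string, unknown>; // can be tens of KB per lead
}

// Anti-pattern: start the Workflow with the full enriched object.
// Better: hand the Workflow just an ID; an Activity re-fetches the data.
export function toWorkflowInput(lead: EnrichedLead): { leadId: string } {
  return { leadId: lead.id };
}

// Inside the Workflow, an Activity would resolve the reference on demand:
//   const lead = await activities.fetchLead(input.leadId);
```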
Beyond Temporal, Aurelien highlights a familiar startup tension: deciding when to invest in infrastructure versus when to keep shipping features. The move from Postgres to ClickHouse, and tuning orchestration patterns, both came down to timing. The theme is consistent: evolve the architecture just in time while continuing to deliver value.
## Conclusion
Cargo is applying engineering rigor to the fragmented GTM landscape. Their platform aims to unify disparate tools and automate complex processes so revenue teams can work with better timing and context. That requires a sophisticated backend capable of handling large data volumes and reliably orchestrating workflows across third-party systems. Technologies like ClickHouse and Temporal are central to this mission. Temporal is at the heart of Cargo, powering the stateful orchestration that keeps modern revenue operations moving.