Modernizing Monoliths with Temporal

Joshua Smith & Meagan Speare

Staff Solutions Architect & Manager, Product Marketing

We regularly get asked, “How can I modernize my monolith with Temporal?” or “How can Temporal help improve my systems if I’m not using microservices?”

In this post, we’ll walk through why you might want to improve a monolithic system and the process we recommend for doing it with Temporal.

Why Change a Monolith?

I commonly see these reasons for monolith modernization:

  • Agility: monoliths can be complex and hard to manage, change, and test. Microservices are smaller and can be quicker to change, test, and release.
  • Performance & scalability: scaling up a monolith requires a lot of resources. Usually there are certain parts of the monolith that need more resources, so they could be spun out and scaled up as separate services.
  • Cognitive load: monoliths are complex, with everything packed into one big application, which makes it difficult to understand, reason about, and contribute to.
  • Quality: monoliths are often built without reliable tests. Rewriting is seen as a chance to add tests, gain a deeper understanding, and “do it the right way this time.”
  • Unclear domains and responsibilities: monoliths are often maintained by large teams or even multiple teams. The boundaries between areas of the codebase are fuzzy or undefined, and as a result, responsibility for maintenance is unclear.
  • High time to value: monolith release cycles are often long because of the time needed to test and deploy, so teams release less frequently to minimize the risk of change. Features often sit finished for months before they reach users.

Not every monolith needs to be broken apart. Later in this post, we’ll talk about improving a monolith without decomposing it.

How to Modernize: Domain Thinking

We recommend starting to understand and improve a monolith by breaking it down into domains. For this blog, you don’t need to be an expert on Domain Driven Design. Our example will show a big system broken up into smaller pieces with relatively clear boundaries (domains), and we will talk about the benefits and challenges of breaking a system up this way.

For further reading, Eric Evans’s Domain Driven Design is a great book on the subject and lays out an excellent path for breaking down a monolith:

  1. Break the monolith design down into domains with bounded contexts
  2. Identify entities and their relationships
  3. Build a ubiquitous language for shared understanding of the domains, and the broader system

This method lays an excellent foundation for understanding and then improving a big system.

Here we take an Order Management System monolith with everything written in J2EE. As we think about the different problem areas inside an order management system, things like the shopping cart and fraud checking, each of these is a domain in Domain Driven Design terms. (We can also split out certain elements so they can use technologies that are a better fit for our developers or for the problem at hand.)

Monolith 1
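To make this concrete, here is a minimal sketch of what a few of those domains might look like once they are carved out of the J2EE codebase and put behind their own interfaces. The interface and method names are illustrative assumptions, not code from a real system:

import java.util.List;

// Hypothetical bounded contexts from the order management example.
// Each one owns its own data and exposes only domain-level operations.
// (In a real project, each interface would live in its own module.)
interface FraudCheck {
    boolean isSuspicious(String orderId);
}

interface Inventory {
    boolean reserve(String orderId, List<String> skus);
    void ship(String orderId);
}

interface Payments {
    String charge(String orderId, long amountInCents); // returns a payment reference
}

interface Orders {
    void close(String orderId);
}

Even before any service is physically extracted, boundaries like these make ownership and responsibilities explicit, which is a large part of the value of the domain exercise.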

Once domains are defined, a team can segment out pieces of a system that need clearer ownership, higher quality, better performance, and faster delivery time. In this way, modernization delivers value quickly and can be prioritized according to business objectives.

Monolith image

We have helped many teams modernize their monoliths this way, and this approach has consistently created real business value.

Limitations of Domain Deconstruction

One challenge we discovered was coordinating transaction processes that cross domain boundaries. In our order management system, an order spans many domains: the Order domain, an external fraud check service, the Inventory domain (to let it know an order is coming, check that inventory is available, and reserve it), the Payment domain, the Inventory domain again (to ship the order), and finally the Order domain again to close the order.

Each service has its own state (inventory counts, orders, catalog pricing, etc.), but the order process, which spans domains, has state of its own as well and must keep every domain correctly updated as the order proceeds. A crash or a mishandled error can easily lose that order state, leaving the domains and systems out of sync.

Ephemeral State

Ephemeral state is a term for transaction process information that is only needed while the process is in flight (see Omar Diab’s excellent article about this topic). The risk of keeping ephemeral state only in memory is that it is lost if the system crashes. Cross-domain data consistency is already a challenge inside a monolith, and it moves to the foreground as soon as you start pulling domains out.
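As a sketch of that risk, imagine the order process written as ordinary in-process code, with hypothetical clients standing in for the domain services. All of the process state lives in local variables and the call stack, so a crash partway through silently loses it:

// A deliberately fragile orchestration: the only record of how far the order
// has progressed is this method's call stack. The nested interfaces are
// hypothetical stand-ins for the real domain services.
public class FragileOrderProcessor {

    interface FraudClient     { void check(String orderId); }
    interface InventoryClient { void reserve(String orderId); void ship(String orderId); }
    interface PaymentClient   { void charge(String orderId); }
    interface OrderClient     { void close(String orderId); }

    private final FraudClient fraud;
    private final InventoryClient inventory;
    private final PaymentClient payments;
    private final OrderClient orders;

    public FragileOrderProcessor(FraudClient fraud, InventoryClient inventory,
                                 PaymentClient payments, OrderClient orders) {
        this.fraud = fraud;
        this.inventory = inventory;
        this.payments = payments;
        this.orders = orders;
    }

    public void process(String orderId) {
        fraud.check(orderId);        // 1. fraud check
        inventory.reserve(orderId);  // 2. reserve inventory
        payments.charge(orderId);    // 3. charge the customer
        // A crash here loses the only record that the charge happened:
        // the order is never shipped or closed, and the domains drift apart.
        inventory.ship(orderId);     // 4. ship the order
        orders.close(orderId);       // 5. close it in the Order domain
    }
}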

Turning a monolith into microservices means you must manage ephemeral state explicitly. Managing distributed transaction state with durable events, for example with Kafka, is one approach, but it brings its own set of pains: automated testing, keeping databases and the various event logs in sync, provisioning and modifying topics as the process changes, and trying to build visibility across all of these data stores.

Monolith 3
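To give a flavor of that event-log approach (with an assumed topic name and step labels), every state change in the process has to be published durably so it can survive a crash, and every interested service has to consume and reconcile those events:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Sketch only: each step of the order process writes an event to a topic so
// the process state outlives a crash. "order-events" and the step labels are
// assumptions for illustration.
public class OrderEventLog {

    private final KafkaProducer<String, String> producer;

    public OrderEventLog(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        this.producer = new KafkaProducer<>(props);
    }

    // e.g. recordStep("order-123", "PAYMENT_CHARGED")
    public void recordStep(String orderId, String step) {
        producer.send(new ProducerRecord<>("order-events", orderId, step));
    }
}

Every consumer of that topic then needs its own offset handling, idempotency, and reconciliation logic, which is exactly the extra surface area described above.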

Temporal and Process Thinking

One of my (Josh’s) great regrets from my time modernizing monoliths is that I didn’t know about Temporal Workflows backed by event sourcing. When modernizing without Temporal, the development teams I worked with pulled “hot,” high-change services out of their monoliths, which gave us clearer domains and better team ownership. But our modernization efforts ran out of momentum whenever there was a risk of losing ephemeral state. We ended up leaving the complex processes in the monolith, where they stayed slow to change, hard to test, and painful.

Since my introduction to Temporal, my thinking has changed. Complex distributed transactions become easy to define and make durable with Temporal Workflows. Each process has its own state, stored durably. Every step in the process is stored and tracked as an event. And nothing can get lost to a bug or a crash. There’s no need to set up database or event stream infrastructure just to manage ephemeral state. It’s as easy as writing code with the Temporal SDK.

The result is that the process can work across systems without any risk of state loss.

Monolith 4

In this workflow, Temporal keeps the information about the process durably, and orchestrates the process across several different systems.

Benefits Of Process Thinking and Temporal

With this approach, both the architecture and the day-to-day developer experience get a lot nicer. With Temporal’s code-first orchestration, you get:

  • Process visibility
  • Cognitive simplicity
  • Domain division with orchestrated control
  • Management of ephemeral data
  • Testing simplicity
  • Scalability without data loss
  • No additional infrastructure to build and manage just to keep ephemeral state

Here’s how the orchestration code might look in Temporal:

public OrderOutput execute(OrderInput input) {
    log.info("Order workflow started, orderId = {}", input.getOrderId());

    // Get items
    List<OrderItem> orderItems = localActivities.getItems();

    // Check fraud
    activities.checkFraud(input);

    // Prepare shipment
    activities.prepareShipment(input);

    // Charge customer
    activities.chargeCustomer(input);

    // Ship order items
    List<Promise<Void>> promiseList = new ArrayList<>();
    for (OrderItem orderItem : orderItems) {
        log.info("Shipping item: {}", orderItem.getDescription());
        promiseList.add(Async.procedure(activities::shipOrder, input, orderItem));
    }

    // Wait for all items to ship
    Promise.allOf(promiseList).get();

    // Generate trackingId
    String trackingId = Workflow.randomUUID().toString();
    return new OrderOutput(trackingId, input.getAddress());
}

The events and orchestration all happen because of this code: a Temporal Workflow calling Activities. Because it is implemented as code, it’s much simpler to test than a process spanning four distributed systems, each with its own event topic, database, and microservice. Here, the orchestration is in one place, can be owned by a single team, and all of the ephemeral state is kept in Temporal’s Workflow History.
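For context, in the Temporal Java SDK this execute method lives in a Workflow implementation class, and the activities and localActivities fields are stubs for Activity interfaces. Here is a minimal sketch of what those interfaces might look like, with method names matching the calls in the snippet (OrderInput, OrderOutput, and OrderItem are the data classes used above); the exact shapes are assumptions for illustration:

import io.temporal.activity.ActivityInterface;
import io.temporal.workflow.WorkflowInterface;
import io.temporal.workflow.WorkflowMethod;

// The durable entry point for one order process.
// (In a real project, each interface lives in its own file.)
@WorkflowInterface
public interface OrderWorkflow {
    @WorkflowMethod
    OrderOutput execute(OrderInput input);
}

// Each method is a call out to a domain service (fraud, inventory, payments).
@ActivityInterface
public interface OrderActivities {
    void checkFraud(OrderInput input);
    void prepareShipment(OrderInput input);
    void chargeCustomer(OrderInput input);
    void shipOrder(OrderInput input, OrderItem orderItem);
}

Inside the Workflow implementation, the activities stub would be created with Workflow.newActivityStub(OrderActivities.class, options) and localActivities with Workflow.newLocalActivityStub(...); Temporal records every Activity call and result in the Workflow’s Event History.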

Individual microservices don’t need to manage ephemeral state and can focus on their own domain. For example, the Inventory service doesn’t need to manage in-flight orders; it can just manage inventory and accept changes driven by this orchestration. This makes for much simpler microservices that are easier to understand, change, and debug. Here’s how an order would look in Temporal:

Monolith 5

Every order is tracked, inputs and outputs are part of the Event History, and debugging or inspection is so much easier.

Temporal does a lot of the heavy lifting in this architecture: state, events, process metadata, and process history are all built into Temporal. Temporal Workflow code isn’t complex (see the code snippet above), yet it solves some of the most difficult challenges in modern distributed systems. Temporal SDKs let you focus on what should happen, while the management of process state is handled for you automatically.

The result of this refactoring of the monolith is a system built out of Workflows in Temporal and simple microservices. The architecture is simpler. Stronger boundaries now exist between services. There is much more transparency into the work being done by the system. Testing is also easier, with simple automated tests and no need to check for ephemeral state loss or consistency between services. The end result is a simpler, happier modernization path with a more reliable system at the end.
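As an illustration of that testing story, here is a minimal sketch using the Temporal Java SDK’s test environment. It assumes the OrderWorkflow and OrderActivities interfaces sketched above, an OrderWorkflowImpl class containing the execute method, and an OrderInput constructor; all of those names are assumptions for this example:

import static org.junit.jupiter.api.Assertions.assertNotNull;

import io.temporal.client.WorkflowOptions;
import io.temporal.testing.TestWorkflowEnvironment;
import io.temporal.worker.Worker;
import org.junit.jupiter.api.Test;

public class OrderWorkflowTest {

    // Hand-written fake standing in for the real fraud/inventory/payment services.
    static class FakeOrderActivities implements OrderActivities {
        public void checkFraud(OrderInput input) { }
        public void prepareShipment(OrderInput input) { }
        public void chargeCustomer(OrderInput input) { }
        public void shipOrder(OrderInput input, OrderItem orderItem) { }
    }

    @Test
    public void orderWorkflowCompletes() {
        // In-memory Temporal test service: no cluster, no databases, no topics.
        TestWorkflowEnvironment testEnv = TestWorkflowEnvironment.newInstance();
        Worker worker = testEnv.newWorker("order-task-queue");
        worker.registerWorkflowImplementationTypes(OrderWorkflowImpl.class);
        worker.registerActivitiesImplementations(new FakeOrderActivities());
        // (a fake for the local Activities behind getItems() would be registered the same way)
        testEnv.start();

        OrderWorkflow workflow = testEnv.getWorkflowClient().newWorkflowStub(
                OrderWorkflow.class,
                WorkflowOptions.newBuilder().setTaskQueue("order-task-queue").build());

        // Constructor arguments are assumed for illustration.
        OrderOutput output = workflow.execute(new OrderInput("order-123", "1 Main St"));

        // Getter assumed from the OrderOutput constructor shown earlier.
        assertNotNull(output.getTrackingId());
        testEnv.close();
    }
}

The entire cross-domain process runs in a single in-memory test, with the domain services replaced by fakes, and there is no ephemeral-state infrastructure to stand up or verify.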

Keep the Monolith, Add Durability

You may ask, “What if I like my monolith, but I want to add durability and visibility? Can Temporal help?” The answer is yes!

Adding Temporal gives you visibility into your monolith, lets you test cross-domain processes easily, and most importantly, adds durability to these processes so data is never lost. This can make a monolith much easier to support, and make the processes flowing through your monolith more visible and reliable.

Start by looking for processes that flow around and through your monolith, like the order example above. Prioritize the ones that would benefit most from added reliability, and implement Temporal Workflows for those processes. Temporal can manage processes that touch databases, call external services, and interact with the domains in your monolith, which makes the monolith more durable and more visible. As part of moving these key processes into Temporal Workflows, you can also add tests for them, letting you change your monolith with more confidence and lower risk.
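Here is a minimal sketch of what that first step could look like from inside the monolith: the existing code path simply hands the cross-domain order process to Temporal by starting a Workflow. It assumes the OrderWorkflow interface sketched earlier, a Worker running somewhere with the implementation registered, and a Temporal Service reachable locally:

import io.temporal.client.WorkflowClient;
import io.temporal.client.WorkflowOptions;
import io.temporal.serviceclient.WorkflowServiceStubs;

// Inside the monolith: instead of running the order process in-process,
// hand it to Temporal, which drives it durably to completion.
public class OrderSubmission {

    public static void submit(OrderInput input) {
        // Connects to a Temporal Service on localhost; swap in your own connection settings.
        WorkflowServiceStubs service = WorkflowServiceStubs.newLocalServiceStubs();
        WorkflowClient client = WorkflowClient.newInstance(service);

        OrderWorkflow workflow = client.newWorkflowStub(
                OrderWorkflow.class,
                WorkflowOptions.newBuilder()
                        .setTaskQueue("order-task-queue")
                        .setWorkflowId("order-" + input.getOrderId())
                        .build());

        // Start asynchronously: the monolith returns right away while Temporal
        // keeps the process state durable and visible.
        WorkflowClient.start(workflow::execute, input);
    }
}

From there, the Workflow’s Event History and the Temporal Web UI give you the visibility and durability described above without restructuring the rest of the monolith.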

Conclusion

Whether you are rearchitecting a monolith into microservices or improving a monolith by adding Temporal Workflows, you gain durability, visibility, clarity, and faster feature delivery. By putting the focus on transaction processes, you can easily manage ephemeral state across domains and keep your system consistent and reliable.

Check out Temporal and see how it can simplify your architecture and code and make building distributed systems durable, flexible, and clear.