Need a closer look? Download and review slides presented at Replay 2023 here.
During this talk, Samar (CTO and founder) and Preeti (SVP Engineering) unveiled the next wave of innovation for Temporal — including advancements to Global Namespace, Workflow Update, Worker Versioning, and more. These announcements were supported by live demos from the Temporal engineering and product teams.
This transcript has been edited for clarity and formatting. You can follow along with a recording of the original content at Replay 2023 in the video above.
Samar Abbas: Building this community is such a key part of what got us here. At this point we have around 200,000 developers actively building applications on top of the Temporal platform, and growing at a rate of 9-10%, month over month, or 180% CAGR this past year.
This excitement that you bring to this event, the excitement that we see around the product, the excitement that we see in our community, please keep on pouring that in, because that's how we keep making the product better and better. So one of the awesome things about the community we have built around the product is it's not just Temporal engineers building – there are a lot of you folks out there who are contributing to Temporal directly.
A very good example last year was Temporalite, which is a community-led contribution by Datadog, and actually became the foundational technology for our next generation of CLI experience. An amazing contribution. And now I'm going to invite Jacob LeGrone to come up to the stage and talk about that some more.
Jacob LeGrone: Thank you so much for the opportunity to be up here today. I’m Jacob LeGrone. I am a software engineer on the Temporal platform team at Datadog, and we've been self-hosting our own Temporal clusters in production for about the last three years. In that time, we've had the opportunity to learn a lot of our own operational lessons about running Temporal. What we found is that Temporal itself is a complex system. Although the documentation and everything that the Temporal team does to support it is really great, it's still a complex system – there are a lot of metrics and logs that come out of the box that make it really great to monitor, but that also means that it's a lot of work to keep on top of all of the failure modes and potential scaling indicators that we need to track.
So that's why I'm really excited to announce today that we've launched an official Datadog integration for monitoring Temporal Server. The integration brings together all of the metrics and logs emitted by Temporal services and gives you a great starting point for running your own servers, with an out-of-the-box dashboard and a handful of recommended monitors. For example, one aspect of Temporal that's important to monitor is your sync match rate. A low sync match rate can indicate that Workers are unable to keep up with the throughput on the task queue, and might need to be scaled up. Our dashboard automatically highlights task queues that have the lowest sync match rates so you can take action. Over time, we aim to include more features like this with the integration as we learn more lessons of our own, and also get feedback from you and the community about what you'd like to see. You can head to the integrations tab in Datadog and follow the onboarding instructions. And if you're running your own storage backend, then this also pairs really well with our existing Cassandra, Elastic, Postgres, and MySQL integrations.
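For those less familiar with the metric Jacob mentions: a task is "sync matched" when the matching service hands it directly to a Worker poller that is already waiting, and "async matched" when it has to be written to the backlog first. The ratio can be illustrated with a rough plain-Go sketch (the function and counter names below are ours for illustration, not Temporal's or Datadog's metric names):

```go
package main

import "fmt"

// syncMatchRate computes the fraction of tasks that were delivered directly
// to a waiting Worker poller ("sync matched") rather than written to the
// backlog first. A low value means Workers are falling behind the task queue.
func syncMatchRate(syncMatched, asyncMatched float64) float64 {
	total := syncMatched + asyncMatched
	if total == 0 {
		return 1.0 // no tasks at all: nothing is backing up
	}
	return syncMatched / total
}

func main() {
	// 900 tasks handed straight to pollers, 100 spilled to the backlog.
	fmt.Printf("sync match rate: %.2f\n", syncMatchRate(900, 100))
}
```

A rate well below 1.0 for a sustained period is the signal to scale Workers up, which is exactly what the dashboard highlights.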
And we have a second announcement, which is that you can now instrument your Workflows and Activities using our native Datadog tracing library for the Go SDK. It has a couple of advantages over the existing OpenTracing and OpenTelemetry options, although those are both still supported by Datadog as well. The first advantage is that Workflow and Activity logs are automatically linked to your traces, so it's easy to pivot back and forth between logs and APM while you're debugging. The second advantage is that Workflow spans can now accurately reflect the actual duration of your Workflow Executions as they show up in Temporal. Other tracers only emit short spans at the beginning of each Workflow Execution. But with Datadog, we're able to compute deterministic span IDs that allow you to close spans as Workflows complete, even if that's days or weeks after they started. This also means that you can query Workflows by duration directly in Datadog, even after those executions have been dropped or archived by the Temporal service. This can be super useful when you want to find out if a deployment had unintended consequences, or if Workflows are nearing their timeouts.
That's all we have for today, but we're not anywhere close to done. We've heard from many of you that Temporal Cloud and Temporal SDK integrations would be nice to have, so please stay tuned and keep the feedback coming. I'd also like to thank the Temporal team, and all of you in the community who make Temporal such an engaging and welcoming project to contribute to. I'm already looking forward to seeing you all next year. Thanks again, back to you Samar.
Samar Abbas: Thanks a lot, Jacob. Jacob and Datadog have been such amazing partners for us. You have no idea how often people ask for help on our Slack and say, “Oh, we are having a problem running Temporal in a production environment,” and the first thing that I typically ask for are some snapshots or screenshots of dashboards. And the response I typically get is, “Oh, we haven't set that up.” Now, with the Datadog integration, there is no excuse.
So earlier, you heard Max talk about durable execution. Last year, we coined the phrase “durable execution” to describe the way the Temporal open source platform increases software reliability, expands visibility, and accelerates feature velocity. It's been really encouraging to see how other people in the industry have now started describing their systems with similar terminology. You have no idea how hard it has been for us to explain what Temporal does. The stickier this durable execution term gets, the easier it will be to get there.
For us, the Temporal platform is a combination of client SDKs, primitives, and services that provide the abstractions to make durable execution more approachable for all developers. We've been a big believer in this idea of durable execution for a long time now, and especially in the last four years since starting Temporal, with all of the experience of running in more than 1,000 organizations at this point. We are convinced beyond any doubt that this is the path forward. This is how we are going to reliably automate the world by building on top of these abstractions.
But, I think we are only taking the first few steps now. Since last Replay, the team has been doing amazing work to move the platform forward. If we feel that durable execution is going to be the way of life for all developers, we need to invest in it to make it more approachable. It needs to be approachable to developers in whatever language they use, and across experience levels. If durable execution is going to transform software development, it needs to work with different technical stacks. We need to meet the needs of every developer where they are.
If durable execution is going to become the default means to reliably automate the world, it needs to become more applicable to different business circumstances. We keep on seeing patterns around business requirements like low latency and high availability of workloads, and most importantly, people are running mission critical apps. So people are coming to durable executions with an expectation that they need to be highly, highly available and reliable.
Next, we are going to be doing a demo and taking a deeper look at some of the key investments like Workflow Update, Schedules, and Worker Versioning. So I'm going to invite Yimin Chen, who leads the Temporal server team to come up on the stage and walk through a few scenarios with us. Welcome, Yimin.
Yimin Chen: Thank you so much.
Samar Abbas: So while Yimin is setting up, one of the first things we'll be talking about is Workflow Update. This is one of the amazing new primitives we have added to the platform after a very long time. People who have some experience working with Temporal know one of the key friction points they typically run into: “We have this primitive, Signal, which lets you send data into a Workflow, but you cannot piggyback a response. We have another primitive, Query, which allows you to synchronously retrieve state out of a Workflow, but you can’t mutate the state.”
We typically find that developers want to send a request in, have the application do some work, and then synchronously send a response back. And sometimes the applications have very strict latency budgets and requirements to live within. So what developers end up doing is: send a Signal to mutate the state, have a Worker running on the same host that sent the request in, and have the Workflow Execution route an Activity to that host just to piggyback a response and unblock the app. It's doable, but lots of work. This is where the new primitive we are going to be talking about, Workflow Update, makes it so simple. You do not need to do all of that any more; you just send an Update synchronously, and after the Workflow processes your request, it unblocks the caller with the response. We expect this primitive to be very useful for a lot of interactive applications, like the money transfer we are going to look at later, and things like background checks and customer support systems will heavily rely on it.
Yimin Chen: Super excited to have the chance to showcase some of the awesome new capabilities recently added to the Temporal platform. Before I start my demo, I want to give you a very quick overview of the demo setup. Everything will be powered by Temporal Cloud. There will be two major scenarios that I will try to showcase here: the first one will be a single money transfer between two accounts; the second one will be a batch transfer between a bunch of accounts.
Samar Abbas: Ok Yimin, let's initiate a transfer. Let's look at some running code.
Yimin Chen: As soon as I click on this, we will have a real Temporal Workflow running in the cloud. Let's take a look at what the code actually looks like. This is the Workflow that is currently running. The thing I want to point out is that I have registered three Updates. Each one of them handles one interaction request coming from the web UI, and they are all synchronous. At the end, I have this Workflow await waiting for the transaction to complete. One other important thing to point out is that when you register your Update, you have a chance to validate the request. This is where you can put your logic: if you don't like an update request, you can reject it. For example, if the amount of the transfer request is greater than some number, you can just say, “Okay, you cannot do that.”
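To make the rejection behavior concrete, here is a plain-Go model of what Yimin describes. This is not Temporal SDK code, and the $10,000 limit, type, and field names are invented for illustration: a rejected Update returns an error to the caller without ever touching the Workflow history, while an accepted one is recorded and handled.

```go
package main

import "fmt"

// maxTransfer is an invented limit, standing in for the demo's cap.
const maxTransfer = 10_000

type workflowState struct {
	history []string // stand-in for Workflow history events
	balance int
}

// validate plays the role of an Update validator: it inspects the request
// and may reject it before anything is recorded.
func validate(amount int) error {
	if amount > maxTransfer {
		return fmt.Errorf("transfer of $%d exceeds the $%d limit", amount, maxTransfer)
	}
	return nil
}

// update runs the validator first; only an accepted Update is appended to
// history and handled, so rejected spam never bloats the Workflow.
func (w *workflowState) update(amount int) error {
	if err := validate(amount); err != nil {
		return err // rejected: nothing written to history
	}
	w.history = append(w.history, fmt.Sprintf("transfer %d", amount))
	w.balance -= amount
	return nil
}

func main() {
	w := &workflowState{balance: 50_000}
	fmt.Println(w.update(1_000_000)) // rejected; history stays empty
	fmt.Println(w.update(1_000), len(w.history))
}
```

The same shape carries over to the real thing: the validator sees the request before the server commits anything, which is what keeps invalid updates out of history.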
Samar Abbas: So, Yimin, let's try to transfer a million dollars to see how that validation works.
Yimin Chen: Yes, let's try to do that. I will try to send money to my savings account, and let's do $1 million. Alright, so as you would expect, we have this max amount limit that prevents your transaction from happening. We can take a look at how this shows up on your Workflow. Here, we actually see two updates coming in, but the last update, the one that actually tried to do the transfer, is not showing up. That's because the validator rejected the update, and when Temporal sees the rejection, it doesn’t write it into your Workflow history. This prevents a bunch of invalid updates from spamming your history.
Samar Abbas: This is so cool. So now you have protection from spamming your Workflow Execution with requests from everyone trying to get rich like me. Now let's run a real transaction.
Yimin Chen: Sure, let's do 1,000. Yep, and the transfer goes through. We can take a look at what it looks like on the UI. So as you can see, we actually have two Activities that have been triggered by this update. So we have a successful transfer.
Samar Abbas: Awesome. So Yimin, everyone who has been coding up these kinds of use cases on top of Temporal understands that a lot of times these transfers do not happen as expected. There are scenarios where, in the middle of the transaction, it fails with an unrecoverable error. What does Workflow Update look like in those situations?
Yimin Chen: We can try one case where it fails with an unrecoverable failure, and see how we compensate for that. As in most real scenarios, we try to transfer money again. Let's put in some reasonable number: $100. But it failed, and we got a very clear message that you cannot do this, because your account is frozen, and this is non-retriable. So let’s take a look at what exactly happened to the Workflow.
The most interesting part is the last update here. What I want to point out is, you can see from the Timeline view that this update has two Activities, and one of them failed. And if you notice, we have two compensations going on, but the Update actually finished before your compensation started. This is how we are able to return as soon as possible to unblock the UI, where a human is waiting and we want a low-latency response. This is very powerful, but people are often confused about how it is actually implemented.
Let's take a look at how the code looks. It's actually pretty simple. Here is the real update handler code that we use for the Saga pattern. Before we actually execute the withdrawal, we append an action that we use to compensate if this operation fails, adding it to the pending compensations. If nothing is wrong, we proceed. And before we do the next deposit, we append another compensation in case this step fails. Then we finish our update. By the time we do this, our UI is unblocked and the web UI can move forward. But inside the Workflow, as soon as that is done, we continue from here. Then we check: if something went wrong and we really need to compensate, we just execute the pending actions we have already added. And this is how you implement the Saga pattern.
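The pattern Yimin walks through boils down to a compensation stack. Here is a minimal plain-Go sketch of that idea (not the demo's Workflow code; the step names and the simulated failure are invented): push a compensation before each step, and if a later step fails, run the pending compensations in reverse order.

```go
package main

import (
	"errors"
	"fmt"
)

// transfer sketches the Saga pattern: before each step we register its
// compensation, and on failure we unwind the pending compensations in
// reverse order. failDeposit simulates the "account frozen" failure.
func transfer(failDeposit bool) (log []string, err error) {
	var compensations []func()
	rollback := func() {
		for i := len(compensations) - 1; i >= 0; i-- {
			compensations[i]()
		}
	}

	// Step 1: withdraw, registering its compensation first.
	compensations = append(compensations, func() { log = append(log, "refund withdrawal") })
	log = append(log, "withdraw from checking")

	// Step 2: deposit, again registering its compensation first.
	compensations = append(compensations, func() { log = append(log, "reverse deposit") })
	if failDeposit {
		rollback()
		return log, errors.New("deposit failed: account frozen")
	}
	log = append(log, "deposit to savings")
	return log, nil
}

func main() {
	log, err := transfer(true)
	fmt.Println(log, err)
}
```

In the real Workflow, the Update handler returns to the caller before the rollback runs, which is what keeps the UI responsive while the compensations execute asynchronously.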
Samar Abbas: This is amazing. This is one of the places where developers were facing a lot of friction building these kinds of systems. Now you can build responsive apps: immediately unblock the application, show that an update failed, and it doesn't matter, because your transactionality is still there. There's a Workflow taking care of things asynchronously underneath the covers to roll back and get the transaction back to a happy state. If I want to run a batch of those on a schedule, show me what you have got.
Yimin Chen: It is very easy to do now that we have the new Schedules available on the Temporal platform. I have coded up this demo so that as soon as I click on this, three Schedules are created on Temporal Cloud. This is the new place where you manage your Schedules, and here I have just added three of them. Let's take a super quick look at each one. This one is an every-five-seconds schedule; nothing interesting here. This one, however, is hourly, and we wanted it to run during business hours, so I specify Pacific time, during business hours, and you can notice that we have added some jitter. A lot of times when many Schedules fire at the same moment, you don't want them to spike your downstream service, so you can add jitter, and this is now built in.
The last one I want to show is the automatic timezone handling. I specified it to run at 2pm Pacific time, and the Schedule handles the daylight saving shift for you, so you don't have to worry about that. We also added the ability to pause a Schedule.
So those are the new features we added. Let's take a super quick look at what the code looks like. See how easy it is? You can schedule a simple one, like every five seconds; or during business hours only, Monday to Friday, with jitter and your timezone specified; and for something even more complicated, you can specify which month, time, and day of week you want it to run. Super easy: you just need to specify your Schedule spec. Now take a look at the Workflow list. As we would expect, we can see a bunch of transfers happening right now. For example, this one goes from account 41 to account 91, and we sent over $10. So yeah, it's done.
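For reference, the business-hours Schedule Yimin shows maps onto a spec roughly like the following sketch using the Go SDK's client package. Treat it as a sketch: the schedule ID, Workflow, and task-queue names are placeholders, and the field names are quoted from memory of the SDK rather than from the demo's code.

```go
// Hypothetical business-hours Schedule; names are placeholders.
handle, err := c.ScheduleClient().Create(ctx, client.ScheduleOptions{
	ID: "hourly-transfer",
	Spec: client.ScheduleSpec{
		Calendars: []client.ScheduleCalendarSpec{{
			Hour:      []client.ScheduleRange{{Start: 9, End: 17}}, // business hours
			DayOfWeek: []client.ScheduleRange{{Start: 1, End: 5}},  // Monday-Friday
		}},
		TimeZoneName: "America/Los_Angeles", // daylight saving handled for you
		Jitter:       5 * time.Minute,       // spread out simultaneous firings
	},
	Action: &client.ScheduleWorkflowAction{
		Workflow:  TransferWorkflow,
		TaskQueue: "transfers",
	},
})
```

The point Yimin makes holds regardless of exact field names: the whole recurrence, timezone, and jitter policy lives in one declarative spec rather than in hand-rolled cron code.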
Samar Abbas: Okay, so that's awesome. This creates a lot of flexibility, as Yimin has shown here: you just define all of those things in a spec, and things happen on a recurring basis for you automatically. One thing I'm noticing here is that all those transfers are fixed amounts, like $10. Can we make a change to the Workflow logic to call an Activity which fetches the amount to transfer?
Yimin Chen: Yeah, that'll be super easy to do in a Workflow; we just need to add one additional step, and I happen to already have it ready. Super easy: just add this line calling an Activity that fetches the new amount we want for each transfer, and save.
Samar Abbas: Easy. Okay, I know people in the room understand that adding that Activity is not easy. Why? Because the moment I deploy this code, all of my executions that were running are going to start failing with non-determinism errors.
Yimin Chen: You are exactly right. This type of Workflow logic change is not backward compatible, and running into non-determinism errors is one of the major pain points that people complain about. We have been working hard to address this by introducing the Worker Versioning feature.
Now, you will be able to do this safely with very few changes needed. Besides making your code change, the only thing you need to do is go to your Worker code and enable versioning. When you start your Worker, you specify some of the options, and then we just need to pass in a different version; let's say we want to do a 2.0. I save this and deploy this Worker, and it will now identify itself as version 2.0 to the Temporal Server. This means that any new Workflow coming in will automatically be dispatched to this particular version, and only the new Worker with this version will be able to process that task. Your old Workflows will continue to run on the old Worker.
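In the Go SDK, the two pieces Yimin describes look roughly like the sketch below. The task-queue name is a placeholder, and the option and method names are quoted from memory of the versioning APIs at the time, so treat this as a sketch rather than the demo's exact code.

```go
// 1. The Worker identifies itself to the server with a Build ID.
w := worker.New(c, "transfers", worker.Options{
	BuildID:                 "2.0",
	UseBuildIDForVersioning: true,
})

// 2. Make "2.0" the default version for new Workflows on the task queue;
// in-flight Workflows keep running on the Workers they started with.
err := c.UpdateWorkerBuildIdCompatibility(ctx, &client.UpdateWorkerBuildIdCompatibilityOptions{
	TaskQueue: "transfers",
	Operation: &client.BuildIDOpAddNewIDInNewDefaultSet{BuildID: "2.0"},
})
```

The design choice to notice: the version pin lives in Worker configuration and task-queue metadata, not in the Workflow code itself, which is why the Workflow logic change can ship without patched branches.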
Let's deploy this new Worker and see it in action. As you can see on the left side, this is my old Worker. As soon as I hit Enter, my new Worker will start taking over the new traffic, and the old one will continue to run until it drains all the in-flight work. See? It's already taking over, and the old Worker is slowing down because no more new tasks are being sent to it. It's probably done. Let's take a look at the Workflows.
Now those amounts are different, not $10 anymore, but you still see some $10 here, and if you scroll down, more and more of them are $10. This interleaving happens while both versions have in-flight Workflows and both are active. Then, very quickly, they drain all the old tasks, and you are in a state where all of the new tasks are running on the new version. This is so cool.
Samar Abbas: I have so much PTSD, by the way, from making changes to my Workflow logic and deploying, and with this feature, we expect people to be more creative and risk-taking, and able to deploy their code to production quickly without worrying about non-determinism. Thanks a lot, Yimin, for the amazing demos. A huge round of applause for Yimin.
As you have seen, we have been making tons and tons of investment in the core platform to make durable execution more and more attractive for people building their applications. One of the things that has naturally started happening is that people are using it for very, very mission-critical apps. This is where we feel we are differentiating Temporal Cloud and making tons of investments to make it the best place to run these workloads. So next, I'm going to invite Preeti Somal to come up on the stage to talk about those investments.
Preeti Somal: Thank you so much, Samar. Hello, everyone, my name is Preeti and it's just such a privilege to be here today to represent the work that the very talented team here at Temporal has been doing. I'm going to talk a little bit about Cloud, and then I'll also have Liang come up in a little bit, and we will definitely do demos. So let's dive in. You heard from Max about this paradigm shift, seeing durable execution and Workflows everywhere, and I think you heard from Samar a little bit about reliably automating the world. Our goal with Temporal Cloud is really simple: we want to build a utility that essentially runs everything that you are building. We want you to be able to do this in as simple a way as possible, without having to worry about all the heavy lifting that goes into running the Temporal service reliably at scale.
And we are providing you this utility so that you can run your applications really, really easily. In order to do that, we've put a ton of effort into our operations, security, automating the control plane, and building a custom persistence layer so that we can run this reliably and at scale, in a way that meets all the needs coming from our customers. Last year at Replay, we launched Temporal Cloud in GA, and we are so honored to report on our progress. Since then, we've hit a huge scale point: we have more than 600 customers running their applications on Temporal Cloud, more than 2,800 namespaces, and over a trillion and a half Actions since April 1. So this is something that is really serious for us. And we are really, really honored that you're putting your trust in us, and allowing us to run this utility for you as you automate your world.
How is Temporal Cloud different? We actually have a few talks later. So Sergei is going to talk about the Control Plane later today. And then Paul talks about our custom persistence layer. So if you're wondering sort of how we run Temporal Cloud, how we built it, those are two really good talks to go to, to learn more about what's going on with Temporal Cloud as well. All right, as Samar talked about sort of making our platform approachable, adaptable, and applicable, this sort of is a theme that runs through everything we do. And what I'm going to do is talk a little bit about some of the work that we're doing on the Temporal Cloud side. To help me with that, I'm going to invite Liang up on stage. Liang is on our engineering team for Temporal Cloud. And while he sets up, I'll introduce the first set of demos that we're going to do.
One of the clear themes for us that has emerged is making the cloud very approachable for all the platform admins. What does that mean? That means that as you all are out there, building your applications and more developers are coming in, we want to be able to simplify how authentication works, how you're onboarding users into Temporal Cloud, and how you're managing the Workflow of those users and namespaces. So we'll kick this off with a few demos.
Liang Mei: The first thing that I'm going to demo is the recent launch of API keys. API keys allow you, as a Temporal user, to interact with Temporal Cloud using the tcld command line tool or through one of the APIs that we just launched. It's actually fairly easy and straightforward to provision your API keys: all you need to do is go to the Temporal Cloud UI, and in the Settings page you will see this new API Keys tab, which contains everything you need to manage your keys. I'm already logged in as an account admin, so what I'm going to do is create a new API key for the Replay demo. As I click this, it shows the key string, which I can copy over for use in my CLI and in my programs, and this API key will inherit all the permissions I have as my user.
Preeti Somal: That's great, Liang, thank you. And as you can see, this feature is currently in private preview. Let's actually take a look at how we use it.
Liang Mei: The tcld CLI is the official tool to manage all your users, accounts, and namespaces, and in the latest release we added support for API keys. So what I'm going to demo here is tcld plus the API key to do some CRUD on users. The first thing I'm going to show is a get operation, using firstname.lastname@example.org, which doesn't exist yet. You should be able to see the same information in the Users tab: I'm the account admin, and the only user in this account, which doesn't sound right. So I'm going to use tcld to invite some new users to the platform. I'm going to invite this user, email@example.com, and assign the developer role for the user. What this does is talk to Temporal Cloud, which actually runs a Workflow in the backend to provision the user account. Going back to the UI, we should be able to see that the user has been invited, and eventually it will be activated. We can go back to the CLI and do the same get operation, and now you are able to get all the user information and permission information from the CLI as well.
Preeti Somal: That's fantastic, Liang. But you know, we've got a large development organization, and we want everybody to use Temporal Cloud. Are you expecting me to run the tcld command every single time?
Liang Mei: Of course not. We are going to release a set of APIs for CRUD on users, namespaces, and accounts as well. This is the internal repo, but it contains all the protobuf definitions that you as a developer would need. I'm just going to show one of these proto definitions, which gives you operations to get users, create users, update users, and change user permissions. And that's how you would be able to use it.
Preeti Somal: Yeah, that sounds great. So once again, this is a feature that we are developing and will be releasing soon. The goal is that you can go into that repo and take a look at all the API signatures. And again, as you can see, what we're really trying to do here is build out that automation layer so that you can automate your usage of Temporal Cloud pretty seamlessly.
Liang Mei: And one of the ways you can use this, obviously, is with Temporal Workflows. One of the things I did with this API recently is write a very simple Workflow, probably similar to what you learned in the classes yesterday, which runs a loop. It basically does two things: it runs an Activity that monitors a file, and I'll show you in a little bit what that looks like; and then it calls into a bunch of these APIs that I just showed you to reconcile the users. The reason it's a bunch of these APIs is to make sure these update operations are idempotent. And then it goes back to sleep.
If you go back to the UI, you will see that we have two users now, but we want to add more. In this file, we have an admin user, who has admin access to the account; a developer user, who has write access to this namespace; and a monitor user, who has only read access. What I'm going to do here is start a Worker, which talks to Temporal Cloud, and then start this Workflow. While the Workflow does its job and reconciles the data, we just need to wait a little bit to see these users eventually appear on the UI.
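The idempotent reconcile step Liang describes can be sketched in plain Go: diff the desired users from the file against the actual users reported by the cloud API, and only emit the operations still needed, so re-running the loop is a no-op once everything matches. The function shape, emails, and roles below are invented for illustration; they are not the cloud API's types.

```go
package main

import "fmt"

// reconcile compares desired users (from the file) against actual users
// (from the cloud API), both as email->role maps, and returns the idempotent
// set of operations to apply: create missing users, update users whose role
// drifted, and leave everyone else alone.
func reconcile(desired, actual map[string]string) (creates, updates []string) {
	for email, role := range desired {
		current, exists := actual[email]
		switch {
		case !exists:
			creates = append(creates, email)
		case current != role:
			updates = append(updates, email)
		}
	}
	return creates, updates
}

func main() {
	desired := map[string]string{
		"admin@example.com":   "admin",
		"dev@example.com":     "developer",
		"monitor@example.com": "read",
	}
	actual := map[string]string{
		"admin@example.com": "admin",
		"dev@example.com":   "read", // role drifted
	}
	creates, updates := reconcile(desired, actual)
	fmt.Println(len(creates), len(updates)) // one user to create, one to fix
}
```

Because the diff only emits what is missing or drifted, the Workflow can safely re-run the same loop after every sleep without duplicating invitations.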
Preeti Somal: Yeah. So as you can see, we see Workflows everywhere, internally as well. To show the user management APIs, we're showing a Workflow approach to it. Liang, did you want to share anything else here?
Liang Mei: So obviously, one thing that we can do with this is we can update this file and it will get reflected in the UI.
Preeti Somal: Yeah. So the point being that, you know, we've got these primitives in place, and hopefully, all of you can start imagining how this would apply to the automation needs that you have within your processes around onboarding users, etc.
Let's talk a little bit about mission-critical applications. You've heard again how reliability and availability are core focuses for us, and hopefully Temporal never goes down. But in all seriousness, we have a lot of mission-critical applications running on our platform: payments, messaging, bookings, and even tacos. Yeah, I know, we're kind of in between you guys and lunch. So tacos it is.
So as we think about mission-critical applications running on our platform, one of our key goals is to keep pushing availability forward. And today, I am really pleased to announce that we are launching Global Namespaces in private preview. Global Namespaces is a feature that protects against a region outage and takes us to four nines of availability. How do Global Namespaces actually work?
Liang Mei: Yeah, so a Global Namespace actually spans multiple Temporal clusters in different regions. While your workload continues to make progress in the primary region, Temporal continuously replicates all the history events to a secondary region, and as part of that process it also updates the Workflow state there in hot-standby mode. When an outage happens in the primary region, this allows us to fail over your Workflows to the healthy region so that they can continue to execute, and through the same process we fail over all your customer traffic to that region. This whole process allows Temporal Cloud to continue serving traffic for both existing Workflow Executions and any new Workflow Executions during a regional outage.
Preeti Somal: Wow, that's really compelling, Liang. How hard is this to set up?
Liang Mei: It's actually pretty simple. All a user needs to do is to just select the regions, and Temporal Cloud will take care of the rest.
Preeti Somal: That is unbelievable. Are you saying I just need to click a little dropdown and I'll get Global Namespaces?
Liang Mei: Yes, that's exactly right.
Preeti Somal: I think we'll have to show a demo here. Should we switch to demo?
Liang Mei: So here's what we're going to show in this demo. This is an awesome application that our Temporal SRE team created; as you can see, it's a trivia game. I already launched two game sessions: one is powered by a normal Temporal namespace, and the one on the right side is using a Global Namespace. Before we dive into any of the details, let's try to play the game.
Preeti Somal: Yeah, I think I might need help with the baseball question. Who knows the answer to that? It's B? Oh, it's B. I should know that, I'm from the Bay.
Liang Mei: Great, and do you want to try the one on the right-hand side? Which planet in the solar system has the highest average surface temperature?
Preeti Somal: Mercury. Awesome. So once again, as we're running through this game, this is literally the exact same code. There is a blog post that our solutions architect team did on this game. And as Liang is launching that, I think the next step is for us to simulate an outage.
Liang Mei: Yeah. So to demonstrate the value of Global Namespaces, I'm going to bring down the Temporal cluster which powers both of these namespaces. Here I'm going to launch the script; all it does is shut down the services, and then we are going to see errors from both sides. What these apps do, as you can see, is continuously watch Temporal for game state changes, because this is actually a multiplayer game, so other people can also play. I'm not demoing the multiplayer part.
You're going to see two things here. The first is that on the right-hand side, the workload continues to progress, because the failover actually happened: after we detect the failure of that region, we fail the namespace over to the other, healthy region. That's why we can continue to play this game and keep clicking these buttons. The largest island in the Caribbean? Cuba. I misclicked. All right.
Preeti Somal: This is incredible and no pun intended, but this is a game changer, isn't it?
Liang Mei: It is, as you can see. Thank you. I mean, this is a very powerful technology, right? It adds all this resiliency to your application without you having to change any of your code. But beyond that, we use similar technology within Temporal to migrate workloads between Temporal clusters for efficiency reasons and for infrastructure upgrades. In the future, we foresee similar technology helping our users migrate their workloads from on-prem clusters to Temporal Cloud and vice versa.
Preeti Somal: Yeah, amazing. This is definitely not a trivial piece of technology. And as Liang just mentioned, what we're doing here is laying a foundation, and as you can imagine, there are more use cases for us to solve around migration that we will be diligently working on.
Liang Mei: And just before we finish, I want to mention that the DNS change also got propagated; the only reason you probably saw a delay is DNS caching, and the traffic is actually routed to the new region. Thank you so much.
Preeti Somal: Thank you so much, Liang, a huge round of applause.
Here’s the link to the trivia game. It is a multiplayer game. It's really interesting if anybody wants to check that out. We've had the chance to talk about two main focus areas, but there are a lot more capabilities being added to the Temporal platform, so stay tuned.