Possible feature: inspect Task execution time
November 24, 2025, 3:04pm 1
Hi Swift Heroes! 
I’m reaching out to get your thoughts on an idea we’re exploring: adding the ability to measure the actual time spent working on a Task. There’s already work in progress on an experimental implementation, but before moving forward, we want to hear from the community.
(Just a quick note: I’m using working, running, and executing interchangeably here. In a formal proposal, we’d define these terms more precisely.)
The Challenge: How long is a Task really running?
This idea came up during discussions in the Testing Workgroup around the Polling Confirmations pitch. We realized there’s no reliable way to constrain how long polling should run, since wall-clock time isn’t dependable when Swift Testing runs tests in parallel - the start, suspension, and finish times of a test can vary unpredictably.
More broadly, there’s no good way to measure or limit how much time a test or task actually spends consuming CPU versus being suspended or waiting.
A Possible Approach
What if we could inspect how long a Task is actively running or executing - potentially by using a suspending clock? The measurement would include the time spent running child tasks but exclude when those child tasks are suspended.
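To make the gap concrete, here’s a rough sketch. Today’s clocks report elapsed time including suspensions, so a mostly-suspended task still reports a long duration. The `Task.currentExecutionDuration` name at the end is hypothetical - invented here to illustrate the shape of the idea, not an existing API:

```swift
func measureDemo() async {
    let clock = ContinuousClock()
    let elapsed = await clock.measure {
        try? await Task.sleep(for: .seconds(1))  // suspended, not executing
        _ = (0..<1_000_000).reduce(0, +)         // the only actively-running part
    }
    print(elapsed)  // roughly 1 second, dominated by the suspension rather than the work

    // What this pitch explores, sketched as a hypothetical API:
    // let busy = Task.currentExecutionDuration  // time spent actually executing
}
```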
We’d Love Your Feedback
We’re considering pitching an API for this, but before that, we want to understand the bigger picture and potential use cases outside testing as well. One use case we thought of: this might help with profiling or instrumenting products, especially on non-Apple platforms where Instruments is not available.
So, two questions for you:
- Do you think adding the ability to measure actual task runtime would be a valuable enhancement to Swift?
- Can you think of use cases beyond testing where this would be useful? If yes, please share!
Thanks for your insights!
Cheers,
Maarten
Swift Testing Workgroup
nikolai.ruhe (Nikolai Ruhe) November 25, 2025, 6:12am 2
I just want to make sure I understand the semantics correctly. The proposed measurement would return:
- The duration, from the start (spawn) of the Task to now, during which the Task or at least one of its child tasks was actively running.
I first understood it as the combined runtime, counting each child task individually, so the result could be greater than the wall-clock duration.
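A tiny example (invented here) of the two readings, with two child tasks that each do roughly the same amount of CPU-bound work concurrently:

```swift
func semanticsExample() async {
    await withTaskGroup(of: Void.self) { group in
        for _ in 0..<2 {
            group.addTask {
                _ = (0..<50_000_000).reduce(0, +)  // CPU-bound work, no suspension
            }
        }
    }
    // Reading 1 ("the Task or at least one child is actively running"):
    //   roughly the wall-clock duration of the group.
    // Reading 2 ("sum the runtime of each child individually"):
    //   roughly twice that, and can exceed the wall-clock duration.
}
```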
To answer the question: Yes, I think this would be a useful addition. Especially when debugging or analyzing complex behavior, this would be a valuable data point.
I have written extensive instrumentation around a custom executor based on a serial queue to measure job latency and load factor, and also to detect hangs. While not the same thing, I guess the question “How long is a Task working on this?” is a valid one. I could, for example, envision use cases in analytics, where I want to detect abnormal behavior.
+1
maartene (Maarten Engels) November 25, 2025, 7:51am 3
Hi Nikolai
This is indeed the intended behavior.
Do you think your instrumentation could be used as a starting point if we want to create a first implementation of this feature?
FranzBusch (Franz Busch) November 25, 2025, 10:01am 4
I understand the motivation behind exposing more metrics around the Concurrency runtime. We have similar needs in the server space, where we want to understand in live services how long tasks/jobs are running and more. So far, though, we have delegated this down to the executor level. This is due to two reasons:
- Taking clock measurements around each execution of a task/job can impact performance a lot. As an example, in NIO's EventLoop or the new platform executor implementation on Linux, we only take a clock measurement once per tick (see the sketch below). We have seen in the past that unnecessary clock measurements can show up in performance profiles.
- The executors have even better knowledge about what is happening on the scheduling front. They understand exactly when a task/job is enqueued and run, and how it compares to other work happening on the executor.
Have you considered using executors to understand the execution time?
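To illustrate the first point, here is a toy sketch (not NIO's actual code) contrasting per-job clock reads with a single read per tick/batch:

```swift
let clock = ContinuousClock()
let jobs: [() -> Void] = (0..<10_000).map { _ in { /* a small unit of work */ } }

// Per-job measurement: two clock reads for every job (20,000 reads here).
var perJobBusy: Duration = .zero
for job in jobs {
    let start = clock.now
    job()
    perJobBusy += clock.now - start
}

// Per-tick measurement: two clock reads for the whole batch.
let tickStart = clock.now
for job in jobs { job() }
let perTickBusy = clock.now - tickStart

print(perJobBusy, perTickBusy)
```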
maartene (Maarten Engels) November 25, 2025, 11:53am 5
Hi @FranzBusch,
We haven't really explored the potential ways of gaining insight into execution time yet, so in that sense we haven't researched doing this at the executor level. For now, we are most interested in understanding the need for these insights, i.e., is the problem big enough to be worth solving?
When we start looking at potential options, executors definitely sound like an option we need to explore.
I tried to look into the documentation for the platform executor (https://swiftpackageindex.com/swiftlang/swift-platform-executors/documentation), but got a 404 error. Does the platform executor already have ways of determining execution time for a task?
nikolai.ruhe (Nikolai Ruhe) November 25, 2025, 1:48pm 6
Do you think your instrumentation could be used as a starting point if we want to create a first implementation of this feature?
Not really. I've done this to understand the performance of a global actor that is used throughout the non-UI layers of a medium-large project (an app that manages Bluetooth devices). The actor uses a DispatchQueue as a custom executor, so I can observe the enqueueing, starting, and finishing of jobs.
I was mostly interested in delays due to the actor being occupied. So I was measuring the time (latency) from enqueueing to starting, and also the fraction of time it was executing (load). If I understand correctly, you are looking for a somewhat solid basis for timeouts in unit tests, where actual execution time is a better measure than wall-clock time.
So you're looking at task creation, child task spawning, and suspensions. My code has no notion of tasks (any task can be scheduled on the actor/executor). Also, it's just looking at one specific global actor.
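Roughly, the shape of it looks like this - a minimal sketch with invented names, not the actual project code - a DispatchQueue-backed serial executor that records enqueue-to-start latency and busy time per job:

```swift
import Dispatch

final class InstrumentedSerialExecutor: SerialExecutor {
    private let queue = DispatchQueue(label: "instrumented.actor.queue")
    private let clock = ContinuousClock()

    // Only mutated from `queue`, so no additional synchronization is needed.
    private var totalLatency: Duration = .zero  // enqueue -> start
    private var totalBusy: Duration = .zero     // start -> finish
    private var jobCount = 0

    func enqueue(_ job: consuming ExecutorJob) {
        let enqueuedAt = clock.now
        let unownedJob = UnownedJob(job)
        queue.async {
            let startedAt = self.clock.now
            unownedJob.runSynchronously(on: self.asUnownedSerialExecutor())
            let finishedAt = self.clock.now

            self.totalLatency += startedAt - enqueuedAt  // time waiting for the queue
            self.totalBusy += finishedAt - startedAt     // time actually executing
            self.jobCount += 1
        }
    }

    func asUnownedSerialExecutor() -> UnownedSerialExecutor {
        UnownedSerialExecutor(ordinary: self)
    }
}
```

An actor can then adopt it through its `unownedExecutor` property, which is how the global actor in my case routes its jobs onto the instrumented queue.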
FranzBusch (Franz Busch) November 25, 2025, 2:42pm 7
I personally think that getting metrics to understand how the Concurrency runtime and executors are working is a generally useful feature. What I'm not sure about is whether we need to expose such APIs on the Task itself. My primary concern is the performance impact of on-by-default measurement.
There are no public docs at this point, but to answer your question: no, there is no public API to determine the execution time of a task. I am also not sure if we would add task-specific APIs to measure execution time, or rather have high-level metrics to understand the performance of the overall executor. Having said that, you can totally write your own executors that take task-level metrics.
ktoso (Konrad 'ktoso' Malawski 🐟🏴☠️) November 26, 2025, 2:58am 8
This is something we should explore at some point, but it’ll have to be carefully woven through the runtime such that there is no performance impact when, e.g., measurement is not enabled. About that linked PR specifically, I do have concerns, and I’m not sure it’s the right way to do it. It is not intended to be merged at this point; it’s just an experiment.
The granularity of these measurements is also unclear. So while I welcome the feature request, unless someone has a considerable amount of time to explore this, it’ll likely remain just a request for the time being.
Lack of runtime observability like this is an understandable and valid concern when teams move, e.g., from the JVM and its incredible runtime instrumentation to Swift, and find that such tooling is nearly nonexistent. Perhaps the rise of always-on profilers will help such adopters, so it remains to be seen how critical task-specific instrumentation really is.
maartene (Maarten Engels) November 26, 2025, 7:07am 9
Although the performance impact will depend on the implementation, it’s reasonable to assume that this feature would be opt-in rather than on-by-default.
ktoso (Konrad 'ktoso' Malawski 🐟🏴☠️) November 26, 2025, 12:09pm 10
Well, the fact of adding hooks means they have to be checked at runtime, so making these checks efficient is an exercise in itself and is the hardest and most important part of such a feature. It also impacts the design: would we be willing to have this applied globally only at “startup”, or does it have to be more flexible (which might come at a cost)? What I’m saying is basically that the difficulty of this specific feature lies in the actual details, not in agreeing that such a feature would be very welcome - I think that’s very clear. Just what to build and how to achieve it is difficult and would need to be fleshed out.
smontgomery (Stuart Montgomery) December 1, 2025, 8:58pm 11
On this specific question, I do think the motivating use cases in the testing domain specifically would be adequately served by a solution which is enabled and checked just once, at process launch. We don't need the ability to dynamically toggle it on/off throughout the lifetime of a process or task.
Agreed.