1. Introduction
This paper proposes a self-contained design for a Standard C++ framework for managing asynchronous execution on generic execution resources. It is based on the ideas in A Unified Executors Proposal for C++ and its companion papers.
1.1. Motivation
Today, C++ software is increasingly asynchronous and parallel, a trend that is likely to only continue going forward. Asynchrony and parallelism appear everywhere, from processor hardware interfaces, to networking, to file I/O, to GUIs, to accelerators. Every C++ domain and every platform needs to deal with asynchrony and parallelism, from scientific computing to video games to financial services, from the smallest mobile devices to your laptop to GPUs in the world’s fastest supercomputer.
While the C++ Standard Library has a rich set of concurrency primitives (std::atomic, std::mutex, std::counting_semaphore, etc) and lower level building blocks (std::thread, etc), we lack a Standard vocabulary and framework for asynchrony and parallelism that C++ programmers desperately need. std::async/std::future/std::promise, C++11’s intended exposure for asynchrony, is inefficient, hard to use correctly, and severely lacking in genericity, making it unusable in many contexts. We introduced parallel algorithms to the C++ Standard Library in C++17, and while they are an excellent start, they are all inherently synchronous and not composable.
This paper proposes a Standard C++ model for asynchrony based around three key abstractions: schedulers, senders, and receivers, and a set of customizable asynchronous algorithms.
1.2. Priorities
-
Be composable and generic, allowing users to write code that can be used with many different types of execution resources.
-
Encapsulate common asynchronous patterns in customizable and reusable algorithms, so users don’t have to invent things themselves.
-
Make it easy to be correct by construction.
-
Support the diversity of execution resources and execution agents, because not all execution agents are created equal; some are less capable than others, but not less important.
-
Allow everything to be customized by an execution resource, including transfer to other execution resources, but don’t require that execution resources customize everything.
-
Care about all reasonable use cases, domains and platforms.
-
Errors must be propagated, but error handling must not present a burden.
-
Support cancellation, which is not an error.
-
Have clear and concise answers for where things execute.
-
Be able to manage and terminate the lifetimes of objects asynchronously.
1.3. Examples: End User
In this section we demonstrate the end-user experience of asynchronous programming directly with the sender algorithms presented in this paper. See § 4.19 User-facing sender factories, § 4.20 User-facing sender adaptors, and § 4.21 User-facing sender consumers for short explanations of the algorithms used in these code examples.
1.3.1. Hello world
using namespace std::execution;

scheduler auto sch = thread_pool.scheduler();                              // 1

sender auto begin = schedule(sch);                                         // 2
sender auto hi = then(begin, []{                                           // 3
    std::cout << "Hello world! Have an int.";                              // 3
    return 13;                                                             // 3
});                                                                        // 3
sender auto add_42 = then(hi, [](int arg) { return arg + 42; });           // 4

auto [i] = this_thread::sync_wait(add_42).value();                         // 5
This example demonstrates the basics of schedulers, senders, and receivers:
-
First we need to get a scheduler from somewhere, such as a thread pool. A scheduler is a lightweight handle to an execution resource.
-
To start a chain of work on a scheduler, we call § 4.19.1 execution::schedule, which returns a sender that completes on the scheduler. A sender describes asynchronous work and sends a signal (value, error, or stopped) to some recipient(s) when that work completes.
-
We use sender algorithms to produce senders and compose asynchronous work. § 4.20.2 execution::then is a sender adaptor that takes an input sender and a std::invocable, and calls the std::invocable on the signal sent by the input sender. The sender returned by then sends the result of that invocation. In this case, the input sender came from schedule, so it’s void, meaning it won’t send us a value, so our std::invocable takes no parameters. But we return an int, which will be sent to the next recipient.
-
Now, we add another operation to the chain, again using § 4.20.2 execution::then. This time, we get sent a value - the int from the previous step. We add 42 to it, and then return the result.
-
Finally, we’re ready to submit the entire asynchronous pipeline and wait for its completion. Everything up until this point has been completely asynchronous; the work may not have even started yet. To ensure the work has started and then block pending its completion, we use § 4.21.1 this_thread::sync_wait, which will either return a std::optional<std::tuple<...>> with the value sent by the last sender, or an empty std::optional if the last sender sent a stopped signal, or it throws an exception if the last sender sent an error.
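The same program can also be written by piping the senders together with operator| (see § 4.12 Most sender adaptors are pipeable); a minimal sketch, reusing the thread_pool from above:

using namespace std::execution;

scheduler auto sch = thread_pool.scheduler();

auto [i] = this_thread::sync_wait(
               schedule(sch)
             | then([] { std::cout << "Hello world! Have an int."; return 13; })
             | then([](int arg) { return arg + 42; })
           ).value();
// i == 55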
1.3.2. Asynchronous inclusive scan
using namespace std::execution;

sender auto async_inclusive_scan(scheduler auto sch,                          // 2
                                 std::span<const double> input,               // 1
                                 std::span<double> output,                    // 1
                                 double init,                                 // 1
                                 std::size_t tile_count)                      // 3
{
  std::size_t const tile_size = (input.size() + tile_count - 1) / tile_count;

  std::vector<double> partials(tile_count + 1);                               // 4
  partials[0] = init;                                                         // 4

  return just(std::move(partials))                                            // 5
       | continues_on(sch)
       | bulk(tile_count,                                                     // 6
           [=](std::size_t i, std::vector<double>& partials) {                // 7
             auto start = i * tile_size;                                      // 8
             auto end   = std::min(input.size(), (i + 1) * tile_size);        // 8
             partials[i + 1] = *--std::inclusive_scan(begin(input) + start,   // 9
                                                      begin(input) + end,     // 9
                                                      begin(output) + start); // 9
           })                                                                 // 10
       | then(                                                                // 11
           [](std::vector<double>&& partials) {
             std::inclusive_scan(begin(partials), end(partials),              // 12
                                 begin(partials));                            // 12
             return std::move(partials);                                      // 13
           })
       | bulk(tile_count,                                                     // 14
           [=](std::size_t i, std::vector<double>& partials) {                // 14
             auto start = i * tile_size;                                      // 14
             auto end   = std::min(input.size(), (i + 1) * tile_size);        // 14
             std::for_each(begin(output) + start, begin(output) + end,        // 14
                           [&](double& e) { e = partials[i] + e; });          // 14
           })
       | then(                                                                // 15
           [=](std::vector<double>&& partials) {                              // 15
             return output;                                                   // 15
           });                                                                // 15
}
This example builds an asynchronous computation of an inclusive scan:
-
It scans a sequence of doubles (represented as the std::span<const double> input) and stores the result in another sequence of doubles (represented as std::span<double> output).
-
It takes a scheduler, which specifies what execution resource the scan should be launched on.
-
It also takes a tile_count parameter that controls the number of execution agents that will be spawned.
-
First we need to allocate temporary storage needed for the algorithm, which we’ll do with a std::vector, partials. We need one double of temporary storage for each execution agent we create.
-
Next we’ll create our initial sender with § 4.19.2 execution::just and § 4.20.1 execution::continues_on. These senders will send the temporary storage, which we’ve moved into the sender. The sender has a completion scheduler of sch, which means the next item in the chain will use sch.
-
Senders and sender adaptors support composition via operator|, similar to C++ ranges. We’ll use operator| to attach the next piece of work, which will spawn tile_count execution agents using § 4.20.9 execution::bulk (see § 4.12 Most sender adaptors are pipeable for details).
-
Each agent will call a std::invocable, passing it two arguments. The first is the agent’s index (i) in the § 4.20.9 execution::bulk operation, in this case a unique integer in [0, tile_count). The second argument is what the input sender sent - the temporary storage.
-
We start by computing the start and end of the range of input and output elements that this agent is responsible for, based on our agent index.
-
Then we do a sequential std::inclusive_scan over our elements. We store the scan result for our last element, which is the sum of all of our elements, in our temporary storage partials.
-
After all computation in that initial § 4.20.9 execution::bulk pass has completed, every one of the spawned execution agents will have written the sum of its elements into its slot in partials.
-
Now we need to scan all of the values in partials. We’ll do that with a single execution agent which will execute after the § 4.20.9 execution::bulk completes. We create that execution agent with § 4.20.2 execution::then.
-
§ 4.20.2 execution::then takes an input sender and a std::invocable and calls the std::invocable with the value sent by the input sender. Inside our std::invocable, we call std::inclusive_scan on partials, which the input senders will send to us.
-
Then we return partials, which the next phase will need.
-
Finally we do another § 4.20.9 execution::bulk of the same shape as before. In this § 4.20.9 execution::bulk, we will use the scanned values in partials to integrate the sums from other tiles into our elements, completing the inclusive scan.
-
async_inclusive_scan returns a sender that sends the output std::span<double>. A consumer of the algorithm can chain additional work that uses the scan result. At the point at which async_inclusive_scan returns, the computation may not have completed. In fact, it may not have even started.
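For illustration, a caller might launch the scan and block for its result as follows (a minimal sketch; thread_pool is the same assumed pool as in § 1.3.1):

std::vector<double> in(1'000'000, 1.0);
std::vector<double> out(in.size());

scheduler auto sch = thread_pool.scheduler();   // assumed, as in § 1.3.1

auto [result] = this_thread::sync_wait(
                    async_inclusive_scan(sch, in, out, 0.0, 16)).value();
// 'result' is the std::span<double> over 'out', which now holds the inclusive scan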
1.3.3. Asynchronous dynamically-sized read
using namespace std::execution;

sender_of<std::size_t> auto async_read(                                    // 1
    sender_of<std::span<std::byte>> auto buffer,                           // 1
    auto handle);                                                          // 1

struct dynamic_buffer {                                                    // 3
  std::unique_ptr<std::byte[]> data;                                       // 3
  std::size_t size;                                                        // 3
};                                                                         // 3

sender_of<dynamic_buffer> auto async_read_array(auto handle) {             // 2
  return just(dynamic_buffer{})                                            // 4
       | let_value([handle](dynamic_buffer& buf) {                         // 5
           return just(std::as_writable_bytes(std::span(&buf.size, 1)))    // 6
                | async_read(handle)                                       // 7
                | then(                                                    // 8
                    [&buf](std::size_t bytes_read) {                       // 9
                      assert(bytes_read == sizeof(buf.size));              // 10
                      buf.data = std::make_unique<std::byte[]>(buf.size);  // 11
                      return std::span(buf.data.get(), buf.size);          // 12
                    })
                | async_read(handle)                                       // 13
                | then(
                    [&buf](std::size_t bytes_read) {
                      assert(bytes_read == buf.size);                      // 14
                      return std::move(buf);                               // 15
                    });
         });
}
This example demonstrates a common asynchronous I/O pattern - reading a payload of a dynamic size by first reading the size, then reading the number of bytes specified by the size:
-
async_read is a pipeable sender adaptor. It’s a customization point object, but this is what its call signature looks like. It takes a sender parameter which must send an input buffer in the form of a std::span<std::byte>, and a handle to an I/O context. It will asynchronously read into the input buffer, up to the size of the std::span. It returns a sender which will send the number of bytes read once the read completes.
-
async_read_array takes an I/O handle and reads a size from it, and then a buffer of that many bytes. It returns a sender that sends a dynamic_buffer object that owns the data that was sent.
-
dynamic_buffer is an aggregate struct that contains a std::unique_ptr<std::byte[]> and a size.
-
The first thing we do inside of async_read_array is create a sender that will send a new, empty dynamic_buffer object using § 4.19.2 execution::just. We can attach more work to the pipeline using operator| composition (see § 4.12 Most sender adaptors are pipeable for details).
-
We need the lifetime of this dynamic_buffer object to last for the entire pipeline. So, we use let_value, which takes an input sender and a std::invocable that must return a sender itself (see § 4.20.4 execution::let_* for details). let_value sends the value from the input sender to the std::invocable. Critically, the lifetime of the sent object will last until the sender returned by the std::invocable completes.
-
Inside of the let_value std::invocable, we have the rest of our logic. First, we want to initiate an async_read of the buffer size. To do that, we need to send a std::span pointing to buf.size. We can do that with § 4.19.2 execution::just.
-
We chain the async_read onto the § 4.19.2 execution::just sender with operator|.
-
Next, we pipe a std::invocable that will be invoked after the async_read completes using § 4.20.2 execution::then.
-
That std::invocable gets sent the number of bytes read.
-
We need to check that the number of bytes read is what we expected.
-
Now that we have read the size of the data, we can allocate storage for it.
-
We return a std::span<std::byte> to the storage for the data from the std::invocable. This will be sent to the next recipient in the pipeline.
-
And that recipient will be another async_read, which will read the data.
-
Once the data has been read, in another § 4.20.2 execution::then, we confirm that we read the right number of bytes.
-
Finally, we move out of and return our dynamic_buffer object. It will get sent by the sender returned by async_read_array. We can attach more things to that sender to use the data in the buffer.
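As a usage sketch, a consumer of async_read_array could attach further work and then block for the result; here h is an assumed I/O handle:

sender auto work =
    async_read_array(h)                    // 'h' is an assumed I/O handle
  | then([](dynamic_buffer buf) {
      // buf.data.get() points at buf.size bytes of payload
      return buf.size;
    });

auto [n] = this_thread::sync_wait(std::move(work)).value();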
1.4. Asynchronous Windows socket recv
To get a better feel for how this interface might be used by low-level operations, see this example implementation of a cancellable async_recv() operation for a Windows Socket.
struct operation_base : WSAOVERLAPPED {
  using completion_fn = void(operation_base* op, DWORD bytesTransferred, int errorCode) noexcept;
  // Assume IOCP event loop will call this when this OVERLAPPED structure is dequeued.
  completion_fn* completed;
};

template <class Receiver>
struct recv_op : operation_base {
  using operation_state_concept = std::execution::operation_state_t;

  recv_op(SOCKET s, void* data, size_t len, Receiver r)
    : receiver(std::move(r)), sock(s) {
    this->Internal = 0;
    this->InternalHigh = 0;
    this->Offset = 0;
    this->OffsetHigh = 0;
    this->hEvent = NULL;
    this->completed = &recv_op::on_complete;
    buffer.len = len;
    buffer.buf = static_cast<CHAR*>(data);
  }

  void start() & noexcept {
    // Avoid even calling WSARecv() if operation already cancelled
    auto st = std::execution::get_stop_token(std::execution::get_env(receiver));
    if (st.stop_requested()) {
      std::execution::set_stopped(std::move(receiver));
      return;
    }

    // Store and cache result here in case it changes during execution
    const bool stopPossible = st.stop_possible();
    if (!stopPossible) {
      ready.store(true, std::memory_order_relaxed);
    }

    // Launch the operation
    DWORD bytesTransferred = 0;
    DWORD flags = 0;
    int result = WSARecv(sock, &buffer, 1, &bytesTransferred, &flags,
                         static_cast<WSAOVERLAPPED*>(this), NULL);
    if (result == SOCKET_ERROR) {
      int errorCode = WSAGetLastError();
      if (errorCode != WSA_IO_PENDING) {
        if (errorCode == WSA_OPERATION_ABORTED) {
          std::execution::set_stopped(std::move(receiver));
        } else {
          std::execution::set_error(std::move(receiver),
                                    std::error_code(errorCode, std::system_category()));
        }
        return;
      }
    } else {
      // Completed synchronously (assuming FILE_SKIP_COMPLETION_PORT_ON_SUCCESS has been set)
      std::execution::set_value(std::move(receiver), bytesTransferred);
      return;
    }

    // If we get here then operation has launched successfully and will complete asynchronously.
    // May be completing concurrently on another thread already.
    if (stopPossible) {
      // Register the stop callback
      stopCallback.emplace(std::move(st), cancel_cb{*this});

      // Mark as 'completed'
      if (ready.load(std::memory_order_acquire) ||
          ready.exchange(true, std::memory_order_acq_rel)) {
        // Already completed on another thread
        stopCallback.reset();

        BOOL ok = WSAGetOverlappedResult(sock, (WSAOVERLAPPED*)this, &bytesTransferred, FALSE, &flags);
        if (ok) {
          std::execution::set_value(std::move(receiver), bytesTransferred);
        } else {
          int errorCode = WSAGetLastError();
          std::execution::set_error(std::move(receiver),
                                    std::error_code(errorCode, std::system_category()));
        }
      }
    }
  }

  struct cancel_cb {
    recv_op& op;
    void operator()() noexcept {
      CancelIoEx((HANDLE)op.sock, (OVERLAPPED*)(WSAOVERLAPPED*)&op);
    }
  };

  static void on_complete(operation_base* op, DWORD bytesTransferred, int errorCode) noexcept {
    recv_op& self = *static_cast<recv_op*>(op);
    if (self.ready.load(std::memory_order_acquire) ||
        self.ready.exchange(true, std::memory_order_acq_rel)) {
      // Unsubscribe any stop callback so we know that CancelIoEx() is not accessing 'op' any more
      self.stopCallback.reset();
      if (errorCode == 0) {
        std::execution::set_value(std::move(self.receiver), bytesTransferred);
      } else {
        std::execution::set_error(std::move(self.receiver),
                                  std::error_code(errorCode, std::system_category()));
      }
    }
  }

  using stop_callback_t = stop_callback_of_t<stop_token_of_t<env_of_t<Receiver>>, cancel_cb>;

  Receiver receiver;
  SOCKET sock;
  WSABUF buffer;
  std::optional<stop_callback_t> stopCallback;
  std::atomic<bool> ready{false};
};

struct recv_sender {
  using sender_concept = std::execution::sender_t;
  SOCKET sock;
  void* data;
  size_t len;

  template <class Receiver>
  recv_op<Receiver> connect(Receiver r) const {
    return recv_op<Receiver>{sock, data, len, std::move(r)};
  }
};

recv_sender async_recv(SOCKET s, void* data, size_t len) {
  return recv_sender{s, data, len};
}
1.4.1. More end-user examples
1.4.1.1. Sudoku solver
This example comes from Kirk Shoop, who ported an example from TBB’s documentation to sender/receiver in his fork of the libunifex repo. It is a Sudoku solver that uses a configurable number of threads to explore the search space for solutions.
The sender/receiver-based Sudoku solver can be found here. Some things that are worth noting about Kirk’s solution:
-
Although it schedules asynchronous work onto a thread pool, and each unit of work will schedule more work, its use of structured concurrency patterns makes reference counting unnecessary. The solution does not make use of shared_ptr.
-
In addition to eliminating the need for reference counting, the use of structured concurrency makes it easy to ensure that resources are cleaned up on all code paths. In contrast, the TBB example that inspired this one leaks memory.
For comparison, the TBB-based Sudoku solver can be found here.
1.4.1.2. File copy
This example also comes from Kirk Shoop. It uses sender/receiver to recursively copy the files in a directory tree, and it demonstrates how sender/receiver can be used to do I/O, using a scheduler that schedules work on Linux’s io_uring.
As with the Sudoku example, this example obviates the need for reference counting by employing structured concurrency. It uses iteration with an upper limit to avoid having too many open file handles.
You can find the example here.
1.4.1.3. Echo server
Dietmar Kuehl has proposed networking APIs that use the sender/receiver abstraction (see P2762). He has implemented an echo server as a demo. His echo server code can be found here.
Below is the part of the echo server code that is executed for each client that connects to the echo server. In a loop, it reads input from a socket and echoes the input back to the same socket. All of this, including the loop, is implemented with generic async algorithms.
outstanding.start(
    EX::repeat_effect_until(
        EX::let_value(
            NN::async_read_some(ptr->d_socket,
                                context.scheduler(),
                                NN::buffer(ptr->d_buffer))
          | EX::then([ptr](::std::size_t n) {
                ::std::cout << "read='" << ::std::string_view(ptr->d_buffer, n) << "'\n";
                ptr->d_done = n == 0;
                return n;
            }),
            [&context, ptr](::std::size_t n) {
                return NN::async_write_some(ptr->d_socket,
                                            context.scheduler(),
                                            NN::buffer(ptr->d_buffer, n));
            })
      | EX::then([](auto&&...) {}),
        [owner = ::std::move(owner)] { return owner->d_done; }));
In this code, async_read_some and async_write_some are asynchronous socket-based networking APIs that return senders. repeat_effect_until, let_value, and then are fully generic sender adaptor algorithms that accept and return senders.
This is a good example of seamless composition of async I/O functions with non-I/O operations. And by composing the senders in this structured way, all the state for the composite operation -- the repeat_effect_until expression and all its child operations -- is stored altogether in a single object.
1.5. Examples: Algorithms
In this section we show a few simple sender/receiver-based algorithm implementations.
1.5.1. then
namespace stdexec = std::execution;

template <class R, class F>
class _then_receiver : public R {
  F f_;

 public:
  _then_receiver(R r, F f) : R(std::move(r)), f_(std::move(f)) {}

  // Customize set_value by invoking the callable and passing the result to
  // the inner receiver
  template <class... As>
    requires std::invocable<F, As...>
  void set_value(As&&... as) && noexcept {
    try {
      stdexec::set_value((R&&) *this, std::invoke((F&&) f_, (As&&) as...));
    } catch (...) {
      stdexec::set_error((R&&) *this, std::current_exception());
    }
  }
};

template <stdexec::sender S, class F>
struct _then_sender {
  using sender_concept = stdexec::sender_t;

  S s_;
  F f_;

  template <class... Args>
  using _set_value_t =
      stdexec::completion_signatures<stdexec::set_value_t(std::invoke_result_t<F, Args...>)>;

  using _except_ptr_sig =
      stdexec::completion_signatures<stdexec::set_error_t(std::exception_ptr)>;

  // Compute the completion signatures
  template <class Env>
  auto get_completion_signatures(Env&& env) && noexcept
      -> stdexec::transform_completion_signatures_of<S, Env, _except_ptr_sig, _set_value_t> {
    return {};
  }

  // Connect:
  template <stdexec::receiver R>
  auto connect(R r) && -> stdexec::connect_result_t<S, _then_receiver<R, F>> {
    return stdexec::connect((S&&) s_, _then_receiver{(R&&) r, (F&&) f_});
  }

  decltype(auto) get_env() const noexcept {
    return stdexec::get_env(s_);
  }
};

template <stdexec::sender S, class F>
stdexec::sender auto then(S s, F f) {
  return _then_sender<S, F>{(S&&) s, (F&&) f};
}
This code builds a then algorithm that transforms the value(s) from the input sender with a transformation function. The result of the transformation becomes the new value. The other receiver functions (set_error and set_stopped), as well as all receiver queries, are passed through unchanged.
In detail, it does the following:
-
Defines a receiver in terms of an inner receiver and an invocable that:
  -
  Defines a constrained set_value member function for transforming the value channel.
  -
  Delegates set_error and set_stopped to the inner receiver.
-
Defines a sender that aggregates another sender and the invocable, which defines a connect member function that wraps the incoming receiver in the receiver from (1) and passes it and the incoming sender to std::execution::connect, returning the result. It also defines a get_completion_signatures member function that declares the sender’s completion signatures when executed within a particular environment.
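A quick usage sketch of the then defined above (relying only on the just and sync_wait facilities proposed in this paper):

namespace stdexec = std::execution;

auto five = then(stdexec::just(2), [](int i) { return i + 3; });
auto [v]  = std::this_thread::sync_wait(std::move(five)).value();
// v == 5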
1.5.2. retry
using namespace std;
namespace stdexec = execution;

template <class From, class To>
concept _decays_to = same_as<decay_t<From>, To>;

// _conv needed so we can emplace construct non-movable types into
// a std::optional.
template <invocable F>
struct _conv {
  F f_;
  static_assert(is_nothrow_move_constructible_v<F>);
  explicit _conv(F f) noexcept : f_((F&&) f) {}
  operator invoke_result_t<F>() && {
    return ((F&&) f_)();
  }
};

template <class S, class R>
struct _retry_op;

// pass through all customizations except set_error, which retries
// the operation.
template <class S, class R>
struct _retry_receiver {
  _retry_op<S, R>* o_;

  void set_value(auto&&... as) && noexcept {
    stdexec::set_value(std::move(o_->r_), (decltype(as)&&) as...);
  }
  void set_error(auto&&) && noexcept {
    o_->_retry(); // This causes the op to be retried
  }
  void set_stopped() && noexcept {
    stdexec::set_stopped(std::move(o_->r_));
  }
  decltype(auto) get_env() const noexcept {
    return stdexec::get_env(o_->r_);
  }
};

// Hold the nested operation state in an optional so we can
// re-construct and re-start it if the operation fails.
template <class S, class R>
struct _retry_op {
  using operation_state_concept = stdexec::operation_state_t;
  using _child_op_t = stdexec::connect_result_t<S&, _retry_receiver<S, R>>;

  S s_;
  R r_;
  optional<_child_op_t> o_;

  _retry_op(_retry_op&&) = delete;
  _retry_op(S s, R r)
    : s_(std::move(s)), r_(std::move(r)), o_{_connect()} {}

  auto _connect() noexcept {
    return _conv{[this] {
      return stdexec::connect(s_, _retry_receiver<S, R>{this});
    }};
  }
  void _retry() noexcept {
    try {
      o_.emplace(_connect()); // potentially-throwing
      stdexec::start(*o_);
    } catch (...) {
      stdexec::set_error(std::move(r_), std::current_exception());
    }
  }
  void start() & noexcept {
    stdexec::start(*o_);
  }
};

// Helpers for computing the retry sender's completion signatures:
template <class... Ts>
using _value_t = stdexec::completion_signatures<stdexec::set_value_t(Ts...)>;

template <class>
using _error_t = stdexec::completion_signatures<>;

using _except_sig = stdexec::completion_signatures<stdexec::set_error_t(std::exception_ptr)>;

template <class S>
struct _retry_sender {
  using sender_concept = stdexec::sender_t;
  S s_;

  explicit _retry_sender(S s) : s_(std::move(s)) {}

  // Declare the signatures with which this sender can complete
  template <class Env>
  using _compl_sigs =
      stdexec::transform_completion_signatures_of<S&, Env, _except_sig, _value_t, _error_t>;

  template <class Env>
  auto get_completion_signatures(Env&&) const noexcept -> _compl_sigs<Env> {
    return {};
  }

  template <stdexec::receiver R>
    requires stdexec::sender_to<S&, _retry_receiver<S, R>>
  _retry_op<S, R> connect(R r) && {
    return {std::move(s_), std::move(r)};
  }

  decltype(auto) get_env() const noexcept {
    return stdexec::get_env(s_);
  }
};

template <stdexec::sender S>
stdexec::sender auto retry(S s) {
  return _retry_sender{std::move(s)};
}
The retry algorithm takes a multi-shot sender and causes it to repeat on error, passing through values and stopped signals. Each time the input sender is restarted, a new receiver is connected and the resulting operation state is stored in an optional, which allows us to reinitialize it multiple times.
This example does the following:
-
Defines a _conv utility that takes advantage of C++17’s guaranteed copy elision to emplace a non-movable type in a std::optional.
-
Defines a _retry_receiver that holds a pointer back to the operation state. It passes all customizations through unmodified to the inner receiver owned by the operation state except for set_error, which causes a _retry() function to be called instead.
-
Defines an operation state that aggregates the input sender and receiver, and declares storage for the nested operation state in an optional. Constructing the operation state constructs a _retry_receiver with a pointer to the (under construction) operation state and uses it to connect to the input sender.
-
Starting the operation state dispatches to start on the inner operation state.
-
The _retry() function reinitializes the inner operation state by connecting the sender to a new receiver, holding a pointer back to the outer operation state as before.
-
After reinitializing the inner operation state, _retry() calls start on it, causing the failed operation to be rescheduled.
-
Defines a _retry_sender that implements a connect member function to return an operation state constructed from the passed-in sender and receiver.
-
_retry_sender also implements a get_completion_signatures member function to describe the ways this sender may complete when executed in a particular execution resource.
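A minimal usage sketch of the retry defined above; flaky_sender stands in for any multi-shot sender that sometimes completes with an error:

// 'flaky_sender' is a hypothetical multi-shot sender that sometimes completes with an error.
stdexec::sender auto resilient = retry(flaky_sender);

// Each error completion reconnects and restarts the child operation; values and
// stopped signals pass through unchanged.
auto result = std::this_thread::sync_wait(std::move(resilient));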
1.6. Examples: Schedulers
In this section we look at some schedulers of varying complexity.
1.6.1. Inline scheduler
namespace stdexec = std::execution;

class inline_scheduler {
  template <class R>
  struct _op {
    using operation_state_concept = stdexec::operation_state_t;
    R rec_;
    void start() & noexcept { stdexec::set_value(std::move(rec_)); }
  };

  struct _env {
    template <class Tag>
    inline_scheduler query(stdexec::get_completion_scheduler_t<Tag>) const noexcept {
      return {};
    }
  };

  struct _sender {
    using sender_concept = stdexec::sender_t;
    using _compl_sigs = stdexec::completion_signatures<stdexec::set_value_t()>;
    using completion_signatures = _compl_sigs;

    template <stdexec::receiver_of<_compl_sigs> R>
    _op<R> connect(R rec) noexcept(std::is_nothrow_move_constructible_v<R>) {
      return {std::move(rec)};
    }

    _env get_env() const noexcept { return {}; }
  };

 public:
  inline_scheduler() = default;

  _sender schedule() const noexcept { return {}; }

  bool operator==(const inline_scheduler&) const noexcept = default;
};
The inline scheduler is a trivial scheduler that completes immediately and synchronously on the thread that calls execution::start on the operation state produced by its sender. In other words, calling start on the operation state is just a fancy way of calling set_value on the receiver, with the exception of the fact that start wants to be passed an lvalue.
Although not a particularly useful scheduler, it serves to illustrate the basics of implementing one. The inline_scheduler:
-
Customizes execution::schedule to return an instance of the sender type _sender.
-
The _sender type models the sender concept and provides the metadata needed to describe it as a sender of no values that never calls set_error or set_stopped. This metadata is provided with the help of the execution::completion_signatures utility.
-
The _sender type customizes execution::connect to accept a receiver of no values. It returns an instance of type _op that holds the receiver by value.
-
The operation state customizes std::execution::start to call std::execution::set_value on the receiver.
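For illustration, a minimal sketch of the inline scheduler in use; the work runs synchronously on the calling thread, inside sync_wait:

inline_scheduler sch;

auto [n] = std::this_thread::sync_wait(
               stdexec::schedule(sch) | stdexec::then([] { return 42; })
           ).value();
// n == 42; all of the work ran on the calling thread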
1.6.2. Single thread scheduler
This example shows how to create a scheduler for an execution resource that
consists of a single thread. It is implemented in terms of a lower-level
execution resource called run_loop.
class single_thread_context {
  std::execution::run_loop loop_;
  std::thread thread_;

 public:
  single_thread_context()
    : loop_(), thread_([this] { loop_.run(); }) {}

  single_thread_context(single_thread_context&&) = delete;

  ~single_thread_context() {
    loop_.finish();
    thread_.join();
  }

  auto get_scheduler() noexcept {
    return loop_.get_scheduler();
  }

  std::thread::id get_thread_id() const noexcept {
    return thread_.get_id();
  }
};
The single_thread_context owns an event loop and a thread to drive it. In the destructor, it tells the event loop to finish up what it’s doing and then joins the thread, blocking until the event loop drains.
The interesting bits are in the run_loop context implementation. It is slightly too long to include here, so we only provide a reference to it, but there is one noteworthy detail about its implementation: It uses space in its operation states to build an intrusive linked list of work items. In structured concurrency patterns, the operation states of nested operations compose statically, and in an algorithm like sync_wait, the composite operation state lives on the stack for the duration of the operation. The end result is that work can be scheduled onto this thread with zero allocations.
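A minimal usage sketch of the context defined above, using only the algorithms proposed in this paper:

single_thread_context ctx;

auto [tid] = std::this_thread::sync_wait(
                 std::execution::schedule(ctx.get_scheduler())
               | std::execution::then([] { return std::this_thread::get_id(); })
             ).value();

assert(tid == ctx.get_thread_id()); // the work ran on the context's own thread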
1.7. Examples: Server theme
In this section we look at some examples of how one would use senders to implement an HTTP server. The examples ignore the low-level details of the HTTP server and look at how senders can be combined to achieve the goals of the project.
General application context:
-
server application that processes images
-
execution resources:
-
1 dedicated thread for network I/O
-
N worker threads used for CPU-intensive work
-
M threads for auxiliary I/O
-
optional GPU context that may be used on some types of servers
-
-
all parts of the applications can be asynchronous
-
no locks shall be used in user code
1.7.1. Composability with execution::let_*
Example context:
-
we are looking at the flow of processing an HTTP request and sending back the response.
-
show how one can break the (slightly complex) flow into steps with execution::let_* functions.
-
different phases of processing HTTP requests are broken down into separate concerns.
-
each part of the processing might use different execution resources (details not shown in this example).
-
error handling is generic, regardless which component fails; we always send the right response to the clients.
Goals:
-
show how one can break more complex flows into steps with let_* functions.
-
exemplify the use of the let_value, let_error, let_stopped, and just algorithms.
namespace stdexec = std::execution;

// Returns a sender that yields an http_request object for an incoming request
stdexec::sender auto schedule_request_start(read_requests_ctx ctx) {...}
// Sends a response back to the client; yields a void signal on success
stdexec::sender auto send_response(const http_response& resp) {...}
// Validate that the HTTP request is well-formed; forwards the request on success
stdexec::sender auto validate_request(const http_request& req) {...}

// Handle the request; main application logic
stdexec::sender auto handle_request(const http_request& req) {
  //...
  return stdexec::just(http_response{200, result_body});
}

// Transforms server errors into responses to be sent to the client
stdexec::sender auto error_to_response(std::exception_ptr err) {
  try {
    std::rethrow_exception(err);
  } catch (const std::invalid_argument& e) {
    return stdexec::just(http_response{404, e.what()});
  } catch (const std::exception& e) {
    return stdexec::just(http_response{500, e.what()});
  } catch (...) {
    return stdexec::just(http_response{500, "Unknown server error"});
  }
}

// Transforms cancellation of the server into responses to be sent to the client
stdexec::sender auto stopped_to_response() {
  return stdexec::just(http_response{503, "Service temporarily unavailable"});
}
//...

// The whole flow for transforming incoming requests into responses
stdexec::sender auto snd =
    // get a sender when a new request comes
    schedule_request_start(the_read_requests_ctx)
    // make sure the request is valid; throw if not
    | stdexec::let_value(validate_request)
    // process the request in a function that may be using a different execution resource
    | stdexec::let_value(handle_request)
    // If there are errors transform them into proper responses
    | stdexec::let_error(error_to_response)
    // If the flow is cancelled, send back a proper response
    | stdexec::let_stopped(stopped_to_response)
    // write the result back to the client
    | stdexec::let_value(send_response)
    // done
    ;

// execute the whole flow asynchronously
stdexec::start_detached(std::move(snd));
The example shows how one can separate out the concerns for interpreting requests, validating requests, running the main logic for handling the request, generating error responses, handling cancellation and sending the response back to the client. They are all different phases in the application, and can be joined together with the let_* functions.
All our functions return sender objects, so that they can all generate success, failure and cancellation paths. For example, regardless of where an error is generated (reading the request, validating the request or handling the response), we have one common block to handle the error, and following error flows is easy.
Also, because every step works with sender objects, any of these steps may be completely asynchronous; the overall flow doesn’t care. Regardless of the execution resource on which the steps, or parts of the steps, are executed, the flow is still the same.
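For illustration, one of the placeholder functions above could be written as follows; this is only a sketch, and is_well_formed is an assumed helper:

stdexec::sender auto validate_request(const http_request& req) {
  return stdexec::just(req)
       | stdexec::then([](http_request r) {
           if (!is_well_formed(r))                            // assumed validation helper
             throw std::invalid_argument("malformed request"); // surfaces on the error channel
           return r;
         });
}

An exception thrown inside then becomes a set_error completion carrying a std::exception_ptr, which the let_error(error_to_response) step above turns into a 404 response.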
1.7.2. Moving between execution resources with execution::starts_on and execution::continues_on
Example context:
-
reading data from the socket before processing the request
-
reading of the data is done on the I/O context
-
no processing of the data needs to be done on the I/O context
Goals:
-
show how one can change the execution resource
-
exemplify the use of the starts_on and continues_on algorithms
namespace stdexec = std::execution;

size_t legacy_read_from_socket(int sock, char* buffer, size_t buffer_len);
void process_read_data(const char* read_data, size_t read_len);
//...

// A sender that just calls the legacy read function
auto snd_read = stdexec::just(sock, buf, buf_len)
              | stdexec::then(legacy_read_from_socket);

// The entire flow
auto snd =
    // start by reading data on the I/O thread
    stdexec::starts_on(io_sched, std::move(snd_read))
    // do the processing on the worker threads pool
    | stdexec::continues_on(work_sched)
    // process the incoming data (on worker threads)
    | stdexec::then([buf](int read_len) { process_read_data(buf, read_len); })
    // done
    ;

// execute the whole flow asynchronously
stdexec::start_detached(std::move(snd));
The example assumes that we need to wrap some legacy code for reading from sockets, and handle the switching of execution resources. (This style of reading from a socket may not be the most efficient one, but it works for our purposes.) For performance reasons, the reading from the socket needs to be done on the I/O thread, and all the processing needs to happen on a work-specific execution resource (i.e., a thread pool).
Calling starts_on will ensure that the given sender will be started on the given scheduler. In our example, snd_read is going to be started on the I/O scheduler. This sender will just call the legacy code.
The completion-signal will be issued in the I/O execution resource, so we have to move it to the work thread pool. This is achieved with the help of the continues_on algorithm. The rest of the processing (in our case, the last call to then) will happen in the work thread pool.
The reader should notice the difference between starts_on and continues_on. The starts_on algorithm will ensure that the given sender starts in the specified context, and doesn’t care where the completion-signal for that sender is sent. The continues_on algorithm does not care where the given sender is going to be started, but ensures that its completion-signal is transferred to the given context.
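A minimal sketch of the distinction; sched_A, sched_B, work, and work2 are assumed schedulers and senders:

// starts_on: 'work' is started on sched_A; where it completes is up to 'work' itself.
stdexec::sender auto a = stdexec::starts_on(sched_A, work);

// continues_on: where 'work2' starts is unchanged; its completion is transferred to sched_B.
stdexec::sender auto b = std::move(work2) | stdexec::continues_on(sched_B);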
1.8. Design changes from P0443
-
The executor concept has been removed and all of its proposed functionality is now based on schedulers and senders, as per SG1 direction.
Properties are not included in this paper. We see them as a possible future extension, if the committee gets more comfortable with them.
-
Senders now advertise what scheduler, if any, their evaluation will complete on.
-
The places of execution of user code in P0443 weren’t precisely defined, whereas they are in this paper. See § 4.5 Senders can propagate completion schedulers.
-
P0443 did not propose a suite of sender algorithms necessary for writing sender code; this paper does. See § 4.19 User-facing sender factories, § 4.20 User-facing sender adaptors, and § 4.21 User-facing sender consumers.
-
P0443 did not specify the semantics of variously qualified connect overloads; this paper does. See § 4.7 Senders can be either multi-shot or single-shot.
-
This paper extends the sender traits/typed sender design to support typed senders whose value/error types depend on type information provided late via the receiver.
-
Support for untyped senders is dropped; the typed_sender concept is renamed sender; sender_traits is replaced with completion_signatures_of_t.
Specific type erasure facilities are omitted, as per LEWG direction. Type erasure facilities can be built on top of this proposal, as discussed in § 5.9 Customization points.
-
A specific thread pool implementation is omitted, as per LEWG direction.
-
Some additional utilities are added:
-
run_loop: An execution resource that provides a multi-producer, single-consumer, first-in-first-out work queue.
-
completion_signatures and transform_completion_signatures: Utilities for describing the ways in which a sender can complete in a declarative syntax.
1.9. Prior art
This proposal builds upon and learns from years of prior art with asynchronous and parallel programming frameworks in C++. In this section, we discuss async abstractions that have previously been suggested as a possible basis for asynchronous algorithms and why they fall short.
1.9.1. Futures
A future is a handle to work that has already been scheduled for execution. It is one end of a communication channel; the other end is a promise, used to receive the result from the concurrent operation and to communicate it to the future.
Futures, as traditionally realized, require the dynamic allocation and management of a shared state, synchronization, and typically type-erasure of work and continuation. Many of these costs are inherent in the nature of "future" as a handle to work that is already scheduled for execution. These expenses rule out the future abstraction for many uses and make it a poor choice for a basis of a generic mechanism.
1.9.2. Coroutines
C++20 coroutines are frequently suggested as a basis for asynchronous algorithms. It’s fair to ask why, if we added coroutines to C++, are we suggesting the addition of a library-based abstraction for asynchrony. Certainly, coroutines come with huge syntactic and semantic advantages over the alternatives.
Although coroutines are lighter weight than futures, coroutines suffer many of the same problems. Since they typically start suspended, they can avoid synchronizing the chaining of dependent work. However in many cases, coroutine frames require an unavoidable dynamic allocation and indirect function calls. This is done to hide the layout of the coroutine frame from the C++ type system, which in turn makes possible the separate compilation of coroutines and certain compiler optimizations, such as optimization of the coroutine frame size.
Those advantages come at a cost, though. Because of the dynamic allocation of coroutine frames, coroutines in embedded or heterogeneous environments, which often lack support for dynamic allocation, require great attention to detail. And the allocations and indirections tend to complicate the job of the inliner, often resulting in sub-optimal codegen.
The coroutine language feature mitigates these shortcomings somewhat with the HALO optimization (Halo: coroutine Heap Allocation eLision Optimization: the joint response), which leverages existing compiler optimizations such as allocation elision and devirtualization to inline the coroutine, completely eliminating the runtime overhead. However, HALO requires a sophisticated compiler, and a fair number of stars need to align for the optimization to kick in. In our experience, more often than not in real-world code today’s compilers are not able to inline the coroutine, resulting in allocations and indirections in the generated code.
In a suite of generic async algorithms that are expected to be callable from hot code paths, the extra allocations and indirections are a deal-breaker. It is for these reasons that we consider coroutines a poor choice for a basis of all standard async.
1.9.3. Callbacks
Callbacks are the oldest, simplest, most powerful, and most efficient mechanism for creating chains of work, but suffer problems of their own. Callbacks must propagate either errors or values. This simple requirement yields many different interface possibilities. The lack of a standard callback shape obstructs generic design.
Additionally, few of these possibilities accommodate cancellation signals when the user requests upstream work to stop and clean up.
1.10. Field experience
1.10.1. libunifex
This proposal draws heavily from our field experience with libunifex. Libunifex implements all of the concepts and customization points defined in this paper (with slight variations -- the design of P2300 has evolved due to LEWG feedback), many of this paper’s algorithms (some under different names), and much more besides.
Libunifex has several concrete schedulers in addition to the run_loop suggested here (where it is called manual_event_loop). It has schedulers that dispatch efficiently to epoll and io_uring on Linux and the Windows Thread Pool on Windows.
In addition to the proposed interfaces and the additional schedulers, it has several important extensions to the facilities described in this paper, which demonstrate directions in which these abstractions may be evolved over time, including:
-
Timed schedulers, which permit scheduling work on an execution resource at a particular time or after a particular duration has elapsed. In addition, it provides time-based algorithms.
-
File I/O schedulers, which permit filesystem I/O to be scheduled.
-
Two complementary abstractions for streams (asynchronous ranges), and a set of stream-based algorithms.
Libunifex has seen heavy production use at Meta. An employee summarizes it as follows:
As of June, 2023, Unifex is still used in production at Meta. It’s used to express the asynchrony in rsys, and is therefore serving video calling to billions of people every month on Meta’s social networking apps on iOS, Android, Windows, and macOS. It’s also serving the Virtual Desktop experience on Oculus Quest devices, and some internal uses that run on Linux.
One team at Meta has migrated from folly::Future to unifex::task and seen significant developer efficiency improvements. Coroutines are easier to understand than chained futures so the team was able to meet requirements for certain constrained environments that would have been too complicated to maintain with futures.
In all the cases mentioned above, developers mix-and-match between the sender algorithms in Unifex and Unifex’s coroutine type, unifex::task. We also rely on unifex::task’s scheduler affinity to minimize surprise when programming with coroutines.
1.10.2. stdexec
stdexec is the reference implementation of this proposal. It is a complete implementation, written from the specification in this paper, and is current with R8.
The original purpose of stdexec was to help find specification bugs and to harden the wording of the proposal, but it has since become one of NVIDIA’s core C++ libraries for high-performance computing. In addition to the facilities proposed in this paper, stdexec has schedulers for CUDA, Intel TBB, and MacOS. Like libunifex, its scope has also expanded to include a streaming abstraction and stream algorithms, and time-based schedulers and algorithms.
The stdexec project has seen lots of community interest and contributions. At the time of writing (March, 2024), the GitHub repository has 1.2k stars, 130 forks, and 50 contributors.
stdexec is fit for broad use and for ultimate contribution to libc++.
1.10.3. Other implementations
The authors are aware of a number of other implementations of sender/receiver from this paper. These are presented here in perceived order of maturity and field experience.
-
HPX - The C++ Standard Library for Parallelism and Concurrency
HPX is a general purpose C++ runtime system for parallel and distributed applications that has been under active development since 2007. HPX exposes a uniform, standards-oriented API, and keeps abreast of the latest standards and proposals. It is used in a wide variety of high-performance applications.
The sender/receiver implementation in HPX has been under active development since May 2020. It is used to erase the overhead of futures and to make it possible to write efficient generic asynchronous algorithms that are agnostic to their execution resource. In HPX, algorithms can migrate execution between execution resources, even to GPUs and back, using a uniform standard interface with sender/receiver.
Far and away, the HPX team has the greatest usage experience outside Facebook. Mikael Simberg summarizes the experience as follows:
Summarizing, for us the major benefits of sender/receiver compared to the old model are:
-
Proper hooks for transitioning between execution resources.
-
The adaptors. Things like let_value are really nice additions.
-
Separation of the error channel from the value channel (also cancellation, but we don’t have much use for it at the moment). Even from a teaching perspective, having to explain that the future f2 in the continuation f1.then([](future<T> f2) {...}) will always be ready is enough of a reason to separate the channels. All the other obvious reasons apply as well of course.
-
For futures we have a thing called hpx::dataflow which is an optimized version of when_all(...).then(...) which avoids intermediate allocations. With the sender/receiver when_all(...) | then(...) we get that "for free".
-
kuhllib by Dietmar Kuehl
This is a prototype Standard Template Library with an implementation of sender/receiver that has been under development since May, 2021. It is significant mostly for its support for sender/receiver-based networking interfaces.
Here, Dietmar Kuehl speaks about the perceived complexity of sender/receiver:
... and, also similar to STL: as I had tried to do things in that space before I recognize sender/receivers as being maybe complicated in one way but a huge simplification in another one: like with STL I think those who use it will benefit - if not from the algorithm from the clarity of abstraction: the separation of concerns of STL (the algorithm being detached from the details of the sequence representation) is a major leap. Here it is rather similar: the separation of the asynchronous algorithm from the details of execution. Sure, there is some glue to tie things back together but each of them is simpler than the combined result.
Elsewhere, he said:
... to me it feels like sender/receivers are like iterators when STL emerged: they are different from what everybody did in that space. However, everything people are already doing in that space isn’t right.
Kuehl also has experience teaching sender/receiver at Bloomberg. About that experience he says:
When I asked [my students] specifically about how complex they consider the sender/receiver stuff the feedback was quite unanimous that the sender/receiver parts aren’t trivial but not what contributes to the complexity.
-
C++ Bare Metal Senders and Receivers from Intel
This is a prototype implementation of sender/receiver by Intel that has been under development since August, 2023. It is significant mostly for its support for bare metal (no operating system) and embedded systems, a domain for which senders are particularly well-suited due to their very low dynamic memory requirements.
1.10.4. Inspirations
This proposal also draws heavily from our experience with Thrust and Agency. It is also inspired by the needs of countless other C++ frameworks for asynchrony, parallelism, and concurrency.
2. Revision history
2.1. R10
The changes since R9 are as follows:
Fixes:
-
Fixed connect and get_completion_signatures to use transform_sender, as "Sender Algorithm Customization" proposed (but failed) to do. See "Fixing Lazy Sender Algorithm Customization" for details.
-
ensure_started, start_detached, execute, and execute_may_block_caller are removed from the proposal. They are to be replaced with safer and more structured APIs by "async_scope — Creating scopes for non-sequential concurrency". See "remove ensure_started and start_detached from P2300" for details.
-
Fixed a logic error in the specification of split that could have caused a receiver to be completed twice in some cases.
-
Fixed stopped_as_optional to handle the case where the child sender completes with more than one value, in which case the stopped_as_optional sender completes with an optional of a tuple of the values.
-
The queryable, stoppable_source, and stoppable_callback_for concepts have been made exposition-only.
Enhancements:
-
The operation_state concept no longer requires that operation states model queryable.
-
The get_delegatee_scheduler query has been renamed to get_delegation_scheduler.
-
The read environment has been renamed to read_env.
-
The nullary forms of the queries which returned instances of the read_env sender have been removed. That is, get_scheduler() is no longer another way to spell read_env(get_scheduler). Same for the other queries.
-
A feature test macro has been added: __cpp_lib_senders.
-
transfer has been renamed to continues_on. on has been renamed to starts_on. A new on algorithm has been added that is a combination of starts_on and continues_on for performing work on a different context and automatically transitioning back to the starting one. See "Reconsidering the std::execution::on algorithm" for details.
-
An exposition-only simple-allocator concept is added to the Library introduction ([library]), and the specification of the get_allocator query is expressed in terms of it.
-
An exposition-only write-env sender adaptor has been added for use in the implementation of the new on algorithm.
2.2. R9
The changes since R8 are as follows:
Fixes:
-
The tag_invoke mechanism has been replaced with member functions for customizations as per "Member customization points for Senders and Receivers".
-
Per guidance from LWG and LEWG, receiver_adaptor has been removed.
-
The receiver concept is tweaked to require that receiver types are not final. Without receiver_adaptor and tag_invoke, receiver adaptors are easily written using implementation inheritance.
-
std::tag_t is made exposition-only.
-
The types in_place_stop_token, in_place_stop_source, and in_place_stop_callback are renamed to inplace_stop_token, inplace_stop_source, and inplace_stop_callback, respectively.
Enhancements:
-
The specification of the sync_wait algorithm has been updated for clarity.
The specification of all the stop token, source, and callback types have been re-expressed in terms of shared concepts.
-
Declarations are shown in their proper namespaces.
-
Editorial changes have been made to clarify what text is added, what is removed, and what is an editorial note.
-
The section numbers of the proposed wording now match the section numbers in the working draft of the C++ standard.
2.3. R8
The changes since R7 are as follows:
Fixes:
-
get_env(obj) is required to be nothrow.
-
get_env and the associated environment utilities are moved back into std::execution from std::.
-
make_completion_signatures is renamed transform_completion_signatures_of and is expressed in terms of the new transform_completion_signatures, which takes an input set of completion signatures instead of a sender and an environment.
-
Add a requirement on queryable objects that if tag_invoke(query, env, args...) is well-formed, then query(env, args...) is expression-equivalent to it. This is necessary to properly specify how to join two environments in the presence of queries that have defaults.
-
The sender_in<Sndr, Env> concept requires that Env satisfies queryable.
-
Senders of more than one value are now co_await-able in coroutines, the result of which is a std::tuple of the values (which is suitable as the initializer of a structured binding).
Enhancements:
-
The exposition-only class template basic-sender is greatly enhanced, and the sender algorithms are respecified in terms of it.
-
The enable_sender and enable_receiver traits now have default implementations that look for nested sender_concept and receiver_concept types, respectively.
2.4. R7
The changes since R6 are as follows:
Fixes:
-
Make it valid to pass non-variadic templates to the exposition-only alias template gather-signatures, fixing the definitions of value_types_of_t, error_types_of_t, and the exposition-only alias template sync-wait-result-type.
-
Removed the query forwarding from receiver_adaptor that was inadvertently left over from a previous edit.
-
When adapting a sender to an awaitable with as_awaitable, the sender’s value result datum is decayed before being stored in the exposition-only variant.
-
Correctly specify the completion signatures of the schedule_from algorithm.
-
The sender_of concept no longer distinguishes between a sender of a type T and a sender of a type T&&.
-
The just and just_error sender factories now reject C-style arrays instead of silently decaying them to pointers.
Enhancements:
-
The sender and receiver concepts get explicit opt-in traits called enable_sender and enable_receiver, respectively. The traits have default implementations that look for nested is_sender and is_receiver types, respectively.
-
get_attrs is removed and get_env is used in its place.
-
The exposition-only type empty-env is made normative and is renamed empty_env.
-
get_env gets a fall-back implementation that simply returns empty_env{} if a tag_invoke overload is not found.
-
get_env is required to be insensitive to the cvref-qualification of its argument.
-
get_env, empty_env, and env_of_t are moved into the std:: namespace.
-
Add a new subclause describing the async programming model of senders in abstract terms. See § 34.3 Asynchronous operations [async.ops].
2.5. R6
The changes since R5 are as follows:
Fixes:
-
Fix typo in the specification of in_place_stop_source about the relative lifetimes of the tokens and the source that produced them.
-
get_completion_signatures tests for awaitability with a promise type similar to the one used by connect for the sake of consistency.
-
A coroutine promise type is an environment provider (that is, it implements get_env()) rather than being directly queryable. The previous draft was inconsistent about that.
Enhancements:
-
Sender queries are moved into a separate queryable "attributes" object that is accessed by passing the sender to get_attrs() (see below). The sender concept is reexpressed to require get_attrs() and separated from a new sender_in<Snd, Env> concept for checking whether a type is a sender within a particular execution environment.
-
The placeholder types no_env and dependent_completion_signatures<> are no longer needed and are dropped.
-
ensure_started and split are changed to persist the result of calling get_attrs() on the input sender.
-
Reorder constraints of the scheduler and receiver concepts to avoid constraint recursion when used in tandem with poorly-constrained, implicitly convertible types.
-
Re-express the sender_of concept to be more ergonomic and general.
-
Make the specification of the alias templates value_types_of_t and error_types_of_t, and the variable template sends_done more concise by expressing them in terms of a new exposition-only alias template gather-signatures.
2.5.1. Environments and attributes
In earlier revisions, receivers, senders, and schedulers all were directly queryable. In R4, receiver queries were moved into a separate "environment" object, obtainable from a receiver with a get_env accessor. In R6, the sender queries are given similar treatment, relocating to an "attributes" object obtainable from a sender with a get_attrs accessor. This was done to solve a number of design problems with the split and ensure_started algorithms; e.g., see NVIDIA/stdexec#466.
Schedulers, however, remain directly queryable. As lightweight handles that are required to be movable and copyable, there is little reason to want to dispose of a scheduler and yet persist the scheduler’s queries.
This revision also makes operation states directly queryable, even though there isn’t yet a use for such queries. Some early prototypes of cooperative bulk parallel sender algorithms done at NVIDIA suggest the utility of forwardable operation state queries. The authors chose to make opstates directly queryable since the opstate object is itself required to be kept alive for the duration of the asynchronous operation.
2.6. R5
The changes since R4 are as follows:
Fixes:
-
start_detached requires its argument to be a void sender (sends no values to set_value).
Enhancements:
-
Receiver concepts refactored to no longer require an error channel for exception_ptr or a stopped channel.
-
sender_of concept and connect customization point additionally require that the receiver is capable of receiving all of the sender’s possible completions.
-
get_completion_signatures is now required to return an instance of either completion_signatures or dependent_completion_signatures.
-
make_completion_signatures made more general.
-
receiver_adaptor handles get_env as it does the set_* members; that is, receiver_adaptor will look for a member named get_env() in the derived class, and if found dispatch the get_env_t tag invoke customization to it.
-
just, just_error, just_stopped, and into_variant have been respecified as customization point objects instead of functions, following LEWG guidance.
2.7. R4
The changes since R3 are as follows:
Fixes:
-
Fix specification of get_completion_scheduler on the transfer, schedule_from and transfer_when_all algorithms; the completion scheduler cannot be guaranteed for set_error.
-
The value of sends_stopped for the default sender traits of types that are generally awaitable was changed from false to true to acknowledge the fact that some coroutine types are generally awaitable and may implement the unhandled_stopped() protocol in their promise types.
-
Fix the incorrect use of inline namespaces in the <execution> header.
-
Shorten the stable names for the sections.
-
sync_wait now handles std::error_code specially by throwing a std::system_error on failure.
-
Fix how ADL isolation from class template arguments is specified so it doesn’t constrain implementations.
-
Properly expose the tag types in the <execution> header synopsis.
Enhancements:
-
Support for "dependently-typed" senders, where the completion signatures -- and thus the sender metadata -- depend on the type of the receiver connected to it. See the section dependently-typed senders below for more information.
-
Add a read(query) sender factory for issuing a query against a receiver and sending the result through the value channel. (This is a useful instance of a dependently-typed sender.)
-
Add completion_signatures utility for declaratively defining a typed sender’s metadata.
-
Add make_completion_signatures utility for specifying a sender’s completion signatures by adapting those of another sender.
-
Drop support for untyped senders and rename typed_sender to sender.
-
set_done is renamed to set_stopped. All occurrences of "done" in identifiers replaced with "stopped".
-
Add customization points for controlling the forwarding of scheduler, sender, receiver, and environment queries through layers of adaptors; specify the behavior of the standard adaptors in terms of the new customization points.
-
Add get_delegatee_scheduler query to forward a scheduler that can be used by algorithms or by the scheduler to delegate work and forward progress.
-
Add schedule_result_t alias template.
-
More precisely specify the sender algorithms, including precisely what their completion signatures are.
-
stopped_as_error respecified as a customization point object.
-
tag_invoke respecified to improve diagnostics.
2.7.1. Dependently-typed senders
Background:
In the sender/receiver model, as with coroutines, contextual information about
the current execution is most naturally propagated from the consumer to the
producer. In coroutines, that means information like stop tokens, allocators and
schedulers are propagated from the calling coroutine to the callee. In
sender/receiver, that means that contextual information is associated with
the receiver and is queried by the sender and/or operation state after the
sender and the receiver are connect-ed.
Problem:
The implication of the above is that the sender alone does not have all the
information about the async computation it will ultimately initiate; some of
that information is provided late via the receiver. However, the sender_traits
mechanism, by which an algorithm can introspect the value and error types the
sender will propagate, only accepts a sender parameter. It does not take into
consideration the type information that will come in late via the receiver. The
effect of this is that some senders cannot be typed senders when they
otherwise could be.
Example:
To get concrete, consider the case of the "get_scheduler()" sender: when
connect-ed and start-ed, it queries the receiver for its associated
scheduler and passes it back to the receiver through the value channel. That
sender’s "value type" is the type of the receiver’s scheduler. What then
should sender_traits report for the get_scheduler() sender’s value type? It can’t answer because it doesn’t know.
This causes knock-on problems since some important algorithms require a typed
sender, such as sync_wait. To illustrate the problem, consider the following
code:
namespace ex = std::execution;

ex::sender auto task =
  ex::let_value(
    ex::get_scheduler(),            // Fetches scheduler from receiver.
    [](auto current_sched) {
      // Launch some nested work on the current scheduler:
      return ex::starts_on(current_sched, nested work ...);
    });

std::this_thread::sync_wait(std::move(task));
The code above is attempting to schedule some work onto the current scheduler’s
execution resource. But let_value only returns a typed sender when
the input sender is typed. As we explained above, the get_scheduler() sender is not
typed, so task is likewise not typed. Since task isn’t typed, it cannot be
passed to sync_wait, which is expecting a typed sender. The above code would
fail to compile.
Solution:
The solution is conceptually quite simple: extend the sender_traits mechanism
to optionally accept a receiver in addition to the sender. The algorithms can
use sender_traits to inspect the
async operation’s completion-signals. The typed_sender concept would also need
to take an optional receiver parameter. This is the simplest change, and it
would solve the immediate problem.
Design:
Using the receiver type to compute the sender traits turns out to have pitfalls
in practice. Many receivers make use of that type information in their
implementation. It is very easy to create cycles in the type system, leading to
inscrutable errors. The design pursued in R4 is to give receivers an associated environment object -- a bag of key/value pairs -- and to move the contextual
information (schedulers, etc) out of the receiver and into the environment. The
sender_traits template and the sender concept, rather than taking a
receiver, take an environment. This is a much more robust design.
A further refinement of this design would be to separate the receiver and the
environment entirely, passing them as separate arguments along with the sender to
connect. This paper does not propose that change.
Impact:
This change, apart from increasing the expressive power of the sender/receiver abstraction, has the following impact:
-
Typed senders become moderately more challenging to write. (The new completion_signatures and transform_completion_signatures utilities are added to ease this extra burden.)
-
Sender adaptor algorithms that previously constrained their sender arguments to satisfy the typed_sender concept can no longer do so as the receiver is not available yet. This can result in type-checking that is done later, when connect is ultimately called on the resulting sender adaptor.
-
Operation states that own receivers that add to or change the environment are typically larger by one pointer. It comes with the benefit of far fewer indirections to evaluate queries.
"Has it been implemented?"
Yes, the reference implementation, which can be found at https://github.com/NVIDIA/stdexec, has implemented this design as well as some dependently-typed senders to confirm that it works.
Implementation experience
Although this change has not yet been made in libunifex, the most widely adopted sender/receiver implementation, a similar design can be found in Folly’s coroutine support library. In Folly.Coro, it is possible to await a special awaitable to obtain the current coroutine’s associated scheduler (called an executor in Folly).
For instance, the following Folly code grabs the current executor, schedules a task for execution on that executor, and starts the resulting (scheduled) task by enqueueing it for execution.
// From Facebook's Folly open source library:
template <class T>
folly::coro::Task<void> CancellableAsyncScope::co_schedule(folly::coro::Task<T>&& task) {
  this->add(std::move(task).scheduleOn(co_await co_current_executor));
  co_return;
}
Facebook relies heavily on this pattern in its coroutine code. But as described
above, this pattern doesn’t work with R3 of std::execution because of the lack
of dependently-typed schedulers. The change to sender_traits in R4 rectifies that.
Why now?
The authors are loathe to make any changes to the design, however small, at this
stage of the C++23 release cycle. But we feel that, for a relatively minor
design change -- adding an extra template parameter to sender_traits and
typed_sender -- the returns are large enough to justify the change. And there
is no better time to make this change than as early as possible.
One might wonder why this missing feature has not been added to sender/receiver before now. The designers of sender/receiver have long been aware of the need. What was missing was a clean, robust, and simple design for the change, which we now have.
Drive-by:
We took the opportunity to make an additional drive-by change: Rather than
providing the sender traits via a class template for users to specialize, we
changed it into a sender query: get_completion_signatures. That function’s return type is used as the sender’s traits.
The authors feel this leads to a more uniform design and gives sender authors a
straightforward way to make the value/error types dependent on the cv- and
ref-qualification of the sender if need be.
Details:
Below are the salient parts of the new support for dependently-typed senders in R4:
-
Receiver queries have been moved from the receiver into a separate environment object.
-
Receivers have an associated environment. The new get_env CPO retrieves a receiver’s environment. If a receiver doesn’t implement get_env, it returns an unspecified "empty" environment -- an empty struct.
-
sender_traits now takes an optional Env parameter that is used to determine the error/value types.
-
The primary sender_traits template is replaced with a completion_signatures_of_t alias implemented in terms of a new get_completion_signatures CPO that dispatches with tag_invoke. get_completion_signatures takes a sender and an optional environment. A sender can customize this to specify its value/error types.
-
Support for untyped senders is dropped. The typed_sender concept has been renamed to sender and now takes an optional environment.
-
The environment argument to the sender concept and the get_completion_signatures CPO defaults to no_env. All environment queries fail (are ill-formed) when passed an instance of no_env.
-
A type S is required to satisfy sender<S> to be considered a sender. If it doesn’t know what types it will complete with independent of an environment, it returns an instance of the placeholder traits dependent_completion_signatures.
-
If a sender satisfies both sender<S> and sender<S, Env>, then the completion signatures for the two cannot be different in any way. It is possible for an implementation to enforce this statically, but not required.
-
All of the algorithms and examples have been updated to work with dependently-typed senders.
2.8. R3
The changes since R2 are as follows:
Fixes:
-
Fix specification of the starts_on algorithm to clarify lifetimes of intermediate operation states and properly scope the get_scheduler query.
-
Fix a memory safety bug in the implementation of connect-awaitable.
-
Fix recursive definition of the scheduler concept.
Enhancements:
-
Add run_loop execution resource.
-
Add receiver_adaptor utility to simplify writing receivers.
-
Require a scheduler’s sender to model sender_of and provide a completion scheduler.
-
Specify the cancellation scope of the when_all algorithm.
-
Make as_awaitable a customization point.
-
Change connect’s handling of awaitables to consider those types that are awaitable owing to customization of as_awaitable.
-
Add value_types_of_t and error_types_of_t alias templates; rename stop_token_type_t to stop_token_of_t.
-
Add a design rationale for the removal of the possibly eager algorithms.
-
Expand the section on field experience.
2.9. R2
The changes since R1 are as follows:
-
Remove the eagerly executing sender algorithms.
-
Extend the execution::connect customization point and the sender_traits<> template to recognize awaitables as typed_senders.
-
Add utilities as_awaitable() and with_awaitable_senders<> so a coroutine type can trivially make senders awaitable with a coroutine.
-
Add a section describing the design of the sender/awaitable interactions.
-
Add a section describing the design of the cancellation support in sender/receiver.
-
Add a section showing examples of simple sender adaptor algorithms.
-
Add a section showing examples of simple schedulers.
-
Add a few more examples: a sudoku solver, a parallel recursive file copy, and an echo server.
-
Refined the forward progress guarantees on the bulk algorithm.
-
Add a section describing how to use a range of senders to represent async sequences.
-
Add a section showing how to use senders to represent partial success.
-
Add sender factories execution::just_error and execution::just_stopped.
-
Add sender adaptors execution::stopped_as_optional and execution::stopped_as_error.
-
Document more production uses of sender/receiver at scale.
-
Various fixes of typos and bugs.
2.10. R1
The changes since R0 are as follows:
-
Added a new concept, sender_of.
-
Added a new scheduler query, this_thread::execute_may_block_caller.
-
Added a new scheduler query, get_forward_progress_guarantee.
-
Removed the unschedule adaptor.
-
Various fixes of typos and bugs.
2.11. R0
Initial revision.
3. Design - introduction
The following three sections describe the entirety of the proposed design.
-
§ 3 Design - introduction describes the conventions used through the rest of the design sections, as well as an example illustrating how we envision code will be written using this proposal.
-
§ 4 Design - user side describes all the functionality from the perspective we intend for users: it describes the various concepts they will interact with, and what their programming model is.
-
§ 5 Design - implementer side describes the machinery that allows for that programming model to function, and the information contained there is necessary for people implementing senders and sender algorithms (including the standard library ones) - but is not necessary to use senders productively.
3.1. Conventions
The following conventions are used throughout the design section:
-
The namespace proposed in this paper is the same as in A Unified Executors Proposal for C++: std::execution; however, for brevity, the std:: part of this name is omitted. When you see execution::foo, treat that as std::execution::foo.
-
Universal references and explicit calls to std::move/std::forward are omitted in code samples and signatures for simplicity; assume universal references and perfect forwarding unless stated otherwise.
-
None of the names proposed here are names that we are particularly attached to; consider the names to be reasonable placeholders that can freely be changed, should the committee want to do so.
3.2. Queries and algorithms
A query is a callable that takes some set of objects (usually one) as parameters and returns facts about those objects without modifying them. Queries are usually customization point objects, but in some cases may be functions.
An algorithm is a callable that takes some set of objects as parameters and causes those objects to do something. Algorithms are usually customization point objects, but in some cases may be functions.
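To make the distinction concrete, here is a minimal sketch (not part of the proposed wording) contrasting a query with an algorithm; snd and continuation_fn are hypothetical placeholders:

// A query: reads a fact about snd's environment without modifying anything.
auto sched = execution::get_completion_scheduler<execution::set_value_t>(
                 execution::get_env(snd));

// An algorithm: returns a new sender describing additional work to perform
// when snd completes.
execution::sender auto next = execution::then(snd, continuation_fn);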
4. Design - user side
4.1. Execution resources describe the place of execution
An execution resource is a resource that represents the place where execution will happen. This could be a concrete resource - like a specific thread pool object, or a GPU - or a more abstract one, like the current thread of execution. Execution resources don’t need to have a representation in code; they are simply a term describing certain properties of execution of a function.
4.2. Schedulers represent execution resources
A scheduler is a lightweight handle that represents a strategy for
scheduling work onto an execution resource. Since execution resources don’t
necessarily manifest in C++ code, it’s not possible to program directly against
their API. A scheduler is a solution to that problem: the scheduler concept is
defined by a single sender algorithm, schedule, which returns a sender that
will complete on an execution resource determined by the scheduler. Logic that
you want to run on that context can be placed in the receiver’s
completion-signalling method.
execution::scheduler auto sch = thread_pool.scheduler();
execution::sender auto snd = execution::schedule(sch);
// snd is a sender (see below) describing the creation of a new execution agent
// on the execution resource associated with sch
Note that a particular scheduler type may provide other kinds of scheduling
operations which are supported by its associated execution resource. It is not
limited to scheduling purely using the execution::schedule API.
Future papers will propose additional scheduler concepts that extend scheduler
to add other capabilities. For example:
-
A time_scheduler concept that extends scheduler to support time-based scheduling. Such a concept might provide access to schedule_after(sched, duration), schedule_at(sched, time_point) and now(sched) APIs (see the sketch after this list).
-
Concepts that extend scheduler to support opening, reading and writing files asynchronously.
-
Concepts that extend scheduler to support connecting, sending data and receiving data over the network asynchronously.
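A purely hypothetical sketch of how such time-based APIs might look follows; none of these names or signatures are proposed by this paper, and time_sched and deadline are placeholders:

// Hypothetical: time_sched models a future time_scheduler concept.
execution::sender auto tick  = schedule_after(time_sched, 10ms);  // completes roughly 10ms from now
execution::sender auto alarm = schedule_at(time_sched, deadline); // completes at a given time_point
auto current_time = now(time_sched);                              // the scheduler's notion of "now"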
4.3. Senders describe work
A sender is an object that describes work. Senders are similar to futures in existing asynchrony designs, but unlike futures, the work that is being done to arrive at the values they will send is also directly described by the sender object itself. A sender is said to send some values if a receiver connected (see § 5.3 execution::connect) to that sender will eventually receive said values.
The primary defining sender algorithm is § 5.3 execution::connect; this function, however, is not a user-facing API; it is used to facilitate communication between senders and various sender algorithms, but end user code is not expected to invoke it directly.
The way user code is expected to interact with senders is by using sender algorithms. This paper proposes an initial set of such sender algorithms, which are described in § 4.4 Senders are composable through sender algorithms, § 4.19 User-facing sender factories, § 4.20 User-facing sender adaptors, and § 4.21 User-facing sender consumers. For example, here is how a user can create a new sender on a scheduler, attach a continuation to it, and then wait for execution of the continuation to complete:
execution::scheduler auto sch = thread_pool.scheduler();
execution::sender auto snd = execution::schedule(sch);
execution::sender auto cont = execution::then(snd, []{
    std::fstream file{"result.txt"};
    file << compute_result;
});

this_thread::sync_wait(cont);
// at this point, cont has completed execution
4.4. Senders are composable through sender algorithms
Asynchronous programming often departs from traditional code structure and control flow that we are familiar with. A successful asynchronous framework must provide an intuitive story for composition of asynchronous work: expressing dependencies, passing objects, managing object lifetimes, etc.
The true power and utility of senders is in their composability. With senders, users can describe generic execution pipelines and graphs, and then run them on and across a variety of different schedulers. Senders are composed using sender algorithms:
-
sender factories, algorithms that take no senders and return a sender.
-
sender adaptors, algorithms that take (and potentially execution::connect) senders and return a sender.
-
sender consumers, algorithms that take (and potentially execution::connect) senders and do not return a sender.
4.5. Senders can propagate completion schedulers
One of the goals of executors is to support a diverse set of execution resources, including traditional thread pools, task and fiber frameworks (like HPX and Legion), and GPUs and other accelerators (managed by runtimes such as CUDA or SYCL). On many of these systems, not all execution agents are created equal and not all functions can be run on all execution agents. Having precise control over the execution resource used for any given function call being submitted is important on such systems, and the users of standard execution facilities will expect to be able to express such requirements.
A Unified Executors Proposal for C++ was not always clear about the place of execution of any given piece of code. Precise control was present in the two-way execution API present in earlier executor designs, but it has so far been missing from the senders design. There has been a proposal (Towards C++23 executors: A proposal for an initial set of algorithms) to provide a number of sender algorithms that would enforce certain rules on the places of execution of the work described by a sender, but we have found those sender algorithms to be insufficient for achieving the best performance on all platforms that are of interest to us. The implementation strategies that we are aware of result in one of the following situations:
-
trying to submit work to one execution resource (such as a CPU thread pool) from another execution resource (such as a GPU or a task framework), which assumes that all execution agents are as capable as a std::thread (which they aren’t).
-
forcibly interleaving two adjacent execution graph nodes that are both executing on one execution resource (such as a GPU) with glue code that runs on another execution resource (such as a CPU), which is prohibitively expensive for some execution resources (such as CUDA or SYCL).
-
having to customise most or all sender algorithms to support an execution resource, so that you can avoid problems described in 1. and 2, which we believe is impractical and brittle based on months of field experience attempting this in Agency.
None of these implementation strategies are acceptable for many classes of parallel runtimes, such as task frameworks (like HPX) or accelerator runtimes (like CUDA or SYCL).
Therefore, in addition to the starts_on sender algorithm from Towards C++23 executors: A proposal for an initial set of algorithms, we are
proposing a way for senders to advertise what scheduler (and by extension what
execution resource) they will complete on. Any given sender may have completion schedulers for some or all of the signals (value, error, or
stopped) it completes with (for more detail on the completion-signals, see § 5.1 Receivers serve as glue between senders). When further work is attached to that sender by invoking
sender algorithms, that work will also complete on an appropriate completion
scheduler.
4.5.1. execution::get_completion_scheduler
execution::get_completion_scheduler is a query that retrieves the completion scheduler
for a specific completion-signal from a sender’s environment. For a sender that
lacks a completion scheduler query for a given signal, calling
get_completion_scheduler is ill-formed. If a sender advertises a completion
scheduler for a signal in this way, that sender must ensure that it sends that signal on an execution agent belonging to an execution
resource represented by a scheduler returned from this function. See § 4.5 Senders can propagate completion schedulers for more details.
execution::scheduler auto cpu_sched = new_thread_scheduler{};
execution::scheduler auto gpu_sched = cuda::scheduler();

execution::sender auto snd0 = execution::schedule(cpu_sched);
execution::scheduler auto completion_sch0 =
  execution::get_completion_scheduler<execution::set_value_t>(get_env(snd0));
// completion_sch0 is equivalent to cpu_sched

execution::sender auto snd1 = execution::then(snd0, []{
    std::cout << "I am running on cpu_sched!\n";
});
execution::scheduler auto completion_sch1 =
  execution::get_completion_scheduler<execution::set_value_t>(get_env(snd1));
// completion_sch1 is equivalent to cpu_sched

execution::sender auto snd2 = execution::continues_on(snd1, gpu_sched);
execution::sender auto snd3 = execution::then(snd2, []{
    std::cout << "I am running on gpu_sched!\n";
});
execution::scheduler auto completion_sch3 =
  execution::get_completion_scheduler<execution::set_value_t>(get_env(snd3));
// completion_sch3 is equivalent to gpu_sched
4.6. Execution resource transitions are explicit
A Unified Executors Proposal for C++ does not contain any mechanisms for performing an execution
resource transition. The only sender algorithm that can create a sender that
will move execution to a specific execution resource is execution::schedule,
which does not take an input sender. That means that there’s no way to construct
sender chains that traverse different execution resources. This is necessary to
fulfill the promise of senders being able to replace two-way executors, which
had this capability.
We propose that, for senders advertising their completion scheduler, all execution resource transitions must be explicit; running user code anywhere but where they defined it to run must be considered a bug.
The continues_on sender adaptor performs a transition from one
execution resource to another:
execution::scheduler auto sch1 = ...;
execution::scheduler auto sch2 = ...;

execution::sender auto snd1 = execution::schedule(sch1);
execution::sender auto then1 = execution::then(snd1, []{
    std::cout << "I am running on sch1!\n";
});

execution::sender auto snd2 = execution::continues_on(then1, sch2);
execution::sender auto then2 = execution::then(snd2, []{
    std::cout << "I am running on sch2!\n";
});

this_thread::sync_wait(then2);
4.7. Senders can be either multi-shot or single-shot
Some senders may only support launching their operation a single time, while others may be repeatable and support being launched multiple times. Executing the operation may consume resources owned by the sender.
For example, a sender may contain a std::unique_ptr that it will be transferring ownership of to the
operation-state returned by a call to execution::connect so that the operation has access to
this resource. In such a sender, calling execution::connect consumes the sender such that after
the call the input sender is no longer valid. Such a sender will also typically be move-only so that
it can maintain unique ownership of that resource.
A single-shot sender can only be connected to a receiver
at most once. Its implementation of execution::connect only has overloads for
an rvalue-qualified sender. Callers must pass the sender as an rvalue to the
call to execution::connect, indicating that the call consumes the sender.
A multi-shot sender can be connected to multiple
receivers and can be launched multiple times. Multi-shot senders customise
execution::connect to accept an lvalue reference to the sender. Callers can
indicate that they want the sender to remain valid after the call to
execution::connect by passing an lvalue reference to the sender to call these
overloads. Multi-shot senders should also define overloads of
execution::connect that accept rvalue-qualified senders to allow the sender to
be also used in places where only a single-shot sender is required.
If the user of a sender does not require the sender to remain valid after
connecting it to a receiver then it can pass an rvalue-reference to the sender
to the call to execution::connect. Such usages should be able to accept either
single-shot or multi-shot senders.
If the caller does wish for the sender to remain valid after the call then it
can pass an lvalue-qualified sender to the call to execution::connect. Such
usages will only accept multi-shot senders.
Algorithms that accept senders will typically either decay-copy an input sender
and store it somewhere for later usage (for example as a data-member of the
returned sender) or will immediately call execution::connect on the input
sender, such as in this_thread::sync_wait.
Some multi-use sender algorithms may require that an input sender be
copy-constructible but will only call execution::connect on an rvalue of each
copy, which still results in effectively executing the operation multiple times.
Other multi-use sender algorithms may require that the sender is
move-constructible but will invoke execution::connect on an lvalue reference
to the sender.
For a sender to be usable in both multi-use scenarios, it will generally be required to be both copy-constructible and lvalue-connectable.
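The following sketch (illustrative only; one_shot, multi_shot, and recv are hypothetical senders and a hypothetical receiver, not names from this proposal) shows the resulting usage patterns:

// A single-shot sender provides only rvalue-qualified connect:
auto op1 = execution::connect(std::move(one_shot), recv);   // OK: consumes the sender
// auto bad = execution::connect(one_shot, recv);           // ill-formed: no lvalue overload

// A multi-shot sender additionally accepts lvalues and so remains valid:
auto op2 = execution::connect(multi_shot, recv);            // sender still usable afterwards
auto op3 = execution::connect(std::move(multi_shot), recv); // final, consuming use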
4.8. Senders are forkable
Any non-trivial program will eventually want to fork a chain of senders into independent streams of work, regardless of whether they are single-shot or multi-shot. For instance, an incoming event to a middleware system may be required to trigger events on more than one downstream system. This requires that we provide well defined mechanisms for making sure that connecting a sender multiple times is possible and correct.
The split sender adaptor facilitates connecting to a sender multiple times,
regardless of whether it is single-shot or multi-shot:
auto some_algorithm(execution::sender auto&& input) {
    execution::sender auto multi_shot = split(input);
    // "multi_shot" is guaranteed to be multi-shot,
    // regardless of whether "input" was multi-shot or not

    return when_all(
        then(multi_shot, [] { std::cout << "First continuation\n"; }),
        then(multi_shot, [] { std::cout << "Second continuation\n"; })
    );
}
4.9. Senders support cancellation
Senders are often used in scenarios where the application may be concurrently executing multiple strategies for achieving some program goal. When one of these strategies succeeds (or fails) it may not make sense to continue pursuing the other strategies as their results are no longer useful.
For example, we may want to try to simultaneously connect to multiple network servers and use whichever server responds first. Once the first server responds we no longer need to continue trying to connect to the other servers.
Ideally, in these scenarios, we would somehow be able to request that those other strategies stop executing promptly so that their resources (e.g. cpu, memory, I/O bandwidth) can be released and used for other work.
While the design of senders has support for cancelling an operation before it
starts by simply destroying the sender or the operation-state returned from
execution::connect before calling execution::start, there also needs to
be a standard, generic mechanism to ask for an already-started operation to
complete early.
The ability to be able to cancel in-flight operations is fundamental to supporting some kinds of generic concurrency algorithms.
For example:
-
a when_all(ops...) algorithm should cancel other operations as soon as one operation fails
-
a first_successful(ops...) algorithm should cancel the other operations as soon as one operation completes successfully
-
a generic timeout(src, duration) algorithm needs to be able to cancel the src operation after the timeout duration has elapsed.
-
a stop_when(src, trigger) algorithm should cancel src if trigger completes first and cancel trigger if src completes first
The mechanism used for communicating cancellation-requests, or stop-requests, needs to have a uniform interface so that generic algorithms that compose sender-based operations, such as the ones listed above, are able to communicate these cancellation requests to senders that they don’t know anything about.
The design is intended to be composable so that cancellation of higher-level operations can propagate those cancellation requests through intermediate layers to lower-level operations that need to actually respond to the cancellation requests.
For example, we can compose the algorithms mentioned above so that child operations are cancelled when any one of the multiple cancellation conditions occurs:
sender auto composed_cancellation_example(auto query) {
  return stop_when(
    timeout(
      when_all(
        first_successful(
          query_server_a(query),
          query_server_b(query)),
        load_file("some_file.jpg")),
      5s),
    cancelButton.on_click());
}
In this example, if we take the operation returned by composed_cancellation_example(),
this operation will receive a stop-request when any of the following happens:
-
The first_successful algorithm will send a stop-request if query_server_a(query) completes successfully
-
The when_all algorithm will send a stop-request if the load_file("some_file.jpg") operation completes with an error or stopped result.
-
The timeout algorithm will send a stop-request if the operation does not complete within 5 seconds.
-
The stop_when algorithm will send a stop-request if the user clicks on the "Cancel" button in the user-interface.
-
The parent operation consuming the composed_cancellation_example() sends a stop-request
Note that within this code there is no explicit mention of cancellation, stop-tokens, callbacks, etc. yet the example fully supports and responds to the various cancellation sources.
The intent of the design is that the common usage of cancellation in sender/receiver-based code is primarily through use of concurrency algorithms that manage the detailed plumbing of cancellation for you. Much like algorithms that compose senders relieve the user from having to write their own receiver types, algorithms that introduce concurrency and provide higher-level cancellation semantics relieve the user from having to deal with low-level details of cancellation.
4.9.1. Cancellation design summary
The design of cancellation described in this paper is built on top of and
extends the std::stop_token-based cancellation facilities added in C++20,
first proposed in Composable cancellation for sender-based async operations.
At a high-level, the facilities proposed by this paper for supporting cancellation include:
-
Add a std::stoppable_token concept that generalises the interface of the std::stop_token type to allow other stop token types with different implementation strategies.
-
Add std::unstoppable_token concept for detecting whether a stoppable_token can never receive a stop-request.
-
Add std::inplace_stop_token, std::inplace_stop_source and std::inplace_stop_callback<CB> types that provide a more efficient implementation of a stop-token for use in structured concurrency situations.
-
Add std::never_stop_token for use in places where you never want to issue a stop-request.
-
Add std::execution::get_stop_token() CPO for querying the stop-token to use for an operation from its receiver’s execution environment.
-
Add std::execution::stop_token_of_t<T> for querying the type of a stop-token returned from get_stop_token().
In addition, there are requirements added to some of the algorithms to specify what their cancellation behaviour is and what the requirements of customisations of those algorithms are with respect to cancellation.
The key component that enables generic cancellation within sender-based
operations is the get_stop_token() CPO. This CPO takes a single
parameter, which is the execution environment of the receiver passed to
execution::connect, and returns a stoppable_token that the operation
can use to check for stop-requests for that operation.
As the caller of execution::connect typically has control over the receiver
type it passes, it is able to customise the get_env CPO for
that receiver to return an execution environment that hooks the
get_stop_token() CPO to return a stop-token that the receiver has
control over and that it can use to communicate a stop-request to the operation
once it has started.
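As a minimal illustration of that hookup (not part of the proposed wording; stop_env and the member-function query style shown here are assumptions for exposition), an algorithm's receiver could expose an environment like the following, with the token coming from an inplace_stop_source owned by the algorithm's operation state:

// Environment returned from get_env on the algorithm's receiver.
struct stop_env {
  std::inplace_stop_token token;  // token of the operation state's inplace_stop_source

  // Answer the get_stop_token query with that token. (The exact customization
  // mechanism varies by revision; a member query function is shown here.)
  std::inplace_stop_token query(std::execution::get_stop_token_t) const noexcept {
    return token;
  }
};

When the algorithm later calls request_stop() on its inplace_stop_source, every child operation that fetched this token observes the stop-request.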
4.9.2. Support for cancellation is optional
Support for cancellation is optional, both on the part of the author of the receiver and on the part of the author of the sender.
If the receiver’s execution environment does not customise the
get_stop_token() CPO then invoking the CPO on that receiver’s
environment will invoke the default implementation which returns
std::never_stop_token. This is a special stoppable_token type that is
statically known to always return false
from the stop_possible() method.
Sender code that tries to use this stop-token will in general result in code that handles stop-requests being compiled out and having little to no run-time overhead.
If the sender doesn’t call get_stop_token(), for example because
the operation does not support cancellation, then it will simply not respond to
stop-requests from the caller.
Note that stop-requests are generally racy in nature as there is often a race between an operation completing naturally and the stop-request being made. If the operation has already completed or is past the point at which it can be cancelled when the stop-request is sent then the stop-request may just be ignored. An application will typically need to be able to cope with senders that might ignore a stop-request anyway.
4.9.3. Cancellation is inherently racy
Usually, an operation will attach a stop callback at some point inside the call
to execution::start() so that a subsequent stop-request will interrupt the
logic.
A stop-request can be issued concurrently from another thread. This means the
implementation of execution::start() needs to be careful to ensure that, once
a stop callback has been registered, there are no data-races between a
potentially concurrently-executing stop callback and the rest of the
implementation.
An implementation of execution::start() that supports cancellation will
generally need to perform (at least) two separate steps: launch the operation,
and subscribe a stop callback to the receiver’s stop-token. Care needs to be taken
depending on the order in which these two steps are performed.
If the stop callback is subscribed first and then the operation is launched, care needs to be taken to ensure that a stop-request that invokes the stop callback on another thread after the stop callback is registered but before the operation finishes launching does not result in either a missed cancellation request or a data-race, e.g. by performing an atomic write after the launch has finished executing.
If the operation is launched first and then the stop callback is subscribed,
care needs to be taken to ensure that, if the launched operation completes
concurrently on another thread, it does not destroy the operation-state
until after the stop callback has been registered, e.g. by having the
execution::start() implementation write to an atomic variable once it has
finished registering the stop callback and having the concurrent completion
handler check that variable and either call the completion-signalling operation
or store the result and defer calling the receiver’s completion-signalling
operation to the execution::start() call (which is still executing).
For an example of an implementation strategy for solving these data-races see § 1.4 Asynchronous Windows socket recv.
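A minimal sketch of the second ordering (launch first, then subscribe) follows. It is illustrative only: async_op, launch_work, request_cancel, and deliver_completion are hypothetical names, and the atomic flag stands in for whatever synchronisation a real operation state would use.

struct async_op {
  struct on_stop {
    async_op* self;
    void operator()() noexcept { self->request_cancel(); }
  };

  std::inplace_stop_token stop_tok;  // obtained from the receiver's environment
  std::optional<std::inplace_stop_callback<on_stop>> callback;
  std::atomic<bool> done_or_ready{false};

  void start() noexcept {
    launch_work();                              // step 1: launch the operation
    callback.emplace(stop_tok, on_stop{this});  // step 2: subscribe the stop callback
    if (done_or_ready.exchange(true)) {
      // The operation already completed concurrently and deferred to us; it is
      // now safe to tear down the callback and signal the receiver.
      deliver_completion();
    }
  }

  // Invoked by the execution resource when the work finishes, possibly on
  // another thread, possibly before the callback above has been registered.
  void complete() noexcept {
    if (done_or_ready.exchange(true)) {
      deliver_completion();  // callback already registered: complete directly
    }
    // Otherwise start() is still executing; it will deliver the completion.
  }

  void launch_work() noexcept;        // hypothetical
  void request_cancel() noexcept;     // hypothetical
  void deliver_completion() noexcept; // destroys the callback, then signals the receiver
};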
4.9.4. Cancellation design status
This paper currently includes the design for cancellation as proposed in Composable cancellation for sender-based async operations (P2175R0). P2175R0 contains more details on the background motivation, prior art, and design rationale of this design.
It is important to note, however, that initial review of this design in the SG1 concurrency subgroup raised some concerns related to runtime overhead of the design in single-threaded scenarios and these concerns are still being investigated.
The design of P2175R0 has been included in this paper for now, despite its potential to change, as we believe that support for cancellation is a fundamental requirement for an async model and is required in some form to be able to talk about the semantics of some of the algorithms proposed in this paper.
This paper will be updated in the future with any changes that arise from the investigations into P2175R0.
4.10. Sender factories and adaptors are lazy
In an earlier revision of this paper, some of the proposed algorithms supported executing their logic eagerly; i.e., before the returned sender has been connected to a receiver and started. These algorithms were removed because eager execution has a number of negative semantic and performance implications.
We originally included this functionality in the paper because of a long-standing belief that eager execution is a mandatory feature to be included in the standard Executors facility for that facility to be acceptable for accelerator vendors. A particular concern was that we must be able to write generic algorithms that can run either eagerly or lazily, depending on the kind of input sender or scheduler that has been passed into them as arguments. We considered this a requirement because the latency of launching work on an accelerator can sometimes be considerable.
However, in the process of working on this paper and on implementations of the features proposed within, our set of requirements has shifted as we came to better understand the different implementation strategies that are available for the feature set of this paper. After weighing the earlier concerns against the points presented below, we have arrived at the conclusion that a purely lazy model is enough for most algorithms, and users who intend to launch work earlier may write an algorithm to achieve that goal. We have also come to deeply appreciate the fact that a purely lazy model allows both the implementation and the compiler to have a much better understanding of what the complete graph of tasks looks like, allowing them to better optimize the code - also when targeting accelerators.
4.10.1. Eager execution leads to detached work or worse
One of the questions that arises with APIs that can potentially return
eagerly-executing senders is "What happens when those senders are destructed
without a call to execution::connect?" or similarly, "What happens if a call
to execution::connect is made, but the returned operation state is destroyed
before execution::start is called on that operation state?"
In these cases, the operation represented by the sender is potentially executing concurrently in another thread at the time that the destructor of the sender and/or operation-state is running. In the case that the operation has not completed executing by the time that the destructor is run we need to decide what the semantics of the destructor is.
There are three main strategies that can be adopted here, none of which is particularly satisfactory:
-
Make this undefined-behaviour - the caller must ensure that any eagerly-executing sender is always joined by connecting and starting that sender. This approach is generally pretty hostile to programmers, particularly in the presence of exceptions, since it complicates the ability to compose these operations.
Eager operations typically need to acquire resources when they are first called in order to start the operation early. This makes eager algorithms prone to failure. Consider, then, what might happen in an expression such as
when_all(eager_op_1(), eager_op_2()). Imagine eager_op_1() starts an asynchronous operation successfully, but then eager_op_2() throws. For lazy senders, that failure happens in the context of the when_all algorithm, which handles the failure and ensures that async work joins on all code paths. In this case though -- the eager case -- the child operation has failed even before when_all has been called.
It then becomes the responsibility, not of the algorithm, but of the end user to handle the exception and ensure that eager_op_1() is joined before allowing the exception to propagate. If they fail to do that, they incur undefined behavior.
-
Detach from the computation - let the operation continue in the background - like an implicit call to std::thread::detach(). While this approach can work in some circumstances for some kinds of applications, in general it is also pretty user-hostile; it makes it difficult to reason about the safe destruction of resources used by these eager operations. In general, detached work necessitates some kind of garbage collection; e.g., std::shared_ptr, to ensure resources are kept alive until the operations complete, and can make clean shutdown nigh impossible.
-
Block in the destructor until the operation completes. This approach is probably the safest to use as it preserves the structured nature of the concurrent operations, but also introduces the potential for deadlocking the application if the completion of the operation depends on the current thread making forward progress.
The risk of deadlock might occur, for example, if a thread-pool with a small number of threads is executing code that creates a sender representing an eagerly-executing operation and then calls the destructor of that sender without joining it (e.g. because an exception was thrown). If the current thread blocks waiting for that eager operation to complete and that eager operation cannot complete until some entry enqueued to the thread-pool’s queue of work is run then the thread may wait for an indefinite amount of time. If all threads of the thread-pool are simultaneously performing such blocking operations then deadlock can result.
There are also minor variations on each of these choices. For example:
-
A variation of (1): Call std::terminate if an eager sender is destructed without joining it. This is the approach that the std::thread destructor takes.
-
A variation of (2): Request cancellation of the operation before detaching. This reduces the chances of operations continuing to run indefinitely in the background once they have been detached but does not solve the lifetime- or shutdown-related challenges.
-
A variation of (3): Request cancellation of the operation before blocking on its completion. This is the strategy that
uses for its destructor. It reduces the risk of deadlock but does not eliminate it.std :: jthread
4.10.2. Eager senders complicate algorithm implementations
Algorithms that can assume they are operating on senders with strictly lazy
semantics are able to make certain optimizations that are not available if
senders can be potentially eager. With lazy senders, an algorithm can safely
assume that a call to execution::start on an operation state strictly happens
before the execution of that async operation. This frees the algorithm from
needing to resolve potential race conditions. For example, consider an algorithm
that puts async operations in sequence by starting an operation only
after the preceding one has completed. In an expression like
sequence(a, b, c), one may reasonably assume that a, b
and c are sequenced and therefore do not need synchronisation. Eager algorithms
break that assumption.
When an algorithm needs to deal with potentially eager senders, the potential race conditions can be resolved one of two ways, neither of which is desirable:
-
Assume the worst and implement the algorithm defensively, assuming all senders are eager. This obviously has overheads both at runtime and in algorithm complexity. Resolving race conditions is hard.
-
Require senders to declare whether they are eager or not with a query. Algorithms can then implement two different implementation strategies, one for strictly lazy senders and one for potentially eager senders. This addresses the performance problem of (1) while compounding the complexity problem.
4.10.3. Eager senders incur cancellation-related overhead
Another implication of the use of eager operations is with regards to cancellation. The eagerly executing operation will not have access to the caller’s stop token until the sender is connected to a receiver. If we still want to be able to cancel the eager operation then it will need to create a new stop source and pass its associated stop token down to child operations. Then when the returned sender is eventually connected it will register a stop callback with the receiver’s stop token that will request stop on the eager sender’s stop source.
As the eager operation does not know at the time that it is launched what the
type of the receiver is going to be, and thus whether or not the stop token
returned from get_stop_token() is an unstoppable_token or not,
the eager operation is going to need to assume it might be later connected to a
receiver with a stop token that might actually issue a stop request. Thus it
needs to declare space in the operation state for a type-erased stop callback
and incur the runtime overhead of supporting cancellation, even if cancellation
will never be requested by the caller.
The eager operation will also need to do this to support sending a stop request to the eager operation in the case that the sender representing the eager work is destroyed before it has been joined (assuming strategy (5) or (6) listed above is chosen).
4.10.4. Eager senders cannot access execution resource from the receiver
In sender/receiver, contextual information is passed from parent operations to their children by way of receivers. Information like stop tokens, allocators, current scheduler, priority, and deadline are propagated to child operations with custom receivers at the time the operation is connected. That way, each operation has the contextual information it needs before it is started.
But if the operation is started before it is connected to a receiver, then there isn’t a way for a parent operation to communicate contextual information to its child operations, which may complete before a receiver is ever attached.
4.11. Schedulers advertise their forward progress guarantees
To decide whether a scheduler (and its associated execution resource) is sufficient for a specific task, it may be necessary to know what kind of forward progress guarantees it provides for the execution agents it creates. The C++ Standard defines the following forward progress guarantees:
-
concurrent, which requires that a thread makes progress eventually;
-
parallel, which requires that a thread makes progress once it executes a step; and
-
weakly parallel, which does not require that the thread makes progress.
This paper introduces a scheduler query function,
get_forward_progress_guarantee, which returns one of the enumerators of a new
enum type, forward_progress_guarantee. Each enumerator of
forward_progress_guarantee corresponds to one of the aforementioned
guarantees.
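For example, an algorithm might branch on the result of this query; the following sketch is illustrative only, and sched is a hypothetical scheduler:

auto g = execution::get_forward_progress_guarantee(sched);

if (g == execution::forward_progress_guarantee::concurrent) {
  // Agents eventually make progress even if they block on one another.
} else if (g == execution::forward_progress_guarantee::parallel) {
  // Agents make progress once they start; avoid inter-agent blocking.
} else { // execution::forward_progress_guarantee::weakly_parallel
  // Make no forward progress assumptions at all.
}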
4.12. Most sender adaptors are pipeable
To facilitate an intuitive syntax for composition, most sender adaptors are pipeable; they can be composed (piped)
together with operator|. This mechanism is similar to the operator|
composition that C++ range adaptors support and draws inspiration from piping in
*nix shells.
Pipeable sender adaptors take a sender as their first parameter and have no
other sender parameters.
snd | adaptor(args...) will pass the sender snd
as the first argument to the pipeable sender
adaptor adaptor(args...). Pipeable sender adaptors support partial application of the
parameters after the first. For example, all of the following are equivalent:
execution::bulk(snd, N, [](std::size_t i, auto d) {});
execution::bulk(N, [](std::size_t i, auto d) {})(snd);
snd | execution::bulk(N, [](std::size_t i, auto d) {});
Piping enables you to compose together senders with a linear syntax. Without it, you’d have to use either nested function call syntax, which would cause a syntactic inversion of the direction of control flow, or you’d have to introduce a temporary variable for each stage of the pipeline. Consider the following example where we want to execute first on a CPU thread pool, then on a CUDA GPU, then back on the CPU thread pool:
The same pipeline can be written in three syntax styles: as nested function calls, with named temporaries, or as a pipe.
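For illustration, the sketch below shows one plausible rendering of the three styles for such a CPU to GPU to CPU pipeline; it is not the paper's exact listing, and cpu_sched, gpu_sched, and the *_work callables are hypothetical.

// Function call (nested):
execution::sender auto nested =
  execution::then(
    execution::continues_on(
      execution::then(
        execution::continues_on(
          execution::then(execution::schedule(cpu_sched), cpu_work),
          gpu_sched),
        gpu_work),
      cpu_sched),
    more_cpu_work);

// Function call (named temporaries):
execution::sender auto s1 = execution::then(execution::schedule(cpu_sched), cpu_work);
execution::sender auto s2 = execution::continues_on(s1, gpu_sched);
execution::sender auto s3 = execution::then(s2, gpu_work);
execution::sender auto s4 = execution::continues_on(s3, cpu_sched);
execution::sender auto named = execution::then(s4, more_cpu_work);

// Pipe:
execution::sender auto piped = execution::schedule(cpu_sched)
                             | execution::then(cpu_work)
                             | execution::continues_on(gpu_sched)
                             | execution::then(gpu_work)
                             | execution::continues_on(cpu_sched)
                             | execution::then(more_cpu_work);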
Certain sender adaptors are not pipeable, because using the pipeline syntax can result in confusion of the semantics of the adaptors involved. Specifically, the following sender adaptors are not pipeable.
-
execution::when_all and execution::when_all_with_variant: Since these sender adaptors take a variadic pack of senders, a partially applied form would be ambiguous with a non-partially-applied form with an arity of one less.
-
execution::starts_on: This sender adaptor changes how the sender passed to it is executed, not what happens to its result, but allowing it in a pipeline makes it read as if it performed a function more similar to continues_on.
Sender consumers could be made pipeable, but we have chosen not to do so. Since these are terminal nodes in a pipeline and nothing can be piped after them, a pipe syntax would be confusing as well as unnecessary; consumers cannot be chained, and we believe sender consumers read better with function call syntax.
4.13. A range of senders represents an async sequence of data
Senders represent a single unit of asynchronous work. In many cases though, what is being modeled is a sequence of data arriving asynchronously, and you want computation to happen on demand, when each element arrives. This requires nothing more than what is in this paper and the range support in C++20. A range of senders would allow you to model such input as keystrokes, mouse movements, sensor readings, or network requests.
Given some expression R that is a range of senders, consider
the following in a coroutine that returns an async generator type:
for (auto snd : R) {
  if (auto opt = co_await execution::stopped_as_optional(std::move(snd)))
    co_yield fn(*std::move(opt));
  else
    break;
}
This transforms each element of the asynchronous sequence R
with the function fn on demand, as the data arrives. The result is a new
asynchronous sequence of the transformed values.
Now imagine that R is the simple expression
views::iota(0) | views::transform(just). This creates a lazy range of senders, each
of which completes immediately with monotonically increasing integers. The above
code churns through the range, generating a new infinite asynchronous range of
values [fn(0), fn(1), fn(2), ...].
Far more interesting would be if R were a range of senders
representing, say, user actions in a UI. The above code gives a simple way to
respond to user actions on demand.
4.14. Senders can represent partial success
Receivers have three ways they can complete: with success, failure, or cancellation. This raises the question of how they can be used to represent async operations that partially succeed. For example, consider an API that reads from a socket. The connection could drop after the API has filled in some of the buffer. In cases like that, it makes sense to want to report both that the connection dropped and that some data has been successfully read.
Often in the case of partial success, the error condition is not fatal nor does it mean the API has failed to satisfy its post-conditions. It is merely an extra piece of information about the nature of the completion. In those cases, "partial success" is another way of saying "success". As a result, it is sensible to pass both the error code and the result (if any) through the value channel, as shown below:
// Capture a buffer for read_socket_async to fill in
execution::just(array<byte, 1024>{})
  | execution::let_value([socket](array<byte, 1024>& buff) {
      // read_socket_async completes with two values: an error_code and
      // a count of bytes:
      return read_socket_async(socket, span{buff})
        // For success (partial and full), specify the next action:
        | execution::let_value([](error_code err, size_t bytes_read) {
            if (err != 0) {
              // OK, partial success. Decide how to deal with the partial results
            } else {
              // OK, full success here.
            }
          });
    })
In other cases, the partial success is more of a partial failure. That happens when the error condition indicates that in some way the function failed to satisfy its post-conditions. In those cases, sending the error through the value channel loses valuable contextual information. It’s possible that bundling the error and the incomplete results into an object and passing it through the error channel makes more sense. In that way, generic algorithms will not miss the fact that a post-condition has not been met and react inappropriately.
Another possibility is for an async API to return a range of senders: if the API completes with full success, full error, or cancellation, the returned range contains just one sender with the result. Otherwise, if the API partially fails (doesn’t satisfy its post-conditions, but some incomplete result is available), the returned range would have two senders: the first containing the partial result, and the second containing the error. Such an API might be used in a coroutine as follows:
// Declare a buffer for read_socket_async to fill in
array<byte, 1024> buff;

for (auto snd : read_socket_async(socket, span{buff})) {
  try {
    if (optional<size_t> bytes_read =
          co_await execution::stopped_as_optional(std::move(snd))) {
      // OK, we read some bytes into buff. Process them here....
    } else {
      // The socket read was cancelled and returned no data. React
      // appropriately.
    }
  } catch (...) {
    // read_socket_async failed to meet its post-conditions.
    // Do some cleanup and propagate the error...
  }
}
Finally, it’s possible to combine these two approaches when the API can both partially succeed (meeting its post-conditions) and partially fail (not meeting its post-conditions).
4.15. All awaitables are senders
Since C++20 added coroutines to the standard, we expect that coroutines and awaitables will be how many programmers choose to express their asynchronous code. However, in this paper, we are proposing to add a suite of asynchronous algorithms that accept senders, not awaitables. One might wonder whether and how these algorithms will be accessible to those who choose coroutines instead of senders.
In truth there will be no problem because all generally awaitable types automatically model the sender concept. The adaptation is transparent and happens in the sender customization points, which are aware of awaitables. (By "generally awaitable" we mean types that don’t require custom await_transform trickery from a promise type to make them awaitable.)
For an example, imagine a coroutine type called task<T> that knows nothing about senders. It doesn’t implement any of the sender customization points. Despite that fact, and despite the fact that the this_thread::sync_wait algorithm is constrained with the sender concept, the following would compile and do what the user wants:
task<int> doSomeAsyncWork();

int main() {
  // OK, awaitable types satisfy the requirements for senders:
  auto o = this_thread::sync_wait(doSomeAsyncWork());
}
Since awaitables are senders, writing a sender-based asynchronous algorithm is trivial if you have a coroutine task type: implement the algorithm as a coroutine. If you are not bothered by the possibility of allocations and indirections as a result of using coroutines, then there is no need to ever write a sender, a receiver, or an operation state.
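For instance, the following is a minimal sketch, not part of the proposal, of a sender algorithm written as a coroutine. It assumes a coroutine task type task<T> whose promise uses execution::with_awaitable_senders (see the next section) and an input sender that completes with a single int; the names task and add_one are illustrative.
// Awaits the input sender and completes with the incremented result.
// Because task<T> is itself awaitable, the returned object is also a sender.
task<int> add_one(execution::sender auto snd) {
  int value = co_await std::move(snd);
  co_return value + 1;
}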
4.16. Many senders can be trivially made awaitable
If you choose to implement your sender-based algorithms as coroutines, you’ll run into the issue of how to retrieve results from a passed-in sender. This is not a problem. If the coroutine type opts in to sender support -- trivial with the execution::with_awaitable_senders utility -- then a large class of senders is transparently awaitable from within the coroutine.
For example, consider the following trivial implementation of the sender-based retry algorithm:
template <class S>
  requires single-sender<S&> // see [exec.as.awaitable]
task<single-sender-value-type<S>> retry(S s) {
  for (;;) {
    try {
      co_return co_await s;
    } catch (...) {
    }
  }
}
Only some senders can be made awaitable directly, because callbacks are more expressive than coroutines. An awaitable expression has a single type: the result value of the async operation. In contrast, a callback can accept multiple arguments as the result of an operation. What’s more, the callback can have overloaded function call signatures that take different sets of arguments. There is no way to automatically map such senders into awaitables.
The execution::with_awaitable_senders utility recognizes as awaitables those senders that send a single value of a single type. To await another kind of sender, a user would have to first map its value channel into a single value of a single type -- say, with the execution::into_variant sender algorithm -- before co_await-ing that sender.
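For example, a minimal sketch of awaiting a sender with more than one possible value completion; multi_snd is a placeholder name for such a sender:
// multi_snd may complete with, say, either (int, float) or (double).
// into_variant collapses those alternatives into a single type:
// std::variant<std::tuple<int, float>, std::tuple<double>>.
auto result = co_await execution::into_variant(std::move(multi_snd));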
4.17. Cancellation of a sender can unwind a stack of coroutines
When looking at the sender-based retry algorithm in the previous section, we can see that the value and error cases are correctly handled. But what about cancellation? What happens to a coroutine that is suspended awaiting a sender that completes by calling execution::set_stopped?
When your task type’s promise inherits from with_awaitable_senders, what happens is this: the coroutine behaves as if an uncatchable exception had been thrown from the co_await expression. (It is not really an exception, but it’s helpful to think of it that way.) Provided that the promise types of the calling coroutines also inherit from with_awaitable_senders, or more generally implement a member function called unhandled_stopped, the exception unwinds the chain of coroutines as if an exception were thrown, except that it bypasses catch(...) clauses.
In order to "catch" this uncatchable stopped exception, one of the calling
coroutines in the stack would have to await a sender that maps the stopped
channel into either a value or an error. That is achievable with the
,
,
, or
sender
adaptors. For instance, we can use
to "catch"
the stopped signal and map it into an empty optional as shown below:
if (auto opt = co_await execution::stopped_as_optional(some_sender)) {
  // OK, some_sender completed successfully, and opt contains the result.
} else {
  // some_sender completed with a cancellation signal.
}
As described in the section "All
awaitables are senders", the sender customization points recognize
awaitables and adapt them transparently to model the sender concept. When
-ing an awaitable and a receiver, the adaptation layer awaits the
awaitable within a coroutine that implements
in its promise
type. The effect of this is that an "uncatchable" stopped exception propagates
seamlessly out of awaitables, causing
to be called on
the receiver.
Obviously, unhandled_stopped is a library extension of the coroutine promise interface. Many promise types will not implement unhandled_stopped. When an uncatchable stopped exception tries to propagate through such a coroutine, it is treated as an unhandled exception and std::terminate is called. The solution, as described above, is to use a sender adaptor to handle the stopped exception before awaiting it. It goes without saying that any future Standard Library coroutine types ought to implement unhandled_stopped. The author of Add lazy coroutine (coroutine task) type, which proposes a standard coroutine task type, is in agreement.
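As an illustration, a minimal sketch of a coroutine promise that gains unhandled_stopped by inheriting from execution::with_awaitable_senders; the type my_task is a hypothetical name and the sketch is not proposed wording:
struct my_task {
  struct promise_type : execution::with_awaitable_senders<promise_type> {
    my_task get_return_object();
    std::suspend_always initial_suspend() noexcept;
    std::suspend_always final_suspend() noexcept;
    void return_void();
    void unhandled_exception();
    // with_awaitable_senders provides unhandled_stopped(), so an
    // "uncatchable" stopped signal unwinds through my_task coroutines
    // instead of resulting in a call to std::terminate.
  };
};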
4.18. Composition with parallel algorithms
The C++ Standard Library provides a large number of algorithms that offer the potential for non-sequential execution via the use of execution policies. The set of algorithms with execution policy overloads are often referred to as "parallel algorithms", although additional policies are available.
Existing policies, such as execution::par, give the implementation permission to execute the algorithm in parallel. However, the choice of execution resources used to perform the work is left to the implementation.
We will propose a customization point for combining schedulers with policies in order to provide control over where work will execute.
template <class ExecutionPolicy>
unspecified executing_on(
    execution::scheduler auto scheduler,
    ExecutionPolicy&& policy);
This function would return an object of an unspecified type which can be used in place of an execution policy as the first argument to one of the parallel algorithms. The overload selected by that object should execute its computation as requested by policy while using scheduler to create any work to be run. The expression may be ill-formed if scheduler is not able to support the given policy.
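For example, a minimal sketch of the intended usage; executing_on is the customization point proposed above, and get_system_thread_pool is the same placeholder accessor used elsewhere in this paper:
std::vector<int> data(1'000, 1);
execution::scheduler auto sched = get_system_thread_pool().scheduler();

// Run the algorithm with the semantics of the par policy, but create
// its work on the given scheduler.
std::for_each(execution::executing_on(sched, execution::par),
              data.begin(), data.end(),
              [](int& x) { x *= 2; });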
The existing parallel algorithms are synchronous; all of the effects performed by the computation are complete before the algorithm returns to its caller. This remains unchanged with the executing_on customization point.
In the future, we expect additional papers to propose asynchronous forms of the parallel algorithms that (1) return senders rather than values or void, and (2) take as their first argument an object of unspecified type obtained from a customization point that pairs a sender with an execution policy.
4.19. User-facing sender factories
A sender factory is an algorithm that takes no senders as parameters and returns a sender.
4.19.1. execution :: schedule
execution :: sender auto schedule ( execution :: scheduler auto scheduler );
Returns a sender describing the start of a task graph on the provided scheduler. See § 4.2 Schedulers represent execution resources.
execution::scheduler auto sch1 = get_system_thread_pool().scheduler();
execution::sender auto snd1 = execution::schedule(sch1);
// snd1 describes the creation of a new task on the system thread pool
4.19.2. execution :: just
execution::sender auto just(auto&&... values);
Returns a sender with no completion schedulers, which sends the provided values. The input values are decay-copied into the returned sender. When the returned sender is connected to a receiver, the values are moved into the operation state if the sender is an rvalue; otherwise, they are copied. Then xvalues referencing the values in the operation state are passed to the receiver’s set_value.
execution::sender auto snd1 = execution::just(3.14);
execution::sender auto then1 = execution::then(snd1, [](double d) {
  std::cout << d << "\n";
});

execution::sender auto snd2 = execution::just(3.14, 42);
execution::sender auto then2 = execution::then(snd2, [](double d, int i) {
  std::cout << d << ", " << i << "\n";
});

std::vector v3{1, 2, 3, 4, 5};
execution::sender auto snd3 = execution::just(v3);
execution::sender auto then3 = execution::then(snd3, [](std::vector<int>&& v3copy) {
  for (auto&& e : v3copy) { e *= 2; }
  return std::move(v3copy);
});
auto&& [v3copy] = this_thread::sync_wait(then3).value();
// v3 contains {1, 2, 3, 4, 5}; v3copy will contain {2, 4, 6, 8, 10}.

execution::sender auto snd4 = execution::just(std::vector{1, 2, 3, 4, 5});
execution::sender auto then4 = execution::then(std::move(snd4), [](std::vector<int>&& v4) {
  for (auto&& e : v4) { e *= 2; }
  return std::move(v4);
});
auto&& [v4] = this_thread::sync_wait(std::move(then4)).value();
// v4 contains {2, 4, 6, 8, 10}. No vectors were copied in this example.
4.19.3. execution :: just_error
execution :: sender auto just_error ( auto && error );
Returns a sender with no completion schedulers, which completes with the specified error. If the provided error is an lvalue reference, a copy is made inside the returned sender and a non-const lvalue reference to the copy is sent to the receiver’s set_error. If the provided value is an rvalue reference, it is moved into the returned sender and an rvalue reference to it is sent to the receiver’s set_error.
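For example, a minimal sketch, not part of the proposal, of pairing just_error with the upon_error adaptor described in § 4.20.3 execution::upon_*:
// A sender that completes on the error channel with an error_code,
// and an adaptor that maps that error back into the value channel.
execution::sender auto failed =
    execution::just_error(std::make_error_code(std::errc::io_error));

execution::sender auto recovered =
    execution::upon_error(failed, [](std::error_code ec) {
      return std::string("recovered from: ") + ec.message();
    });

auto [msg] = this_thread::sync_wait(std::move(recovered)).value();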
4.19.4. execution :: just_stopped
execution :: sender auto just_stopped ();
Returns a sender with no completion schedulers, which completes immediately by calling the receiver’s set_stopped.
4.19.5. execution :: read_env
execution :: sender auto read_env ( auto tag );
Returns a sender that reaches into a receiver’s environment and pulls out the current value associated with the customization point denoted by tag. It then sends the value read back to the receiver through the value channel. For instance, read_env(get_scheduler) is a sender that asks the receiver for the currently suggested scheduler and passes it to the receiver’s set_value completion-signal.
This can be useful when scheduling nested dependent work. The following sender pulls the current scheduler into the value channel and then schedules more work onto it.
execution::sender auto task =
    execution::read_env(get_scheduler)
  | execution::let_value([](auto sched) {
      return execution::starts_on(sched, some nested work here);
    });

this_thread::sync_wait(std::move(task)); // wait for it to finish
This code uses the fact that sync_wait associates a scheduler with the receiver that it connects with task. read_env(get_scheduler) reads that scheduler out of the receiver and passes it to let_value’s receiver’s set_value function, which in turn passes it to the lambda. That lambda returns a new sender that uses the scheduler to schedule some nested work onto sync_wait’s scheduler.
4.20. User-facing sender adaptors
A sender adaptor is an algorithm that takes one or more senders, which it may connect, as parameters, and returns a sender whose completion is related to the sender arguments it has received.
Sender adaptors are lazy, that is, they are never allowed to submit any work for execution prior to the returned sender being started later on, and are also guaranteed to not start any input senders passed into them. Sender consumers such as § 4.21.1 this_thread::sync_wait start senders.
For more implementer-centric description of starting senders, see § 5.5 Sender adaptors are lazy.
4.20.1. execution :: continues_on
execution :: sender auto continues_on ( execution :: sender auto input , execution :: scheduler auto scheduler );
Returns a sender describing the transition from the execution agent of the input sender to the execution agent of the target scheduler. See § 4.6 Execution resource transitions are explicit.
execution::scheduler auto cpu_sched = get_system_thread_pool().scheduler();
execution::scheduler auto gpu_sched = cuda::scheduler();

execution::sender auto cpu_task = execution::schedule(cpu_sched);
// cpu_task describes the creation of a new task on the system thread pool

execution::sender auto gpu_task = execution::continues_on(cpu_task, gpu_sched);
// gpu_task describes the transition of the task graph described by cpu_task to the gpu
4.20.2. execution :: then
execution::sender auto then(
    execution::sender auto input,
    std::invocable<values-sent-by(input)...> function);
then returns a sender describing the task graph described by the input sender, with an added node of invoking the provided function with the values sent by the input sender as arguments. function is guaranteed to not begin executing until the returned sender is started.
execution::sender auto input = get_input();
execution::sender auto snd = execution::then(input, [](auto... args) {
  std::print(args...);
});
// snd describes the work described by input
// followed by printing all of the values sent by input
This adaptor is included as it is necessary for writing any sender code that actually performs a useful function.
4.20.3. execution :: upon_ *
execution::sender auto upon_error(
    execution::sender auto input,
    std::invocable<errors-sent-by(input)...> function);

execution::sender auto upon_stopped(
    execution::sender auto input,
    std::invocable auto function);
upon_error and upon_stopped are similar to then, but where then works with values sent by the input sender, upon_error works with errors, and upon_stopped is invoked when the "stopped" signal is sent.
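For example, a minimal sketch of providing a fallback when work is cancelled; some_work is a placeholder name for a sender that may complete with the stopped signal:
// If some_work is stopped, complete with a fallback value instead.
execution::sender auto with_fallback =
    execution::upon_stopped(some_work, [] { return 0; });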
4.20.4. execution :: let_ *
execution::sender auto let_value(
    execution::sender auto input,
    std::invocable<values-sent-by(input)...> function);

execution::sender auto let_error(
    execution::sender auto input,
    std::invocable<errors-sent-by(input)...> function);

execution::sender auto let_stopped(
    execution::sender auto input,
    std::invocable auto function);
let_value is very similar to then: when it is started, it invokes the provided function with the values sent by the input sender as arguments. However, where the sender returned from then sends exactly what that function ends up returning, let_value requires that the function return a sender, and the sender returned by let_value sends the values sent by the sender returned from the callback. This is similar to the notion of "future unwrapping" in future/promise-based frameworks. function is guaranteed to not begin executing until the returned sender is started.
let_error and let_stopped are similar to let_value, but where let_value works with values sent by the input sender, let_error works with errors, and let_stopped is invoked when the "stopped" signal is sent.
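For example, a minimal sketch of using let_value to launch dependent asynchronous work; async_read_file is a hypothetical function that returns a sender:
// First produce a filename, then start a second async operation that
// reads that file; the resulting sender sends the file's contents.
execution::sender auto contents =
    execution::just(std::string("input.txt"))
  | execution::let_value([](std::string& name) {
      // async_read_file returns a sender; let_value "unwraps" it.
      return async_read_file(name);
    });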
4.20.5. execution :: starts_on
execution :: sender auto starts_on ( execution :: scheduler auto sched , execution :: sender auto snd );
Returns a sender which, when started, will start the provided sender on an execution agent belonging to the execution resource associated with the provided scheduler. This returned sender has no completion schedulers.
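For example, a minimal sketch of starting existing work on a particular scheduler; get_system_thread_pool is the same placeholder accessor used elsewhere in this paper:
execution::scheduler auto sched = get_system_thread_pool().scheduler();

// The piped sender has no completion scheduler of its own; starts_on
// causes it to be started on an agent of sched's execution resource.
execution::sender auto on_pool = execution::starts_on(
    sched,
    execution::just(42) | execution::then([](int i) { return i + 1; }));

auto [v] = this_thread::sync_wait(std::move(on_pool)).value(); // v == 43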
4.20.6. execution :: into_variant
execution :: sender auto into_variant ( execution :: sender auto snd );
Returns a sender which sends a variant of tuples of all the possible sets of types sent by the input sender. Senders can send multiple sets of values depending on runtime conditions; this is a helper function that turns them into a single variant value.
4.20.7. execution :: stopped_as_optional
execution :: sender auto stopped_as_optional ( single - sender auto snd );
Returns a sender that maps the value channel from a T to an optional<decay_t<T>>, and maps the stopped channel to a value of an empty optional<decay_t<T>>.
4.20.8. execution :: stopped_as_error
template < move_constructible Error > execution :: sender auto stopped_as_error ( execution :: sender auto snd , Error err );
Returns a sender that maps the stopped channel to an error of err.
4.20.9. execution :: bulk
execution::sender auto bulk(
    execution::sender auto input,
    std::integral auto shape,
    invocable<decltype(shape), values-sent-by(input)...> function);
Returns a sender describing the task of invoking the provided function with every index in the provided shape along with the values sent by the input sender. The returned sender completes once all invocations have completed, or an error has occurred. If it completes by sending values, they are equivalent to those sent by the input sender.
No instance of function will begin executing until the returned sender is started. Each invocation of function runs in an execution agent whose forward progress guarantees are determined by the scheduler on which they are run. All agents created by a single use of bulk execute with the same guarantee. The number of execution agents used by bulk is not specified. This allows a scheduler to execute some invocations of the function in parallel.
In this proposal, only integral types are used to specify the shape of the bulk section. We expect that future papers may wish to explore extensions of the interface to support additional kinds of shapes, such as multi-dimensional grids, that are commonly used for parallel computing tasks.
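For example, a minimal sketch of filling an array in parallel with bulk; get_system_thread_pool is the same placeholder accessor used elsewhere in this paper:
std::array<double, 128> results{};
execution::scheduler auto sched = get_system_thread_pool().scheduler();

// Invoke the function once for every index in [0, results.size()).
execution::sender auto work =
    execution::schedule(sched)
  | execution::bulk(results.size(), [&results](std::size_t i) {
      results[i] = static_cast<double>(i) * 0.5;
    });

this_thread::sync_wait(std::move(work));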
4.20.10. execution :: split
execution :: sender auto split ( execution :: sender auto sender );
If the provided sender is a multi-shot sender, returns that sender. Otherwise, returns a multi-shot sender which sends values equivalent to the values sent by the provided sender. See § 4.7 Senders can be either multi-shot or single-shot.
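For example, a minimal sketch of reusing a single-shot sender's result in two continuations; async_compute is a hypothetical function returning a sender of int:
// multi can be connected more than once, even if async_compute()
// returns a single-shot sender.
execution::sender auto multi = execution::split(async_compute());

execution::sender auto doubled  = execution::then(multi, [](int v) { return v * 2; });
execution::sender auto plus_one = execution::then(multi, [](int v) { return v + 1; });

execution::sender auto both = execution::when_all(doubled, plus_one);
auto [d, p] = this_thread::sync_wait(std::move(both)).value();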
4.20.11. execution :: when_all
execution::sender auto when_all(execution::sender auto... inputs);

execution::sender auto when_all_with_variant(execution::sender auto... inputs);
when_all returns a sender that completes once all of the input senders have completed. It is constrained to only accept senders that can complete with a single set of values (i.e., it only calls one overload of set_value on its receiver). The values sent by this sender are the values sent by each of the input senders, in order of the arguments passed to when_all. It completes inline on the execution resource on which the last input sender completes, unless stop is requested before when_all is started, in which case it completes inline within the call to start.
when_all_with_variant does the same, but it adapts all the input senders using into_variant, and so it does not constrain the input arguments as when_all does.
The returned sender has no completion schedulers.
execution::scheduler auto sched = thread_pool.scheduler();

execution::sender auto sends_1   = ...;
execution::sender auto sends_abc = ...;

execution::sender auto both = execution::when_all(sends_1, sends_abc);

execution::sender auto final = execution::then(both, [](auto... args) {
  std::cout << std::format("the two args: {}, {}", args...);
});
// when final executes, it will print "the two args: 1, abc"
4.21. User-facing sender consumers
A sender consumer is an algorithm that takes one or more senders, which it may connect, as parameters, and does not return a sender.
4.21.1. this_thread :: sync_wait
auto sync_wait(execution::sender auto sender)
  requires (always-sends-same-values(sender))
    -> std::optional<std::tuple<values-sent-by(sender)>>;
this_thread::sync_wait is a sender consumer that submits the work described by the provided sender for execution, blocking the current std::thread or thread of main until the work is completed, and returns an optional tuple of values that were sent by the provided sender on its completion of work. Where § 4.19.1 execution::schedule and § 4.19.2 execution::just are meant to enter the domain of senders, sync_wait is one way to exit the domain of senders, retrieving the result of the task graph.
If the provided sender sends an error instead of values, sync_wait throws that error as an exception, or rethrows the original exception if the error is of type std::exception_ptr.
If the provided sender sends the "stopped" signal instead of values,
returns an empty optional.
For an explanation of the requires clause, see § 5.8 All senders are typed. That clause also explains another sender consumer, built on top of sync_wait: sync_wait_with_variant.
Note: This function is specified inside std::this_thread, and not inside execution. This is because sync_wait has to block the current execution agent, but there is no reliable way to determine what the current execution agent is. Since the standard does not specify any functions on the current execution agent other than those in std::this_thread, this is the flavor of this function that is being proposed. If C++ ever obtains fibers, for instance, we expect that a variant of this function called std::this_fiber::sync_wait would be provided. We also expect that runtimes with execution agents that use different synchronization mechanisms than std::thread’s will provide their own flavors of sync_wait as well (assuming their execution agents have the means to block in a non-deadlock manner).
5. Design - implementer side
5.1. Receivers serve as glue between senders
A receiver is a callback that supports more than one channel. In fact, it supports three of them:
-
set_value, which is the moral equivalent of an operator() or a function call, and which signals successful completion of the operation its execution depends on;
-
set_error, which signals that an error has happened during scheduling of the current work, executing the current work, or at some earlier point in the sender chain; and
-
set_stopped, which signals that the operation completed without succeeding (set_value) and without failing (set_error). This result is often used to indicate that the operation stopped early, typically because it was asked to do so because the result is no longer needed.
Once an async operation has been started, exactly one of these functions must be invoked on a receiver before it is destroyed.
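For illustration, a minimal sketch of a receiver that handles all three channels; the type print_receiver is a hypothetical name, and the member-function opt-in it uses is described in § 5.9 Customization points:
struct print_receiver {
  using receiver_concept = execution::receiver_t;

  // Success: called with the operation's results.
  void set_value(int v) && noexcept { std::cout << "value: " << v << '\n'; }

  // Failure: called with the error.
  void set_error(std::exception_ptr) && noexcept { std::cout << "error\n"; }

  // Cancellation: called when the operation stopped early.
  void set_stopped() && noexcept { std::cout << "stopped\n"; }
};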
While the receiver interface may look novel, it is in fact very similar to the interface of std::promise, which provides the first two signals as set_value and set_exception, and it’s possible to emulate the third channel with lifetime management of the promise.
Receivers are not a part of the end-user-facing API of this proposal; they are necessary to allow unrelated senders to communicate with each other, but the only users who will interact with receivers directly are authors of senders.
Receivers are what is passed as the second argument to § 5.3 execution::connect.
5.2. Operation states represent work
An operation state is an object that represents work. Unlike senders, it is not a chaining mechanism; instead, it is a concrete object that packages the work described by a full sender chain, ready to be executed. An operation state is neither movable nor copyable, and its interface consists of a single algorithm: start, which serves as the submission point of the work represented by a given operation state.
Operation states are not a part of the user-facing API of this proposal; they are necessary for implementing sender consumers like this_thread::sync_wait, and the knowledge of them is necessary to implement senders, so the only users who will interact with operation states directly are authors of senders and authors of sender algorithms.
The return value of § 5.3 execution::connect must satisfy the operation state concept.
5.3. execution :: connect
execution::connect is a customization point which connects senders with receivers, resulting in an operation state that will ensure that, if start is called, one of the completion operations will be called on the receiver passed to connect.
execution::sender auto snd = some input sender;
execution::receiver auto rcv = some receiver;
execution::operation_state auto state = execution::connect(snd, rcv);

execution::start(state);
// at this point, it is guaranteed that the work represented by state has been submitted
// to an execution resource, and that execution resource will eventually call one of the
// completion operations on rcv

// operation states are not movable, and therefore this operation state object must be
// kept alive until the operation finishes
5.4. Sender algorithms are customizable
Senders being able to advertise what their completion schedulers are fulfills one of the promises of senders: that of being able to customize an implementation of a sender algorithm based on what scheduler any work it depends on will complete on.
The simple way to provide customizations for functions like then, that is, for sender adaptors and sender consumers, is to follow the customization scheme that has been adopted for the C++20 ranges library; to do that, we would define the expression execution::then(sender, invocable) to be equivalent to:
-
sender.then(invocable), if that expression is well-formed; otherwise
-
then(sender, invocable), performed in a context where this call always performs ADL, if that expression is well-formed; otherwise
-
a default implementation of then, which returns a sender adaptor, and then define the exact semantics of said adaptor.
However, this definition is problematic. Imagine another sender adaptor, bulk, which is a structured abstraction for a loop over an index space. Its default implementation is just a for loop. However, for accelerator runtimes like CUDA, we would like sender algorithms like bulk to have specialized behavior, which invokes a kernel of more than one thread (with its size defined by the call to bulk); therefore, we would like to customize bulk for CUDA senders to achieve this. However, there’s no reason for CUDA kernels to necessarily customize the then sender adaptor, as the generic implementation is perfectly sufficient. This creates a problem, though; consider the following snippet:
execution::scheduler auto cuda_sch = cuda_scheduler{};

execution::sender auto initial = execution::schedule(cuda_sch);
// the type of initial is a type defined by the cuda_scheduler
// let's call it cuda::schedule_sender<>

execution::sender auto next = execution::then(initial, []{ return 1; });
// the type of next is a standard-library unspecified sender adaptor
// that wraps the cuda sender
// let's call it execution::then_sender_adaptor<cuda::schedule_sender<>>

execution::sender auto kernel_sender = execution::bulk(next, shape, [](int i){ ... });
How can we specialize the bulk sender adaptor for our wrapped cuda::schedule_sender<>? Well, here’s one possible approach, taking advantage of ADL (and the fact that the definition of "associated namespace" also recursively enumerates the associated namespaces of all template parameters of a type):
namespace cuda::for_adl_purposes {
  template<typename... SentValues>
  class schedule_sender {
    execution::operation_state auto connect(execution::receiver auto rcv);
    execution::scheduler auto get_completion_scheduler() const;
  };

  execution::sender auto bulk(
      execution::sender auto&& input,
      execution::shape auto&& shape,
      invocable<sender-values(input)> auto&& fn)
  {
    // return a cuda sender representing a bulk kernel launch
  }
} // namespace cuda::for_adl_purposes
However, if the input sender is not just a schedule_sender like in the example above, but another sender that overrides bulk by itself, as a member function, because its author believes they know an optimization for bulk, then the specialization above will no longer be selected, because a member function of the first argument is a better match than the ADL-found overload.
This means that well-meant specialization of sender algorithms that are entirely scheduler-agnostic can have negative consequences. The scheduler-specific specialization - which is essential for good performance on platforms providing specialized ways to launch certain sender algorithms - would not be selected in such cases. But it’s really the scheduler that should control the behavior of sender algorithms when a non-default implementation exists, not the sender. Senders merely describe work; schedulers, however, are the handle to the runtime that will eventually execute said work, and should thus have the final say in how the work is going to be executed.
Therefore, we are proposing the following customization scheme: the expression execution::<sender-algorithm>(sender, args...), for any given sender algorithm <sender-algorithm> that accepts a sender as its first argument, should do the following:
-
Create a sender that implements the default implementation of the sender algorithm. That sender is tuple-like; it can be destructured into its constituent parts: algorithm tag, data, and child sender(s).
-
We query the child sender for its domain. A domain is a tag type associated with the scheduler that the child sender will complete on. If there are multiple child senders, we query all of them for their domains and require that they all be the same.
-
We use the domain to dispatch to a transform_sender customization, which accepts the sender and optionally performs a domain-specific transformation on it. This customization is expected to return a new sender, which will be returned from execution::<sender-algorithm> in place of the original sender.
5.5. Sender adaptors are lazy
Contrary to early revisions of this paper, we propose to make all sender adaptors perform strictly lazy submission, unless specified otherwise.
Strictly lazy submission means that there is a guarantee that no work is submitted to an execution resource before a receiver is connected to a sender and start is called on the resulting operation state.
5.6. Lazy senders provide optimization opportunities
Because lazy senders fundamentally describe work, instead of describing or representing the submission of said work to an execution resource, and thanks to the flexibility of the customization of most sender algorithms, they provide an opportunity for fusing multiple algorithms in a sender chain together, into a single function that can later be submitted for execution by an execution resource. There are two ways this can happen.
The first (and most common) way for such optimizations to happen is thanks to the structure of the implementation: because all the work is done within callbacks invoked on the completion of an earlier sender, recursively up to the original source of computation, the compiler is able to see a chain of work described using senders as a tree of tail calls, allowing for inlining and removal of most of the sender machinery. In fact, when work is not submitted to execution resources outside of the current thread of execution, compilers are capable of removing the senders abstraction entirely, while still allowing for composition of functions across different parts of a program.
The second way for this to occur is when a sender algorithm is specialized for a specific set of arguments. For instance, an implementation could recognize two subsequent § 4.20.9 execution::bulks of compatible shapes, and merge them together into a single submission of a GPU kernel.
5.7. Execution resource transitions are two-step
Because continues_on takes a sender as its first argument, it is not actually directly customizable by the target scheduler. This is by design: the target scheduler may not know how to transition from a scheduler such as a CUDA scheduler; transitioning away from a GPU in an efficient manner requires making runtime calls that are specific to the GPU in question, and the same is usually true for other kinds of accelerators too (or for schedulers running on remote systems). To avoid this problem, specialized schedulers like the ones mentioned here can still hook into the transition mechanism, and inject a sender which will perform a transition to the regular CPU execution resource, so that any sender can be attached to it.
This, however, is a problem: because customization of sender algorithms must be controlled by the scheduler they will run on (see § 5.4 Sender algorithms are customizable), the type of the sender returned from continues_on must be controllable by the target scheduler. Besides, the target scheduler may itself represent a specialized execution resource, which requires additional work to be performed to transition to it. GPUs and remote node schedulers are once again good examples of such schedulers: executing code on their execution resources requires making runtime API calls for work submission, and quite possibly for the data movement of the values being sent by the input sender passed into continues_on.
To allow for such customization from both ends, we propose the inclusion of a secondary transitioning sender adaptor, called schedule_from. This adaptor is a form of schedule, but takes an additional, second argument: the input sender. This adaptor is not meant to be invoked manually by the end users; they are always supposed to invoke continues_on, to ensure that both schedulers have a say in how the transitions are made. Any scheduler that specializes continues_on(snd, sch) shall ensure that the return value of their customization is equivalent to schedule_from(sch, snd2), where snd2 is a successor of snd that sends values equivalent to those sent by snd.
The default implementation of continues_on(snd, sched) is schedule_from(sched, snd).
5.8. All senders are typed
All senders must advertise the types they will send when they complete. There are many sender adaptors that need this information. Even just transitioning from one execution context to another requires temporarily storing the async result data so it can be propagated in the new execution context. Doing that efficiently requires knowing the type of the data.
The mechanism a sender uses to advertise its completions is the get_completion_signatures customization point, which takes an environment and must return a specialization of the completion_signatures class template. The template parameters of completion_signatures are a list of function types that represent the completion operations of the sender. For example, the type completion_signatures<set_value_t(int, float)> indicates that the sender can complete successfully by passing an int and a float to the receiver’s set_value function.
This proposal includes utilities for parsing and manipulating the list of a sender’s completion signatures. For instance, value_types_of_t is a template alias for accessing a sender’s value completions. It takes a sender, an environment, and two variadic template template parameters: a tuple-like template and a variant-like template. You can get the value completions of a sender S and an environment E with value_types_of_t<S, E, Tuple, Variant>. For example, for a sender that can complete successfully with either an int and a float or with a double, value_types_of_t<S, E, std::tuple, std::variant> would name the type std::variant<std::tuple<int, float>, std::tuple<double>>.
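As an illustration, the following sketch (not proposed wording) shows how these utilities line up for a concrete sender:
// A sender that completes with set_value(int, float).
using S = decltype(execution::just(1, 2.0f));
using E = execution::empty_env;

// Its value completions, rendered as a variant of tuples:
using V = execution::value_types_of_t<S, E, std::tuple, std::variant>;
static_assert(std::is_same_v<V, std::variant<std::tuple<int, float>>>);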
5.9. Customization points
Earlier versions of this paper used a dispatching technique known as tag_invoke (see tag_invoke: A general pattern for supporting customisable functions) to allow for customization of basis operations and sender algorithms. This technique used private friend functions named "tag_invoke" that are found by argument-dependent look-up. The tag_invoke overloads are distinguished from each other by their first argument, which is the type of the customization point object being customized. For instance, to customize the set_value operation, a receiver type might do the following:
struct my_receiver {
  friend void tag_invoke(execution::set_value_t, my_receiver&& self, int value) noexcept {
    std::cout << "received value: " << value;
  }
  //...
};
The tag_invoke technique, although it had its strengths, has been replaced with a new (or rather, a very old) technique that uses explicit concept opt-ins and named member functions. For instance, the set_value operation is now customized by defining a member function named set_value in the receiver type. This technique is more explicit and easier to understand than tag_invoke. This is what a receiver author would do to customize set_value now:
struct my_receiver {
  using receiver_concept = execution::receiver_t;

  void set_value(int value) && noexcept {
    std::cout << "received value: " << value;
  }
  //...
};
The only exception to this is the customization of queries. There is a need to build queryable adaptors that can forward an open and unknowable set of queries to some wrapped object. This is done by defining a member function named query in the adaptor type that takes the query CPO object as its first (and usually only) argument. A queryable adaptor might look like this:
template <class Query, class Queryable, class... Args>
concept query_for =
  requires (const Queryable& o, Args&&... args) {
    o.query(Query(), (Args&&) args...);
  };

// Default allocator type chosen here for exposition.
template <class Allocator = std::allocator<std::byte>, class Base = execution::empty_env>
struct with_allocator {
  Allocator alloc{};
  Base base{};

  // Forward unknown queries to the wrapped object:
  template <query_for<Base> Query>
  decltype(auto) query(Query q) const {
    return base.query(q);
  }

  // Specialize the query for the allocator:
  Allocator query(execution::get_allocator_t) const {
    return alloc;
  }
};
Customization of sender algorithms such as execution::then and execution::bulk is handled differently because they must dispatch based on where the sender is executing. See the section § 5.4 Sender algorithms are customizable for more information.
6. Specification
Much of this wording follows the wording of A Unified Executors Proposal for C++.
§ 22 General utilities library [utilities] is meant to be a diff relative to the wording of the [utilities] clause of Working Draft, Standard for Programming Language C++.
§ 33 Concurrency support library [thread] is meant to be a diff relative to the wording of the [thread] clause of Working Draft, Standard for Programming Language C++. This diff applies changes from Composable cancellation for sender-based async operations.
§ 34 Execution control library [exec] is meant to be added as a new library clause to the working draft of C++.
14. Exception handling [except]
14.6. Special functions [except.special]
14.6.2. The std::terminate function [except.terminate]
At the end of the bulleted list in the Note in paragraph 1, add a new bullet as follows:
-
when a call to a wait(), wait_until(), or wait_for() function on a condition variable (33.7.4, 33.7.5) fails to meet a postcondition.
-
when a callback invocation exits via an exception when requesting stop on a std::stop_source or a std::inplace_stop_source ([stopsource.mem], [stopsource.inplace.mem]), or in the constructor of std::stop_callback or std::inplace_stop_callback ([stopcallback.cons], [stopcallback.inplace.cons]) when a callback invocation exits via an exception.
-
when a run_loop object is destroyed that is still in the running state ([exec.run.loop]).
-
when unhandled_stopped() is called on a with_awaitable_senders<T> object ([exec.with.awaitable.senders]) whose continuation is not a handle to a coroutine whose promise type has an unhandled_stopped() member function.
16. Library introduction [library]
At the end of [expos.only.entity], add the following:
-
The following are defined for exposition only to aid in the specification of the library:
namespace std { // ...as before... }
-
An object dst is said to be decay-copied from a subexpression src if the type of dst is decay_t<decltype((src))>, and dst is copy-initialized from src.
16.4.4.6.1. General [allocator.requirements.general]
At the end of [allocator.requirements.general], add the following new paragraph:
-
[Example 2: The following is an allocator class template supporting the minimal interface that meets the requirements of [allocator.requirements.general]:
template <class T>
struct SimpleAllocator {
  using value_type = T;

  SimpleAllocator(ctor args);

  template <class U>
  SimpleAllocator(const SimpleAllocator<U>& other);

  T* allocate(std::size_t n);
  void deallocate(T* p, std::size_t n);

  template <class U>
  bool operator==(const SimpleAllocator<U>& rhs) const;
};
-- end example]
-
The following exposition-only concept defines the minimal requirements on an Allocator type.
template <class Alloc>
  concept simple-allocator =
    requires (Alloc alloc, size_t n) {
      { *alloc.allocate(n) } -> same_as<typename Alloc::value_type&>;
      { alloc.deallocate(alloc.allocate(n), n) };
    } &&
    copy_constructible<Alloc> &&
    equality_comparable<Alloc>;
-
A type Alloc models simple-allocator if it meets the requirements of [allocator.requirements.general].
17. Language support library [cpp]
17.3. Implementation properties [support.limits]
17.3.2. Header <version> synopsis [version.syn]
To the <version> synopsis, add the following:
#define __cpp_lib_semaphore     201907L // also in <semaphore>
#define __cpp_lib_senders       2024XXL // also in <execution>
#define __cpp_lib_shared_mutex  201505L // also in <shared_mutex>
22. General utilities library [utilities]
22.10. Function objects [function.objects]
22.10.2. Header <functional> synopsis [functional.syn]
At the end of this subclause, insert the following
declarations into the synopsis within namespace std:
namespace std { // ...as before... namespace ranges { // 22.10.9, concept-constrained comparisons struct equal_to ; // freestanding struct not_equal_to ; // freestanding struct greater ; // freestanding struct less ; // freestanding struct greater_equal ; // freestanding struct less_equal ; // freestanding } template < class Fn , class ... Args > concept callable = // exposition only requires ( Fn && fn , Args && ... args ) { std :: forward < Fn > ( fn )( std :: forward < Args > ( args )...); }; template < class Fn , class ... Args > concept nothrow - callable = // exposition only callable < Fn , Args ... > && requires ( Fn && fn , Args && ... args ) { { std :: forward < Fn > ( fn )( std :: forward < Args > ( args )...) } noexcept ; }; // exposition only: template < class Fn , class ... Args > using call - result - t = decltype ( declval < Fn > ()( declval < Args > ()...)); template < const auto & Tag > using decayed - typeof = decltype ( auto ( Tag )); // exposition only }
33. Concurrency support library [thread]
33.3. Stop tokens [thread.stoptoken]
33.3.1. Introduction [thread.stoptoken.intro]
-
Subclause [thread.stoptoken] describes components that can be used to asynchronously request that an operation stops execution in a timely manner, typically because the result is no longer required. Such a request is called a stop request.
-
stoppable-source, stoppable_token, and stoppable-callback-for are concepts that specify the required syntax and semantics of shared access to a stop state. Any object modeling stoppable-source, stoppable_token, or stoppable-callback-for that refers to the same stop state is an associated stoppable-source, stoppable_token, or stoppable-callback-for, respectively.
-
An object of a type that models stoppable_token can be passed to an operation that can either
-
actively poll the token to check if there has been a stop request, or
-
register a callback that will be called in the event that a stop request is made.
A stop request made via an object whose type models stoppable-source will be visible to all associated stoppable_token and stoppable-source objects. Once a stop request has been made it cannot be withdrawn (a subsequent stop request has no effect).
-
Callbacks registered via an object whose type models stoppable-callback-for are called when a stop request is first made by any associated stoppable-source object.
The following paragraph is moved to the specification of the new stoppable-source concept.
-
Calls to the functions request_stop, stop_requested, and stop_possible do not introduce data races. A call to request_stop that returns true synchronizes with a call to stop_requested on an associated stop_token or stop_source object that returns true. Registration of a callback synchronizes with the invocation of that callback.
-
The types stop_source and stop_token and the class template stop_callback implement the semantics of shared ownership of a stop state. The last remaining owner of the stop state automatically releases the resources associated with the stop state.
An object of type
is the sole owner of its stop state. An object of typeinplace_stop_source
or of a specialization of the class templateinplace_stop_token
does not participate in ownership of its associated stop state. They are for use when all uses of the associated token and callback objects are known to nest within the lifetime of theinplace_stop_callback
object.inplace_stop_source
33.3.2. Header < stop_token >
synopsis [thread.stoptoken.syn]
In this subclause, insert the following
declarations into the
synopsis:
namespace std { // [stoptoken.concepts], stop token concepts template < class CallbackFn , class Token , class Initializer = CallbackFn > concept stoppable - callback - for = see below ; // exposition only template < class Token > concept stoppable_token = see below ; template < class Token > concept unstoppable_token = see below ; template < class Source > concept stoppable - source = see below ; // exposition only // 33.3.3, class stop_token class stop_token ; // 33.3.4, class stop_source class stop_source ; // no-shared-stop-state indicator struct nostopstate_t { explicit nostopstate_t () = default ; }; inline constexpr nostopstate_t nostopstate {}; // 33.3.5, class template stop_callback template < class Callback Fn > class stop_callback ; // [stoptoken.never], class never_stop_token class never_stop_token ; // [stoptoken.inplace], class inplace_stop_token class inplace_stop_token ; // [stopsource.inplace], class inplace_stop_source class inplace_stop_source ; // [stopcallback.inplace], class template inplace_stop_callback template < class CallbackFn > class inplace_stop_callback ; template < class T , class CallbackFn > using stop_callback_for_t = T :: template callback_type < CallbackFn > ; }
Insert the following subclause as a new subclause between Header <stop_token> synopsis [thread.stoptoken.syn] and Class stop_token [stoptoken].
33.3.3. Stop token concepts [stoptoken.concepts]
-
The exposition-only
concept checks for a callback compatible with a givenstoppable - callback - for
type.Token template < class CallbackFn , class Token , class Initializer = CallbackFn > concept stoppable - callback - for = // exposition only invocable < CallbackFn > && constructible_from < CallbackFn , Initializer > && requires { typename stop_callback_for_t < Token , CallbackFn > ; } && constructible_from < stop_callback_for_t < Token , CallbackFn > , const Token & , Initializer > ; -
Let
andt
be distinct, valid objects of typeu
that reference the same logical stop state; letToken
be an expression such thatinit
issame_as < decltype ( init ), Initializer > true
; and let
denote the typeSCB
.stop_callback_for_t < Token , CallbackFn > -
The concept
is modeled only if:stoppable - callback - for < CallbackFn , Token , Initializer > -
The following concepts are modeled:
-
constructible_from < SCB , Token , Initializer > -
constructible_from < SCB , Token & , Initializer > -
constructible_from < SCB , const Token , Initializer >
-
-
An object of type
has an associated callback function of typeSCB
. LetCallbackFn
be an object of typescb
and letSCB
denotecallback_fn
’s associated callback function. Direct-non-list-initializingscb
from argumentsscb
andt
shall execute a stoppable callback registration as follows:init -
If
ist . stop_possible () true
:-
shall be direct-initialized withcallback_fn
.init -
Construction of
shall only throw exceptions thrown by the initialization ofscb
fromcallback_fn
.init -
The callback invocation
shall be registered withstd :: forward < CallbackFn > ( callback_fn )()
’s associated stop state as follows:t -
If
evaluates tot . stop_requested () false
at the time of registration, the callback invocation is added to the stop state’s list of callbacks such that
is evaluated if a stop request is made on the stop state.std :: forward < CallbackFn > ( callback_fn )() -
Otherwise,
shall be immediately evaluated on the thread executingstd :: forward < CallbackFn > ( callback_fn )()
’s constructor, and the callback invocation shall not be added to the list of callback invocations.scb
-
-
If the callback invocation was added to stop state’s list of callbacks,
shall be associated with the stop state.scb
-
-
If
ist . stop_possible () false
, there is no requirement that the initialization of
causes the initialization ofscb
.callback_fn
-
-
Destruction of
shall execute a stoppable callback deregistration as follows (in order):scb -
If the constructor of
did not register a callback invocation withscb
’s stop state, then the stoppable callback deregistration shall have no effect other than destroyingt
if it was constructed.callback_fn -
Otherwise, the invocation of
shall be removed from the associated stop state.callback_fn -
If
is concurrently executing on another thread then the stoppable callback deregistration shall block ([defns.block]) until the invocation ofcallback_fn
returns such that the return from the invocation ofcallback_fn
strongly happens before ([intro.races]) the destruction ofcallback_fn
.callback_fn -
If
is executing on the current thread, then the destructor shall not block waiting for the return from the invocation ofcallback_fn
.callback_fn -
A stoppable callback deregistration shall not block on the completion of the invocation of some other callback registered with the same logical stop state.
-
The stoppable callback deregistration shall destroy
.callback_fn
-
-
-
The
concept checks for the basic interface of a stop token that is copyable and allows polling to see if stop has been requested and also whether a stop request is possible. Thestoppable_token
concept checks for aunstoppable_token
type that does not allow stopping.stoppable_token template < template < class > class > struct check - type - alias - exists ; // exposition-only template < class Token > concept stoppable_token = requires ( const Token tok ) { typename check - type - alias - exists < Token :: template callback_type > ; { tok . stop_requested () } noexcept -> same_as < bool > ; { tok . stop_possible () } noexcept -> same_as < bool > ; { Token ( tok ) } noexcept ; // see implicit expression variations // ([concepts.equality]) } && copyable < Token > && equality_comparable < Token > && swappable < Token > ; template < class Token > concept unstoppable_token = stoppable_token < Token > && requires ( const Token tok ) { requires bool_constant < ( ! tok . stop_possible ()) >:: value ; }; -
An object whose type models
has at most one associated logical stop state. Astoppable_token
object with no associated stop state is said to be disengaged.stoppable_token -
Let
be an evaluation ofSP
that ist . stop_possible () false
, and let
be an evaluation ofSR
that ist . stop_requested () true
. -
The type
modelsToken
only if:stoppable_token -
Any evaluation of
oru . stop_possible ()
that happens after ([intro.races])u . stop_requested ()
isSP false
. -
Any evaluation of
oru . stop_possible ()
that happens afteru . stop_requested ()
isSR true
. -
For any types
andCallbackFn
such thatInitializer
is satisfied,stoppable - callback - for < CallbackFn , Token , Initializer >
is modeled.stoppable - callback - for < CallbackFn , Token , Initializer > -
If
is disengaged, evaluations oft
andt . stop_possible ()
aret . stop_requested () false
. -
If
andt
reference the same stop state, or if bothu
andt
are disengaged,u
ist == u true
; otherwise, it isfalse
.
-
-
An object whose type models the exposition-only
concept can be queried whether stop has been requested (stoppable - source
) and whether stop is possible (stop_requested
). It is a factory for associated stop tokens (stop_possible
), and a stop request can be made on it (get_token
). It maintains a list of registered stop callback invocations that it executes when a stop request is first made.request_stop template < class Source > concept stoppable - source = // exposition only requires ( Source & src , const Source csrc ) { // see implicit expression variations // ([concepts.equality]) { csrc . get_token () } -> stoppable_token ; { csrc . stop_possible () } noexcept -> same_as < bool > ; { csrc . stop_requested () } noexcept -> same_as < bool > ; { src . request_stop () } -> same_as < bool > ; }; -
An object whose type models
has at most one associated logical stop state. If it has no associated stop state, it is said to be disengaged. Letstoppable - source
be an object whose type modelss
and that is disengaged.stoppable - source
ands . stop_possible ()
shall returns . stop_requested () false
. -
Let
be an object whose type modelst
. Ifstoppable - source
is disengaged,t
shall return a disengaged stop token; otherwise, it shall return a stop token that is associated with the stop state oft . get_token ()
.t
The following paragraph is moved from the introduction, with minor modifications (underlined in green).
-
Calls to the member functions request_stop, stop_requested, and stop_possible and similarly named member functions on associated stoppable_token objects do not introduce data races. A call to request_stop that returns true synchronizes with a call to stop_requested on an associated stoppable_token or stoppable-source object that returns true. Registration of a callback synchronizes with the invocation of that callback.

The following paragraph is taken from § 33.3.5.3 Member functions [stopsource.mem] and modified.

- If the stoppable-source is disengaged, request_stop shall have no effect and return false. Otherwise, it shall execute a stop request operation on the associated stop state. A stop request operation determines whether the stop state has received a stop request, and if not, makes a stop request. The determination and making of the stop request shall happen atomically, as-if by a read-modify-write operation ([intro.races]). If the request was made, the stop state’s registered callback invocations shall be synchronously executed. If an invocation of a callback exits via an exception then terminate shall be invoked ([except.terminate]). No constraint is placed on the order in which the callback invocations are executed. request_stop shall return true if a stop request was made, and false otherwise. After a call to request_stop either a call to stop_possible shall return false or a call to stop_requested shall return true. A stop request includes notifying all condition variables of type condition_variable_any temporarily registered during an interruptible wait ([thread.condvarany.intwait]).
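The synchronous execution of registered callbacks during a stop request, and the immediate invocation of a callback registered after the request, can be observed with C++20 std::stop_callback; a minimal sketch:

  #include <cassert>
  #include <stop_token>

  int main() {
    std::stop_source src;
    bool before = false, after = false;

    std::stop_callback cb1{src.get_token(), [&] { before = true; }};
    assert(src.request_stop());   // runs cb1 synchronously on this thread
    assert(before);

    std::stop_callback cb2{src.get_token(), [&] { after = true; }};
    assert(after);                // stop already requested: cb2 runs during registration
  }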
-
Modify subclause [stoptoken] as follows:
33.3.4. Class stop_token
[stoptoken]
33.3.4.1. General [stoptoken.general]
-
The class stop_token provides an interface for querying whether a stop request has been made (stop_requested) or can ever be made (stop_possible) using an associated stop_source object ([stopsource]). A stop_token can also be passed to a stop_callback ([stopcallback]) constructor to register a callback to be called when a stop request has been made from an associated stop_source. stop_token models the concept stoppable_token. It shares ownership of its stop state, if any, with its associated stop_source object ([stopsource]) and any stop_token objects to which it compares equal.
namespace std { class stop_token { public : template < class CallbackFn > using callback_type = stop_callback < CallbackFn > ; // [stoptoken.cons], constructors, copy, and assignment stop_token () noexcept = default ; stop_token ( const stop_token & ) noexcept ; stop_token ( stop_token && ) noexcept ; stop_token & operator = ( const stop_token & ) noexcept ; stop_token & operator = ( stop_token && ) noexcept ; ~ stop_token (); // [stoptoken.mem], Member functions void swap ( stop_token & ) noexcept ; // [stoptoken.mem], stop handling [[ nodiscard ]] bool stop_requested () const noexcept ; [[ nodiscard ]] bool stop_possible () const noexcept ; bool operator == ( const stop_token & rhs ) const noexcept = default ; [[ nodiscard ]] friend bool operator == ( const stop_token & lhs , const stop_token & rhs ) noexcept ; friend void swap ( stop_token & lhs , stop_token & rhs ) noexcept ; private : shared_ptr < unspecified > stop - state {}; // exposition only }; }
-
stop-state refers to the stop_token’s associated stop state. A stop_token object is disengaged when stop-state is empty.

33.3.4.2. Constructors, copy, and assignment [stoptoken.cons]

stop_token() noexcept;

- Postconditions: stop_possible() is false and stop_requested() is false. Because the created stop_token object can never receive a stop request, no resources are allocated for a stop state.
stop_token(const stop_token& rhs) noexcept;

- Postconditions: *this == rhs is true. *this and rhs share the ownership of the same stop state, if any.

stop_token(stop_token&& rhs) noexcept;

- Postconditions: *this contains the value of rhs prior to the start of construction and rhs.stop_possible() is false.

~stop_token();

- Effects: Releases ownership of the stop state, if any.

stop_token& operator=(const stop_token& rhs) noexcept;

- Effects: Equivalent to: stop_token(rhs).swap(*this).
- Returns: *this.

stop_token& operator=(stop_token&& rhs) noexcept;

- Effects: Equivalent to: stop_token(std::move(rhs)).swap(*this).
- Returns: *this.
Move swap into [stoptoken.mem]:
33.3.4.3. Member functions [stoptoken.mem]
void swap(stop_token& rhs) noexcept;

- Effects: Equivalent to: stop-state.swap(rhs.stop-state).

[[nodiscard]] bool stop_requested() const noexcept;

- Returns: true if stop-state refers to a stop state that has received a stop request; otherwise, false.

[[nodiscard]] bool stop_possible() const noexcept;

- Returns: false if:
  - *this is disengaged, or
  - a stop request was not made and there are no associated stop_source objects;
  otherwise, true.
-
The following are covered by the
and
concepts.
33.3.4.4. Non-member functions [stoptoken.nonmembers]
[[ nodiscard ]] bool operator == ( const stop_token & lhs , const stop_token & rhs ) noexcept ;
-
Returns:
true
if
andlhs
have ownership of the same stop state or if bothrhs
andlhs
do not have ownership of a stop state; otherwiserhs false
.
friend void swap ( stop_token & x , stop_token & y ) noexcept ;
-
Effects: Equivalent to:
.x . swap ( y )
33.3.5. Class stop_source
[stopsource]
33.3.5.1. General [stopsource.general]
-
The class stop_source implements the semantics of making a stop request. A stop request made on a stop_source object is visible to all associated stop_source and stop_token ([thread.stoptoken]) objects. Once a stop request has been made it cannot be withdrawn (a subsequent stop request has no effect).
namespace std { The following definitions are already specified in the
< stop_token > synopsis : // no-shared-stop-state indicator struct nostopstate_t { explicit nostopstate_t () = default ; }; inline constexpr nostopstate_t nostopstate {}; class stop_source { public : // 33.3.4.2, constructors, copy, and assignment stop_source (); explicit stop_source ( nostopstate_t ) noexcept ; {} stop_source ( const stop_source & ) noexcept ; stop_source ( stop_source && ) noexcept ; stop_source & operator = ( const stop_source & ) noexcept ; stop_source & operator = ( stop_source && ) noexcept ; ~ stop_source (); // [stopsource.mem], Member functions void swap ( stop_source & ) noexcept ; // 33.3.4.3, stop handling [[ nodiscard ]] stop_token get_token () const noexcept ; [[ nodiscard ]] bool stop_possible () const noexcept ; [[ nodiscard ]] bool stop_requested () const noexcept ; bool request_stop () noexcept ; bool operator == ( const stop_source & rhs ) const noexcept = default ; [[ nodiscard ]] friend bool operator == ( const stop_source & lhs , const stop_source & rhs ) noexcept ; friend void swap ( stop_source & lhs , stop_source & rhs ) noexcept ; private : shared_ptr < unspecified > stop - state {}; // exposition only }; }
-
refers to thestop - state
’s associated stop state. Astop_source
object is disengaged whenstop_source
is empty.stop - state -
modelsstop_source
,stoppable - source
,copyable
, andequality_comparable
.swappable
33.3.5.2. Constructors, copy, and assignment [stopsource.cons]
stop_source();

- Effects: Initialises stop-state with a pointer to a new stop state.
- Postconditions: stop_possible() is true and stop_requested() is false.
- Throws: bad_alloc if memory cannot be allocated for the stop state.

explicit stop_source(nostopstate_t) noexcept;

- Postconditions: stop_possible() is false and stop_requested() is false. No resources are allocated for the state.

stop_source(const stop_source& rhs) noexcept;

- Postconditions: *this == rhs is true. *this and rhs share the ownership of the same stop state, if any.

stop_source(stop_source&& rhs) noexcept;

- Postconditions: *this contains the value of rhs prior to the start of construction and rhs.stop_possible() is false.

~stop_source();

- Effects: Releases ownership of the stop state, if any.

stop_source& operator=(const stop_source& rhs) noexcept;

- Effects: Equivalent to: stop_source(rhs).swap(*this).
- Returns: *this.

stop_source& operator=(stop_source&& rhs) noexcept;

- Effects: Equivalent to: stop_source(std::move(rhs)).swap(*this).
- Returns: *this.
Move swap into [stopsource.mem]:
33.3.5.3. Member functions [stopsource.mem]
void swap(stop_source& rhs) noexcept;

- Effects: Equivalent to: stop-state.swap(rhs.stop-state).

[[nodiscard]] stop_token get_token() const noexcept;

- Returns: stop_token() if stop_possible() is false; otherwise a new associated stop_token object; i.e., its stop-state member is equal to the stop-state member of *this.

[[nodiscard]] bool stop_possible() const noexcept;

- Returns: stop-state != nullptr.

[[nodiscard]] bool stop_requested() const noexcept;

- Returns: true if stop-state refers to a stop state that has received a stop request; otherwise, false.

bool request_stop() noexcept;

- Effects: Executes a stop request operation ([stoptoken.concepts]) on the associated stop state, if any.
- Postconditions: stop_possible() is false or stop_requested() is true.
- Returns: true if this call made a stop request; otherwise false.
33.3.5.4. Non-member functions [stopsource.nonmembers]
[[ nodiscard ]] friend bool operator == ( const stop_source & lhs , const stop_source & rhs ) noexcept ;
-
Returns:
true
if
andlhs
have ownership of the same stop state or if bothrhs
andlhs
do not have ownership of a stop state; otherwiserhs false
.
friend void swap ( stop_source & x , stop_source & y ) noexcept ;
-
Effects: Equivalent to:
.x . swap ( y )
33.3.6. Class template stop_callback
[stopcallback]
33.3.6.1. General [stopcallback.general]
namespace std { template < class Callback Fn > class stop_callback { public : using callback_type = Callback Fn ; // 33.3.5.2, constructors and destructor template < class C Initializer > explicit stop_callback ( const stop_token & st , C Initializer && cb init ) noexcept ( is_nothrow_constructible_v < Callback Fn , C Initializer > ); template < class C Initializer > explicit stop_callback ( stop_token && st , C Initializer && cb init ) noexcept ( is_nothrow_constructible_v < Callback Fn , C Initializer > ); ~ stop_callback (); stop_callback ( const stop_callback & ) = delete ; stop_callback ( stop_callback && ) = delete ; stop_callback & operator = ( const stop_callback & ) = delete ; stop_callback & operator = ( stop_callback && ) = delete ; private : Callback Fn callback callback - fn ; // exposition only }; template < class Callback Fn > stop_callback ( stop_token , Callback Fn ) -> stop_callback < Callback Fn > ; }
-
Mandates: stop_callback is instantiated with an argument for the template parameter CallbackFn that satisfies both invocable and destructible.

- Remarks: For a type Initializer, if stoppable-callback-for<CallbackFn, stop_token, Initializer> is satisfied, then stoppable-callback-for<CallbackFn, stop_token, Initializer> is modeled. The exposition-only callback-fn member is the associated callback function ([stoptoken.concepts]) of stop_callback<CallbackFn> objects.
33.3.6.2. Constructors and destructor [stopcallback.cons]
template<class Initializer>
  explicit stop_callback(const stop_token& st, Initializer&& init)
    noexcept(is_nothrow_constructible_v<CallbackFn, Initializer>);
template<class Initializer>
  explicit stop_callback(stop_token&& st, Initializer&& init)
    noexcept(is_nothrow_constructible_v<CallbackFn, Initializer>);

- Constraints: CallbackFn and Initializer satisfy constructible_from<CallbackFn, Initializer>.
- Effects: Initializes callback-fn with std::forward<Initializer>(init) and executes a stoppable callback registration ([stoptoken.concepts]). If a callback is registered with st’s shared stop state, then *this acquires shared ownership of that stop state.
- Throws: Any exception thrown by the initialization of callback-fn.

~stop_callback();

- Effects: Executes a stoppable callback deregistration ([stoptoken.concepts]) and releases ownership of the stop state, if any.
Insert a new subclause, Class never_stop_token [stoptoken.never], after subclause Class template stop_callback [stopcallback], as a new subclause of Stop tokens [thread.stoptoken].
33.3.7. Class never_stop_token
[stoptoken.never]
33.3.7.1. General [stoptoken.never.general]
-
The class never_stop_token models the unstoppable_token concept. It provides a stop token interface, but also provides static information that a stop is never possible nor requested.

  namespace std {
    class never_stop_token {
      struct callback-type { // exposition only
        explicit callback-type(never_stop_token, auto&&) noexcept {}
      };
    public:
      template<class>
        using callback_type = callback-type;

      static constexpr bool stop_requested() noexcept { return false; }
      static constexpr bool stop_possible() noexcept { return false; }

      bool operator==(const never_stop_token&) const = default;
    };
  }
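One intended use of an unstoppable token is to let generic code skip stop handling entirely when a stop can never be requested. The following minimal sketch assumes a standard library that already ships the proposed never_stop_token, stoppable_token, and unstoppable_token in <stop_token>; count_until_stop is a hypothetical worker used only for illustration.

  #include <stop_token>   // assumes the proposed facilities are available

  // Hypothetical worker: when the token can never be stopped, the
  // cancellation check compiles away entirely.
  template <std::stoppable_token Token>
  int count_until_stop(Token tok, int limit) {
    int n = 0;
    if constexpr (std::unstoppable_token<Token>) {
      n = limit;                                   // no stop is ever possible
    } else {
      while (n < limit && !tok.stop_requested())   // poll the stop state
        ++n;
    }
    return n;
  }

  int main() {
    std::stop_source src;
    count_until_stop(src.get_token(), 1000);           // stoppable path
    count_until_stop(std::never_stop_token{}, 1000);   // unstoppable path
  }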
Insert a new subclause, Class inplace_stop_token [stoptoken.inplace], after the subclause added above, as a new subclause of Stop tokens [thread.stoptoken].
33.3.8. Class inplace_stop_token
[stoptoken.inplace]
33.3.8.1. General [stoptoken.inplace.general]
-
The class inplace_stop_token models the concept stoppable_token. It references the stop state of its associated inplace_stop_source object ([stopsource.inplace]), if any.

  namespace std {
    class inplace_stop_token {
    public:
      template<class CallbackFn>
        using callback_type = inplace_stop_callback<CallbackFn>;

      inplace_stop_token() = default;
      bool operator==(const inplace_stop_token&) const = default;

      // [stoptoken.inplace.mem], member functions
      bool stop_requested() const noexcept;
      bool stop_possible() const noexcept;
      void swap(inplace_stop_token&) noexcept;

    private:
      const inplace_stop_source* stop-source = nullptr; // exposition only
    };
  }

33.3.8.2. Member functions [stoptoken.inplace.members]

void swap(inplace_stop_token& rhs) noexcept;

- Effects: Exchanges the values of stop-source and rhs.stop-source.

bool stop_requested() const noexcept;

- Effects: Equivalent to: return stop-source != nullptr && stop-source->stop_requested();
- As specified in [basic.life], the behavior of stop_requested() is undefined unless the call strongly happens before the start of the destructor of the associated inplace_stop_source, if any.

bool stop_possible() const noexcept;

- Returns: stop-source != nullptr.
- As specified in [basic.stc.general], the behavior of stop_possible() is implementation-defined unless the call strongly happens before the end of the storage duration of the associated inplace_stop_source object, if any.
Insert a new subclause, Class inplace_stop_source [stopsource.inplace], after the subclause added above, as a new subclause of Stop tokens [thread.stoptoken].
33.3.9. Class inplace_stop_source
[stopsource.inplace]
33.3.9.1. General [stopsource.inplace.general]
-
The class inplace_stop_source models stoppable-source.

  namespace std {
    class inplace_stop_source {
    public:
      // [stopsource.inplace.cons], constructors, copy, and assignment
      constexpr inplace_stop_source() noexcept;
      inplace_stop_source(inplace_stop_source&&) = delete;
      inplace_stop_source(const inplace_stop_source&) = delete;
      inplace_stop_source& operator=(inplace_stop_source&&) = delete;
      inplace_stop_source& operator=(const inplace_stop_source&) = delete;
      ~inplace_stop_source();

      // [stopsource.inplace.mem], stop handling
      constexpr inplace_stop_token get_token() const noexcept;
      static constexpr bool stop_possible() noexcept { return true; }
      bool stop_requested() const noexcept;
      bool request_stop() noexcept;
    };
  }

33.3.9.2. Constructors, copy, and assignment [stopsource.inplace.cons]

constexpr inplace_stop_source() noexcept;

- Effects: Initializes a new stop state inside *this.
- Postconditions: stop_requested() is false.

33.3.9.3. Members [stopsource.inplace.mem]

constexpr inplace_stop_token get_token() const noexcept;

- Returns: A new associated inplace_stop_token object. The inplace_stop_token object’s stop-source member is equal to this.

bool stop_requested() const noexcept;

- Returns: true if the stop state inside *this has received a stop request; otherwise, false.

bool request_stop() noexcept;

- Effects: Executes a stop request operation ([stoptoken.concepts]).
- Postconditions: stop_requested() is true.
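Unlike stop_source, inplace_stop_source stores its stop state inline and is neither copyable nor movable, so no allocation or reference counting is involved; tokens and callbacks merely reference the source. A minimal usage sketch, assuming a standard library that ships the proposed inplace_* types in <stop_token>:

  #include <cassert>
  #include <stop_token>   // assumes the proposed inplace_* types are available

  int main() {
    std::inplace_stop_source src;                   // stop state lives inside src
    std::inplace_stop_token tok = src.get_token();  // non-owning reference to src

    bool fired = false;
    std::inplace_stop_callback cb{tok, [&] { fired = true; }};

    src.request_stop();                             // runs cb synchronously
    assert(fired && tok.stop_requested());
    // src must outlive tok and cb: both reference the state stored inside src.
  }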
Insert a new subclause, Class template inplace_stop_callback [stopcallback.inplace], after the subclause added above, as a new subclause of Stop tokens [thread.stoptoken].
33.3.10. Class template inplace_stop_callback
[stopcallback.inplace]
33.3.10.1. General [stopcallback.inplace.general]
namespace std { template < class CallbackFn > class inplace_stop_callback { public : using callback_type = CallbackFn ; // [stopcallback.inplace.cons], constructors and destructor template < class Initializer > explicit inplace_stop_callback ( inplace_stop_token st , Initializer && init ) noexcept ( is_nothrow_constructible_v < CallbackFn , Initializer > ); ~ inplace_stop_callback (); inplace_stop_callback ( inplace_stop_callback && ) = delete ; inplace_stop_callback ( const inplace_stop_callback & ) = delete ; inplace_stop_callback & operator = ( inplace_stop_callback && ) = delete ; inplace_stop_callback & operator = ( const inplace_stop_callback & ) = delete ; private : CallbackFn callback - fn ; // exposition only }; template < class CallbackFn > inplace_stop_callback ( inplace_stop_token , CallbackFn ) -> inplace_stop_callback < CallbackFn > ; }
-
Mandates: CallbackFn satisfies both invocable and destructible.

- Remarks: For a type Initializer, if stoppable-callback-for<CallbackFn, inplace_stop_token, Initializer> is satisfied, then stoppable-callback-for<CallbackFn, inplace_stop_token, Initializer> is modeled. For an inplace_stop_callback<CallbackFn> object, the exposition-only callback-fn member is its associated callback function ([stoptoken.concepts]).

33.3.10.2. Constructors and destructor [stopcallback.inplace.cons]

template<class Initializer>
  explicit inplace_stop_callback(inplace_stop_token st, Initializer&& init)
    noexcept(is_nothrow_constructible_v<CallbackFn, Initializer>);

- Constraints: constructible_from<CallbackFn, Initializer> is satisfied.
- Effects: Initializes callback-fn with std::forward<Initializer>(init) and executes a stoppable callback registration ([stoptoken.concepts]).

~inplace_stop_callback();

- Effects: Executes a stoppable callback deregistration ([stoptoken.concepts]).
Insert a new top-level clause
34. Execution control library [exec]
34.1. General [exec.general]
-
This Clause describes components supporting execution of function objects [function.objects].
-
The following subclauses describe the requirements, concepts, and components for execution control primitives as summarized in Table 1.
Subclause | Header |
-
Table 2 shows the types of customization point objects [customization.point.object] used in the execution control library:
Customization point object type | Purpose | Examples
---|---|---
core | provide core execution functionality, and connection between core components | e.g., …
completion functions | called by senders to announce the completion of the work (success, error, or cancellation) | set_value, set_error, set_stopped
senders | allow the specialization of the provided sender algorithms | …
queries | allow querying different properties of objects | …
-
This clause makes use of the following exposition-only entities:
-
For a subexpression expr, let MANDATE-NOTHROW(expr) be expression-equivalent to expr.
  - Mandates: noexcept(expr) is true.

-   namespace std {
      template<class T>
        concept movable-value = // exposition only
          move_constructible<decay_t<T>> &&
          constructible_from<decay_t<T>, T> &&
          (!is_array_v<remove_reference_t<T>>);
    }

- For function types F1 and F2 denoting R1(Args1...) and R2(Args2...) respectively, MATCHING-SIG(F1, F2) is true if and only if same_as<R1(Args1&&...), R2(Args2&&...)> is true.

- For a subexpression err, let Err be decltype((err)) and let AS-EXCEPT-PTR(err) be:
  - err if decay_t<Err> denotes the type exception_ptr.
    - Mandates: err != exception_ptr() is true.
  - Otherwise, make_exception_ptr(system_error(err)) if decay_t<Err> denotes the type error_code.
  - Otherwise, make_exception_ptr(err).
-
-
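To make the last of these exposition-only helpers concrete, here is a rough, non-normative rendering of AS-EXCEPT-PTR as an ordinary function template; the name as_except_ptr is hypothetical, and the real entity exists only in the wording above.

  #include <exception>
  #include <system_error>
  #include <type_traits>
  #include <utility>

  // Non-normative sketch of AS-EXCEPT-PTR(err): normalize an error datum
  // into an exception_ptr.
  template <class Err>
  std::exception_ptr as_except_ptr(Err&& err) {
    using D = std::decay_t<Err>;
    if constexpr (std::is_same_v<D, std::exception_ptr>) {
      return std::forward<Err>(err);   // precondition: err is non-null
    } else if constexpr (std::is_same_v<D, std::error_code>) {
      return std::make_exception_ptr(std::system_error(err));
    } else {
      return std::make_exception_ptr(std::forward<Err>(err));
    }
  }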
34.2. Queries and queryables [exec.queryable]
34.2.1. General [exec.queryable.general]
-
A queryable object is a read-only collection of key/value pairs where each key is a customization point object known as a query object. A query is an invocation of a query object with a queryable object as its first argument and a (possibly empty) set of additional arguments. A query imposes syntactic and semantic requirements on its invocations.
-
Let q be a query object, let args be a (possibly empty) pack of subexpressions, let env be a subexpression that refers to a queryable object o of type O, and let cenv be a subexpression referring to o such that decltype((cenv)) is const O&. The expression q(env, args...) is equal to ([concepts.equality]) the expression q(cenv, args...).

- The type of a query expression cannot be void.
- The expression q(env, args...) is equality-preserving ([concepts.equality]) and does not modify the query object or the arguments.
- If the expression env.query(q, args...) is well-formed, then it is expression-equivalent to q(env, args...).
- Unless otherwise specified, the result of a query is valid as long as the queryable object is valid.
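For illustration, a minimal queryable environment might look like the sketch below; my_env is hypothetical, and its query members follow the env.query(q, args...) protocol described above. The example assumes a standard library that ships the proposed <execution> facilities (std::get_stop_token and std::get_allocator are declared in the header synopsis in § 34.4).

  #include <execution>   // assumes the proposed facilities are available
  #include <stop_token>
  #include <memory>

  // Hypothetical environment type: answers two standard queries by
  // returning stored or default values.
  struct my_env {
    std::stop_token token;

    std::stop_token query(std::get_stop_token_t) const noexcept { return token; }
    std::allocator<std::byte> query(std::get_allocator_t) const noexcept { return {}; }
  };

  int main() {
    std::stop_source src;
    my_env env{src.get_token()};
    auto tok   = std::get_stop_token(env);   // dispatches to env.query(get_stop_token)
    auto alloc = std::get_allocator(env);    // dispatches to env.query(get_allocator)
    (void)tok; (void)alloc;
  }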
34.2.2. queryable concept [exec.queryable.concept]
namespace std { template < class T > concept queryable = destructible < T > ; // exposition only }
-
The exposition-only queryable concept specifies the constraints on the types of queryable objects.

- Let env be an object of type Env. The type Env models queryable if for each callable object q and a pack of subexpressions args, if requires { q(env, args...) } is true then q(env, args...) meets any semantic requirements imposed by q.
34.3. Asynchronous operations [async.ops]
-
An execution resource is a program entity that manages a (possibly dynamic) set of execution agents ([thread.req.lockable.general]), which it uses to execute parallel work on behalf of callers. [Example 1: The currently active thread, a system-provided thread pool, and uses of an API associated with an external hardware accelerator are all examples of execution resources. -- end example] Execution resources execute asynchronous operations. An execution resource is either valid or invalid.
-
An asynchronous operation is a distinct unit of program execution that:
-
... is explicitly created.
-
... can be explicitly started once at most.
-
... once started, eventually completes exactly once with a (possibly empty) set of result datums and in exactly one of three dispositions: success, failure, or cancellation.
-
A successful completion, also known as a value completion, can have an arbitrary number of result datums.
-
A failure completion, also known as an error completion, has a single result datum.
-
A cancellation completion, also known as a stopped completion, has no result datum.
An asynchronous operation’s async result is its disposition and its (possibly empty) set of result datums.
-
-
... can complete on a different execution resource than the execution resource on which it started.
-
... can create and start other asynchronous operations called child operations. A child operation is an asynchronous operation that is created by the parent operation and, if started, completes before the parent operation completes. A parent operation is the asynchronous operation that created a particular child operation.
An asynchronous operation can in fact execute synchronously; that is, it can complete during the execution of its start operation on the thread of execution that started it.
-
-
An asynchronous operation has associated state known as its operation state.
-
An asynchronous operation has an associated environment. An environment is a queryable object ([exec.queryable]) representing the execution-time properties of the operation’s caller. The caller of an asynchronous operation is its parent operation or the function that created it. An asynchronous operation’s operation state owns the operation’s environment.
-
An asynchronous operation has an associated receiver. A receiver is an aggregation of three handlers for the three asynchronous completion dispositions: a value completion handler for a value completion, an error completion handler for an error completion, and a stopped completion handler for a stopped completion. A receiver has an associated environment. An asynchronous operation’s operation state owns the operation’s receiver. The environment of an asynchronous operation is equal to its receiver’s environment.
-
For each completion disposition, there is a completion function. A completion function is a customization point object ([customization.point.object]) that accepts an asynchronous operation’s receiver as the first argument and the result datums of the asynchronous operation as additional arguments. The value completion function invokes the receiver’s value completion handler with the value result datums; likewise for the error completion function and the stopped completion function. A completion function has an associated type known as its completion tag that is the unqualified type of the completion function. A valid invocation of a completion function is called a completion operation.
-
The lifetime of an asynchronous operation, also known as the operation’s async lifetime, begins when its start operation begins executing and ends when its completion operation begins executing. If the lifetime of an asynchronous operation’s associated operation state ends before the lifetime of the asynchronous operation, the behavior is undefined. After an asynchronous operation executes a completion operation, its associated operation state is invalid. Accessing any part of an invalid operation state is undefined behavior.
-
An asynchronous operation shall not execute a completion operation before its start operation has begun executing. After its start operation has begun executing, exactly one completion operation shall execute. The lifetime of an asynchronous operation’s operation state can end during the execution of the completion operation.
-
A sender is a factory for one or more asynchronous operations. Connecting a sender and a receiver creates an asynchronous operation. The asynchronous operation’s associated receiver is equal to the receiver used to create it, and its associated environment is equal to the environment associated with the receiver used to create it. The lifetime of an asynchronous operation’s associated operation state does not depend on the lifetimes of either the sender or the receiver from which it was created. A sender is started when it is connected to a receiver and the resulting asynchronous operation is started. A sender’s async result is the async result of the asynchronous operation created by connecting it to a receiver. A sender sends its results by way of the asynchronous operation(s) it produces, and a receiver receives those results. A sender is either valid or invalid; it becomes invalid when its parent sender (see below) becomes invalid.
-
A scheduler is an abstraction of an execution resource with a uniform, generic interface for scheduling work onto that resource. It is a factory for senders whose asynchronous operations execute value completion operations on an execution agent belonging to the scheduler’s associated execution resource. A schedule-expression obtains such a sender from a scheduler. A schedule sender is the result of a schedule expression. On success, an asynchronous operation produced by a schedule sender executes a value completion operation with an empty set of result datums. Multiple schedulers can refer to the same execution resource. A scheduler can be valid or invalid. A scheduler becomes invalid when the execution resource to which it refers becomes invalid, as do any schedule senders obtained from the scheduler, and any operation states obtained from those senders.
-
An asynchronous operation has one or more associated completion schedulers for each of its possible dispositions. A completion scheduler is a scheduler whose associated execution resource is used to execute a completion operation for an asynchronous operation. A value completion scheduler is a scheduler on which an asynchronous operation’s value completion operation can execute. Likewise for error completion schedulers and stopped completion schedulers.
-
A sender has an associated queryable object ([exec.queryable]) known as its attributes that describes various characteristics of the sender and of the asynchronous operation(s) it produces. For each disposition, there is a query object for reading the associated completion scheduler from a sender’s attributes; i.e., a value completion scheduler query object for reading a sender’s value completion scheduler, etc. If a completion scheduler query is well-formed, the returned completion scheduler is unique for that disposition for any asynchronous operation the sender creates. A schedule sender is required to have a value completion scheduler attribute whose value is equal to the scheduler that produced the schedule sender.
-
A completion signature is a function type that describes a completion operation. An asynchronous operation has a finite set of possible completion signatures corresponding to the completion operations that the asynchronous operation potentially evaluates ([basic.def.odr]). For a completion function set, receiver rcvr, and pack of arguments args, let c be the completion operation set(rcvr, args...), and let F be the function type decltype(auto(set))(decltype((args))...). A completion signature Sig is associated with c if and only if MATCHING-SIG(Sig, F) is true ([exec.general]). Together, a sender type and an environment type Env determine the set of completion signatures of an asynchronous operation that results from connecting the sender with a receiver that has an environment of type Env. The type of the receiver does not affect an asynchronous operation’s completion signatures, only the type of the receiver’s environment.
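As a small worked example, an asynchronous operation that can complete successfully with a single int, fail with an exception_ptr, or complete with a stopped signal would have its completion signatures described by a specialization of completion_signatures (declared in the header synopsis in § 34.4) such as the following sketch:

  #include <execution>
  #include <exception>

  // The three possible completion operations of the hypothetical operation,
  // expressed as completion signatures.
  using example_sigs = std::execution::completion_signatures<
      std::execution::set_value_t(int),                 // value completion, one int datum
      std::execution::set_error_t(std::exception_ptr),  // error completion
      std::execution::set_stopped_t()>;                 // stopped completion, no datums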
A sender algorithm is a function that takes and/or returns a sender. There are three categories of sender algorithms:
-
A sender factory is a function that takes non-senders as arguments and that returns a sender.
-
A sender adaptor is a function that constructs and returns a parent sender from a set of one or more child senders and a (possibly empty) set of additional arguments. An asynchronous operation created by a parent sender is a parent operation to the child operations created by the child senders.
-
A sender consumer is a function that takes one or more senders and a (possibly empty) set of additional arguments, and whose return type is not the type of a sender.
-
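Using algorithms declared in the header synopsis that follows, one algorithm of each category might be combined as in this minimal sketch (it assumes an implementation of the proposed std::execution facilities):

  #include <execution>
  #include <utility>

  int main() {
    using namespace std::execution;
    sender auto leaf   = just(42);                                 // sender factory
    sender auto parent = then(leaf, [](int i) { return i + 1; });  // sender adaptor (leaf is its child)
    auto [v] = std::this_thread::sync_wait(std::move(parent)).value(); // sender consumer
    return v == 43 ? 0 : 1;
  }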
34.4. Header < execution >
synopsis [exec.syn]
namespace std { // [exec.general], helper concepts template < class T > concept movable - value = see below ; // exposition only template < class From , class To > concept decays - to = same_as < decay_t < From > , To > ; // exposition only template < class T > concept class - type = decays - to < T , T > && is_class_v < T > ; // exposition only // [exec.queryable], queryable objects template < class T > concept queryable = see above ; // exposition only // [exec.queries], queries struct forwarding_query_t { see below }; struct get_allocator_t { see below }; struct get_stop_token_t { see below }; inline constexpr forwarding_query_t forwarding_query {}; inline constexpr get_allocator_t get_allocator {}; inline constexpr get_stop_token_t get_stop_token {}; template < class T > using stop_token_of_t = remove_cvref_t < decltype ( get_stop_token ( declval < T > ())) > ; template < class T > concept forwarding - query = // exposition only forwarding_query ( T {}); } namespace std :: execution { // [exec.queries], queries enum class forward_progress_guarantee { concurrent , parallel , weakly_parallel }; struct get_domain_t { see below }; struct get_scheduler_t { see below }; struct get_delegation_scheduler_t { see below }; struct get_forward_progress_guarantee_t { see below }; template < class CPO > struct get_completion_scheduler_t { see below }; inline constexpr get_domain_t get_domain {}; inline constexpr get_scheduler_t get_scheduler {}; inline constexpr get_delegation_scheduler_t get_delegation_scheduler {}; inline constexpr get_forward_progress_guarantee_t get_forward_progress_guarantee {}; template < class CPO > inline constexpr get_completion_scheduler_t < CPO > get_completion_scheduler {}; struct empty_env {}; struct get_env_t { see below }; inline constexpr get_env_t get_env {}; template < class T > using env_of_t = decltype ( get_env ( declval < T > ())); // [exec.domain.default], execution domains struct default_domain ; // [exec.sched], schedulers struct scheduler_t {}; template < class Sch > concept scheduler = see below ; // [exec.recv], receivers struct receiver_t {}; template < class Rcvr > concept receiver = see below ; template < class Rcvr , class Completions > concept receiver_of = see below ; struct set_value_t { see below }; struct set_error_t { see below }; struct set_stopped_t { see below }; inline constexpr set_value_t set_value {}; inline constexpr set_error_t set_error {}; inline constexpr set_stopped_t set_stopped {}; // [exec.opstate], operation states struct operation_state_t {}; template < class O > concept operation_state = see below ; struct start_t { see below }; inline constexpr start_t start {}; // [exec.snd], senders struct sender_t {}; template < class Sndr > concept sender = see below ; template < class Sndr , class Env = empty_env > concept sender_in = see below ; template < class Sndr , class Rcvr > concept sender_to = see below ; template < class ... Ts > struct type - list ; // exposition only // [exec.getcomplsigs], completion signatures struct get_completion_signatures_t { see below }; inline constexpr get_completion_signatures_t get_completion_signatures {}; template < class Sndr , class Env = empty_env > requires sender_in < Sndr , Env > using completion_signatures_of_t = call - result - t < get_completion_signatures_t , Sndr , Env > ; template < class ... Ts > using decayed - tuple = tuple < decay_t < Ts > ... > ; // exposition only template < class ... 
Ts > using variant - or - empty = see below ; // exposition only template < class Sndr , class Env = empty_env , template < class ... > class Tuple = decayed - tuple , template < class ... > class Variant = variant - or - empty > requires sender_in < Sndr , Env > using value_types_of_t = see below ; template < class Sndr , class Env = empty_env , template < class ... > class Variant = variant - or - empty > requires sender_in < Sndr , Env > using error_types_of_t = see below ; template < class Sndr , class Env = empty_env > requires sender_in < Sndr , Env > inline constexpr bool sends_stopped = see below ; template < class Sndr , class Env > using single - sender - value - type = see below ; // exposition only template < class Sndr , class Env > concept single - sender = see below ; // exposition only template < sender Sndr > using tag_of_t = see below ; // [exec.snd.transform], sender transformations template < class Domain , sender Sndr , queryable ... Env > requires ( sizeof ...( Env ) <= 1 ) constexpr sender decltype ( auto ) transform_sender ( Domain dom , Sndr && sndr , const Env & ... env ) noexcept ( see below ); // [exec.snd.transform.env], environment transformations template < class Domain , sender Sndr , queryable Env > constexpr queryable decltype ( auto ) transform_env ( Domain dom , Sndr && sndr , Env && env ) noexcept ; // [exec.snd.apply], sender algorithm application template < class Domain , class Tag , sender Sndr , class ... Args > constexpr decltype ( auto ) apply_sender ( Domain dom , Tag , Sndr && sndr , Args && ... args ) noexcept ( see below ); // [exec.connect], the connect sender algorithm struct connect_t { see below }; inline constexpr connect_t connect {}; template < class Sndr , class Rcvr > using connect_result_t = decltype ( connect ( declval < Sndr > (), declval < Rcvr > ())); // [exec.factories], sender factories struct just_t { see below }; struct just_error_t { see below }; struct just_stopped_t { see below }; struct schedule_t { see below }; inline constexpr just_t just {}; inline constexpr just_error_t just_error {}; inline constexpr just_stopped_t just_stopped {}; inline constexpr schedule_t schedule {}; inline constexpr unspecified read {}; template < scheduler Sndr > using schedule_result_t = decltype ( schedule ( declval < Sndr > ())); // [exec.adapt], sender adaptors template < class - type D > struct sender_adaptor_closure { }; struct starts_on_t { see below }; struct continues_on_t { see below }; struct on_t { see below }; struct schedule_from_t { see below }; struct then_t { see below }; struct upon_error_t { see below }; struct upon_stopped_t { see below }; struct let_value_t { see below }; struct let_error_t { see below }; struct let_stopped_t { see below }; struct bulk_t { see below }; struct split_t { see below }; struct when_all_t { see below }; struct when_all_with_variant_t { see below }; struct into_variant_t { see below }; struct stopped_as_optional_t { see below }; struct stopped_as_error_t { see below }; inline constexpr starts_on_t starts_on {}; inline constexpr continues_on_t continues_on {}; inline constexpr on_t on {}; inline constexpr schedule_from_t schedule_from {}; inline constexpr then_t then {}; inline constexpr upon_error_t upon_error {}; inline constexpr upon_stopped_t upon_stopped {}; inline constexpr let_value_t let_value {}; inline constexpr let_error_t let_error {}; inline constexpr let_stopped_t let_stopped {}; inline constexpr bulk_t bulk {}; inline constexpr split_t split {}; inline constexpr when_all_t when_all 
{}; inline constexpr when_all_with_variant_t when_all_with_variant {}; inline constexpr into_variant_t into_variant {}; inline constexpr stopped_as_optional_t stopped_as_optional {}; inline constexpr stopped_as_error_t stopped_as_error {}; // [exec.utils], sender and receiver utilities // [exec.utils.cmplsigs] template < class Fn > concept completion - signature = // exposition only see below ; template < completion - signature ... Fns > struct completion_signatures {}; template < class Sigs > // exposition only concept valid - completion - signatures = see below ; // [exec.utils.tfxcmplsigs] template < valid - completion - signatures InputSignatures , valid - completion - signatures AdditionalSignatures = completion_signatures <> , template < class ... > class SetValue = see below , template < class > class SetError = see below , valid - completion - signatures SetStopped = completion_signatures < set_stopped_t () >> using transform_completion_signatures = completion_signatures < see below > ; template < sender Sndr , class Env = empty_env , valid - completion - signatures AdditionalSignatures = completion_signatures <> , template < class ... > class SetValue = see below , template < class > class SetError = see below , valid - completion - signatures SetStopped = completion_signatures < set_stopped_t () >> requires sender_in < Sndr , Env > using transform_completion_signatures_of = transform_completion_signatures < completion_signatures_of_t < Sndr , Env > , AdditionalSignatures , SetValue , SetError , SetStopped > ; // [exec.ctx], execution resources // [exec.run.loop], run_loop class run_loop ; } namespace std :: this_thread { // [exec.consumers], consumers struct sync_wait_t { see below }; struct sync_wait_with_variant_t { see below }; inline constexpr sync_wait_t sync_wait {}; inline constexpr sync_wait_with_variant_t sync_wait_with_variant {}; } namespace std :: execution { // [exec.as.awaitable] struct as_awaitable_t { see below }; inline constexpr as_awaitable_t as_awaitable {}; // [exec.with.awaitable.senders] template < class - type Promise > struct with_awaitable_senders ; }
-
The exposition-only type
is defined as follows:variant - or - empty < Ts ... > -
If
is greater than zero,sizeof ...( Ts )
denotesvariant - or - empty < Ts ... >
wherevariant < Us ... >
is the packUs ...
with duplicate types removed.decay_t < Ts > ... -
Otherwise,
denotes the exposition-only class type:variant - or - empty < Ts ... > namespace std :: execution { struct empty - variant { // exposition only empty - variant () = delete ; }; }
-
-
For types
andSndr
,Env
is an alias for:single - sender - value - type < Sndr , Env > -
if that type is well-formed,value_types_of_t < Sndr , Env , decay_t , type_identity_t > -
Otherwise,
ifvoid
isvalue_types_of_t < Sndr , Env , tuple , variant >
orvariant < tuple <>>
,variant <> -
Otherwise,
if that type is well-formed,value_types_of_t < Sndr , Env , decayed - tuple , type_identity_t > -
Otherwise,
is ill-formed.single - sender - value - type < Sndr , Env >
-
-
The exposition-only concept
is defined as follows:single - sender namespace std :: execution { template < class Sndr , class Env > concept single - sender = sender_in < Sndr , Env > && requires { typename single - sender - value - type < Sndr , Env > ; }; }
34.5. Queries [exec.queries]
34.5.1. forwarding_query
[exec.fwd.env]
-
asks a query object whether it should be forwarded through queryable adaptors.forwarding_query -
The name
denotes a query object. For some query objectforwarding_query
of typeq
,Q
is expression-equivalent to:forwarding_query ( q ) -
if that expression is well-formed.MANDATE - NOTHROW ( q . query ( forwarding_query )) -
Mandates: The expression above has type
and is a core constant expression ifbool
is a core constant expression.q
-
-
Otherwise,
true
if
isderived_from < Q , forwarding_query_t > true
. -
Otherwise,
false
.
-
34.5.2. get_allocator
[exec.get.allocator]
-
asks a queryable object for its associated allocator.get_allocator -
The name
denotes a query object. For a subexpressionget_allocator
,env
is expression-equivalent toget_allocator ( env )
.MANDATE - NOTHROW ( as_const ( env ). query ( get_allocator )) -
Mandates: If the expression above is well-formed, its type satisfies
([allocator.requirements.general]).simple - allocator
-
-
is a core constant expression and has valueforwarding_query ( get_allocator ) true
.
34.5.3. get_stop_token
[exec.get.stop.token]
-
asks a queryable object for an associated stop token.get_stop_token -
The name
denotes a query object. For a subexpressionget_stop_token
,env
is expression-equivalent to:get_stop_token ( env ) -
if that expression is well-formed.MANDATE - NOTHROW ( as_const ( env ). query ( get_stop_token )) -
Mandates: The type of the expression above satisfies
.stoppable_token
-
-
Otherwise,
.never_stop_token {}
-
-
is a core constant expression and has valueforwarding_query ( get_stop_token ) true
.
34.5.4. execution :: get_env
[exec.get.env]
-
is a customization point object. For a subexpressionexecution :: get_env
,o
is expression-equivalent to:execution :: get_env ( o ) -
if that expression is well-formed.MANDATE - NOTHROW ( as_const ( o ). get_env ()) -
Mandates: The type of the expression above satisfies
([exec.queryable]).queryable
-
-
Otherwise,
.empty_env {}
-
-
The value of
shall be valid whileget_env ( o )
is valid.o -
When passed a sender object,
returns the sender’s associated attributes. When passed a receiver,get_env
returns the receiver’s associated execution environment.get_env
34.5.5. execution :: get_domain
[exec.get.domain]
-
asks a queryable object for its associated execution domain tag.get_domain -
The name
denotes a query object. For a subexpressionget_domain
,env
is expression-equivalent toget_domain ( env )
.MANDATE - NOTHROW ( as_const ( env ). query ( get_domain )) -
is a core constant expression and has valueforwarding_query ( execution :: get_domain ) true
.
34.5.6. execution :: get_scheduler
[exec.get.scheduler]
-
asks a queryable object for its associated scheduler.get_scheduler -
The name
denotes a query object. For a subexpressionget_scheduler
,env
is expression-equivalent toget_scheduler ( env )
.MANDATE - NOTHROW ( as_const ( env ). query ( get_scheduler )) -
Mandates: If the expression above is well-formed, its type satisfies
.scheduler
-
-
is a core constant expression and has valueforwarding_query ( execution :: get_scheduler ) true
.
34.5.7. execution :: get_delegation_scheduler
[exec.get.delegation.scheduler]
-
asks a queryable object for a scheduler that can be used to delegate work to for the purpose of forward progress delegation ([intro.progress]).get_delegation_scheduler -
The name
denotes a query object. For a subexpressionget_delegation_scheduler
,env
is expression-equivalent toget_delegation_scheduler ( env )
.MANDATE - NOTHROW ( as_const ( env ). query ( get_delegation_scheduler )) -
Mandates: If the expression above is well-formed, its type satisfies
.scheduler
-
-
is a core constant expression and has valueforwarding_query ( execution :: get_delegation_scheduler ) true
.
34.5.8. execution :: get_forward_progress_guarantee
[exec.get.forward.progress.guarantee]
namespace std :: execution { enum class forward_progress_guarantee { concurrent , parallel , weakly_parallel }; }
-
asks a scheduler about the forward progress guarantee of execution agents created by that scheduler’s associated execution resource ([intro.progress]).get_forward_progress_guarantee -
The name
denotes a query object. For a subexpressionget_forward_progress_guarantee
, letsch
beSch
. Ifdecltype (( sch ))
does not satisfySch
,scheduler
is ill-formed. Otherwise,get_forward_progress_guarantee
is expression-equivalent to:get_forward_progress_guarantee ( sch ) -
, if that expression is well-formed.MANDATE - NOTHROW ( as_const ( sch ). query ( get_forward_progress_guarantee )) -
Mandates: The type of the expression above is
.forward_progress_guarantee
-
-
Otherwise,
.forward_progress_guarantee :: weakly_parallel
-
-
If
for some schedulerget_forward_progress_guarantee ( sch )
returnssch
, all execution agents created by that scheduler’s associated execution resource shall provide the concurrent forward progress guarantee. If it returnsforward_progress_guarantee :: concurrent
, all such execution agents shall provide at least the parallel forward progress guarantee.forward_progress_guarantee :: parallel
34.5.9. execution :: get_completion_scheduler
[exec.completion.scheduler]
-
obtains the completion scheduler associated with a completion tag from a sender’s attributes.get_completion_scheduler < completion - tag > -
The name
denotes a query object template. For a subexpressionget_completion_scheduler
, the expressionq
is ill-formed ifget_completion_scheduler < completion - tag > ( q )
is not one ofcompletion - tag
,set_value_t
, orset_error_t
. Otherwise,set_stopped_t
is expression-equivalent toget_completion_scheduler < completion - tag > ( q )
.MANDATE - NOTHROW ( as_const ( q ). query ( get_completion_scheduler < completion - tag > )) -
Mandates: If the expression above is well-formed, its type satisfies
.scheduler
-
-
Let
be a completion function ([async.ops]); letcompletion - fn
be the associated completion tag ofcompletion - tag
; letcompletion - fn
be a pack of subexpressions; and letargs
be a subexpression such thatsndr
issender < decltype (( sndr )) > true
and
is well-formed and denotes a schedulerget_completion_scheduler < completion - tag > ( get_env ( sndr ))
. If an asynchronous operation created by connectingsch
with a receiversndr
causes the evaluation ofrcvr
, the behavior is undefined unless the evaluation happens on an execution agent that belongs tocompletion - fn ( rcvr , args ...)
’s associated execution resource.sch -
The expression
is a core constant expression and has valueforwarding_query ( get_completion_scheduler < completion - tag > ) true
.
34.6. Schedulers [exec.sched]
-
The
concept defines the requirements of a scheduler type ([async.ops]).scheduler
is a customization point object that accepts a scheduler. A valid invocation ofschedule
is a schedule-expression.schedule namespace std :: execution { template < class Sch > concept scheduler = derived_from < typename remove_cvref_t < Sch >:: scheduler_concept , scheduler_t > && queryable < Sch > && requires ( Sch && sch ) { { schedule ( std :: forward < Sch > ( sch )) } -> sender ; { auto ( get_completion_scheduler < set_value_t > ( get_env ( schedule ( std :: forward < Sch > ( sch ))))) } -> same_as < remove_cvref_t < Sch >> ; } && equality_comparable < remove_cvref_t < Sch >> && copy_constructible < remove_cvref_t < Sch >> ; } -
Let Sch be the type of a scheduler and let Env be the type of an execution environment for which sender_in<schedule_result_t<Sch>, Env> is satisfied. Then sender-in-of<schedule_result_t<Sch>, Env> shall be modeled.

- None of a scheduler’s copy constructor, destructor, equality comparison, or swap member functions shall exit via an exception. None of these member functions, nor a scheduler type’s schedule function, shall introduce data races as a result of potentially concurrent ([intro.races]) invocations of those functions from different threads.
- For any two values sch1 and sch2 of some scheduler type Sch, sch1 == sch2 shall return true only if both sch1 and sch2 share the same associated execution resource.
- For a given scheduler expression sch, the expression get_completion_scheduler<set_value_t>(get_env(schedule(sch))) shall compare equal to sch.
- For a given scheduler expression sch, if the expression get_domain(sch) is well-formed, then the expression get_domain(get_env(schedule(sch))) is also well-formed and has the same type.
- A scheduler type’s destructor shall not block pending completion of any receivers connected to the sender objects returned from schedule. The ability to wait for completion of submitted function objects can be provided by the associated execution resource of the scheduler.
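The requirement that a schedule sender advertise its scheduler can be checked directly, as in the sketch below. It assumes an implementation of the proposed facilities; run_loop (declared in the header synopsis in § 34.4) is used only as a convenient source of a scheduler, and its get_scheduler and finish members are specified elsewhere in the proposal.

  #include <execution>
  #include <cassert>

  int main() {
    using namespace std::execution;
    run_loop loop;
    scheduler auto sch  = loop.get_scheduler();
    sender    auto sndr = schedule(sch);          // schedule-expression
    // The value completion scheduler of a schedule sender compares equal
    // to the scheduler that produced it.
    assert(get_completion_scheduler<set_value_t>(get_env(sndr)) == sch);
    loop.finish();
  }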
34.7. Receivers [exec.recv]
34.7.1. Receiver concepts [exec.recv.concepts]
-
A receiver represents the continuation of an asynchronous operation. The
concept defines the requirements for a receiver type ([async.ops]). Thereceiver
concept defines the requirements for a receiver type that is usable as the first argument of a set of completion operations corresponding to a set of completion signatures. Thereceiver_of
customization point object is used to access a receiver’s associated environment.get_env namespace std :: execution { template < class Rcvr > concept receiver = derived_from < typename remove_cvref_t < Rcvr >:: receiver_concept , receiver_t > && requires ( const remove_cvref_t < Rcvr >& rcvr ) { { get_env ( rcvr ) } -> queryable ; } && move_constructible < remove_cvref_t < Rcvr >> && // rvalues are movable, and constructible_from < remove_cvref_t < Rcvr > , Rcvr > ; // lvalues are copyable template < class Signature , class Rcvr > concept valid - completion - for = // exposition only requires ( Signature * sig ) { [] < class Tag , class ... Args > ( Tag ( * )( Args ...)) requires callable < Tag , remove_cvref_t < Rcvr > , Args ... > {}( sig ); }; template < class Rcvr , class Completions > concept has - completions = // exposition only requires ( Completions * completions ) { [] < valid - completion - for < Rcvr > ... Sigs > ( completion_signatures < Sigs ... >* ) {}( completions ); }; template < class Rcvr , class Completions > concept receiver_of = receiver < Rcvr > && has - completions < Rcvr , Completions > ; } -
Class types that are marked final do not model the receiver concept.

- Let rcvr be a receiver and let op_state be an operation state associated with an asynchronous operation created by connecting rcvr with a sender. Let token be a stop token equal to get_stop_token(get_env(rcvr)). token shall remain valid for the duration of the asynchronous operation’s lifetime ([async.ops]). This means that, unless it knows about further guarantees provided by the type of rcvr, the implementation of op_state cannot use token after it executes a completion operation. This also implies that any stop callbacks registered on token must be destroyed before the invocation of the completion operation.
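A minimal receiver written against the receiver concept above might look like the following sketch; print_receiver is a hypothetical illustration that simply reports each completion it receives, and the example assumes an implementation of the proposed std::execution facilities.

  #include <execution>
  #include <exception>
  #include <cstdio>

  // Hypothetical receiver: prints whichever completion operation is executed on it.
  struct print_receiver {
    using receiver_concept = std::execution::receiver_t;

    void set_value(int v) && noexcept { std::printf("value: %d\n", v); }
    void set_error(std::exception_ptr) && noexcept { std::puts("error"); }
    void set_stopped() && noexcept { std::puts("stopped"); }

    std::execution::empty_env get_env() const noexcept { return {}; }
  };

  static_assert(std::execution::receiver<print_receiver>);
  static_assert(std::execution::receiver_of<print_receiver,
      std::execution::completion_signatures<
          std::execution::set_value_t(int),
          std::execution::set_error_t(std::exception_ptr),
          std::execution::set_stopped_t()>>);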
34.7.2. execution :: set_value
[exec.set.value]
-
set_value is a value completion function ([async.ops]). Its associated completion tag is set_value_t. The expression set_value(rcvr, vs...) for a subexpression rcvr and pack of subexpressions vs is ill-formed if rcvr is an lvalue or an rvalue of const type. Otherwise, it is expression-equivalent to MANDATE-NOTHROW(rcvr.set_value(vs...)).

34.7.3. execution::set_error [exec.set.error]

- set_error is an error completion function ([async.ops]). Its associated completion tag is set_error_t. The expression set_error(rcvr, err) for some subexpressions rcvr and err is ill-formed if rcvr is an lvalue or an rvalue of const type. Otherwise, it is expression-equivalent to MANDATE-NOTHROW(rcvr.set_error(err)).

34.7.4. execution::set_stopped [exec.set.stopped]

- set_stopped is a stopped completion function ([async.ops]). Its associated completion tag is set_stopped_t. The expression set_stopped(rcvr) for a subexpression rcvr is ill-formed if rcvr is an lvalue or an rvalue of const type. Otherwise, it is expression-equivalent to MANDATE-NOTHROW(rcvr.set_stopped()).
34.8. Operation states [exec.opstate]
-
The
concept defines the requirements of an operation state type ([async.ops]).operation_state namespace std :: execution { template < class O > concept operation_state = derived_from < typename O :: operation_state_concept , operation_state_t > && is_object_v < O > && requires ( O & o ) { { start ( o ) } noexcept ; }; } -
If an operation_state object is destroyed during the lifetime of its asynchronous operation ([async.ops]), the behavior is undefined. The operation_state concept does not impose requirements on any operations other than destruction and start, including copy and move operations. Invoking any such operation on an object whose type models operation_state can lead to undefined behavior.

- The program is ill-formed if it performs a copy or move construction or assignment operation on an operation state object created by connecting a library-provided sender.
34.8.1. execution :: start
[exec.opstate.start]
-
The name start denotes a customization point object that starts ([async.ops]) the asynchronous operation associated with the operation state object. For a subexpression op, the expression start(op) is ill-formed if op is an rvalue. Otherwise, it is expression-equivalent to MANDATE-NOTHROW(op.start()).

- If op.start() does not start ([async.ops]) the asynchronous operation associated with the operation state op, the behavior of calling start(op) is undefined.
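Putting the operation-state pieces together, an operation state for an immediately-completing operation might look like the following sketch; immediate_op is hypothetical, completes its receiver with the value 42 as soon as it is started, and assumes an implementation of the proposed std::execution facilities.

  #include <execution>
  #include <utility>

  // Hypothetical operation state: starting it executes a value completion
  // operation on the stored receiver.
  template <class Rcvr>
  struct immediate_op {
    using operation_state_concept = std::execution::operation_state_t;
    Rcvr rcvr;

    void start() & noexcept {
      std::execution::set_value(std::move(rcvr), 42);   // completion operation
    }
  };
  // For any receiver type Rcvr that accepts a value completion with an int,
  // immediate_op<Rcvr> is intended to satisfy the operation_state concept above.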
34.9. Senders [exec.snd]
34.9.1. General [exec.snd.general]
-
For the purposes of this subclause, a sender is an object whose type satisfies the sender concept ([async.ops]).
-
Subclauses [exec.factories] and [exec.adapt] define customizable algorithms that return senders. Each algorithm has a default implementation. Let sndr be the result of an invocation of such an algorithm or an object equal to the result ([concepts.equality]), and let Sndr be decltype((sndr)). Let rcvr be a receiver of type Rcvr with associated environment env of type Env such that sender_to<Sndr, Rcvr> is true. For the default implementation of the algorithm that produced sndr, connecting sndr to rcvr and starting the resulting operation state ([async.ops]) necessarily results in the potential evaluation ([basic.def.odr]) of a set of completion operations whose first argument is a subexpression equal to rcvr. Let Sigs be a pack of completion signatures corresponding to this set of completion operations. Then the type of the expression get_completion_signatures(sndr, env) is a specialization of the class template completion_signatures ([exec.utils.cmplsigs]), the set of whose template arguments is Sigs. If a user-provided implementation of the algorithm that produced sndr is selected instead of the default, any completion signature that is in the set of types denoted by completion_signatures_of_t<Sndr, Env> and that is not part of Sigs shall correspond to error or stopped completion operations, unless otherwise specified.
-
This subclause makes use of the following exposition-only entities.
-
For a queryable object env, FWD-ENV(env) is an expression whose type satisfies queryable such that for a query object q and a pack of subexpressions as, the expression FWD-ENV(env).query(q, as...) is ill-formed if forwarding_query(q) is false; otherwise, it is expression-equivalent to env.query(q, as...).
-
For a query object q and a subexpression v, MAKE-ENV(q, v) is an expression env whose type satisfies queryable such that the result of env.query(q) has a value equal to v ([concepts.equality]). Unless otherwise stated, the object to which env.query(q) refers remains valid while env remains valid.
-
For two queryable objects env1 and env2, a query object q and a pack of subexpressions as, JOIN-ENV(env1, env2) is an expression env3 whose type satisfies queryable such that env3.query(q, as...) is expression-equivalent to:
-
env1.query(q, as...) if that expression is well-formed,
-
otherwise, env2.query(q, as...) if that expression is well-formed,
-
otherwise, env3.query(q, as...) is ill-formed.
-
-
The results of FWD-ENV, MAKE-ENV, and JOIN-ENV can be context-dependent; i.e., they can evaluate to expressions with different types and value categories in different contexts for the same arguments.
-
For a scheduler sch, SCHED-ATTRS(sch) is an expression o1 whose type satisfies queryable such that o1.query(get_completion_scheduler<Tag>) is an expression with the same type and value as sch, where Tag is one of set_value_t or set_stopped_t, and such that o1.query(get_domain) is expression-equivalent to sch.query(get_domain). SCHED-ENV(sch) is an expression o2 whose type satisfies queryable such that o2.query(get_scheduler) is a prvalue with the same type and value as sch, and such that o2.query(get_domain) is expression-equivalent to sch.query(get_domain).
-
For two subexpressions rcvr and expr, SET-VALUE(rcvr, expr) is expression-equivalent to (expr, set_value(std::move(rcvr))) if the type of expr is void; otherwise, set_value(std::move(rcvr), expr). TRY-EVAL(rcvr, expr) is equivalent to:

try {
  expr;
} catch(...) {
  set_error(std::move(rcvr), current_exception());
}

if expr is potentially-throwing; otherwise, expr. TRY-SET-VALUE(rcvr, expr) is TRY-EVAL(rcvr, SET-VALUE(rcvr, expr)) except that rcvr is evaluated only once.
-
template<class Default = default_domain, class Sndr>
  constexpr auto completion-domain(const Sndr& sndr) noexcept;
-
COMPL-DOMAIN(T) is the type of the expression get_domain(get_completion_scheduler<T>(get_env(sndr))).
-
Effects: If all of the types COMPL-DOMAIN(set_value_t), COMPL-DOMAIN(set_error_t), and COMPL-DOMAIN(set_stopped_t) are ill-formed, completion-domain<Default>(sndr) is a default-constructed prvalue of type Default. Otherwise, if they all share a common type ([meta.trans.other]) (ignoring those types that are ill-formed), then completion-domain<Default>(sndr) is a default-constructed prvalue of that type. Otherwise, completion-domain<Default>(sndr) is ill-formed.
-
-
template<class Tag, class Env, class Default>
  constexpr decltype(auto) query-with-default(Tag, const Env& env, Default&& value) noexcept(see below);
-
Let e be the expression Tag()(env) if that expression is well-formed; otherwise, it is static_cast<Default>(std::forward<Default>(value)).
-
Returns: e.
-
Remarks: The expression in the noexcept clause is noexcept(e).
-
-
template<class Sndr>
  constexpr auto get-domain-early(const Sndr& sndr) noexcept;
-
Effects: Equivalent to: return Domain(); where Domain is the decayed type of the first of the following expressions that is well-formed:
-
get_domain(get_env(sndr))
-
completion-domain(sndr)
-
default_domain()
-
-
-
template<class Sndr, class Env>
  constexpr auto get-domain-late(const Sndr& sndr, const Env& env) noexcept;
-
Effects: Equivalent to:
-
If sender-for<Sndr, continues_on_t> is true, then return Domain(); where Domain is the type of the following expression:

[] {
  auto [_, sch, _] = sndr;
  return query-or-default(get_domain, sch, default_domain());
}();

The continues_on algorithm works in tandem with schedule_from ([exec.schedule.from]) to give scheduler authors a way to customize both how to transition onto (continues_on) and off of (schedule_from) a given execution context. Thus, continues_on ignores the domain of the predecessor and uses the domain of the destination scheduler to select a customization, a property that is unique to continues_on. That is why it is given special treatment here.
-
Otherwise, return Domain(); where Domain is the first of the following expressions that is well-formed and whose type is not void:
-
get_domain(get_env(sndr))
-
completion-domain<void>(sndr)
-
get_domain(env)
-
get_domain(get_scheduler(env))
-
default_domain().
-
-
-
-
template < callable Fun > requires is_nothrow_move_constructible_v < Fun > struct emplace - from { // exposition only Fun fun ; // exposition only using type = call - result - t < Fun > ; constexpr operator type () && noexcept ( nothrow - callable < Fun > ) { return std :: move ( fun )(); } constexpr type operator ()() && noexcept ( nothrow - callable < Fun > ) { return std :: move ( fun )(); } }; -
emplace-from is used to emplace non-movable types into tuple, optional, variant, and similar types.
-
-
struct on - stop - request { // exposition only inplace_stop_source & stop - src ; // exposition only void operator ()() noexcept { stop - src . request_stop (); } }; -
template < class T 0 , class T 1 , ... class T n > struct product - type { // exposition only T 0 t 0 ; // exposition only T 1 t 1 ; // exposition only ... T n t n ; // exposition only template < size_t I , class Self > constexpr decltype ( auto ) get ( this Self && self ) noexcept ; // exposition only template < class Self , class Fn > constexpr decltype ( auto ) apply ( this Self && self , Fn && fn ) // exposition only noexcept ( see below ); }; -
product-type is presented here in pseudo-code form for the sake of exposition. It can be approximated in standard C++ with a tuple-like implementation that takes care to keep the type an aggregate that can be used as the initializer of a structured binding declaration.
-
An expression of type product-type is usable as the initializer of a structured binding declaration [dcl.struct.bind].
-
template < size_t I , class Self > constexpr decltype ( auto ) get ( this Self && self ) noexcept ; -
Effects: Equivalent to:
auto & [... ts ] = self ; return std :: forward_like < Self > ( ts ...[ I ]);
-
-
template < class Self , class Fn > constexpr decltype ( auto ) apply ( this Self && self , Fn && fn ) noexcept ( see below ); -
Effects: Equivalent to:
auto & [... ts ] = self ; return std :: forward < Fn > ( fn )( std :: forward_like < Self > ( ts )...); -
Requires: The expression in the return statement above is well-formed.
-
Remarks: The expression in the noexcept clause is true if the return statement above is not potentially throwing; otherwise, false.
-
-
-
template < class Tag , class Data = see below , class ... Child > constexpr auto make - sender ( Tag tag , Data && data , Child && ... child ); -
Mandates: The following expressions are true:
-
semiregular<Tag>
-
movable-value<Data>
-
(sender<Child> && ...)
-
-
Returns: A prvalue of type
that has been direct-list-initialized with the forwarded arguments, wherebasic - sender < Tag , decay_t < Data > , decay_t < Child > ... >
is the following exposition-only class template except as noted below:basic - sender namespace std :: execution { template < class Tag > concept completion - tag = // exposition only same_as < Tag , set_value_t > || same_as < Tag , set_error_t > || same_as < Tag , set_stopped_t > ; template < template < class ... > class T , class ... Args > concept valid - specialization = requires { typename T < Args ... > ; }; // exposition only struct default - impls { // exposition only static constexpr auto get - attrs = see below ; static constexpr auto get - env = see below ; static constexpr auto get - state = see below ; static constexpr auto start = see below ; static constexpr auto complete = see below ; }; template < class Tag > struct impls - for : default - impls {}; // exposition only template < class Sndr , class Rcvr > // exposition only using state - type = decay_t < call - result - t < decltype ( impls - for < tag_of_t < Sndr >>:: get - state ), Sndr , Rcvr &>> ; template < class Index , class Sndr , class Rcvr > // exposition only using env - type = call - result - t < decltype ( impls - for < tag_of_t < Sndr >>:: get - env ), Index , state - type < Sndr , Rcvr >& , const Rcvr &> ; template < class Sndr , size_t I = 0 > using child - type = decltype ( declval < Sndr > (). template get < I + 2 > ()); // exposition only template < class Sndr > using indices - for = remove_reference_t < Sndr >:: indices - for ; // exposition only template < class Sndr , class Rcvr > struct basic - state { // exposition only basic - state ( Sndr && sndr , Rcvr && rcvr ) noexcept ( see below ) : rcvr ( std :: move ( rcvr )) , state ( impls - for < tag_of_t < Sndr >>:: get - state ( std :: forward < Sndr > ( sndr ), rcvr )) { } Rcvr rcvr ; // exposition only state - type < Sndr , Rcvr > state ; // exposition only }; template < class Sndr , class Rcvr , class Index > requires valid - specialization < env - type , Index , Sndr , Rcvr > struct basic - receiver { // exposition only using receiver_concept = receiver_t ; using tag - t = tag_of_t < Sndr > ; // exposition only using state - t = state - type < Sndr , Rcvr > ; // exposition only static constexpr const auto & complete = impls - for < tag - t >:: complete ; // exposition only template < class ... Args > requires callable < decltype ( complete ), Index , state - t & , Rcvr & , set_value_t , Args ... > void set_value ( Args && ... 
args ) && noexcept { complete ( Index (), op -> state , op -> rcvr , set_value_t (), std :: forward < Args > ( args )...); } template < class Error > requires callable < decltype ( complete ), Index , state - t & , Rcvr & , set_error_t , Error > void set_error ( Error && err ) && noexcept { complete ( Index (), op -> state , op -> rcvr , set_error_t (), std :: forward < Error > ( err )); } void set_stopped () && noexcept requires callable < decltype ( complete ), Index , state - t & , Rcvr & , set_stopped_t > { complete ( Index (), op -> state , op -> rcvr , set_stopped_t ()); } auto get_env () const noexcept -> env - type < Index , Sndr , Rcvr > { return impls - for < tag - t >:: get - env ( Index (), op -> state , op -> rcvr ); } basic - state < Sndr , Rcvr >* op ; // exposition only }; constexpr auto connect - all = see below ; // exposition only template < class Sndr , class Rcvr > using connect - all - result = call - result - t < // exposition only decltype ( connect - all ), basic - state < Sndr , Rcvr >* , Sndr , indices - for < Sndr >> ; template < class Sndr , class Rcvr > requires valid - specialization < state - type , Sndr , Rcvr > && valid - specialization < connect - all - result , Sndr , Rcvr > struct basic - operation : basic - state < Sndr , Rcvr > { // exposition only using operation_state_concept = operation_state_t ; using tag - t = tag_of_t < Sndr > ; // exposition only connect - all - result < Sndr , Rcvr > inner - ops ; // exposition only basic - operation ( Sndr && sndr , Rcvr && rcvr ) noexcept ( see below ) // exposition only : basic - state < Sndr , Rcvr > ( std :: forward < Sndr > ( sndr ), std :: move ( rcvr )) , inner - ops ( connect - all ( this , std :: forward < Sndr > ( sndr ), indices - for < Sndr > ())) {} void start () & noexcept { auto & [... ops ] = inner - ops ; impls - for < tag - t >:: start ( this -> state , this -> rcvr , ops ...); } }; template < class Sndr , class Env > using completion - signatures - for = see below ; // exposition only template < class Tag , class Data , class ... Child > struct basic - sender : product - type < Tag , Data , Child ... > { // exposition only using sender_concept = sender_t ; using indices - for = index_sequence_for < Child ... > ; // exposition only decltype ( auto ) get_env () const noexcept { auto & [ _ , data , ... child ] = * this ; return impls - for < Tag >:: get - attrs ( data , child ...); } template < decays - to < basic - sender > Self , receiver Rcvr > auto connect ( this Self && self , Rcvr rcvr ) noexcept ( see below ) -> basic - operation < Self , Rcvr > { return { std :: forward < Self > ( self ), std :: move ( rcvr )}; } template < decays - to < basic - sender > Self , class Env > auto get_completion_signatures ( this Self && self , Env && env ) noexcept -> completion - signatures - for < Self , Env > { return {}; } }; } -
Remarks: The default template argument for the Data template parameter denotes an unspecified empty trivially copyable class type that models semiregular.
-
It is unspecified whether a specialization of basic-sender is an aggregate.
-
An expression of type basic-sender is usable as the initializer of a structured binding declaration [dcl.struct.bind].
-
The expression in the noexcept clause of the constructor of basic-state is:

is_nothrow_move_constructible_v<Rcvr> &&
nothrow-callable<decltype(impls-for<tag_of_t<Sndr>>::get-state), Sndr, Rcvr&>
-
The object
is initialized with a callable object equivalent to the following lambda:connect - all [] < class Sndr , class Rcvr , size_t ... Is > ( basic - state < Sndr , Rcvr >* op , Sndr && sndr , index_sequence < Is ... > ) noexcept ( see below ) -> decltype ( auto ) { auto & [ _ , data , ... child ] = sndr ; return product - type { connect ( std :: forward_like < Sndr > ( child ), basic - receiver < Sndr , Rcvr , integral_constant < size_t , Is >> { op })...}; } -
Requires: The expression in the return statement is well-formed.
-
Remarks: The expression in the noexcept clause is true if the return statement is not potentially throwing; otherwise, false.
-
-
The expression in the
clause of the constructor ofnoexcept
is:basic - operation is_nothrow_constructible_v < basic - state < Self , Rcvr > , Self , Rcvr > && noexcept ( connect - all ( this , std :: forward < Sndr > ( sndr ), indices - for < Sndr > ())) -
The expression in the
clause of thenoexcept
member function ofconnect
is:basic - sender is_nothrow_constructible_v < basic - operation < Self , Rcvr > , Self , Rcvr > -
The member
is initialized with a callable object equivalent to the following lambda:default - impls :: get - attrs []( const auto & , const auto & ... child ) noexcept -> decltype ( auto ) { if constexpr ( sizeof ...( child ) == 1 ) return ( FWD - ENV ( get_env ( child )), ...); else return empty_env (); } -
The member
is initialized with a callable object equivalent to the following lambda:default - impls :: get - env []( auto , auto & , const auto & rcvr ) noexcept -> decltype ( auto ) { return FWD - ENV ( get_env ( rcvr )); } -
The member
is initialized with a callable object equivalent to the following lambda:default - impls :: get - state [] < class Sndr , class Rcvr > ( Sndr && sndr , Rcvr & rcvr ) noexcept -> decltype ( auto ) { auto & [ _ , data , ... child ] = sndr ; return std :: forward_like < Sndr > ( data ); } -
The member
is initialized with a callable object equivalent to the following lambda:default - impls :: start []( auto & , auto & , auto & ... ops ) noexcept -> void { ( execution :: start ( ops ), ...); } -
The member
is initialized with a callable object equivalent to the following lambda:default - impls :: complete [] < class Index , class Rcvr , class Tag , class ... Args > ( Index , auto & state , Rcvr & rcvr , Tag , Args && ... args ) noexcept -> void requires callable < Tag , Rcvr , Args ... > { // Mandates: Index::value == 0 Tag ()( std :: move ( rcvr ), std :: forward < Args > ( args )...); } -
For a subexpression sndr, let Sndr be decltype((sndr)). Let rcvr be a receiver with an associated environment of type Env such that sender_in<Sndr, Env> is true. completion-signatures-for<Sndr, Env> denotes a specialization of completion_signatures, the set of whose template arguments correspond to the set of completion operations that are potentially evaluated as a result of starting ([async.ops]) the operation state that results from connecting sndr and rcvr. When sender_in<Sndr, Env> is false, the type denoted by completion-signatures-for<Sndr, Env>, if any, is not a specialization of completion_signatures.

Recommended practice: When sender_in<Sndr, Env> is false, implementations are encouraged to use the type denoted by completion-signatures-for<Sndr, Env> to communicate to users why.
-
-
template < sender Sndr , queryable Env > constexpr auto write - env ( Sndr && sndr , Env && env ); // exposition only -
write-env is an exposition-only sender adaptor that, when connected with a receiver rcvr, connects the adapted sender with a receiver whose execution environment is the result of joining the queryable argument env to the result of get_env(rcvr).
-
Let write-env-t be an exposition-only empty class type.
-
Returns: make-sender(write-env-t(), std::forward<Env>(env), std::forward<Sndr>(sndr)).
-
Remarks: The exposition-only class template impls-for ([exec.snd.general]) is specialized for write-env-t as follows:

template<>
struct impls-for<write-env-t> : default-impls {
  static constexpr auto get-env =
    [](auto, const auto& state, const auto& rcvr) noexcept {
      return JOIN-ENV(state, get_env(rcvr));
    };
};
-
-
34.9.2. Sender concepts [exec.snd.concepts]
-
The sender concept defines the requirements for a sender type ([async.ops]). The sender_in concept defines the requirements for a sender type that can create asynchronous operations given an associated environment type. The sender_to concept defines the requirements for a sender type that can connect with a specific receiver type. The get_env customization point object is used to access a sender's associated attributes. The connect customization point object is used to connect ([async.ops]) a sender and a receiver to produce an operation state.

namespace std::execution {
  template<class Sigs>
    concept valid-completion-signatures = see below;  // exposition only

  template<class Sndr>
    concept is-sender =                               // exposition only
      derived_from<typename Sndr::sender_concept, sender_t>;

  template<class Sndr>
    concept enable-sender =                           // exposition only
      is-sender<Sndr> ||
      is-awaitable<Sndr, env-promise<empty_env>>;     // [exec.awaitables]

  template<class Sndr>
    concept sender =
      bool(enable-sender<remove_cvref_t<Sndr>>) &&    // atomic constraint ([temp.constr.atomic])
      requires (const remove_cvref_t<Sndr>& sndr) {
        { get_env(sndr) } -> queryable;
      } &&
      move_constructible<remove_cvref_t<Sndr>> &&     // senders are movable and
      constructible_from<remove_cvref_t<Sndr>, Sndr>; // decay copyable

  template<class Sndr, class Env = empty_env>
    concept sender_in =
      sender<Sndr> &&
      queryable<Env> &&
      requires (Sndr&& sndr, Env&& env) {
        { get_completion_signatures(std::forward<Sndr>(sndr), std::forward<Env>(env)) }
          -> valid-completion-signatures;
      };

  template<class Sndr, class Rcvr>
    concept sender_to =
      sender_in<Sndr, env_of_t<Rcvr>> &&
      receiver_of<Rcvr, completion_signatures_of_t<Sndr, env_of_t<Rcvr>>> &&
      requires (Sndr&& sndr, Rcvr&& rcvr) {
        connect(std::forward<Sndr>(sndr), std::forward<Rcvr>(rcvr));
      };
}
-
Given a subexpression sndr, let Sndr be decltype((sndr)) and let rcvr be a receiver with an associated environment whose type is Env. A completion operation is a permissible completion for Sndr and Env if its completion signature appears in the argument list of the specialization of completion_signatures denoted by completion_signatures_of_t<Sndr, Env>. Sndr and Env model sender_in<Sndr, Env> if all the completion operations that are potentially evaluated by connecting sndr to rcvr and starting the resulting operation state are permissible completions for Sndr and Env.
-
A type models the exposition-only concept valid-completion-signatures if it denotes a specialization of the completion_signatures class template.
-
The exposition-only concepts sender-of and sender-in-of define the requirements for a sender type that completes with a given unique set of value result types.

namespace std::execution {
  template<class... As>
    using value-signature = set_value_t(As...);   // exposition only

  template<class Sndr, class Env, class... Values>
    concept sender-in-of =
      sender_in<Sndr, Env> &&
      MATCHING-SIG(                                // see [exec.general]
        set_value_t(Values...),
        value_types_of_t<Sndr, Env, value-signature, type_identity_t>);

  template<class Sndr, class... Values>
    concept sender-of = sender-in-of<Sndr, empty_env, Values...>;
}
-
Let sndr be an expression such that decltype((sndr)) is Sndr. The type tag_of_t<Sndr> is as follows:
-
If the declaration auto&& [tag, data, ...children] = sndr; would be well-formed, tag_of_t<Sndr> is an alias for decltype(auto(tag)).
-
Otherwise, tag_of_t<Sndr> is ill-formed.
-
-
Let sender-for be an exposition-only concept defined as follows:

namespace std::execution {
  template<class Sndr, class Tag>
    concept sender-for =
      sender<Sndr> &&
      same_as<tag_of_t<Sndr>, Tag>;
}
-
For a type T, SET-VALUE-SIG(T) denotes the type set_value_t() if T is cv void; otherwise, it denotes the type set_value_t(T).
-
Library-provided sender types:
-
Always expose an overload of a member connect that accepts an rvalue sender.
-
Only expose an overload of a member connect that accepts an lvalue sender if they model copy_constructible.
-
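A non-normative sketch of a user-defined type that models sender and sender_in; just_zero and its nested operation state are hypothetical and declare their completions with a member completion_signatures type.

#include <execution>
#include <utility>

namespace ex = std::execution;

struct just_zero {                                        // hypothetical sender
  using sender_concept = ex::sender_t;
  using completion_signatures =
      ex::completion_signatures<ex::set_value_t(int)>;

  template <class Rcvr>
  struct op {                                             // its operation state
    using operation_state_concept = ex::operation_state_t;
    Rcvr rcvr;
    void start() & noexcept { ex::set_value(std::move(rcvr), 0); }
  };

  template <class Rcvr>
  auto connect(Rcvr rcvr) && noexcept { return op<Rcvr>{std::move(rcvr)}; }
};

static_assert(ex::sender<just_zero>);
static_assert(ex::sender_in<just_zero, ex::empty_env>);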
34.9.3. Awaitable helpers [exec.awaitables]
-
The sender concepts recognize awaitables as senders. For [exec], an awaitable is an expression that would be well-formed as the operand of a
expression within a given context.co_await -
For a subexpression
, letc
be expression-equivalent to the series of transformations and conversions applied toGET - AWAITER ( c , p )
as the operand of an await-expression in a coroutine, resulting in lvaluec
as described by [expr.await], wheree
is an lvalue referring to the coroutine’s promise, which has typep
. This includes the invocation of the promise type’sPromise
member if any, the invocation of theawait_transform
picked by overload resolution if any, and any necessary implicit conversions and materializations.operator co_await I have opened cwg#250 to give these transformations a term-of-art so we can more easily refer to it here.
-
Let
be the following exposition-only concept:is - awaitable namespace std { template < class T > concept await - suspend - result = see below ; // exposition only template < class A , class Promise > concept is - awaiter = // exposition only requires ( A & a , coroutine_handle < Promise > h ) { a . await_ready () ? 1 : 0 ; { a . await_suspend ( h ) } -> await - suspend - result ; a . await_resume (); }; template < class C , class Promise > concept is - awaitable = requires ( C ( * fc )() noexcept , Promise & p ) { { GET - AWAITER ( fc (), p ) } -> is - awaiter < Promise > ; }; }
isawait - suspend - result < T > true
if and only if one of the following istrue
:-
isT
, orvoid -
isT
, orbool -
is a specialization ofT
.coroutine_handle
-
-
For a subexpression
such thatc
is typedecltype (( c ))
, and an lvalueC
of typep
,Promise
denotes the typeawait - result - type < C , Promise >
.decltype ( GET - AWAITER ( c , p ). await_resume ()) -
Let
be the exposition-only class template:with - await - transform namespace std :: execution { template < class T , class Promise > concept has - as - awaitable = // exposition only requires ( T && t , Promise & p ) { { std :: forward < T > ( t ). as_awaitable ( p ) } -> is - awaitable < Promise &> ; }; template < class Derived > struct with - await - transform { template < class T > T && await_transform ( T && value ) noexcept { return std :: forward < T > ( value ); } template < has - as - awaitable < Derived > T > decltype ( auto ) await_transform ( T && value ) noexcept ( noexcept ( std :: forward < T > ( value ). as_awaitable ( declval < Derived &> ()))) { return std :: forward < T > ( value ). as_awaitable ( static_cast < Derived &> ( * this )); } }; } -
Let
be the exposition-only class template:env - promise namespace std :: execution { template < class Env > struct env - promise : with - await - transform < env - promise < Env >> { unspecified get_return_object () noexcept ; unspecified initial_suspend () noexcept ; unspecified final_suspend () noexcept ; void unhandled_exception () noexcept ; void return_void () noexcept ; coroutine_handle <> unhandled_stopped () noexcept ; const Env & get_env () const noexcept ; }; } Specializations of
are only used for the purpose of type computation; its members need not be defined.env - promise
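A non-normative illustration of the effect of these definitions: a type that is a valid awaiter satisfies is-awaitable and is therefore treated as a sender, so it can be passed directly to sender consumers. The forty_two type is hypothetical.

#include <coroutine>
#include <execution>

struct forty_two {                                   // hypothetical awaiter
  bool await_ready() const noexcept { return true; }
  void await_suspend(std::coroutine_handle<>) const noexcept {}
  int await_resume() const noexcept { return 42; }
};

// Because forty_two satisfies is-awaitable, the sender concept accepts it and
// its completion signatures are computed from the type of await_resume().
static_assert(std::execution::sender<forty_two>);
// auto [v] = std::this_thread::sync_wait(forty_two{}).value();  // v == 42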
34.9.4. execution :: default_domain
[exec.domain.default]
namespace std :: execution { struct default_domain { template < sender Sndr , queryable ... Env > requires ( sizeof ...( Env ) <= 1 ) static constexpr sender decltype ( auto ) transform_sender ( Sndr && sndr , const Env & ... env ) noexcept ( see below ); template < sender Sndr , queryable Env > static constexpr queryable decltype ( auto ) transform_env ( Sndr && sndr , Env && env ) noexcept ; template < class Tag , sender Sndr , class ... Args > static constexpr decltype ( auto ) apply_sender ( Tag , Sndr && sndr , Args && ... args ) noexcept ( see below ); }; }
34.9.4.1. Static members [exec.domain.default.statics]
template < sender Sndr , queryable ... Env > requires ( sizeof ...( Env ) <= 1 ) constexpr sender decltype ( auto ) transform_sender ( Sndr && sndr , const Env & ... env ) noexcept ( see below );
-
Let e be the expression tag_of_t<Sndr>().transform_sender(std::forward<Sndr>(sndr), env...) if that expression is well-formed; otherwise, std::forward<Sndr>(sndr).
-
Returns: e.
-
Remarks: The exception specification is equivalent to noexcept(e).
template < sender Sndr , queryable Env > constexpr queryable decltype ( auto ) transform_env ( Sndr && sndr , Env && env ) noexcept ;
-
Let e be the expression tag_of_t<Sndr>().transform_env(std::forward<Sndr>(sndr), std::forward<Env>(env)) if that expression is well-formed; otherwise, static_cast<Env>(std::forward<Env>(env)).
-
Mandates: noexcept(e) is true.
-
Returns: e.
template < class Tag , sender Sndr , class ... Args > constexpr decltype ( auto ) apply_sender ( Tag , Sndr && sndr , Args && ... args ) noexcept ( see below );
-
Let e be the expression Tag().apply_sender(std::forward<Sndr>(sndr), std::forward<Args>(args)...).
-
Constraints: e is a well-formed expression.
-
Returns: e.
-
Remarks: The exception specification is equivalent to noexcept(e).
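A non-normative sketch of a user-defined execution domain; my_domain is hypothetical. A domain that provides no customization of its own can simply delegate to default_domain, which in turn defers to the sender's tag type.

#include <execution>
#include <utility>

namespace ex = std::execution;

struct my_domain {                              // hypothetical execution domain
  // Delegate to default_domain: the tag's transform_sender is used if it
  // exists; otherwise the sender is forwarded unchanged.
  template <ex::sender Sndr, class... Env>
  static constexpr decltype(auto)
  transform_sender(Sndr&& sndr, const Env&... env) {
    return ex::default_domain().transform_sender(std::forward<Sndr>(sndr), env...);
  }
};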
34.9.5. execution :: transform_sender
[exec.snd.transform]
namespace std :: execution { template < class Domain , sender Sndr , queryable ... Env > requires ( sizeof ...( Env ) <= 1 ) constexpr sender decltype ( auto ) transform_sender ( Domain dom , Sndr && sndr , const Env & ... env ) noexcept ( see below ); }
-
Let transformed-sndr be the expression dom.transform_sender(std::forward<Sndr>(sndr), env...) if that expression is well-formed; otherwise, default_domain().transform_sender(std::forward<Sndr>(sndr), env...). Let final-sndr be the expression transformed-sndr if transformed-sndr and sndr have the same type ignoring cv qualifiers; otherwise, it is the expression transform_sender(dom, transformed-sndr, env...).
-
Returns: final-sndr.
-
Remarks: The exception specification is equivalent to noexcept(final-sndr).
34.9.6. execution :: transform_env
[exec.snd.transform.env]
namespace std :: execution { template < class Domain , sender Sndr , queryable Env > constexpr queryable decltype ( auto ) transform_env ( Domain dom , Sndr && sndr , Env && env ) noexcept ; }
-
Let e be the expression dom.transform_env(std::forward<Sndr>(sndr), std::forward<Env>(env)) if that expression is well-formed; otherwise, default_domain().transform_env(std::forward<Sndr>(sndr), std::forward<Env>(env)).
-
Mandates: noexcept(e) is true.
-
Returns: e.
34.9.7. execution :: apply_sender
[exec.snd.apply]
namespace std :: execution { template < class Domain , class Tag , sender Sndr , class ... Args > constexpr decltype ( auto ) apply_sender ( Domain dom , Tag , Sndr && sndr , Args && ... args ) noexcept ( see below ); }
-
Let e be the expression dom.apply_sender(Tag(), std::forward<Sndr>(sndr), std::forward<Args>(args)...) if that expression is well-formed; otherwise, default_domain().apply_sender(Tag(), std::forward<Sndr>(sndr), std::forward<Args>(args)...).
-
Constraints: The expression e is well-formed.
-
Returns: e.
-
Remarks: The exception specification is equivalent to noexcept(e).
34.9.8. execution :: get_completion_signatures
[exec.getcomplsigs]
-
get_completion_signatures is a customization point object. Let sndr be an expression such that decltype((sndr)) is Sndr, and let env be an expression such that decltype((env)) is Env. Let new_sndr be the expression transform_sender(decltype(get-domain-late(sndr, env)){}, sndr, env), and let NewSndr be decltype((new_sndr)). Then get_completion_signatures(sndr, env) is expression-equivalent to (void(sndr), void(env), CS()) except that void(sndr) and void(env) are indeterminately sequenced, where CS is:
-
decltype(new_sndr.get_completion_signatures(env)) if that type is well-formed,
-
Otherwise, remove_cvref_t<NewSndr>::completion_signatures if that type is well-formed,
-
Otherwise, if is-awaitable<NewSndr, env-promise<Env>> is true, then:

completion_signatures<
  SET-VALUE-SIG(await-result-type<NewSndr, env-promise<Env>>), // see [exec.snd.concepts]
  set_error_t(exception_ptr),
  set_stopped_t()>
-
Otherwise, CS is ill-formed.
-
-
Let rcvr be an rvalue whose type Rcvr models receiver, and let Sndr be the type of a sender such that sender_in<Sndr, env_of_t<Rcvr>> is true. Let Sigs... be the template arguments of the completion_signatures specialization named by completion_signatures_of_t<Sndr, env_of_t<Rcvr>>. Let CSO be a completion function. If sender Sndr or its operation state cause the expression CSO(rcvr, args...) to be potentially evaluated ([basic.def.odr]) then there shall be a signature Sig in Sigs... such that MATCHING-SIG(decayed-typeof<CSO>(decltype(args)...), Sig) is true ([exec.general]).
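A non-normative usage note: completion_signatures_of_t applies the transformation described above and names the resulting completion_signatures specialization, which can be inspected at compile time.

#include <execution>

namespace ex = std::execution;

using just_int = decltype(ex::just(42));
static_assert(ex::sender_in<just_int, ex::empty_env>);

// sigs names the completion_signatures specialization for just(42); its only
// value completion takes an int.
using sigs = ex::completion_signatures_of_t<just_int, ex::empty_env>;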
34.9.9. execution :: connect
[exec.connect]
-
connect connects ([async.ops]) a sender with a receiver.
-
The name connect denotes a customization point object. For subexpressions sndr and rcvr, let Sndr be decltype((sndr)) and Rcvr be decltype((rcvr)), let new_sndr be the expression transform_sender(decltype(get-domain-late(sndr, get_env(rcvr))){}, sndr, get_env(rcvr)), and let DS and DR be decay_t<decltype((new_sndr))> and decay_t<Rcvr>, respectively.
-
Let
be the following exposition-only class:connect - awaitable - promise namespace std :: execution { struct connect - awaitable - promise : with - await - transform < connect - awaitable - promise > { connect - awaitable - promise ( DS & , DR & rcvr ) noexcept : rcvr ( rcvr ) {} suspend_always initial_suspend () noexcept { return {}; } [[ noreturn ]] suspend_always final_suspend () noexcept { terminate (); } [[ noreturn ]] void unhandled_exception () noexcept { terminate (); } [[ noreturn ]] void return_void () noexcept { terminate (); } coroutine_handle <> unhandled_stopped () noexcept { set_stopped ( std :: move ( rcvr )); return noop_coroutine (); } operation - state - task get_return_object () noexcept { return operation - state - task { coroutine_handle < connect - awaitable - promise >:: from_promise ( * this )}; } env_of_t < DR > get_env () const noexcept { return execution :: get_env ( rcvr ); } private : DR & rcvr ; // exposition only }; } -
Let
be the following exposition-only class:operation - state - task namespace std :: execution { struct operation - state - task { using operation_state_concept = operation_state_t ; using promise_type = connect - awaitable - promise ; explicit operation - state - task ( coroutine_handle <> h ) noexcept : coro ( h ) {} operation - state - task ( operation - state - task && o ) noexcept : coro ( exchange ( o . coro , {})) {} ~ operation - state - task () { if ( coro ) coro . destroy (); } void start () & noexcept { coro . resume (); } private : coroutine_handle <> coro ; // exposition only }; } -
Let
name the typeV
, letawait - result - type < DS , connect - awaitable - promise >
name the type:Sigs completion_signatures < SET - VALUE - SIG ( V ), // see [exec.snd.concepts] set_error_t ( exception_ptr ), set_stopped_t () > and let
be an exposition-only coroutine defined as follows:connect - awaitable namespace std :: execution { template < class Fun , class ... Ts > auto suspend - complete ( Fun fun , Ts && ... as ) noexcept { // exposition only auto fn = [ & , fun ]() noexcept { fun ( std :: forward < Ts > ( as )...); }; struct awaiter { decltype ( fn ) fn ; static constexpr bool await_ready () noexcept { return false; } void await_suspend ( coroutine_handle <> ) noexcept { fn (); } [[ noreturn ]] void await_resume () noexcept { unreachable (); } }; return awaiter { fn }; } operation - state - task connect - awaitable ( DS sndr , DR rcvr ) requires receiver_of < DR , Sigs > { exception_ptr ep ; try { if constexpr ( same_as < V , void > ) { co_await std :: move ( sndr ); co_await suspend - complete ( set_value , std :: move ( rcvr )); } else { co_await suspend - complete ( set_value , std :: move ( rcvr ), co_await std :: move ( sndr )); } } catch (...) { ep = current_exception (); } co_await suspend - complete ( set_error , std :: move ( rcvr ), std :: move ( ep )); } } -
The expression connect(sndr, rcvr) is expression-equivalent to:
-
new_sndr.connect(rcvr) if that expression is well-formed.
-
Mandates: The type of the expression above satisfies operation_state.
-
Otherwise, connect-awaitable(new_sndr, rcvr).
-
Mandates: sender<Sndr> && receiver<Rcvr> is true.
-
34.9.10. Sender factories [exec.factories]
34.9.10.1. execution :: schedule
[exec.schedule]
-
schedule obtains a schedule sender ([async.ops]) from a scheduler.
-
The name schedule denotes a customization point object. For a subexpression sch, the expression schedule(sch) is expression-equivalent to sch.schedule().
-
If the expression get_completion_scheduler<set_value_t>(get_env(sch.schedule())) == sch is ill-formed or evaluates to false, the behavior of calling schedule(sch) is undefined.
-
Mandates: The type of sch.schedule() satisfies sender.
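A non-normative usage sketch with run_loop, whose scheduler is the simplest standard way to obtain a schedule sender.

#include <execution>

namespace ex = std::execution;

void demo() {
  ex::run_loop loop;
  ex::scheduler auto sch = loop.get_scheduler();

  // schedule(sch) is a sender that completes with no values on an execution
  // agent belonging to loop; its value-completion scheduler compares equal to sch.
  ex::sender auto begin = ex::schedule(sch);
  (void)begin;
}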
-
34.9.10.2. execution :: just
, execution :: just_error
, execution :: just_stopped
[exec.just]
-
just, just_error, and just_stopped are sender factories whose asynchronous operations complete synchronously in their start operation with a value completion operation, an error completion operation, or a stopped completion operation respectively.
-
The names just, just_error, and just_stopped denote customization point objects. Let just-cpo be one of just, just_error, or just_stopped. For a pack of subexpressions ts, let Ts be the pack of types decltype((ts)). The expression just-cpo(ts...) is ill-formed if:
-
(movable-value<Ts> && ...) is false, or
-
just-cpo is just_error and sizeof...(ts) == 1 is false, or
-
just-cpo is just_stopped and sizeof...(ts) == 0 is false;

Otherwise, it is expression-equivalent to make-sender(just-cpo, product-type{ts...}).
-
For just, just_error, and just_stopped, let set-cpo be set_value, set_error, and set_stopped respectively. The exposition-only class template impls-for ([exec.snd.general]) is specialized for just-cpo as follows:

namespace std::execution {
  template<>
  struct impls-for<decayed-typeof<just-cpo>> : default-impls {
    static constexpr auto start =
      [](auto& state, auto& rcvr) noexcept -> void {
        auto& [...ts] = state;
        set-cpo(std::move(rcvr), std::move(ts)...);
      };
  };
}
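A non-normative usage sketch of the just family combined with then, upon_error, and sync_wait.

#include <execution>
#include <string>

namespace ex = std::execution;

void demo() {
  // just(13) completes immediately with the value 13 when started.
  auto [v] = std::this_thread::sync_wait(
      ex::just(13) | ex::then([](int i) { return i + 1; })).value();
  // v == 14

  // just_error carries an error; upon_error translates it back into a value.
  auto [msg] = std::this_thread::sync_wait(
      ex::just_error(42)
    | ex::upon_error([](int) { return std::string("recovered"); })).value();
  (void)v; (void)msg;
}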
34.9.10.3. execution :: read_env
[exec.read.env]
-
read_env is a sender factory for a sender whose asynchronous operation completes synchronously in its start operation with a value completion result equal to a value read from the receiver's associated environment.
-
read_env is a customization point object. For some query object q, the expression read_env(q) is expression-equivalent to make-sender(read_env, q).
-
The exposition-only class template impls-for ([exec.snd.general]) is specialized for read_env as follows:

namespace std::execution {
  template<>
  struct impls-for<decayed-typeof<read_env>> : default-impls {
    static constexpr auto start =
      [](auto query, auto& rcvr) noexcept -> void {
        TRY-SET-VALUE(rcvr, query(get_env(rcvr)));
      };
  };
}
34.9.11. Sender adaptors [exec.adapt]
34.9.11.1. General [exec.adapt.general]
-
[exec.adapt] specifies a set of sender adaptors.
-
The bitwise inclusive OR operator is overloaded for the purpose of creating sender chains. The adaptors also support function call syntax with equivalent semantics.
-
Unless otherwise specified:
-
A sender adaptor is prohibited from causing observable effects, apart from moving and copying its arguments, before the returned sender is connected with a receiver using connect, and start is called on the resulting operation state.
-
A parent sender ([async.ops]) with a single child sender sndr has an associated attribute object equal to FWD-ENV(get_env(sndr)) ([exec.fwd.env]).
-
A parent sender with more than one child sender has an associated attributes object equal to empty_env{}.
-
When a parent sender is connected to a receiver rcvr, any receiver used to connect a child sender has an associated environment equal to FWD-ENV(get_env(rcvr)).

These requirements apply to any function that is selected by the implementation of the sender adaptor.
-
-
If a sender returned from a sender adaptor specified in [exec.adapt] is specified to include set_error_t(Err) among its set of completion signatures where decay_t<Err> denotes the type exception_ptr, but the implementation does not potentially evaluate an error completion operation with an exception_ptr argument, the implementation is allowed to omit the exception_ptr error completion signature from the set.
34.9.11.2. Sender adaptor closure objects [exec.adapt.objects]
-
A pipeable sender adaptor closure object is a function object that accepts one or more sender arguments and returns a sender. For a pipeable sender adaptor closure object c and an expression sndr such that decltype((sndr)) models sender, the following expressions are equivalent and yield a sender:

c(sndr)
sndr | c

Given an additional pipeable sender adaptor closure object d, the expression c | d produces another pipeable sender adaptor closure object e:

e is a perfect forwarding call wrapper ([func.require]) with the following properties:
-
Its target object is an object d2 of type decltype(auto(d)) direct-non-list-initialized with d.
-
It has one bound argument entity, an object c2 of type decltype(auto(c)) direct-non-list-initialized with c.
-
Its call pattern is d2(c2(arg)), where arg is the argument used in a function call expression of e.

The expression c | d is well-formed if and only if the initializations of the state entities ([func.def]) of e are all well-formed.
-
An object t of type T is a pipeable sender adaptor closure object if T models derived_from<sender_adaptor_closure<T>>, T has no other base classes of type sender_adaptor_closure<U> for any other type U, and T does not satisfy sender.
-
The template parameter D for sender_adaptor_closure can be an incomplete type. Before any expression of type cv D appears as an operand to the | operator, D shall be complete and model derived_from<sender_adaptor_closure<D>>. The behavior of an expression involving an object of type cv D as an operand to the | operator is undefined if overload resolution selects a program-defined operator| function.
-
A pipeable sender adaptor object is a customization point object that accepts a sender as its first argument and returns a sender.
-
If a pipeable sender adaptor object accepts only one argument, then it is a pipeable sender adaptor closure object.
-
If a pipeable sender adaptor object adaptor accepts more than one argument, then let sndr be an expression such that decltype((sndr)) models sender, let args... be arguments such that adaptor(sndr, args...) is a well-formed expression as specified below, and let BoundArgs be a pack that denotes decltype(auto(args))... . The expression adaptor(args...) produces a pipeable sender adaptor closure object f that is a perfect forwarding call wrapper with the following properties:
-
Its target object is a copy of adaptor.
-
Its bound argument entities bound_args consist of objects of types BoundArgs... direct-non-list-initialized with std::forward<decltype((args))>(args)..., respectively.
-
Its call pattern is adaptor(rcvr, bound_args...), where rcvr is the argument used in a function call expression of f.

The expression adaptor(args...) is well-formed if and only if the initializations of the bound argument entities of the result, as specified above, are all well-formed.
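A non-normative illustration of these rules: a partially applied adaptor such as then(f) is a pipeable sender adaptor closure object, closures compose with |, and applying the composition to a sender is equivalent to the nested calls.

#include <execution>

namespace ex = std::execution;

void demo() {
  auto add_one  = ex::then([](int i) { return i + 1; });     // closure object
  auto to_float = ex::then([](int i) { return float(i); });  // closure object
  auto both     = add_one | to_float;                        // composed closure

  ex::sender auto a = ex::just(1) | both;                 // pipeline syntax
  ex::sender auto b = to_float(add_one(ex::just(1)));     // equivalent nested calls
  (void)a; (void)b;
}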
34.9.11.3. execution :: starts_on
[exec.starts.on]
-
adapts an input sender into a sender that will start on an execution agent belonging to a particular scheduler’s associated execution resource.starts_on -
The name
denotes a customization point object. For subexpressionsstarts_on
andsch
, ifsndr
does not satisfydecltype (( sch ))
, orscheduler
does not satisfydecltype (( sndr ))
,sender
is ill-formed.starts_on ( sch , sndr ) -
Otherwise, the expression
is expression-equivalent to:starts_on ( sch , sndr ) transform_sender ( query - or - default ( get_domain , sch , default_domain ()), make - sender ( starts_on , sch , sndr )) except that
is evaluated only once.sch -
Let
andout_sndr
be subexpressions such thatenv
isOutSndr
. Ifdecltype (( out_sndr ))
issender - for < OutSndr , starts_on_t > false
, then the expressions
andstarts_on . transform_env ( out_sndr , env )
are ill-formed; otherwise:starts_on . transform_sender ( out_sndr , env ) -
is equivalent to:starts_on . transform_env ( out_sndr , env ) auto && [ _ , sch , _ ] = out_sndr ; return JOIN - ENV ( SCHED - ENV ( sch ), FWD - ENV ( env )); -
is equivalent to:starts_on . transform_sender ( out_sndr , env ) auto && [ _ , sch , sndr ] = out_sndr ; return let_value ( schedule ( sch ), [ sndr = std :: forward_like < OutSndr > ( sndr )]() mutable noexcept ( is_nothrow_move_constructible_v < decay_t > ) { return std :: move ( sndr ); });
-
-
Let
be a subexpression denoting a sender returned fromout_sndr
or one equal to such, and letstarts_on ( sch , sndr )
be the typeOutSndr
. Letdecltype (( out_sndr ))
be a subexpression denoting a receiver that has an environment of typeout_rcvr
such thatEnv
issender_in < OutSndr , Env > true
. Let
be an lvalue referring to the operation state that results from connectingop
without_sndr
. Callingout_rcvr
shall startstart ( op )
on an execution agent of the associated execution resource ofsndr
. If scheduling ontosch
fails, an error completion onsch
shall be executed on an unspecified execution agent.out_rcvr
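A non-normative usage sketch of starts_on. The pool parameter stands in for any hypothetical execution resource that provides a scheduler via a get_scheduler member; the name and member are assumptions, not part of this proposal.

#include <execution>

namespace ex = std::execution;

void demo(auto& pool) {                  // pool: hypothetical thread pool
  ex::scheduler auto sch = pool.get_scheduler();

  // work is started on an agent owned by the pool; the then-continuation
  // therefore also runs there unless a transition is requested.
  ex::sender auto work =
      ex::starts_on(sch, ex::just(3) | ex::then([](int i) { return i * i; }));

  auto [nine] = std::this_thread::sync_wait(std::move(work)).value();
  (void)nine;
}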
34.9.11.4. execution :: continues_on
[exec.continues.on]
-
adapts a sender into one that completes on the specified scheduler.continues_on -
The name
denotes a pipeable sender adaptor object. For subexpressionscontinues_on
andsch
, ifsndr
does not satisfydecltype (( sch ))
, orscheduler
does not satisfydecltype (( sndr ))
,sender
is ill-formed.continues_on ( sndr , sch ) -
Otherwise, the expression
is expression-equivalent to:continues_on ( sndr , sch ) transform_sender ( get - domain - early ( sndr ), make - sender ( continues_on , sch , sndr )) except that
is evaluated only once.sndr -
The exposition-only class template
is specialized forimpls - for
as follows:continues_on_t namespace std :: execution { template <> struct impls - for < continues_on_t > : default - impls { static constexpr auto get_attrs = []( const auto & data , const auto & child ) noexcept -> decltype ( auto ) { return JOIN - ENV ( SCHED - ATTRS ( data ), FWD - ENV ( get_env ( child ))); }; }; } -
Let
andsndr
be subexpressions such thatenv
isSndr
. Ifdecltype (( sndr ))
issender - for < Sndr , continues_on_t > false
, then the expression
is ill-formed; otherwise, it is equal to:continues_on . transform_sender ( sndr , env ) auto [ _ , data , child ] = sndr ; return schedule_from ( std :: move ( data ), std :: move ( child )); This causes the
sender to becomecontinues_on ( sndr , sch )
when it is connected with a receiver whose execution domain does not customizeschedule_from ( sch , sndr )
.continues_on -
Let
be a subexpression denoting a sender returned fromout_sndr
or one equal to such, and letcontinues_on ( sndr , sch )
be the typeOutSndr
. Letdecltype (( out_sndr ))
be a subexpression denoting a receiver that has an environment of typeout_rcvr
such thatEnv
issender_in < OutSndr , Env > true
. Let
be an lvalue referring to the operation state that results from connectingop
without_sndr
. Callingout_rcvr
shall startstart ( op )
on the current execution agent and execute completion operations onsndr
on an execution agent of the execution resource associated without_rcvr
. If scheduling ontosch
fails, an error completion onsch
shall be executed on an unspecified execution agent.out_rcvr
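A non-normative usage sketch: continues_on transitions the rest of a pipeline onto another scheduler's execution resource. The two scheduler parameters are assumed to come from the caller.

#include <execution>

namespace ex = std::execution;

ex::sender auto compute_then_publish(ex::scheduler auto cpu_sch,
                                     ex::scheduler auto io_sch) {
  return ex::schedule(cpu_sch)
       | ex::then([] { return 123; })       // runs on cpu_sch's resource
       | ex::continues_on(io_sch)           // transition to io_sch's resource
       | ex::then([](int i) { return i; }); // runs on io_sch's resource
}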
34.9.11.5. execution :: schedule_from
[exec.schedule.from]
-
schedules work dependent on the completion of a sender onto a scheduler’s associated execution resource.schedule_from
is not meant to be used in user code; it is used in the implementation ofschedule_from
.continues_on -
The name
denotes a customization point object. For some subexpressionsschedule_from
andsch
, letsndr
beSch
anddecltype (( sch ))
beSndr
. Ifdecltype (( sndr ))
does not satisfySch
, orscheduler
does not satisfySndr
,sender
is ill-formed.schedule_from ( sch , sndr ) -
Otherwise, the expression
is expression-equivalent to:schedule_from ( sch , sndr ) transform_sender ( query - or - default ( get_domain , sch , default_domain ()), make - sender ( schedule_from , sch , sndr )) except that
is evaluated only once.sch -
The exposition-only class template
([exec.snd.general]) is specialized forimpls - for
as follows:schedule_from_t namespace std :: execution { template <> struct impls - for < schedule_from_t > : default - impls { static constexpr auto get - attrs = see below ; static constexpr auto get - state = see below ; static constexpr auto complete = see below ; }; } -
The member
is initialized with a callable object equivalent to the following lambda:impls - for < schedule_from_t >:: get - attrs []( const auto & data , const auto & child ) noexcept -> decltype ( auto ) { return JOIN - ENV ( SCHED - ATTRS ( data ), FWD - ENV ( get_env ( child ))); } -
The member
is initialized with a callable object equivalent to the following lambda:impls - for < schedule_from_t >:: get - state [] < class Sndr , class Rcvr > ( Sndr && sndr , Rcvr & rcvr ) noexcept ( see below ) requires sender_in < child - type < Sndr > , env_of_t < Rcvr >> { auto & [ _ , sch , child ] = sndr ; using sched_t = decltype ( auto ( sch )); using variant_t = see below ; using receiver_t = see below ; using operation_t = connect_result_t < schedule_result_t < sched_t > , receiver_t > ; constexpr bool nothrow = noexcept ( connect ( schedule ( sch ), receiver_t { nullptr })); struct state - type { Rcvr & rcvr ; // exposition only variant_t async - result ; // exposition only operation_t op - state ; // exposition only explicit state - type ( sched_t sch , Rcvr & rcvr ) noexcept ( nothrow ) : rcvr ( rcvr ), op - state ( connect ( schedule ( sch ), receiver_t { this })) {} }; return state - type { sch , rcvr }; } -
Objects of the local class
can be used to initialize a structured binding.state - type -
Let
be a pack of the arguments to theSigs
specialization named bycompletion_signatures
. Letcompletion_signatures_of_t < child - type < Sndr > , env_of_t < Rcvr >>
be an alias template that transforms a completion signatureas - tuple
into theTag ( Args ...)
specializationtuple
. Thendecayed - tuple < Tag , Args ... >
denotes the typevariant_t
, except with duplicate types removed.variant < monostate , as - tuple < Sigs > ... > -
is an alias for the following exposition-only class:receiver_t namespace std :: execution { struct receiver - type { using receiver_concept = receiver_t ; state - type * state ; // exposition only void set_value () && noexcept { visit ( [ this ] < class Tuple > ( Tuple & result ) noexcept -> void { if constexpr ( ! same_as < monostate , Tuple > ) { auto & [ tag , ... args ] = result ; tag ( std :: move ( state -> rcvr ), std :: move ( args )...); } }, state -> async - result ); } template < class Error > void set_error ( Error && err ) && noexcept { execution :: set_error ( std :: move ( state -> rcvr ), std :: forward < Error > ( err )); } void set_stopped () && noexcept { execution :: set_stopped ( std :: move ( state -> rcvr )); } decltype ( auto ) get_env () const noexcept { return FWD - ENV ( execution :: get_env ( state -> rcvr )); } }; } -
The expression in the
clause of the lambda isnoexcept true
if the construction of the returned
object is not potentially throwing; otherwise,state - type false
.
-
-
The member
is initialized with a callable object equivalent to the following lambda:impls - for < schedule_from_t >:: complete [] < class Tag , class ... Args > ( auto , auto & state , auto & rcvr , Tag , Args && ... args ) noexcept -> void { using result_t = decayed - tuple < Tag , Args ... > ; constexpr bool nothrow = is_nothrow_constructible_v < result_t , Tag , Args ... > ; TRY - EVAL ( rcvr , [ & ]() noexcept ( nothrow ) { state . async - result . template emplace < result_t > ( Tag (), std :: forward < Args > ( args )...); }()); if ( state . async - result . valueless_by_exception ()) return ; if ( state . async - result . index () == 0 ) return ; start ( state . op - state ); };
-
-
Let
be a subexpression denoting a sender returned fromout_sndr
or one equal to such, and letschedule_from ( sch , sndr )
be the typeOutSndr
. Letdecltype (( out_sndr ))
be a subexpression denoting a receiver that has an environment of typeout_rcvr
such thatEnv
issender_in < OutSndr , Env > true
. Let
be an lvalue referring to the operation state that results from connectingop
without_sndr
. Callingout_rcvr
shall startstart ( op )
on the current execution agent and execute completion operations onsndr
on an execution agent of the execution resource associated without_rcvr
. If scheduling ontosch
fails, an error completion onsch
shall be executed on an unspecified execution agent.out_rcvr
34.9.11.6. execution :: on
[exec.on]
-
The
sender adaptor has two forms:on -
, which starts a senderon ( sch , sndr )
on an execution agent belonging to a schedulersndr
’s associated execution resource and that, uponsch
’s completion, transfers execution back to the execution resource on which thesndr
sender was started.on -
, which upon completion of a senderon ( sndr , sch , closure )
, transfers execution to an execution agent belonging to a schedulersndr
’s associated execution resource, then executes a sender adaptor closuresch
with the async results of the sender, and that then transfers execution back to the execution resource on whichclosure
completed.sndr
-
-
The name
denotes a pipeable sender adaptor object. For subexpressionson
andsch
,sndr
is ill-formed if any of the following is true:on ( sch , sndr ) -
does not satisfydecltype (( sch ))
, orscheduler -
does not satisfydecltype (( sndr ))
andsender
is not a pipeable sender adaptor closure object ([exec.adapt.objects]), orsndr -
satisfiesdecltype (( sndr ))
andsender
is also a pipeable sender adaptor closure object.sndr
-
-
Otherwise, if
satisfiesdecltype (( sndr ))
, the expressionsender
is expression-equivalent to:on ( sch , sndr ) transform_sender ( query - or - default ( get_domain , sch , default_domain ()), make - sender ( on , sch , sndr )) except that
is evaluated only once.sch -
For subexpressions
,sndr
, andsch
, ifclosure
does not satisfydecltype (( sch ))
, orscheduler
does not satisfydecltype (( sndr ))
, orsender
is not a pipeable sender adaptor closure object ([exec.adapt.objects]), the expressionclosure
is ill-formed; otherwise, it is expression-equivalent to:on ( sndr , sch , closure ) transform_sender ( get - domain - early ( sndr ), make - sender ( on , product - type { sch , closure }, sndr )) except that
is evaluated only once.sndr -
Let
andout_sndr
be subexpressions, letenv
beOutSndr
, and letdecltype (( out_sndr ))
beEnv
. Ifdecltype (( env ))
issender - for < OutSndr , on_t > false
, then the expressions
andon . transform_env ( out_sndr , env )
are ill-formed; otherwise:on . transform_sender ( out_sndr , env ) -
Let
be an unspecified empty class type, and letnot - a - scheduler
be the exposition-only type:not - a - sender struct not - a - sender { using sender_concept = sender_t ; auto get_completion_signatures ( auto && ) const { return see below ; } }; where the member function
returns an object of a type that is not a specialization of theget_completion_signatures
class template.completion_signatures -
The expression
has effects equivalent to:on . transform_env ( out_sndr , env ) auto && [ _ , data , _ ] = out_sndr ; if constexpr ( scheduler < decltype ( data ) > ) { return JOIN - ENV ( SCHED - ENV ( std :: forward_like < OutSndr > ( data )), FWD - ENV ( std :: forward < Env > ( env ))); } else { return std :: forward < Env > ( env ); } -
The expression
has effects equivalent to:on . transform_sender ( out_sndr , env ) auto && [ _ , data , child ] = out_sndr ; if constexpr ( scheduler < decltype ( data ) > ) { auto orig_sch = query - with - default ( get_scheduler , env , not - a - scheduler ()); if constexpr ( same_as < decltype ( orig_sch ), not - a - scheduler > ) { return not - a - sender {}; } else { return continues_on ( starts_on ( std :: forward_like < OutSndr > ( data ), std :: forward_like < OutSndr > ( child )), std :: move ( orig_sch )); } } else { auto & [ sch , closure ] = data ; auto orig_sch = query - with - default ( get_completion_scheduler < set_value_t > , get_env ( child ), query - with - default ( get_scheduler , env , not - a - scheduler ())); if constexpr ( same_as < decltype ( orig_sch ), not - a - scheduler > ) { return not - a - sender {}; } else { return write - env ( continues_on ( std :: forward_like < OutSndr > ( closure )( continues_on ( write - env ( std :: forward_like < OutSndr > ( child ), SCHED - ENV ( orig_sch )), sch )), orig_sch ), SCHED - ENV ( sch )); } } -
Recommended practice: Implementations should use the return type of
to inform users that their usage ofnot - a - sender :: get_completion_signatures
is incorrect because there is no available scheduler onto which to restore execution.on
-
-
- Let out_sndr be a subexpression denoting a sender returned from on(sch, sndr) or one equal to such, and let OutSndr be the type decltype((out_sndr)). Let out_rcvr be a subexpression denoting a receiver that has an environment of type Env such that sender_in<OutSndr, Env> is true. Let op be an lvalue referring to the operation state that results from connecting out_sndr with out_rcvr. Calling start(op) shall:
  - Remember the current scheduler, get_scheduler(get_env(rcvr)).
  - Start sndr on an execution agent belonging to sch's associated execution resource.
  - Upon sndr's completion, transfer execution back to the execution resource associated with the scheduler remembered in step 1.
  - Forward sndr's async result to out_rcvr.

  If any scheduling operation fails, an error completion on out_rcvr shall be executed on an unspecified execution agent.
-
- Let out_sndr be a subexpression denoting a sender returned from on(sndr, sch, closure) or one equal to such, and let OutSndr be the type decltype((out_sndr)). Let out_rcvr be a subexpression denoting a receiver that has an environment of type Env such that sender_in<OutSndr, Env> is true. Let op be an lvalue referring to the operation state that results from connecting out_sndr with out_rcvr. Calling start(op) shall:
  - Remember the current scheduler, which is the first of the following expressions that is well-formed:
    - get_completion_scheduler<set_value_t>(get_env(sndr))
    - get_scheduler(get_env(rcvr))
  - Start sndr on the current execution agent.
  - Upon sndr's completion, transfer execution to an agent owned by sch's associated execution resource.
  - Forward sndr's async result as if by connecting and starting a sender closure(S), where S is a sender that completes synchronously with sndr's async result.
  - Upon completion of the operation started in step 4, transfer execution back to the execution resource associated with the scheduler remembered in step 1 and forward the operation's async result to out_rcvr.

  If any scheduling operation fails, an error completion on out_rcvr shall be executed on an unspecified execution agent.
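For illustration only (not part of the proposed wording), the following sketch shows the intended use of on(sch, sndr). It assumes a thread_pool object with a scheduler() member, as in the examples of § 1.3; sync_wait supplies the scheduler to which execution is transferred back.

    using namespace std::execution;

    scheduler auto pool = thread_pool.scheduler();                 // assumed execution resource
    sender auto work    = then(just(41), [](int i) { return i + 1; });

    // Start work on the pool's execution resource, then transfer back to the
    // scheduler associated with the connected receiver (here, sync_wait's run_loop).
    sender auto on_pool = on(pool, work);

    auto [v] = this_thread::sync_wait(std::move(on_pool)).value(); // v == 42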
34.9.11.7. execution::then, execution::upon_error, execution::upon_stopped [exec.then]
- then attaches an invocable as a continuation for an input sender's value completion operation. upon_error and upon_stopped do the same for the error and stopped completion operations respectively, sending the result of the invocable as a value completion.
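As an informative usage sketch (not wording): then chains a function onto a value completion, and upon_error turns an error completion back into a value completion.

    using namespace std::execution;

    sender auto s =
      then(just(6, 7), [](int a, int b) { return a * b; });            // 6 * 7 == 42

    sender auto with_fallback =
      upon_error(s, [](std::exception_ptr) noexcept { return -1; });   // error -> value

    auto [v] = this_thread::sync_wait(std::move(with_fallback)).value();  // v == 42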
- The names then, upon_error, and upon_stopped denote pipeable sender adaptor objects. Let the expression then-cpo be one of then, upon_error, or upon_stopped. For subexpressions sndr and f, if decltype((sndr)) does not satisfy sender, or decltype((f)) does not satisfy movable-value, then-cpo(sndr, f) is ill-formed.
- Otherwise, the expression then-cpo(sndr, f) is expression-equivalent to:

    transform_sender(get-domain-early(sndr), make-sender(then-cpo, f, sndr))

  except that sndr is evaluated only once.
For
,then
, andupon_error
, letupon_stopped
beset - cpo
,set_value
, andset_error
respectively. The exposition-only class templateset_stopped
([exec.snd.general]) is specialized forimpls - for
as follows:then - cpo namespace std :: execution { template <> struct impls - for < decayed - typeof < then - cpo >> : default - impls { static constexpr auto complete = [] < class Tag , class ... Args > ( auto , auto & fn , auto & rcvr , Tag , Args && ... args ) noexcept -> void { if constexpr ( same_as < Tag , decayed - typeof < set - cpo >> ) { TRY - SET - VALUE ( rcvr , invoke ( std :: move ( fn ), std :: forward < Args > ( args )...)); } else { Tag ()( std :: move ( rcvr ), std :: forward < Args > ( args )...); } }; }; } -
- The expression then-cpo(sndr, f) has undefined behavior unless it returns a sender out_sndr that:
  - Invokes f or a copy of such with the value, error, or stopped result datums of sndr for then, upon_error, and upon_stopped respectively, using the result value of f as out_sndr's value completion, and
  - Forwards all other completion operations unchanged.
-
34.9.11.8. execution::let_value, execution::let_error, execution::let_stopped [exec.let]
- let_value, let_error, and let_stopped transform a sender's value, error, and stopped completions respectively into a new child asynchronous operation by passing the sender's result datums to a user-specified callable, which returns a new sender that is connected and started.
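A brief usage sketch (informative): the callable passed to let_value receives the predecessor's values and returns a new sender whose completions become the result.

    using namespace std::execution;

    sender auto s = let_value(just(42), [](int i) {
      // The returned sender is connected and started as a child operation; the
      // input sender's values are kept alive in the operation state meanwhile.
      return just(i / 2);
    });

    auto [v] = this_thread::sync_wait(std::move(s)).value();  // v == 21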
- For let_value, let_error, and let_stopped, let set-cpo be set_value, set_error, and set_stopped respectively. Let the expression let-cpo be one of let_value, let_error, or let_stopped. For a subexpression sndr, let let-env(sndr) be expression-equivalent to the first well-formed expression below:
  - SCHED-ENV(get_completion_scheduler<decayed-typeof<set-cpo>>(get_env(sndr)))
  - MAKE-ENV(get_domain, get_domain(get_env(sndr)))
  - (void(sndr), empty_env{})
-
- The names let_value, let_error, and let_stopped denote pipeable sender adaptor objects. For subexpressions sndr and f, let F be the decayed type of f. If decltype((sndr)) does not satisfy sender or if decltype((f)) does not satisfy movable-value, the expression let-cpo(sndr, f) is ill-formed. If F does not satisfy invocable, the expression let_stopped(sndr, f) is ill-formed.
- Otherwise, the expression let-cpo(sndr, f) is expression-equivalent to:

    transform_sender(get-domain-early(sndr), make-sender(let-cpo, f, sndr))

  except that sndr is evaluated only once.
The exposition-only class template
([exec.snd.general]) is specialized forimpls - for
as follows:let - cpo namespace std :: execution { template < class State , class Rcvr , class ... Args > void let - bind ( State & state , Rcvr & rcvr , Args && ... args ); // exposition only template <> struct impls - for < decayed - typeof < let - cpo >> : default - impls { static constexpr auto get - state = see below ; static constexpr auto complete = see below ; }; } -
Let
denote the following exposition-only class template:receiver2 namespace std :: execution { template < class Rcvr , class Env > struct receiver2 { using receiver_concept = receiver_t ; template < class ... Args > void set_value ( Args && ... args ) && noexcept { execution :: set_value ( std :: move ( rcvr ), std :: forward < Args > ( args )...); } template < class Error > void set_error ( Error && err ) && noexcept { execution :: set_error ( std :: move ( rcvr ), std :: forward < Error > ( err )); } void set_stopped () && noexcept { execution :: set_stopped ( std :: move ( rcvr )); } decltype ( auto ) get_env () const noexcept { return JOIN - ENV ( env , FWD - ENV ( execution :: get_env ( rcvr ))); } Rcvr & rcvr ; // exposition only Env env ; // exposition only }; } -
is initialized with a callable object equivalent to the following:impls - for < decayed - typeof < let - cpo >>:: get - state [] < class Sndr , class Rcvr > ( Sndr && sndr , Rcvr & rcvr ) requires see below { auto & [ _ , fn , child ] = sndr ; using fn_t = decay_t < decltype ( fn ) > ; using env_t = decltype ( let - env ( child )); using args_variant_t = see below ; using ops2_variant_t = see below ; struct state - type { fn_t fn ; // exposition only env_t env ; // exposition only args_variant_t args ; // exposition only ops2_variant_t ops2 ; // exposition only }; return state - type { std :: forward_like < Sndr > ( fn ), let - env ( child ), {}, {}}; } -
Let
be a pack of the arguments to theSigs
specialization named bycompletion_signatures
. Letcompletion_signatures_of_t < child - type < Sndr > , env_of_t < Rcvr >>
be a pack of those types inLetSigs
with a return type ofSigs
. Letdecayed - typeof < set - cpo >
be an alias template such thatas - tuple
denotes the typeas - tuple < Tag ( Args ...) >
. Thendecayed - tuple < Args ... >
denotes the typeargs_variant_t
except with duplicate types removed.variant < monostate , as - tuple < LetSigs > ... > -
Given a type
and a packTag
, letArgs
be an alias template such thatas - sndr2
denotes the typeas - sndr2 < Tag ( Args ...) >
. Thencall - result - t < Fn , decay_t < Args >& ... >
denotes the typeops2_variant_t
except with duplicate types removed.variant < monostate , connect_result_t < as - sndr2 < LetSigs > , receiver2 < Rcvr , Env >> ... > -
The requires-clause constraining the above lambda is satisfied if and only if the types
andargs_variant_t
are well-formed.ops2_variant_t
-
-
The exposition-only function template
has effects equivalent to:let - bind using args_t = decayed - tuple < Args ... > ; auto mkop2 = [ & ] { return connect ( apply ( std :: move ( state . fn ), state . args . template emplace < args_t > ( std :: forward < Args > ( args )...)), receiver2 { rcvr , std :: move ( state . env )}); }; start ( state . ops2 . template emplace < decltype ( mkop2 ()) > ( emplace - from { mkop2 })); -
is initialized with a callable object equivalent to the following:impls - for < decayed - typeof < let - cpo >>:: complete [] < class Tag , class ... Args > ( auto , auto & state , auto & rcvr , Tag , Args && ... args ) noexcept -> void { if constexpr ( same_as < Tag , decayed - typeof < set - cpo >> ) { TRY - EVAL ( rcvr , let - bind ( state , rcvr , std :: forward < Args > ( args )...)); } else { Tag ()( std :: move ( rcvr ), std :: forward < Args > ( args )...); } }
-
-
- Let sndr and env be subexpressions, and let Sndr be decltype((sndr)). If sender-for<Sndr, decayed-typeof<let-cpo>> is false, then the expression let-cpo.transform_env(sndr, env) is ill-formed. Otherwise, it is equal to JOIN-ENV(let-env(sndr), FWD-ENV(env)).
- Let the subexpression out_sndr denote the result of the invocation let-cpo(sndr, f) or an object equal to such, and let the subexpression rcvr denote a receiver such that the expression connect(out_sndr, rcvr) is well-formed. The expression connect(out_sndr, rcvr) has undefined behavior unless it creates an asynchronous operation ([async.ops]) that, when started:
  - invokes f when set-cpo is called with sndr's result datums,
  - makes its completion dependent on the completion of a sender returned by f, and
  - propagates the other completion operations sent by sndr.
-
34.9.11.9. execution::bulk [exec.bulk]
- bulk runs a task repeatedly for every index in an index space.
- The name bulk denotes a pipeable sender adaptor object. For subexpressions sndr, shape, and f, let Shape be decltype(auto(shape)). If decltype((sndr)) does not satisfy sender, or if Shape does not satisfy integral, or if decltype((f)) does not satisfy movable-value, bulk(sndr, shape, f) is ill-formed.
- Otherwise, the expression bulk(sndr, shape, f) is expression-equivalent to:

    transform_sender(get-domain-early(sndr), make-sender(bulk, product-type{shape, f}, sndr))

  except that sndr is evaluated only once.
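A brief usage sketch (informative), doubling every element of a vector: shape is the number of iterations and the callable is invoked once per index.

    using namespace std::execution;

    std::vector<int> data{1, 2, 3, 4};

    sender auto work = bulk(just(), data.size(),
                            [&data](std::size_t i) { data[i] *= 2; });

    this_thread::sync_wait(std::move(work));   // data == {2, 4, 6, 8}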
The exposition-only class template
([exec.snd.general]) is specialized forimpls - for
as follows:bulk_t namespace std :: execution { template <> struct impls - for < bulk_t > : default - impls { static constexpr auto complete = see below ; }; } -
The member
is initialized with a callable object equivalent to the following lambda:impls - for < bulk_t >:: complete [] < class Index , class State , class Rcvr , class Tag , class ... Args > ( Index , State & state , Rcvr & rcvr , Tag , Args && ... args ) noexcept -> void requires see below { if constexpr ( same_as < Tag , set_value_t > ) { auto & [ shape , f ] = state ; constexpr bool nothrow = noexcept ( f ( auto ( shape ), args ...)); TRY - EVAL ( rcvr , [ & ]() noexcept ( nothrow ) { for ( decltype ( auto ( shape )) i = 0 ; i < shape ; ++ i ) { f ( auto ( i ), args ...); } Tag ()( std :: move ( rcvr ), std :: forward < Args > ( args )...); }()); } else { Tag ()( std :: move ( rcvr ), std :: forward < Args > ( args )...); } } -
- The expression in the requires-clause of the lambda above is true if and only if Tag denotes a type other than set_value_t or if the expression f(auto(shape), args...) is well-formed.
-
-
-
- Let the subexpression out_sndr denote the result of the invocation bulk(sndr, shape, f) or an object equal to such, and let the subexpression rcvr denote a receiver such that the expression connect(out_sndr, rcvr) is well-formed. The expression connect(out_sndr, rcvr) has undefined behavior unless it creates an asynchronous operation ([async.ops]) that, when started:
  - on a value completion operation, invokes f(i, args...) for every i of type Shape from 0 to shape, where args is a pack of lvalue subexpressions referring to the value completion result datums of the input sender, and
  - propagates all completion operations sent by sndr.
-
34.9.11.10. execution::split [exec.split]
- split adapts an arbitrary sender into a sender that can be connected multiple times.
- Let split-env be the type of an environment such that, given an instance env, the expression get_stop_token(env) is well-formed and has type inplace_stop_token.
- The name split denotes a pipeable sender adaptor object. For a subexpression sndr, let Sndr be decltype((sndr)). If sender_in<Sndr, split-env> is false, split(sndr) is ill-formed.
- Otherwise, the expression split(sndr) is expression-equivalent to:

    transform_sender(get-domain-early(sndr), make-sender(split, {}, sndr))

  except that sndr is evaluated only once.
- The default implementation of transform_sender will have the effect of connecting the sender to a receiver. It will return a sender with a different tag type.
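A brief usage sketch (informative): split lets one result be consumed by more than one downstream sender; the underlying work runs only once.

    using namespace std::execution;

    sender auto expensive = then(just(), [] { return 42; });   // stand-in for real work
    sender auto shared    = split(expensive);                  // connectable multiple times

    sender auto both = when_all(
      then(shared, [](int i) { return i + 1; }),
      then(shared, [](int i) { return i + 2; }));

    auto [a, b] = this_thread::sync_wait(std::move(both)).value();  // a == 43, b == 44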
-
-
Let
denote the following exposition-only class template:local - state namespace std :: execution { struct local - state - base { // exposition only virtual ~ local - state - base () = default ; virtual void notify () noexcept = 0 ; // exposition only }; template < class Sndr , class Rcvr > struct local - state : local - state - base { // exposition only using on - stop - callback = // exposition only stop_callback_of_t < stop_token_of_t < env_of_t < Rcvr >> , on - stop - request > ; local - state ( Sndr && sndr , Rcvr & rcvr ) noexcept ; ~ local - state (); void notify () noexcept override ; private : optional < on - stop - callback > on_stop ; // exposition only shared - state < Sndr >* sh_state ; // exposition only Rcvr * rcvr ; // exposition only }; } -
local - state ( Sndr && sndr , Rcvr & rcvr ) noexcept ; -
Effects: Equivalent to:
auto & [ _ , data , _ ] = sndr ; this -> sh_state = data . sh_state . get (); this -> sh_state -> inc - ref (); this -> rcvr = addressof ( rcvr );
-
-
~ local - state (); -
Effects: Equivalent to:
sh_state -> dec - ref ();
-
-
void notify () noexcept override ; -
Effects: Equivalent to:
on_stop . reset (); visit ( [ this ]( const auto & tupl ) noexcept -> void { apply ( [ this ]( auto tag , const auto & ... args ) noexcept -> void { tag ( std :: move ( * rcvr ), args ...); }, tupl ); }, sh_state -> result );
-
-
-
Let
denote the following exposition-only class template:split - receiver namespace std :: execution { template < class Sndr > struct split - receiver { using receiver_concept = receiver_t ; template < class Tag , class ... Args > void complete ( Tag , Args && ... args ) noexcept { // exposition only using tuple_t = decayed - tuple < Tag , Args ... > ; try { sh_state -> result . template emplace < tuple_t > ( Tag (), std :: forward < Args > ( args )...); } catch (...) { using tuple_t = tuple < set_error_t , exception_ptr > ; sh_state -> result . template emplace < tuple_t > ( set_error , current_exception ()); } sh_state -> notify (); } template < class ... Args > void set_value ( Args && ... args ) && noexcept { complete ( execution :: set_value , std :: forward < Args > ( args )...); } template < class Error > void set_error ( Error && err ) && noexcept { complete ( execution :: set_error , std :: forward < Error > ( err )); } void set_stopped () && noexcept { complete ( execution :: set_stopped ); } struct env { // exposition only shared - state < Sndr >* sh - state ; // exposition only inplace_stop_token query ( get_stop_token_t ) const noexcept { return sh - state -> stop_src . get_token (); } }; env get_env () const noexcept { return env { sh_state }; } shared - state < Sndr >* sh_state ; // exposition only }; } -
Let
denote the following exposition-only class template:shared - state namespace std :: execution { template < class Sndr > struct shared - state { using variant - type = see below ; // exposition only using state - list - type = see below ; // exposition only explicit shared - state ( Sndr && sndr ); void start - op () noexcept ; // exposition only void notify () noexcept ; // exposition only void inc - ref () noexcept ; // exposition only void dec - ref () noexcept ; // exposition only inplace_stop_source stop_src {}; // exposition only variant - type result {}; // exposition only state - list - type waiting_states ; // exposition only atomic < bool > completed { false}; // exposition only atomic < size_t > ref_count { 1 }; // exposition only connect_result_t < Sndr , split - receiver < Sndr >> op_state ; // exposition only }; } -
Let
be a pack of the arguments to theSigs
specialization named bycompletion_signatures
. For typecompletion_signatures_of_t < Sndr >
and packTag
, letArgs
be an alias template such thatas - tuple
denotes the typeas - tuple < Tag ( Args ...) >
. Thendecayed - tuple < Tag , Args ... >
denotes the typevariant - type
, but with duplicate types removed.variant < tuple < set_stopped_t > , tuple < set_error_t , exception_ptr > , as - tuple < Sigs > ... > -
Let
be a type that stores a list of pointers tostate - list - type
objects and that permits atomic insertion.local - state - base -
explicit shared - state ( Sndr && sndr ); -
Effects: Initializes
with the result ofop_state
.connect ( std :: forward < Sndr > ( sndr ), split - receiver { this }) -
Postcondition:
is empty, andwaiting_states
iscompleted false
.
-
-
void start - op () noexcept ; -
Effects: Calls
. Ifinc - ref ()
isstop_src . stop_requested () true
, calls
; otherwise, callsnotify ()
.start ( op_state )
-
-
void notify () noexcept ; -
Effects: Atomically does the following:
-
Sets
tocompleted true
, and -
Exchanges
with an empty list, storing the old value in a localwaiting_states
.prior_states
Then, for each pointer
inp
, callsprior_states
. Finally, callsp -> notify ()
.dec - ref () -
-
-
void inc - ref () noexcept ; -
Effects: Increments
.ref_count
-
-
void dec - ref () noexcept ; -
Effects: Decrements
. If the new value ofref_count
isref_count
, calls0
.delete this -
Synchronization: If
does not decrement thedec - ref ()
toref_count
then synchronizes with the call to0
that decrementsdec - ref ()
toref_count
.0
-
-
-
- Let split-impl-tag be an empty exposition-only class type. Given an expression sndr, the expression split.transform_sender(sndr) is equivalent to:

    auto&& [tag, _, child] = sndr;
    auto* sh_state = new shared-state{std::forward_like<decltype((sndr))>(child)};
    return make-sender(split-impl-tag(), shared-wrapper{sh_state, tag});

  where shared-wrapper is an exposition-only class that manages the reference count of the shared-state object pointed to by sh_state. shared-wrapper models copyable, with move operations nulling out the moved-from object, copy operations incrementing the reference count by calling sh_state->inc-ref(), and assignment operations performing a copy-and-swap operation. The destructor has no effect if sh_state is null; otherwise, it decrements the reference count by calling sh_state->dec-ref().
The exposition-only class template
([exec.snd.general]) is specialized forimpls - for
as follows:split - impl - tag namespace std :: execution { template <> struct impls - for < split - impl - tag > : default - impls { static constexpr auto get - state = see below ; static constexpr auto start = see below ; }; } -
The member
is initialized with a callable object equivalent to the following lambda expression:impls - for < split - impl - tag >:: get - state [] < class Sndr > ( Sndr && sndr , auto & rcvr ) noexcept { return local - state { std :: forward < Sndr > ( sndr ), rcvr }; } -
The member
is initialized with a callable object that has a function call operator equivalent to the following:impls - for < split - impl - tag >:: start template < class Sndr , class Rcvr > void operator ()( local - state < Sndr , Rcvr >& state , Rcvr & rcvr ) const noexcept ; -
Effects: If
isstate . sh_state -> completed true
, calls
and returns. Otherwise, does the following in order:state . notify () -
Calls:
state . on_stop . emplace ( get_stop_token ( get_env ( rcvr )), on - stop - request { state . sh_state -> stop_src }); -
Then atomically does the following:
-
Reads the value
ofc
, andstate . sh_state -> completed -
Inserts
intoaddressof ( state )
ifstate . sh_state -> waiting_states
isc false
.
-
-
If
isc true
, calls
and returns.state . notify () -
Otherwise, if
is the first item added toaddressof ( state )
, callsstate . sh_state -> waiting_states
.state . sh_state -> start - op ()
-
-
-
34.9.11.11. execution::when_all [exec.when.all]
- when_all and when_all_with_variant both adapt multiple input senders into a sender that completes when all input senders have completed. when_all only accepts senders with a single value completion signature and on success concatenates all the input senders' value result datums into its own value completion operation. when_all_with_variant(sndrs...) is semantically equivalent to when_all(into_variant(sndrs)...), where sndrs is a pack of subexpressions whose types model sender.
- The names when_all and when_all_with_variant denote customization point objects. Let sndrs be a pack of subexpressions, let Sndrs be a pack of the types decltype((sndrs))..., and let CD be the type common_type_t<decltype(get-domain-early(sndrs))...>. The expressions when_all(sndrs...) and when_all_with_variant(sndrs...) are ill-formed if any of the following is true:
  - sizeof...(sndrs) is 0, or
  - (sender<Sndrs> && ...) is false, or
  - CD is ill-formed.
- The expression when_all(sndrs...) is expression-equivalent to:

    transform_sender(CD(), make-sender(when_all, {}, sndrs...))
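A brief usage sketch (informative): when_all completes once every child has completed, concatenating the children's values.

    using namespace std::execution;

    sender auto a = just(1);
    sender auto b = then(just(), [] { return 2.5; });

    auto [i, d] = this_thread::sync_wait(when_all(std::move(a), std::move(b))).value();
    // i == 1, d == 2.5; an error or stopped completion from either child would
    // instead request stop on the other child and be propagated.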
The exposition-only class template
([exec.snd.general]) is specialized forimpls - for
as follows:when_all_t namespace std :: execution { template <> struct impls - for < when_all_t > : default - impls { static constexpr auto get - attrs = see below ; static constexpr auto get - env = see below ; static constexpr auto get - state = see below ; static constexpr auto start = see below ; static constexpr auto complete = see below ; }; } -
The member
is initialized with a callable object equivalent to the following lambda expression:impls - for < when_all_t >:: get - attrs []( auto && , auto && ... child ) noexcept { if constexpr ( same_as < CD , default_domain > ) { return empty_env (); } else { return MAKE - ENV ( get_domain , CD ()); } } -
The member
is initialized with a callable object equivalent to the following lambda expression:impls - for < when_all_t >:: get - env [] < class State , class Rcvr > ( auto && , State & state , const Receiver & rcvr ) noexcept { return JOIN - ENV ( MAKE - ENV ( get_stop_token , state . stop_src . get_token ()), get_env ( rcvr )); } -
The member
is initialized with a callable object equivalent to the following lambda expression:impls - for < when_all_t >:: get - state [] < class Sndr , class Rcvr > ( Sndr && sndr , Rcvr & rcvr ) noexcept ( e ) -> decltype ( e ) { return e ; } where
is the expression:e std :: forward < Sndr > ( sndr ). apply ( make - state < Rcvr > ()) and where
is the following exposition-only class template:make - state template < class Sndr , class Env > concept max -1 - sender - in = sender_in < Sndr , Env > && // exposition only ( tuple_size_v < value_types_of_t < Sndr , Env , tuple , tuple >> <= 1 ); enum class disposition { started , error , stopped }; // exposition only template < class Rcvr > struct make - state { template < max -1 - sender - in < env_of_t < Rcvr >> ... Sndrs > auto operator ()( auto , auto , Sndrs && ... sndrs ) const { using values_tuple = see below ; using errors_variant = see below ; using stop_callback = stop_callback_of_t < stop_token_of_t < env_of_t < Rcvr >> , on - stop - request > ; struct state - type { void arrive ( Rcvr & rcvr ) noexcept { if ( 0 == -- count ) { complete ( rcvr ); } } void complete ( Rcvr & rcvr ) noexcept ; // see below atomic < size_t > count { sizeof ...( sndrs )}; // exposition only inplace_stop_source stop_src {}; // exposition only atomic < disposition > disp { disposition :: started }; // exposition only errors_variant errors {}; // exposition only values_tuple values {}; // exposition only optional < stop_callback > on_stop { nullopt }; // exposition only }; return state - type {}; } }; -
- Let copy-fail be exception_ptr if decay-copying any of the child senders' result datums can potentially throw; otherwise, none-such, where none-such is an unspecified empty class type.
- The alias values_tuple denotes the type tuple<value_types_of_t<Sndrs, env_of_t<Rcvr>, decayed-tuple, optional>...> if that type is well-formed; otherwise, tuple<>.
- The alias errors_variant denotes the type variant<none-such, copy-fail, Es...> with duplicate types removed, where Es is the pack of the decayed types of all the child senders' possible error result datums.
The member
behaves as follows:void state :: complete ( Rcvr & rcvr ) noexcept -
If
is equal todisp
, evaluates:disposition :: started auto tie = [] < class ... T > ( tuple < T ... >& t ) noexcept { return tuple < T & ... > ( t ); }; auto set = [ & ]( auto & ... t ) noexcept { set_value ( std :: move ( rcvr ), std :: move ( t )...); }; on_stop . reset (); apply ( [ & ]( auto & ... opts ) noexcept { apply ( set , tuple_cat ( tie ( * opts )...)); }, values ); -
Otherwise, if
is equal todisp
, evaluates:disposition :: error on_stop . reset (); visit ( [ & ] < class Error > ( Error & error ) noexcept { if constexpr ( ! same_as < Error , none - such > ) { set_error ( std :: move ( rcvr ), std :: move ( error )); } }, errors ); -
Otherwise, evaluates:
on_stop . reset (); set_stopped ( std :: move ( rcvr ));
-
-
-
The member
is initialized with a callable object equivalent to the following lambda expression:impls - for < when_all_t >:: start [] < class State , class Rcvr , class ... Ops > ( State & state , Rcvr & rcvr , Ops & ... ops ) noexcept -> void { state . on_stop . emplace ( get_stop_token ( get_env ( rcvr )), on - stop - request { state . stop_src }); if ( state . stop_src . stop_requested ()) { state . on_stop . reset (); set_stopped ( std :: move ( rcvr )); } else { ( start ( ops ), ...); } } -
The member
is initialized with a callable object equivalent to the following lambda expression:impls - for < when_all_t >:: complete [] < class Index , class State , class Rcvr , class Set , class ... Args > ( this auto & complete , Index , State & state , Rcvr & rcvr , Set , Args && ... args ) noexcept -> void { if constexpr ( same_as < Set , set_error_t > ) { if ( disposition :: error != state . disp . exchange ( disposition :: error )) { state . stop_src . request_stop (); TRY - EMPLACE - ERROR ( state . errors , std :: forward < Args > ( args )...); } } else if constexpr ( same_as < Set , set_stopped_t > ) { auto expected = disposition :: started ; if ( state . disp . compare_exchange_strong ( expected , disposition :: stopped )) { state . stop_src . request_stop (); } } else if constexpr ( ! same_as < decltype ( State :: values ), tuple <>> ) { if ( state . disp == disposition :: started ) { auto & opt = get < Index :: value > ( state . values ); TRY - EMPLACE - VALUE ( complete , opt , std :: forward < Args > ( args )...); } } state . arrive ( rcvr ); } where
, for subexpressionsTRY - EMPLACE - ERROR ( v , e )
andv
, is equivalent to:e try { v . template emplace < decltype ( auto ( e )) > ( e ); } catch (...) { v . template emplace < exception_ptr > ( current_exception ()); } if the expression
is potentially throwing; otherwise,decltype ( auto ( e ))( e )
; and wherev . template emplace < decltype ( auto ( e )) > ( e )
, for subexpressionsTRY - EMPLACE - VALUE ( c , o , as ...)
,c
, and pack of subexpressionso
, is equivalent to:as try { o . emplace ( as ...); } catch (...) { c ( Index (), state , rcvr , set_error , current_exception ()); return ; } if the expression
is potentially throwing; otherwise,decayed - tuple < decltype ( as )... > { as ...}
.o . emplace ( as ...)
-
-
- The expression when_all_with_variant(sndrs...) is expression-equivalent to:

    transform_sender(CD(), make-sender(when_all_with_variant, {}, sndrs...));

- Given subexpressions sndr and env, if sender-for<decltype((sndr)), when_all_with_variant_t> is false, then the expression when_all_with_variant.transform_sender(sndr, env) is ill-formed; otherwise, it is equivalent to:

    auto&& [_, _, ...child] = sndr;
    return when_all(into_variant(std::forward_like<decltype((sndr))>(child))...);

  This causes the when_all_with_variant(sndrs...) sender to become when_all(into_variant(sndrs)...) when it is connected with a receiver whose execution domain does not customize when_all_with_variant.
34.9.11.12. execution::into_variant [exec.into.variant]
- into_variant adapts a sender with multiple value completion signatures into a sender with just one value completion signature consisting of a variant of tuples.
- The name into_variant denotes a pipeable sender adaptor object. For a subexpression sndr, let Sndr be decltype((sndr)). If Sndr does not satisfy sender, into_variant(sndr) is ill-formed.
- Otherwise, the expression into_variant(sndr) is expression-equivalent to:

    transform_sender(get-domain-early(sndr), make-sender(into_variant, {}, sndr))

  except that sndr is only evaluated once.
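A brief usage sketch (informative): into_variant collapses all value completions into one completion whose single value is a variant of tuples (here there is only one alternative).

    using namespace std::execution;

    sender auto s = into_variant(just(1, 2.5));
    // s completes with a single value of type variant<tuple<int, double>>.

    auto [v]    = this_thread::sync_wait(std::move(s)).value();
    auto [i, d] = std::get<0>(v);   // i == 1, d == 2.5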
The exposition-only class template
([exec.snd.general]) is specialized forimpls - for
as follows:into_variant namespace std :: execution { template <> struct impls - for < into_variant_t > : default - impls { static constexpr auto get - state = see below ; static constexpr auto complete = see below ; }; } -
The member
is initialized with a callable object equivalent to the following lambda:impls - for < into_variant_t >:: get - state [] < class Sndr , class Rcvr > ( Sndr && sndr , Rcvr & rcvr ) noexcept -> type_identity < value_types_of_t < child - type < Sndr > , env_of_t < Rcvr >>> { return {}; } -
The member
is initialized with a callable object equivalent to the following lambda:impls - for < into_variant_t >:: complete [] < class State , class Rcvr , class Tag , class ... Args > ( auto , State , Rcvr & rcvr , Tag , Args && ... args ) noexcept -> void { if constexpr ( same_as < Tag , set_value_t > ) { using variant_type = typename State :: type ; TRY - SET - VALUE ( rcvr , variant_type ( decayed - tuple < Args ... > { std :: forward < Args > ( args )...})); } else { Tag ()( std :: move ( rcvr ), std :: forward < Args > ( args )...); } }
-
34.9.11.13. execution::stopped_as_optional [exec.stopped.as.optional]
- stopped_as_optional maps a sender's stopped completion operation into a value completion operation as a disengaged optional. The sender's value completion operation is also converted into an optional. The result is a sender that never completes with stopped, reporting cancellation by completing with a disengaged optional.
- The name stopped_as_optional denotes a pipeable sender adaptor object. For a subexpression sndr, let Sndr be decltype((sndr)). The expression stopped_as_optional(sndr) is expression-equivalent to:

    transform_sender(get-domain-early(sndr), make-sender(stopped_as_optional, {}, sndr))

  except that sndr is only evaluated once.
- Let sndr and env be subexpressions such that Sndr is decltype((sndr)) and Env is decltype((env)). If sender-for<Sndr, stopped_as_optional_t> is false, or if the type single-sender-value-type<Sndr, Env> is ill-formed or void, then the expression stopped_as_optional.transform_sender(sndr, env) is ill-formed; otherwise, it is equivalent to:

    auto&& [_, _, child] = sndr;
    using V = single-sender-value-type<Sndr, Env>;
    return let_stopped(
        then(std::forward_like<Sndr>(child),
             []<class... Ts>(Ts&&... ts) noexcept(is_nothrow_constructible_v<V, Ts...>) {
               return optional<V>(in_place, std::forward<Ts>(ts)...);
             }),
        []() noexcept { return just(optional<V>()); });
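A brief usage sketch (informative): a stopped completion surfaces as a disengaged optional rather than as a stopped signal.

    using namespace std::execution;

    sender auto maybe = stopped_as_optional(just(42));  // input must have a single, non-void value type

    auto [opt] = this_thread::sync_wait(std::move(maybe)).value();
    // opt is an engaged optional<int> holding 42; had the input completed with
    // set_stopped, opt would be a disengaged optional<int> instead.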
34.9.11.14. execution::stopped_as_error [exec.stopped.as.error]
- stopped_as_error maps an input sender's stopped completion operation into an error completion operation as a custom error type. The result is a sender that never completes with stopped, reporting cancellation by completing with an error.
- The name stopped_as_error denotes a pipeable sender adaptor object. For some subexpressions sndr and err, let Sndr be decltype((sndr)) and let Err be decltype((err)). If the type Sndr does not satisfy sender or if the type Err does not satisfy movable-value, stopped_as_error(sndr, err) is ill-formed. Otherwise, the expression stopped_as_error(sndr, err) is expression-equivalent to:

    transform_sender(get-domain-early(sndr), make-sender(stopped_as_error, err, sndr))

  except that sndr is only evaluated once.
- Let sndr and env be subexpressions such that Sndr is decltype((sndr)) and Env is decltype((env)). If sender-for<Sndr, stopped_as_error_t> is false, then the expression stopped_as_error.transform_sender(sndr, env) is ill-formed; otherwise, it is equivalent to:

    auto&& [_, err, child] = sndr;
    using E = decltype(auto(err));
    return let_stopped(
        std::forward_like<Sndr>(child),
        [err = std::forward_like<Sndr>(err)]() mutable
            noexcept(is_nothrow_move_constructible_v<E>) {
          return just_error(std::move(err));
        });
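A brief usage sketch (informative); cancelled is a hypothetical user-defined error type:

    using namespace std::execution;

    struct cancelled { };   // hypothetical error type

    // s never completes with set_stopped; a stopped completion of the child is
    // reported as set_error(rcvr, cancelled{}) instead.
    sender auto s = stopped_as_error(just_stopped(), cancelled{});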
34.9.12. Sender consumers [exec.consumers]
34.9.12.1. this_thread::sync_wait [exec.sync.wait]
- this_thread::sync_wait and this_thread::sync_wait_with_variant are used to block the current thread of execution until the specified sender completes and to return its async result. sync_wait mandates that the input sender has exactly one value completion signature.
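A brief usage sketch (informative) of the three possible outcomes as seen by the caller:

    using namespace std::execution;

    if (auto result = this_thread::sync_wait(just(1, 2.5))) {
      auto [i, d] = *result;   // value completion: engaged optional<tuple<int, double>>
    } else {
      // stopped completion: disengaged optional
    }
    // An error completion would instead propagate out of sync_wait as an exception.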
Let
be the following exposition-only class type:sync - wait - env namespace std :: this_thread { struct sync - wait - env { execution :: run_loop * loop ; // exposition only auto query ( execution :: get_scheduler_t ) const noexcept { return loop -> get_scheduler (); } auto query ( execution :: get_delegation_scheduler_t ) const noexcept { return loop -> get_scheduler (); } }; } -
Let
andsync - wait - result - type
be exposition-only alias templates defined as follows:sync - wait - with - variant - result - type namespace std :: this_thread { template < execution :: sender_in < sync - wait - env > Sndr > using sync - wait - result - type = optional < execution :: value_types_of_t < Sndr , sync - wait - env , decayed - tuple , type_identity_t >> ; template < execution :: sender_in < sync - wait - env > Sndr > using sync - wait - with - variant - result - type = optional < execution :: value_types_of_t < Sndr , sync - wait - env >> ; } -
- The name this_thread::sync_wait denotes a customization point object. For a subexpression sndr, let Sndr be decltype((sndr)). If sender_in<Sndr, sync-wait-env> is false, the expression this_thread::sync_wait(sndr) is ill-formed. Otherwise, it is expression-equivalent to the following, except that sndr is evaluated only once:

    apply_sender(get-domain-early(sndr), sync_wait, sndr)

  Mandates:
  - The type sync-wait-result-type<Sndr> is well-formed.
  - same_as<decltype(e), sync-wait-result-type<Sndr>> is true, where e is the apply_sender expression above.
Let
andsync - wait - state
be the following exposition-only class templates:sync - wait - receiver namespace std :: this_thread { template < class Sndr > struct sync - wait - state { // exposition only execution :: run_loop loop ; // exposition only exception_ptr error ; // exposition only sync - wait - result - type < Sndr > result ; // exposition only }; template < class Sndr > struct sync - wait - receiver { // exposition only using receiver_concept = execution :: receiver_t ; sync - wait - state < Sndr >* state ; // exposition only template < class ... Args > void set_value ( Args && ... args ) && noexcept ; template < class Error > void set_error ( Error && err ) && noexcept ; void set_stopped () && noexcept ; sync - wait - env get_env () const noexcept { return { & state -> loop }; } }; } -
template < class ... Args > void set_value ( Args && ... args ) && noexcept ; -
Effects: Equivalent to:
try { state -> result . emplace ( std :: forward < Args > ( args )...); } catch (...) { state -> error = current_exception (); } state -> loop . finish ();
-
-
template < class Error > void set_error ( Error && err ) && noexcept ; -
Effects: Equivalent to:
state -> error = AS - EXCEPT - PTR ( std :: forward < Error > ( err )); // see [exec.general] state -> loop . finish ();
-
-
void set_stopped () && noexcept ; -
Effects: Equivalent to
.state -> loop . finish ()
-
-
-
- For a subexpression sndr, let Sndr be decltype((sndr)). If sender_to<Sndr, sync-wait-receiver<Sndr>> is false, the expression sync_wait.apply_sender(sndr) is ill-formed; otherwise, it is equivalent to:

    sync-wait-state<Sndr> state;
    auto op = connect(sndr, sync-wait-receiver<Sndr>{&state});
    start(op);

    state.loop.run();

    if (state.error) {
      rethrow_exception(std::move(state.error));
    }

    return std::move(state.result);
- The behavior of this_thread::sync_wait(sndr) is undefined unless:
  - It blocks the current thread of execution ([defns.block]) with forward progress guarantee delegation ([intro.progress]) until the specified sender completes. The default implementation of sync_wait achieves forward progress guarantee delegation by providing a run_loop scheduler via the get_delegation_scheduler query on the sync-wait-receiver's environment. The run_loop is driven by the current thread of execution.
  - It returns the specified sender's async results as follows:
    - For a value completion, the result datums are returned in a tuple in an engaged optional object.
    - For an error completion, an exception is thrown.
    - For a stopped completion, a disengaged optional object is returned.
- The name this_thread::sync_wait_with_variant denotes a customization point object. For a subexpression sndr, let Sndr be decltype(into_variant(sndr)). If sender_in<Sndr, sync-wait-env> is false, this_thread::sync_wait_with_variant(sndr) is ill-formed. Otherwise, it is expression-equivalent to the following, except sndr is evaluated only once:

    apply_sender(get-domain-early(sndr), sync_wait_with_variant, sndr)

  Mandates:
  - The type sync-wait-with-variant-result-type<Sndr> is well-formed.
  - same_as<decltype(e), sync-wait-with-variant-result-type<Sndr>> is true, where e is the apply_sender expression above.
- If callable<sync_wait_t, Sndr> is false, the expression sync_wait_with_variant.apply_sender(sndr) is ill-formed. Otherwise, it is equivalent to:

    using result_type = sync-wait-with-variant-result-type<Sndr>;

    if (auto opt_value = sync_wait(into_variant(sndr))) {
      return result_type(std::move(get<0>(*opt_value)));
    }

    return result_type(nullopt);
- The behavior of this_thread::sync_wait_with_variant(sndr) is undefined unless:
  - It blocks the current thread of execution ([defns.block]) with forward progress guarantee delegation ([intro.progress]) until the specified sender completes. The default implementation of sync_wait_with_variant achieves forward progress guarantee delegation by relying on the forward progress guarantee delegation provided by sync_wait.
  - It returns the specified sender's async results as follows:
    - For a value completion, the result datums are returned in an engaged optional object that contains a variant of tuples.
    - For an error completion, an exception is thrown.
    - For a stopped completion, a disengaged optional object is returned.
34.10. Sender/receiver utilities [exec.utils]
34.10.1. execution::completion_signatures [exec.utils.cmplsigs]
- completion_signatures is a type that encodes a set of completion signatures ([async.ops]).
[Example:
struct my_sender { using sender_concept = sender_t ; using completion_signatures = execution :: completion_signatures < set_value_t (), set_value_t ( int , float ), set_error_t ( exception_ptr ), set_error_t ( error_code ), set_stopped_t () > ; }; // Declares my_sender to be a sender that can complete by calling // one of the following for a receiver expression rcvr: // set_value(rcvr) // set_value(rcvr, int{...}, float{...}) // set_error(rcvr, exception_ptr{...}) // set_error(rcvr, error_code{...}) // set_stopped(rcvr) -- end example]
-
[exec.utils.cmplsigs] makes use of the following exposition-only entities:
template < class Fn > concept completion - signature = see below ; template < bool > struct indirect - meta - apply { template < template < class ... > class T , class ... As > using meta - apply = T < As ... > ; // exposition only }; template < class ... > concept always - true= true; // exposition only -
- A type Fn satisfies completion-signature if and only if it is a function type with one of the following forms:
  - set_value_t(Vs...), where Vs is a pack of object or reference types.
  - set_error_t(Err), where Err is an object or reference type.
  - set_stopped_t()
template < class Tag , valid - completion - signatures Completions , template < class ... > class Tuple , template < class ... > class Variant > using gather - signatures = see below ; -
- Let Fns be a pack of the arguments of the completion_signatures specialization named by Completions, let TagFns be a pack of the function types in Fns whose return types are Tag, and let Tsn be a pack of the function argument types in the n-th type in TagFns. Then, given two variadic templates Tuple and Variant, the type gather-signatures<Tag, Completions, Tuple, Variant> names the type

    META-APPLY(Variant, META-APPLY(Tuple, Ts0...), META-APPLY(Tuple, Ts1...), ..., META-APPLY(Tuple, Tsm-1...))

  where m is the size of the pack TagFns and META-APPLY(T, As...) is equivalent to:

    typename indirect-meta-apply<always-true<As...>>::template meta-apply<T, As...>;
The purpose of
is to make it valid to use non-variadic templates asMETA - APPLY
andVariant
arguments toTuple
.gather - signatures
-
-
namespace std :: execution { template < completion - signature ... Fns > struct completion_signatures {}; template < class Sndr , class Env = empty_env , template < class ... > class Tuple = decayed - tuple , template < class ... > class Variant = variant - or - empty > requires sender_in < Sndr , Env > using value_types_of_t = gather - signatures < set_value_t , completion_signatures_of_t < Sndr , Env > , Tuple , Variant > ; template < class Sndr , class Env = empty_env , template < class ... > class Variant = variant - or - empty > requires sender_in < Sndr , Env > using error_types_of_t = gather - signatures < set_error_t , completion_signatures_of_t < Sndr , Env > , type_identity_t , Variant > ; template < class Sndr , class Env = empty_env > requires sender_in < Sndr , Env > inline constexpr bool sends_stopped = ! same_as < type - list <> , gather - signatures < set_stopped_t , completion_signatures_of_t < Sndr , Env > , type - list , type - list >> ; }
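As an informative sketch (not wording), applying these aliases to the my_sender type from the example above yields:

    using namespace std::execution;

    static_assert(std::same_as<
      value_types_of_t<my_sender, empty_env, std::tuple, std::variant>,
      std::variant<std::tuple<>, std::tuple<int, float>>>);

    static_assert(std::same_as<
      error_types_of_t<my_sender, empty_env, std::variant>,
      std::variant<std::exception_ptr, std::error_code>>);

    static_assert(sends_stopped<my_sender, empty_env>);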
34.10.2. execution::transform_completion_signatures [exec.utils.tfxcmplsigs]
- transform_completion_signatures is an alias template used to transform one set of completion signatures into another. It takes a set of completion signatures and several other template arguments that apply modifications to each completion signature in the set to generate a new specialization of completion_signatures.
[Example:
// Given a sender Sndr and an environment Env, adapt the completion // signatures of Sndr by lvalue-ref qualifying the values, adding an additional // exception_ptr error completion if its not already there, and leaving the // other completion signatures alone. template < class ... Args > using my_set_value_t = completion_signatures < set_value_t ( add_lvalue_reference_t < Args > ...) > ; using my_completion_signatures = transform_completion_signatures < completion_signatures_of_t < Sndr , Env > , completion_signatures < set_error_t ( exception_ptr ) > , my_set_value_t > ; -- end example]
-
[exec.utils.tfxcmplsigs] makes use of the following exposition-only entities:
template < class ... As > using default - set - value = completion_signatures < set_value_t ( As ...) > ; template < class Err > using default - set - error = completion_signatures < set_error_t ( Err ) > ; -
namespace std :: execution { template < valid - completion - signatures InputSignatures , valid - completion - signatures AdditionalSignatures = completion_signatures <> , template < class ... > class SetValue = default - set - value , template < class > class SetError = default - set - error , valid - completion - signatures SetStopped = completion_signatures < set_stopped_t () >> using transform_completion_signatures = completion_signatures < see below > ; } -
shall name an alias template such that for any pack of typesSetValue
, the typeAs
is either ill-formed or elseSetValue < As ... >
is satisfied.valid - completion - signatures < SetValue < As ... >> -
shall name an alias template such that for any typeSetError
,Err
is either ill-formed or elseSetError < Err >
is satisfied.valid - completion - signatures < SetError < Err >>
Then:
-
Let
be a pack of the types in theVs ...
named bytype - list
.gather - signatures < set_value_t , InputSignatures , SetValue , type - list > -
Let
be a pack of the types in theEs ...
named bytype - list
, wheregather - signatures < set_error_t , InputSignatures , type_identity_t , error - list >
is an alias template such thaterror - list
iserror - list < Ts ... >
.type - list < SetError < Ts > ... > -
Let
name the typeSs
ifcompletion_signatures <>
is an alias for the typegather - signatures < set_stopped_t , InputSignatures , type - list , type - list >
; otherwise,type - list <>
.SetStopped
Then:
-
If any of the above types are ill-formed, then
is ill-formed.transform_completion_signatures < InputSignatures , AdditionalSignatures , SetValue , SetError , SetStopped > -
Otherwise,
is the typetransform_completion_signatures < InputSignatures , AdditionalSignatures , SetValue , SetError , SetStopped >
wherecompletion_signatures < Sigs ... >
is the unique set of types in all the template arguments of all theSigs ...
specializations in the setcompletion_signatures
.AdditionalSignatures , Vs ..., Es ..., Ss
-
34.11. Execution contexts [exec.ctx]
34.11.1. execution::run_loop [exec.run.loop]
- A run_loop is an execution resource on which work can be scheduled. It maintains a thread-safe first-in-first-out queue of work. Its run() member function removes elements from the queue and executes them in a loop on the thread of execution that calls run().
- A run_loop instance has an associated count that corresponds to the number of work items that are in its queue. Additionally, a run_loop instance has an associated state that can be one of starting, running, or finishing.
- Concurrent invocations of the member functions of run_loop other than run and its destructor do not introduce data races. The member functions pop_front, push_back, and finish execute atomically.
- Recommended practice: Implementations are encouraged to use an intrusive queue of operation states to hold the work units to make scheduling allocation-free.
namespace std :: execution { class run_loop { // [exec.run.loop.types] Associated types class run - loop - scheduler ; // exposition only class run - loop - sender ; // exposition only struct run - loop - opstate - base { // exposition only virtual void execute () = 0 ; // exposition only run_loop * loop ; // exposition only run - loop - opstate - base * next ; // exposition only }; template < class Rcvr > using run - loop - opstate = unspecified ; // exposition only // [exec.run.loop.members] Member functions: run - loop - opstate - base * pop - front (); // exposition only void push - back ( run - loop - opstate - base * ); // exposition only public : // [exec.run.loop.ctor] construct/copy/destroy run_loop () noexcept ; run_loop ( run_loop && ) = delete ; ~ run_loop (); // [exec.run.loop.members] Member functions: run - loop - scheduler get_scheduler (); void run (); void finish (); }; }
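As an informative sketch of the intended protocol: one thread drives the loop with run(), other threads schedule work onto it, and finish() lets run() return once the queue drains. std::thread is used here only for illustration.

    using namespace std::execution;

    run_loop loop;

    std::thread driver([&] {
      loop.run();                     // executes queued work until finish() is called
    });

    scheduler auto sch = loop.get_scheduler();
    sender auto work   = then(schedule(sch), [] { /* runs on the driver thread */ });
    this_thread::sync_wait(std::move(work));

    loop.finish();                    // transition to finishing; run() returns when the queue is empty
    driver.join();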
34.11.1.1. Associated types [exec.run.loop.types]
class run - loop - scheduler ;
-
is an unspecified type that modelsrun - loop - scheduler
.scheduler -
Instances of
remain valid until the end of the lifetime of therun - loop - scheduler
instance from which they were obtained.run_loop -
Two instances of
compare equal if and only if they were obtained from the samerun - loop - scheduler
instance.run_loop -
Let
be an expression of typesch
. The expressionrun - loop - scheduler
has typeschedule ( sch )
and is not potentially-throwing ifrun - loop - sender
is not potentially-throwing.sch
class run - loop - sender ;
-
is an exposition-only type that satisfiesrun - loop - sender
. For any typesender
,Env
is:completion_signatures_of_t < run - loop - sender , Env > completion_signatures < set_value_t (), set_error_t ( exception_ptr ), set_stopped_t () > -
An instance of
remains valid until the end of the lifetime of its associatedrun - loop - sender
instance.run_loop -
Let
be an expression of typesndr
, letrun - loop - sender
be an expression such thatrcvr
isreceiver_of < decltype (( rcvr )), CS > true
where
is theCS
specialization above. Letcompletion_signatures
be eitherC
orset_value_t
. Then:set_stopped_t -
The expression
has typeconnect ( sndr , rcvr )
and is potentially-throwing if and only ifrun - loop - opstate < decay_t < decltype (( rcvr )) >>
is potentially-throwing.( void ( sndr ), auto ( rcvr )) -
The expression
is potentially-throwing if and only ifget_completion_scheduler < C > ( get_env ( sndr ))
is potentially-throwing, has typesndr
, and compares equal to therun - loop - scheduler
instance from whichrun - loop - scheduler
was obtained.sndr
-
template < class Rcvr > struct run - loop - opstate ;
-
inherits privately and unambiguously fromrun - loop - opstate < Rcvr >
.run - loop - opstate - base -
Let
be a non-o
lvalue of typeconst
, and letrun - loop - opstate < Rcvr >
be a non-REC ( o )
lvalue reference to an instance of typeconst
that was initialized with the expressionRcvr
passed to the invocation ofrcvr
that returnedconnect
. Then:o -
The object to which
refers remains valid for the lifetime of the object to whichREC ( o )
refers.o -
The type
overridesrun - loop - opstate < Rcvr >
such thatrun - loop - opstate - base :: execute ()
is equivalent to:o . execute () if ( get_stop_token ( REC ( o )). stop_requested ()) { set_stopped ( std :: move ( REC ( o ))); } else { set_value ( std :: move ( REC ( o ))); } -
The expression
is equivalent to:start ( o ) try { o . loop -> push - back ( addressof ( o )); } catch (...) { set_error ( std :: move ( REC ( o )), current_exception ()); }
-
34.11.1.2. Constructor and destructor [exec.run.loop.ctor]
run_loop () noexcept ;
- Postconditions: count is 0 and state is starting.

~run_loop();

- Effects: If count is not 0 or if state is running, invokes terminate(). Otherwise, has no effects.
34.11.1.3. Member functions [exec.run.loop.members]
run - loop - opstate - base * pop - front ();
- Effects: Blocks ([defns.block]) until one of the following conditions is true:
  - count is 0 and state is finishing, in which case pop-front returns nullptr; or
  - count is greater than 0, in which case an item is removed from the front of the queue, count is decremented by 1, and the removed item is returned.
void push - back ( run - loop - opstate - base * item );
- Effects: Adds item to the back of the queue and increments count by 1.
- Synchronization: This operation synchronizes with the pop-front operation that obtains item.
run - loop - scheduler get_scheduler ();
- Returns: An instance of run-loop-scheduler that can be used to schedule work onto this run_loop instance.
void run ();
-
Precondition: state is starting.
-
Effects: Sets the state to running. Then, equivalent to:
while ( auto * op = pop - front ()) { op -> execute (); } -
Remarks: When state changes, it does so without introducing data races.
void finish ();
-
Effects: Changes state to finishing.
-
- Synchronization: finish synchronizes with the pop-front operation that returns nullptr.
34.12. Coroutine utilities [exec.coro.utils]
34.12.1. execution::as_awaitable [exec.as.awaitable]
- as_awaitable transforms an object into one that is awaitable within a particular coroutine. [exec.coro.utils] makes use of the following exposition-only entities:

    namespace std::execution {
      template<class Sndr, class Promise>
      concept awaitable-sender =
        single-sender<Sndr, env_of_t<Promise>> &&
        sender_to<Sndr, awaitable-receiver> &&   // see below
        requires (Promise& p) {
          { p.unhandled_stopped() } -> convertible_to<coroutine_handle<>>;
        };

      template<class Sndr, class Promise>
      class sender-awaitable;
    }
The type
is equivalent to:sender - awaitable < Sndr , Promise > namespace std :: execution { template < class Sndr , class Promise > class sender - awaitable { struct unit {}; // exposition only using value - type = // exposition only single - sender - value - type < Sndr , env_of_t < Promise >> ; using result - type = // exposition only conditional_t < is_void_v < value - type > , unit , value - type > ; struct awaitable - receiver ; // exposition only variant < monostate , result - type , exception_ptr > result {}; // exposition only connect_result_t < Sndr , awaitable - receiver > state ; // exposition only public : sender - awaitable ( Sndr && sndr , Promise & p ); static constexpr bool await_ready () noexcept { return false; } void await_suspend ( coroutine_handle < Promise > ) noexcept { start ( state ); } value - type await_resume (); }; } -
is equivalent to:awaitable - receiver struct awaitable - receiver { using receiver_concept = receiver_t ; variant < monostate , result - type , exception_ptr >* result - ptr ; // exposition only coroutine_handle < Promise > continuation ; // exposition only // ... see below }; Let
be an rvalue expression of typercvr
, letawaitable - receiver
be acrcvr
lvalue that refers toconst
, letrcvr
be a pack of subexpressions, and letvs
be an expression of typeerr
. Then:Err -
If
is satisfied, the expressionconstructible_from < result - type , decltype (( vs ))... >
is equivalent to:set_value ( rcvr , vs ...) try { rcvr . result - ptr -> template emplace < 1 > ( vs ...); } catch (...) { rcvr . result - ptr -> template emplace < 2 > ( current_exception ()); } rcvr . continuation . resume (); Otherwise,
is ill-formed.set_value ( rcvr , vs ...) -
The expression
is equivalent to:set_error ( rcvr , err ) rcvr . result - ptr -> template emplace < 2 > ( AS - EXCEPT - PTR ( err )); // see [exec.general] rcvr . continuation . resume (); -
The expression
is equivalent to:set_stopped ( rcvr ) static_cast < coroutine_handle <>> ( rcvr . continuation . promise (). unhandled_stopped ()). resume (); -
For any expression
whose type satisfiestag
and for any pack of subexpressionsforwarding - query
,as
is expression-equivalent toget_env ( crcvr ). query ( tag , as ...)
.tag ( get_env ( as_const ( crcvr . continuation . promise ())), as ...)
-
-
sender - awaitable ( Sndr && sndr , Promise & p ); -
Effects: Initializes
withstate
.connect ( std :: forward < Sndr > ( sndr ), awaitable - receiver { addressof ( result ), coroutine_handle < Promise >:: from_promise ( p )})
-
-
value - type await_resume (); -
Effects: Equivalent to:
if ( result . index () == 2 ) rethrow_exception ( get < 2 > ( result )); if constexpr ( ! is_void_v < value - type > ) return std :: forward < value - type > ( get < 1 > ( result ));
-
-
-
-
is a customization point object. For subexpressionsas_awaitable
andexpr
wherep
is an lvalue,p
names the typeExpr
anddecltype (( expr ))
names the typePromise
,decay_t < decltype (( p )) >
is expression-equivalent to:as_awaitable ( expr , p ) -
if that expression is well-formed.expr . as_awaitable ( p ) -
Mandates:
isis - awaitable < A , Promise > true
, where
is the type of the expression above.A
-
-
Otherwise,
if( void ( p ), expr )
isis - awaitable < Expr , U > true
, where
is an unspecified class type that is notU
and that lacks a member namedPromise
.await_transform -
Preconditions:
isis - awaitable < Expr , Promise > true
and the expression
in a coroutine with promise typeco_await expr
is expression-equivalent to the same expression in a coroutine with promise typeU
.Promise
-
-
Otherwise,
ifsender - awaitable { expr , p }
isawaitable - sender < Expr , Promise > true
. -
Otherwise,
.( void ( p ), expr )
except that the evaluations of
andexpr
are indeterminately sequenced.p -
34.12.2. execution::with_awaitable_senders [exec.with.awaitable.senders]
- with_awaitable_senders, when used as the base class of a coroutine promise type, makes senders awaitable in that coroutine type. In addition, it provides a default implementation of unhandled_stopped() such that if a sender completes by calling set_stopped, it is treated as if an uncatchable "stopped" exception were thrown from the await-expression. The coroutine is never resumed, and the unhandled_stopped of the coroutine caller's promise type is called.

    namespace std::execution {
      template<class-type Promise>
      struct with_awaitable_senders {
        template<class OtherPromise>
          requires (!same_as<OtherPromise, void>)
        void set_continuation(coroutine_handle<OtherPromise> h) noexcept;

        coroutine_handle<> continuation() const noexcept { return continuation; }

        coroutine_handle<> unhandled_stopped() noexcept {
          return stopped-handler(continuation.address());
        }

        template<class Value>
        see below await_transform(Value&& value);

      private:
        // exposition only
        [[noreturn]] static coroutine_handle<>
        default-unhandled-stopped(void*) noexcept {
          terminate();
        }
        coroutine_handle<> continuation{};                     // exposition only
        coroutine_handle<> (*stopped-handler)(void*) noexcept  // exposition only
          = &default-unhandled-stopped;
      };
    }
template < class OtherPromise > requires ( ! same_as < OtherPromise , void > ) void set_continuation ( coroutine_handle < OtherPromise > h ) noexcept ; -
Effects: Equivalent to:
continuation = h ; if constexpr ( requires ( OtherPromise & other ) { other . unhandled_stopped (); } ) { stopped - handler = []( void * p ) noexcept -> coroutine_handle <> { return coroutine_handle < OtherPromise >:: from_address ( p ) . promise (). unhandled_stopped (); }; } else { stopped - handler = & default - unhandled - stopped ; } -
-
template < class Value > call - result - t < as_awaitable_t , Value , Promise &> await_transform ( Value && value ); -
Effects: Equivalent to:
return as_awaitable ( std :: forward < Value > ( value ), static_cast < Promise &> ( * this )); -
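As an informative sketch, a minimal coroutine task type (the name task and its members are hypothetical) whose promise derives from with_awaitable_senders so that senders can be awaited directly; the inherited await_transform routes the operand through as_awaitable.

    struct task {
      struct promise_type
        : std::execution::with_awaitable_senders<promise_type> {
        task get_return_object() { return {}; }
        std::suspend_never initial_suspend() noexcept { return {}; }
        std::suspend_never final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() { std::terminate(); }
      };
    };

    task example() {
      using namespace std::execution;
      int i = co_await just(42);   // awaits a sender; resumes with its value
      // A set_stopped completion would instead invoke unhandled_stopped() on this
      // promise, which forwards to the continuation registered via set_continuation
      // (or terminates if none was registered).
    }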