This page is a snapshot from the LWG issues list; see the Library Active Issues List for more information and the meaning of the New status.
hive operations involving insertion/erasure should have 𝒪(log n) time complexity

Section: 23.3.9 [hive] Status: New Submitter: Matt Bentley Opened: 2025-08-19 Last modified: 2025-09-28
Priority: Not Prioritized
Discussion:
Under 23.3.9.4 [hive.modifiers] p4, complexity is stated as "Constant. Exactly one object of type T is constructed." However, insertion can be log(n) (on most platforms, log64 or log32) in the capacity of the element block selected to insert into. This time complexity occurs only during the operation of finding an erased element's memory location within a block which is known to have one, and does not involve the elements themselves. This is both the simplest and fastest solution to supporting small types in hive without artificial widening that I have come across.
Further, I have discovered that this approach can be extended to larger
block capacities via
"bitset stacking"
while retaining 𝒪(log n)
intra-block lookup time complexity,
regardless of block capacity. Overall this approach would be useful for embedded and other memory-scarce platforms, as it reduces the 16-bit-per-element cost of the reference implementation down to a 1-bit-per-element cost. For 64-bit and larger types,
there are other ways to obtain this reduction without
losing 𝒪(1)
lookup but it is unclear whether those methods would
in fact be faster. For approaches involving bitset-stacking, the logarithmic
complexity also occurs during erasure, specifically adding a couple of extra
instructions for every 64x (i.e. bit-depth) increase in block capacity. But again
this does not involve the elements and is logarithmic in the capacity of the block
erased from.
Regardless, the increased complexity during insertion is necessary for small-type support. When discussed on the reflector, there was ambiguity as to whether this should result in a change to hive's specified time complexity, since the cost is unrelated to the number of elements (unless all elements fit within one block) but is related to block capacities, which are defined as part of the hive technical specification.
Previous resolution [SUPERSEDED]:
This wording is relative to N5014.
[Drafting note: I am unclear on whether assign() and assign_range() operations would require specification, since they also have the capability to reuse existing erased element memory spaces; but we do not currently supply time complexity wording for these in the standard in general, and I'm unsure why that is.]
Modify 23.3.9.1 [hive.overview] as indicated:
-1- A hive is a type of sequence container [deleted: that provides constant-time insertion and erasure operations. Storage] [inserted: where storage] is automatically managed in multiple memory blocks, referred to as element blocks. Insertion position is determined by the container, and [deleted: insertion] may re-use the memory locations of erased elements. [inserted: Insertions are either constant time or logarithmic in the capacity of the element block inserted into.]

-2- […]

-3- Erasures use unspecified techniques [deleted: of constant time complexity] to identify the memory locations of erased elements, which are subsequently skipped during iteration in constant time, as opposed to relocating subsequent elements during erasure. [inserted: These techniques are either constant time or logarithmic in the capacity of the element block erased from.]

Modify 23.3.9.2 [hive.cons] as indicated:
hive& operator=(const hive& x);

-25- Preconditions: […]
-26- Effects: […]
-27- Complexity: Linear in size() + x.size(). [inserted: Additionally at worst 𝒪(log n) in the capacity of each element block which an element is constructed within.]

hive& operator=(hive&& x)
  noexcept(allocator_traits<Allocator>::propagate_on_container_move_assignment::value ||
           allocator_traits<Allocator>::is_always_equal::value);

-28- Preconditions: […]
-29- Effects: […]
-30- Postconditions: […]
-31- Complexity: Linear in size(). If (allocator_traits<Allocator>::propagate_on_container_move_assignment::value || get_allocator() == x.get_allocator()) is false, also linear in x.size() [inserted: and additionally at worst 𝒪(log n) in the capacity of each element block which an element is constructed within].

Modify 23.3.9.3 [hive.capacity] as indicated:
void shrink_to_fit();

-8- Preconditions: […]
-9- Effects: […]
-10- Complexity: If reallocation happens, linear in the size of the sequence [inserted: and at worst 𝒪(log n) in the capacity of each element block which elements are reallocated into].
-11- Remarks: […]

[…]

void reshape(hive_limits block_limits);

-21- Preconditions: […]
-22- Effects: […]
-23- Postconditions: […]
-24- Complexity: Linear in the number of element blocks in *this. If reallocation happens, also linear in the number of elements reallocated [inserted: and at worst 𝒪(log n) in the capacity of each element block which elements are reallocated into].
-25- Remarks: […]

Modify 23.3.9.4 [hive.modifiers] as indicated:
template<class... Args> iterator emplace(Args&&... args);
template<class... Args> iterator emplace_hint(const_iterator hint, Args&&... args);

-1- Preconditions: […]
-2- Effects: […]
-3- Returns: […]
-4- Complexity: [deleted: Constant] [inserted: At worst 𝒪(log n) in the capacity of the element block which the element is constructed within]. Exactly one object of type T is constructed.

[…]

void insert(initializer_list<T> rg);
template<container-compatible-range<T> R> void insert_range(R&& rg);

-7- Preconditions: […]
-8- Effects: […]
-9- Complexity: Linear in the number of elements inserted. [inserted: Additionally at worst 𝒪(log n) in the capacity of each element block which an element is constructed within.] Exactly one object of type T is constructed for each element inserted.
-10- Remarks: […]

void insert(size_type n, const T& x);

-11- Preconditions: […]
-12- Effects: […]
-13- Complexity: Linear in n. [inserted: Additionally at worst 𝒪(log n) in the capacity of each element block which an element is constructed within.] Exactly one object of type T is constructed for each element inserted.
-14- Remarks: […]

[…]

iterator erase(const_iterator position);
iterator erase(const_iterator first, const_iterator last);

-16- Complexity: Linear in the number of elements erased [inserted: and for each erased element at worst 𝒪(log n) in the capacity of the block containing the element]. Additionally, if any active blocks become empty of elements as a result of the function call, at worst linear in the number of element blocks.

Modify 23.3.9.5 [hive.operations] as indicated:
[Drafting note: If issue LWG 4323(i) is decided to be acted upon and the proposed solution accepted, the proposed complexity wording becomes:

-11- Complexity: If empty() is false, exactly size() - 1 applications of the corresponding predicate, otherwise no applications of the predicate. For each element erased as a result of this function call, at worst 𝒪(log n) in the capacity of the block containing the element. Additionally, if any active blocks become empty of elements as a result of the function call, at worst linear in the number of element blocks.]

template<class BinaryPredicate = equal_to<T>>
  size_type unique(BinaryPredicate binary_pred = BinaryPredicate());

-7- Preconditions: […]
-8- Effects: […]
-9- Returns: […]
-10- Throws: […]
-11- Complexity: If empty() is false, exactly size() - 1 applications of the corresponding predicate, otherwise no applications of the predicate. [inserted: Additionally, for each element erased as a result of this function call, at worst 𝒪(log n) in the capacity of each block containing the element.]
-12- Remarks: […]
[2025-09-26; Matt comments and provides improved wording]
Past LWG/LEWG telecon discussion around this topic concluded that the complexity of these actions is in fact constant, because elements are not involved, the logarithmic action is bounded by the capacity of a block (a fixed number) rather than the size of the sequence, and the actual performance cost is negligible. But there is some disagreement on this.
One possibility is to add additional complexity wording to each of the affected functions. This would impact emplace, range insert, fill insert, shrink_to_fit, reshape, the copy/move assignment operators, erase and unique. However, I feel this is overkill and may confuse implementors, as the log(n) complexity is not permitted to involve elements.
Having looked into it and sought feedback, I think a blanket note on 23.3.9.1 [hive.overview] p3 would be sufficient, such that the time complexity is limited to "techniques to identify the memory locations of erased elements". Otherwise we need to stay with the previous resolution's position that this is in fact constant-time behaviour.
Proposed resolution:
This wording is relative to N5014.
Modify 23.3.9.1 [hive.overview] as indicated:
-1- A hive is a type of sequence container that provides constant-time insertion and erasure operations. Storage is automatically managed in multiple memory blocks, referred to as element blocks. Insertion position is determined by the container, and may re-use the memory locations of erased elements.

-2- […]

-3- Erasures use unspecified techniques [deleted: of constant time complexity] to identify the memory locations of erased elements, which are subsequently skipped during iteration in constant time, as opposed to relocating subsequent elements during erasure. [inserted: The same or different techniques may be utilized to find and re-use these locations during subsequent insertions. [Note: The techniques are permitted to be at worst logarithmic in the capacity of the element blocks being inserted into or erased from, while maintaining constant-time iteration, to allow latitude for implementation-specific optimizations. — end note]]