This page is a snapshot from the LWG issues list; see the Library Active Issues List for more information and the meaning of New status.
hive operations involving insertion/erasure should have 𝒪(log n) time complexity

Section: 23.3.9 [hive] Status: New Submitter: Matt Bentley Opened: 2025-08-19 Last modified: 2025-08-25
Priority: Not Prioritized
Discussion:
Under 23.3.9.4 [hive.modifiers] p4, complexity is stated as "Constant. Exactly one object of type T is constructed." However, there is an implementation technique which is 𝒪(log n) in the capacity of the element block selected to insert into (when erased element memory locations are available for reuse) but which imposes a small maximum block capacity cap. This time complexity only occurs during the operation of finding an erased element memory location within a block which is known to have one. This is both the simplest and fastest solution I have come across for supporting small types in hive without artificial widening.
Further, I have discovered that this approach can be extended to larger block capacities via "bitset stacking" while retaining 𝒪(log n) intra-block lookup time complexity regardless of block capacity. Overall this approach would be useful for embedded and other memory-scarce platforms, as it reduces the 16-bit-per-element cost of the reference implementation down to a 1-bit-per-element cost. For 64-bit and larger types there are other ways to obtain this reduction without losing 𝒪(1) lookup, but it is unclear whether those methods would in fact be faster. Regardless, it is necessary for small types. In all cases n is capped by the maximum block capacity defined in the implementation.
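For illustration only, here is a minimal sketch of what such a stacked-bitset lookup might look like, assuming 64-bit words, one summary bit per word of the level below, and set bits marking erased (reusable) slots. The names and layout are hypothetical and not taken from the reference implementation:

  #include <bit>
  #include <cstddef>
  #include <cstdint>
  #include <vector>

  // Hypothetical per-block occupancy record: levels[0] holds one bit per element
  // slot (1 = erased/reusable), and each higher level holds one summary bit per
  // 64-bit word of the level below, so a block of capacity C needs roughly
  // C/64 + C/4096 + ... words - a little over 1 bit per element in total.
  struct stacked_bitset {
    std::vector<std::vector<std::uint64_t>> levels;

    // Locate any erased slot, assuming at least one exists. One word is read
    // per level, so the cost grows by one step for every 64x increase in block
    // capacity - i.e. O(log n) in the capacity of the block.
    std::size_t find_erased_slot() const {
      std::size_t word = 0;
      for (std::size_t lv = levels.size(); lv-- > 0;) {
        const int bit = std::countr_zero(levels[lv][word]); // first set bit
        word = word * 64 + static_cast<std::size_t>(bit);   // descend one level
      }
      return word; // element index within the block
    }
  };

The per-element memory cost of this layout is what the 1-bit-per-element figure above refers to, under the assumptions stated.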
When discussed on the reflector there was ambiguity as to whether this should result in a change to hive::insert/emplace time complexity, as it is unrelated to the number of elements (unless all elements fit within one block), but it is related to block capacities, which are defined as part of the hive specification.
The exact mechanism by which erased element locations are recorded (for later reuse, see 23.3.9.1 [hive.overview] p3) is not specified for hive. The issue is therefore in some ways similar to deque insert/emplace time complexity, where a push_back may in fact result in an 𝒪(n) operation in the number of blocks. Because we do not specify the storage mechanism of deque, we ignore the fact that deques are typically constructed as vectors of pointers to blocks, and that any insert may therefore trigger vector expansion. This was also the reasoning discussed for hive during LWG talks, since it too can be implemented as a vector of pointers to blocks.
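As a purely illustrative aside (a toy, not how any particular standard library implements deque), the shape of that argument is roughly as follows:

  #include <cstddef>
  #include <vector>

  // Simplified picture of a typical deque layout: a vector of pointers to
  // fixed-size element blocks. Appending is usually constant time, but when a
  // new block is needed the block-pointer vector itself may have to grow,
  // which is linear in the number of blocks - an occasional cost the
  // complexity wording does not mention, because the storage mechanism
  // itself is unspecified.
  template <class T, std::size_t BlockSize = 512>  // T assumed default-constructible
  struct toy_deque {
    std::vector<T*> blocks;
    std::size_t size_ = 0;

    void push_back(const T& value) {
      if (size_ % BlockSize == 0)            // current block full (or no block yet):
        blocks.push_back(new T[BlockSize]);  // may reallocate the pointer vector
      blocks[size_ / BlockSize][size_ % BlockSize] = value;
      ++size_;
    }

    ~toy_deque() { for (T* b : blocks) delete[] b; }
  };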
In addition, if we were to support approaches involving bitset stacking, this would
also affect functions which can erase elements, since the number of modifications to
the bitset-stack of a given block would increase by 1 with every 64x increase (assuming
64-bit words) in block capacity.
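Again purely as illustration, and assuming the same hypothetical stacked-bitset layout as in the sketch above, the erase-side bookkeeping might look roughly like this; each additional level adds one extra word write per erasure:

  #include <cstddef>
  #include <cstdint>
  #include <vector>

  // Mark one slot as erased/reusable and propagate a summary bit up through
  // each level. One word is modified per level, so the work per erasure grows
  // by one write for every 64x increase (assuming 64-bit words) in block capacity.
  void mark_slot_erased(std::vector<std::vector<std::uint64_t>>& levels,
                        std::size_t slot) {
    for (auto& level : levels) {
      level[slot / 64] |= std::uint64_t{1} << (slot % 64); // set this level's bit
      slot /= 64; // index of the bit to set at the level above
    }
  }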
I do not personally believe any wording changes are necessary here, based on precedents set by the existing standard and what has been conveyed to me in the past regarding this topic (𝒪(log n) complexity within blocks). However, reflector discussion on the subject has not been entirely conclusive, since the relevance of some time complexity involves a degree of subjectivity. This issue has therefore been submitted so that a definitive conclusion can be reached.
Changes necessary:
Making this change would affect any functions which may reuse existing erased-element memory locations. This includes:

23.3.9.4 [hive.modifiers]: emplace() p4, insert() p9, insert() p13

23.3.9.3 [hive.capacity]: shrink_to_fit() p10, reshape() p24
(both of the above functions may potentially relocate existing elements from one block to erased element locations in another)

23.3.9.2 [hive.cons]: operator=(const hive&) p27, operator=(hive&&) p31
(move assignment will switch to moving individual elements where allocators are not equal, and in the case of non-trivial/allocating types, move-assigning to existing elements in the source may be beneficial)
In addition, if we were to support bitset-stacking approaches, these would also affect functions which erase individual elements. This includes:

23.3.9.4 [hive.modifiers]: erase() p16
23.3.9.5 [hive.operations]: unique() p11
This issue has some overlap with LWG 4323(i), and it would affect the resolution wording of that issue for unique.
Proposed resolution:
This wording is relative to N5014.
[Drafting note: I am unclear on whether assign() and assign_range() operations would require specification, since they also have the capability to reuse existing erased element memory spaces, but we do not currently supply time complexity wording for these in the standard in general, and I am unsure why that is.]
Modify 23.3.9.1 [hive.overview] as indicated:
-1- A hive is a type of sequence container [delete: that provides constant-time insertion and erasure operations. Storage] [insert: where storage] is automatically managed in multiple memory blocks, referred to as element blocks. Insertion position is determined by the container, and insertion may re-use the memory locations of erased elements. [insert: Insertions are either constant time or logarithmic in the capacity of the element block inserted into.]

-2- […]

-3- Erasures use unspecified techniques [delete: of constant time complexity] to identify the memory locations of erased elements, which are subsequently skipped during iteration in constant time, as opposed to relocating subsequent elements during erasure. [insert: These techniques are either constant time or logarithmic in the capacity of the element block erased from.]
Modify 23.3.9.2 [hive.cons] as indicated:
hive& operator=(const hive& x);

-25- Preconditions: […]
-26- Effects: […]
-27- Complexity: Linear in size() + x.size(). [insert: Additionally at worst 𝒪(log n) in the capacity of each element block which an element is constructed within.]

hive& operator=(hive&& x)
  noexcept(allocator_traits<Allocator>::propagate_on_container_move_assignment::value ||
           allocator_traits<Allocator>::is_always_equal::value);

-28- Preconditions: […]
-29- Effects: […]
-30- Postconditions: […]
-31- Complexity: Linear in size(). If (allocator_traits<Allocator>::propagate_on_container_move_assignment::value || get_allocator() == x.get_allocator()) is false, also linear in x.size() [insert: and additionally at worst 𝒪(log n) in the capacity of each element block which an element is constructed within].
Modify 23.3.9.3 [hive.capacity] as indicated:
void shrink_to_fit();

-8- Preconditions: […]
-9- Effects: […]
-10- Complexity: If reallocation happens, linear in the size of the sequence [insert: and at worst 𝒪(log n) in the capacity of each element block which elements are reallocated into].
-11- Remarks: […]

[…]

void reshape(hive_limits block_limits);

-21- Preconditions: […]
-22- Effects: […]
-23- Postconditions: […]
-24- Complexity: Linear in the number of element blocks in *this. If reallocation happens, also linear in the number of elements reallocated [insert: and at worst 𝒪(log n) in the capacity of each element block which elements are reallocated into].
-25- Remarks: […]
Modify 23.3.9.4 [hive.modifiers] as indicated:
template<class... Args> iterator emplace(Args&&... args);
template<class... Args> iterator emplace_hint(const_iterator hint, Args&&... args);

-1- Preconditions: […]
-2- Effects: […]
-3- Returns: […]
-4- Complexity: [delete: Constant][insert: At worst 𝒪(log n) in the capacity of the element block which the element is constructed within]. Exactly one object of type T is constructed.

[…]

void insert(initializer_list<T> rg);
template<container-compatible-range<T> R> void insert_range(R&& rg);

-7- Preconditions: […]
-8- Effects: […]
-9- Complexity: Linear in the number of elements inserted. [insert: Additionally at worst 𝒪(log n) in the capacity of each element block which an element is constructed within.] Exactly one object of type T is constructed for each element inserted.
-10- Remarks: […]

void insert(size_type n, const T& x);

-11- Preconditions: […]
-12- Effects: […]
-13- Complexity: Linear in n. [insert: Additionally at worst 𝒪(log n) in the capacity of each element block which an element is constructed within.] Exactly one object of type T is constructed for each element inserted.
-14- Remarks: […]

[…]

iterator erase(const_iterator position);
iterator erase(const_iterator first, const_iterator last);

-16- Complexity: Linear in the number of elements erased [insert: and for each erased element at worst 𝒪(log n) in the capacity of the block containing the element]. Additionally, if any active blocks become empty of elements as a result of the function call, at worst linear in the number of element blocks.
Modify 23.3.9.5 [hive.operations] as indicated:
[Drafting note: If LWG 4323(i) is acted upon and its proposed resolution accepted, the proposed complexity wording becomes:

-11- Complexity: If empty() is false, exactly size() - 1 applications of the corresponding predicate, otherwise no applications of the predicate. For each element erased as a result of this function call, at worst 𝒪(log n) in the capacity of the block containing the element. Additionally, if any active blocks become empty of elements as a result of the function call, at worst linear in the number of element blocks.]
template<class BinaryPredicate = equal_to<T>>
  size_type unique(BinaryPredicate binary_pred = BinaryPredicate());

-7- Preconditions: […]
-8- Effects: […]
-9- Returns: […]
-10- Throws: […]
-11- Complexity: If empty() is false, exactly size() - 1 applications of the corresponding predicate, otherwise no applications of the predicate. [insert: Additionally, for each element erased as a result of this function call, at worst 𝒪(log n) in the capacity of each block containing the element.]
-12- Remarks: […]