make_shared() For Arrays by AdamBucior · Pull Request #309 · microsoft/STL
Changelog for the last batch of changes, which were in response to things that I found during my exhaustive review and didn't submit code review comments for:
Add `noexcept` to internal machinery.
Instead of testing `(rank_v<_Ty>) > 0`, simply test `is_array_v<_Ty>`.
Change `_Reverse_destroy_multidimensional_n_guard` to be an aggregate. (Note that Clang 10 doesn't support CTAD for aggregates.)
Rework the optimization metaprogramming in `_Uninitialized_copy_multidimensional`. First, `_Ptr_copy_cat<_Ty*, _Ty*>::_Really_trivial` was incorrect. This is because `_Ptr_copy_cat` checks `is_trivially_assignable_v`, which will be false when `_Ty` is an array. Instead, we should test `is_trivial_v<_Ty>`. (We don't need the rest of what `_Ptr_copy_cat` does, because we're not working with user-defined iterators, and our source and destination are the same type.) Second, `is_trivial_v<_Ty>` can be the first thing we test; when we're working with trivial elements, we can perform a single `_Copy_memmove`, regardless of whether these are single-dimensional or multidimensional arrays. Finally, for `is_array_v<_Ty>`, it was strange (although correct) to construct an RAII guard for each iteration of the loop. We can lift this out and simply update the guard's index. (Same transformation for the following algorithms.)
Similarly for `_Uninitialized_value_construct_multidimensional_n`, `_Use_memset_value_construct_v<_Ty*>` was incorrect, as it checks `is_scalar`, which is false for arrays. We need to use `remove_all_extents_t<_Ty>`, which I've been referring to as `_Item`. Also similarly, we can perform this `_Zero_range` optimization at the top level. Finally, we should consistently use `_Idx` instead of `_Count` to iterate (also changed below).
In `_Uninitialized_fill_multidimensional_n`, add the `// intentionally copy, not fill` comment (previously applied to the allocator version).
Mark `_Get_ptr()` as `_NODISCARD` and `noexcept`.
Change pre-existing code for consistency: upgrade `_Ref_count_obj_alloc2` to `_Ref_count_obj_alloc3`. The difference is that we store `_Rebind_alloc_t<_Alloc, _Ty>` so we can directly use it later. (We still need to rebind for `_Delete_this`.) Also add a `static_assert` verifying that we're following the Standard's `remove_cv_t` wording.
Change `_Reverse_destroy_multidimensional_n_al_guard` and its associated algorithms similarly to the non-allocator versions.
In `_Uninitialized_copy_multidimensional_al`, change `_Uses_default_construct<_Alloc, _Ty*, _Ty>` to `_Uses_default_construct<_Alloc, _Item*, const _Item&>`. This is because the provided `_Alloc` is for the `_Item` (aka `remove_all_extents_t<_Ty>`) that we ultimately construct from a const lvalue.
In `_Uninitialized_value_construct_multidimensional_n_al`, similarly change the `_Zero_range` metaprogramming.
In `_Uninitialized_fill_multidimensional_n_al`, slightly adjust `_Uses_default_construct<_Alloc, _Ty*, _Ty>` to `_Uses_default_construct<_Alloc, _Ty*, const _Ty&>` because we construct from a const lvalue, and that distinction could be important.
Change `_Ref_count_unbounded_array_alloc` to store `_Rebind_alloc_t<_Alloc, remove_all_extents_t<_Ty>>` so we don't need to rebind it later. Also `static_assert` `remove_cv_t`.
Adjust the metaprogramming in `_Destroy()`. `is_trivially_destructible<_Element_type>` was correct (`_Element_type` removes the unbounded array, and the type trait "recurses" to the non-array item), but `is_trivially_destructible<_Item>` is what we're ultimately interested in, and will be more consistent with other control blocks. Also, `_Rebound` is never a reference now, and we have the `_Item` typedef.
Fix the `_Ref_count_bounded_array_alloc` comment; "for object" should be "for bounded array". Similarly change its implementation. Here, `is_trivially_destructible<_Ty>` was correct (`_Ty` is a bounded array), but `is_trivially_destructible<_Item>` lets us be consistent with the unbounded array control block.