Toolset update: VS 2022 17.13 Preview 2, F32as_v6 by StephanTLavavej · Pull Request #5186 · microsoft/STL
📜 Changelog
- Code cleanups:
  - Removed compiler bug workarounds.
- Infrastructure improvements:
  - Updated dependencies.
    * Updated build compiler to VS 2022 17.13 Preview 2.
    * Updated Python to 3.13.1.
    * Updated VMs to compute-optimized F32as_v6.
⚙️ Commits
- Remove workarounds for:
  - VSO-2188364 "EDG assertion failed in `conversion_for_direct_reference_binding_possible`".
  - VSO-2254804 "EDG ICE in `cpfe.dll!make_coroutine_result_expression` with C++23 `<generator>` test".
  - VSO-2046190 "[CI-NIGHTLY][Libs-ASan-amd64] `src\vctools\crt\github\tests\std\tests\Dev09_056375_locale_cleanup` failed due to ERROR: AddressSanitizer: access-violation on unknown address".
- Add `/shallowScan` to work around VSO-2293247 "`/Zc:preprocessor` does not terminate macro definitions properly".
  - This has been fixed internally, so we don't need to mirror this to the Perl script.
- Python 3.13.1.
- VS 2022 17.13 Preview 2 (not yet required).
- Standard_F32as_v6.
- Use `C:` and rename directories to avoid any possible collisions.
  - This is necessary because this SKU lacks a local/temporary `D:` storage drive.
- Power Word: NVMe
  - This is necessary because this is an NVMe SKU. Passing `-DiskControllerType 'NVMe'` to `New-AzVMConfig` might not be strictly necessary, but passing the `DiskControllerTypes` feature to `New-AzGalleryImageDefinition` is absolutely necessary. Otherwise, 1ES Hosted Pools provisioning will completely and mysteriously fail.
  - Figuring this out was fun. Fasv6 links to "Supported OS images for remote NVMe", which doesn't mention Server 2025 as supported, but I deduced that it must be. That then links to "FAQ for remote NVMe disks", which finally has a section "How do I tag an image that supports NVMe for remote disks?" and provides an Azure CLI command that I was able to translate to our Azure PowerShell. Along the way, I encountered "Store and share images in an Azure Compute Gallery", which mentions "DiskControllerType" (singular!) as a feature, with arrays (!) of strings as possible values, which is doubly bogus AFAICT, but it led me down the right path.
- Standard_F32as_v6 NVMe pool.
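The NVMe image-tagging step described above can be sketched with Azure CLI, following the shape of the command in Azure's remote-NVMe FAQ. This is a hedged sketch: the resource group, gallery, image, publisher/offer/SKU names are placeholder assumptions, not the values used in this PR; only the `DiskControllerTypes` feature tag reflects what the description says is required.

```shell
# Hedged sketch: tag a gallery image definition as supporting remote NVMe disks.
# All names below are illustrative placeholders; the key part is the
# DiskControllerTypes feature, without which 1ES Hosted Pools provisioning fails.
az sig image-definition create \
    --resource-group MyResourceGroup \
    --gallery-name MyGallery \
    --gallery-image-definition MyImageDefinition \
    --publisher MyPublisher --offer MyOffer --sku MySku \
    --os-type Windows \
    --features DiskControllerTypes=SCSI,NVMe
```

The PR itself translated this to Azure PowerShell, passing the same feature to `New-AzGalleryImageDefinition`.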
🚀 Speedup vs. 💰 Cost
This significantly accelerates the CI. To do proper science, I cleanly compared this toolset update with varying SKUs:
- Our old SKU Standard_D32ads_v5.
- This is general-purpose (D), 32 logical cores, AMD (a), temporary storage (d), premium storage (s), Zen 3 (v5).
- Test run: https://dev.azure.com/vclibs/STL/_build/results?buildId=18033&view=results
- This new SKU Standard_F32as_v6.
- This is compute-optimized (F), 32 physical cores (i.e. no SMT), AMD (a), no temporary storage (d absent), premium storage (s), Zen 4 (v6).
- Test run: https://dev.azure.com/vclibs/STL/_build/results?buildId=18037&view=results
Previously, the x64 shards took an average of 731 seconds (12m 11s). Now they take an average of 507 seconds (8m 27s), which is a 1.442x speedup. Unsurprisingly, this SKU is more expensive per-hour (1.326x), but the speedup means that this is effectively cheaper to run.
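The arithmetic in the paragraph above can be checked directly. The 1.326x hourly price ratio is taken from the text (retail Azure prices vary by region), and the effective cost of a run scales as price-per-hour times hours needed:

```python
# Verify the speedup and effective-cost arithmetic from the paragraph above.
old_shard_seconds = 731  # Standard_D32ads_v5 average x64 shard time (12m 11s)
new_shard_seconds = 507  # Standard_F32as_v6 average x64 shard time (8m 27s)
hourly_price_ratio = 1.326  # F32as_v6 per-hour cost relative to D32ads_v5

speedup = old_shard_seconds / new_shard_seconds
# Effective cost ratio: (price per hour) * (fraction of time now needed).
effective_cost_ratio = hourly_price_ratio * new_shard_seconds / old_shard_seconds

print(f"{speedup:.3f}x speedup")  # ~1.442x
print(f"{effective_cost_ratio:.3f}x effective cost")  # ~0.920x, i.e. cheaper per run
```

Since 1.326 < 1.442, the pricier SKU still comes out roughly 8% cheaper per CI run.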