Precision

class composer.core.Precision(value)

Enum class for the numerical precision to be used by the model.
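Because Precision is a value-based enum, members can be looked up from their string values as well as by attribute. A minimal sketch, assuming the values are the lowercase member names (e.g. "amp_bf16"), which mirrors how precision is usually spelled in Composer configuration:

```python
from composer.core import Precision

# Look up a member by its string value; the lowercase values
# ("fp32", "amp_fp16", "amp_bf16", "amp_fp8") are assumed here.
precision = Precision("amp_bf16")
assert precision is Precision.AMP_BF16
print(precision.value)  # -> "amp_bf16"
```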

FP32

Use 32-bit floating-point precision. Compatible with CPUs and GPUs.

AMP_FP16

Use torch.cuda.amp with 16-bit floating-point precision. Only compatible with GPUs.

AMP_BF16

Use torch.cuda.amp with 16-bit BFloat16 (bfloat16) precision.

AMP_FP8

Use transformer_engine.pytorch.fp8_autocast with 8-bit FP8 precision.
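A common pattern is to pick a precision based on the available hardware, following the compatibility notes above. The sketch below is illustrative only; the hardware checks come from PyTorch, not from this page:

```python
import torch
from composer.core import Precision

# Minimal sketch: choose the widest precision the hardware supports,
# following the GPU/CPU compatibility notes in the member descriptions.
if torch.cuda.is_available():
    if torch.cuda.is_bf16_supported():
        precision = Precision.AMP_BF16   # bfloat16 autocast on supported GPUs
    else:
        precision = Precision.AMP_FP16   # fp16 autocast on other GPUs
else:
    precision = Precision.FP32           # CPUs run in full fp32

print(precision)
```

The selected member (or its string value) would then typically be passed to the Trainer's precision argument; that usage is an assumption here and is not documented on this page.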