Inference on models with custom head
January 27, 2025, 9:57pm 1
Hi,
I trained a BERT model with a custom classifier head. I tried to load it using ORTModelForSequenceClassification, but it did not load my custom classifier head; it just loaded the default one.
After several hours of digging through the code for optimum, I was able to get my code to work using this:
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from optimum.exporters.tasks import TasksManager

# Monkey-patch TasksManager so the exporter instantiates my custom class
# (BertWithCustomHead) instead of AutoModelForSequenceClassification.
TasksManager.infer_library_from_model = lambda *args, **kwargs: "transformers"
TasksManager.get_model_class_for_task = lambda *args, **kwargs: BertWithCustomHead

onnx_model = ORTModelForSequenceClassification.from_pretrained(
    "./my-classifier", export=True, task="text-classification"
)
```
It gets the job done, but it feels kind of hacky. Does anyone know if there is a better way to do this?
echarlaix January 28, 2025, 11:18am 2
Hi @kaushu42, when loading your model with ORTModelForSequenceClassification, the original model is loaded with AutoModelForSequenceClassification before conversion, which is why your custom head is dropped. In your case, you should first load your model yourself and then use onnx_export_from_model to export it to ONNX, as in the sketch below.
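A minimal sketch of that flow, assuming BertWithCustomHead is your custom model class from the original post, "./my-classifier" is your checkpoint directory, and "./my-classifier-onnx" is a hypothetical output directory:

```python
from optimum.exporters.onnx import onnx_export_from_model
from optimum.onnxruntime import ORTModelForSequenceClassification

# Load the model with the custom head yourself, so the exporter never
# falls back to AutoModelForSequenceClassification.
model = BertWithCustomHead.from_pretrained("./my-classifier")

# Export the already-loaded model to ONNX.
onnx_export_from_model(model, output="./my-classifier-onnx", task="text-classification")

# Run inference with ONNX Runtime on the exported model.
onnx_model = ORTModelForSequenceClassification.from_pretrained("./my-classifier-onnx")
```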