chore: reenable py313 by zewenli98 · Pull Request #3455 · pytorch/TensorRT
Conversation
Description
Re-enabled Python 3.13 builds, now that TensorRT 10.9 ships Python 3.13 wheels.
Type of change
- New feature (non-breaking change which adds functionality)
Checklist:
- My code follows the style guidelines of this project (You can use the linters)
- I have performed a self-review of my own code
- I have commented my code, particularly in hard-to-understand areas and hacks
- I have made corresponding changes to the documentation
- I have added tests to verify my fix or my feature
- New and existing unit tests pass locally with my changes
- I have added the relevant labels to my PR so that relevant reviewers are notified
narendasan linked an issue that may be closed by this pull request
```yaml
- name: Install Rust
  run: |
    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
    source $HOME/.cargo/env
```
I think you don't need to install Rust. Instead, you should try to bump the version of `transformers` in `tests/py/requirements.txt`. `transformers` 4.40.2 depended on `tokenizers` 0.19.x, but `tokenizers` 0.19.x didn't have wheels for cp313. That's why pip downloaded the source archive of `tokenizers` and then failed to compile it.
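To confirm this, a quick sketch against the public PyPI JSON API can list which wheel tags a release actually ships (the `tokenizers` version here is just for illustration):

```python
import json
import urllib.request

# Query PyPI's JSON API for a specific tokenizers release and check
# whether any of its published files is a cp313 wheel.
url = "https://pypi.org/pypi/tokenizers/0.19.1/json"
with urllib.request.urlopen(url) as resp:
    release = json.load(resp)

filenames = [f["filename"] for f in release["urls"]]
print("cp313 wheel available:", any("cp313" in name for name in filenames))
# If False, pip on Python 3.13 falls back to the sdist and has to compile
# the Rust extension locally -- which is what broke the CI build.
```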
The updated `transformers` doesn't reproduce the BERT results we get with the old `transformers` and TRT, so we pinned the version here. Any idea?
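For reference, a minimal version of the kind of parity check that would flag such a mismatch (the function name and tolerances are illustrative, not the actual test):

```python
import torch

def outputs_match(hf_out: torch.Tensor, trt_out: torch.Tensor,
                  rtol: float = 1e-2, atol: float = 1e-2) -> bool:
    # Compare the HF eager output against the TRT-compiled output within
    # loose tolerances; a version bump that changes model numerics or
    # defaults makes this return False.
    return torch.allclose(hf_out, trt_out, rtol=rtol, atol=atol)
```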
zewenli98 marked this pull request as ready for review
LGTM
```diff
-transformers==4.40.2
-nvidia-modelopt[deploy,hf,torch]~=0.17.0
+transformers==4.49.0
+nvidia-modelopt[deploy,hf,torch]~=0.17.0; python_version < "3.13"
```
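The `python_version < "3.13"` suffix is a standard PEP 508 environment marker, so pip simply skips the requirement on 3.13. A quick way to see how it evaluates, using the `packaging` library (which pip itself vendors):

```python
from packaging.markers import Marker

marker = Marker('python_version < "3.13"')
# True on Python 3.12 and earlier, False on 3.13, so nvidia-modelopt
# is only installed where it has compatible wheels.
print(marker.evaluate())
```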
modelopt doesn't support 3.13 yet? So quantization also won't work in Python 3.13?
Right, I think we already skip the quantization tests directly, like:
```python
@unittest.skipIf(
    not importlib.util.find_spec("modelopt"),
    "ModelOpt is required to run this test",
)
@pytest.mark.unit
def test_base_fp8(ir):
    import modelopt.torch.quantization as mtq
    from modelopt.torch.quantization.utils import export_torch_mode

    class SimpleNetwork(torch.nn.Module):
        def __init__(self):
            super(SimpleNetwork, self).__init__()
            self.linear1 = torch.nn.Linear(in_features=10, out_features=5)
            self.linear2 = torch.nn.Linear(in_features=5, out_features=1)

        def forward(self, x):
            x = self.linear1(x)
            x = torch.nn.ReLU()(x)
            x = self.linear2(x)
            return x
```
Users may not be aware of it, though.
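One way to make that more visible would be an import-time warning rather than a silent skip; a rough sketch (the message wording is hypothetical):

```python
import importlib.util
import warnings

# Warn loudly instead of silently skipping, so users on Python 3.13
# know why the quantization tests (and feature) are unavailable.
if importlib.util.find_spec("modelopt") is None:
    warnings.warn(
        "nvidia-modelopt is not installed (no Python 3.13 wheels yet); "
        "quantization tests will be skipped."
    )
```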
```diff
@@ -62,6 +64,22 @@ def not_implemented(*args: List[Any], **kwargs: Dict[str, Any]) -> Any:
     return wrapper


+def needs_refit(f: Callable[..., Any]) -> Callable[..., Any]:
```
Maybe we can make this a bit more generic (like for any feature in the FeatureSet)
Think it mostly looks good. I think we should consider making the requires decorator a bit more generic and unify `needs_torch_tensorrt_runtime` and `needs_refit`, maybe into something like `needs_feature`. Maybe something for after 2.7.
> Think it mostly looks good. I think we should consider making the requires decorator a bit more generic and unify `needs_torch_tensorrt_runtime` and `needs_refit`, maybe into something like `needs_feature`. Maybe something for after 2.7.

Sure, will think about it for the next release.
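A rough sketch of what a unified `needs_feature` decorator could look like, assuming an `ENABLED_FEATURES` object exposing the FeatureSet flags as booleans (the attribute names and lookup here are illustrative, not the library's actual API):

```python
from functools import wraps
from types import SimpleNamespace
from typing import Any, Callable

# Placeholder for the library's feature set; in torch_tensorrt this would
# come from the runtime rather than being defined here.
ENABLED_FEATURES = SimpleNamespace(torch_tensorrt_runtime=True, refit=False)

def needs_feature(feature: str) -> Callable[[Callable[..., Any]], Callable[..., Any]]:
    """Gate a function on a named flag in the feature set."""

    def decorator(f: Callable[..., Any]) -> Callable[..., Any]:
        @wraps(f)
        def wrapper(*args: Any, **kwargs: Any) -> Any:
            if getattr(ENABLED_FEATURES, feature, False):
                return f(*args, **kwargs)
            raise NotImplementedError(
                f"{f.__name__} requires the '{feature}' feature, "
                "which is not available in this build"
            )

        return wrapper

    return decorator

# Usage: @needs_feature("torch_tensorrt_runtime") or @needs_feature("refit")
# would then replace the separate needs_torch_tensorrt_runtime / needs_refit
# decorators.
```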
zewenli98 added a commit that referenced this pull request