Request: Allowlist Access for Gemini 3.1 Pro Preview — Vertex AI
Hello Google AI Team,
I am requesting allowlist access to Gemini 3.1 Pro Preview on Vertex AI for my project.
Project Details:
- Region: us-central1
- Models requested: gemini-3.1-pro-preview, gemini-3-flash-preview
Use Case:
I am participating in the NVIDIA Nemotron Model Reasoning Challenge on Kaggle — a competition focused on improving logical reasoning capabilities of large language models. My team is building a fine-tuning pipeline that requires high-quality Chain-of-Thought (CoT) reasoning traces for training data distillation.
Gemini 3.1 Pro’s advanced reasoning capabilities would significantly improve the quality of our synthetic training data, particularly for complex algebraic and symbolic puzzles where current models struggle.
Why Gemini 3.1 Pro specifically:
- Superior structured reasoning compared to 2.5 Pro for multi-step logic puzzles
- Thinking mode capabilities aligned with our CoT generation requirements
- We need the strongest available reasoning model as a teacher for knowledge distillation
Expected usage: ~10,000-15,000 API calls over 2-3 weeks, well within standard quotas.
I already have billing enabled and the Vertex AI API activated on this project, but I am currently receiving a 404 NOT_FOUND error when attempting to access the model.
Thank you for your consideration. Happy to provide any additional information needed.
I’m determined to give this challenge everything I’ve got — but please don’t make me win it using only a competitor’s AI when Google’s best model is right there!
Give me access to Gemini 3.1 Pro and let me show what your technology can do on the world stage.
Best regards,
Tito_42
I ran into this exact same hurdle!
The 404 NOT_FOUND error you’re seeing isn’t actually an allowlist issue—it’s an endpoint routing mismatch. Currently, the Gemini 3.1 preview models on Vertex AI are only served via the global endpoint.
Even though your project and other resources might be set to us-central1, the model request itself needs to be pointed globally for this phase.
Note: This is separate from where your Agents or Vertex AI Search data stores are deployed. You can keep your infrastructure regional, but the model inference for Gemini 3.1 must route through the global gateway.
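To make the routing difference concrete, here's a minimal sketch of how the request URL changes. The project ID and the exact URL layout for the preview models are assumptions on my part (it follows the standard Vertex AI `generateContent` REST pattern); the point is just that the global route drops the region prefix from the hostname and uses `global` as the location:

```python
# Sketch: regional vs. global Vertex AI endpoints for a generateContent call.
# "my-project" is a placeholder project ID, not a real one.

def vertex_model_endpoint(project: str, model: str, location: str) -> str:
    """Build the Vertex AI generateContent URL for a given location.

    Regional requests go to <location>-aiplatform.googleapis.com;
    global requests use the bare aiplatform.googleapis.com host.
    """
    if location == "global":
        host = "aiplatform.googleapis.com"
    else:
        host = f"{location}-aiplatform.googleapis.com"
    return (
        f"https://{host}/v1/projects/{project}/locations/{location}"
        f"/publishers/google/models/{model}:generateContent"
    )

# Regional routing -- this is the path that 404s for the preview model:
print(vertex_model_endpoint("my-project", "gemini-3.1-pro-preview", "us-central1"))

# Global routing -- what the Gemini 3.1 preview models need:
print(vertex_model_endpoint("my-project", "gemini-3.1-pro-preview", "global"))
```

If you're on the google-genai SDK rather than raw REST, I believe the equivalent is just passing `location="global"` when you construct the client (e.g. `genai.Client(vertexai=True, project=..., location="global")`) instead of a regional value.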
Good luck with the NVIDIA Nemotron Reasoning Challenge—that sounds like an awesome use case for CoT distillation!