Sudden Spike in 429 Errors with Gemini 2.5 via Vertex AI Global Endpoint
January 22, 2026, 4:45am 1
We’re experiencing a dramatic increase in HTTP 429 rate-limit errors when calling the Gemini API through Vertex AI. The issue appeared within the past few days.
Since we’re using the global endpoint with Dynamic Shared Quota (DSQ), we don’t believe this is a straightforward quota limitation.
Configuration:
- Platform: Google Cloud Vertex AI
- Endpoint: Global region
- Models: Gemini 2.5 series (text generation)
- Method:
google.cloud.aiplatform.v1beta1.PredictionService.StreamGenerateContent
30-Day Metrics:
- Total Requests: 156,579
- Error Rate: 1.01%
- Average Latency: 10.538 seconds
- 99th Percentile Latency: 32.764 seconds
(Chart: daily error rate trend over the past 30 days)
Questions:
- Could our system prompt characteristics be influencing rate limiting behavior?
- What could be causing this sudden rate limit increase? Are there known platform issues or any changes?
Getting an error rate of almost 100% today.
I started seeing a spike in 429s as well with gemini-2.5-flash-image. It sure seems like something changed on Google’s side. The API has become nearly unusable due to the volume of 429s. Does not seem like normal DSQ behavior as I see the same error rate consistently every day for the last few days now.
January 29, 2026, 1:24am 4
Over 80% of the requests are for the Gemini 2.5 Pro model.
Help! From one of my servers I am not getting any 429 errors, but from two servers located on customer premises (different URL than mine) I am getting a ton of 429s. I tested with the same Vertex key to rule out quota issues. I also tried moving to global instead of us-central1, with no change. Has anyone found a solution or a tip that works?
March 20, 2026, 8:26pm 6
This or Provisioned Throughput are the only ways I can get it to run consistently. Google is pay to play: pay more, get more priority in resources, it would seem.
I’ve fought them for a long time on this. They point to other mitigations, like retrying with backoff and jitter. When I had a call with them, they told me that if I moved to Priority Pay-as-you-go, they would expect my errors to drop to 0. Long story short, I have an intense use case, so they didn’t, but it’s much more usable now and seems to work better. Gemini 3.1 is off the table for now; it takes a lot longer and I get more errors.
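For anyone who hasn’t implemented the retry-with-backoff-and-jitter approach mentioned above, here is a minimal sketch in TypeScript. The helper names (`computeBackoffMs`, `callWithRetry`) are illustrative, not part of the `@google/genai` SDK, and the error-shape check is an assumption you should adapt to whatever your client actually throws on a 429:

```typescript
// Illustrative full-jitter exponential backoff for retrying 429s.
// Not part of any Google SDK; adapt the error check to your client.

function computeBackoffMs(attempt: number, baseMs = 500, capMs = 30_000): number {
  // Exponential ceiling: baseMs * 2^attempt, capped at capMs.
  const ceiling = Math.min(capMs, baseMs * 2 ** attempt);
  // "Full jitter": a uniform random delay in [0, ceiling], so that
  // many clients retrying at once don't synchronize into a thundering herd.
  return Math.random() * ceiling;
}

async function callWithRetry<T>(fn: () => Promise<T>, maxAttempts = 5): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      // Assumed error shape: retry only on 429s, and give up after maxAttempts.
      const is429 = err?.status === 429 || err?.code === 429;
      if (!is429 || attempt + 1 >= maxAttempts) throw err;
      await new Promise((resolve) => setTimeout(resolve, computeBackoffMs(attempt)));
    }
  }
}
```

Under DSQ this won’t eliminate 429s when capacity is genuinely constrained, but it spreads retries out instead of hammering the endpoint, which is what Google’s guidance amounts to.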
dhruvil April 8, 2026, 5:38am 7
Hey, did you manage to fix this? How did you handle it?
I’m actually running into the exact same issue with gemini-2.5-flash-image getting a consistent spike in 429s over the past few days with vertex ai.
Yes, I managed to fix it by switching from consuming the model via Vertex AI (using GCP service account) to Gemini API (using Gemini API key). I don’t love having to use an API key instead of a service account, but the 429s went away when doing this.
Before:

```typescript
const googleGenAIForImages = new GoogleGenAI({
  vertexai: true,
  project: config.gcp.project,
  location: "global",
  httpOptions: {
    timeout: imageGenerationTimeoutMs,
  },
});
```

After:

```typescript
const googleGenAIForImages = new GoogleGenAI({
  apiKey: config.gemini.apiKey,
  httpOptions: {
    timeout: imageGenerationTimeoutMs,
  },
});
```