Jupyter-ai plugin receives RateLimitError only when using the chat feature

We have jupyter-ai installed on our Z2JH cluster.

From inside JupyterLab:

$ pip freeze | grep ai
jupyter_ai==2.31.2
jupyter_ai_magics==2.31.2
langchain==0.3.23
langchain-anthropic==0.3.10
langchain-aws==0.2.18
langchain-community==0.3.21
langchain-core==0.3.51
langchain-openai==0.3.12
langchain-text-splitters==0.3.8
openai==1.72.0

The API key is set as an environment variable:

$ printenv | grep AI
OPENAI_API_KEY=sk-proj-***

And the defaults are set in jupyter_server_config.py:

import os

c.AiExtension.allowed_providers = ["openai", "openai-chat", "anthropic", "anthropic-chat", "bedrock", "bedrock-chat"]
c.AiExtension.default_language_model = "openai-chat:gpt-4o"
c.AiMagics.default_language_model = "openai-chat:gpt-4o"
c.AiExtension.default_embeddings_model = "openai:text-embedding-3-large"
c.AiExtension.default_api_keys = {'OPENAI_API_KEY': os.getenv("OPENAI_API_KEY"), 'ANTHROPIC_API_KEY': os.getenv("ANTHROPIC_API_KEY")}

I can use the extension without any issue via the %%ai magics command:

[screenshot: %%ai magic cell returning a normal response]
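
For reference, the invocation is roughly the following (the prompt is arbitrary; the model is the configured default):

# cell 1: load the magics extension
%load_ext jupyter_ai_magics

# cell 2: run a prompt against the same provider/model the chat uses
%%ai openai-chat:gpt-4o
What is the capital of France?

This streams back a normal completion, with no 429.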

But when I use the ‘chat’ feature I get an error:

[screenshot: chat panel replying with an error message]

The full error:

Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/jupyter_ai/chat_handlers/base.py", line 229, in on_message
    await self.process_message(message)
  File "/usr/local/lib/python3.11/site-packages/jupyter_ai/chat_handlers/default.py", line 72, in process_message
    await self.stream_reply(inputs, message)
  File "/usr/local/lib/python3.11/site-packages/jupyter_ai/chat_handlers/base.py", line 567, in stream_reply
    async for chunk in chunk_generator:
  File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 5630, in astream
    async for item in self.bound.astream(
  File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 5630, in astream
    async for item in self.bound.astream(
  File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3467, in astream
    async for chunk in self.atransform(input_aiter(), config, **kwargs):
  File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3449, in atransform
    async for chunk in self._atransform_stream_with_config(
  File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2319, in _atransform_stream_with_config
    chunk: Output = await asyncio.create_task(  # type: ignore[call-arg]
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3416, in _atransform
    async for output in final_pipeline:
  File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 5669, in atransform
    async for item in self.bound.atransform(
  File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 5012, in atransform
    async for output in self._atransform_stream_with_config(
  File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2319, in _atransform_stream_with_config
    chunk: Output = await asyncio.create_task(  # type: ignore[call-arg]
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4992, in _atransform
    async for chunk in output.astream(
  File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 5630, in astream
    async for item in self.bound.astream(
  File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3467, in astream
    async for chunk in self.atransform(input_aiter(), config, **kwargs):
  File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3449, in atransform
    async for chunk in self._atransform_stream_with_config(
  File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2319, in _atransform_stream_with_config
    chunk: Output = await asyncio.create_task(  # type: ignore[call-arg]
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3416, in _atransform
    async for output in final_pipeline:
  File "/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/transform.py", line 87, in atransform
    async for chunk in self._atransform_stream_with_config(
  File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2278, in _atransform_stream_with_config
    final_input: Optional[Input] = await py_anext(input_for_tracing, None)
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 78, in anext_impl
    return await __anext__(iterator)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 128, in tee_peer
    item = await iterator.__anext__()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1478, in atransform
    async for output in self.astream(final, config, **kwargs):
  File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 540, in astream
    async for chunk in self._astream(
  File "/usr/local/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 2376, in _astream
    async for chunk in super()._astream(*args, **kwargs):
  File "/usr/local/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 1069, in _astream
    response = await self.async_client.create(**payload)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/resources/chat/completions/completions.py", line 2000, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1767, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1461, in request
    return await self._request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1547, in _request
    return await self._retry_request(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1594, in _retry_request
    return await self._request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1547, in _request
    return await self._retry_request(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1594, in _retry_request
    return await self._request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1562, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}

I’d have assumed this was an OpenAI/API-key issue, but the key works just fine with magics (and I’m not seeing any rate limiting on our account/project key).
The key also works if I use it directly in a Python script:

[screenshot: standalone Python script using the same key successfully]
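
In essence, that check is just the following (a minimal sketch; the prompt is arbitrary):

# sanity check: call the same model with the same key, outside jupyter-ai
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)  # completes normally, no 429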

Interestingly, one user finds that (in only one of our two instances [dev/prod]) they CAN use the chat feature without issue.

[screenshot: that user's chat panel responding normally]

I’m not able to find anything different in their pod compared to another user's pod in the same instance, or compared to their own pod in the other instance.
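
One thing I still want to rule out is the chat UI using a key saved in jupyter-ai's per-user settings rather than the environment variable. A rough per-pod comparison I plan to run (the config path and the "api_keys" field are my assumptions about where jupyter-ai persists chat settings; adjust for your Jupyter data dir):

# compare the env key against any key the chat UI may have persisted
# NOTE: the path and the "api_keys" field are assumptions about where
# jupyter-ai stores per-user chat settings on our images
import json
import os
import pathlib

env_key = os.environ.get("OPENAI_API_KEY", "")
cfg_path = pathlib.Path.home() / ".local/share/jupyter/jupyter_ai/config.json"

if cfg_path.exists():
    saved = json.loads(cfg_path.read_text()).get("api_keys", {})
    saved_key = saved.get("OPENAI_API_KEY", "")
    print("env key prefix  :", env_key[:12])
    print("saved key prefix:", saved_key[:12])
    print("match:", env_key == saved_key if saved_key else "no saved key")
else:
    print("no per-user jupyter-ai config at", cfg_path)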