Widget fails to display response from LLM

December 26, 2024, 3:03am

I am running a test with the phidata module to check the response from the LLM, using interactive mode in VS Code:


from phi.agent import Agent
from phi.model.groq import Groq
from dotenv import load_dotenv

load_dotenv()  # load environment variables (e.g. GROQ_API_KEY) from .env

agent = Agent(
    model=Groq(id="llama-3.3-70b-versatile")
)

agent.print_response("Describe briefly how checks and balances work in U.S. politics")

The output from the terminal looks good; it gives a short paragraph.
But here’s the output from interactive mode:

(Screenshot of the interactive-mode output, taken 2024-12-25 at 9:57 PM)

Does anyone know how to fix it? Thanks!
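
In case it helps, here is the workaround I have been experimenting with in the meantime: calling agent.run() and printing the content myself, instead of relying on print_response()'s rich rendering. This is just a sketch based on my understanding of the phidata API, where run() returns a response object with a .content attribute:

from phi.agent import Agent
from phi.model.groq import Groq
from dotenv import load_dotenv

load_dotenv()  # load environment variables (e.g. GROQ_API_KEY) from .env

agent = Agent(
    model=Groq(id="llama-3.3-70b-versatile")
)

# run() is assumed to return a response object whose .content holds the text;
# printing it directly sidesteps the rich/live display that print_response() uses.
response = agent.run("Describe briefly how checks and balances work in U.S. politics")
print(response.content)

This prints plain text fine in the terminal, but I would still like to know why print_response() breaks in interactive mode.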