LiteLLM¶
This example demonstrates how to use the LiteLLM library to make calls to the Gemini API. It shows a simple text-generation call, then structured output using a Pydantic model.
Import the necessary libraries. Make sure that LiteLLM and Pydantic are installed.
With this first example, we'll make a simple text generation call.
from litellm import completion

response = completion(
    model="gemini/gemini-2.0-flash-lite",
    messages=[{"role": "user", "content": "Hello what is your name?"}],
)
print(response.choices[0].message.content)
Now for a slightly more involved example: we define a Pydantic model and use it to specify the response format. We'll use the same prompt as before, but this time we'll require that the response be a JSON object matching the Response model.
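The Response model itself isn't shown in the snippet; based on the example output at the end of the page, a minimal definition might look like this (field names inferred from that output):

```python
from pydantic import BaseModel

class Response(BaseModel):
    # Fields inferred from the example output: a quality flag plus the reply text.
    good_response: bool
    response: str
```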
import json

response = completion(
    model="gemini/gemini-2.0-flash-lite",
    messages=[{"role": "user", "content": "Hello what is your name?"}],
    response_format={
        "type": "json_object",
        "response_schema": Response.model_json_schema(),
    },
)
# The content arrives as a JSON string; parse it into a dict before printing.
print(json.loads(response.choices[0].message.content))
The message content comes back as a JSON string that matches the Response model's schema.
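If you want typed access rather than a raw dict, that JSON string can also be validated back into the Pydantic model. A minimal sketch (the model is declared inline here, with fields inferred from the example output below, and the raw string stands in for an actual API reply):

```python
from pydantic import BaseModel

class Response(BaseModel):
    # Fields inferred from the example output below.
    good_response: bool
    response: str

# Stand-in for response.choices[0].message.content from the API call.
raw = '{"good_response": false, "response": "I am a large language model."}'
parsed = Response.model_validate_json(raw)
print(parsed.good_response)  # -> False
print(parsed.response)
```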
Running the Example¶
First, install the LiteLLM and Pydantic libraries:

$ pip install litellm pydantic

Then run the program with Python:

$ python litellm.py
I am a large language model, trained by Google. I don't have a name in the traditional sense. You can just call me by what I am!
{'good_response': False, 'response': "I am a large language model, I don't have a name."}