Using Web Search

Use web search with litellm

Feature | Details
--- | ---
Supported Endpoints | /chat/completions, /responses
Supported Providers | openai, xai, vertex_ai, gemini
LiteLLM Cost Tracking | ✅ Supported
LiteLLM Version | v1.71.0+

/chat/completions (litellm.completion)

Quick Start

from litellm import completion

response = completion(
    model="openai/gpt-4o-search-preview",
    messages=[
        {
            "role": "user",
            "content": "What was a positive news story from today?",
        }
    ],
    web_search_options={
        "search_context_size": "medium"  # Options: "low", "medium", "high"
    }
)
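
The response is a standard chat completion object. A minimal sketch for reading the answer; note that the annotations field (URL citations) is provider-dependent and may be absent, so treat it as an assumption:

# Print the model's answer
print(response.choices[0].message.content)

# Some providers attach URL citations as message annotations
# (assumption: this field may be absent depending on provider/model)
annotations = getattr(response.choices[0].message, "annotations", None)
if annotations:
    for annotation in annotations:
        print(annotation)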

Search context size

OpenAI (using web_search_options)

from litellm import completion

# Customize search context size
response = completion(
    model="openai/gpt-4o-search-preview",
    messages=[
        {
            "role": "user",
            "content": "What was a positive news story from today?",
        }
    ],
    web_search_options={
        "search_context_size": "low"  # Options: "low", "medium" (default), "high"
    }
)

xAI (using web_search_options)

from litellm import completion

# Customize search context size for xAI
response = completion(
    model="xai/grok-3",
    messages=[
        {
            "role": "user",
            "content": "What was a positive news story from today?",
        }
    ],
    web_search_options={
        "search_context_size": "high"  # Options: "low", "medium" (default), "high"
    }
)

VertexAI/Gemini (using web_search_options)

from litellm import completion

# Customize search context size for Gemini
response = completion(
    model="gemini-2.0-flash",
    messages=[
        {
            "role": "user",
            "content": "What was a positive news story from today?",
        }
    ],
    web_search_options={
        "search_context_size": "low"  # Options: "low", "medium" (default), "high"
    }
)

/responses (litellm.responses)

Quick Start

from litellm import responses

response = responses(
    model="openai/gpt-4o",
    input=[
        {
            "role": "user",
            "content": "What was a positive news story from today?"
        }
    ],
    tools=[{
        "type": "web_search_preview"  # enables web search with default medium context size
    }]
)
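
To read the result, walk the output items on the response. A minimal sketch, assuming the returned object follows the OpenAI Responses API shape (message items containing output_text content parts):

# Print any text output the model produced
for item in response.output:
    if getattr(item, "type", None) == "message":
        for part in item.content:
            if getattr(part, "type", None) == "output_text":
                print(part.text)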

Search context size

from litellm import responses

# Customize search context size
response = responses(
    model="openai/gpt-4o",
    input=[
        {
            "role": "user",
            "content": "What was a positive news story from today?"
        }
    ],
    tools=[{
        "type": "web_search_preview",
        "search_context_size": "low"  # Options: "low", "medium" (default), "high"
    }]
)

Configuring Web Search in config.yaml

You can set default web search options directly in your proxy config file:

model_list:
  # Enable web search by default for all requests to this model
  - model_name: grok-3
    litellm_params:
      model: xai/grok-3
      api_key: os.environ/XAI_API_KEY
      web_search_options: {}  # Enables web search with default settings

Note: When web_search_options is set in the config, it applies to all requests to that model. Users can still override these settings by passing web_search_options in their API requests.
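
For example, a client can override the default per request by sending web_search_options in the request body. A sketch using the OpenAI Python SDK pointed at the proxy (the base URL and API key below are placeholders):

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",  # your LiteLLM proxy URL (placeholder)
    api_key="sk-1234",                 # your proxy API key (placeholder)
)

response = client.chat.completions.create(
    model="grok-3",
    messages=[{"role": "user", "content": "What was a positive news story from today?"}],
    # Override the proxy's default web_search_options for this request
    extra_body={"web_search_options": {"search_context_size": "high"}},
)
print(response.choices[0].message.content)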

Use litellm.supports_web_search(model="model_name") to check whether a model can perform web searches; it returns True if the model supports web search.

import litellm

# Check OpenAI models
assert litellm.supports_web_search(model="openai/gpt-4o-search-preview") == True

# Check xAI models
assert litellm.supports_web_search(model="xai/grok-3") == True

# Check VertexAI models
assert litellm.supports_web_search(model="gemini-2.0-flash") == True

# Check Google AI Studio models
assert litellm.supports_web_search(model="gemini/gemini-2.0-flash") == True
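
This check makes a convenient guard before enabling search options at runtime. A minimal sketch:

import litellm
from litellm import completion

model = "openai/gpt-4o-search-preview"

# Only pass web_search_options when the model supports web search
kwargs = {}
if litellm.supports_web_search(model=model):
    kwargs["web_search_options"] = {"search_context_size": "medium"}

response = completion(
    model=model,
    messages=[{"role": "user", "content": "What was a positive news story from today?"}],
    **kwargs,
)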