List of supported models via the /chat API

Each entry below lists the module, the provider, the supported model full names, and the environments in which the models are available.
saia.models.openai OpenAI
  • openai/gpt-5.2
  • openai/gpt-5.2-2025-12-11
  • openai/gpt-5.2-chat-latest
  • openai/gpt-5.2-pro
  • openai/gpt-5.1-codex-mini
  • openai/gpt-5.1-codex
  • openai/gpt-5.1
  • openai/gpt-5-pro
  • openai/gpt-5-codex
  • openai/gpt-5-chat-latest
  • openai/gpt-5(1)
  • openai/gpt-5-mini(1)
  • openai/gpt-5-nano(1)
  • openai/gpt-4o
  • openai/gpt-4o-mini
  • openai/gpt-4o-2024-11-20
  • openai/gpt-4.1
  • openai/gpt-4.1-mini
  • openai/gpt-4.1-nano
  • openai/gpt-image-1.5
  • openai/o1(1)
  • openai/o1-pro(1)
  • openai/o3(1)
  • openai/o3-pro(1)
  • openai/o3-mini(1)
  • openai/o4-mini(1)
  • openai/codex-mini-latest
  • openai/o4-mini-deep-research(3)
  • openai/o3-deep-research(3)
  • openai/gpt-4o-search-preview(2)
  • openai/gpt-4o-mini-search-preview(2)
  Environments: Beta, Production
 
saia.models.googlevertexai Google VertexAI
  • vertex_ai/gemini-3-flash-preview
  • vertex_ai/gemini-3-pro-preview
  • vertex_ai/gemini-3-pro-image-preview
  • vertex_ai/openai-gpt-oss-20b-maas
  • vertex_ai/openai-gpt-oss-120b-maas
  • vertex_ai/gemini-2.0-flash
  • vertex_ai/gemini-2.0-flash-lite
  • vertex_ai/gemini-2.0-flash-001
  • vertex_ai/gemini-2.5-pro
  • vertex_ai/gemini-2.5-flash
  • vertex_ai/gemini-2.5-flash-lite
  • vertex_ai/gemini-2.5-flash-image
  • vertex_ai/deepseek-r1-0528-maas
  • vertex_ai/claude-opus-4-5-20251101
  • vertex_ai/claude-opus-4-1-20250805
  • vertex_ai/claude-opus-4-20250514
  • vertex_ai/claude-sonnet-4-5 
  • vertex_ai/claude-haiku-4-5
  • vertex_ai/claude-sonnet-4-20250514
  • vertex_ai/claude-3-7-sonnet-20250219(5)
  • vertex_ai/claude-3-5-haiku-20241022(5)
  • vertex_ai/mistral-small-2503
  • vertex_ai/mistral-large-2411
  • vertex_ai/codestral-2501
  Environments: Beta, Production
 
saia.models.azure Azure OpenAI
  • azure/gpt-4.1
  • azure/gpt-4.1-mini
  • azure/gpt-4.1-nano
  • azure/gpt-4o
  • azure/gpt-4o-mini
  • azure/o1(1)
  • azure/o3(1)
  • azure/o3-mini(1)
  • azure/o4-mini(1)
  • azure/o1-mini(1)
  Environments: Beta, Production
 
saia.models.anthropic Anthropic
  • anthropic/claude-haiku-4-5-20251001
  • anthropic/claude-opus-4-5-20251101
  • anthropic/claude-sonnet-4-5-20250929
  • anthropic/claude-opus-4-20250514
  • anthropic/claude-sonnet-4-20250514
  • anthropic/claude-opus-4-1-20250805
  • anthropic/claude-3-5-haiku-20241022(5)
  • anthropic/claude-3-7-sonnet-latest(5)
  Environments: Beta, Production
saia.models.awsbedrock AWS Bedrock
  • awsbedrock/global.anthropic.claude-opus-4-5-20251101-v1:0
  • awsbedrock/openai.gpt-oss-120b-1:0
  • awsbedrock/openai.gpt-oss-20b-1:0
  • awsbedrock/us.anthropic.claude-opus-4-1-20250805-v1:0
  • awsbedrock/us.anthropic.claude-sonnet-4-5-20250929-v1:0
  • awsbedrock/us.anthropic.claude-opus-4-20250514-v1:0
  • awsbedrock/us.anthropic.claude-sonnet-4-20250514-v1:0
  • awsbedrock/anthropic.claude-3-7-sonnet(5)
  • awsbedrock/anthropic.claude-3.5-haiku(5)
  • awsbedrock/meta.llama3-8b
  • awsbedrock/meta.llama3-70b
  • awsbedrock/meta.llama3-1-70b
  • awsbedrock/meta.llama3-1-405b
  • awsbedrock/amazon.nova-pro-v1:0
  • awsbedrock/amazon.nova-lite-v1:0
  • awsbedrock/amazon.nova-micro-v1:0
  • awsbedrock/meta.llama3-2-1b
  • awsbedrock/meta.llama3-2-3b
  • awsbedrock/meta.llama3-2-11b
  • awsbedrock/meta.llama3-2-90b
  • awsbedrock/us.deepseek.r1-v1:0
  Environments: Beta, Production
 
saia.models.xai xAI
  • xai/grok-4-fast-reasoning
  • xai/grok-4-1-fast-reasoning
  • xai/grok-4
  • xai/grok-3
  • xai/grok-3-mini
  • xai/grok-2-vision-1212
  • xai/grok-4-fast-non-reasoning
  • xai/grok-code-fast-1
  Environments: Beta, Production
saia.models.cohere Cohere
  • cohere/command-r-08-2024
  • cohere/command-r-plus-08-2024
  • cohere/command-r7b-12-2024
  • cohere/command-a-03-2025
  Environments: Beta, Production
saia.models.azure.foundry Azure AI Foundry
  • azure_ai_foundry/grok-3
  • azure_ai_foundry/grok-3-mini
  • azure_ai_foundry/gpt-4.1
  • azure_ai_foundry/gpt-4.1-mini
  • azure_ai_foundry/DeepSeek-V3-0324
  • azure_ai_foundry/DeepSeek-R1-0528
  • azure_ai_foundry/gpt-oss-120b
  • azure_ai_foundry/DeepSeek-R1
  • azure_ai_foundry/Phi-4
  • azure_ai_foundry/Phi-4-mini-instruct
  • azure_ai_foundry/Phi-4-mini-reasoning
  • azure_ai_foundry/Phi-4-multimodal-instruct
  Environments: Beta, Production
 

saia.models.globantdgx Globant DGX
  • globant_dgx/Qwen3-235B-A22B
  Environments: Globant Clients and Corp

saia.models.openrouter OpenRouter
  • openrouter/qwen3-8b:free
  • openrouter/qwen3-14b:free
  • openrouter/qwen3-30b-a3b:free
  • openrouter/qwen3-32b:free
  • openrouter/qwen3-235b-a22b:free
  Environments: Beta
 
saia.models.mistral Mistral AI
  • mistral/magistral-medium-latest
  • mistral/mistral-medium-latest
  • mistral/codestral-latest
  • mistral/mistral-saba-latest
  • mistral/mistral-large-latest
  • mistral/pixtral-large-latest
  • mistral/ministral-3b-latest
  • mistral/ministral-8b-latest
  • mistral/devstral-small-latest
  • mistral/mistral-small-latest
  • mistral/pixtral-12b-2409
  • mistral/open-mistral-nemo
  • mistral/magistral-small-latest
  Environments: Beta
 
saia.models.deepseek DeepSeek
  • deepseek/deepseek-chat
  • deepseek/deepseek-reasoner
  Environments: Beta
saia.models.groq Groq
  • groq/openai-gpt-oss-120b
  • groq/openai-gpt-oss-20b
  • groq/moonshotai-kimi-k2-instruct
  • groq/llama-3.3-70b-versatile
  • groq/llama-3.1-8b-instant
  • groq/meta-llama-4-scout-17b-16e-instruct
  • groq/meta-llama-4-maverick-17b-128e-instruct
  • groq/qwen3-32b
  • groq/deepseek-r1-distill-llama-70b
  • groq/mistral-saba-24b
  Environments: Beta
saia.models.nvidia NVidia
  • nvidia/nvidia.nemotron-mini-4b-instruct
  • nvidia/meta.llama-3.1-8b-instruct
  • nvidia/meta.llama-3.1-70b-instruct
  • nvidia/meta.llama-3.1-405b-instruct
  • nvidia/meta.llama-3.2-3b-instruct
  • nvidia/meta-llama-4-scout-17b-16e-instruct
  • nvidia/llama-3.3-nemotron-super-49b-v1
  • nvidia/llama-3.1-nemotron-70b-instruct
  • nvidia/meta.llama-3.2-1b-instruct
  • nvidia/llama-3.1-nemotron-ultra-253b-v1
  • nvidia/meta-llama-4-maverick-17b-128e-instruct
  • nvidia/deepseek-ai-deepseek-r1
  Environments: Beta
 
saia.models.sambanova SambaNova
  • sambanova/Meta-Llama-3.3-70B-Instruct
  • sambanova/Llama-4-Maverick-17B-128E-Instruct
  • sambanova/DeepSeek-R1-Distill-Llama-70B
  Environments: Beta
 
saia.models.cerebras Cerebras
  • cerebras/gpt-oss-120b
  • cerebras/llama3.1-8b
  • cerebras/llama-3.3-70b
  • cerebras/llama-4-scout-17b-16e-instruct
  Environments: Beta
 
saia.models.inception Inception Labs
  • inception/mercury(4)
  • inception/mercury-coder(4)
  Environments: Beta
(1) - To use these models, the temperature must be set to 1; see Reasoning models.
(2) - These models do not support the temperature parameter in the request body.
(3) - These models are only available via the Responses API.
(4) - Diffusion LLM (dLLM).
(5) - These models will be deprecated soon; check Deprecated Models for migration details.


Last update: December 2025 | © GeneXus. All rights reserved. Powered by Globant.