Version 2026-02 was officially released on Globant Enterprise AI on March 23, 2026, and includes additional updates delivered in Hot Fix 1 (April 1), Hot Fix 2 (April 9), and Hot Fix 3 (April 16).
- LLMs
- Improved LiteLLM server hardening and configuration by updating the container image and standardizing environment variables.
- Added a new Anthropic API key.
- Fixed several security vulnerabilities.
- Security
- Fixed unexpected 403 responses from AWS WAF for long callback URLs so you can send Base64-encoded parameters without being blocked.
- Improved database credential management so you retrieve DB passwords from a secrets manager rather than static configuration.
- Updated application behavior to read credentials from a secure secrets backend.
- Updated documentation to reflect the new password management process.
- API
- Fixed Chat API routing for OpenAI gpt-5.2+ models so you can use tools together with reasoning_effort without receiving HTTP 400 errors.
- Automatically switches to the /v1/responses endpoint when a gpt-5.2+ model is invoked with tools and reasoning_effort.
- Ensures consistent behavior across /chat, /v1/chat/completions, and /chat/completions.
- Keeps existing behavior for requests without reasoning_effort or without tools.
- New secure file download endpoint via OAuth so you can access conversation files using private, authenticated URLs instead of signed links.
- Returns the file directly and masks storage URLs (S3/Azure).
- Validates permissions using your OAuth/API token.
- Applies to files rendered in chats and to links in RAG answers.
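A client would call the new endpoint with its OAuth token instead of following a signed storage link. The sketch below only builds the request (path shape and parameter names are assumptions for illustration); the key point is the Bearer token replacing pre-signed URLs.

```python
from urllib.request import Request

def build_file_download_request(base_url: str, conversation_id: str,
                                file_id: str, oauth_token: str) -> Request:
    """Build an authenticated request for a private file-download endpoint.
    The URL path here is a hypothetical example, not the documented route:
    the notes only specify that downloads are OAuth-protected."""
    url = f"{base_url}/conversations/{conversation_id}/files/{file_id}"
    return Request(url, headers={"Authorization": f"Bearer {oauth_token}"})

req = build_file_download_request("https://api.example.com", "conv-1", "file-9", "tok123")
```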
- Tools/Agents
- Fixed Firecrawl web_scraper_post failures when combined with web_search so you can reliably retrieve page content.
- Corrects request body serialization for the URL parameter to prevent 422 errors and “Content Retrieval Failed.”
- Improved stability of the com.globant.geai.create_image Tool so you can generate images reliably across supported models.
- Eliminates “max_tokens or model output limit was reached” errors seen with models such as gemini-2.5-flash-image, gpt-4.1-mini, gpt-4, and gpt-5.
- Aligns behavior between Lab agent tests and the Workspace and reduces generation latency.
- Workspace
- Fixed chat responses stalling or never completing in Workspace (Legacy) so you receive the final message without refreshing the page or losing context.
- Addresses cases where the Console recorded a full response but the UI did not render it.
- Fixed disappearance of conversation history in Workspace so you can access prior messages normally.
- Restores visibility for impacted users.
- Fixed rendering of images generated by the create_image Tool so you can view generated images inline in chat.
- Lab
- Fixed the inability to add Integration parameters so you can configure integrations in QA and Production environments.
- Security
- Security remediation: Updated LiteLLM in the LLM Server to v1.83.0 so you receive fixes for critical and high vulnerabilities and a verified clean build.
- Remediates CVE-2026-35030 (CRITICAL) and CVE-2026-35029 (HIGH).
- Rebuilt via the CI/CD v2 pipeline with forensic audit validation and resolves wolfi-based CVEs on rebuild.
- Fixed unauthorized access risk to uploaded files by replacing exposed signed URLs with OAuth-protected, private downloads.
- Removes pre-signed URLs from endpoint responses and enforces token-based authorization.
- API
- Improved LLM Server error handling so you receive structured, normalized responses for LiteLLM provider exceptions.
- Adds a user_message field with clear, human‑readable text.
- Preserves the original message for diagnostics and logs.
- Maps HTTP status codes consistently with LiteLLM (400, 401, 5XX, and unmapped), and preserves guardrail 422 responses.
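The normalization described above can be sketched as a mapping function. The field names user_message and the preserved provider message come from the notes; the specific wording and the collapse of unmapped codes to 500 are assumptions for illustration.

```python
def normalize_provider_error(status_code: int, provider_message: str) -> dict:
    """Sketch of structured error normalization: keep the raw provider
    message for diagnostics, add a human-readable user_message, pass
    through 400/401/5XX and guardrail 422, and collapse unmapped codes."""
    user_messages = {
        400: "The request was invalid. Check the parameters and try again.",
        401: "Authentication failed. Verify your API key or token.",
        422: "The request was blocked by a guardrail policy.",
    }
    if status_code in user_messages or 500 <= status_code < 600:
        mapped = status_code
    else:
        mapped = 500  # assumption: unmapped codes surface as a server error
    return {
        "status": mapped,
        "user_message": user_messages.get(mapped, "The provider returned an unexpected error."),
        "provider_message": provider_message,  # preserved for logs
    }
```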
- Lab
- Improved operational behavior when Request‑sensitive logging is enabled for all projects, reducing noise and preventing unexpected failures in SaaS workloads.
- Improved organization‑level execution across projects so Organization Admins can run agents without being explicitly added to each project.
- Fixed “Unauthorized” errors for Organization Admins when executing an agent in projects where they are not members, restoring expected capabilities.
- Station
- Improved organization‑level execution across projects so Organization Admins can trigger agents without project‑level membership.
- Fixed “Unauthorized” errors for Organization Admins when executing agents across projects.
- Integrations/Agents
- Fixed default visibility for Tools registered when launching an MCP server so newly registered Tools are private by default and not exposed across organizations or projects.
- RAG
- Fixed OmniParser failures when processing files whose names include accents or special characters, ensuring successful ingestion.
Below are the most important fixes and features introduced in this version.
- Station
- Enhanced publication flow from Lab to Station, clarifying visibility scopes and capturing solution metadata consistently to preserve privacy settings and maintain moderated organization-wide visibility.
- Tenant-isolated SaaS deployment option for Station, provisioned in a client’s cloud or on-prem environment, with IdP integration (Azure AD/Entra, Okta, SAML/OIDC) and in-tenant admin moderation for approvals.
- AI Pods client environments in Station, configurable per client organization, including IdP setup and an administrator role to moderate and publish only approved solutions.
- Improved confidential solution setup and usage in Station, enabling definition of execution privacy from Lab and consistent privacy indicators when solutions appear in Station.
- Agents are executed through Station.
- External Solutions invite flow
- Allows you to request external owners to publish or update solutions.
- Create and send invites with 48-hour secure links.
- Resolve invites on page load and preload proper form mode (new solution or new revision).
- Enforce one-time consumption on submission and return clear error states for expired, used, or invalid tokens.
- Persist and return organizationId in invite responses to guarantee tenant isolation (403 on org mismatch).
- Hide Subscription selection in the form; you submit null and Admins assign it later.
- Accept avatar images as base64; the backend stores the generated URL.
- Include an HTML email template aligned with Station branding.
- Improved Conversation APIs to return consistent execution and ownership context
- Return projectId and pluginProjectId on list and detail.
- Always enrich avatar, chatSharingPermissions, and externalExecutionPermissions in Get Conversation responses.
- Allow runs across projects when externalExecutionPermissions = organization.
- Allow open/close operations even if the related plugin is deleted, keeping conversation lifecycle independent.
- Improved pinned solutions behavior
- Allows pin/unpin even if the agent is deleted.
- Validations check the correct version/revision IDs.
- APIs
- Agents API: Allows you to restrict specific AI models by organization so that only permitted models can be used when creating, editing, or publishing an Agent.
- Improved outbound email hygiene to protect sender reputation and prevent SES blocks:
- Validate email syntax (RFC-like) and MX with A or AAAA fallback
- Apply suppression on bounces and support double opt-in
- Provide clear error responses on invalid destinations
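The syntax half of the hygiene checks above can be sketched with a deliberately simplified, RFC-like pattern. The MX lookup with A/AAAA fallback and the bounce-suppression list need a DNS resolver and a suppression store, so they are omitted; the pattern itself is an illustrative assumption, not the service's actual validator.

```python
import re

# Simplified RFC-like address pattern for a first-pass syntax check.
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def is_plausible_email(address: str) -> bool:
    """Return True when the address passes a basic syntax check.
    Consecutive dots are rejected explicitly since the simple character
    class above would otherwise allow them."""
    return bool(EMAIL_RE.match(address)) and ".." not in address
```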
- Improved HttpClient Java library to honor JVM proxy system properties, enabling standard proxying via JAVA_TOOL_OPTIONS without code changes.
- Fixed Organization API to exclude deleted project tokens from listings.
- Fixed invitation link routing so Lab targets no longer append /home incorrectly.
- Fixed AccessControl v2 role listings so accessType only includes Backend or Frontend as intended.
- LLMs
- New OpenAI GPT-5.4 family support
- openai/gpt-5.4
- openai/gpt-5.4-pro
- openai/gpt-5.4-mini
- openai/gpt-5.4-nano
- openai/gpt-5.3-chat-latest
- Enables faster, multimodal chat with streaming, function calling, and tool_choice.
- New Vertex AI image model coverage
- vertex_ai/gemini-3.1-flash-image-preview (Nano Banana 2).
- New Bedrock EU models (tenant-scoped)
- You can request EU-region Claude models (Opus/Sonnet 4.5/4.6) for regulated workloads.
- These models are hidden and priced with the regional premium.
- Improvements
- Improved Claude Sonnet 4.5 large-context runs:
- You automatically get anthropic-beta: context-1m-2025-08-07 for Claude Sonnet 4.5 calls routed via LiteLLM, enabling up to 1M tokens when supported.
- Improved LiteLLM engine:
- Upgraded to v1.81.12 for stability and compatibility with newest providers.
- Improved Azure OpenAI resilience:
- Automatic fallback from gpt-4o and gpt-4o-mini to gpt-5 and gpt-5-mini when required by Azure lifecycle changes.
- Improved Gemini file handling:
- Uses updated SupportedFileExtensions for multimodal Gemini families to match provider specs.
- Fixed prolonged “unexpected error” responses:
- vertex_ai/gemini-2.5-flash now uses improved service handling and fallbacks.
- Configuration
- New dynamic configuration for GEAI LLM Server:
- You can point the service to an external config via GEAI_LLMSERVER_CONFIG_URL to streamline custom deployments.
- PyGEAI v0.6.0
- New Python SDK diff command: you can compare resource versions or states from the CLI with user-friendly output.
- Responses API with Streaming
- Responses API Support: New get_response() method in ChatClient for accessing the Responses API endpoint
- Multimodal Inputs: Support for sending images and PDF files alongside text input
- Real-time Streaming: Stream responses in real time for immediate feedback
- Function Calling: Complete support for tools and tool_choice parameters for advanced function calling
- CLI Integration: New geai chat response command with comprehensive parameter support
- Authentication & Access Control
- API Token Management: Complete CLI and client support for managing project API tokens
- Access Control Endpoints: Manage organization and project memberships, roles, and permissions
- Agent Migration & Import/Export
- Migration Tools: Enhanced migration capabilities for agents, tools, and agentic processes between environments
- Embeddings Enhancement
- Improved Embeddings API: Enhanced embeddings generation with better parameter support
- Additional Parameters: Support for encoding format, dimensions, user tracking, input type, and caching options
- Analytics Module
- Complete Analytics API integration for monitoring platform usage, costs, and performance
- New pygeai.analytics module with AnalyticsClient, AnalyticsManager, and response models
- 35 analytics endpoints covering:
- Lab metrics
- Request metrics
- Cost metrics
- User and agent activity
- CLI commands (geai analytics):
- agents-created (ac) - Get agents created and modified counts
- requests-per-day (rpd) - Get daily request counts with error tracking
- total-cost (tc) - Get total cost for a period
- average-cost (ac) - Get average cost per request
- total-tokens (tt) - Get token consumption metrics
- error-rate (er) - Get overall error rate percentage
- top-agents (ta) - Get top 10 agents by requests
- active-users (au) - Get total active users count
- full-report (fr)
- Comprehensive analytics report with CSV export
- Generate a full report combining:
- Lab metrics (agents, flows, processes created or modified)
- Request metrics (total requests, errors, error rate, average time)
- Cost metrics (total cost, average cost per request)
- Token metrics (total tokens, average tokens per request)
- User and agent metrics (active users, agents, projects)
- Top performers (top 10 agents by requests or tokens, top 10 users by requests or cost)
- Export results using the --csv option
- Date range defaults
- All commands default to the previous month when you do not specify a date range
- Inline endpoint documentation and manager methods
- Access manager methods for all analytics endpoints:
- Lab: agents, flows, and processes created and modified (total and per day)
- Requests: total, per day, errors, error rate, average time, per user
- Cost: total, per day, average per request or user
- Tokens: total, per agent or day, average per request
- Activity: active users, agents, and projects, usage per user, averages
- Top performers: agents by requests or tokens, users by requests or cost
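The previous-month default mentioned above is easy to compute with the standard library. A minimal sketch, assuming the default is the previous full calendar month (first through last day):

```python
from datetime import date, timedelta

def previous_month_range(today: date) -> tuple[date, date]:
    """Sketch of the documented fallback: when no date range is given,
    analytics commands default to the previous calendar month."""
    first_of_this_month = today.replace(day=1)
    last_of_prev = first_of_this_month - timedelta(days=1)  # step back into prev month
    return last_of_prev.replace(day=1), last_of_prev
```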
- OAuth 2.0 Authentication Support
- Added access_token and project_id keyword-only parameters to all client classes
- OAuth authentication support in BaseClient, Session, and ApiService
- Automatic injection of Authorization: Bearer {token} and ProjectId headers
- Backward compatibility maintained with existing API key authentication
- Validation ensures both access_token and project_id are provided together
- Support for OAuth in clients such as AILabClient, AgentClient, ToolClient, AgenticProcessClient, EvaluationClient, and SecretClient
- ToolClient refactored to rely on ApiService for automatic header injection
- CLI Verbose Mode
- Global --verbose / -v flag for detailed debug logging
- Usage: geai --verbose or geai -v
- Enables DEBUG-level logging with detailed execution flow, including:
- Command identification and matching
- Option extraction and parsing
- Session and configuration information
- Execution flow and completion status
- Output format: YYYY-MM-DD HH:MM:SS - geai - LEVEL - message
- Debug logs are sent to stderr to avoid interfering with command output
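The logging behavior described above maps directly onto Python's standard logging module. A minimal sketch, assuming a plain StreamHandler setup (the actual CLI wiring may differ):

```python
import logging
import sys

def configure_verbose_logging(enabled: bool) -> logging.Logger:
    """Sketch of verbose mode: DEBUG-level logs in the
    'YYYY-MM-DD HH:MM:SS - geai - LEVEL - message' format, written to
    stderr so they do not interfere with command output on stdout."""
    logger = logging.getLogger("geai")
    handler = logging.StreamHandler(sys.stderr)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s - %(name)s - %(levelname)s - %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S",
    ))
    logger.addHandler(handler)
    logger.setLevel(logging.DEBUG if enabled else logging.WARNING)
    return logger
```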
- Enhanced CLI Error Handling & Validation
- ValidationError exception with structured error context:
- Field-specific validation errors with attributes: field, expected, received, example
- Formatted multi-line error output
- Enhanced validators:
- Integer, float, boolean, JSON, and URL validation with clear error messages and examples
- Improved error message formatting:
- ERROR [Type]: message
- Includes actionable suggestions and preserves exit codes
- UnknownArgumentError improvements:
- Typed attributes: arg, available_commands, available_options
- Enhanced CLI documentation:
- Error Handling section with examples and error types
- Extended test coverage:
- 133 or more test commands added to Docker CLI test suite
- Validation tests for all parameter types
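The structured exception described above could look like the sketch below. The attribute names (field, expected, received, example) and the "ERROR [Type]: message" shape come from the notes; the exact formatting is an assumption.

```python
class ValidationError(Exception):
    """Sketch of a field-specific validation error carrying structured
    context plus a formatted multi-line message for CLI output."""
    def __init__(self, field: str, expected: str, received: str, example: str):
        self.field = field
        self.expected = expected
        self.received = received
        self.example = example
        super().__init__(
            f"ERROR [ValidationError]: invalid value for '{field}'\n"
            f"  expected: {expected}\n"
            f"  received: {received}\n"
            f"  example:  {example}"
        )
```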
- Bug fixes and stability improvements
- Fixed evaluation tests
- Fixed Docker CLI tests
- Fixed AILabClient credential passing to AdminClient for token validation
- Corrected test signatures and mocks (removed old project_id parameters)
- Fixed AI Lab tests, processes tests, and migration tests
- Improved file response handling in core files module
- Fixed CLI help examples formatting
- Corrected imports in documentation examples
- Enhanced informative messages for failed embedding generation
- Fixed organization and project message validation
- Resolved continuous development pipeline issues
- Fixed configuration loop issue
- Version requirements
- Python 3.13 is the recommended version
- Python 3.10 or higher is the minimum supported version
- Workspace
- Improved Run behavior: new conversations now open in the current browser tab to keep navigation focused.
- Improved chat safety by proactively blocking input when:
- pluginExternalExecutionPermissions = none
- pluginStationStatus = deleted
- You see a banner, and you can still read history.
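The two blocking conditions above amount to a simple predicate. A sketch under the stated conditions (parameter names are paraphrased from pluginExternalExecutionPermissions and pluginStationStatus):

```python
def chat_input_blocked(external_execution_permissions: str, station_status: str) -> bool:
    """Input is disabled (read-only history plus a banner) when external
    execution is not permitted or the plugin was deleted from Station."""
    return external_execution_permissions == "none" or station_status == "deleted"
```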
- Improved file attachments to respect each agent's configuration, including supported types and limits returned by the Conversation APIs.
- Fixed an error when unpinning a conversation with the chat closed.
- Fixed multiple UI inconsistencies and readability issues, including Mandarin character rendering in Legacy Workspace and Light Mode menu visibility.
- Console
- Improved UI customization centralization so you configure branding in UIConfiguration from the Console instead of S3.
- Improved Models module migrated to React for a more consistent Console experience.
- Fixed API Tokens creation errors so you can create and manage tokens without interruption.
- Fixed popup windows sizing for project and role creation dialogs.
- Fixed Roles pagination and broken pagination controls across lists.
- Deprecated creation of API and Chat Assistants from the Console; related buttons are now disabled.
- Integration/Agents
- Improved Integration naming and renaming controls for private integrations to keep tool names consistent and conflict-free.
- Improved import validation to prevent tool names exceeding the 64-character limit by constraining the Integration name prefix.
- Fixed tools not being included in LLM requests for some integrations, restoring expected tool execution from Lab and runtime.
- Fixed publication flow so optional Integration parameters are no longer forced when publishing an agent.
- Fixed SharePoint tool to honor parameter overrides at runtime.
- Fixed imported Integration security schemes so agents include the expected auth configuration after import.
- Fixed migration artifacts setting proxy servers or tools to public; scopes are corrected to private unless explicitly set as public.
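The 64-character tool-name limit mentioned above implies a maximum length for the Integration-name prefix. A sketch, assuming the full name is prefix + separator + tool name (the composition rule and separator are assumptions for illustration):

```python
TOOL_NAME_LIMIT = 64  # documented limit on imported tool names

def max_prefix_length(tool_name: str, separator: str = "_") -> int:
    """How long the Integration-name prefix can be before the combined
    tool name would exceed the 64-character cap."""
    return max(0, TOOL_NAME_LIMIT - len(separator) - len(tool_name))
```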
- Flows
- Improved encryption for channel Integration configurations by migrating stored secrets to a stronger, compliant method without downtime.
- Improved language handling so the Workspace sends the correct Accept-Language header for Flows.
- Fixed duplicated messages in Conversation History under high load by hardening the delivery pipeline against Firehose retries.
- Fixed language selection not shown in Create Flow so you can create new flows normally.
- RAG
- Improved base image hardening and dependency updates to prepare the move to Node.js 24 LTS.
- Security: addressed Trivy findings across the RAG image to reduce CVE exposure.
- Fixed OmniParser content extraction so parts keep their original order and types (Title, Table, Image, and others) instead of flattening to plain text.
- Fixed OmniParser uploads to handle filenames with accents and special characters without throwing InvalidPathException.
- Fixed intermittent list index out of range errors on document processing.
- Security: addressed Trivy findings in the omni-parser image.
- Lab
- Fixed backend validation to block unauthorized Provider or Model updates via the Agents PUT endpoint, keeping UI and API enforcement consistent.
- Fixed Iris to respect character limits across agent name, role, purpose, background knowledge, and guidelines.
- Security
- Improved overall platform security by addressing identified vulnerabilities and strengthening protections across services and components.
- Applied fixes to prevent unauthorized access, data exposure, and potential denial-of-service scenarios.
- Remediated reported vulnerabilities and reinforced secure handling of inputs, processing, and internal operations.
- Addressed security findings across platform components to reduce risk and improve compliance.