Version 2026-01 was officially released on Globant Enterprise AI on March 1, 2026, and includes additional updates delivered in Hot Fix 1 (March 10).
Below are the most important fixes and features introduced in this version and its Hot Fix.
- Security
- Fixed vulnerabilities reported in the Teamstudio-Worker container image so you benefit from a reduced attack surface.
- Updated base image and dependencies to align with current scanning requirements.
- Fixed high-severity Java-related vulnerabilities in API, Console, and LLM Server detected in pre-production so you meet security expectations before rollout.
- Fixed findings for the omni-parser image so you comply with current security scan requirements.
- Improved container hardening across services so you benefit from scan-based remediation, refreshed base images, and dependency updates that strengthen your security posture.
- RAG
- Improved OmniParser document processing reliability so you consistently receive complete section extraction, including content derived from images.
- Fixed a runtime error in OmniParser when calling /v1/omni-parser/process (“list index out of range”) so you can process affected documents without failures.
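As a hedged illustration of calling the endpoint above: only the path /v1/omni-parser/process comes from these notes; the host, request fields, and helper name below are assumptions for illustration.

```python
import json
import urllib.request

# Hypothetical sketch: the "documentUrl" body field and BASE_URL are assumptions;
# only the endpoint path /v1/omni-parser/process is documented here.
BASE_URL = "https://api.example.com"  # placeholder host

def build_omniparser_request(document_url: str, api_token: str) -> urllib.request.Request:
    """Build (but do not send) a POST request to the OmniParser process endpoint."""
    payload = json.dumps({"documentUrl": document_url}).encode("utf-8")
    return urllib.request.Request(
        url=f"{BASE_URL}/v1/omni-parser/process",
        data=payload,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_token}",
        },
    )

req = build_omniparser_request("https://example.com/report.pdf", "TOKEN")
print(req.full_url)      # https://api.example.com/v1/omni-parser/process
print(req.get_method())  # POST
```

Building the request without sending it keeps the sketch self-contained; a real client would pass it to `urllib.request.urlopen` with its own error handling.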
- Flows
- Fixed authentication for the embedded Flow chatbot so you no longer see unnecessary login prompts or error pages in production.
- Console
- Fixed a validation error when assigning permissions to assistants from Roles > Permissions so you can complete permission assignments.
- Fixed layout and visualization regressions introduced in the 2026-01 release so you can navigate and operate the Console as expected.
- Lab
- Integrations now appear in the Side Menu, replacing the former Tools option.
- Integrations allow you to group Tools so they can share common parameters. For example, multiple Tools in the same group can use the same authentication key.
- An automatic migration converted standalone Tools (Tools created without grouping) into Integrations that group one or more Tools. Relevant parameters and credentials were automatically moved to the group level.
- Integrations support JSON definitions based on OpenAPI and MCP, allowing you to import OpenAPI 3.x, or connect to an MCP Remote Server.
- Integrations Dashboard improvements: show Tools in the Integrations Dashboard to improve consistency across The Lab; fix CSS in the Security Scheme, Integration Parameters, and Tool Parameters tabs; show built-in edit constraints; adjust labels, texts, and empty states; match The Lab data model to the Integrations API:
- Compose Tool names with the Integration name; list Integrations with filters; delete Integrations with cascade and soft-delete rules; limit the number of Tools per group; expand the detail level to return Tool and parameter values; publish all draft Tools of an Integration in bulk.
- Endpoints include full parameter hierarchy and allow inheritance where applicable.
- For more information about the automatic migration, see Migration from Tools to Integrations.
- New Credential Manager with list, create, and edit workflows (Name, Integration, Type, Status), showing Authentication Level and Type, and improved error propagation.
- Credential Manager allows exactly one active credential per integration in the API and The Lab, with clear rules and statuses for activation and deletion.
- clientId/clientSecret and fromSecret defaults are now treated as secrets (masked in The Lab and API), and values remain encrypted at rest.
- Tools improvements:
- New SharePoint Tools for reading, listing, and saving files with service account and OAuth support.
- Read files with dynamic folders and multi-auth.
- Save files (PDF/Word) to dynamic folders with multi-auth.
- List folders/files and read document content.
- New Email Tool capabilities to build dynamic HTML bodies, subjects, recipients, and attach files generated by Agents.
- New Google Drive Integration action to create PDFs (headings, lists, tables, images).
- New default API Key fallback for Web Search Tools when no Credential is set (env var CREDENTIAL__APIKEY).
- New Agent publication checks validate that Integration parameters and credentials are complete before publishing.
- Agents and Iris improvements:
- Iris now includes Agent presentation (intro, description, starters, features) and avoids adding process-only tools; running older Agent revisions now works as expected.
- Files and multimodal playback improvements:
- Fixed file uploads stuck in antivirus scan and restored audio/video playback in the multimodal player; aligned allowed file categories across views.
- Fixed multiple Import/Export issues (Agent details mismatch after import, dark banner on light mode, integration import empty response, GetToolPlugin by name/ID).
- Fixed missing icons and a missing public Jira link in the dashboard.
- Fixed OAuth-based Tools failing to authenticate and access user documents.
- Fixed send-email tool PDF generation by selecting a supported pdf-engine.
- Security sanitized callApi logs to avoid leaking tokens/keys.
- Station
- New Favorites model and endpoints across Station and Middleware, allowing you to favorite/unfavorite published Solutions from any project and view them in a dedicated My Favorites section.
- Favorites list, mark/unmark from cards, pagination, and solution metadata (solutionId, externalVersion/Revision, permissions, subscription).
- New scoped Station Admin roles and login resolution (Organization Admin and Project Admin) propagated through Station and Middleware.
- Frontend and backend now resolve admin privileges for the current organization/project context; login/session endpoints include scoped data.
- Taxonomy administration improvements:
- Install default taxonomies from editable JSON via Console; migrate all organizations in bulk with safe purge semantics; resolve Translation Agent by Agent Name.
- Gallery and Workspace UX improvements:
- Infinite scroll with accurate counters and “You’re all caught up”; preserve filters after login redirect; fix “Explore Solutions” filter behavior; Show Lab Version.Revision in Detail; updated CODA card and logo; reposition Share button; add test-ids; persist selected language across sessions; add 404 page.
- Improved My Creations to list project Solutions by type, status, and latest Lab vs. Published/Unpublished revisions, with middleware proxy alignment.
- Fixed light mode visuals and multiple UI regressions (login background height, “Showing results for” hint, proceed/change project flow, caught-up message logic, CODA external image).
- Deprecated Assistant creation in Console in favor of Agents in the Lab (UI disabled and deprecation banner shown).
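The favorites metadata listed above might be consumed as sketched below. The JSON shape is an assumption assembled from the field names the notes mention (solutionId, externalVersion/Revision, permissions, subscription); the real payload may differ.

```python
# Hypothetical favorites entry: field names taken from the release notes,
# values and nesting are illustrative assumptions.
favorite = {
    "solutionId": "sol-123",
    "externalVersion": "2026-01",
    "externalRevision": 4,
    "permissions": ["view", "run"],
    "subscription": {"active": True},
}

def is_complete_favorite(item: dict) -> bool:
    """Check that a favorites entry carries all metadata fields named in the notes."""
    required = {"solutionId", "externalVersion", "externalRevision",
                "permissions", "subscription"}
    return required.issubset(item)

print(is_complete_favorite(favorite))  # True
```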
- Console
- Evolution from Assistants to Agents:
- Creation of Chat and API Assistants in the Console has been disabled.
- Creation through the API remains temporarily available but carries risks because newly created assistants will not be migrated and will be deprecated in the future.
- Going forward, it is recommended to use Agents.
- Data Analyst Assistant 2.0 has been discontinued.
- For more information, see Evolution from Assistants to Agents.
- Model Configuration Controls: Organizations can now define which LLMs are enabled, improving governance and cost management.
- New UI management for organization-approved LLM Providers/Models, including bulk add/remove actions and Assistant editors that filter only approved options.
- New Usage Dashboard Pivot with Month, Provider, Model, Cost, Requests, Total Tokens and KPIs across project dashboards.
- Improved file uploads in Console after image hardening and fixed Prompt Files URL formatting (pretty route).
- User-Id is now stored for Agentic Process executions in The Lab > Jobs. The Owner field in The Lab > Jobs > Subject Details View (General tab) now identifies the user who initiated the execution, improving tracking and auditing.
- Fixed CSV export security permission issues across multiple grids.
- Security blocked unsupported HTTP verbs (PUT/DELETE) in Backoffice endpoints.
- Workspace
- New feature flag to temporarily disable the new Run flow and re-enable the legacy Run in this release, while keeping navigation in the same tab.
- Improved deep-linked navigation by restoring all filters and query parameters after login.
- Improved language persistence so the selected locale survives logout/login.
- New and improved Pinned/History modules and API responses (pinId, grouping, extended Station fields).
- API
- New Integrations API to register, retrieve, update, and delete Integrations, including security scheme management and import/export operations.
- New Organization scope for Public Providers so only models explicitly allowed to your organization appear across APIs (LLM API, Chat API).
- New Organization API endpoints to bulk Delete model overrides by provider or reset all overrides for an organization.
- New Access Control API endpoint to remove a user from a Project.
- New Terms & Conditions APIs to check and register acceptance per organization/installation.
- New File API support for uploads coming from Chat (up to 1 GB, with type/size validation).
- Improved error transparency for LiteLLM errors and authorization failures.
- Return descriptive error bodies for Chat/LLM calls instead of “LiteLLM General Error” when safe to display.
- Return meaningful 401/403 JSON errors on GET /accessControl/organization/plugin-runtime-policies.
- Improved stability by separating HTTP timeouts (connect/socket/pool) to avoid hanging connections.
- Improved Usage Limits API and role checks to avoid 404 “Project not found” for Provisioning_Services role.
- Fixed Chat API to support POST only (harden HTTP methods).
- New MCP runtime to execute Tools over remote MCP with user-based authentication.
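The 1 GB Chat upload limit with type/size validation (see the File API item above) could be mirrored by a client-side pre-check like this sketch. The 1 GB cap comes from the notes; the allowed-type list and function name are illustrative assumptions.

```python
MAX_UPLOAD_BYTES = 1 * 1024 ** 3  # 1 GB cap from the release notes
# Hypothetical allow-list for illustration; the real File API may accept more types.
ALLOWED_TYPES = {"application/pdf", "image/png", "image/jpeg", "text/plain"}

def validate_chat_upload(size_bytes: int, content_type: str) -> None:
    """Raise ValueError when an upload would fail the File API's validation."""
    if size_bytes > MAX_UPLOAD_BYTES:
        raise ValueError(f"file too large: {size_bytes} bytes exceeds the 1 GB limit")
    if content_type not in ALLOWED_TYPES:
        raise ValueError(f"unsupported content type: {content_type}")

validate_chat_upload(10_000_000, "application/pdf")  # passes silently
```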
- Flows
- Flows Conversation History will be deprecated and replaced by a new platform section that displays these records.
- New OAuth login support for GEAI-managed channels (Web Chat), including token validation and backend permission checks; the Chat API now forwards the GAM token to Flows via x-saia-access-token.
- Improved Spring Boot and Keycloak dependencies to current supported versions for security hardening and compatibility.
- Spring Boot upgraded to 3.x.
- Keycloak SDK upgraded to 25.0.5.
- Improved database portability by migrating Flows storage from MongoDB to CosmosDB for managed-service readiness.
- Improved Content Security Policy configuration using an environment variable for custom deployments.
- Fixed Slack interactive button events not handled when translating messages to Slack.
- Fixed partial responses not shown in Web Chat after a second message.
- Security enforced OAuth flow guardrails and error handling for unsupported channels.
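The GAM-token forwarding from the Chat API to Flows described above can be sketched as a header builder. The x-saia-access-token header name comes from the release notes; the function and token source are illustrative assumptions.

```python
def build_flows_headers(gam_token: str) -> dict[str, str]:
    """Headers the Chat API would attach when forwarding a request to Flows.

    Hypothetical helper: only the x-saia-access-token header name is documented.
    """
    return {
        "Content-Type": "application/json",
        "x-saia-access-token": gam_token,
    }

headers = build_flows_headers("gam-abc123")
print(headers["x-saia-access-token"])  # gam-abc123
```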
- RAG
- New LLM ingestion strategy for omni‑parser to process entire pages with the configured LLM when hi_res OCR yields misreads.
- Improved multimodal embeddings by honoring chat.ingestion.multimodalBatchSize for providers that require single‑item batching.
- Fixed PPTX/XLSX ingestion rejections from Console uploads.
- Security removed sensitive payload elements from omni‑parser/RAG logs and addressed Trivy findings.
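The multimodal batching behavior above can be sketched as a chunking helper. The setting name chat.ingestion.multimodalBatchSize comes from the release notes; the helper itself is an illustrative assumption.

```python
def batch_items(items: list, batch_size: int) -> list[list]:
    """Split items into batches of at most batch_size.

    batch_size=1 matches providers that require single-item batching, as read
    from a setting like chat.ingestion.multimodalBatchSize (name from the notes).
    """
    if batch_size < 1:
        raise ValueError("batch_size must be >= 1")
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

print(batch_items(["a", "b", "c"], 1))  # [['a'], ['b'], ['c']]
print(batch_items(["a", "b", "c"], 2))  # [['a', 'b'], ['c']]
```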
- LLMs
- New Anthropic Claude Opus 4.6 and Claude Sonnet 4.6 across providers: Anthropic's flagship models, optimized for advanced coding and agentic, multi-step workflows with tool use. Available now in production via Anthropic, and coming soon via AWS Bedrock and Google Cloud Vertex AI:
- anthropic/claude-opus-4-6 — Tool use, streaming, multimodal.
- vertex_ai/claude-opus-4-6; awsbedrock/us.anthropic.claude-opus-4-6-v1 (+fallback to Vertex AI).
- anthropic/claude-sonnet-4-6 and cross‑provider fallbacks to Vertex AI.
- New Google Gemini models. Integration of Gemini 3.1 Pro Preview, Google's latest reasoning model featuring a 2x+ improvement in reasoning performance over Gemini 3 Pro, enhanced token efficiency, and optimized agentic multi-step workflows with precise tool use. Available in preview via Google Cloud Vertex AI:
- vertex_ai/gemini-3.1-pro-preview (multimodal, streaming, tool use).
- gemini-3-flash-preview (Vertex AI + Gemini API).
- vertex_ai/gemini-3-pro-preview.
- Integration of GPT-5.2-Codex, an upgraded GPT-5.2 variant optimized for agentic coding tasks, available via the Chat API or the Responses API:
- openai/gpt-5.3-codex.
- openai/gpt-5.1-codex, gpt-5.1-codex-mini, and gpt-5.1-codex-max.
- Deprecated codex-mini-latest migrated to openai/gpt-5.1-codex-mini. For more details on the migration, please refer to Deprecated Models.
- New Image generation models:
- openai/gpt-image-1.5 and gpt-image-1-mini.
- New providers/models:
- vertex_ai/zai-org-glm-5-maas; OpenRouter GLM‑4.7; Nvidia DGX Nemotron‑49B‑v1_5; OpenRouter MiniMax M2.1; OpenRouter DeepSeek V3.2 family; Globant DGX GLM‑4.6.
- New Azure AI Foundry additions including gpt‑5.x family and Model Router support for dynamic model selection.
- Improved Bedrock - Vertex fallbacks for Claude models to mitigate rate limits and account restrictions.
- Fixed LiteLLM container image issues (remove test keys, address missing vertexai import, reduce CVEs) and restored OpenAI gpt‑5 in remote module for new installs.
- New model optimized for coding and agent-based workflows:
- openrouter/minimax/minimax-m2.1
- New Qwen3-VL Models added:
- openrouter/qwen3-vl-235b-a22b-instruct
- openrouter/qwen3-vl-235b-a22b-thinking
- openrouter/qwen3-vl-30b-a3b-instruct
- openrouter/qwen3-vl-30b-a3b-thinking
- openrouter/qwen3-vl-8b-instruct
- Fixed LiteLLM General Error issues in Assistants/Chat with clearer messages and recommended configuration (e.g., token limits, alternative models).
- Security
- Security hardened container images across API, Console, Lab, RAG/omni‑parser, and LiteLLM (non‑root execution, removed shells/tools, labeled builds, addressed CVEs).
- Security improved Chat/LLM error redaction while preserving actionable messages, removed sensitive data from logs, and masked fromSecret defaults in APIs/UI.
- Security restricted Chat API to POST, tightened method handling in Console, and fixed multiple OWASP/SBOM findings.
- Security enforced organization-level LLM enablement at runtime (direct calls, Agents, Assistants).
- Known Issues
- Some third‑party models may present provider‑side deprecations or quota constraints. Fallback models are configured where possible. If a model fails due to provider limitations, select an approved alternative from your organization’s allowed list.