Version 2025-07 was officially released on August 4, 2025.
Below are the most important fixes and features introduced in this version.
- A2A (Agent-to-Agent) Protocol Support for Enhanced Integration and Extensibility
- Globant Enterprise AI now supports the A2A (Agent-to-Agent) protocol, enabling seamless integration of Agents defined in other frameworks. With this new feature, you can import external Agents and use them as Tools within Agents created in The Lab. This powerful capability significantly enhances the integration and extensibility of Globant Enterprise AI, allowing organizations to leverage existing investments, connect diverse Agent ecosystems, and build more sophisticated solutions by combining Agents across platforms.
- All Agents Automatically Exposed via A2A Protocol
- All Agents defined in The Lab are now automatically exposed through the A2A protocol, with no additional configuration required. Each Agent is published with an A2A-compliant API, and its capabilities and skills are described in an AgentCard format. The AgentCard is available at a dedicated endpoint, following the A2A standard:
- <GEAI_API_URL>/a2a/<agent-id-or-name>/.well-known/agent.json
- This enhancement allows third-party systems that support A2A to seamlessly discover and interact with Globant Enterprise AI Agents. For more details on the A2A protocol and AgentCard specification, see the official A2A documentation.
- More information: Importing Tools using MCP and A2A Servers
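Since every Agent publishes its AgentCard at the well-known path shown above, discovery from a third-party system boils down to building that URL and fetching the JSON. The sketch below assumes a placeholder base URL and agent name; only the path shape comes from the endpoint format documented here.

```python
import json
from urllib.request import urlopen

def agent_card_url(base_url: str, agent: str) -> str:
    """Build the well-known AgentCard URL for an Agent exposed via A2A."""
    return f"{base_url.rstrip('/')}/a2a/{agent}/.well-known/agent.json"

def fetch_agent_card(base_url: str, agent: str) -> dict:
    """Download and parse the AgentCard (the A2A discovery document)."""
    with urlopen(agent_card_url(base_url, agent)) as resp:
        return json.load(resp)

# Example with a placeholder host and agent name:
url = agent_card_url("https://api.example.com", "my-agent")
# → https://api.example.com/a2a/my-agent/.well-known/agent.json
```

The returned document describes the Agent's capabilities and skills in the AgentCard format, which an A2A-aware client can then use to decide how to invoke the Agent.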
- Workspace
- Shareable Chat Links
- Universal File Upload Compatibility
- Consumers can now upload previously unsupported file formats—such as .doc, .docx, .odt, .rtf, .ppt, and .pptx—directly in the chat interface of Assistants. Even if the selected LLM does not natively support these formats, the platform will automatically convert the files (e.g., to PDF or plain text) at the server level before processing. This enhancement ensures broader file compatibility across both multimodal and non-multimodal models, streamlining interactions and improving consumer experience.
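The conversion step described above can be pictured as a dispatch on file extension: natively supported formats pass through, the newly supported office formats are converted server-side, and everything else is rejected. The extension sets and target formats below are illustrative only, not the platform's actual conversion table.

```python
from pathlib import Path

# Illustrative mapping: office formats the LLM may not accept natively,
# converted server-side (e.g., to PDF) before the model sees them.
CONVERT_TO = {
    ".doc": "pdf", ".docx": "pdf", ".odt": "pdf", ".rtf": "pdf",
    ".ppt": "pdf", ".pptx": "pdf",
}
NATIVE = {".pdf", ".txt", ".png", ".jpg"}  # formats passed through as-is

def plan_upload(filename: str) -> str:
    """Decide what happens to an uploaded file before the LLM call."""
    ext = Path(filename).suffix.lower()
    if ext in NATIVE:
        return "pass-through"
    if ext in CONVERT_TO:
        return f"convert to {CONVERT_TO[ext]}"
    return "reject"

print(plan_upload("report.docx"))  # convert to pdf
print(plan_upload("notes.txt"))    # pass-through
```

Because the conversion happens at the server level, the same upload flow works whether the selected LLM is multimodal or not.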
- Lab Improvements
- Agent Export and Import Options
- New export and import options have been introduced in The Lab. These options let you easily share Agent definitions and their associated Tools with others, even across different Projects, enabling seamless collaboration and reuse of Agent configurations within Globant Enterprise AI.
- Agent Execution Trace Debugging & Download
- A new feature available in The Lab enhances the Agent testing experience. You can now view detailed execution traces of Agents in a dedicated debug tab while testing. Additionally, there is a new option to download the complete execution log for further analysis or record-keeping.
- New Agent Configuration Parameter: maxRuns
- A new configuration parameter called Max Runs is now available in the AI and Tools Tab of an Agent. This setting defines the maximum number of autonomous iterations an Agent can perform before returning control to the consumer. Each iteration corresponds to a single LLM call, and the default value is set to 5. This allows fine-tuning the level of Agent autonomy based on the complexity and nature of the task.
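The Max Runs behavior can be sketched as a bounded loop: each pass is one LLM call, and control returns to the consumer either when the Agent finishes or when the cap is hit. The function names here are hypothetical stand-ins; only the iteration cap and its default of 5 come from the feature description.

```python
def run_agent(task, call_llm, max_runs: int = 5):
    """Iterate LLM calls until the Agent finishes or the cap is reached.

    `call_llm` stands in for one model invocation; it returns either
    ("continue", state) or ("done", answer). max_runs mirrors the
    default of 5 described above.
    """
    state = task
    for _ in range(max_runs):
        status, state = call_llm(state)
        if status == "done":
            return state
    return state  # cap reached: hand control back to the consumer

# Toy "LLM" that needs a few iterations to finish:
def toy_llm(n):
    return ("done", "answer") if n >= 3 else ("continue", n + 1)

print(run_agent(0, toy_llm))  # answer
```

Raising Max Runs suits complex multi-step tasks; lowering it keeps simple Agents from looping unnecessarily.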
- Iris: Agents as Tools Support
- A new feature has been added to Iris, enabling Agents created by Iris to use other Agents as Tools. With this update, Agents built with Iris can seamlessly integrate and leverage the capabilities of additional Agents, significantly enhancing their functionality and enabling more complex workflows. This allows Iris-created Agents to delegate tasks or access specialized skills from other Agents, making them more versatile.
- Tools
- Per-User Consent for GDrive Tool Access
- A new consent mechanism has been introduced for Tools that integrate with Google Drive. Before an Agent can access or manipulate a consumer's GDrive data, the consumer must explicitly grant permission. This per-user consent model ensures secure, transparent usage of third-party Tools, aligning with data privacy best practices and organizational compliance requirements. More information: Google Drive (User Consent) Integration
- Expanded Model Support for the Create Image Tool
- The create_image Tool, which can be associated with Agents in The Lab, now supports a wider range of image generation models. Consumers can now generate images using the following models:
- openai/gpt-image-1
- openai/dall-e-3
- vertex_ai/imagen-3.0-generate-001
- vertex_ai/imagen-3.0-fast-generate-001
- vertex_ai/imagen-3.0-generate-002
- xai/grok-2-image-1212
- This expanded support gives consumers greater flexibility and more options for creating images tailored to their specific needs.
- New Public Tool: com.globant.geai.serpapi.google_search
- A new public Tool, com.globant.geai.serpapi.google_search, has been added to The Lab. This web search Tool allows you to query across various Google engines, including Google, Google Maps, Google News, Google Images, Google Videos, and Google Scholar. You can specify which search engine to use in the Agent guidelines or directly in the chat. By default, the standard Google engine is used. This Tool expands the information retrieval capabilities of your Agents, enabling more dynamic and context-aware responses.
- New Public Tools: Firecrawl Web Scraper and Web Search
- Two new public Tools from Firecrawl have been added:
- com.globant.geai.firecrawl.web_scraper
- This Tool allows Agents to fetch content from any web page. It returns page content in multiple formats, including markdown, HTML, links, and screenshots. You can specify one or more formats to retrieve (e.g., markdown, links, screenshot). Additionally, this Tool supports fetching PDF documents from the web.
- com.globant.geai.firecrawl.web_search
- This Tool enables Agents to search web pages and view short snippets from the results. It can be used in combination with the web scraper Tool to extract the full content of selected web pages.
- For detailed configuration steps, see Firecrawl Integration.
- These additions provide Agents with enhanced web browsing and data extraction capabilities, broadening the range of information accessible within Globant Enterprise AI.
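An invocation of the web scraper Tool can be pictured as a payload carrying the target URL and the requested output formats. The format set and validation below are illustrative, inferred from the description above, and are not the Tool's actual contract.

```python
# Formats the web_scraper Tool is described as returning; this set and
# the validation logic are illustrative, not the Tool's real schema.
SUPPORTED_FORMATS = {"markdown", "html", "links", "screenshot"}

def build_scrape_request(url: str, formats=("markdown",)) -> dict:
    """Assemble a hypothetical web_scraper invocation payload."""
    unknown = set(formats) - SUPPORTED_FORMATS
    if unknown:
        raise ValueError(f"unsupported formats: {sorted(unknown)}")
    return {"url": url, "formats": list(formats)}

print(build_scrape_request("https://example.com", ["markdown", "links"]))
```

In a typical pipeline, web_search first narrows down candidate pages from snippets, and web_scraper then fetches the full content of the selected ones.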
- LLM Usage Limit Alerts and Notifications
- A new feature has been added to the Agents and Backoffice - Console to help you manage your LLM usage more effectively. You will now receive warning notifications when your LLM consumption exceeds the configurable alert threshold (soft limit), which can be set per project or as a general cap at the organization level. In addition, if a project or organization runs out of available balance to continue using LLMs, an error notification will be displayed. These alerts enhance visibility and control over LLM usage, helping consumers avoid unexpected interruptions.
- LLMs
- New OpenAI models already available through the Responses API and coming soon through the Chat API:
- o3-pro: Part of OpenAI’s “o” series, this model is trained with reinforcement learning to perform complex reasoning and deliver more accurate answers. o3-pro leverages increased compute to “think before it answers,” consistently providing higher-quality responses.
- codex-mini-latest: This is a fine-tuned version of o4-mini, specifically optimized for use in Codex CLI.
- New Anthropic – Web Search Tool: The web search Tool gives Claude direct access to real-time web content, enabling it to answer questions using up-to-date information beyond its training cutoff. Claude automatically cites sources from search results as part of its response. More details on usage and supported models: How to use LLMs with built-in web search tools via API.
- Claude 4: Anthropic’s latest generation of models, featuring Claude Opus 4 for advanced reasoning and coding, and Claude Sonnet 4 for high-performance, efficient task execution, is now available.
- New Providers Coming to Production: xAI (Grok models) and Cohere.
- Integration of Azure AI Foundry: Azure AI Foundry is being introduced as an LLM provider to leverage its unified platform for building, customizing, and deploying AI applications. This integration provides access to a diverse catalog of over 11,000 models from providers such as OpenAI, xAI, Microsoft, DeepSeek, Meta, Hugging Face, and Cohere, along with robust Tools for responsible AI development and seamless integration with the Azure ecosystem.
- Imagen 4: The Imagen 4 family of models is now available for text-to-image generation through the Images API via Vertex AI. This integration brings Google’s advanced Imagen 4 models—including Standard, Ultra, and Fast variants—for high-quality, brand-consistent image creation with support for multiple languages.
- Model Lifecycle Updates:
- GPT-4.5 Preview Deprecation: Access to GPT-4.5 Preview via the API will end on July 14, 2025. To avoid disruption, this model is being migrated to GPT-4.1.
- Vertex AI Gemini 2.5 Updates: New GA endpoints for Gemini 2.5 Flash (gemini-2.5-flash) and Gemini 2.5 Pro (gemini-2.5-pro) are now available (effective June 17, 2025). Existing preview endpoints for Gemini 2.5 Flash and Pro will be migrated to these new GA endpoints.
- For more information, please refer to Deprecated Models.
- Fixed a wrong timeout of 600 seconds when calling assistants. Calls now use the timeout configured under the HttpTimeout parameter, which defaults to 120 seconds.
- Cohere support for embed-v4.0 embeddings.
- Flows
- Slack Mentions Support for Flows
- Globant Enterprise AI now supports mentions for Flows within Slack. Consumers can add a Flow to a Slack channel and invoke it directly by @-tagging the Flow. This enables seamless initiation and management of conversation threads with Flows straight from Slack. This integration streamlines collaboration and enhances productivity by allowing teams to interact with and trigger Flows without leaving their Slack workspace.
- RAG Revision 9
- New RAG Integration for use from Agents.
- New ingestion properties, also valid for the omni-parser API:
- New password parameter for processing password-protected PDF files.
- New chunkStrategy parameter to decide how to process tables and images (enabled by default using byLayoutType).
- New chunkSize and chunkOverlap parameters to override the default assistant configuration.
- The Requests log section details the parameters used for ingestion.
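The new ingestion properties can be sketched as an options object where omitted values fall back to the assistant's defaults. The parameter names (password, chunkStrategy, chunkSize, chunkOverlap) and the byLayoutType default come from the list above; the helper itself is only an illustrative sketch, not the ingestion API.

```python
def ingestion_options(password=None, chunk_strategy="byLayoutType",
                      chunk_size=None, chunk_overlap=None) -> dict:
    """Assemble the new ingestion properties; values left as None fall
    back to the assistant's default configuration."""
    opts = {"chunkStrategy": chunk_strategy}
    if password is not None:
        opts["password"] = password        # for password-protected PDFs
    if chunk_size is not None:
        opts["chunkSize"] = chunk_size     # overrides assistant default
    if chunk_overlap is not None:
        opts["chunkOverlap"] = chunk_overlap
    return opts

print(ingestion_options(password="s3cret", chunk_size=1000, chunk_overlap=100))
```

The parameters actually used for each ingestion can be verified afterwards in the Requests log section.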
- New RAG Document API to better serve associated documents.
- New Multivalued Filter Operators when ingesting with multivalued metadata.
- New Assistants defaults
- Embeddings configuration updated to use cache by default.
- LLM configuration updated from gpt-4o-mini to gpt-4.1-mini.
- Ingestion vLLM usage updated from openai/gpt-4o to openai/gpt-4.1-mini, with minor updates to the associated prompts.
- Fixed an issue when handling the threadId (conversation) from the Workspace.
- Fixed an issue where the plugins API did not return the StartPage section.
- Fixed a PayloadTooLargeError error when using a Prompt exceeding 12k tokens.
- Performance improvements when processing embeddings associated with xlsx/csv files.
- Performance improvements when querying Pinecone Vector Store Provider.
- Python SDK Updates and Enhancements
- The Python SDK has been updated with several new features, improvements, and changes to streamline development and agent management. These enhancements make the Python SDK more robust, user-friendly, and supportive of advanced agent development and management workflows.
- Added
- Save and Restore Chat Sessions: You can now save and restore chat sessions using JSON files, making it easier to maintain conversation history.
- Switch Agents in Chat GUI: The chat user interface now allows seamless switching between Agents within an active session.
- API Status Command: The GEAI CLI now includes a status command to check the health of your PyGEA instances.
- Reasoning Strategy in Agent Definition: Agent definitions now support specifying a reasoning strategy for more advanced customization.
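The session save/restore feature works with JSON files. The exact SDK calls are not shown here, so the round-trip below is sketched with the standard library only: a chat history as a list of role/content messages, serialized to disk and reloaded.

```python
import json
from pathlib import Path

def save_session(messages: list, path: str) -> None:
    """Persist a chat history (list of role/content messages) as JSON."""
    Path(path).write_text(json.dumps(messages, indent=2), encoding="utf-8")

def restore_session(path: str) -> list:
    """Reload a previously saved chat history."""
    return json.loads(Path(path).read_text(encoding="utf-8"))

history = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi! How can I help?"},
]
save_session(history, "session.json")
assert restore_session("session.json") == history
```

Storing sessions as plain JSON also makes them easy to inspect, diff, or migrate between environments.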
- Changed
- Man Pages Installation: The script for installing man pages has been updated to support system-wide installation with the --system flag.
- Comprehensive Help in Man Pages: All help texts are now included in the man pages for the GEAI CLI.
- Simplified Lab Project Selection: The Lab no longer requires explicit project IDs; it now retrieves the project automatically using the provided API key and base URL.