Vast Data has announced a new AI application environment called the Vast AgentEngine, built into its data management framework centered on the Vast Data Store storage system.
The move deepens Vast's push into data management, with a focus on AI and vector data, extending what the company already offers. The AgentEngine, set to launch in the second half of 2025, lets customers deploy and manage AI agents. Vast plans to release a new pre-configured agent each month, and users can customize these agents to fit their specific needs: for instance, they can build an agent, configure it with tools such as directories, S3 buckets, and search functions, and apply reasoning models through AI frameworks.
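The configuration workflow described above, an agent wired up with tools and a reasoning model, might be sketched roughly as follows. This is a hypothetical illustration only: the `Agent` and `Tool` classes, their fields, and the tool kinds are invented for the example and are not Vast's actual AgentEngine API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an agent definition; names and structure
# are illustrative, not Vast's real interface.
@dataclass
class Tool:
    name: str
    kind: str          # e.g. "directory", "s3_bucket", "search"
    config: dict = field(default_factory=dict)

@dataclass
class Agent:
    name: str
    reasoning_model: str
    tools: list = field(default_factory=list)

    def add_tool(self, tool: Tool) -> "Agent":
        """Attach a data source or capability the agent can call."""
        self.tools.append(tool)
        return self

# An agent along the lines of the article's example: monitor media,
# then hand findings to a reasoning model for summarization.
agent = Agent(name="competitor-monitor", reasoning_model="some-reasoning-model")
agent.add_tool(Tool("media-archive", "directory", {"path": "/mnt/media"}))
agent.add_tool(Tool("clips", "s3_bucket", {"bucket": "broadcast-clips"}))
agent.add_tool(Tool("lookup", "search", {"index": "video-embeddings"}))

print([t.kind for t in agent.tools])  # → ['directory', 's3_bucket', 'search']
```

A pre-configured monthly agent would presumably ship with a ready-made version of such a definition, which users then adjust by swapping tools and models.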
The foundation of this setup is the Vast Data Store, which offers petabyte-scale storage built on QLC flash with performance optimization. On top of that sits the Vast DataBase, which supports SQL and Python access, alongside the Vast DataEngine, a containerized, Python-driven layer for scalable computing over storage. The whole structure operates within what Vast calls the DataSpace, which can run in the cloud or on-premises.
Vast co-founder Jeff Denworth highlighted how these agents can streamline tedious tasks for companies. He pointed to a project with a UK broadcast studio that needed a video summarization tool, allowing producers to keep tabs on competitors' channels without constantly watching them.
“To pull this off, you need robust reasoning and video language models to decide what’s worth summarizing,” Denworth explained.
Why a monthly release cadence for agents? Denworth acknowledged that while different customers have unique needs, the base functionality of these agents is enough to get started. "We want to drive adoption. Our development environment uses no-code/low-code tools, making it accessible. If we can give a solution that's almost there, it kickstarts the conversation."
Vast has come a long way from its origins as a storage provider, delving into data management and now AI application building. Denworth emphasized that storage has always been just the beginning of their journey.
“We’ve integrated orchestration and scheduling into our system from the start. We focused on building a strong foundation so we could layer on capabilities later,” he said.
He pointed out that many businesses stumble when implementing AI due to data architecture issues. For example, traditional vector databases aren’t set up for transactions. Without a transactional vector database, companies analyzing real-time data, like video feeds, can’t effectively use that data with generative or agentic AI models.
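The transactional point can be made concrete with a toy example. The sketch below uses SQLite purely to illustrate the atomicity property the article says conventional vector databases lack; it is not Vast's DataBase, and real vector stores index embeddings far more efficiently than a raw blob column.

```python
import sqlite3
import struct

def to_blob(vec):
    """Pack a float vector into bytes for storage (toy encoding)."""
    return struct.pack(f"{len(vec)}f", *vec)

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE frames (id INTEGER PRIMARY KEY, ts REAL, embedding BLOB)"
)

try:
    # Using the connection as a context manager makes this one
    # transaction: both inserts commit together, or neither does.
    with conn:
        conn.execute("INSERT INTO frames VALUES (?, ?, ?)",
                     (1, 0.00, to_blob([0.1, 0.2])))
        # Duplicate primary key -> IntegrityError mid-transaction.
        conn.execute("INSERT INTO frames VALUES (?, ?, ?)",
                     (1, 0.04, to_blob([0.3, 0.4])))
except sqlite3.IntegrityError:
    pass  # the whole transaction was rolled back

# No partial write survived: a reader never sees a half-ingested batch.
count = conn.execute("SELECT COUNT(*) FROM frames").fetchone()[0]
print(count)  # → 0
```

Without that guarantee, an ingest failure on a live video feed could leave orphaned embeddings behind, which is exactly the kind of inconsistent state that trips up downstream generative or agentic AI models.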