DataStax has unveiled its new AI platform, developed with Nvidia, aiming to accelerate enterprise AI development and streamline workflows.
DataStax has made a significant advancement in enterprise AI with the launch of the DataStax AI Platform, developed in collaboration with Nvidia. Announced today, the platform marks a crucial step in DataStax’s ongoing effort to evolve its data platform to support the growing needs of enterprise AI developers.
The new platform integrates several of DataStax’s existing technologies, most notably DataStax Astra, which is designed for cloud-native applications, and the Hyper-Converged Database (HCD) for self-managed deployments. Alongside these, the platform incorporates Langflow, a tool designed to streamline the creation of AI workflows, and Nvidia’s suite of enterprise AI components. These Nvidia technologies include NeMo Retriever, NeMo Guardrails, and NIM Agent Blueprints, all intended to accelerate and refine the development and deployment of AI models.
A key promise of the DataStax AI Platform is its ability to cut AI development time by up to 60% and execute AI workloads with speeds reportedly 19 times faster than existing solutions. Ed Anuff, the Chief Product Officer of DataStax, emphasised the importance of reducing the time it takes to bring AI projects to production, often referring to this delay as “development hell.”
Langflow, an integral component of the new platform, is a visual AI orchestration tool that allows developers to build AI workflows by dragging and dropping components, including various DataStax and Nvidia capabilities, thereby simplifying the creation of complex AI applications. Anuff highlighted Langflow’s ability to visualise and connect DataStax capabilities and Nvidia’s microservices, facilitating interactive and intuitive construction of AI workflows.
The platform supports the development of three main types of agentic AI, thanks to Langflow’s capabilities. Task-oriented agents can handle specific assignments like assembling a vacation package. Automation agents perform tasks autonomously, interacting with APIs to manage workflows. Multi-agent systems deconstruct complicated tasks into more manageable subtasks, overseen by specialised agents.
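The multi-agent pattern described above can be illustrated with a minimal sketch: an orchestrator breaks a task into subtasks and dispatches each to a specialised agent. All names and functions below are hypothetical stand-ins for illustration, not part of the Langflow or DataStax APIs.

```python
# Minimal sketch of the multi-agent pattern: an orchestrator splits a
# complex task into subtasks overseen by specialised agents.
# Agent names and functions are illustrative only.

def flight_agent(destination: str) -> str:
    # Specialised agent responsible for the flight subtask.
    return f"flight booked to {destination}"

def hotel_agent(destination: str) -> str:
    # Specialised agent responsible for the hotel subtask.
    return f"hotel reserved in {destination}"

def orchestrator(task: dict) -> list[str]:
    # Decompose the task and hand each subtask to its agent.
    subtask_agents = {
        "flight": flight_agent,
        "hotel": hotel_agent,
    }
    return [subtask_agents[name](task["destination"])
            for name in task["subtasks"]]

results = orchestrator({
    "destination": "Lisbon",
    "subtasks": ["flight", "hotel"],
})
print(results)
```

In a real deployment the agents would call external APIs or language models rather than return strings, but the decomposition structure is the same.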
The integration of Nvidia’s technologies with DataStax’s capabilities gives enterprise AI users several advantages. The Nvidia microservices make it easier for enterprise users to implement custom language models and embeddings, and Nvidia’s hardware and software features can be leveraged to optimise the performance of these models.
One notable addition is the guardrails feature, which filters unsafe content and outputs from models. Anuff noted that this feature, operating as a “sidecar model,” plays a crucial role in maintaining the safety of content entered by users or retrieved from databases. Furthermore, components like the NeMo Curator aid in the continual improvement of AI models by identifying additional data for fine-tuning.
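The “sidecar” idea is that a lightweight checker screens text before and after the main model runs. The following is a simplified sketch of that pattern, assuming a toy blocklist policy; it is not NeMo Guardrails’ actual API.

```python
# Illustrative sketch of the sidecar guardrail pattern: a separate,
# lightweight check screens both the user's input and the model's
# output. The blocklist policy here is a toy stand-in.

BLOCKLIST = {"password", "ssn"}  # hypothetical policy terms

def guardrail(text: str) -> bool:
    """Return True if the text passes the safety policy."""
    return not any(term in text.lower() for term in BLOCKLIST)

def main_model(prompt: str) -> str:
    # Stand-in for the actual LLM call.
    return f"answer to: {prompt}"

def guarded_call(prompt: str) -> str:
    if not guardrail(prompt):           # screen user input
        return "[input blocked by guardrail]"
    output = main_model(prompt)
    if not guardrail(output):           # screen model output
        return "[output blocked by guardrail]"
    return output
```

Production guardrails use trained classifier models rather than string matching, but the before-and-after screening flow is the core of the pattern.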
The integration of these technologies aims to enable enterprises to harness AI capabilities more swiftly and cost-effectively. While GPU computation offers enhanced speed, the platform also supports CPU workloads, allowing users to balance performance with cost savings where feasible. This flexibility lets enterprises allocate computational tasks to maximise efficiency and minimise expenditure.
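The performance-versus-cost trade-off might be expressed as a simple routing rule: latency-critical work goes to GPU, while batch work tolerates cheaper CPU execution. This is purely a hypothetical sketch; the platform’s actual scheduling logic has not been disclosed.

```python
# Hypothetical sketch of CPU/GPU cost-performance routing.
# The routing criteria below are assumptions for illustration.

def pick_device(job: dict) -> str:
    # Latency-critical or large jobs get the faster (costlier) GPU;
    # everything else runs on cheaper CPU capacity.
    if job.get("latency_critical") or job.get("size_gb", 0) > 10:
        return "gpu"
    return "cpu"

jobs = [
    {"name": "chat-inference", "latency_critical": True},
    {"name": "nightly-embedding-refresh", "size_gb": 2},
]
assignments = {j["name"]: pick_device(j) for j in jobs}
print(assignments)
```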
Overall, the DataStax AI Platform represents a significant leap forward in equipping enterprises with powerful, flexible, and user-friendly tools to accelerate their AI endeavours.
Source: Noah Wire Services