DataCentreNews UK - Specialist news for cloud & data centre decision-makers

Cloudera enhances AI capabilities with NVIDIA partnership in the cloud

Cloudera, the data company for trusted enterprise AI, has announced expanded support for NVIDIA's advanced technologies in both public and private clouds. The collaboration is intended to let customers build and deploy high-performance artificial intelligence (AI) applications more efficiently.

"GPU acceleration applies to all phases of the AI application lifecycle - from data pipelines for ingestion and curation, data preparation, model development and tuning, to inference and model serving," said Priyank Patel, Vice President of Product Management at Cloudera. "NVIDIA's leadership in AI computing perfectly complements Cloudera's leadership in data management, providing customers with a comprehensive solution to harness the power of GPUs across the entire AI lifecycle."

The expanded technology collaboration between Cloudera and NVIDIA brings GPU capabilities spanning multiple hardware generations to data engineering, machine learning, and AI in both public and private clouds.

Cloudera Machine Learning (CML) empowers enterprises to create personalised AI applications. The service unlocks the potential of open-source Large Language Models (LLMs) by letting enterprises ground responses in their own proprietary data, producing secure and contextually appropriate answers. CML now also supports the advanced NVIDIA H100 GPU in public clouds and in data centres, enabling faster insights and more efficient generative AI workloads.
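The pattern described here - grounding an open-source LLM in proprietary documents - can be sketched in plain Python. This is an illustrative example, not Cloudera's actual API: the retrieval function, scoring heuristic, and document names are all hypothetical stand-ins for what a production service would do at scale.

```python
# Illustrative sketch (not CML's API): retrieve proprietary documents
# relevant to a query and assemble a context-grounded prompt for an LLM.

def score(query: str, doc: str) -> int:
    """Toy relevance score: number of words the query and document share."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, documents: list, k: int = 2) -> list:
    """Return the k documents most relevant to the query."""
    return sorted(documents, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, documents: list) -> str:
    """Assemble a prompt that restricts the LLM to the retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n"
            f"Question: {query}")

# Hypothetical proprietary records standing in for an enterprise data asset.
docs = [
    "Invoice 4471 was paid on 2023-06-01.",
    "The data centre in Leeds hosts the GPU cluster.",
    "Quarterly revenue grew 12 percent.",
]
prompt = build_prompt("Where is the GPU cluster hosted?", docs)
print(prompt)
```

A real deployment would replace the word-overlap score with vector embeddings and send the assembled prompt to a hosted model, but the shape of the workflow - retrieve, ground, generate - is the same.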

"Organisations are looking to deploy a number of AI applications across a wide range of data sets," said Jack Gold, President of J.Gold Associates. "By providing their customers the ability to accelerate machine learning and inference by utilising the power of the latest generation of NVIDIA accelerators in cloud or hybrid cloud instances, Cloudera allows users of their data lakehouse and data engineering tools to reduce time-to-market and train models particular to their own data resources. This kind of capability is a significant differentiator for enterprises considering making LLMs a mission-critical part of their solution set."

Joe Ansaldi, Technical Branch Chief of the IRS Research Applied Analytics & Statistics Division (RAAS), commented on the need to make accurate, timely decisions with large volumes of data. He said, "The Cloudera and NVIDIA integration will empower us to use data-driven insights to power mission-critical use cases such as fraud detection. We are implementing this integration and are already witnessing more than 10 times speed improvements for our data engineering and data science workflows."

Beyond model training and serving, every segment of the AI lifecycle can be sped up with NVIDIA GPU acceleration. Cloudera Data Engineering (CDE) is another example. A data service for building reliable, production-ready data pipelines, CDE now integrates with the NVIDIA RAPIDS Accelerator for Apache Spark. This accelerates extract, transform, and load (ETL) workloads without the need for refactoring, proving up to 16x more efficient than standard CPUs.
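The "no refactoring" claim rests on the fact that the RAPIDS Accelerator is enabled through Spark configuration rather than code changes. A minimal sketch of such a configuration might look like the following; exact property values, jar locations, and resource settings vary by deployment:

```properties
# Load the RAPIDS Accelerator plugin for Apache Spark
spark.plugins=com.nvidia.spark.SQLPlugin
# Let the plugin handle supported SQL/DataFrame operations on the GPU
spark.rapids.sql.enabled=true
# Request one GPU per executor (Spark 3 resource scheduling)
spark.executor.resource.gpu.amount=1
```

With settings like these in place, existing Spark SQL and DataFrame jobs run unchanged, and supported operations are transparently dispatched to the GPU.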
