Artificial Intelligence | News | Insights | AiThority

Top 10 NVIDIA Updates

From the forefront of innovation to the frontiers of cutting-edge AI, NVIDIA keeps pushing the boundaries of technology, unveiling a flurry of releases in which each innovation seems more significant than the last. Let's take a look at the top ten NVIDIA updates that are changing the game.

Top 10 NVIDIA Updates for this Week

#1 Microsoft and NVIDIA Revolutionize Generative Technologies for Businesses Globally

As part of their ongoing partnership, Microsoft Corp. and NVIDIA unveiled new integrations at Monday's GTC that bring NVIDIA's generative AI and Omniverse™ technologies to Azure, Azure AI services, Fabric, and Microsoft 365.

AI inference in Microsoft Copilot for Microsoft 365 is powered by NVIDIA GPUs and NVIDIA Triton Inference Server™. Copilot for Microsoft 365, soon accessible via a dedicated physical key on Windows 11 PCs, combines large language models with confidential business data to deliver real-time, contextualized intelligence, helping users improve their efficiency, innovation, and expertise.

Azure AI is getting NVIDIA NIM™ inference microservices, which will speed up AI deployments. Part of the NVIDIA AI Enterprise software platform, also available on the Azure Marketplace, NIM provides cloud-native microservices for optimized inference on more than twenty common foundation models, including NVIDIA-built models that customers can sample at ai.nvidia.com. The microservices ship as prebuilt, run-anywhere containers powered by NVIDIA AI Enterprise inference technologies, including Triton Inference Server, TensorRT™, and TensorRT-LLM, helping developers shorten the time it takes to bring performance-optimized production AI applications to market.
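NIM's LLM microservices expose standard REST endpoints, so calling a deployed model looks like calling any HTTP API. As a rough sketch in plain Python (the endpoint URL and model name below are placeholders, not guaranteed identifiers for any particular deployment), a chat request body might be assembled like this:

```python
import json

# Sketch only: the endpoint URL and model name are illustrative placeholders,
# not guaranteed identifiers for any deployed NIM service.
NIM_ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 128) -> bytes:
    """Assemble an OpenAI-style chat completion request body as JSON bytes."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return json.dumps(payload).encode("utf-8")

body = build_chat_request("meta/llama3-8b-instruct",
                          "Summarize NVIDIA NIM in one line.")
print(json.loads(body)["model"])
```

An actual call would POST `body` to the running microservice with a `Content-Type: application/json` header, e.g. via `urllib.request`.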

#2 Databricks and NVIDIA Team Up to Supercharge Workloads

Databricks and NVIDIA have partnered to enhance data and AI workloads on the Databricks Data Intelligence Platform. The partnership expands on NVIDIA's participation in Databricks' Series I funding round. Databricks uses NVIDIA accelerated computing and software across the board for model deployment: Databricks Mosaic AI Model Serving relies on NVIDIA TensorRT-LLM software, which delivers a scalable, performant, and cost-effective solution with state-of-the-art performance. A TensorRT-LLM launch partner, Mosaic AI continues to work closely with NVIDIA on technical projects.

These tools let teams quickly set up the right NVIDIA infrastructure and maintain a consistent environment for all users. To support high-performance distributed and single-node training for machine learning workloads, Databricks works with NVIDIA Tensor Core GPUs across all three major clouds.


#3 SAP and NVIDIA Join Forces to Drive Global Innovation

With today's announcement of their expanded alliance, NVIDIA and SAP aim to help SAP's enterprise customers quickly leverage data and generative AI across SAP's cloud products and applications. Once models are ready for deployment in SAP cloud solutions, SAP intends to employ NVIDIA AI Enterprise software, specifically NVIDIA NIM inference microservices and NVIDIA NeMo Retriever™ microservices. With NVIDIA NIM, the enhanced SAP infrastructure can run inference faster and more efficiently. To boost accuracy and insights, SAP plans to add retrieval-augmented generation (RAG) capabilities using NVIDIA NeMo Retriever microservices, allowing generative AI applications to more securely access data running on SAP infrastructure. With RAG, customers can draw on both SAP data and third-party data.
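To make the RAG idea concrete: a retriever scores stored passages against the user's query and hands the best matches to the generative model as context. The toy sketch below uses plain-Python bag-of-words cosine similarity; it is not SAP's or NVIDIA's implementation (NeMo Retriever uses learned embeddings), just an illustration of the retrieval step:

```python
import math
from collections import Counter

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, documents: list, top_k: int = 1) -> list:
    """Return the top_k documents most similar to the query."""
    q = Counter(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: cosine_similarity(q, Counter(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

docs = [
    "invoice 1234 was matched to purchase order 9876",
    "employee onboarding checklist for new hires",
]
# The retrieved passage(s) would then be prepended to the LLM prompt as context.
print(retrieve("match this invoice to a purchase order", docs))
```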

SAP and NVIDIA are investigating more than 20 generative AI use cases to streamline and improve digital transformation. Generative AI features automate ERP with intelligent invoice matching in SAP S/4HANA Cloud, improve HR use cases in SAP SuccessFactors, and, in SAP Signavio, speed up the processing of new generative AI insights to optimize customer support and sharpen business-process recommendations.

#4 Oracle and NVIDIA collaborate to deliver accelerated computing and generative AI services

Oracle and NVIDIA today announced an expanded partnership to provide sovereign AI solutions to customers worldwide. Oracle's distributed cloud, AI infrastructure, and generative AI services, combined with NVIDIA's accelerated computing and generative AI software, enable governments and corporations to deploy AI factories. Integrating NVIDIA's full-stack AI platform with Oracle's Enterprise AI gives customers a cutting-edge solution that enhances digital sovereignty through greater control over operations, location, and security, and that can be deployed across Oracle Alloy, Oracle EU Sovereign Cloud, Oracle Government Cloud, and OCI Dedicated Region.

The NVIDIA GB200 Grace™ Blackwell Superchip will usher in a new age of computing. Supercharging AI training, data processing, and engineering design and simulation, GB200 delivers up to 30X faster real-time large language model (LLM) inference, 25X lower total cost of ownership (TCO), and 25X less energy use than the previous generation of GPUs. NVIDIA's Blackwell B200 Tensor Core GPUs specialize in high-performance computing (HPC), data analytics, and artificial intelligence (AI) workloads.

#5 Google Cloud and NVIDIA announced a deepened partnership to foster the machine learning (ML) community

Expanding on their earlier work to enhance the Gemma family of open models, Google will incorporate NVIDIA NIM inference microservices into its platform, letting developers train and deploy with their choice of tools and frameworks on an open, flexible platform. The companies also showcased support for JAX on NVIDIA GPUs and Vertex AI instances powered by NVIDIA H100 and L4 Tensor Core GPUs. By bringing JAX's benefits to NVIDIA GPUs, Google Cloud and NVIDIA have widened the ML community's access to large-scale LLM training.


JAX, a high-performance, compiler-oriented machine learning framework native to Python, is one of the most user-friendly and powerful frameworks for LLM training. AI practitioners can now use JAX with NVIDIA H100 GPUs via MaxText and the Accelerated Processing Kit (XPK) on Google Cloud. NVIDIA NeMo™ framework deployments on Google Cloud have also become much simpler thanks to the Google Cloud HPC Toolkit and Google Kubernetes Engine (GKE): configurable blueprints jump-start development and enable rapid deployment of turnkey environments, and developers can automate and scale the training and serving of generative AI models. The Google Marketplace offers another convenient way for users to get NVIDIA NeMo and other frameworks that speed up AI development. NeMo is part of NVIDIA AI Enterprise.
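At its core, the workload JAX accelerates is a differentiable training loop: JAX traces a Python loss function, derives its gradient (`jax.grad`), compiles it (`jax.jit`), and runs it on GPUs. The toy loop below writes the gradient out by hand in plain Python, purely to illustrate the computation being scaled; it uses no GPU and no JAX:

```python
# Toy gradient descent for y = w*x: the kind of loop JAX compiles and
# scales across NVIDIA GPUs for real models with billions of parameters.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # generated with true w = 2

def loss(w: float) -> float:
    """Mean squared error of the linear model y = w*x."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def grad_loss(w: float) -> float:
    # Hand-derived d/dw mean((w*x - y)^2) = mean(2*x*(w*x - y)).
    # JAX would produce this automatically via jax.grad(loss).
    return sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)

w = 0.0
for _ in range(200):
    w -= 0.05 * grad_loss(w)  # gradient-descent step

print(round(w, 3))  # converges to 2.0
```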

#6 NVIDIA DRIVE Thor, a next-generation centralized computer for safe and secure autonomous vehicles

Top transportation companies have chosen NVIDIA DRIVE Thor™ centralized car computers to power their next-generation consumer and commercial fleets, including new energy vehicles, trucks, robotaxis, robobuses, and last-mile autonomous delivery vehicles, NVIDIA announced in a news release today.


The emergence of generative AI applications is driving their importance in the automotive sector, and DRIVE Thor is an in-vehicle computing platform designed for exactly these purposes. The successor to DRIVE Orin, it provides a centralized platform combining feature-rich cockpit capabilities with safe and secure autonomous driving. During his GTC keynote, NVIDIA founder and CEO Jensen Huang introduced the new Blackwell architecture, designed for transformer, LLM, and generative AI workloads, which this next-generation AV platform will incorporate. DRIVE Thor, featuring the Blackwell architecture's generative AI engine and other state-of-the-art capabilities, is set to appear in production vehicles as early as next year, delivering 1,000 teraflops of performance while helping guarantee the safety and security of autonomous machines.

#7 NVIDIA announced Project GR00T, a general-purpose foundation model for humanoid robots

Project GR00T, revealed by the company today, is a general-purpose foundation model for humanoid robots meant to advance NVIDIA's work driving breakthroughs in robotics and embodied AI.

The initiative also includes Jetson Thor, a new computer for humanoid robots built on the NVIDIA Thor system-on-a-chip (SoC), along with substantial upgrades to the NVIDIA Isaac™ robotics platform, including new generative AI foundation models, simulation tools, and AI workflow infrastructure. To run multimodal generative AI models such as GR00T, the SoC includes a next-generation GPU based on the NVIDIA Blackwell architecture, whose transformer engine delivers 800 teraflops of 8-bit floating-point AI performance. A built-in functional safety processor, a high-performance CPU cluster, and 100GB of Ethernet bandwidth simplify design and integration.

Leading humanoid robot firms including 1X Technologies, Agility Robotics, Apptronik, Boston Dynamics, Figure AI, Fourier Intelligence, Sanctuary AI, Unitree Robotics, and XPENG Robotics are part of NVIDIA's effort to establish a comprehensive AI platform. NVIDIA also introduced Isaac Manipulator and Isaac Perceptor, a set of reference hardware, libraries, and pre-trained models for robotics.

With its extensive library of foundation models and GPU-accelerated capabilities, Isaac Manipulator gives robotic arms cutting-edge dexterity and customizable AI capabilities. It speeds up path planning by up to 80X and boosts efficiency and throughput with zero-shot perception, helping developers automate more new robotic tasks. Early ecosystem partners include Yaskawa, Franka Robotics, READY Robotics, Solomon, Universal Robots (a Teradyne company), and PickNik Robotics.

#8 NVIDIA announced that Japan’s new ABCI-Q supercomputer will be powered by NVIDIA platforms for accelerated and quantum computing

ABCI-Q will pave the way for industry-wide research by enabling high-fidelity quantum simulations. The scalable, high-performance system integrates NVIDIA® CUDA-Q™, an open-source hybrid quantum computing platform that offers robust simulation tools and the ability to program hybrid quantum-classical systems. The supercomputer runs on more than 500 nodes linked by NVIDIA Quantum-2 InfiniBand, the only completely offloadable in-network computing platform in the world, with over 2,000 NVIDIA H100 Tensor Core GPUs.
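For context on what "high-fidelity quantum simulation" means computationally: an n-qubit state is a vector of 2^n complex amplitudes, and applying a gate is a matrix-vector multiplication; platforms like CUDA-Q scale exactly this to many qubits across GPUs. A deliberately minimal single-qubit sketch in plain Python (not CUDA-Q's API):

```python
import math

# State-vector simulation in miniature: a qubit is two amplitudes, and a
# gate is a 2x2 unitary applied by matrix-vector multiplication.

def apply_gate(gate, state):
    """Apply a 2x2 gate to a single-qubit state [a0, a1]."""
    return [
        gate[0][0] * state[0] + gate[0][1] * state[1],
        gate[1][0] * state[0] + gate[1][1] * state[1],
    ]

H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]  # Hadamard gate

state = [1.0, 0.0]            # start in |0>
state = apply_gate(H, state)  # equal superposition of |0> and |1>
probs = [abs(a) ** 2 for a in state]
print([round(p, 3) for p in probs])  # ~[0.5, 0.5]
```

Doubling the qubit count doubles nothing so gently at scale: the amplitude vector grows as 2^n, which is why simulating even ~40 qubits with high fidelity demands GPU supercomputers like ABCI-Q.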

Developed by Fujitsu at the G-QuAT supercomputing center of the National Institute of Advanced Industrial Science and Technology (AIST), ABCI-Q is slated for deployment early next year and is built to integrate with future quantum hardware.

#9 NVIDIA announced a 6G research platform that empowers researchers with a novel approach to develop the next phase of wireless technology

Researchers have access to a full range of tools to enhance artificial intelligence (AI) for radio access network (RAN) technologies on the open, flexible, and linked NVIDIA 6G Research Cloud platform. Organizations can use the platform to speed up the development of 6G technologies, which will link trillions of devices to cloud infrastructures. This will pave the way for a hyper-intelligent world with smart spaces, autonomous vehicles, and various forms of immersive and extended reality education, as well as collaborative robots.

By bringing together these robust foundational tools, the NVIDIA 6G Research Cloud platform paves the path for the next generation of wireless technology and allows telecoms to realize 6G’s full potential. Researchers interested in using the platform can join the NVIDIA 6G Developer Program.


#10 NVIDIA announced that NVIDIA Omniverse™ Cloud will be available as APIs for creating industrial digital twin applications

Five new APIs from Omniverse Cloud let developers incorporate core Omniverse technologies into digital twin design and automation software applications, or into simulation workflows for testing and validating autonomous machines such as self-driving vehicles and robots.

Ansys, Cadence, Dassault Systèmes (for its 3DEXCITE brand), Hexagon, Microsoft, Rockwell Automation, Siemens, and Trimble are among the world's leading industrial software makers integrating Omniverse Cloud APIs into their product portfolios.

With robots, AVs, and AI-based monitoring systems becoming increasingly popular, developers are looking for ways to speed up their end-to-end workflows.

Training, testing, and validating full-stack autonomy—from perception to planning and control—requires sensor data.

To facilitate full-stack training and testing with physically based sensor simulation, the Omniverse Cloud APIs link a robust developer ecosystem of simulation tools and applications, including Foretellix's Foretify™ Platform, CARLA, and MathWorks, with industry-leading sensor solution providers such as FORVIA HELLA, Luminar, SICK AG, and Sony Semiconductor Solutions.

The Omniverse Cloud APIs are initially available on Microsoft Azure; later this year, developers will be able to use them on both self-hosted and NVIDIA-managed accelerated systems.

Wrapping Up

As we wrap up our look at the top 10 NVIDIA updates, one thing stands out: NVIDIA's relentless pursuit of innovation keeps pushing the limits of what is possible. With every update, the company lays groundwork for a future where cutting-edge technology, scientific inquiry, and immersive gaming converge. We look forward to what comes next with great anticipation, but one thing is certain: NVIDIA will remain at the forefront, shaping the tech industry for years to come while sparking wonder and new ideas at every turn.

[To share your insights with us as part of editorial or sponsored content, please write to sghosh@martechseries.com]
