NVIDIA Touts Jetson Xavier NX as ‘World’s Smallest Supercomputer for AI’


November 06, 2019      

Robot and embedded device developers who have had to sacrifice AI performance for size considerations now have another option. NVIDIA today launched the Jetson Xavier NX, offering higher performance in the same footprint as its entry-level AI system, the Jetson Nano. The Jetson Xavier NX module will begin shipping in March 2020 for $399, but developers can begin designing for it today using the company’s development kit, which includes a software patch that emulates the module.

Within NVIDIA’s Jetson lineup, the Xavier NX sits between the higher-performing Xavier series and the company’s older TX2 series. It packs a punch in performance compared to the TX2 and Jetson Nano, while remaining the same size as the Jetson Nano (70 x 45 mm). NVIDIA said the energy-efficient Jetson Xavier NX module can deliver “server-class performance” of up to 21 tera operations per second (TOPS) for modern AI workloads while consuming as little as 10 watts of power.

Small systems, big performance

In the robotics space, NVIDIA said the Jetson Xavier NX would be suited for autonomous mobile devices (including smaller drones), as well as high-resolution sensors that can be placed on robot arms. The company also touted smaller embedded devices, such as medical instruments and network video recorders that need to run video analytics applications. For all of these devices, the point is to allow AI processing to be done at the edge rather than in the cloud, the company said.


“AI has become the enabling technology for modern robotics and embedded devices that will transform industries,” said Deepu Talla, vice president and general manager of Edge Computing at NVIDIA. “Many of these devices, based on small form factors and lower power, were constrained from adding more AI features. Jetson Xavier NX lets our customers and partners dramatically increase AI capabilities without increasing the size or power consumption of the device.”

The company said Xavier NX can deliver up to 14 TOPS at 10W, or up to 21 TOPS at 15W, running multiple neural networks in parallel and processing data from multiple high-resolution sensors simultaneously. For companies already building embedded machines, the Xavier NX runs on the same CUDA-X AI software architecture as all Jetson offerings, NVIDIA said.
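As a rough illustration of what those TOPS ratings mean in practice, the back-of-the-envelope sketch below converts a rated throughput into an upper bound on inference rate. Everything other than the 14 and 21 TOPS figures (the per-frame operation count and the utilization fraction) is an illustrative assumption, not an NVIDIA number:

```python
# Back-of-the-envelope inference-rate estimate from a TOPS rating.
# Only the 14/21 TOPS figures come from NVIDIA; the rest are assumptions.

def max_inferences_per_sec(tops, ops_per_inference, utilization=0.3):
    """Upper-bound inference rate for a model costing `ops_per_inference`
    operations per pass, assuming a given fraction of peak throughput."""
    return tops * 1e12 * utilization / ops_per_inference

# Hypothetical example: a detection model costing ~40 GOPs per frame.
gops_per_frame = 40e9

at_10w = max_inferences_per_sec(14, gops_per_frame)  # 14 TOPS mode
at_15w = max_inferences_per_sec(21, gops_per_frame)  # 21 TOPS mode

print(f"~{at_10w:.0f} fps at 10 W, ~{at_15w:.0f} fps at 15 W (upper bound)")
```

Real throughput depends heavily on the network, precision, and how well the workload maps onto the GPU, Tensor Cores, and NVDLA engines, so a calculation like this is only a sizing aid.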

The Jetson Xavier NX is supported by the company’s JetPack software development kit, a complete AI software stack that includes accelerated libraries for deep learning, computer vision, computer graphics, and multimedia, the company added.

“NVIDIA’s embedded Jetson products have been accelerating the research, development and deployment of embedded AI solutions on Lockheed Martin’s platforms,” said Lee Ritholtz, director and chief architect of Applied Artificial Intelligence at Lockheed Martin. “With Jetson Xavier NX’s exceptional performance, small form factor and low power, we will be able to do more processing in real time at the edge than ever before.”

Specifications

Module specifications of Jetson Xavier NX include:

  • GPU: NVIDIA Volta with 384 NVIDIA CUDA cores and 48 Tensor Cores, plus 2x NVDLA;
  • CPU: 6-core Carmel Arm 64-bit CPU, 6MB L2 + 4MB L3;
  • Video: 2x 4K30 Encode and 2x 4K60 Decode;
  • Camera: Up to six CSI cameras (36 via virtual channels); 12 lanes (3×4 or 6×2) MIPI CSI-2;
  • Memory: 8GB 128-bit LPDDR4x; 51.2GB/second;
  • Connectivity: Gigabit Ethernet;
  • OS Support: Ubuntu-based Linux;
  • Module size: 70×45 mm
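The quoted 51.2GB/s memory bandwidth follows directly from the bus width and data rate: a 128-bit bus moves 16 bytes per transfer, and a 3200 MT/s LPDDR4x data rate (an assumption on our part, though it is the rate consistent with the quoted figure) yields the spec-sheet number. A quick sanity check:

```python
# Sanity-check the quoted memory bandwidth from bus width and data rate.
# The 3200 MT/s LPDDR4x data rate is an assumption consistent with 51.2 GB/s.

bus_width_bits = 128
transfers_per_sec = 3200e6               # 3200 MT/s (assumed data rate)

bytes_per_transfer = bus_width_bits // 8  # 16 bytes moved per transfer
bandwidth_gb_s = bytes_per_transfer * transfers_per_sec / 1e9

print(f"{bandwidth_gb_s:.1f} GB/s")       # matches the quoted 51.2 GB/s
```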

NVIDIA said the Xavier NX is also pin-compatible with the Jetson Nano, allowing shared hardware designs and letting users of Jetson Nano carrier boards and systems upgrade to the Xavier NX. The module also supports all major AI frameworks, including TensorFlow, PyTorch, MXNet, Caffe, and others, the company added.

In a separate announcement, NVIDIA said it topped all five benchmarks measuring the performance of AI workloads in data centers and at the edge in the MLPerf Inference 0.5 suite of tests.