Developers interested in learning more about the deep learning accelerator on NVIDIA's Jetson Orin microcomputer will be pleased to know that NVIDIA has published a new article on its tech blog. The post provides an overview of the Deep Learning Accelerator (DLA) as used with the Jetson system, which combines a CPU and GPU into one package, giving developers the expanded NVIDIA software stack in a small, low-power form factor that can be deployed at the edge.
“Although DLA does not have as many supported layers as the GPU, it still supports a variety of layers used in many common neural network architectures. In many cases, layer support may cover the requirements of your model. For example, the NVIDIA TAO toolkit includes a variety of pre-trained, DLA-supported models, from object detection to action recognition.”
“While it is important to note that DLA throughput is usually lower than that of the GPU, it is energy efficient and allows you to offload deep learning workloads, freeing up the GPU for other tasks. Alternatively, depending on your application, you can run the same model on the GPU and DLA simultaneously to achieve higher net throughput.”
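One common way to target the DLA on Jetson is TensorRT's `trtexec` tool, whose `--useDLACore` and `--allowGPUFallback` flags build an engine for a DLA core while letting unsupported layers fall back to the GPU. The sketch below, a minimal illustration rather than an official recipe, composes such command lines for one hypothetical ONNX model so that a GPU engine and a DLA engine could be built and run concurrently; the model path is an assumption.

```python
def trtexec_cmd(onnx_path, dla_core=None):
    """Build a trtexec invocation for the GPU (dla_core=None) or a DLA core.

    Sketch only: assumes trtexec (shipped with TensorRT on Jetson) is on PATH.
    """
    cmd = ["trtexec", f"--onnx={onnx_path}"]
    if dla_core is not None:
        # Target the chosen DLA core; layers the DLA cannot run
        # fall back to the GPU instead of failing the build.
        cmd += [f"--useDLACore={dla_core}", "--allowGPUFallback"]
    return cmd

# One engine per compute unit; "model.onnx" is a placeholder path.
gpu_cmd = trtexec_cmd("model.onnx")            # runs on the GPU
dla_cmd = trtexec_cmd("model.onnx", dla_core=0)  # runs on DLA core 0
```

Running both resulting engines at the same time, one process or stream per engine, is how the same model can use the GPU and DLA simultaneously for higher net throughput.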
“Many NVIDIA Jetson developers are already using the DLA to successfully improve their applications. Postmates improved their delivery-robot application on the Jetson AGX Xavier by taking advantage of the DLA in conjunction with the GPU, and the Cainiao ET Lab uses the DLA to improve their logistics vehicle. If you are looking to improve your entire application, the DLA is an important part of the Jetson platform to consider.”
For more information on using the Deep Learning Accelerator with Jetson Orin, head over to the official NVIDIA blog by following the link below.