
The Power of Embedded AI Technology

Advancements in artificial intelligence (AI) technology continue to arrive faster than we could have ever imagined, and it’s clear that AI is making an impact across industries worldwide. After years of continuous research and development, AI and Deep Learning (DL) models have become efficient and accurate enough to be used as core components of most modern-day data processing solutions across land, sea, air, and space.

In applications like Degraded Visual Environment (DVE) operations, intelligence, surveillance, and reconnaissance (ISR), command and control, and unmanned aerial vehicles (UAVs), AI will be essential to mission success, protecting humans from the dangers of hostile environments without sacrificing efficiency. AI has been shown to handle these complex environments more effectively than humans, resulting in increased mission performance. From both angles – offensive and defensive – the use of deep learning can boost threat detection thanks to the ever-increasing computational power and predictive analysis throughput available.

Defense and aerospace systems that are being newly integrated or refreshed need to, at a bare minimum, account for AI and be future-proofed with it in mind. All signs in the computing space point toward a future where hand-written algorithms for common data and image processing tasks are replaced by AI and deep learning neural networks/models.

Upgrading for AI

Regardless of domain, GPU-based integration is the logical path forward for a quick and successful deployment of AI into the field. GPU-based AI processing is frequently chosen for its high flexibility and established upgrade path while still providing high throughput and fast response times. Additionally, GPUs are typically already available on the payload and being utilized for other applications. FPGAs are attempting to compete in this space through on-chip SoC integrations and by slowly adopting more open AI libraries/frameworks, but modern GPUs support all major frameworks and have a settled, standardized data bus path and driver implementation via PCIe that allows the upgrade path to be entirely isolated from the mating CPU data arbiter.
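
As a rough illustration of that isolation, the snippet below (a minimal sketch using PyTorch, with nothing specific to any particular carrier card assumed) queries whatever CUDA device the standard driver stack exposes over PCIe; the same code runs unchanged whether the GPU sits on a desktop card or a rugged VPX module.

```python
import torch

# Query whichever CUDA-capable GPU the driver exposes over PCIe. Because the
# framework only talks to the standard CUDA driver stack, this code does not
# change when the GPU hardware underneath it is upgraded or swapped.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}")
    print(f"Compute capability: {props.major}.{props.minor}")
    print(f"Memory: {props.total_memory / 1024**3:.1f} GiB")
    print(f"Multiprocessors: {props.multi_processor_count}")
else:
    print("No CUDA device visible to the driver")
```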

Both data size and model complexity are going to increase over time as deep learning continues to evolve. With this in mind, a logical upgrade path needs to be planned to accommodate not only hardware upgrades, but software changes as well. GPUs support all modern frameworks, and their reliance on mated CPUs running standard operating systems ensures easy reconfiguration of software and AI models as well as driver and framework updates. Given that deep learning relies heavily on retraining models as more data is accumulated, it is highly important that an iterative, in-field software upgrade path is accommodated, and this is simplified when every component of the system uses standard hardware and software.
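
As a minimal sketch of that retrain-and-redeploy loop (assuming a PyTorch model and an ONNX-based deployment format; the model choice and file names here are purely illustrative), a newly retrained network can be exported to a single portable artifact that fielded systems pick up during a routine software update, with no hardware change:

```python
import torch
import torchvision

# Illustrative model: a small pretrained-style classifier standing in for
# whatever network is periodically retrained as new mission data accumulates.
model = torchvision.models.resnet18(weights=None)
model.load_state_dict(torch.load("retrained_weights.pt", map_location="cpu"))
model.eval()

# Export to ONNX: one portable artifact that deployed GPUs can consume
# through any framework or runtime that understands the format.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "updated_model.onnx",
                  input_names=["input"], output_names=["output"])
```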

The Tech Behind Rugged AI

It is clear that the future of the military involves AI, and therefore all future video/data processing and encoding platforms should consider the implementation of accelerated AI capabilities. AI inferencing, deep learning, sensor processing, and data analytics will require the system to have a GPU powerful enough to deliver the general processing required as well as specialized cores needed to optimize AI tasks.

NVIDIA® GPUs are an excellent choice for aerospace and defense applications with significant video data processing requirements, including video stabilization, image processing, terrain analysis, object tracking, and 3D visualization of data. Support for the CUDA® framework provides the industry’s most advanced and efficient GPGPU framework, opening new doors in the embedded military environment for highly efficient, near or at real-time processing of data on edge devices. The CUDA-X platform also provides integrations with the most popular Deep Learning frameworks (TensorFlow, PyTorch, Caffe2, etc.) as well as TensorRT, an inference optimization layer that reduces model complexity for faster execution.
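
As one hedged example of what that optimization step can look like (a sketch against TensorRT’s Python API, assuming an ONNX model exported from one of the frameworks above; the file paths are illustrative), a trained network is parsed and rebuilt as a serialized inference engine, with reduced-precision kernels enabled where the GPU supports them:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, fp16=True):
    """Parse an ONNX model and build a serialized TensorRT inference engine."""
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(f"ONNX parse failed: {parser.get_error(0)}")

    config = builder.create_builder_config()
    # Enable FP16 kernels where the hardware (e.g. Turing Tensor cores) supports them.
    if fp16 and builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)

    return builder.build_serialized_network(network, config)

# Illustrative usage: write the optimized engine alongside the source model.
serialized = build_engine("updated_model.onnx")
with open("updated_model.engine", "wb") as f:
    f.write(serialized)
```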

EIZO Rugged Solutions’ Condor GR5-RTX5000 is built using the latest embedded GPU technology from NVIDIA. The Condor GR5-RTX5000 is a rugged 3U VPX form factor card based on the NVIDIA® Turing™ architecture and the NVIDIA RTX™ platform, and it meets strict data integrity requirements for mission-critical applications with uncompromised computing accuracy and reliability. The NVIDIA® Quadro RTX® 5000 GPU has 3072 CUDA parallel processing cores in the NVIDIA Turing architecture, which offer a multitude of capabilities such as mesh shading, variable rate shading, texture space shading, multi-view rendering, and ultra-high performance GPGPU computing.

With 384 Tensor cores, the Condor GR5-RTX5000 delivers high AI inferencing performance. These Tensor cores are dedicated hardware units designed to accelerate both training of and inference from deep learning models. They support numerous precision modes, including FP64, FP32, FP16, INT8, INT4, and INT1, which can enable up to 32X throughput on compatible models compared to previous generations. These AI models can be combined with NVIDIA® NGX™, which provides features such as AI-powered image upscaling and denoising. The RTX5000 also contains 48 RT cores for real-time ray tracing – a breakthrough in 3D rendering.
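
As a rough sketch of how those reduced-precision modes are typically exercised from application code (using PyTorch’s automatic mixed precision as one common route onto the Tensor cores; the model here is only a placeholder), eligible layers are run in FP16 so the GPU can schedule them on Tensor cores rather than the general-purpose CUDA cores:

```python
import torch
import torchvision

# Placeholder network and input frame; any CUDA-resident model works the same way.
model = torchvision.models.resnet18(weights=None).cuda().eval()
frame = torch.randn(1, 3, 224, 224, device="cuda")

# Autocast runs eligible operations in FP16, which the hardware can map onto
# Tensor cores for higher throughput than full-precision CUDA-core execution.
with torch.no_grad(), torch.autocast(device_type="cuda", dtype=torch.float16):
    scores = model(frame)

print(scores.shape)  # e.g. torch.Size([1, 1000]) for this placeholder classifier
```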

The NVIDIA GPUs used in our Condor line of rugged VPX, XMC, and PCIe solutions are ideal for a multitude of compute-intensive applications that require low-latency AI and GPGPU processing. For the most demanding applications in SWaP-constrained platforms, EIZO Rugged Solutions works closely with NVIDIA to provide the most innovative GPU technology for mission-critical applications.

You can read more about EIZO Rugged Solutions’ product capabilities, such as deep learning, GPU design, raw video capture, CUDA support, and the video formats our products support, on our capabilities page.
