Edge AI in a 5G world

Alex Cattle

on 6 February 2020



Deploying AI/ML solutions in latency-sensitive use cases requires a new approach to solution architecture for many businesses.

Fast computational units (e.g. GPUs) and low-latency connections (e.g. 5G) allow AI/ML models to be executed outside the sensors and actuators themselves (e.g. cameras and robotic arms). This reduces costs through lower hardware complexity on each device and through sharing compute resources across the IoT fleet.
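
As a rough illustration of this offloading pattern, a co-located GPU service can be consumed like any other network API. The sketch below assumes a hypothetical HTTP inference endpoint (`gpu-node.factory.local`) standing in for a shared GPU node on the factory network; the endpoint name, path and response format are illustrative and not from the webinar.

```python
import requests  # third-party HTTP client; any HTTP library would do

# Hypothetical address of a GPU inference server co-located with the
# sensors (e.g. on the same factory network); not from the original post.
INFERENCE_URL = "http://gpu-node.factory.local:8080/v1/infer"


def classify_frame(jpeg_bytes: bytes) -> dict:
    """Send one camera frame to the shared GPU service and return its prediction.

    The camera stays 'dumb': it only captures and transmits, while the
    heavy model runs on a GPU shared by the whole fleet of devices.
    """
    response = requests.post(
        INFERENCE_URL,
        data=jpeg_bytes,
        headers={"Content-Type": "image/jpeg"},
        timeout=0.05,  # 50 ms budget: only viable over a low-latency (5G/LAN) link
    )
    response.raise_for_status()
    return response.json()  # e.g. {"label": "defect", "confidence": 0.97}
```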

Strict AI responsiveness requirements that previously forced models to be embedded on the IoT devices can now be met with GPUs co-located with the sensors and actuators (e.g. in the same factory building). One example is the robot ‘dummification’ trend currently being observed in factory robotics, which aims to reduce robot unit costs and simplify fleet management.
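
Whether the off-device path actually satisfies a robot's deadline comes down to measured round-trip latency. The sketch below reuses the same hypothetical endpoint and checks the 95th-percentile round trip against an assumed 20 ms actuation budget; both figures are placeholders, as real budgets depend on the deployment.

```python
import statistics
import time

import requests

# Hypothetical endpoint and deadline, for illustration only.
INFERENCE_URL = "http://gpu-node.factory.local:8080/v1/infer"
LATENCY_BUDGET_S = 0.020  # e.g. a 20 ms actuation deadline


def meets_budget(sample_frame: bytes, trials: int = 50) -> bool:
    """Measure round-trip inference latency to the co-located GPU node.

    If the 95th-percentile round trip stays under the actuator's deadline,
    the model can live off-device and the robot can remain a thin client.
    """
    round_trips = []
    for _ in range(trials):
        start = time.perf_counter()
        requests.post(
            INFERENCE_URL,
            data=sample_frame,
            headers={"Content-Type": "image/jpeg"},
            timeout=1.0,
        )
        round_trips.append(time.perf_counter() - start)
    p95 = statistics.quantiles(round_trips, n=20)[-1]  # ~95th percentile
    return p95 <= LATENCY_BUDGET_S
```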

In this webinar we will explore real-life scenarios in which GPUs and low-latency connectivity unlock solutions that were previously prohibitively expensive, giving businesses the tools to put them in place and lead the 4th industrial revolution.

Watch the webinar


