AI at the edge: 3 tips to keep in mind before implementing


As artificial intelligence (AI) matures, adoption continues to rise. According to recent research, 35% of organizations use AI and 42% are exploring its potential. While AI is well understood and widely implemented in the cloud, it is still nascent at the edge and comes with some unique challenges.

Many people use AI throughout the day, from navigating their cars to tracking their steps to talking to digital assistants. Although a user typically accesses these services on a mobile device, the computation itself happens in the cloud. More specifically, a person requests information, that request is processed by a central learning model in the cloud, and the results are sent back to the person's local device.
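
As a rough illustration of that round trip, here is a minimal Python sketch; the endpoint URL and payload shape are hypothetical, not any real service's API:

```python
import json
import urllib.request

# Hypothetical cloud inference endpoint; the URL and payload shape are
# illustrative only.
ENDPOINT = "https://cloud.example.com/v1/model/query"

def ask_cloud_model(utterance: str) -> dict:
    """Send a user request to the central model and return its response.

    The heavy computation happens in the cloud; the device only
    serializes the request and renders the result.
    """
    payload = json.dumps({"query": utterance}).encode("utf-8")
    req = urllib.request.Request(
        ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```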

AI at the edge is less understood and implemented less frequently than AI in the cloud. From their inception, AI algorithms and innovations have been based on a fundamental assumption: that all data can be sent to a central location. At this central location, an algorithm has full access to the data. This allows the algorithm to build its intelligence like a brain or central nervous system, with full authority over computation and data.

But AI at the edge is different. Instead of a central brain, it distributes intelligence across all the cells and nerves. By bringing intelligence to the edge, we give these edge devices agency. That's essential in many applications and domains, such as healthcare and industrial manufacturing.


Reasons to implement AI at the edge

There are three main reasons to implement AI at the edge.

Protection of Personally Identifiable Information (PII)

First, some organizations that deal with PII or sensitive intellectual property (IP) prefer to leave the data where it originates: on the imaging machine in the hospital or on the manufacturing machine on the production floor. Keeping data local can reduce the risk of "excursions" or "leakage" that can occur when it is transmitted over a network.
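
A minimal sketch of that pattern might look like the following, with a hypothetical local classifier standing in for the real model; only a derived label and a one-way hash ever leave the machine:

```python
import hashlib

def classify_scan_locally(pixels: bytes) -> str:
    """Placeholder for a model running on the imaging machine itself."""
    return "no-finding" if len(pixels) % 2 == 0 else "needs-review"

def report_upstream(patient_id: str, pixels: bytes) -> dict:
    """Build the only message that leaves the device.

    The raw scan and the patient identifier (the PII) stay local;
    upstream systems see just a derived label and a one-way hash.
    """
    return {
        "case": hashlib.sha256(patient_id.encode("utf-8")).hexdigest()[:12],
        "result": classify_scan_locally(pixels),
    }

print(report_upstream("patient-0042", b"\x00" * 512))
```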

Minimize bandwidth usage

The second reason is bandwidth. Sending large amounts of data from the edge to the cloud can clog the network, and in some cases it is simply impractical. It is not uncommon for an imaging machine in a healthcare setting to generate files so large that transferring them to the cloud is impossible or would take days.

It may be more efficient to simply process the data at the edge, especially if the insights are intended to improve a proprietary machine. In the past, compute was much harder to move and maintain, which justified moving data to wherever the compute lived. That paradigm is now being challenged: data is often the more valuable and harder-to-move asset, leading to use cases that justify moving compute to where the data lives.
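
A minimal sketch of that idea, assuming simple threshold-based insights, reduces a raw sensor capture to a compact summary before anything crosses the network:

```python
import json
import statistics

def summarize_at_edge(readings: list[float], limit: float) -> bytes:
    """Reduce a large raw capture to a compact summary for transfer.

    Rather than shipping every reading to the cloud, the edge node
    sends only summary statistics and an anomaly flag.
    """
    summary = {
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "max": max(readings),
        "anomaly": max(readings) > limit,  # simple threshold check
    }
    return json.dumps(summary).encode("utf-8")

# A day of 1 Hz readings (~86,400 floats) collapses to roughly 80 bytes.
print(summarize_at_edge([0.4, 0.5, 3.2, 0.4], limit=2.0))
```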

Avoid latency

The third reason to deploy AI at the edge is latency. The internet is fast, but it is not real time. If milliseconds matter, as with a robotic arm assisting in surgery or a time-sensitive manufacturing line, an organization may decide to run AI at the edge.
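
A back-of-the-envelope comparison makes the point; the network round-trip figure and the control-loop deadline below are assumptions, and the local "model" is a placeholder:

```python
import time

NETWORK_RTT_MS = 50.0  # assumed cloud round-trip time; varies widely
DEADLINE_MS = 10.0     # assumed control-loop deadline for the robot arm

def infer_locally(frame: bytes) -> bool:
    """Placeholder for an on-device model; real inference runs here."""
    return len(frame) % 2 == 0

start = time.perf_counter()
decision = infer_locally(b"\x00" * 1024)
local_ms = (time.perf_counter() - start) * 1000

# Even a fast network hop alone exceeds the deadline, while the local
# path stays well inside it.
print(f"local inference: {local_ms:.3f} ms, decision={decision} "
      f"(deadline {DEADLINE_MS} ms)")
print(f"cloud path adds at least {NETWORK_RTT_MS} ms of round trip")
```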

Challenges with AI at the edge and how to solve them

Despite the benefits, there are still some unique challenges to implementing AI at the edge. Here are some tips to consider to help tackle those challenges.

Good and bad results in model training

Most AI techniques require large amounts of data to train a model. However, this is often harder in industrial edge use cases, where the majority of manufactured products are not defective and are therefore labeled as good. The resulting imbalance of "good" versus "bad" outcomes makes it harder for models to learn to recognize problems.
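
One common mitigation, not specific to any one toolkit, is to reweight the training loss so the rare "bad" examples count as much as the plentiful "good" ones; a minimal sketch:

```python
from collections import Counter

def inverse_frequency_weights(labels: list[str]) -> dict[str, float]:
    """Weight each class by the inverse of its frequency so rare 'bad'
    examples contribute as much to the training loss as common 'good' ones.
    """
    counts = Counter(labels)
    total = len(labels)
    return {cls: total / (len(counts) * n) for cls, n in counts.items()}

# A typical industrial label distribution: defects are rare.
labels = ["good"] * 980 + ["bad"] * 20
print(inverse_frequency_weights(labels))
# {'good': 0.51..., 'bad': 25.0}
```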

Pure AI solutions that classify data without contextual information are often hard to build and deploy, because labeled data is scarce and failure events are rare. Adding context to AI, known as a data-centric approach, often pays dividends in the accuracy and scalability of the final solution. The truth is that while AI can often replace the mundane tasks humans perform manually, it benefits greatly from human knowledge when a model is being put together, especially when there isn't much data to work with.

Getting commitment up front from an experienced subject matter expert to work closely with the data scientists building the algorithm gives the AI a learning boost.

AI cannot magically solve every problem

There are often many steps involved in producing an output. For example, a factory floor may have many stations, and they may be interdependent: moisture in one area of the factory during one process can affect the results of another process farther down the manufacturing line in a different area.

People often assume that AI can magically reconstruct all of these relationships. In many cases it can, but doing so is likely to require a large amount of data and a long collection period, and the result is a very complex algorithm that resists explainability and updates.

AI cannot live in a vacuum. Capturing those interdependencies is what elevates a simple solution into one that can scale over time and across deployments.
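
As a concrete sketch of capturing such an interdependency, the moisture example above might be encoded by joining upstream conditions onto each downstream sample as explicit features; the station names and fields here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class StationReading:
    station: str
    humidity: float
    temperature: float

def build_feature_row(current: StationReading,
                      upstream: StationReading) -> dict[str, float]:
    """Join upstream conditions onto the downstream sample so the model
    sees cross-station effects as explicit features, instead of having
    to rediscover them from downstream data alone.
    """
    return {
        "humidity": current.humidity,
        "temperature": current.temperature,
        # Context carried over from the upstream process.
        "upstream_humidity": upstream.humidity,
        "upstream_temperature": upstream.temperature,
    }

row = build_feature_row(
    StationReading("paint", humidity=0.41, temperature=22.5),
    StationReading("wash", humidity=0.63, temperature=30.1),
)
print(row)
```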

Lack of stakeholder buy-in can limit AI scale

It is difficult to scale AI across an organization if a group within it is skeptical of the benefits. The best (and perhaps only) way to gain broad acceptance is to start with a high-value, difficult problem and solve it with AI.

At Audi, the team first considered using AI to figure out how often to change the electrodes on welding guns. But the electrodes were inexpensive, and this would not have eliminated any of the mundane tasks humans were doing. Instead, they chose the welding process itself, a problem universally agreed to be difficult across the industry, and dramatically improved its quality through AI. This sparked the imagination of engineers across the company to investigate how they could use AI in other processes to improve efficiency and quality.

Balancing the benefits and challenges of edge AI

Deploying AI at the edge can help organizations and their teams. It has the potential to transform a facility into an intelligent edge, improving quality, optimizing the manufacturing process, and inspiring developers and engineers across the organization to explore how they might incorporate AI or advance existing AI use cases with predictive analytics, recommendations to improve efficiency, or anomaly detection. But it also presents new challenges. As an industry, we need to be able to deploy it while reducing latency, increasing privacy, protecting IP, and keeping the network running smoothly.

Camille Morhardt, Director of Security and Communications Initiatives

With over a decade of experience initiating and leading technology product lines from the edge to the cloud, Camille Morhardt eloquently humanizes and distills complex technical concepts into enjoyable conversations. She hosts What That Means, a Cyber Security Inside podcast, where she talks with leading technical experts to get definitions straight from those who define them. She is part of Intel's Security Center of Excellence and is passionate about Compute Lifecycle Assurance, an industry initiative to increase supply chain transparency and security.

Rita Wouhaybi, Lead Senior AI Engineer, IoT Group

Rita Wouhaybi is a Senior Principal AI Engineer in the Office of the CTO in the Network and Edge Group at Intel. She leads the architecture team focused on the manufacturing and federal market segments, helping drive the delivery of edge AI solutions covering architecture, algorithms, and benchmarking using Intel hardware and software assets. Rita is also a lead data scientist for time series at Intel and a principal architect of Intel's Edge Insights for Industrial. She received her Ph.D. in electrical engineering from Columbia University, has more than 20 years of industry experience, has filed more than 300 patents, and has published more than 20 papers in acclaimed IEEE and ACM conferences and journals.
