Insights into the forces shaping our industry.

Artificial Intelligence at the Edge: Exploring Opportunities and Challenges

Industry Commentary

White Paper written by: Advantech Edge AI  www.advantech.com

Technological innovations are driving widespread deployment of AI-based computer vision for industrial IoT applications, resulting in safety, quality, and efficiency improvements.

Introduction

Once confined to data centers, artificial intelligence (AI) is now on the network edge. Hardware and software advances are making AI at the edge even easier to implement, enabling increased performance and greater flexibility; however, success requires the right technology ecosystem and the right technology partner.

Artificial Intelligence at the Edge in Reach

Before exploring the benefits and industry successes of edge AI, it helps to review why these machine-learning technologies are now within easy reach. To begin with, AI is biologically inspired. When fed data, an artificial neural network adjusts the weights of its neurons relative to one another through feed-forward and feedback mechanisms. Through training, this activity produces a model that can make inferences, that is, assessments of new inputs. In practice, such a system can process and identify a wide variety of images.

As an example, suppose an AI inference system needs to review images of animals. Using labeled training sets, it builds the criteria that identify what makes a fish a fish. The neural network, rather than a human programmer, writes the classification logic.

Previously, experts constructed these classification rules by hand in a labor-intensive and expensive process. With neural networks, however, the software inspects numerous images and derives the classification criteria itself. The results can be quite accurate: researchers, for instance, have reported a 99.7 percent AI classification accuracy rate, even when using poor-quality images.
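
To make this concrete, the short Python sketch below shows a neural network deriving its own classification criteria from labeled example images. The scikit-learn digits dataset and the small network used here are illustrative stand-ins, and the resulting accuracy will differ from the 99.7 percent figure cited above.

    # Minimal sketch: a neural network learns its own classification criteria
    # from labeled examples, rather than having rules hand-written by experts.
    # Uses scikit-learn's bundled digits dataset purely for illustration.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    X, y = load_digits(return_X_y=True)              # small labeled image set
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    # Feed-forward network; weights are adjusted through backpropagation (feedback).
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
    clf.fit(X_train, y_train)                        # training "writes" the classifier

    print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))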

Machine learning has become more attainable due to three factors: increased input data, more powerful computing platforms, and more capable software.

On the data front, each image carries megabits of information, and a multitude of cameras worldwide feed AI inference systems performing tasks such as quality assurance, product-flow monitoring, traffic control, and security.

For computing platforms, available industrial-grade GPUs (graphics processing units) can boost neural net performance 27-fold or more in image evaluation when compared to general-purpose or commercial central processing units (CPUs). Regarding software, algorithms are more accurate and capable than ever before.
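
As a rough illustration of the computing-platform point, the sketch below times the same image-classification inference on a CPU and, when available, on a GPU using PyTorch. The model, batch size, and loop count are arbitrary choices for illustration, not a benchmark of any particular industrial hardware, and actual speedups depend on the model and device.

    # Sketch: timing the same inference on CPU vs. GPU. Figures are
    # illustrative only, not a benchmark of specific hardware.
    import time
    import torch
    import torchvision

    model = torchvision.models.resnet18(weights=None).eval()
    batch = torch.randn(8, 3, 224, 224)              # a batch of camera frames

    def timed_inference(device):
        m, x = model.to(device), batch.to(device)
        with torch.no_grad():
            start = time.time()
            for _ in range(20):
                m(x)
            if device == "cuda":
                torch.cuda.synchronize()             # wait for GPU work to finish
            return time.time() - start

    print("CPU time (s):", timed_inference("cpu"))
    if torch.cuda.is_available():
        print("GPU time (s):", timed_inference("cuda"))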

With these three elements combined, end users no longer need an army of data scientists and engineers to implement effective AI solutions.

Figure: AI inference system classification.

Successful Edge AI Use Cases

Abilities associated with human intelligence, such as reasoning, goal setting, understanding and generating language, and perceiving and responding to sensory inputs, have become benchmarks for evaluating the progress of AI.

As learning models derive logical rules, they continually update the knowledge base available to AI. Devices called inference systems are then needed to apply that logic to different tasks.

To see AI advances at work, consider that robots and automated systems commonly utilize sensors and machine vision to evaluate a situation and then act. This perception, reasoning, and action sequence benefits from processing AI data at the edge.

Smart Cities and Building Management

One area where AI is flourishing is the utilization of physical space, a multi-trillion-dollar industry that includes smart city and building management. Video plays a big part in the perception process, and edge AI technology can make use of this data. In Taipei, Taiwan, for instance, dynamic traffic signals help give pedestrians time to cross wide roads safely.

Safety and efficiency are essential attributes of urban transportation infrastructure. By using video analytics, intelligent transportation systems can reduce costs, lower emissions, and improve road safety. In the dynamic traffic signal use case, cameras feed images to an AI system, which determines the position and speed of pedestrians and automobiles. By adjusting traffic lights in real time, the system decreases average traffic wait times by as much as 78 percent, saving tens of thousands of dollars in fuel alone.
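
A highly simplified sketch of this perception-reasoning-action loop appears below. The pedestrian detector is a random placeholder for a real camera-fed edge AI model, and the timing constants are hypothetical.

    # Hypothetical sketch of a dynamic pedestrian-signal loop.
    # detect_pedestrians() stands in for a real edge AI detector fed by
    # intersection cameras; timing constants are illustrative only.
    import random
    import time

    BASE_GREEN_S = 15          # minimum pedestrian green time (illustrative)
    EXTRA_PER_PERSON_S = 2     # extra crossing time per detected pedestrian

    def detect_pedestrians(frame=None):
        """Placeholder for an edge AI model returning a pedestrian count."""
        return random.randint(0, 12)

    def plan_green_time(pedestrian_count):
        """Reasoning step: scale the walk phase to the detected crowd size."""
        return BASE_GREEN_S + EXTRA_PER_PERSON_S * pedestrian_count

    for cycle in range(3):                  # three signal cycles of the simulation
        count = detect_pedestrians()        # perception (camera + AI inference)
        green = plan_green_time(count)      # reasoning
        print(f"cycle {cycle}: {count} pedestrians -> {green}s walk phase")  # action
        time.sleep(0.1)                     # stand-in for the real cycle interval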

Warehouse Inventory Management

Warehouse management is another physical-space utilization application. AI solutions can recognize product barcodes as items enter a facility, enabling complete traceability and adding flexibility.

The system may start with a specific application, such as finding and documenting an identifying barcode. Once imaging is in place, though, software alterations can add capabilities such as sorting objects by color, shape, or size.
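
As a sketch of the traceability step, the snippet below reads barcodes from an intake-camera frame, assuming the third-party pyzbar and Pillow packages and a hypothetical image file. Once such an imaging pipeline exists, later software updates can reuse the same frames for added checks such as color or size sorting.

    # Sketch of the barcode-traceability step, assuming the pyzbar and Pillow
    # packages; the image file name is hypothetical.
    from PIL import Image
    from pyzbar.pyzbar import decode

    frame = Image.open("incoming_pallet.jpg")     # frame from the intake camera
    for code in decode(frame):
        # Each detected symbol carries its payload and location in the image,
        # which can be logged for end-to-end traceability.
        print(code.type, code.data.decode("utf-8"), code.rect)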

Agriculture Processes

AI-driven automation is moving beyond manufacturing and infrastructure into farming and agriculture, where utilizing AI at the edge can improve operational efficiency and help maximize profits. These systems help improve overall harvest quality and accuracy, while AI algorithms can also provide intelligent, real-time monitoring of weather, soil conditions, animal health, and more.

In one example, AI technology helps speed the inspection and grading of eggs for a poultry operation. Poultry Management Systems, Inc. (PMSI), which adopted automation in its barns more than 20 years ago and serves 60 to 70 percent of the U.S. egg market, has now moved to AI, using it for egg counting, egg analysis, quality control of egg size, and more, helping barns manage more than 1 million birds efficiently. Machine vision, combined with smart automation, allows for reduced-error processing and packaging of washed eggs. Similar approaches can improve efficiency in other agricultural processes.

AI in Robotics for Safety

Successful autonomous robotics applications are possible with AI platforms, and it helps when those platforms are also extremely compact. For example, an AI system can fit inside a robot whose total length, width, and height are less than 60 centimeters. The technology is the foundation for an autonomous UV disinfection robot, in which the AI inference system is central to achieving complete disinfection. Despite its compact size, the system possesses powerful computing capabilities. The autonomous mobile robot (AMR) is equipped with multiple sensors and performs AI deep learning to plan paths and avoid obstacles. It operates smoothly, avoiding the system crashes and processing delays often seen with traditional industrial PCs. In terms of expandability, such systems can support various I/O interfaces so users can upgrade the device based on their needs.

Meant to replace humans in hazardous disinfection tasks, the robot helps lower the risk of widespread infection while keeping the environment safe. It not only saves labor costs but also increases safety while achieving complete disinfection of sites.

In any machine vision application, AI at the edge lessens the network load. If a recycling center is scanning items as they move along a line, for example, the system can determine whether each item is glass or cardboard. In a bottle production plant, the AI system can scan products on the line to detect defects. Without artificial intelligence, this classification would require either manual inspection by employees or sending entire images, megabits in size, over the network; both are costly, time-consuming options.
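
The sketch below illustrates why local classification lightens the network: the edge device transmits a small verdict record rather than the raw frame. The classify() function is a placeholder for an on-device model, and the station name, labels, and frame size are assumptions for illustration.

    # Sketch: classify locally, send only a tiny JSON verdict upstream instead
    # of the raw camera frame. classify() is a placeholder for a real model.
    import json
    import numpy as np

    frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)  # one camera frame

    def classify(image):
        """Placeholder for an on-device model, e.g. glass vs. cardboard."""
        return {"label": "glass", "confidence": 0.97}

    verdict = classify(frame)
    payload = json.dumps({"station": "line-3", **verdict}).encode()

    print(f"raw frame   : {frame.nbytes / 1e6:.1f} MB")   # cost of streaming the image
    print(f"edge verdict: {len(payload)} bytes")          # what actually crosses the network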

Edge AI also saves decision-making time because data is processed locally, closer to its source. In a high-speed process, sorting decisions may need to take place in milliseconds, and saving even a fraction of a second by eliminating delays can be critical, especially in mission-critical applications. Local processing makes real-time operational and safety monitoring possible for critical decision-making.

In the examples above, edge AI solutions may run on a sensor, a nearby industrial computer, a local server, and more. These varied possible locations raise an important point: AI at the edge is transforming industries of all types, but it requires support tailored to specific needs.

The right technology partner and ecosystem architecture are key to success. The training phase, for instance, involves large amounts of data and may be best handled on servers or in the cloud. The inference engine, on the other hand, is much less computationally intensive and may need to run in real time; its optimum location may be in the sensor or close to it. With so many options, it helps to have a partner that offers a wide range of IoT device solutions, all with easy portability between settings. This breadth of technology makes a solution easier to develop and deploy.
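
One common way to achieve this portability, sketched below, is to train a model on servers or in the cloud and export it to a portable format such as ONNX for deployment on an edge device. The model, file name, and input shape here are illustrative, and the target runtime (for example ONNX Runtime or TensorRT) depends on the chosen hardware.

    # Sketch: export a cloud-trained model to a portable format (ONNX) so the
    # same network can run on an edge device. Model and names are illustrative.
    import torch
    import torchvision

    model = torchvision.models.mobilenet_v2(weights=None).eval()   # compact, edge-friendly net
    dummy = torch.randn(1, 3, 224, 224)                            # example input shape

    torch.onnx.export(
        model, dummy, "edge_classifier.onnx",
        input_names=["image"], output_names=["scores"],
        dynamic_axes={"image": {0: "batch"}},                      # allow variable batch size
    )
    print("exported edge_classifier.onnx")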

Additionally, with the right technology partner and ecosystem, users can bundle products that solve common problems and combine solutions for easier application development.

It also means simplified deployment and implementation for future innovations such as multi-modal AI, an approach that combines images, text, speech, and numerical data with multiple algorithms to achieve higher performance.

Conclusion

With advances in technology, AI is no longer a science experiment or a far-off concept. Today, AI solutions at the edge are available to solve a variety of operational challenges. Due to the nuances of different applications, however, it is important to secure a partner who can help build a precise ecosystem tailored to specific needs.