Why Hardware Design Needs AI Technology

What is Artificial Intelligence (AI)? AI is a buzzword that's thrown around a lot lately by marketing departments and startups to add a degree of pizzazz and glitz to their product offerings. Unfortunately, in common parlance, it is often used when people are simply describing an algorithm. So let's put some meat on the bones of what AI actually means. Broadly, Artificial Intelligence is a task performed by a program or machine that, if a human were to carry out the same activity, we would say the human had to apply intelligence to accomplish it. This is the description of AI given by the fathers of the field, Minsky and McCarthy, back in the 1950s, and it loosely describes how the term is applied to technology these days.

However, if you work in the field of AI technology, you would use a much narrower definition. AI, in a nutshell, completes a defined task or goal by applying an aspect of reasoning we have traditionally associated with human intelligence, such as planning, learning, reasoning, problem-solving, knowledge representation, perception, motion and manipulation, and, to a lesser extent, social intelligence and creativity. Using these "human" approaches, an AI system is able to come to a conclusion and present it.

This differs from an algorithm, which works more like an equation: you feed in an input, it goes through a series of predefined steps or calculations, and an answer comes out. You understand every part of the process, and there is no "interpretation" occurring at any point in the exercise. AI, by contrast, is adaptable to a degree: it can be 'taught' and can then apply that learned knowledge to similar problems in the future. We may not always know how or why it works, only that we put the right question to it and it gave us the right answer.
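To make the distinction concrete, here is a minimal sketch in Python. The comfort-threshold scenario and the example data are our own illustration, not taken from any particular product: the first function is a classic algorithm with a hard-coded rule, while the second derives its rule from labelled examples and can then apply it to new inputs.

```python
# A classic algorithm: every step is predefined and fully transparent.
def is_room_comfortable_fixed(temp_c: float) -> bool:
    return 18.0 <= temp_c <= 24.0  # hard-coded rule, no learning involved


# A (very) simple learned model: it derives its own threshold from examples.
def learn_comfort_threshold(samples: list[tuple[float, bool]]) -> float:
    """Return a temperature halfway between the warmest 'uncomfortable'
    example and the coolest 'comfortable' one."""
    comfortable = [t for t, ok in samples if ok]
    too_cold = [t for t, ok in samples if not ok and t < min(comfortable)]
    return (max(too_cold) + min(comfortable)) / 2.0


# Hypothetical training data: (temperature, was the user comfortable?)
training = [(12.0, False), (15.0, False), (19.0, True), (22.0, True)]
threshold = learn_comfort_threshold(training)

print(is_room_comfortable_fixed(17.0))   # False - the fixed rule never changes
print(17.0 >= threshold)                 # True here - the data taught a looser rule
```

The point of the toy example is only the contrast: the fixed rule is fully specified up front, while the learned rule depends entirely on the examples it was shown.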

Now, this leads us to a further distinction that will be important going forward: there are two types of AI, Strong and Weak.

Weak AI is a high-functioning system that replicates or surpasses human intelligence, but only for a dedicated purpose or task, such as a self-driving car. A self-driving car is as good as or better than a human at the specific task of driving a car on regular public streets.

Strong AI, more commonly known as Artificial General Intelligence, uses a more generalised, human-like cognitive ability to find solutions to unfamiliar tasks. Strong AI doesn't currently exist, and the main reason is computing power: we simply don't have hardware powerful enough to fully simulate the human mind and replicate our cognitive abilities in software. And so we arrive at our first bottleneck.

AI Hardware: 

We are currently seeing an explosion of "smart" products and technology built on the revolution of the Internet of Things, or IoT: internet-enabled products that are able to carry out tasks "intelligently" around us. Or so we would think. Currently, we've essentially just been digitising analogue products: turning lights on and off remotely or with sensors, connected cameras, maybe a voice assistant or two. As adoption of these technologies has increased, so has consumer demand for smart products that are actually smart, that is, products that adapt and learn based on user behaviour. Fortunately, technology is moving forward and products are becoming smarter, but the bottleneck is still processing data. Making machines that can make their own decisions is a real challenge, as the amount of data you need to crunch is huge and often collates multiple sensory inputs. Fortunately, a number of companies are waking up to the huge demand for AI chips.

Google is a notable entrant to the world of AI chipsets, having recently unveiled the third version of its Tensor Processing Unit (TPU). The architecture of the Google TPU was designed to accelerate deep learning workloads developed in its TensorFlow environment. It may be of little surprise that Google, and indeed the other web giants, are driving the development of AI and its related hardware, as their businesses rely on storing and mining the world's pictures and data. These companies, with their immense resources and huge demand, are often referred to as hyperscalers: they apply colossal amounts of coordinated effort towards achieving their goals, scaling the technology and dramatically reducing development timelines.
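As a rough illustration of what "developed in its TensorFlow environment" means in practice, the sketch below shows the general pattern for targeting a TPU from TensorFlow. The model and the commented-out dataset are placeholders of our own, the resolver call assumes a TPU is actually reachable, and the exact details vary with the TensorFlow version and hosting environment.

```python
import tensorflow as tf

# Locate the TPU and build a distribution strategy for it.
# (Connection details vary by environment; this is the generic pattern.)
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Any model built inside the strategy scope is replicated across the TPU cores.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# model.fit(train_dataset, epochs=5)  # hypothetical dataset, omitted here
```

The appeal for a hyperscaler is that the same training code runs largely unchanged while the strategy object hides the work of spreading it across dedicated accelerator hardware.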

At first, a lot of AI relied on graphics cards, as they were able to undertake mathematical calculations at a much higher pace than traditional CPUs, having been optimised for the physics calculations demanded by AAA games. So, by a stroke of luck, the graphics card manufacturer Nvidia found itself sat atop a gold mine of technology, seeing 1,200% growth in its stock over the last three years. Technology companies have found, though, that whilst GPUs were the best on the market at the time, they still aren't as optimised or as adapted for AI as they really could be. So we have now started to see traditional chipset incumbents such as Intel making moves into deep learning and AI chips by acquiring several firms, including spending a reported $400M on Nervana Systems in 2016 and $16.7B acquiring FPGA maker Altera in 2015. AI really is in its infancy, though, as most top-end systems are only now moving away from Nvidia GPU setups towards more dedicated hardware. So, in essence, we are seeing AI systems move from working prototypes towards production-ready, mature systems.

AI on the Edge:

So, with all this innovation in AI chipsets, we're starting to experience the second bottleneck of AI, which has three parts: scale, bandwidth and time. So far, a lot of AI systems have relied on server-side architecture and have worked well for large-scale applications crunching lots of data, but the bottleneck comes in the form of the data itself. Let's take scale first: as IoT products multiply, so does the amount of data coming back that needs to be processed, and the sheer volume of raw, unprocessed data an AI system is required to churn through can swell to an unsustainable level. The second part is bandwidth. Related to scale, you might not be able to build a system architecture capable of taking the bandwidth of data coming back, but more likely the costs associated with backhauling data at this scale make it unsustainable or impractical. Lastly there is time: there is a lead time between an end device sending a packet of data to a server, the server receiving it, processing it, making a decision, sending that decision back to the end device, and the device acting on that decision. For something like a self-driving car making a decision about a potential collision, this lead time could be the difference between life and death.
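A quick back-of-envelope sketch makes all three parts of the bottleneck tangible. The figures below (device count, bit rate, network latency, vehicle speed) are entirely our own illustrative assumptions, not measurements from any real deployment:

```python
# Back-of-envelope figures for a fleet of cloud-connected smart cameras.
# All numbers below are illustrative assumptions, not measurements.
devices = 10_000                 # deployed smart cameras
bitrate_mbps = 2.0               # video stream per camera, in Mbit/s

# Scale / bandwidth: raw backhaul if every device streams to the cloud.
total_gbps = devices * bitrate_mbps / 1_000
daily_tb = total_gbps / 8 * 86_400 / 1_000     # Gbit/s -> terabytes per day
print(f"Backhaul: {total_gbps:.0f} Gbit/s, roughly {daily_tb:.0f} TB per day")

# Time: round trip to a remote server vs. deciding on the device itself.
round_trip_s = 0.120             # assumed network + server latency
car_speed_ms = 30.0              # roughly 108 km/h
print(f"A car travels {round_trip_s * car_speed_ms:.1f} m "
      f"while waiting for the cloud's decision")
```

Even with these modest assumptions the fleet generates hundreds of terabytes a day, and a car covers several metres before a cloud-made decision ever reaches it.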

The devices that are physically present in the field, either to gather data or as end-user products that need to make decisions, are referred to as edge devices: within the overall system you have built, they form its outer edge. One solution to all of these problems is to enable the edge devices to process data and make decisions independently of the central control system or architecture. This means that the only data sent back to central control is processed, essential data, which reduces the scale and bandwidth of backhaul traffic, makes the system much leaner and lowers operating costs. More importantly, it allows edge devices to make decisions in near real time, so for self-driving cars in particular, reaction times shrink and safety improves.
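Here is a minimal sketch of that "process locally, send only what matters" pattern. The camera feed, detection function, confidence threshold and upload call are all hypothetical stand-ins for whatever inference and telemetry stack a real product would use:

```python
import json
import random
import time

def read_frame(camera_id: int):
    """Placeholder standing in for grabbing a frame from a real camera."""
    return {"camera": camera_id}

def detect_person(frame) -> float:
    """Placeholder for on-device inference returning a confidence score.
    A real edge device would run a local neural network here."""
    return random.random()

def send_to_cloud(event: dict) -> None:
    """Placeholder for the backhaul link (MQTT, HTTPS, etc.)."""
    print("uploading:", json.dumps(event))

# The edge loop: every frame is analysed locally, but only compact,
# meaningful events ever cross the network - never the raw video.
for _ in range(100):
    frame = read_frame(camera_id=1)
    score = detect_person(frame)
    if score > 0.8:                      # decision made on the device
        send_to_cloud({"event": "person_detected",
                       "confidence": round(score, 2),
                       "timestamp": time.time()})
```

The design choice is simple: the heavy, high-volume work stays on the device, and only small, already-interpreted events are backhauled to central control.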

Intel is one company enabling others to compute neural networks at the edge with its Intel® Neural Compute Stick 2 (Intel® NCS2), a plug-and-play development kit for AI inferencing that gives you AI computing on a USB stick. With more technology companies such as Intel miniaturising AI technology and breaking down the barriers to entry, AI is becoming more accessible to IoT and smart-device designers and developers.
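For a sense of how that looks in code, here is a rough sketch of loading a pre-converted model and running it on the NCS2 via Intel's OpenVINO toolkit. The model file name and the input array are placeholders of our own, and the exact API surface and device support depend on the OpenVINO version installed:

```python
import numpy as np
from openvino.runtime import Core

core = Core()

# "model.xml" is a hypothetical network already converted to OpenVINO's IR format.
model = core.read_model("model.xml")

# "MYRIAD" targets the Movidius VPU inside the Neural Compute Stick 2;
# swapping in "CPU" would run the same model on the host instead.
compiled = core.compile_model(model, device_name="MYRIAD")

# Placeholder input matching an assumed 1x3x224x224 RGB image layout.
frame = np.zeros((1, 3, 224, 224), dtype=np.float32)

result = compiled([frame])[compiled.output(0)]
print("top class:", int(np.argmax(result)))
```

The same few lines run the model on a laptop CPU or on the USB stick plugged into a small edge device, which is exactly the accessibility argument being made here.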

AI hardware and technology development is crucial to the success of AI and to increasing market adoption. Fortunately, the barriers are starting to break down and more and more AI products are hitting the market. However, it remains a challenging market to enter. Here at Detekt, we know how to innovate your product designs with AI technologies. Contact us today to see how we could hyper-scale your designs and create world-class products now.