Bitesize A.I. for People on the Go!
Over the last fifteen months the pandemic has driven a seismic shift towards digital transformation, as services ranging from banking to doctor appointments to working from home have substantially moved online. These advancements have been supported by new technologies, with Artificial Intelligence (AI) as a driving force.
Over the last half decade this term has increasingly seeped into our awareness. But beyond headlines about robots taking over our jobs, what actually is AI? In this article I summarise a glossary of terms to aid your understanding. So if you’re curious about a major technology that’s shaping our world, lean in.
Artificial Intelligence is a branch of Computer Science. Its underlying premise is to develop systems that can function intelligently and independently, just like humans. These systems could be computers, software or computerised robots. A distinctive feature of AI is its ability to learn from data and draw meaning from it in self-reliant ways, processing information at a speed and scale that we as humans can't match.
AI replicates our human actions in digital form. Take the following:
- Our verbal communication style is represented by AI Speech Recognition. It shows up in the form of Google Assistant, Siri and Alexa. So, whenever you’re searching for answers by using your voice, a digital assistant responds. Speech Recognition is based on statistics and falls within the category of Statistical Learning.
- We also read and write text in multiple languages, and these activities are catered for by Natural Language Processing (NLP): software that interprets and analyses text to ascertain its meaning. For example, if you make reference to an attachment in an email but forget to include it, NLP can spot the omission and warn you before you send it.
- We also use our eyes to make sense of what we see. The machine equivalent is Image Processing which, while not an AI technique in itself, comes within the remit of Computer Vision.
- For blind and partially sighted people, there are some promising developments. Researchers at the University of Bristol are currently working on an experimental programme teaching AI agents to type on Braille keyboards via Reinforcement Learning, a form of training in which machine learning models learn to make good decisions through trial, error and reward. In effect, the researchers are teaching robots to do delicate tasks that humans perform with their hands.
- By moving around our environments we gain mobility and spatial awareness. These activities are connected to the field of Robotics.
- Our environments are populated with different patterns, items, shapes and objects, and we tend to group similarities together. This behaviour comes under Pattern Recognition. Machines, however, are far more efficient at recognising patterns because they can categorise vast amounts of data. Imagine sorting an unsorted list of 1,000 names into alphabetical order by hand: you'd get there eventually, but it's time consuming. Hence the development of Machine Learning.
- Our wonderful brains are composed of networks of neurons that help us to learn, and from these we develop our cognitive capabilities. Neural Networks aim to recreate the structure and function of the human brain in order to put similar capabilities into machines.
- Neural Networks can be given additional depth and complexity, and when they are, we can use them to learn from more complex information. This is the territory of Deep Learning, a family of techniques loosely inspired by what the human brain does.
- When Neural Networks scan images piece by piece, sliding a small filter across the picture to pick out features, they're known as Convolutional Networks. This function is used to recognise objects in an environment or scene, and it's how Computer Vision and Object Recognition are achieved using AI.
- As humans we're able to remember the past, whether short, medium or long term. Networks can behave in similar ways, to a limited extent, by carrying forward what they've already seen. When they do, they're operating as Recurrent Neural Networks.
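The pattern-recognition example above, sorting and grouping a long list of names, is exactly the kind of task a computer handles in an instant. Here's a minimal sketch in Python; the names themselves are invented purely for illustration:

```python
# Sort a list of names alphabetically, then group them by first letter --
# a tiny taste of the pattern grouping machines excel at.
# The names are made up for this example.
names = ["Zara", "Amir", "Bella", "Aisha", "Ben", "Zoe"]

sorted_names = sorted(names)  # alphabetical order in one step

groups = {}
for name in sorted_names:
    groups.setdefault(name[0], []).append(name)  # bucket by first letter

print(sorted_names)  # ['Aisha', 'Amir', 'Bella', 'Ben', 'Zara', 'Zoe']
print(groups)        # {'A': ['Aisha', 'Amir'], 'B': ['Bella', 'Ben'], 'Z': ['Zara', 'Zoe']}
```

Swap in 1,000 names, or a million, and the same three lines of logic still finish in well under a second, which is precisely why we hand this sort of work to machines.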
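The neuron idea behind Neural Networks can also be sketched in a few lines of Python. This is a toy, hand-wired example rather than a real trained network: the weights and bias are assumptions picked purely for illustration, whereas in a genuine network they would be learned from data:

```python
# A single artificial neuron -- the building block Neural Networks stack
# by the thousand. Weights and bias are hand-picked for illustration;
# in a real network they would be learned from data.
def neuron(inputs, weights, bias):
    """Weighted sum of inputs passed through a simple on/off activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0  # "fires" only if the signal is strong enough

# Two input signals: the first is favoured (weight 0.8), the second ignored (0.0)
print(neuron([1.0, 1.0], [0.8, 0.0], -0.5))  # 0.8 - 0.5 > 0, so it fires: 1
print(neuron([0.0, 1.0], [0.8, 0.0], -0.5))  # signal too weak, stays off: 0
```

Deep Learning, in essence, is what happens when many layers of these simple units are connected together and their weights are tuned automatically.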
From these examples you may see that AI works in one of two ways: it's based either on symbols or on data. Machine Learning covers the latter, and it relies on being fed massive amounts of data so that it can learn. Patterns are embedded in that information, and on the basis of them machines can make predictions at a scale our human faculties simply can't match.
Beyond making predictions, Machine Learning also performs Classification. If you want to place clients into groups based on their age, for example, you're classifying them. If, on the other hand, you estimate whether they might leave to join another company, you're making a prediction.
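To make that distinction concrete, here's a small sketch in Python. The client records, age bands and churn rule are all invented for illustration; a real system would learn its rules from historical data rather than have them written by hand:

```python
# Classification: placing clients into groups based on a known attribute (age).
# Prediction: estimating something unknown (whether a client might leave).
# All data and thresholds here are made up to illustrate the distinction.
clients = [
    {"name": "Client A", "age": 25, "logins_last_month": 2},
    {"name": "Client B", "age": 41, "logins_last_month": 30},
    {"name": "Client C", "age": 67, "logins_last_month": 0},
]

def classify_by_age(client):
    """Classification: assign each client to an age band."""
    if client["age"] < 35:
        return "18-34"
    elif client["age"] < 55:
        return "35-54"
    return "55+"

def predict_churn(client):
    """Prediction: a hand-written stand-in for a learned model --
    guess that inactive clients may be about to leave."""
    return client["logins_last_month"] < 5

for c in clients:
    status = "at risk" if predict_churn(c) else "likely to stay"
    print(c["name"], classify_by_age(c), status)
```

Classification sorts what you already know into groups; prediction makes an informed guess about what you don't yet know. Machine Learning does both, only with rules inferred from data instead of written by a person.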
I've just given you a snapshot of some of the sweeping changes taking place with AI. Each day our lives are being transformed by these digital advances, and life could well be unrecognisable within the next 15 to 20 years. What preparatory steps can you start to take now?