Bitesize A.I. for People on the Go!

Over the last fifteen months, the pandemic has driven a seismic shift towards digital transformation. Services have moved substantially online, from banking to doctor appointments to working from home. These advances have been supported by new technologies, with Artificial Intelligence (AI) as a driving force.

Over the last half decade the term has increasingly seeped into our awareness. But beyond headlines about robots taking our jobs, what actually is AI? In this article I offer a short glossary of terms to aid your understanding. So if you’re curious about a major technology that’s shaping our world, lean in.

Artificial Intelligence is a branch of Computer Science. Its underlying premise is to develop systems that can function intelligently and independently, much as humans do. These systems can be computers, software or computerised robots. A distinctive feature of AI is its ability to learn from data and draw out its meaning in self-reliant ways, processing information at a speed and scale that we as humans can’t match.

AI replicates our human actions in digital form. Take the following:

  1. Our verbal communication style is represented by AI Speech Recognition. It shows up in the form of Google Assistant, Siri and Alexa. So, whenever you’re searching for answers by using your voice, a digital assistant responds. Speech Recognition is based on statistics and falls within the category of Statistical Learning.
  2. We also read and write text in multiple languages, and these abilities are catered for by Natural Language Processing (NLP). It’s software that interprets and analyses text to ascertain its meaning. For example, if you mention an attachment in an email but forget to include it, NLP can flag the omission before you hit send.
  3. Some of us use our eyes to make sense of what we see. In machines, the equivalent begins with Image Processing. Although Image Processing isn’t AI in itself, it underpins Computer Vision, which is.
  4. There are also new developments for blind and partially sighted people. Researchers at the University of Bristol are working on an experimental programme, teaching AI agents to type on Braille keyboards via Reinforcement Learning. This approach trains machine learning models through trial and error, rewarding decisions that move them closer to a goal. The aim is to teach robots dexterous tasks that humans do with their hands.
  5. By moving around our environments we develop mobility and spatial awareness. These abilities are connected to the field of Robotics.
  6. Our environments are populated with different patterns, items, shapes and objects, and we tend to group similarities together. This behaviour comes under Pattern Recognition. Machines, however, are far more efficient at recognising patterns across vast amounts of data. Imagine trying to group and analyse an unsorted list of 1,000 names into alphabetical order. Eventually you’d manage it, but it’s time consuming. Hence the development of Machine Learning.
  7. Our wonderful brains are made up of networks of neurons that help us to learn, and from them we develop our cognitive capabilities. The goal of Neural Networks is to recreate the structure and function of the human brain, so that these capabilities can be built into machines.
  8. Neural Networks can have additional depth and complexity, and when they do, we can use them to learn from complex information. Deep Learning covers this activity, with techniques that reproduce aspects of what the human brain does.
  9. When Neural Networks scan an image, sliding small filters across it to pick out features, they’re known as Convolutional Neural Networks. They’re used to recognise objects in an environment or scene. This is how Computer Vision fits in, and how Object Recognition is achieved using AI.
  10. As humans we’re able to remember the past, whether short, medium or long term. Networks can behave in similar ways, to a limited extent. When they do, they’re operating as Recurrent Neural Networks.
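The name-sorting example from point 6 above can be sketched in a few lines of Python (the names here are invented for illustration):

```python
# An unsorted list of names. Sorting a thousand of these by hand
# would be slow, but a machine orders them instantly.
names = ["Zara", "Amir", "Lena", "Kofi", "Mia"]

# sorted() returns a new list in alphabetical order
ordered = sorted(names)
print(ordered)  # ['Amir', 'Kofi', 'Lena', 'Mia', 'Zara']
```

Of course, sorting is a fixed recipe rather than learning. Machine Learning takes over where no fixed recipe exists and the rules have to be discovered from the data itself.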

From these examples you may see that AI works in two broad ways: it’s based either on symbols or on data. Machine Learning covers the latter. It relies on being fed large amounts of data from which it can learn; patterns are embedded in that data, and on the basis of those patterns machines can make predictions at a scale our human faculties can’t match.
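To make “learning from data” concrete, here is a minimal sketch using one of the simplest Machine Learning ideas, the nearest-neighbour method. The data points (hours of use per week, and whether a customer kept their subscription) are entirely invented for illustration:

```python
# Hypothetical training data: (hours of use per week, kept subscription? 1 = yes, 0 = no)
data = [(1, 0), (2, 0), (8, 1), (10, 1)]

def predict(hours):
    """Predict by copying the label of the closest known example
    (1-nearest-neighbour)."""
    nearest = min(data, key=lambda pair: abs(pair[0] - hours))
    return nearest[1]

print(predict(9))    # behaves like the heavy users -> 1
print(predict(1.5))  # behaves like the light users -> 0
```

Notice that no rule was written by hand: the prediction comes entirely from the pattern embedded in the data.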

Beyond making predictions, Machine Learning also performs Classification. So, for example, if you place clients in groups based on their age, you’re classifying them. Conversely, if you estimate whether they might leave to join another company, you’re making a prediction.
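As a rough sketch of classification (the clients and age bands below are invented for illustration), grouping clients by age might look like this:

```python
# Hypothetical clients: (name, age)
clients = [("Ada", 34), ("Ben", 52), ("Cleo", 19)]

def age_group(age):
    """Place an age into a simple band -- this is classification."""
    if age < 30:
        return "under 30"
    elif age < 50:
        return "30-49"
    return "50 and over"

for name, age in clients:
    print(name, "->", age_group(age))
```

Each client lands in exactly one group; predicting whether Ben will leave next year would instead be a prediction task, often tackled with the data-driven methods described above.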

I’ve just given you a snapshot of some of the sweeping changes taking place with AI. Each day our lives are being reshaped by digital transformation, and they could well be unrecognisable within the next 15 to 20 years. What preparatory steps can you start to take now?




I write about Ageing, Astrology, A.I and Web 3.0, to help improve your emotional wellbeing & raise awareness of how tech is transforming our lives.

Bybreen Samuels