

What is deep learning? Everything you need to know

The lowdown on deep learning, including how it relates to the wider field of machine learning and how to get started.
Written by Maria Diaz, Staff Writer
Pink glass brain representing deep learning
Jonathan Kitchen/Getty Images

What is deep learning?

Deep learning is a subset of machine learning that falls within the artificial intelligence (AI) field. This technology works by teaching a computer model to learn by example, similar to how a child can learn from parents and teachers. 

In very plain terms, a computer model is shown different images of a variety of objects and is told what each one represents. With training, the model can learn to recognize and categorize the different patterns it makes out in the images and eventually recognize and learn from new images it perceives. 

Also: 6 ways ChatGPT can make your everyday life easier

Deep learning is critical for the functioning of autonomous or driverless cars. A driverless car uses a combination of cameras and sensors to capture data from its surroundings, such as traffic signals, pedestrians, and other cars on the road. It then processes that data to determine the best course of action: Whether to slow down, stop, go, etc.

How does deep learning work?

Deep learning's capabilities differ in several key respects from traditional shallow machine learning, allowing computers to solve many complex problems.

This technology uses neural networks, a model based on the human brain's activity. Much like the brain contains layers of interconnected neurons, a neural network in AI does the same, where nodes are interconnected to share information.
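To make the "node" idea concrete, here is a minimal sketch of a single artificial neuron in plain Python. The inputs, weights, and bias below are made up purely for illustration; a real network learns its weights from data rather than having them hand-picked.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of its inputs plus a bias,
    squashed through a sigmoid activation into the range (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# Three input signals with hand-picked illustrative weights.
output = neuron([0.5, -1.0, 2.0], weights=[0.4, 0.3, 0.1], bias=0.0)
print(round(output, 3))  # → 0.525
```

Each node in a network computes something like this, and its output becomes an input to the nodes in the next layer.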

Also: What is generative AI and why is it so popular? Here's everything you need to know

Training these deep-learning networks can take a very long time, requiring vast amounts of data to be ingested and iterated over as the system gradually refines its model to achieve the best outcome.

Neural networks are expanded into sprawling networks with a large number of sizable layers that are trained using massive amounts of data. These deep neural networks have fueled the current leap forward in the ability of computers to carry out speech recognition, the many abilities of generative AI, and advancements in healthcare.

What are some examples of deep learning?

Nowadays, deep learning is everywhere, from groundbreaking AI companies to the voice assistant in your smartphone.

Here are just a few of the most popular deep-learning applications:

ChatGPT

OpenAI's chatbot uses deep learning and is one of the largest deep-learning models available. ChatGPT uses OpenAI's generative pre-trained transformer version 3.5 (GPT-3.5), which has 175 billion parameters. The neural network that makes ChatGPT so capable is trained to learn patterns and relationships in language. 

Also: How to use ChatGPT

The fourth version of this generative pre-trained transformer (GPT-4) expertly performs natural language processing (NLP) tasks and is reportedly among the largest large language model (LLM) architectures, with over a trillion parameters.
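"Learning patterns in language" ultimately means predicting what comes next. A GPT-scale transformer learns enormously richer patterns than this, but the simplest possible illustration of the idea is a bigram model that just counts which word follows which. The tiny corpus below is invented for the example:

```python
from collections import Counter, defaultdict

# Count, for every word in the corpus, which words follow it and how often.
corpus = "the cat sat on the mat the cat ran".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Predict the most frequent follower of a word -- next-token
    prediction in its most primitive form."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # → cat
```

A transformer does the same job, predicting the next token, but conditions on long stretches of context rather than a single preceding word.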

Virtual assistants

Voice assistants, such as Google Assistant, Amazon Alexa, and Apple's Siri, use deep learning for speech recognition and NLP. They apply these deep-learning techniques to process what you tell them and respond accordingly and accurately. 

Also: The best AI art generators to try

These deep-learning algorithms can also learn from patterns in user interactions to continuously improve the user experience. 

Fraud detection

Various entities can use deep learning to detect and prevent fraud. Financial institutions, for example, use different algorithms to detect fraud. One example you might be familiar with is long short-term memory (LSTM), a deep-learning model that flags suspicious activity that strays from the data it has been trained on. 

Also: AI may compromise our personal information

LSTM is a type of recurrent neural network (RNN) that handles sequential data and stores information about what it has processed, allowing it to recognize a standout event, like a potentially fraudulent transaction, and flag it for human review.
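The "memory" in an LSTM comes from a cell state that gated updates write to, erase from, and read from at each step of a sequence. Below is a sketch of one step of a textbook LSTM cell in NumPy; the dimensions and random weights are toy values for illustration, not a trained fraud detector.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of a textbook LSTM cell. The input (i), forget (f), and
    output (o) gates decide what to write to, erase from, and read from
    the cell state c -- the memory that lets the network track events
    far back in a sequence, such as a transaction history."""
    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    z = W @ x + U @ h_prev + b            # all four gate pre-activations at once
    i, f, o, g = np.split(z, 4)
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)  # update cell memory
    h = sigmoid(o) * np.tanh(c)                        # emit hidden state
    return h, c

rng = np.random.default_rng(0)
n, d = 2, 3                                # hidden size 2, input size 3
W = rng.normal(size=(4 * n, d)) * 0.1
U = rng.normal(size=(4 * n, n)) * 0.1
b = np.zeros(4 * n)

h, c = np.zeros(n), np.zeros(n)
for x in rng.normal(size=(5, d)):          # run a 5-step toy sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)                             # (2,)
```

In a real fraud system, a trained network of such cells would consume a transaction sequence and score how anomalous the latest event looks.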

Healthcare

Artificial intelligence has already made a significant impact in healthcare. Deep-learning technology has proven useful in diagnosing eye diseases, including diabetic retinopathy and glaucoma, and even certain cancers. 

Also: The top ten highest-paid tech skills can make you a lot of money - here's how much

The advancements of AI in medicine are only just beginning.

What is machine learning vs deep learning?

Artificial intelligence encompasses many fields of research that aim to make machines capable of carrying out tasks that would typically require human intelligence, ranging from genetic algorithms to natural language processing.

Machine learning is a subset of AI, defined as the process of teaching a computer to carry out a task rather than programming it step by step.

Deep learning, in turn, is a subset of machine learning that allows computers to solve a host of complex problems that couldn't otherwise be tackled.

Also: What is Auto-GPT? Everything to know about the next powerful AI tool

Machine learning can tackle shallow predictions when fed data, such as determining whether a fruit in a photo is an apple or an orange. Deep learning can solve more complex problems, like recognizing handwritten numbers, where a massive amount of data is necessary during training. 

In the specific example illustrated below, the computer needs to be able to cope with a huge variety in how the data can be presented. Every digit between 0 and 9 can be written in a myriad of ways: The size and exact shape of each handwritten digit can vary significantly depending on who's writing and in what circumstance.

Also: How to write better AI prompts

Coping with the variability of these features, and the even bigger mess of interactions between them, is where deep learning and deep neural networks become useful.

Each neuron within a neural network is a mathematical function that takes in data through an input, transforms that data into a more amenable form, and then spits it out via an output. You can think of neurons in a neural network as being arranged in layers, as illustrated in the image below.

Neural network

A simple diagram of how a neural network is organized.

Maria Diaz/ZDNET

How does a deep neural network work?

In the example above, where a model is learning how to recognize handwritten numbers, you can see a very simple depiction of the anatomy of a neural network. 

All neural networks have an input layer, where the initial data is fed in, and an output layer that generates the final prediction. 

Also: Have 10 hours? IBM will train you in AI fundamentals - for free

But in a deep neural network, there can be tens or even hundreds of hidden layers of neurons between these input and output layers, each feeding data into the next. Hence the term "deep" in "deep learning" and "deep neural networks": it refers to the large number of hidden layers at the heart of these networks.

In the graphic above, each circle represents a neuron in the network; the neurons are organized in vertical layers and interconnected.
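A forward pass through such a stack of layers can be sketched in a few lines of NumPy. The layer sizes below are illustrative (a 784-value input matching a 28x28-pixel handwritten digit, 10 output scores for the digits 0-9), and the weights are random rather than trained:

```python
import numpy as np

def relu(z):
    """A common activation function: pass positives through, zero out negatives."""
    return np.maximum(0, z)

def forward(x, layers):
    """Pass an input vector through a stack of fully connected layers.
    Each layer is a (weights, bias) pair; its output feeds the next layer."""
    for W, b in layers:
        x = relu(W @ x + b)
    return x

rng = np.random.default_rng(42)
sizes = [784, 128, 64, 10]   # 28x28 pixels in, two hidden layers, 10 classes out
layers = [(rng.normal(size=(m, n)) * 0.01, np.zeros(m))
          for n, m in zip(sizes, sizes[1:])]

scores = forward(rng.normal(size=784), layers)
print(scores.shape)          # (10,) -- one score per digit 0-9
```

Training consists of nudging every weight and bias, via backpropagation, so that the output scores match the correct labels; the forward pass itself stays exactly this simple.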

When should you use deep learning?

Deep-learning algorithms can take messy and broadly unlabeled data, such as video, images, audio recordings, and text, and impose enough order to make useful predictions, building a hierarchy of features that make up a dog or cat in an image or of sounds that form a word in speech.

Also: 4 reasons why you should really use Copilot in Microsoft Edge

As a result, it's best to use deep learning when there is a massive amount of data, and it is largely unstructured. 

What are the drawbacks of deep learning?

One of the big drawbacks is the amount of data required for training, which translates into a need for massive amounts of distributed computing power. Training can therefore be expensive, often demanding high-end hardware such as GPUs and GPU arrays. 

Also: ChatGPT vs. Microsoft Copilot vs. Gemini: Which is the best AI chatbot?

Another downside is that deep neural networks are difficult to train for several reasons besides computational resources. 

Some common challenges for deep neural networks include vanishing and exploding gradients, which can derail gradient-based learning; the time needed to tune hyperparameters, like the batch size and learning rate; and overfitting, where the network's high complexity causes it to also learn the noise in the training data. 
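The vanishing gradient problem is easy to demonstrate numerically. The derivative of the sigmoid activation never exceeds 0.25, and backpropagation multiplies one such factor per layer, so the gradient reaching the earliest layers shrinks geometrically with depth. This toy chain of sigmoid layers shows the effect:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def gradient_through_depth(depth, z=0.0):
    """Chain rule through a stack of sigmoid layers: the gradient that
    reaches the first layer is a product of one sigmoid derivative per
    layer, each at most 0.25 -- so it shrinks geometrically."""
    grad = 1.0
    for _ in range(depth):
        s = sigmoid(z)
        grad *= s * (1 - s)   # sigmoid'(z) = s * (1 - s) <= 0.25
        z = s                 # feed the activation forward
    return grad

for depth in (2, 10, 30):
    print(depth, gradient_through_depth(depth))
```

By 30 layers the gradient is vanishingly small, which is why deep networks lean on remedies like ReLU activations, careful initialization, and skip connections.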

What deep learning techniques exist?

There are various types of deep neural networks, such as the examples explained below, with structures suited to different tasks. 

Also: 4 things Claude AI can do that ChatGPT can't

This list remains fluid, as research continues to yield new deep-learning techniques over time:

  1. Convolutional neural networks (CNN): These tend to be used for computer vision tasks, as their initial layers are specialized for extracting distinct features from an image, which are then processed by a more conventional neural network to categorize the image. 
  2. Recurrent neural networks (RNN): These are more common for processing language, as they have built-in feedback loops, where data output from one layer is passed back to the layer preceding it, lending the network a form of memory. 
  3. Long short-term memory networks (LSTM): As discussed in an example above, LSTMs can be used in fraud detection as they excel at capturing long-term dependencies in sequences.
  4. Generative adversarial networks (GANs): Most commonly used to generate data, such as images, text, and videos, GANs feature two battling neural networks: the generator and the discriminator. The generator network tries to create convincing synthetic data, and the discriminator attempts to tell the difference between fake and real data. 

There are many different types of deep neural networks. No one network is inherently better than another; they are simply better suited to learning particular tasks.
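The feature extraction that makes CNNs good at computer vision boils down to convolution: sliding a small kernel of weights over an image and taking a weighted sum at each position. Below is a deliberately naive NumPy sketch (real frameworks use heavily optimized versions); the image and kernel values are invented for illustration.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small kernel over an image, taking a weighted sum at each
    position -- the core operation in a CNN's feature-extracting layers."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

# A vertical-edge detector: dark-to-bright transitions light up.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_kernel = np.array([[-1, 1],
                        [-1, 1]], dtype=float)
print(conv2d(image, edge_kernel))  # large values only where the edge sits
```

A CNN's early layers learn many such kernels from data (edges, corners, textures), and deeper layers combine them into increasingly abstract features.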

How long does it take to train a deep-learning model?

Training a deep-learning model can take anywhere from hours to weeks or even months. The time varies widely, depending on factors such as the available hardware, optimization, the number of layers in the neural network, the network architecture, the size of the dataset, and more.
