Understanding Memory Requirements for Deep Learning and Machine Learning

Deep Learning and Machine Learning Memory Requirements

Building a machine learning workstation can be difficult, not to mention choosing the right workstation with the proper machine learning memory requirements. There are a lot of moving parts based on the types of projects you plan to run.

Understanding machine learning memory requirements is a critical part of the building process, though it is easy to overlook. A common baseline is 16GB of RAM, but some applications require more.


A powerful GPU is typically understood to be a "must-have", but memory requirements rarely weigh into that purchase decision. They can, however, make or break your application's performance.

This article will walk you through how much RAM is needed for a typical AI project, whether you should choose an SSD or HDD for machine learning, and even answer the question, "Is GPU memory important for deep learning?" It is a comprehensive guide to the machine learning memory requirements you need to know for whatever AI projects you might be working on.

SSD in a PC build

Our first stop is to talk about memory requirements for your entire workstation. When it comes to any kind of AI project, there is going to be a lot of data moving around as you train your programs.

The various machine learning memory requirements are fairly complicated, but the idea of using SSD or HDD for machine learning is easy to tackle. As with most builds, it is probably best to have a bit of both.

There will be plenty of temporary dataset storage, and it is going to be incredibly convenient to have an SSD that can migrate data quickly as needed. However, for data that won't be moved frequently, or that will eventually land in permanent storage, an HDD will work just fine and be far cheaper.

If you are not using large datasets and instead plan on using simulations to train your AI program, then you may be able to bypass the need for a permanent storage solution like an HDD in order to save on expenses. At the end of the day, one thing is for sure: you should absolutely consider an SSD as it will save you a lot of time. It’s an investment that will pay off in the long run.

If you want to know more about the differences between SSD or HDD for machine learning, we have a great blog post that better explores these differences.

GPU in a PC build

Before diving in, let’s first separate out deep learning and machine learning.

Deep learning covers projects where you are building an AI program that learns to think for itself. This might be a neural network, or it might be a project where the program needs to process and interpret data and come up with unique solutions. Deep learning is a specific type of machine learning, which we will discuss later.

With that in mind, this next question is difficult to tackle. How much RAM for deep learning is even necessary?

It can depend entirely on the type of project you are running. For example, if you are running a deep learning project that will heavily depend on massive amounts of data being input and processed, then that will ultimately require a heavier memory load.

If, however, you are training a program visually or through simulations, then that will require less memory but will have a heavier workload that needs to be processed quickly.

Machine learning, on the other hand, is less about an AI program learning to think on its own and create unique solutions, and more about processing data to generate solutions that are predetermined or expected. There is far more human involvement in machine learning, and memory can be moved and manipulated as needed.

Therefore, machine learning will usually require less memory, but only marginally so depending on the type of data being used and how intensive the data is.

NVIDIA GeForce RTX 3090

A general rule of thumb for RAM for deep learning is to have at least as much RAM as you have GPU memory and then add about 25% for growth.

This simple formula will help you stay on top of your RAM needs and save you a lot of time that would otherwise be lost to data being swapped out to your SSD or HDD, if you have both set up.
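To make the rule of thumb concrete, here is a minimal sketch that applies the "GPU memory plus about 25%" guideline. The function name and rounding choice are illustrative assumptions, not part of any library:

```python
import math

def recommended_ram_gb(gpu_memory_gb, headroom=0.25):
    """Rule of thumb: at least as much RAM as GPU memory, plus ~25% for growth."""
    return math.ceil(gpu_memory_gb * (1 + headroom))

print(recommended_ram_gb(24))  # 24GB card (e.g. RTX 3090): 30, so 32GB in practice
print(recommended_ram_gb(12))  # 12GB card (e.g. RTX 3060): 15, so 16GB in practice
```

Since RAM is sold in power-of-two sizes, you would round the result up to the next available kit (30 becomes 32GB, 15 becomes 16GB).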

While there isn't a single preferred amount of RAM for deep learning projects, this rule is an easy way to stay ahead of any issues and avoid worrying about scaling in the immediate future. If you are using a data-intensive visual component to train your deep learning program, though, you may need more than you think.

There are many types of GPUs you can look at. Our recommendation is to opt for the best of the best so you never have to worry about it: the NVIDIA GeForce RTX 3090 has 24GB of memory and is a powerhouse for all types of AI projects. However, a cheaper option that should still serve you well is the NVIDIA GeForce RTX 3060 with 12GB of memory.

SD card for video camera

If your deep learning program is going to take in lots of visual data, from live feeds to simple still images, then you will need to consider your RAM and GPU memory requirements more carefully.

If a deep learning workstation is going to be used to track images or video, then it is going to be running and storing (if only temporarily) a large amount of data to do so. Given the sheer quantity of data in those datasets, there is going to be a significantly greater need for RAM and memory.
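As a rough illustration of why visual data drives memory needs up, consider the in-memory size of a single batch of decoded images held as float32 tensors. This is a back-of-the-envelope sketch; the helper function is hypothetical, not part of any framework:

```python
def batch_memory_gb(batch_size, height, width, channels=3, bytes_per_value=4):
    """Approximate in-memory size of one batch of decoded float32 images, in GB."""
    return batch_size * height * width * channels * bytes_per_value / 1024**3

# 256 full-HD RGB frames as float32: several gigabytes for a single batch,
# before counting model weights, activations, or prefetched batches.
print(round(batch_memory_gb(256, 1080, 1920), 1))
```

A single batch of 256 full-HD frames already works out to roughly 6GB, which is why video and image projects push RAM requirements well above the baseline.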

Apple HomePod speaker

On the other hand, if you are creating a deep learning program that can process, interpret, and produce speech, then there is less need to be concerned with the amount of memory required. More than likely, you will have far fewer datasets to manipulate than you would when tracking and interpreting visual data.

The one exception to this rule may be if you are working on a deep learning project that will be trained to listen and hear human speech, recognize and interpret this auditory information, and also generate unique human speech as a response. In this case, more is going to be better and should be treated similarly to the memory requirements of visual data.

Stick of RAM

Machine learning memory requirements work much like those for deep learning, but with a lighter workload and less RAM and memory required. As we stated before, machine learning involves a higher degree of human interaction, so there should be less need for massive amounts of memory.

In general, though, you will still want to follow the deep learning rule and have at least as much RAM as you have GPU memory (plus a 25% cushion). We highly recommend the NVIDIA GeForce RTX 3060 for many machine learning projects, as it can handle the majority of workloads without any trouble, although you may need to look at the RTX 3080 or RTX 3090 if your memory requirements call for it.

Later on we will discuss the ability to have multiple GPUs to help with larger projects, too.


A good ballpark for the memory requirements of a video- or image-based machine learning project is around 16GB of RAM. This isn't true in every case, but it is an amount that should handle the majority of machine learning projects for visual data.

Since the machine learning program will do less interpretation on its own, there is a lot of room to save on memory requirements.


Interestingly, when it comes to text- or speech-based machine learning projects, the memory requirements are basically the same. It doesn't make sense to downgrade your GPU too far, because it will come in handy in so many other ways. For the purposes of machine learning memory requirements, though, you don't want to drop below a GPU with 12GB of memory.

It is always safe to assume a slightly higher amount of RAM and memory than you think you might need for machine learning and deep learning. If there is a bare minimum for AI projects on a workstation, then text- and speech-based machine learning projects comfortably sit around that cutoff point.

Deep learning station with multiple GPUs

As we mentioned earlier, having multiple GPUs for an AI project is fairly common! When you are using multiple GPUs how much do you need to be concerned about the GPU memory and RAM, though?

Well, the answer is simpler than you might expect. Your RAM requirement is driven by the single GPU with the most memory, not by the total memory across all of your GPUs.

For example, if you use the NVIDIA GeForce RTX 3090, then you would want at least 24GB of RAM (or higher, to allow for upgrades). Even if you added the maximum number of extra RTX 3090s, the largest single card would still have 24GB, so your baseline RAM recommendation would not change.

When it comes down to it, your RAM recommendation is capped by the GPU with the largest amount of memory.
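Under that reasoning, the multi-GPU version of the rule of thumb can be sketched as follows: size your RAM against the largest single card, not the sum across all cards. The function name is illustrative, not from any library:

```python
import math

def recommended_ram_multi_gpu_gb(gpu_memories_gb, headroom=0.25):
    """Size RAM from the single largest GPU's memory, plus ~25% headroom."""
    largest = max(gpu_memories_gb)
    return math.ceil(largest * (1 + headroom))

# Two 24GB RTX 3090s: the baseline is still driven by one 24GB card.
print(recommended_ram_multi_gpu_gb([24, 24]))  # 30
```

Note that using `max` rather than `sum` is exactly the point of this section: adding a second identical card does not raise the RAM baseline.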



James Montantes

Interested in HMI, AI, and decentralized systems and applications. I like to tinker with GPU systems for deep learning. Currently at Exxact Corporation.