
Use plaidML to do Machine Learning on macOS with an AMD GPU

by Alex Wulff, December 16th, 2020



Want to train machine learning models on your Mac’s integrated AMD GPU or an external graphics card? Look no further than PlaidML.

Anyone who has tried to train a neural network with TensorFlow on macOS knows that the process kind of sucks. TensorFlow can only leverage the CPU on Macs, as GPU-accelerated training requires an Nvidia chipset. Most large models take orders of magnitude more time to train on a CPU than on even a simple GPU.

To make matters worse, many Macs have powerful discrete AMD GPUs that are forced to sit idle while training. TensorFlow’s GPU acceleration only supports Nvidia devices, which aren’t supported on modern macOS. This is where PlaidML comes in.

Rather than pay for time on a cloud-based system or purchase a new machine, you can install PlaidML and use it to train Keras models right on your Mac’s graphics processor.

TensorFlow serves as a backend for Keras, interpreting Keras’ high-level Python syntax and converting it to instructions that can be executed in parallel on specialized hardware like a GPU.

PlaidML is an alternative backend for Keras that supports parallelization frameworks other than Nvidia’s CUDA. On a Mac, you can use PlaidML to train Keras models on your CPU, your CPU’s integrated graphics, a discrete AMD graphics processor, or even an external AMD GPU connected via Thunderbolt 3.

I first started poking around with PlaidML because I was looking for a way to train a deep convolutional neural network on a very large image dataset. I attempted to do this in Google’s Colab, but the online tool proved to be very frustrating for long-running jobs. I had a Radeon RX 580 eGPU gathering dust, so I wanted a way to use it to train models locally with my MacBook.

After a few quick steps, I was up and running with PlaidML. Here’s how you can use it on your system. First, install PlaidML via pip. I highly recommend using virtual environments here to isolate your PlaidML installation from the rest of your system.
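For reference, a minimal install might look something like this (the environment name is arbitrary; plaidml-keras is the package that provides PlaidML’s Keras backend):

python3 -m venv plaidml-venv
source plaidml-venv/bin/activate
pip install plaidml-keras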

PlaidML’s power lies in its simplicity. After installation, activating your GPU is as simple as running

plaidml-setup

After selecting whether or not you want to enable experimental features, this tool will ask you which computational device you’d like to use. You should see a list like the following:

1 : llvm_cpu.0
2 : metal_intel(r)_hd_graphics_530.0
3 : metal_amd_radeon_pro_450.0
4 : metal_amd_radeon_rx_580.0

The first option is my CPU, the second is the Intel integrated graphics inside my CPU, the third option is the discrete AMD GPU in my 15" MacBook Pro, and the fourth option is my RX 580 eGPU. I absolutely love how easy it is to switch processors; this allows me to train simple models on the go with my laptop’s discrete GPU and use my eGPU for heavier tasks.
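Your choice persists between runs; PlaidML saves it to a small settings file in your home directory, so switching devices later is just a matter of re-running the setup tool:

plaidml-setup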

The only caveat here is that you no longer have access to TensorFlow features such as TensorFlow Datasets. All the code you write needs to be in pure Keras. I haven’t found this to be much of a limitation, and it leads to more portable software anyway. PlaidML also works with Nvidia GPUs, so if you work on a team that uses different GPU architectures, PlaidML makes things very easy. Using PlaidML as Keras’ backend is as simple as the following:

from os import environ
# Select the PlaidML backend before Keras is imported;
# Keras reads KERAS_BACKEND once, at import time.
environ["KERAS_BACKEND"] = "plaidml.keras.backend"
import keras
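If you’d rather not hard-code the backend in your script, you can instead set the same variable in your shell before launching Python (train.py below is just a placeholder for your own script):

KERAS_BACKEND=plaidml.keras.backend python train.py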

And that’s literally it. Below is a complete example that you can try on your own system after installing PlaidML. It trains a very simple neural network with one hidden layer that learns to sum its input vectors.

import numpy as np
from os import environ
environ["KERAS_BACKEND"] = "plaidml.keras.backend"
import keras
from keras.layers import Dense
from matplotlib import pyplot as plt

# Params
num_samples = 100000  # number of example vectors
vect_len = 20         # length of each input vector
max_int = 10          # upper bound for randint (exclusive)
min_int = 1           # lower bound for randint (inclusive)

# Generate dataset
X = np.random.randint(min_int, max_int, (num_samples, vect_len))
Y = np.sum(X, axis=1)

# Get 80% of data for training
split_idx = int(0.8 * len(Y))
train_X, test_X = X[:split_idx, :], X[split_idx:, :]
train_Y, test_Y = Y[:split_idx], Y[split_idx:]

# Make model
model = keras.models.Sequential()
model.add(Dense(32, activation='relu', input_shape=(vect_len,)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')

history = model.fit(train_X, train_Y, validation_data=(test_X, test_Y),
                    epochs=10, batch_size=100)

# Plot training and validation loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
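If PlaidML is active, you should see it announce the selected device as the script starts up. On my machine this shows up as a log line roughly like the one below, with the device name matching whatever you chose in plaidml-setup:

INFO:plaidml:Opening device "metal_amd_radeon_rx_580.0"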

You can try this on different computing devices. You will likely find that training this model on your CPU is faster, as the dataset is very small and the model is very simple. For more complex models, however, you will notice a significant speedup. You can find some heavier tests on the PlaidML GitHub page.
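For heavier, repeatable comparisons across devices, the PlaidML project also provides a benchmarking tool called plaidbench (installable with pip install plaidbench). For example, the following should run a MobileNet benchmark on whichever device you selected during setup:

plaidbench keras mobilenet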

So far, PlaidML has been awesome. Intel now appears to be involved in some way with PlaidML’s development, so hopefully this will ensure that PlaidML is around for a long time.

I hope you found this article useful. Check out more of my writing here.

Also published at https://alexwulff.medium.com/machine-learning-on-macos-with-an-amd-gpu-and-plaidml-55a46fe94bc0