So You Think You Need a GPU, Part 2: Kaggle Kernels Can Also Help!

In the first article of this series I showed how you can use Google Colaboratory to access free GPU compute. In this article I will show you how to do the same thing on Kaggle. Kaggle originated as a platform for hosting machine learning competitions, but it quickly became a go-to site for data scientists to discuss methods, programming, and machine learning strategy as well. It hosts hundreds of publicly available datasets along with open machine learning competitions. In an effort to become a more holistic data science site, Kaggle also recently began letting people host and run code on the site through its kernels.


As I mentioned in the previous article, GPUs provide a drastic speed-up on problems that involve a lot of matrix algebra. So, let's walk through how to access and use Kaggle kernels.

Kaggle Kernels

Step 1: Create a Kaggle account by signing up at Kaggle

Step 2: Go to your profile

In the top right of the site you will see your profile icon. Click it and choose "My Profile".

Within this page you will see several tabs, one of which is titled Kernels. Click that tab and then click New Kernel.


Choose "Notebook" and Kaggle will launch a Jupyter notebook for you.

Step 3: Walk through the code from the previous notebook


  • Note: fastai comes pre-installed on Kaggle kernels, so you don't need to install it.

So, let's import the specific libraries we are going to use.


Next, let's again check for our CUDA device:

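The check itself is a single call. A quick sketch: before the GPU is enabled, `torch.cuda.is_available()` returns `False`, and asking for a device name raises an error.

```python
import torch

# Before the GPU is enabled this prints False, and a call like
# torch.cuda.get_device_name(0) would raise an error instead.
print(torch.cuda.is_available())
```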

Just like last time, we get an error telling us the GPU is not enabled. As in Colab, we have to turn the GPU on.

Step 4: Enable the GPU

To the right of your Kaggle kernel you will see several panels: Session, Workspace, Versions, and Settings. Expand the Settings panel and you will find toggle switches for GPU and Internet. Turn both on (GPU for the accelerated compute, Internet so we can download the Pets dataset).


Step 5: Rerun torch.cuda

You should now see that CUDA is available, along with the name of the device.
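A guarded sketch of re-running the check (the exact device name depends on which GPU Kaggle assigns you):

```python
import torch

# With the GPU toggle on, CUDA is visible and we can ask for the device name.
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the assigned GPU
else:
    print("GPU not detected -- check the Settings panel again")
```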

Now we can re-run the same code from the Colab notebook to set up the images for our resnet34 and see some of the adorable dogs and cats :)


Step 6: Run our model


The Kaggle kernel ran an epoch on the dogs-and-cats data in 1:30, which is actually 5 seconds faster than the Tesla T4 used by Google. I suspect Google throttles the Tesla T4, since it is a much larger and more expensive GPU, but either way both are roughly 25x faster than the CPU implementation on Colab.

Two Free Options

These two articles have given you two free options for your GPU compute needs.
