Code for running various machine learning benchmarks on Apple M1 chips (M1, M1 Pro, M1 Max) with TensorFlow.

Overview

M1, M1 Pro, M1 Max Machine Learning Speed Test Comparison

This repo contains some sample code to benchmark the new M1 MacBooks (M1 Pro and M1 Max) against various other pieces of hardware.

It also includes steps below to set up your M1, M1 Pro or M1 Max Mac (the steps should also work for Intel Macs) to run the code.

Who is this repo for?

You: have a new M1, M1 Pro, M1 Max machine and would like to get started doing machine learning and data science on it.

This repo: teaches you how to install the most common machine learning and data science packages (software) on your machine and make sure they run using sample code.

Machine Learning Experiments Conducted

All experiments were run with the same code. For Apple devices, TensorFlow environments were created with the steps below.

Notebook | Experiment
00       | TinyVGG model trained on the CIFAR10 dataset with TensorFlow code.
01       | EfficientNetB0 feature extractor on the Food101 dataset with TensorFlow code.
02       | RandomForestClassifier from Scikit-Learn trained with random search cross-validation on the California Housing dataset.
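
For reference, a rough sketch of a TinyVGG-style model like the one trained in notebook 00 might look like the following in TensorFlow (the layer counts and filter sizes here are assumptions; see notebook 00 for the exact architecture used in the benchmark).

import tensorflow as tf

# TinyVGG-style CNN for CIFAR10 (32x32x3 images, 10 classes).
# Layer and filter sizes are illustrative only.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(10, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(10, 3, activation="relu"),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Conv2D(10, 3, activation="relu"),
    tf.keras.layers.Conv2D(10, 3, activation="relu"),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(loss="sparse_categorical_crossentropy",
              optimizer=tf.keras.optimizers.Adam(),
              metrics=["accuracy"])
model.summary()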

Results

See the results directory.

Steps (how to test your M1 machine)

  1. Create an environment and install dependencies (see below)
  2. Clone this repo
  3. Run the various notebooks (results appear at the end of each notebook)

How to set up a TensorFlow environment on M1, M1 Pro, M1 Max using Miniforge (shorter version)

If you're experienced with making environments and using the command line, follow this version. If not, see the longer version below.

  1. Download and install Homebrew from https://brew.sh. Follow the steps it prompts you to go through after installation.
  2. Download Miniforge3 (Conda installer) for macOS arm64 chips (M1, M1 Pro, M1 Max).
  3. Install Miniforge3 into home directory.
chmod +x ~/Downloads/Miniforge3-MacOSX-arm64.sh
sh ~/Downloads/Miniforge3-MacOSX-arm64.sh
source ~/miniforge3/bin/activate
  4. Restart terminal.
  5. Create a directory to set up the TensorFlow environment.
mkdir tensorflow-test
cd tensorflow-test
  6. Make and activate a Conda environment. Note: Python 3.8 is the most stable for the following setup.
conda create --prefix ./env python=3.8
conda activate ./env
  7. Install the TensorFlow dependencies from the Apple Conda channel.
conda install -c apple tensorflow-deps
  8. Install base TensorFlow (Apple's fork of TensorFlow is called tensorflow-macos).
python -m pip install tensorflow-macos
  9. Install Apple's tensorflow-metal to leverage Apple Metal (Apple's GPU framework) for M1, M1 Pro, M1 Max GPU acceleration.
python -m pip install tensorflow-metal
  10. (Optional) Install TensorFlow Datasets to run the benchmarks included in this repo.
python -m pip install tensorflow-datasets
  11. Install common data science packages.
conda install jupyter pandas numpy matplotlib scikit-learn
  12. Start Jupyter Notebook.
jupyter notebook
  13. Import dependencies and check TensorFlow version/GPU access.
import numpy as np
import pandas as pd
import sklearn
import tensorflow as tf
import matplotlib.pyplot as plt

# Check for TensorFlow GPU access
print(f"TensorFlow has access to the following devices:\n{tf.config.list_physical_devices()}")

# See TensorFlow version
print(f"TensorFlow version: {tf.__version__}")

If it all worked, you should see something like:

TensorFlow has access to the following devices:
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'),
PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
TensorFlow version: 2.8.0
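
(Optional) As an extra sanity check beyond the steps above, you could fit a tiny model on random data to confirm training runs end to end on your install. This is a minimal sketch, not part of the original benchmark code:

import tensorflow as tf

# Tiny end-to-end training run on random data, just to confirm
# model training works on this installation.
X = tf.random.normal((1024, 32))
y = tf.random.uniform((1024,), maxval=2, dtype=tf.int32)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="adam",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=128, verbose=2)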

How to set up a TensorFlow environment on M1, M1 Pro, M1 Max using Miniforge (longer version)

If you're new to creating environments, are using a new M1, M1 Pro or M1 Max machine and would like to get started running TensorFlow and other data science libraries, follow the steps below.

Note: You're going to see the term "package manager" a lot below. Think of it like this: a package manager is a piece of software that helps you install other pieces (packages) of software.

Installing package managers (Homebrew and Miniforge)

  1. Download and install Homebrew from https://brew.sh. Homebrew is a package manager that sets up a lot of useful things on your machine, including the Command Line Tools for Xcode; you'll need these to run things like git. The command to install Homebrew will look something like:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

It will explain what it's doing and what you need to do as you go.

  2. Download the most compatible version of Miniforge (a minimal installer for Conda specific to conda-forge; Conda is another package manager and conda-forge is a Conda channel) from GitHub.

If you're using an M1 variant Mac, the file you want is Miniforge3-MacOSX-arm64.sh.

Downloading it will save a shell file called Miniforge3-MacOSX-arm64.sh to your Downloads folder (unless you specify otherwise).

  3. Open Terminal.

  4. We've now got a shell file capable of installing Miniforge, but to do so we'll have to modify its permissions to make it executable.

To do so, we'll run the command chmod +x FILE_NAME, which stands for "change the mode of FILE_NAME to executable".

We'll then execute (run) the program using sh.

chmod +x ~/Downloads/Miniforge3-MacOSX-arm64.sh
sh ~/Downloads/Miniforge3-MacOSX-arm64.sh
  5. This should install Miniforge3 into your home directory (~/ stands for "Home" on Mac).

To check this, we can try to activate the (base) environment using the source command.

source ~/miniforge3/bin/activate

If it worked, you should see something like the following in your terminal window.

(base) daniel@your-machine ~ %
  6. We've just installed some new software, and for it to fully work we'll need to restart the terminal.

Creating a TensorFlow environment

Now that we've got the package managers we need, it's time to install TensorFlow.

Let's set up a folder called tensorflow-test (you can call it anything you want) and install everything in there to make sure it's working.

Note: An environment is like a virtual room on your computer. For example, you use the kitchen in your house for cooking because it's got all the tools you need; it would be strange to have an oven in your bedroom. The same goes for your computer: if you're going to be working on specific software, you want it all in one place rather than scattered everywhere else.

  1. Make a directory called tensorflow-test. This is the directory we're going to store our environment in, and inside the environment will be the software tools we need to run TensorFlow.

We can do so with the mkdir command which stands for "make directory".

mkdir tensorflow-test
  2. Change into tensorflow-test. We'll be running the rest of the commands inside the tensorflow-test directory, so we need to change into it.

We can do this with the cd command which stands for "change directory".

cd tensorflow-test
  3. Now that we're inside the tensorflow-test directory, let's create a new Conda environment using the conda command (this command was installed when we installed Miniforge above).

We do so using conda create --prefix ./env, which means "create a Conda environment at the path ./env". The . stands for the current directory (the folder we're currently in).

For example, if I ran this from /Users/daniel/tensorflow-test, the environment's full path would be: /Users/daniel/tensorflow-test/env

As in the shorter version above, Python 3.8 is the most stable for this setup, so we install it into the environment at the same time.

conda create --prefix ./env python=3.8
  4. Activate the environment. If conda created the environment correctly, you should be able to activate it using conda activate path/to/environment.

Short version:

conda activate ./env

Long version:

conda activate /Users/daniel/tensorflow-test/env

Note: It's important to activate your environment every time you'd like to work on projects that use the software you install into that environment. For example, you might have one environment for every different project you work on. And all of the different tools for that specific project are stored in its specific environment.

If activating your environment went correctly, your terminal window prompt should look something like:

(/Users/daniel/tensorflow-test/env) daniel@your-machine tensorflow-test %
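
(Optional) For an extra check that you're now using the Python inside this environment (assuming you installed Python into it, e.g. via python=3.8 above), start python from the activated environment and run:

import sys

# sys.prefix should point at the environment directory,
# e.g. /Users/daniel/tensorflow-test/env
print(sys.prefix)
print(sys.version)  # should report Python 3.8.x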
  5. Now that we've got a Conda environment set up, it's time to install the software we need.

Let's start by installing various TensorFlow dependencies (TensorFlow is a large piece of software and depends on many other pieces of software).

Rather than list them all out, Apple have set up a quick command so you can install almost everything TensorFlow needs in one line.

conda install -c apple tensorflow-deps

The above stands for "hey conda install all of the TensorFlow dependencies from the Apple Conda channel" (-c stands for channel).

If it worked, you should see a bunch of stuff being downloaded and installed for you.

  6. Now that all of the TensorFlow dependencies have been installed, it's time to install base TensorFlow.

Apple have created a fork (copy) of TensorFlow specifically for Apple Macs. It has all the features of TensorFlow with some extra functionality to make it work on Apple hardware.

This Apple fork of TensorFlow is called tensorflow-macos and is the version we'll be installing:

python -m pip install tensorflow-macos

Depending on your internet connection the above may take a few minutes since TensorFlow is quite a large piece of software.

  7. Now that we've got base TensorFlow installed, it's time to install tensorflow-metal.

Why?

Machine learning models often benefit from GPU acceleration. And the M1, M1 Pro and M1 Max chips have quite powerful GPUs.

TensorFlow allows for automatic GPU acceleration if the right software is installed.

And Metal is Apple's framework for GPU computing.

So Apple have created a plugin for TensorFlow (also referred to as a TensorFlow PluggableDevice) called tensorflow-metal to run TensorFlow on Mac GPUs.

We can install it using:

python -m pip install tensorflow-metal

If the above works, we should now be able to leverage our Mac's GPU cores to speed up model training with TensorFlow.
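
As a quick check that the Metal plugin registered your GPU (a minimal sketch, not part of the original steps), you can run the following in Python:

import tensorflow as tf

# With tensorflow-metal installed, a GPU device should appear here.
print(tf.config.list_physical_devices("GPU"))

# TensorFlow places GPU-capable ops on the GPU automatically,
# so a simple matrix multiply is enough to exercise it.
a = tf.random.normal((2000, 2000))
b = tf.random.normal((2000, 2000))
print(tf.matmul(a, b).shape)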

  8. (Optional) Install TensorFlow Datasets. Doing the above is enough to run TensorFlow on your machine. But if you'd like to run the benchmarks included in this repo, you'll need TensorFlow Datasets.

TensorFlow Datasets provides a collection of common machine learning datasets to test out various machine learning code.

python -m pip install tensorflow-datasets
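
To roughly check that TensorFlow Datasets installed correctly, you could load a small dataset such as MNIST (chosen here only for illustration; it downloads the data on first use):

import tensorflow_datasets as tfds

# Load a small dataset to confirm tensorflow-datasets works.
ds, ds_info = tfds.load("mnist", split="train", as_supervised=True, with_info=True)
print(ds_info.name, ds_info.splits["train"].num_examples)

# Inspect a single example (image tensor and label).
for image, label in ds.take(1):
    print(image.shape, label.numpy())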
  9. Install common data science packages. If you'd like to run the benchmarks above or work on various other data science and machine learning projects, you're likely going to need Jupyter Notebooks, pandas for data manipulation, NumPy for numeric computing, matplotlib for plotting and Scikit-Learn for traditional machine learning algorithms and processing functions.

To install those in the current environment run:

conda install jupyter pandas numpy matplotlib scikit-learn
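
With Scikit-Learn installed, here's a minimal sketch of the kind of random-search cross-validation workload benchmarked in notebook 02. Since the California Housing targets are continuous, this sketch uses RandomForestRegressor, and the search space shown is illustrative only; see notebook 02 for the exact setup.

from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

# Fetch the California Housing data (downloads on first use).
X, y = fetch_california_housing(return_X_y=True)

# Small, illustrative search space (notebook 02 defines its own).
param_distributions = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 5, 10],
}

search = RandomizedSearchCV(
    RandomForestRegressor(n_jobs=-1),
    param_distributions=param_distributions,
    n_iter=5,
    cv=3,
    verbose=1,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)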
  10. Test it out. To see if everything worked, try starting a Jupyter Notebook and importing the installed packages.
# Start a Jupyter notebook
jupyter notebook

Once the notebook is started, in the first cell:

import numpy as np
import pandas as pd
import sklearn
import tensorflow as tf
import matplotlib.pyplot as plt

# Check for TensorFlow GPU access
print(f"TensorFlow has access to the following devices:\n{tf.config.list_physical_devices()}")

# See TensorFlow version
print(f"TensorFlow version: {tf.__version__}")

If it all worked, you should see something like:

TensorFlow has access to the following devices:
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'),
PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
TensorFlow version: 2.5.0
  11. To see if it really worked, try running one of the notebooks above end to end!

And then compare your results to the benchmarks above.

Owner

Daniel Bourke, Machine Learning Engineer live on YouTube.