Hardware for Machine Learning
Hardware for Deep Learning
TPU - GPU
Running PyTorch on a TPU: PyTorch/XLA compiles PyTorch programs to run on TPU hardware, so a TPUv3-8 (the accelerator available in Google Colab's free tier) can be used from PyTorch much like an ordinary GPU. A TPUv3-8 provides compute roughly comparable to 8 Tesla V100 GPUs, or by some estimates about 6 RTX 3090 GPUs. Note, however, that TPUs are ~5x as expensive as GPUs with "on-demand" access on GCP ($1.46/hr for an NVIDIA Tesla P100 GPU vs $8.00/hr for a Google TPUv3, or $4.50/hr for a TPUv2).
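A quick back-of-the-envelope check of the pricing claim above, using the quoted on-demand GCP rates. The price ratio is where the "~5x" figure comes from, and it doubles as a break-even test: the TPU has to train at least that many times faster than the P100 before it is cheaper per job.

```python
# Quoted on-demand GCP rates: P100 GPU, TPUv3, TPUv2 (USD per hour).
p100_hr, tpuv3_hr, tpuv2_hr = 1.46, 8.00, 4.50

def breakeven_speedup(accel_hr, base_hr=p100_hr):
    """Speedup an accelerator needs over the baseline GPU to match its cost per job."""
    return accel_hr / base_hr

print(round(breakeven_speedup(tpuv3_hr), 2))  # TPUv3 vs P100: ~5.48x (the "~5x" claim)
print(round(breakeven_speedup(tpuv2_hr), 2))  # TPUv2 vs P100: ~3.08x
```

So a TPUv3 only beats a P100 on cost if it delivers more than ~5.5x the throughput on your workload; for a TPUv3-8 with the V100-class performance quoted above, that is plausible for large models.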
We recommend CPUs for their versatility and large memory capacity. GPUs are a great alternative to CPUs when you want to speed up a variety of data science workflows, and TPUs are best when you specifically want to train a machine learning model as fast as possible. In Google Colab, the CPU type varies by availability (Intel Xeon chips of the Skylake, Broadwell, or Haswell generations). At the time of writing, Colab GPUs were NVIDIA P100s paired with a 2-core 2 GHz Intel Xeon CPU and 13 GB of RAM, and TPUs were TPUv3 (8 cores) paired with a 4-core 2 GHz Intel Xeon CPU and 16 GB of RAM.
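That choice can also be made at runtime. The sketch below picks the fastest backend available, preferring TPU, then GPU, then CPU; it assumes the `torch_xla` package layout of PyTorch/XLA (`xm.xla_device()`), which may differ in newer releases, and it falls back gracefully when neither accelerator (or even PyTorch itself) is installed.

```python
def pick_device():
    """Return the best available compute device: TPU > GPU > CPU (a sketch)."""
    try:
        # PyTorch/XLA exposes the TPU as an "xla" device (assumed API/version).
        import torch_xla.core.xla_model as xm
        return xm.xla_device()
    except ImportError:
        pass
    try:
        import torch
        if torch.cuda.is_available():
            return torch.device("cuda")
        return torch.device("cpu")
    except ImportError:
        # No PyTorch installed at all; return a plain label so callers can still branch.
        return "cpu"

print(pick_device())
```

On a Colab TPU runtime this yields an `xla` device, on a GPU runtime `cuda`, and otherwise `cpu`; model and tensors are then moved there with the usual `.to(device)`.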
Free TPU
TensorFlow Research Cloud - Free TPU: Accelerate your cutting-edge machine learning research with free Cloud TPUs.
AI/ML Cloud Computing
AI Platform