So Google is basically admitting PyTorch/XLA on TPUs didn't work; TorchTPU looks like them rebuilding what should have worked from day one. It's hard to run production ML on a toolchain engineers can't trust, no matter how fast the silicon is.
Adding support for new hardware to PyTorch is actually quite convenient. I did it for WebGPU using the same PrivateUse1 mechanism TorchTPU uses. Each backend gets its own dispatch key and device identifier, and when you want to support new hardware without merging it into PyTorch itself, PrivateUse1 works essentially like a plug-in slot:
https://github.com/jmaczan/torch-webgpu
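If it helps, the Python side of the registration is only a few lines. Here's a minimal sketch based on my WebGPU port; the `torch_webgpu` module name is a placeholder, and the actual kernels still have to be registered out-of-tree in C++ under the PrivateUse1 dispatch key:

```python
import torch

# Rename the generic PrivateUse1 slot so tensors report device="webgpu".
torch.utils.rename_privateuse1_backend("webgpu")

# Hook up a device module implementing the backend hooks
# (placeholder import; the real module lives in the out-of-tree package).
import torch_webgpu
torch._register_device_module("webgpu", torch_webgpu)

# Auto-generate convenience methods like Tensor.webgpu() and Tensor.is_webgpu.
torch.utils.generate_methods_for_privateuse1_backend()

# From here on, the custom backend behaves like any built-in device.
x = torch.randn(4, 4).to("webgpu")
```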
Is it just me, or does it feel like everyone now uses AI to write any kind of blog?
These parts in particular trigger me:
- Enter TorchTPU. As an engineering team, our mandate was to build a stack that leads with usability, portability, and excellent performance.
- Engineering the TorchTPU Stack: The Technical Reality
- Eager First: Flexibility Without Compromise
- The breakthrough, however, is our fused eager mode.
- The Road Ahead: 2026 and Beyond
I have mixed feelings about this. On one hand, we all seem to be using the same tools and converging to the same style. On the other hand, if we all use the same models with the same system prompts, we might lose a lot of creativity and diversity in online content.
it's just sad
This is great to see.
I did train some research models using the existing PyTorch/XLA on TPUs, and it was a mess of undocumented behavior and bugs (silently hanging after 8 hours of training!).
If anyone is trying to use PyTorch on TPU before TorchTPU is released, you can check out the training pipeline that I ended up building to support my research: https://github.com/aklein4/easy-torch-tpu
Sounds good, but my main question is: is this a fork, or a new backend they're building in-tree (like MPS)?
I attended the related session at Next'26 yesterday. From my understanding it is a new backend, and they will release the TorchTPU source on GitHub in one or two months. It won't support all ops initially, but they are moving fast. Still, for the time being, torchax is mature enough to run torch models on TPUs by translating them to JAX.
They write that they use PrivateUse1, so it's a custom out-of-tree backend.
The pitch basically boils down to "just change one line and it works", which sounds too good to be true, but if they actually pull it off at 100k-chip scale, that's genuinely a big deal.
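For context, a minimal sketch of what that one-line change presumably looks like (the "tpu" device string is my assumption; the blog doesn't spell out the exact identifier):

```python
import torch
import torch.nn as nn

model = nn.Linear(1024, 1024)

# Before, on GPU: model = model.to("cuda")
model = model.to("tpu")  # hypothetical TorchTPU device string

x = torch.randn(8, 1024, device="tpu")
y = model(x)  # eager ops dispatched straight to the TPU backend
```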
Now all that’s missing is an actual chip that can be purchased. Any ideas?
I'm thinking of picking up some used Gaudis from eBay; they're pretty TPU-like. But other than oddball hardware like that, it's just the GPU duopoly and the proprietary bespoke stuff the hyperscalers have built for themselves.
Shit, maybe China will start selling Huawei Ascend chips internationally.
Very excited for this.