Asynchronous versus synchronous execution
CUDA is asynchronous, CPU is synchronous. Making them play well together can be one of the more thorny and easy-to-get-wrong aspects of the PyTorch API. I talk about why non_blocking is difficult to use correctly, a hypothetical "asynchronous CPU" device which would help smooth over some of the API problems, and also why it used to be difficult to implement async CPU (but it's not hard anymore!). At the end, I also briefly talk about how async/sync impedance mismatch can show up in unusual places, namely the CUDA caching allocator.
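A minimal sketch (not from the episode, assuming a CUDA-capable machine) of the kind of pitfall non_blocking creates: copies issued with non_blocking=True are merely queued on the current CUDA stream, so reading the destination tensor without synchronizing can race with the copy.

```python
import torch

assert torch.cuda.is_available()

# Host-to-device: non_blocking only overlaps with compute if the source
# CPU tensor lives in pinned (page-locked) memory.
src = torch.randn(1024, 1024, pin_memory=True)
gpu = src.to("cuda", non_blocking=True)  # queued on the current stream, returns immediately

# Device-to-host: the copy is also only queued; the CPU tensor returned
# here may not contain valid data yet when .to() returns.
out = gpu.to("cpu", non_blocking=True)

# Without this synchronization, reading `out` races with the async copy.
torch.cuda.current_stream().synchronize()
print(out.sum())  # safe to read only after synchronizing
```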
Further reading.
- CUDA semantics, which discuss non_blocking somewhat: https://pytorch.org/docs/stable/notes/cuda.html
- Issue requesting async CPU: https://github.com/pytorch/pytorch/issues/44343