
Conversation

Contributor

@lowdy1 lowdy1 commented Nov 4, 2025

Description

Adds support for Ascend NPUs in most single-agent sota-implementations. These changes are non-intrusive to existing code and have been thoroughly tested on Ascend NPU devices.
As the next step, we are working to extend TorchRL NPU support to multi-agent experiments and GRPO, and we plan to submit a separate PR for this work soon.

Motivation and Context

We aim to extend TorchRL to enable execution on NPUs, providing broader hardware compatibility.
The related issue is #3154.

  • I have raised an issue to propose this change (required for new features and bug fixes)

Types of changes

What types of changes does your code introduce? Remove all that do not apply:

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds core functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (update in the documentation)
  • Example (update in the folder of examples)

Checklist

Go over all the following points, and put an x in all the boxes that apply.
If you are unsure about any of these, don't hesitate to ask. We are here to help!

  • I have read the CONTRIBUTION guide (required)
  • My change requires a change to the documentation.
  • I have updated the tests accordingly (required for a bug fix or a new feature).
  • I have updated the documentation accordingly.

pytorch-bot bot commented Nov 4, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/rl/3229

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 3 New Failures, 1 Cancelled Job, 11 Unrelated Failures

As of commit 78a77ed with merge base 1ac0fd0:

NEW FAILURES - The following jobs have failed:

CANCELLED JOB - The following job was cancelled. Please retry:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Nov 4, 2025
@lowdy1 lowdy1 changed the title Add NPU Support for Single Agent Nov 5, 2025
@vmoens vmoens added the enhancement New feature or request label Dec 31, 2025
Collaborator

@vmoens vmoens left a comment


Thanks for the contribution! I made several improvements to this PR:

1. Added hasattr guard for compatibility

Wrapped all torch.npu.is_available() calls with hasattr(torch, "npu") to ensure compatibility with PyTorch versions that don't have NPU support. This matches the pattern already used in torchrl/collectors/_single.py.
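
For reference, the guard pattern looks roughly like this (a minimal sketch with an illustrative helper name, not the exact code from the PR):

import torch

def _npu_is_available() -> bool:
    # torch.npu only exists when the Ascend extension (torch_npu) is installed,
    # so check for the attribute before calling is_available().
    return hasattr(torch, "npu") and torch.npu.is_available()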

2. Created centralized get_available_device() utility

Instead of duplicating the device selection logic across 22 files, I created a new utility function in torchrl/_utils.py:

def get_available_device(return_str: bool = False) -> torch.device | str:
    """Return the available accelerator device, or CPU if none is found.

    Checks for accelerator availability in the following order: CUDA, NPU, MPS.
    Returns the first available accelerator, or CPU if none are present.

    .. note::
        PyTorch generally assumes a single accelerator type per system.
        Running with multiple accelerator types (e.g., both CUDA and NPU)
        is not officially supported. This function simply returns the first
        available accelerator it finds.
    """

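A possible body for this helper, following the documented CUDA, then NPU, then MPS order (a sketch based on the description above, not necessarily the exact implementation merged into torchrl/_utils.py):

import torch

def get_available_device(return_str: bool = False) -> torch.device | str:
    # Docstring omitted; see above. Probe accelerators in priority order:
    # CUDA, then NPU (guarded, since torch.npu only exists when the Ascend
    # extension is installed), then MPS, falling back to CPU.
    if torch.cuda.is_available():
        device = "cuda"
    elif hasattr(torch, "npu") and torch.npu.is_available():
        device = "npu"
    elif torch.backends.mps.is_available():
        device = "mps"
    else:
        device = "cpu"
    # Callers can request the raw string (e.g. for configs) or a torch.device.
    return device if return_str else torch.device(device)
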
This also adds MPS support for consistency. All 22 sota-implementation files now use:

device = torch.device(cfg.network.device) if cfg.network.device else get_available_device()

3. Added documentation

Added get_available_device to docs/source/reference/utils.rst.
