* Re: [PATCH v5] drm/amdgpu: replace PASID IDR with XArray
2026-03-30 19:11 [PATCH v5] drm/amdgpu: replace PASID IDR with XArray Mikhail Gavrilov
@ 2026-03-30 21:18 ` Eric Huang
2026-03-31 3:32 ` Lazar, Lijo
` (2 subsequent siblings)
3 siblings, 0 replies; 5+ messages in thread
From: Eric Huang @ 2026-03-30 21:18 UTC (permalink / raw)
To: Mikhail Gavrilov, Alex Deucher, Christian König
Cc: lijo.lazar, David Airlie, Simona Vetter, amd-gfx, dri-devel,
stable
It looks good to me.
Reviewed-by: Eric Huang <jinhuieric.huang@amd.com>
On 2026-03-30 15:11, Mikhail Gavrilov wrote:
> Commit 8f1de51f49be ("drm/amdgpu: prevent immediate PASID reuse case")
> converted the global PASID allocator from IDA to IDR with a spinlock
> for cyclic allocation, but introduced two locking bugs:
>
> 1) idr_alloc_cyclic() is called with GFP_KERNEL under spin_lock(),
> which can sleep.
>
> 2) amdgpu_pasid_free() can be called from hardirq context via the
> fence signal path (amdgpu_pasid_free_cb), but the lock is taken
> with plain spin_lock() in process context, creating a potential
> deadlock:
>
> CPU0
> ----
> spin_lock(&amdgpu_pasid_idr_lock) // process context, IRQs on
> <Interrupt>
> spin_lock(&amdgpu_pasid_idr_lock) // deadlock
>
> The hardirq call chain is:
>
> sdma_v6_0_process_trap_irq
> -> amdgpu_fence_process
> -> dma_fence_signal
> -> drm_sched_job_done
> -> dma_fence_signal
> -> amdgpu_pasid_free_cb
> -> amdgpu_pasid_free
>
> This was observed on an RX 7900 XTX when exiting a Vulkan game
> running under Proton/Wine, which triggers the fence callback path
> during VM teardown.
>
> Replace the IDR + spinlock with XArray. xa_alloc_cyclic() handles
> GFP_KERNEL pre-allocation and IRQ-safe locking internally, so it is
> used directly in amdgpu_pasid_alloc(). For amdgpu_pasid_free(), which
> can be called from hardirq context, use explicit xa_lock_irqsave()
> with __xa_erase() since xa_erase() only uses plain xa_lock() which
> is not IRQ-safe.
>
> Suggested-by: Lijo Lazar <lijo.lazar@amd.com>
> Fixes: 8f1de51f49be ("drm/amdgpu: prevent immediate PASID reuse case")
> Cc: stable@vger.kernel.org
> Signed-off-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
> ---
>
> v5: Use explicit xa_lock_irqsave/__xa_erase for amdgpu_pasid_free()
> since xa_erase() only uses plain xa_lock() which is not safe from
> hardirq context. Keep xa_alloc_cyclic() for amdgpu_pasid_alloc()
> as it handles locking internally. (Lijo Lazar)
> v4: Use xa_alloc_cyclic/xa_erase directly instead of explicit
> xa_lock_irqsave, as suggested by Lijo Lazar.
> https://lore.kernel.org/all/20260330162038.25073-1-mikhail.v.gavrilov@gmail.com/
> v3: Replace IDR with XArray instead of fixing the spinlock, as
> suggested by Lijo Lazar.
> https://lore.kernel.org/all/20260330110346.16548-1-mikhail.v.gavrilov@gmail.com/
> v2: Added second patch fixing the {HARDIRQ-ON-W} -> {IN-HARDIRQ-W}
> lock inconsistency (spin_lock -> spin_lock_irqsave).
> https://lore.kernel.org/all/20260330053025.19203-1-mikhail.v.gavrilov@gmail.com/
> v1: Fixed sleeping-under-spinlock (idr_alloc_cyclic with GFP_KERNEL)
> using idr_preload/GFP_NOWAIT.
> https://lore.kernel.org/all/20260328213900.19255-1-mikhail.v.gavrilov@gmail.com/
>
> drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c | 47 ++++++++++++-------------
> 1 file changed, 23 insertions(+), 24 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> index d88523568b62..3fbf631e67c7 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> @@ -22,7 +22,7 @@
> */
> #include "amdgpu_ids.h"
>
> -#include <linux/idr.h>
> +#include <linux/xarray.h>
> #include <linux/dma-fence-array.h>
>
>
> @@ -35,13 +35,13 @@
> * PASIDs are global address space identifiers that can be shared
> * between the GPU, an IOMMU and the driver. VMs on different devices
> * may use the same PASID if they share the same address
> - * space. Therefore PASIDs are allocated using IDR cyclic allocator
> - * (similar to kernel PID allocation) which naturally delays reuse.
> - * VMs are looked up from the PASID per amdgpu_device.
> + * space. Therefore PASIDs are allocated using an XArray cyclic
> + * allocator (similar to kernel PID allocation) which naturally delays
> + * reuse. VMs are looked up from the PASID per amdgpu_device.
> */
>
> -static DEFINE_IDR(amdgpu_pasid_idr);
> -static DEFINE_SPINLOCK(amdgpu_pasid_idr_lock);
> +static DEFINE_XARRAY_ALLOC(amdgpu_pasid_xa);
> +static u32 amdgpu_pasid_xa_next;
>
> /* Helper to free pasid from a fence callback */
> struct amdgpu_pasid_cb {
> @@ -53,8 +53,7 @@ struct amdgpu_pasid_cb {
> * amdgpu_pasid_alloc - Allocate a PASID
> * @bits: Maximum width of the PASID in bits, must be at least 1
> *
> - * Uses kernel's IDR cyclic allocator (same as PID allocation).
> - * Allocates sequentially with automatic wrap-around.
> + * Uses XArray cyclic allocator for sequential allocation with wrap-around.
> *
> * Returns a positive integer on success. Returns %-EINVAL if bits==0.
> * Returns %-ENOSPC if no PASID was available. Returns %-ENOMEM on
> @@ -62,20 +61,22 @@ struct amdgpu_pasid_cb {
> */
> int amdgpu_pasid_alloc(unsigned int bits)
> {
> - int pasid;
> + u32 pasid;
> + int r;
>
> if (bits == 0)
> return -EINVAL;
>
> - spin_lock(&amdgpu_pasid_idr_lock);
> - pasid = idr_alloc_cyclic(&amdgpu_pasid_idr, NULL, 1,
> - 1U << bits, GFP_KERNEL);
> - spin_unlock(&amdgpu_pasid_idr_lock);
> + r = xa_alloc_cyclic(&amdgpu_pasid_xa, &pasid, xa_mk_value(0),
> + XA_LIMIT(1, (1U << bits) - 1),
> + &amdgpu_pasid_xa_next, GFP_KERNEL);
>
> - if (pasid >= 0)
> + if (r >= 0) {
> trace_amdgpu_pasid_allocated(pasid);
> + return pasid;
> + }
>
> - return pasid;
> + return r;
> }
>
> /**
> @@ -84,11 +85,13 @@ int amdgpu_pasid_alloc(unsigned int bits)
> */
> void amdgpu_pasid_free(u32 pasid)
> {
> + unsigned long flags;
> +
> trace_amdgpu_pasid_freed(pasid);
>
> - spin_lock(&amdgpu_pasid_idr_lock);
> - idr_remove(&amdgpu_pasid_idr, pasid);
> - spin_unlock(&amdgpu_pasid_idr_lock);
> + xa_lock_irqsave(&amdgpu_pasid_xa, flags);
> + __xa_erase(&amdgpu_pasid_xa, pasid);
> + xa_unlock_irqrestore(&amdgpu_pasid_xa, flags);
> }
>
> static void amdgpu_pasid_free_cb(struct dma_fence *fence,
> @@ -625,13 +628,9 @@ void amdgpu_vmid_mgr_fini(struct amdgpu_device *adev)
> }
>
> /**
> - * amdgpu_pasid_mgr_cleanup - cleanup PASID manager
> - *
> - * Cleanup the IDR allocator.
> + * amdgpu_pasid_mgr_cleanup - Cleanup PASID manager
> */
> void amdgpu_pasid_mgr_cleanup(void)
> {
> - spin_lock(&amdgpu_pasid_idr_lock);
> - idr_destroy(&amdgpu_pasid_idr);
> - spin_unlock(&amdgpu_pasid_idr_lock);
> + xa_destroy(&amdgpu_pasid_xa);
> }
^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PATCH v5] drm/amdgpu: replace PASID IDR with XArray
2026-03-30 19:11 [PATCH v5] drm/amdgpu: replace PASID IDR with XArray Mikhail Gavrilov
2026-03-30 21:18 ` Eric Huang
@ 2026-03-31 3:32 ` Lazar, Lijo
2026-03-31 6:57 ` Claude review: " Claude Code Review Bot
2026-03-31 6:57 ` Claude Code Review Bot
3 siblings, 0 replies; 5+ messages in thread
From: Lazar, Lijo @ 2026-03-31 3:32 UTC (permalink / raw)
To: Mikhail Gavrilov, Alex Deucher, Christian König
Cc: Eric Huang, David Airlie, Simona Vetter, amd-gfx, dri-devel,
stable
On 31-Mar-26 12:41 AM, Mikhail Gavrilov wrote:
> Commit 8f1de51f49be ("drm/amdgpu: prevent immediate PASID reuse case")
> converted the global PASID allocator from IDA to IDR with a spinlock
> for cyclic allocation, but introduced two locking bugs:
>
> 1) idr_alloc_cyclic() is called with GFP_KERNEL under spin_lock(),
> which can sleep.
>
> 2) amdgpu_pasid_free() can be called from hardirq context via the
> fence signal path (amdgpu_pasid_free_cb), but the lock is taken
> with plain spin_lock() in process context, creating a potential
> deadlock:
>
> CPU0
> ----
> spin_lock(&amdgpu_pasid_idr_lock) // process context, IRQs on
> <Interrupt>
> spin_lock(&amdgpu_pasid_idr_lock) // deadlock
>
> The hardirq call chain is:
>
> sdma_v6_0_process_trap_irq
> -> amdgpu_fence_process
> -> dma_fence_signal
> -> drm_sched_job_done
> -> dma_fence_signal
> -> amdgpu_pasid_free_cb
> -> amdgpu_pasid_free
>
> This was observed on an RX 7900 XTX when exiting a Vulkan game
> running under Proton/Wine, which triggers the fence callback path
> during VM teardown.
>
> Replace the IDR + spinlock with XArray. xa_alloc_cyclic() handles
> GFP_KERNEL pre-allocation and IRQ-safe locking internally, so it is
> used directly in amdgpu_pasid_alloc(). For amdgpu_pasid_free(), which
> can be called from hardirq context, use explicit xa_lock_irqsave()
> with __xa_erase() since xa_erase() only uses plain xa_lock() which
> is not IRQ-safe.
>
> Suggested-by: Lijo Lazar <lijo.lazar@amd.com>
> Fixes: 8f1de51f49be ("drm/amdgpu: prevent immediate PASID reuse case")
> Cc: stable@vger.kernel.org
> Signed-off-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
Reviewed-by: Lijo Lazar <lijo.lazar@amd.com>
Thanks,
Lijo
> ---
>
> v5: Use explicit xa_lock_irqsave/__xa_erase for amdgpu_pasid_free()
> since xa_erase() only uses plain xa_lock() which is not safe from
> hardirq context. Keep xa_alloc_cyclic() for amdgpu_pasid_alloc()
> as it handles locking internally. (Lijo Lazar)
> v4: Use xa_alloc_cyclic/xa_erase directly instead of explicit
> xa_lock_irqsave, as suggested by Lijo Lazar.
> https://lore.kernel.org/all/20260330162038.25073-1-mikhail.v.gavrilov@gmail.com/
> v3: Replace IDR with XArray instead of fixing the spinlock, as
> suggested by Lijo Lazar.
> https://lore.kernel.org/all/20260330110346.16548-1-mikhail.v.gavrilov@gmail.com/
> v2: Added second patch fixing the {HARDIRQ-ON-W} -> {IN-HARDIRQ-W}
> lock inconsistency (spin_lock -> spin_lock_irqsave).
> https://lore.kernel.org/all/20260330053025.19203-1-mikhail.v.gavrilov@gmail.com/
> v1: Fixed sleeping-under-spinlock (idr_alloc_cyclic with GFP_KERNEL)
> using idr_preload/GFP_NOWAIT.
> https://lore.kernel.org/all/20260328213900.19255-1-mikhail.v.gavrilov@gmail.com/
>
> drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c | 47 ++++++++++++-------------
> 1 file changed, 23 insertions(+), 24 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> index d88523568b62..3fbf631e67c7 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> @@ -22,7 +22,7 @@
> */
> #include "amdgpu_ids.h"
>
> -#include <linux/idr.h>
> +#include <linux/xarray.h>
> #include <linux/dma-fence-array.h>
>
>
> @@ -35,13 +35,13 @@
> * PASIDs are global address space identifiers that can be shared
> * between the GPU, an IOMMU and the driver. VMs on different devices
> * may use the same PASID if they share the same address
> - * space. Therefore PASIDs are allocated using IDR cyclic allocator
> - * (similar to kernel PID allocation) which naturally delays reuse.
> - * VMs are looked up from the PASID per amdgpu_device.
> + * space. Therefore PASIDs are allocated using an XArray cyclic
> + * allocator (similar to kernel PID allocation) which naturally delays
> + * reuse. VMs are looked up from the PASID per amdgpu_device.
> */
>
> -static DEFINE_IDR(amdgpu_pasid_idr);
> -static DEFINE_SPINLOCK(amdgpu_pasid_idr_lock);
> +static DEFINE_XARRAY_ALLOC(amdgpu_pasid_xa);
> +static u32 amdgpu_pasid_xa_next;
>
> /* Helper to free pasid from a fence callback */
> struct amdgpu_pasid_cb {
> @@ -53,8 +53,7 @@ struct amdgpu_pasid_cb {
> * amdgpu_pasid_alloc - Allocate a PASID
> * @bits: Maximum width of the PASID in bits, must be at least 1
> *
> - * Uses kernel's IDR cyclic allocator (same as PID allocation).
> - * Allocates sequentially with automatic wrap-around.
> + * Uses XArray cyclic allocator for sequential allocation with wrap-around.
> *
> * Returns a positive integer on success. Returns %-EINVAL if bits==0.
> * Returns %-ENOSPC if no PASID was available. Returns %-ENOMEM on
> @@ -62,20 +61,22 @@ struct amdgpu_pasid_cb {
> */
> int amdgpu_pasid_alloc(unsigned int bits)
> {
> - int pasid;
> + u32 pasid;
> + int r;
>
> if (bits == 0)
> return -EINVAL;
>
> - spin_lock(&amdgpu_pasid_idr_lock);
> - pasid = idr_alloc_cyclic(&amdgpu_pasid_idr, NULL, 1,
> - 1U << bits, GFP_KERNEL);
> - spin_unlock(&amdgpu_pasid_idr_lock);
> + r = xa_alloc_cyclic(&amdgpu_pasid_xa, &pasid, xa_mk_value(0),
> + XA_LIMIT(1, (1U << bits) - 1),
> + &amdgpu_pasid_xa_next, GFP_KERNEL);
>
> - if (pasid >= 0)
> + if (r >= 0) {
> trace_amdgpu_pasid_allocated(pasid);
> + return pasid;
> + }
>
> - return pasid;
> + return r;
> }
>
> /**
> @@ -84,11 +85,13 @@ int amdgpu_pasid_alloc(unsigned int bits)
> */
> void amdgpu_pasid_free(u32 pasid)
> {
> + unsigned long flags;
> +
> trace_amdgpu_pasid_freed(pasid);
>
> - spin_lock(&amdgpu_pasid_idr_lock);
> - idr_remove(&amdgpu_pasid_idr, pasid);
> - spin_unlock(&amdgpu_pasid_idr_lock);
> + xa_lock_irqsave(&amdgpu_pasid_xa, flags);
> + __xa_erase(&amdgpu_pasid_xa, pasid);
> + xa_unlock_irqrestore(&amdgpu_pasid_xa, flags);
> }
>
> static void amdgpu_pasid_free_cb(struct dma_fence *fence,
> @@ -625,13 +628,9 @@ void amdgpu_vmid_mgr_fini(struct amdgpu_device *adev)
> }
>
> /**
> - * amdgpu_pasid_mgr_cleanup - cleanup PASID manager
> - *
> - * Cleanup the IDR allocator.
> + * amdgpu_pasid_mgr_cleanup - Cleanup PASID manager
> */
> void amdgpu_pasid_mgr_cleanup(void)
> {
> - spin_lock(&amdgpu_pasid_idr_lock);
> - idr_destroy(&amdgpu_pasid_idr);
> - spin_unlock(&amdgpu_pasid_idr_lock);
> + xa_destroy(&amdgpu_pasid_xa);
> }
^ permalink raw reply	[flat|nested] 5+ messages in thread

* Claude review: drm/amdgpu: replace PASID IDR with XArray
2026-03-30 19:11 [PATCH v5] drm/amdgpu: replace PASID IDR with XArray Mikhail Gavrilov
2026-03-30 21:18 ` Eric Huang
2026-03-31 3:32 ` Lazar, Lijo
@ 2026-03-31 6:57 ` Claude Code Review Bot
2026-03-31 6:57 ` Claude Code Review Bot
3 siblings, 0 replies; 5+ messages in thread
From: Claude Code Review Bot @ 2026-03-31 6:57 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
**Commit message**: Excellent. The bug description is thorough, with clear lockdep-style annotation and a concrete reproduction scenario (RX 7900 XTX, Vulkan game under Proton). The changelog across v1-v5 is well documented.
**Bug — xa_alloc_cyclic() is not IRQ-safe** (lines 70-72 of patched file):
```c
r = xa_alloc_cyclic(&amdgpu_pasid_xa, &pasid, xa_mk_value(0),
XA_LIMIT(1, (1U << bits) - 1),
&amdgpu_pasid_xa_next, GFP_KERNEL);
```
`xa_alloc_cyclic()` uses plain `xa_lock()` (see `include/linux/xarray.h:982`):
```c
xa_lock(xa);
err = __xa_alloc_cyclic(xa, id, entry, limit, next, gfp);
xa_unlock(xa);
```
Since `amdgpu_pasid_free()` can be called from hardirq context (the entire reason for this patch), an IRQ whose handler frees a PASID while this lock is held will deadlock on the same CPU. This must be `xa_alloc_cyclic_irq()`, which disables IRQs around the lock (`include/linux/xarray.h:1054`).
**Return value change** (lines 74-79): The original `idr_alloc_cyclic()` returned the allocated ID on success (positive) or a negative errno. `xa_alloc_cyclic()` returns 0 on success, 1 if the allocation wrapped around the limit, or a negative errno, and writes the ID to `*pasid`. The patch handles this correctly: it checks `r >= 0` and returns `pasid` rather than `r`. Note that the wrap indicator from `__xa_alloc_cyclic()` is passed straight through by the `xa_alloc_cyclic()` wrapper (it only takes the lock around the call, as quoted above), so the `r >= 0` check also covers the wrap case. Since this code doesn't care about wrap notification, this is fine.
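For readers less familiar with the two conventions, here is a small user-space Python sketch (illustrative only, not kernel code; all names are made up for this model) of cyclic allocation with a persistent next-ID hint. It shows the ID coming back through an output slot, the 0-on-success / 1-on-wrap / negative-errno return convention, and why cyclic allocation naturally delays reuse of freed IDs:

```python
import errno

class CyclicAllocator:
    """Toy user-space model of cyclic ID allocation with a persistent
    'next' hint, in the spirit of xa_alloc_cyclic(). Illustrative only."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi   # inclusive ID range, like XA_LIMIT(lo, hi)
        self.used = set()           # IDs currently allocated
        self.next = lo              # where the next search starts

    def alloc(self):
        """Return (ret, ident): ret is 0 on success, 1 if the search wrapped
        back to the bottom of the range, -EBUSY if no ID is free."""
        start = max(self.next, self.lo)
        # First pass: search upward from the hint, so freshly freed low
        # IDs are not handed out again immediately (delayed reuse).
        for candidate in range(start, self.hi + 1):
            if candidate not in self.used:
                self.used.add(candidate)
                self.next = candidate + 1
                return 0, candidate
        # Second pass: wrap around and search from the bottom of the range.
        for candidate in range(self.lo, min(start, self.hi + 1)):
            if candidate not in self.used:
                self.used.add(candidate)
                self.next = candidate + 1
                return 1, candidate
        return -errno.EBUSY, None   # range exhausted

    def free(self, ident):
        self.used.discard(ident)
```

With a range of 1-7: after allocating 1-3 and freeing 1, the next allocation hands out 4 rather than reusing 1; only once the top of the range is exhausted does the allocator wrap and return `(1, 1)`.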
**Return value for -EBUSY** (line 79): The original IDR code returned `-ENOSPC` on exhaustion (per IDR semantics), while `xa_alloc_cyclic()` returns `-EBUSY`. The docstring at line 59 still says "Returns %-ENOSPC if no PASID was available" — this should be updated to `-EBUSY` to match reality. Minor documentation nit.
**amdgpu_pasid_free() locking** (lines 92-94): Correct. Using `xa_lock_irqsave()`/`__xa_erase()`/`xa_unlock_irqrestore()` is the right approach for hardirq context.
**xa_mk_value(0) as stored entry** (line 70): Storing `xa_mk_value(0)` works — XArray treats NULL as "empty", so an explicit non-NULL value is needed to mark the slot as occupied. `xa_mk_value(0)` is the lightest-weight way to do this. Fine.
**Cleanup** (line 261): `xa_destroy()` without locking is correct — this is called at module teardown when no concurrent access is possible.
**Recommendation**: NAK as-is due to the `xa_alloc_cyclic()` deadlock. The one-line fix is to change it to `xa_alloc_cyclic_irq()`. With that change, the patch would be correct and ready.
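Concretely, the suggested one-line change, sketched as a diff against the hunk quoted above (`xa_alloc_cyclic_irq()` takes the same arguments as `xa_alloc_cyclic()`, so only the function name and continuation-line indentation change):

```diff
-	r = xa_alloc_cyclic(&amdgpu_pasid_xa, &pasid, xa_mk_value(0),
-			    XA_LIMIT(1, (1U << bits) - 1),
-			    &amdgpu_pasid_xa_next, GFP_KERNEL);
+	r = xa_alloc_cyclic_irq(&amdgpu_pasid_xa, &pasid, xa_mk_value(0),
+				XA_LIMIT(1, (1U << bits) - 1),
+				&amdgpu_pasid_xa_next, GFP_KERNEL);
```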
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 5+ messages in thread
* Claude review: drm/amdgpu: replace PASID IDR with XArray
2026-03-30 19:11 [PATCH v5] drm/amdgpu: replace PASID IDR with XArray Mikhail Gavrilov
` (2 preceding siblings ...)
2026-03-31 6:57 ` Claude review: " Claude Code Review Bot
@ 2026-03-31 6:57 ` Claude Code Review Bot
3 siblings, 0 replies; 5+ messages in thread
From: Claude Code Review Bot @ 2026-03-31 6:57 UTC (permalink / raw)
To: dri-devel-reviews
Overall Series Review
Subject: drm/amdgpu: replace PASID IDR with XArray
Author: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
Patches: 1
Reviewed: 2026-03-31T16:57:38.374656
---
This is a single v5 patch converting amdgpu's PASID allocator from IDR+spinlock to XArray, fixing two real bugs introduced by commit 8f1de51f49be: sleeping under spinlock and a process/hardirq lock deadlock. The conversion is clean and well-motivated, but **the patch has a remaining deadlock vulnerability** — the same class of bug it's trying to fix.
The core issue: `xa_alloc_cyclic()` uses plain `xa_lock()` internally (IRQs remain enabled), while `amdgpu_pasid_free()` correctly uses `xa_lock_irqsave()`. If a hardirq fires on the same CPU while `xa_alloc_cyclic()` holds the xa_lock, and the IRQ handler calls `amdgpu_pasid_free()` → `xa_lock_irqsave()`, it will deadlock on the already-held lock — exactly the same pattern as the original bug.
**The fix is straightforward**: use `xa_alloc_cyclic_irq()` instead of `xa_alloc_cyclic()`. This variant (at `include/linux/xarray.h:1054`) takes and releases the xa_lock while disabling interrupts, which prevents the deadlock.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 5+ messages in thread