* [PATCH v3] drm/amdgpu: replace PASID IDR with XArray
@ 2026-03-30 11:35 Mikhail Gavrilov
2026-03-30 13:10 ` Lazar, Lijo
` (2 more replies)
0 siblings, 3 replies; 4+ messages in thread
From: Mikhail Gavrilov @ 2026-03-30 11:35 UTC (permalink / raw)
To: Alex Deucher, Christian König
Cc: lijo.lazar, Eric Huang, David Airlie, Simona Vetter, amd-gfx,
dri-devel, stable, Mikhail Gavrilov
Commit 8f1de51f49be ("drm/amdgpu: prevent immediate PASID reuse case")
converted the global PASID allocator from IDA to IDR with a spinlock
for cyclic allocation, but introduced two locking bugs:
1) idr_alloc_cyclic() is called with GFP_KERNEL under spin_lock(),
which can sleep.
2) amdgpu_pasid_free() can be called from hardirq context via the
fence signal path (amdgpu_pasid_free_cb), but the lock is taken
with plain spin_lock() in process context, creating a potential
deadlock:
CPU0
----
spin_lock(&amdgpu_pasid_idr_lock) // process context, IRQs on
<Interrupt>
spin_lock(&amdgpu_pasid_idr_lock) // deadlock
The hardirq call chain is:
sdma_v6_0_process_trap_irq
-> amdgpu_fence_process
-> dma_fence_signal
-> drm_sched_job_done
-> dma_fence_signal
-> amdgpu_pasid_free_cb
-> amdgpu_pasid_free
This was observed on an RX 7900 XTX when exiting a Vulkan game
running under Proton/Wine, which triggers the fence callback path
during VM teardown.
Replace the IDR + spinlock with an XArray, which provides built-in
cyclic allocation (__xa_alloc_cyclic) and fine-grained IRQ-safe
locking (xa_lock_irqsave). This fixes both bugs in a single,
cleaner conversion.
Suggested-by: Lazar, Lijo <lijo.lazar@amd.com>
Fixes: 8f1de51f49be ("drm/amdgpu: prevent immediate PASID reuse case")
Cc: stable@vger.kernel.org
Signed-off-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
---
v3: Replace IDR with XArray instead of fixing the spinlock, as
suggested by Lijo Lazar.
https://lore.kernel.org/all/20260330053025.19203-1-mikhail.v.gavrilov@gmail.com/
v2: Added second patch fixing the {HARDIRQ-ON-W} -> {IN-HARDIRQ-W}
lock inconsistency (spin_lock -> spin_lock_irqsave).
https://lore.kernel.org/all/20260330053025.19203-1-mikhail.v.gavrilov@gmail.com/
v1: Fixed sleeping-under-spinlock (idr_alloc_cyclic with GFP_KERNEL)
using idr_preload/GFP_NOWAIT.
https://lore.kernel.org/all/20260328213900.19255-1-mikhail.v.gavrilov@gmail.com/
drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c | 50 +++++++++++++------------
1 file changed, 27 insertions(+), 23 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
index d88523568b62..1e660fbc42ff 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
@@ -22,7 +22,7 @@
*/
#include "amdgpu_ids.h"
-#include <linux/idr.h>
+#include <linux/xarray.h>
#include <linux/dma-fence-array.h>
@@ -35,13 +35,13 @@
* PASIDs are global address space identifiers that can be shared
* between the GPU, an IOMMU and the driver. VMs on different devices
* may use the same PASID if they share the same address
- * space. Therefore PASIDs are allocated using IDR cyclic allocator
- * (similar to kernel PID allocation) which naturally delays reuse.
- * VMs are looked up from the PASID per amdgpu_device.
+ * space. Therefore PASIDs are allocated using an XArray cyclic
+ * allocator (similar to kernel PID allocation) which naturally delays
+ * reuse. VMs are looked up from the PASID per amdgpu_device.
*/
-static DEFINE_IDR(amdgpu_pasid_idr);
-static DEFINE_SPINLOCK(amdgpu_pasid_idr_lock);
+static DEFINE_XARRAY_ALLOC(amdgpu_pasid_xa);
+static u32 amdgpu_pasid_xa_next;
/* Helper to free pasid from a fence callback */
struct amdgpu_pasid_cb {
@@ -53,8 +53,7 @@ struct amdgpu_pasid_cb {
* amdgpu_pasid_alloc - Allocate a PASID
* @bits: Maximum width of the PASID in bits, must be at least 1
*
- * Uses kernel's IDR cyclic allocator (same as PID allocation).
- * Allocates sequentially with automatic wrap-around.
+ * Uses XArray cyclic allocator for sequential allocation with wrap-around.
*
* Returns a positive integer on success. Returns %-EINVAL if bits==0.
* Returns %-ENOSPC if no PASID was available. Returns %-ENOMEM on
@@ -62,20 +61,25 @@ struct amdgpu_pasid_cb {
*/
int amdgpu_pasid_alloc(unsigned int bits)
{
- int pasid;
+ unsigned long flags;
+ u32 pasid;
+ int r;
if (bits == 0)
return -EINVAL;
- spin_lock(&amdgpu_pasid_idr_lock);
- pasid = idr_alloc_cyclic(&amdgpu_pasid_idr, NULL, 1,
- 1U << bits, GFP_KERNEL);
- spin_unlock(&amdgpu_pasid_idr_lock);
+ xa_lock_irqsave(&amdgpu_pasid_xa, flags);
+ r = __xa_alloc_cyclic(&amdgpu_pasid_xa, &pasid, xa_mk_value(0),
+ XA_LIMIT(1, (1U << bits) - 1),
+ &amdgpu_pasid_xa_next, GFP_ATOMIC);
+ xa_unlock_irqrestore(&amdgpu_pasid_xa, flags);
- if (pasid >= 0)
+ if (r >= 0) {
trace_amdgpu_pasid_allocated(pasid);
+ return pasid;
+ }
- return pasid;
+ return r;
}
/**
@@ -84,11 +88,13 @@ int amdgpu_pasid_alloc(unsigned int bits)
*/
void amdgpu_pasid_free(u32 pasid)
{
+ unsigned long flags;
+
trace_amdgpu_pasid_freed(pasid);
- spin_lock(&amdgpu_pasid_idr_lock);
- idr_remove(&amdgpu_pasid_idr, pasid);
- spin_unlock(&amdgpu_pasid_idr_lock);
+ xa_lock_irqsave(&amdgpu_pasid_xa, flags);
+ __xa_erase(&amdgpu_pasid_xa, pasid);
+ xa_unlock_irqrestore(&amdgpu_pasid_xa, flags);
}
static void amdgpu_pasid_free_cb(struct dma_fence *fence,
@@ -625,13 +631,11 @@ void amdgpu_vmid_mgr_fini(struct amdgpu_device *adev)
}
/**
- * amdgpu_pasid_mgr_cleanup - cleanup PASID manager
+ * amdgpu_pasid_mgr_cleanup - Cleanup PASID manager
*
- * Cleanup the IDR allocator.
+ * Free all internal data structures of the XArray allocator.
*/
void amdgpu_pasid_mgr_cleanup(void)
{
- spin_lock(&amdgpu_pasid_idr_lock);
- idr_destroy(&amdgpu_pasid_idr);
- spin_unlock(&amdgpu_pasid_idr_lock);
+ xa_destroy(&amdgpu_pasid_xa);
}
--
2.53.0
^ permalink raw reply related [flat|nested] 4+ messages in thread
* Re: [PATCH v3] drm/amdgpu: replace PASID IDR with XArray
2026-03-30 11:35 [PATCH v3] drm/amdgpu: replace PASID IDR with XArray Mikhail Gavrilov
@ 2026-03-30 13:10 ` Lazar, Lijo
2026-03-31 7:18 ` Claude review: " Claude Code Review Bot
2026-03-31 7:18 ` Claude Code Review Bot
2 siblings, 0 replies; 4+ messages in thread
From: Lazar, Lijo @ 2026-03-30 13:10 UTC (permalink / raw)
To: Mikhail Gavrilov, Alex Deucher, Christian König
Cc: Eric Huang, David Airlie, Simona Vetter, amd-gfx, dri-devel,
stable
On 30-Mar-26 5:05 PM, Mikhail Gavrilov wrote:
> Commit 8f1de51f49be ("drm/amdgpu: prevent immediate PASID reuse case")
> converted the global PASID allocator from IDA to IDR with a spinlock
> for cyclic allocation, but introduced two locking bugs:
>
> 1) idr_alloc_cyclic() is called with GFP_KERNEL under spin_lock(),
> which can sleep.
>
> 2) amdgpu_pasid_free() can be called from hardirq context via the
> fence signal path (amdgpu_pasid_free_cb), but the lock is taken
> with plain spin_lock() in process context, creating a potential
> deadlock:
>
> CPU0
> ----
> spin_lock(&amdgpu_pasid_idr_lock) // process context, IRQs on
> <Interrupt>
> spin_lock(&amdgpu_pasid_idr_lock) // deadlock
>
> The hardirq call chain is:
>
> sdma_v6_0_process_trap_irq
> -> amdgpu_fence_process
> -> dma_fence_signal
> -> drm_sched_job_done
> -> dma_fence_signal
> -> amdgpu_pasid_free_cb
> -> amdgpu_pasid_free
>
> This was observed on an RX 7900 XTX when exiting a Vulkan game
> running under Proton/Wine, which triggers the fence callback path
> during VM teardown.
>
> Replace the IDR + spinlock with an XArray, which provides built-in
> cyclic allocation (__xa_alloc_cyclic) and fine-grained IRQ-safe
> locking (xa_lock_irqsave). This fixes both bugs in a single,
> cleaner conversion.
>
> Suggested-by: Lazar, Lijo <lijo.lazar@amd.com>
> Fixes: 8f1de51f49be ("drm/amdgpu: prevent immediate PASID reuse case")
> Cc: stable@vger.kernel.org
> Signed-off-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
> ---
>
> v3: Replace IDR with XArray instead of fixing the spinlock, as
> suggested by Lijo Lazar.
> https://lore.kernel.org/all/20260330053025.19203-1-mikhail.v.gavrilov@gmail.com/
> v2: Added second patch fixing the {HARDIRQ-ON-W} -> {IN-HARDIRQ-W}
> lock inconsistency (spin_lock -> spin_lock_irqsave).
> https://lore.kernel.org/all/20260330053025.19203-1-mikhail.v.gavrilov@gmail.com/
> v1: Fixed sleeping-under-spinlock (idr_alloc_cyclic with GFP_KERNEL)
> using idr_preload/GFP_NOWAIT.
> https://lore.kernel.org/all/20260328213900.19255-1-mikhail.v.gavrilov@gmail.com/
>
> drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c | 50 +++++++++++++------------
> 1 file changed, 27 insertions(+), 23 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> index d88523568b62..1e660fbc42ff 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> @@ -22,7 +22,7 @@
> */
> #include "amdgpu_ids.h"
>
> -#include <linux/idr.h>
> +#include <linux/xarray.h>
> #include <linux/dma-fence-array.h>
>
>
> @@ -35,13 +35,13 @@
> * PASIDs are global address space identifiers that can be shared
> * between the GPU, an IOMMU and the driver. VMs on different devices
> * may use the same PASID if they share the same address
> - * space. Therefore PASIDs are allocated using IDR cyclic allocator
> - * (similar to kernel PID allocation) which naturally delays reuse.
> - * VMs are looked up from the PASID per amdgpu_device.
> + * space. Therefore PASIDs are allocated using an XArray cyclic
> + * allocator (similar to kernel PID allocation) which naturally delays
> + * reuse. VMs are looked up from the PASID per amdgpu_device.
> */
>
> -static DEFINE_IDR(amdgpu_pasid_idr);
> -static DEFINE_SPINLOCK(amdgpu_pasid_idr_lock);
> +static DEFINE_XARRAY_ALLOC(amdgpu_pasid_xa);
> +static u32 amdgpu_pasid_xa_next;
>
> /* Helper to free pasid from a fence callback */
> struct amdgpu_pasid_cb {
> @@ -53,8 +53,7 @@ struct amdgpu_pasid_cb {
> * amdgpu_pasid_alloc - Allocate a PASID
> * @bits: Maximum width of the PASID in bits, must be at least 1
> *
> - * Uses kernel's IDR cyclic allocator (same as PID allocation).
> - * Allocates sequentially with automatic wrap-around.
> + * Uses XArray cyclic allocator for sequential allocation with wrap-around.
> *
> * Returns a positive integer on success. Returns %-EINVAL if bits==0.
> * Returns %-ENOSPC if no PASID was available. Returns %-ENOMEM on
> @@ -62,20 +61,25 @@ struct amdgpu_pasid_cb {
> */
> int amdgpu_pasid_alloc(unsigned int bits)
> {
> - int pasid;
> + unsigned long flags;
> + u32 pasid;
> + int r;
>
> if (bits == 0)
> return -EINVAL;
>
> - spin_lock(&amdgpu_pasid_idr_lock);
> - pasid = idr_alloc_cyclic(&amdgpu_pasid_idr, NULL, 1,
> - 1U << bits, GFP_KERNEL);
> - spin_unlock(&amdgpu_pasid_idr_lock);
> + xa_lock_irqsave(&amdgpu_pasid_xa, flags);
> + r = __xa_alloc_cyclic(&amdgpu_pasid_xa, &pasid, xa_mk_value(0),
> + XA_LIMIT(1, (1U << bits) - 1),
> + &amdgpu_pasid_xa_next, GFP_ATOMIC);
> + xa_unlock_irqrestore(&amdgpu_pasid_xa, flags);
I think the XArray takes care of GFP_KERNEL allocation itself (it avoids
allocating while the spinlock is held). The regular xa_alloc_cyclic()
with GFP_KERNEL may be good enough, so this may not be required.
Thanks,
Lijo
>
> - if (pasid >= 0)
> + if (r >= 0) {
> trace_amdgpu_pasid_allocated(pasid);
> + return pasid;
> + }
>
> - return pasid;
> + return r;
> }
>
> /**
> @@ -84,11 +88,13 @@ int amdgpu_pasid_alloc(unsigned int bits)
> */
> void amdgpu_pasid_free(u32 pasid)
> {
> + unsigned long flags;
> +
> trace_amdgpu_pasid_freed(pasid);
>
> - spin_lock(&amdgpu_pasid_idr_lock);
> - idr_remove(&amdgpu_pasid_idr, pasid);
> - spin_unlock(&amdgpu_pasid_idr_lock);
> + xa_lock_irqsave(&amdgpu_pasid_xa, flags);
> + __xa_erase(&amdgpu_pasid_xa, pasid);
> + xa_unlock_irqrestore(&amdgpu_pasid_xa, flags);
> }
>
> static void amdgpu_pasid_free_cb(struct dma_fence *fence,
> @@ -625,13 +631,11 @@ void amdgpu_vmid_mgr_fini(struct amdgpu_device *adev)
> }
>
> /**
> - * amdgpu_pasid_mgr_cleanup - cleanup PASID manager
> + * amdgpu_pasid_mgr_cleanup - Cleanup PASID manager
> *
> - * Cleanup the IDR allocator.
> + * Free all internal data structures of the XArray allocator.
> */
> void amdgpu_pasid_mgr_cleanup(void)
> {
> - spin_lock(&amdgpu_pasid_idr_lock);
> - idr_destroy(&amdgpu_pasid_idr);
> - spin_unlock(&amdgpu_pasid_idr_lock);
> + xa_destroy(&amdgpu_pasid_xa);
> }
* Claude review: drm/amdgpu: replace PASID IDR with XArray
2026-03-30 11:35 [PATCH v3] drm/amdgpu: replace PASID IDR with XArray Mikhail Gavrilov
2026-03-30 13:10 ` Lazar, Lijo
@ 2026-03-31 7:18 ` Claude Code Review Bot
2026-03-31 7:18 ` Claude Code Review Bot
2 siblings, 0 replies; 4+ messages in thread
From: Claude Code Review Bot @ 2026-03-31 7:18 UTC (permalink / raw)
To: dri-devel-reviews
Overall Series Review
Subject: drm/amdgpu: replace PASID IDR with XArray
Author: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
Patches: 1
Reviewed: 2026-03-31T17:18:18.250988
---
This is a single-patch series (v3) that fixes two real locking bugs introduced by commit `8f1de51f49be` in the amdgpu PASID allocator:
1. `GFP_KERNEL` allocation under `spin_lock()` (can sleep)
2. Plain `spin_lock()` used in process context while `amdgpu_pasid_free()` can be called from hardirq context via fence callbacks (deadlock)
The approach of replacing IDR+spinlock with XArray is clean and was suggested by the AMD maintainer (Lijo Lazar). XArray provides both built-in cyclic allocation and IRQ-safe internal locking, fixing both bugs in one conversion. The patch is well-motivated, the commit message is excellent with clear documentation of the bugs and call chains, and the v3 changelog shows good iteration based on reviewer feedback.
**Verdict: Looks good overall, with one minor nit.**
---
Generated by Claude Code Patch Reviewer
* Claude review: drm/amdgpu: replace PASID IDR with XArray
2026-03-30 11:35 [PATCH v3] drm/amdgpu: replace PASID IDR with XArray Mikhail Gavrilov
2026-03-30 13:10 ` Lazar, Lijo
2026-03-31 7:18 ` Claude review: " Claude Code Review Bot
@ 2026-03-31 7:18 ` Claude Code Review Bot
2 siblings, 0 replies; 4+ messages in thread
From: Claude Code Review Bot @ 2026-03-31 7:18 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
**Commit message quality:** Excellent. The two bugs are clearly described, the hardirq call chain is documented, and the real-world trigger scenario (RX 7900 XTX + Vulkan/Proton) is mentioned. The Fixes/Cc-stable tags are correct.
**Code review:**
The XArray conversion is mechanically correct:
- `DEFINE_XARRAY_ALLOC` is the right macro for an XArray that will use `xa_alloc` / `__xa_alloc_cyclic`.
- The `static u32 amdgpu_pasid_xa_next` cursor for cyclic allocation is correct — `__xa_alloc_cyclic` requires a caller-managed `next` variable.
- `xa_lock_irqsave` / `xa_unlock_irqrestore` properly handles the hardirq context issue (bug #2).
- `GFP_ATOMIC` is correct since allocation happens under the xa_lock (bug #1 fix).
- `xa_mk_value(0)` as the stored entry is fine — XArray treats `NULL` as "empty slot", so a non-NULL value is needed to mark the PASID as allocated. Using an internal value entry avoids memory allocation for the stored pointer.
- The limit `XA_LIMIT(1, (1U << bits) - 1)` correctly maps the old IDR range `[1, 1U << bits)` — IDR's `end` parameter is exclusive while XA_LIMIT's max is inclusive, so `(1U << bits) - 1` is the right translation.
**Minor nit on return value handling:**
```c
r = __xa_alloc_cyclic(&amdgpu_pasid_xa, &pasid, xa_mk_value(0),
XA_LIMIT(1, (1U << bits) - 1),
&amdgpu_pasid_xa_next, GFP_ATOMIC);
xa_unlock_irqrestore(&amdgpu_pasid_xa, flags);
if (r >= 0) {
trace_amdgpu_pasid_allocated(pasid);
return pasid;
}
return r;
```
`__xa_alloc_cyclic` returns 0 on success, 1 if the allocation wrapped around (both >= 0), or a negative errno. Wrapping is not an error, and the `r >= 0` check correctly treats both success cases as success. Good.
**Cleanup function:**
```c
void amdgpu_pasid_mgr_cleanup(void)
{
xa_destroy(&amdgpu_pasid_xa);
}
```
`xa_destroy()` does not require external locking — it frees internal nodes and resets the array. This is correct and simpler than the IDR version which took the spinlock around `idr_destroy()`. At cleanup time there should be no concurrent access anyway.
**No issues found.** The patch is a clean, correct conversion that fixes both bugs. The only potential concern is that `GFP_ATOMIC` is more restrictive than `GFP_KERNEL` (could fail under memory pressure), but this is unavoidable when allocating under a lock that may be taken in IRQ context, and PASID allocation failures are already handled by callers.
---
Generated by Claude Code Patch Reviewer