public inbox for drm-ai-reviews@public-inbox.freedesktop.org
 help / color / mirror / Atom feed
* [PATCH v4] drm/amdgpu: replace PASID IDR with XArray
@ 2026-03-30 14:50 Mikhail Gavrilov
  2026-03-30 17:32 ` Lazar, Lijo
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Mikhail Gavrilov @ 2026-03-30 14:50 UTC (permalink / raw)
  To: Alex Deucher, Christian König
  Cc: lijo.lazar, Eric Huang, David Airlie, Simona Vetter, amd-gfx,
	dri-devel, stable, Mikhail Gavrilov

Commit 8f1de51f49be ("drm/amdgpu: prevent immediate PASID reuse case")
converted the global PASID allocator from IDA to IDR with a spinlock
for cyclic allocation, but introduced two locking bugs:

1) idr_alloc_cyclic() is called with GFP_KERNEL under spin_lock(),
   which can sleep.

2) amdgpu_pasid_free() can be called from hardirq context via the
   fence signal path (amdgpu_pasid_free_cb), but the lock is taken
   with plain spin_lock() in process context, creating a potential
   deadlock:

     CPU0
     ----
     spin_lock(&amdgpu_pasid_idr_lock)   // process context, IRQs on
     <Interrupt>
       spin_lock(&amdgpu_pasid_idr_lock) // deadlock

   The hardirq call chain is:

     sdma_v6_0_process_trap_irq
      -> amdgpu_fence_process
       -> dma_fence_signal
        -> drm_sched_job_done
         -> dma_fence_signal
          -> amdgpu_pasid_free_cb
           -> amdgpu_pasid_free

   This was observed on an RX 7900 XTX when exiting a Vulkan game
   running under Proton/Wine, which triggers the fence callback path
   during VM teardown.

Replace the IDR + spinlock with XArray.  xa_alloc_cyclic() handles
GFP_KERNEL pre-allocation and IRQ-safe locking internally, and
xa_erase() is already IRQ-safe, so no explicit locking is needed.
This fixes both bugs in a single, cleaner conversion.

Suggested-by: Lijo Lazar <lijo.lazar@amd.com>
Fixes: 8f1de51f49be ("drm/amdgpu: prevent immediate PASID reuse case")
Cc: stable@vger.kernel.org
Signed-off-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
---

v4: Use xa_alloc_cyclic/xa_erase directly instead of explicit
    xa_lock_irqsave, as suggested by Lijo Lazar.
v3: Replace IDR with XArray instead of fixing the spinlock, as
    suggested by Lijo Lazar.
    https://lore.kernel.org/all/20260330110346.16548-1-mikhail.v.gavrilov@gmail.com/
v2: Added second patch fixing the {HARDIRQ-ON-W} -> {IN-HARDIRQ-W}
    lock inconsistency (spin_lock -> spin_lock_irqsave).
    https://lore.kernel.org/all/20260330053025.19203-1-mikhail.v.gavrilov@gmail.com/
v1: Fixed sleeping-under-spinlock (idr_alloc_cyclic with GFP_KERNEL)
    using idr_preload/GFP_NOWAIT.
    https://lore.kernel.org/all/20260328213900.19255-1-mikhail.v.gavrilov@gmail.com/

 drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c | 43 +++++++++++--------------
 1 file changed, 19 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
index d88523568b62..2b63b54eaaa7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
@@ -22,7 +22,7 @@
  */
 #include "amdgpu_ids.h"
 
-#include <linux/idr.h>
+#include <linux/xarray.h>
 #include <linux/dma-fence-array.h>
 
 
@@ -35,13 +35,13 @@
  * PASIDs are global address space identifiers that can be shared
  * between the GPU, an IOMMU and the driver. VMs on different devices
  * may use the same PASID if they share the same address
- * space. Therefore PASIDs are allocated using IDR cyclic allocator
- * (similar to kernel PID allocation) which naturally delays reuse.
- * VMs are looked up from the PASID per amdgpu_device.
+ * space. Therefore PASIDs are allocated using an XArray cyclic
+ * allocator (similar to kernel PID allocation) which naturally delays
+ * reuse. VMs are looked up from the PASID per amdgpu_device.
  */
 
-static DEFINE_IDR(amdgpu_pasid_idr);
-static DEFINE_SPINLOCK(amdgpu_pasid_idr_lock);
+static DEFINE_XARRAY_ALLOC(amdgpu_pasid_xa);
+static u32 amdgpu_pasid_xa_next;
 
 /* Helper to free pasid from a fence callback */
 struct amdgpu_pasid_cb {
@@ -53,8 +53,7 @@ struct amdgpu_pasid_cb {
  * amdgpu_pasid_alloc - Allocate a PASID
  * @bits: Maximum width of the PASID in bits, must be at least 1
  *
- * Uses kernel's IDR cyclic allocator (same as PID allocation).
- * Allocates sequentially with automatic wrap-around.
+ * Uses XArray cyclic allocator for sequential allocation with wrap-around.
  *
  * Returns a positive integer on success. Returns %-EINVAL if bits==0.
  * Returns %-ENOSPC if no PASID was available. Returns %-ENOMEM on
@@ -62,20 +61,22 @@ struct amdgpu_pasid_cb {
  */
 int amdgpu_pasid_alloc(unsigned int bits)
 {
-	int pasid;
+	u32 pasid;
+	int r;
 
 	if (bits == 0)
 		return -EINVAL;
 
-	spin_lock(&amdgpu_pasid_idr_lock);
-	pasid = idr_alloc_cyclic(&amdgpu_pasid_idr, NULL, 1,
-				 1U << bits, GFP_KERNEL);
-	spin_unlock(&amdgpu_pasid_idr_lock);
+	r = xa_alloc_cyclic(&amdgpu_pasid_xa, &pasid, xa_mk_value(0),
+			    XA_LIMIT(1, (1U << bits) - 1),
+			    &amdgpu_pasid_xa_next, GFP_KERNEL);
 
-	if (pasid >= 0)
+	if (r >= 0) {
 		trace_amdgpu_pasid_allocated(pasid);
+		return pasid;
+	}
 
-	return pasid;
+	return r;
 }
 
 /**
@@ -86,9 +87,7 @@ void amdgpu_pasid_free(u32 pasid)
 {
 	trace_amdgpu_pasid_freed(pasid);
 
-	spin_lock(&amdgpu_pasid_idr_lock);
-	idr_remove(&amdgpu_pasid_idr, pasid);
-	spin_unlock(&amdgpu_pasid_idr_lock);
+	xa_erase(&amdgpu_pasid_xa, pasid);
 }
 
 static void amdgpu_pasid_free_cb(struct dma_fence *fence,
@@ -625,13 +624,9 @@ void amdgpu_vmid_mgr_fini(struct amdgpu_device *adev)
 }
 
 /**
- * amdgpu_pasid_mgr_cleanup - cleanup PASID manager
- *
- * Cleanup the IDR allocator.
+ * amdgpu_pasid_mgr_cleanup - Cleanup PASID manager
  */
 void amdgpu_pasid_mgr_cleanup(void)
 {
-	spin_lock(&amdgpu_pasid_idr_lock);
-	idr_destroy(&amdgpu_pasid_idr);
-	spin_unlock(&amdgpu_pasid_idr_lock);
+	xa_destroy(&amdgpu_pasid_xa);
 }
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 5+ messages in thread

* Re: [PATCH v4] drm/amdgpu: replace PASID IDR with XArray
  2026-03-30 14:50 [PATCH v4] drm/amdgpu: replace PASID IDR with XArray Mikhail Gavrilov
@ 2026-03-30 17:32 ` Lazar, Lijo
  2026-03-30 19:40   ` Mikhail Gavrilov
  2026-03-31  7:08 ` Claude review: " Claude Code Review Bot
  2026-03-31  7:08 ` Claude Code Review Bot
  2 siblings, 1 reply; 5+ messages in thread
From: Lazar, Lijo @ 2026-03-30 17:32 UTC (permalink / raw)
  To: Mikhail Gavrilov, Alex Deucher, Christian König
  Cc: Eric Huang, David Airlie, Simona Vetter, amd-gfx, dri-devel,
	stable



On 30-Mar-26 8:20 PM, Mikhail Gavrilov wrote:
> Commit 8f1de51f49be ("drm/amdgpu: prevent immediate PASID reuse case")
> converted the global PASID allocator from IDA to IDR with a spinlock
> for cyclic allocation, but introduced two locking bugs:
> 
> 1) idr_alloc_cyclic() is called with GFP_KERNEL under spin_lock(),
>     which can sleep.
> 
> 2) amdgpu_pasid_free() can be called from hardirq context via the
>     fence signal path (amdgpu_pasid_free_cb), but the lock is taken
>     with plain spin_lock() in process context, creating a potential
>     deadlock:
> 
>       CPU0
>       ----
>       spin_lock(&amdgpu_pasid_idr_lock)   // process context, IRQs on
>       <Interrupt>
>         spin_lock(&amdgpu_pasid_idr_lock) // deadlock
> 
>     The hardirq call chain is:
> 
>       sdma_v6_0_process_trap_irq
>        -> amdgpu_fence_process
>         -> dma_fence_signal
>          -> drm_sched_job_done
>           -> dma_fence_signal
>            -> amdgpu_pasid_free_cb
>             -> amdgpu_pasid_free
> 
>     This was observed on an RX 7900 XTX when exiting a Vulkan game
>     running under Proton/Wine, which triggers the fence callback path
>     during VM teardown.
> 
> Replace the IDR + spinlock with XArray.  xa_alloc_cyclic() handles
> GFP_KERNEL pre-allocation and IRQ-safe locking internally, and
> xa_erase() is already IRQ-safe, so no explicit locking is needed.
> This fixes both bugs in a single, cleaner conversion.
> 
> Suggested-by: Lijo Lazar <lijo.lazar@amd.com>
> Fixes: 8f1de51f49be ("drm/amdgpu: prevent immediate PASID reuse case")
> Cc: stable@vger.kernel.org
> Signed-off-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
> ---
> 
> v4: Use xa_alloc_cyclic/xa_erase directly instead of explicit
>      xa_lock_irqsave, as suggested by Lijo Lazar.

Sorry, I didn't mean to confuse. In v3, I was only talking about the
alloc_cyclic call.

As per the call trace posted, amdgpu_pasid_free() can be called from
irq context, and that path may still need the irq save/restore
approach. Eric/Christian, could you confirm?

Thanks,
Lijo

> v3: Replace IDR with XArray instead of fixing the spinlock, as
>      suggested by Lijo Lazar.
>      https://lore.kernel.org/all/20260330110346.16548-1-mikhail.v.gavrilov@gmail.com/
> v2: Added second patch fixing the {HARDIRQ-ON-W} -> {IN-HARDIRQ-W}
>      lock inconsistency (spin_lock -> spin_lock_irqsave).
>      https://lore.kernel.org/all/20260330053025.19203-1-mikhail.v.gavrilov@gmail.com/
> v1: Fixed sleeping-under-spinlock (idr_alloc_cyclic with GFP_KERNEL)
>      using idr_preload/GFP_NOWAIT.
>      https://lore.kernel.org/all/20260328213900.19255-1-mikhail.v.gavrilov@gmail.com/
> 
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c | 43 +++++++++++--------------
>   1 file changed, 19 insertions(+), 24 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> index d88523568b62..2b63b54eaaa7 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> @@ -22,7 +22,7 @@
>    */
>   #include "amdgpu_ids.h"
>   
> -#include <linux/idr.h>
> +#include <linux/xarray.h>
>   #include <linux/dma-fence-array.h>
>   
>   
> @@ -35,13 +35,13 @@
>    * PASIDs are global address space identifiers that can be shared
>    * between the GPU, an IOMMU and the driver. VMs on different devices
>    * may use the same PASID if they share the same address
> - * space. Therefore PASIDs are allocated using IDR cyclic allocator
> - * (similar to kernel PID allocation) which naturally delays reuse.
> - * VMs are looked up from the PASID per amdgpu_device.
> + * space. Therefore PASIDs are allocated using an XArray cyclic
> + * allocator (similar to kernel PID allocation) which naturally delays
> + * reuse. VMs are looked up from the PASID per amdgpu_device.
>    */
>   
> -static DEFINE_IDR(amdgpu_pasid_idr);
> -static DEFINE_SPINLOCK(amdgpu_pasid_idr_lock);
> +static DEFINE_XARRAY_ALLOC(amdgpu_pasid_xa);
> +static u32 amdgpu_pasid_xa_next;
>   
>   /* Helper to free pasid from a fence callback */
>   struct amdgpu_pasid_cb {
> @@ -53,8 +53,7 @@ struct amdgpu_pasid_cb {
>    * amdgpu_pasid_alloc - Allocate a PASID
>    * @bits: Maximum width of the PASID in bits, must be at least 1
>    *
> - * Uses kernel's IDR cyclic allocator (same as PID allocation).
> - * Allocates sequentially with automatic wrap-around.
> + * Uses XArray cyclic allocator for sequential allocation with wrap-around.
>    *
>    * Returns a positive integer on success. Returns %-EINVAL if bits==0.
>    * Returns %-ENOSPC if no PASID was available. Returns %-ENOMEM on
> @@ -62,20 +61,22 @@ struct amdgpu_pasid_cb {
>    */
>   int amdgpu_pasid_alloc(unsigned int bits)
>   {
> -	int pasid;
> +	u32 pasid;
> +	int r;
>   
>   	if (bits == 0)
>   		return -EINVAL;
>   
> -	spin_lock(&amdgpu_pasid_idr_lock);
> -	pasid = idr_alloc_cyclic(&amdgpu_pasid_idr, NULL, 1,
> -				 1U << bits, GFP_KERNEL);
> -	spin_unlock(&amdgpu_pasid_idr_lock);
> +	r = xa_alloc_cyclic(&amdgpu_pasid_xa, &pasid, xa_mk_value(0),
> +			    XA_LIMIT(1, (1U << bits) - 1),
> +			    &amdgpu_pasid_xa_next, GFP_KERNEL);
>   
> -	if (pasid >= 0)
> +	if (r >= 0) {
>   		trace_amdgpu_pasid_allocated(pasid);
> +		return pasid;
> +	}
>   
> -	return pasid;
> +	return r;
>   }
>   
>   /**
> @@ -86,9 +87,7 @@ void amdgpu_pasid_free(u32 pasid)
>   {
>   	trace_amdgpu_pasid_freed(pasid);
>   
> -	spin_lock(&amdgpu_pasid_idr_lock);
> -	idr_remove(&amdgpu_pasid_idr, pasid);
> -	spin_unlock(&amdgpu_pasid_idr_lock);
> +	xa_erase(&amdgpu_pasid_xa, pasid);
>   }
>   
>   static void amdgpu_pasid_free_cb(struct dma_fence *fence,
> @@ -625,13 +624,9 @@ void amdgpu_vmid_mgr_fini(struct amdgpu_device *adev)
>   }
>   
>   /**
> - * amdgpu_pasid_mgr_cleanup - cleanup PASID manager
> - *
> - * Cleanup the IDR allocator.
> + * amdgpu_pasid_mgr_cleanup - Cleanup PASID manager
>    */
>   void amdgpu_pasid_mgr_cleanup(void)
>   {
> -	spin_lock(&amdgpu_pasid_idr_lock);
> -	idr_destroy(&amdgpu_pasid_idr);
> -	spin_unlock(&amdgpu_pasid_idr_lock);
> +	xa_destroy(&amdgpu_pasid_xa);
>   }


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PATCH v4] drm/amdgpu: replace PASID IDR with XArray
  2026-03-30 17:32 ` Lazar, Lijo
@ 2026-03-30 19:40   ` Mikhail Gavrilov
  0 siblings, 0 replies; 5+ messages in thread
From: Mikhail Gavrilov @ 2026-03-30 19:40 UTC (permalink / raw)
  To: Lazar, Lijo
  Cc: Alex Deucher, Christian König, Eric Huang, David Airlie,
	Simona Vetter, amd-gfx, dri-devel, stable

On Mon, Mar 30, 2026 at 10:33 PM Lazar, Lijo <lijo.lazar@amd.com> wrote:
>
> Sorry, I didn't mean to confuse. In v3, was only talking about
> alloc_cyclic call.
>
> As per the call trace posted, amdgpu_pasid_free() has a chance to be
> called from irq context and that may still use irq save/restore
> approach. Eric/Christian, could you confirm?

Hi Lijo,

You're right, xa_erase() uses plain xa_lock() without irqsave —
I verified in lib/xarray.c.

I've sent v5 which uses xa_lock_irqsave/__xa_erase for
amdgpu_pasid_free() since it can be called from hardirq via
amdgpu_pasid_free_cb.  xa_alloc_cyclic() in amdgpu_pasid_alloc()
is kept as-is since it handles irq-safe locking internally.

https://lore.kernel.org/all/20260330191120.105065-1-mikhail.v.gavrilov@gmail.com/

Thanks for catching this.

-- 
Best Regards,
Mike Gavrilov.

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Claude review: drm/amdgpu: replace PASID IDR with XArray
  2026-03-30 14:50 [PATCH v4] drm/amdgpu: replace PASID IDR with XArray Mikhail Gavrilov
  2026-03-30 17:32 ` Lazar, Lijo
  2026-03-31  7:08 ` Claude review: " Claude Code Review Bot
@ 2026-03-31  7:08 ` Claude Code Review Bot
  2 siblings, 0 replies; 5+ messages in thread
From: Claude Code Review Bot @ 2026-03-31  7:08 UTC (permalink / raw)
  To: dri-devel-reviews

Overall Series Review

Subject: drm/amdgpu: replace PASID IDR with XArray
Author: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
Patches: 1
Reviewed: 2026-03-31T17:08:59.509731

---

This is a single well-motivated bugfix patch (v4) that replaces the IDR + spinlock PASID allocator with XArray to fix two real locking bugs introduced by commit 8f1de51f49be:

1. `GFP_KERNEL` allocation under `spin_lock()` (can sleep)
2. Plain `spin_lock()` used in process context while `amdgpu_pasid_free()` can be called from hardirq context (deadlock)

The XArray conversion is the right direction: `xa_alloc_cyclic()` handles its own locking and GFP_KERNEL preallocation internally, and `xa_erase()` also takes the array lock itself. Whether that internal locking is actually IRQ-safe for the hardirq free path is worth confirming against lib/xarray.c (see Lijo's follow-up in the thread). The patch is clean, well-explained, and has gone through good iteration (v1→v4) with reviewer feedback from Lijo Lazar.

**Verdict: the conversion looks sound, pending the hardirq locking question raised in the thread.** A minor note is covered in the accompanying patch review.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Claude review: drm/amdgpu: replace PASID IDR with XArray
  2026-03-30 14:50 [PATCH v4] drm/amdgpu: replace PASID IDR with XArray Mikhail Gavrilov
  2026-03-30 17:32 ` Lazar, Lijo
@ 2026-03-31  7:08 ` Claude Code Review Bot
  2026-03-31  7:08 ` Claude Code Review Bot
  2 siblings, 0 replies; 5+ messages in thread
From: Claude Code Review Bot @ 2026-03-31  7:08 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**Commit message:** Excellent. Clearly describes both bugs, includes the deadlock call chain, identifies the real-world trigger (RX 7900 XTX + Vulkan/Proton), and explains why XArray fixes both issues. The version history is thorough. Has appropriate Fixes/Cc-stable tags.

**Code review:**

The conversion is straightforward and correct:

```c
static DEFINE_XARRAY_ALLOC(amdgpu_pasid_xa);
static u32 amdgpu_pasid_xa_next;
```
`DEFINE_XARRAY_ALLOC` is the right macro for cyclic allocation use. The `u32` next cursor is correct for `xa_alloc_cyclic()`.

```c
r = xa_alloc_cyclic(&amdgpu_pasid_xa, &pasid, xa_mk_value(0),
		    XA_LIMIT(1, (1U << bits) - 1),
		    &amdgpu_pasid_xa_next, GFP_KERNEL);
```
This is correct. The upper bound `(1U << bits) - 1` is right: the old IDR code passed `1U << bits` as an exclusive end, while `XA_LIMIT` takes an inclusive max, so both describe the same range [1, 2^bits - 1]. `xa_mk_value(0)` stores the integer 0 as a tagged, non-NULL entry, which is all that is needed here: the entry only has to be non-NULL to mark the PASID as allocated, since the allocator treats NULL slots as free.

**Note:** `xa_alloc_cyclic()` returns 1 when the allocation wraps and 0 on normal success, so the check `if (r >= 0)` correctly treats both as success. The return-value restructuring (early return on success, fall through to return the error) is fine and arguably cleaner than the original.

```c
xa_erase(&amdgpu_pasid_xa, pasid);
```
Caution: `xa_erase()` actually takes the lock with a plain `xa_lock()` internally (see lib/xarray.c), not an IRQ-disabling variant, so on its own it is not safe against the hardirq free path. This is the point Lijo raises in the thread, and v5 switches this call site to `xa_lock_irqsave()`/`__xa_erase()`.

```c
xa_destroy(&amdgpu_pasid_xa);
```
Correct — no external locking needed, `xa_destroy()` handles it.

**One minor observation:** The `bits` parameter could theoretically be 32, in which case `1U << 32` is undefined behavior in C. However, this is not a regression introduced by this patch — the original IDR code had the same issue, and in practice `bits` is always 16 (from `amdgpu_vm_pasid_bits`). Not something to block this patch on.

**Reviewed-by worthy:** Mostly. Apart from the hardirq locking question on the free path, the patch is well-documented and a clear improvement over the buggy IDR+spinlock approach.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2026-03-31  7:08 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-30 14:50 [PATCH v4] drm/amdgpu: replace PASID IDR with XArray Mikhail Gavrilov
2026-03-30 17:32 ` Lazar, Lijo
2026-03-30 19:40   ` Mikhail Gavrilov
2026-03-31  7:08 ` Claude review: " Claude Code Review Bot
2026-03-31  7:08 ` Claude Code Review Bot

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox