public inbox for drm-ai-reviews@public-inbox.freedesktop.org
* [PATCH v6] drm/amdgpu: replace PASID IDR with XArray
@ 2026-03-31 11:17 Mikhail Gavrilov
  2026-03-31 12:28 ` Christian König
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Mikhail Gavrilov @ 2026-03-31 11:17 UTC (permalink / raw)
  To: Alex Deucher, Christian König
  Cc: lijo.lazar, Eric Huang, David Airlie, Simona Vetter, amd-gfx,
	dri-devel, Mikhail Gavrilov

Commit 8f1de51f49be ("drm/amdgpu: prevent immediate PASID reuse case")
converted the global PASID allocator from IDA to IDR with a spinlock
for cyclic allocation, but introduced two locking bugs:

1) idr_alloc_cyclic() is called with GFP_KERNEL under spin_lock(),
   which can sleep.

2) amdgpu_pasid_free() can be called from hardirq context via the
   fence signal path (amdgpu_pasid_free_cb), but the lock is taken
   with plain spin_lock() in process context, creating a potential
   deadlock:

     CPU0
     ----
     spin_lock(&amdgpu_pasid_idr_lock)   // process context, IRQs on
     <Interrupt>
       spin_lock(&amdgpu_pasid_idr_lock) // deadlock

   The hardirq call chain is:

     sdma_v6_0_process_trap_irq
      -> amdgpu_fence_process
       -> dma_fence_signal
        -> drm_sched_job_done
         -> dma_fence_signal
          -> amdgpu_pasid_free_cb
           -> amdgpu_pasid_free

   This was observed on an RX 7900 XTX when exiting a Vulkan game
   running under Proton/Wine, which triggers the fence callback path
   during VM teardown.

Replace the IDR + spinlock with XArray using XA_FLAGS_LOCK_IRQ (all
xa operations use IRQ-safe locking internally) and XA_FLAGS_ALLOC1
(zero is not a valid PASID).  Both xa_alloc_cyclic() and xa_erase()
then handle locking consistently, fixing both bugs.

Suggested-by: Lijo Lazar <lijo.lazar@amd.com>
Fixes: 8f1de51f49be ("drm/amdgpu: prevent immediate PASID reuse case")
Signed-off-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
---

v6: Use DEFINE_XARRAY_FLAGS with XA_FLAGS_LOCK_IRQ | XA_FLAGS_ALLOC1
    so all xa operations use IRQ-safe locking internally.  Drop
    Cc: stable since the regression was never released to any stable
    kernel. (Christian König)
v5: Use explicit xa_lock_irqsave/__xa_erase for amdgpu_pasid_free()
    since xa_erase() only uses plain xa_lock() which is not safe from
    hardirq context.
    https://lore.kernel.org/all/20260330191120.105065-1-mikhail.v.gavrilov@gmail.com/
v4: Use xa_alloc_cyclic/xa_erase directly instead of explicit
    xa_lock_irqsave, as suggested by Lijo Lazar.
    https://lore.kernel.org/all/20260330162038.25073-1-mikhail.v.gavrilov@gmail.com/
v3: Replace IDR with XArray instead of fixing the spinlock, as
    suggested by Lijo Lazar.
    https://lore.kernel.org/all/20260330110346.16548-1-mikhail.v.gavrilov@gmail.com/
v2: Added second patch fixing the {HARDIRQ-ON-W} -> {IN-HARDIRQ-W}
    lock inconsistency (spin_lock -> spin_lock_irqsave).
    https://lore.kernel.org/all/20260330053025.19203-1-mikhail.v.gavrilov@gmail.com/
v1: Fixed sleeping-under-spinlock (idr_alloc_cyclic with GFP_KERNEL)
    using idr_preload/GFP_NOWAIT.
    https://lore.kernel.org/all/20260328213900.19255-1-mikhail.v.gavrilov@gmail.com/

 drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c | 43 +++++++++++--------------
 1 file changed, 19 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
index d88523568b62..9f264d439f3d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
@@ -22,7 +22,7 @@
  */
 #include "amdgpu_ids.h"
 
-#include <linux/idr.h>
+#include <linux/xarray.h>
 #include <linux/dma-fence-array.h>
 
 
@@ -35,13 +35,13 @@
  * PASIDs are global address space identifiers that can be shared
  * between the GPU, an IOMMU and the driver. VMs on different devices
  * may use the same PASID if they share the same address
- * space. Therefore PASIDs are allocated using IDR cyclic allocator
- * (similar to kernel PID allocation) which naturally delays reuse.
- * VMs are looked up from the PASID per amdgpu_device.
+ * space. Therefore PASIDs are allocated using an XArray cyclic
+ * allocator (similar to kernel PID allocation) which naturally delays
+ * reuse. VMs are looked up from the PASID per amdgpu_device.
  */
 
-static DEFINE_IDR(amdgpu_pasid_idr);
-static DEFINE_SPINLOCK(amdgpu_pasid_idr_lock);
+static DEFINE_XARRAY_FLAGS(amdgpu_pasid_xa, XA_FLAGS_LOCK_IRQ | XA_FLAGS_ALLOC1);
+static u32 amdgpu_pasid_xa_next;
 
 /* Helper to free pasid from a fence callback */
 struct amdgpu_pasid_cb {
@@ -53,8 +53,7 @@ struct amdgpu_pasid_cb {
  * amdgpu_pasid_alloc - Allocate a PASID
  * @bits: Maximum width of the PASID in bits, must be at least 1
  *
- * Uses kernel's IDR cyclic allocator (same as PID allocation).
- * Allocates sequentially with automatic wrap-around.
+ * Uses XArray cyclic allocator for sequential allocation with wrap-around.
  *
  * Returns a positive integer on success. Returns %-EINVAL if bits==0.
  * Returns %-ENOSPC if no PASID was available. Returns %-ENOMEM on
@@ -62,20 +61,22 @@ struct amdgpu_pasid_cb {
  */
 int amdgpu_pasid_alloc(unsigned int bits)
 {
-	int pasid;
+	u32 pasid;
+	int r;
 
 	if (bits == 0)
 		return -EINVAL;
 
-	spin_lock(&amdgpu_pasid_idr_lock);
-	pasid = idr_alloc_cyclic(&amdgpu_pasid_idr, NULL, 1,
-				 1U << bits, GFP_KERNEL);
-	spin_unlock(&amdgpu_pasid_idr_lock);
+	r = xa_alloc_cyclic(&amdgpu_pasid_xa, &pasid, xa_mk_value(0),
+			    XA_LIMIT(1, (1U << bits) - 1),
+			    &amdgpu_pasid_xa_next, GFP_KERNEL);
 
-	if (pasid >= 0)
+	if (r >= 0) {
 		trace_amdgpu_pasid_allocated(pasid);
+		return pasid;
+	}
 
-	return pasid;
+	return r;
 }
 
 /**
@@ -86,9 +87,7 @@ void amdgpu_pasid_free(u32 pasid)
 {
 	trace_amdgpu_pasid_freed(pasid);
 
-	spin_lock(&amdgpu_pasid_idr_lock);
-	idr_remove(&amdgpu_pasid_idr, pasid);
-	spin_unlock(&amdgpu_pasid_idr_lock);
+	xa_erase(&amdgpu_pasid_xa, pasid);
 }
 
 static void amdgpu_pasid_free_cb(struct dma_fence *fence,
@@ -625,13 +624,9 @@ void amdgpu_vmid_mgr_fini(struct amdgpu_device *adev)
 }
 
 /**
- * amdgpu_pasid_mgr_cleanup - cleanup PASID manager
- *
- * Cleanup the IDR allocator.
+ * amdgpu_pasid_mgr_cleanup - Cleanup PASID manager
  */
 void amdgpu_pasid_mgr_cleanup(void)
 {
-	spin_lock(&amdgpu_pasid_idr_lock);
-	idr_destroy(&amdgpu_pasid_idr);
-	spin_unlock(&amdgpu_pasid_idr_lock);
+	xa_destroy(&amdgpu_pasid_xa);
 }
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 5+ messages in thread

* Re: [PATCH v6] drm/amdgpu: replace PASID IDR with XArray
  2026-03-31 11:17 [PATCH v6] drm/amdgpu: replace PASID IDR with XArray Mikhail Gavrilov
@ 2026-03-31 12:28 ` Christian König
  2026-03-31 14:30   ` Mikhail Gavrilov
  2026-03-31 21:48 ` Claude review: " Claude Code Review Bot
  2026-03-31 21:48 ` Claude Code Review Bot
  2 siblings, 1 reply; 5+ messages in thread
From: Christian König @ 2026-03-31 12:28 UTC (permalink / raw)
  To: Mikhail Gavrilov, Alex Deucher
  Cc: lijo.lazar, Eric Huang, David Airlie, Simona Vetter, amd-gfx,
	dri-devel

On 3/31/26 13:17, Mikhail Gavrilov wrote:
> Commit 8f1de51f49be ("drm/amdgpu: prevent immediate PASID reuse case")
> converted the global PASID allocator from IDA to IDR with a spinlock
> for cyclic allocation, but introduced two locking bugs:
> 
> 1) idr_alloc_cyclic() is called with GFP_KERNEL under spin_lock(),
>    which can sleep.
> 
> 2) amdgpu_pasid_free() can be called from hardirq context via the
>    fence signal path (amdgpu_pasid_free_cb), but the lock is taken
>    with plain spin_lock() in process context, creating a potential
>    deadlock:
> 
>      CPU0
>      ----
>      spin_lock(&amdgpu_pasid_idr_lock)   // process context, IRQs on
>      <Interrupt>
>        spin_lock(&amdgpu_pasid_idr_lock) // deadlock
> 
>    The hardirq call chain is:
> 
>      sdma_v6_0_process_trap_irq
>       -> amdgpu_fence_process
>        -> dma_fence_signal
>         -> drm_sched_job_done
>          -> dma_fence_signal
>           -> amdgpu_pasid_free_cb
>            -> amdgpu_pasid_free
> 
>    This was observed on an RX 7900 XTX when exiting a Vulkan game
>    running under Proton/Wine, which triggers the fence callback path
>    during VM teardown.
> 
> Replace the IDR + spinlock with XArray using XA_FLAGS_LOCK_IRQ (all
> xa operations use IRQ-safe locking internally) and XA_FLAGS_ALLOC1
> (zero is not a valid PASID).  Both xa_alloc_cyclic() and xa_erase()
> then handle locking consistently, fixing both bugs.
> 
> Suggested-by: Lijo Lazar <lijo.lazar@amd.com>

> Fixes: 8f1de51f49be ("drm/amdgpu: prevent immediate PASID reuse case")

That should be unnecessary. We already replaced GFP_KERNEL with GFP_ATOMIC in Alex's fixes pull.

> Signed-off-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
> ---
> 
> v6: Use DEFINE_XARRAY_FLAGS with XA_FLAGS_LOCK_IRQ | XA_FLAGS_ALLOC1
>     so all xa operations use IRQ-safe locking internally.  Drop
>     Cc: stable since the regression was never released to any stable
>     kernel. (Christian König)
> v5: Use explicit xa_lock_irqsave/__xa_erase for amdgpu_pasid_free()
>     since xa_erase() only uses plain xa_lock() which is not safe from
>     hardirq context.
>     https://lore.kernel.org/all/20260330191120.105065-1-mikhail.v.gavrilov@gmail.com/
> v4: Use xa_alloc_cyclic/xa_erase directly instead of explicit
>     xa_lock_irqsave, as suggested by Lijo Lazar.
>     https://lore.kernel.org/all/20260330162038.25073-1-mikhail.v.gavrilov@gmail.com/
> v3: Replace IDR with XArray instead of fixing the spinlock, as
>     suggested by Lijo Lazar.
>     https://lore.kernel.org/all/20260330110346.16548-1-mikhail.v.gavrilov@gmail.com/
> v2: Added second patch fixing the {HARDIRQ-ON-W} -> {IN-HARDIRQ-W}
>     lock inconsistency (spin_lock -> spin_lock_irqsave).
>     https://lore.kernel.org/all/20260330053025.19203-1-mikhail.v.gavrilov@gmail.com/
> v1: Fixed sleeping-under-spinlock (idr_alloc_cyclic with GFP_KERNEL)
>     using idr_preload/GFP_NOWAIT.
>     https://lore.kernel.org/all/20260328213900.19255-1-mikhail.v.gavrilov@gmail.com/
> 
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c | 43 +++++++++++--------------
>  1 file changed, 19 insertions(+), 24 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> index d88523568b62..9f264d439f3d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> @@ -22,7 +22,7 @@
>   */
>  #include "amdgpu_ids.h"
>  
> -#include <linux/idr.h>
> +#include <linux/xarray.h>
>  #include <linux/dma-fence-array.h>
>  
>  
> @@ -35,13 +35,13 @@
>   * PASIDs are global address space identifiers that can be shared
>   * between the GPU, an IOMMU and the driver. VMs on different devices
>   * may use the same PASID if they share the same address
> - * space. Therefore PASIDs are allocated using IDR cyclic allocator
> - * (similar to kernel PID allocation) which naturally delays reuse.
> - * VMs are looked up from the PASID per amdgpu_device.
> + * space. Therefore PASIDs are allocated using an XArray cyclic
> + * allocator (similar to kernel PID allocation) which naturally delays
> + * reuse. VMs are looked up from the PASID per amdgpu_device.
>   */
>  
> -static DEFINE_IDR(amdgpu_pasid_idr);
> -static DEFINE_SPINLOCK(amdgpu_pasid_idr_lock);
> +static DEFINE_XARRAY_FLAGS(amdgpu_pasid_xa, XA_FLAGS_LOCK_IRQ | XA_FLAGS_ALLOC1);
> +static u32 amdgpu_pasid_xa_next;
>  
>  /* Helper to free pasid from a fence callback */
>  struct amdgpu_pasid_cb {
> @@ -53,8 +53,7 @@ struct amdgpu_pasid_cb {
>   * amdgpu_pasid_alloc - Allocate a PASID
>   * @bits: Maximum width of the PASID in bits, must be at least 1
>   *
> - * Uses kernel's IDR cyclic allocator (same as PID allocation).
> - * Allocates sequentially with automatic wrap-around.
> + * Uses XArray cyclic allocator for sequential allocation with wrap-around.
>   *
>   * Returns a positive integer on success. Returns %-EINVAL if bits==0.
>   * Returns %-ENOSPC if no PASID was available. Returns %-ENOMEM on
> @@ -62,20 +61,22 @@ struct amdgpu_pasid_cb {
>   */
>  int amdgpu_pasid_alloc(unsigned int bits)
>  {
> -	int pasid;
> +	u32 pasid;
> +	int r;
>  
>  	if (bits == 0)
>  		return -EINVAL;
>  
> -	spin_lock(&amdgpu_pasid_idr_lock);
> -	pasid = idr_alloc_cyclic(&amdgpu_pasid_idr, NULL, 1,
> -				 1U << bits, GFP_KERNEL);
> -	spin_unlock(&amdgpu_pasid_idr_lock);
> +	r = xa_alloc_cyclic(&amdgpu_pasid_xa, &pasid, xa_mk_value(0),
> +			    XA_LIMIT(1, (1U << bits) - 1),
> +			    &amdgpu_pasid_xa_next, GFP_KERNEL);
>  
> -	if (pasid >= 0)
> +	if (r >= 0) {

I would turn that around, e.g. if (r < 0) return r;

Apart from that looks good to me.

Regards,
Christian.

>  		trace_amdgpu_pasid_allocated(pasid);
> +		return pasid;
> +	}
>  
> -	return pasid;
> +	return r;
>  }
>  
>  /**
> @@ -86,9 +87,7 @@ void amdgpu_pasid_free(u32 pasid)
>  {
>  	trace_amdgpu_pasid_freed(pasid);
>  
> -	spin_lock(&amdgpu_pasid_idr_lock);
> -	idr_remove(&amdgpu_pasid_idr, pasid);
> -	spin_unlock(&amdgpu_pasid_idr_lock);
> +	xa_erase(&amdgpu_pasid_xa, pasid);
>  }
>  
>  static void amdgpu_pasid_free_cb(struct dma_fence *fence,
> @@ -625,13 +624,9 @@ void amdgpu_vmid_mgr_fini(struct amdgpu_device *adev)
>  }
>  
>  /**
> - * amdgpu_pasid_mgr_cleanup - cleanup PASID manager
> - *
> - * Cleanup the IDR allocator.
> + * amdgpu_pasid_mgr_cleanup - Cleanup PASID manager
>   */
>  void amdgpu_pasid_mgr_cleanup(void)
>  {
> -	spin_lock(&amdgpu_pasid_idr_lock);
> -	idr_destroy(&amdgpu_pasid_idr);
> -	spin_unlock(&amdgpu_pasid_idr_lock);
> +	xa_destroy(&amdgpu_pasid_xa);
>  }



* Re: [PATCH v6] drm/amdgpu: replace PASID IDR with XArray
  2026-03-31 12:28 ` Christian König
@ 2026-03-31 14:30   ` Mikhail Gavrilov
  0 siblings, 0 replies; 5+ messages in thread
From: Mikhail Gavrilov @ 2026-03-31 14:30 UTC (permalink / raw)
  To: Christian König
  Cc: Alex Deucher, lijo.lazar, Eric Huang, David Airlie, Simona Vetter,
	amd-gfx, dri-devel

On Tue, Mar 31, 2026 at 5:28 PM Christian König
<christian.koenig@amd.com> wrote:
>
> That should be unnecessary. We already replaced GFP_KERNEL with GFP_ATOMIC in Alex's fixes pull.
>
> I would turn that around, e.g. if (r < 0) return r;
>
> Apart from that looks good to me.

Hi Christian,

Thanks for the review. I've sent v7 rebased on amd-staging-drm-next
(on top of dccd79bb1c7f) with the inverted error check as you
suggested.

The Fixes: tag now points to e6d765de3d6b, which introduced the
plain spin_lock() for the PASID IDR.  The sleeping-under-spinlock
bug is already fixed by dccd79bb1c7f, but the IRQ safety issue
remains: amdgpu_pasid_free() is reachable from hardirq context via
the fence callback path, while the lock is taken without irqsave in
process context.  I hit this lockdep warning on an RX 7900 XTX when
exiting a Vulkan game under Proton.  The XArray conversion with
XA_FLAGS_LOCK_IRQ fixes this.

https://lore.kernel.org/all/20260331142127.52796-1-mikhail.v.gavrilov@gmail.com/

-- 
Thanks,
Mike Gavrilov.


* Claude review: drm/amdgpu: replace PASID IDR with XArray
  2026-03-31 11:17 [PATCH v6] drm/amdgpu: replace PASID IDR with XArray Mikhail Gavrilov
  2026-03-31 12:28 ` Christian König
  2026-03-31 21:48 ` Claude review: " Claude Code Review Bot
@ 2026-03-31 21:48 ` Claude Code Review Bot
  2 siblings, 0 replies; 5+ messages in thread
From: Claude Code Review Bot @ 2026-03-31 21:48 UTC (permalink / raw)
  To: dri-devel-reviews

Overall Series Review

Subject: drm/amdgpu: replace PASID IDR with XArray
Author: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
Patches: 3
Reviewed: 2026-04-01T07:48:28.604416

---

This is a single-patch v6 series replacing an IDR+spinlock PASID allocator with XArray to fix two locking bugs introduced by commit 8f1de51f49be. The commit message is excellent — it clearly describes both bugs, includes the hardirq call chain, and documents the real-world trigger scenario.

**However, the patch has a critical correctness bug: it does not actually fix the hardirq deadlock (bug #2) that it claims to fix.** The patch uses `xa_erase()` and `xa_alloc_cyclic()`, which both use plain `xa_lock()`/`xa_unlock()` (i.e., `spin_lock()`/`spin_unlock()`). The `XA_FLAGS_LOCK_IRQ` flag does **not** make these wrapper functions use IRQ-safe locking — it only affects the internal lock drop/reacquire path used during memory allocation inside the XArray internals.

The v5 approach (explicit `xa_lock_irqsave`/`__xa_erase`) was actually on the right track. The v6 commit message states *"all xa operations use IRQ-safe locking internally"* but this is incorrect.

---
Generated by Claude Code Patch Reviewer


* Claude review: drm/amdgpu: replace PASID IDR with XArray
  2026-03-31 11:17 [PATCH v6] drm/amdgpu: replace PASID IDR with XArray Mikhail Gavrilov
  2026-03-31 12:28 ` Christian König
@ 2026-03-31 21:48 ` Claude Code Review Bot
  2026-03-31 21:48 ` Claude Code Review Bot
  2 siblings, 0 replies; 5+ messages in thread
From: Claude Code Review Bot @ 2026-03-31 21:48 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**Critical bug — hardirq deadlock not fixed:**

The core issue is that `amdgpu_pasid_free()` is called from hardirq context (via `amdgpu_pasid_free_cb` → fence signal path), and `amdgpu_pasid_alloc()` is called from process context. Both must use IRQ-disabling locking to prevent deadlock.

```c
xa_erase(&amdgpu_pasid_xa, pasid);
```

`xa_erase()` is defined in `lib/xarray.c` as:
```c
void *xa_erase(struct xarray *xa, unsigned long index)
{
	xa_lock(xa);        /* = spin_lock() -- IRQs NOT disabled */
	entry = __xa_erase(xa, index);
	xa_unlock(xa);      /* = spin_unlock() */
	return entry;
}
```

Similarly, `xa_alloc_cyclic()` in `include/linux/xarray.h`:
```c
static inline int xa_alloc_cyclic(...)
{
	xa_lock(xa);        /* = spin_lock() -- IRQs NOT disabled */
	err = __xa_alloc_cyclic(...);
	xa_unlock(xa);
	return err < 0 ? err : 0;
}
```

The `XA_FLAGS_LOCK_IRQ` flag only affects `xas_lock_type()`/`xas_unlock_type()` used internally by `__xas_nomem()` when the XArray needs to drop the lock for memory allocation. It has **no effect** on the outer locking in `xa_erase()` or `xa_alloc_cyclic()`.

The XArray API provides dedicated IRQ-safe wrappers for exactly this situation:
- `xa_erase_irq()` — uses `xa_lock_irq()`/`xa_unlock_irq()`
- `xa_alloc_cyclic_irq()` — uses `xa_lock_irq()`/`xa_unlock_irq()`

**However**, `xa_erase_irq()` uses `spin_unlock_irq()` which unconditionally re-enables interrupts. Calling this from hardirq context would re-enable interrupts inside the hardirq handler, which is also wrong.

**The correct fix for `amdgpu_pasid_free()`** (callable from any context) should use `xa_lock_irqsave`/`xa_unlock_irqrestore` as v5 proposed:

```c
void amdgpu_pasid_free(u32 pasid)
{
	unsigned long flags;

	trace_amdgpu_pasid_freed(pasid);

	xa_lock_irqsave(&amdgpu_pasid_xa, flags);
	__xa_erase(&amdgpu_pasid_xa, pasid);
	xa_unlock_irqrestore(&amdgpu_pasid_xa, flags);
}
```

For `amdgpu_pasid_alloc()` (process context only), `xa_alloc_cyclic_irq()` would work correctly, or alternatively the same `irqsave` pattern.

**Other observations (minor, all correct):**

- `DEFINE_XARRAY_FLAGS(amdgpu_pasid_xa, XA_FLAGS_LOCK_IRQ | XA_FLAGS_ALLOC1)` — using `XA_FLAGS_ALLOC1` to prevent PASID 0 allocation is appropriate, and makes the explicit `XA_LIMIT(1, ...)` minimum redundant but harmless.

- The return value handling is correct: `xa_alloc_cyclic()` returns 0 on success (squashing the wrap indicator), stores the ID in `&pasid`, and the code correctly checks `r >= 0` then returns `pasid`.

- Storing `xa_mk_value(0)` as the entry is fine — XArray needs a non-NULL entry for the slot to be considered allocated, and `xa_mk_value(0)` serves this purpose since the original IDR code stored `NULL` via `idr_alloc_cyclic(..., NULL, ...)`.

- The `xa_destroy()` call in `amdgpu_pasid_mgr_cleanup()` is correct and does not need external locking.

- Dropping `Cc: stable` is correct since the regression (commit 8f1de51f49be) hasn't reached a stable kernel.

**Recommendation: NAK in current form.** The patch must use IRQ-safe locking wrappers. The simplest correct approach would be to use `xa_alloc_cyclic_irq()` for allocation and `xa_lock_irqsave`/`__xa_erase`/`xa_unlock_irqrestore` for free, as the v5 approach partially identified.

---
Generated by Claude Code Patch Reviewer


end of thread, other threads:[~2026-03-31 21:48 UTC | newest]

Thread overview: 5+ messages
2026-03-31 11:17 [PATCH v6] drm/amdgpu: replace PASID IDR with XArray Mikhail Gavrilov
2026-03-31 12:28 ` Christian König
2026-03-31 14:30   ` Mikhail Gavrilov
2026-03-31 21:48 ` Claude review: " Claude Code Review Bot
2026-03-31 21:48 ` Claude Code Review Bot
