public inbox for drm-ai-reviews@public-inbox.freedesktop.org
 help / color / mirror / Atom feed
* Claude review: drm/amdgpu: replace PASID IDR with XArray
  2026-03-30 19:11 [PATCH v5] " Mikhail Gavrilov
@ 2026-03-31  6:57 ` Claude Code Review Bot
  2026-03-31  6:57 ` Claude Code Review Bot
  1 sibling, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-03-31  6:57 UTC (permalink / raw)
  To: dri-devel-reviews

Overall Series Review

Subject: drm/amdgpu: replace PASID IDR with XArray
Author: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
Patches: 3
Reviewed: 2026-03-31T16:57:38.374656

---

This is a single v5 patch converting amdgpu's PASID allocator from IDR+spinlock to XArray, fixing two real bugs introduced by commit 8f1de51f49be: sleeping under spinlock and a process/hardirq lock deadlock. The conversion is clean and well-motivated, but **the patch has a remaining deadlock vulnerability** — the same class of bug it's trying to fix.

The core issue: `xa_alloc_cyclic()` uses plain `xa_lock()` internally (IRQs remain enabled), while `amdgpu_pasid_free()` correctly uses `xa_lock_irqsave()`. If a hardirq fires on the same CPU while `xa_alloc_cyclic()` holds the xa_lock, and the IRQ handler calls `amdgpu_pasid_free()` → `xa_lock_irqsave()`, it will deadlock on the already-held lock — exactly the same pattern as the original bug.

**The fix is straightforward**: use `xa_alloc_cyclic_irq()` instead of `xa_alloc_cyclic()`. This variant (at `include/linux/xarray.h:1054`) takes and releases the xa_lock while disabling interrupts, which prevents the deadlock.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Claude review: drm/amdgpu: replace PASID IDR with XArray
  2026-03-30 19:11 [PATCH v5] " Mikhail Gavrilov
  2026-03-31  6:57 ` Claude review: " Claude Code Review Bot
@ 2026-03-31  6:57 ` Claude Code Review Bot
  1 sibling, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-03-31  6:57 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**Commit message**: Excellent. The bug description is thorough, with clear lockdep-style annotation and a concrete reproduction scenario (RX 7900 XTX, Vulkan game under Proton). The changelog across v1-v5 is well documented.

**Bug — xa_alloc_cyclic() is not IRQ-safe** (lines 70-72 of the patched file):

```c
r = xa_alloc_cyclic(&amdgpu_pasid_xa, &pasid, xa_mk_value(0),
		    XA_LIMIT(1, (1U << bits) - 1),
		    &amdgpu_pasid_xa_next, GFP_KERNEL);
```

`xa_alloc_cyclic()` uses plain `xa_lock()` (see `include/linux/xarray.h:982`):
```c
xa_lock(xa);
err = __xa_alloc_cyclic(xa, id, entry, limit, next, gfp);
xa_unlock(xa);
```

Since `amdgpu_pasid_free()` can be called from hardirq context (the entire reason for this patch), an IRQ that arrives while this lock is held and then tries to free a PASID will deadlock. This must be `xa_alloc_cyclic_irq()`, which disables IRQs around the lock (`include/linux/xarray.h:1054`).
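As a sketch (assuming the surrounding v5 code is otherwise unchanged), the allocation call would become:

```c
/* sketch of the suggested fix: the _irq wrapper takes the xa_lock
 * with xa_lock_irq(), so a hardirq on this CPU cannot interrupt
 * while the lock is held and recurse into amdgpu_pasid_free() */
r = xa_alloc_cyclic_irq(&amdgpu_pasid_xa, &pasid, xa_mk_value(0),
			XA_LIMIT(1, (1U << bits) - 1),
			&amdgpu_pasid_xa_next, GFP_KERNEL);
```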

**Return value change** (lines 74-79): The original `idr_alloc_cyclic()` returned the allocated ID on success (positive) or a negative errno. `xa_alloc_cyclic()` returns 0 on success and writes the ID to `*pasid`. The patch correctly handles this — checking `r >= 0` and returning `pasid` rather than `r`. Note that both `xa_alloc_cyclic()` and `__xa_alloc_cyclic()` return 1 (rather than 0) when the cyclic allocation wraps around; since this code doesn't care about wrap notification and treats any non-negative return as success, this is fine.

**Return value for -EBUSY** (line 79): The original IDR code returned `-ENOSPC` on exhaustion (per IDR semantics), while `xa_alloc_cyclic()` returns `-EBUSY`. The docstring at line 59 still says "Returns %-ENOSPC if no PASID was available" — this should be updated to `-EBUSY` to match reality. Minor documentation nit.
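A possible wording for the docstring update (hypothetical phrasing, kept close to the existing kernel-doc style):

```c
/*
 * Returns a positive PASID on success, or a negative errno on
 * failure (%-EBUSY if no PASID was available).
 */
```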

**amdgpu_pasid_free() locking** (lines 92-94): Correct. Using `xa_lock_irqsave()`/`__xa_erase()`/`xa_unlock_irqrestore()` is the right approach for hardirq context.

**xa_mk_value(0) as stored entry** (line 70): Storing `xa_mk_value(0)` works — XArray treats NULL as "empty", so an explicit non-NULL value is needed to mark the slot as occupied. `xa_mk_value(0)` is the lightest-weight way to do this. Fine.

**Cleanup** (line 261): `xa_destroy()` without locking is correct — this is called at module teardown when no concurrent access is possible.

**Recommendation**: NAK as-is due to the `xa_alloc_cyclic()` deadlock. The one-line fix is to change it to `xa_alloc_cyclic_irq()`. With that change, the patch would be correct and ready.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Claude review: drm/amdgpu: replace PASID IDR with XArray
  2026-03-30 14:50 [PATCH v4] " Mikhail Gavrilov
@ 2026-03-31  7:08 ` Claude Code Review Bot
  2026-03-31  7:08 ` Claude Code Review Bot
  1 sibling, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-03-31  7:08 UTC (permalink / raw)
  To: dri-devel-reviews

Overall Series Review

Subject: drm/amdgpu: replace PASID IDR with XArray
Author: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
Patches: 3
Reviewed: 2026-03-31T17:08:59.509731

---

This is a single well-motivated bugfix patch (v4) that replaces the IDR + spinlock PASID allocator with XArray to fix two real locking bugs introduced by commit 8f1de51f49be:

1. `GFP_KERNEL` allocation under `spin_lock()` (can sleep)
2. Plain `spin_lock()` used in process context while `amdgpu_pasid_free()` can be called from hardirq context (deadlock)

The XArray conversion is the right approach — `xa_alloc_cyclic()` handles internal locking with proper IRQ safety, and `xa_erase()` is inherently IRQ-safe. The patch is clean, well-explained, and has gone through good iteration (v1→v4) with reviewer feedback from Lijo Lazar.

**Verdict: This patch looks correct and ready for merge.** One minor nit below.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Claude review: drm/amdgpu: replace PASID IDR with XArray
  2026-03-30 14:50 [PATCH v4] " Mikhail Gavrilov
  2026-03-31  7:08 ` Claude review: " Claude Code Review Bot
@ 2026-03-31  7:08 ` Claude Code Review Bot
  1 sibling, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-03-31  7:08 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**Commit message:** Excellent. Clearly describes both bugs, includes the deadlock call chain, identifies the real-world trigger (RX 7900 XTX + Vulkan/Proton), and explains why XArray fixes both issues. The version history is thorough. Has appropriate Fixes/Cc-stable tags.

**Code review:**

The conversion is straightforward and correct:

```c
static DEFINE_XARRAY_ALLOC(amdgpu_pasid_xa);
static u32 amdgpu_pasid_xa_next;
```
`DEFINE_XARRAY_ALLOC` is the right macro for cyclic allocation use. The `u32` next cursor is correct for `xa_alloc_cyclic()`.

```c
r = xa_alloc_cyclic(&amdgpu_pasid_xa, &pasid, xa_mk_value(0),
		    XA_LIMIT(1, (1U << bits) - 1),
		    &amdgpu_pasid_xa_next, GFP_KERNEL);
```
This is correct. The limit upper bound is `(1U << bits) - 1`, which is right — the old IDR code used `1U << bits` as the exclusive end, while `XA_LIMIT` takes an inclusive max. `xa_mk_value(0)` stores a tagged value entry, which is guaranteed non-NULL — fine, since the entry only needs to be non-NULL to mark the PASID as allocated (the XArray treats NULL entries as empty slots when allocating).

**Minor nit:** `xa_alloc_cyclic()` returns 1 when wrapping occurs and 0 on normal success. The check `if (r >= 0)` handles both correctly. The return-value restructuring (early return on success, fall through to return error) is fine and arguably cleaner than the original.

```c
xa_erase(&amdgpu_pasid_xa, pasid);
```
Correct — `xa_erase()` uses `xa_lock_irq()` internally, so it's safe from hardirq context. This directly fixes the deadlock bug.

```c
xa_destroy(&amdgpu_pasid_xa);
```
Correct — no external locking needed, `xa_destroy()` handles it.

**One minor observation:** The `bits` parameter could theoretically be 32, in which case `1U << 32` is undefined behavior in C. However, this is not a regression introduced by this patch — the original IDR code had the same issue, and in practice `bits` is always 16 (from `amdgpu_vm_pasid_bits`). Not something to block this patch on.
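If anyone wanted to harden this separately, a guard along these lines would close the theoretical UB (sketch only — with the current callers `bits` never exceeds 16):

```c
	if (bits == 0 || bits > 31)
		return -EINVAL;	/* 1U << 32 would be undefined behavior */
```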

**Reviewed-by worthy:** Yes. The patch is correct, well-documented, and a clear improvement over the buggy IDR+spinlock approach.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Claude review: drm/amdgpu: replace PASID IDR with XArray
  2026-03-30 11:35 [PATCH v3] " Mikhail Gavrilov
@ 2026-03-31  7:18 ` Claude Code Review Bot
  2026-03-31  7:18 ` Claude Code Review Bot
  1 sibling, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-03-31  7:18 UTC (permalink / raw)
  To: dri-devel-reviews

Overall Series Review

Subject: drm/amdgpu: replace PASID IDR with XArray
Author: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
Patches: 2
Reviewed: 2026-03-31T17:18:18.250988

---

This is a single-patch series (v3) that fixes two real locking bugs introduced by commit `8f1de51f49be` in the amdgpu PASID allocator:

1. `GFP_KERNEL` allocation under `spin_lock()` (can sleep)
2. Plain `spin_lock()` used in process context while `amdgpu_pasid_free()` can be called from hardirq context via fence callbacks (deadlock)

The approach of replacing IDR+spinlock with XArray is clean and was suggested by the AMD maintainer (Lijo Lazar). XArray provides both built-in cyclic allocation and IRQ-safe internal locking, fixing both bugs in one conversion. The patch is well-motivated, the commit message is excellent with clear documentation of the bugs and call chains, and the v3 changelog shows good iteration based on reviewer feedback.

**Verdict: Looks good overall, with one minor nit.**

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Claude review: drm/amdgpu: replace PASID IDR with XArray
  2026-03-30 11:35 [PATCH v3] " Mikhail Gavrilov
  2026-03-31  7:18 ` Claude review: " Claude Code Review Bot
@ 2026-03-31  7:18 ` Claude Code Review Bot
  1 sibling, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-03-31  7:18 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**Commit message quality:** Excellent. The two bugs are clearly described, the hardirq call chain is documented, and the real-world trigger scenario (RX 7900 XTX + Vulkan/Proton) is mentioned. The Fixes/Cc-stable tags are correct.

**Code review:**

The XArray conversion is mechanically correct:

- `DEFINE_XARRAY_ALLOC` is the right macro for an XArray that will use `xa_alloc` / `__xa_alloc_cyclic`.

- The `static u32 amdgpu_pasid_xa_next` cursor for cyclic allocation is correct — `__xa_alloc_cyclic` requires a caller-managed `next` variable.

- `xa_lock_irqsave` / `xa_unlock_irqrestore` properly handles the hardirq context issue (bug #2).

- `GFP_ATOMIC` is correct since allocation happens under the xa_lock (bug #1 fix).

- `xa_mk_value(0)` as the stored entry is fine — XArray treats `NULL` as "empty slot", so a non-NULL value is needed to mark the PASID as allocated. Using an internal value entry avoids memory allocation for the stored pointer.

- The limit `XA_LIMIT(1, (1U << bits) - 1)` correctly maps the old IDR range `[1, 1U << bits)` — IDR's `end` parameter is exclusive while XA_LIMIT's max is inclusive, so `(1U << bits) - 1` is the right translation.

**Minor nit on return value handling:**

```c
	r = __xa_alloc_cyclic(&amdgpu_pasid_xa, &pasid, xa_mk_value(0),
			      XA_LIMIT(1, (1U << bits) - 1),
			      &amdgpu_pasid_xa_next, GFP_ATOMIC);
	xa_unlock_irqrestore(&amdgpu_pasid_xa, flags);

	if (r >= 0) {
		trace_amdgpu_pasid_allocated(pasid);
		return pasid;
	}

	return r;
```

`__xa_alloc_cyclic` returns 0 on success, 1 if the allocation wrapped around (which is not an error), or a negative errno. The `r >= 0` check correctly treats both non-negative cases as success. Good.

**Cleanup function:**

```c
void amdgpu_pasid_mgr_cleanup(void)
{
	xa_destroy(&amdgpu_pasid_xa);
}
```

`xa_destroy()` does not require external locking — it frees internal nodes and resets the array. This is correct and simpler than the IDR version which took the spinlock around `idr_destroy()`. At cleanup time there should be no concurrent access anyway.

**No issues found.** The patch is a clean, correct conversion that fixes both bugs. The only potential concern is that `GFP_ATOMIC` is more restrictive than `GFP_KERNEL` (could fail under memory pressure), but this is unavoidable when allocating under a lock that may be taken in IRQ context, and PASID allocation failures are already handled by callers.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v7] drm/amdgpu: replace PASID IDR with XArray
@ 2026-03-31 14:21 Mikhail Gavrilov
  2026-03-31 14:29 ` Christian König
                   ` (2 more replies)
  0 siblings, 3 replies; 14+ messages in thread
From: Mikhail Gavrilov @ 2026-03-31 14:21 UTC (permalink / raw)
  To: Alex Deucher, Christian König
  Cc: lijo.lazar, Eric Huang, David Airlie, Simona Vetter, amd-gfx,
	dri-devel, Mikhail Gavrilov

Replace the PASID IDR + spinlock with XArray as noted in the TODO
left by commit dccd79bb1c7f ("drm/amdgpu: fix the idr allocation
flags").

The IDR conversion still has an IRQ safety issue:
amdgpu_pasid_free() can be called from hardirq context via the fence
signal path, but amdgpu_pasid_idr_lock is taken with plain spin_lock()
in process context, creating a potential deadlock:

     CPU0
     ----
     spin_lock(&amdgpu_pasid_idr_lock)   // process context, IRQs on
     <Interrupt>
       spin_lock(&amdgpu_pasid_idr_lock) // deadlock

   The hardirq call chain is:

     sdma_v6_0_process_trap_irq
      -> amdgpu_fence_process
       -> dma_fence_signal
        -> drm_sched_job_done
         -> dma_fence_signal
          -> amdgpu_pasid_free_cb
           -> amdgpu_pasid_free

Use XArray with XA_FLAGS_LOCK_IRQ (all xa operations use IRQ-safe
locking internally) and XA_FLAGS_ALLOC1 (zero is not a valid PASID).
Both xa_alloc_cyclic() and xa_erase() then handle locking
consistently, fixing the IRQ safety issue and removing the need for
an explicit spinlock.

Suggested-by: Lijo Lazar <lijo.lazar@amd.com>
Fixes: e6d765de3d6b ("drm/amdgpu: prevent immediate PASID reuse case")
Signed-off-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
---

v7: Rebased on amd-staging-drm-next which already includes
    dccd79bb1c7f ("drm/amdgpu: fix the idr allocation flags").
    Updated commit message to reflect that sleeping-under-spinlock
    is already fixed and the xarray conversion now addresses the
    remaining IRQ safety issue.  Inverted error check to
    if (r < 0) return r; per Christian König.
v6: Use DEFINE_XARRAY_FLAGS with XA_FLAGS_LOCK_IRQ | XA_FLAGS_ALLOC1
    so all xa operations use IRQ-safe locking internally.  Drop
    Cc: stable since the regression was never released to any stable
    kernel. (Christian König)
    https://lore.kernel.org/all/20260331111733.118553-1-mikhail.v.gavrilov@gmail.com/
v5: Use explicit xa_lock_irqsave/__xa_erase for amdgpu_pasid_free()
    since xa_erase() only uses plain xa_lock() which is not safe from
    hardirq context.
    https://lore.kernel.org/all/20260330191120.105065-1-mikhail.v.gavrilov@gmail.com/
v4: Use xa_alloc_cyclic/xa_erase directly instead of explicit
    xa_lock_irqsave, as suggested by Lijo Lazar.
    https://lore.kernel.org/all/20260330162038.25073-1-mikhail.v.gavrilov@gmail.com/
v3: Replace IDR with XArray instead of fixing the spinlock, as
    suggested by Lijo Lazar.
    https://lore.kernel.org/all/20260330110346.16548-1-mikhail.v.gavrilov@gmail.com/
v2: Added second patch fixing the {HARDIRQ-ON-W} -> {IN-HARDIRQ-W}
    lock inconsistency (spin_lock -> spin_lock_irqsave).
    https://lore.kernel.org/all/20260330053025.19203-1-mikhail.v.gavrilov@gmail.com/
v1: Fixed sleeping-under-spinlock (idr_alloc_cyclic with GFP_KERNEL)
    using idr_preload/GFP_NOWAIT.
    https://lore.kernel.org/all/20260328213900.19255-1-mikhail.v.gavrilov@gmail.com/

 drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c | 34 ++++++++++---------------
 1 file changed, 13 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
index e495a8fa13fd..a6ac3b4ce0df 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
@@ -22,7 +22,7 @@
  */
 #include "amdgpu_ids.h"
 
-#include <linux/idr.h>
+#include <linux/xarray.h>
 #include <linux/dma-fence-array.h>
 
 
@@ -40,8 +40,8 @@
  * VMs are looked up from the PASID per amdgpu_device.
  */
 
-static DEFINE_IDR(amdgpu_pasid_idr);
-static DEFINE_SPINLOCK(amdgpu_pasid_idr_lock);
+static DEFINE_XARRAY_FLAGS(amdgpu_pasid_xa, XA_FLAGS_LOCK_IRQ | XA_FLAGS_ALLOC1);
+static u32 amdgpu_pasid_xa_next;
 
 /* Helper to free pasid from a fence callback */
 struct amdgpu_pasid_cb {
@@ -62,22 +62,19 @@ struct amdgpu_pasid_cb {
  */
 int amdgpu_pasid_alloc(unsigned int bits)
 {
-	int pasid;
+	u32 pasid;
+	int r;
 
 	if (bits == 0)
 		return -EINVAL;
 
-	spin_lock(&amdgpu_pasid_idr_lock);
-	/* TODO: Need to replace the idr with an xarry, and then
-	 * handle the internal locking with ATOMIC safe paths.
-	 */
-	pasid = idr_alloc_cyclic(&amdgpu_pasid_idr, NULL, 1,
-				 1U << bits, GFP_ATOMIC);
-	spin_unlock(&amdgpu_pasid_idr_lock);
-
-	if (pasid >= 0)
-		trace_amdgpu_pasid_allocated(pasid);
+	r = xa_alloc_cyclic(&amdgpu_pasid_xa, &pasid, xa_mk_value(0),
+			    XA_LIMIT(1, (1U << bits) - 1),
+			    &amdgpu_pasid_xa_next, GFP_KERNEL);
+	if (r < 0)
+		return r;
 
+	trace_amdgpu_pasid_allocated(pasid);
 	return pasid;
 }
 
@@ -88,10 +85,7 @@ int amdgpu_pasid_alloc(unsigned int bits)
 void amdgpu_pasid_free(u32 pasid)
 {
 	trace_amdgpu_pasid_freed(pasid);
-
-	spin_lock(&amdgpu_pasid_idr_lock);
-	idr_remove(&amdgpu_pasid_idr, pasid);
-	spin_unlock(&amdgpu_pasid_idr_lock);
+	xa_erase(&amdgpu_pasid_xa, pasid);
 }
 
 static void amdgpu_pasid_free_cb(struct dma_fence *fence,
@@ -634,7 +628,5 @@ void amdgpu_vmid_mgr_fini(struct amdgpu_device *adev)
  */
 void amdgpu_pasid_mgr_cleanup(void)
 {
-	spin_lock(&amdgpu_pasid_idr_lock);
-	idr_destroy(&amdgpu_pasid_idr);
-	spin_unlock(&amdgpu_pasid_idr_lock);
+	xa_destroy(&amdgpu_pasid_xa);
 }
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH v7] drm/amdgpu: replace PASID IDR with XArray
  2026-03-31 14:21 [PATCH v7] drm/amdgpu: replace PASID IDR with XArray Mikhail Gavrilov
@ 2026-03-31 14:29 ` Christian König
  2026-03-31 14:38   ` Alex Deucher
  2026-03-31 21:43 ` Claude review: " Claude Code Review Bot
  2026-03-31 21:43 ` Claude Code Review Bot
  2 siblings, 1 reply; 14+ messages in thread
From: Christian König @ 2026-03-31 14:29 UTC (permalink / raw)
  To: Mikhail Gavrilov, Alex Deucher
  Cc: lijo.lazar, Eric Huang, David Airlie, Simona Vetter, amd-gfx,
	dri-devel



On 3/31/26 16:21, Mikhail Gavrilov wrote:
> Replace the PASID IDR + spinlock with XArray as noted in the TODO
> left by commit dccd79bb1c7f ("drm/amdgpu: fix the idr allocation
> flags").
> 
> The IDR conversion still has an IRQ safety issue:
> amdgpu_pasid_free() can be called from hardirq context via the fence
> signal path, but amdgpu_pasid_idr_lock is taken with plain spin_lock()
> in process context, creating a potential deadlock:
> 
>      CPU0
>      ----
>      spin_lock(&amdgpu_pasid_idr_lock)   // process context, IRQs on
>      <Interrupt>
>        spin_lock(&amdgpu_pasid_idr_lock) // deadlock
> 
>    The hardirq call chain is:
> 
>      sdma_v6_0_process_trap_irq
>       -> amdgpu_fence_process
>        -> dma_fence_signal
>         -> drm_sched_job_done
>          -> dma_fence_signal
>           -> amdgpu_pasid_free_cb
>            -> amdgpu_pasid_free
> 
> Use XArray with XA_FLAGS_LOCK_IRQ (all xa operations use IRQ-safe
> locking internally) and XA_FLAGS_ALLOC1 (zero is not a valid PASID).
> Both xa_alloc_cyclic() and xa_erase() then handle locking
> consistently, fixing the IRQ safety issue and removing the need for
> an explicit spinlock.
> 
> Suggested-by: Lijo Lazar <lijo.lazar@amd.com>
> Fixes: e6d765de3d6b ("drm/amdgpu: prevent immediate PASID reuse case")
> Signed-off-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>

Reviewed-by: Christian König <christian.koenig@amd.com>

> ---
> 
> v7: Rebased on amd-staging-drm-next which already includes
>     dccd79bb1c7f ("drm/amdgpu: fix the idr allocation flags").
>     Updated commit message to reflect that sleeping-under-spinlock
>     is already fixed and the xarray conversion now addresses the
>     remaining IRQ safety issue.  Inverted error check to
>     if (r < 0) return r; per Christian König.
> v6: Use DEFINE_XARRAY_FLAGS with XA_FLAGS_LOCK_IRQ | XA_FLAGS_ALLOC1
>     so all xa operations use IRQ-safe locking internally.  Drop
>     Cc: stable since the regression was never released to any stable
>     kernel. (Christian König)
>     https://lore.kernel.org/all/20260331111733.118553-1-mikhail.v.gavrilov@gmail.com/
> v5: Use explicit xa_lock_irqsave/__xa_erase for amdgpu_pasid_free()
>     since xa_erase() only uses plain xa_lock() which is not safe from
>     hardirq context.
>     https://lore.kernel.org/all/20260330191120.105065-1-mikhail.v.gavrilov@gmail.com/
> v4: Use xa_alloc_cyclic/xa_erase directly instead of explicit
>     xa_lock_irqsave, as suggested by Lijo Lazar.
>     https://lore.kernel.org/all/20260330162038.25073-1-mikhail.v.gavrilov@gmail.com/
> v3: Replace IDR with XArray instead of fixing the spinlock, as
>     suggested by Lijo Lazar.
>     https://lore.kernel.org/all/20260330110346.16548-1-mikhail.v.gavrilov@gmail.com/
> v2: Added second patch fixing the {HARDIRQ-ON-W} -> {IN-HARDIRQ-W}
>     lock inconsistency (spin_lock -> spin_lock_irqsave).
>     https://lore.kernel.org/all/20260330053025.19203-1-mikhail.v.gavrilov@gmail.com/
> v1: Fixed sleeping-under-spinlock (idr_alloc_cyclic with GFP_KERNEL)
>     using idr_preload/GFP_NOWAIT.
>     https://lore.kernel.org/all/20260328213900.19255-1-mikhail.v.gavrilov@gmail.com/
> 
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c | 34 ++++++++++---------------
>  1 file changed, 13 insertions(+), 21 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> index e495a8fa13fd..a6ac3b4ce0df 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> @@ -22,7 +22,7 @@
>   */
>  #include "amdgpu_ids.h"
>  
> -#include <linux/idr.h>
> +#include <linux/xarray.h>
>  #include <linux/dma-fence-array.h>
>  
>  
> @@ -40,8 +40,8 @@
>   * VMs are looked up from the PASID per amdgpu_device.
>   */
>  
> -static DEFINE_IDR(amdgpu_pasid_idr);
> -static DEFINE_SPINLOCK(amdgpu_pasid_idr_lock);
> +static DEFINE_XARRAY_FLAGS(amdgpu_pasid_xa, XA_FLAGS_LOCK_IRQ | XA_FLAGS_ALLOC1);
> +static u32 amdgpu_pasid_xa_next;
>  
>  /* Helper to free pasid from a fence callback */
>  struct amdgpu_pasid_cb {
> @@ -62,22 +62,19 @@ struct amdgpu_pasid_cb {
>   */
>  int amdgpu_pasid_alloc(unsigned int bits)
>  {
> -	int pasid;
> +	u32 pasid;
> +	int r;
>  
>  	if (bits == 0)
>  		return -EINVAL;
>  
> -	spin_lock(&amdgpu_pasid_idr_lock);
> -	/* TODO: Need to replace the idr with an xarry, and then
> -	 * handle the internal locking with ATOMIC safe paths.
> -	 */
> -	pasid = idr_alloc_cyclic(&amdgpu_pasid_idr, NULL, 1,
> -				 1U << bits, GFP_ATOMIC);
> -	spin_unlock(&amdgpu_pasid_idr_lock);
> -
> -	if (pasid >= 0)
> -		trace_amdgpu_pasid_allocated(pasid);
> +	r = xa_alloc_cyclic(&amdgpu_pasid_xa, &pasid, xa_mk_value(0),
> +			    XA_LIMIT(1, (1U << bits) - 1),
> +			    &amdgpu_pasid_xa_next, GFP_KERNEL);
> +	if (r < 0)
> +		return r;
>  
> +	trace_amdgpu_pasid_allocated(pasid);
>  	return pasid;
>  }
>  
> @@ -88,10 +85,7 @@ int amdgpu_pasid_alloc(unsigned int bits)
>  void amdgpu_pasid_free(u32 pasid)
>  {
>  	trace_amdgpu_pasid_freed(pasid);
> -
> -	spin_lock(&amdgpu_pasid_idr_lock);
> -	idr_remove(&amdgpu_pasid_idr, pasid);
> -	spin_unlock(&amdgpu_pasid_idr_lock);
> +	xa_erase(&amdgpu_pasid_xa, pasid);
>  }
>  
>  static void amdgpu_pasid_free_cb(struct dma_fence *fence,
> @@ -634,7 +628,5 @@ void amdgpu_vmid_mgr_fini(struct amdgpu_device *adev)
>   */
>  void amdgpu_pasid_mgr_cleanup(void)
>  {
> -	spin_lock(&amdgpu_pasid_idr_lock);
> -	idr_destroy(&amdgpu_pasid_idr);
> -	spin_unlock(&amdgpu_pasid_idr_lock);
> +	xa_destroy(&amdgpu_pasid_xa);
>  }


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v7] drm/amdgpu: replace PASID IDR with XArray
  2026-03-31 14:29 ` Christian König
@ 2026-03-31 14:38   ` Alex Deucher
  2026-03-31 17:12     ` Mikhail Gavrilov
  0 siblings, 1 reply; 14+ messages in thread
From: Alex Deucher @ 2026-03-31 14:38 UTC (permalink / raw)
  To: Christian König
  Cc: Mikhail Gavrilov, Alex Deucher, lijo.lazar, Eric Huang,
	David Airlie, Simona Vetter, amd-gfx, dri-devel

Applied.  Thanks!

Alex

On Tue, Mar 31, 2026 at 10:29 AM Christian König
<christian.koenig@amd.com> wrote:
>
>
>
> On 3/31/26 16:21, Mikhail Gavrilov wrote:
> > Replace the PASID IDR + spinlock with XArray as noted in the TODO
> > left by commit dccd79bb1c7f ("drm/amdgpu: fix the idr allocation
> > flags").
> >
> > The IDR conversion still has an IRQ safety issue:
> > amdgpu_pasid_free() can be called from hardirq context via the fence
> > signal path, but amdgpu_pasid_idr_lock is taken with plain spin_lock()
> > in process context, creating a potential deadlock:
> >
> >      CPU0
> >      ----
> >      spin_lock(&amdgpu_pasid_idr_lock)   // process context, IRQs on
> >      <Interrupt>
> >        spin_lock(&amdgpu_pasid_idr_lock) // deadlock
> >
> >    The hardirq call chain is:
> >
> >      sdma_v6_0_process_trap_irq
> >       -> amdgpu_fence_process
> >        -> dma_fence_signal
> >         -> drm_sched_job_done
> >          -> dma_fence_signal
> >           -> amdgpu_pasid_free_cb
> >            -> amdgpu_pasid_free
> >
> > Use XArray with XA_FLAGS_LOCK_IRQ (all xa operations use IRQ-safe
> > locking internally) and XA_FLAGS_ALLOC1 (zero is not a valid PASID).
> > Both xa_alloc_cyclic() and xa_erase() then handle locking
> > consistently, fixing the IRQ safety issue and removing the need for
> > an explicit spinlock.
> >
> > Suggested-by: Lijo Lazar <lijo.lazar@amd.com>
> > Fixes: e6d765de3d6b ("drm/amdgpu: prevent immediate PASID reuse case")
> > Signed-off-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
>
> Reviewed-by: Christian König <christian.koenig@amd.com>
>
> > ---
> >
> > v7: Rebased on amd-staging-drm-next which already includes
> >     dccd79bb1c7f ("drm/amdgpu: fix the idr allocation flags").
> >     Updated commit message to reflect that sleeping-under-spinlock
> >     is already fixed and the xarray conversion now addresses the
> >     remaining IRQ safety issue.  Inverted error check to
> >     if (r < 0) return r; per Christian König.
> > v6: Use DEFINE_XARRAY_FLAGS with XA_FLAGS_LOCK_IRQ | XA_FLAGS_ALLOC1
> >     so all xa operations use IRQ-safe locking internally.  Drop
> >     Cc: stable since the regression was never released to any stable
> >     kernel. (Christian König)
> >     https://lore.kernel.org/all/20260331111733.118553-1-mikhail.v.gavrilov@gmail.com/
> > v5: Use explicit xa_lock_irqsave/__xa_erase for amdgpu_pasid_free()
> >     since xa_erase() only uses plain xa_lock() which is not safe from
> >     hardirq context.
> >     https://lore.kernel.org/all/20260330191120.105065-1-mikhail.v.gavrilov@gmail.com/
> > v4: Use xa_alloc_cyclic/xa_erase directly instead of explicit
> >     xa_lock_irqsave, as suggested by Lijo Lazar.
> >     https://lore.kernel.org/all/20260330162038.25073-1-mikhail.v.gavrilov@gmail.com/
> > v3: Replace IDR with XArray instead of fixing the spinlock, as
> >     suggested by Lijo Lazar.
> >     https://lore.kernel.org/all/20260330110346.16548-1-mikhail.v.gavrilov@gmail.com/
> > v2: Added second patch fixing the {HARDIRQ-ON-W} -> {IN-HARDIRQ-W}
> >     lock inconsistency (spin_lock -> spin_lock_irqsave).
> >     https://lore.kernel.org/all/20260330053025.19203-1-mikhail.v.gavrilov@gmail.com/
> > v1: Fixed sleeping-under-spinlock (idr_alloc_cyclic with GFP_KERNEL)
> >     using idr_preload/GFP_NOWAIT.
> >     https://lore.kernel.org/all/20260328213900.19255-1-mikhail.v.gavrilov@gmail.com/
> >
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c | 34 ++++++++++---------------
> >  1 file changed, 13 insertions(+), 21 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> > index e495a8fa13fd..a6ac3b4ce0df 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> > @@ -22,7 +22,7 @@
> >   */
> >  #include "amdgpu_ids.h"
> >
> > -#include <linux/idr.h>
> > +#include <linux/xarray.h>
> >  #include <linux/dma-fence-array.h>
> >
> >
> > @@ -40,8 +40,8 @@
> >   * VMs are looked up from the PASID per amdgpu_device.
> >   */
> >
> > -static DEFINE_IDR(amdgpu_pasid_idr);
> > -static DEFINE_SPINLOCK(amdgpu_pasid_idr_lock);
> > +static DEFINE_XARRAY_FLAGS(amdgpu_pasid_xa, XA_FLAGS_LOCK_IRQ | XA_FLAGS_ALLOC1);
> > +static u32 amdgpu_pasid_xa_next;
> >
> >  /* Helper to free pasid from a fence callback */
> >  struct amdgpu_pasid_cb {
> > @@ -62,22 +62,19 @@ struct amdgpu_pasid_cb {
> >   */
> >  int amdgpu_pasid_alloc(unsigned int bits)
> >  {
> > -     int pasid;
> > +     u32 pasid;
> > +     int r;
> >
> >       if (bits == 0)
> >               return -EINVAL;
> >
> > -     spin_lock(&amdgpu_pasid_idr_lock);
> > -     /* TODO: Need to replace the idr with an xarry, and then
> > -      * handle the internal locking with ATOMIC safe paths.
> > -      */
> > -     pasid = idr_alloc_cyclic(&amdgpu_pasid_idr, NULL, 1,
> > -                              1U << bits, GFP_ATOMIC);
> > -     spin_unlock(&amdgpu_pasid_idr_lock);
> > -
> > -     if (pasid >= 0)
> > -             trace_amdgpu_pasid_allocated(pasid);
> > +     r = xa_alloc_cyclic(&amdgpu_pasid_xa, &pasid, xa_mk_value(0),
> > +                         XA_LIMIT(1, (1U << bits) - 1),
> > +                         &amdgpu_pasid_xa_next, GFP_KERNEL);
> > +     if (r < 0)
> > +             return r;
> >
> > +     trace_amdgpu_pasid_allocated(pasid);
> >       return pasid;
> >  }
> >
> > @@ -88,10 +85,7 @@ int amdgpu_pasid_alloc(unsigned int bits)
> >  void amdgpu_pasid_free(u32 pasid)
> >  {
> >       trace_amdgpu_pasid_freed(pasid);
> > -
> > -     spin_lock(&amdgpu_pasid_idr_lock);
> > -     idr_remove(&amdgpu_pasid_idr, pasid);
> > -     spin_unlock(&amdgpu_pasid_idr_lock);
> > +     xa_erase(&amdgpu_pasid_xa, pasid);
> >  }
> >
> >  static void amdgpu_pasid_free_cb(struct dma_fence *fence,
> > @@ -634,7 +628,5 @@ void amdgpu_vmid_mgr_fini(struct amdgpu_device *adev)
> >   */
> >  void amdgpu_pasid_mgr_cleanup(void)
> >  {
> > -     spin_lock(&amdgpu_pasid_idr_lock);
> > -     idr_destroy(&amdgpu_pasid_idr);
> > -     spin_unlock(&amdgpu_pasid_idr_lock);
> > +     xa_destroy(&amdgpu_pasid_xa);
> >  }
>

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v7] drm/amdgpu: replace PASID IDR with XArray
  2026-03-31 14:38   ` Alex Deucher
@ 2026-03-31 17:12     ` Mikhail Gavrilov
  0 siblings, 0 replies; 14+ messages in thread
From: Mikhail Gavrilov @ 2026-03-31 17:12 UTC (permalink / raw)
  To: Alex Deucher
  Cc: Christian König, Alex Deucher, lijo.lazar, Eric Huang,
	David Airlie, Simona Vetter, amd-gfx, dri-devel

On Tue, Mar 31, 2026 at 7:38 PM Alex Deucher <alexdeucher@gmail.com> wrote:
>
> Applied.  Thanks!
>

Hi Christian, Alex,

While testing v7, I noticed that xa_erase() and xa_alloc_cyclic()
use plain xa_lock()/xa_unlock() regardless of XA_FLAGS_LOCK_IRQ:
the flag only sets the lockdep class and the lock type used by the
XArray internals, not the locking done by these convenience wrappers.

The XArray API provides separate _irq variants for this:
xa_alloc_cyclic_irq() and xa_erase_irq(), both defined as inlines
in include/linux/xarray.h using xa_lock_irq/xa_unlock_irq.

I confirmed this by hitting the same lockdep WARNING with v6
(which has the same locking), and by reading lib/xarray.c:

  void *xa_erase(struct xarray *xa, unsigned long index)
  {
      void *entry;

      xa_lock(xa);              /* plain spin_lock(), not _irq */
      entry = __xa_erase(xa, index);
      xa_unlock(xa);
      return entry;
  }

Should I send a v8 using xa_alloc_cyclic_irq() and xa_erase_irq()?

Thanks,
Mikhail


* Claude review: drm/amdgpu: replace PASID IDR with XArray
  2026-03-31 14:21 [PATCH v7] drm/amdgpu: replace PASID IDR with XArray Mikhail Gavrilov
  2026-03-31 14:29 ` Christian König
@ 2026-03-31 21:43 ` Claude Code Review Bot
  2026-03-31 21:43 ` Claude Code Review Bot
  2 siblings, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-03-31 21:43 UTC (permalink / raw)
  To: dri-devel-reviews

Overall Series Review

Subject: drm/amdgpu: replace PASID IDR with XArray
Author: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
Patches: 4
Reviewed: 2026-04-01T07:43:13.815301

---

This is a single patch (v7) replacing the PASID IDR with XArray. The conversion is well-motivated — it addresses a real IRQ safety issue in `amdgpu_pasid_free()` which can be called from hardirq context. The code is clean and the approach is correct in principle. However, **the patch has a critical bug: it uses the wrong XArray API functions**.

The commit message claims that `XA_FLAGS_LOCK_IRQ` makes "all xa operations use IRQ-safe locking internally", but this is **incorrect**. The flag only records the lock type for the XArray internals (the lock drop/reacquire done during internal memory allocation, plus the lockdep class). The convenience wrappers `xa_alloc_cyclic()` and `xa_erase()` use plain `xa_lock()`/`xa_unlock()`, **not** `xa_lock_irq()`/`xa_unlock_irq()`. The IRQ-safe convenience wrappers are separate functions: `xa_alloc_cyclic_irq()` and `xa_erase_irq()`.

This means the patch **does not actually fix the IRQ safety issue** it claims to fix. `amdgpu_pasid_free()` calling `xa_erase()` from hardirq context is still unsafe, exactly as the original `spin_lock()` was.

---
Generated by Claude Code Patch Reviewer


* Claude review: drm/amdgpu: replace PASID IDR with XArray
  2026-03-31 14:21 [PATCH v7] drm/amdgpu: replace PASID IDR with XArray Mikhail Gavrilov
  2026-03-31 14:29 ` Christian König
  2026-03-31 21:43 ` Claude review: " Claude Code Review Bot
@ 2026-03-31 21:43 ` Claude Code Review Bot
  2 siblings, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-03-31 21:43 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**Critical bug — wrong locking wrappers:**

```c
+	r = xa_alloc_cyclic(&amdgpu_pasid_xa, &pasid, xa_mk_value(0),
+			    XA_LIMIT(1, (1U << bits) - 1),
+			    &amdgpu_pasid_xa_next, GFP_KERNEL);
```

This must be `xa_alloc_cyclic_irq()` to disable interrupts while holding the xa_lock. The plain `xa_alloc_cyclic()` uses `xa_lock()`/`xa_unlock()` (see `include/linux/xarray.h:976-987`), which does not disable IRQs.

```c
+	xa_erase(&amdgpu_pasid_xa, pasid);
```

This also needs IRQ-safe locking. The plain `xa_erase()` uses `xa_lock()`/`xa_unlock()` (see `lib/xarray.c:1665-1674`), which does not disable IRQs. `amdgpu_pasid_free()` can be called from hardirq context (via the fence signal path documented in the commit message), and `xa_alloc_cyclic()` in process context takes the same lock without disabling IRQs, so the exact same deadlock scenario described in the commit message still applies. Note that `xa_erase_irq()` is not quite right here either: its `xa_unlock_irq()` would unconditionally re-enable interrupts inside a hardirq handler.

**The fix:** Replace `xa_alloc_cyclic()` with `xa_alloc_cyclic_irq()` in the allocation path, and use explicit `xa_lock_irqsave()`/`__xa_erase()`/`xa_unlock_irqrestore()` in `amdgpu_pasid_free()`; since it can be called from hardirq context, `_irqsave` is the safest choice there.

**Note:** The v5 changelog mentions that the author previously used explicit `xa_lock_irqsave`/`__xa_erase` for this exact reason, but was told to use the convenience wrappers in v6. The v6/v7 approach unfortunately reintroduces the problem.

**Minor nits (non-blocking):**

- The `XA_FLAGS_LOCK_IRQ` flag in `DEFINE_XARRAY_FLAGS` is not harmful, but it also isn't doing what the author thinks. It sets the lock class for lockdep — it doesn't make convenience wrappers use IRQ-safe locking. It's still correct to set it (it tells lockdep the lock is used in IRQ context), but the actual API calls must be the `_irq` variants.

- The `xa_mk_value(0)` store is fine — it stores a non-NULL value so the slot is occupied, preventing double-allocation of the same PASID.

- The `XA_FLAGS_ALLOC1` usage is correct (zero is not a valid PASID), though the `XA_LIMIT(1, ...)` already enforces the minimum of 1.

- The cleanup function change to `xa_destroy()` is correct and straightforward.

---
Generated by Claude Code Patch Reviewer


* Claude review: drm/amdgpu: replace PASID IDR with XArray
  2026-03-31 11:17 [PATCH v6] " Mikhail Gavrilov
  2026-03-31 21:48 ` Claude review: " Claude Code Review Bot
@ 2026-03-31 21:48 ` Claude Code Review Bot
  1 sibling, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-03-31 21:48 UTC (permalink / raw)
  To: dri-devel-reviews

Overall Series Review

Subject: drm/amdgpu: replace PASID IDR with XArray
Author: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
Patches: 3
Reviewed: 2026-04-01T07:48:28.604416

---

This is a single-patch v6 series replacing an IDR+spinlock PASID allocator with XArray to fix two locking bugs introduced by commit 8f1de51f49be. The commit message is excellent — it clearly describes both bugs, includes the hardirq call chain, and documents the real-world trigger scenario.

**However, the patch has a critical correctness bug: it does not actually fix the hardirq deadlock (bug #2) that it claims to fix.** The patch uses `xa_erase()` and `xa_alloc_cyclic()`, which both use plain `xa_lock()`/`xa_unlock()` (i.e., `spin_lock()`/`spin_unlock()`). The `XA_FLAGS_LOCK_IRQ` flag does **not** make these wrapper functions use IRQ-safe locking; it only affects the lock drop/reacquire path the XArray uses internally during memory allocation.

The v5 approach (explicit `xa_lock_irqsave`/`__xa_erase`) was actually on the right track. The v6 commit message states *"all xa operations use IRQ-safe locking internally"* but this is incorrect.

---
Generated by Claude Code Patch Reviewer


* Claude review: drm/amdgpu: replace PASID IDR with XArray
  2026-03-31 11:17 [PATCH v6] " Mikhail Gavrilov
@ 2026-03-31 21:48 ` Claude Code Review Bot
  2026-03-31 21:48 ` Claude Code Review Bot
  1 sibling, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-03-31 21:48 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**Critical bug — hardirq deadlock not fixed:**

The core issue is that `amdgpu_pasid_free()` is called from hardirq context (via `amdgpu_pasid_free_cb` → fence signal path), and `amdgpu_pasid_alloc()` is called from process context. Both must use IRQ-disabling locking to prevent deadlock.

```c
xa_erase(&amdgpu_pasid_xa, pasid);
```

`xa_erase()` is defined in `lib/xarray.c` as:
```c
void *xa_erase(struct xarray *xa, unsigned long index)
{
	void *entry;

	xa_lock(xa);        /* = spin_lock() -- IRQs NOT disabled */
	entry = __xa_erase(xa, index);
	xa_unlock(xa);      /* = spin_unlock() */
	return entry;
}
```

Similarly, `xa_alloc_cyclic()` in `include/linux/xarray.h`:
```c
static inline int xa_alloc_cyclic(...)
{
	xa_lock(xa);        /* = spin_lock() -- IRQs NOT disabled */
	err = __xa_alloc_cyclic(...);
	xa_unlock(xa);
	return err < 0 ? err : 0;
}
```

The `XA_FLAGS_LOCK_IRQ` flag only affects `xas_lock_type()`/`xas_unlock_type()` used internally by `__xas_nomem()` when the XArray needs to drop the lock for memory allocation. It has **no effect** on the outer locking in `xa_erase()` or `xa_alloc_cyclic()`.

The XArray API provides dedicated IRQ-safe wrappers for exactly this situation:
- `xa_erase_irq()` — uses `xa_lock_irq()`/`xa_unlock_irq()`
- `xa_alloc_cyclic_irq()` — uses `xa_lock_irq()`/`xa_unlock_irq()`

**However**, `xa_erase_irq()` uses `spin_unlock_irq()` which unconditionally re-enables interrupts. Calling this from hardirq context would re-enable interrupts inside the hardirq handler, which is also wrong.

**The correct fix for `amdgpu_pasid_free()`** (callable from any context) should use `xa_lock_irqsave`/`xa_unlock_irqrestore` as v5 proposed:

```c
void amdgpu_pasid_free(u32 pasid)
{
	unsigned long flags;

	trace_amdgpu_pasid_freed(pasid);

	xa_lock_irqsave(&amdgpu_pasid_xa, flags);
	__xa_erase(&amdgpu_pasid_xa, pasid);
	xa_unlock_irqrestore(&amdgpu_pasid_xa, flags);
}
```

For `amdgpu_pasid_alloc()` (process context only), `xa_alloc_cyclic_irq()` would work correctly, or alternatively the same `irqsave` pattern.

**Other observations (minor, all correct):**

- `DEFINE_XARRAY_FLAGS(amdgpu_pasid_xa, XA_FLAGS_LOCK_IRQ | XA_FLAGS_ALLOC1)` — using `XA_FLAGS_ALLOC1` to prevent PASID 0 allocation is appropriate, and makes the explicit `XA_LIMIT(1, ...)` minimum redundant but harmless.

- The return value handling is correct: `xa_alloc_cyclic()` returns 0 on success (squashing the wrap indicator), stores the ID in `&pasid`, and the code correctly checks `r >= 0` then returns `pasid`.

- Storing `xa_mk_value(0)` as the entry is fine — XArray needs a non-NULL entry for the slot to be considered allocated, and `xa_mk_value(0)` serves this purpose since the original IDR code stored `NULL` via `idr_alloc_cyclic(..., NULL, ...)`.

- The `xa_destroy()` call in `amdgpu_pasid_mgr_cleanup()` is correct and does not need external locking.

- Dropping `Cc: stable` is correct since the regression (commit 8f1de51f49be) hasn't reached a stable kernel.

**Recommendation: NAK in current form.** The patch must use IRQ-safe locking wrappers. The simplest correct approach would be to use `xa_alloc_cyclic_irq()` for allocation and `xa_lock_irqsave`/`__xa_erase`/`xa_unlock_irqrestore` for free, as the v5 approach partially identified.

---
Generated by Claude Code Patch Reviewer


end of thread, other threads:[~2026-03-31 21:48 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-31 14:21 [PATCH v7] drm/amdgpu: replace PASID IDR with XArray Mikhail Gavrilov
2026-03-31 14:29 ` Christian König
2026-03-31 14:38   ` Alex Deucher
2026-03-31 17:12     ` Mikhail Gavrilov
2026-03-31 21:43 ` Claude review: " Claude Code Review Bot
2026-03-31 21:43 ` Claude Code Review Bot
  -- strict thread matches above, loose matches on Subject: below --
2026-03-31 11:17 [PATCH v6] " Mikhail Gavrilov
2026-03-31 21:48 ` Claude review: " Claude Code Review Bot
2026-03-31 21:48 ` Claude Code Review Bot
2026-03-30 19:11 [PATCH v5] " Mikhail Gavrilov
2026-03-31  6:57 ` Claude review: " Claude Code Review Bot
2026-03-31  6:57 ` Claude Code Review Bot
2026-03-30 14:50 [PATCH v4] " Mikhail Gavrilov
2026-03-31  7:08 ` Claude review: " Claude Code Review Bot
2026-03-31  7:08 ` Claude Code Review Bot
2026-03-30 11:35 [PATCH v3] " Mikhail Gavrilov
2026-03-31  7:18 ` Claude review: " Claude Code Review Bot
2026-03-31  7:18 ` Claude Code Review Bot

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox