From mboxrd@z Thu Jan 1 00:00:00 1970
From: Claude Code Review Bot
To: dri-devel-reviews@example.com
Subject: Claude review: drm/qxl: Convert qxl release idr to xarray
Date: Thu, 23 Apr 2026 10:09:16 +1000
Message-ID:
In-Reply-To: <20260420083646.179545-1-liuqiangneo@163.com>
References: <20260420083646.179545-1-liuqiangneo@163.com>
X-Mailer: Claude Code Patch Reviewer
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Patch Review

**Bug: `GFP_NOWAIT` should be `GFP_KERNEL`**

The original code used the IDR preload pattern to ensure allocation success:

```c
idr_preload(GFP_KERNEL);
spin_lock(&qdev->release_idr_lock);
handle = idr_alloc(&qdev->release_idr, release, 1, 0, GFP_NOWAIT);
...
spin_unlock(&qdev->release_idr_lock);
idr_preload_end();
```

`idr_preload(GFP_KERNEL)` pre-allocates memory (sleeping if necessary) so that the subsequent `GFP_NOWAIT` allocation under the spinlock is guaranteed to succeed. The replacement:

```c
r = xa_alloc(&qdev->release_xa, &handle, release, xa_limit_32b, GFP_NOWAIT);
```

uses `GFP_NOWAIT` without any preloading. Since `xa_alloc` may need to allocate internal xarray nodes, and `GFP_NOWAIT` prevents sleeping, this will fail more readily under memory pressure than the original code. Now that no external spinlock is held, `GFP_KERNEL` should be used instead (see the combined sketch in the P.S. below).

**Unnecessary locking around `xa_load`**

In `qxl_release_from_id_locked`:

```c
xa_lock(&qdev->release_xa);
release = xa_load(&qdev->release_xa, id);
xa_unlock(&qdev->release_xa);
```

`xa_load` uses `rcu_read_lock()` internally and is safe to call without holding the xa_lock. The xa_lock is only needed for store operations. This adds unnecessary spinlock contention. A plain `xa_load()` call without the surrounding lock/unlock is sufficient and correct (sketch in the P.S.).

**Seqno atomicity: correct but worth noting**

The change from `++qdev->release_seqno` (under spinlock) to `atomic_inc_return(&qdev->release_seqno)` (after xa_alloc returns) introduces a theoretical window where two concurrent allocations could get handles in one order but seqnos in the reverse order. However, since `dma_fence_init` uses `release->id | 0xf0000000` as the fence context (unique per release), cross-release seqno ordering doesn't matter. This is fine.

**Minor: seqno incremented on allocation failure**

```c
r = xa_alloc(&qdev->release_xa, &handle, release, xa_limit_32b, GFP_NOWAIT);
release->base.seqno = atomic_inc_return(&qdev->release_seqno);
if (r < 0) {
```

The seqno is bumped before the error check, wasting a sequence number on failure. The original code had the same behavior, so this isn't a regression, but while touching this code it would be easy to move the seqno assignment after the error check (see the combined sketch in the P.S.).

**Missing `xa_destroy` in teardown**

`qxl_device_fini()` doesn't call `xa_destroy()` for the new xarray. The original code also omitted `idr_destroy()`, so this isn't a regression, but it would be good to clean up here while converting (sketch in the P.S.).

**Variable renames: fine**

The `idr_ret` → `xa_ret` renames in `qxl_alloc_surface_release_reserved` and `qxl_alloc_release_reserved` are straightforward and correct. The error message update from `release_idr` to `release_xa` is appropriate.

---

Generated by Claude Code Patch Reviewer
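
P.S. Untested sketches for the suggestions above. The `release_xa` and `release_seqno` names come from the patch itself; the surrounding error handling is paraphrased from the existing driver rather than copied, so treat these as illustrations, not drop-in replacements.

Combined allocation-path sketch: `GFP_KERNEL` now that no spinlock is held, and the seqno bump moved after the error check:

```c
u32 handle;
int r;

/* No spinlock is held here, so a sleeping allocation is fine. */
r = xa_alloc(&qdev->release_xa, &handle, release, xa_limit_32b,
	     GFP_KERNEL);
if (r < 0) {
	/* Bail out before consuming a sequence number. */
	kfree(release);
	*ret = NULL;
	return r;
}
release->base.seqno = atomic_inc_return(&qdev->release_seqno);
release->id = handle;
*ret = release;
```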
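
Lockless lookup sketch for `qxl_release_from_id_locked()`; the lock pair can simply be dropped:

```c
struct qxl_release *release;

/* xa_load() takes rcu_read_lock() internally; no xa_lock needed. */
release = xa_load(&qdev->release_xa, id);
```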
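
Teardown sketch; the exact placement inside `qxl_device_fini()` is illustrative:

```c
void qxl_device_fini(struct qxl_device *qdev)
{
	/* ... existing teardown ... */

	/* Free the xarray's internal bookkeeping; all entries must
	 * already have been removed by this point. */
	xa_destroy(&qdev->release_xa);
}
```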