From mboxrd@z Thu Jan 1 00:00:00 1970
From: Claude Code Review Bot
To: dri-devel-reviews@example.com
Subject: Claude review: Re: [PATCH 2/2] drm/amdgpu: protect queue access in signal IOCTL
Date: Tue, 10 Mar 2026 12:44:43 +1000
Message-ID:
In-Reply-To:
References: <20260309022229.63071-1-chenyuan_mi@163.com>
X-Mailer: Claude Code Patch Reviewer
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Patch Review

**Already fixed upstream via refcounting.** The current tree uses:

```c
queue = amdgpu_userq_get(userq_mgr, args->queue_id);
```

(line 522) instead of raw `xa_load()`. Combined with `amdgpu_userq_put(queue)` at the cleanup (line 614), this provides proper lifetime protection.

**Additional concerns with this patch's approach:**

- **Reliance on an implicit locking side-effect**: The patch relies on the fact that `amdgpu_userq_ensure_ev_fence()` acquires `userq_mutex` as a side-effect, and then adds `mutex_unlock()` calls in error paths between it and the existing unlock. This coupling is fragile: the comment `/* Retrieve the user queue under userq_mutex (held by ensure_ev_fence) */` documents a non-obvious invariant, and if `ensure_ev_fence()`'s locking behavior ever changes, this code silently breaks. It would be cleaner to acquire the mutex explicitly and separately.

- **`amdgpu_userq_fence_alloc()` under the mutex**: Moving `ensure_ev_fence` earlier means `amdgpu_userq_fence_alloc()` (a `GFP_KERNEL` allocation from a slab cache) is now called under `userq_mutex`. While this works, it needlessly extends the critical section: the allocation doesn't access queue state and doesn't need mutex protection.

- **Fixes tag**: The `Fixes:` tag references `a292fdecd728`, the original signal/wait IOCTL commit. The `Cc: stable` tag is appropriate for the severity, but since the patch doesn't apply to current trees, backporting would need to be done carefully against the actual stable tree base.
--- Generated by Claude Code Patch Reviewer