* [PATCH v2] drm/imagination: acquire vm_ctx->lock before mapping memory to GPU VM
@ 2026-04-21 17:57 Icenowy Zheng
2026-04-22 22:12 ` Claude review: " Claude Code Review Bot
2026-04-22 22:12 ` Claude Code Review Bot
0 siblings, 2 replies; 3+ messages in thread
From: Icenowy Zheng @ 2026-04-21 17:57 UTC (permalink / raw)
To: Frank Binns, Matt Coster, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Simona Vetter
Cc: Brendan King, Danilo Krummrich, dri-devel, linux-kernel,
Icenowy Zheng, Icenowy Zheng, stable
The drm gpuvm code doesn't protect find operation against map operation,
and the driver needs to ensure a map operation shouldn't happen when a
find operation is in progress.
As all occurences of drm_gpuva_find*() is already guarded by
vm_ctx->lock, make pvr_vm_map() to acquire this lock to prevent
disturbing any find operation.
This fixes occasional NULL deference in drm_gpuva_find*().
Cc: stable@vger.kernel.org
Fixes: 4bc736f890ce ("drm/imagination: vm: make use of GPUVM's drm_exec helper")
Signed-off-by: Icenowy Zheng <zhengxingda@iscas.ac.cn>
---
Changes in v2:
- Fixed wrong commit prefix.
drivers/gpu/drm/imagination/pvr_vm.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
index e1ec60f34b6e6..eea88e7ad03c1 100644
--- a/drivers/gpu/drm/imagination/pvr_vm.c
+++ b/drivers/gpu/drm/imagination/pvr_vm.c
@@ -747,6 +747,7 @@ pvr_vm_map(struct pvr_vm_context *vm_ctx, struct pvr_gem_object *pvr_obj,
pvr_gem_object_get(pvr_obj);
+ mutex_lock(&vm_ctx->lock);
err = drm_gpuvm_exec_lock(&vm_exec);
if (err)
goto err_cleanup;
@@ -754,9 +755,11 @@ pvr_vm_map(struct pvr_vm_context *vm_ctx, struct pvr_gem_object *pvr_obj,
err = pvr_vm_bind_op_exec(&bind_op);
drm_gpuvm_exec_unlock(&vm_exec);
+ mutex_unlock(&vm_ctx->lock);
err_cleanup:
pvr_vm_bind_op_fini(&bind_op);
+ mutex_unlock(&vm_ctx->lock);
return err;
}
--
2.52.0
^ permalink raw reply related	[flat|nested] 3+ messages in thread
* Claude review: drm/imagination: acquire vm_ctx->lock before mapping memory to GPU VM
2026-04-21 17:57 [PATCH v2] drm/imagination: acquire vm_ctx->lock before mapping memory to GPU VM Icenowy Zheng
@ 2026-04-22 22:12 ` Claude Code Review Bot
2026-04-22 22:12 ` Claude Code Review Bot
1 sibling, 0 replies; 3+ messages in thread
From: Claude Code Review Bot @ 2026-04-22 22:12 UTC (permalink / raw)
To: dri-devel-reviews
Overall Series Review
Subject: drm/imagination: acquire vm_ctx->lock before mapping memory to GPU VM
Author: Icenowy Zheng <zhengxingda@iscas.ac.cn>
Patches: 1
Reviewed: 2026-04-23T08:12:42.085445
---
This is a single-patch fix for a race condition in the PowerVR (Imagination) GPU driver where `pvr_vm_map()` can concurrently modify the GPUVM data structure while `drm_gpuva_find*()` is searching it, leading to NULL dereferences. The motivation is sound — looking at `pvr_vm_unmap()` (line 865) and `pvr_vm_unmap_all()` (line 891), both already hold `vm_ctx->lock` around their GPUVM operations, so `pvr_vm_map()` needs to do the same.
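To make the locking discipline concrete, here is a minimal userspace sketch of the pattern the review describes: every walker (find) and every mutator (map) serialize on a single mutex, mirroring how `vm_ctx->lock` is meant to guard `drm_gpuva_find*()`. All names (`mapping`, `map_range`, `find_range`) are invented for illustration; this is not driver code.

```c
/*
 * Userspace model of the vm_ctx->lock discipline: find and map both
 * take the same mutex, so a walker can never observe a half-linked
 * node.  Illustrative only -- not PowerVR driver code.
 */
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

struct mapping {
	unsigned long va;
	struct mapping *next;
};

static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;
static struct mapping *mappings;

/* Analogue of pvr_vm_map(): link a new node under the lock. */
static void map_range(unsigned long va)
{
	struct mapping *m = malloc(sizeof(*m));

	m->va = va;
	pthread_mutex_lock(&ctx_lock);
	m->next = mappings;
	mappings = m;
	pthread_mutex_unlock(&ctx_lock);
}

/* Analogue of drm_gpuva_find(): walk the list under the same lock. */
static struct mapping *find_range(unsigned long va)
{
	struct mapping *m;

	pthread_mutex_lock(&ctx_lock);
	for (m = mappings; m; m = m->next)
		if (m->va == va)
			break;
	pthread_mutex_unlock(&ctx_lock);
	return m;
}
```

Drop the lock from `map_range()` alone and `find_range()` can chase a pointer mid-update, which is exactly the window the patch tries to close.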
**However, the patch as written has a critical double-unlock bug on the success path** and must not be applied in its current form.
---
Generated by Claude Code Patch Reviewer
* Claude review: drm/imagination: acquire vm_ctx->lock before mapping memory to GPU VM
2026-04-21 17:57 [PATCH v2] drm/imagination: acquire vm_ctx->lock before mapping memory to GPU VM Icenowy Zheng
2026-04-22 22:12 ` Claude review: " Claude Code Review Bot
@ 2026-04-22 22:12 ` Claude Code Review Bot
1 sibling, 0 replies; 3+ messages in thread
From: Claude Code Review Bot @ 2026-04-22 22:12 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
**Bug: Double `mutex_unlock` on the success path**
After applying the patch, the success path through `pvr_vm_map()` executes:
```c
mutex_lock(&vm_ctx->lock); // line 750: acquire
err = drm_gpuvm_exec_lock(&vm_exec);
// ... succeeds, no goto ...
err = pvr_vm_bind_op_exec(&bind_op);
drm_gpuvm_exec_unlock(&vm_exec);
mutex_unlock(&vm_ctx->lock); // line 758: first unlock
err_cleanup:
pvr_vm_bind_op_fini(&bind_op);
mutex_unlock(&vm_ctx->lock); // line 762: SECOND unlock — BUG
```
On success, execution falls through past the first `mutex_unlock` at line 758 into `err_cleanup:`, where the mutex is unlocked a **second time**. This is undefined behavior for a non-recursive mutex — it corrupts the mutex state, can cause subsequent lock attempts to succeed without mutual exclusion, or trigger a `DEBUG_LOCKS_WARN_ON` splat. This effectively trades a NULL dereference race for a lock corruption bug.
The error path (where `drm_gpuvm_exec_lock` fails and jumps to `err_cleanup`) is correct — the mutex is only unlocked once.
**Fix:** Remove the `mutex_unlock` after `drm_gpuvm_exec_unlock()` and keep only the single unlock at `err_cleanup`. This mirrors the structure used in `pvr_vm_unmap_obj_locked()` (lines 800–819) where the cleanup path is the only exit:
```c
mutex_lock(&vm_ctx->lock);
err = drm_gpuvm_exec_lock(&vm_exec);
if (err)
goto err_cleanup;
err = pvr_vm_bind_op_exec(&bind_op);
drm_gpuvm_exec_unlock(&vm_exec);
err_cleanup:
pvr_vm_bind_op_fini(&bind_op);
mutex_unlock(&vm_ctx->lock);
return err;
```
**Minor nit:** The commit message says "all occurences" (should be "occurrences") and "NULL deference" (should be "dereference").
**Verdict:** The intent is correct and the fix is needed, but the implementation has a critical double-unlock bug. NAK in current form; a v3 with the single-unlock-point fix described above should be straightforward.
---
Generated by Claude Code Patch Reviewer