public inbox for drm-ai-reviews@public-inbox.freedesktop.org
* [PATCH] pvr: acquire vm_ctx->lock before mapping memory to GPU VM
@ 2026-04-21 17:52 Icenowy Zheng
  2026-04-21 17:56 ` Icenowy Zheng
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Icenowy Zheng @ 2026-04-21 17:52 UTC (permalink / raw)
  To: Frank Binns, Matt Coster, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter
  Cc: Brendan King, Danilo Krummrich, Donald Robson, dri-devel,
	linux-kernel, Icenowy Zheng, Icenowy Zheng, stable

The drm gpuvm code doesn't protect find operation against map operation,
and the driver needs to ensure a map operation shouldn't happen when a
find operation is in progress.

As all occurences of drm_gpuva_find*() is already guarded by
vm_ctx->lock, make pvr_vm_map() to acquire this lock to prevent
disturbing any find operation.

This fixes occasional NULL deference in drm_gpuva_find*().

Cc: stable@vger.kernel.org
Fixes: 4bc736f890ce ("drm/imagination: vm: make use of GPUVM's drm_exec helper")
Signed-off-by: Icenowy Zheng <zhengxingda@iscas.ac.cn>
---
 drivers/gpu/drm/imagination/pvr_vm.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
index e1ec60f34b6e6..eea88e7ad03c1 100644
--- a/drivers/gpu/drm/imagination/pvr_vm.c
+++ b/drivers/gpu/drm/imagination/pvr_vm.c
@@ -747,6 +747,7 @@ pvr_vm_map(struct pvr_vm_context *vm_ctx, struct pvr_gem_object *pvr_obj,
 
 	pvr_gem_object_get(pvr_obj);
 
+	mutex_lock(&vm_ctx->lock);
 	err = drm_gpuvm_exec_lock(&vm_exec);
 	if (err)
 		goto err_cleanup;
@@ -754,9 +755,11 @@ pvr_vm_map(struct pvr_vm_context *vm_ctx, struct pvr_gem_object *pvr_obj,
 	err = pvr_vm_bind_op_exec(&bind_op);
 
 	drm_gpuvm_exec_unlock(&vm_exec);
+	mutex_unlock(&vm_ctx->lock);
 
 err_cleanup:
 	pvr_vm_bind_op_fini(&bind_op);
+	mutex_unlock(&vm_ctx->lock);
 
 	return err;
 }
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 4+ messages in thread

* Re: [PATCH] pvr: acquire vm_ctx->lock before mapping memory to GPU VM
  2026-04-21 17:52 [PATCH] pvr: acquire vm_ctx->lock before mapping memory to GPU VM Icenowy Zheng
@ 2026-04-21 17:56 ` Icenowy Zheng
  2026-04-22 22:14 ` Claude review: " Claude Code Review Bot
  2026-04-22 22:14 ` Claude Code Review Bot
  2 siblings, 0 replies; 4+ messages in thread
From: Icenowy Zheng @ 2026-04-21 17:56 UTC (permalink / raw)
  To: Frank Binns, Matt Coster, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter
  Cc: Brendan King, Danilo Krummrich, Donald Robson, dri-devel,
	linux-kernel, stable

Oops, wrong commit title prefix... I've written too many commits for Mesa recently.

Should be drm/imagination: instead.

Will send a replacement patch (and remove an unreachable Cc).

Quite sorry,
Icenowy

On Wed, 2026-04-22 at 01:52 +0800, Icenowy Zheng wrote:
> The drm gpuvm code doesn't protect find operation against map
> operation,
> and the driver needs to ensure a map operation shouldn't happen when
> a
> find operation is in progress.
> 
> As all occurences of drm_gpuva_find*() is already guarded by
> vm_ctx->lock, make pvr_vm_map() to acquire this lock to prevent
> disturbing any find operation.
> 
> This fixes occasional NULL deference in drm_gpuva_find*().
> 
> Cc: stable@vger.kernel.org
> Fixes: 4bc736f890ce ("drm/imagination: vm: make use of GPUVM's
> drm_exec helper")
> Signed-off-by: Icenowy Zheng <zhengxingda@iscas.ac.cn>



* Claude review: pvr: acquire vm_ctx->lock before mapping memory to GPU VM
  2026-04-21 17:52 [PATCH] pvr: acquire vm_ctx->lock before mapping memory to GPU VM Icenowy Zheng
  2026-04-21 17:56 ` Icenowy Zheng
@ 2026-04-22 22:14 ` Claude Code Review Bot
  2026-04-22 22:14 ` Claude Code Review Bot
  2 siblings, 0 replies; 4+ messages in thread
From: Claude Code Review Bot @ 2026-04-22 22:14 UTC (permalink / raw)
  To: dri-devel-reviews

Overall Series Review

Subject: pvr: acquire vm_ctx->lock before mapping memory to GPU VM
Author: Icenowy Zheng <zhengxingda@iscas.ac.cn>
Patches: 2
Reviewed: 2026-04-23T08:14:32.334692

---

This is a single-patch series that adds `vm_ctx->lock` around the `drm_gpuvm_exec_lock()`/`drm_gpuvm_exec_unlock()` + `pvr_vm_bind_op_exec()` sequence in `pvr_vm_map()`. The stated goal — protecting `drm_gpuva_find*()` from concurrent map operations — is reasonable and consistent with the locking pattern already used in `pvr_vm_unmap()` (line 865) and `pvr_vm_unmap_all()` (line 891).

**However, the patch has a critical bug: it double-unlocks `vm_ctx->lock` on the success path.**

---
Generated by Claude Code Patch Reviewer


* Claude review: pvr: acquire vm_ctx->lock before mapping memory to GPU VM
  2026-04-21 17:52 [PATCH] pvr: acquire vm_ctx->lock before mapping memory to GPU VM Icenowy Zheng
  2026-04-21 17:56 ` Icenowy Zheng
  2026-04-22 22:14 ` Claude review: " Claude Code Review Bot
@ 2026-04-22 22:14 ` Claude Code Review Bot
  2 siblings, 0 replies; 4+ messages in thread
From: Claude Code Review Bot @ 2026-04-22 22:14 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**Bug: double `mutex_unlock` on the success path**

After the patch, `pvr_vm_map()` looks like this (lines 750–764):

```c
	mutex_lock(&vm_ctx->lock);          // line 750
	err = drm_gpuvm_exec_lock(&vm_exec);
	if (err)
		goto err_cleanup;

	err = pvr_vm_bind_op_exec(&bind_op);

	drm_gpuvm_exec_unlock(&vm_exec);
	mutex_unlock(&vm_ctx->lock);        // line 758 — first unlock

err_cleanup:
	pvr_vm_bind_op_fini(&bind_op);
	mutex_unlock(&vm_ctx->lock);        // line 762 — SECOND unlock!

	return err;
}
```

On the **success path** (no error), execution flows:
1. `mutex_lock` at line 750
2. `mutex_unlock` at line 758 (after `drm_gpuvm_exec_unlock`)
3. Falls through to `err_cleanup`
4. `mutex_unlock` at line 762 — **double unlock**, which is undefined behavior on a non-recursive mutex and will trigger `DEBUG_LOCKS_WARN_ON` with lockdep enabled.

On the **error path** (`drm_gpuvm_exec_lock` fails), it correctly jumps to `err_cleanup` and unlocks once.

**Fix:** The success path must not unlock twice. The cleanest approaches would be either:

1. Drop the unlock before the label, keeping a single `mutex_unlock` in the shared cleanup block:
```c
	drm_gpuvm_exec_unlock(&vm_exec);

err_cleanup:
	pvr_vm_bind_op_fini(&bind_op);
	mutex_unlock(&vm_ctx->lock);

	return err;
```
Here `drm_gpuvm_exec_unlock()` stays on the success path before the label, mirroring the existing pattern, while the single `mutex_unlock` moves into the shared cleanup block next to `pvr_vm_bind_op_fini()`.

2. Or, use `goto` to skip the second unlock on the success path.
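
For completeness, a sketch of option 2 (a non-compilable fragment; the `out_fini` label is hypothetical, not taken from the driver):

```c
	err = pvr_vm_bind_op_exec(&bind_op);

	drm_gpuvm_exec_unlock(&vm_exec);
	mutex_unlock(&vm_ctx->lock);
	goto out_fini;

err_cleanup:
	mutex_unlock(&vm_ctx->lock);
out_fini:
	pvr_vm_bind_op_fini(&bind_op);

	return err;
```

This keeps the success-path unlock where the patch put it and jumps over the error-path unlock, at the cost of an extra label.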

**Minor observation on the commit message:**
- "NULL deference" should be "NULL dereference" (typo).
- "all occurences" should be "all occurrences".
- The commit message claims "all occurrences of `drm_gpuva_find*()` is already guarded by `vm_ctx->lock`," which is accurate based on the code at lines 865–867 and 891–898. The rationale is sound — the issue is purely in the implementation.

**Verdict:** The motivation is correct and the fix is needed, but this patch **must not be applied as-is** due to the double-unlock bug. It needs a v2.

---
Generated by Claude Code Patch Reviewer


end of thread, other threads:[~2026-04-22 22:14 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-21 17:52 [PATCH] pvr: acquire vm_ctx->lock before mapping memory to GPU VM Icenowy Zheng
2026-04-21 17:56 ` Icenowy Zheng
2026-04-22 22:14 ` Claude review: " Claude Code Review Bot
2026-04-22 22:14 ` Claude Code Review Bot
