From: Claude Code Review Bot <claude-review@example.com>
To: dri-devel-reviews@example.com
Subject: Claude review: drm/msm: always recover the gpu
Date: Wed, 11 Feb 2026 16:22:11 +1000
Message-ID: <review-patch1-20260210-recovery_suspend_fix-v1-1-00ed9013da04@gmail.com>
In-Reply-To: <20260210-recovery_suspend_fix-v1-1-00ed9013da04@gmail.com>
**Patch Review**
**Commit Message Analysis:**
The commit message describes the issue but has some clarity problems:
```
Recover_worker will first increment the fence of the hung ring so, if
there's only one job submitted to a ring and that causes an hang, it
will early out.
```
This sentence is unclear. I believe you're saying that after incrementing the fence in the earlier code (lines 137-138 in context), the GPU may appear to have no active work, causing `msm_gpu_active()` to return false. Please clarify this logic flow more explicitly.
```
There's no guarantee that the gpu will suspend and resume before more
work is submitted and if the gpu is in a hung state it will stay in that
state and probably trigger a timeout again.
```
This is the core issue but needs evidence: Have you observed this behavior? What testing demonstrated this? The claim that "it will stay in that state" is critical to justifying this change.
**Technical Analysis:**
The change removes 28 lines (a conditional wrapper), and the enclosed code is simply de-indented. Let's examine the implications:
```diff
-	if (msm_gpu_active(gpu)) {
-		/* retire completed submits, plus the one that hung: */
-		retire_submits(gpu);
+	/* retire completed submits, plus the one that hung: */
+	retire_submits(gpu);

-		gpu->funcs->recover(gpu);
+	gpu->funcs->recover(gpu);
```
**Questions/Concerns:**
1. **Power management impact:** The original code appeared to rely on suspend/resume to clear the GPU state when idle. By removing this optimization:
- Will this cause unnecessary recovery operations on idle GPUs?
- What's the power consumption impact of calling `gpu->funcs->recover()` unconditionally?
- Does `recover()` do expensive operations even when the GPU is idle?
2. **Race condition analysis:** The commit message mentions "no guarantee that the gpu will suspend and resume before more work is submitted." However:
- What prevents the GPU from suspending *after* you check `msm_gpu_active()` but *before* calling `recover()`?
- If work can arrive at any time, wouldn't the same race exist with the original code?
- Is there locking that prevents work submission during recovery?
3. **Historical context missing:**
- Why was the `msm_gpu_active()` check added originally?
- Was it a power optimization, or was it trying to avoid some other issue?
- Have you checked `git blame` and the original commit message?
4. **Testing evidence:**
- What specific test case triggers this bug?
- Have you verified this fixes the issue?
- What regression testing has been done to ensure this doesn't break suspend/resume?
- CC'ing stable suggests this is a real bug; what's the user-visible impact?
**Code-level observations:**
```c
pm_runtime_put(&gpu->pdev->dev);
```
At line 188 (after your changes), there's a pm_runtime_put(). This pairs with a pm_runtime_get earlier in the function. By always running the recovery path, you're ensuring the GPU stays powered up longer. Is this intentional and acceptable?
The code flow after your change:
1. Increment fences (marks work as complete)
2. Retire submits (may remove all pending work)
3. **ALWAYS** call `gpu->funcs->recover()` (even if GPU now appears idle)
4. Replay all submits from all rings (may be empty lists)
5. Power management put
**Alternative approaches to consider:**
1. Could you check if there are actually pending submits *before* deciding to skip recovery?
2. Should the fence increment happen *after* the active check, not before?
3. Is there a more targeted fix that addresses the specific race without removing the optimization entirely?
**Style/formatting:** The code change itself is clean: pure de-indentation with no logic change beyond removing the conditional.
**Stability tag:** The `Cc: stable@vger.kernel.org` tag is present but no `Fixes:` tag. If this fixes a specific commit, please add:
```
Fixes: <commit-id> ("original commit title")
```
**Verdict:** The change may be correct, but the justification is insufficient. Before this can be accepted, please provide:
1. More detailed explanation of the race condition or state persistence issue
2. Evidence from testing (logs, traces, reproducers)
3. Analysis of the power management impact
4. Historical context on why the check existed
5. A `Fixes:` tag if this regresses a specific commit
The core insight (that recovery should run even if the GPU appears idle after fence increment) seems reasonable, but needs stronger supporting evidence and impact analysis.
---
Generated by Claude Code Patch Reviewer
Thread overview: 3+ messages
2026-02-10 16:29 [PATCH] drm/msm: always recover the gpu Anna Maniscalco
2026-02-11 6:22 ` Claude Code Review Bot [this message]
2026-02-11 6:22 ` Claude review: " Claude Code Review Bot