public inbox for drm-ai-reviews@public-inbox.freedesktop.org
From: Claude Code Review Bot <claude-review@example.com>
To: dri-devel-reviews@example.com
Subject: Claude review: gpu: nova-core: fix stack overflow in GSP memory allocation
Date: Tue, 10 Mar 2026 12:10:29 +1000
Message-ID: <review-patch4-20260309-drm-rust-next-v4-4-4ef485b19a4c@proton.me>
In-Reply-To: <20260309-drm-rust-next-v4-4-4ef485b19a4c@proton.me>

Patch Review

This is the actual bug fix. `PteArray<NUM_PAGES>` was `[u64; NUM_PAGES]` on the stack, which for typical GSP page sizes amounts to 8216+ bytes of stack usage, overflowing the kernel stack.
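To make the numbers concrete, here is a minimal sketch of the arithmetic, assuming 4 KiB GSP pages and an illustrative 4 MiB region (the exact constants and region size are not taken from the patch):

```rust
const GSP_PAGE_SHIFT: usize = 12; // illustrative: 4 KiB GSP pages

/// Stack footprint of a `[u64; num_pages]` PTE array covering `size` bytes.
fn pte_array_bytes(size: usize) -> usize {
    (size >> GSP_PAGE_SHIFT) * core::mem::size_of::<u64>()
}

fn main() {
    // Mapping 4 MiB with 4 KiB pages needs 1024 PTEs, i.e. 8 KiB on the
    // stack -- a large fraction of the whole kernel stack on many configs.
    assert_eq!(pte_array_bytes(4 << 20), 8192);
}
```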

**cmdq.rs changes** — Uses the projection macro to write PTEs one by one:

```rust
for i in 0..NUM_PTES {
    dma_write!(
        gsp_mem,
        [0]?.ptes.0[i],
        PteArray::<NUM_PTES>::entry(start, i)?
    );
}
```

The use of infallible `[i]` (not `[i]?`) for the inner array index is interesting. This relies on the compiler (LLVM) proving that `i < NUM_PTES` from the `0..NUM_PTES` loop bound, so `build_error!()` is eliminated as dead code. If the optimizer cannot prove this, compilation will fail, which acts as a safety net. This is acceptable but somewhat fragile — if future changes complicate the loop, it could become a compile error. Using `[i]?` would be safer and essentially free at runtime.
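For comparison, the trade-off can be sketched in plain Rust; the kernel's projection macro and `build_error!` are elided, and `checked_index` is a hypothetical stand-in for the fallible `[i]?` form, not an API from the patch:

```rust
// A fallible index turns an out-of-bounds access into an error value
// instead of relying on dead-code elimination of `build_error!`.
fn checked_index<const N: usize>(
    ptes: &mut [u64; N],
    i: usize,
) -> Result<&mut u64, &'static str> {
    ptes.get_mut(i).ok_or("EINVAL")
}

fn main() {
    let mut ptes = [0u64; 4];
    for i in 0..4 {
        // In-bounds indices cost at most one predictable branch, and the
        // optimizer typically removes it anyway for `0..N` loops.
        *checked_index(&mut ptes, i).unwrap() = i as u64;
    }
    assert_eq!(ptes, [0, 1, 2, 3]);
    // Out of bounds becomes a recoverable error, not a link-time failure.
    assert!(checked_index(&mut ptes, 4).is_err());
}
```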

**gsp.rs LogBuffer::new changes** — Does NOT use the projection macro or `PteArray::entry()`:

```rust
let pte_region = unsafe {
    obj.0
        .as_slice_mut(size_of::<u64>(), NUM_PAGES * size_of::<u64>())?
};
for (i, chunk) in pte_region.chunks_exact_mut(size_of::<u64>()).enumerate() {
    let pte_value = start_addr
        .checked_add(num::usize_as_u64(i) << GSP_PAGE_SHIFT)
        .ok_or(EOVERFLOW)?;
    chunk.copy_from_slice(&pte_value.to_ne_bytes());
}
```

**Issue: Code duplication** — The PTE value calculation here (`start_addr.checked_add(...)`) duplicates the logic of the newly added `PteArray::entry()` method. The `entry()` method was added in the same patch but is only called from `cmdq.rs`. The `gsp.rs` path should use it too:

```rust
let pte_value = PteArray::<NUM_PAGES>::entry(start_addr, i)?;
```

This would reduce duplication and keep the PTE calculation logic in one place.
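Assuming `entry()` keeps the signature used in cmdq.rs (start address plus index, returning a checked `u64`), the deduplicated loop might look like this self-contained model; the `PteArray` stand-in below only mirrors the presumed arithmetic and error cases, not the driver's actual type:

```rust
use core::mem::size_of;

const GSP_PAGE_SHIFT: u64 = 12; // illustrative; use the driver's constant

// Hypothetical stand-in for `PteArray::entry()`: checked base address plus
// (index << page shift), with bounds and overflow reported as errors.
struct PteArray<const N: usize>;

impl<const N: usize> PteArray<N> {
    fn entry(start: u64, i: usize) -> Result<u64, &'static str> {
        if i >= N {
            return Err("EINVAL");
        }
        start
            .checked_add((i as u64) << GSP_PAGE_SHIFT)
            .ok_or("EOVERFLOW")
    }
}

fn main() {
    const NUM_PAGES: usize = 4;
    let start_addr: u64 = 0x1000;
    let mut pte_region = vec![0u8; NUM_PAGES * size_of::<u64>()];
    // The gsp.rs loop, with the open-coded checked_add replaced by entry().
    for (i, chunk) in pte_region.chunks_exact_mut(size_of::<u64>()).enumerate() {
        let pte_value = PteArray::<NUM_PAGES>::entry(start_addr, i).unwrap();
        chunk.copy_from_slice(&pte_value.to_ne_bytes());
    }
    // Second PTE: 0x1000 + (1 << 12) = 0x2000.
    assert_eq!(&pte_region[8..16], &0x2000u64.to_ne_bytes()[..]);
}
```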

**Issue: Inconsistent approaches** — The two call sites use fundamentally different strategies: `cmdq.rs` uses `dma_write!` with projection, while `gsp.rs` uses raw `as_slice_mut` + `chunks_exact_mut` + manual byte copying. This is understandable since `LogBuffer` wraps `CoherentAllocation<u8>` (byte-level access) while `DmaGspMem` wraps `CoherentAllocation<GspMem>` (typed access), but a brief comment in `gsp.rs` explaining why the projection approach isn't used here would help future readers.
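Something along these lines would suffice for the gsp.rs comment (the wording is only a suggestion):

```rust
// `LogBuffer` wraps a `CoherentAllocation<u8>`, so there is no typed
// field to project into with `dma_write!`; write the PTEs through the
// raw byte slice instead.
```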

**Minor style nit**: The comment at line 295 has a double space:

```rust
// This is a  one by one GSP Page write to the memory
```

Overall the fix is correct and achieves the goal of eliminating the stack overflow. The inconsistency between the two call sites and the unused `PteArray::entry()` in `gsp.rs` are the main items worth addressing before merge.

---
Generated by Claude Code Patch Reviewer

Thread overview: 19+ messages
2026-03-09 16:34 [PATCH v4 0/4] Fixes the stack overflow Tim Kovalenko via B4 Relay
2026-03-09 16:34 ` [PATCH v4 1/4] rust: ptr: add `KnownSize` trait to support DST size info extraction Tim Kovalenko via B4 Relay
2026-03-10  2:10   ` Claude review: " Claude Code Review Bot
2026-03-09 16:34 ` [PATCH v4 2/4] rust: ptr: add projection infrastructure Tim Kovalenko via B4 Relay
2026-03-10  2:10   ` Claude review: " Claude Code Review Bot
2026-03-09 16:34 ` [PATCH v4 3/4] rust: dma: use pointer projection infra for `dma_{read,write}` macro Tim Kovalenko via B4 Relay
2026-03-10  2:10   ` Claude review: " Claude Code Review Bot
2026-03-09 16:34 ` [PATCH v4 4/4] gpu: nova-core: fix stack overflow in GSP memory allocation Tim Kovalenko via B4 Relay
2026-03-09 19:40   ` Danilo Krummrich
2026-03-09 22:40     ` Miguel Ojeda
2026-03-10  1:40   ` Alexandre Courbot
2026-03-10  1:51     ` Gary Guo
2026-03-10  2:10   ` Claude Code Review Bot [this message]
2026-03-09 17:00 ` [PATCH v4 0/4] Fixes the stack overflow Danilo Krummrich
2026-03-10  2:10 ` Claude review: " Claude Code Review Bot
  -- strict thread matches above, loose matches on Subject: below --
2026-02-13 19:40 [PATCH v2] gpu: nova-core: fix stack overflow in GSP memory allocation Tim Kovalenko via B4 Relay
2026-02-13 21:16 ` Claude review: " Claude Code Review Bot
2026-02-13 21:16 ` Claude Code Review Bot
2026-02-13  3:49 [PATCH] " Tim Kovalenko via B4 Relay
2026-02-13  8:06 ` Claude review: " Claude Code Review Bot
2026-02-13  8:06 ` Claude Code Review Bot
