From: Claude Code Review Bot <claude-review@example.com>
To: dri-devel-reviews@example.com
Subject: Claude review: gpu: nova-core: fix stack overflow in GSP memory allocation
Date: Sat, 14 Feb 2026 07:16:12 +1000	[thread overview]
Message-ID: <review-patch1-20260213-drm-rust-next-v2-1-aa094f78721a@proton.me> (raw)
In-Reply-To: <20260213-drm-rust-next-v2-1-aa094f78721a@proton.me>

Patch Review

**LogBuffer changes (gsp.rs)**

The `LogBuffer::new` changes look correct. The original code:

> -        let ptes = PteArray::<NUM_PAGES>::new(obj.0.dma_handle())?;
> -        // SAFETY: `obj` has just been created and we are its sole user.
> -        unsafe {
> -            // Copy the self-mapping PTE at the expected location.
> -            obj.0
> -                .as_slice_mut(size_of::<u64>(), size_of_val(&ptes))?
> -                .copy_from_slice(ptes.as_bytes())
> -        };

is replaced with:

> +        let start_addr = obj.0.dma_handle();
> +
> +        // SAFETY: `obj` has just been created and we are its sole user.
> +        let pte_region = unsafe {
> +            obj.0
> +                .as_slice_mut(size_of::<u64>(), NUM_PAGES * size_of::<u64>())?
> +        };
> +
> +        // As in [`DmaGspMem`], this is a  one by one GSP Page write to the memory
> +        // to avoid stack overflow when allocating the whole array at once.
> +        for (i, chunk) in pte_region.chunks_exact_mut(size_of::<u64>()).enumerate() {
> +            let pte_value = start_addr
> +                .checked_add(num::usize_as_u64(i) << GSP_PAGE_SHIFT)
> +                .ok_or(EOVERFLOW)?;
> +
> +            chunk.copy_from_slice(&pte_value.to_ne_bytes());
> +        }

Since the `CoherentAllocation` is a `CoherentAllocation<u8>`, the `as_slice_mut` offset and count parameters are in bytes. The original code passed `size_of::<u64>()` (8) as the offset and `size_of_val(&ptes)` as the count, which evaluated to `NUM_PAGES * 8`. The new code passes `size_of::<u64>()` (8) as the offset and `NUM_PAGES * size_of::<u64>()` as the count, which is equivalent. `chunks_exact_mut(size_of::<u64>())` then iterates over 8-byte chunks, yielding `NUM_PAGES * 8 / 8 = NUM_PAGES` of them, so every PTE is written. This looks correct.
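
To sanity-check the arithmetic outside the kernel crate, here is a minimal standalone sketch (a `Vec<u8>` stands in for the coherent allocation, and `GSP_PAGE_SHIFT`/`NUM_PAGES` are illustrative values, not the driver's constants):

```rust
use std::mem::size_of;

const GSP_PAGE_SHIFT: u32 = 12; // assumed 4 KiB GSP pages
const NUM_PAGES: usize = 16; // small stand-in value for the example

fn main() {
    let start_addr: u64 = 0x1000_0000;
    // Stand-in for the byte slice returned by `as_slice_mut`.
    let mut pte_region = vec![0u8; NUM_PAGES * size_of::<u64>()];

    for (i, chunk) in pte_region.chunks_exact_mut(size_of::<u64>()).enumerate() {
        let pte_value = start_addr
            .checked_add((i as u64) << GSP_PAGE_SHIFT)
            .expect("address overflow");
        chunk.copy_from_slice(&pte_value.to_ne_bytes());
    }

    // NUM_PAGES * 8 bytes split into 8-byte chunks gives exactly NUM_PAGES PTEs.
    assert_eq!(pte_region.chunks_exact(size_of::<u64>()).count(), NUM_PAGES);
}
```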

Minor nit: the comment has a double space ("this is a  one by one").

**Cmdq changes (gsp/cmdq.rs)**

> +        const NUM_PAGES: usize = GSP_PAGE_SIZE / size_of::<u64>();

This computes 4096 / 8 = 512, matching the original `PteArray<{ GSP_PAGE_SIZE / size_of::<u64>() }>` generic parameter. However, the name `NUM_PAGES` is potentially misleading: it isn't the number of GSP pages in the `GspMem` allocation, it's the number of PTE *entries* that fit in one GSP page. The existing code used this same expression as the array size, so this isn't a new issue, but the name invites confusion with the `Cmdq::NUM_PTES` constant, which equals `size_of::<GspMem>() >> GSP_PAGE_SHIFT` (the actual number of pages in the `GspMem` structure). That said, this is a naming preference, not a bug.
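
If a rename is wanted, something along these lines would keep the distinction obvious (purely a suggestion; the name is illustrative):

```rust
/// Number of 64-bit PTEs that fit in a single GSP page; not the number of
/// pages covered by the GspMem allocation (that is `Cmdq::NUM_PTES`).
const PTES_PER_PAGE: usize = GSP_PAGE_SIZE / size_of::<u64>();
```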

> +        let item = gsp_mem.item_from_index(0)?;
> +        for i in 0..NUM_PAGES {
> +            let pte_value = gsp_mem
> +                .dma_handle()
> +                .checked_add(num::usize_as_u64(i) << GSP_PAGE_SHIFT)
> +                .ok_or(EOVERFLOW)?;
> +
> +            // SAFETY: `item_from_index` ensures that `item` is always a valid pointer and can be
> +            // dereferenced. The compiler also further validates the expression on whether `field`
> +            // is a member of `item` when expanded by the macro.
> +            //
> +            // Further, this is dma_write! macro expanded and modified to allow for individual
> +            // page write.
> +            unsafe {
> +                let ptr_field = core::ptr::addr_of_mut!((*item).ptes[i]);
> +                gsp_mem.field_write(ptr_field, pte_value);
> +            }
> +        }

The safety comment says this is the `dma_write!` macro "expanded and modified," which is an accurate description of what's happening. The `field_write` method performs a volatile write of each `u64` PTE value. The original `dma_write!` macro for the whole `PteArray` would have written the entire struct via a single volatile write — but since the GSP isn't running at this point during initialization (the command queue hasn't been used yet), element-by-element volatile writes should be functionally equivalent.
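
As a standalone illustration of the underlying pattern (plain `core::ptr`, not the kernel's `CoherentAllocation` API), per-element volatile stores write each PTE directly to its destination without ever materializing the full array as a stack temporary:

```rust
use core::ptr;

#[repr(C)]
struct Ptes {
    ptes: [u64; 512],
}

// Each PTE is written with its own volatile store; no `[u64; 512]` temporary
// is built on the stack first.
fn fill_ptes(dst: &mut Ptes, start_addr: u64, page_shift: u32) {
    for i in 0..dst.ptes.len() {
        let elem = ptr::addr_of_mut!(dst.ptes[i]);
        // SAFETY: `elem` was just derived from a live `&mut Ptes`, so it is
        // valid, properly aligned, and exclusively borrowed for this write.
        unsafe { ptr::write_volatile(elem, start_addr + ((i as u64) << page_shift)) };
    }
}
```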

The struct field type was also changed:

> -    ptes: PteArray<{ GSP_PAGE_SIZE / size_of::<u64>() }>,
> +    ptes: [u64; GSP_PAGE_SIZE / size_of::<u64>()],

This is correct since `PteArray` was just a `#[repr(C)]` newtype around `[u64; N]`, so the layout is identical and the `GspMem` struct layout is preserved. The `AsBytes`/`FromBytes` impls for `GspMem` are manual `unsafe impl`s so they don't depend on `PteArray` implementing those traits.
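
If desired, that layout claim can be checked at compile time with something like the following (sketch only; `PteArray` is reproduced here as the pre-patch `#[repr(C)]` newtype the review describes):

```rust
use core::mem::{align_of, size_of};

#[repr(C)]
struct PteArray<const N: usize>([u64; N]);

// Compile-time check: the `#[repr(C)]` newtype and the bare array have the
// same size and alignment, so swapping the field type cannot change the
// layout of the containing struct.
const _: () = {
    assert!(size_of::<PteArray<512>>() == size_of::<[u64; 512]>());
    assert!(align_of::<PteArray<512>>() == align_of::<[u64; 512]>());
};
```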

One thing worth verifying: the `PteArray` type is also imported in `cmdq.rs`:

> -        PteArray,

This import is removed, consistent with `PteArray` being deleted entirely from `gsp.rs`. The now-unused `DmaAddress` import is also correctly removed from `gsp.rs`: `dma_handle()` still returns a `DmaAddress`, but the type is only used through method calls and is never named explicitly anymore.

The `use core::iter::Iterator` addition at the top of `gsp.rs`:

> +use core::iter::Iterator;

This appears unnecessary — `Iterator` is in the prelude and the patch doesn't use it explicitly as a trait bound anywhere in `gsp.rs`. The `chunks_exact_mut` method is on `[T]`, not on `Iterator`. Was this left over from an earlier version of the patch?
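
A minimal standalone check of that claim (this compiles without any `use core::iter::Iterator`, since the trait is in the prelude and `chunks_exact_mut` is an inherent slice method):

```rust
// `enumerate` and `count` resolve through the prelude's `Iterator` trait.
fn count_ptes(buf: &mut [u8]) -> usize {
    buf.chunks_exact_mut(8).enumerate().count()
}
```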

Overall this is a straightforward and correct fix. The only items worth addressing are the unnecessary `use core::iter::Iterator` import and the double-space typo in the comment.

---
Generated by Claude Code Patch Reviewer
