From mboxrd@z Thu Jan 1 00:00:00 1970
From: Claude Code Review Bot
To: dri-devel-reviews@example.com
Subject: Claude review: drm/tyr: add GPU virtual memory module
Date: Tue, 03 Mar 2026 12:48:20 +1000
Message-ID: 
In-Reply-To: <20260302232500.244489-10-deborah.brouwer@collabora.com>
References: <20260302232500.244489-1-deborah.brouwer@collabora.com>
 <20260302232500.244489-10-deborah.brouwer@collabora.com>
X-Mailer: Claude Code Patch Reviewer
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Patch Review

**Author:** Boris Brezillon / Daniel Almeida / Deborah Brouwer

This is the largest patch. The VM module integrates GPUVM with the hardware page table.

Observations:

1. `reserve_range` is set to `0..0u64` (an empty range):

```rust
+        let reserve_range = 0..0u64;
```

This means no VA space is reserved for the kernel. For the MCU firmware VM this is fine, since the driver controls all mappings, but for user VMs this will need to be revisited.

2. In `sm_step_map`, when a mapping fails partway through the SGT entries, the cleanup unmaps from `start_iova`:

```rust
+            Err(e) => {
+                let total_mapped = iova - start_iova;
+                if total_mapped > 0 {
+                    pt_unmap(context.pt, start_iova..(start_iova + total_mapped)).ok();
+                }
+                return Err(e);
+            }
```

This is correct: it rolls back the mappings that succeeded for earlier SGT entries. However, the `.ok()` silently discards unmap failures during error recovery; a `pr_warn!` or similar would make such failures easier to debug.

3. The `get_pgsize()` function handles 4K and 2M pages:

```rust
+fn get_pgsize(addr: u64, size: u64) -> (u64, u64) {
+    let blk_offset_2m = addr.wrapping_neg() % (SZ_2M as u64);
```

Using `wrapping_neg()` to compute the distance to the next boundary is idiomatic. The function also caps the number of 2M pages at the 1G boundary, which is correct for AArch64 LPAE page tables, where the next-level table covers 1G.

4. `PtUpdateContext::drop()` calls both `end_vm_update()` and `flush_vm()`, logging errors but not propagating them. This is unavoidable in `Drop`, but it means that if the flush fails, GPU state may be left inconsistent.

5. In `pt_map()`:

```rust
+        let (mapped, result) = unsafe {
+            pt.map_pages(
+                curr_iova as usize,
+                (curr_paddr as usize).try_into().unwrap(),
```

The `unwrap()` on `try_into()` could panic if `curr_paddr` does not fit in the target `PhysAddr` type. On 32-bit systems this could be an issue, but the driver requires ARM64, so it is safe in practice. Worth a comment.

6. The SAFETY comments for `pt.map_pages()` and `pt.unmap_pages()` reference `PtUpdateContext` as ensuring exclusive access. This is sound because the MMU lock plus the `start_vm_update()` sequence prevents concurrent page table modifications.

---
Generated by Claude Code Patch Reviewer
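
P.S. The `wrapping_neg()` idiom from point 3 can be checked in isolation. The sketch below is standalone (not the driver's code): `SZ_2M` and `dist_to_next_2m` are local stand-ins illustrating why `addr.wrapping_neg() % SZ_2M` yields the distance from `addr` up to the next 2 MiB boundary.

```rust
// Standalone sketch of the boundary arithmetic used in get_pgsize().
// SZ_2M is a local stand-in for the driver's constant.
const SZ_2M: u64 = 2 * 1024 * 1024;

/// Distance from `addr` to the next 2 MiB boundary (0 if already aligned).
/// This works because 2^64 is a multiple of 2 MiB, so the two's-complement
/// negation (2^64 - addr) reduced mod SZ_2M equals (SZ_2M - addr % SZ_2M) % SZ_2M.
fn dist_to_next_2m(addr: u64) -> u64 {
    addr.wrapping_neg() % SZ_2M
}

fn main() {
    assert_eq!(dist_to_next_2m(0), 0);                   // already aligned
    assert_eq!(dist_to_next_2m(SZ_2M), 0);               // aligned at a 2M boundary
    assert_eq!(dist_to_next_2m(0x1000), SZ_2M - 0x1000); // 4K past a boundary
    assert_eq!(dist_to_next_2m(SZ_2M - 1), 1);           // one byte short
    println!("ok");
}
```

The same trick generalizes to any power-of-two block size, which is why it shows up across page-table code.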