From mboxrd@z Thu Jan 1 00:00:00 1970
From: Claude Code Review Bot
To: dri-devel-reviews@example.com
Subject: Claude review: rust: drm: gem: Add vmap functions to shmem bindings
Date: Thu, 23 Apr 2026 08:09:06 +1000
Message-ID:
In-Reply-To: <20260421234234.638503-6-lyude@redhat.com>
References: <20260421234234.638503-1-lyude@redhat.com> <20260421234234.638503-6-lyude@redhat.com>
X-Mailer: Claude Code Patch Reviewer
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Patch Review

**1. Field reordering fix:**

```rust
 pub struct Object<T: DriverObject> {
+    sgt_res: UnsafeCell<Option<Devres<…>>>,
     obj: Opaque<…>,
     parent_resv_obj: Option<…>,
-    sgt_res: UnsafeCell<Option<Devres<…>>>,
     inner: T,
 }
```

This ensures `sgt_res` is dropped before `obj`, which is critical for correctness. As noted above, this fix should arguably be in patch 4 to avoid a bisectability issue.

**2. `Clone` for `VMapOwned` uses `unwrap_unchecked`:**

```rust
impl<D: DriverObject, const SIZE: usize> Clone for VMapOwned<D, SIZE> {
    fn clone(&self) -> Self {
        unsafe { self.owner.owned_vmap().unwrap_unchecked() }
    }
}
```

The comment says "We have a successful vmap already, so this can't fail." At first glance that looks dubious: `drm_gem_shmem_vmap_locked` internally calls `drm_gem_shmem_get_pages_locked`, which does `shmem_read_mapping_page` and can fail under memory pressure. But in the vmap path, `drm_gem_shmem_vmap_locked` first increments `vmap_use_count`; if it was already > 0, it simply returns the existing vaddr without touching the allocation path. Cloning while holding an existing vmap is therefore safe: `vmap_use_count > 0` means the pages are pinned and the mapping cannot go away underneath us. This is sound.

**3. `raw_vmap` return value leak if `to_result` fails:**

```rust
fn raw_vmap(&self, min_size: usize) -> Result<*mut c_void> {
    // ...
    let mut map: MaybeUninit<bindings::iosys_map> = MaybeUninit::uninit();

    to_result(unsafe {
        bindings::dma_resv_lock(self.raw_dma_resv(), ptr::null_mut());
        let ret = bindings::drm_gem_shmem_vmap_locked(self.as_raw_shmem(), map.as_mut_ptr());
        bindings::dma_resv_unlock(self.raw_dma_resv());
        ret
    })?;
```

If `drm_gem_shmem_vmap_locked` fails, `dma_resv_unlock` is still called, which is correct. `to_result` then propagates the error, and `map` (still uninitialized) is never read. Good.

**4. iomem handling returns `ENOTSUPP`:**

```rust
if map.is_iomem {
    unsafe { self.raw_vunmap(map) };
    Err(ENOTSUPP)
}
```

This is a reasonable limitation to document. The `XXX` comment acknowledges it. In practice, shmem objects should never produce iomem mappings, so this is defensive.

**5. `From<VMapRef<…>> for VMapOwned` uses `mem::forget`:**

```rust
impl<'a, D: DriverObject, const SIZE: usize> From<VMapRef<'a, D, SIZE>> for VMapOwned<D, SIZE> {
    fn from(value: VMapRef<'a, D, SIZE>) -> Self {
        let this = Self {
            addr: value.addr,
            owner: value.owner.into(),
        };
        mem::forget(value);
        this
    }
}
```

This transfers the vmap from the borrowed to the owned variant without incrementing or decrementing the vmap refcount. The `mem::forget` prevents the `VMapRef`'s `Drop` from running `raw_vunmap`. This is correct: the mapping is logically transferred, so exactly one vunmap still happens.

**6. The kunit tests are well-structured** and test both compile-time size validation and actual I/O through the vmap. The `vmap_io` test doing byte-level reads after a 32-bit write to verify byte ordering is a nice touch, though it is architecture-dependent (it assumes little-endian). This should be fine for the test environments this runs on.

**7. The `impl_vmap_io_capable!` macro:**

```rust
unsafe fn io_read(&self, address: usize) -> $ty {
    let ptr = address as *mut $ty;
    unsafe { ptr::read(ptr) }
}
```

Using `ptr::read`/`ptr::write` (not the volatile variants) is intentional for non-iomem shmem mappings. This is correct: shmem vmap memory is regular kernel memory, not MMIO.
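The drop-order guarantee that item 1 relies on, namely that struct fields are dropped in declaration order, can be demonstrated in plain Rust. This is a standalone sketch; `Tracked`, `Holder`, and `drop_order` are illustrative names, not from the patch:

```rust
use std::cell::RefCell;

thread_local! {
    // Records the order in which Tracked values are dropped.
    static LOG: RefCell<Vec<&'static str>> = RefCell::new(Vec::new());
}

struct Tracked(&'static str);

impl Drop for Tracked {
    fn drop(&mut self) {
        LOG.with(|l| l.borrow_mut().push(self.0));
    }
}

// Rust drops struct fields in declaration order, so `sgt_res_like`
// is torn down before `obj_like`, the property the field reorder relies on.
struct Holder {
    sgt_res_like: Tracked,
    obj_like: Tracked,
}

fn drop_order() -> Vec<&'static str> {
    {
        let _h = Holder {
            sgt_res_like: Tracked("sgt_res"),
            obj_like: Tracked("obj"),
        };
    } // _h dropped here: sgt_res_like first, then obj_like
    LOG.with(|l| l.borrow().clone())
}

fn main() {
    assert_eq!(drop_order(), vec!["sgt_res", "obj"]);
}
```

Because the guarantee is purely positional, swapping two field declarations silently reverses teardown order, which is why the reorder belongs in the same patch that makes `sgt_res` depend on `obj` outliving it.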
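The ownership-transfer pattern in item 5 can be sketched outside the kernel crate. Here `BorrowedMap`/`OwnedMap` are hypothetical stand-ins for `VMapRef`/`VMapOwned`, and a plain atomic counter stands in for `vmap_use_count`:

```rust
use std::mem;
use std::sync::atomic::{AtomicUsize, Ordering};

// Stand-in for vmap_use_count.
static MAP_COUNT: AtomicUsize = AtomicUsize::new(0);

struct BorrowedMap; // analogous to VMapRef
struct OwnedMap;    // analogous to VMapOwned

impl BorrowedMap {
    fn new() -> Self {
        MAP_COUNT.fetch_add(1, Ordering::SeqCst); // "vmap"
        BorrowedMap
    }
}

impl Drop for BorrowedMap {
    fn drop(&mut self) {
        MAP_COUNT.fetch_sub(1, Ordering::SeqCst); // "vunmap"
    }
}

impl Drop for OwnedMap {
    fn drop(&mut self) {
        MAP_COUNT.fetch_sub(1, Ordering::SeqCst); // "vunmap"
    }
}

impl From<BorrowedMap> for OwnedMap {
    fn from(value: BorrowedMap) -> Self {
        // Transfer the single refcount: suppress BorrowedMap's Drop so the
        // count is neither decremented here nor re-incremented for OwnedMap.
        mem::forget(value);
        OwnedMap
    }
}

fn demo() -> (usize, usize) {
    let owned: OwnedMap = OwnedMap::from(BorrowedMap::new());
    let during = MAP_COUNT.load(Ordering::SeqCst); // still exactly one mapping
    drop(owned);
    let after = MAP_COUNT.load(Ordering::SeqCst);  // released exactly once
    (during, after)
}

fn main() {
    assert_eq!(demo(), (1, 0));
}
```

The invariant is the same one the patch maintains: across the conversion the refcount stays at one, and exactly one release runs, from whichever guard ends up owning the mapping.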
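The endianness caveat in item 6 is easy to make concrete in userspace Rust: reading a `u32`'s in-memory bytes one at a time yields target-endian order, so a byte-level check matches the little-endian expectation only on LE targets:

```rust
fn main() {
    let v: u32 = 0x11223344;

    // Reading the in-memory representation byte by byte, as the kunit test
    // does through the vmap, yields target-endian order.
    let native = v.to_ne_bytes();
    #[cfg(target_endian = "little")]
    assert_eq!(native, [0x44, 0x33, 0x22, 0x11]);
    #[cfg(target_endian = "big")]
    assert_eq!(native, [0x11, 0x22, 0x33, 0x44]);

    // A portable test would compare against an explicit byte order instead.
    assert_eq!(v.to_le_bytes(), [0x44, 0x33, 0x22, 0x11]);
}
```

If the kunit test ever needs to run on a big-endian target, writing the value through an explicit `cpu_to_le32`-style conversion before the byte-level reads would remove the dependence.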
**Minor nit:** The `BaseObject` import in patch 5 (`use gem::BaseObject`) isn't visibly used in the diff. It might be needed for `self.size()` to be available, which goes through the `BaseObject` trait. Worth confirming it's actually needed and not a stale import.

---

**Summary of actionable items:**

1. Patch 2: Add a NULL check, or document that `shmem->sgt` must be non-NULL in `__drm_gem_shmem_free_sgt_locked()`.
2. Patch 4: The field ordering issue means patch 4 alone has a potential drop-order bug; consider moving the field reorder from patch 5 into patch 4 for bisectability.
3. The interaction between `drm_gem_shmem_release()` freeing the sgt and the `Devres` drop trying to free it again deserves a safety comment explaining why a double-free doesn't occur.

---

Generated by Claude Code Patch Reviewer