From mboxrd@z Thu Jan 1 00:00:00 1970
From: Claude Code Review Bot
To: dri-devel-reviews@example.com
Subject: Claude review: drm/shmem-helper: Fix Map huge page mapping in fault handler
Date: Mon, 16 Mar 2026 11:46:49 +1000
Message-ID:
In-Reply-To: <20260316002649.211819-1-pedrodemargomes@gmail.com>
References: <20260316002649.211819-1-pedrodemargomes@gmail.com>
X-Mailer: Claude Code Patch Reviewer
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Patch Review

**Race condition analysis is correct.** The commit message clearly explains the problem and includes a reproducer. The Fixes tag is appropriate.

**Issue 1: Fallback from huge_fault to PTE mapping is handled correctly.** When `.huge_fault` is called and the folio is PMD-mappable but the alignment check fails, the function returns `VM_FAULT_FALLBACK`, and the MM layer then retries through `.fault`, which performs PTE insertion. If the folio is PMD-mappable and aligned but `vmf_insert_pfn_pmd()` itself fails (returns something other than `VM_FAULT_NOPAGE`), the error is propagated correctly as well. This path looks fine.

**Issue 2: `pmd_none()` check was dropped.** The original code had:

```c
if (aligned && pmd_none(*vmf->pmd) &&
    folio_test_pmd_mappable(page_folio(page))) {
```

The new code removes the `pmd_none(*vmf->pmd)` check:

```c
if (aligned && folio_test_pmd_mappable(page_folio(pages[page_offset]))) {
```

In the `.huge_fault` path the MM layer should guarantee the PMD is in a suitable state, so dropping this check is likely fine; `vmf_insert_pfn_pmd()` handles it internally. But it is worth confirming this is intentional, and that `vmf_insert_pfn_pmd()` handles a non-none PMD gracefully (returning an appropriate error rather than corrupting the page table).
**Issue 3: Needs rebase on drm-next.** The current drm-next tree at `drivers/gpu/drm/drm_gem_shmem_helper.c:557-612` has substantially different code:

- The helper is now called `drm_gem_shmem_try_insert_pfn_pmd()` (not `drm_gem_shmem_try_map_pmd`)
- There is a `folio_mark_accessed()` call on success (lines 605-606)
- A `pfn_mkwrite` handler has been added (lines 648-664)
- Variable declarations differ (`page` and `folio` locals; `drm_dev` used for warnings)

The patch needs to be rebased onto the current tree before it can be applied.

**Issue 4: `folio_mark_accessed()` is missing from the patched code.** The current drm-next code calls `folio_mark_accessed(folio)` on a successful mapping. The patch, based on an older tree, does not. If this patch is meant to be backported to the older tree that lacked the call, that is fine; but when it is rebased onto drm-next, the call should be preserved.

**Minor: Subject line capitalization.** In "Fix Map huge page", "Map" should be lowercase: "Fix map huge page mapping in fault handler", or perhaps rephrased as "Fix race between PMD and PTE fault paths".

**Overall:** The fix is correct in approach. Using `.huge_fault` for PMD-order mappings and `.fault` for PTE-order mappings is the right way to let the MM layer serialize concurrent faults at different granularities. The patch needs a rebase onto drm-next before it can be applied.

---
Generated by Claude Code Patch Reviewer