From mboxrd@z Thu Jan 1 00:00:00 1970
From: Claude Code Review Bot
To: dri-devel-reviews@example.com
Subject: Claude review: drm/pagemap: Enable THP support for GPU memory migration
Date: Fri, 13 Mar 2026 13:55:14 +1000
Message-ID:
In-Reply-To: <20260312192126.2024853-5-francois.dugast@intel.com>
References: <20260312192126.2024853-1-francois.dugast@intel.com> <20260312192126.2024853-5-francois.dugast@intel.com>
X-Mailer: Claude Code Patch Reviewer
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Patch Review

This is the main patch. Several observations:

**Concern: unsafe cast in `drm_pagemap_get_devmem_page`:**

```c
zone_device_folio_init((struct folio *)page, zdd->dpagemap->pagemap, order);
folio_set_zone_device_data(page_folio(page), drm_pagemap_zdd_get(zdd));
```

The first line casts `struct page *` directly to `struct folio *`. When `order > 0`, this assumes `page` points to a compound page's head page. This should be safe given that `populate_devmem_pfn` returns fresh device memory pages and `pfn_to_page(migrate.dst[i])` at the call site gives the base page, but the second line then calls `page_folio(page)`, which is the proper API. The inconsistency suggests the first line should also use `page_folio(page)`, or carry an explicit comment explaining why the direct cast is needed (perhaps because the folio has not been initialized yet at that point, so `page_folio()` would not work for the order-0 → order-N transition).

**Concern: `pages[]` array population in the evict/migrate-to-RAM loops:**

```c
for (i = 0; i < npages;) {
	unsigned int order = 0;

	pages[i] = migrate_pfn_to_page(src[i]);
	if (pages[i])
		order = folio_order(page_folio(pages[i]));
	i += NR_PAGES(order);
}
```

This sets `pages[i]` for the head page index only. The `copy_to_ram` callback receives the `pages` array with only head-page entries set and NULL/uninitialized entries at the tail indices. The callback must be aware that it needs to handle this sparsely populated array, presumably using `npages` and the folio order to figure things out. This is an implicit contract that should ideally be documented.

**Good: `drm_pagemap_folio_split` implementation:**

```c
static void drm_pagemap_folio_split(struct folio *orig_folio, struct folio *new_folio)
{
	struct drm_pagemap_zdd *zdd;

	if (!new_folio)
		return;

	new_folio->pgmap = orig_folio->pgmap;
	zdd = folio_zone_device_data(orig_folio);
	folio_set_zone_device_data(new_folio, drm_pagemap_zdd_get(zdd));
}
```

Correctly handles the split by propagating `pgmap` and taking a reference on the zdd. The NULL check on `new_folio` matches the kernel convention for the `folio_split` callback. (Note: the excerpt as posted used `zdd` without a declaration; a local declaration is assumed here.)

**Observation: MIGRATE_PFN_COMPOUND in `drm_pagemap_migrate_populate_ram_pfn`:**

```c
if (order)
	mpfn[i] |= MIGRATE_PFN_COMPOUND;
```

This is set when populating RAM PFNs for the migrate-to-RAM path, correctly signaling to the migration framework that the destination should be a compound page.

**Minor: hardcoded HPAGE_PMD_ORDER assumption:**

```c
order = HPAGE_PMD_ORDER;
```

The series only supports PMD-level (2MB on x86) huge pages. This is fine for now, but any future support for other huge page sizes (e.g., PUD-level 1GB pages) would require rework. The `drm_WARN_ONCE` check correctly validates this assumption.

---
Generated by Claude Code Patch Reviewer