From mboxrd@z Thu Jan 1 00:00:00 1970
From: Claude Code Review Bot
To: dri-devel-reviews@example.com
Subject: Claude review: drm/pagemap: Enable THP support for GPU memory migration
Date: Fri, 13 Mar 2026 14:02:32 +1000
Message-ID:
In-Reply-To: <20260312151726.1779566-5-francois.dugast@intel.com>
References: <20260312151726.1779566-1-francois.dugast@intel.com> <20260312151726.1779566-5-francois.dugast@intel.com>
X-Mailer: Claude Code Patch Reviewer
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Patch Review

This is the main enablement patch. Several observations:

**1. Ordering change in `drm_pagemap_get_devmem_page`:**

```c
+	zone_device_folio_init((struct folio *)page, zdd->dpagemap->pagemap,
+			       order);
+	folio_set_zone_device_data(page_folio(page), drm_pagemap_zdd_get(zdd));
```

The original code set `zone_device_data` *before* `zone_device_page_init`; the new code sets it *after* `zone_device_folio_init`. This is fine because `zone_device_page_init` doesn't read `zone_device_data`, but the `(struct folio *)page` cast is a raw cast that bypasses `page_folio()` checks. It is safe here because `page` is the head page from `pfn_to_page(migrate.dst[i])`, though the assumption is worth noting.

Also, the original used `page_pgmap(page)` while the new code uses `zdd->dpagemap->pagemap`. These should be equivalent for destination pages in migrate-to-devmem, but it is a subtle semantic change.

**2. Eviction page array gaps:**

In both `drm_pagemap_evict_to_ram` and `__drm_pagemap_migrate_to_ram`, the loop that populates `pages[]` now skips tail-page entries:

```c
+	for (i = 0; i < npages;) {
+		unsigned int order = 0;
+
+		pages[i] = migrate_pfn_to_page(src[i]);
+		if (pages[i])
+			order = folio_order(page_folio(pages[i]));
+
+		i += NR_PAGES(order);
+	}
```

Previously every `pages[i]` was populated. Now only head-page entries are set; tail entries remain NULL (from `kcalloc`).
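As a sanity check on that contract, here is a minimal userspace sketch of the linear walk a callback would have to do over the sparse array. `NR_PAGES`, `struct page_stub`, and `count_copied_pages` are illustrative stand-ins for this review, not the kernel API:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for the kernel's NR_PAGES(order) macro. */
#define NR_PAGES(order) (1u << (order))

struct page_stub {
	unsigned int order;	/* folio order of this head page */
};

/*
 * Linear walk over a sparsely populated pages[] array: only head-page
 * slots are non-NULL; the NR_PAGES(order)-1 slots following a head, and
 * slots for pages that were not collected, are NULL. Returns the number
 * of base pages covered by non-NULL heads.
 */
static unsigned int count_copied_pages(struct page_stub **pages,
				       unsigned long npages)
{
	unsigned long i = 0;
	unsigned int copied = 0;

	while (i < npages) {
		unsigned int order = 0;

		if (pages[i]) {
			order = pages[i]->order;
			copied += NR_PAGES(order);
		}
		/* NULL slot (tail entry or missing page): step by one. */
		i += NR_PAGES(order);
	}
	return copied;
}
```

A callback that indexes `pages[]` without the NULL check, by contrast, would mistake the tail slots for failed migrations.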
The `copy_to_ram` callback must be aware of this: it will see a non-NULL entry for the head page followed by NR_PAGES(order)-1 NULL entries. If `copy_to_ram` implementations iterate linearly and check `pages[i]` for NULL to skip, this works. But it is a behavioral contract change that should be documented or validated against all `copy_to_ram` implementations.

**3. `drm_pagemap_folio_split` callback:**

```c
+static void drm_pagemap_folio_split(struct folio *orig_folio,
+				    struct folio *new_folio)
+{
+	if (!new_folio)
+		return;
+
+	new_folio->pgmap = orig_folio->pgmap;
+	zdd = folio_zone_device_data(orig_folio);
+	folio_set_zone_device_data(new_folio, drm_pagemap_zdd_get(zdd));
+}
```

This correctly takes a new zdd reference for the split folio and copies `pgmap`. The `new_folio == NULL` early return is consistent with the default handler in `zone_device_private_split_cb`. This looks correct.

**4. Missing THP allocation fallback:**

The existing code in `drm_pagemap_migrate_populate_ram_pfn` already has:

```c
	/* TODO: Support fallback to single pages if THP allocation fails */
```

With THP now enabled, a failed order-9 allocation will cause the entire migration to fail (`goto free_pages`). For production robustness, a fallback to base-page allocation would be important. This is a known limitation (marked TODO), not a bug introduced by this patch.

**Verdict: Correct overall. The `pages[]` sparse population contract with `copy_to_ram` callbacks should be verified/documented.**

---

Generated by Claude Code Patch Reviewer