From mboxrd@z Thu Jan  1 00:00:00 1970
From: Claude Code Review Bot
To: dri-devel-reviews@example.com
Subject: Claude review: drm/pagemap: Use dma-map IOVA alloc, link, and sync API for DRM pagemap
Date: Sun, 12 Apr 2026 09:24:52 +1000
Message-ID:
In-Reply-To: <20260410205929.3914474-5-matthew.brost@intel.com>
References: <20260410205929.3914474-1-matthew.brost@intel.com>
 <20260410205929.3914474-5-matthew.brost@intel.com>
X-Mailer: Claude Code Patch Reviewer
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Patch Review

The `struct drm_pagemap_iova_state` wrapper is well-designed, combining
`dma_iova_state` with a tracking `offset` that serves as the iterator for
packing IOVA addresses contiguously.

The `try_alloc` flag pattern in `drm_pagemap_migrate_map_system_pages`
(`drm_pagemap.c:348-355`) is better than Patch 1's `!i` approach -- it
correctly handles leading NULL pages by triggering on the first valid page.

The alignment hint logic is reasonable:

```c
(npages - i) * PAGE_SIZE >= HPAGE_PMD_SIZE ? HPAGE_PMD_SIZE : 0
```

This passes an alignment hint only when enough remaining pages are left to
benefit from huge-page IOMMU mappings.

The unmap path in `drm_pagemap_migrate_unmap_pages` (`drm_pagemap.c:413-416`)
correctly returns early after `dma_iova_destroy` when the IOVA is active.
This is safe because the function is only called from paths that map system
pages (not device-private pages), so the loop below for device pages is not
needed.

The `state` re-initialization in `drm_pagemap_evict_to_ram` on retry
(`drm_pagemap.c:1189`) is correct and necessary.

No issues beyond what's noted above.

---
Generated by Claude Code Patch Reviewer