From mboxrd@z Thu Jan  1 00:00:00 1970
From: Claude Code Review Bot
To: dri-devel-reviews@example.com
Subject: Claude review: drm/gpusvm: Use dma-map IOVA alloc, link, and sync API in GPU SVM
Date: Sun, 12 Apr 2026 12:09:35 +1000
Message-ID: 
In-Reply-To: <20260408201537.3580549-2-matthew.brost@intel.com>
References: <20260408201537.3580549-1-matthew.brost@intel.com>
 <20260408201537.3580549-2-matthew.brost@intel.com>
X-Mailer: Claude Code Patch Reviewer
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Patch Review

**Reviewed-by: Thomas Hellstrom** -- good to see.

**The try/fallback pattern is correct.** The code calls `dma_iova_try_alloc` on the first iteration (`!i`), then checks `dma_use_iova(state)` to decide the path. This matches the documented API usage, where `dma_iova_try_alloc` returns false if the device doesn't support the IOVA API.

**Nit: Misleading `phys` parameter:**

```c
+		if (!i)
+			dma_iova_try_alloc(gpusvm->drm->dev, state,
+					   npages * PAGE_SIZE >=
+					   HPAGE_PMD_SIZE ?
+					   HPAGE_PMD_SIZE : 0,
+					   npages * PAGE_SIZE);
```

The `phys` parameter to `dma_iova_try_alloc` is documented as "only used to calculate the IOVA alignment. Callers that always do PAGE_SIZE aligned transfers can safely pass 0 here." Since all transfers here are PAGE_SIZE-aligned, `HPAGE_PMD_SIZE` and `0` both produce `iova_offset() == 0` -- the conditional is dead logic. Other callers (HMM, mlx5, dma-buf) just pass `0` or the actual physical address of the first page. Passing `HPAGE_PMD_SIZE` as a fake physical address is confusing. Consider just passing `0`.

**Unmap path looks correct:**

```c
+	bool use_iova = dma_use_iova(&svm_pages->state);
+
+	if (use_iova) {
+		dma_iova_unlink(dev, &svm_pages->state, 0,
+				svm_pages->state_offset,
+				svm_pages->dma_addr[0].dir, 0);
+		dma_iova_free(dev, &svm_pages->state);
+	}
```

The bulk unlink + free before the per-page loop, with the per-page `dma_unmap_page` skipped via `!use_iova`, is clean.
**Minor: Over-allocation of IOVA space.** `dma_iova_try_alloc` allocates `npages * PAGE_SIZE`, but some entries in the pfn array may be device-private pages (handled differently) or NULL pages, meaning the IOVA space won't be fully used. The `state_offset` correctly tracks only the actually-linked portion. Not a bug, just a minor waste of IOVA address space.

**Error path:** On `err_unmap`, `__drm_gpusvm_unmap_pages` is called, which handles the IOVA unlink/free. On `retry`, `state` is re-zeroed before `map_pages:`. Looks correct.

---

Generated by Claude Code Patch Reviewer