* [PATCH v7 0/5] Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap
@ 2026-04-10 20:59 Matthew Brost
2026-04-10 20:59 ` [PATCH v7 1/5] drm/gpusvm: Use dma-map IOVA alloc, link, and sync API in GPU SVM Matthew Brost
` (5 more replies)
0 siblings, 6 replies; 13+ messages in thread
From: Matthew Brost @ 2026-04-10 20:59 UTC (permalink / raw)
To: intel-xe, dri-devel
The dma-map IOVA alloc, link, and sync APIs perform significantly better
than dma-map / dma-unmap, as they avoid costly IOMMU synchronizations.
This difference is especially noticeable when mapping a 2MB region in
4KB pages.
Use the dma-map IOVA alloc, link, and sync APIs for GPU SVM and DRM
pagemap, which create DMA mappings between the CPU and GPU.
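The difference boils down to one IOVA allocation and one IOTLB sync per
region instead of an IOMMU synchronization per page. A condensed sketch of
the pattern (kernel-internal API from <linux/dma-mapping.h>; error handling
trimmed, not compilable standalone):

```c
/* Sketch: batched IOVA mapping vs. per-page dma_map_page().
 * One IOVA allocation and one sync replace an IOMMU flush per page.
 */
struct dma_iova_state state = {};
size_t offset = 0;

/* Try to grab one contiguous IOVA range for the whole region. */
dma_iova_try_alloc(dev, &state, 0, npages * PAGE_SIZE);

if (dma_use_iova(&state)) {
	for (i = 0; i < npages; i++) {
		dma_iova_link(dev, &state, page_to_phys(pages[i]),
			      offset, PAGE_SIZE, dir, 0);
		offset += PAGE_SIZE;
	}
	dma_iova_sync(dev, &state, 0, offset);	/* one IOTLB sync */
} else {
	for (i = 0; i < npages; i++)		/* fallback path */
		addr[i] = dma_map_page(dev, pages[i], 0, PAGE_SIZE, dir);
}
```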
Initial results are promising.
Baseline CPU time during 2M / 64K fault with a migration (device THP disabled):
Average migrate 2M cpu time: 334.00 us (0.611%)
Average migrate 64K cpu time: 18.63 us (0.301%)
After this series CPU time during 2M / 64K fault with a migration (device THP disabled):
Average migrate 2M cpu time: 224.82 us (0.514%)
Average migrate 64K cpu time: 14.66 us (0.257%)
Matt
v2:
- Include missing baseline patch for CI
v3:
- Fix memory corruption
- PoC IOVA alloc for multi-GPU
v4:
- Pack IOVA / drop dummy pages
- Drop multi-GPU IOVA alloc
v5:
- Address Thomas's comments
v6:
- Address Francois's comments
v7:
- Address Sashiko's comments
Matthew Brost (5):
drm/gpusvm: Use dma-map IOVA alloc, link, and sync API in GPU SVM
drm/pagemap: Drop source_peer_migrates flag and assume true
drm/pagemap: Split drm_pagemap_migrate_map_pages into device / system
drm/pagemap: Use dma-map IOVA alloc, link, and sync API for DRM
pagemap
drm/pagemap: Fix drm_pagemap_migrate_unmap_pages kerneldoc
drivers/gpu/drm/drm_gpusvm.c | 53 ++++++--
drivers/gpu/drm/drm_pagemap.c | 229 ++++++++++++++++++++++++----------
drivers/gpu/drm/xe/xe_svm.c | 1 -
include/drm/drm_gpusvm.h | 5 +
include/drm/drm_pagemap.h | 9 +-
5 files changed, 218 insertions(+), 79 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH v7 1/5] drm/gpusvm: Use dma-map IOVA alloc, link, and sync API in GPU SVM
2026-04-10 20:59 [PATCH v7 0/5] Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap Matthew Brost
@ 2026-04-10 20:59 ` Matthew Brost
2026-04-11 23:24 ` Claude review: " Claude Code Review Bot
2026-04-10 20:59 ` [PATCH v7 2/5] drm/pagemap: Drop source_peer_migrates flag and assume true Matthew Brost
` (4 subsequent siblings)
5 siblings, 1 reply; 13+ messages in thread
From: Matthew Brost @ 2026-04-10 20:59 UTC (permalink / raw)
To: intel-xe, dri-devel; +Cc: Thomas Hellström
The dma-map IOVA alloc, link, and sync APIs perform significantly better
than dma-map / dma-unmap, as they avoid costly IOMMU synchronizations.
This difference is especially noticeable when mapping a 2MB region in
4KB pages.
Use the IOVA alloc, link, and sync APIs for GPU SVM, which create DMA
mappings between the CPU and GPU.
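The unmap path changes symmetrically. A rough sketch of that side
(kernel-internal API, matching the signatures used in the diff below;
`is_system_mapping()` is a hypothetical helper standing in for the
DRM_INTERCONNECT_SYSTEM check, devmem handling elided):

```c
/* Teardown mirrors the mapping path: when the whole range was linked
 * into one IOVA allocation, a single dma_iova_destroy() unmaps and
 * frees it, and the per-page dma_unmap_page() loop must be skipped
 * for system pages that were covered by the IOVA range.
 */
bool use_iova = dma_use_iova(&svm_pages->state);

if (use_iova)
	dma_iova_destroy(dev, &svm_pages->state,
			 svm_pages->state_offset,	/* bytes linked */
			 dir, 0);

for (i = 0; i < npages; i++) {
	if (!use_iova && is_system_mapping(&svm_pages->dma_addr[i]))
		dma_unmap_page(dev, svm_pages->dma_addr[i].addr,
			       PAGE_SIZE, dir);
}
```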
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
drivers/gpu/drm/drm_gpusvm.c | 53 ++++++++++++++++++++++++++++++------
include/drm/drm_gpusvm.h | 5 ++++
2 files changed, 50 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
index 7993e85c0566..365a9c0b522a 100644
--- a/drivers/gpu/drm/drm_gpusvm.c
+++ b/drivers/gpu/drm/drm_gpusvm.c
@@ -1144,11 +1144,17 @@ static void __drm_gpusvm_unmap_pages(struct drm_gpusvm *gpusvm,
struct drm_gpusvm_pages_flags flags = {
.__flags = svm_pages->flags.__flags,
};
+ bool use_iova = dma_use_iova(&svm_pages->state);
+
+ if (use_iova)
+ dma_iova_destroy(dev, &svm_pages->state,
+ svm_pages->state_offset,
+ svm_pages->dma_addr[0].dir, 0);
for (i = 0, j = 0; i < npages; j++) {
struct drm_pagemap_addr *addr = &svm_pages->dma_addr[j];
- if (addr->proto == DRM_INTERCONNECT_SYSTEM)
+ if (!use_iova && addr->proto == DRM_INTERCONNECT_SYSTEM)
dma_unmap_page(dev,
addr->addr,
PAGE_SIZE << addr->order,
@@ -1413,6 +1419,7 @@ int drm_gpusvm_get_pages(struct drm_gpusvm *gpusvm,
struct drm_gpusvm_pages_flags flags;
enum dma_data_direction dma_dir = ctx->read_only ? DMA_TO_DEVICE :
DMA_BIDIRECTIONAL;
+ struct dma_iova_state *state = &svm_pages->state;
retry:
if (time_after(jiffies, timeout))
@@ -1451,6 +1458,9 @@ int drm_gpusvm_get_pages(struct drm_gpusvm *gpusvm,
if (err)
goto err_free;
+ *state = (struct dma_iova_state){};
+ svm_pages->state_offset = 0;
+
map_pages:
/*
* Perform all dma mappings under the notifier lock to not
@@ -1544,13 +1554,33 @@ int drm_gpusvm_get_pages(struct drm_gpusvm *gpusvm,
goto err_unmap;
}
- addr = dma_map_page(gpusvm->drm->dev,
- page, 0,
- PAGE_SIZE << order,
- dma_dir);
- if (dma_mapping_error(gpusvm->drm->dev, addr)) {
- err = -EFAULT;
- goto err_unmap;
+ if (!i)
+ dma_iova_try_alloc(gpusvm->drm->dev, state,
+ npages * PAGE_SIZE >=
+ HPAGE_PMD_SIZE ?
+ HPAGE_PMD_SIZE : 0,
+ npages * PAGE_SIZE);
+
+ if (dma_use_iova(state)) {
+ err = dma_iova_link(gpusvm->drm->dev, state,
+ hmm_pfn_to_phys(pfns[i]),
+ svm_pages->state_offset,
+ PAGE_SIZE << order,
+ dma_dir, 0);
+ if (err)
+ goto err_unmap;
+
+ addr = state->addr + svm_pages->state_offset;
+ svm_pages->state_offset += PAGE_SIZE << order;
+ } else {
+ addr = dma_map_page(gpusvm->drm->dev,
+ page, 0,
+ PAGE_SIZE << order,
+ dma_dir);
+ if (dma_mapping_error(gpusvm->drm->dev, addr)) {
+ err = -EFAULT;
+ goto err_unmap;
+ }
}
svm_pages->dma_addr[j] = drm_pagemap_addr_encode
@@ -1562,6 +1592,13 @@ int drm_gpusvm_get_pages(struct drm_gpusvm *gpusvm,
flags.has_dma_mapping = true;
}
+ if (dma_use_iova(state)) {
+ err = dma_iova_sync(gpusvm->drm->dev, state, 0,
+ svm_pages->state_offset);
+ if (err)
+ goto err_unmap;
+ }
+
if (pagemap) {
flags.has_devmem_pages = true;
drm_pagemap_get(dpagemap);
diff --git a/include/drm/drm_gpusvm.h b/include/drm/drm_gpusvm.h
index 2578ac92a8d4..cd94bb2ee6ee 100644
--- a/include/drm/drm_gpusvm.h
+++ b/include/drm/drm_gpusvm.h
@@ -6,6 +6,7 @@
#ifndef __DRM_GPUSVM_H__
#define __DRM_GPUSVM_H__
+#include <linux/dma-mapping.h>
#include <linux/kref.h>
#include <linux/interval_tree.h>
#include <linux/mmu_notifier.h>
@@ -136,6 +137,8 @@ struct drm_gpusvm_pages_flags {
* @dma_addr: Device address array
* @dpagemap: The struct drm_pagemap of the device pages we're dma-mapping.
* Note this is assuming only one drm_pagemap per range is allowed.
+ * @state: DMA IOVA state for mapping.
+ * @state_offset: DMA IOVA offset for mapping.
* @notifier_seq: Notifier sequence number of the range's pages
* @flags: Flags for range
* @flags.migrate_devmem: Flag indicating whether the range can be migrated to device memory
@@ -147,6 +150,8 @@ struct drm_gpusvm_pages_flags {
struct drm_gpusvm_pages {
struct drm_pagemap_addr *dma_addr;
struct drm_pagemap *dpagemap;
+ struct dma_iova_state state;
+ unsigned long state_offset;
unsigned long notifier_seq;
struct drm_gpusvm_pages_flags flags;
};
--
2.34.1
* [PATCH v7 2/5] drm/pagemap: Drop source_peer_migrates flag and assume true
2026-04-10 20:59 [PATCH v7 0/5] Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap Matthew Brost
2026-04-10 20:59 ` [PATCH v7 1/5] drm/gpusvm: Use dma-map IOVA alloc, link, and sync API in GPU SVM Matthew Brost
@ 2026-04-10 20:59 ` Matthew Brost
2026-04-11 23:24 ` Claude review: " Claude Code Review Bot
2026-04-10 20:59 ` [PATCH v7 3/5] drm/pagemap: Split drm_pagemap_migrate_map_pages into device / system Matthew Brost
` (3 subsequent siblings)
5 siblings, 1 reply; 13+ messages in thread
From: Matthew Brost @ 2026-04-10 20:59 UTC (permalink / raw)
To: intel-xe, dri-devel; +Cc: Francois Dugast
All current users of DRM pagemap set source_peer_migrates to true during
migration, and it is unclear whether any user would ever want to disable
this for performance reasons or for features such as compression. It is
also questionable whether this flag could be made to work with
high-speed fabric mapping APIs.
Drop the flag and make DRM pagemap unconditionally assume that
source_peer_migrates is true.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Francois Dugast <francois.dugast@intel.com>
---
drivers/gpu/drm/drm_pagemap.c | 10 ++++------
drivers/gpu/drm/xe/xe_svm.c | 1 -
include/drm/drm_pagemap.h | 9 ++-------
3 files changed, 6 insertions(+), 14 deletions(-)
diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
index 5002049e0198..63f32cf6e1a7 100644
--- a/drivers/gpu/drm/drm_pagemap.c
+++ b/drivers/gpu/drm/drm_pagemap.c
@@ -651,12 +651,10 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,
own_pages++;
goto next;
}
- if (mdetails->source_peer_migrates) {
- cur.dpagemap = src_zdd->dpagemap;
- cur.ops = src_zdd->devmem_allocation->ops;
- cur.device = cur.dpagemap->drm->dev;
- pages[i] = src_page;
- }
+ cur.dpagemap = src_zdd->dpagemap;
+ cur.ops = src_zdd->devmem_allocation->ops;
+ cur.device = cur.dpagemap->drm->dev;
+ pages[i] = src_page;
}
if (!pages[i]) {
cur.dpagemap = NULL;
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index ba67355f64cb..e1651e70c8f0 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -1055,7 +1055,6 @@ static int xe_drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
struct xe_pagemap *xpagemap = container_of(dpagemap, typeof(*xpagemap), dpagemap);
struct drm_pagemap_migrate_details mdetails = {
.timeslice_ms = timeslice_ms,
- .source_peer_migrates = 1,
};
struct xe_vram_region *vr = xe_pagemap_to_vr(xpagemap);
struct dma_fence *pre_migrate_fence = NULL;
diff --git a/include/drm/drm_pagemap.h b/include/drm/drm_pagemap.h
index 75e6ca58922d..95eb4b66b057 100644
--- a/include/drm/drm_pagemap.h
+++ b/include/drm/drm_pagemap.h
@@ -329,17 +329,12 @@ struct drm_pagemap_devmem {
* struct drm_pagemap_migrate_details - Details to govern migration.
* @timeslice_ms: The time requested for the migrated pagemap pages to
* be present in @mm before being allowed to be migrated back.
- * @can_migrate_same_pagemap: Whether the copy function as indicated by
- * the @source_peer_migrates flag, can migrate device pages within a
- * single drm_pagemap.
- * @source_peer_migrates: Whether on p2p migration, The source drm_pagemap
- * should use the copy_to_ram() callback rather than the destination
- * drm_pagemap should use the copy_to_devmem() callback.
+ * @can_migrate_same_pagemap: Whether the copy function can migrate
+ * device pages within a single drm_pagemap.
*/
struct drm_pagemap_migrate_details {
unsigned long timeslice_ms;
u32 can_migrate_same_pagemap : 1;
- u32 source_peer_migrates : 1;
};
#if IS_ENABLED(CONFIG_ZONE_DEVICE)
--
2.34.1
* [PATCH v7 3/5] drm/pagemap: Split drm_pagemap_migrate_map_pages into device / system
2026-04-10 20:59 [PATCH v7 0/5] Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap Matthew Brost
2026-04-10 20:59 ` [PATCH v7 1/5] drm/gpusvm: Use dma-map IOVA alloc, link, and sync API in GPU SVM Matthew Brost
2026-04-10 20:59 ` [PATCH v7 2/5] drm/pagemap: Drop source_peer_migrates flag and assume true Matthew Brost
@ 2026-04-10 20:59 ` Matthew Brost
2026-04-11 23:24 ` Claude review: " Claude Code Review Bot
2026-04-10 20:59 ` [PATCH v7 4/5] drm/pagemap: Use dma-map IOVA alloc, link, and sync API for DRM pagemap Matthew Brost
` (2 subsequent siblings)
5 siblings, 1 reply; 13+ messages in thread
From: Matthew Brost @ 2026-04-10 20:59 UTC (permalink / raw)
To: intel-xe, dri-devel; +Cc: Francois Dugast
Split drm_pagemap_migrate_map_pages into device / system helpers, clearly
separating these operations. This will help with upcoming changes that
split out the IOVA allocation steps.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Francois Dugast <francois.dugast@intel.com>
---
drivers/gpu/drm/drm_pagemap.c | 151 ++++++++++++++++++++++------------
1 file changed, 100 insertions(+), 51 deletions(-)
diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
index 63f32cf6e1a7..ee4d9f90bf67 100644
--- a/drivers/gpu/drm/drm_pagemap.c
+++ b/drivers/gpu/drm/drm_pagemap.c
@@ -216,7 +216,8 @@ static void drm_pagemap_get_devmem_page(struct page *page,
}
/**
- * drm_pagemap_migrate_map_pages() - Map migration pages for GPU SVM migration
+ * drm_pagemap_migrate_map_device_private_pages() - Map device private migration
+ * pages for GPU SVM migration
* @dev: The device performing the migration.
* @local_dpagemap: The drm_pagemap local to the migrating device.
* @pagemap_addr: Array to store DMA information corresponding to mapped pages.
@@ -232,58 +233,50 @@ static void drm_pagemap_get_devmem_page(struct page *page,
*
* Returns: 0 on success, -EFAULT if an error occurs during mapping.
*/
-static int drm_pagemap_migrate_map_pages(struct device *dev,
- struct drm_pagemap *local_dpagemap,
- struct drm_pagemap_addr *pagemap_addr,
- unsigned long *migrate_pfn,
- unsigned long npages,
- enum dma_data_direction dir,
- const struct drm_pagemap_migrate_details *mdetails)
+static int
+drm_pagemap_migrate_map_device_private_pages(struct device *dev,
+ struct drm_pagemap *local_dpagemap,
+ struct drm_pagemap_addr *pagemap_addr,
+ unsigned long *migrate_pfn,
+ unsigned long npages,
+ enum dma_data_direction dir,
+ const struct drm_pagemap_migrate_details *mdetails)
{
unsigned long num_peer_pages = 0, num_local_pages = 0, i;
for (i = 0; i < npages;) {
struct page *page = migrate_pfn_to_page(migrate_pfn[i]);
- dma_addr_t dma_addr;
+ struct drm_pagemap_zdd *zdd;
+ struct drm_pagemap *dpagemap;
+ struct drm_pagemap_addr addr;
struct folio *folio;
unsigned int order = 0;
if (!page)
goto next;
+ WARN_ON_ONCE(!is_device_private_page(page));
folio = page_folio(page);
order = folio_order(folio);
- if (is_device_private_page(page)) {
- struct drm_pagemap_zdd *zdd = drm_pagemap_page_zone_device_data(page);
- struct drm_pagemap *dpagemap = zdd->dpagemap;
- struct drm_pagemap_addr addr;
-
- if (dpagemap == local_dpagemap) {
- if (!mdetails->can_migrate_same_pagemap)
- goto next;
-
- num_local_pages += NR_PAGES(order);
- } else {
- num_peer_pages += NR_PAGES(order);
- }
+ zdd = drm_pagemap_page_zone_device_data(page);
+ dpagemap = zdd->dpagemap;
- addr = dpagemap->ops->device_map(dpagemap, dev, page, order, dir);
- if (dma_mapping_error(dev, addr.addr))
- return -EFAULT;
+ if (dpagemap == local_dpagemap) {
+ if (!mdetails->can_migrate_same_pagemap)
+ goto next;
- pagemap_addr[i] = addr;
+ num_local_pages += NR_PAGES(order);
} else {
- dma_addr = dma_map_page(dev, page, 0, page_size(page), dir);
- if (dma_mapping_error(dev, dma_addr))
- return -EFAULT;
-
- pagemap_addr[i] =
- drm_pagemap_addr_encode(dma_addr,
- DRM_INTERCONNECT_SYSTEM,
- order, dir);
+ num_peer_pages += NR_PAGES(order);
}
+ addr = dpagemap->ops->device_map(dpagemap, dev, page, order, dir);
+ if (dma_mapping_error(dev, addr.addr))
+ return -EFAULT;
+
+ pagemap_addr[i] = addr;
+
next:
i += NR_PAGES(order);
}
@@ -298,6 +291,60 @@ static int drm_pagemap_migrate_map_pages(struct device *dev,
return 0;
}
+/**
+ * drm_pagemap_migrate_map_system_pages() - Map system or device coherent
+ * migration pages for GPU SVM migration
+ * @dev: The device performing the migration.
+ * @pagemap_addr: Array to store DMA information corresponding to mapped pages.
+ * @migrate_pfn: Array of page frame numbers of system pages or peer pages to map.
+ * @npages: Number of system or device coherent pages to map.
+ * @dir: Direction of data transfer (e.g., DMA_BIDIRECTIONAL)
+ *
+ * This function maps pages of memory for migration usage in GPU SVM. It
+ * iterates over each page frame number provided in @migrate_pfn, maps the
+ * corresponding page, and stores the DMA address in the provided @dma_addr
+ * array.
+ *
+ * Returns: 0 on success, -EFAULT if an error occurs during mapping.
+ */
+static int
+drm_pagemap_migrate_map_system_pages(struct device *dev,
+ struct drm_pagemap_addr *pagemap_addr,
+ unsigned long *migrate_pfn,
+ unsigned long npages,
+ enum dma_data_direction dir)
+{
+ unsigned long i;
+
+ for (i = 0; i < npages;) {
+ struct page *page = migrate_pfn_to_page(migrate_pfn[i]);
+ dma_addr_t dma_addr;
+ struct folio *folio;
+ unsigned int order = 0;
+
+ if (!page)
+ goto next;
+
+ WARN_ON_ONCE(is_device_private_page(page));
+ folio = page_folio(page);
+ order = folio_order(folio);
+
+ dma_addr = dma_map_page(dev, page, 0, page_size(page), dir);
+ if (dma_mapping_error(dev, dma_addr))
+ return -EFAULT;
+
+ pagemap_addr[i] =
+ drm_pagemap_addr_encode(dma_addr,
+ DRM_INTERCONNECT_SYSTEM,
+ order, dir);
+
+next:
+ i += NR_PAGES(order);
+ }
+
+ return 0;
+}
+
/**
* drm_pagemap_migrate_unmap_pages() - Unmap pages previously mapped for GPU SVM migration
* @dev: The device for which the pages were mapped
@@ -358,9 +405,13 @@ drm_pagemap_migrate_remote_to_local(struct drm_pagemap_devmem *devmem,
const struct drm_pagemap_migrate_details *mdetails)
{
- int err = drm_pagemap_migrate_map_pages(remote_device, remote_dpagemap,
- pagemap_addr, local_pfns,
- npages, DMA_FROM_DEVICE, mdetails);
+ int err = drm_pagemap_migrate_map_device_private_pages(remote_device,
+ remote_dpagemap,
+ pagemap_addr,
+ local_pfns,
+ npages,
+ DMA_FROM_DEVICE,
+ mdetails);
if (err)
goto out;
@@ -379,12 +430,11 @@ drm_pagemap_migrate_sys_to_dev(struct drm_pagemap_devmem *devmem,
struct page *local_pages[],
struct drm_pagemap_addr pagemap_addr[],
unsigned long npages,
- const struct drm_pagemap_devmem_ops *ops,
- const struct drm_pagemap_migrate_details *mdetails)
+ const struct drm_pagemap_devmem_ops *ops)
{
- int err = drm_pagemap_migrate_map_pages(devmem->dev, devmem->dpagemap,
- pagemap_addr, sys_pfns, npages,
- DMA_TO_DEVICE, mdetails);
+ int err = drm_pagemap_migrate_map_system_pages(devmem->dev,
+ pagemap_addr, sys_pfns,
+ npages, DMA_TO_DEVICE);
if (err)
goto out;
@@ -448,7 +498,7 @@ static int drm_pagemap_migrate_range(struct drm_pagemap_devmem *devmem,
&pages[last->start],
&pagemap_addr[last->start],
cur->start - last->start,
- last->ops, mdetails);
+ last->ops);
out:
*last = *cur;
@@ -1010,7 +1060,6 @@ EXPORT_SYMBOL(drm_pagemap_put);
int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation)
{
const struct drm_pagemap_devmem_ops *ops = devmem_allocation->ops;
- struct drm_pagemap_migrate_details mdetails = {};
unsigned long npages, mpages = 0;
struct page **pages;
unsigned long *src, *dst;
@@ -1049,10 +1098,10 @@ int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation)
if (err || !mpages)
goto err_finalize;
- err = drm_pagemap_migrate_map_pages(devmem_allocation->dev,
- devmem_allocation->dpagemap, pagemap_addr,
- dst, npages, DMA_FROM_DEVICE,
- &mdetails);
+ err = drm_pagemap_migrate_map_system_pages(devmem_allocation->dev,
+ pagemap_addr,
+ dst, npages,
+ DMA_FROM_DEVICE);
if (err)
goto err_finalize;
@@ -1121,7 +1170,6 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
MIGRATE_VMA_SELECT_COMPOUND,
.fault_page = page,
};
- struct drm_pagemap_migrate_details mdetails = {};
struct drm_pagemap_zdd *zdd;
const struct drm_pagemap_devmem_ops *ops;
struct device *dev = NULL;
@@ -1179,8 +1227,9 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
if (err)
goto err_finalize;
- err = drm_pagemap_migrate_map_pages(dev, zdd->dpagemap, pagemap_addr, migrate.dst, npages,
- DMA_FROM_DEVICE, &mdetails);
+ err = drm_pagemap_migrate_map_system_pages(dev, pagemap_addr,
+ migrate.dst, npages,
+ DMA_FROM_DEVICE);
if (err)
goto err_finalize;
--
2.34.1
* [PATCH v7 4/5] drm/pagemap: Use dma-map IOVA alloc, link, and sync API for DRM pagemap
2026-04-10 20:59 [PATCH v7 0/5] Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap Matthew Brost
` (2 preceding siblings ...)
2026-04-10 20:59 ` [PATCH v7 3/5] drm/pagemap: Split drm_pagemap_migrate_map_pages into device / system Matthew Brost
@ 2026-04-10 20:59 ` Matthew Brost
2026-04-11 23:24 ` Claude review: " Claude Code Review Bot
2026-04-10 20:59 ` [PATCH v7 5/5] drm/pagemap: Fix drm_pagemap_migrate_unmap_pages kerneldoc Matthew Brost
2026-04-11 23:24 ` Claude review: Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap Claude Code Review Bot
5 siblings, 1 reply; 13+ messages in thread
From: Matthew Brost @ 2026-04-10 20:59 UTC (permalink / raw)
To: intel-xe, dri-devel; +Cc: Francois Dugast
The dma-map IOVA alloc, link, and sync APIs perform significantly better
than dma-map / dma-unmap, as they avoid costly IOMMU synchronizations.
This difference is especially noticeable when mapping a 2MB region in
4KB pages.
Use the IOVA alloc, link, and sync APIs for DRM pagemap, which create DMA
mappings between the CPU and GPU for copying data.
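The callers thread a caller-owned IOVA state through map and unmap. A
condensed usage sketch (kernel-internal API; names match the diff below,
error handling and the actual copy trimmed):

```c
/* The IOVA state doubles as a pack iterator: the caller zero-
 * initializes it, the map helper allocates/links/syncs through it,
 * and the unmap helper destroys it. On a migration retry the state
 * must be re-zeroed, as done in drm_pagemap_evict_to_ram().
 */
struct drm_pagemap_iova_state state = {};

err = drm_pagemap_migrate_map_system_pages(dev, pagemap_addr,
					   migrate.dst, npages,
					   DMA_FROM_DEVICE, &state);
if (!err) {
	/* issue the copy and wait for completion here */
}

drm_pagemap_migrate_unmap_pages(dev, pagemap_addr, migrate.dst,
				npages, DMA_FROM_DEVICE, &state);
```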
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Francois Dugast <francois.dugast@intel.com>
---
drivers/gpu/drm/drm_pagemap.c | 85 ++++++++++++++++++++++++++++-------
1 file changed, 70 insertions(+), 15 deletions(-)
diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
index ee4d9f90bf67..bd2037c77c92 100644
--- a/drivers/gpu/drm/drm_pagemap.c
+++ b/drivers/gpu/drm/drm_pagemap.c
@@ -291,6 +291,19 @@ drm_pagemap_migrate_map_device_private_pages(struct device *dev,
return 0;
}
+/**
+ * struct drm_pagemap_iova_state - DRM pagemap IOVA state
+ * @dma_state: DMA IOVA state.
+ * @offset: Current offset in IOVA.
+ *
+ * This structure acts as an iterator for packing all IOVA addresses within a
+ * contiguous range.
+ */
+struct drm_pagemap_iova_state {
+ struct dma_iova_state dma_state;
+ unsigned long offset;
+};
+
/**
* drm_pagemap_migrate_map_system_pages() - Map system or device coherent
* migration pages for GPU SVM migration
@@ -299,22 +312,25 @@ drm_pagemap_migrate_map_device_private_pages(struct device *dev,
* @migrate_pfn: Array of page frame numbers of system pages or peer pages to map.
* @npages: Number of system or device coherent pages to map.
* @dir: Direction of data transfer (e.g., DMA_BIDIRECTIONAL)
+ * @state: DMA IOVA state for mapping.
*
* This function maps pages of memory for migration usage in GPU SVM. It
* iterates over each page frame number provided in @migrate_pfn, maps the
* corresponding page, and stores the DMA address in the provided @dma_addr
* array.
*
- * Returns: 0 on success, -EFAULT if an error occurs during mapping.
+ * Returns: 0 on success, negative error code on failure.
*/
static int
drm_pagemap_migrate_map_system_pages(struct device *dev,
struct drm_pagemap_addr *pagemap_addr,
unsigned long *migrate_pfn,
unsigned long npages,
- enum dma_data_direction dir)
+ enum dma_data_direction dir,
+ struct drm_pagemap_iova_state *state)
{
unsigned long i;
+ bool try_alloc = false;
for (i = 0; i < npages;) {
struct page *page = migrate_pfn_to_page(migrate_pfn[i]);
@@ -329,9 +345,31 @@ drm_pagemap_migrate_map_system_pages(struct device *dev,
folio = page_folio(page);
order = folio_order(folio);
- dma_addr = dma_map_page(dev, page, 0, page_size(page), dir);
- if (dma_mapping_error(dev, dma_addr))
- return -EFAULT;
+ if (!try_alloc) {
+ dma_iova_try_alloc(dev, &state->dma_state,
+ (npages - i) * PAGE_SIZE >=
+ HPAGE_PMD_SIZE ?
+ HPAGE_PMD_SIZE : 0,
+ npages * PAGE_SIZE);
+ try_alloc = true;
+ }
+
+ if (dma_use_iova(&state->dma_state)) {
+ int err = dma_iova_link(dev, &state->dma_state,
+ page_to_phys(page),
+ state->offset, page_size(page),
+ dir, 0);
+ if (err)
+ return err;
+
+ dma_addr = state->dma_state.addr + state->offset;
+ state->offset += page_size(page);
+ } else {
+ dma_addr = dma_map_page(dev, page, 0, page_size(page),
+ dir);
+ if (dma_mapping_error(dev, dma_addr))
+ return -EFAULT;
+ }
pagemap_addr[i] =
drm_pagemap_addr_encode(dma_addr,
@@ -342,6 +380,9 @@ drm_pagemap_migrate_map_system_pages(struct device *dev,
i += NR_PAGES(order);
}
+ if (dma_use_iova(&state->dma_state))
+ return dma_iova_sync(dev, &state->dma_state, 0, state->offset);
+
return 0;
}
@@ -353,6 +394,7 @@ drm_pagemap_migrate_map_system_pages(struct device *dev,
* @pagemap_addr: Array of DMA information corresponding to mapped pages
* @npages: Number of pages to unmap
* @dir: Direction of data transfer (e.g., DMA_BIDIRECTIONAL)
+ * @state: DMA IOVA state for mapping.
*
* This function unmaps previously mapped pages of memory for GPU Shared Virtual
* Memory (SVM). It iterates over each DMA address provided in @dma_addr, checks
@@ -362,10 +404,16 @@ static void drm_pagemap_migrate_unmap_pages(struct device *dev,
struct drm_pagemap_addr *pagemap_addr,
unsigned long *migrate_pfn,
unsigned long npages,
- enum dma_data_direction dir)
+ enum dma_data_direction dir,
+ struct drm_pagemap_iova_state *state)
{
unsigned long i;
+ if (state && dma_use_iova(&state->dma_state)) {
+ dma_iova_destroy(dev, &state->dma_state, state->offset, dir, 0);
+ return;
+ }
+
for (i = 0; i < npages;) {
struct page *page = migrate_pfn_to_page(migrate_pfn[i]);
@@ -420,7 +468,7 @@ drm_pagemap_migrate_remote_to_local(struct drm_pagemap_devmem *devmem,
devmem->pre_migrate_fence);
out:
drm_pagemap_migrate_unmap_pages(remote_device, pagemap_addr, local_pfns,
- npages, DMA_FROM_DEVICE);
+ npages, DMA_FROM_DEVICE, NULL);
return err;
}
@@ -430,11 +478,13 @@ drm_pagemap_migrate_sys_to_dev(struct drm_pagemap_devmem *devmem,
struct page *local_pages[],
struct drm_pagemap_addr pagemap_addr[],
unsigned long npages,
- const struct drm_pagemap_devmem_ops *ops)
+ const struct drm_pagemap_devmem_ops *ops,
+ struct drm_pagemap_iova_state *state)
{
int err = drm_pagemap_migrate_map_system_pages(devmem->dev,
pagemap_addr, sys_pfns,
- npages, DMA_TO_DEVICE);
+ npages, DMA_TO_DEVICE,
+ state);
if (err)
goto out;
@@ -443,7 +493,7 @@ drm_pagemap_migrate_sys_to_dev(struct drm_pagemap_devmem *devmem,
devmem->pre_migrate_fence);
out:
drm_pagemap_migrate_unmap_pages(devmem->dev, pagemap_addr, sys_pfns, npages,
- DMA_TO_DEVICE);
+ DMA_TO_DEVICE, state);
return err;
}
@@ -471,6 +521,7 @@ static int drm_pagemap_migrate_range(struct drm_pagemap_devmem *devmem,
const struct migrate_range_loc *cur,
const struct drm_pagemap_migrate_details *mdetails)
{
+ struct drm_pagemap_iova_state state = {};
int ret = 0;
if (cur->start == 0)
@@ -498,7 +549,7 @@ static int drm_pagemap_migrate_range(struct drm_pagemap_devmem *devmem,
&pages[last->start],
&pagemap_addr[last->start],
cur->start - last->start,
- last->ops);
+ last->ops, &state);
out:
*last = *cur;
@@ -1060,6 +1111,7 @@ EXPORT_SYMBOL(drm_pagemap_put);
int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation)
{
const struct drm_pagemap_devmem_ops *ops = devmem_allocation->ops;
+ struct drm_pagemap_iova_state state = {};
unsigned long npages, mpages = 0;
struct page **pages;
unsigned long *src, *dst;
@@ -1101,7 +1153,7 @@ int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation)
err = drm_pagemap_migrate_map_system_pages(devmem_allocation->dev,
pagemap_addr,
dst, npages,
- DMA_FROM_DEVICE);
+ DMA_FROM_DEVICE, &state);
if (err)
goto err_finalize;
@@ -1125,7 +1177,7 @@ int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation)
migrate_device_pages(src, dst, npages);
migrate_device_finalize(src, dst, npages);
drm_pagemap_migrate_unmap_pages(devmem_allocation->dev, pagemap_addr, dst, npages,
- DMA_FROM_DEVICE);
+ DMA_FROM_DEVICE, &state);
err_free:
kvfree(buf);
@@ -1137,6 +1189,7 @@ int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation)
if (retry_count--) {
cond_resched();
+ state = (struct drm_pagemap_iova_state){};
goto retry;
}
@@ -1170,6 +1223,7 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
MIGRATE_VMA_SELECT_COMPOUND,
.fault_page = page,
};
+ struct drm_pagemap_iova_state state = {};
struct drm_pagemap_zdd *zdd;
const struct drm_pagemap_devmem_ops *ops;
struct device *dev = NULL;
@@ -1229,7 +1283,7 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
err = drm_pagemap_migrate_map_system_pages(dev, pagemap_addr,
migrate.dst, npages,
- DMA_FROM_DEVICE);
+ DMA_FROM_DEVICE, &state);
if (err)
goto err_finalize;
@@ -1254,7 +1308,8 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
migrate_vma_finalize(&migrate);
if (dev)
drm_pagemap_migrate_unmap_pages(dev, pagemap_addr, migrate.dst,
- npages, DMA_FROM_DEVICE);
+ npages, DMA_FROM_DEVICE,
+ &state);
err_free:
kvfree(buf);
err_out:
--
2.34.1
* [PATCH v7 5/5] drm/pagemap: Fix drm_pagemap_migrate_unmap_pages kerneldoc
2026-04-10 20:59 [PATCH v7 0/5] Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap Matthew Brost
` (3 preceding siblings ...)
2026-04-10 20:59 ` [PATCH v7 4/5] drm/pagemap: Use dma-map IOVA alloc, link, and sync API for DRM pagemap Matthew Brost
@ 2026-04-10 20:59 ` Matthew Brost
2026-04-11 23:24 ` Claude review: " Claude Code Review Bot
2026-04-11 23:24 ` Claude review: Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap Claude Code Review Bot
5 siblings, 1 reply; 13+ messages in thread
From: Matthew Brost @ 2026-04-10 20:59 UTC (permalink / raw)
To: intel-xe, dri-devel; +Cc: Francois Dugast
Replace @dma_addr with @pagemap_addr in the function documentation,
as @pagemap_addr is the actual name of the function argument.
Suggested-by: Francois Dugast <francois.dugast@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Francois Dugast <francois.dugast@intel.com>
---
drivers/gpu/drm/drm_pagemap.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
index bd2037c77c92..d82ea7ccb8da 100644
--- a/drivers/gpu/drm/drm_pagemap.c
+++ b/drivers/gpu/drm/drm_pagemap.c
@@ -397,8 +397,9 @@ drm_pagemap_migrate_map_system_pages(struct device *dev,
* @state: DMA IOVA state for mapping.
*
* This function unmaps previously mapped pages of memory for GPU Shared Virtual
- * Memory (SVM). It iterates over each DMA address provided in @dma_addr, checks
- * if it's valid and not already unmapped, and unmaps the corresponding page.
+ * Memory (SVM). It iterates over each DMA address provided in @pagemap_addr,
+ * checks if it's valid and not already unmapped, and unmaps the corresponding
+ * page.
*/
static void drm_pagemap_migrate_unmap_pages(struct device *dev,
struct drm_pagemap_addr *pagemap_addr,
--
2.34.1
* Claude review: Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap
2026-04-10 20:59 [PATCH v7 0/5] Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap Matthew Brost
` (4 preceding siblings ...)
2026-04-10 20:59 ` [PATCH v7 5/5] drm/pagemap: Fix drm_pagemap_migrate_unmap_pages kerneldoc Matthew Brost
@ 2026-04-11 23:24 ` Claude Code Review Bot
5 siblings, 0 replies; 13+ messages in thread
From: Claude Code Review Bot @ 2026-04-11 23:24 UTC (permalink / raw)
To: dri-devel-reviews
Overall Series Review
Subject: Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap
Author: Matthew Brost <matthew.brost@intel.com>
Patches: 6
Reviewed: 2026-04-12T09:24:50.817651
---
This v7 series converts GPU SVM and DRM pagemap migration paths from per-page `dma_map_page`/`dma_unmap_page` to the batched IOVA alloc/link/sync API, avoiding costly per-page IOMMU synchronizations. The performance numbers in the cover letter (33% reduction in 2M migrate CPU time) are compelling.
The series is well-structured: Patch 1 handles GPU SVM, Patches 2-3 prepare DRM pagemap by simplifying and splitting, Patch 4 adds IOVA to pagemap, and Patch 5 is a trivial doc fix. The fallback to `dma_map_page` when `dma_iova_try_alloc` fails is correctly handled in all paths.
There is one correctness issue in the GPU SVM unmap path (Patch 1) where system pages can incorrectly fall through to `device_unmap` when IOVA is active, one performance gap in the IOVA allocation trigger condition, and one documentation bug introduced in Patch 3 that Patch 5 partially addresses.
---
Generated by Claude Code Patch Reviewer
* Claude review: drm/gpusvm: Use dma-map IOVA alloc, link, and sync API in GPU SVM
2026-04-10 20:59 ` [PATCH v7 1/5] drm/gpusvm: Use dma-map IOVA alloc, link, and sync API in GPU SVM Matthew Brost
@ 2026-04-11 23:24 ` Claude Code Review Bot
0 siblings, 0 replies; 13+ messages in thread
From: Claude Code Review Bot @ 2026-04-11 23:24 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
**Bug: Incorrect unmap logic for mixed system+device pages**
In `__drm_gpusvm_unmap_pages` (`drm_gpusvm.c:1149-1161`), the conditional structure is wrong when `use_iova` is true and the range contains both system and device pages:
```c
if (!use_iova && addr->proto == DRM_INTERCONNECT_SYSTEM)
	dma_unmap_page(dev, addr->addr,
		       PAGE_SIZE << addr->order,
		       addr->dir);
else if (dpagemap && dpagemap->ops->device_unmap)
	dpagemap->ops->device_unmap(dpagemap, dev, addr);
```
When `use_iova` is true and a system page entry is encountered, `!use_iova` is false, so the first condition fails and control falls through to the `else if`, which calls `device_unmap` on a system-page address that was already freed by `dma_iova_destroy` above. From the `device_unmap` callback's perspective, this is a use-after-free of the IOVA address.
This can occur when `ctx->allow_mixed` is true and a range contains both device pages (mapped via `device_map`) and system pages (mapped via IOVA). The fix should be:
```c
if (addr->proto == DRM_INTERCONNECT_SYSTEM) {
	if (!use_iova)
		dma_unmap_page(dev, addr->addr,
			       PAGE_SIZE << addr->order,
			       addr->dir);
} else if (dpagemap && dpagemap->ops->device_unmap) {
	dpagemap->ops->device_unmap(dpagemap, dev, addr);
}
```
**Performance gap: IOVA allocation trigger uses `!i` instead of a flag**
In `drm_gpusvm_get_pages` (`drm_gpusvm.c:1552-1557`):
```c
if (!i)
	dma_iova_try_alloc(gpusvm->drm->dev, state,
			   npages * PAGE_SIZE >= HPAGE_PMD_SIZE ?
			   HPAGE_PMD_SIZE : 0,
			   npages * PAGE_SIZE);
```
The `!i` condition means the IOVA range is only allocated when the very first page (`i == 0`) is a valid system page. If page 0 is invalid (`HMM_PFN_VALID` not set) or is a device page, `i` advances past 0 and IOVA is never allocated for any subsequent system pages; they all fall back to per-page `dma_map_page`. Contrast this with Patch 4's pagemap version, which uses a `bool try_alloc` flag that triggers on the first valid page regardless of index -- that pattern would be more robust here.
**Overall**: The structure of adding `state` and `state_offset` to `struct drm_gpusvm_pages` is clean. The initialization at `drm_gpusvm.c:1456-1457` is correct, and the `dma_iova_sync` call after the mapping loop at `drm_gpusvm.c:1590-1594` is properly placed. The fallback path when `dma_use_iova` returns false is correct.
---
Generated by Claude Code Patch Reviewer
* Claude review: drm/pagemap: Drop source_peer_migrates flag and assume true
2026-04-10 20:59 ` [PATCH v7 2/5] drm/pagemap: Drop source_peer_migrates flag and assume true Matthew Brost
@ 2026-04-11 23:24 ` Claude Code Review Bot
0 siblings, 0 replies; 13+ messages in thread
From: Claude Code Review Bot @ 2026-04-11 23:24 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Clean simplification. The commit message correctly justifies the removal: all current users set this to true, and the flag's interaction with high-speed fabric mapping APIs is questionable.
The code change in `drm_pagemap.c:651-662` correctly removes the `if (mdetails->source_peer_migrates)` guard and unconditionally executes the body. The `xe_svm.c` removal of the `.source_peer_migrates = 1` initializer and the `drm_pagemap.h` struct cleanup are consistent.
The kerneldoc update for `can_migrate_same_pagemap` is a nice touch -- removing the stale reference to `source_peer_migrates` in its description.
No issues.
---
Generated by Claude Code Patch Reviewer
* Claude review: drm/pagemap: Split drm_pagemap_migrate_map_pages into device / system
2026-04-10 20:59 ` [PATCH v7 3/5] drm/pagemap: Split drm_pagemap_migrate_map_pages into device / system Matthew Brost
@ 2026-04-11 23:24 ` Claude Code Review Bot
0 siblings, 0 replies; 13+ messages in thread
From: Claude Code Review Bot @ 2026-04-11 23:24 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Good preparatory refactor. Splitting the combined `drm_pagemap_migrate_map_pages` into `drm_pagemap_migrate_map_device_private_pages` and `drm_pagemap_migrate_map_system_pages` cleanly separates concerns and makes the IOVA addition in Patch 4 more localized.
The addition of `WARN_ON_ONCE(!is_device_private_page(page))` in the device-private helper and `WARN_ON_ONCE(is_device_private_page(page))` in the system helper are good defensive checks.
The `drm_pagemap_migrate_sys_to_dev` signature change (dropping `mdetails`) is correct since `mdetails` was only needed for the device-private page logic (`can_migrate_same_pagemap`), and system pages don't need it.
**Minor doc issue**: The new kerneldoc for `drm_pagemap_migrate_map_system_pages` says:
```
* corresponding page, and stores the DMA address in the provided @dma_addr
* array.
```
The parameter is `@pagemap_addr`, not `@dma_addr`. Patch 5 fixes this exact same issue in the unmap function but misses it in this map function's doc.
---
Generated by Claude Code Patch Reviewer
* Claude review: drm/pagemap: Use dma-map IOVA alloc, link, and sync API for DRM pagemap
2026-04-10 20:59 ` [PATCH v7 4/5] drm/pagemap: Use dma-map IOVA alloc, link, and sync API for DRM pagemap Matthew Brost
@ 2026-04-11 23:24 ` Claude Code Review Bot
0 siblings, 0 replies; 13+ messages in thread
From: Claude Code Review Bot @ 2026-04-11 23:24 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
The `struct drm_pagemap_iova_state` wrapper is well-designed, combining `dma_iova_state` with a tracking `offset` that serves as the iterator for packing IOVA addresses contiguously.
The `try_alloc` flag pattern in `drm_pagemap_migrate_map_system_pages` (`drm_pagemap.c:348-355`) is better than Patch 1's `!i` approach -- it correctly handles leading NULL pages by triggering on the first valid page.
The alignment hint logic is reasonable:
```c
(npages - i) * PAGE_SIZE >= HPAGE_PMD_SIZE ? HPAGE_PMD_SIZE : 0
```
This passes an alignment hint only when there are enough remaining pages to benefit from huge page IOMMU mappings.
The unmap path in `drm_pagemap_migrate_unmap_pages` (`drm_pagemap.c:413-416`) correctly does an early return after `dma_iova_destroy` when IOVA is active. This is safe because this function is only called from paths that map system pages (not device-private pages), so the loop below for device pages is not needed.
The `state` re-initialization in `drm_pagemap_evict_to_ram` on retry (`drm_pagemap.c:1189`) is correct and necessary.
No issues beyond what's noted above.
---
Generated by Claude Code Patch Reviewer
* Claude review: drm/pagemap: Fix drm_pagemap_migrate_unmap_pages kerneldoc
2026-04-10 20:59 ` [PATCH v7 5/5] drm/pagemap: Fix drm_pagemap_migrate_unmap_pages kerneldoc Matthew Brost
@ 2026-04-11 23:24 ` Claude Code Review Bot
0 siblings, 0 replies; 13+ messages in thread
From: Claude Code Review Bot @ 2026-04-11 23:24 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Trivial and correct doc fix. Changes `@dma_addr` to `@pagemap_addr` in the unmap function's kerneldoc. As noted above, the same `@dma_addr` reference exists in the map function's kerneldoc (introduced in Patch 3) and should also be fixed for consistency.
---
Generated by Claude Code Patch Reviewer
* Claude review: drm/pagemap: Fix drm_pagemap_migrate_unmap_pages kerneldoc
2026-04-08 20:15 ` [PATCH v6 5/5] drm/pagemap: Fix drm_pagemap_migrate_unmap_pages kerneldoc Matthew Brost
@ 2026-04-12 2:09 ` Claude Code Review Bot
0 siblings, 0 replies; 13+ messages in thread
From: Claude Code Review Bot @ 2026-04-12 2:09 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Trivial doc fix:
```c
- * Memory (SVM). It iterates over each DMA address provided in @dma_addr, checks
- * if it's valid and not already unmapped, and unmaps the corresponding page.
+ * Memory (SVM). It iterates over each DMA address provided in @pagemap_addr,
+ * checks if it's valid and not already unmapped, and unmaps the corresponding
+ * page.
```
Correct. As noted above, the same stale `@dma_addr` reference exists in `drm_pagemap_migrate_map_system_pages` (introduced in patch 3) and should also be fixed.
---
Generated by Claude Code Patch Reviewer