public inbox for drm-ai-reviews@public-inbox.freedesktop.org
 help / color / mirror / Atom feed
* [PATCH v5 00/11] Support repeated mappings in GPUVM and Panthor
@ 2026-03-13 15:09 Adrián Larumbe
  2026-03-13 15:09 ` [PATCH v5 01/11] drm/panthor: Expose GPU page sizes to UM Adrián Larumbe
                   ` (11 more replies)
  0 siblings, 12 replies; 24+ messages in thread
From: Adrián Larumbe @ 2026-03-13 15:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, Steven Price, Boris Brezillon, Janne Grunau, kernel,
	Adrián Larumbe

This patch series adds an OP_MAP_REPEAT flag, which lets the user map a
BO region repeatedly over an address range with a single map operation.

Sparse resources in the Vulkan API let the user leave regions of a
resource unmapped (from the API's perspective). Accesses to such regions
must not result in program termination, but loads produce undefined
values.

To implement this feature on Mali hardware, Vulkan sparse unmap is
implemented by mapping the specified region to a "dummy BO" so that
accesses to it do not fault. A newly created sparse resource starts off
unmapped, and therefore also has to be mapped to the dummy BO. This
dummy BO is small (a single page) in comparison to the sizes of the VA
ranges we might want to map it over, so a large number of vm_bind ops
can be necessary. For example, if the user were to create a 100e6-byte
sparse resident resource, we'd have to poke VM_BIND with
ceil(100e6/0x1000) = 24415 map operations.

OP_MAP_REPEAT addresses this inefficiency by letting us implement a
Vulkan sparse unmap operation, or the initialization of a sparse
resident resource, with just one map operation.

The panvk changes making use of this uAPI can be found at [1].
The discussion of the previous revision of this series can be found at [2].

[1] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/40400
[2] https://lore.kernel.org/dri-devel/20250707170442.1437009-1-caterina.shablia@collabora.com/

Changes in v5:
 - Minor fixes to drm_gpuvm.c.
 - Add a queryable device parameter for the panthor MMU page sizes.
 - Add helper to make sure unmaps of repeated regions are correct.
 - Some fixes to Panthor's repeat mappings implementation.
 - Lump arguments to panthor_vm_prepare_map_op_ctx into a single struct.

Changes in v4:
 - Fixed the warnings reported by the kernel test robot.
  https://lore.kernel.org/oe-kbuild-all/202507041635.WyDu3TQ1-lkp@intel.com/
 - Fixed the warnings reported by the CI.
  https://patchwork.freedesktop.org/series/151264/

No changes in v3.

Changes in v2:
 - Make panthor use this stuff.
 - Make it possible to express a repeated mapping of any suitably sized
  and aligned range of a BO, rather than strictly a page-sized prefix,
  generalizing the API. Rename DRM_GPUVA_SINGLE_PAGE to
  DRM_GPUVA_REPEAT.
 - Clean up parts of drm/gpuvm affected by these changes.

Adrián Larumbe (7):
  drm/panthor: Expose GPU page sizes to UM
  drm/gpuvm: Remove dead code
  drm/gpuvm: Fix comment to reflect remap operation operand status
  drm/gpuvm: Ensure correctness of unmap/remaps of repeated regions
  drm/panthor: Handle remap case for repeated mappings
  drm/panthor: Pass vm_bind_op to vm_prepare_map_op_ctx
  drm/panthor: Bump the driver version to 1.8

Asahi Lina (2):
  drm/gpuvm: Add a flags field to drm_gpuva_op_map
  drm/gpuvm: Add DRM_GPUVA_REPEAT flag and logic

Boris Brezillon (2):
  drm/gpuvm: Add a helper to check if two VA can be merged
  drm/panthor: Add support for repeated mappings

 drivers/gpu/drm/drm_gpuvm.c              | 214 ++++++++++++++++++-----
 drivers/gpu/drm/panthor/panthor_device.h |   3 +
 drivers/gpu/drm/panthor/panthor_drv.c    |  12 +-
 drivers/gpu/drm/panthor/panthor_mmu.c    | 176 +++++++++++++++----
 include/drm/drm_gpuvm.h                  |  63 ++++++-
 include/uapi/drm/panthor_drm.h           |  33 ++++
 6 files changed, 427 insertions(+), 74 deletions(-)

--
2.53.0

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH v5 01/11] drm/panthor: Expose GPU page sizes to UM
  2026-03-13 15:09 [PATCH v5 00/11] Support repeated mappings in GPUVM and Panthor Adrián Larumbe
@ 2026-03-13 15:09 ` Adrián Larumbe
  2026-03-13 20:48   ` Claude review: " Claude Code Review Bot
  2026-03-13 15:09 ` [PATCH v5 02/11] drm/gpuvm: Remove dead code Adrián Larumbe
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 24+ messages in thread
From: Adrián Larumbe @ 2026-03-13 15:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, Steven Price, Boris Brezillon, Janne Grunau, kernel,
	Adrián Larumbe, Liviu Dudau, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
	Daniel Almeida, Alice Ryhl

Future commits implementing repeated mappings will only accept repeat
values that are a multiple of a supported GPU page size. That means
these values must be made known to UM. Expose them through a queryable
GPU info value.

Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
 drivers/gpu/drm/panthor/panthor_device.h |  3 +++
 drivers/gpu/drm/panthor/panthor_drv.c    |  8 ++++++++
 drivers/gpu/drm/panthor/panthor_mmu.c    |  9 ++++++++-
 include/uapi/drm/panthor_drm.h           | 13 +++++++++++++
 4 files changed, 32 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/panthor/panthor_device.h b/drivers/gpu/drm/panthor/panthor_device.h
index b6696f73a536..91bfba9018cf 100644
--- a/drivers/gpu/drm/panthor/panthor_device.h
+++ b/drivers/gpu/drm/panthor/panthor_device.h
@@ -157,6 +157,9 @@ struct panthor_device {
 	/** @csif_info: Command stream interface information. */
 	struct drm_panthor_csif_info csif_info;
 
+	/** @mmu_info: MMU info */
+	struct drm_panthor_mmu_info mmu_info;
+
 	/** @hw: GPU-specific data. */
 	struct panthor_hw *hw;
 
diff --git a/drivers/gpu/drm/panthor/panthor_drv.c b/drivers/gpu/drm/panthor/panthor_drv.c
index 165dddfde6ca..8a901e06a9c9 100644
--- a/drivers/gpu/drm/panthor/panthor_drv.c
+++ b/drivers/gpu/drm/panthor/panthor_drv.c
@@ -172,6 +172,7 @@ panthor_get_uobj_array(const struct drm_panthor_obj_array *in, u32 min_stride,
 	_Generic(_obj_name, \
 		 PANTHOR_UOBJ_DECL(struct drm_panthor_gpu_info, tiler_present), \
 		 PANTHOR_UOBJ_DECL(struct drm_panthor_csif_info, pad), \
+		 PANTHOR_UOBJ_DECL(struct drm_panthor_mmu_info, page_size_bitmap), \
 		 PANTHOR_UOBJ_DECL(struct drm_panthor_timestamp_info, current_timestamp), \
 		 PANTHOR_UOBJ_DECL(struct drm_panthor_group_priorities_info, pad), \
 		 PANTHOR_UOBJ_DECL(struct drm_panthor_sync_op, timeline_value), \
@@ -830,6 +831,10 @@ static int panthor_ioctl_dev_query(struct drm_device *ddev, void *data, struct d
 			args->size = sizeof(ptdev->csif_info);
 			return 0;
 
+		case DRM_PANTHOR_DEV_QUERY_MMU_INFO:
+			args->size = sizeof(ptdev->mmu_info);
+			return 0;
+
 		case DRM_PANTHOR_DEV_QUERY_TIMESTAMP_INFO:
 			args->size = sizeof(timestamp_info);
 			return 0;
@@ -850,6 +855,9 @@ static int panthor_ioctl_dev_query(struct drm_device *ddev, void *data, struct d
 	case DRM_PANTHOR_DEV_QUERY_CSIF_INFO:
 		return PANTHOR_UOBJ_SET(args->pointer, args->size, ptdev->csif_info);
 
+	case DRM_PANTHOR_DEV_QUERY_MMU_INFO:
+		return PANTHOR_UOBJ_SET(args->pointer, args->size, ptdev->mmu_info);
+
 	case DRM_PANTHOR_DEV_QUERY_TIMESTAMP_INFO:
 		ret = panthor_query_timestamp_info(ptdev, &timestamp_info);
 
diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index ba3b7c93303c..07c520475f14 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -2463,7 +2463,7 @@ panthor_vm_create(struct panthor_device *ptdev, bool for_mcu,
 	refcount_set(&vm->as.active_cnt, 0);
 
 	pgtbl_cfg = (struct io_pgtable_cfg) {
-		.pgsize_bitmap	= SZ_4K | SZ_2M,
+		.pgsize_bitmap	= ptdev->mmu_info.page_size_bitmap,
 		.ias		= va_bits,
 		.oas		= pa_bits,
 		.coherent_walk	= ptdev->coherent,
@@ -2837,6 +2837,11 @@ static void panthor_mmu_release_wq(struct drm_device *ddev, void *res)
 	destroy_workqueue(res);
 }
 
+static void panthor_mmu_info_init(struct panthor_device *ptdev)
+{
+	ptdev->mmu_info.page_size_bitmap = SZ_4K | SZ_2M;
+}
+
 /**
  * panthor_mmu_init() - Initialize the MMU logic.
  * @ptdev: Device.
@@ -2849,6 +2854,8 @@ int panthor_mmu_init(struct panthor_device *ptdev)
 	struct panthor_mmu *mmu;
 	int ret, irq;
 
+	panthor_mmu_info_init(ptdev);
+
 	mmu = drmm_kzalloc(&ptdev->base, sizeof(*mmu), GFP_KERNEL);
 	if (!mmu)
 		return -ENOMEM;
diff --git a/include/uapi/drm/panthor_drm.h b/include/uapi/drm/panthor_drm.h
index b401ac585d6a..4089271f3d36 100644
--- a/include/uapi/drm/panthor_drm.h
+++ b/include/uapi/drm/panthor_drm.h
@@ -246,6 +246,9 @@ enum drm_panthor_dev_query_type {
 	/** @DRM_PANTHOR_DEV_QUERY_CSIF_INFO: Query command-stream interface information. */
 	DRM_PANTHOR_DEV_QUERY_CSIF_INFO,
 
+	/** @DRM_PANTHOR_DEV_QUERY_MMU_INFO: Query MMU information. */
+	DRM_PANTHOR_DEV_QUERY_MMU_INFO,
+
 	/** @DRM_PANTHOR_DEV_QUERY_TIMESTAMP_INFO: Query timestamp information. */
 	DRM_PANTHOR_DEV_QUERY_TIMESTAMP_INFO,
 
@@ -409,6 +412,16 @@ struct drm_panthor_csif_info {
 	__u32 pad;
 };
 
+/**
+ * struct drm_panthor_mmu_info - MMU information
+ *
+ * Structure grouping all queryable information relating to the MMU.
+ */
+struct drm_panthor_mmu_info {
+	/** @page_size_bitmap: Allowed page sizes */
+	__u64 page_size_bitmap;
+};
+
 /**
  * struct drm_panthor_timestamp_info - Timestamp information
  *
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v5 02/11] drm/gpuvm: Remove dead code
  2026-03-13 15:09 [PATCH v5 00/11] Support repeated mappings in GPUVM and Panthor Adrián Larumbe
  2026-03-13 15:09 ` [PATCH v5 01/11] drm/panthor: Expose GPU page sizes to UM Adrián Larumbe
@ 2026-03-13 15:09 ` Adrián Larumbe
  2026-03-13 20:48   ` Claude review: " Claude Code Review Bot
  2026-03-13 15:09 ` [PATCH v5 03/11] drm/gpuvm: Fix comment to reflect remap operation operand status Adrián Larumbe
                   ` (9 subsequent siblings)
  11 siblings, 1 reply; 24+ messages in thread
From: Adrián Larumbe @ 2026-03-13 15:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, Steven Price, Boris Brezillon, Janne Grunau, kernel,
	Adrián Larumbe, Danilo Krummrich, Matthew Brost,
	Thomas Hellström, Alice Ryhl, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter

drm_gpuva_find_next() has no consumers.

Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
 drivers/gpu/drm/drm_gpuvm.c | 22 ----------------------
 include/drm/drm_gpuvm.h     |  1 -
 2 files changed, 23 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index 14469765a780..3c2b6102e818 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -2245,28 +2245,6 @@ drm_gpuva_find_prev(struct drm_gpuvm *gpuvm, u64 start)
 }
 EXPORT_SYMBOL_GPL(drm_gpuva_find_prev);
 
-/**
- * drm_gpuva_find_next() - find the &drm_gpuva after the given address
- * @gpuvm: the &drm_gpuvm to search in
- * @end: the given GPU VA's end address
- *
- * Find the adjacent &drm_gpuva after the GPU VA with given &end address.
- *
- * Note that if there is any free space between the GPU VA mappings no mapping
- * is returned.
- *
- * Returns: a pointer to the found &drm_gpuva or NULL if none was found
- */
-struct drm_gpuva *
-drm_gpuva_find_next(struct drm_gpuvm *gpuvm, u64 end)
-{
-	if (!drm_gpuvm_range_valid(gpuvm, end, 1))
-		return NULL;
-
-	return drm_gpuva_it_iter_first(&gpuvm->rb.tree, end, end + 1);
-}
-EXPORT_SYMBOL_GPL(drm_gpuva_find_next);
-
 /**
  * drm_gpuvm_interval_empty() - indicate whether a given interval of the VA space
  * is empty
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 655bd9104ffb..625958fce7fd 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -160,7 +160,6 @@ struct drm_gpuva *drm_gpuva_find(struct drm_gpuvm *gpuvm,
 struct drm_gpuva *drm_gpuva_find_first(struct drm_gpuvm *gpuvm,
 				       u64 addr, u64 range);
 struct drm_gpuva *drm_gpuva_find_prev(struct drm_gpuvm *gpuvm, u64 start);
-struct drm_gpuva *drm_gpuva_find_next(struct drm_gpuvm *gpuvm, u64 end);
 
 /**
  * drm_gpuva_invalidate() - sets whether the backing GEM of this &drm_gpuva is
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v5 03/11] drm/gpuvm: Fix comment to reflect remap operation operand status
  2026-03-13 15:09 [PATCH v5 00/11] Support repeated mappings in GPUVM and Panthor Adrián Larumbe
  2026-03-13 15:09 ` [PATCH v5 01/11] drm/panthor: Expose GPU page sizes to UM Adrián Larumbe
  2026-03-13 15:09 ` [PATCH v5 02/11] drm/gpuvm: Remove dead code Adrián Larumbe
@ 2026-03-13 15:09 ` Adrián Larumbe
  2026-03-13 20:48   ` Claude review: " Claude Code Review Bot
  2026-03-13 15:09 ` [PATCH v5 04/11] drm/gpuvm: Add a helper to check if two VA can be merged Adrián Larumbe
                   ` (8 subsequent siblings)
  11 siblings, 1 reply; 24+ messages in thread
From: Adrián Larumbe @ 2026-03-13 15:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, Steven Price, Boris Brezillon, Janne Grunau, kernel,
	Adrián Larumbe, Danilo Krummrich, Matthew Brost,
	Thomas Hellström, Alice Ryhl, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter

When a new mapping intersects an existing GPU VA, but either of its
ends lies before or beyond the existing VA's edges, the prev and next
map operations that are part of a remap reflect this condition by
being set to NULL.

Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
 include/drm/drm_gpuvm.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 625958fce7fd..6a6e64cd2cce 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -929,6 +929,8 @@ struct drm_gpuva_op_unmap {
  * If either a new mapping's start address is aligned with the start address
  * of the old mapping or the new mapping's end address is aligned with the
  * end address of the old mapping, either @prev or @next is NULL.
+ * This might also be the case when the requested mapping extends over the
+ * lower and upper boundaries of the intersecting GPU VA.
  *
  * Note, the reason for a dedicated remap operation, rather than arbitrary
  * unmap and map operations, is to give drivers the chance of extracting driver
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v5 04/11] drm/gpuvm: Add a helper to check if two VA can be merged
  2026-03-13 15:09 [PATCH v5 00/11] Support repeated mappings in GPUVM and Panthor Adrián Larumbe
                   ` (2 preceding siblings ...)
  2026-03-13 15:09 ` [PATCH v5 03/11] drm/gpuvm: Fix comment to reflect remap operation operand status Adrián Larumbe
@ 2026-03-13 15:09 ` Adrián Larumbe
  2026-03-13 20:48   ` Claude review: " Claude Code Review Bot
  2026-03-13 15:09 ` [PATCH v5 05/11] drm/gpuvm: Add a flags field to drm_gpuva_op_map Adrián Larumbe
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 24+ messages in thread
From: Adrián Larumbe @ 2026-03-13 15:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, Steven Price, Boris Brezillon, Janne Grunau, kernel,
	Adrián Larumbe, Caterina Shablia, Danilo Krummrich,
	Matthew Brost, Thomas Hellström, Alice Ryhl,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter

From: Boris Brezillon <boris.brezillon@collabora.com>

We are going to add flags/properties that will impact the VA merging
ability. Instead of sprinkling tests all over the place in
__drm_gpuvm_sm_map(), let's add a helper aggregating all these checks
and call it for every existing VA we walk through in the
__drm_gpuvm_sm_map() loop.

Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Caterina Shablia <caterina.shablia@collabora.com>
---
 drivers/gpu/drm/drm_gpuvm.c | 46 +++++++++++++++++++++++++++----------
 1 file changed, 34 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index 3c2b6102e818..4af7b71abcb4 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -2378,16 +2378,47 @@ op_unmap_cb(const struct drm_gpuvm_ops *fn, void *priv,
 	return fn->sm_step_unmap(&op, priv);
 }
 
+static bool can_merge(struct drm_gpuvm *gpuvm, const struct drm_gpuva *va,
+		      const struct drm_gpuva_op_map *new_map)
+{
+	struct drm_gpuva_op_map existing_map = {
+		.va.addr = va->va.addr,
+		.va.range = va->va.range,
+		.gem.offset = va->gem.offset,
+		.gem.obj = va->gem.obj,
+	};
+	const struct drm_gpuva_op_map *a = new_map, *b = &existing_map;
+
+	/* Only GEM-based mappings can be merged, and they must point to
+	 * the same GEM object.
+	 */
+	if (a->gem.obj != b->gem.obj || !a->gem.obj)
+		return false;
+
+	/* Order VAs for the rest of the checks. */
+	if (a->va.addr > b->va.addr)
+		swap(a, b);
+
+	/* We assume the caller already checked that VAs overlap or are
+	 * contiguous.
+	 */
+	if (drm_WARN_ON(gpuvm->drm, b->va.addr > a->va.addr + a->va.range))
+		return false;
+
+	/* We intentionally ignore u64 underflows because all we care about
+	 * here is whether the VA diff matches the GEM offset diff.
+	 */
+	return b->va.addr - a->va.addr == b->gem.offset - a->gem.offset;
+}
+
 static int
 __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 		   const struct drm_gpuvm_ops *ops, void *priv,
 		   const struct drm_gpuvm_map_req *req,
 		   bool madvise)
 {
-	struct drm_gem_object *req_obj = req->map.gem.obj;
 	const struct drm_gpuvm_map_req *op_map = madvise ? NULL : req;
 	struct drm_gpuva *va, *next;
-	u64 req_offset = req->map.gem.offset;
 	u64 req_range = req->map.va.range;
 	u64 req_addr = req->map.va.addr;
 	u64 req_end = req_addr + req_range;
@@ -2402,15 +2433,12 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 		u64 addr = va->va.addr;
 		u64 range = va->va.range;
 		u64 end = addr + range;
-		bool merge = !!va->gem.obj;
+		bool merge = can_merge(gpuvm, va, &req->map);
 
 		if (madvise && obj)
 			continue;
 
 		if (addr == req_addr) {
-			merge &= obj == req_obj &&
-				 offset == req_offset;
-
 			if (end == req_end) {
 				ret = op_unmap_cb(ops, priv, va, merge, madvise);
 				if (ret)
@@ -2455,8 +2483,6 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 			};
 			struct drm_gpuva_op_unmap u = { .va = va };
 
-			merge &= obj == req_obj &&
-				 offset + ls_range == req_offset;
 			u.keep = merge;
 
 			if (end == req_end) {
@@ -2506,10 +2532,6 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 				break;
 			}
 		} else if (addr > req_addr) {
-			merge &= obj == req_obj &&
-				 offset == req_offset +
-					   (addr - req_addr);
-
 			if (end == req_end) {
 				ret = op_unmap_cb(ops, priv, va, merge, madvise);
 				if (ret)
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v5 05/11] drm/gpuvm: Add a flags field to drm_gpuva_op_map
  2026-03-13 15:09 [PATCH v5 00/11] Support repeated mappings in GPUVM and Panthor Adrián Larumbe
                   ` (3 preceding siblings ...)
  2026-03-13 15:09 ` [PATCH v5 04/11] drm/gpuvm: Add a helper to check if two VA can be merged Adrián Larumbe
@ 2026-03-13 15:09 ` Adrián Larumbe
  2026-03-13 20:48   ` Claude review: " Claude Code Review Bot
  2026-03-13 15:09 ` [PATCH v5 06/11] drm/gpuvm: Add DRM_GPUVA_REPEAT flag and logic Adrián Larumbe
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 24+ messages in thread
From: Adrián Larumbe @ 2026-03-13 15:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, Steven Price, Boris Brezillon, Janne Grunau, kernel,
	Adrián Larumbe, Asahi Lina, Caterina Shablia,
	Danilo Krummrich, Matthew Brost, Thomas Hellström,
	Alice Ryhl, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
	David Airlie, Simona Vetter

From: Asahi Lina <lina+kernel@asahilina.net>

drm_gpuva objects have a flags field. Currently, this can be managed by
drivers out-of-band, without any special handling in drm_gpuvm.

To be able to introduce flags that do affect the logic in the drm_gpuvm
core, we need to plumb it through the map calls. This will allow the
core to check the flags on map and alter the merge/split logic depending
on the requested flags and the flags of the existing drm_gpuva ranges
that are being split.

Signed-off-by: Asahi Lina <lina+kernel@asahilina.net>
Signed-off-by: Caterina Shablia <caterina.shablia@collabora.com>
---
 drivers/gpu/drm/drm_gpuvm.c | 14 ++++++++++++--
 include/drm/drm_gpuvm.h     | 18 ++++++++++++++++++
 2 files changed, 30 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index 4af7b71abcb4..0d9c821d1b34 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -2386,6 +2386,7 @@ static bool can_merge(struct drm_gpuvm *gpuvm, const struct drm_gpuva *va,
 		.va.range = va->va.range,
 		.gem.offset = va->gem.offset,
 		.gem.obj = va->gem.obj,
+		.flags = va->flags,
 	};
 	const struct drm_gpuva_op_map *a = new_map, *b = &existing_map;
 
@@ -2395,6 +2396,10 @@ static bool can_merge(struct drm_gpuvm *gpuvm, const struct drm_gpuva *va,
 	if (a->gem.obj != b->gem.obj || !a->gem.obj)
 		return false;
 
+	/* For two VAs to be merged, their flags must be compatible */
+	if ((a->flags & VA_MERGE_MUST_MATCH_FLAGS) != (b->flags & VA_MERGE_MUST_MATCH_FLAGS))
+		return false;
+
 	/* Order VAs for the rest of the checks. */
 	if (a->va.addr > b->va.addr)
 		swap(a, b);
@@ -2459,6 +2464,7 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 					.va.range = range - req_range,
 					.gem.obj = obj,
 					.gem.offset = offset + req_range,
+					.flags = va->flags,
 				};
 				struct drm_gpuva_op_unmap u = {
 					.va = va,
@@ -2480,6 +2486,7 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 				.va.range = ls_range,
 				.gem.obj = obj,
 				.gem.offset = offset,
+				.flags = va->flags,
 			};
 			struct drm_gpuva_op_unmap u = { .va = va };
 
@@ -2519,8 +2526,8 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 					.va.addr = req_end,
 					.va.range = end - req_end,
 					.gem.obj = obj,
-					.gem.offset = offset + ls_range +
-						      req_range,
+					.gem.offset = offset + ls_range + req_range,
+					.flags = va->flags,
 				};
 
 				ret = op_remap_cb(ops, priv, &p, &n, &u);
@@ -2554,6 +2561,7 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 					.va.range = end - req_end,
 					.gem.obj = obj,
 					.gem.offset = offset + req_end - addr,
+					.flags = va->flags,
 				};
 				struct drm_gpuva_op_unmap u = {
 					.va = va,
@@ -2605,6 +2613,7 @@ __drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
 			prev.va.range = req_addr - addr;
 			prev.gem.obj = obj;
 			prev.gem.offset = offset;
+			prev.flags = va->flags;
 
 			prev_split = true;
 		}
@@ -2614,6 +2623,7 @@ __drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
 			next.va.range = end - req_end;
 			next.gem.obj = obj;
 			next.gem.offset = offset + (req_end - addr);
+			next.flags = va->flags;
 
 			next_split = true;
 		}
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 6a6e64cd2cce..5bf37deb282d 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -63,6 +63,8 @@ enum drm_gpuva_flags {
 	DRM_GPUVA_USERBITS = (1 << 2),
 };
 
+#define VA_MERGE_MUST_MATCH_FLAGS (DRM_GPUVA_SPARSE)
+
 /**
  * struct drm_gpuva - structure to track a GPU VA mapping
  *
@@ -886,6 +888,11 @@ struct drm_gpuva_op_map {
 		 */
 		struct drm_gem_object *obj;
 	} gem;
+
+	/**
+	 * @flags: requested flags for the &drm_gpuva for this mapping
+	 */
+	enum drm_gpuva_flags flags;
 };
 
 /**
@@ -1124,6 +1131,7 @@ void drm_gpuva_ops_free(struct drm_gpuvm *gpuvm,
 static inline void drm_gpuva_init_from_op(struct drm_gpuva *va,
 					  const struct drm_gpuva_op_map *op)
 {
+	va->flags = op->flags;
 	va->va.addr = op->va.addr;
 	va->va.range = op->va.range;
 	va->gem.obj = op->gem.obj;
@@ -1249,6 +1257,16 @@ struct drm_gpuvm_ops {
 	 * used.
 	 */
 	int (*sm_step_unmap)(struct drm_gpuva_op *op, void *priv);
+
+	/**
+	 * @sm_can_merge_flags: called during &drm_gpuvm_sm_map
+	 *
+	 * This callback is called to determine whether two va ranges can be merged,
+	 * based on their flags.
+	 *
+	 * If NULL, va ranges can only be merged if their flags are equal.
+	 */
+	bool (*sm_can_merge_flags)(enum drm_gpuva_flags a, enum drm_gpuva_flags b);
 };
 
 int drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v5 06/11] drm/gpuvm: Add DRM_GPUVA_REPEAT flag and logic
  2026-03-13 15:09 [PATCH v5 00/11] Support repeated mappings in GPUVM and Panthor Adrián Larumbe
                   ` (4 preceding siblings ...)
  2026-03-13 15:09 ` [PATCH v5 05/11] drm/gpuvm: Add a flags field to drm_gpuva_op_map Adrián Larumbe
@ 2026-03-13 15:09 ` Adrián Larumbe
  2026-03-13 20:48   ` Claude review: " Claude Code Review Bot
  2026-03-13 15:09 ` [PATCH v5 07/11] drm/gpuvm: Ensure correctness of unmap/remaps of repeated regions Adrián Larumbe
                   ` (5 subsequent siblings)
  11 siblings, 1 reply; 24+ messages in thread
From: Adrián Larumbe @ 2026-03-13 15:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, Steven Price, Boris Brezillon, Janne Grunau, kernel,
	Adrián Larumbe, Asahi Lina, Caterina Shablia,
	Danilo Krummrich, Matthew Brost, Thomas Hellström,
	Alice Ryhl, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
	David Airlie, Simona Vetter

From: Asahi Lina <lina+kernel@asahilina.net>

To be able to support "fake sparse" mappings without relying on GPU page
fault handling, drivers may need to create large (e.g. 4GiB) mappings of
the same page repeatedly (or same range of pages). Doing this through
individual mappings would be very wasteful. This can be handled better
by using a flag on map creation, but to do it safely, drm_gpuvm needs to
be aware of this special case.

Add a flag that signals that a given mapping is a page mapping, which
is repeated over the entire requested VA range. This tweaks the
sm_map() logic to treat the GEM offsets differently when a mapping is
a repeated one, so they are not incremented as they would be for
regular mappings.

The size of the GEM portion to repeat is passed through
drm_gpuva::gem::repeat_range. Most of the time it will be one page, but
it can be bigger as long as it does not exceed drm_gpuva::va::range and
drm_gpuva::va::range is a multiple of drm_gpuva::gem::repeat_range.

Signed-off-by: Asahi Lina <lina+kernel@asahilina.net>
Signed-off-by: Caterina Shablia <caterina.shablia@collabora.com>
---
 drivers/gpu/drm/drm_gpuvm.c | 67 ++++++++++++++++++++++++++++++++++---
 include/drm/drm_gpuvm.h     | 35 ++++++++++++++++++-
 2 files changed, 96 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index 0d9c821d1b34..ca7445f767fc 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -2340,6 +2340,8 @@ op_map_cb(const struct drm_gpuvm_ops *fn, void *priv,
 	op.map.va.range = req->map.va.range;
 	op.map.gem.obj = req->map.gem.obj;
 	op.map.gem.offset = req->map.gem.offset;
+	op.map.gem.repeat_range = req->map.gem.repeat_range;
+	op.map.flags = req->map.flags;
 
 	return fn->sm_step_map(&op, priv);
 }
@@ -2410,12 +2412,56 @@ static bool can_merge(struct drm_gpuvm *gpuvm, const struct drm_gpuva *va,
 	if (drm_WARN_ON(gpuvm->drm, b->va.addr > a->va.addr + a->va.range))
 		return false;
 
+	if (a->flags & DRM_GPUVA_REPEAT) {
+		u64 va_diff = b->va.addr - a->va.addr;
+
+		/* If this is a repeated mapping, both the GEM range
+		 * and offset must match.
+		 */
+		if (a->gem.repeat_range != b->gem.repeat_range ||
+		    a->gem.offset != b->gem.offset)
+			return false;
+
+		/* The difference between the VA addresses must be a
+		 * multiple of the repeated range, otherwise there's
+		 * a shift.
+		 */
+		if (do_div(va_diff, a->gem.repeat_range))
+			return false;
+
+		return true;
+	}
+
 	/* We intentionally ignore u64 underflows because all we care about
 	 * here is whether the VA diff matches the GEM offset diff.
 	 */
 	return b->va.addr - a->va.addr == b->gem.offset - a->gem.offset;
 }
 
+static int validate_map_request(struct drm_gpuvm *gpuvm,
+				const struct drm_gpuva_op_map *op)
+{
+	if (unlikely(!drm_gpuvm_range_valid(gpuvm, op->va.addr, op->va.range)))
+		return -EINVAL;
+
+	if (op->flags & DRM_GPUVA_REPEAT) {
+		u64 va_range = op->va.range;
+
+		/* For a repeated mapping, the GEM repeat range must be
+		 * non-zero and the VA range a multiple of it.
+		 */
+		if (unlikely(!op->gem.repeat_range ||
+			     va_range < op->gem.repeat_range ||
+			     do_div(va_range, op->gem.repeat_range)))
+			return -EINVAL;
+	}
+
+	if (op->flags & DRM_GPUVA_INVALIDATED)
+		return -EINVAL;
+
+	return 0;
+}
+
 static int
 __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 		   const struct drm_gpuvm_ops *ops, void *priv,
@@ -2429,7 +2475,8 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 	u64 req_end = req_addr + req_range;
 	int ret;
 
-	if (unlikely(!drm_gpuvm_range_valid(gpuvm, req_addr, req_range)))
+	ret = validate_map_request(gpuvm, &req->map);
+	if (unlikely(ret))
 		return -EINVAL;
 
 	drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req_addr, req_end) {
@@ -2463,7 +2510,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 					.va.addr = req_end,
 					.va.range = range - req_range,
 					.gem.obj = obj,
-					.gem.offset = offset + req_range,
+					.gem.repeat_range = va->gem.repeat_range,
+					.gem.offset = offset +
+						(va->flags & DRM_GPUVA_REPEAT ? 0 : req_range),
 					.flags = va->flags,
 				};
 				struct drm_gpuva_op_unmap u = {
@@ -2485,6 +2534,7 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 				.va.addr = addr,
 				.va.range = ls_range,
 				.gem.obj = obj,
+				.gem.repeat_range = va->gem.repeat_range,
 				.gem.offset = offset,
 				.flags = va->flags,
 			};
@@ -2526,7 +2576,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 					.va.addr = req_end,
 					.va.range = end - req_end,
 					.gem.obj = obj,
-					.gem.offset = offset + ls_range + req_range,
+					.gem.repeat_range = va->gem.repeat_range,
+					.gem.offset = offset + (va->flags & DRM_GPUVA_REPEAT ?
+								0 : ls_range + req_range),
 					.flags = va->flags,
 				};
 
@@ -2560,7 +2612,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 					.va.addr = req_end,
 					.va.range = end - req_end,
 					.gem.obj = obj,
-					.gem.offset = offset + req_end - addr,
+					.gem.repeat_range = va->gem.repeat_range,
+					.gem.offset = offset +
+						(va->flags & DRM_GPUVA_REPEAT ? 0 : req_end - addr),
 					.flags = va->flags,
 				};
 				struct drm_gpuva_op_unmap u = {
@@ -2612,6 +2666,7 @@ __drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
 			prev.va.addr = addr;
 			prev.va.range = req_addr - addr;
 			prev.gem.obj = obj;
+			prev.gem.repeat_range = va->gem.repeat_range;
 			prev.gem.offset = offset;
 			prev.flags = va->flags;
 
@@ -2622,7 +2677,9 @@ __drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
 			next.va.addr = req_end;
 			next.va.range = end - req_end;
 			next.gem.obj = obj;
-			next.gem.offset = offset + (req_end - addr);
+			next.gem.repeat_range = va->gem.repeat_range;
+			next.gem.offset = offset +
+				(va->flags & DRM_GPUVA_REPEAT ? 0 : req_end - addr);
 			next.flags = va->flags;
 
 			next_split = true;
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 5bf37deb282d..cd2f55bc1707 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -57,10 +57,19 @@ enum drm_gpuva_flags {
 	 */
 	DRM_GPUVA_SPARSE = (1 << 1),
 
+	/**
+	 * @DRM_GPUVA_REPEAT:
+	 *
+	 * Flag indicating that the &drm_gpuva is a mapping of a GEM
+	 * object with a certain range that is repeated multiple times to
+	 * fill the virtual address range.
+	 */
+	DRM_GPUVA_REPEAT = (1 << 2),
+
 	/**
 	 * @DRM_GPUVA_USERBITS: user defined bits
 	 */
-	DRM_GPUVA_USERBITS = (1 << 2),
+	DRM_GPUVA_USERBITS = (1 << 3),
 };
 
 #define VA_MERGE_MUST_MATCH_FLAGS (DRM_GPUVA_SPARSE)
@@ -114,6 +123,18 @@ struct drm_gpuva {
 		 */
 		u64 offset;
 
+		/**
+		 * @gem.repeat_range: the range of the GEM that is mapped
+		 *
+		 * For normal mappings, this must be zero. When @flags has
+		 * DRM_GPUVA_REPEAT set, this field must be smaller than
+		 * va.range, and va.range must be a multiple of
+		 * gem.repeat_range. This is a u32 rather than a u64 because
+		 * repeated mappings are expected to point to relatively
+		 * small portions of a GEM object.
+		 */
+		u32 repeat_range;
+
 		/**
 		 * @gem.obj: the mapped &drm_gem_object
 		 */
@@ -883,6 +904,17 @@ struct drm_gpuva_op_map {
 		 */
 		u64 offset;
 
+		/**
+		 * @gem.repeat_range: the range of the GEM that is mapped
+		 *
+		 * For normal mappings, this must be zero. When @flags has
+		 * DRM_GPUVA_REPEAT set, va.range must be a multiple of
+		 * gem.repeat_range. This is a u32 rather than a u64 because
+		 * repeated mappings are expected to point to a relatively
+		 * small portion of a GEM object.
+		 */
+		u32 repeat_range;
+
 		/**
 		 * @gem.obj: the &drm_gem_object to map
 		 */
@@ -1136,6 +1168,7 @@ static inline void drm_gpuva_init_from_op(struct drm_gpuva *va,
 	va->va.range = op->va.range;
 	va->gem.obj = op->gem.obj;
 	va->gem.offset = op->gem.offset;
+	va->gem.repeat_range = op->gem.repeat_range;
 }
 
 /**
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v5 07/11] drm/gpuvm: Ensure correctness of unmap/remaps of repeated regions
  2026-03-13 15:09 [PATCH v5 00/11] Support repeated mappings in GPUVM and Panthor Adrián Larumbe
                   ` (5 preceding siblings ...)
  2026-03-13 15:09 ` [PATCH v5 06/11] drm/gpuvm: Add DRM_GPUVA_REPEAT flag and logic Adrián Larumbe
@ 2026-03-13 15:09 ` Adrián Larumbe
  2026-03-13 20:48   ` Claude review: " Claude Code Review Bot
  2026-03-13 15:09 ` [PATCH v5 08/11] drm/panthor: Add support for repeated mappings Adrián Larumbe
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 24+ messages in thread
From: Adrián Larumbe @ 2026-03-13 15:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, Steven Price, Boris Brezillon, Janne Grunau, kernel,
	Adrián Larumbe, Danilo Krummrich, Matthew Brost,
	Thomas Hellström, Alice Ryhl, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter

When an unmap or map operation that leads to a remap intersects with a
GPU VA that spans a repeated range, the newly spawned VAs must preserve
the repeated property, i.e., each VA's range must be a multiple of
gem.repeat_range and its start address must lie on a gem.repeat_range
boundary. When this doesn't hold, disallow the operation and notify UM
with an invalid argument error.

Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
 drivers/gpu/drm/drm_gpuvm.c | 67 +++++++++++++++++++++++++++++++++++++
 include/drm/drm_gpuvm.h     |  7 +++-
 2 files changed, 73 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index ca7445f767fc..80750119221d 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -2462,6 +2462,65 @@ static int validate_map_request(struct drm_gpuvm *gpuvm,
 	return 0;
 }
 
+static int
+validate_repeated_unmap_request(struct drm_gpuvm *gpuvm,
+				u64 req_addr, u64 req_end)
+{
+	struct drm_gpuva *first, *last, *va;
+	u64 multiple;
+
+	if (!(gpuvm->flags & DRM_GPUVM_HAS_REPEAT_MAPS))
+		return 0;
+
+	/* Find the first and last VAs the map request intersects with */
+	first = last = NULL;
+	drm_gpuvm_for_each_va_range(va, gpuvm, req_addr, req_end) {
+		if (!first)
+			first = va;
+		last = va;
+	}
+
+	if (!first)
+		return 0;
+
+	if (first->flags & DRM_GPUVA_REPEAT) {
+		u64 addr = first->va.addr;
+		u64 range = first->va.range;
+		u64 end = addr + range;
+
+		drm_WARN_ON(gpuvm->drm, first->gem.repeat_range == 0);
+
+		if (addr < req_addr) {
+			multiple = req_addr;
+			if (do_div(multiple, first->gem.repeat_range))
+				return -EINVAL;
+		}
+
+		if (end > req_end) {
+			multiple = req_end;
+			if (do_div(multiple, first->gem.repeat_range))
+				return -EINVAL;
+			return 0;
+		}
+	}
+
+	if ((first != last) && (last->flags & DRM_GPUVA_REPEAT)) {
+		u64 addr = last->va.addr;
+		u64 range = last->va.range;
+		u64 end = addr + range;
+
+		drm_WARN_ON(last->vm->drm, last->gem.repeat_range == 0);
+
+		if (end > req_end) {
+			multiple = req_end;
+			if (do_div(multiple, last->gem.repeat_range))
+				return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
 static int
 __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 		   const struct drm_gpuvm_ops *ops, void *priv,
@@ -2479,6 +2538,10 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 	if (unlikely(ret))
 		return -EINVAL;
 
+	ret = validate_repeated_unmap_request(gpuvm, req_addr, req_end);
+	if (ret)
+		return ret;
+
 	drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req_addr, req_end) {
 		struct drm_gem_object *obj = va->gem.obj;
 		u64 offset = va->gem.offset;
@@ -2653,6 +2716,10 @@ __drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
 	if (unlikely(!drm_gpuvm_range_valid(gpuvm, req_addr, req_range)))
 		return -EINVAL;
 
+	ret = validate_repeated_unmap_request(gpuvm, req_addr, req_end);
+	if (ret)
+		return ret;
+
 	drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req_addr, req_end) {
 		struct drm_gpuva_op_map prev = {}, next = {};
 		bool prev_split = false, next_split = false;
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index cd2f55bc1707..61f66dfe4ed7 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -230,10 +230,15 @@ enum drm_gpuvm_flags {
 	 */
 	DRM_GPUVM_IMMEDIATE_MODE = BIT(1),
 
+	/**
+	 * @DRM_GPUVM_HAS_REPEAT_MAPS: There are repeated VAs in the GPUVM
+	 */
+	DRM_GPUVM_HAS_REPEAT_MAPS = BIT(2),
+
 	/**
 	 * @DRM_GPUVM_USERBITS: user defined bits
 	 */
-	DRM_GPUVM_USERBITS = BIT(2),
+	DRM_GPUVM_USERBITS = BIT(3),
 };
 
 /**
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v5 08/11] drm/panthor: Add support for repeated mappings
  2026-03-13 15:09 [PATCH v5 00/11] Support repeated mappings in GPUVM and Panthor Adrián Larumbe
                   ` (6 preceding siblings ...)
  2026-03-13 15:09 ` [PATCH v5 07/11] drm/gpuvm: Ensure correctness of unmap/remaps of repeated regions Adrián Larumbe
@ 2026-03-13 15:09 ` Adrián Larumbe
  2026-03-13 20:48   ` Claude review: " Claude Code Review Bot
  2026-03-13 15:09 ` [PATCH v5 09/11] drm/panthor: Handle remap case " Adrián Larumbe
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 24+ messages in thread
From: Adrián Larumbe @ 2026-03-13 15:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, Steven Price, Boris Brezillon, Janne Grunau, kernel,
	Adrián Larumbe, Caterina Shablia, Liviu Dudau,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Daniel Almeida, Alice Ryhl

From: Boris Brezillon <boris.brezillon@collabora.com>

This allows us to map a relatively small portion of a BO
over and over across a large VA range with a single operation,
which is useful for supporting Vulkan sparse bindings in an
efficient way.

Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Co-developed-by: Caterina Shablia <caterina.shablia@collabora.com>
Signed-off-by: Caterina Shablia <caterina.shablia@collabora.com>
---
 drivers/gpu/drm/panthor/panthor_mmu.c | 109 +++++++++++++++++++++++---
 include/uapi/drm/panthor_drm.h        |  20 +++++
 2 files changed, 120 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index 07c520475f14..a357063bb9f6 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -190,6 +190,9 @@ struct panthor_vm_op_ctx {
 		/** @map.bo_offset: Offset in the buffer object. */
 		u64 bo_offset;
 
+		/** @map.bo_repeat_range: Repeated BO range. */
+		u32 bo_repeat_range;
+
 		/**
 		 * @map.sgt: sg-table pointing to pages backing the GEM object.
 		 *
@@ -1003,6 +1006,29 @@ panthor_vm_map_pages(struct panthor_vm *vm, u64 iova, int prot,
 	return 0;
 }
 
+static int
+panthor_vm_repeated_map_pages(struct panthor_vm *vm, u64 iova, int prot,
+			      struct sg_table *sgt, u64 offset, u64 size,
+			      u64 count)
+{
+	int ret;
+	u64 i;
+
+	/* FIXME: we really need to optimize this at the io_pgtable level. */
+	for (i = 0; i < count; i++) {
+		ret = panthor_vm_map_pages(vm, iova + (size * i), prot,
+					   sgt, offset, size);
+		if (ret)
+			goto err_unmap;
+	}
+
+	return 0;
+
+err_unmap:
+	panthor_vm_unmap_pages(vm, iova, size * i);
+	return ret;
+}
+
 static int flags_to_prot(u32 flags)
 {
 	int prot = 0;
@@ -1184,12 +1210,14 @@ panthor_vm_op_ctx_prealloc_vmas(struct panthor_vm_op_ctx *op_ctx)
 	(DRM_PANTHOR_VM_BIND_OP_MAP_READONLY | \
 	 DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC | \
 	 DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED | \
+	 DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT | \
 	 DRM_PANTHOR_VM_BIND_OP_TYPE_MASK)
 
 static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
 					 struct panthor_vm *vm,
 					 struct panthor_gem_object *bo,
 					 u64 offset,
+					 u64 repeat_range,
 					 u64 size, u64 va,
 					 u32 flags)
 {
@@ -1205,9 +1233,28 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
 	    (flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) != DRM_PANTHOR_VM_BIND_OP_TYPE_MAP)
 		return -EINVAL;
 
-	/* Make sure the VA and size are in-bounds. */
-	if (size > bo->base.base.size || offset > bo->base.base.size - size)
-		return -EINVAL;
+	if (!(flags & DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT)) {
+		/* Make sure the VA and size are in-bounds. */
+		if (size > bo->base.base.size || offset > bo->base.base.size - size)
+			return -EINVAL;
+	} else {
+		/* The current DRM uAPI uses a 32-bit repeat range. */
+		if (repeat_range > U32_MAX)
+			return -EINVAL;
+
+		/* Make sure the repeat_range is in-bounds. */
+		if (repeat_range > bo->base.base.size || offset > bo->base.base.size - repeat_range)
+			return -EINVAL;
+
+		/* Repeat range must be a multiple of the minimum GPU page size. */
+		if (repeat_range & ((1u << (ffs(vm->ptdev->mmu_info.page_size_bitmap) - 1)) - 1))
+			return -EINVAL;
+
+		u64 repeat_count = size;
+
+		if (do_div(repeat_count, repeat_range))
+			return -EINVAL;
+	}
 
 	/* If the BO has an exclusive VM attached, it can't be mapped to other VMs. */
 	if (bo->exclusive_vm_root_gem &&
@@ -1257,6 +1304,7 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
 	op_ctx->map.vm_bo = drm_gpuvm_bo_obtain_prealloc(preallocated_vm_bo);
 
 	op_ctx->map.bo_offset = offset;
+	op_ctx->map.bo_repeat_range = repeat_range;
 
 	/* L1, L2 and L3 page tables.
 	 * We could optimize L3 allocation by iterating over the sgt and merging
@@ -2088,9 +2136,29 @@ static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
 
 	panthor_vma_init(vma, op_ctx->flags & PANTHOR_VM_MAP_FLAGS);
 
-	ret = panthor_vm_map_pages(vm, op->map.va.addr, flags_to_prot(vma->flags),
-				   op_ctx->map.sgt, op->map.gem.offset,
-				   op->map.va.range);
+	if (op_ctx->flags & DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT) {
+		u64 repeat_count = op->map.va.range;
+
+		do_div(repeat_count, op->map.gem.repeat_range);
+
+		if (drm_WARN_ON(&vm->ptdev->base, !repeat_count))
+			return -EINVAL;
+
+		ret = panthor_vm_repeated_map_pages(vm, op->map.va.addr,
+						    flags_to_prot(vma->flags),
+						    op_ctx->map.sgt,
+						    op->map.gem.offset,
+						    op->map.gem.repeat_range,
+						    repeat_count);
+		if (!ret)
+			vm->base.flags |= DRM_GPUVM_HAS_REPEAT_MAPS;
+	} else {
+		ret = panthor_vm_map_pages(vm, op->map.va.addr,
+					   flags_to_prot(vma->flags),
+					   op_ctx->map.sgt, op->map.gem.offset,
+					   op->map.va.range);
+	}
+
 	if (ret) {
 		panthor_vm_op_ctx_return_vma(op_ctx, vma);
 		return ret;
@@ -2165,8 +2233,22 @@ static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
 	 * page and then remap the difference between the huge page minus the requested
 	 * unmap region. Calculating the right start address and range for the expanded
 	 * unmap operation is the responsibility of the following function.
+	 * However, we never allow partial unmaps of repeated regions.
 	 */
-	unmap_hugepage_align(&op->remap, &unmap_start, &unmap_range);
+	if (op->remap.next && op->remap.prev) {
+		if (drm_WARN_ON(&vm->ptdev->base,
+				(op->remap.next->flags & DRM_GPUVA_REPEAT) !=
+				(op->remap.prev->flags & DRM_GPUVA_REPEAT)))
+			return -EINVAL;
+		if (drm_WARN_ON(&vm->ptdev->base,
+				op->remap.next->gem.repeat_range !=
+				op->remap.prev->gem.repeat_range))
+			return -EINVAL;
+	}
+
+	if (!(op->remap.next && (op->remap.next->flags & DRM_GPUVA_REPEAT)) &&
+	    !(op->remap.prev && (op->remap.prev->flags & DRM_GPUVA_REPEAT)))
+		unmap_hugepage_align(&op->remap, &unmap_start, &unmap_range);
 
 	/* If the range changed, we might have to lock a wider region to guarantee
 	 * atomicity. panthor_vm_lock_region() bails out early if the new region
@@ -2283,7 +2365,7 @@ panthor_vm_exec_op(struct panthor_vm *vm, struct panthor_vm_op_ctx *op,
 
 	switch (op_type) {
 	case DRM_PANTHOR_VM_BIND_OP_TYPE_MAP: {
-		const struct drm_gpuvm_map_req map_req = {
+		struct drm_gpuvm_map_req map_req = {
 			.map.va.addr = op->va.addr,
 			.map.va.range = op->va.range,
 			.map.gem.obj = op->map.vm_bo->obj,
@@ -2295,6 +2377,11 @@ panthor_vm_exec_op(struct panthor_vm *vm, struct panthor_vm_op_ctx *op,
 			break;
 		}
 
+		if (op->flags & DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT) {
+			map_req.map.flags |= DRM_GPUVA_REPEAT;
+			map_req.map.gem.repeat_range = op->map.bo_repeat_range;
+		}
+
 		ret = drm_gpuvm_sm_map(&vm->base, vm, &map_req);
 		break;
 	}
@@ -2544,6 +2631,7 @@ panthor_vm_bind_prepare_op_ctx(struct drm_file *file,
 		ret = panthor_vm_prepare_map_op_ctx(op_ctx, vm,
 						    gem ? to_panthor_bo(gem) : NULL,
 						    op->bo_offset,
+						    op->bo_repeat_range,
 						    op->size,
 						    op->va,
 						    op->flags);
@@ -2745,7 +2833,10 @@ int panthor_vm_map_bo_range(struct panthor_vm *vm, struct panthor_gem_object *bo
 	struct panthor_vm_op_ctx op_ctx;
 	int ret;
 
-	ret = panthor_vm_prepare_map_op_ctx(&op_ctx, vm, bo, offset, size, va, flags);
+	if (drm_WARN_ON(&vm->ptdev->base, flags & DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT))
+		return -EINVAL;
+
+	ret = panthor_vm_prepare_map_op_ctx(&op_ctx, vm, bo, offset, 0, size, va, flags);
 	if (ret)
 		return ret;
 
diff --git a/include/uapi/drm/panthor_drm.h b/include/uapi/drm/panthor_drm.h
index 4089271f3d36..46217ce2c0f5 100644
--- a/include/uapi/drm/panthor_drm.h
+++ b/include/uapi/drm/panthor_drm.h
@@ -555,6 +555,17 @@ enum drm_panthor_vm_bind_op_flags {
 	 */
 	DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED = 1 << 2,
 
+	/**
+	 * @DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT: Repeat a BO range
+	 *
+	 * Only valid with DRM_PANTHOR_VM_BIND_OP_TYPE_MAP.
+	 *
+	 * When this is set, a BO range is repeated over the VA range.
+	 * drm_panthor_vm_bind_op::bo_repeat_range defines the size of the
+	 * BO range to repeat.
+	 */
+	DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT = 1 << 3,
+
 	/**
 	 * @DRM_PANTHOR_VM_BIND_OP_TYPE_MASK: Mask used to determine the type of operation.
 	 */
@@ -619,6 +630,15 @@ struct drm_panthor_vm_bind_op {
 	 */
 	struct drm_panthor_obj_array syncs;
 
+	/**
+	 * @bo_repeat_range: The size of the range to be repeated.
+	 *
+	 * Must be zero if DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT is not set in
+	 * flags.
+	 *
+	 * @size must be a multiple of bo_repeat_range.
+	 */
+	__u64 bo_repeat_range;
 };
 
 /**
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v5 09/11] drm/panthor: Handle remap case for repeated mappings
  2026-03-13 15:09 [PATCH v5 00/11] Support repeated mappings in GPUVM and Panthor Adrián Larumbe
                   ` (7 preceding siblings ...)
  2026-03-13 15:09 ` [PATCH v5 08/11] drm/panthor: Add support for repeated mappings Adrián Larumbe
@ 2026-03-13 15:09 ` Adrián Larumbe
  2026-03-13 20:48   ` Claude review: " Claude Code Review Bot
  2026-03-13 15:09 ` [PATCH v5 10/11] drm/panthor: Pass vm_bind_op to vm_prepare_map_op_ctx Adrián Larumbe
                   ` (2 subsequent siblings)
  11 siblings, 1 reply; 24+ messages in thread
From: Adrián Larumbe @ 2026-03-13 15:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, Steven Price, Boris Brezillon, Janne Grunau, kernel,
	Adrián Larumbe, Liviu Dudau, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter

When a GPUVA remap is triggered as a consequence of a VM operation
intersecting with existing VAs, the split VAs must be mapped while
taking into account whether they were repeat-mapped.

Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
 drivers/gpu/drm/panthor/panthor_mmu.c | 63 +++++++++++++++++----------
 1 file changed, 39 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index a357063bb9f6..ba322e2029b9 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -2124,41 +2124,52 @@ static void panthor_vma_init(struct panthor_vma *vma, u32 flags)
 	 DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC | \
 	 DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED)
 
-static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
+static int
+panthor_vm_map_range(struct panthor_vm *vm, bool repeat, struct sg_table *sgt,
+		     u64 addr, u64 offset, u64 size, u32 repeat_range, int prot)
 {
-	struct panthor_vm *vm = priv;
-	struct panthor_vm_op_ctx *op_ctx = vm->op_ctx;
-	struct panthor_vma *vma = panthor_vm_op_ctx_get_vma(op_ctx);
 	int ret;
 
-	if (!vma)
-		return -EINVAL;
-
-	panthor_vma_init(vma, op_ctx->flags & PANTHOR_VM_MAP_FLAGS);
+	if (!size)
+		return 0;
 
-	if (op_ctx->flags & DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT) {
-		u64 repeat_count = op->map.va.range;
+	if (repeat) {
+		u64 repeat_count = size;
 
-		do_div(repeat_count, op->map.gem.repeat_range);
+		do_div(repeat_count, repeat_range);
 
 		if (drm_WARN_ON(&vm->ptdev->base, !repeat_count))
 			return -EINVAL;
 
-		ret = panthor_vm_repeated_map_pages(vm, op->map.va.addr,
-						    flags_to_prot(vma->flags),
-						    op_ctx->map.sgt,
-						    op->map.gem.offset,
-						    op->map.gem.repeat_range,
+		ret = panthor_vm_repeated_map_pages(vm, addr, prot, sgt,
+						    offset, repeat_range,
 						    repeat_count);
 		if (!ret)
 			vm->base.flags |= DRM_GPUVM_HAS_REPEAT_MAPS;
 	} else {
-		ret = panthor_vm_map_pages(vm, op->map.va.addr,
-					   flags_to_prot(vma->flags),
-					   op_ctx->map.sgt, op->map.gem.offset,
-					   op->map.va.range);
+		ret = panthor_vm_map_pages(vm, addr, prot, sgt,
+					   offset, size);
 	}
 
+	return ret;
+}
+
+static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
+{
+	struct panthor_vm *vm = priv;
+	struct panthor_vm_op_ctx *op_ctx = vm->op_ctx;
+	struct panthor_vma *vma = panthor_vm_op_ctx_get_vma(op_ctx);
+	int ret;
+
+	if (!vma)
+		return -EINVAL;
+
+	panthor_vma_init(vma, op_ctx->flags & PANTHOR_VM_MAP_FLAGS);
+
+	ret = panthor_vm_map_range(vm, op_ctx->flags & DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT,
+				   op_ctx->map.sgt, op->map.va.addr, op->map.gem.offset,
+				   op->map.va.range, op->map.gem.repeat_range,
+				   flags_to_prot(vma->flags));
 	if (ret) {
 		panthor_vm_op_ctx_return_vma(op_ctx, vma);
 		return ret;
@@ -2262,8 +2273,10 @@ static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
 		u64 offset = op->remap.prev->gem.offset + unmap_start - op->remap.prev->va.addr;
 		u64 size = op->remap.prev->va.addr + op->remap.prev->va.range - unmap_start;
 
-		ret = panthor_vm_map_pages(vm, unmap_start, flags_to_prot(unmap_vma->flags),
-					   bo->base.sgt, offset, size);
+		ret = panthor_vm_map_range(vm, op->remap.prev->flags & DRM_GPUVA_REPEAT,
+					   bo->base.sgt, op->remap.prev->va.addr, offset,
+					   size, op->remap.prev->gem.repeat_range,
+					   flags_to_prot(unmap_vma->flags));
 		if (ret)
 			return ret;
 
@@ -2276,8 +2289,10 @@ static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
 		u64 addr = op->remap.next->va.addr;
 		u64 size = unmap_start + unmap_range - op->remap.next->va.addr;
 
-		ret = panthor_vm_map_pages(vm, addr, flags_to_prot(unmap_vma->flags),
-					   bo->base.sgt, op->remap.next->gem.offset, size);
+		ret = panthor_vm_map_range(vm, op->remap.next->flags & DRM_GPUVA_REPEAT,
+					   bo->base.sgt, addr, op->remap.next->gem.offset,
+					   size, op->remap.next->gem.repeat_range,
+					   flags_to_prot(unmap_vma->flags));
 		if (ret)
 			return ret;
 
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v5 10/11] drm/panthor: Pass vm_bind_op to vm_prepare_map_op_ctx
  2026-03-13 15:09 [PATCH v5 00/11] Support repeated mappings in GPUVM and Panthor Adrián Larumbe
                   ` (8 preceding siblings ...)
  2026-03-13 15:09 ` [PATCH v5 09/11] drm/panthor: Handle remap case " Adrián Larumbe
@ 2026-03-13 15:09 ` Adrián Larumbe
  2026-03-13 20:48   ` Claude review: " Claude Code Review Bot
  2026-03-13 15:09 ` [PATCH v5 11/11] drm/panthor: Bump the driver version to 1.8 Adrián Larumbe
  2026-03-13 20:48 ` Claude review: Support repeated mappings in GPUVM and Panthor Claude Code Review Bot
  11 siblings, 1 reply; 24+ messages in thread
From: Adrián Larumbe @ 2026-03-13 15:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, Steven Price, Boris Brezillon, Janne Grunau, kernel,
	Adrián Larumbe, Liviu Dudau, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter

Instead of passing its constituent elements, pass the whole struct to
simplify the function prototype.

Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
 drivers/gpu/drm/panthor/panthor_mmu.c | 57 ++++++++++++++-------------
 1 file changed, 30 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index ba322e2029b9..a62ac715265b 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -1216,10 +1216,7 @@ panthor_vm_op_ctx_prealloc_vmas(struct panthor_vm_op_ctx *op_ctx)
 static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
 					 struct panthor_vm *vm,
 					 struct panthor_gem_object *bo,
-					 u64 offset,
-					 u64 repeat_range,
-					 u64 size, u64 va,
-					 u32 flags)
+					 const struct drm_panthor_vm_bind_op *op)
 {
 	struct drm_gpuvm_bo *preallocated_vm_bo;
 	struct sg_table *sgt = NULL;
@@ -1229,30 +1226,32 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
 	if (!bo)
 		return -EINVAL;
 
-	if ((flags & ~PANTHOR_VM_BIND_OP_MAP_FLAGS) ||
-	    (flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) != DRM_PANTHOR_VM_BIND_OP_TYPE_MAP)
+	if ((op->flags & ~PANTHOR_VM_BIND_OP_MAP_FLAGS) ||
+	    (op->flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) != DRM_PANTHOR_VM_BIND_OP_TYPE_MAP)
 		return -EINVAL;
 
-	if (!(flags & DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT)) {
+	if (!(op->flags & DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT)) {
 		/* Make sure the VA and size are in-bounds. */
-		if (size > bo->base.base.size || offset > bo->base.base.size - size)
+		if (op->size > bo->base.base.size || op->bo_offset > bo->base.base.size - op->size)
 			return -EINVAL;
 	} else {
 		/* The current DRM uAPI uses a 32-bit repeat range. */
-		if (repeat_range > U32_MAX)
+		if (op->bo_repeat_range > U32_MAX)
 			return -EINVAL;
 
 		/* Make sure the repeat_range is in-bounds. */
-		if (repeat_range > bo->base.base.size || offset > bo->base.base.size - repeat_range)
+		if (op->bo_repeat_range > bo->base.base.size ||
+		    op->bo_offset > bo->base.base.size - op->bo_repeat_range)
 			return -EINVAL;
 
 		/* Repeat range must be a multiple of the minimum GPU page size. */
-		if (repeat_range & ((1u << (ffs(vm->ptdev->mmu_info.page_size_bitmap) - 1)) - 1))
+		if (op->bo_repeat_range &
+		    ((1u << (ffs(vm->ptdev->mmu_info.page_size_bitmap) - 1)) - 1))
 			return -EINVAL;
 
-		u64 repeat_count = size;
+		u64 repeat_count = op->size;
 
-		if (do_div(repeat_count, repeat_range))
+		if (do_div(repeat_count, op->bo_repeat_range))
 			return -EINVAL;
 	}
 
@@ -1262,9 +1261,9 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
 		return -EINVAL;
 
 	memset(op_ctx, 0, sizeof(*op_ctx));
-	op_ctx->flags = flags;
-	op_ctx->va.range = size;
-	op_ctx->va.addr = va;
+	op_ctx->flags = op->flags;
+	op_ctx->va.range = op->size;
+	op_ctx->va.addr = op->va;
 
 	ret = panthor_vm_op_ctx_prealloc_vmas(op_ctx);
 	if (ret)
@@ -1303,17 +1302,17 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
 
 	op_ctx->map.vm_bo = drm_gpuvm_bo_obtain_prealloc(preallocated_vm_bo);
 
-	op_ctx->map.bo_offset = offset;
-	op_ctx->map.bo_repeat_range = repeat_range;
+	op_ctx->map.bo_offset = op->bo_offset;
+	op_ctx->map.bo_repeat_range = op->bo_repeat_range;
 
 	/* L1, L2 and L3 page tables.
 	 * We could optimize L3 allocation by iterating over the sgt and merging
 	 * 2M contiguous blocks, but it's simpler to over-provision and return
 	 * the pages if they're not used.
 	 */
-	pt_count = ((ALIGN(va + size, 1ull << 39) - ALIGN_DOWN(va, 1ull << 39)) >> 39) +
-		   ((ALIGN(va + size, 1ull << 30) - ALIGN_DOWN(va, 1ull << 30)) >> 30) +
-		   ((ALIGN(va + size, 1ull << 21) - ALIGN_DOWN(va, 1ull << 21)) >> 21);
+	pt_count = ((ALIGN(op->va + op->size, 1ull << 39) - ALIGN_DOWN(op->va, 1ull << 39)) >> 39) +
+		   ((ALIGN(op->va + op->size, 1ull << 30) - ALIGN_DOWN(op->va, 1ull << 30)) >> 30) +
+		   ((ALIGN(op->va + op->size, 1ull << 21) - ALIGN_DOWN(op->va, 1ull << 21)) >> 21);
 
 	op_ctx->rsvd_page_tables.pages = kcalloc(pt_count,
 						 sizeof(*op_ctx->rsvd_page_tables.pages),
@@ -2645,11 +2644,7 @@ panthor_vm_bind_prepare_op_ctx(struct drm_file *file,
 		gem = drm_gem_object_lookup(file, op->bo_handle);
 		ret = panthor_vm_prepare_map_op_ctx(op_ctx, vm,
 						    gem ? to_panthor_bo(gem) : NULL,
-						    op->bo_offset,
-						    op->bo_repeat_range,
-						    op->size,
-						    op->va,
-						    op->flags);
+						    op);
 		drm_gem_object_put(gem);
 		return ret;
 
@@ -2845,13 +2840,21 @@ int panthor_vm_bind_exec_sync_op(struct drm_file *file,
 int panthor_vm_map_bo_range(struct panthor_vm *vm, struct panthor_gem_object *bo,
 			    u64 offset, u64 size, u64 va, u32 flags)
 {
+	struct drm_panthor_vm_bind_op op = {0};
 	struct panthor_vm_op_ctx op_ctx;
 	int ret;
 
 	if (drm_WARN_ON(&vm->ptdev->base, flags & DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT))
 		return -EINVAL;
 
-	ret = panthor_vm_prepare_map_op_ctx(&op_ctx, vm, bo, offset, 0, size, va, flags);
+	op = (struct drm_panthor_vm_bind_op){
+		.bo_offset = offset,
+		.size = size,
+		.va = va,
+		.flags = flags,
+	};
+
+	ret = panthor_vm_prepare_map_op_ctx(&op_ctx, vm, bo, &op);
 	if (ret)
 		return ret;
 
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v5 11/11] drm/panthor: Bump the driver version to 1.8
  2026-03-13 15:09 [PATCH v5 00/11] Support repeated mappings in GPUVM and Panthor Adrián Larumbe
                   ` (9 preceding siblings ...)
  2026-03-13 15:09 ` [PATCH v5 10/11] drm/panthor: Pass vm_bind_op to vm_prepare_map_op_ctx Adrián Larumbe
@ 2026-03-13 15:09 ` Adrián Larumbe
  2026-03-13 20:48   ` Claude review: " Claude Code Review Bot
  2026-03-13 20:48 ` Claude review: Support repeated mappings in GPUVM and Panthor Claude Code Review Bot
  11 siblings, 1 reply; 24+ messages in thread
From: Adrián Larumbe @ 2026-03-13 15:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, Steven Price, Boris Brezillon, Janne Grunau, kernel,
	Adrián Larumbe, Liviu Dudau, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter

Bump the driver version to reflect the new MMU info query ioctl
parameter and the VM_BIND repeat flag.

Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
 drivers/gpu/drm/panthor/panthor_drv.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/panthor/panthor_drv.c b/drivers/gpu/drm/panthor/panthor_drv.c
index 8a901e06a9c9..c965d58a05cb 100644
--- a/drivers/gpu/drm/panthor/panthor_drv.c
+++ b/drivers/gpu/drm/panthor/panthor_drv.c
@@ -1688,6 +1688,8 @@ static void panthor_debugfs_init(struct drm_minor *minor)
  *       - adds DRM_IOCTL_PANTHOR_BO_SYNC ioctl
  *       - adds DRM_IOCTL_PANTHOR_BO_QUERY_INFO ioctl
  *       - adds drm_panthor_gpu_info::selected_coherency
+ * - 1.8 - adds DRM_PANTHOR_DEV_QUERY_MMU_INFO query
+ *       - adds DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT flag
  */
 static const struct drm_driver panthor_drm_driver = {
 	.driver_features = DRIVER_RENDER | DRIVER_GEM | DRIVER_SYNCOBJ |
@@ -1701,7 +1703,7 @@ static const struct drm_driver panthor_drm_driver = {
 	.name = "panthor",
 	.desc = "Panthor DRM driver",
 	.major = 1,
-	.minor = 7,
+	.minor = 8,
 
 	.gem_create_object = panthor_gem_create_object,
 	.gem_prime_import_sg_table = drm_gem_shmem_prime_import_sg_table,
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Claude review: Support repeated mappings in GPUVM and Panthor
  2026-03-13 15:09 [PATCH v5 00/11] Support repeated mappings in GPUVM and Panthor Adrián Larumbe
                   ` (10 preceding siblings ...)
  2026-03-13 15:09 ` [PATCH v5 11/11] drm/panthor: Bump the driver version to 1.8 Adrián Larumbe
@ 2026-03-13 20:48 ` Claude Code Review Bot
  11 siblings, 0 replies; 24+ messages in thread
From: Claude Code Review Bot @ 2026-03-13 20:48 UTC (permalink / raw)
  To: dri-devel-reviews

Overall Series Review

Subject: Support repeated mappings in GPUVM and Panthor
Author: Adrián Larumbe <adrian.larumbe@collabora.com>
Patches: 12
Reviewed: 2026-03-14T06:48:44.249462

---

This is a v5 series adding "repeated mapping" support to the DRM GPUVM subsystem and the Panthor (Mali) GPU driver. The feature allows mapping a small BO region (e.g. one page) repeatedly across a large VA range with a single operation, which is critical for efficient Vulkan sparse resource implementation on Mali hardware. The series is well-structured with clear separation between GPUVM core changes and Panthor driver integration. There are several issues worth flagging, ranging from a UAPI concern to off-by-one errors in cleanup paths and a dead callback declaration.
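The savings the series targets can be made concrete with a small arithmetic sketch (helper names are hypothetical, not part of the series):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper mirroring the series' core validity rule: a
 * repeated mapping is well-formed when the VA range is a non-zero
 * multiple of the repeat range, so the BO window tiles the VA span
 * exactly. */
static int repeat_mapping_valid(uint64_t va_range, uint32_t repeat_range)
{
	if (!repeat_range || !va_range)
		return 0;
	return (va_range % repeat_range) == 0;
}

/* Map ops a sparse-resource init would need without OP_MAP_REPEAT:
 * one operation per page of the VA span. */
static uint64_t map_ops_without_repeat(uint64_t va_range, uint32_t page_size)
{
	return (va_range + page_size - 1) / page_size;
}
```

With OP_MAP_REPEAT the same span costs a single map operation; without it, the cover letter's 100e6-byte example needs ceil(100e6/0x1000) = 24415 operations.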

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Claude review: drm/panthor: Expose GPU page sizes to UM
  2026-03-13 15:09 ` [PATCH v5 01/11] drm/panthor: Expose GPU page sizes to UM Adrián Larumbe
@ 2026-03-13 20:48   ` Claude Code Review Bot
  0 siblings, 0 replies; 24+ messages in thread
From: Claude Code Review Bot @ 2026-03-13 20:48 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**UAPI concern: enum value insertion breaks ABI.**
`DRM_PANTHOR_DEV_QUERY_MMU_INFO` is inserted *between* `DRM_PANTHOR_DEV_QUERY_CSIF_INFO` and `DRM_PANTHOR_DEV_QUERY_TIMESTAMP_INFO` in the enum. Since these are auto-numbered C enum values, this shifts the values of `DRM_PANTHOR_DEV_QUERY_TIMESTAMP_INFO` and `DRM_PANTHOR_DEV_QUERY_GROUP_PRIORITIES_INFO`, breaking existing userspace compiled against the old header. The new query should be added at the end of the enum (before any terminator/max, if any), or should use an explicit assigned value.

```c
+	/** @DRM_PANTHOR_DEV_QUERY_MMU_INFO: Query MMU information. */
+	DRM_PANTHOR_DEV_QUERY_MMU_INFO,
+
 	/** @DRM_PANTHOR_DEV_QUERY_TIMESTAMP_INFO: Query timestamp information. */
 	DRM_PANTHOR_DEV_QUERY_TIMESTAMP_INFO,
```
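A hedged sketch of the suggested fix, using placeholder names rather than the real Panthor uapi: pinning each value explicitly keeps existing entries stable when a new query is appended.

```c
#include <assert.h>

/* Placeholder enum (not the actual uapi header): with explicit
 * values, adding a new query can never renumber entries that older
 * userspace was compiled against. */
enum example_dev_query_type {
	EXAMPLE_DEV_QUERY_GPU_INFO = 0,
	EXAMPLE_DEV_QUERY_CSIF_INFO = 1,
	EXAMPLE_DEV_QUERY_TIMESTAMP_INFO = 2,
	EXAMPLE_DEV_QUERY_GROUP_PRIORITIES_INFO = 3,
	/* New queries are appended with the next free value. */
	EXAMPLE_DEV_QUERY_MMU_INFO = 4,
};
```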

Similarly, in `panthor_drv.c`, the `DRM_PANTHOR_DEV_QUERY_MMU_INFO` case is inserted in the middle of the switch. While this is cosmetic and functionally fine (switch doesn't depend on ordering), it should match wherever the enum ends up.

The struct `drm_panthor_mmu_info` has only a single `__u64` member. This is fine for now but consider whether any padding field is needed for future extensibility (it's a single u64 so no alignment issues).

Otherwise the implementation is clean -- extracting the hardcoded `SZ_4K | SZ_2M` into a queryable parameter is a reasonable change.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Claude review: drm/gpuvm: Remove dead code
  2026-03-13 15:09 ` [PATCH v5 02/11] drm/gpuvm: Remove dead code Adrián Larumbe
@ 2026-03-13 20:48   ` Claude Code Review Bot
  0 siblings, 0 replies; 24+ messages in thread
From: Claude Code Review Bot @ 2026-03-13 20:48 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

Straightforward removal of `drm_gpuva_find_next()` which has no callers. No issues.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Claude review: drm/gpuvm: Fix comment to reflect remap operation operand status
  2026-03-13 15:09 ` [PATCH v5 03/11] drm/gpuvm: Fix comment to reflect remap operation operand status Adrián Larumbe
@ 2026-03-13 20:48   ` Claude Code Review Bot
  0 siblings, 0 replies; 24+ messages in thread
From: Claude Code Review Bot @ 2026-03-13 20:48 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

Minor doc fix, correct and clear.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Claude review: drm/gpuvm: Add a helper to check if two VA can be merged
  2026-03-13 15:09 ` [PATCH v5 04/11] drm/gpuvm: Add a helper to check if two VA can be merged Adrián Larumbe
@ 2026-03-13 20:48   ` Claude Code Review Bot
  0 siblings, 0 replies; 24+ messages in thread
From: Claude Code Review Bot @ 2026-03-13 20:48 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

The `can_merge()` helper consolidates scattered merge checks into one function. The logic looks correct:
- Same GEM object required, must be non-NULL
- VAs must be contiguous/overlapping (WARN_ON if gap)
- VA diff must match GEM offset diff for merge eligibility

One minor style note: the comment style uses `/* ... */` block comments which is fine but inconsistent -- some start with `/*` on the same line as text. Kernel style typically has `/*` on its own line for multi-line comments.

The removal of the inline merge checks in `__drm_gpuvm_sm_map()` is clean and the replacement with `can_merge(gpuvm, va, &req->map)` is correct.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Claude review: drm/gpuvm: Add a flags field to drm_gpuva_op_map
  2026-03-13 15:09 ` [PATCH v5 05/11] drm/gpuvm: Add a flags field to drm_gpuva_op_map Adrián Larumbe
@ 2026-03-13 20:48   ` Claude Code Review Bot
  0 siblings, 0 replies; 24+ messages in thread
From: Claude Code Review Bot @ 2026-03-13 20:48 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

The `flags` field is added to `drm_gpuva_op_map` and propagated through all remap/unmap split paths. The `VA_MERGE_MUST_MATCH_FLAGS` mask approach is reasonable.

**Dead callback declaration:** The `sm_can_merge_flags` callback is added to `struct drm_gpuvm_ops` but is never referenced or called anywhere in this series. It should either be used in `can_merge()` or removed:

```c
+	bool (*sm_can_merge_flags)(enum drm_gpuva_flags a, enum drm_gpuva_flags b);
```

This adds dead code to the UAPI-adjacent header and API surface with no implementation.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Claude review: drm/gpuvm: Add DRM_GPUVA_REPEAT flag and logic
  2026-03-13 15:09 ` [PATCH v5 06/11] drm/gpuvm: Add DRM_GPUVA_REPEAT flag and logic Adrián Larumbe
@ 2026-03-13 20:48   ` Claude Code Review Bot
  0 siblings, 0 replies; 24+ messages in thread
From: Claude Code Review Bot @ 2026-03-13 20:48 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

The core of the series. Several observations:

**`do_div` usage:** `do_div` modifies its first argument in-place (it's a macro that assigns the quotient back). The code uses it correctly for the divisibility check (checking the remainder), but it's worth noting that `va_diff` is modified in the can_merge path:

```c
+		if (do_div(va_diff, a->gem.repeat_range))
+			return false;
```

This is fine since `va_diff` isn't used after.

**`repeat_range` as u32 but comparisons with u64:** The `gem.repeat_range` field is `u32` in `drm_gpuva` and `drm_gpuva_op_map`, but `va.range` is `u64`. The `do_div` macro on 32-bit architectures expects a u32 divisor, which is satisfied here. However, in `validate_map_request`:

```c
+		if (unlikely(!op->gem.repeat_range ||
+			     va_range < op->gem.repeat_range ||
+			     do_div(va_range, op->gem.repeat_range)))
```

`va_range` is modified by `do_div` to contain the quotient -- this is intentional since it's only used for the check. Fine.

**DRM_GPUVA_USERBITS shift:** Moving `DRM_GPUVA_USERBITS` from `(1 << 2)` to `(1 << 3)` is a kernel-internal change. As long as no userspace driver depends on this value (which it shouldn't since it's not UAPI), this is fine.

The offset handling for repeated mappings is the critical logic -- when splitting a repeated VA, the GEM offset stays the same (offset + 0) rather than advancing. This is correct for the semantics: each repetition starts at the same BO offset.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Claude review: drm/gpuvm: Ensure correctness of unmap/remaps of repeated regions
  2026-03-13 15:09 ` [PATCH v5 07/11] drm/gpuvm: Ensure correctness of unmap/remaps of repeated regions Adrián Larumbe
@ 2026-03-13 20:48   ` Claude Code Review Bot
  0 siblings, 0 replies; 24+ messages in thread
From: Claude Code Review Bot @ 2026-03-13 20:48 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**`do_div` usage in `validate_repeated_unmap_request`:** The variable `multiple` is set to `req_addr` or `req_end`, then `do_div(multiple, ...)` is called. `do_div` replaces `multiple` with the quotient and returns the remainder, so the intent -- raising an error when the remainder is non-zero, i.e. the boundary is misaligned -- is implemented correctly.

**Alignment coverage for `req_addr` and `req_end`:** The function checks `req_addr` alignment only against the `first` VA, and `req_end` alignment against both `first` (if it extends past the range) and `last`. If `first` is fully contained within the unmap range and `last` is a repeat VA, the start of `last` is necessarily aligned (it was created that way), so only `req_end` matters there. The coverage looks correct.

**`DRM_GPUVM_HAS_REPEAT_MAPS` flag:** This is set but never cleared. If all repeat mappings are removed, subsequent map/unmap operations will still walk VAs twice (once in `validate_repeated_unmap_request` and once in the main loop). This is a performance concern but not a correctness bug. Worth a `FIXME` comment.

**Variable shadowing in `__drm_gpuvm_sm_unmap`:**

```c
 	drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req_addr, req_end) {
 		struct drm_gpuva_op_map prev = {}, next = {};
```

The `next` variable bound by the for-each loop is shadowed by the local `struct drm_gpuva_op_map next`. This is pre-existing code, not introduced by this patch. Separately, adding `validate_repeated_unmap_request` before this loop (which also uses `drm_gpuvm_for_each_va_range`) means the VA tree is now iterated twice per unmap.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Claude review: drm/panthor: Add support for repeated mappings
  2026-03-13 15:09 ` [PATCH v5 08/11] drm/panthor: Add support for repeated mappings Adrián Larumbe
@ 2026-03-13 20:48   ` Claude Code Review Bot
  0 siblings, 0 replies; 24+ messages in thread
From: Claude Code Review Bot @ 2026-03-13 20:48 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**Off-by-one in error cleanup path:**

```c
+err_unmap:
+	panthor_vm_unmap_pages(vm, iova, size * (i - 1));
+	return ret;
```

If the first iteration (`i=0`) fails, `i - 1` wraps to `U64_MAX` (since `i` is `u64`), which would be catastrophic. Even if it doesn't fail on `i=0`, when `i=1` and the second map fails, the cleanup should unmap `size * 1` bytes starting from `iova`, but `size * (i-1) = size * 0 = 0` would unmap nothing. The correct cleanup is `size * i`:

```c
err_unmap:
	panthor_vm_unmap_pages(vm, iova, size * i);
	return ret;
```

Because at iteration `i`, maps 0 through i-1 succeeded, so `i * size` bytes need unmapping.

**Declaration after statement:**

```c
+		u64 repeat_count = size;
+
+		if (do_div(repeat_count, repeat_range))
+			return -EINVAL;
```

This `u64 repeat_count` declaration follows the preceding `if` statement block. The kernel builds with `-Wdeclaration-after-statement`, so if the declaration really sits after a statement in the same scope it will trigger a compiler warning; if it opens a new scope, it is fine. More importantly, `repeat_count` is used only to verify divisibility and is never read afterwards -- a comment would help clarity.

**UAPI: `bo_repeat_range` added after `syncs` in `drm_panthor_vm_bind_op`:**

```c
+	__u64 bo_repeat_range;
 };
```

This is added at the end of the struct, after `syncs`, which is correct for UAPI extensibility. Userspace that doesn't know about the field will pass a smaller struct, so the kernel copy-in logic (via `PANTHOR_UOBJ`) must zero-fill the unknown trailing bytes. The panthor UOBJ infrastructure does handle this, so it should be fine.

**`vm->base.flags |= DRM_GPUVM_HAS_REPEAT_MAPS` without locking consideration:** This is done inside `panthor_gpuva_sm_step_map` which should be called under appropriate locks, but worth confirming.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Claude review: drm/panthor: Handle remap case for repeated mappings
  2026-03-13 15:09 ` [PATCH v5 09/11] drm/panthor: Handle remap case " Adrián Larumbe
@ 2026-03-13 20:48   ` Claude Code Review Bot
  0 siblings, 0 replies; 24+ messages in thread
From: Claude Code Review Bot @ 2026-03-13 20:48 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

The refactoring to extract `panthor_vm_map_range()` is clean. The remap paths now correctly pass through the repeat flag and range.

**Potential issue in prev remap path:**

```c
+		ret = panthor_vm_map_range(vm, op->remap.prev->flags & DRM_GPUVA_REPEAT,
+					   bo->base.sgt, op->remap.prev->va.addr, offset,
+					   size, op->remap.prev->gem.repeat_range,
+					   flags_to_prot(unmap_vma->flags));
```

The `addr` parameter passed is `op->remap.prev->va.addr`, but the `size` is calculated as `op->remap.prev->va.addr + op->remap.prev->va.range - unmap_start`. This `size` represents the portion of the prev region that overlaps with the unmap range (the part that needs remapping). But the mapping starts at `op->remap.prev->va.addr` (the original prev start), not at `unmap_start`. Looking at this more carefully: in the remap flow, the prev region represents the portion *before* the unmap that needs to be re-mapped after the full unmap. The `unmap_start` may have been adjusted by `unmap_hugepage_align`. The `size` here is the extra area that was unmapped beyond what was requested, which needs to be remapped. The address and offset calculations look correct in context.

However, for repeated mappings, `panthor_vm_map_range` is called with `repeat=true`, but the `offset` passed was calculated as `op->remap.prev->gem.offset + unmap_start - op->remap.prev->va.addr`. For a repeated mapping, this offset computation (which adds a VA-space delta to the GEM offset) may not be correct since repeat mappings don't advance the GEM offset linearly with VA. This needs careful verification that patch 6's GPUVM core already handles this (setting offset to the original offset for repeated regions in the remap prev/next operations). Looking back at patch 6, for repeated VAs, the gem.offset is kept the same (offset + 0), so by the time we reach this panthor code, `op->remap.prev->gem.offset` should already be the correct (unchanged) offset. This should be fine.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Claude review: drm/panthor: Pass vm_bind_op to vm_prepare_map_op_ctx
  2026-03-13 15:09 ` [PATCH v5 10/11] drm/panthor: Pass vm_bind_op to vm_prepare_map_op_ctx Adrián Larumbe
@ 2026-03-13 20:48   ` Claude Code Review Bot
  0 siblings, 0 replies; 24+ messages in thread
From: Claude Code Review Bot @ 2026-03-13 20:48 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

Clean refactoring. The function signature is simplified by passing the whole `struct drm_panthor_vm_bind_op *op` instead of individual fields. The `panthor_vm_map_bo_range` caller constructs a temporary `drm_panthor_vm_bind_op` on the stack, which is reasonable.

Minor nit: `struct drm_panthor_vm_bind_op op = {0};` followed by reassignment with a designated initializer is redundant -- the designated initializer is sufficient.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Claude review: drm/panthor: Bump the driver version to 1.8
  2026-03-13 15:09 ` [PATCH v5 11/11] drm/panthor: Bump the driver version to 1.8 Adrián Larumbe
@ 2026-03-13 20:48   ` Claude Code Review Bot
  0 siblings, 0 replies; 24+ messages in thread
From: Claude Code Review Bot @ 2026-03-13 20:48 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

Straightforward version bump. Note this is numbered 11/11 in the subject but appears before patch 10 in the mbox ordering -- this is just mbox delivery ordering, not a logical concern.

---

**Summary of actionable issues:**

1. **Critical (UAPI):** Patch 1 inserts `DRM_PANTHOR_DEV_QUERY_MMU_INFO` in the middle of the enum, shifting existing values. Must be appended at the end.
2. **Bug:** Patch 8 `panthor_vm_repeated_map_pages` error path uses `size * (i - 1)` which is wrong -- should be `size * i`.
3. **Dead code:** Patch 5 adds `sm_can_merge_flags` callback to `drm_gpuvm_ops` but it's never used.
4. **Minor:** `DRM_GPUVM_HAS_REPEAT_MAPS` is never cleared when all repeat mappings are removed.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2026-03-13 20:48 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-13 15:09 [PATCH v5 00/11] Support repeated mappings in GPUVM and Panthor Adrián Larumbe
2026-03-13 15:09 ` [PATCH v5 01/11] drm/panthor: Expose GPU page sizes to UM Adrián Larumbe
2026-03-13 20:48   ` Claude review: " Claude Code Review Bot
2026-03-13 15:09 ` [PATCH v5 02/11] drm/gpuvm: Remove dead code Adrián Larumbe
2026-03-13 20:48   ` Claude review: " Claude Code Review Bot
2026-03-13 15:09 ` [PATCH v5 03/11] drm/gpuvm: Fix comment to reflect remap operation operand status Adrián Larumbe
2026-03-13 20:48   ` Claude review: " Claude Code Review Bot
2026-03-13 15:09 ` [PATCH v5 04/11] drm/gpuvm: Add a helper to check if two VA can be merged Adrián Larumbe
2026-03-13 20:48   ` Claude review: " Claude Code Review Bot
2026-03-13 15:09 ` [PATCH v5 05/11] drm/gpuvm: Add a flags field to drm_gpuva_op_map Adrián Larumbe
2026-03-13 20:48   ` Claude review: " Claude Code Review Bot
2026-03-13 15:09 ` [PATCH v5 06/11] drm/gpuvm: Add DRM_GPUVA_REPEAT flag and logic Adrián Larumbe
2026-03-13 20:48   ` Claude review: " Claude Code Review Bot
2026-03-13 15:09 ` [PATCH v5 07/11] drm/gpuvm: Ensure correctness of unmap/remaps of repeated regions Adrián Larumbe
2026-03-13 20:48   ` Claude review: " Claude Code Review Bot
2026-03-13 15:09 ` [PATCH v5 08/11] drm/panthor: Add support for repeated mappings Adrián Larumbe
2026-03-13 20:48   ` Claude review: " Claude Code Review Bot
2026-03-13 15:09 ` [PATCH v5 09/11] drm/panthor: Handle remap case " Adrián Larumbe
2026-03-13 20:48   ` Claude review: " Claude Code Review Bot
2026-03-13 15:09 ` [PATCH v5 10/11] drm/panthor: Pass vm_bind_op to vm_prepare_map_op_ctx Adrián Larumbe
2026-03-13 20:48   ` Claude review: " Claude Code Review Bot
2026-03-13 15:09 ` [PATCH v5 11/11] drm/panthor: Bump the driver version to 1.8 Adrián Larumbe
2026-03-13 20:48   ` Claude review: " Claude Code Review Bot
2026-03-13 20:48 ` Claude review: Support repeated mappings in GPUVM and Panthor Claude Code Review Bot

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox