public inbox for drm-ai-reviews@public-inbox.freedesktop.org
* [PATCH v3 0/6] drm/gem-shmem: Track page accessed/dirty status
@ 2026-02-09 13:27 Thomas Zimmermann
  2026-02-09 13:27 ` [PATCH v3 1/6] drm/gem-shmem: Use obj directly where appropriate in fault handler Thomas Zimmermann
                   ` (6 more replies)
  0 siblings, 7 replies; 20+ messages in thread
From: Thomas Zimmermann @ 2026-02-09 13:27 UTC (permalink / raw)
  To: boris.brezillon, loic.molinari, willy, frank.binns, matt.coster,
	maarten.lankhorst, mripard, airlied, simona
  Cc: dri-devel, linux-mm, Thomas Zimmermann

Track page access/dirty status in gem-shmem for better integration with
the overall memory management. Gem-shmem has long had two flag bits in
struct drm_gem_shmem_object, named pages_mark_accessed_on_put and
pages_mark_dirty_on_put, but rarely used them, except for some odd
cases in drivers. Therefore pages in gem-shmem were never marked
correctly. (Other DRM memory managers at least do some coarse-grained
tracking.)

Patches 1 to 4 prepare the mmap page-fault handler for tracking page
status easily. The pages are already available; only the mmap handling
needs to be adapted. The way the shmem code interacts with huge-page
support is also not optimal, so it is refactored slightly.

Patch 5 adds tracking of the accessed and dirty status in mmap. With the earlier
patches, this change simply falls into place.

Patch 6 adds tracking of the accessed and dirty status in vmap. Because there's
no fault handling here, we refer to the existing status bits in struct
drm_gem_shmem_object. Each page's status will be updated by the page
release in drm_gem_put_pages(). The imagination driver requires a small
fix to make it work correctly.
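
For illustration, the put side roughly does the following with the two
flags (a simplified sketch of the existing drm_gem_put_pages() helper,
where 'dirty' and 'accessed' carry the two flag values; the real code
also batches the folio puts):

	unsigned long i = 0;

	while (i < npages) {
		struct folio *folio = page_folio(pages[i]);

		i += folio_nr_pages(folio); /* skip tail pages of large folios */
		if (dirty)
			folio_mark_dirty(folio);    /* queue for writeback */
		if (accessed)
			folio_mark_accessed(folio); /* keep the folio on the LRU */
		folio_put(folio);
	}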

Tested with CONFIG_DEBUG_VM=y by running animations on DRM's bochs driver for
several hours. This uses gem-shmem's mmap and vmap extensively.

v3:
- rewrite for VM_PFNMAP
- add more preparatory patches
v2:
- fix possible OOB access into page array (Matthew)
- simplify fault-handler error handling (Boris)
- simplify internal interfaces (Matthew)

Thomas Zimmermann (6):
  drm/gem-shmem: Use obj directly where appropriate in fault handler
  drm/gem-shmem: Test for existence of page in mmap fault handler
  drm/gem-shmem: Return vm_fault_t from drm_gem_shmem_try_map_pmd()
  drm/gem-shmem: Refactor drm_gem_shmem_try_map_pmd()
  drm/gem-shmem: Track folio accessed/dirty status in mmap
  drm/gem-shmem: Track folio accessed/dirty status in vmap

 drivers/gpu/drm/drm_gem_shmem_helper.c | 77 ++++++++++++++++----------
 drivers/gpu/drm/imagination/pvr_gem.c  |  6 +-
 2 files changed, 53 insertions(+), 30 deletions(-)


base-commit: 0082025812a31eda451fb14f13f52683ed375c49
-- 
2.52.0


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH v3 1/6] drm/gem-shmem: Use obj directly where appropriate in fault handler
  2026-02-09 13:27 [PATCH v3 0/6] drm/gem-shmem: Track page accessed/dirty status Thomas Zimmermann
@ 2026-02-09 13:27 ` Thomas Zimmermann
  2026-02-09 14:10   ` Boris Brezillon
  2026-02-11  7:04   ` Claude review: " Claude Code Review Bot
  2026-02-09 13:27 ` [PATCH v3 2/6] drm/gem-shmem: Test for existence of page in mmap " Thomas Zimmermann
                   ` (5 subsequent siblings)
  6 siblings, 2 replies; 20+ messages in thread
From: Thomas Zimmermann @ 2026-02-09 13:27 UTC (permalink / raw)
  To: boris.brezillon, loic.molinari, willy, frank.binns, matt.coster,
	maarten.lankhorst, mripard, airlied, simona
  Cc: dri-devel, linux-mm, Thomas Zimmermann

Replace shmem->base with obj in several places. It is the same value,
but the latter is easier to read.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 3871a6d92f77..5bced7db0f1f 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -584,7 +584,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	/* Offset to faulty address in the VMA. */
 	page_offset = vmf->pgoff - vma->vm_pgoff;
 
-	dma_resv_lock(shmem->base.resv, NULL);
+	dma_resv_lock(obj->resv, NULL);
 
 	if (page_offset >= num_pages ||
 	    drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
@@ -602,7 +602,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	ret = vmf_insert_pfn(vma, vmf->address, pfn);
 
  out:
-	dma_resv_unlock(shmem->base.resv);
+	dma_resv_unlock(obj->resv);
 
 	return ret;
 }
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v3 2/6] drm/gem-shmem: Test for existence of page in mmap fault handler
  2026-02-09 13:27 [PATCH v3 0/6] drm/gem-shmem: Track page accessed/dirty status Thomas Zimmermann
  2026-02-09 13:27 ` [PATCH v3 1/6] drm/gem-shmem: Use obj directly where appropriate in fault handler Thomas Zimmermann
@ 2026-02-09 13:27 ` Thomas Zimmermann
  2026-02-09 14:10   ` Boris Brezillon
  2026-02-11  7:04   ` Claude review: " Claude Code Review Bot
  2026-02-09 13:27 ` [PATCH v3 3/6] drm/gem-shmem: Return vm_fault_t from drm_gem_shmem_try_map_pmd() Thomas Zimmermann
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 20+ messages in thread
From: Thomas Zimmermann @ 2026-02-09 13:27 UTC (permalink / raw)
  To: boris.brezillon, loic.molinari, willy, frank.binns, matt.coster,
	maarten.lankhorst, mripard, airlied, simona
  Cc: dri-devel, linux-mm, Thomas Zimmermann

Not having a page pointer in the mmap fault handler is an error. Test
for this situation and return VM_FAULT_SIGBUS if so. Also replace several
lookups of the page with a local variable.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 5bced7db0f1f..3ee54c24e535 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -574,31 +574,31 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	struct drm_gem_object *obj = vma->vm_private_data;
+	struct drm_device *dev = obj->dev;
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 	loff_t num_pages = obj->size >> PAGE_SHIFT;
-	vm_fault_t ret;
+	vm_fault_t ret = VM_FAULT_SIGBUS;
 	struct page **pages = shmem->pages;
-	pgoff_t page_offset;
+	pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
+	struct page *page;
 	unsigned long pfn;
 
-	/* Offset to faulty address in the VMA. */
-	page_offset = vmf->pgoff - vma->vm_pgoff;
-
 	dma_resv_lock(obj->resv, NULL);
 
-	if (page_offset >= num_pages ||
-	    drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
-	    shmem->madv < 0) {
-		ret = VM_FAULT_SIGBUS;
+	if (page_offset >= num_pages || drm_WARN_ON_ONCE(dev, !shmem->pages) ||
+	    shmem->madv < 0)
+		goto out;
+
+	page = pages[page_offset];
+	if (drm_WARN_ON_ONCE(dev, !page))
 		goto out;
-	}
 
-	if (drm_gem_shmem_try_map_pmd(vmf, vmf->address, pages[page_offset])) {
+	if (drm_gem_shmem_try_map_pmd(vmf, vmf->address, page)) {
 		ret = VM_FAULT_NOPAGE;
 		goto out;
 	}
 
-	pfn = page_to_pfn(pages[page_offset]);
+	pfn = page_to_pfn(page);
 	ret = vmf_insert_pfn(vma, vmf->address, pfn);
 
  out:
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v3 3/6] drm/gem-shmem: Return vm_fault_t from drm_gem_shmem_try_map_pmd()
  2026-02-09 13:27 [PATCH v3 0/6] drm/gem-shmem: Track page accessed/dirty status Thomas Zimmermann
  2026-02-09 13:27 ` [PATCH v3 1/6] drm/gem-shmem: Use obj directly where appropriate in fault handler Thomas Zimmermann
  2026-02-09 13:27 ` [PATCH v3 2/6] drm/gem-shmem: Test for existence of page in mmap " Thomas Zimmermann
@ 2026-02-09 13:27 ` Thomas Zimmermann
  2026-02-11  7:04   ` Claude review: " Claude Code Review Bot
  2026-02-09 13:27 ` [PATCH v3 4/6] drm/gem-shmem: Refactor drm_gem_shmem_try_map_pmd() Thomas Zimmermann
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 20+ messages in thread
From: Thomas Zimmermann @ 2026-02-09 13:27 UTC (permalink / raw)
  To: boris.brezillon, loic.molinari, willy, frank.binns, matt.coster,
	maarten.lankhorst, mripard, airlied, simona
  Cc: dri-devel, linux-mm, Thomas Zimmermann

Return the exact VM_FAULT_ mask from drm_gem_shmem_try_map_pmd(). This
gives the caller better insight into the result. Return 0 if nothing was
done.

If the caller sees VM_FAULT_NOPAGE, drm_gem_shmem_try_map_pmd() added a
PMD entry to the page table. As before, return early from the page-fault
handler in that case.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Suggested-by: Matthew Wilcox <willy@infradead.org>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 3ee54c24e535..ab8e331005f9 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -550,8 +550,8 @@ int drm_gem_shmem_dumb_create(struct drm_file *file, struct drm_device *dev,
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_dumb_create);
 
-static bool drm_gem_shmem_try_map_pmd(struct vm_fault *vmf, unsigned long addr,
-				      struct page *page)
+static vm_fault_t drm_gem_shmem_try_map_pmd(struct vm_fault *vmf, unsigned long addr,
+					    struct page *page)
 {
 #ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
 	unsigned long pfn = page_to_pfn(page);
@@ -562,12 +562,11 @@ static bool drm_gem_shmem_try_map_pmd(struct vm_fault *vmf, unsigned long addr,
 	    pmd_none(*vmf->pmd) &&
 	    folio_test_pmd_mappable(page_folio(page))) {
 		pfn &= PMD_MASK >> PAGE_SHIFT;
-		if (vmf_insert_pfn_pmd(vmf, pfn, false) == VM_FAULT_NOPAGE)
-			return true;
+		return vmf_insert_pfn_pmd(vmf, pfn, false);
 	}
 #endif
 
-	return false;
+	return 0;
 }
 
 static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
@@ -593,10 +592,9 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	if (drm_WARN_ON_ONCE(dev, !page))
 		goto out;
 
-	if (drm_gem_shmem_try_map_pmd(vmf, vmf->address, page)) {
-		ret = VM_FAULT_NOPAGE;
+	ret = drm_gem_shmem_try_map_pmd(vmf, vmf->address, page);
+	if (ret == VM_FAULT_NOPAGE)
 		goto out;
-	}
 
 	pfn = page_to_pfn(page);
 	ret = vmf_insert_pfn(vma, vmf->address, pfn);
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v3 4/6] drm/gem-shmem: Refactor drm_gem_shmem_try_map_pmd()
  2026-02-09 13:27 [PATCH v3 0/6] drm/gem-shmem: Track page accessed/dirty status Thomas Zimmermann
                   ` (2 preceding siblings ...)
  2026-02-09 13:27 ` [PATCH v3 3/6] drm/gem-shmem: Return vm_fault_t from drm_gem_shmem_try_map_pmd() Thomas Zimmermann
@ 2026-02-09 13:27 ` Thomas Zimmermann
  2026-02-09 14:25   ` Boris Brezillon
  2026-02-11  7:04   ` Claude review: " Claude Code Review Bot
  2026-02-09 13:27 ` [PATCH v3 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap Thomas Zimmermann
                   ` (2 subsequent siblings)
  6 siblings, 2 replies; 20+ messages in thread
From: Thomas Zimmermann @ 2026-02-09 13:27 UTC (permalink / raw)
  To: boris.brezillon, loic.molinari, willy, frank.binns, matt.coster,
	maarten.lankhorst, mripard, airlied, simona
  Cc: dri-devel, linux-mm, Thomas Zimmermann

The current mmap page-fault handler requires some changes before it
can track folio access.

Move the call to folio_test_pmd_mappable() into the mmap page-fault
handler, before the call to drm_gem_shmem_try_map_pmd(). The folio will
become useful for tracking the access status.

Also rename drm_gem_shmem_try_map_pmd() to _try_insert_pfn_pmd()
and only pass the page fault and page-frame number. The new name and
parameters make it similar to vmf_insert_pfn_pmd().

No functional changes. If PMD mapping fails or is not supported,
insert a regular PFN as before.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 25 ++++++++++++-------------
 1 file changed, 12 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index ab8e331005f9..c3a054899ba3 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -550,17 +550,14 @@ int drm_gem_shmem_dumb_create(struct drm_file *file, struct drm_device *dev,
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_dumb_create);
 
-static vm_fault_t drm_gem_shmem_try_map_pmd(struct vm_fault *vmf, unsigned long addr,
-					    struct page *page)
+static vm_fault_t drm_gem_shmem_try_insert_pfn_pmd(struct vm_fault *vmf, unsigned long pfn)
 {
 #ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
-	unsigned long pfn = page_to_pfn(page);
 	unsigned long paddr = pfn << PAGE_SHIFT;
-	bool aligned = (addr & ~PMD_MASK) == (paddr & ~PMD_MASK);
+	bool aligned = (vmf->address & ~PMD_MASK) == (paddr & ~PMD_MASK);
 
-	if (aligned &&
-	    pmd_none(*vmf->pmd) &&
-	    folio_test_pmd_mappable(page_folio(page))) {
+	if (aligned && pmd_none(*vmf->pmd)) {
+		/* Read-only mapping; split upon write fault */
 		pfn &= PMD_MASK >> PAGE_SHIFT;
 		return vmf_insert_pfn_pmd(vmf, pfn, false);
 	}
@@ -580,6 +577,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	struct page **pages = shmem->pages;
 	pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
 	struct page *page;
+	struct folio *folio;
 	unsigned long pfn;
 
 	dma_resv_lock(obj->resv, NULL);
@@ -591,15 +589,16 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	page = pages[page_offset];
 	if (drm_WARN_ON_ONCE(dev, !page))
 		goto out;
-
-	ret = drm_gem_shmem_try_map_pmd(vmf, vmf->address, page);
-	if (ret == VM_FAULT_NOPAGE)
-		goto out;
+	folio = page_folio(page);
 
 	pfn = page_to_pfn(page);
-	ret = vmf_insert_pfn(vma, vmf->address, pfn);
 
- out:
+	if (folio_test_pmd_mappable(folio))
+		ret = drm_gem_shmem_try_insert_pfn_pmd(vmf, pfn);
+	if (ret != VM_FAULT_NOPAGE)
+		ret = vmf_insert_pfn(vma, vmf->address, pfn);
+
+out:
 	dma_resv_unlock(obj->resv);
 
 	return ret;
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v3 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
  2026-02-09 13:27 [PATCH v3 0/6] drm/gem-shmem: Track page accessed/dirty status Thomas Zimmermann
                   ` (3 preceding siblings ...)
  2026-02-09 13:27 ` [PATCH v3 4/6] drm/gem-shmem: Refactor drm_gem_shmem_try_map_pmd() Thomas Zimmermann
@ 2026-02-09 13:27 ` Thomas Zimmermann
  2026-02-09 14:23   ` Boris Brezillon
  2026-02-11  7:04   ` Claude review: " Claude Code Review Bot
  2026-02-09 13:27 ` [PATCH v3 6/6] drm/gem-shmem: Track folio accessed/dirty status in vmap Thomas Zimmermann
  2026-02-11  7:04 ` Claude review: drm/gem-shmem: Track page accessed/dirty status Claude Code Review Bot
  6 siblings, 2 replies; 20+ messages in thread
From: Thomas Zimmermann @ 2026-02-09 13:27 UTC (permalink / raw)
  To: boris.brezillon, loic.molinari, willy, frank.binns, matt.coster,
	maarten.lankhorst, mripard, airlied, simona
  Cc: dri-devel, linux-mm, Thomas Zimmermann

Invoke folio_mark_accessed() in mmap page faults to add the folio to
the memory manager's LRU list. Userspace invokes mmap to get the memory
for software rendering. Compositors do the same when creating the final
on-screen image, so keeping the pages on the LRU makes sense. This
avoids paging out graphics buffers under memory pressure.

In pfn_mkwrite, further invoke folio_mark_dirty() to queue the folio
for writeback in case the underlying file is paged out from system
memory. This rarely happens in practice, but without dirty tracking it
would corrupt the buffer content.

This has little effect on a system's hardware-accelerated rendering,
which only uses mmap for the initial setup of textures, meshes,
shaders, etc.

v3:
- rewrite for VM_PFNMAP
v2:
- adapt to changes in drm_gem_shmem_try_mmap_pmd()

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index c3a054899ba3..0c86ad40a049 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -598,6 +598,9 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	if (ret != VM_FAULT_NOPAGE)
 		ret = vmf_insert_pfn(vma, vmf->address, pfn);
 
+	if (likely(!(ret & VM_FAULT_ERROR)))
+		folio_mark_accessed(folio);
+
 out:
 	dma_resv_unlock(obj->resv);
 
@@ -638,10 +641,27 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
 	drm_gem_vm_close(vma);
 }
 
+static vm_fault_t drm_gem_shmem_pfn_mkwrite(struct vm_fault *vmf)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	struct drm_gem_object *obj = vma->vm_private_data;
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+	pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
+	struct page *page = shmem->pages[page_offset];
+	struct folio *folio = page_folio(page);
+
+	file_update_time(vma->vm_file);
+
+	folio_mark_dirty(folio);
+
+	return 0;
+}
+
 const struct vm_operations_struct drm_gem_shmem_vm_ops = {
 	.fault = drm_gem_shmem_fault,
 	.open = drm_gem_shmem_vm_open,
 	.close = drm_gem_shmem_vm_close,
+	.pfn_mkwrite = drm_gem_shmem_pfn_mkwrite,
 };
 EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
 
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v3 6/6] drm/gem-shmem: Track folio accessed/dirty status in vmap
  2026-02-09 13:27 [PATCH v3 0/6] drm/gem-shmem: Track page accessed/dirty status Thomas Zimmermann
                   ` (4 preceding siblings ...)
  2026-02-09 13:27 ` [PATCH v3 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap Thomas Zimmermann
@ 2026-02-09 13:27 ` Thomas Zimmermann
  2026-02-11  7:04   ` Claude review: " Claude Code Review Bot
  2026-02-11  7:04 ` Claude review: drm/gem-shmem: Track page accessed/dirty status Claude Code Review Bot
  6 siblings, 1 reply; 20+ messages in thread
From: Thomas Zimmermann @ 2026-02-09 13:27 UTC (permalink / raw)
  To: boris.brezillon, loic.molinari, willy, frank.binns, matt.coster,
	maarten.lankhorst, mripard, airlied, simona
  Cc: dri-devel, linux-mm, Thomas Zimmermann

On successful vmap, set the pages_mark_accessed_on_put and _dirty_on_put
flags in the gem-shmem object. This signals that the contained pages
require LRU and dirty tracking when they are released back to SHMEM. Clear
these flags on put, so that the buffer remains quiet until the next call
to vmap. There's no way of tracking dirty status precisely in vmap, as
there's no fault handling and no write-protected mapping available.

Both flags, _accessed_on_put and _dirty_on_put, have always been part of
the gem-shmem object, but were never used much. So most drivers did not track
the page status correctly.

Only the v3d and imagination drivers make limited use of _dirty_on_put. In
the case of imagination, move the flag setting from init to cleanup. This
ensures writeback of modified pages but does not interfere with the
internal vmap/vunmap calls. V3d already implements this behaviour.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com> # gem-shmem
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 4 ++++
 drivers/gpu/drm/imagination/pvr_gem.c  | 6 ++++--
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 0c86ad40a049..dda9af9bf5b3 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -265,6 +265,8 @@ void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 				  shmem->pages_mark_dirty_on_put,
 				  shmem->pages_mark_accessed_on_put);
 		shmem->pages = NULL;
+		shmem->pages_mark_accessed_on_put = false;
+		shmem->pages_mark_dirty_on_put = false;
 	}
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked);
@@ -397,6 +399,8 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
 		} else {
 			iosys_map_set_vaddr(map, shmem->vaddr);
 			refcount_set(&shmem->vmap_use_count, 1);
+			shmem->pages_mark_accessed_on_put = true;
+			shmem->pages_mark_dirty_on_put = true;
 		}
 	}
 
diff --git a/drivers/gpu/drm/imagination/pvr_gem.c b/drivers/gpu/drm/imagination/pvr_gem.c
index c07c9a915190..307b02c916d4 100644
--- a/drivers/gpu/drm/imagination/pvr_gem.c
+++ b/drivers/gpu/drm/imagination/pvr_gem.c
@@ -25,7 +25,10 @@
 
 static void pvr_gem_object_free(struct drm_gem_object *obj)
 {
-	drm_gem_shmem_object_free(obj);
+	struct drm_gem_shmem_object *shmem_obj = to_drm_gem_shmem_obj(obj);
+
+	shmem_obj->pages_mark_dirty_on_put = true;
+	drm_gem_shmem_free(shmem_obj);
 }
 
 static struct dma_buf *pvr_gem_export(struct drm_gem_object *obj, int flags)
@@ -363,7 +366,6 @@ pvr_gem_object_create(struct pvr_device *pvr_dev, size_t size, u64 flags)
 	if (IS_ERR(shmem_obj))
 		return ERR_CAST(shmem_obj);
 
-	shmem_obj->pages_mark_dirty_on_put = true;
 	shmem_obj->map_wc = !(flags & PVR_BO_CPU_CACHED);
 	pvr_obj = shmem_gem_to_pvr_gem(shmem_obj);
 	pvr_obj->flags = flags;
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH v3 1/6] drm/gem-shmem: Use obj directly where appropriate in fault handler
  2026-02-09 13:27 ` [PATCH v3 1/6] drm/gem-shmem: Use obj directly where appropriate in fault handler Thomas Zimmermann
@ 2026-02-09 14:10   ` Boris Brezillon
  2026-02-11  7:04   ` Claude review: " Claude Code Review Bot
  1 sibling, 0 replies; 20+ messages in thread
From: Boris Brezillon @ 2026-02-09 14:10 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: loic.molinari, willy, frank.binns, matt.coster, maarten.lankhorst,
	mripard, airlied, simona, dri-devel, linux-mm

On Mon,  9 Feb 2026 14:27:10 +0100
Thomas Zimmermann <tzimmermann@suse.de> wrote:

> Replace shmem->base with obj in several places. It is the same value,
> but the latter is easier to read.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>

Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>

> ---
>  drivers/gpu/drm/drm_gem_shmem_helper.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 3871a6d92f77..5bced7db0f1f 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -584,7 +584,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
>  	/* Offset to faulty address in the VMA. */
>  	page_offset = vmf->pgoff - vma->vm_pgoff;
>  
> -	dma_resv_lock(shmem->base.resv, NULL);
> +	dma_resv_lock(obj->resv, NULL);
>  
>  	if (page_offset >= num_pages ||
>  	    drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
> @@ -602,7 +602,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
>  	ret = vmf_insert_pfn(vma, vmf->address, pfn);
>  
>   out:
> -	dma_resv_unlock(shmem->base.resv);
> +	dma_resv_unlock(obj->resv);
>  
>  	return ret;
>  }


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3 2/6] drm/gem-shmem: Test for existence of page in mmap fault handler
  2026-02-09 13:27 ` [PATCH v3 2/6] drm/gem-shmem: Test for existence of page in mmap " Thomas Zimmermann
@ 2026-02-09 14:10   ` Boris Brezillon
  2026-02-11  7:04   ` Claude review: " Claude Code Review Bot
  1 sibling, 0 replies; 20+ messages in thread
From: Boris Brezillon @ 2026-02-09 14:10 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: loic.molinari, willy, frank.binns, matt.coster, maarten.lankhorst,
	mripard, airlied, simona, dri-devel, linux-mm

On Mon,  9 Feb 2026 14:27:11 +0100
Thomas Zimmermann <tzimmermann@suse.de> wrote:

> Not having a page pointer in the mmap fault handler is an error. Test
> for this situation and return VM_FAULT_SIGBUS if so. Also replace several
> lookups of the page with a local variable.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>

Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>

> ---
>  drivers/gpu/drm/drm_gem_shmem_helper.c | 24 ++++++++++++------------
>  1 file changed, 12 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 5bced7db0f1f..3ee54c24e535 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -574,31 +574,31 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
>  {
>  	struct vm_area_struct *vma = vmf->vma;
>  	struct drm_gem_object *obj = vma->vm_private_data;
> +	struct drm_device *dev = obj->dev;
>  	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>  	loff_t num_pages = obj->size >> PAGE_SHIFT;
> -	vm_fault_t ret;
> +	vm_fault_t ret = VM_FAULT_SIGBUS;
>  	struct page **pages = shmem->pages;
> -	pgoff_t page_offset;
> +	pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
> +	struct page *page;
>  	unsigned long pfn;
>  
> -	/* Offset to faulty address in the VMA. */
> -	page_offset = vmf->pgoff - vma->vm_pgoff;
> -
>  	dma_resv_lock(obj->resv, NULL);
>  
> -	if (page_offset >= num_pages ||
> -	    drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
> -	    shmem->madv < 0) {
> -		ret = VM_FAULT_SIGBUS;
> +	if (page_offset >= num_pages || drm_WARN_ON_ONCE(dev, !shmem->pages) ||
> +	    shmem->madv < 0)
> +		goto out;
> +
> +	page = pages[page_offset];
> +	if (drm_WARN_ON_ONCE(dev, !page))
>  		goto out;
> -	}
>  
> -	if (drm_gem_shmem_try_map_pmd(vmf, vmf->address, pages[page_offset])) {
> +	if (drm_gem_shmem_try_map_pmd(vmf, vmf->address, page)) {
>  		ret = VM_FAULT_NOPAGE;
>  		goto out;
>  	}
>  
> -	pfn = page_to_pfn(pages[page_offset]);
> +	pfn = page_to_pfn(page);
>  	ret = vmf_insert_pfn(vma, vmf->address, pfn);
>  
>   out:


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
  2026-02-09 13:27 ` [PATCH v3 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap Thomas Zimmermann
@ 2026-02-09 14:23   ` Boris Brezillon
  2026-02-09 14:46     ` Thomas Zimmermann
  2026-02-11  7:04   ` Claude review: " Claude Code Review Bot
  1 sibling, 1 reply; 20+ messages in thread
From: Boris Brezillon @ 2026-02-09 14:23 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: loic.molinari, willy, frank.binns, matt.coster, maarten.lankhorst,
	mripard, airlied, simona, dri-devel, linux-mm

On Mon,  9 Feb 2026 14:27:14 +0100
Thomas Zimmermann <tzimmermann@suse.de> wrote:

> Invoke folio_mark_accessed() in mmap page faults to add the folio to
> the memory manager's LRU list. Userspace invokes mmap to get the memory
> for software rendering. Compositors do the same when creating the final
> on-screen image, so keeping the pages on the LRU makes sense. This
> avoids paging out graphics buffers under memory pressure.
> 
> In pfn_mkwrite, further invoke folio_mark_dirty() to queue the folio
> for writeback in case the underlying file is paged out from system
> memory. This rarely happens in practice, but without dirty tracking it
> would corrupt the buffer content.
> 
> This has little effect on a system's hardware-accelerated rendering,
> which only uses mmap for the initial setup of textures, meshes,
> shaders, etc.
> 
> v3:
> - rewrite for VM_PFNMAP
> v2:
> - adapt to changes in drm_gem_shmem_try_mmap_pmd()
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
> ---
>  drivers/gpu/drm/drm_gem_shmem_helper.c | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
> 
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index c3a054899ba3..0c86ad40a049 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -598,6 +598,9 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
>  	if (ret != VM_FAULT_NOPAGE)
>  		ret = vmf_insert_pfn(vma, vmf->address, pfn);
>  
> +	if (likely(!(ret & VM_FAULT_ERROR)))

Can't we just go

	if (ret == VM_FAULT_NOPAGE)

here?

> +		folio_mark_accessed(folio);
> +
>  out:
>  	dma_resv_unlock(obj->resv);
>  
> @@ -638,10 +641,27 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
>  	drm_gem_vm_close(vma);
>  }
>  
> +static vm_fault_t drm_gem_shmem_pfn_mkwrite(struct vm_fault *vmf)
> +{
> +	struct vm_area_struct *vma = vmf->vma;
> +	struct drm_gem_object *obj = vma->vm_private_data;
> +	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> +	pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
> +	struct page *page = shmem->pages[page_offset];

Should we have a

	if (WARN_ON(!shmem->pages ||
		    page_offset <= (obj->size >> PAGE_SHIFT)))
		return VM_FAULT_SIGBUS;


?

> +	struct folio *folio = page_folio(page);
> +
> +	file_update_time(vma->vm_file);
> +
> +	folio_mark_dirty(folio);
> +
> +	return 0;
> +}
> +
>  const struct vm_operations_struct drm_gem_shmem_vm_ops = {
>  	.fault = drm_gem_shmem_fault,
>  	.open = drm_gem_shmem_vm_open,
>  	.close = drm_gem_shmem_vm_close,
> +	.pfn_mkwrite = drm_gem_shmem_pfn_mkwrite,
>  };
>  EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
>  


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3 4/6] drm/gem-shmem: Refactor drm_gem_shmem_try_map_pmd()
  2026-02-09 13:27 ` [PATCH v3 4/6] drm/gem-shmem: Refactor drm_gem_shmem_try_map_pmd() Thomas Zimmermann
@ 2026-02-09 14:25   ` Boris Brezillon
  2026-02-11  7:04   ` Claude review: " Claude Code Review Bot
  1 sibling, 0 replies; 20+ messages in thread
From: Boris Brezillon @ 2026-02-09 14:25 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: loic.molinari, willy, frank.binns, matt.coster, maarten.lankhorst,
	mripard, airlied, simona, dri-devel, linux-mm

On Mon,  9 Feb 2026 14:27:13 +0100
Thomas Zimmermann <tzimmermann@suse.de> wrote:

> The current mmap page-fault handler requires some changes before it
> can track folio access.
> 
> Move the call to folio_test_pmd_mappable() into the mmap page-fault
> handler, before the call to drm_gem_shmem_try_map_pmd(). The folio will
> become useful for tracking the access status.
> 
> Also rename drm_gem_shmem_try_map_pmd() to _try_insert_pfn_pmd()
> and only pass the page fault and page-frame number. The new name and
> parameters make it similar to vmf_insert_pfn_pmd().
> 
> No functional changes. If PMD mapping fails or is not supported,
> insert a regular PFN as before.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>

Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>

> ---
>  drivers/gpu/drm/drm_gem_shmem_helper.c | 25 ++++++++++++-------------
>  1 file changed, 12 insertions(+), 13 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index ab8e331005f9..c3a054899ba3 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -550,17 +550,14 @@ int drm_gem_shmem_dumb_create(struct drm_file *file, struct drm_device *dev,
>  }
>  EXPORT_SYMBOL_GPL(drm_gem_shmem_dumb_create);
>  
> -static vm_fault_t drm_gem_shmem_try_map_pmd(struct vm_fault *vmf, unsigned long addr,
> -					    struct page *page)
> +static vm_fault_t drm_gem_shmem_try_insert_pfn_pmd(struct vm_fault *vmf, unsigned long pfn)
>  {
>  #ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
> -	unsigned long pfn = page_to_pfn(page);
>  	unsigned long paddr = pfn << PAGE_SHIFT;
> -	bool aligned = (addr & ~PMD_MASK) == (paddr & ~PMD_MASK);
> +	bool aligned = (vmf->address & ~PMD_MASK) == (paddr & ~PMD_MASK);
>  
> -	if (aligned &&
> -	    pmd_none(*vmf->pmd) &&
> -	    folio_test_pmd_mappable(page_folio(page))) {
> +	if (aligned && pmd_none(*vmf->pmd)) {
> +		/* Read-only mapping; split upon write fault */
>  		pfn &= PMD_MASK >> PAGE_SHIFT;
>  		return vmf_insert_pfn_pmd(vmf, pfn, false);
>  	}
> @@ -580,6 +577,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
>  	struct page **pages = shmem->pages;
>  	pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
>  	struct page *page;
> +	struct folio *folio;
>  	unsigned long pfn;
>  
>  	dma_resv_lock(obj->resv, NULL);
> @@ -591,15 +589,16 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
>  	page = pages[page_offset];
>  	if (drm_WARN_ON_ONCE(dev, !page))
>  		goto out;
> -
> -	ret = drm_gem_shmem_try_map_pmd(vmf, vmf->address, page);
> -	if (ret == VM_FAULT_NOPAGE)
> -		goto out;
> +	folio = page_folio(page);
>  
>  	pfn = page_to_pfn(page);
> -	ret = vmf_insert_pfn(vma, vmf->address, pfn);
>  
> - out:
> +	if (folio_test_pmd_mappable(folio))
> +		ret = drm_gem_shmem_try_insert_pfn_pmd(vmf, pfn);
> +	if (ret != VM_FAULT_NOPAGE)
> +		ret = vmf_insert_pfn(vma, vmf->address, pfn);
> +
> +out:
>  	dma_resv_unlock(obj->resv);
>  
>  	return ret;


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
  2026-02-09 14:23   ` Boris Brezillon
@ 2026-02-09 14:46     ` Thomas Zimmermann
  2026-02-09 15:01       ` Boris Brezillon
  0 siblings, 1 reply; 20+ messages in thread
From: Thomas Zimmermann @ 2026-02-09 14:46 UTC (permalink / raw)
  To: Boris Brezillon
  Cc: loic.molinari, willy, frank.binns, matt.coster, maarten.lankhorst,
	mripard, airlied, simona, dri-devel, linux-mm

Hi Boris,

thanks for reviewing the series.

Am 09.02.26 um 15:23 schrieb Boris Brezillon:
> On Mon,  9 Feb 2026 14:27:14 +0100
> Thomas Zimmermann <tzimmermann@suse.de> wrote:
>
>> Invoke folio_mark_accessed() in mmap page faults to add the folio to
>> the memory manager's LRU list. Userspace invokes mmap to get the memory
>> for software rendering. Compositors do the same when creating the final
>> on-screen image, so keeping the pages on the LRU makes sense. This
>> avoids paging out graphics buffers under memory pressure.
>>
>> In pfn_mkwrite, further invoke folio_mark_dirty() to queue the folio
>> for writeback in case the underlying file is paged out from system
>> memory. This rarely happens in practice, but without dirty tracking it
>> would corrupt the buffer content.
>>
>> This has little effect on a system's hardware-accelerated rendering,
>> which only uses mmap for the initial setup of textures, meshes,
>> shaders, etc.
>>
>> v3:
>> - rewrite for VM_PFNMAP
>> v2:
>> - adapt to changes in drm_gem_shmem_try_mmap_pmd()
>>
>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>> Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
>> ---
>>   drivers/gpu/drm/drm_gem_shmem_helper.c | 20 ++++++++++++++++++++
>>   1 file changed, 20 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
>> index c3a054899ba3..0c86ad40a049 100644
>> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
>> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
>> @@ -598,6 +598,9 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
>>   	if (ret != VM_FAULT_NOPAGE)
>>   		ret = vmf_insert_pfn(vma, vmf->address, pfn);
>>   
>> +	if (likely(!(ret & VM_FAULT_ERROR)))
> Can't we just go
>
> 	if (ret == VM_FAULT_NOPAGE)
>
> here?

After reviewing the code in vmf_insert_pfn, I think so. All we'll see is 
_OOM and _SIGBUS; or _NOPAGE on success.  I'll change it then.
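
I.e. something like

	if (ret == VM_FAULT_NOPAGE)
		folio_mark_accessed(folio);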


>
>> +		folio_mark_accessed(folio);
>> +
>>   out:
>>   	dma_resv_unlock(obj->resv);
>>   
>> @@ -638,10 +641,27 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
>>   	drm_gem_vm_close(vma);
>>   }
>>   
>> +static vm_fault_t drm_gem_shmem_pfn_mkwrite(struct vm_fault *vmf)
>> +{
>> +	struct vm_area_struct *vma = vmf->vma;
>> +	struct drm_gem_object *obj = vma->vm_private_data;
>> +	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>> +	pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
>> +	struct page *page = shmem->pages[page_offset];
> Should we have a
>
> 	if (WARN_ON(!shmem->pages ||
> 		    page_offset <= (obj->size >> PAGE_SHIFT)))
> 		return VM_FAULT_SIGBUS;
>
>
> ?

I left it out because it doesn't seem necessary.  In the fault handler 
in drm_gem_shmem_fault(), I can see that we could get an OOB access. But 
we only call pfn_mkwrite() after going through _fault() first. I don't 
see a way of getting here unless we've already tested for the page in 
_fault().

Best regards
Thomas

>
>> +	struct folio *folio = page_folio(page);
>> +
>> +	file_update_time(vma->vm_file);
>> +
>> +	folio_mark_dirty(folio);
>> +
>> +	return 0;
>> +}
>> +
>>   const struct vm_operations_struct drm_gem_shmem_vm_ops = {
>>   	.fault = drm_gem_shmem_fault,
>>   	.open = drm_gem_shmem_vm_open,
>>   	.close = drm_gem_shmem_vm_close,
>> +	.pfn_mkwrite = drm_gem_shmem_pfn_mkwrite,
>>   };
>>   EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
>>   

-- 
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstr. 146, 90461 Nürnberg, Germany, www.suse.com
GF: Jochen Jaser, Andrew McDonald, Werner Knoblich, (HRB 36809, AG Nürnberg)



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
  2026-02-09 14:46     ` Thomas Zimmermann
@ 2026-02-09 15:01       ` Boris Brezillon
  0 siblings, 0 replies; 20+ messages in thread
From: Boris Brezillon @ 2026-02-09 15:01 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: loic.molinari, willy, frank.binns, matt.coster, maarten.lankhorst,
	mripard, airlied, simona, dri-devel, linux-mm

On Mon, 9 Feb 2026 15:46:21 +0100
Thomas Zimmermann <tzimmermann@suse.de> wrote:

> Hi Boris,
> 
> thanks for reviewing the series.
> 
> Am 09.02.26 um 15:23 schrieb Boris Brezillon:
> > On Mon,  9 Feb 2026 14:27:14 +0100
> > Thomas Zimmermann <tzimmermann@suse.de> wrote:
> >  
> >> Invoke folio_mark_accessed() in mmap page faults to add the folio to
> >> the memory manager's LRU list. Userspace invokes mmap to get the memory
> >> for software rendering. Compositors do the same when creating the final
> >> on-screen image, so keeping the pages on the LRU makes sense. This
> >> avoids paging out graphics buffers under memory pressure.
> >>
> >> In pfn_mkwrite, further invoke folio_mark_dirty() to queue the folio
> >> for writeback in case the underlying file is paged out from system
> >> memory. This rarely happens in practice, but without dirty tracking it
> >> would corrupt the buffer content.
> >>
> >> This has little effect on a system's hardware-accelerated rendering,
> >> which only uses mmap for the initial setup of textures, meshes,
> >> shaders, etc.
> >>
> >> v3:
> >> - rewrite for VM_PFNMAP
> >> v2:
> >> - adapt to changes in drm_gem_shmem_try_mmap_pmd()
> >>
> >> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> >> Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
> >> ---
> >>   drivers/gpu/drm/drm_gem_shmem_helper.c | 20 ++++++++++++++++++++
> >>   1 file changed, 20 insertions(+)
> >>
> >> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> >> index c3a054899ba3..0c86ad40a049 100644
> >> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> >> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> >> @@ -598,6 +598,9 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
> >>   	if (ret != VM_FAULT_NOPAGE)
> >>   		ret = vmf_insert_pfn(vma, vmf->address, pfn);
> >>   
> >> +	if (likely(!(ret & VM_FAULT_ERROR)))  
> > Can't we just go
> >
> > 	if (ret == VM_FAULT_NOPAGE)
> >
> > here?  
> 
> After reviewing the code in vmf_insert_pfn, I think so. All we'll see is 
> _OOM and _SIGBUS; or _NOPAGE on success.  I'll change it then.
> 
> 
> >  
> >> +		folio_mark_accessed(folio);
> >> +
> >>   out:
> >>   	dma_resv_unlock(obj->resv);
> >>   
> >> @@ -638,10 +641,27 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
> >>   	drm_gem_vm_close(vma);
> >>   }
> >>   
> >> +static vm_fault_t drm_gem_shmem_pfn_mkwrite(struct vm_fault *vmf)
> >> +{
> >> +	struct vm_area_struct *vma = vmf->vma;
> >> +	struct drm_gem_object *obj = vma->vm_private_data;
> >> +	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> >> +	pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
> >> +	struct page *page = shmem->pages[page_offset];  
> > Should we have a
> >
> > 	if (WARN_ON(!shmem->pages ||
> > 		    page_offset >= (obj->size >> PAGE_SHIFT)))
> > 		return VM_FAULT_SIGBUS;
> >
> >
> > ?  
> 
> I left it out because it doesn't seem necessary.  In the fault handler 
> in drm_gem_shmem_fault(), I can see that we could get an OOB access. But 
> we only call pfn_mkwrite() after going through _fault() first. I don't 
> see a way of getting here unless we've already tested for the page in 
> _fault().

I agree it's not supposed to happen, but isn't it what WARN_ON()s are
for (catching unexpected situations)?


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Claude review: drm/gem-shmem: Track page accessed/dirty status
  2026-02-09 13:27 [PATCH v3 0/6] drm/gem-shmem: Track page accessed/dirty status Thomas Zimmermann
                   ` (5 preceding siblings ...)
  2026-02-09 13:27 ` [PATCH v3 6/6] drm/gem-shmem: Track folio accessed/dirty status in vmap Thomas Zimmermann
@ 2026-02-11  7:04 ` Claude Code Review Bot
  6 siblings, 0 replies; 20+ messages in thread
From: Claude Code Review Bot @ 2026-02-11  7:04 UTC (permalink / raw)
  To: dri-devel-reviews

Overall Series Review

Subject: drm/gem-shmem: Track page accessed/dirty status
Author: Thomas Zimmermann <tzimmermann@suse.de>
Patches: 13
Reviewed: 2026-02-11T17:04:32.865608

---

This patch series addresses a long-standing gap in the DRM GEM-SHMEM memory management code: proper tracking of page accessed and dirty status for integration with the Linux kernel memory manager. The series demonstrates good development practices with preparatory refactorings (patches 1-4) before implementing the core functionality (patches 5-6).

**Strengths:**
- Well-structured incremental changes with clear progression
- Proper use of folio APIs for modern memory management
- Addresses technical debt (unused flags in struct drm_gem_shmem_object)
- Includes review feedback incorporation (v3 rewrite for VM_PFNMAP)
- Tested with CONFIG_DEBUG_VM=y using the bochs driver over several hours

**Concerns:**
1. **Limited testing scope**: Only tested with bochs (software renderer), not hardware drivers
2. **Performance impact unclear**: No performance testing mentioned for vmap/vunmap paths
3. **Imagination driver change**: Moving dirty flag from init to free changes behavior in subtle ways that need verification
4. **VM_PFNMAP implications**: The v3 rewrite for VM_PFNMAP is mentioned but reasoning not fully explained in commit messages

**Recommendation:** Needs broader hardware driver testing and performance validation before merge. The imagination driver change requires explicit ack from maintainers.

---

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Claude review: drm/gem-shmem: Use obj directly where appropriate in fault handler
  2026-02-09 13:27 ` [PATCH v3 1/6] drm/gem-shmem: Use obj directly where appropriate in fault handler Thomas Zimmermann
  2026-02-09 14:10   ` Boris Brezillon
@ 2026-02-11  7:04   ` Claude Code Review Bot
  1 sibling, 0 replies; 20+ messages in thread
From: Claude Code Review Bot @ 2026-02-11  7:04 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**Review:**

Simple readability improvement. The change is sound and straightforward.

```c
-	dma_resv_lock(shmem->base.resv, NULL);
+	dma_resv_lock(obj->resv, NULL);
```

This is cleaner since `obj` is already defined as `vma->vm_private_data` and `obj == &shmem->base`.

**Issues:** None

**Verdict:** ✓ Good cleanup patch

---

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Claude review: drm/gem-shmem: Test for existence of page in mmap fault handler
  2026-02-09 13:27 ` [PATCH v3 2/6] drm/gem-shmem: Test for existence of page in mmap " Thomas Zimmermann
  2026-02-09 14:10   ` Boris Brezillon
@ 2026-02-11  7:04   ` Claude Code Review Bot
  1 sibling, 0 replies; 20+ messages in thread
From: Claude Code Review Bot @ 2026-02-11  7:04 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**Review:**

This adds important defensive programming by checking for NULL pages:

```c
+	page = pages[page_offset];
+	if (drm_WARN_ON_ONCE(dev, !page))
+		goto out;
```

**Concern 1 - Error path behavior:**
The patch changes the default return value to `VM_FAULT_SIGBUS`:
```c
-	vm_fault_t ret;
+	vm_fault_t ret = VM_FAULT_SIGBUS;
```

This means if the function takes an early error path, it returns SIGBUS. However, looking at the existing error cases:
```c
if (page_offset >= num_pages || drm_WARN_ON_ONCE(dev, !shmem->pages) ||
    shmem->madv < 0)
    goto out;
```

**Question:** Is VM_FAULT_SIGBUS the correct return for all these cases? The `madv < 0` case indicates the object was marked as "don't need", which traditionally should return VM_FAULT_SIGBUS. But are the other cases correctly signaling a bus error to userspace?

**Concern 2 - Variable declaration style:**
```c
+	pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
```

The comment is helpful, but mixing initialized and uninitialized variable declarations:
```c
+	vm_fault_t ret = VM_FAULT_SIGBUS;
+	struct page **pages = shmem->pages;
+	pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff;
+	struct page *page;
+	unsigned long pfn;
```

This is acceptable but slightly inconsistent with kernel style which often groups declarations.

**Concern 3 - WARN_ON vs error handling:**
```c
+	if (drm_WARN_ON_ONCE(dev, !page))
+		goto out;
```

This uses `drm_WARN_ON_ONCE` which is appropriate for "should never happen" conditions. However, **when can `page` be NULL?** If this is a legitimate error condition (e.g., race with page eviction), WARN might be too noisy. If it's truly never supposed to happen, this is correct.

**Verdict:** ⚠ Needs clarification on the NULL page scenario and whether WARN is appropriate

---

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Claude review: drm/gem-shmem: Return vm_fault_t from drm_gem_shmem_try_map_pmd()
  2026-02-09 13:27 ` [PATCH v3 3/6] drm/gem-shmem: Return vm_fault_t from drm_gem_shmem_try_map_pmd() Thomas Zimmermann
@ 2026-02-11  7:04   ` Claude Code Review Bot
  0 siblings, 0 replies; 20+ messages in thread
From: Claude Code Review Bot @ 2026-02-11  7:04 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**Review:**

Good API improvement suggested by Matthew Wilcox. Changes from boolean return to proper fault status:

```c
-static bool drm_gem_shmem_try_map_pmd(...)
+static vm_fault_t drm_gem_shmem_try_map_pmd(...)
```

**Before:**
```c
if (drm_gem_shmem_try_map_pmd(vmf, vmf->address, page)) {
    ret = VM_FAULT_NOPAGE;
    goto out;
}
```

**After:**
```c
ret = drm_gem_shmem_try_map_pmd(vmf, vmf->address, page);
if (ret == VM_FAULT_NOPAGE)
    goto out;
```

This is cleaner and allows the function to propagate error codes properly. The function now returns:
- `VM_FAULT_NOPAGE` if PMD mapping succeeded
- `0` if PMD mapping didn't happen (fall through to regular PFN mapping)
- Potentially error codes from `vmf_insert_pfn_pmd()`

**Issue - Incomplete error handling:**
The caller only checks for `VM_FAULT_NOPAGE`:
```c
ret = drm_gem_shmem_try_map_pmd(vmf, vmf->address, page);
if (ret == VM_FAULT_NOPAGE)
    goto out;
```

**What if `vmf_insert_pfn_pmd()` returns an error?** The code will fall through and attempt regular PFN insertion. Is this intentional fallback behavior or a bug?

Looking at the code flow:
1. If `ret == VM_FAULT_NOPAGE`: success, return
2. If `ret == 0`: PMD not attempted, fall through to regular PFN (correct)
3. If `ret & VM_FAULT_ERROR`: PMD insertion failed, fall through to regular PFN (questionable)
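
If PMD insertion errors should be propagated rather than masked, a sketch (not what the patch does) would be:

```c
ret = drm_gem_shmem_try_map_pmd(vmf, vmf->address, page);
if (ret & VM_FAULT_ERROR)	/* propagate PMD insertion failures */
	goto out;
if (ret == VM_FAULT_NOPAGE)	/* PMD entry installed, done */
	goto out;

/* ret == 0: PMD mapping not attempted; fall back to a PTE mapping */
pfn = page_to_pfn(page);
ret = vmf_insert_pfn(vma, vmf->address, pfn);
```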

**Verdict:** ⚠ Need clarification on error handling - should errors from PMD insertion fall through to regular PFN or be propagated?

---

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Claude review: drm/gem-shmem: Refactor drm_gem_shmem_try_map_pmd()
  2026-02-09 13:27 ` [PATCH v3 4/6] drm/gem-shmem: Refactor drm_gem_shmem_try_map_pmd() Thomas Zimmermann
  2026-02-09 14:25   ` Boris Brezillon
@ 2026-02-11  7:04   ` Claude Code Review Bot
  1 sibling, 0 replies; 20+ messages in thread
From: Claude Code Review Bot @ 2026-02-11  7:04 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**Review:**

This refactors the PMD mapping function:

**Function rename:**
```c
-drm_gem_shmem_try_map_pmd() -> drm_gem_shmem_try_insert_pfn_pmd()
```

Better name that parallels `vmf_insert_pfn_pmd()`.

**Parameter simplification:**
```c
-static vm_fault_t drm_gem_shmem_try_map_pmd(struct vm_fault *vmf, unsigned long addr,
-                                            struct page *page)
+static vm_fault_t drm_gem_shmem_try_insert_pfn_pmd(struct vm_fault *vmf, unsigned long pfn)
```

**Issue 1 - Lost parameter:**
The old version took `addr` parameter which could differ from `vmf->address`. The new version always uses `vmf->address`:

```c
-	bool aligned = (addr & ~PMD_MASK) == (paddr & ~PMD_MASK);
+	bool aligned = (vmf->address & ~PMD_MASK) == (paddr & ~PMD_MASK);
```

Looking at the caller in previous patches:
```c
drm_gem_shmem_try_map_pmd(vmf, vmf->address, page)
```

So `addr` was always `vmf->address` anyway. This simplification is correct.

**Issue 2 - Moved check:**
```c
-	if (aligned &&
-	    pmd_none(*vmf->pmd) &&
-	    folio_test_pmd_mappable(page_folio(page))) {
+	if (aligned && pmd_none(*vmf->pmd)) {
```

The `folio_test_pmd_mappable()` check moved to the caller:
```c
+	folio = page_folio(page);
+	pfn = page_to_pfn(page);
+
+	if (folio_test_pmd_mappable(folio))
+		ret = drm_gem_shmem_try_insert_pfn_pmd(vmf, pfn);
```

Since the caller now performs the mappable check before calling the helper, PMD insertion is still only attempted for PMD-mappable folios, so the mapping behavior is unchanged. The full control flow:
```c
if (folio_test_pmd_mappable(folio))
    ret = drm_gem_shmem_try_insert_pfn_pmd(vmf, pfn);
if (ret != VM_FAULT_NOPAGE)
    ret = vmf_insert_pfn(vma, vmf->address, pfn);
```

**Issue 2b - Reliance on earlier initialization:** If `folio_test_pmd_mappable(folio)` is false, `ret` is never assigned before the second `if` statement:

```c
ret = drm_gem_shmem_try_insert_pfn_pmd(vmf, pfn);  // Only executed if PMD mappable
if (ret != VM_FAULT_NOPAGE)  // Relies on ret's initial value if not PMD mappable
    ret = vmf_insert_pfn(vma, vmf->address, pfn);
```

This works only because PATCH 2 initialized `ret = VM_FAULT_SIGBUS` at function start: in the non-mappable case `ret` stays `VM_FAULT_SIGBUS`, `ret != VM_FAULT_NOPAGE` is true, and `vmf_insert_pfn()` overwrites `ret` with the PFN insertion result. Correct, but confusing logic. Would be clearer as:
```c
if (folio_test_pmd_mappable(folio))
    ret = drm_gem_shmem_try_insert_pfn_pmd(vmf, pfn);
else
    ret = 0;  // Explicitly signal no PMD

if (ret != VM_FAULT_NOPAGE)
    ret = vmf_insert_pfn(vma, vmf->address, pfn);
```

**Issue 3 - New comment:**
```c
+		/* Read-only mapping; split upon write fault */
```

This comment suggests the PMD mapping is read-only and will split on write. Looking at the call:
```c
return vmf_insert_pfn_pmd(vmf, pfn, false);  // false = no write
```

The `false` parameter means the PMD is inserted without write permission. **But where is the write fault handled?** I don't see `pfn_mkwrite` for PMD entries. This might cause issues with dirty tracking for huge pages.

**Verdict:** ⚠ Confusing control flow and potential issue with PMD write faults

---

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Claude review: drm/gem-shmem: Track folio accessed/dirty status in mmap
  2026-02-09 13:27 ` [PATCH v3 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap Thomas Zimmermann
  2026-02-09 14:23   ` Boris Brezillon
@ 2026-02-11  7:04   ` Claude Code Review Bot
  1 sibling, 0 replies; 20+ messages in thread
From: Claude Code Review Bot @ 2026-02-11  7:04 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**Review:**

Core functionality: adds folio tracking in mmap path.

**Accessed tracking:**
```c
+	if (likely(!(ret & VM_FAULT_ERROR)))
+		folio_mark_accessed(folio);
```

This marks the folio as accessed on successful page fault. Good for LRU management.

**Issue 1 - PMD vs PTE tracking:**
The `folio_mark_accessed()` is called regardless of whether a PMD or PTE was inserted. For PMD mappings (huge pages), marking the folio accessed is correct since the whole folio was mapped. Good.

**Dirty tracking:**
```c
+static vm_fault_t drm_gem_shmem_pfn_mkwrite(struct vm_fault *vmf)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	struct drm_gem_object *obj = vma->vm_private_data;
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+	pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff;
+	struct page *page = shmem->pages[page_offset];
+	struct folio *folio = page_folio(page);
+
+	file_update_time(vma->vm_file);
+	folio_mark_dirty(folio);
+
+	return 0;
+}
```

**Issue 2 - Missing locking:**
The `pfn_mkwrite` handler accesses `shmem->pages[page_offset]` without holding `obj->resv` lock. Compare with the `fault` handler which does:

```c
dma_resv_lock(obj->resv, NULL);
page = pages[page_offset];
...
dma_resv_unlock(obj->resv);
```

**This is a race condition.** Between the time the page is faulted and mkwrite is called, the pages array could be freed/reallocated. The `pfn_mkwrite` handler needs to (see the sketch after this list):
1. Lock `obj->resv`
2. Verify `shmem->pages` is still valid
3. Verify the page at `page_offset` is still valid
4. Mark dirty
5. Unlock
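
A sketch of such a locked variant, mirroring the fault handler's pattern (the bounds check and the SIGBUS returns are suggestions, not part of the patch):

```c
static vm_fault_t drm_gem_shmem_pfn_mkwrite(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	struct drm_gem_object *obj = vma->vm_private_data;
	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
	pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
	vm_fault_t ret = 0;
	struct page *page;

	dma_resv_lock(obj->resv, NULL);

	/* Re-validate the pages array under the lock. */
	if (page_offset >= (obj->size >> PAGE_SHIFT) ||
	    drm_WARN_ON_ONCE(obj->dev, !shmem->pages)) {
		ret = VM_FAULT_SIGBUS;
		goto out;
	}

	page = shmem->pages[page_offset];
	if (drm_WARN_ON_ONCE(obj->dev, !page)) {
		ret = VM_FAULT_SIGBUS;
		goto out;
	}

	file_update_time(vma->vm_file);
	folio_mark_dirty(page_folio(page));

out:
	dma_resv_unlock(obj->resv);
	return ret;
}
```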

**Issue 3 - PMD mkwrite:**
As mentioned in PATCH 4, PMDs are inserted read-only:
```c
vmf_insert_pfn_pmd(vmf, pfn, false);  // false = read-only
```

But `pfn_mkwrite` only handles PTE granularity (it uses `vmf->pgoff` to get a single page). **There's no `huge_fault` or PMD-specific mkwrite handler.** This means:

1. PMD inserted read-only
2. Write occurs
3. PMD is split by kernel into PTEs (expensive!)
4. PTE mkwrite called

This defeats the purpose of huge page mappings for write-heavy workloads. Should either:
- Insert PMDs with write permission if mapping allows
- Implement PMD-granularity mkwrite

**Issue 4 - File timestamp update:**
```c
file_update_time(vma->vm_file);
```

This is called on every write fault. For frequently-written pages, this could cause significant overhead. Traditional page cache code only updates time periodically, not on every mkwrite. Consider using `file_modified()` with appropriate checks, or rate-limiting updates.

**Issue 5 - Return value:**
```c
return 0;
```

The function returns 0 (success) unconditionally. What if `folio_mark_dirty()` fails or the folio is in a bad state? Should validate folio state before marking dirty.

**Verdict:** ⚠ Missing locking (critical bug), PMD handling incomplete, potential performance issues

---

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Claude review: drm/gem-shmem: Track folio accessed/dirty status in vmap
  2026-02-09 13:27 ` [PATCH v3 6/6] drm/gem-shmem: Track folio accessed/dirty status in vmap Thomas Zimmermann
@ 2026-02-11  7:04   ` Claude Code Review Bot
  0 siblings, 0 replies; 20+ messages in thread
From: Claude Code Review Bot @ 2026-02-11  7:04 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**Review:**

This adds folio tracking for vmap operations:

**SHMEM helper changes:**
```c
@@ drm_gem_shmem_vmap_locked()
+			shmem->pages_mark_accessed_on_put = true;
+			shmem->pages_mark_dirty_on_put = true;
```

Sets flags so pages are marked when released. This is conservative - assumes all vmap'd pages are accessed and dirtied.

**Issue 1 - Overly conservative:**
The commit message states: "There's no means of handling dirty status in vmap as there's no write-only mapping available."

True, but this means ALL vmap operations mark pages dirty, even read-only ones. This could cause unnecessary writeback for buffers that are only read via vmap (e.g., reading texture data uploaded by client).

Consider:
- Adding a `vmap_readonly` flag to only set `accessed_on_put` (see the sketch after this list)
- Let callers specify read-only intent
- Default to current behavior for compatibility
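
A sketch of the first option in `drm_gem_shmem_vmap_locked()`, where `vmap_readonly` is a hypothetical new field in struct drm_gem_shmem_object set by callers with read-only intent:

```c
/* Hypothetical 'vmap_readonly' field; not existing API. */
shmem->pages_mark_accessed_on_put = true;
if (!shmem->vmap_readonly)
	shmem->pages_mark_dirty_on_put = true;
```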

**Issue 2 - Clearing flags:**
```c
+		shmem->pages_mark_accessed_on_put = false;
+		shmem->pages_mark_dirty_on_put = false;
```

These flags are cleared in `drm_gem_shmem_put_pages_locked()`. **But what about the imagination driver change?**

**Imagination driver change:**
```c
-	shmem_obj->pages_mark_dirty_on_put = true;  // Was in pvr_gem_object_create()
+	shmem_obj->pages_mark_dirty_on_put = true;  // Now in pvr_gem_object_free()
```

**Behavior change analysis:**

**Before:**
1. Object created → `pages_mark_dirty_on_put = true`
2. Any get/put pages cycle → marks dirty
3. Internal vmap/vunmap → marks dirty
4. Repeat...
5. Object destroyed

**After:**
1. Object created → flag not set
2. Vmap → sets `pages_mark_dirty_on_put = true`
3. Vunmap/put pages → marks dirty, **clears flag to false**
4. Next vmap → sets flag again
5. Object destroyed → sets `pages_mark_dirty_on_put = true`, then calls `drm_gem_shmem_free()`

**Issue 3 - Imagination driver race:**
At object destruction:
```c
shmem_obj->pages_mark_dirty_on_put = true;
drm_gem_shmem_free(shmem_obj);
```

If pages are still mapped at destruction (refcount > 0), setting the flag might not have the intended effect. The `drm_gem_shmem_free()` should vunmap if needed, which would trigger:
```c
drm_gem_shmem_put_pages_locked() {
    ...
    shmem->pages_mark_dirty_on_put = false;  // Clears the flag we just set!
    ...
}
```

So setting it right before free might be too late. Need to verify the exact sequence in `drm_gem_shmem_free()`.

**Issue 4 - Commit message mismatch:**
The commit says: "move the flag setting from init to cleanup. This ensures writeback of modified pages but does not interfere with the internal vmap/vunmap calls."

But how does moving to cleanup "ensure writeback of modified pages"? The flag is set, then immediately cleared by the put_pages call. The actual writeback happens during the vmap/vunmap cycles due to patch 6's changes to vmap_locked(), not because of the flag in free().

This seems like a workaround to prevent double-marking or interference, but the explanation doesn't match the code.

**Issue 5 - Missing v3d similar fix:**
Commit message: "V3d already implements this behaviour."

Should verify that v3d doesn't need similar changes. If v3d sets `pages_mark_dirty_on_put = true` at init, won't it conflict with the vmap changes that now clear the flag?

**Verdict:** ⚠ Overly conservative marking, imagination driver change needs better explanation and possible race condition, v3d compatibility unclear

---

## SUMMARY OF ISSUES

**Critical (must fix before merge):**
1. **PATCH 5**: Missing locking in `pfn_mkwrite` - race condition with page array access
2. **PATCH 6**: Imagination driver change poorly explained and potentially racy

**Major (should fix):**
3. **PATCH 3**: Error handling from PMD insertion unclear
4. **PATCH 4**: Confusing control flow with uninitialized-looking variable usage
5. **PATCH 5**: PMD write handling incomplete - defeats huge page benefits

**Minor (consider fixing):**
6. **PATCH 2**: Clarify when page can be NULL and if WARN is appropriate
7. **PATCH 5**: file_update_time() overhead on every write
8. **PATCH 6**: Overly conservative dirty marking for read-only vmap
9. **PATCH 6**: Verify v3d driver compatibility

**Testing gaps:**
10. No testing on actual hardware-accelerated drivers
11. No performance benchmarks for vmap/vunmap paths
12. No stress testing for concurrent access patterns

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2026-02-11  7:04 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-09 13:27 [PATCH v3 0/6] drm/gem-shmem: Track page accessed/dirty status Thomas Zimmermann
2026-02-09 13:27 ` [PATCH v3 1/6] drm/gem-shmem: Use obj directly where appropriate in fault handler Thomas Zimmermann
2026-02-09 14:10   ` Boris Brezillon
2026-02-11  7:04   ` Claude review: " Claude Code Review Bot
2026-02-09 13:27 ` [PATCH v3 2/6] drm/gem-shmem: Test for existence of page in mmap " Thomas Zimmermann
2026-02-09 14:10   ` Boris Brezillon
2026-02-11  7:04   ` Claude review: " Claude Code Review Bot
2026-02-09 13:27 ` [PATCH v3 3/6] drm/gem-shmem: Return vm_fault_t from drm_gem_shmem_try_map_pmd() Thomas Zimmermann
2026-02-11  7:04   ` Claude review: " Claude Code Review Bot
2026-02-09 13:27 ` [PATCH v3 4/6] drm/gem-shmem: Refactor drm_gem_shmem_try_map_pmd() Thomas Zimmermann
2026-02-09 14:25   ` Boris Brezillon
2026-02-11  7:04   ` Claude review: " Claude Code Review Bot
2026-02-09 13:27 ` [PATCH v3 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap Thomas Zimmermann
2026-02-09 14:23   ` Boris Brezillon
2026-02-09 14:46     ` Thomas Zimmermann
2026-02-09 15:01       ` Boris Brezillon
2026-02-11  7:04   ` Claude review: " Claude Code Review Bot
2026-02-09 13:27 ` [PATCH v3 6/6] drm/gem-shmem: Track folio accessed/dirty status in vmap Thomas Zimmermann
2026-02-11  7:04   ` Claude review: " Claude Code Review Bot
2026-02-11  7:04 ` Claude review: drm/gem-shmem: Track page accessed/dirty status Claude Code Review Bot
