* [PATCH] drm/shmem_helper: Make sure PMD entries get the writeable upgrade
@ 2026-03-20 15:19 Boris Brezillon
2026-03-20 16:38 ` Tommaso Merciai
` (2 more replies)
0 siblings, 3 replies; 4+ messages in thread
From: Boris Brezillon @ 2026-03-20 15:19 UTC (permalink / raw)
To: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, dri-devel
Cc: David Airlie, Simona Vetter, linux-kernel, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Zi Yan, Baolin Wang,
Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
Lance Yang, linux-mm, Boris Brezillon, kernel, Biju Das,
Tommaso Merciai
Unlike PTEs which are automatically upgraded to writeable entries if
.pfn_mkwrite() returns 0, the PMD upgrades go through .huge_fault(),
and we currently pretend to have handled the make-writeable request
even though we only ever map things read-only. Make sure we pass the
proper "write" info to vmf_insert_pfn_pmd() in that case.
This also means we have to record the mkwrite event in the .huge_fault()
path now. Move the dirty tracking logic to a
drm_gem_shmem_record_mkwrite() helper so it can also be called from
drm_gem_shmem_pfn_mkwrite().
Note that this wasn't a problem before commit 28e3918179aa
("drm/gem-shmem: Track folio accessed/dirty status in mmap"), because
the pgprot were not lowered to read-only before this commit (see the
vma_wants_writenotify() in vma_set_page_prot()).
Fixes: 28e3918179aa ("drm/gem-shmem: Track folio accessed/dirty status in mmap")
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Biju Das <biju.das.jz@bp.renesas.com>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Tommaso Merciai <tommaso.merciai.xr@bp.renesas.com>
---
This patch is based on drm-tip [2], because that's the only branch
that has both [1] and the dirty tracking changes that live in
drm-misc-next.
Also added the THP maintainers in Cc, so I can hopefully get some
feedback on the fix. For instance, I'm still unsure
drm_gem_shmem_pfn_mkwrite() is race-free (do we need some locking
there? should we call folio_mark_dirty_lock()? should we call the
fault handler directly from there and have all the dirty tracking
in this .[huge_]fault path?).
[1] https://yhbt.net/lore/dri-devel/20260319015224.46896-1-pedrodemargomes@gmail.com/
[2] https://gitlab.freedesktop.org/drm/tip
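For anyone less familiar with the fault paths involved, the PTE/PMD
asymmetry described in the commit message can be modeled in userspace
roughly as below. This is a hypothetical sketch with stand-in types, not
kernel API: after a successful .pfn_mkwrite() (return 0) the core MM
upgrades the PTE itself, while a .huge_fault() handler must request the
writeable mapping explicitly via the "write" argument it passes to
vmf_insert_pfn_pmd().

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for a page-table entry; all names here are illustrative. */
struct entry {
	bool present;
	bool writeable;
};

/* PTE path: the core MM upgrades the entry when the driver's
 * .pfn_mkwrite() reports success, so the handler never touches it. */
static void pte_write_fault(struct entry *pte)
{
	int ret = 0; /* the driver's .pfn_mkwrite() returning success */

	if (ret == 0)
		pte->writeable = true; /* done by the core MM, not the driver */
}

/* PMD path: the handler itself decides, via the "write" flag it passes
 * to vmf_insert_pfn_pmd(). Hardcoding false here is the bug being fixed. */
static void pmd_write_fault(struct entry *pmd, bool write)
{
	pmd->present = true;
	pmd->writeable = write;
}
```

With write hardcoded to false, a write fault on a PMD mapping installs a
read-only entry and immediately refaults, which is exactly the behavior
the patch addresses.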
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 46 ++++++++++++++++++--------
1 file changed, 32 insertions(+), 14 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 2062ca607833..545933c7f712 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -554,6 +554,21 @@ int drm_gem_shmem_dumb_create(struct drm_file *file, struct drm_device *dev,
}
EXPORT_SYMBOL_GPL(drm_gem_shmem_dumb_create);
+static void drm_gem_shmem_record_mkwrite(struct vm_fault *vmf)
+{
+ struct vm_area_struct *vma = vmf->vma;
+ struct drm_gem_object *obj = vma->vm_private_data;
+ struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+ loff_t num_pages = obj->size >> PAGE_SHIFT;
+ pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
+
+ if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
+ return;
+
+ file_update_time(vma->vm_file);
+ folio_mark_dirty(page_folio(shmem->pages[page_offset]));
+}
+
static vm_fault_t try_insert_pfn(struct vm_fault *vmf, unsigned int order,
unsigned long pfn)
{
@@ -566,8 +581,23 @@ static vm_fault_t try_insert_pfn(struct vm_fault *vmf, unsigned int order,
if (aligned &&
folio_test_pmd_mappable(page_folio(pfn_to_page(pfn)))) {
+ vm_fault_t ret;
+
pfn &= PMD_MASK >> PAGE_SHIFT;
- return vmf_insert_pfn_pmd(vmf, pfn, false);
+
+ /* Unlike PTEs which are automatically upgraded to
+ * writeable entries, the PMD upgrades go through
+ * .huge_fault(). Make sure we pass the "write" info
+ * along in that case.
+ * This also means we have to record the write fault
+ * here, instead of in .pfn_mkwrite().
+ */
+ ret = vmf_insert_pfn_pmd(vmf, pfn,
+ vmf->flags & FAULT_FLAG_WRITE);
+ if (ret == VM_FAULT_NOPAGE && (vmf->flags & FAULT_FLAG_WRITE))
+ drm_gem_shmem_record_mkwrite(vmf);
+
+ return ret;
}
#endif
}
@@ -655,19 +685,7 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
static vm_fault_t drm_gem_shmem_pfn_mkwrite(struct vm_fault *vmf)
{
- struct vm_area_struct *vma = vmf->vma;
- struct drm_gem_object *obj = vma->vm_private_data;
- struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
- loff_t num_pages = obj->size >> PAGE_SHIFT;
- pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
-
- if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
- return VM_FAULT_SIGBUS;
-
- file_update_time(vma->vm_file);
-
- folio_mark_dirty(page_folio(shmem->pages[page_offset]));
-
+ drm_gem_shmem_record_mkwrite(vmf);
return 0;
}
--
2.53.0
* Re: [PATCH] drm/shmem_helper: Make sure PMD entries get the writeable upgrade
2026-03-20 15:19 [PATCH] drm/shmem_helper: Make sure PMD entries get the writeable upgrade Boris Brezillon
@ 2026-03-20 16:38 ` Tommaso Merciai
2026-03-21 17:39 ` Claude review: " Claude Code Review Bot
2026-03-21 17:39 ` Claude Code Review Bot
2 siblings, 0 replies; 4+ messages in thread
From: Tommaso Merciai @ 2026-03-20 16:38 UTC (permalink / raw)
To: Boris Brezillon
Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, dri-devel,
David Airlie, Simona Vetter, linux-kernel, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Zi Yan, Baolin Wang,
Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
Lance Yang, linux-mm, kernel, Biju Das
Hi Boris,
Thanks for your patch.
On Fri, Mar 20, 2026 at 04:19:13PM +0100, Boris Brezillon wrote:
> Unlike PTEs which are automatically upgraded to writeable entries if
> .pfn_mkwrite() returns 0, the PMD upgrades go through .huge_fault(),
> and we currently pretend to have handled the make-writeable request
> even though we only ever map things read-only. Make sure we pass the
> proper "write" info to vmf_insert_pfn_pmd() in that case.
>
> This also means we have to record the mkwrite event in the .huge_fault()
> path now. Move the dirty tracking logic to a
> drm_gem_shmem_record_mkwrite() helper so it can also be called from
> drm_gem_shmem_pfn_mkwrite().
>
> Note that this wasn't a problem before commit 28e3918179aa
> ("drm/gem-shmem: Track folio accessed/dirty status in mmap"), because
> the pgprot were not lowered to read-only before this commit (see the
> vma_wants_writenotify() in vma_set_page_prot()).
>
> Fixes: 28e3918179aa ("drm/gem-shmem: Track folio accessed/dirty status in mmap")
> Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
> Cc: Biju Das <biju.das.jz@bp.renesas.com>
> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> Cc: Tommaso Merciai <tommaso.merciai.xr@bp.renesas.com>
> ---
>
> This patch is based on drm-tip [2], because that's the only branch
> that has both [1] and the dirty tracking changes that live in
> drm-misc-next.
Tested on RZ/G3E; this fixes the issue on my side.
Thanks for your work.
Tested-by: Tommaso Merciai <tommaso.merciai.xr@bp.renesas.com>
Kind Regards,
Tommaso
* Claude review: drm/shmem_helper: Make sure PMD entries get the writeable upgrade
2026-03-20 15:19 [PATCH] drm/shmem_helper: Make sure PMD entries get the writeable upgrade Boris Brezillon
2026-03-20 16:38 ` Tommaso Merciai
@ 2026-03-21 17:39 ` Claude Code Review Bot
2026-03-21 17:39 ` Claude Code Review Bot
2 siblings, 0 replies; 4+ messages in thread
From: Claude Code Review Bot @ 2026-03-21 17:39 UTC (permalink / raw)
To: dri-devel-reviews
Overall Series Review
Subject: drm/shmem_helper: Make sure PMD entries get the writeable upgrade
Author: Boris Brezillon <boris.brezillon@collabora.com>
Patches: 1
Reviewed: 2026-03-22T03:39:52.517246
---
This is a single-patch fix for a real bug: PMD (huge page) entries in the shmem GEM helper were never upgraded to writeable because `vmf_insert_pfn_pmd()` was always called with `write=false`, and `.pfn_mkwrite()` is not invoked for PMD write-upgrade faults (only `.huge_fault()` is). The fix correctly passes the write flag through and records the dirty state in the PMD path. The logic is sound. However, there is a regression in error handling that should be addressed.
---
Generated by Claude Code Patch Reviewer
* Claude review: drm/shmem_helper: Make sure PMD entries get the writeable upgrade
2026-03-20 15:19 [PATCH] drm/shmem_helper: Make sure PMD entries get the writeable upgrade Boris Brezillon
2026-03-20 16:38 ` Tommaso Merciai
2026-03-21 17:39 ` Claude review: " Claude Code Review Bot
@ 2026-03-21 17:39 ` Claude Code Review Bot
2 siblings, 0 replies; 4+ messages in thread
From: Claude Code Review Bot @ 2026-03-21 17:39 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
**The core fix is correct.** Passing `vmf->flags & FAULT_FLAG_WRITE` to `vmf_insert_pfn_pmd()` and recording dirty/mkwrite in the PMD path addresses the issue introduced by commit 28e3918179aa.
**Bug: Error handling regression in `drm_gem_shmem_pfn_mkwrite()`.**
The original code returned `VM_FAULT_SIGBUS` on the error path:
```c
if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
return VM_FAULT_SIGBUS;
```
The new `drm_gem_shmem_record_mkwrite()` is a `void` function that simply returns on the WARN path:
```c
static void drm_gem_shmem_record_mkwrite(struct vm_fault *vmf)
{
...
if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
return;
...
}
```
And `drm_gem_shmem_pfn_mkwrite()` now unconditionally returns 0 (success):
```c
static vm_fault_t drm_gem_shmem_pfn_mkwrite(struct vm_fault *vmf)
{
drm_gem_shmem_record_mkwrite(vmf);
return 0;
}
```
So if the WARN fires (pages are NULL or offset is out of range), the old code signalled `VM_FAULT_SIGBUS` to userspace, but the new code silently succeeds. This should be preserved — either by making `drm_gem_shmem_record_mkwrite()` return a `vm_fault_t`, or by keeping the validation inline in `drm_gem_shmem_pfn_mkwrite()`.
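The first of those two options could look roughly like the following
userspace model. This is a hypothetical sketch with simplified stand-in
types (`vm_fault_t`, `VM_FAULT_SIGBUS`, and the object layout are
approximations), showing only how the error code propagates:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel's types and constants. */
typedef unsigned int vm_fault_t;
#define VM_FAULT_SIGBUS 0x0002u

struct shmem_model {
	void **pages;     /* models shmem->pages */
	size_t num_pages; /* models obj->size >> PAGE_SHIFT */
};

/* Return a vm_fault_t instead of void, so the error path survives. */
static vm_fault_t record_mkwrite(struct shmem_model *shmem,
				 size_t page_offset)
{
	if (!shmem->pages || page_offset >= shmem->num_pages)
		return VM_FAULT_SIGBUS; /* preserved, not silently dropped */

	/* file_update_time() + folio_mark_dirty() would happen here */
	return 0;
}

/* The .pfn_mkwrite() wrapper now propagates instead of returning 0. */
static vm_fault_t pfn_mkwrite(struct shmem_model *shmem, size_t page_offset)
{
	return record_mkwrite(shmem, page_offset);
}
```

The PMD caller in try_insert_pfn() would then ignore or fold in the
helper's return value as appropriate; the key point is that the PTE
mkwrite path keeps signalling SIGBUS on the WARN condition.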
**Minor: Comment style.** The multi-line comment at the PMD insertion site doesn't follow kernel style:
```c
/* Unlike PTEs which are automatically upgraded to
* writeable entries, the PMD upgrades go through
```
Kernel style uses:
```c
/*
* Unlike PTEs which are automatically upgraded to
* writeable entries, the PMD upgrades go through
```
**Locking question.** The author raises this themselves in the cover letter. `drm_gem_shmem_record_mkwrite()` accesses `shmem->pages[page_offset]` without holding `dma_resv_lock`. In the PMD path (called from `drm_gem_shmem_fault()`), the lock is already held, so it's safe. In the PTE `.pfn_mkwrite()` path, the lock is not held — but this was already the case before the patch, so it's not a regression. Still, it would be worth confirming that `shmem->pages` cannot be freed while an mmap exists (it shouldn't be, due to the `pages_use_count` refcount taken in `drm_gem_shmem_mmap()`/`vm_open()`).
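The lifetime argument above can be sketched as a refcount model. This is
an illustrative userspace approximation (names like `model_vm_open` and
`pages_valid` are stand-ins, not the actual helpers): each mapping takes a
`pages_use_count` reference, and the pages array is only torn down when the
count drops to zero, so a live mapping keeps the array valid.

```c
#include <assert.h>
#include <stdbool.h>

struct shmem_model {
	int pages_use_count; /* models shmem->pages_use_count */
	bool pages_valid;    /* models shmem->pages being allocated */
};

/* Models mmap()/vm_open() taking a pages reference. */
static void model_vm_open(struct shmem_model *s)
{
	if (s->pages_use_count++ == 0)
		s->pages_valid = true; /* first user populates the array */
}

/* Models vm_close() dropping the reference. */
static void model_vm_close(struct shmem_model *s)
{
	if (--s->pages_use_count == 0)
		s->pages_valid = false; /* last user frees the array */
}
```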
**Summary:** The fix is logically correct and well-motivated. The `VM_FAULT_SIGBUS` -> silent success error handling change should be fixed before merging.
---
Generated by Claude Code Patch Reviewer