* [PATCH 6.18.y] fbdev: defio: Disconnect deferred I/O from the lifetime of struct fb_info
  [not found] <2026050146-growl-viselike-2a67@gregkh>
@ 2026-05-04 14:19 ` Sasha Levin
  2026-05-04 22:15 ` Claude review: " Claude Code Review Bot
  2026-05-04 22:15 ` Claude Code Review Bot
  0 siblings, 2 replies; 4+ messages in thread

From: Sasha Levin @ 2026-05-04 14:19 UTC (permalink / raw)
To: stable; +Cc: Thomas Zimmermann, Helge Deller, linux-fbdev, dri-devel, Sasha Levin

From: Thomas Zimmermann <tzimmermann@suse.de>

[ Upstream commit 9ded47ad003f09a94b6a710b5c47f4aa5ceb7429 ]

Hold state of deferred I/O in struct fb_deferred_io_state. Allocate an
instance as part of initializing deferred I/O and remove it only after
the final mapping has been closed. If the fb_info and the contained
deferred I/O meanwhile go away, clear struct fb_deferred_io_state.info
to invalidate the mapping. Any access will then result in a SIGBUS
signal.

Fixes a long-standing problem, where a device hot-unplug happens while
user space still has an active mapping of the graphics memory. The hot-
unplug frees the instance of struct fb_info. Accessing the memory will
operate on undefined state.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de> Fixes: 60b59beafba8 ("fbdev: mm: Deferred IO support") Cc: Helge Deller <deller@gmx.de> Cc: linux-fbdev@vger.kernel.org Cc: dri-devel@lists.freedesktop.org Cc: stable@vger.kernel.org # v2.6.22+ Signed-off-by: Helge Deller <deller@gmx.de> [ replaced kzalloc_obj(*fbdefio_state) with kzalloc(sizeof(*fbdefio_state), GFP_KERNEL) ] Signed-off-by: Sasha Levin <sashal@kernel.org> --- drivers/video/fbdev/core/fb_defio.c | 178 ++++++++++++++++++++++------ include/linux/fb.h | 4 +- 2 files changed, 145 insertions(+), 37 deletions(-) diff --git a/drivers/video/fbdev/core/fb_defio.c b/drivers/video/fbdev/core/fb_defio.c index 8df2e51e33909..0b099a89a8234 100644 --- a/drivers/video/fbdev/core/fb_defio.c +++ b/drivers/video/fbdev/core/fb_defio.c @@ -24,6 +24,75 @@ #include <linux/rmap.h> #include <linux/pagemap.h> +/* + * struct fb_deferred_io_state + */ + +struct fb_deferred_io_state { + struct kref ref; + + struct mutex lock; /* mutex that protects the pageref list */ + /* fields protected by lock */ + struct fb_info *info; +}; + +static struct fb_deferred_io_state *fb_deferred_io_state_alloc(void) +{ + struct fb_deferred_io_state *fbdefio_state; + + fbdefio_state = kzalloc(sizeof(*fbdefio_state), GFP_KERNEL); + if (!fbdefio_state) + return NULL; + + kref_init(&fbdefio_state->ref); + mutex_init(&fbdefio_state->lock); + + return fbdefio_state; +} + +static void fb_deferred_io_state_release(struct fb_deferred_io_state *fbdefio_state) +{ + mutex_destroy(&fbdefio_state->lock); + + kfree(fbdefio_state); +} + +static void fb_deferred_io_state_get(struct fb_deferred_io_state *fbdefio_state) +{ + kref_get(&fbdefio_state->ref); +} + +static void __fb_deferred_io_state_release(struct kref *ref) +{ + struct fb_deferred_io_state *fbdefio_state = + container_of(ref, struct fb_deferred_io_state, ref); + + fb_deferred_io_state_release(fbdefio_state); +} + +static void fb_deferred_io_state_put(struct fb_deferred_io_state *fbdefio_state) +{ 
+ kref_put(&fbdefio_state->ref, __fb_deferred_io_state_release); +} + +/* + * struct vm_operations_struct + */ + +static void fb_deferred_io_vm_open(struct vm_area_struct *vma) +{ + struct fb_deferred_io_state *fbdefio_state = vma->vm_private_data; + + fb_deferred_io_state_get(fbdefio_state); +} + +static void fb_deferred_io_vm_close(struct vm_area_struct *vma) +{ + struct fb_deferred_io_state *fbdefio_state = vma->vm_private_data; + + fb_deferred_io_state_put(fbdefio_state); +} + static struct page *fb_deferred_io_get_page(struct fb_info *info, unsigned long offs) { struct fb_deferred_io *fbdefio = info->fbdefio; @@ -121,25 +190,46 @@ static void fb_deferred_io_pageref_put(struct fb_deferred_io_pageref *pageref, /* this is to find and return the vmalloc-ed fb pages */ static vm_fault_t fb_deferred_io_fault(struct vm_fault *vmf) { + struct fb_info *info; unsigned long offset; struct page *page; - struct fb_info *info = vmf->vma->vm_private_data; + vm_fault_t ret; + struct fb_deferred_io_state *fbdefio_state = vmf->vma->vm_private_data; + + mutex_lock(&fbdefio_state->lock); + + info = fbdefio_state->info; + if (!info) { + ret = VM_FAULT_SIGBUS; /* our device is gone */ + goto err_mutex_unlock; + } offset = vmf->pgoff << PAGE_SHIFT; - if (offset >= info->fix.smem_len) - return VM_FAULT_SIGBUS; + if (offset >= info->fix.smem_len) { + ret = VM_FAULT_SIGBUS; + goto err_mutex_unlock; + } page = fb_deferred_io_get_page(info, offset); - if (!page) - return VM_FAULT_SIGBUS; + if (!page) { + ret = VM_FAULT_SIGBUS; + goto err_mutex_unlock; + } if (!vmf->vma->vm_file) fb_err(info, "no mapping available\n"); BUG_ON(!info->fbdefio->mapping); + mutex_unlock(&fbdefio_state->lock); + vmf->page = page; + return 0; + +err_mutex_unlock: + mutex_unlock(&fbdefio_state->lock); + return ret; } int fb_deferred_io_fsync(struct file *file, loff_t start, loff_t end, int datasync) @@ -166,15 +256,24 @@ EXPORT_SYMBOL_GPL(fb_deferred_io_fsync); * Adds a page to the dirty list. 
Call this from struct * vm_operations_struct.page_mkwrite. */ -static vm_fault_t fb_deferred_io_track_page(struct fb_info *info, unsigned long offset, - struct page *page) +static vm_fault_t fb_deferred_io_track_page(struct fb_deferred_io_state *fbdefio_state, + unsigned long offset, struct page *page) { - struct fb_deferred_io *fbdefio = info->fbdefio; + struct fb_info *info; + struct fb_deferred_io *fbdefio; struct fb_deferred_io_pageref *pageref; vm_fault_t ret; /* protect against the workqueue changing the page list */ - mutex_lock(&fbdefio->lock); + mutex_lock(&fbdefio_state->lock); + + info = fbdefio_state->info; + if (!info) { + ret = VM_FAULT_SIGBUS; /* our device is gone */ + goto err_mutex_unlock; + } + + fbdefio = info->fbdefio; pageref = fb_deferred_io_pageref_get(info, offset, page); if (WARN_ON_ONCE(!pageref)) { @@ -192,50 +291,38 @@ static vm_fault_t fb_deferred_io_track_page(struct fb_info *info, unsigned long */ lock_page(pageref->page); - mutex_unlock(&fbdefio->lock); + mutex_unlock(&fbdefio_state->lock); /* come back after delay to process the deferred IO */ schedule_delayed_work(&info->deferred_work, fbdefio->delay); return VM_FAULT_LOCKED; err_mutex_unlock: - mutex_unlock(&fbdefio->lock); + mutex_unlock(&fbdefio_state->lock); return ret; } -/* - * fb_deferred_io_page_mkwrite - Mark a page as written for deferred I/O - * @fb_info: The fbdev info structure - * @vmf: The VM fault - * - * This is a callback we get when userspace first tries to - * write to the page. We schedule a workqueue. That workqueue - * will eventually mkclean the touched pages and execute the - * deferred framebuffer IO. Then if userspace touches a page - * again, we repeat the same scheme. - * - * Returns: - * VM_FAULT_LOCKED on success, or a VM_FAULT error otherwise. 
- */ -static vm_fault_t fb_deferred_io_page_mkwrite(struct fb_info *info, struct vm_fault *vmf) +static vm_fault_t fb_deferred_io_page_mkwrite(struct fb_deferred_io_state *fbdefio_state, + struct vm_fault *vmf) { unsigned long offset = vmf->pgoff << PAGE_SHIFT; struct page *page = vmf->page; file_update_time(vmf->vma->vm_file); - return fb_deferred_io_track_page(info, offset, page); + return fb_deferred_io_track_page(fbdefio_state, offset, page); } -/* vm_ops->page_mkwrite handler */ static vm_fault_t fb_deferred_io_mkwrite(struct vm_fault *vmf) { - struct fb_info *info = vmf->vma->vm_private_data; + struct fb_deferred_io_state *fbdefio_state = vmf->vma->vm_private_data; - return fb_deferred_io_page_mkwrite(info, vmf); + return fb_deferred_io_page_mkwrite(fbdefio_state, vmf); } static const struct vm_operations_struct fb_deferred_io_vm_ops = { + .open = fb_deferred_io_vm_open, + .close = fb_deferred_io_vm_close, .fault = fb_deferred_io_fault, .page_mkwrite = fb_deferred_io_mkwrite, }; @@ -252,7 +339,10 @@ int fb_deferred_io_mmap(struct fb_info *info, struct vm_area_struct *vma) vm_flags_set(vma, VM_DONTEXPAND | VM_DONTDUMP); if (!(info->flags & FBINFO_VIRTFB)) vm_flags_set(vma, VM_IO); - vma->vm_private_data = info; + vma->vm_private_data = info->fbdefio_state; + + fb_deferred_io_state_get(info->fbdefio_state); /* released in vma->vm_ops->close() */ + return 0; } EXPORT_SYMBOL_GPL(fb_deferred_io_mmap); @@ -263,9 +353,10 @@ static void fb_deferred_io_work(struct work_struct *work) struct fb_info *info = container_of(work, struct fb_info, deferred_work.work); struct fb_deferred_io_pageref *pageref, *next; struct fb_deferred_io *fbdefio = info->fbdefio; + struct fb_deferred_io_state *fbdefio_state = info->fbdefio_state; /* here we wrprotect the page's mappings, then do all deferred IO. 
*/ - mutex_lock(&fbdefio->lock); + mutex_lock(&fbdefio_state->lock); #ifdef CONFIG_MMU list_for_each_entry(pageref, &fbdefio->pagereflist, list) { struct page *page = pageref->page; @@ -283,12 +374,13 @@ static void fb_deferred_io_work(struct work_struct *work) list_for_each_entry_safe(pageref, next, &fbdefio->pagereflist, list) fb_deferred_io_pageref_put(pageref, info); - mutex_unlock(&fbdefio->lock); + mutex_unlock(&fbdefio_state->lock); } int fb_deferred_io_init(struct fb_info *info) { struct fb_deferred_io *fbdefio = info->fbdefio; + struct fb_deferred_io_state *fbdefio_state; struct fb_deferred_io_pageref *pagerefs; unsigned long npagerefs; int ret; @@ -298,7 +390,11 @@ int fb_deferred_io_init(struct fb_info *info) if (WARN_ON(!info->fix.smem_len)) return -EINVAL; - mutex_init(&fbdefio->lock); + fbdefio_state = fb_deferred_io_state_alloc(); + if (!fbdefio_state) + return -ENOMEM; + fbdefio_state->info = info; + INIT_DELAYED_WORK(&info->deferred_work, fb_deferred_io_work); INIT_LIST_HEAD(&fbdefio->pagereflist); if (fbdefio->delay == 0) /* set a default of 1 s */ @@ -315,10 +411,12 @@ int fb_deferred_io_init(struct fb_info *info) info->npagerefs = npagerefs; info->pagerefs = pagerefs; + info->fbdefio_state = fbdefio_state; + return 0; err: - mutex_destroy(&fbdefio->lock); + fb_deferred_io_state_release(fbdefio_state); return ret; } EXPORT_SYMBOL_GPL(fb_deferred_io_init); @@ -352,11 +450,19 @@ EXPORT_SYMBOL_GPL(fb_deferred_io_release); void fb_deferred_io_cleanup(struct fb_info *info) { struct fb_deferred_io *fbdefio = info->fbdefio; + struct fb_deferred_io_state *fbdefio_state = info->fbdefio_state; fb_deferred_io_lastclose(info); + info->fbdefio_state = NULL; + + mutex_lock(&fbdefio_state->lock); + fbdefio_state->info = NULL; + mutex_unlock(&fbdefio_state->lock); + + fb_deferred_io_state_put(fbdefio_state); + kvfree(info->pagerefs); - mutex_destroy(&fbdefio->lock); fbdefio->mapping = NULL; } EXPORT_SYMBOL_GPL(fb_deferred_io_cleanup); diff --git 
a/include/linux/fb.h b/include/linux/fb.h index c3302d5135466..da2fdabd18cb3 100644 --- a/include/linux/fb.h +++ b/include/linux/fb.h @@ -217,13 +217,14 @@ struct fb_deferred_io { unsigned long delay; bool sort_pagereflist; /* sort pagelist by offset */ int open_count; /* number of opened files; protected by fb_info lock */ - struct mutex lock; /* mutex that protects the pageref list */ struct list_head pagereflist; /* list of pagerefs for touched pages */ struct address_space *mapping; /* page cache object for fb device */ /* callback */ struct page *(*get_page)(struct fb_info *info, unsigned long offset); void (*deferred_io)(struct fb_info *info, struct list_head *pagelist); }; + +struct fb_deferred_io_state; #endif /* @@ -490,6 +491,7 @@ struct fb_info { unsigned long npagerefs; struct fb_deferred_io_pageref *pagerefs; struct fb_deferred_io *fbdefio; + struct fb_deferred_io_state *fbdefio_state; #endif const struct fb_ops *fbops; -- 2.53.0 ^ permalink raw reply related [flat|nested] 4+ messages in thread
* Claude review: fbdev: defio: Disconnect deferred I/O from the lifetime of struct fb_info
  2026-05-04 14:19 ` [PATCH 6.18.y] fbdev: defio: Disconnect deferred I/O from the lifetime of struct fb_info Sasha Levin
@ 2026-05-04 22:15 ` Claude Code Review Bot
  2026-05-04 22:15 ` Claude Code Review Bot
  1 sibling, 0 replies; 4+ messages in thread

From: Claude Code Review Bot @ 2026-05-04 22:15 UTC (permalink / raw)
To: dri-devel-reviews

Overall Series Review

Subject: fbdev: defio: Disconnect deferred I/O from the lifetime of struct fb_info
Author: Sasha Levin <sashal@kernel.org>
Patches: 1
Reviewed: 2026-05-05T08:15:16.910081

---

This is a single patch being backported to 6.18.y stable from upstream commit
`9ded47ad003f`. It introduces a reference-counted `struct fb_deferred_io_state`
to decouple the lifetime of deferred I/O state from `struct fb_info`, fixing a
long-standing use-after-free on device hot-unplug while userspace still holds
active framebuffer mappings.

The design is sound: a kref-managed state object is allocated at
`fb_deferred_io_init()`, each VMA takes a reference via `vm_ops->open/close`,
and the info pointer is NULLed under mutex during cleanup so that subsequent
fault/mkwrite accesses return `VM_FAULT_SIGBUS` instead of accessing freed
memory. The stable backport adaptation (`kzalloc_obj` →
`kzalloc(sizeof(*fbdefio_state), GFP_KERNEL)`) is correctly noted. The
pagereflist, open_count, and mapping remain in their original locations
(`fb_deferred_io` / `fb_info`), which is appropriate for a minimal backport —
drm-next has since moved those into the state struct in follow-up commits.

**Recommendation: Acceptable for stable.** The patch is a faithful minimal
backport of an important fix. The concerns noted below are either pre-existing
in the upstream code or relate to follow-up work not included in this backport.

---

Generated by Claude Code Patch Reviewer

^ permalink raw reply [flat|nested] 4+ messages in thread
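[Editorial example] The lifecycle summarized in the review above (a state object born with one reference at init, one extra reference per mapping, and an info pointer cleared under a mutex at teardown) can be modeled in ordinary userspace C. This is a minimal sketch under those assumptions, not the kernel code: the `demo_`/`state_` names are hypothetical, and a plain integer stands in for the kernel's atomic `struct kref`.

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

#define FAULT_OK     0
#define FAULT_SIGBUS 1         /* stands in for VM_FAULT_SIGBUS */

struct demo_state {
	int refcount;              /* stands in for struct kref (not atomic here) */
	pthread_mutex_t lock;      /* protects ->info */
	void *info;                /* NULL once the device is gone */
};

static struct demo_state *state_alloc(void *info)
{
	struct demo_state *st = calloc(1, sizeof(*st));

	if (!st)
		return NULL;
	st->refcount = 1;          /* init reference, dropped by cleanup */
	pthread_mutex_init(&st->lock, NULL);
	st->info = info;
	return st;
}

static void state_get(struct demo_state *st)
{
	st->refcount++;            /* one reference per active mapping */
}

/* Returns 1 if this put freed the state, 0 otherwise. */
static int state_put(struct demo_state *st)
{
	if (--st->refcount > 0)
		return 0;
	pthread_mutex_destroy(&st->lock);
	free(st);
	return 1;
}

/* Models the fault path: valid only while ->info is set. */
static int state_fault(struct demo_state *st)
{
	int ret;

	pthread_mutex_lock(&st->lock);
	ret = st->info ? FAULT_OK : FAULT_SIGBUS;
	pthread_mutex_unlock(&st->lock);
	return ret;
}

/* Models device teardown: invalidate under the lock, drop init ref. */
static int state_cleanup(struct demo_state *st)
{
	pthread_mutex_lock(&st->lock);
	st->info = NULL;           /* later faults see SIGBUS */
	pthread_mutex_unlock(&st->lock);
	return state_put(st);
}
```

With one mapping open, `state_cleanup()` drops only the init reference; the mapping's later `state_put()` performs the final free, mirroring how the kernel state object outlives `struct fb_info` until the last `vm_close()`.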
* Claude review: fbdev: defio: Disconnect deferred I/O from the lifetime of struct fb_info
  2026-05-04 14:19 ` [PATCH 6.18.y] fbdev: defio: Disconnect deferred I/O from the lifetime of struct fb_info Sasha Levin
  2026-05-04 22:15 ` Claude review: " Claude Code Review Bot
@ 2026-05-04 22:15 ` Claude Code Review Bot
  1 sibling, 0 replies; 4+ messages in thread

From: Claude Code Review Bot @ 2026-05-04 22:15 UTC (permalink / raw)
To: dri-devel-reviews

Patch Review

**Positive aspects:**

1. **Correct kref lifecycle**: `fb_deferred_io_state_alloc()` initializes the
   refcount to 1, `fb_deferred_io_mmap()` takes a reference per VMA, `vm_close`
   puts it, and `fb_deferred_io_cleanup()` drops the init reference. This
   ensures the state survives until the last VMA is closed.

2. **Proper mutex protection for info access**: All paths that dereference
   `fbdefio_state->info` (fault, track_page, work) hold `fbdefio_state->lock`.
   The cleanup path sets `info = NULL` under the same lock:

   ```c
   mutex_lock(&fbdefio_state->lock);
   fbdefio_state->info = NULL;
   mutex_unlock(&fbdefio_state->lock);
   ```

3. **Clean SIGBUS on hot-unplug**: Both `fb_deferred_io_fault` and
   `fb_deferred_io_track_page` correctly check for NULL info and return
   `VM_FAULT_SIGBUS`:

   ```c
   info = fbdefio_state->info;
   if (!info) {
           ret = VM_FAULT_SIGBUS; /* our device is gone */
           goto err_mutex_unlock;
   }
   ```

4. **Lock scope expansion is correct**: The mutex previously only protected
   pageref list operations in `track_page` and `work`. Now it additionally
   protects the fault path, which is necessary since fault now needs to
   validate the info pointer. No deadlock is introduced since none of the
   callees (`fb_deferred_io_get_page`, `vmalloc_to_page`, `get_page`) take
   this mutex.

**Observations / minor concerns:**

5. **Race window in cleanup**: Between `flush_delayed_work` and setting
   `fbdefio_state->info = NULL`, a concurrent `page_mkwrite` could still see a
   valid `info`, complete `track_page`, and call
   `schedule_delayed_work(&info->deferred_work, ...)`. The scheduled work
   would then fire after `info` is freed. This is a narrow window and exists
   identically in the upstream code, so it's not a regression introduced by
   this backport. In practice, `flush_delayed_work` makes this extremely
   difficult to hit because all dirty pages are processed and write-protected,
   requiring a new user-space write to race with teardown.

   ```c
   fb_deferred_io_lastclose(info);      /* flush_delayed_work */

   info->fbdefio_state = NULL;

   mutex_lock(&fbdefio_state->lock);
   fbdefio_state->info = NULL;          /* <-- race window above */
   mutex_unlock(&fbdefio_state->lock);
   ```

6. **`schedule_delayed_work` after mutex unlock**: In
   `fb_deferred_io_track_page`, after releasing the lock, `info` and `fbdefio`
   are accessed without protection:

   ```c
   mutex_unlock(&fbdefio_state->lock);

   /* come back after delay to process the deferred IO */
   schedule_delayed_work(&info->deferred_work, fbdefio->delay);
   ```

   This is the same pattern as the pre-existing code and upstream. It's safe
   in the common case because cleanup flushes first, but is related to
   concern #5 above.

7. **Missing `module_get`/`module_put`**: The drm-next version of this code
   adds `try_module_get(THIS_MODULE)` in `fb_deferred_io_mmap()` and
   `fb_deferred_io_vm_open()`, with corresponding `module_put` in `vm_close`.
   These are absent in this backport. Without them, the fbdev module could be
   unloaded while VMAs still reference the `vm_ops` function pointers. This is
   likely addressed by a separate upstream commit and would be a good
   follow-up backport.

8. **`fb_deferred_io_fsync` not protected**: The fsync path still accesses
   `info` directly via `file->private_data` without going through the state
   object:

   ```c
   struct fb_info *info = file->private_data;
   ...
   flush_delayed_work(&info->deferred_work);
   ```

   If fsync races with hot-unplug after `info` is freed, this is a UAF. Same
   as upstream — the fbdev framework's `fb_release` ordering likely prevents
   this in practice, but it's worth noting as an area for future hardening.

9. **`BUG_ON` retained**: The backport keeps `BUG_ON(!info->fbdefio->mapping)`
   in the fault handler, whereas drm-next softened this to `fb_WARN_ON_ONCE`.
   This is correct for a stable backport (match the existing code style), but
   a crash here on a misconfigured driver is harsh.

10. **Backport adaptation is clean**: The `kzalloc(sizeof(*fbdefio_state),
    GFP_KERNEL)` substitution for `kzalloc_obj()` is the only manual
    adaptation and is correctly noted in the commit trailer. The alloc
    function is simpler than upstream's
    `fb_deferred_io_state_alloc(unsigned long len)` because pageref allocation
    remains in `fb_deferred_io_init()` — this is intentional and correct for a
    minimal backport.

---

Generated by Claude Code Patch Reviewer

^ permalink raw reply [flat|nested] 4+ messages in thread
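[Editorial example] Concerns 5 and 6 in the review above both hinge on the interplay between `track_page` and the deferred worker: a write records a page under the state lock and arms the delayed work, and a later flush drains the whole dirty list. That producer/consumer shape can be reduced to a few lines of userspace C. This is an illustrative model with hypothetical `demo_`/`track_page`/`flush_work` names, not the kernel code; a fixed array stands in for the pageref list and a flag for the delayed work item.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_PAGES 8

struct demo_defio {
	size_t dirty[MAX_PAGES];   /* stands in for fbdefio->pagereflist */
	size_t ndirty;
	int work_scheduled;        /* stands in for the delayed work item */
	size_t flushed;            /* pages handed to the deferred_io pass */
};

/* Models fb_deferred_io_track_page(): record the page, arm the worker. */
static int track_page(struct demo_defio *d, size_t offset)
{
	size_t i;

	for (i = 0; i < d->ndirty; i++)
		if (d->dirty[i] == offset) /* page already tracked */
			goto schedule;
	if (d->ndirty == MAX_PAGES)
		return -1;
	d->dirty[d->ndirty++] = offset;
schedule:
	d->work_scheduled = 1;
	return 0;
}

/* Models fb_deferred_io_work(): drain the dirty list in one batch. */
static size_t flush_work(struct demo_defio *d)
{
	size_t n = d->ndirty;

	d->flushed += n;           /* all dirty pages processed together */
	d->ndirty = 0;             /* pagerefs put back, pages write-protected */
	d->work_scheduled = 0;
	return n;
}
```

The point of the batching is visible in the model: repeated writes to the same page cost one list entry and one eventual flush, which is why the real race window is narrow only after a flush has drained everything.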
* [PATCH v2 0/4] fbdev: defio: Protect against device/module removal
@ 2026-02-24 8:25 Thomas Zimmermann
2026-02-24 8:25 ` [PATCH v2 1/4] fbdev: defio: Disconnect deferred I/O from the lifetime of struct fb_info Thomas Zimmermann
0 siblings, 1 reply; 4+ messages in thread
From: Thomas Zimmermann @ 2026-02-24 8:25 UTC (permalink / raw)
To: deller, simona, jayalk; +Cc: linux-fbdev, dri-devel, Thomas Zimmermann
There's a long-standing bug in defio where the framebuffer device or
module gets removed while mmap'ed areas of the framebuffer memory
persist in userspace. Page faults in the area then operate on undefined
state.
Patches 1 and 2 fix these problems. Patches 3 and 4 build upon the fix
and put defio state into the correct places.
v2:
- use alloc_obj() functions
Thomas Zimmermann (4):
fbdev: defio: Disconnect deferred I/O from the lifetime of struct
fb_info
fbdev: defio: Keep module reference from VMAs
fbdev: defio: Move variable state into struct fb_deferred_io_state
fbdev: defio: Move pageref array to struct fb_deferred_io_state
drivers/video/fbdev/core/fb_defio.c | 266 ++++++++++++++++++++--------
include/linux/fb.h | 9 +-
2 files changed, 195 insertions(+), 80 deletions(-)
base-commit: 1c44015babd759b8e5234084dffcc08a0b784333
--
2.52.0
^ permalink raw reply [flat|nested] 4+ messages in thread

* [PATCH v2 1/4] fbdev: defio: Disconnect deferred I/O from the lifetime of struct fb_info
  2026-02-24 8:25 [PATCH v2 0/4] fbdev: defio: Protect against device/module removal Thomas Zimmermann
@ 2026-02-24 8:25 ` Thomas Zimmermann
  2026-02-27 5:29 ` Claude review: " Claude Code Review Bot
  0 siblings, 1 reply; 4+ messages in thread

From: Thomas Zimmermann @ 2026-02-24 8:25 UTC (permalink / raw)
To: deller, simona, jayalk; +Cc: linux-fbdev, dri-devel, Thomas Zimmermann, stable

Hold state of deferred I/O in struct fb_deferred_io_state. Allocate an
instance as part of initializing deferred I/O and remove it only after
the final mapping has been closed. If the fb_info and the contained
deferred I/O meanwhile go away, clear struct fb_deferred_io_state.info
to invalidate the mapping. Any access will then result in a SIGBUS
signal.

Fixes a long-standing problem, where a device hot-unplug happens while
user space still has an active mapping of the graphics memory. The hot-
unplug frees the instance of struct fb_info. Accessing the memory will
operate on undefined state.
v2: - use kzalloc_obj() Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de> Fixes: 60b59beafba8 ("fbdev: mm: Deferred IO support") Cc: Helge Deller <deller@gmx.de> Cc: linux-fbdev@vger.kernel.org Cc: dri-devel@lists.freedesktop.org Cc: <stable@vger.kernel.org> # v2.6.22+ --- drivers/video/fbdev/core/fb_defio.c | 178 ++++++++++++++++++++++------ include/linux/fb.h | 4 +- 2 files changed, 145 insertions(+), 37 deletions(-) diff --git a/drivers/video/fbdev/core/fb_defio.c b/drivers/video/fbdev/core/fb_defio.c index ca48b89a323d..93bd2f696fa4 100644 --- a/drivers/video/fbdev/core/fb_defio.c +++ b/drivers/video/fbdev/core/fb_defio.c @@ -24,6 +24,75 @@ #include <linux/rmap.h> #include <linux/pagemap.h> +/* + * struct fb_deferred_io_state + */ + +struct fb_deferred_io_state { + struct kref ref; + + struct mutex lock; /* mutex that protects the pageref list */ + /* fields protected by lock */ + struct fb_info *info; +}; + +static struct fb_deferred_io_state *fb_deferred_io_state_alloc(void) +{ + struct fb_deferred_io_state *fbdefio_state; + + fbdefio_state = kzalloc_obj(*fbdefio_state); + if (!fbdefio_state) + return NULL; + + kref_init(&fbdefio_state->ref); + mutex_init(&fbdefio_state->lock); + + return fbdefio_state; +} + +static void fb_deferred_io_state_release(struct fb_deferred_io_state *fbdefio_state) +{ + mutex_destroy(&fbdefio_state->lock); + + kfree(fbdefio_state); +} + +static void fb_deferred_io_state_get(struct fb_deferred_io_state *fbdefio_state) +{ + kref_get(&fbdefio_state->ref); +} + +static void __fb_deferred_io_state_release(struct kref *ref) +{ + struct fb_deferred_io_state *fbdefio_state = + container_of(ref, struct fb_deferred_io_state, ref); + + fb_deferred_io_state_release(fbdefio_state); +} + +static void fb_deferred_io_state_put(struct fb_deferred_io_state *fbdefio_state) +{ + kref_put(&fbdefio_state->ref, __fb_deferred_io_state_release); +} + +/* + * struct vm_operations_struct + */ + +static void fb_deferred_io_vm_open(struct vm_area_struct 
*vma) +{ + struct fb_deferred_io_state *fbdefio_state = vma->vm_private_data; + + fb_deferred_io_state_get(fbdefio_state); +} + +static void fb_deferred_io_vm_close(struct vm_area_struct *vma) +{ + struct fb_deferred_io_state *fbdefio_state = vma->vm_private_data; + + fb_deferred_io_state_put(fbdefio_state); +} + static struct page *fb_deferred_io_get_page(struct fb_info *info, unsigned long offs) { struct fb_deferred_io *fbdefio = info->fbdefio; @@ -121,25 +190,46 @@ static void fb_deferred_io_pageref_put(struct fb_deferred_io_pageref *pageref, /* this is to find and return the vmalloc-ed fb pages */ static vm_fault_t fb_deferred_io_fault(struct vm_fault *vmf) { + struct fb_info *info; unsigned long offset; struct page *page; - struct fb_info *info = vmf->vma->vm_private_data; + vm_fault_t ret; + struct fb_deferred_io_state *fbdefio_state = vmf->vma->vm_private_data; + + mutex_lock(&fbdefio_state->lock); + + info = fbdefio_state->info; + if (!info) { + ret = VM_FAULT_SIGBUS; /* our device is gone */ + goto err_mutex_unlock; + } offset = vmf->pgoff << PAGE_SHIFT; - if (offset >= info->fix.smem_len) - return VM_FAULT_SIGBUS; + if (offset >= info->fix.smem_len) { + ret = VM_FAULT_SIGBUS; + goto err_mutex_unlock; + } page = fb_deferred_io_get_page(info, offset); - if (!page) - return VM_FAULT_SIGBUS; + if (!page) { + ret = VM_FAULT_SIGBUS; + goto err_mutex_unlock; + } if (!vmf->vma->vm_file) fb_err(info, "no mapping available\n"); BUG_ON(!info->fbdefio->mapping); + mutex_unlock(&fbdefio_state->lock); + vmf->page = page; + return 0; + +err_mutex_unlock: + mutex_unlock(&fbdefio_state->lock); + return ret; } int fb_deferred_io_fsync(struct file *file, loff_t start, loff_t end, int datasync) @@ -166,15 +256,24 @@ EXPORT_SYMBOL_GPL(fb_deferred_io_fsync); * Adds a page to the dirty list. Call this from struct * vm_operations_struct.page_mkwrite. 
*/ -static vm_fault_t fb_deferred_io_track_page(struct fb_info *info, unsigned long offset, - struct page *page) +static vm_fault_t fb_deferred_io_track_page(struct fb_deferred_io_state *fbdefio_state, + unsigned long offset, struct page *page) { - struct fb_deferred_io *fbdefio = info->fbdefio; + struct fb_info *info; + struct fb_deferred_io *fbdefio; struct fb_deferred_io_pageref *pageref; vm_fault_t ret; /* protect against the workqueue changing the page list */ - mutex_lock(&fbdefio->lock); + mutex_lock(&fbdefio_state->lock); + + info = fbdefio_state->info; + if (!info) { + ret = VM_FAULT_SIGBUS; /* our device is gone */ + goto err_mutex_unlock; + } + + fbdefio = info->fbdefio; pageref = fb_deferred_io_pageref_get(info, offset, page); if (WARN_ON_ONCE(!pageref)) { @@ -192,50 +291,38 @@ static vm_fault_t fb_deferred_io_track_page(struct fb_info *info, unsigned long */ lock_page(pageref->page); - mutex_unlock(&fbdefio->lock); + mutex_unlock(&fbdefio_state->lock); /* come back after delay to process the deferred IO */ schedule_delayed_work(&info->deferred_work, fbdefio->delay); return VM_FAULT_LOCKED; err_mutex_unlock: - mutex_unlock(&fbdefio->lock); + mutex_unlock(&fbdefio_state->lock); return ret; } -/* - * fb_deferred_io_page_mkwrite - Mark a page as written for deferred I/O - * @fb_info: The fbdev info structure - * @vmf: The VM fault - * - * This is a callback we get when userspace first tries to - * write to the page. We schedule a workqueue. That workqueue - * will eventually mkclean the touched pages and execute the - * deferred framebuffer IO. Then if userspace touches a page - * again, we repeat the same scheme. - * - * Returns: - * VM_FAULT_LOCKED on success, or a VM_FAULT error otherwise. 
- */ -static vm_fault_t fb_deferred_io_page_mkwrite(struct fb_info *info, struct vm_fault *vmf) +static vm_fault_t fb_deferred_io_page_mkwrite(struct fb_deferred_io_state *fbdefio_state, + struct vm_fault *vmf) { unsigned long offset = vmf->pgoff << PAGE_SHIFT; struct page *page = vmf->page; file_update_time(vmf->vma->vm_file); - return fb_deferred_io_track_page(info, offset, page); + return fb_deferred_io_track_page(fbdefio_state, offset, page); } -/* vm_ops->page_mkwrite handler */ static vm_fault_t fb_deferred_io_mkwrite(struct vm_fault *vmf) { - struct fb_info *info = vmf->vma->vm_private_data; + struct fb_deferred_io_state *fbdefio_state = vmf->vma->vm_private_data; - return fb_deferred_io_page_mkwrite(info, vmf); + return fb_deferred_io_page_mkwrite(fbdefio_state, vmf); } static const struct vm_operations_struct fb_deferred_io_vm_ops = { + .open = fb_deferred_io_vm_open, + .close = fb_deferred_io_vm_close, .fault = fb_deferred_io_fault, .page_mkwrite = fb_deferred_io_mkwrite, }; @@ -252,7 +339,10 @@ int fb_deferred_io_mmap(struct fb_info *info, struct vm_area_struct *vma) vm_flags_set(vma, VM_DONTEXPAND | VM_DONTDUMP); if (!(info->flags & FBINFO_VIRTFB)) vm_flags_set(vma, VM_IO); - vma->vm_private_data = info; + vma->vm_private_data = info->fbdefio_state; + + fb_deferred_io_state_get(info->fbdefio_state); /* released in vma->vm_ops->close() */ + return 0; } EXPORT_SYMBOL_GPL(fb_deferred_io_mmap); @@ -263,9 +353,10 @@ static void fb_deferred_io_work(struct work_struct *work) struct fb_info *info = container_of(work, struct fb_info, deferred_work.work); struct fb_deferred_io_pageref *pageref, *next; struct fb_deferred_io *fbdefio = info->fbdefio; + struct fb_deferred_io_state *fbdefio_state = info->fbdefio_state; /* here we wrprotect the page's mappings, then do all deferred IO. 
*/ - mutex_lock(&fbdefio->lock); + mutex_lock(&fbdefio_state->lock); #ifdef CONFIG_MMU list_for_each_entry(pageref, &fbdefio->pagereflist, list) { struct page *page = pageref->page; @@ -283,12 +374,13 @@ static void fb_deferred_io_work(struct work_struct *work) list_for_each_entry_safe(pageref, next, &fbdefio->pagereflist, list) fb_deferred_io_pageref_put(pageref, info); - mutex_unlock(&fbdefio->lock); + mutex_unlock(&fbdefio_state->lock); } int fb_deferred_io_init(struct fb_info *info) { struct fb_deferred_io *fbdefio = info->fbdefio; + struct fb_deferred_io_state *fbdefio_state; struct fb_deferred_io_pageref *pagerefs; unsigned long npagerefs; int ret; @@ -298,7 +390,11 @@ int fb_deferred_io_init(struct fb_info *info) if (WARN_ON(!info->fix.smem_len)) return -EINVAL; - mutex_init(&fbdefio->lock); + fbdefio_state = fb_deferred_io_state_alloc(); + if (!fbdefio_state) + return -ENOMEM; + fbdefio_state->info = info; + INIT_DELAYED_WORK(&info->deferred_work, fb_deferred_io_work); INIT_LIST_HEAD(&fbdefio->pagereflist); if (fbdefio->delay == 0) /* set a default of 1 s */ @@ -315,10 +411,12 @@ int fb_deferred_io_init(struct fb_info *info) info->npagerefs = npagerefs; info->pagerefs = pagerefs; + info->fbdefio_state = fbdefio_state; + return 0; err: - mutex_destroy(&fbdefio->lock); + fb_deferred_io_state_release(fbdefio_state); return ret; } EXPORT_SYMBOL_GPL(fb_deferred_io_init); @@ -352,11 +450,19 @@ EXPORT_SYMBOL_GPL(fb_deferred_io_release); void fb_deferred_io_cleanup(struct fb_info *info) { struct fb_deferred_io *fbdefio = info->fbdefio; + struct fb_deferred_io_state *fbdefio_state = info->fbdefio_state; fb_deferred_io_lastclose(info); + info->fbdefio_state = NULL; + + mutex_lock(&fbdefio_state->lock); + fbdefio_state->info = NULL; + mutex_unlock(&fbdefio_state->lock); + + fb_deferred_io_state_put(fbdefio_state); + kvfree(info->pagerefs); - mutex_destroy(&fbdefio->lock); fbdefio->mapping = NULL; } EXPORT_SYMBOL_GPL(fb_deferred_io_cleanup); diff --git 
a/include/linux/fb.h b/include/linux/fb.h index 6d4a58084fd5..aed17567fe50 100644 --- a/include/linux/fb.h +++ b/include/linux/fb.h @@ -218,13 +218,14 @@ struct fb_deferred_io { unsigned long delay; bool sort_pagereflist; /* sort pagelist by offset */ int open_count; /* number of opened files; protected by fb_info lock */ - struct mutex lock; /* mutex that protects the pageref list */ struct list_head pagereflist; /* list of pagerefs for touched pages */ struct address_space *mapping; /* page cache object for fb device */ /* callback */ struct page *(*get_page)(struct fb_info *info, unsigned long offset); void (*deferred_io)(struct fb_info *info, struct list_head *pagelist); }; + +struct fb_deferred_io_state; #endif /* @@ -487,6 +488,7 @@ struct fb_info { unsigned long npagerefs; struct fb_deferred_io_pageref *pagerefs; struct fb_deferred_io *fbdefio; + struct fb_deferred_io_state *fbdefio_state; #endif const struct fb_ops *fbops; -- 2.52.0 ^ permalink raw reply related [flat|nested] 4+ messages in thread
* Claude review: fbdev: defio: Disconnect deferred I/O from the lifetime of struct fb_info
  2026-02-24 8:25 ` [PATCH v2 1/4] fbdev: defio: Disconnect deferred I/O from the lifetime of struct fb_info Thomas Zimmermann
@ 2026-02-27 5:29 ` Claude Code Review Bot
  0 siblings, 0 replies; 4+ messages in thread

From: Claude Code Review Bot @ 2026-02-27 5:29 UTC (permalink / raw)
To: dri-devel-reviews

Patch Review

This is the core fix, tagged with `Cc: stable` back to v2.6.22. It introduces
`struct fb_deferred_io_state` with kref-based lifetime management, decoupled
from `struct fb_info`.

**Key design decisions (all good):**

- VMAs now store `fbdefio_state` in `vm_private_data` instead of `info`, and
  hold a kref reference via `vm_open`/`vm_close` ops.
- The state lock replaces `fbdefio->lock` for protecting the pageref list.
- Fault and mkwrite handlers check `fbdefio_state->info != NULL` under the
  lock, returning `VM_FAULT_SIGBUS` when the device is gone.

**Concern: race between schedule_delayed_work and cleanup**

In `fb_deferred_io_track_page()`, after unlocking the state mutex:

```c
mutex_unlock(&fbdefio_state->lock);

/* come back after delay to process the deferred IO */
schedule_delayed_work(&info->deferred_work, fbdefio->delay);
```

At this point `info` is a local copy obtained under the lock, but the lock is
no longer held. A concurrent `fb_deferred_io_cleanup()` could proceed through:

```c
fb_deferred_io_lastclose(info);      /* cancel_delayed_work_sync */

info->fbdefio_state = NULL;

mutex_lock(&fbdefio_state->lock);
fbdefio_state->info = NULL;
mutex_unlock(&fbdefio_state->lock);

fb_deferred_io_state_put(fbdefio_state);

kvfree(info->pagerefs);
```

If cleanup's `cancel_delayed_work_sync` runs *before* thread A's
`schedule_delayed_work`, the work gets re-queued on potentially
soon-to-be-freed `info`. When the work fires, `fb_deferred_io_work` reads
`info->fbdefio_state`, which could be NULL.

This is a very narrow window and is essentially pre-existing (the old code had
the same pattern with `fbdefio->lock`), so it's not a regression. But if you
wanted to close it, one option would be to have `fb_deferred_io_work` check
`fbdefio_state->info` under the lock before proceeding, and to ensure
`info->fbdefio_state` is not set to NULL before the lock/NULL/unlock sequence.
Alternatively, moving `schedule_delayed_work` inside the lock would narrow
(but not fully eliminate) the window.

**Minor nit:** In `fb_deferred_io_cleanup`, setting `info->fbdefio_state =
NULL` is done without holding the state lock. This is fine for the
fault/mkwrite paths (they go through the VMA's `vm_private_data`, not through
`info`), but `fb_deferred_io_work` reads `info->fbdefio_state` without
synchronization. This is safe only because `cancel_delayed_work_sync` prevents
the work from running concurrently with this assignment — but see the race
discussion above.

**Code quality is good.** The refcounting is straightforward, the SIGBUS
return for dead devices is the right behavior, and the locking strategy is
reasonable.

---

Generated by Claude Code Patch Reviewer

^ permalink raw reply [flat|nested] 4+ messages in thread
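[Editorial example] The hardening option suggested in the review above, having the work function validate the info pointer under the state lock before touching it, can be sketched in userspace C. The `demo_`/`work_fn`/`teardown` names are hypothetical, and a plain function call stands in for the delayed work firing.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

struct demo_info {
	size_t flushes;            /* counts completed deferred-I/O passes */
};

struct demo_state {
	pthread_mutex_t lock;
	struct demo_info *info;    /* NULL once teardown has run */
};

/*
 * Models a hardened fb_deferred_io_work(): take the state lock first
 * and bail out if the device is already gone, instead of trusting that
 * teardown's work cancellation always won the race.
 */
static int work_fn(struct demo_state *st)
{
	pthread_mutex_lock(&st->lock);
	if (!st->info) {
		pthread_mutex_unlock(&st->lock);
		return -1;             /* device gone: nothing to flush */
	}
	st->info->flushes++;       /* stand-in for the deferred_io pass */
	pthread_mutex_unlock(&st->lock);
	return 0;
}

/* Teardown clears ->info under the same lock, closing the window. */
static void teardown(struct demo_state *st)
{
	pthread_mutex_lock(&st->lock);
	st->info = NULL;
	pthread_mutex_unlock(&st->lock);
}
```

In this model a work item that slips past cancellation and fires after teardown simply returns early, which is the behavior the review proposes for `fb_deferred_io_work`.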
end of thread, other threads:[~2026-05-04 22:15 UTC | newest]
Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <2026050146-growl-viselike-2a67@gregkh>
2026-05-04 14:19 ` [PATCH 6.18.y] fbdev: defio: Disconnect deferred I/O from the lifetime of struct fb_info Sasha Levin
2026-05-04 22:15 ` Claude review: " Claude Code Review Bot
2026-05-04 22:15 ` Claude Code Review Bot
2026-02-24 8:25 [PATCH v2 0/4] fbdev: defio: Protect against device/module removal Thomas Zimmermann
2026-02-24 8:25 ` [PATCH v2 1/4] fbdev: defio: Disconnect deferred I/O from the lifetime of struct fb_info Thomas Zimmermann
2026-02-27 5:29 ` Claude review: " Claude Code Review Bot
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox