From mboxrd@z Thu Jan 1 00:00:00 1970
From: Thomas Zimmermann <tzimmermann@suse.de>
To: boris.brezillon@collabora.com, loic.molinari@collabora.com,
	willy@infradead.org, frank.binns@imgtec.com, matt.coster@imgtec.com,
	maarten.lankhorst@linux.intel.com, mripard@kernel.org,
	airlied@gmail.com, simona@ffwll.ch
Cc: dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v3 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
Date: Mon, 9 Feb 2026 14:27:14 +0100
Message-ID: <20260209133241.238813-6-tzimmermann@suse.de>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20260209133241.238813-1-tzimmermann@suse.de>
References: <20260209133241.238813-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Invoke folio_mark_accessed() in mmap page faults to add the folio to
the memory manager's LRU list. Userspace invokes mmap to get the memory
for software rendering. Compositors do the same when creating the final
on-screen image, so keeping the pages on the LRU list makes sense. This
avoids paging out graphics buffers under memory pressure.

In pfn_mkwrite, further invoke folio_mark_dirty() to mark the folio for
writeback in case the underlying file is paged out from system memory.
This rarely happens in practice, yet without the dirty flag set it
would corrupt the buffer content.

This has little effect on a system's hardware-accelerated rendering,
which only uses mmap for the initial setup of textures, meshes,
shaders, etc.

v3:
- rewrite for VM_PFNMAP
v2:
- adapt to changes in drm_gem_shmem_try_mmap_pmd()

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index c3a054899ba3..0c86ad40a049 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -598,6 +598,9 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	if (ret != VM_FAULT_NOPAGE)
 		ret = vmf_insert_pfn(vma, vmf->address, pfn);
 
+	if (likely(!(ret & VM_FAULT_ERROR)))
+		folio_mark_accessed(folio);
+
 out:
 	dma_resv_unlock(obj->resv);
 
@@ -638,10 +641,27 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
 	drm_gem_vm_close(vma);
 }
 
+static vm_fault_t drm_gem_shmem_pfn_mkwrite(struct vm_fault *vmf)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	struct drm_gem_object *obj = vma->vm_private_data;
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+	pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
+	struct page *page = shmem->pages[page_offset];
+	struct folio *folio = page_folio(page);
+
+	file_update_time(vma->vm_file);
+
+	folio_mark_dirty(folio);
+
+	return 0;
+}
+
 const struct vm_operations_struct drm_gem_shmem_vm_ops = {
 	.fault = drm_gem_shmem_fault,
 	.open = drm_gem_shmem_vm_open,
 	.close = drm_gem_shmem_vm_close,
+	.pfn_mkwrite = drm_gem_shmem_pfn_mkwrite,
 };
 EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
 
-- 
2.52.0
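
For reference, the mmap path touched here is the one userspace reaches
when it maps a dumb buffer for software rendering. Below is a minimal
sketch, assuming a gem-shmem-backed driver that exposes /dev/dri/card0
and supports the standard dumb-buffer ioctls, compiled against the DRM
uapi headers; the buffer size is illustrative and error handling is
omitted. It is not part of the patch.

/*
 * Sketch only: create and mmap a dumb buffer, then write to it. Page
 * faults on the mapping hit drm_gem_shmem_fault(), which now calls
 * folio_mark_accessed(); writes to the write-notify PFN mapping go
 * through .pfn_mkwrite, which now calls folio_mark_dirty().
 */
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

#include <drm/drm.h>

int main(void)
{
	struct drm_mode_create_dumb create = {
		.width = 1024, .height = 768, .bpp = 32,
	};
	struct drm_mode_map_dumb map = { 0 };
	int fd = open("/dev/dri/card0", O_RDWR); /* assumed device node */
	void *ptr;

	ioctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create);
	map.handle = create.handle;
	ioctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &map);

	ptr = mmap(NULL, create.size, PROT_READ | PROT_WRITE, MAP_SHARED,
		   fd, map.offset);
	memset(ptr, 0, create.size); /* faults and mkwrite happen here */

	munmap(ptr, create.size);
	close(fd);
	return 0;
}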