Date: Mon, 9 Feb 2026 15:23:40 +0100
From: Boris Brezillon
To: Thomas Zimmermann
Cc: loic.molinari@collabora.com, willy@infradead.org, frank.binns@imgtec.com,
 matt.coster@imgtec.com, maarten.lankhorst@linux.intel.com, mripard@kernel.org,
 airlied@gmail.com, simona@ffwll.ch, dri-devel@lists.freedesktop.org,
 linux-mm@kvack.org
Subject: Re: [PATCH v3 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
Message-ID: <20260209152340.16f9b30a@fedora>
In-Reply-To: <20260209133241.238813-6-tzimmermann@suse.de>
References: <20260209133241.238813-1-tzimmermann@suse.de>
 <20260209133241.238813-6-tzimmermann@suse.de>

On Mon, 9 Feb 2026 14:27:14 +0100
Thomas Zimmermann wrote:

> Invoke folio_mark_accessed() in mmap page faults to add the folio to
> the memory manager's LRU list. Userspace invokes mmap to get the memory
> for software rendering. Compositors do the same when creating the final
> on-screen image, so keeping the pages on the LRU makes sense. This avoids
> paging out graphics buffers under memory pressure.
>
> In pfn_mkwrite, further invoke folio_mark_dirty() to mark the folio for
> writeback should the underlying file be paged out from system memory.
> This rarely happens in practice, yet it would corrupt the buffer content.
>
> This has little effect on a system's hardware-accelerated rendering, which
> only mmaps for an initial setup of textures, meshes, shaders, etc.
>
> v3:
> - rewrite for VM_PFNMAP
> v2:
> - adapt to changes in drm_gem_shmem_try_mmap_pmd()
>
> Signed-off-by: Thomas Zimmermann
> Reviewed-by: Boris Brezillon
> ---
>  drivers/gpu/drm/drm_gem_shmem_helper.c | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index c3a054899ba3..0c86ad40a049 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -598,6 +598,9 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
>  	if (ret != VM_FAULT_NOPAGE)
>  		ret = vmf_insert_pfn(vma, vmf->address, pfn);
>
> +	if (likely(!(ret & VM_FAULT_ERROR)))

Can't we just go

	if (ret == VM_FAULT_NOPAGE)

here?

> +		folio_mark_accessed(folio);
> +
>  out:
>  	dma_resv_unlock(obj->resv);
>
> @@ -638,10 +641,27 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
>  	drm_gem_vm_close(vma);
>  }
>
> +static vm_fault_t drm_gem_shmem_pfn_mkwrite(struct vm_fault *vmf)
> +{
> +	struct vm_area_struct *vma = vmf->vma;
> +	struct drm_gem_object *obj = vma->vm_private_data;
> +	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> +	pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
> +	struct page *page = shmem->pages[page_offset];

Should we have a

	if (WARN_ON(!shmem->pages || page_offset >= (obj->size >> PAGE_SHIFT)))
		return VM_FAULT_SIGBUS;

?

> +	struct folio *folio = page_folio(page);
> +
> +	file_update_time(vma->vm_file);
> +
> +	folio_mark_dirty(folio);
> +
> +	return 0;
> +}
> +
>  const struct vm_operations_struct drm_gem_shmem_vm_ops = {
>  	.fault = drm_gem_shmem_fault,
>  	.open = drm_gem_shmem_vm_open,
>  	.close = drm_gem_shmem_vm_close,
> +	.pfn_mkwrite = drm_gem_shmem_pfn_mkwrite,
>  };
>  EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
>
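
To make the second suggestion a bit more concrete, here is a rough, untested
sketch of how the mkwrite path could look with the bounds check folded in. It
assumes obj->size is page-aligned and that shmem->pages stays populated for as
long as the mapping exists, as the fault path already relies on:

static vm_fault_t drm_gem_shmem_pfn_mkwrite(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	struct drm_gem_object *obj = vma->vm_private_data;
	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
	/* page offset within the VMA, relative to the start of the object */
	pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff;
	struct folio *folio;

	/* Bail out on unpopulated objects or out-of-range offsets. */
	if (WARN_ON(!shmem->pages || page_offset >= (obj->size >> PAGE_SHIFT)))
		return VM_FAULT_SIGBUS;

	folio = page_folio(shmem->pages[page_offset]);

	file_update_time(vma->vm_file);
	folio_mark_dirty(folio);

	return 0;
}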