From mboxrd@z Thu Jan 1 00:00:00 1970
From: Thomas Zimmermann <tzimmermann@suse.de>
To: boris.brezillon@collabora.com, loic.molinari@collabora.com, willy@infradead.org, frank.binns@imgtec.com, matt.coster@imgtec.com, maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@gmail.com, simona@ffwll.ch
Cc:
 dri-devel@lists.freedesktop.org, linux-mm@kvack.org, Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v3 4/6] drm/gem-shmem: Refactor drm_gem_shmem_try_map_pmd()
Date: Mon, 9 Feb 2026 14:27:13 +0100
Message-ID: <20260209133241.238813-5-tzimmermann@suse.de>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20260209133241.238813-1-tzimmermann@suse.de>
References: <20260209133241.238813-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The current mmap page-fault handler requires some changes before it can track folio access.

Move the call to folio_test_pmd_mappable() into the mmap page-fault handler, ahead of the call to drm_gem_shmem_try_map_pmd(). The folio will become useful for tracking the access status. Also rename drm_gem_shmem_try_map_pmd() to drm_gem_shmem_try_insert_pfn_pmd() and only pass the page fault and the page-frame number. The new name and parameters make it similar to vmf_insert_pfn_pmd(). If the PMD mapping fails or is not supported, insert a regular PFN as before.

No functional changes.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 25 ++++++++++++-------------
 1 file changed, 12 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index ab8e331005f9..c3a054899ba3 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -550,17 +550,14 @@ int drm_gem_shmem_dumb_create(struct drm_file *file, struct drm_device *dev,
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_dumb_create);
 
-static vm_fault_t drm_gem_shmem_try_map_pmd(struct vm_fault *vmf, unsigned long addr,
-					    struct page *page)
+static vm_fault_t drm_gem_shmem_try_insert_pfn_pmd(struct vm_fault *vmf, unsigned long pfn)
 {
 #ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
-	unsigned long pfn = page_to_pfn(page);
 	unsigned long paddr = pfn << PAGE_SHIFT;
-	bool aligned = (addr & ~PMD_MASK) == (paddr & ~PMD_MASK);
+	bool aligned = (vmf->address & ~PMD_MASK) == (paddr & ~PMD_MASK);
 
-	if (aligned &&
-	    pmd_none(*vmf->pmd) &&
-	    folio_test_pmd_mappable(page_folio(page))) {
+	if (aligned && pmd_none(*vmf->pmd)) {
+		/* Read-only mapping; split upon write fault */
 		pfn &= PMD_MASK >> PAGE_SHIFT;
 		return vmf_insert_pfn_pmd(vmf, pfn, false);
 	}
@@ -580,6 +577,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	struct page **pages = shmem->pages;
 	pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
 	struct page *page;
+	struct folio *folio;
 	unsigned long pfn;
 
 	dma_resv_lock(obj->resv, NULL);
@@ -591,15 +589,16 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	page = pages[page_offset];
 	if (drm_WARN_ON_ONCE(dev, !page))
 		goto out;
-
-	ret = drm_gem_shmem_try_map_pmd(vmf, vmf->address, page);
-	if (ret == VM_FAULT_NOPAGE)
-		goto out;
+	folio = page_folio(page);
 
 	pfn = page_to_pfn(page);
-	ret = vmf_insert_pfn(vma, vmf->address, pfn);
 
-out:
+	if (folio_test_pmd_mappable(folio))
+		ret = drm_gem_shmem_try_insert_pfn_pmd(vmf, pfn);
+	if (ret != VM_FAULT_NOPAGE)
+		ret = vmf_insert_pfn(vma, vmf->address, pfn);
+
+out:
 	dma_resv_unlock(obj->resv);
 	return ret;
 }
-- 
2.52.0