* [PATCH v8 00/25] gpu: nova-core: Add memory management support
@ 2026-02-24 22:52 Joel Fernandes
From: Joel Fernandes @ 2026-02-24 22:52 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
This series adds support for nova-core memory management, including VRAM
allocation, PRAMIN, the VMM, page table walking, and BAR1 reads/writes.
These are critical for channel management, vGPU, and all other memory
management users of nova-core.
Changes from v7 to v8:
- Incorporated "Select GPU_BUDDY for VRAM allocation" patch from the
dependency series (Alex).
- Significant patch reordering for better logical flow (GSP/FB patches
moved earlier, followed by the page table, VMM, BAR1, and test patches) (Alex).
- Replaced several 'as' usages with into_safe_cast() (Danilo, Alex).
- Updated BAR 1 test cases to include exercising the block size API (Eliot, Danilo).
Changes from v6 to v7:
- Addressed the DMA fence signalling use case per Danilo's feedback.
Pre v6:
- Simplified PRAMIN code (John Hubbard, Alex Courbot).
- Handled different MMU versions: ver2 versus ver3 (John Hubbard).
- Added a BAR1 use case so we have a user of DRM Buddy / VMM (John Hubbard).
- Iterating over clist/buddy bindings.
All patches, along with all the dependencies (including CList), can be
found at:
https://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git (branch nova/mm)
Link to v7: https://lore.kernel.org/all/20260218212020.800836-1-joelagnelf@nvidia.com/
Joel Fernandes (25):
gpu: nova-core: Select GPU_BUDDY for VRAM allocation
gpu: nova-core: Kconfig: Sort select statements alphabetically
gpu: nova-core: gsp: Return GspStaticInfo and FbLayout from boot()
gpu: nova-core: gsp: Extract usable FB region from GSP
gpu: nova-core: fb: Add usable_vram field to FbLayout
gpu: nova-core: mm: Add support to use PRAMIN windows to write to VRAM
docs: gpu: nova-core: Document the PRAMIN aperture mechanism
gpu: nova-core: mm: Add common memory management types
gpu: nova-core: mm: Add TLB flush support
gpu: nova-core: mm: Add GpuMm centralized memory manager
gpu: nova-core: mm: Use usable VRAM region for buddy allocator
gpu: nova-core: mm: Add common types for all page table formats
gpu: nova-core: mm: Add MMU v2 page table types
gpu: nova-core: mm: Add MMU v3 page table types
gpu: nova-core: mm: Add unified page table entry wrapper enums
gpu: nova-core: mm: Add page table walker for MMU v2/v3
gpu: nova-core: mm: Add Virtual Memory Manager
gpu: nova-core: mm: Add virtual address range tracking to VMM
gpu: nova-core: mm: Add multi-page mapping API to VMM
gpu: nova-core: Add BAR1 aperture type and size constant
gpu: nova-core: gsp: Add BAR1 PDE base accessors
gpu: nova-core: mm: Add BAR1 user interface
gpu: nova-core: mm: Add BarUser to struct Gpu and create at boot
gpu: nova-core: mm: Add BAR1 memory management self-tests
gpu: nova-core: mm: Add PRAMIN aperture self-tests
Documentation/gpu/nova/core/pramin.rst | 125 ++++++
Documentation/gpu/nova/index.rst | 1 +
drivers/gpu/nova-core/Kconfig | 13 +-
drivers/gpu/nova-core/driver.rs | 9 +-
drivers/gpu/nova-core/fb.rs | 26 +-
drivers/gpu/nova-core/gpu.rs | 130 +++++-
drivers/gpu/nova-core/gsp/boot.rs | 22 +-
drivers/gpu/nova-core/gsp/commands.rs | 18 +-
drivers/gpu/nova-core/gsp/fw/commands.rs | 40 ++
drivers/gpu/nova-core/mm.rs | 245 ++++++++++
drivers/gpu/nova-core/mm/bar_user.rs | 406 +++++++++++++++++
drivers/gpu/nova-core/mm/pagetable.rs | 453 +++++++++++++++++++
drivers/gpu/nova-core/mm/pagetable/ver2.rs | 199 +++++++++
drivers/gpu/nova-core/mm/pagetable/ver3.rs | 302 +++++++++++++
drivers/gpu/nova-core/mm/pagetable/walk.rs | 218 +++++++++
drivers/gpu/nova-core/mm/pramin.rs | 453 +++++++++++++++++++
drivers/gpu/nova-core/mm/tlb.rs | 90 ++++
drivers/gpu/nova-core/mm/vmm.rs | 494 +++++++++++++++++++++
drivers/gpu/nova-core/nova_core.rs | 1 +
drivers/gpu/nova-core/regs.rs | 38 ++
20 files changed, 3273 insertions(+), 10 deletions(-)
create mode 100644 Documentation/gpu/nova/core/pramin.rst
create mode 100644 drivers/gpu/nova-core/mm.rs
create mode 100644 drivers/gpu/nova-core/mm/bar_user.rs
create mode 100644 drivers/gpu/nova-core/mm/pagetable.rs
create mode 100644 drivers/gpu/nova-core/mm/pagetable/ver2.rs
create mode 100644 drivers/gpu/nova-core/mm/pagetable/ver3.rs
create mode 100644 drivers/gpu/nova-core/mm/pagetable/walk.rs
create mode 100644 drivers/gpu/nova-core/mm/pramin.rs
create mode 100644 drivers/gpu/nova-core/mm/tlb.rs
create mode 100644 drivers/gpu/nova-core/mm/vmm.rs
base-commit: 2961f841b025fb234860bac26dfb7fa7cb0fb122
prerequisite-patch-id: f8778e0b72697243cfe40a8bfcc3ca1d66160b12
prerequisite-patch-id: 763b30c69f9806de97eb58bc8e3406feb6bf61b4
prerequisite-patch-id: 2cded1a588520cf7d08f51bf5b3646d5b3dc26bd
prerequisite-patch-id: 4933d6b86caab3e4630c92292cfe8e9d3b8f3524
prerequisite-patch-id: 9f58738611f42a330b00bd976fbcafe2b7e6d75e
prerequisite-patch-id: 07f81850399af27b61dcba81ad1bb24bcf977ac1
--
2.34.1
* [PATCH v8 01/25] gpu: nova-core: Select GPU_BUDDY for VRAM allocation
From: Joel Fernandes @ 2026-02-24 22:52 UTC (permalink / raw)
To: linux-kernel
nova-core will use the GPU buddy allocator for physical VRAM management.
Enable it in Kconfig.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/nova-core/Kconfig b/drivers/gpu/nova-core/Kconfig
index 527920f9c4d3..809485167aff 100644
--- a/drivers/gpu/nova-core/Kconfig
+++ b/drivers/gpu/nova-core/Kconfig
@@ -5,6 +5,7 @@ config NOVA_CORE
depends on RUST
select RUST_FW_LOADER_ABSTRACTIONS
select AUXILIARY_BUS
+ select GPU_BUDDY
default n
help
Choose this if you want to build the Nova Core driver for Nvidia
--
2.34.1
* [PATCH v8 02/25] gpu: nova-core: Kconfig: Sort select statements alphabetically
From: Joel Fernandes @ 2026-02-24 22:53 UTC (permalink / raw)
To: linux-kernel
Reorder the select statements in NOVA_CORE Kconfig to be in
alphabetical order.
Suggested-by: Danilo Krummrich <dakr@kernel.org>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/nova-core/Kconfig b/drivers/gpu/nova-core/Kconfig
index 809485167aff..6513007bf66f 100644
--- a/drivers/gpu/nova-core/Kconfig
+++ b/drivers/gpu/nova-core/Kconfig
@@ -3,9 +3,9 @@ config NOVA_CORE
depends on 64BIT
depends on PCI
depends on RUST
- select RUST_FW_LOADER_ABSTRACTIONS
select AUXILIARY_BUS
select GPU_BUDDY
+ select RUST_FW_LOADER_ABSTRACTIONS
default n
help
Choose this if you want to build the Nova Core driver for Nvidia
--
2.34.1
* [PATCH v8 03/25] gpu: nova-core: gsp: Return GspStaticInfo and FbLayout from boot()
From: Joel Fernandes @ 2026-02-24 22:53 UTC (permalink / raw)
To: linux-kernel
Refactor the GSP boot function to return the GspStaticInfo and FbLayout.
This gives memory management initialization access to:
- bar1_pde_base: BAR1 page directory base.
- bar2_pde_base: BAR2 page directory base.
- Usable memory regions in vidmem.
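For illustration, a minimal sketch of the new return shape; the types below are stand-ins, not the driver's real GetGspStaticInfoReply/FbLayout definitions:

```rust
// Stand-in types for illustration only; the real driver types live in
// gsp/commands.rs and fb.rs and carry much more state.
#[derive(Debug, PartialEq)]
pub struct GspStaticInfo {
    pub bar1_pde_base: u64,
}

#[derive(Debug, PartialEq)]
pub struct FbLayout {
    pub usable_vram: Option<core::ops::Range<u64>>,
}

// boot() now hands both values back instead of discarding them.
pub fn boot() -> Result<(GspStaticInfo, FbLayout), &'static str> {
    Ok((
        GspStaticInfo { bar1_pde_base: 0x1000 },
        FbLayout { usable_vram: None },
    ))
}

fn main() {
    // A caller that only needs the static info can take `.0`,
    // as the gpu.rs hunk does.
    let info = boot().unwrap().0;
    assert_eq!(info.bar1_pde_base, 0x1000);
}
```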
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/gpu.rs | 9 +++++++--
drivers/gpu/nova-core/gsp/boot.rs | 15 ++++++++++++---
2 files changed, 19 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/nova-core/gpu.rs b/drivers/gpu/nova-core/gpu.rs
index 9b042ef1a308..5e084ec7c926 100644
--- a/drivers/gpu/nova-core/gpu.rs
+++ b/drivers/gpu/nova-core/gpu.rs
@@ -18,7 +18,10 @@
},
fb::SysmemFlush,
gfw,
- gsp::Gsp,
+ gsp::{
+ commands::GetGspStaticInfoReply,
+ Gsp, //
+ },
regs,
};
@@ -252,6 +255,8 @@ pub(crate) struct Gpu {
/// GSP runtime data. Temporarily an empty placeholder.
#[pin]
gsp: Gsp,
+ /// Static GPU information from GSP.
+ gsp_static_info: GetGspStaticInfoReply,
}
impl Gpu {
@@ -283,7 +288,7 @@ pub(crate) fn new<'a>(
gsp <- Gsp::new(pdev),
- _: { gsp.boot(pdev, bar, spec.chipset, gsp_falcon, sec2_falcon)? },
+ gsp_static_info: { gsp.boot(pdev, bar, spec.chipset, gsp_falcon, sec2_falcon)?.0 },
bar: devres_bar,
})
diff --git a/drivers/gpu/nova-core/gsp/boot.rs b/drivers/gpu/nova-core/gsp/boot.rs
index be427fe26a58..7a4a0c759267 100644
--- a/drivers/gpu/nova-core/gsp/boot.rs
+++ b/drivers/gpu/nova-core/gsp/boot.rs
@@ -32,7 +32,10 @@
},
gpu::Chipset,
gsp::{
- commands,
+ commands::{
+ self,
+ GetGspStaticInfoReply, //
+ },
sequencer::{
GspSequencer,
GspSequencerParams, //
@@ -127,6 +130,12 @@ fn run_fwsec_frts(
/// structures that the GSP will use at runtime.
///
/// Upon return, the GSP is up and running, and its runtime object given as return value.
+ ///
+ /// Returns a tuple containing:
+ /// - [`GetGspStaticInfoReply`]: Static GPU information from GSP, including the BAR1 page
+ /// directory base address needed for memory management.
+ /// - [`FbLayout`]: Frame buffer layout computed during boot, containing memory regions
+ /// required for [`GpuMm`] initialization.
pub(crate) fn boot(
mut self: Pin<&mut Self>,
pdev: &pci::Device<device::Bound>,
@@ -134,7 +143,7 @@ pub(crate) fn boot(
chipset: Chipset,
gsp_falcon: &Falcon<Gsp>,
sec2_falcon: &Falcon<Sec2>,
- ) -> Result {
+ ) -> Result<(GetGspStaticInfoReply, FbLayout)> {
let dev = pdev.as_ref();
let bios = Vbios::new(dev, bar)?;
@@ -243,6 +252,6 @@ pub(crate) fn boot(
Err(e) => dev_warn!(pdev.as_ref(), "GPU name unavailable: {:?}\n", e),
}
- Ok(())
+ Ok((info, fb_layout))
}
}
--
2.34.1
* [PATCH v8 04/25] gpu: nova-core: gsp: Extract usable FB region from GSP
From: Joel Fernandes @ 2026-02-24 22:53 UTC (permalink / raw)
To: linux-kernel
Add first_usable_fb_region() to GspStaticConfigInfo to extract the first
usable FB region from GSP's fbRegionInfoParams. Usable regions are those
that are not reserved or protected, and that support compression and ISO.
The extracted region is stored in GetGspStaticInfoReply and exposed via
usable_fb_region() API for use by the memory subsystem.
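The selection logic can be sketched in isolation; the struct below is a hypothetical flattened stand-in for the firmware's region descriptor, keeping only the fields the filter consults:

```rust
// Hypothetical flattened region descriptor (illustrative only).
pub struct FbRegion {
    pub base: u64,
    pub limit: u64,
    pub reserved: u32,
    pub protected: u32,
    pub support_compressed: u32,
    pub support_iso: u32,
}

/// Return the first usable region as a `(base, size)` tuple.
pub fn first_usable_fb_region(regions: &[FbRegion]) -> Option<(u64, u64)> {
    regions.iter().find_map(|r| {
        // Skip malformed regions where limit < base.
        if r.limit < r.base {
            return None;
        }
        // Usable: not reserved, not protected, supports compression and ISO.
        if r.reserved == 0
            && r.protected == 0
            && r.support_compressed != 0
            && r.support_iso != 0
        {
            Some((r.base, r.limit - r.base + 1))
        } else {
            None
        }
    })
}

fn main() {
    let regions = [
        // Reserved region: skipped by the filter.
        FbRegion { base: 0, limit: 0xFFFF, reserved: 1, protected: 0,
                   support_compressed: 1, support_iso: 1 },
        // First usable region: 64KB starting at 0x10000.
        FbRegion { base: 0x10000, limit: 0x1FFFF, reserved: 0, protected: 0,
                   support_compressed: 1, support_iso: 1 },
    ];
    assert_eq!(first_usable_fb_region(&regions), Some((0x10000, 0x10000)));
}
```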
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/gsp/commands.rs | 12 ++++++++-
drivers/gpu/nova-core/gsp/fw/commands.rs | 32 ++++++++++++++++++++++++
2 files changed, 43 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/nova-core/gsp/commands.rs b/drivers/gpu/nova-core/gsp/commands.rs
index 8f270eca33be..c31046df3acf 100644
--- a/drivers/gpu/nova-core/gsp/commands.rs
+++ b/drivers/gpu/nova-core/gsp/commands.rs
@@ -186,9 +186,11 @@ fn init(&self) -> impl Init<Self::Command, Self::InitError> {
}
}
-/// The reply from the GSP to the [`GetGspInfo`] command.
+/// The reply from the GSP to the [`GetGspStaticInfo`] command.
pub(crate) struct GetGspStaticInfoReply {
gpu_name: [u8; 64],
+ /// First usable FB region `(base, size)` for memory allocation.
+ usable_fb_region: Option<(u64, u64)>,
}
impl MessageFromGsp for GetGspStaticInfoReply {
@@ -202,6 +204,7 @@ fn read(
) -> Result<Self, Self::InitError> {
Ok(GetGspStaticInfoReply {
gpu_name: msg.gpu_name_str(),
+ usable_fb_region: msg.first_usable_fb_region(),
})
}
}
@@ -228,6 +231,13 @@ pub(crate) fn gpu_name(&self) -> core::result::Result<&str, GpuNameError> {
.to_str()
.map_err(GpuNameError::InvalidUtf8)
}
+
+ /// Returns the usable FB region `(base, size)` for driver allocation which is
+ /// already retrieved from the GSP.
+ #[expect(dead_code)]
+ pub(crate) fn usable_fb_region(&self) -> Option<(u64, u64)> {
+ self.usable_fb_region
+ }
}
/// Send the [`GetGspInfo`] command and awaits for its reply.
diff --git a/drivers/gpu/nova-core/gsp/fw/commands.rs b/drivers/gpu/nova-core/gsp/fw/commands.rs
index 21be44199693..ff771a4aba4f 100644
--- a/drivers/gpu/nova-core/gsp/fw/commands.rs
+++ b/drivers/gpu/nova-core/gsp/fw/commands.rs
@@ -5,6 +5,7 @@
use kernel::{device, pci};
use crate::gsp::GSP_PAGE_SIZE;
+use crate::num::IntoSafeCast;
use super::bindings;
@@ -114,6 +115,37 @@ impl GspStaticConfigInfo {
pub(crate) fn gpu_name_str(&self) -> [u8; 64] {
self.0.gpuNameString
}
+
+ /// Extract the first usable FB region from GSP firmware data.
+ ///
+ /// Returns the first region suitable for driver memory allocation as a `(base, size)` tuple.
+ /// Usable regions are those that:
+ /// - Are not reserved for firmware internal use.
+ /// - Are not protected (hardware-enforced access restrictions).
+ /// - Support compression (can use GPU memory compression for bandwidth).
+ /// - Support ISO (isochronous memory for display requiring guaranteed bandwidth).
+ pub(crate) fn first_usable_fb_region(&self) -> Option<(u64, u64)> {
+ let fb_info = &self.0.fbRegionInfoParams;
+ for i in 0..fb_info.numFBRegions.into_safe_cast() {
+ if let Some(reg) = fb_info.fbRegion.get(i) {
+ // Skip malformed regions where limit < base.
+ if reg.limit < reg.base {
+ continue;
+ }
+
+ // Filter: not reserved, not protected, supports compression and ISO.
+ if reg.reserved == 0
+ && reg.bProtected == 0
+ && reg.supportCompressed != 0
+ && reg.supportISO != 0
+ {
+ let size = reg.limit - reg.base + 1;
+ return Some((reg.base, size));
+ }
+ }
+ }
+ None
+ }
}
// SAFETY: Padding is explicit and will not contain uninitialized data.
--
2.34.1
* [PATCH v8 05/25] gpu: nova-core: fb: Add usable_vram field to FbLayout
From: Joel Fernandes @ 2026-02-24 22:53 UTC (permalink / raw)
To: linux-kernel
Add usable_vram field to FbLayout to store the usable VRAM region for
driver allocations. This is populated after GSP boot with the region
extracted from GSP's fbRegionInfoParams.
FbLayout is now a two-phase structure:
1. new() computes the firmware layout from hardware.
2. set_usable_vram() populates the usable region from the GSP response.
The new usable_vram field represents the actual usable VRAM region
(~23.7GB on a 24GB GA102 Ampere GPU).
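As a self-contained sketch of the two-phase pattern, simplified to the one field this patch adds:

```rust
use core::ops::Range;

pub struct FbLayout {
    pub usable_vram: Option<Range<u64>>,
}

impl FbLayout {
    /// Phase 1: compute the firmware layout; usable_vram is not yet known.
    pub fn new() -> Self {
        FbLayout { usable_vram: None }
    }

    /// Phase 2: record the usable region once the GSP reports it.
    /// saturating_add guards against a firmware-supplied size overflowing.
    pub fn set_usable_vram(&mut self, base: u64, size: u64) {
        self.usable_vram = Some(base..base.saturating_add(size));
    }
}

fn main() {
    let mut fb = FbLayout::new();
    assert!(fb.usable_vram.is_none());
    // 4MB usable region starting at 1MB.
    fb.set_usable_vram(0x0010_0000, 0x0040_0000);
    assert_eq!(fb.usable_vram, Some(0x0010_0000..0x0050_0000));
}
```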
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/fb.rs | 26 +++++++++++++++++++++++++-
1 file changed, 25 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/nova-core/fb.rs b/drivers/gpu/nova-core/fb.rs
index c62abcaed547..d4d0f50d7afd 100644
--- a/drivers/gpu/nova-core/fb.rs
+++ b/drivers/gpu/nova-core/fb.rs
@@ -97,6 +97,10 @@ pub(crate) fn unregister(&self, bar: &Bar0) {
/// Layout of the GPU framebuffer memory.
///
/// Contains ranges of GPU memory reserved for a given purpose during the GSP boot process.
+///
+/// This structure is populated in 2 steps:
+/// 1. [`FbLayout::new()`] computes firmware layout from hardware.
+/// 2. [`FbLayout::set_usable_vram()`] populates usable region from GSP response.
#[derive(Debug)]
pub(crate) struct FbLayout {
/// Range of the framebuffer. Starts at `0`.
@@ -111,10 +115,16 @@ pub(crate) struct FbLayout {
pub(crate) elf: Range<u64>,
/// WPR2 heap.
pub(crate) wpr2_heap: Range<u64>,
- /// WPR2 region range, starting with an instance of `GspFwWprMeta`.
+ /// WPR2 region range, starting with an instance of [`GspFwWprMeta`].
pub(crate) wpr2: Range<u64>,
+ /// Non-WPR heap carved before WPR2, used by GSP firmware.
pub(crate) heap: Range<u64>,
pub(crate) vf_partition_count: u8,
+ /// Usable VRAM region for driver allocations (from GSP `fbRegionInfoParams`).
+ ///
+ /// Initially [`None`], populated after GSP boot with usable region info.
+ #[allow(dead_code)]
+ pub(crate) usable_vram: Option<Range<u64>>,
}
impl FbLayout {
@@ -212,6 +222,20 @@ pub(crate) fn new(chipset: Chipset, bar: &Bar0, gsp_fw: &GspFirmware) -> Result<
wpr2,
heap,
vf_partition_count: 0,
+ usable_vram: None,
})
}
+
+ /// Set the usable VRAM region from GSP response.
+ ///
+ /// Called after GSP boot with the first usable region extracted from
+ /// GSP's `fbRegionInfoParams`. Usable regions are those that:
+ /// - Are not reserved for firmware internal use.
+ /// - Are not protected (hardware-enforced access restrictions).
+ /// - Support compression (can use GPU memory compression for bandwidth).
+ /// - Support ISO (isochronous memory for display requiring guaranteed bandwidth).
+ #[allow(dead_code)]
+ pub(crate) fn set_usable_vram(&mut self, base: u64, size: u64) {
+ self.usable_vram = Some(base..base.saturating_add(size));
+ }
}
--
2.34.1
* [PATCH v8 06/25] gpu: nova-core: mm: Add support to use PRAMIN windows to write to VRAM
From: Joel Fernandes @ 2026-02-24 22:53 UTC (permalink / raw)
To: linux-kernel
The PRAMIN aperture is a crucial mechanism for direct CPU reads and
writes to VRAM. Add support for it.
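The window arithmetic at the heart of this patch can be sketched standalone; the constants mirror the ones in pramin.rs (1MB aperture, 64KB-aligned base), and window_for() is a hypothetical helper condensing compute_window():

```rust
/// Window base must be 64KB aligned (NV_PBUS_BAR0_WINDOW granularity).
const WINDOW_ALIGN: u64 = 64 * 1024;
/// The PRAMIN aperture spans 1MB of BAR0.
const WINDOW_SIZE: u64 = 1024 * 1024;

/// For a VRAM offset, return (window_base, offset_within_window).
fn window_for(vram_offset: u64) -> (u64, u64) {
    // Align down to the nearest 64KB boundary.
    let base = vram_offset & !(WINDOW_ALIGN - 1);
    (base, vram_offset - base)
}

fn main() {
    // An access at 2MB + 0x100 repositions the window base to 2MB...
    let (base, off) = window_for(0x0020_0100);
    assert_eq!(base, 0x0020_0000);
    // ...and the access lands 0x100 bytes into the 1MB aperture.
    assert_eq!(off, 0x100);
    assert!(off < WINDOW_SIZE);
}
```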
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm.rs | 5 +
drivers/gpu/nova-core/mm/pramin.rs | 292 +++++++++++++++++++++++++++++
drivers/gpu/nova-core/nova_core.rs | 1 +
drivers/gpu/nova-core/regs.rs | 5 +
4 files changed, 303 insertions(+)
create mode 100644 drivers/gpu/nova-core/mm.rs
create mode 100644 drivers/gpu/nova-core/mm/pramin.rs
diff --git a/drivers/gpu/nova-core/mm.rs b/drivers/gpu/nova-core/mm.rs
new file mode 100644
index 000000000000..7a5dd4220c67
--- /dev/null
+++ b/drivers/gpu/nova-core/mm.rs
@@ -0,0 +1,5 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Memory management subsystems for nova-core.
+
+pub(crate) mod pramin;
diff --git a/drivers/gpu/nova-core/mm/pramin.rs b/drivers/gpu/nova-core/mm/pramin.rs
new file mode 100644
index 000000000000..04b652d3ee4f
--- /dev/null
+++ b/drivers/gpu/nova-core/mm/pramin.rs
@@ -0,0 +1,292 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Direct VRAM access through the PRAMIN aperture.
+//!
+//! PRAMIN provides a 1MB sliding window into VRAM through BAR0, allowing the CPU to access
+//! video memory directly. Access is managed through a two-level API:
+//!
+//! - [`Pramin`]: The parent object that owns the BAR0 reference and synchronization lock.
+//! - [`PraminWindow`]: A guard object that holds exclusive PRAMIN access for its lifetime.
+//!
+//! The PRAMIN aperture is a 1MB region at BAR0 + 0x700000 for all GPUs. The window base is
+//! controlled by the `NV_PBUS_BAR0_WINDOW` register and must be 64KB aligned.
+//!
+//! # Examples
+//!
+//! ## Basic read/write
+//!
+//! ```no_run
+//! use crate::driver::Bar0;
+//! use crate::mm::pramin;
+//! use kernel::devres::Devres;
+//! use kernel::prelude::*;
+//! use kernel::sync::Arc;
+//!
+//! fn example(devres_bar: Arc<Devres<Bar0>>) -> Result<()> {
+//! let pramin = Arc::pin_init(pramin::Pramin::new(devres_bar)?, GFP_KERNEL)?;
+//! let mut window = pramin.window()?;
+//!
+//! // Write and read back.
+//! window.try_write32(0x100, 0xDEADBEEF)?;
+//! let val = window.try_read32(0x100)?;
+//! assert_eq!(val, 0xDEADBEEF);
+//!
+//! Ok(())
+//! // Original window position restored on drop.
+//! }
+//! ```
+//!
+//! ## Auto-repositioning across VRAM regions
+//!
+//! ```no_run
+//! use crate::driver::Bar0;
+//! use crate::mm::pramin;
+//! use kernel::devres::Devres;
+//! use kernel::prelude::*;
+//! use kernel::sync::Arc;
+//!
+//! fn example(devres_bar: Arc<Devres<Bar0>>) -> Result<()> {
+//! let pramin = Arc::pin_init(pramin::Pramin::new(devres_bar)?, GFP_KERNEL)?;
+//! let mut window = pramin.window()?;
+//!
+//! // Access first 1MB region.
+//! window.try_write32(0x100, 0x11111111)?;
+//!
+//! // Access at 2MB - window auto-repositions.
+//! window.try_write32(0x200000, 0x22222222)?;
+//!
+//! // Back to first region - window repositions again.
+//! let val = window.try_read32(0x100)?;
+//! assert_eq!(val, 0x11111111);
+//!
+//! Ok(())
+//! }
+//! ```
+
+#![expect(unused)]
+
+use crate::{
+ driver::Bar0,
+ num::u64_as_usize,
+ regs, //
+};
+
+use kernel::bits::genmask_u64;
+use kernel::devres::Devres;
+use kernel::io::Io;
+use kernel::new_mutex;
+use kernel::prelude::*;
+use kernel::ptr::{
+ Alignable,
+ Alignment, //
+};
+use kernel::sizes::{
+ SZ_1M,
+ SZ_64K, //
+};
+use kernel::sync::{
+ lock::mutex::MutexGuard,
+ Arc,
+ Mutex, //
+};
+
+/// PRAMIN aperture base offset in BAR0.
+const PRAMIN_BASE: usize = 0x700000;
+
+/// PRAMIN aperture size (1MB).
+const PRAMIN_SIZE: usize = SZ_1M;
+
+/// 64KB alignment for window base.
+const WINDOW_ALIGN: Alignment = Alignment::new::<SZ_64K>();
+
+/// Maximum addressable VRAM offset (40-bit address space).
+///
+/// The `NV_PBUS_BAR0_WINDOW` register has a 24-bit `window_base` field (bits 23:0) that stores
+/// bits [39:16] of the target VRAM address. This limits the addressable space to 2^40 bytes.
+const MAX_VRAM_OFFSET: usize = u64_as_usize(genmask_u64(0..=39));
+
+/// Generate a PRAMIN read accessor.
+macro_rules! define_pramin_read {
+ ($name:ident, $ty:ty) => {
+ #[doc = concat!("Read a `", stringify!($ty), "` from VRAM at the given offset.")]
+ pub(crate) fn $name(&mut self, vram_offset: usize) -> Result<$ty> {
+ // Compute window parameters without bar reference.
+ let (bar_offset, new_base) =
+ self.compute_window(vram_offset, ::core::mem::size_of::<$ty>())?;
+
+ // Update window base if needed and perform read.
+ let bar = self.bar.try_access().ok_or(ENODEV)?;
+ if let Some(base) = new_base {
+ Self::write_window_base(&bar, base);
+ self.state.current_base = base;
+ }
+ bar.$name(bar_offset)
+ }
+ };
+}
+
+/// Generate a PRAMIN write accessor.
+macro_rules! define_pramin_write {
+ ($name:ident, $ty:ty) => {
+ #[doc = concat!("Write a `", stringify!($ty), "` to VRAM at the given offset.")]
+ pub(crate) fn $name(&mut self, vram_offset: usize, value: $ty) -> Result {
+ // Compute window parameters without bar reference.
+ let (bar_offset, new_base) =
+ self.compute_window(vram_offset, ::core::mem::size_of::<$ty>())?;
+
+ // Update window base if needed and perform write.
+ let bar = self.bar.try_access().ok_or(ENODEV)?;
+ if let Some(base) = new_base {
+ Self::write_window_base(&bar, base);
+ self.state.current_base = base;
+ }
+ bar.$name(value, bar_offset)
+ }
+ };
+}
+
+/// PRAMIN state protected by mutex.
+struct PraminState {
+ current_base: usize,
+}
+
+/// PRAMIN aperture manager.
+///
+/// Call [`Pramin::window()`] to acquire exclusive PRAMIN access.
+#[pin_data]
+pub(crate) struct Pramin {
+ bar: Arc<Devres<Bar0>>,
+ /// PRAMIN aperture state, protected by a mutex.
+ ///
+ /// # Safety
+ ///
+ /// This lock is acquired during the DMA fence signalling critical path.
+ /// It must NEVER be held across any reclaimable CPU memory / allocations
+ /// (`GFP_KERNEL`), because the memory reclaim path can call
+ /// `dma_fence_wait()`, which would deadlock with this lock held.
+ #[pin]
+ state: Mutex<PraminState>,
+}
+
+impl Pramin {
+ /// Create a pin-initializer for PRAMIN.
+ pub(crate) fn new(bar: Arc<Devres<Bar0>>) -> Result<impl PinInit<Self>> {
+ let bar_access = bar.try_access().ok_or(ENODEV)?;
+ let current_base = Self::try_read_window_base(&bar_access)?;
+
+ Ok(pin_init!(Self {
+ bar,
+ state <- new_mutex!(PraminState { current_base }, "pramin_state"),
+ }))
+ }
+
+ /// Acquire exclusive PRAMIN access.
+ ///
+ /// Returns a [`PraminWindow`] guard that provides VRAM read/write accessors.
+ /// The [`PraminWindow`] is exclusive and only one can exist at a time.
+ pub(crate) fn window(&self) -> Result<PraminWindow<'_>> {
+ let state = self.state.lock();
+ let saved_base = state.current_base;
+ Ok(PraminWindow {
+ bar: self.bar.clone(),
+ state,
+ saved_base,
+ })
+ }
+
+ /// Read the current window base from the BAR0_WINDOW register.
+ fn try_read_window_base(bar: &Bar0) -> Result<usize> {
+ let reg = regs::NV_PBUS_BAR0_WINDOW::read(bar);
+ let base = u64::from(reg.window_base());
+ let shifted = base.checked_shl(16).ok_or(EOVERFLOW)?;
+ shifted.try_into().map_err(|_| EOVERFLOW)
+ }
+}
+
+/// PRAMIN window guard for direct VRAM access.
+///
+/// This guard holds exclusive access to the PRAMIN aperture. The window auto-repositions
+when accessing VRAM offsets outside the current 1MB range. The original window position
+is saved on creation and restored on drop.
+///
+/// Only one [`PraminWindow`] can exist at a time per [`Pramin`] instance (enforced by the
+/// internal `MutexGuard`).
+pub(crate) struct PraminWindow<'a> {
+ bar: Arc<Devres<Bar0>>,
+ state: MutexGuard<'a, PraminState>,
+ saved_base: usize,
+}
+
+impl PraminWindow<'_> {
+ /// Write a new window base to the BAR0_WINDOW register.
+ fn write_window_base(bar: &Bar0, base: usize) {
+ // CAST:
+ // - We have guaranteed that the base is within the addressable range (40 bits).
+ // - After >> 16, a 64KB-aligned 40-bit base becomes a 24-bit value, which fits in u32.
+ regs::NV_PBUS_BAR0_WINDOW::default()
+ .set_window_base((base >> 16) as u32)
+ .write(bar);
+ }
+
+ /// Compute window parameters for a VRAM access.
+ ///
+ /// Returns (`bar_offset`, `new_base`) where:
+ /// - `bar_offset`: The BAR0 offset to use for the access.
+ /// - `new_base`: `Some(base)` if window needs repositioning, `None` otherwise.
+ fn compute_window(
+ &self,
+ vram_offset: usize,
+ access_size: usize,
+ ) -> Result<(usize, Option<usize>)> {
+ // Validate VRAM offset is within addressable range (40-bit address space).
+ let end_offset = vram_offset.checked_add(access_size).ok_or(EINVAL)?;
+ if end_offset > MAX_VRAM_OFFSET + 1 {
+ return Err(EINVAL);
+ }
+
+ // Calculate which 64KB-aligned base we need.
+ let needed_base = vram_offset.align_down(WINDOW_ALIGN);
+
+ // Calculate offset within the window.
+ let offset_in_window = vram_offset - needed_base;
+
+ // Check if access fits in 1MB window from this base.
+ if offset_in_window + access_size > PRAMIN_SIZE {
+ return Err(EINVAL);
+ }
+
+ // Return bar offset and whether window needs repositioning.
+ let new_base = if self.state.current_base != needed_base {
+ Some(needed_base)
+ } else {
+ None
+ };
+
+ Ok((PRAMIN_BASE + offset_in_window, new_base))
+ }
+
+ define_pramin_read!(try_read8, u8);
+ define_pramin_read!(try_read16, u16);
+ define_pramin_read!(try_read32, u32);
+ define_pramin_read!(try_read64, u64);
+
+ define_pramin_write!(try_write8, u8);
+ define_pramin_write!(try_write16, u16);
+ define_pramin_write!(try_write32, u32);
+ define_pramin_write!(try_write64, u64);
+}
+
+impl Drop for PraminWindow<'_> {
+ fn drop(&mut self) {
+ // Restore the original window base if it changed.
+ if self.state.current_base != self.saved_base {
+ if let Some(bar) = self.bar.try_access() {
+ Self::write_window_base(&bar, self.saved_base);
+
+ // Update state to reflect the restored base.
+ self.state.current_base = self.saved_base;
+ }
+ }
+ // MutexGuard drops automatically, releasing the lock.
+ }
+}
diff --git a/drivers/gpu/nova-core/nova_core.rs b/drivers/gpu/nova-core/nova_core.rs
index c1121e7c64c5..3de00db3279e 100644
--- a/drivers/gpu/nova-core/nova_core.rs
+++ b/drivers/gpu/nova-core/nova_core.rs
@@ -13,6 +13,7 @@
mod gfw;
mod gpu;
mod gsp;
+mod mm;
mod num;
mod regs;
mod sbuffer;
diff --git a/drivers/gpu/nova-core/regs.rs b/drivers/gpu/nova-core/regs.rs
index ea0d32f5396c..d0982e346f74 100644
--- a/drivers/gpu/nova-core/regs.rs
+++ b/drivers/gpu/nova-core/regs.rs
@@ -102,6 +102,11 @@ fn fmt(&self, f: &mut kernel::fmt::Formatter<'_>) -> kernel::fmt::Result {
31:16 frts_err_code as u16;
});
+register!(NV_PBUS_BAR0_WINDOW @ 0x00001700, "BAR0 window control for PRAMIN access" {
+ 25:24 target as u8, "Target memory (0=VRAM, 1=SYS_MEM_COH, 2=SYS_MEM_NONCOH)";
+ 23:0 window_base as u32, "Window base address (bits 39:16 of FB addr)";
+});
+
// PFB
// The following two registers together hold the physical system memory address that is used by the
--
2.34.1
* [PATCH v8 07/25] docs: gpu: nova-core: Document the PRAMIN aperture mechanism
From: Joel Fernandes @ 2026-02-24 22:53 UTC (permalink / raw)
To: linux-kernel
Add documentation for the PRAMIN aperture mechanism used by nova-core
for direct VRAM access.
Nova only uses TARGET=VID_MEM for VRAM access. The SYS_MEM target values
are documented for completeness but not used by the driver.
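
The window-base arithmetic described in this document can be sketched in plain Rust (a userspace illustration, not driver code; `pramin_access` is a hypothetical helper, and the constants mirror the 64KB alignment and the BAR0 + 0x700000 window start):

```rust
const WINDOW_ALIGN: u64 = 64 * 1024; // 64KB window base alignment
const PRAMIN_BASE: u64 = 0x0070_0000; // PRAMIN window start within BAR0

/// Return the BAR0_WINDOW base field and the BAR0 offset for a VRAM address.
fn pramin_access(vram_addr: u64) -> (u32, u64) {
    // 64KB-aligned window base directly below the target address.
    let base = vram_addr & !(WINDOW_ALIGN - 1);
    // The BASE_ADDR field holds bits [39:16], i.e. the base shifted right by 16.
    let base_addr_field = (base >> 16) as u32;
    // CPU accesses go through BAR0 at PRAMIN_BASE plus the in-window offset.
    let bar0_offset = PRAMIN_BASE + (vram_addr - base);
    (base_addr_field, bar0_offset)
}

fn main() {
    let (field, offset) = pramin_access(0x1_2345_6789);
    assert_eq!(field, 0x12345);
    assert_eq!(offset, 0x0070_6789);
}
```

Note the sketch always aligns the base directly below the target address; the driver may keep an existing base as long as the access still fits in the 1MB window.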
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
Documentation/gpu/nova/core/pramin.rst | 125 +++++++++++++++++++++++++
Documentation/gpu/nova/index.rst | 1 +
2 files changed, 126 insertions(+)
create mode 100644 Documentation/gpu/nova/core/pramin.rst
diff --git a/Documentation/gpu/nova/core/pramin.rst b/Documentation/gpu/nova/core/pramin.rst
new file mode 100644
index 000000000000..55ec9d920629
--- /dev/null
+++ b/Documentation/gpu/nova/core/pramin.rst
@@ -0,0 +1,125 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=========================
+PRAMIN aperture mechanism
+=========================
+
+.. note::
+ The following description is approximate and current as of the Ampere family.
+ It may change for future generations and is intended to assist in understanding
+ the driver code.
+
+Introduction
+============
+
+PRAMIN is a hardware aperture mechanism that provides CPU access to GPU Video RAM (VRAM) before
+the GPU's Memory Management Unit (MMU) and page tables are initialized. This 1MB sliding window,
+located at a fixed offset within BAR0, is essential for setting up page tables and other critical
+GPU data structures without relying on the GPU's MMU.
+
+Architecture Overview
+=====================
+
+The PRAMIN aperture mechanism is logically implemented by the GPU's PBUS (PCIe Bus Controller Unit)
+and provides a CPU-accessible window into VRAM through the PCIe interface::
+
+ +-----------------+ PCIe +------------------------------+
+ | CPU |<----------->| GPU |
+ +-----------------+ | |
+ | +----------------------+ |
+ | | PBUS | |
+ | | (Bus Controller) | |
+ | | | |
+ | | +--------------+<------------ (window starts at
+ | | | PRAMIN | | | BAR0 + 0x700000)
+ | | | Window | | |
+ | | | (1MB) | | |
+ | | +--------------+ | |
+ | | | | |
+ | +---------|------------+ |
+ | | |
+ | v |
+ | +----------------------+<------------ (Program PRAMIN to any
+ | | VRAM | | 64KB-aligned VRAM boundary)
+ | | (Several GBs) | |
+ | | | |
+ | | FB[0x000000000000] | |
+ | | ... | |
+ | | FB[0x7FFFFFFFFFF] | |
+ | +----------------------+ |
+ +------------------------------+
+
+PBUS (PCIe Bus Controller) is responsible for, among other things, handling MMIO
+accesses to the BAR registers.
+
+PRAMIN Window Operation
+=======================
+
+The PRAMIN window provides a 1MB sliding aperture that can be repositioned over
+the entire VRAM address space using the ``NV_PBUS_BAR0_WINDOW`` register.
+
+Window Control Mechanism
+-------------------------
+
+The window position is controlled via the PBUS ``BAR0_WINDOW`` register::
+
+ NV_PBUS_BAR0_WINDOW Register (0x1700):
+ +-------+--------+--------------------------------------+
+ | 31:26 | 25:24 | 23:0 |
+ | RSVD | TARGET | BASE_ADDR |
+ | | | (bits 39:16 of VRAM address) |
+ +-------+--------+--------------------------------------+
+
+ BASE_ADDR field (bits 23:0):
+ - Contains bits [39:16] of the target VRAM address
+ - Provides 40-bit (1TB) address space coverage
+ - Must be programmed with 64KB-aligned addresses
+
+ TARGET field (bits 25:24):
+ - 0x0: VRAM (Video Memory)
+ - 0x1: SYS_MEM_COH (Coherent System Memory)
+ - 0x2: SYS_MEM_NONCOH (Non-coherent System Memory)
+ - 0x3: Reserved
+
+ .. note::
+ Nova only uses TARGET=VRAM (0x0) for video memory access. The SYS_MEM
+ target values are documented here for hardware completeness but are
+ not used by the driver.
+
+64KB Alignment Requirement
+---------------------------
+
+The PRAMIN window must be aligned to 64KB boundaries in VRAM. This is enforced
+by the ``BASE_ADDR`` field representing bits [39:16] of the target address::
+
+ VRAM Address Calculation:
+ actual_vram_addr = (BASE_ADDR << 16) + pramin_offset
+ Where:
+ - BASE_ADDR: 24-bit value from NV_PBUS_BAR0_WINDOW[23:0]
+ - pramin_offset: 20-bit offset within the PRAMIN window [0x00000-0xFFFFF]
+
+ Example Window Positioning:
+ +---------------------------------------------------------+
+ | VRAM Space |
+ | |
+ | 0x000000000 +-----------------+ <-- 64KB aligned |
+ | | PRAMIN Window | |
+ | | (1MB) | |
+ | 0x0000FFFFF +-----------------+ |
+ | |
+ | | ^ |
+ | | | Window can slide |
+ | v | to any 64KB-aligned boundary |
+ | |
+ | 0x123400000 +-----------------+ <-- 64KB aligned |
+ | | PRAMIN Window | |
+ | | (1MB) | |
+ | 0x1234FFFFF +-----------------+ |
+ | |
+ | ... |
+ | |
+ | 0x7FFFF0000 +-----------------+ <-- 64KB aligned |
+ | | PRAMIN Window | |
+ | | (1MB) | |
+ | 0x7FFFFFFFF +-----------------+ |
+ +---------------------------------------------------------+
diff --git a/Documentation/gpu/nova/index.rst b/Documentation/gpu/nova/index.rst
index e39cb3163581..b8254b1ffe2a 100644
--- a/Documentation/gpu/nova/index.rst
+++ b/Documentation/gpu/nova/index.rst
@@ -32,3 +32,4 @@ vGPU manager VFIO driver and the nova-drm driver.
core/devinit
core/fwsec
core/falcon
+ core/pramin
--
2.34.1
* [PATCH v8 08/25] gpu: nova-core: mm: Add common memory management types
From: Joel Fernandes @ 2026-02-24 22:53 UTC (permalink / raw)
To: linux-kernel
Add foundational types for GPU memory management. These types are used
throughout the nova memory management subsystem for page table
operations, address translation, and memory allocation.
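
The 4KiB page split these types encode (bits [11:0] as the in-page offset, bits [63:12] as the frame number) amounts to a shift and a mask; a minimal standalone sketch with hypothetical helper names:

```rust
const PAGE_SHIFT: u32 = 12; // 4KiB pages
const PAGE_MASK: u64 = (1 << PAGE_SHIFT) - 1;

/// Split a VRAM address into (frame number, offset within the 4KiB page).
fn split_vram_address(addr: u64) -> (u64, u64) {
    (addr >> PAGE_SHIFT, addr & PAGE_MASK)
}

/// Rebuild a VRAM address from a frame number and in-page offset.
fn join_vram_address(pfn: u64, offset: u64) -> u64 {
    (pfn << PAGE_SHIFT) | (offset & PAGE_MASK)
}

fn main() {
    let (pfn, off) = split_vram_address(0x1234_5678);
    assert_eq!((pfn, off), (0x12345, 0x678));
    assert_eq!(join_vram_address(pfn, off), 0x1234_5678);
}
```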
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm.rs | 174 ++++++++++++++++++++++++++++++++++++
1 file changed, 174 insertions(+)
diff --git a/drivers/gpu/nova-core/mm.rs b/drivers/gpu/nova-core/mm.rs
index 7a5dd4220c67..a8b2e1870566 100644
--- a/drivers/gpu/nova-core/mm.rs
+++ b/drivers/gpu/nova-core/mm.rs
@@ -2,4 +2,178 @@
//! Memory management subsystems for nova-core.
+#![expect(dead_code)]
+
pub(crate) mod pramin;
+
+use kernel::sizes::SZ_4K;
+
+use crate::num::u64_as_usize;
+
+/// Page size in bytes (4 KiB).
+pub(crate) const PAGE_SIZE: usize = SZ_4K;
+
+bitfield! {
+ pub(crate) struct VramAddress(u64), "Physical VRAM address in GPU video memory" {
+ 11:0 offset as u64, "Offset within 4KB page";
+ 63:12 frame_number as u64 => Pfn, "Physical frame number";
+ }
+}
+
+impl VramAddress {
+ /// Create a new VRAM address from a raw value.
+ pub(crate) const fn new(addr: u64) -> Self {
+ Self(addr)
+ }
+
+ /// Get the raw address value as `usize` (useful for MMIO offsets).
+ pub(crate) const fn raw(&self) -> usize {
+ u64_as_usize(self.0)
+ }
+
+ /// Get the raw address value as `u64`.
+ pub(crate) const fn raw_u64(&self) -> u64 {
+ self.0
+ }
+}
+
+impl PartialEq for VramAddress {
+ fn eq(&self, other: &Self) -> bool {
+ self.0 == other.0
+ }
+}
+
+impl Eq for VramAddress {}
+
+impl PartialOrd for VramAddress {
+ fn partial_cmp(&self, other: &Self) -> Option<core::cmp::Ordering> {
+ Some(self.cmp(other))
+ }
+}
+
+impl Ord for VramAddress {
+ fn cmp(&self, other: &Self) -> core::cmp::Ordering {
+ self.0.cmp(&other.0)
+ }
+}
+
+impl From<Pfn> for VramAddress {
+ fn from(pfn: Pfn) -> Self {
+ Self::default().set_frame_number(pfn)
+ }
+}
+
+bitfield! {
+ pub(crate) struct VirtualAddress(u64), "Virtual address in GPU address space" {
+ 11:0 offset as u64, "Offset within 4KB page";
+ 20:12 l4_index as u64, "Level 4 index (PTE)";
+ 29:21 l3_index as u64, "Level 3 index (Dual PDE)";
+ 38:30 l2_index as u64, "Level 2 index";
+ 47:39 l1_index as u64, "Level 1 index";
+ 56:48 l0_index as u64, "Level 0 index (PDB)";
+ 63:12 frame_number as u64 => Vfn, "Virtual frame number";
+ }
+}
+
+impl VirtualAddress {
+ /// Create a new virtual address from a raw value.
+ #[expect(dead_code)]
+ pub(crate) const fn new(addr: u64) -> Self {
+ Self(addr)
+ }
+
+ /// Get the page table index for a given level (0-5).
+ pub(crate) fn level_index(&self, level: u64) -> u64 {
+ match level {
+ 0 => self.l0_index(),
+ 1 => self.l1_index(),
+ 2 => self.l2_index(),
+ 3 => self.l3_index(),
+ 4 => self.l4_index(),
+
+ // L5 is only used by MMU v3; its PTE level occupies the same VA bits as L4.
+ 5 => self.l4_index(),
+ _ => 0,
+ }
+ }
+}
+
+impl From<Vfn> for VirtualAddress {
+ fn from(vfn: Vfn) -> Self {
+ Self::default().set_frame_number(vfn)
+ }
+}
+
+/// Physical Frame Number.
+///
+/// Represents a physical page in VRAM.
+#[repr(transparent)]
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
+pub(crate) struct Pfn(u64);
+
+impl Pfn {
+ /// Create a new PFN from a frame number.
+ pub(crate) const fn new(frame_number: u64) -> Self {
+ Self(frame_number)
+ }
+
+ /// Get the raw frame number.
+ pub(crate) const fn raw(self) -> u64 {
+ self.0
+ }
+}
+
+impl From<VramAddress> for Pfn {
+ fn from(addr: VramAddress) -> Self {
+ addr.frame_number()
+ }
+}
+
+impl From<u64> for Pfn {
+ fn from(val: u64) -> Self {
+ Self(val)
+ }
+}
+
+impl From<Pfn> for u64 {
+ fn from(pfn: Pfn) -> Self {
+ pfn.0
+ }
+}
+
+/// Virtual Frame Number.
+///
+/// Represents a virtual page in GPU address space.
+#[repr(transparent)]
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
+pub(crate) struct Vfn(u64);
+
+impl Vfn {
+ /// Create a new VFN from a frame number.
+ pub(crate) const fn new(frame_number: u64) -> Self {
+ Self(frame_number)
+ }
+
+ /// Get the raw frame number.
+ pub(crate) const fn raw(self) -> u64 {
+ self.0
+ }
+}
+
+impl From<VirtualAddress> for Vfn {
+ fn from(addr: VirtualAddress) -> Self {
+ addr.frame_number()
+ }
+}
+
+impl From<u64> for Vfn {
+ fn from(val: u64) -> Self {
+ Self(val)
+ }
+}
+
+impl From<Vfn> for u64 {
+ fn from(vfn: Vfn) -> Self {
+ vfn.0
+ }
+}
--
2.34.1
* [PATCH v8 09/25] gpu: nova-core: mm: Add TLB flush support
From: Joel Fernandes @ 2026-02-24 22:53 UTC (permalink / raw)
To: linux-kernel
Add TLB (Translation Lookaside Buffer) flush support for GPU MMU.
After modifying page table entries, the GPU's TLB must be invalidated
to ensure the new mappings take effect. The Tlb struct provides flush
functionality through BAR0 registers.
The flush operation writes the page directory base address and triggers
an invalidation, polling for completion with a 2-second timeout, matching
the Nouveau driver.
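
The poll-for-completion pattern can be illustrated in plain Rust (a hedged userspace sketch of what the kernel's `read_poll_timeout()` does here; `poll_until` and the simulated status reads are illustrative, not driver code):

```rust
use std::time::{Duration, Instant};

/// Poll a status source until the "enable" bit (bit 31) clears or the
/// timeout expires, mirroring the completion condition on NV_TLB_FLUSH_CTRL.
fn poll_until<F>(mut read_status: F, timeout: Duration) -> Result<u32, &'static str>
where
    F: FnMut() -> u32,
{
    let start = Instant::now();
    loop {
        let v = read_status();
        // Completion: hardware clears the enable bit when the flush is done.
        if v & (1 << 31) == 0 {
            return Ok(v);
        }
        if start.elapsed() > timeout {
            return Err("timeout");
        }
    }
}

fn main() {
    // Simulated hardware: busy for the first three reads, then done.
    let mut reads = 0;
    let status = poll_until(
        || {
            reads += 1;
            if reads < 4 { 1u32 << 31 } else { 0 }
        },
        Duration::from_secs(2),
    );
    assert_eq!(status, Ok(0));
}
```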
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm.rs | 1 +
drivers/gpu/nova-core/mm/tlb.rs | 90 +++++++++++++++++++++++++++++++++
drivers/gpu/nova-core/regs.rs | 33 ++++++++++++
3 files changed, 124 insertions(+)
create mode 100644 drivers/gpu/nova-core/mm/tlb.rs
diff --git a/drivers/gpu/nova-core/mm.rs b/drivers/gpu/nova-core/mm.rs
index a8b2e1870566..ca685b5a44f3 100644
--- a/drivers/gpu/nova-core/mm.rs
+++ b/drivers/gpu/nova-core/mm.rs
@@ -5,6 +5,7 @@
#![expect(dead_code)]
pub(crate) mod pramin;
+pub(crate) mod tlb;
use kernel::sizes::SZ_4K;
diff --git a/drivers/gpu/nova-core/mm/tlb.rs b/drivers/gpu/nova-core/mm/tlb.rs
new file mode 100644
index 000000000000..23458395511d
--- /dev/null
+++ b/drivers/gpu/nova-core/mm/tlb.rs
@@ -0,0 +1,90 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! TLB (Translation Lookaside Buffer) flush support for GPU MMU.
+//!
+//! After modifying page table entries, the GPU's TLB must be flushed to
+//! ensure the new mappings take effect. This module provides TLB flush
+//! functionality for virtual memory managers.
+//!
+//! # Example
+//!
+//! ```ignore
+//! use crate::mm::tlb::Tlb;
+//!
+//! fn page_table_update(tlb: &Tlb, pdb_addr: VramAddress) -> Result<()> {
+//! // ... modify page tables ...
+//!
+//! // Flush TLB to make changes visible (polls for completion).
+//! tlb.flush(pdb_addr)?;
+//!
+//! Ok(())
+//! }
+//! ```
+
+use kernel::{
+ devres::Devres,
+ io::poll::read_poll_timeout,
+ new_mutex,
+ prelude::*,
+ sync::{Arc, Mutex},
+ time::Delta, //
+};
+
+use crate::{
+ driver::Bar0,
+ mm::VramAddress,
+ regs, //
+};
+
+/// TLB manager for GPU translation buffer operations.
+#[pin_data]
+pub(crate) struct Tlb {
+ bar: Arc<Devres<Bar0>>,
+ /// TLB flush serialization lock: This lock is acquired during the
+ /// DMA fence signalling critical path. It must NEVER be held across any
+ /// reclaimable CPU memory allocations because the memory reclaim path can
+ /// call `dma_fence_wait()`, which would deadlock with this lock held.
+ #[pin]
+ lock: Mutex<()>,
+}
+
+impl Tlb {
+ /// Create a new TLB manager.
+ pub(super) fn new(bar: Arc<Devres<Bar0>>) -> impl PinInit<Self> {
+ pin_init!(Self {
+ bar,
+ lock <- new_mutex!((), "tlb_flush"),
+ })
+ }
+
+ /// Flush the GPU TLB for a specific page directory base.
+ ///
+ /// This invalidates all TLB entries associated with the given PDB address.
+ /// Must be called after modifying page table entries to ensure the GPU sees
+ /// the updated mappings.
+ pub(crate) fn flush(&self, pdb_addr: VramAddress) -> Result {
+ let _guard = self.lock.lock();
+
+ let bar = self.bar.try_access().ok_or(ENODEV)?;
+
+ // Write PDB address.
+ regs::NV_TLB_FLUSH_PDB_LO::from_pdb_addr(pdb_addr.raw_u64()).write(&*bar);
+ regs::NV_TLB_FLUSH_PDB_HI::from_pdb_addr(pdb_addr.raw_u64()).write(&*bar);
+
+ // Trigger flush: invalidate all pages and enable.
+ regs::NV_TLB_FLUSH_CTRL::default()
+ .set_page_all(true)
+ .set_enable(true)
+ .write(&*bar);
+
+ // Poll for completion - enable bit clears when flush is done.
+ read_poll_timeout(
+ || Ok(regs::NV_TLB_FLUSH_CTRL::read(&*bar)),
+ |ctrl| !ctrl.enable(),
+ Delta::ZERO,
+ Delta::from_secs(2),
+ )?;
+
+ Ok(())
+ }
+}
diff --git a/drivers/gpu/nova-core/regs.rs b/drivers/gpu/nova-core/regs.rs
index d0982e346f74..c948f961f307 100644
--- a/drivers/gpu/nova-core/regs.rs
+++ b/drivers/gpu/nova-core/regs.rs
@@ -454,3 +454,36 @@ pub(crate) mod ga100 {
0:0 display_disabled as bool;
});
}
+
+// MMU TLB
+
+register!(NV_TLB_FLUSH_PDB_LO @ 0x00b830a0, "TLB flush register: PDB address bits [39:8]" {
+ 31:0 pdb_lo as u32, "PDB address bits [39:8]";
+});
+
+impl NV_TLB_FLUSH_PDB_LO {
+ /// Create a register value from a PDB address.
+ ///
+ /// Extracts bits [39:8] of the address by shifting it right by 8 bits.
+ pub(crate) fn from_pdb_addr(addr: u64) -> Self {
+ Self::default().set_pdb_lo(((addr >> 8) & 0xFFFF_FFFF) as u32)
+ }
+}
+
+register!(NV_TLB_FLUSH_PDB_HI @ 0x00b830a4, "TLB flush register: PDB address bits [47:40]" {
+ 7:0 pdb_hi as u8, "PDB address bits [47:40]";
+});
+
+impl NV_TLB_FLUSH_PDB_HI {
+ /// Create a register value from a PDB address.
+ ///
+ /// Extracts bits [47:40] of the address by shifting it right by 40 bits.
+ pub(crate) fn from_pdb_addr(addr: u64) -> Self {
+ Self::default().set_pdb_hi(((addr >> 40) & 0xFF) as u8)
+ }
+}
+
+register!(NV_TLB_FLUSH_CTRL @ 0x00b830b0, "TLB flush control register" {
+ 0:0 page_all as bool, "Invalidate all pages";
+ 31:31 enable as bool, "Enable/trigger flush (clears when flush completes)";
+});
--
2.34.1
* [PATCH v8 10/25] gpu: nova-core: mm: Add GpuMm centralized memory manager
From: Joel Fernandes @ 2026-02-24 22:53 UTC (permalink / raw)
To: linux-kernel
Introduce GpuMm as the centralized GPU memory manager that owns:
- Buddy allocator for VRAM allocation.
- PRAMIN window for direct VRAM access.
- TLB manager for translation buffer operations.
This provides a clean ownership model in which GpuMm exposes accessor
methods for its components for use in memory management operations.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/gpu.rs | 18 ++++++++++
drivers/gpu/nova-core/mm.rs | 66 ++++++++++++++++++++++++++++++++++--
2 files changed, 82 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/nova-core/gpu.rs b/drivers/gpu/nova-core/gpu.rs
index 5e084ec7c926..ed6c5f63b249 100644
--- a/drivers/gpu/nova-core/gpu.rs
+++ b/drivers/gpu/nova-core/gpu.rs
@@ -4,8 +4,13 @@
device,
devres::Devres,
fmt,
+ gpu::buddy::GpuBuddyParams,
pci,
prelude::*,
+ sizes::{
+ SZ_1M,
+ SZ_4K, //
+ },
sync::Arc, //
};
@@ -22,6 +27,7 @@
commands::GetGspStaticInfoReply,
Gsp, //
},
+ mm::GpuMm,
regs,
};
@@ -252,6 +258,9 @@ pub(crate) struct Gpu {
gsp_falcon: Falcon<GspFalcon>,
/// SEC2 falcon instance, used for GSP boot up and cleanup.
sec2_falcon: Falcon<Sec2Falcon>,
+ /// GPU memory manager owning memory management resources.
+ #[pin]
+ mm: GpuMm,
/// GSP runtime data. Temporarily an empty placeholder.
#[pin]
gsp: Gsp,
@@ -286,6 +295,15 @@ pub(crate) fn new<'a>(
sec2_falcon: Falcon::new(pdev.as_ref(), spec.chipset)?,
+ // Create GPU memory manager owning memory management resources.
+ // This will be initialized with the usable VRAM region from GSP in a later
+ // patch. For now, we use a placeholder of 1MB.
+ mm <- GpuMm::new(devres_bar.clone(), GpuBuddyParams {
+ base_offset_bytes: 0,
+ physical_memory_size_bytes: SZ_1M as u64,
+ chunk_size_bytes: SZ_4K as u64,
+ })?,
+
gsp <- Gsp::new(pdev),
gsp_static_info: { gsp.boot(pdev, bar, spec.chipset, gsp_falcon, sec2_falcon)?.0 },
diff --git a/drivers/gpu/nova-core/mm.rs b/drivers/gpu/nova-core/mm.rs
index ca685b5a44f3..a3c84738bac0 100644
--- a/drivers/gpu/nova-core/mm.rs
+++ b/drivers/gpu/nova-core/mm.rs
@@ -7,9 +7,71 @@
pub(crate) mod pramin;
pub(crate) mod tlb;
-use kernel::sizes::SZ_4K;
+use kernel::{
+ devres::Devres,
+ gpu::buddy::{
+ GpuBuddy,
+ GpuBuddyParams, //
+ },
+ prelude::*,
+ sizes::SZ_4K,
+ sync::Arc, //
+};
-use crate::num::u64_as_usize;
+use crate::{
+ driver::Bar0,
+ num::u64_as_usize, //
+};
+
+pub(crate) use tlb::Tlb;
+
+/// GPU Memory Manager - owns all core MM components.
+///
+/// Provides centralized ownership of memory management resources:
+/// - [`GpuBuddy`] allocator for VRAM page table allocation.
+/// - [`pramin::Pramin`] for direct VRAM access.
+/// - [`Tlb`] manager for translation buffer flush operations.
+#[pin_data]
+pub(crate) struct GpuMm {
+ buddy: GpuBuddy,
+ #[pin]
+ pramin: pramin::Pramin,
+ #[pin]
+ tlb: Tlb,
+}
+
+impl GpuMm {
+ /// Create a pin-initializer for `GpuMm`.
+ pub(crate) fn new(
+ bar: Arc<Devres<Bar0>>,
+ buddy_params: GpuBuddyParams,
+ ) -> Result<impl PinInit<Self>> {
+ let buddy = GpuBuddy::new(buddy_params)?;
+ let tlb_init = Tlb::new(bar.clone());
+ let pramin_init = pramin::Pramin::new(bar)?;
+
+ Ok(pin_init!(Self {
+ buddy,
+ pramin <- pramin_init,
+ tlb <- tlb_init,
+ }))
+ }
+
+ /// Access the [`GpuBuddy`] allocator.
+ pub(crate) fn buddy(&self) -> &GpuBuddy {
+ &self.buddy
+ }
+
+ /// Access the [`pramin::Pramin`].
+ pub(crate) fn pramin(&self) -> &pramin::Pramin {
+ &self.pramin
+ }
+
+ /// Access the [`Tlb`] manager.
+ pub(crate) fn tlb(&self) -> &Tlb {
+ &self.tlb
+ }
+}
/// Page size in bytes (4 KiB).
pub(crate) const PAGE_SIZE: usize = SZ_4K;
--
2.34.1
* [PATCH v8 11/25] gpu: nova-core: mm: Use usable VRAM region for buddy allocator
From: Joel Fernandes @ 2026-02-24 22:53 UTC (permalink / raw)
To: linux-kernel
The buddy allocator manages the actual usable VRAM. On my GA102 Ampere
GPU with 24GB of video memory, that is ~23.7GB, enabling proper GPU
memory allocation for driver use.
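
The hand-off from the GSP-reported region to the buddy parameters reduces to simple arithmetic; a standalone sketch under the assumption of a half-open `[start, end)` region (all names hypothetical):

```rust
/// Illustrative stand-in for the driver's GpuBuddyParams.
#[derive(Debug, PartialEq)]
struct BuddyParams {
    base_offset_bytes: u64,
    physical_memory_size_bytes: u64,
    chunk_size_bytes: u64,
}

const SZ_4K: u64 = 4096;

/// Derive buddy-allocator parameters from a usable VRAM region [start, end).
fn params_from_region(start: u64, end: u64) -> Result<BuddyParams, &'static str> {
    if end <= start {
        return Err("empty usable VRAM region");
    }
    Ok(BuddyParams {
        base_offset_bytes: start,
        physical_memory_size_bytes: end - start,
        chunk_size_bytes: SZ_4K,
    })
}

fn main() {
    // e.g. ~23.75 GiB usable on a 24GB GPU, the rest reserved by GSP.
    let p = params_from_region(0, 0x5_F000_0000).unwrap();
    assert_eq!(p.physical_memory_size_bytes, 0x5_F000_0000);
}
```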
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/gpu.rs | 66 +++++++++++++++++++++------
drivers/gpu/nova-core/gsp/boot.rs | 7 ++-
drivers/gpu/nova-core/gsp/commands.rs | 1 -
3 files changed, 58 insertions(+), 16 deletions(-)
diff --git a/drivers/gpu/nova-core/gpu.rs b/drivers/gpu/nova-core/gpu.rs
index ed6c5f63b249..ba769de2f984 100644
--- a/drivers/gpu/nova-core/gpu.rs
+++ b/drivers/gpu/nova-core/gpu.rs
@@ -1,5 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
+use core::cell::Cell;
+
use kernel::{
device,
devres::Devres,
@@ -7,10 +9,7 @@
gpu::buddy::GpuBuddyParams,
pci,
prelude::*,
- sizes::{
- SZ_1M,
- SZ_4K, //
- },
+ sizes::SZ_4K,
sync::Arc, //
};
@@ -28,9 +27,17 @@
Gsp, //
},
mm::GpuMm,
+ num::IntoSafeCast,
regs,
};
+/// Parameters extracted from GSP boot for initializing memory subsystems.
+#[derive(Clone, Copy)]
+struct BootParams {
+ usable_vram_start: u64,
+ usable_vram_size: u64,
+}
+
macro_rules! define_chipset {
({ $($variant:ident = $value:expr),* $(,)* }) =>
{
@@ -274,6 +281,13 @@ pub(crate) fn new<'a>(
devres_bar: Arc<Devres<Bar0>>,
bar: &'a Bar0,
) -> impl PinInit<Self, Error> + 'a {
+ // Cell to share boot parameters between GSP boot and subsequent initializations.
+ // Contains usable VRAM region from FbLayout for use by the buddy allocator.
+ let boot_params: Cell<BootParams> = Cell::new(BootParams {
+ usable_vram_start: 0,
+ usable_vram_size: 0,
+ });
+
try_pin_init!(Self {
spec: Spec::new(pdev.as_ref(), bar).inspect(|spec| {
dev_info!(pdev.as_ref(),"NVIDIA ({})\n", spec);
@@ -295,18 +309,42 @@ pub(crate) fn new<'a>(
sec2_falcon: Falcon::new(pdev.as_ref(), spec.chipset)?,
- // Create GPU memory manager owning memory management resources.
- // This will be initialized with the usable VRAM region from GSP in a later
- // patch. For now, we use a placeholder of 1MB.
- mm <- GpuMm::new(devres_bar.clone(), GpuBuddyParams {
- base_offset_bytes: 0,
- physical_memory_size_bytes: SZ_1M as u64,
- chunk_size_bytes: SZ_4K as u64,
- })?,
-
gsp <- Gsp::new(pdev),
- gsp_static_info: { gsp.boot(pdev, bar, spec.chipset, gsp_falcon, sec2_falcon)?.0 },
+ // Boot GSP and extract the usable VRAM region for the buddy allocator.
+ gsp_static_info: {
+ let (info, fb_layout) = gsp.boot(pdev, bar, spec.chipset, gsp_falcon, sec2_falcon)?;
+
+ let usable_vram = fb_layout.usable_vram.as_ref().ok_or_else(|| {
+ dev_err!(pdev.as_ref(), "No usable FB regions found from GSP\n");
+ ENODEV
+ })?;
+
+ dev_info!(
+ pdev.as_ref(),
+ "Using FB region: {:#x}..{:#x}\n",
+ usable_vram.start,
+ usable_vram.end
+ );
+
+ boot_params.set(BootParams {
+ usable_vram_start: usable_vram.start,
+ usable_vram_size: usable_vram.end - usable_vram.start,
+ });
+
+ info
+ },
+
+ // Create the GPU memory manager owning memory management resources.
+ // Uses the usable VRAM region reported by GSP for the buddy allocator.
+ mm <- {
+ let params = boot_params.get();
+ GpuMm::new(devres_bar.clone(), GpuBuddyParams {
+ base_offset_bytes: params.usable_vram_start,
+ physical_memory_size_bytes: params.usable_vram_size,
+ chunk_size_bytes: SZ_4K.into_safe_cast(),
+ })?
+ },
bar: devres_bar,
})
diff --git a/drivers/gpu/nova-core/gsp/boot.rs b/drivers/gpu/nova-core/gsp/boot.rs
index 7a4a0c759267..bc4446282613 100644
--- a/drivers/gpu/nova-core/gsp/boot.rs
+++ b/drivers/gpu/nova-core/gsp/boot.rs
@@ -150,7 +150,7 @@ pub(crate) fn boot(
let gsp_fw = KBox::pin_init(GspFirmware::new(dev, chipset, FIRMWARE_VERSION), GFP_KERNEL)?;
- let fb_layout = FbLayout::new(chipset, bar, &gsp_fw)?;
+ let mut fb_layout = FbLayout::new(chipset, bar, &gsp_fw)?;
dev_dbg!(dev, "{:#x?}\n", fb_layout);
Self::run_fwsec_frts(dev, gsp_falcon, bar, &bios, &fb_layout)?;
@@ -252,6 +252,11 @@ pub(crate) fn boot(
Err(e) => dev_warn!(pdev.as_ref(), "GPU name unavailable: {:?}\n", e),
}
+ // Populate usable VRAM from GSP response.
+ if let Some((base, size)) = info.usable_fb_region() {
+ fb_layout.set_usable_vram(base, size);
+ }
+
Ok((info, fb_layout))
}
}
diff --git a/drivers/gpu/nova-core/gsp/commands.rs b/drivers/gpu/nova-core/gsp/commands.rs
index c31046df3acf..fc9ba08f9f8d 100644
--- a/drivers/gpu/nova-core/gsp/commands.rs
+++ b/drivers/gpu/nova-core/gsp/commands.rs
@@ -234,7 +234,6 @@ pub(crate) fn gpu_name(&self) -> core::result::Result<&str, GpuNameError> {
/// Returns the usable FB region `(base, size)` for driver allocation, which was
/// already retrieved from the GSP.
- #[expect(dead_code)]
pub(crate) fn usable_fb_region(&self) -> Option<(u64, u64)> {
self.usable_fb_region
}
--
2.34.1
^ permalink raw reply related [flat|nested] 55+ messages in thread
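The `Cell`-based hand-off between the GSP boot step and the memory-manager initialization in the hunk above can be sketched in plain Rust. This is an illustration only: the step functions and field values below are hypothetical stand-ins, not the driver's actual code.

```rust
use std::cell::Cell;

#[derive(Clone, Copy, Default)]
struct BootParams {
    usable_vram_start: u64,
    usable_vram_size: u64,
}

// Stand-in for the GSP boot step: publishes the VRAM region it discovered.
fn boot_step(params: &Cell<BootParams>) {
    params.set(BootParams {
        usable_vram_start: 0x10_0000,
        usable_vram_size: 0x400_0000,
    });
}

// Stand-in for the memory-manager init step: consumes the published region.
fn mm_init_step(params: &Cell<BootParams>) -> (u64, u64) {
    let p = params.get();
    (p.usable_vram_start, p.usable_vram_size)
}

fn main() {
    let params = Cell::new(BootParams::default());
    boot_step(&params);
    let (base, size) = mm_init_step(&params);
    assert_eq!(base, 0x10_0000);
    assert_eq!(size, 0x400_0000);
}
```

Because `Cell` provides interior mutability for `Copy` types without borrowing, both closures in `try_pin_init!` can touch it by shared reference, which is why it fits this single-threaded initialization sequence.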
* [PATCH v8 12/25] gpu: nova-core: mm: Add common types for all page table formats
2026-02-24 22:52 [PATCH v8 00/25] gpu: nova-core: Add memory management support Joel Fernandes
` (10 preceding siblings ...)
2026-02-24 22:53 ` [PATCH v8 11/25] gpu: nova-core: mm: Use usable VRAM region for buddy allocator Joel Fernandes
@ 2026-02-24 22:53 ` Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 13/25] gpu: nova-core: mm: Add MMU v2 page table types Joel Fernandes
` (13 subsequent siblings)
25 siblings, 1 reply; 55+ messages in thread
From: Joel Fernandes @ 2026-02-24 22:53 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
Add common page table types shared between MMU v2 and v3: the page table
level hierarchy and the memory aperture types for PDEs and PTEs. These
types are hardware-agnostic.
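As a standalone sketch of the level hierarchy this patch introduces (mirroring `PageTableLevel::next()`; this is an illustration, not the kernel type itself), walking from the root always terminates at L5:

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum PageTableLevel { Pdb, L1, L2, L3, L4, L5 }

impl PageTableLevel {
    /// Next level down the hierarchy, or None at the leaf.
    const fn next(self) -> Option<PageTableLevel> {
        match self {
            Self::Pdb => Some(Self::L1),
            Self::L1 => Some(Self::L2),
            Self::L2 => Some(Self::L3),
            Self::L3 => Some(Self::L4),
            Self::L4 => Some(Self::L5),
            Self::L5 => None,
        }
    }
}

fn main() {
    // Walking from the root must terminate after five steps at L5.
    let mut level = PageTableLevel::Pdb;
    let mut depth = 0;
    while let Some(next) = level.next() {
        level = next;
        depth += 1;
    }
    assert_eq!(depth, 5);
    assert_eq!(level, PageTableLevel::L5);
}
```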
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm.rs | 1 +
drivers/gpu/nova-core/mm/pagetable.rs | 149 ++++++++++++++++++++++++++
2 files changed, 150 insertions(+)
create mode 100644 drivers/gpu/nova-core/mm/pagetable.rs
diff --git a/drivers/gpu/nova-core/mm.rs b/drivers/gpu/nova-core/mm.rs
index a3c84738bac0..7a76af313d53 100644
--- a/drivers/gpu/nova-core/mm.rs
+++ b/drivers/gpu/nova-core/mm.rs
@@ -4,6 +4,7 @@
#![expect(dead_code)]
+pub(crate) mod pagetable;
pub(crate) mod pramin;
pub(crate) mod tlb;
diff --git a/drivers/gpu/nova-core/mm/pagetable.rs b/drivers/gpu/nova-core/mm/pagetable.rs
new file mode 100644
index 000000000000..aea06e5da4ff
--- /dev/null
+++ b/drivers/gpu/nova-core/mm/pagetable.rs
@@ -0,0 +1,149 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Common page table types shared between MMU v2 and v3.
+//!
+//! This module provides foundational types used by both MMU versions:
+//! - Page table level hierarchy
+//! - Memory aperture types for PDEs and PTEs
+
+#![expect(dead_code)]
+
+use crate::gpu::Architecture;
+
+/// MMU version enumeration.
+#[derive(Debug, Clone, Copy, PartialEq, Eq)]
+pub(crate) enum MmuVersion {
+ /// MMU v2 for Turing/Ampere/Ada.
+ V2,
+ /// MMU v3 for Hopper and later.
+ V3,
+}
+
+impl From<Architecture> for MmuVersion {
+ fn from(arch: Architecture) -> Self {
+ match arch {
+ Architecture::Turing | Architecture::Ampere | Architecture::Ada => Self::V2,
+ // In the future, uncomment:
+ // _ => Self::V3,
+ }
+ }
+}
+
+/// Page Table Level hierarchy for MMU v2/v3.
+#[derive(Debug, Clone, Copy, PartialEq, Eq)]
+pub(crate) enum PageTableLevel {
+ /// Level 0 - Page Directory Base (root).
+ Pdb,
+ /// Level 1 - Intermediate page directory.
+ L1,
+ /// Level 2 - Intermediate page directory.
+ L2,
+ /// Level 3 - Intermediate page directory or dual PDE (version-dependent).
+ L3,
+ /// Level 4 - PTE level for v2, intermediate page directory for v3.
+ L4,
+ /// Level 5 - PTE level used for MMU v3 only.
+ L5,
+}
+
+impl PageTableLevel {
+ /// Number of entries per page table (512 for 4KB pages).
+ pub(crate) const ENTRIES_PER_TABLE: usize = 512;
+
+ /// Get the next level in the hierarchy.
+ pub(crate) const fn next(&self) -> Option<PageTableLevel> {
+ match self {
+ Self::Pdb => Some(Self::L1),
+ Self::L1 => Some(Self::L2),
+ Self::L2 => Some(Self::L3),
+ Self::L3 => Some(Self::L4),
+ Self::L4 => Some(Self::L5),
+ Self::L5 => None,
+ }
+ }
+
+ /// Convert level to index.
+ pub(crate) const fn as_index(&self) -> u64 {
+ match self {
+ Self::Pdb => 0,
+ Self::L1 => 1,
+ Self::L2 => 2,
+ Self::L3 => 3,
+ Self::L4 => 4,
+ Self::L5 => 5,
+ }
+ }
+}
+
+/// Memory aperture for Page Table Entries (`PTE`s).
+///
+/// Determines which memory region the `PTE` points to.
+#[repr(u8)]
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
+pub(crate) enum AperturePte {
+ /// Local video memory (VRAM).
+ #[default]
+ VideoMemory = 0,
+ /// Peer GPU's video memory.
+ PeerMemory = 1,
+ /// System memory with cache coherence.
+ SystemCoherent = 2,
+ /// System memory without cache coherence.
+ SystemNonCoherent = 3,
+}
+
+// TODO[FPRI]: Replace with `#[derive(FromPrimitive)]` when available.
+impl From<u8> for AperturePte {
+ fn from(val: u8) -> Self {
+ match val {
+ 0 => Self::VideoMemory,
+ 1 => Self::PeerMemory,
+ 2 => Self::SystemCoherent,
+ 3 => Self::SystemNonCoherent,
+ _ => Self::VideoMemory,
+ }
+ }
+}
+
+// TODO[FPRI]: Replace with `#[derive(ToPrimitive)]` when available.
+impl From<AperturePte> for u8 {
+ fn from(val: AperturePte) -> Self {
+ val as u8
+ }
+}
+
+/// Memory aperture for Page Directory Entries (`PDE`s).
+///
+/// Note: For `PDE`s, `Invalid` (0) means the entry is not valid.
+#[repr(u8)]
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
+pub(crate) enum AperturePde {
+ /// Invalid/unused entry.
+ #[default]
+ Invalid = 0,
+ /// Page table is in video memory.
+ VideoMemory = 1,
+ /// Page table is in system memory with coherence.
+ SystemCoherent = 2,
+ /// Page table is in system memory without coherence.
+ SystemNonCoherent = 3,
+}
+
+// TODO[FPRI]: Replace with `#[derive(FromPrimitive)]` when available.
+impl From<u8> for AperturePde {
+ fn from(val: u8) -> Self {
+ match val {
+ 1 => Self::VideoMemory,
+ 2 => Self::SystemCoherent,
+ 3 => Self::SystemNonCoherent,
+ _ => Self::Invalid,
+ }
+ }
+}
+
+// TODO[FPRI]: Replace with `#[derive(ToPrimitive)]` when available.
+impl From<AperturePde> for u8 {
+ fn from(val: AperturePde) -> Self {
+ val as u8
+ }
+}
--
2.34.1
* [PATCH v8 13/25] gpu: nova-core: mm: Add MMU v2 page table types
2026-02-24 22:52 [PATCH v8 00/25] gpu: nova-core: Add memory management support Joel Fernandes
` (11 preceding siblings ...)
2026-02-24 22:53 ` [PATCH v8 12/25] gpu: nova-core: mm: Add common types for all page table formats Joel Fernandes
@ 2026-02-24 22:53 ` Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 14/25] gpu: nova-core: mm: Add MMU v3 " Joel Fernandes
` (12 subsequent siblings)
25 siblings, 1 reply; 55+ messages in thread
From: Joel Fernandes @ 2026-02-24 22:53 UTC (permalink / raw)
To: linux-kernel
Add page table entry and directory structures for MMU version 2
used by Turing/Ampere/Ada GPUs.
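One subtlety in the v2 format is the inverted valid bit on PDEs: bit 0 set means *invalid*, so a zeroed big-PDE half behaves as valid-but-empty. A minimal sketch of that encoding, using the bit positions from the patch (the helper names here are hypothetical, and only the vidmem fields are modeled):

```rust
// Bit layout assumptions, mirroring the v2 PDE bitfield below:
// bit 0: valid (inverted: 0 = valid), bits 2:1: aperture,
// bits 32:8: table frame number for video memory.
const APERTURE_INVALID: u64 = 0;
const APERTURE_VIDMEM: u64 = 1;
const FRAME_MASK: u64 = (1 << 25) - 1; // bits 32:8 = 25 bits

fn pde_new_vram(table_pfn: u64) -> u64 {
    // valid_inverted = 0 (valid), aperture = vidmem, frame in bits 32:8.
    (APERTURE_VIDMEM << 1) | ((table_pfn & FRAME_MASK) << 8)
}

fn pde_is_valid(pde: u64) -> bool {
    let valid_inverted = pde & 1 != 0;
    let aperture = (pde >> 1) & 0b11;
    !valid_inverted && aperture != APERTURE_INVALID
}

fn main() {
    let pde = pde_new_vram(0x1234);
    assert!(pde_is_valid(pde));
    assert_eq!((pde >> 8) & FRAME_MASK, 0x1234);
    // An invalid PDE sets the (inverted) valid bit.
    assert!(!pde_is_valid(1));
}
```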
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm/pagetable.rs | 1 +
drivers/gpu/nova-core/mm/pagetable/ver2.rs | 199 +++++++++++++++++++++
2 files changed, 200 insertions(+)
create mode 100644 drivers/gpu/nova-core/mm/pagetable/ver2.rs
diff --git a/drivers/gpu/nova-core/mm/pagetable.rs b/drivers/gpu/nova-core/mm/pagetable.rs
index aea06e5da4ff..925063fde45d 100644
--- a/drivers/gpu/nova-core/mm/pagetable.rs
+++ b/drivers/gpu/nova-core/mm/pagetable.rs
@@ -7,6 +7,7 @@
//! - Memory aperture types for PDEs and PTEs
#![expect(dead_code)]
+pub(crate) mod ver2;
use crate::gpu::Architecture;
diff --git a/drivers/gpu/nova-core/mm/pagetable/ver2.rs b/drivers/gpu/nova-core/mm/pagetable/ver2.rs
new file mode 100644
index 000000000000..d30a9a8bddbd
--- /dev/null
+++ b/drivers/gpu/nova-core/mm/pagetable/ver2.rs
@@ -0,0 +1,199 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! MMU v2 page table types for Turing, Ampere, and Ada GPUs.
+//!
+//! This module defines MMU version 2 specific types (Turing, Ampere and Ada GPUs).
+//!
+//! Bit field layouts derived from the NVIDIA OpenRM headers:
+//! `open-gpu-kernel-modules/src/common/inc/swref/published/turing/tu102/dev_mmu.h`
+
+#![expect(dead_code)]
+
+use super::{
+ AperturePde,
+ AperturePte,
+ PageTableLevel, //
+};
+use crate::mm::{
+ Pfn,
+ VramAddress, //
+};
+
+/// PDE levels for MMU v2, excluding the L4 PTE level
+/// (full 5-level hierarchy: PDB -> L1 -> L2 -> L3 -> L4).
+pub(crate) const PDE_LEVELS: &[PageTableLevel] = &[
+ PageTableLevel::Pdb,
+ PageTableLevel::L1,
+ PageTableLevel::L2,
+ PageTableLevel::L3,
+];
+
+/// PTE level for MMU v2.
+pub(crate) const PTE_LEVEL: PageTableLevel = PageTableLevel::L4;
+
+/// Dual PDE level for MMU v2 (128-bit entries).
+pub(crate) const DUAL_PDE_LEVEL: PageTableLevel = PageTableLevel::L3;
+
+// Page Table Entry (PTE) for MMU v2 - 64-bit entry at level 4.
+bitfield! {
+ pub(crate) struct Pte(u64), "Page Table Entry for MMU v2" {
+ 0:0 valid as bool, "Entry is valid";
+ 2:1 aperture as u8 => AperturePte, "Memory aperture type";
+ 3:3 volatile as bool, "Volatile (bypass L2 cache)";
+ 4:4 encrypted as bool, "Encryption enabled (Confidential Computing)";
+ 5:5 privilege as bool, "Privileged access only";
+ 6:6 read_only as bool, "Write protection";
+ 7:7 atomic_disable as bool, "Atomic operations disabled";
+ 53:8 frame_number_sys as u64 => Pfn, "Frame number for system memory";
+ 32:8 frame_number_vid as u64 => Pfn, "Frame number for video memory";
+ 35:33 peer_id as u8, "Peer GPU ID for peer memory (0-7)";
+ 53:36 comptagline as u32, "Compression tag line bits";
+ 63:56 kind as u8, "Surface kind/format";
+ }
+}
+
+impl Pte {
+ /// Create a PTE from a `u64` value.
+ pub(crate) fn new(val: u64) -> Self {
+ Self(val)
+ }
+
+ /// Create a valid PTE for video memory.
+ pub(crate) fn new_vram(pfn: Pfn, writable: bool) -> Self {
+ Self::default()
+ .set_valid(true)
+ .set_aperture(AperturePte::VideoMemory)
+ .set_frame_number_vid(pfn)
+ .set_read_only(!writable)
+ }
+
+ /// Create an invalid PTE.
+ pub(crate) fn invalid() -> Self {
+ Self::default()
+ }
+
+ /// Get the frame number based on aperture type.
+ pub(crate) fn frame_number(&self) -> Pfn {
+ match self.aperture() {
+ AperturePte::VideoMemory => self.frame_number_vid(),
+ _ => self.frame_number_sys(),
+ }
+ }
+
+ /// Get the raw `u64` value.
+ pub(crate) fn raw_u64(&self) -> u64 {
+ self.0
+ }
+}
+
+// Page Directory Entry (PDE) for MMU v2 - 64-bit entry at levels 0-2.
+bitfield! {
+ pub(crate) struct Pde(u64), "Page Directory Entry for MMU v2" {
+ 0:0 valid_inverted as bool, "Valid bit (inverted logic)";
+ 2:1 aperture as u8 => AperturePde, "Memory aperture type";
+ 3:3 volatile as bool, "Volatile (bypass L2 cache)";
+ 5:5 no_ats as bool, "Disable Address Translation Services";
+ 53:8 table_frame_sys as u64 => Pfn, "Table frame number for system memory";
+ 32:8 table_frame_vid as u64 => Pfn, "Table frame number for video memory";
+ 35:33 peer_id as u8, "Peer GPU ID (0-7)";
+ }
+}
+
+impl Pde {
+ /// Create a PDE from a `u64` value.
+ pub(crate) fn new(val: u64) -> Self {
+ Self(val)
+ }
+
+ /// Create a valid PDE pointing to a page table in video memory.
+ pub(crate) fn new_vram(table_pfn: Pfn) -> Self {
+ Self::default()
+ .set_valid_inverted(false) // 0 = valid
+ .set_aperture(AperturePde::VideoMemory)
+ .set_table_frame_vid(table_pfn)
+ }
+
+ /// Create an invalid PDE.
+ pub(crate) fn invalid() -> Self {
+ Self::default()
+ .set_valid_inverted(true)
+ .set_aperture(AperturePde::Invalid)
+ }
+
+ /// Check if this PDE is valid.
+ pub(crate) fn is_valid(&self) -> bool {
+ !self.valid_inverted() && self.aperture() != AperturePde::Invalid
+ }
+
+ /// Get the table frame number based on aperture type.
+ pub(crate) fn table_frame(&self) -> Pfn {
+ match self.aperture() {
+ AperturePde::VideoMemory => self.table_frame_vid(),
+ _ => self.table_frame_sys(),
+ }
+ }
+
+ /// Get the VRAM address of the page table.
+ pub(crate) fn table_vram_address(&self) -> VramAddress {
+ debug_assert!(
+ self.aperture() == AperturePde::VideoMemory,
+ "table_vram_address called on non-VRAM PDE (aperture: {:?})",
+ self.aperture()
+ );
+ VramAddress::from(self.table_frame_vid())
+ }
+
+ /// Get the raw `u64` value of the PDE.
+ pub(crate) fn raw_u64(&self) -> u64 {
+ self.0
+ }
+}
+
+/// Dual PDE at Level 3 - 128-bit entry holding Large and Small Page Table pointers.
+///
+/// The dual PDE supports both large (64KB) and small (4KB) page tables.
+#[repr(C)]
+#[derive(Debug, Clone, Copy, Default)]
+pub(crate) struct DualPde {
+ /// Large/Big Page Table pointer (lower 64 bits).
+ pub big: Pde,
+ /// Small Page Table pointer (upper 64 bits).
+ pub small: Pde,
+}
+
+impl DualPde {
+ /// Create a dual PDE from raw 128-bit value (two `u64`s).
+ pub(crate) fn new(big: u64, small: u64) -> Self {
+ Self {
+ big: Pde::new(big),
+ small: Pde::new(small),
+ }
+ }
+
+ /// Create a dual PDE with only the small page table pointer set.
+ ///
+ /// Note: The big (LPT) portion is set to 0, not `Pde::invalid()`.
+ /// According to hardware documentation, clearing bit 0 of the 128-bit
+ /// entry makes the PDE behave as a "normal" PDE. Using `Pde::invalid()`
+ /// would set bit 0 (valid_inverted), which breaks page table walking.
+ pub(crate) fn new_small(table_pfn: Pfn) -> Self {
+ Self {
+ big: Pde::new(0),
+ small: Pde::new_vram(table_pfn),
+ }
+ }
+
+ /// Check if the small page table pointer is valid.
+ pub(crate) fn has_small(&self) -> bool {
+ self.small.is_valid()
+ }
+
+ /// Check if the big page table pointer is valid.
+ pub(crate) fn has_big(&self) -> bool {
+ self.big.is_valid()
+ }
+
+ /// Get the small page table PFN.
+ pub(crate) fn small_pfn(&self) -> Pfn {
+ self.small.table_frame()
+ }
+}
--
2.34.1
* [PATCH v8 14/25] gpu: nova-core: mm: Add MMU v3 page table types
2026-02-24 22:52 [PATCH v8 00/25] gpu: nova-core: Add memory management support Joel Fernandes
` (12 preceding siblings ...)
2026-02-24 22:53 ` [PATCH v8 13/25] gpu: nova-core: mm: Add MMU v2 page table types Joel Fernandes
@ 2026-02-24 22:53 ` Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 15/25] gpu: nova-core: mm: Add unified page table entry wrapper enums Joel Fernandes
` (11 subsequent siblings)
25 siblings, 1 reply; 55+ messages in thread
From: Joel Fernandes @ 2026-02-24 22:53 UTC (permalink / raw)
To: linux-kernel
Add page table entry and directory structures for MMU version 3
used by Hopper and later GPUs.
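The v3 dual PDE stores its two table pointers with different alignments (big: 256-byte, hence a shift of 8; small: a PFN, hence a shift of 12). A sketch of the address-to-field conversion, assuming the shifts described in the patch (the helper names here are hypothetical):

```rust
// Big page table pointers store address >> 8 (256-byte aligned);
// small page table pointers store a PFN, i.e. address >> 12 (4 KiB aligned).
fn big_table_frame(addr: u64) -> Result<u64, &'static str> {
    if addr & 0xFF != 0 {
        return Err("big page table address must be 256-byte aligned");
    }
    Ok(addr >> 8)
}

fn small_table_pfn(addr: u64) -> Result<u64, &'static str> {
    if addr & 0xFFF != 0 {
        return Err("small page table address must be 4 KiB aligned");
    }
    Ok(addr >> 12)
}

fn main() {
    assert_eq!(big_table_frame(0x1_0000).unwrap(), 0x100);
    assert_eq!(small_table_pfn(0x1_0000).unwrap(), 0x10);
    // Misaligned big-table addresses are rejected, as in DualPdeBig::new_vram.
    assert!(big_table_frame(0x1_0001).is_err());
}
```

This alignment difference is why the patch gives the big pointer its own `DualPdeBig` type instead of reusing `Pde`.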
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm/pagetable.rs | 1 +
drivers/gpu/nova-core/mm/pagetable/ver3.rs | 302 +++++++++++++++++++++
2 files changed, 303 insertions(+)
create mode 100644 drivers/gpu/nova-core/mm/pagetable/ver3.rs
diff --git a/drivers/gpu/nova-core/mm/pagetable.rs b/drivers/gpu/nova-core/mm/pagetable.rs
index 925063fde45d..5263a8f56529 100644
--- a/drivers/gpu/nova-core/mm/pagetable.rs
+++ b/drivers/gpu/nova-core/mm/pagetable.rs
@@ -8,6 +8,7 @@
#![expect(dead_code)]
pub(crate) mod ver2;
+pub(crate) mod ver3;
use crate::gpu::Architecture;
diff --git a/drivers/gpu/nova-core/mm/pagetable/ver3.rs b/drivers/gpu/nova-core/mm/pagetable/ver3.rs
new file mode 100644
index 000000000000..e6cab2fe7d33
--- /dev/null
+++ b/drivers/gpu/nova-core/mm/pagetable/ver3.rs
@@ -0,0 +1,302 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! MMU v3 page table types for Hopper and later GPUs.
+//!
+//! This module defines MMU version 3 specific types (Hopper and later GPUs).
+//!
+//! Key differences from MMU v2:
+//! - Unified 40-bit address field for all apertures (v2 had separate sys/vid fields).
+//! - PCF (Page Classification Field) replaces separate privilege/RO/atomic/cache bits.
+//! - KIND field is 4 bits (not 8).
+//! - IS_PTE bit in PDE to support large pages directly.
+//! - No COMPTAGLINE field (compression handled differently in v3).
+//! - No separate ENCRYPTED bit.
+//!
+//! Bit field layouts derived from the NVIDIA OpenRM headers:
+//! `open-gpu-kernel-modules/src/common/inc/swref/published/hopper/gh100/dev_mmu.h`
+
+#![expect(dead_code)]
+
+use super::{
+ AperturePde,
+ AperturePte,
+ PageTableLevel, //
+};
+use crate::mm::{
+ Pfn,
+ VramAddress, //
+};
+use kernel::prelude::*;
+
+/// PDE levels for MMU v3, excluding the L5 PTE level (6-level hierarchy).
+pub(crate) const PDE_LEVELS: &[PageTableLevel] = &[
+ PageTableLevel::Pdb,
+ PageTableLevel::L1,
+ PageTableLevel::L2,
+ PageTableLevel::L3,
+ PageTableLevel::L4,
+];
+
+/// PTE level for MMU v3.
+pub(crate) const PTE_LEVEL: PageTableLevel = PageTableLevel::L5;
+
+/// Dual PDE level for MMU v3 (128-bit entries).
+pub(crate) const DUAL_PDE_LEVEL: PageTableLevel = PageTableLevel::L4;
+
+// Page Classification Field (PCF) - 5 bits for PTEs in MMU v3.
+bitfield! {
+ pub(crate) struct PtePcf(u8), "Page Classification Field for PTEs" {
+ 0:0 uncached as bool, "Bypass L2 cache (0=cached, 1=bypass)";
+ 1:1 acd as bool, "Access counting disabled (0=enabled, 1=disabled)";
+ 2:2 read_only as bool, "Read-only access (0=read-write, 1=read-only)";
+ 3:3 no_atomic as bool, "Atomics disabled (0=enabled, 1=disabled)";
+ 4:4 privileged as bool, "Privileged access only (0=regular, 1=privileged)";
+ }
+}
+
+impl PtePcf {
+ /// Create PCF for read-write mapping (cached, no atomics, regular mode).
+ pub(crate) fn rw() -> Self {
+ Self::default().set_no_atomic(true)
+ }
+
+ /// Create PCF for read-only mapping (cached, no atomics, regular mode).
+ pub(crate) fn ro() -> Self {
+ Self::default().set_read_only(true).set_no_atomic(true)
+ }
+
+ /// Get the raw `u8` value.
+ pub(crate) fn raw_u8(&self) -> u8 {
+ self.0
+ }
+}
+
+impl From<u8> for PtePcf {
+ fn from(val: u8) -> Self {
+ Self(val)
+ }
+}
+
+// Page Classification Field (PCF) - 3 bits for PDEs in MMU v3.
+// Controls Address Translation Services (ATS) and caching.
+bitfield! {
+ pub(crate) struct PdePcf(u8), "Page Classification Field for PDEs" {
+ 0:0 uncached as bool, "Bypass L2 cache (0=cached, 1=bypass)";
+ 1:1 no_ats as bool, "Address Translation Services disabled (0=enabled, 1=disabled)";
+ }
+}
+
+impl PdePcf {
+ /// Create PCF for cached mapping with ATS enabled (default).
+ pub(crate) fn cached() -> Self {
+ Self::default()
+ }
+
+ /// Get the raw `u8` value.
+ pub(crate) fn raw_u8(&self) -> u8 {
+ self.0
+ }
+}
+
+impl From<u8> for PdePcf {
+ fn from(val: u8) -> Self {
+ Self(val)
+ }
+}
+
+// Page Table Entry (PTE) for MMU v3.
+bitfield! {
+ pub(crate) struct Pte(u64), "Page Table Entry for MMU v3" {
+ 0:0 valid as bool, "Entry is valid";
+ 2:1 aperture as u8 => AperturePte, "Memory aperture type";
+ 7:3 pcf as u8 => PtePcf, "Page Classification Field";
+ 11:8 kind as u8, "Surface kind (4 bits, 0x0=pitch, 0xF=invalid)";
+ 51:12 frame_number as u64 => Pfn, "Physical frame number (for all apertures)";
+ 63:61 peer_id as u8, "Peer GPU ID for peer memory (0-7)";
+ }
+}
+
+impl Pte {
+ /// Create a PTE from a `u64` value.
+ pub(crate) fn new(val: u64) -> Self {
+ Self(val)
+ }
+
+ /// Create a valid PTE for video memory.
+ pub(crate) fn new_vram(frame: Pfn, writable: bool) -> Self {
+ let pcf = if writable { PtePcf::rw() } else { PtePcf::ro() };
+ Self::default()
+ .set_valid(true)
+ .set_aperture(AperturePte::VideoMemory)
+ .set_pcf(pcf)
+ .set_frame_number(frame)
+ }
+
+ /// Create an invalid PTE.
+ pub(crate) fn invalid() -> Self {
+ Self::default()
+ }
+
+ /// Get the raw `u64` value.
+ pub(crate) fn raw_u64(&self) -> u64 {
+ self.0
+ }
+}
+
+// Page Directory Entry (PDE) for MMU v3.
+//
+// Note: v3 uses a unified 40-bit address field (v2 had separate sys/vid address fields).
+bitfield! {
+ pub(crate) struct Pde(u64), "Page Directory Entry for MMU v3 (Hopper+)" {
+ 0:0 is_pte as bool, "Entry is a PTE (0=PDE, 1=large page PTE)";
+ 2:1 aperture as u8 => AperturePde, "Memory aperture (0=invalid, 1=vidmem, 2=coherent, 3=non-coherent)";
+ 5:3 pcf as u8 => PdePcf, "Page Classification Field (3 bits for PDE)";
+ 51:12 table_frame as u64 => Pfn, "Table frame number (40-bit unified address)";
+ }
+}
+
+impl Pde {
+ /// Create a PDE from a `u64` value.
+ pub(crate) fn new(val: u64) -> Self {
+ Self(val)
+ }
+
+ /// Create a valid PDE pointing to a page table in video memory.
+ pub(crate) fn new_vram(table_pfn: Pfn) -> Self {
+ Self::default()
+ .set_is_pte(false)
+ .set_aperture(AperturePde::VideoMemory)
+ .set_table_frame(table_pfn)
+ }
+
+ /// Create an invalid PDE.
+ pub(crate) fn invalid() -> Self {
+ Self::default().set_aperture(AperturePde::Invalid)
+ }
+
+ /// Check if this PDE is valid.
+ pub(crate) fn is_valid(&self) -> bool {
+ self.aperture() != AperturePde::Invalid
+ }
+
+ /// Get the VRAM address of the page table.
+ pub(crate) fn table_vram_address(&self) -> VramAddress {
+ debug_assert!(
+ self.aperture() == AperturePde::VideoMemory,
+ "table_vram_address called on non-VRAM PDE (aperture: {:?})",
+ self.aperture()
+ );
+ VramAddress::from(self.table_frame())
+ }
+
+ /// Get the raw `u64` value.
+ pub(crate) fn raw_u64(&self) -> u64 {
+ self.0
+ }
+}
+
+// Big Page Table pointer for Dual PDE - 64-bit lower word of the 128-bit Dual PDE.
+bitfield! {
+ pub(crate) struct DualPdeBig(u64), "Big Page Table pointer in Dual PDE (MMU v3)" {
+ 0:0 is_pte as bool, "Entry is a PTE (for large pages)";
+ 2:1 aperture as u8 => AperturePde, "Memory aperture type";
+ 5:3 pcf as u8 => PdePcf, "Page Classification Field";
+ 51:8 table_frame as u64, "Table frame (table address 256-byte aligned)";
+ }
+}
+
+impl DualPdeBig {
+ /// Create a big page table pointer from a `u64` value.
+ pub(crate) fn new(val: u64) -> Self {
+ Self(val)
+ }
+
+ /// Create an invalid big page table pointer.
+ pub(crate) fn invalid() -> Self {
+ Self::default().set_aperture(AperturePde::Invalid)
+ }
+
+ /// Create a valid big PDE pointing to a page table in video memory.
+ pub(crate) fn new_vram(table_addr: VramAddress) -> Result<Self> {
+ // Big page table addresses must be 256-byte aligned (shift 8).
+ if table_addr.raw_u64() & 0xFF != 0 {
+ return Err(EINVAL);
+ }
+
+ let table_frame = table_addr.raw_u64() >> 8;
+ Ok(Self::default()
+ .set_is_pte(false)
+ .set_aperture(AperturePde::VideoMemory)
+ .set_table_frame(table_frame))
+ }
+
+ /// Check if this big PDE is valid.
+ pub(crate) fn is_valid(&self) -> bool {
+ self.aperture() != AperturePde::Invalid
+ }
+
+ /// Get the VRAM address of the big page table.
+ pub(crate) fn table_vram_address(&self) -> VramAddress {
+ debug_assert!(
+ self.aperture() == AperturePde::VideoMemory,
+ "table_vram_address called on non-VRAM DualPdeBig (aperture: {:?})",
+ self.aperture()
+ );
+ VramAddress::new(self.table_frame() << 8)
+ }
+
+ /// Get the raw `u64` value.
+ pub(crate) fn raw_u64(&self) -> u64 {
+ self.0
+ }
+}
+
+/// Dual PDE at Level 4 for MMU v3 - 128-bit entry.
+///
+/// Contains both big (64KB) and small (4KB) page table pointers:
+/// - Lower 64 bits: Big Page Table pointer.
+/// - Upper 64 bits: Small Page Table pointer.
+///
+/// ## Note
+///
+/// The big and small page table pointers have different address layouts:
+/// - Big address = field value << 8 (256-byte alignment).
+/// - Small address = field value << 12 (4KB alignment).
+///
+/// This is why `DualPdeBig` is a separate type from `Pde`.
+#[repr(C)]
+#[derive(Debug, Clone, Copy, Default)]
+pub(crate) struct DualPde {
+ /// Big Page Table pointer.
+ pub big: DualPdeBig,
+ /// Small Page Table pointer.
+ pub small: Pde,
+}
+
+impl DualPde {
+ /// Create a dual PDE from raw 128-bit value (two `u64`s).
+ pub(crate) fn new(big: u64, small: u64) -> Self {
+ Self {
+ big: DualPdeBig::new(big),
+ small: Pde::new(small),
+ }
+ }
+
+ /// Create a dual PDE with only the small page table pointer set.
+ pub(crate) fn new_small(table_pfn: Pfn) -> Self {
+ Self {
+ big: DualPdeBig::invalid(),
+ small: Pde::new_vram(table_pfn),
+ }
+ }
+
+ /// Check if the small page table pointer is valid.
+ pub(crate) fn has_small(&self) -> bool {
+ self.small.is_valid()
+ }
+
+ /// Check if the big page table pointer is valid.
+ pub(crate) fn has_big(&self) -> bool {
+ self.big.is_valid()
+ }
+}
--
2.34.1
* [PATCH v8 15/25] gpu: nova-core: mm: Add unified page table entry wrapper enums
2026-02-24 22:52 [PATCH v8 00/25] gpu: nova-core: Add memory management support Joel Fernandes
` (13 preceding siblings ...)
2026-02-24 22:53 ` [PATCH v8 14/25] gpu: nova-core: mm: Add MMU v3 " Joel Fernandes
@ 2026-02-24 22:53 ` Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 16/25] gpu: nova-core: mm: Add page table walker for MMU v2/v3 Joel Fernandes
` (10 subsequent siblings)
25 siblings, 1 reply; 55+ messages in thread
From: Joel Fernandes @ 2026-02-24 22:53 UTC (permalink / raw)
To: linux-kernel
Add unified Pte, Pde, and DualPde wrapper enums that abstract over
MMU v2 and v3 page table entry formats. These enums allow the page
table walker and VMM to work with both MMU versions.
Each unified type:
- Takes MmuVersion parameter in constructors
- Wraps both ver2 and ver3 variants
- Delegates method calls to the appropriate variant
This enables version-agnostic page table operations while keeping
version-specific implementation details encapsulated in the ver2
and ver3 modules.
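The dispatch pattern can be sketched in plain Rust with simplified stand-in types (the real `Pte` variants are bitfield types; the names here mirror the patch but the bodies are illustrative):

```rust
#[derive(Clone, Copy)]
enum MmuVersion { V2, V3 }

// Hypothetical per-version PTE stand-ins for the ver2/ver3 bitfield types.
struct PteV2(u64);
struct PteV3(u64);

// Unified wrapper: constructors take the MMU version, methods dispatch.
enum Pte {
    V2(PteV2),
    V3(PteV3),
}

impl Pte {
    fn new(version: MmuVersion, val: u64) -> Self {
        match version {
            MmuVersion::V2 => Self::V2(PteV2(val)),
            MmuVersion::V3 => Self::V3(PteV3(val)),
        }
    }

    // Callers see one API; each arm delegates to its version's layout.
    fn raw_u64(&self) -> u64 {
        match self {
            Self::V2(p) => p.0,
            Self::V3(p) => p.0,
        }
    }
}

fn main() {
    let pte = Pte::new(MmuVersion::V2, 0xABCD);
    assert_eq!(pte.raw_u64(), 0xABCD);
    assert_eq!(Pte::new(MmuVersion::V3, 0).raw_u64(), 0);
}
```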
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm/pagetable.rs | 301 ++++++++++++++++++++++++++
1 file changed, 301 insertions(+)
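The `pt_pages_upper_bound()` accumulation added below can be sketched standalone, assuming uniform 512-entry (4 KiB) tables; the real code additionally accounts for the 16-byte entries at the dual-PDE level, which this simplification ignores:

```rust
// Upper bound on page-table pages needed to map `num_virt_pages` leaf pages,
// assuming every table page holds 512 entries.
fn pt_pages_upper_bound(num_virt_pages: usize, pde_levels: usize) -> usize {
    let entries_per_page = 512;
    let mut total = 0;

    // PTE pages at the leaf level.
    let mut pages_at_level = num_virt_pages.div_ceil(entries_per_page);
    total += pages_at_level;

    // Walk the PDE levels bottom-up; each directory page covers
    // `entries_per_page` pages of the level below.
    for _ in 0..pde_levels {
        pages_at_level = pages_at_level.div_ceil(entries_per_page);
        total += pages_at_level;
    }

    total
}

fn main() {
    // Mapping 1024 pages with 4 PDE levels (the v2 layout):
    // 2 PTE pages, then 1 page at each of the 4 directory levels.
    assert_eq!(pt_pages_upper_bound(1024, 4), 6);
}
```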
diff --git a/drivers/gpu/nova-core/mm/pagetable.rs b/drivers/gpu/nova-core/mm/pagetable.rs
index 5263a8f56529..33acb7053fbe 100644
--- a/drivers/gpu/nova-core/mm/pagetable.rs
+++ b/drivers/gpu/nova-core/mm/pagetable.rs
@@ -10,6 +10,14 @@
pub(crate) mod ver2;
pub(crate) mod ver3;
+use kernel::prelude::*;
+
+use super::{
+ pramin,
+ Pfn,
+ VramAddress,
+ PAGE_SIZE, //
+};
use crate::gpu::Architecture;
/// MMU version enumeration.
@@ -77,6 +85,74 @@ pub(crate) const fn as_index(&self) -> u64 {
}
}
+impl MmuVersion {
+ /// Get the `PDE` levels (excluding PTE level) for page table walking.
+ pub(crate) fn pde_levels(&self) -> &'static [PageTableLevel] {
+ match self {
+ Self::V2 => ver2::PDE_LEVELS,
+ Self::V3 => ver3::PDE_LEVELS,
+ }
+ }
+
+ /// Get the PTE level for this MMU version.
+ pub(crate) fn pte_level(&self) -> PageTableLevel {
+ match self {
+ Self::V2 => ver2::PTE_LEVEL,
+ Self::V3 => ver3::PTE_LEVEL,
+ }
+ }
+
+ /// Get the dual PDE level (128-bit entries) for this MMU version.
+ pub(crate) fn dual_pde_level(&self) -> PageTableLevel {
+ match self {
+ Self::V2 => ver2::DUAL_PDE_LEVEL,
+ Self::V3 => ver3::DUAL_PDE_LEVEL,
+ }
+ }
+
+ /// Get the number of PDE levels for this MMU version.
+ pub(crate) fn pde_level_count(&self) -> usize {
+ self.pde_levels().len()
+ }
+
+ /// Get the entry size in bytes for a given level.
+ pub(crate) fn entry_size(&self, level: PageTableLevel) -> usize {
+ if level == self.dual_pde_level() {
+ 16 // 128-bit dual PDE
+ } else {
+ 8 // 64-bit PDE/PTE
+ }
+ }
+
+ /// Get the number of entries per page table page for a given level.
+ pub(crate) fn entries_per_page(&self, level: PageTableLevel) -> usize {
+ PAGE_SIZE / self.entry_size(level)
+ }
+
+ /// Compute upper bound on page table pages needed for `num_virt_pages`.
+ ///
+ /// Walks from the PTE level up through the PDE levels, accumulating the
+ /// page count needed at each level of the tree.
+ pub(crate) fn pt_pages_upper_bound(&self, num_virt_pages: usize) -> usize {
+ let mut total = 0;
+
+ // PTE pages at the leaf level.
+ let pte_epp = self.entries_per_page(self.pte_level());
+ let mut pages_at_level = num_virt_pages.div_ceil(pte_epp);
+ total += pages_at_level;
+
+ // Walk PDE levels bottom-up (reverse of pde_levels()).
+ for &level in self.pde_levels().iter().rev() {
+ let epp = self.entries_per_page(level);
+ // How many pages at this level do we need to point to
+ // the previous pages_at_level?
+ pages_at_level = pages_at_level.div_ceil(epp);
+ total += pages_at_level;
+ }
+
+ total
+ }
+}
+
/// Memory aperture for Page Table Entries (`PTE`s).
///
/// Determines which memory region the `PTE` points to.
@@ -149,3 +225,228 @@ fn from(val: AperturePde) -> Self {
val as u8
}
}
+
+/// Unified Page Table Entry wrapper for both MMU v2 and v3 `PTE`
+/// types, allowing the walker to work with either format.
+#[derive(Debug, Clone, Copy)]
+pub(crate) enum Pte {
+ /// MMU v2 `PTE` (Turing/Ampere/Ada).
+ V2(ver2::Pte),
+ /// MMU v3 `PTE` (Hopper+).
+ V3(ver3::Pte),
+}
+
+impl Pte {
+ /// Create a `PTE` from a raw `u64` value for the given MMU version.
+ pub(crate) fn new(version: MmuVersion, val: u64) -> Self {
+ match version {
+ MmuVersion::V2 => Self::V2(ver2::Pte::new(val)),
+ MmuVersion::V3 => Self::V3(ver3::Pte::new(val)),
+ }
+ }
+
+ /// Create an invalid `PTE` for the given MMU version.
+ pub(crate) fn invalid(version: MmuVersion) -> Self {
+ match version {
+ MmuVersion::V2 => Self::V2(ver2::Pte::invalid()),
+ MmuVersion::V3 => Self::V3(ver3::Pte::invalid()),
+ }
+ }
+
+ /// Create a valid `PTE` for video memory.
+ pub(crate) fn new_vram(version: MmuVersion, pfn: Pfn, writable: bool) -> Self {
+ match version {
+ MmuVersion::V2 => Self::V2(ver2::Pte::new_vram(pfn, writable)),
+ MmuVersion::V3 => Self::V3(ver3::Pte::new_vram(pfn, writable)),
+ }
+ }
+
+ /// Check if this `PTE` is valid.
+ pub(crate) fn is_valid(&self) -> bool {
+ match self {
+ Self::V2(p) => p.valid(),
+ Self::V3(p) => p.valid(),
+ }
+ }
+
+ /// Get the physical frame number.
+ pub(crate) fn frame_number(&self) -> Pfn {
+ match self {
+ Self::V2(p) => p.frame_number(),
+ Self::V3(p) => p.frame_number(),
+ }
+ }
+
+ /// Get the raw `u64` value.
+ pub(crate) fn raw_u64(&self) -> u64 {
+ match self {
+ Self::V2(p) => p.raw_u64(),
+ Self::V3(p) => p.raw_u64(),
+ }
+ }
+
+ /// Read a `PTE` from VRAM.
+ pub(crate) fn read(
+ window: &mut pramin::PraminWindow<'_>,
+ addr: VramAddress,
+ mmu_version: MmuVersion,
+ ) -> Result<Self> {
+ let val = window.try_read64(addr.raw())?;
+ Ok(Self::new(mmu_version, val))
+ }
+
+ /// Write this `PTE` to VRAM.
+ pub(crate) fn write(&self, window: &mut pramin::PraminWindow<'_>, addr: VramAddress) -> Result {
+ window.try_write64(addr.raw(), self.raw_u64())
+ }
+}
+
+/// Unified Page Directory Entry wrapper for both MMU v2 and v3 `PDE`.
+#[derive(Debug, Clone, Copy)]
+pub(crate) enum Pde {
+ /// MMU v2 `PDE` (Turing/Ampere/Ada).
+ V2(ver2::Pde),
+ /// MMU v3 `PDE` (Hopper+).
+ V3(ver3::Pde),
+}
+
+impl Pde {
+ /// Create a `PDE` from a raw `u64` value for the given MMU version.
+ pub(crate) fn new(version: MmuVersion, val: u64) -> Self {
+ match version {
+ MmuVersion::V2 => Self::V2(ver2::Pde::new(val)),
+ MmuVersion::V3 => Self::V3(ver3::Pde::new(val)),
+ }
+ }
+
+ /// Create a valid `PDE` pointing to a page table in video memory.
+ pub(crate) fn new_vram(version: MmuVersion, table_pfn: Pfn) -> Self {
+ match version {
+ MmuVersion::V2 => Self::V2(ver2::Pde::new_vram(table_pfn)),
+ MmuVersion::V3 => Self::V3(ver3::Pde::new_vram(table_pfn)),
+ }
+ }
+
+ /// Create an invalid `PDE` for the given MMU version.
+ pub(crate) fn invalid(version: MmuVersion) -> Self {
+ match version {
+ MmuVersion::V2 => Self::V2(ver2::Pde::invalid()),
+ MmuVersion::V3 => Self::V3(ver3::Pde::invalid()),
+ }
+ }
+
+ /// Check if this `PDE` is valid.
+ pub(crate) fn is_valid(&self) -> bool {
+ match self {
+ Self::V2(p) => p.is_valid(),
+ Self::V3(p) => p.is_valid(),
+ }
+ }
+
+ /// Get the VRAM address of the page table.
+ pub(crate) fn table_vram_address(&self) -> VramAddress {
+ match self {
+ Self::V2(p) => p.table_vram_address(),
+ Self::V3(p) => p.table_vram_address(),
+ }
+ }
+
+ /// Get the raw `u64` value.
+ pub(crate) fn raw_u64(&self) -> u64 {
+ match self {
+ Self::V2(p) => p.raw_u64(),
+ Self::V3(p) => p.raw_u64(),
+ }
+ }
+
+ /// Read a `PDE` from VRAM.
+ pub(crate) fn read(
+ window: &mut pramin::PraminWindow<'_>,
+ addr: VramAddress,
+ mmu_version: MmuVersion,
+ ) -> Result<Self> {
+ let val = window.try_read64(addr.raw())?;
+ Ok(Self::new(mmu_version, val))
+ }
+
+ /// Write this `PDE` to VRAM.
+ pub(crate) fn write(&self, window: &mut pramin::PraminWindow<'_>, addr: VramAddress) -> Result {
+ window.try_write64(addr.raw(), self.raw_u64())
+ }
+}
+
+/// Unified Dual Page Directory Entry wrapper for both MMU v2 and v3 [`DualPde`].
+#[derive(Debug, Clone, Copy)]
+pub(crate) enum DualPde {
+ /// MMU v2 [`DualPde`] (Turing/Ampere/Ada).
+ V2(ver2::DualPde),
+ /// MMU v3 [`DualPde`] (Hopper+).
+ V3(ver3::DualPde),
+}
+
+impl DualPde {
+ /// Create a [`DualPde`] from a raw 128-bit value (two `u64`s) for the given MMU version.
+ pub(crate) fn new(version: MmuVersion, big: u64, small: u64) -> Self {
+ match version {
+ MmuVersion::V2 => Self::V2(ver2::DualPde::new(big, small)),
+ MmuVersion::V3 => Self::V3(ver3::DualPde::new(big, small)),
+ }
+ }
+
+ /// Create a [`DualPde`] with only the small page table pointer set.
+ pub(crate) fn new_small(version: MmuVersion, table_pfn: Pfn) -> Self {
+ match version {
+ MmuVersion::V2 => Self::V2(ver2::DualPde::new_small(table_pfn)),
+ MmuVersion::V3 => Self::V3(ver3::DualPde::new_small(table_pfn)),
+ }
+ }
+
+ /// Check if the small page table pointer is valid.
+ pub(crate) fn has_small(&self) -> bool {
+ match self {
+ Self::V2(d) => d.has_small(),
+ Self::V3(d) => d.has_small(),
+ }
+ }
+
+ /// Get the small page table VRAM address.
+ pub(crate) fn small_vram_address(&self) -> VramAddress {
+ match self {
+ Self::V2(d) => d.small.table_vram_address(),
+ Self::V3(d) => d.small.table_vram_address(),
+ }
+ }
+
+ /// Get the raw `u64` value of the big PDE.
+ pub(crate) fn big_raw_u64(&self) -> u64 {
+ match self {
+ Self::V2(d) => d.big.raw_u64(),
+ Self::V3(d) => d.big.raw_u64(),
+ }
+ }
+
+ /// Get the raw `u64` value of the small PDE.
+ pub(crate) fn small_raw_u64(&self) -> u64 {
+ match self {
+ Self::V2(d) => d.small.raw_u64(),
+ Self::V3(d) => d.small.raw_u64(),
+ }
+ }
+
+ /// Read a dual PDE (128-bit) from VRAM.
+ pub(crate) fn read(
+ window: &mut pramin::PraminWindow<'_>,
+ addr: VramAddress,
+ mmu_version: MmuVersion,
+ ) -> Result<Self> {
+ let big = window.try_read64(addr.raw())?;
+ let small = window.try_read64(addr.raw() + 8)?;
+ Ok(Self::new(mmu_version, big, small))
+ }
+
+ /// Write this dual PDE (128-bit) to VRAM.
+ pub(crate) fn write(&self, window: &mut pramin::PraminWindow<'_>, addr: VramAddress) -> Result {
+ window.try_write64(addr.raw(), self.big_raw_u64())?;
+ window.try_write64(addr.raw() + 8, self.small_raw_u64())
+ }
+}
--
2.34.1
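For reference, the upper-bound arithmetic in `pt_pages_upper_bound()` can be sketched as a standalone program. The entry sizes and level count below are illustrative assumptions (a v2-like layout with three 64-bit PDE levels plus one 128-bit dual-PDE level), not the driver's actual tables:

```rust
// Standalone sketch of the pt_pages_upper_bound() arithmetic (assumed layout).

const PAGE_SIZE: usize = 4096;

// A 4 KiB page table page holds PAGE_SIZE / entry_size entries.
fn entries_per_page(entry_size: usize) -> usize {
    PAGE_SIZE / entry_size
}

/// Upper bound on page-table pages needed to map `num_virt_pages` 4 KiB pages.
/// `pde_entry_sizes` lists PDE entry sizes from the root level down.
fn pt_pages_upper_bound(num_virt_pages: usize, pde_entry_sizes: &[usize]) -> usize {
    let mut total = 0;
    // PTE pages at the leaf level: 8-byte PTEs, 512 per page.
    let mut pages_at_level = num_virt_pages.div_ceil(entries_per_page(8));
    total += pages_at_level;
    // Walk PDE levels bottom-up (reverse of root-down order).
    for &entry_size in pde_entry_sizes.iter().rev() {
        pages_at_level = pages_at_level.div_ceil(entries_per_page(entry_size));
        total += pages_at_level;
    }
    total
}

fn main() {
    // Assumed v2-like layout: three 64-bit PDE levels, then a 128-bit dual PDE.
    let levels = [8, 8, 8, 16];
    // Mapping a single page still touches one table at every level.
    assert_eq!(pt_pages_upper_bound(1, &levels), 5);
    // 1024 pages need 2 PTE pages; each upper level collapses to 1 page.
    assert_eq!(pt_pages_upper_bound(1024, &levels), 6);
    println!("ok");
}
```

The bound is an upper bound because it assumes every level needs fresh pages; in practice existing directory pages are reused across mappings.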
* [PATCH v8 16/25] gpu: nova-core: mm: Add page table walker for MMU v2/v3
2026-02-24 22:52 [PATCH v8 00/25] gpu: nova-core: Add memory management support Joel Fernandes
` (14 preceding siblings ...)
2026-02-24 22:53 ` [PATCH v8 15/25] gpu: nova-core: mm: Add unified page table entry wrapper enums Joel Fernandes
@ 2026-02-24 22:53 ` Joel Fernandes
2026-02-25 5:39 ` Gary Guo
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 17/25] gpu: nova-core: mm: Add Virtual Memory Manager Joel Fernandes
` (9 subsequent siblings)
25 siblings, 2 replies; 55+ messages in thread
From: Joel Fernandes @ 2026-02-24 22:53 UTC (permalink / raw)
To: linux-kernel
Add the page table walker implementation that traverses the page table
hierarchy for both MMU v2 (5-level) and MMU v3 (6-level) to resolve
virtual addresses to physical addresses or find PTE locations.
Currently only v2 has been tested (nova-core currently boots only on
pre-Hopper hardware), with some initial preparatory work done for v3.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm/pagetable.rs | 1 +
drivers/gpu/nova-core/mm/pagetable/walk.rs | 218 +++++++++++++++++++++
2 files changed, 219 insertions(+)
create mode 100644 drivers/gpu/nova-core/mm/pagetable/walk.rs
diff --git a/drivers/gpu/nova-core/mm/pagetable.rs b/drivers/gpu/nova-core/mm/pagetable.rs
index 33acb7053fbe..7ebea4cb8437 100644
--- a/drivers/gpu/nova-core/mm/pagetable.rs
+++ b/drivers/gpu/nova-core/mm/pagetable.rs
@@ -9,6 +9,7 @@
#![expect(dead_code)]
pub(crate) mod ver2;
pub(crate) mod ver3;
+pub(crate) mod walk;
use kernel::prelude::*;
diff --git a/drivers/gpu/nova-core/mm/pagetable/walk.rs b/drivers/gpu/nova-core/mm/pagetable/walk.rs
new file mode 100644
index 000000000000..023226af8816
--- /dev/null
+++ b/drivers/gpu/nova-core/mm/pagetable/walk.rs
@@ -0,0 +1,218 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Page table walker implementation for NVIDIA GPUs.
+//!
+//! This module provides page table walking functionality for MMU v2 and v3.
+//! The walker traverses the page table hierarchy to resolve virtual addresses
+//! to physical addresses or to find PTE locations.
+//!
+//! # Page Table Hierarchy
+//!
+//! ## MMU v2 (Turing/Ampere/Ada) - 5 levels
+//!
+//! ```text
+//! +-------+ +-------+ +-------+ +---------+ +-------+
+//! | PDB |---->| L1 |---->| L2 |---->| L3 Dual |---->| L4 |
+//! | (L0) | | | | | | PDE | | (PTE) |
+//! +-------+ +-------+ +-------+ +---------+ +-------+
+//! 64-bit 64-bit 64-bit 128-bit 64-bit
+//! PDE PDE PDE (big+small) PTE
+//! ```
+//!
+//! ## MMU v3 (Hopper+) - 6 levels
+//!
+//! ```text
+//! +-------+ +-------+ +-------+ +-------+ +---------+ +-------+
+//! | PDB |---->| L1 |---->| L2 |---->| L3 |---->| L4 Dual |---->| L5 |
+//! | (L0) | | | | | | | | PDE | | (PTE) |
+//! +-------+ +-------+ +-------+ +-------+ +---------+ +-------+
+//! 64-bit 64-bit 64-bit 64-bit 128-bit 64-bit
+//! PDE PDE PDE PDE (big+small) PTE
+//! ```
+//!
+//! # Result of a page table walk
+//!
+//! The walker returns a [`WalkResult`] indicating the outcome.
+
+use kernel::prelude::*;
+
+use super::{
+ DualPde,
+ MmuVersion,
+ PageTableLevel,
+ Pde,
+ Pte, //
+};
+use crate::{
+ mm::{
+ pramin,
+ GpuMm,
+ Pfn,
+ Vfn,
+ VirtualAddress,
+ VramAddress, //
+ },
+ num::{
+ IntoSafeCast, //
+ },
+};
+
+/// Result of walking to a PTE.
+#[derive(Debug, Clone, Copy)]
+pub(crate) enum WalkResult {
+ /// Intermediate page tables are missing (only returned in lookup mode).
+ PageTableMissing,
+ /// PTE exists but is invalid (page not mapped).
+ Unmapped { pte_addr: VramAddress },
+ /// PTE exists and is valid (page is mapped).
+ Mapped { pte_addr: VramAddress, pfn: Pfn },
+}
+
+/// Result of walking PDE levels only.
+///
+/// Returned by [`PtWalk::walk_pde_levels()`] to indicate whether all PDE levels
+/// resolved or a PDE is missing.
+#[derive(Debug, Clone, Copy)]
+pub(crate) enum WalkPdeResult {
+ /// All PDE levels resolved -- returns PTE page table address.
+ Complete {
+ /// VRAM address of the PTE-level page table.
+ pte_table: VramAddress,
+ },
+ /// A PDE is missing and no prepared page was provided by the closure.
+ Missing {
+ /// PDE slot address in the parent page table (where to install).
+ install_addr: VramAddress,
+ /// The page table level that is missing.
+ level: PageTableLevel,
+ },
+}
+
+/// Page table walker for NVIDIA GPUs.
+///
+/// Walks the page table hierarchy (5 levels for v2, 6 for v3) to find PTE
+/// locations or resolve virtual addresses.
+pub(crate) struct PtWalk {
+ pdb_addr: VramAddress,
+ mmu_version: MmuVersion,
+}
+
+impl PtWalk {
+ /// Calculate the VRAM address of an entry within a page table.
+ fn entry_addr(
+ table: VramAddress,
+ mmu_version: MmuVersion,
+ level: PageTableLevel,
+ index: u64,
+ ) -> VramAddress {
+ let entry_size: u64 = mmu_version.entry_size(level).into_safe_cast();
+ VramAddress::new(table.raw_u64() + index * entry_size)
+ }
+
+ /// Create a new page table walker.
+ pub(crate) fn new(pdb_addr: VramAddress, mmu_version: MmuVersion) -> Self {
+ Self {
+ pdb_addr,
+ mmu_version,
+ }
+ }
+
+ /// Walk PDE levels with closure-based resolution for missing PDEs.
+ ///
+ /// Traverses all PDE levels for the MMU version. At each level, reads the PDE.
+ /// If valid, extracts the child table address and continues. If missing, calls
+ /// `resolve_prepared(install_addr)` to resolve the missing PDE.
+ pub(crate) fn walk_pde_levels(
+ &self,
+ window: &mut pramin::PraminWindow<'_>,
+ vfn: Vfn,
+ resolve_prepared: impl Fn(VramAddress) -> Option<VramAddress>,
+ ) -> Result<WalkPdeResult> {
+ let va = VirtualAddress::from(vfn);
+ let mut cur_table = self.pdb_addr;
+
+ for &level in self.mmu_version.pde_levels() {
+ let idx = va.level_index(level.as_index());
+ let install_addr = Self::entry_addr(cur_table, self.mmu_version, level, idx);
+
+ if level == self.mmu_version.dual_pde_level() {
+ // 128-bit dual PDE with big+small page table pointers.
+ let dpde = DualPde::read(window, install_addr, self.mmu_version)?;
+ if dpde.has_small() {
+ cur_table = dpde.small_vram_address();
+ continue;
+ }
+ } else {
+ // Regular 64-bit PDE.
+ let pde = Pde::read(window, install_addr, self.mmu_version)?;
+ if pde.is_valid() {
+ cur_table = pde.table_vram_address();
+ continue;
+ }
+ }
+
+ // PDE missing in HW. Ask caller for resolution.
+ if let Some(prepared_addr) = resolve_prepared(install_addr) {
+ cur_table = prepared_addr;
+ continue;
+ }
+
+ return Ok(WalkPdeResult::Missing {
+ install_addr,
+ level,
+ });
+ }
+
+ Ok(WalkPdeResult::Complete {
+ pte_table: cur_table,
+ })
+ }
+
+ /// Walk to PTE for lookup only (no allocation).
+ ///
+ /// Returns [`WalkResult::PageTableMissing`] if intermediate tables don't exist.
+ pub(crate) fn walk_to_pte_lookup(&self, mm: &GpuMm, vfn: Vfn) -> Result<WalkResult> {
+ let mut window = mm.pramin().window()?;
+ self.walk_to_pte_lookup_with_window(&mut window, vfn)
+ }
+
+ /// Walk to PTE using a caller-provided PRAMIN window (lookup only).
+ ///
+ /// Uses [`PtWalk::walk_pde_levels()`] for the PDE traversal, then reads the PTE at
+ /// the leaf level. Useful when walking multiple VFNs with a single PRAMIN
+ /// window acquisition. Used by [`Vmm::execute_map()`] and [`Vmm::unmap_pages()`].
+ pub(crate) fn walk_to_pte_lookup_with_window(
+ &self,
+ window: &mut pramin::PraminWindow<'_>,
+ vfn: Vfn,
+ ) -> Result<WalkResult> {
+ match self.walk_pde_levels(window, vfn, |_| None)? {
+ WalkPdeResult::Complete { pte_table } => {
+ Self::read_pte_at_level(window, vfn, pte_table, self.mmu_version)
+ }
+ WalkPdeResult::Missing { .. } => Ok(WalkResult::PageTableMissing),
+ }
+ }
+
+ /// Read the PTE at the PTE level given the PTE table address.
+ fn read_pte_at_level(
+ window: &mut pramin::PraminWindow<'_>,
+ vfn: Vfn,
+ pte_table: VramAddress,
+ mmu_version: MmuVersion,
+ ) -> Result<WalkResult> {
+ let va = VirtualAddress::from(vfn);
+ let pte_level = mmu_version.pte_level();
+ let pte_idx = va.level_index(pte_level.as_index());
+ let pte_addr = Self::entry_addr(pte_table, mmu_version, pte_level, pte_idx);
+ let pte = Pte::read(window, pte_addr, mmu_version)?;
+
+ if pte.is_valid() {
+ return Ok(WalkResult::Mapped {
+ pte_addr,
+ pfn: pte.frame_number(),
+ });
+ }
+ Ok(WalkResult::Unmapped { pte_addr })
+ }
+}
--
2.34.1
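The closure-based resolution used by `walk_pde_levels()` can be modeled in miniature. Everything below is invented for illustration (the real walker reads PRAMIN-backed VRAM and indexes per-level slots from the virtual address); the point is only the control flow: entries already installed in "hardware" are followed, otherwise the caller's closure is consulted for a prepared-but-uninstalled table before reporting a miss:

```rust
// Toy model of closure-based missing-PDE resolution (invented types/addresses).

use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum Walk {
    Complete { table: u64 },
    Missing { install_addr: u64, level: usize },
}

/// `installed` maps a PDE slot address to a child table address present in HW;
/// `resolve` supplies tables that are prepared but not yet written to HW.
fn walk(
    root: u64,
    levels: usize,
    installed: &HashMap<u64, u64>,
    resolve: impl Fn(u64) -> Option<u64>,
) -> Walk {
    let mut cur = root;
    for level in 0..levels {
        let slot = cur + 8; // fixed slot index, for the toy example only
        if let Some(&child) = installed.get(&slot) {
            cur = child; // PDE valid in HW: descend
        } else if let Some(prepared) = resolve(slot) {
            cur = prepared; // walk through the prepared page as if installed
        } else {
            return Walk::Missing { install_addr: slot, level };
        }
    }
    Walk::Complete { table: cur }
}

fn main() {
    let mut hw = HashMap::new();
    hw.insert(0x1008u64, 0x2000u64); // level 0 resolved in HW
    // Level 1 missing and nothing prepared: walk reports where to install.
    assert_eq!(
        walk(0x1000, 2, &hw, |_| None),
        Walk::Missing { install_addr: 0x2008, level: 1 }
    );
    // With a prepared page for that slot, the walk completes through it.
    assert_eq!(
        walk(0x1000, 2, &hw, |s| (s == 0x2008).then_some(0x3000)),
        Walk::Complete { table: 0x3000 }
    );
    println!("ok");
}
```

This is the same shape `ensure_pte_path()` relies on: each `Missing` result triggers one allocation, after which the walk is retried from the root.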
* [PATCH v8 17/25] gpu: nova-core: mm: Add Virtual Memory Manager
2026-02-24 22:52 [PATCH v8 00/25] gpu: nova-core: Add memory management support Joel Fernandes
` (15 preceding siblings ...)
2026-02-24 22:53 ` [PATCH v8 16/25] gpu: nova-core: mm: Add page table walker for MMU v2/v3 Joel Fernandes
@ 2026-02-24 22:53 ` Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 18/25] gpu: nova-core: mm: Add virtual address range tracking to VMM Joel Fernandes
` (8 subsequent siblings)
25 siblings, 1 reply; 55+ messages in thread
From: Joel Fernandes @ 2026-02-24 22:53 UTC (permalink / raw)
To: linux-kernel
Add the Virtual Memory Manager (VMM) infrastructure for GPU address
space management. Each Vmm instance manages a single address space
identified by its Page Directory Base (PDB) address, used for Channel,
BAR1 and BAR2 mappings.
Mapping APIs and virtual address range tracking are added in later
commits.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm.rs | 1 +
drivers/gpu/nova-core/mm/vmm.rs | 63 +++++++++++++++++++++++++++++++++
2 files changed, 64 insertions(+)
create mode 100644 drivers/gpu/nova-core/mm/vmm.rs
diff --git a/drivers/gpu/nova-core/mm.rs b/drivers/gpu/nova-core/mm.rs
index 7a76af313d53..594ac93497aa 100644
--- a/drivers/gpu/nova-core/mm.rs
+++ b/drivers/gpu/nova-core/mm.rs
@@ -7,6 +7,7 @@
pub(crate) mod pagetable;
pub(crate) mod pramin;
pub(crate) mod tlb;
+pub(crate) mod vmm;
use kernel::{
devres::Devres,
diff --git a/drivers/gpu/nova-core/mm/vmm.rs b/drivers/gpu/nova-core/mm/vmm.rs
new file mode 100644
index 000000000000..0e1b0d668c57
--- /dev/null
+++ b/drivers/gpu/nova-core/mm/vmm.rs
@@ -0,0 +1,63 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Virtual Memory Manager for NVIDIA GPU page table management.
+//!
+//! The [`Vmm`] provides high-level page mapping and unmapping operations for GPU
+//! virtual address spaces (Channels, BAR1, BAR2). It wraps the page table walker
+//! and handles TLB flushing after modifications.
+
+#![allow(dead_code)]
+
+use kernel::{
+ gpu::buddy::AllocatedBlocks,
+ prelude::*, //
+};
+
+use crate::mm::{
+ pagetable::{
+ walk::{PtWalk, WalkResult},
+ MmuVersion, //
+ },
+ GpuMm,
+ Pfn,
+ Vfn,
+ VramAddress, //
+};
+
+/// Virtual Memory Manager for a GPU address space.
+///
+/// Each [`Vmm`] instance manages a single address space identified by its Page
+/// Directory Base (`PDB`) address. The [`Vmm`] is used for Channel, BAR1 and
+/// BAR2 mappings.
+pub(crate) struct Vmm {
+ pub(crate) pdb_addr: VramAddress,
+ pub(crate) mmu_version: MmuVersion,
+ /// Page table allocations required for mappings.
+ page_table_allocs: KVec<Pin<KBox<AllocatedBlocks>>>,
+}
+
+impl Vmm {
+ /// Create a new [`Vmm`] for the given Page Directory Base address.
+ pub(crate) fn new(pdb_addr: VramAddress, mmu_version: MmuVersion) -> Result<Self> {
+ // Only MMU v2 is supported for now.
+ if mmu_version != MmuVersion::V2 {
+ return Err(ENOTSUPP);
+ }
+
+ Ok(Self {
+ pdb_addr,
+ mmu_version,
+ page_table_allocs: KVec::new(),
+ })
+ }
+
+ /// Read the [`Pfn`] for a mapped [`Vfn`] if one is mapped.
+ pub(crate) fn read_mapping(&self, mm: &GpuMm, vfn: Vfn) -> Result<Option<Pfn>> {
+ let walker = PtWalk::new(self.pdb_addr, self.mmu_version);
+
+ match walker.walk_to_pte_lookup(mm, vfn)? {
+ WalkResult::Mapped { pfn, .. } => Ok(Some(pfn)),
+ WalkResult::Unmapped { .. } | WalkResult::PageTableMissing => Ok(None),
+ }
+ }
+}
--
2.34.1
* [PATCH v8 18/25] gpu: nova-core: mm: Add virtual address range tracking to VMM
2026-02-24 22:52 [PATCH v8 00/25] gpu: nova-core: Add memory management support Joel Fernandes
` (16 preceding siblings ...)
2026-02-24 22:53 ` [PATCH v8 17/25] gpu: nova-core: mm: Add Virtual Memory Manager Joel Fernandes
@ 2026-02-24 22:53 ` Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 19/25] gpu: nova-core: mm: Add multi-page mapping API " Joel Fernandes
` (7 subsequent siblings)
25 siblings, 1 reply; 55+ messages in thread
From: Joel Fernandes @ 2026-02-24 22:53 UTC (permalink / raw)
To: linux-kernel
Add virtual address range tracking to the VMM using a buddy allocator.
This enables contiguous virtual address range allocation for mappings.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm/vmm.rs | 98 +++++++++++++++++++++++++++++----
1 file changed, 87 insertions(+), 11 deletions(-)
diff --git a/drivers/gpu/nova-core/mm/vmm.rs b/drivers/gpu/nova-core/mm/vmm.rs
index 0e1b0d668c57..d17571db2e2d 100644
--- a/drivers/gpu/nova-core/mm/vmm.rs
+++ b/drivers/gpu/nova-core/mm/vmm.rs
@@ -9,19 +9,34 @@
#![allow(dead_code)]
use kernel::{
- gpu::buddy::AllocatedBlocks,
- prelude::*, //
+ gpu::buddy::{
+ AllocatedBlocks,
+ BuddyFlags,
+ GpuBuddy,
+ GpuBuddyAllocParams,
+ GpuBuddyParams, //
+ },
+ prelude::*,
+ sizes::SZ_4K, //
};
-use crate::mm::{
- pagetable::{
- walk::{PtWalk, WalkResult},
- MmuVersion, //
+use core::ops::Range;
+
+use crate::{
+ mm::{
+ pagetable::{
+ walk::{PtWalk, WalkResult},
+ MmuVersion, //
+ },
+ GpuMm,
+ Pfn,
+ Vfn,
+ VramAddress,
+ PAGE_SIZE, //
+ },
+ num::{
+ IntoSafeCast, //
},
- GpuMm,
- Pfn,
- Vfn,
- VramAddress, //
};
/// Virtual Memory Manager for a GPU address space.
@@ -34,23 +49,84 @@ pub(crate) struct Vmm {
pub(crate) mmu_version: MmuVersion,
/// Page table allocations required for mappings.
page_table_allocs: KVec<Pin<KBox<AllocatedBlocks>>>,
+ /// Buddy allocator for virtual address range tracking.
+ virt_buddy: GpuBuddy,
}
impl Vmm {
/// Create a new [`Vmm`] for the given Page Directory Base address.
- pub(crate) fn new(pdb_addr: VramAddress, mmu_version: MmuVersion) -> Result<Self> {
+ ///
+ /// The [`Vmm`] will manage a virtual address space of `va_size` bytes.
+ pub(crate) fn new(
+ pdb_addr: VramAddress,
+ mmu_version: MmuVersion,
+ va_size: u64,
+ ) -> Result<Self> {
// Only MMU v2 is supported for now.
if mmu_version != MmuVersion::V2 {
return Err(ENOTSUPP);
}
+ let virt_buddy = GpuBuddy::new(GpuBuddyParams {
+ base_offset_bytes: 0,
+ physical_memory_size_bytes: va_size,
+ chunk_size_bytes: SZ_4K.into_safe_cast(),
+ })?;
+
Ok(Self {
pdb_addr,
mmu_version,
page_table_allocs: KVec::new(),
+ virt_buddy,
})
}
+ /// Allocate a contiguous virtual frame number range.
+ ///
+ /// # Arguments
+ ///
+ /// - `num_pages`: Number of pages to allocate.
+ /// - `va_range`: `None` = allocate anywhere, `Some(range)` = constrain allocation to the given
+ /// range.
+ pub(crate) fn alloc_vfn_range(
+ &self,
+ num_pages: usize,
+ va_range: Option<Range<u64>>,
+ ) -> Result<(Vfn, Pin<KBox<AllocatedBlocks>>)> {
+ let np: u64 = num_pages.into_safe_cast();
+ let size_bytes: u64 = np
+ .checked_mul(PAGE_SIZE.into_safe_cast())
+ .ok_or(EOVERFLOW)?;
+
+ let (start, end) = match va_range {
+ Some(r) => {
+ let range_size = r.end.checked_sub(r.start).ok_or(EOVERFLOW)?;
+ if range_size != size_bytes {
+ return Err(EINVAL);
+ }
+ (r.start, r.end)
+ }
+ None => (0, 0),
+ };
+
+ let params = GpuBuddyAllocParams {
+ start_range_address: start,
+ end_range_address: end,
+ size_bytes,
+ min_block_size_bytes: SZ_4K.into_safe_cast(),
+ buddy_flags: BuddyFlags::try_new(BuddyFlags::CONTIGUOUS_ALLOCATION)?,
+ };
+
+ let alloc = KBox::pin_init(self.virt_buddy.alloc_blocks(¶ms), GFP_KERNEL)?;
+
+ // Get the starting offset of the first block (the only block, since the allocation is contiguous).
+ let offset = alloc.iter().next().ok_or(ENOMEM)?.offset();
+ let page_size: u64 = PAGE_SIZE.into_safe_cast();
+ let vfn = Vfn::new(offset / page_size);
+
+ Ok((vfn, alloc))
+ }
+
/// Read the [`Pfn`] for a mapped [`Vfn`] if one is mapped.
pub(crate) fn read_mapping(&self, mm: &GpuMm, vfn: Vfn) -> Result<Option<Pfn>> {
let walker = PtWalk::new(self.pdb_addr, self.mmu_version);
--
2.34.1
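The range validation and offset-to-VFN arithmetic in `alloc_vfn_range()` reduce to the following standalone sketch. Names are hypothetical; the real code goes through `GpuBuddyAllocParams` and `IntoSafeCast`, and `(0, 0)` is the buddy allocator's "allocate anywhere" convention:

```rust
// Assumed-standalone sketch of the alloc_vfn_range() range/size checks.

const PAGE_SIZE: u64 = 4096;

/// Validate an optional fixed VA range against the requested page count and
/// return the (start, end) bounds handed to the buddy allocator;
/// (0, 0) means "allocate anywhere".
fn range_bounds(
    num_pages: u64,
    va_range: Option<(u64, u64)>,
) -> Result<(u64, u64), &'static str> {
    let size_bytes = num_pages.checked_mul(PAGE_SIZE).ok_or("overflow")?;
    match va_range {
        Some((start, end)) => {
            let range_size = end.checked_sub(start).ok_or("overflow")?;
            // The patch requires the constrained range to match the request exactly.
            if range_size != size_bytes {
                return Err("range/size mismatch");
            }
            Ok((start, end))
        }
        None => Ok((0, 0)),
    }
}

fn main() {
    // A 4-page request constrained to exactly 16 KiB passes through.
    assert_eq!(range_bounds(4, Some((0x10000, 0x14000))), Ok((0x10000, 0x14000)));
    // A mismatched range is rejected.
    assert!(range_bounds(4, Some((0x10000, 0x15000))).is_err());
    // Unconstrained allocation.
    assert_eq!(range_bounds(2, None), Ok((0, 0)));
    // Offset-to-VFN conversion for the first allocated block.
    assert_eq!(0x14000u64 / PAGE_SIZE, 0x14);
    println!("ok");
}
```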
* [PATCH v8 19/25] gpu: nova-core: mm: Add multi-page mapping API to VMM
2026-02-24 22:52 [PATCH v8 00/25] gpu: nova-core: Add memory management support Joel Fernandes
` (17 preceding siblings ...)
2026-02-24 22:53 ` [PATCH v8 18/25] gpu: nova-core: mm: Add virtual address range tracking to VMM Joel Fernandes
@ 2026-02-24 22:53 ` Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 20/25] gpu: nova-core: Add BAR1 aperture type and size constant Joel Fernandes
` (6 subsequent siblings)
25 siblings, 1 reply; 55+ messages in thread
From: Joel Fernandes @ 2026-02-24 22:53 UTC (permalink / raw)
To: linux-kernel
Add the page table mapping and unmapping API to the Virtual Memory
Manager, implementing a two-phase prepare/execute model suitable for
use both inside and outside the DMA fence signalling critical path.
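The prepare/execute split can be illustrated with a toy model (all names invented): the fallible, allocating work happens in the prepare phase, leaving the execute phase allocation-free so it is safe inside the signalling critical path:

```rust
// Toy sketch of a two-phase prepare/execute mapping model (invented names).

struct Prepared {
    pages: Vec<u64>, // pre-allocated, zeroed page-table pages
}

struct Installed {
    entries: Vec<(u64, u64)>, // (slot, page) pairs written
}

/// Prepare phase: may allocate and may fail (e.g. out of memory).
fn prepare(
    missing_slots: &[u64],
    mut alloc: impl FnMut() -> Option<u64>,
) -> Option<Prepared> {
    let mut pages = Vec::new();
    for _ in missing_slots {
        pages.push(alloc()?);
    }
    Some(Prepared { pages })
}

/// Execute phase: installs only already-prepared pages, so it never
/// allocates and is safe where allocation is forbidden.
fn execute(missing_slots: &[u64], prepared: Prepared) -> Installed {
    let entries = missing_slots.iter().copied().zip(prepared.pages).collect();
    Installed { entries }
}

fn main() {
    let mut next = 0x2000u64;
    let slots = [0x1008, 0x1010];
    // A bump "allocator" standing in for the buddy allocator.
    let prepared = prepare(&slots, || {
        let p = next;
        next += 0x1000;
        Some(p)
    })
    .expect("allocation failed");
    let installed = execute(&slots, prepared);
    assert_eq!(installed.entries, vec![(0x1008, 0x2000), (0x1010, 0x3000)]);
    println!("ok");
}
```

In the patch, the analogue of `Prepared` is the `pt_pages` tree of zeroed page-table pages keyed by install address, drained during execute.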
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm/vmm.rs | 359 +++++++++++++++++++++++++++++++-
1 file changed, 357 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/nova-core/mm/vmm.rs b/drivers/gpu/nova-core/mm/vmm.rs
index d17571db2e2d..c639e06c7d57 100644
--- a/drivers/gpu/nova-core/mm/vmm.rs
+++ b/drivers/gpu/nova-core/mm/vmm.rs
@@ -17,16 +17,26 @@
GpuBuddyParams, //
},
prelude::*,
+ rbtree::{RBTree, RBTreeNode},
sizes::SZ_4K, //
};
+use core::cell::Cell;
use core::ops::Range;
use crate::{
mm::{
pagetable::{
- walk::{PtWalk, WalkResult},
- MmuVersion, //
+ walk::{
+ PtWalk,
+ WalkPdeResult,
+ WalkResult, //
+ },
+ DualPde,
+ MmuVersion,
+ PageTableLevel,
+ Pde,
+ Pte, //
},
GpuMm,
Pfn,
@@ -51,6 +61,74 @@ pub(crate) struct Vmm {
page_table_allocs: KVec<Pin<KBox<AllocatedBlocks>>>,
/// Buddy allocator for virtual address range tracking.
virt_buddy: GpuBuddy,
+ /// Prepared PT pages pending PDE installation, keyed by `install_addr`.
+ ///
+ /// Populated by the `Vmm` mapping prepare phase and drained in the execute phase.
+ /// Shared by all pending maps in the `Vmm`, thus preventing races where two
+ /// maps might try to install the same page table/directory entry pointer.
+ pt_pages: RBTree<VramAddress, PreparedPtPage>,
+}
+
+/// A pre-allocated and zeroed page table page.
+///
+/// Created during the mapping prepare phase and consumed during the mapping execute phase.
+/// Stored in an [`RBTree`] keyed by the PDE slot address (`install_addr`).
+struct PreparedPtPage {
+ /// The allocated and zeroed page table page.
+ alloc: Pin<KBox<AllocatedBlocks>>,
+ /// Page table level -- needed to determine if this PT page is for a dual PDE.
+ level: PageTableLevel,
+}
+
+/// Multi-page prepared mapping -- VA range allocated, ready for execute.
+///
+/// Produced by [`Vmm::prepare_map()`], consumed by [`Vmm::execute_map()`].
+/// The struct owns the VA space allocation between prepare and execute phases.
+pub(crate) struct PreparedMapping {
+ vfn_start: Vfn,
+ num_pages: usize,
+ vfn_alloc: Pin<KBox<AllocatedBlocks>>,
+}
+
+/// Result of a mapping operation -- tracks the active mapped range.
+///
+/// Returned by [`Vmm::execute_map()`] and [`Vmm::map_pages()`].
+/// Owns the VA allocation; the VA range is freed when this is dropped.
+/// Callers must call [`Vmm::unmap_pages()`] before dropping to invalidate
+/// PTEs (dropping only frees the VA range, not the PTE entries).
+pub(crate) struct MappedRange {
+ pub(crate) vfn_start: Vfn,
+ pub(crate) num_pages: usize,
+ /// VA allocation -- freed when [`MappedRange`] is dropped.
+ _vfn_alloc: Pin<KBox<AllocatedBlocks>>,
+ /// Logs a warning if dropped without unmapping.
+ _drop_guard: MustUnmapGuard,
+}
+
+/// Guard that logs a warning once if a [`MappedRange`] is dropped without
+/// calling [`Vmm::unmap_pages()`].
+struct MustUnmapGuard {
+ armed: Cell<bool>,
+}
+
+impl MustUnmapGuard {
+ const fn new() -> Self {
+ Self {
+ armed: Cell::new(true),
+ }
+ }
+
+ fn disarm(&self) {
+ self.armed.set(false);
+ }
+}
+
+impl Drop for MustUnmapGuard {
+ fn drop(&mut self) {
+ if self.armed.get() {
+ kernel::pr_warn!("MappedRange dropped without calling unmap_pages()\n");
+ }
+ }
}
impl Vmm {
@@ -78,6 +156,7 @@ pub(crate) fn new(
mmu_version,
page_table_allocs: KVec::new(),
virt_buddy,
+ pt_pages: RBTree::new(),
})
}
@@ -136,4 +215,280 @@ pub(crate) fn read_mapping(&self, mm: &GpuMm, vfn: Vfn) -> Result<Option<Pfn>> {
WalkResult::Unmapped { .. } | WalkResult::PageTableMissing => Ok(None),
}
}
+
+ /// Allocate and zero a physical page table page for a specific PDE slot.
+ /// Called during the map prepare phase.
+ fn alloc_and_zero_page_table(
+ &mut self,
+ mm: &GpuMm,
+ level: PageTableLevel,
+ ) -> Result<PreparedPtPage> {
+ let params = GpuBuddyAllocParams {
+ start_range_address: 0,
+ end_range_address: 0,
+ size_bytes: SZ_4K.into_safe_cast(),
+ min_block_size_bytes: SZ_4K.into_safe_cast(),
+ buddy_flags: BuddyFlags::try_new(0)?,
+ };
+ let blocks = KBox::pin_init(mm.buddy().alloc_blocks(¶ms), GFP_KERNEL)?;
+
+ // Get page's VRAM address from the allocation.
+ let page_vram = VramAddress::new(blocks.iter().next().ok_or(ENOMEM)?.offset());
+
+ // Zero via PRAMIN.
+ let mut window = mm.pramin().window()?;
+ let base = page_vram.raw();
+ for off in (0..PAGE_SIZE).step_by(8) {
+ window.try_write64(base + off, 0)?;
+ }
+
+ Ok(PreparedPtPage {
+ alloc: blocks,
+ level,
+ })
+ }
+
+    /// Ensure all intermediate page table pages are prepared for a [`Vfn`]. This
+    /// only determines which PDE pages are missing and allocates pages for them;
+    /// installing them is deferred to the execute phase.
+    ///
+    /// PRAMIN is released before each allocation and re-acquired after, so memory
+    /// allocations happen outside of the lock. This prevents deadlocks with the
+    /// fence signalling critical path.
+ fn ensure_pte_path(&mut self, mm: &GpuMm, vfn: Vfn) -> Result {
+ let walker = PtWalk::new(self.pdb_addr, self.mmu_version);
+ let max_iter = 2 * self.mmu_version.pde_level_count();
+
+ // Keep looping until all PDE levels are resolved.
+ for _ in 0..max_iter {
+ let mut window = mm.pramin().window()?;
+
+        // Walk PDE levels. The closure checks self.pt_pages for pages that are
+        // prepared but not yet installed, letting the walker continue through
+        // them as if they were already installed in HW.
+ let result = walker.walk_pde_levels(&mut window, vfn, |install_addr| {
+ self.pt_pages
+ .get(&install_addr)
+ .and_then(|p| Some(VramAddress::new(p.alloc.iter().next()?.offset())))
+ })?;
+
+ match result {
+ WalkPdeResult::Complete { .. } => {
+ // All PDE levels resolved.
+ return Ok(());
+ }
+ WalkPdeResult::Missing {
+ install_addr,
+ level,
+ } => {
+ // Drop PRAMIN before allocation.
+ drop(window);
+ let page = self.alloc_and_zero_page_table(mm, level)?;
+ let node = RBTreeNode::new(install_addr, page, GFP_KERNEL)?;
+ let old = self.pt_pages.insert(node);
+ if old.is_some() {
+ kernel::pr_warn_once!(
+ "VMM: duplicate install_addr in pt_pages (internal consistency error)\n"
+ );
+ return Err(EIO);
+ }
+ // Loop: re-acquire PRAMIN and re-walk from root.
+ }
+ }
+ }
+
+ kernel::pr_warn!(
+ "VMM: ensure_pte_path: loop exhausted after {} iters (VFN {:?})\n",
+ max_iter,
+ vfn
+ );
+ Err(EIO)
+ }
+
+ /// Prepare resources for mapping `num_pages` pages.
+ ///
+ /// Allocates a contiguous VA range, then walks the hierarchy per-VFN to prepare pages
+ /// for all missing PDEs. Returns a [`PreparedMapping`] with the VA allocation.
+    ///
+    /// If `va_range` is not `None`, the VA allocation is constrained to the given
+    /// range. This function allocates memory, so it must be called outside the
+    /// fence signalling critical path.
+ pub(crate) fn prepare_map(
+ &mut self,
+ mm: &GpuMm,
+ num_pages: usize,
+ va_range: Option<Range<u64>>,
+ ) -> Result<PreparedMapping> {
+ if num_pages == 0 {
+ return Err(EINVAL);
+ }
+
+ // Pre-reserve so execute_map() can use push_within_capacity (no alloc in
+ // fence signalling critical path).
+ // Upper bound on page table pages needed for the full tree (PTE pages + PDE
+ // pages at all levels).
+ let pt_upper_bound = self.mmu_version.pt_pages_upper_bound(num_pages);
+ self.page_table_allocs.reserve(pt_upper_bound, GFP_KERNEL)?;
+
+ // Allocate contiguous VA range.
+ let (vfn_start, vfn_alloc) = self.alloc_vfn_range(num_pages, va_range)?;
+
+ // Walk the hierarchy per-VFN to prepare pages for all missing PDEs.
+ for i in 0..num_pages {
+ let i_u64: u64 = i.into_safe_cast();
+ let vfn = Vfn::new(vfn_start.raw() + i_u64);
+ self.ensure_pte_path(mm, vfn)?;
+ }
+
+ Ok(PreparedMapping {
+ vfn_start,
+ num_pages,
+ vfn_alloc,
+ })
+ }
+
+    /// Execute a prepared multi-page mapping.
+    ///
+    /// Drains the prepared PT pages, installs the pending PDEs, writes the PTEs,
+    /// and finishes with a single TLB flush.
+ pub(crate) fn execute_map(
+ &mut self,
+ mm: &GpuMm,
+ prepared: PreparedMapping,
+ pfns: &[Pfn],
+ writable: bool,
+ ) -> Result<MappedRange> {
+ if pfns.len() != prepared.num_pages {
+ return Err(EINVAL);
+ }
+
+ let PreparedMapping {
+ vfn_start,
+ num_pages,
+ vfn_alloc,
+ } = prepared;
+
+ let walker = PtWalk::new(self.pdb_addr, self.mmu_version);
+ let mut window = mm.pramin().window()?;
+
+ // First, drain self.pt_pages, install all pending PDEs.
+ let mut cursor = self.pt_pages.cursor_front_mut();
+ while let Some(c) = cursor {
+ let (next, node) = c.remove_current();
+ let (install_addr, page) = node.to_key_value();
+ let page_vram = VramAddress::new(page.alloc.iter().next().ok_or(ENOMEM)?.offset());
+
+ if page.level == self.mmu_version.dual_pde_level() {
+ let new_dpde = DualPde::new_small(self.mmu_version, Pfn::from(page_vram));
+ new_dpde.write(&mut window, install_addr)?;
+ } else {
+ let new_pde = Pde::new_vram(self.mmu_version, Pfn::from(page_vram));
+ new_pde.write(&mut window, install_addr)?;
+ }
+
+ // Track the allocated pages in the `Vmm`.
+ self.page_table_allocs
+ .push_within_capacity(page.alloc)
+ .map_err(|_| ENOMEM)?;
+
+ cursor = next;
+ }
+
+ // Next, write PTEs (all PDEs now installed in HW).
+ for (i, &pfn) in pfns.iter().enumerate() {
+ let i_u64: u64 = i.into_safe_cast();
+ let vfn = Vfn::new(vfn_start.raw() + i_u64);
+ let result = walker.walk_to_pte_lookup_with_window(&mut window, vfn)?;
+
+ match result {
+ WalkResult::Unmapped { pte_addr } | WalkResult::Mapped { pte_addr, .. } => {
+ let pte = Pte::new_vram(self.mmu_version, pfn, writable);
+ pte.write(&mut window, pte_addr)?;
+ }
+ WalkResult::PageTableMissing => {
+ kernel::pr_warn_once!("VMM: page table missing for VFN {vfn:?}\n");
+ return Err(EIO);
+ }
+ }
+ }
+
+ drop(window);
+
+ // Finally, flush the TLB.
+ mm.tlb().flush(self.pdb_addr)?;
+
+ Ok(MappedRange {
+ vfn_start,
+ num_pages,
+ _vfn_alloc: vfn_alloc,
+ _drop_guard: MustUnmapGuard::new(),
+ })
+ }
+
+    /// Map pages, performing the prepare and execute phases in a single call.
+    ///
+    /// This is a convenience wrapper for callers outside the fence signalling critical
+    /// path (e.g., BAR mappings). For DRM use cases, [`Vmm::prepare_map()`] and
+    /// [`Vmm::execute_map()`] are called separately.
+ pub(crate) fn map_pages(
+ &mut self,
+ mm: &GpuMm,
+ pfns: &[Pfn],
+ va_range: Option<Range<u64>>,
+ writable: bool,
+ ) -> Result<MappedRange> {
+ if pfns.is_empty() {
+ return Err(EINVAL);
+ }
+
+ // Check if provided VA range is sufficient (if provided).
+ if let Some(ref range) = va_range {
+ let required: u64 = pfns
+ .len()
+ .checked_mul(PAGE_SIZE)
+ .ok_or(EOVERFLOW)?
+ .into_safe_cast();
+ let available = range.end.checked_sub(range.start).ok_or(EINVAL)?;
+ if available < required {
+ return Err(EINVAL);
+ }
+ }
+
+ let prepared = self.prepare_map(mm, pfns.len(), va_range)?;
+ self.execute_map(mm, prepared, pfns, writable)
+ }
+
+    /// Unmap all pages in a [`MappedRange`] with a single TLB flush.
+    ///
+    /// Takes the range by value (consuming it), invalidates the PTEs for the range
+    /// while holding the PRAMIN lock, flushes the TLB, then drops the range
+    /// (freeing the VA).
+ pub(crate) fn unmap_pages(&mut self, mm: &GpuMm, range: MappedRange) -> Result {
+ let walker = PtWalk::new(self.pdb_addr, self.mmu_version);
+ let invalid_pte = Pte::invalid(self.mmu_version);
+
+ let mut window = mm.pramin().window()?;
+ for i in 0..range.num_pages {
+ let i_u64: u64 = i.into_safe_cast();
+ let vfn = Vfn::new(range.vfn_start.raw() + i_u64);
+ let result = walker.walk_to_pte_lookup_with_window(&mut window, vfn)?;
+
+ match result {
+ WalkResult::Mapped { pte_addr, .. } | WalkResult::Unmapped { pte_addr } => {
+ invalid_pte.write(&mut window, pte_addr)?;
+ }
+ WalkResult::PageTableMissing => {
+ continue;
+ }
+ }
+ }
+ drop(window);
+
+ mm.tlb().flush(self.pdb_addr)?;
+
+        // TODO: Internal page table pages (PDE, PTE pages) are intentionally kept
+        // around so that repeated maps/unmaps stay fast. As a future improvement, a
+        // reclaimer could free them when VRAM runs short. For now, the PT pages are
+        // dropped once the `Vmm` is dropped.
+
+ range._drop_guard.disarm(); // Unmap complete, Ok to drop MappedRange.
+ Ok(())
+ }
}
--
2.34.1
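The two-phase API added by this patch (all fallible allocation in prepare_map(), none in execute_map()) can be sketched in self-contained userspace Rust. Vmm, PreparedMapping, and the slot vector below are simplified stand-ins for illustration, not the driver's actual types:

```rust
// Sketch of the prepare/execute split: prepare_map() does all fallible
// allocation up front, so execute_map() can run in a context where
// allocation is forbidden (e.g. the DMA fence signalling critical path).
struct PreparedMapping {
    slots: Vec<Option<u64>>, // pre-reserved PTE slots, one per page
}

struct Vmm;

impl Vmm {
    // May allocate: reserves one slot per page.
    fn prepare_map(&mut self, num_pages: usize) -> PreparedMapping {
        PreparedMapping { slots: vec![None; num_pages] }
    }

    // Must not allocate: only fills in the pre-reserved slots.
    fn execute_map(
        &mut self,
        mut prepared: PreparedMapping,
        pfns: &[u64],
    ) -> Result<Vec<u64>, &'static str> {
        if pfns.len() != prepared.slots.len() {
            return Err("page count mismatch");
        }
        for (slot, &pfn) in prepared.slots.iter_mut().zip(pfns) {
            *slot = Some(pfn); // analogous to writing a PTE
        }
        Ok(prepared.slots.into_iter().flatten().collect())
    }
}
```

The real execute_map() leans on the same idea: page_table_allocs is reserved in prepare_map(), so push_within_capacity() never allocates during execution.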
* [PATCH v8 20/25] gpu: nova-core: Add BAR1 aperture type and size constant
2026-02-24 22:52 [PATCH v8 00/25] gpu: nova-core: Add memory management support Joel Fernandes
` (18 preceding siblings ...)
2026-02-24 22:53 ` [PATCH v8 19/25] gpu: nova-core: mm: Add multi-page mapping API " Joel Fernandes
@ 2026-02-24 22:53 ` Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 21/25] gpu: nova-core: gsp: Add BAR1 PDE base accessors Joel Fernandes
` (5 subsequent siblings)
25 siblings, 1 reply; 55+ messages in thread
From: Joel Fernandes @ 2026-02-24 22:53 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
Add BAR1_SIZE constant and Bar1 type alias for the 256MB BAR1 aperture.
These are prerequisites for BAR1 memory access functionality.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
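As background, the sized-BAR pattern this patch relies on (pci::Bar<BAR1_SIZE>) encodes the aperture size in the type. A minimal userspace sketch, where Bar and its Vec backing store are illustrative stand-ins for the kernel crate's pci::Bar, not its real implementation:

```rust
// Sketch of a size-checked BAR wrapper using const generics. The aperture
// size is part of the type, so bounds checks compare against a
// compile-time constant rather than a runtime field.
struct Bar<const SIZE: usize> {
    mem: Vec<u8>, // stand-in for the mapped MMIO region
}

impl<const SIZE: usize> Bar<SIZE> {
    fn new() -> Self {
        Self { mem: vec![0u8; SIZE] }
    }

    // Fallible 32-bit read, bounds-checked against the type-level SIZE.
    fn try_read32(&self, offset: usize) -> Result<u32, &'static str> {
        let end = offset.checked_add(4).ok_or("offset overflow")?;
        if end > SIZE {
            return Err("out of bounds");
        }
        let bytes: [u8; 4] = self.mem[offset..end].try_into().unwrap();
        Ok(u32::from_le_bytes(bytes))
    }
}

const BAR0_SIZE: usize = 16 << 20; // SZ_16M
const BAR1_SIZE: usize = 256 << 20; // SZ_256M

type Bar0 = Bar<BAR0_SIZE>;
#[allow(dead_code)]
type Bar1 = Bar<BAR1_SIZE>;
```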
drivers/gpu/nova-core/driver.rs | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/nova-core/driver.rs b/drivers/gpu/nova-core/driver.rs
index 5a4cc047bcfc..f30ffa45cf13 100644
--- a/drivers/gpu/nova-core/driver.rs
+++ b/drivers/gpu/nova-core/driver.rs
@@ -13,7 +13,10 @@
Vendor, //
},
prelude::*,
- sizes::SZ_16M,
+ sizes::{
+ SZ_16M,
+ SZ_256M, //
+ },
sync::Arc, //
};
@@ -28,6 +31,7 @@ pub(crate) struct NovaCore {
}
const BAR0_SIZE: usize = SZ_16M;
+pub(crate) const BAR1_SIZE: usize = SZ_256M;
// For now we only support Ampere which can use up to 47-bit DMA addresses.
//
@@ -38,6 +42,8 @@ pub(crate) struct NovaCore {
const GPU_DMA_BITS: u32 = 47;
pub(crate) type Bar0 = pci::Bar<BAR0_SIZE>;
+#[expect(dead_code)]
+pub(crate) type Bar1 = pci::Bar<BAR1_SIZE>;
kernel::pci_device_table!(
PCI_TABLE,
--
2.34.1
* [PATCH v8 21/25] gpu: nova-core: gsp: Add BAR1 PDE base accessors
2026-02-24 22:52 [PATCH v8 00/25] gpu: nova-core: Add memory management support Joel Fernandes
` (19 preceding siblings ...)
2026-02-24 22:53 ` [PATCH v8 20/25] gpu: nova-core: Add BAR1 aperture type and size constant Joel Fernandes
@ 2026-02-24 22:53 ` Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 22/25] gpu: nova-core: mm: Add BAR1 user interface Joel Fernandes
` (4 subsequent siblings)
25 siblings, 1 reply; 55+ messages in thread
From: Joel Fernandes @ 2026-02-24 22:53 UTC (permalink / raw)
To: linux-kernel
Add accessor methods to GspStaticConfigInfo for retrieving the BAR1 Page
Directory Entry base addresses from GSP-RM firmware.
These addresses point to the root page tables for BAR1 virtual memory spaces.
The page tables are set up by GSP-RM during GPU initialization.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/gsp/commands.rs | 8 ++++++++
drivers/gpu/nova-core/gsp/fw/commands.rs | 8 ++++++++
2 files changed, 16 insertions(+)
diff --git a/drivers/gpu/nova-core/gsp/commands.rs b/drivers/gpu/nova-core/gsp/commands.rs
index fc9ba08f9f8d..22bd61e2a67e 100644
--- a/drivers/gpu/nova-core/gsp/commands.rs
+++ b/drivers/gpu/nova-core/gsp/commands.rs
@@ -189,6 +189,7 @@ fn init(&self) -> impl Init<Self::Command, Self::InitError> {
/// The reply from the GSP to the [`GetGspStaticInfo`] command.
pub(crate) struct GetGspStaticInfoReply {
gpu_name: [u8; 64],
+ bar1_pde_base: u64,
/// First usable FB region `(base, size)` for memory allocation.
usable_fb_region: Option<(u64, u64)>,
}
@@ -204,6 +205,7 @@ fn read(
) -> Result<Self, Self::InitError> {
Ok(GetGspStaticInfoReply {
gpu_name: msg.gpu_name_str(),
+ bar1_pde_base: msg.bar1_pde_base(),
usable_fb_region: msg.first_usable_fb_region(),
})
}
@@ -232,6 +234,12 @@ pub(crate) fn gpu_name(&self) -> core::result::Result<&str, GpuNameError> {
.map_err(GpuNameError::InvalidUtf8)
}
+ /// Returns the BAR1 Page Directory Entry base address.
+ #[expect(dead_code)]
+ pub(crate) fn bar1_pde_base(&self) -> u64 {
+ self.bar1_pde_base
+ }
+
/// Returns the usable FB region `(base, size)` for driver allocation which is
/// already retrieved from the GSP.
pub(crate) fn usable_fb_region(&self) -> Option<(u64, u64)> {
diff --git a/drivers/gpu/nova-core/gsp/fw/commands.rs b/drivers/gpu/nova-core/gsp/fw/commands.rs
index ff771a4aba4f..307f48670e6d 100644
--- a/drivers/gpu/nova-core/gsp/fw/commands.rs
+++ b/drivers/gpu/nova-core/gsp/fw/commands.rs
@@ -116,6 +116,14 @@ impl GspStaticConfigInfo {
self.0.gpuNameString
}
+ /// Returns the BAR1 Page Directory Entry base address.
+ ///
+ /// This is the root page table address for BAR1 virtual memory,
+ /// set up by GSP-RM firmware.
+ pub(crate) fn bar1_pde_base(&self) -> u64 {
+ self.0.bar1PdeBase
+ }
+
/// Extract the first usable FB region from GSP firmware data.
///
/// Returns the first region suitable for driver memory allocation as a `(base, size)` tuple.
--
2.34.1
* [PATCH v8 22/25] gpu: nova-core: mm: Add BAR1 user interface
2026-02-24 22:52 [PATCH v8 00/25] gpu: nova-core: Add memory management support Joel Fernandes
` (20 preceding siblings ...)
2026-02-24 22:53 ` [PATCH v8 21/25] gpu: nova-core: gsp: Add BAR1 PDE base accessors Joel Fernandes
@ 2026-02-24 22:53 ` Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 23/25] gpu: nova-core: mm: Add BarUser to struct Gpu and create at boot Joel Fernandes
` (3 subsequent siblings)
25 siblings, 1 reply; 55+ messages in thread
From: Joel Fernandes @ 2026-02-24 22:53 UTC (permalink / raw)
To: linux-kernel
Add the BAR1 user interface for CPU access to GPU video memory through
the BAR1 aperture.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
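The cleanup pattern used by BarAccess below (the mapping held in an Option so that Drop can take() it and pass ownership to a consuming unmap call) can be sketched in plain Rust; all types here are simplified stand-ins for the driver's, with a page counter standing in for real PTE state:

```rust
// Sketch of the RAII access pattern: dropping the access object unmaps
// the range exactly once, because Drop takes the mapping out of the
// Option and hands it to a by-value unmap.
struct MappedRange {
    num_pages: usize,
}

struct Vmm {
    mapped_pages: usize, // how many pages are currently mapped
}

impl Vmm {
    fn map_pages(&mut self, num_pages: usize) -> MappedRange {
        self.mapped_pages += num_pages;
        MappedRange { num_pages }
    }

    // Consumes the range: PTE invalidation and VA free happen together.
    fn unmap_pages(&mut self, range: MappedRange) {
        self.mapped_pages -= range.num_pages;
    }
}

struct BarAccess<'a> {
    vmm: &'a mut Vmm,
    mapped: Option<MappedRange>, // Option so Drop can take() it
}

impl BarAccess<'_> {
    fn num_pages(&self) -> usize {
        // Only None after take() in Drop, so unwrap() cannot panic here.
        self.mapped.as_ref().unwrap().num_pages
    }
}

impl Drop for BarAccess<'_> {
    fn drop(&mut self) {
        if let Some(mapped) = self.mapped.take() {
            self.vmm.unmap_pages(mapped);
        }
    }
}
```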
drivers/gpu/nova-core/driver.rs | 1 -
drivers/gpu/nova-core/mm.rs | 1 +
drivers/gpu/nova-core/mm/bar_user.rs | 156 +++++++++++++++++++++++++++
3 files changed, 157 insertions(+), 1 deletion(-)
create mode 100644 drivers/gpu/nova-core/mm/bar_user.rs
diff --git a/drivers/gpu/nova-core/driver.rs b/drivers/gpu/nova-core/driver.rs
index f30ffa45cf13..d8b2e967ba4c 100644
--- a/drivers/gpu/nova-core/driver.rs
+++ b/drivers/gpu/nova-core/driver.rs
@@ -42,7 +42,6 @@ pub(crate) struct NovaCore {
const GPU_DMA_BITS: u32 = 47;
pub(crate) type Bar0 = pci::Bar<BAR0_SIZE>;
-#[expect(dead_code)]
pub(crate) type Bar1 = pci::Bar<BAR1_SIZE>;
kernel::pci_device_table!(
diff --git a/drivers/gpu/nova-core/mm.rs b/drivers/gpu/nova-core/mm.rs
index 594ac93497aa..42529e2339a1 100644
--- a/drivers/gpu/nova-core/mm.rs
+++ b/drivers/gpu/nova-core/mm.rs
@@ -4,6 +4,7 @@
#![expect(dead_code)]
+pub(crate) mod bar_user;
pub(crate) mod pagetable;
pub(crate) mod pramin;
pub(crate) mod tlb;
diff --git a/drivers/gpu/nova-core/mm/bar_user.rs b/drivers/gpu/nova-core/mm/bar_user.rs
new file mode 100644
index 000000000000..4af56ac628b6
--- /dev/null
+++ b/drivers/gpu/nova-core/mm/bar_user.rs
@@ -0,0 +1,156 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! BAR1 user interface for CPU access to GPU virtual memory. Used by USERD
+//! for GPU work submission, and by applications to access GPU buffers via mmap().
+
+use kernel::io::Io;
+use kernel::prelude::*;
+
+use crate::{
+ driver::Bar1,
+ mm::{
+ pagetable::MmuVersion,
+ vmm::{
+ MappedRange,
+ Vmm, //
+ },
+ GpuMm,
+ Pfn,
+ Vfn,
+ VirtualAddress,
+ VramAddress,
+ PAGE_SIZE, //
+ },
+ num::{
+ IntoSafeCast, //
+ },
+};
+
+/// BAR1 user interface for virtual memory mappings.
+///
+/// Owns a VMM instance with virtual address tracking and provides
+/// BAR1-specific mapping and cleanup operations.
+pub(crate) struct BarUser {
+ vmm: Vmm,
+}
+
+impl BarUser {
+ /// Create a new [`BarUser`] with virtual address tracking.
+ pub(crate) fn new(
+ pdb_addr: VramAddress,
+ mmu_version: MmuVersion,
+ va_size: u64,
+ ) -> Result<Self> {
+ Ok(Self {
+ vmm: Vmm::new(pdb_addr, mmu_version, va_size)?,
+ })
+ }
+
+ /// Map physical pages to a contiguous BAR1 virtual range.
+ pub(crate) fn map<'a>(
+ &'a mut self,
+ mm: &'a GpuMm,
+ bar: &'a Bar1,
+ pfns: &[Pfn],
+ writable: bool,
+ ) -> Result<BarAccess<'a>> {
+ if pfns.is_empty() {
+ return Err(EINVAL);
+ }
+
+ let mapped = self.vmm.map_pages(mm, pfns, None, writable)?;
+
+ Ok(BarAccess {
+ vmm: &mut self.vmm,
+ mm,
+ bar,
+ mapped: Some(mapped),
+ })
+ }
+}
+
+/// Access object for a mapped BAR1 region.
+///
+/// Wraps a [`MappedRange`] and provides BAR1 access. When dropped,
+/// unmaps pages and releases the VA range (by passing the range to
+/// [`Vmm::unmap_pages()`], which consumes it).
+pub(crate) struct BarAccess<'a> {
+ vmm: &'a mut Vmm,
+ mm: &'a GpuMm,
+ bar: &'a Bar1,
+    /// Needs to be an `Option` so that `Drop` can `take()` it and pass it to
+    /// [`Vmm::unmap_pages()`], which consumes it.
+}
+
+impl<'a> BarAccess<'a> {
+ /// Returns the active mapping.
+ fn mapped(&self) -> &MappedRange {
+        // `mapped` is only `None` after `take()` in `Drop`; accessors are
+        // never called from within `Drop`, so `unwrap()` never panics.
+ self.mapped.as_ref().unwrap()
+ }
+
+ /// Get the base virtual address of this mapping.
+ pub(crate) fn base(&self) -> VirtualAddress {
+ VirtualAddress::from(self.mapped().vfn_start)
+ }
+
+ /// Get the total size of the mapped region in bytes.
+ pub(crate) fn size(&self) -> usize {
+ self.mapped().num_pages * PAGE_SIZE
+ }
+
+ /// Get the starting virtual frame number.
+ pub(crate) fn vfn_start(&self) -> Vfn {
+ self.mapped().vfn_start
+ }
+
+ /// Get the number of pages in this mapping.
+ pub(crate) fn num_pages(&self) -> usize {
+ self.mapped().num_pages
+ }
+
+ /// Translate an offset within this mapping to a BAR1 aperture offset.
+ fn bar_offset(&self, offset: usize) -> Result<usize> {
+ if offset >= self.size() {
+ return Err(EINVAL);
+ }
+
+ let base_vfn: usize = self.mapped().vfn_start.raw().into_safe_cast();
+ let base = base_vfn.checked_mul(PAGE_SIZE).ok_or(EOVERFLOW)?;
+ base.checked_add(offset).ok_or(EOVERFLOW)
+ }
+
+ // Fallible accessors with runtime bounds checking.
+
+ /// Read a 32-bit value at the given offset.
+ pub(crate) fn try_read32(&self, offset: usize) -> Result<u32> {
+ self.bar.try_read32(self.bar_offset(offset)?)
+ }
+
+ /// Write a 32-bit value at the given offset.
+ pub(crate) fn try_write32(&self, value: u32, offset: usize) -> Result {
+ self.bar.try_write32(value, self.bar_offset(offset)?)
+ }
+
+ /// Read a 64-bit value at the given offset.
+ pub(crate) fn try_read64(&self, offset: usize) -> Result<u64> {
+ self.bar.try_read64(self.bar_offset(offset)?)
+ }
+
+ /// Write a 64-bit value at the given offset.
+ pub(crate) fn try_write64(&self, value: u64, offset: usize) -> Result {
+ self.bar.try_write64(value, self.bar_offset(offset)?)
+ }
+}
+
+impl Drop for BarAccess<'_> {
+ fn drop(&mut self) {
+ if let Some(mapped) = self.mapped.take() {
+ if self.vmm.unmap_pages(self.mm, mapped).is_err() {
+ kernel::pr_warn_once!("BarAccess: unmap_pages failed.\n");
+ }
+ }
+ }
+}
--
2.34.1
* [PATCH v8 23/25] gpu: nova-core: mm: Add BarUser to struct Gpu and create at boot
2026-02-24 22:52 [PATCH v8 00/25] gpu: nova-core: Add memory management support Joel Fernandes
` (21 preceding siblings ...)
2026-02-24 22:53 ` [PATCH v8 22/25] gpu: nova-core: mm: Add BAR1 user interface Joel Fernandes
@ 2026-02-24 22:53 ` Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 24/25] gpu: nova-core: mm: Add BAR1 memory management self-tests Joel Fernandes
` (2 subsequent siblings)
25 siblings, 1 reply; 55+ messages in thread
From: Joel Fernandes @ 2026-02-24 22:53 UTC (permalink / raw)
To: linux-kernel
Add a BarUser field to struct Gpu and eagerly create it during GPU
initialization. The BarUser provides the BAR1 user interface for CPU
access to GPU virtual memory through the GPU's MMU.
The BarUser is initialized using the BAR1 PDE base address from GSP static
info, the MMU version, and the BAR1 size obtained from the PCI device.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/gpu.rs | 22 +++++++++++++++++++++-
drivers/gpu/nova-core/gsp/commands.rs | 1 -
2 files changed, 21 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/nova-core/gpu.rs b/drivers/gpu/nova-core/gpu.rs
index ba769de2f984..4281487ca52e 100644
--- a/drivers/gpu/nova-core/gpu.rs
+++ b/drivers/gpu/nova-core/gpu.rs
@@ -26,7 +26,12 @@
commands::GetGspStaticInfoReply,
Gsp, //
},
- mm::GpuMm,
+ mm::{
+ bar_user::BarUser,
+ pagetable::MmuVersion,
+ GpuMm,
+ VramAddress, //
+ },
num::IntoSafeCast,
regs,
};
@@ -36,6 +41,7 @@
struct BootParams {
usable_vram_start: u64,
usable_vram_size: u64,
+ bar1_pde_base: u64,
}
macro_rules! define_chipset {
@@ -273,6 +279,8 @@ pub(crate) struct Gpu {
gsp: Gsp,
/// Static GPU information from GSP.
gsp_static_info: GetGspStaticInfoReply,
+ /// BAR1 user interface for CPU access to GPU virtual memory.
+ bar_user: BarUser,
}
impl Gpu {
@@ -286,6 +294,7 @@ pub(crate) fn new<'a>(
let boot_params: Cell<BootParams> = Cell::new(BootParams {
usable_vram_start: 0,
usable_vram_size: 0,
+ bar1_pde_base: 0,
});
try_pin_init!(Self {
@@ -330,6 +339,7 @@ pub(crate) fn new<'a>(
boot_params.set(BootParams {
usable_vram_start: usable_vram.start,
usable_vram_size: usable_vram.end - usable_vram.start,
+ bar1_pde_base: info.bar1_pde_base(),
});
info
@@ -346,6 +356,16 @@ pub(crate) fn new<'a>(
})?
},
+ // Create BAR1 user interface for CPU access to GPU virtual memory.
+ // Uses the BAR1 PDE base from GSP and full BAR1 size for VA space.
+ bar_user: {
+ let params = boot_params.get();
+ let pdb_addr = VramAddress::new(params.bar1_pde_base);
+ let mmu_version = MmuVersion::from(spec.chipset.arch());
+ let bar1_size = pdev.resource_len(1)?;
+ BarUser::new(pdb_addr, mmu_version, bar1_size)?
+ },
+
bar: devres_bar,
})
}
diff --git a/drivers/gpu/nova-core/gsp/commands.rs b/drivers/gpu/nova-core/gsp/commands.rs
index 22bd61e2a67e..8b3fa29c57f1 100644
--- a/drivers/gpu/nova-core/gsp/commands.rs
+++ b/drivers/gpu/nova-core/gsp/commands.rs
@@ -235,7 +235,6 @@ pub(crate) fn gpu_name(&self) -> core::result::Result<&str, GpuNameError> {
}
/// Returns the BAR1 Page Directory Entry base address.
- #[expect(dead_code)]
pub(crate) fn bar1_pde_base(&self) -> u64 {
self.bar1_pde_base
}
--
2.34.1
* [PATCH v8 24/25] gpu: nova-core: mm: Add BAR1 memory management self-tests
2026-02-24 22:52 [PATCH v8 00/25] gpu: nova-core: Add memory management support Joel Fernandes
` (22 preceding siblings ...)
2026-02-24 22:53 ` [PATCH v8 23/25] gpu: nova-core: mm: Add BarUser to struct Gpu and create at boot Joel Fernandes
@ 2026-02-24 22:53 ` Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 25/25] gpu: nova-core: mm: Add PRAMIN aperture self-tests Joel Fernandes
2026-02-27 4:25 ` Claude review: gpu: nova-core: Add memory management support Claude Code Review Bot
25 siblings, 1 reply; 55+ messages in thread
From: Joel Fernandes @ 2026-02-24 22:53 UTC (permalink / raw)
To: linux-kernel
Add self-tests for BAR1 access during driver probe when
CONFIG_NOVA_MM_SELFTESTS is enabled (default disabled). This exercises the
Vmm, the GPU buddy allocator, and the BAR1 region, all of which must function
correctly for the tests to pass.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
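The core check the self-test performs (write a pattern through PRAMIN's physical path, read it back through BAR1's virtual path) can be simulated in plain Rust. FakeGpu, its word-addressed buffer, and the tiny 4-word page size below are all illustrative assumptions, not the real apertures:

```rust
// Sketch of the aperture cross-check: PRAMIN addresses VRAM directly,
// while BAR1 reads translate through the VFN -> PFN mapping first. If
// the mapping is correct, both paths reach the same word.
use std::collections::HashMap;

const WORDS_PER_PAGE: usize = 4; // tiny pages to keep the sketch small

struct FakeGpu {
    vram: Vec<u32>,             // word-addressed stand-in for VRAM
    map: HashMap<usize, usize>, // VFN -> PFN page table stand-in
}

impl FakeGpu {
    // "PRAMIN": direct physical write into VRAM.
    fn pramin_write(&mut self, vram_word: usize, val: u32) {
        self.vram[vram_word] = val;
    }

    // "BAR1": virtual read, translated through the mapping.
    fn bar1_read(&self, va_word: usize) -> Option<u32> {
        let (vfn, off) = (va_word / WORDS_PER_PAGE, va_word % WORDS_PER_PAGE);
        let pfn = *self.map.get(&vfn)?;
        self.vram.get(pfn * WORDS_PER_PAGE + off).copied()
    }
}
```

A matching read-back through both paths is exactly what Test 1 in this patch verifies against the hardware.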
drivers/gpu/nova-core/Kconfig | 10 ++
drivers/gpu/nova-core/driver.rs | 2 +
drivers/gpu/nova-core/gpu.rs | 42 +++++
drivers/gpu/nova-core/mm/bar_user.rs | 250 +++++++++++++++++++++++++++
4 files changed, 304 insertions(+)
diff --git a/drivers/gpu/nova-core/Kconfig b/drivers/gpu/nova-core/Kconfig
index 6513007bf66f..35de55aabcfc 100644
--- a/drivers/gpu/nova-core/Kconfig
+++ b/drivers/gpu/nova-core/Kconfig
@@ -15,3 +15,13 @@ config NOVA_CORE
This driver is work in progress and may not be functional.
If M is selected, the module will be called nova_core.
+
+config NOVA_MM_SELFTESTS
+ bool "Memory management self-tests"
+ depends on NOVA_CORE
+ help
+ Enable self-tests for the memory management subsystem. When enabled,
+ tests are run during GPU probe to verify PRAMIN aperture access,
+ page table walking, and BAR1 virtual memory mapping functionality.
+
+ This is a testing option and is default-disabled.
diff --git a/drivers/gpu/nova-core/driver.rs b/drivers/gpu/nova-core/driver.rs
index d8b2e967ba4c..7d0d09939835 100644
--- a/drivers/gpu/nova-core/driver.rs
+++ b/drivers/gpu/nova-core/driver.rs
@@ -92,6 +92,8 @@ fn probe(pdev: &pci::Device<Core>, _info: &Self::IdInfo) -> impl PinInit<Self, E
Ok(try_pin_init!(Self {
gpu <- Gpu::new(pdev, bar.clone(), bar.access(pdev.as_ref())?),
+ // Run optional GPU selftests.
+ _: { gpu.run_selftests(pdev)? },
_reg <- auxiliary::Registration::new(
pdev.as_ref(),
c"nova-drm",
diff --git a/drivers/gpu/nova-core/gpu.rs b/drivers/gpu/nova-core/gpu.rs
index 4281487ca52e..fba6ddba6a3f 100644
--- a/drivers/gpu/nova-core/gpu.rs
+++ b/drivers/gpu/nova-core/gpu.rs
@@ -380,4 +380,46 @@ pub(crate) fn unbind(&self, dev: &device::Device<device::Core>) {
.inspect(|bar| self.sysmem_flush.unregister(bar))
.is_err());
}
+
+ /// Run selftests on the constructed [`Gpu`].
+ pub(crate) fn run_selftests(
+ mut self: Pin<&mut Self>,
+ pdev: &pci::Device<device::Bound>,
+ ) -> Result {
+ self.as_mut().run_mm_selftests(pdev)?;
+ Ok(())
+ }
+
+ #[cfg(CONFIG_NOVA_MM_SELFTESTS)]
+ fn run_mm_selftests(mut self: Pin<&mut Self>, pdev: &pci::Device<device::Bound>) -> Result {
+ use crate::driver::BAR1_SIZE;
+
+ let mmu_version = MmuVersion::from(self.spec.chipset.arch());
+
+ // BAR1 self-tests.
+ let bar1 = Arc::pin_init(
+ pdev.iomap_region_sized::<BAR1_SIZE>(1, c"nova-core/bar1"),
+ GFP_KERNEL,
+ )?;
+ let bar1_access = bar1.access(pdev.as_ref())?;
+
+ let proj = self.as_mut().project();
+ let bar1_pde_base = proj.gsp_static_info.bar1_pde_base();
+ let mm = proj.mm.as_ref().get_ref();
+
+ crate::mm::bar_user::run_self_test(
+ pdev.as_ref(),
+ mm,
+ bar1_access,
+ bar1_pde_base,
+ mmu_version,
+ )?;
+
+ Ok(())
+ }
+
+ #[cfg(not(CONFIG_NOVA_MM_SELFTESTS))]
+ fn run_mm_selftests(self: Pin<&mut Self>, _pdev: &pci::Device<device::Bound>) -> Result {
+ Ok(())
+ }
}
diff --git a/drivers/gpu/nova-core/mm/bar_user.rs b/drivers/gpu/nova-core/mm/bar_user.rs
index 4af56ac628b6..28dfb10e7cea 100644
--- a/drivers/gpu/nova-core/mm/bar_user.rs
+++ b/drivers/gpu/nova-core/mm/bar_user.rs
@@ -154,3 +154,253 @@ fn drop(&mut self) {
}
}
}
+
+/// Check if the PDB has valid, VRAM-backed page tables.
+///
+/// Returns `Err(ENOENT)` if page tables are missing or not in VRAM.
+#[cfg(CONFIG_NOVA_MM_SELFTESTS)]
+fn check_valid_page_tables(mm: &GpuMm, pdb_addr: VramAddress) -> Result {
+ use crate::mm::pagetable::{
+ ver2::Pde,
+ AperturePde, //
+ };
+
+ let mut window = mm.pramin().window()?;
+ let pdb_entry_raw = window.try_read64(pdb_addr.raw())?;
+ let pdb_entry = Pde::new(pdb_entry_raw);
+
+ if !pdb_entry.is_valid() {
+ return Err(ENOENT);
+ }
+
+ if pdb_entry.aperture() != AperturePde::VideoMemory {
+ return Err(ENOENT);
+ }
+
+ Ok(())
+}
+
+/// Run MM subsystem self-tests during probe.
+///
+/// Tests page table infrastructure and `BAR1` MMIO access using the `BAR1`
+/// address space. Uses the `GpuMm`'s buddy allocator to allocate page tables
+/// and test pages as needed.
+#[cfg(CONFIG_NOVA_MM_SELFTESTS)]
+pub(crate) fn run_self_test(
+ dev: &kernel::device::Device,
+ mm: &GpuMm,
+ bar1: &crate::driver::Bar1,
+ bar1_pdb: u64,
+ mmu_version: MmuVersion,
+) -> Result {
+ use crate::mm::{
+ vmm::Vmm,
+ PAGE_SIZE, //
+ };
+ use kernel::gpu::buddy::{
+ BuddyFlags,
+ GpuBuddyAllocParams, //
+ };
+ use kernel::sizes::{
+ SZ_4K,
+ SZ_16K,
+ SZ_32K,
+ SZ_64K, //
+ };
+
+ // Self-tests only support MMU v2 for now.
+ if mmu_version != MmuVersion::V2 {
+ dev_info!(
+ dev,
+ "MM: Skipping self-tests for MMU {:?} (only V2 supported)\n",
+ mmu_version
+ );
+ return Ok(());
+ }
+
+    // Test patterns.
+    const PATTERN_PRAMIN: u32 = 0xDEAD_BEEF;
+    const PATTERN_BAR1: u32 = 0xCAFE_BABE;
+
+    dev_info!(dev, "MM: Starting self-test...\n");
+
+    let pdb_addr = VramAddress::new(bar1_pdb);
+
+    // Check if initial page tables are in VRAM.
+    if check_valid_page_tables(mm, pdb_addr).is_err() {
+        dev_info!(dev, "MM: Self-test SKIPPED - no valid VRAM page tables\n");
+        return Ok(());
+    }
+
+    // Set up a test page from the buddy allocator.
+    let alloc_params = GpuBuddyAllocParams {
+        start_range_address: 0,
+        end_range_address: 0,
+        size_bytes: SZ_4K.into_safe_cast(),
+        min_block_size_bytes: SZ_4K.into_safe_cast(),
+        buddy_flags: BuddyFlags::try_new(0)?,
+    };
+
+    let test_page_blocks = KBox::pin_init(mm.buddy().alloc_blocks(&alloc_params), GFP_KERNEL)?;
+    let test_vram_offset = test_page_blocks.iter().next().ok_or(ENOMEM)?.offset();
+    let test_vram = VramAddress::new(test_vram_offset);
+    let test_pfn = Pfn::from(test_vram);
+
+    // Create a VMM of size 64K to track virtual memory mappings.
+    let mut vmm = Vmm::new(pdb_addr, MmuVersion::V2, SZ_64K.into_safe_cast())?;
+
+    // Create a test mapping.
+    let mapped = vmm.map_pages(mm, &[test_pfn], None, true)?;
+    let test_vfn = mapped.vfn_start;
+
+    // Pre-compute test addresses for the PRAMIN to BAR1 read test.
+    let vfn_offset: usize = test_vfn.raw().into_safe_cast();
+    let bar1_base_offset = vfn_offset.checked_mul(PAGE_SIZE).ok_or(EOVERFLOW)?;
+    let bar1_read_offset: usize = bar1_base_offset + 0x100;
+    let vram_base: usize = test_vram.raw().into_safe_cast();
+    let vram_read_addr: usize = vram_base + 0x100;
+
+    // Test 1: Write via PRAMIN, read via BAR1.
+    {
+        let mut window = mm.pramin().window()?;
+        window.try_write32(vram_read_addr, PATTERN_PRAMIN)?;
+    }
+
+    // Read back via BAR1 aperture.
+    let bar1_value = bar1.try_read32(bar1_read_offset)?;
+
+    let test1_passed = if bar1_value == PATTERN_PRAMIN {
+        true
+    } else {
+        dev_err!(
+            dev,
+            "MM: Test 1 FAILED - Expected {:#010x}, got {:#010x}\n",
+            PATTERN_PRAMIN,
+            bar1_value
+        );
+        false
+    };
+
+    // Cleanup - invalidate PTE.
+    vmm.unmap_pages(mm, mapped)?;
+
+    // Test 2: Two-phase prepare/execute API.
+    let prepared = vmm.prepare_map(mm, 1, None)?;
+    let mapped2 = vmm.execute_map(mm, prepared, &[test_pfn], true)?;
+    let readback = vmm.read_mapping(mm, mapped2.vfn_start)?;
+    let test2_passed = if readback == Some(test_pfn) {
+        true
+    } else {
+        dev_err!(dev, "MM: Test 2 FAILED - Two-phase map readback mismatch\n");
+        false
+    };
+    vmm.unmap_pages(mm, mapped2)?;
+
+    // Test 3: Range-constrained allocation with a hole - exercises block.size()-driven
+    // BAR1 mapping. A 4K hole is punched at base+16K, then a single 32K allocation
+    // is requested within [base, base+36K). The buddy allocator must split around the
+    // hole, returning multiple blocks (expected: {16K, 4K, 8K, 4K} = 32K total).
+    // Each block is mapped into BAR1 and verified via PRAMIN read-back.
+    //
+    // Address layout (base = 0x10000):
+    //   [   16K    ][ HOLE 4K ][  4K   ][   8K   ][  4K   ]
+    //   0x10000     0x14000    0x15000  0x16000   0x18000  0x19000
+    let range_base: u64 = SZ_64K.into_safe_cast();
+    let range_flag = BuddyFlags::try_new(BuddyFlags::RANGE_ALLOCATION)?;
+    let sz_4k: u64 = SZ_4K.into_safe_cast();
+    let sz_16k: u64 = SZ_16K.into_safe_cast();
+    let sz_32k_4k: u64 = (SZ_32K + SZ_4K).into_safe_cast();
+
+    // Punch a 4K hole at base+16K so the subsequent 32K allocation must split.
+    let _hole = KBox::pin_init(
+        mm.buddy().alloc_blocks(&GpuBuddyAllocParams {
+            start_range_address: range_base + sz_16k,
+            end_range_address: range_base + sz_16k + sz_4k,
+            size_bytes: SZ_4K.into_safe_cast(),
+            min_block_size_bytes: SZ_4K.into_safe_cast(),
+            buddy_flags: range_flag,
+        }),
+        GFP_KERNEL,
+    )?;
+
+    // Allocate 32K within [base, base+36K). The hole forces the allocator to return
+    // split blocks whose sizes are determined by buddy alignment.
+    let blocks = KBox::pin_init(
+        mm.buddy().alloc_blocks(&GpuBuddyAllocParams {
+            start_range_address: range_base,
+            end_range_address: range_base + sz_32k_4k,
+            size_bytes: SZ_32K.into_safe_cast(),
+            min_block_size_bytes: SZ_4K.into_safe_cast(),
+            buddy_flags: range_flag,
+        }),
+        GFP_KERNEL,
+    )?;
+
+    let mut test3_passed = true;
+    let mut total_size = 0u64;
+
+    for block in blocks.iter() {
+        total_size += block.size();
+
+        // Map all pages of this block.
+        let page_size: u64 = PAGE_SIZE.into_safe_cast();
+        let num_pages: usize = (block.size() / page_size).into_safe_cast();
+
+        let mut pfns = KVec::new();
+        for j in 0..num_pages {
+            let j_u64: u64 = j.into_safe_cast();
+            pfns.push(
+                Pfn::from(VramAddress::new(
+                    block.offset() + j_u64.checked_mul(page_size).ok_or(EOVERFLOW)?,
+                )),
+                GFP_KERNEL,
+            )?;
+        }
+
+        let mapped = vmm.map_pages(mm, &pfns, None, true)?;
+        let bar1_base_vfn: usize = mapped.vfn_start.raw().into_safe_cast();
+        let bar1_base = bar1_base_vfn.checked_mul(PAGE_SIZE).ok_or(EOVERFLOW)?;
+
+        for j in 0..num_pages {
+            let page_bar1_off = bar1_base + j * PAGE_SIZE;
+            let j_u64: u64 = j.into_safe_cast();
+            let page_phys = block.offset() + j_u64.checked_mul(page_size).ok_or(EOVERFLOW)?;
+
+            bar1.try_write32(PATTERN_BAR1, page_bar1_off)?;
+
+            let pramin_val = {
+                let mut window = mm.pramin().window()?;
+                window.try_read32(page_phys.into_safe_cast())?
+            };
+
+            if pramin_val != PATTERN_BAR1 {
+                dev_err!(
+                    dev,
+                    "MM: Test 3 FAILED block offset {:#x} page {} (val={:#x})\n",
+                    block.offset(),
+                    j,
+                    pramin_val
+                );
+                test3_passed = false;
+            }
+        }
+
+        vmm.unmap_pages(mm, mapped)?;
+    }
+
+    // Verify aggregate: all returned block sizes must sum to the allocation size.
+    if total_size != SZ_32K.into_safe_cast() {
+        dev_err!(
+            dev,
+            "MM: Test 3 FAILED - total size {} != expected {}\n",
+            total_size,
+            SZ_32K
+        );
+        test3_passed = false;
+    }
+
+    if test1_passed && test2_passed && test3_passed {
+        dev_info!(dev, "MM: All self-tests PASSED\n");
+        Ok(())
+    } else {
+        dev_err!(dev, "MM: Self-tests FAILED\n");
+        Err(EIO)
+    }
+}
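The expected {16K, 4K, 8K, 4K} split in Test 3 follows purely from buddy alignment rules, so it can be sanity-checked in userspace. Below is a minimal sketch, not the actual GPU_BUDDY implementation: the `max_block`/`carve` helpers are hypothetical, and it assumes a 4K minimum block with natural power-of-two alignment.

```rust
/// Largest naturally aligned power-of-two block at `addr`, at most `max`
/// bytes. (Hypothetical helper; a real buddy allocator also walks its tree.)
fn max_block(addr: u64, max: u64) -> u64 {
    // Natural alignment of addr is its lowest set bit; 0 is fully aligned.
    let align = if addr == 0 { u64::MAX } else { addr & addr.wrapping_neg() };
    // Largest power of two not exceeding the remaining request.
    let cap = 1u64 << (63 - max.leading_zeros());
    align.min(cap)
}

/// Carve buddy-aligned blocks out of the free segment [start, end),
/// taking at most `want` bytes, recording each block size.
fn carve(mut start: u64, end: u64, want: &mut u64, out: &mut Vec<u64>) {
    while start < end && *want > 0 {
        let sz = max_block(start, (*want).min(end - start));
        out.push(sz);
        start += sz;
        *want -= sz;
    }
}

fn main() {
    let (k16, k20, k36) = (16 * 1024u64, 20 * 1024u64, 36 * 1024u64);
    let mut want = 32 * 1024u64;
    let mut blocks = Vec::new();
    // Free segments on either side of the 4K hole at [16K, 20K).
    carve(0, k16, &mut want, &mut blocks);
    carve(k20, k36, &mut want, &mut blocks);
    assert_eq!(blocks, [16 * 1024, 4 * 1024, 8 * 1024, 4 * 1024]);
    assert_eq!(blocks.iter().sum::<u64>(), 32 * 1024);
}
```

Carving [0, 16K) and [20K, 36K) around the hole reproduces exactly the block sizes the self-test expects.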
--
2.34.1
^ permalink raw reply related [flat|nested] 55+ messages in thread
* [PATCH v8 25/25] gpu: nova-core: mm: Add PRAMIN aperture self-tests
2026-02-24 22:52 [PATCH v8 00/25] gpu: nova-core: Add memory management support Joel Fernandes
` (23 preceding siblings ...)
2026-02-24 22:53 ` [PATCH v8 24/25] gpu: nova-core: mm: Add BAR1 memory management self-tests Joel Fernandes
@ 2026-02-24 22:53 ` Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-27 4:25 ` Claude review: gpu: nova-core: Add memory management support Claude Code Review Bot
25 siblings, 1 reply; 55+ messages in thread
From: Joel Fernandes @ 2026-02-24 22:53 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
Add self-tests for the PRAMIN aperture mechanism to verify correct
operation during GPU probe. The tests validate various alignment
requirements and corner cases.
The tests are disabled by default, gated behind CONFIG_NOVA_MM_SELFTESTS.
When enabled, the tests run during probe, after GSP boot.
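The misalignment cases exercised below all reduce to one power-of-two check. A userspace sketch (the `is_aligned` helper is hypothetical; the offsets mirror the test's base 0x1000 plus 0x21/0x32/0x44):

```rust
/// A power-of-two-sized access at `offset` is naturally aligned iff the
/// low bits of the offset are clear. (Hypothetical helper.)
fn is_aligned(offset: usize, access_size: usize) -> bool {
    debug_assert!(access_size.is_power_of_two());
    offset & (access_size - 1) == 0
}

fn main() {
    assert!(is_aligned(0x1000, 4)); // 4-byte-aligned u32 access
    assert!(!is_aligned(0x1021, 2)); // odd offset, u16 access
    assert!(!is_aligned(0x1032, 4)); // 2-byte-aligned, u32 access
    assert!(!is_aligned(0x1044, 8)); // 4-byte-aligned, u64 access
}
```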
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/gpu.rs | 3 +
drivers/gpu/nova-core/mm/pramin.rs | 161 +++++++++++++++++++++++++++++
2 files changed, 164 insertions(+)
diff --git a/drivers/gpu/nova-core/gpu.rs b/drivers/gpu/nova-core/gpu.rs
index fba6ddba6a3f..34827dc2afff 100644
--- a/drivers/gpu/nova-core/gpu.rs
+++ b/drivers/gpu/nova-core/gpu.rs
@@ -396,6 +396,9 @@ fn run_mm_selftests(mut self: Pin<&mut Self>, pdev: &pci::Device<device::Bound>)
     let mmu_version = MmuVersion::from(self.spec.chipset.arch());
+    // PRAMIN aperture self-tests.
+    crate::mm::pramin::run_self_test(pdev.as_ref(), self.bar.clone(), mmu_version)?;
+
     // BAR1 self-tests.
     let bar1 = Arc::pin_init(
         pdev.iomap_region_sized::<BAR1_SIZE>(1, c"nova-core/bar1"),
diff --git a/drivers/gpu/nova-core/mm/pramin.rs b/drivers/gpu/nova-core/mm/pramin.rs
index 04b652d3ee4f..30b1dba0c305 100644
--- a/drivers/gpu/nova-core/mm/pramin.rs
+++ b/drivers/gpu/nova-core/mm/pramin.rs
@@ -290,3 +290,164 @@ fn drop(&mut self) {
         // MutexGuard drops automatically, releasing the lock.
     }
 }
+
+/// Run PRAMIN self-tests during boot if self-tests are enabled.
+#[cfg(CONFIG_NOVA_MM_SELFTESTS)]
+pub(crate) fn run_self_test(
+    dev: &kernel::device::Device,
+    bar: Arc<Devres<Bar0>>,
+    mmu_version: super::pagetable::MmuVersion,
+) -> Result {
+    use super::pagetable::MmuVersion;
+
+    // PRAMIN support is only for MMU v2 for now (Turing/Ampere/Ada).
+    if mmu_version != MmuVersion::V2 {
+        dev_info!(
+            dev,
+            "PRAMIN: Skipping self-tests for MMU {:?} (only V2 supported)\n",
+            mmu_version
+        );
+        return Ok(());
+    }
+
+    dev_info!(dev, "PRAMIN: Starting self-test...\n");
+
+    let pramin = Arc::pin_init(Pramin::new(bar)?, GFP_KERNEL)?;
+    let mut win = pramin.window()?;
+
+    // Use offset 0x1000 as test area.
+    let base: usize = 0x1000;
+
+    // Test 1: Read/write at byte-aligned locations.
+    for i in 0u8..4 {
+        let offset = base + 1 + usize::from(i); // Offsets 0x1001, 0x1002, 0x1003, 0x1004.
+        let val = 0xA0 + i;
+        win.try_write8(offset, val)?;
+        let read_val = win.try_read8(offset)?;
+        if read_val != val {
+            dev_err!(
+                dev,
+                "PRAMIN: FAIL - offset {:#x}: wrote {:#x}, read {:#x}\n",
+                offset,
+                val,
+                read_val
+            );
+            return Err(EIO);
+        }
+    }
+
+    // Test 2: Write `u32` and read back as `u8`s.
+    let test2_offset = base + 0x10;
+    let test2_val: u32 = 0xDEAD_BEEF;
+    win.try_write32(test2_offset, test2_val)?;
+
+    // Read back as individual bytes (little-endian: EF BE AD DE).
+    let expected_bytes: [u8; 4] = [0xEF, 0xBE, 0xAD, 0xDE];
+    for (i, &expected) in expected_bytes.iter().enumerate() {
+        let read_val = win.try_read8(test2_offset + i)?;
+        if read_val != expected {
+            dev_err!(
+                dev,
+                "PRAMIN: FAIL - offset {:#x}: expected {:#x}, read {:#x}\n",
+                test2_offset + i,
+                expected,
+                read_val
+            );
+            return Err(EIO);
+        }
+    }
+
+    // Test 3: Window repositioning across 1MB boundaries.
+    // Write to an offset > 1MB to trigger a window slide, then verify.
+    let test3_offset_a: usize = base; // First 1MB region.
+    let test3_offset_b: usize = 0x200000 + base; // 2MB + base (different 1MB region).
+    let val_a: u32 = 0x1111_1111;
+    let val_b: u32 = 0x2222_2222;
+
+    // Write to first region.
+    win.try_write32(test3_offset_a, val_a)?;
+
+    // Write to second region (triggers window reposition).
+    win.try_write32(test3_offset_b, val_b)?;
+
+    // Read back from second region.
+    let read_b = win.try_read32(test3_offset_b)?;
+    if read_b != val_b {
+        dev_err!(
+            dev,
+            "PRAMIN: FAIL - offset {:#x}: expected {:#x}, read {:#x}\n",
+            test3_offset_b,
+            val_b,
+            read_b
+        );
+        return Err(EIO);
+    }
+
+    // Read back from first region (triggers window reposition again).
+    let read_a = win.try_read32(test3_offset_a)?;
+    if read_a != val_a {
+        dev_err!(
+            dev,
+            "PRAMIN: FAIL - offset {:#x}: expected {:#x}, read {:#x}\n",
+            test3_offset_a,
+            val_a,
+            read_a
+        );
+        return Err(EIO);
+    }
+
+    // Test 4: Invalid offset rejection (beyond 40-bit address space).
+    {
+        // 40-bit address space limit check.
+        let invalid_offset: usize = MAX_VRAM_OFFSET + 1;
+        let result = win.try_read32(invalid_offset);
+        if result.is_ok() {
+            dev_err!(
+                dev,
+                "PRAMIN: FAIL - read at invalid offset {:#x} should have failed\n",
+                invalid_offset
+            );
+            return Err(EIO);
+        }
+    }
+
+    // Test 5: Misaligned multi-byte access rejection.
+    // Verify that misaligned `u16`/`u32`/`u64` accesses are properly rejected.
+    {
+        // `u16` at an odd offset (not 2-byte aligned).
+        let offset_u16 = base + 0x21;
+        if win.try_write16(offset_u16, 0xABCD).is_ok() {
+            dev_err!(
+                dev,
+                "PRAMIN: FAIL - misaligned u16 write at {:#x} should have failed\n",
+                offset_u16
+            );
+            return Err(EIO);
+        }
+
+        // `u32` at a 2-byte-aligned (not 4-byte-aligned) offset.
+        let offset_u32 = base + 0x32;
+        if win.try_write32(offset_u32, 0x1234_5678).is_ok() {
+            dev_err!(
+                dev,
+                "PRAMIN: FAIL - misaligned u32 write at {:#x} should have failed\n",
+                offset_u32
+            );
+            return Err(EIO);
+        }
+
+        // `u64` read at a 4-byte-aligned (not 8-byte-aligned) offset.
+        let offset_u64 = base + 0x44;
+        if win.try_read64(offset_u64).is_ok() {
+            dev_err!(
+                dev,
+                "PRAMIN: FAIL - misaligned u64 read at {:#x} should have failed\n",
+                offset_u64
+            );
+            return Err(EIO);
+        }
+    }
+
+    dev_info!(dev, "PRAMIN: All self-tests PASSED\n");
+    Ok(())
+}
--
2.34.1
* Re: [PATCH v8 16/25] gpu: nova-core: mm: Add page table walker for MMU v2/v3
2026-02-24 22:53 ` [PATCH v8 16/25] gpu: nova-core: mm: Add page table walker for MMU v2/v3 Joel Fernandes
@ 2026-02-25 5:39 ` Gary Guo
2026-02-25 14:26 ` Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
1 sibling, 1 reply; 55+ messages in thread
From: Gary Guo @ 2026-02-25 5:39 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev
On 2026-02-24 22:53, Joel Fernandes wrote:
> Add the page table walker implementation that traverses the page table
> hierarchy for both MMU v2 (5-level) and MMU v3 (6-level) to resolve
> virtual addresses to physical addresses or find PTE locations.
>
> Currently only v2 has been tested (nova-core currently boots pre-Hopper)
> with some initial preparatory work done for v3.
>
> Cc: Nikola Djukic <ndjukic@nvidia.com>
> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
> ---
> drivers/gpu/nova-core/mm/pagetable.rs | 1 +
> drivers/gpu/nova-core/mm/pagetable/walk.rs | 218 +++++++++++++++++++++
> 2 files changed, 219 insertions(+)
> create mode 100644 drivers/gpu/nova-core/mm/pagetable/walk.rs
>
> diff --git a/drivers/gpu/nova-core/mm/pagetable.rs b/drivers/gpu/nova-core/mm/pagetable.rs
> index 33acb7053fbe..7ebea4cb8437 100644
> --- a/drivers/gpu/nova-core/mm/pagetable.rs
> +++ b/drivers/gpu/nova-core/mm/pagetable.rs
> @@ -9,6 +9,7 @@
> #![expect(dead_code)]
> pub(crate) mod ver2;
> pub(crate) mod ver3;
> +pub(crate) mod walk;
>
> use kernel::prelude::*;
>
> diff --git a/drivers/gpu/nova-core/mm/pagetable/walk.rs b/drivers/gpu/nova-core/mm/pagetable/walk.rs
> new file mode 100644
> index 000000000000..023226af8816
> --- /dev/null
> +++ b/drivers/gpu/nova-core/mm/pagetable/walk.rs
> @@ -0,0 +1,218 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +//! Page table walker implementation for NVIDIA GPUs.
> +//!
> +//! This module provides page table walking functionality for MMU v2 and v3.
> +//! The walker traverses the page table hierarchy to resolve virtual addresses
> +//! to physical addresses or to find PTE locations.
> +//!
> +//! # Page Table Hierarchy
> +//!
> +//! ## MMU v2 (Turing/Ampere/Ada) - 5 levels
> +//!
> +//! ```text
> +//! +-------+     +-------+     +-------+     +---------+     +-------+
> +//! |  PDB  |---->|  L1   |---->|  L2   |---->| L3 Dual |---->|  L4   |
> +//! | (L0)  |     |       |     |       |     |   PDE   |     | (PTE) |
> +//! +-------+     +-------+     +-------+     +---------+     +-------+
> +//!  64-bit        64-bit        64-bit         128-bit        64-bit
> +//!   PDE           PDE           PDE        (big+small)        PTE
> +//! ```
> +//!
> +//! ## MMU v3 (Hopper+) - 6 levels
I think this is called "4 levels" and "5 levels" in kernel MM rather than
"5 levels" and "6 levels".
Best,
Gary
> +//!
> +//! ```text
> +//! +-------+     +-------+     +-------+     +-------+     +---------+     +-------+
> +//! |  PDB  |---->|  L1   |---->|  L2   |---->|  L3   |---->| L4 Dual |---->|  L5   |
> +//! | (L0)  |     |       |     |       |     |       |     |   PDE   |     | (PTE) |
> +//! +-------+     +-------+     +-------+     +-------+     +---------+     +-------+
> +//!  64-bit        64-bit        64-bit        64-bit         128-bit        64-bit
> +//!   PDE           PDE           PDE           PDE        (big+small)        PTE
> +//! ```
> +//!
> +//! # Result of a page table walk
> +//!
> +//! The walker returns a [`WalkResult`] indicating the outcome.
* Re: [PATCH v8 16/25] gpu: nova-core: mm: Add page table walker for MMU v2/v3
2026-02-25 5:39 ` Gary Guo
@ 2026-02-25 14:26 ` Joel Fernandes
0 siblings, 0 replies; 55+ messages in thread
From: Joel Fernandes @ 2026-02-25 14:26 UTC (permalink / raw)
To: Gary Guo
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Bjorn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher, Christian Koenig,
Jani Nikula, Joonas Lahtinen, Vivi Rodrigo, Tvrtko Ursulin,
Rui Huang, Matthew Auld, Matthew Brost, Lucas De Marchi,
Thomas Hellstrom, Helge Deller, Alex Gaynor, Boqun Feng,
John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev
On 2026-02-25, Gary Guo <gary@garyguo.net> wrote:
> On 2026-02-24 22:53, Joel Fernandes wrote:
>> +//! ## MMU v2 (Turing/Ampere/Ada) - 5 levels
>> [...]
>> +//! ## MMU v3 (Hopper+) - 6 levels
>
> I think this is called "4 levels" and "5 levels" in kernel MM rather than
> "5 levels" and "6 levels".
Actually, I think "5 levels" and "6 levels" is correct even by x86 kernel MM
convention. In x86 "4-level paging", the 4 levels are PGD, PUD, PMD, PTE -
the root page directory (PGD) IS counted as one of the 4 levels. Similarly,
for the GPU MMU, counting the root PDB (L0) as a level gives us 5 levels for
v2 (PDB/L0 through L4/PTE) and 6 levels for v3 (PDB/L0 through L5/PTE).
This is also consistent with NVIDIA's own hardware definitions in the OpenRM
headers (dev_mmu.h for Turing and Hopper) which define the page table entries
for each of these levels. The virtual address bitfield spans L0 (bits 56:48)
through L4 (bits 20:12) for v2, giving 5 distinct page table levels.
FWIW, the existing nouveau driver also uses this convention - NVKM_VMM_LEVELS_MAX
is defined as 6 in nvkm/subdev/mmu/vmm.c, and the GH100 page table descriptors
in vmmgh100.c list all 6 levels.
--
Joel Fernandes
* Claude review: gpu: nova-core: Add memory management support
2026-02-24 22:52 [PATCH v8 00/25] gpu: nova-core: Add memory management support Joel Fernandes
` (24 preceding siblings ...)
2026-02-24 22:53 ` [PATCH v8 25/25] gpu: nova-core: mm: Add PRAMIN aperture self-tests Joel Fernandes
@ 2026-02-27 4:25 ` Claude Code Review Bot
25 siblings, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:25 UTC (permalink / raw)
To: dri-devel-reviews
Overall Series Review
Subject: gpu: nova-core: Add memory management support
Author: Joel Fernandes <joelagnelf@nvidia.com>
Patches: 28
Reviewed: 2026-02-27T14:25:27.441160
---
This is a well-structured v8 series adding memory management support to nova-core (the Rust-based NVIDIA GPU driver). It covers PRAMIN (1MB VRAM aperture via BAR0), page table structures for MMU v2/v3, a Virtual Memory Manager (VMM) with prepare/execute phases, TLB flush, BAR1 user interface, and self-tests. The overall architecture is sound -- the code is well-documented with clear ownership models. The two-phase prepare/execute mapping design correctly addresses the DMA fence signalling critical path constraint. However, there are several issues worth noting:
**Series-level concerns:**
1. **Patch ordering in the mbox is scrambled.** The cover letter lists patches 01-25 in one order, but the mbox contains them out of sequence (e.g., patch 05 appears before patch 04, patch 12 before 11, etc.). This makes it very hard for reviewers to apply and test the series incrementally. Each patch should build on the previous one.
2. **Heavy use of `#[expect(dead_code)]` / `#[allow(dead_code)]`.** Many types and methods are introduced with dead_code suppression, only to have it removed later or never. While this is somewhat inherent to building infrastructure ahead of users, it makes it harder to validate correctness since the code paths aren't exercised until late patches (or not at all if self-tests are disabled).
3. **MMU v3 is untested.** The cover letter and code comments note v3 is preparatory work. Having all this v3 code with `ENOTSUPP` returned for non-v2 raises the question of whether the v3 code should be deferred until it can be tested, to reduce review burden.
4. **`Cell` usage for sharing boot parameters** (patch 11) is unconventional for kernel code. Using `Cell<BootParams>` inside `Gpu::new()` to pass data between pin-init field initializations is clever but fragile -- if field ordering changes, data flow could silently break.
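A userspace analogue of the pattern being criticized (field and type names mirror the driver but the types are heavily simplified): `Cell` works here only because struct-expression fields are evaluated in the order they are written, which is exactly the fragility noted above.

```rust
use std::cell::Cell;

#[derive(Copy, Clone, Default)]
struct BootParams {
    usable_vram_start: u64,
    usable_vram_size: u64,
}

struct Gpu {
    static_info: u32,
    mm_base: u64,
}

fn main() {
    // Shuttle data between two sequentially evaluated field initializers.
    let boot_params = Cell::new(BootParams::default());
    let gpu = Gpu {
        static_info: {
            // First initializer runs "boot" and records its side output.
            boot_params.set(BootParams {
                usable_vram_start: 0x10_0000,
                usable_vram_size: 0x400_0000,
            });
            42
        },
        mm_base: {
            // Second initializer consumes it. This is only sound because
            // struct-expression fields evaluate in written order; reordering
            // the fields silently breaks the data flow.
            boot_params.get().usable_vram_start
        },
    };
    assert_eq!(gpu.mm_base, 0x10_0000);
}
```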
---
Generated by Claude Code Patch Reviewer
* Claude review: gpu: nova-core: Select GPU_BUDDY for VRAM allocation
2026-02-24 22:52 ` [PATCH v8 01/25] gpu: nova-core: Select GPU_BUDDY for VRAM allocation Joel Fernandes
@ 2026-02-27 4:25 ` Claude Code Review Bot
0 siblings, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Straightforward Kconfig addition. Fine.
One minor note: the `select GPU_BUDDY` is added out of alphabetical order, which is fixed in the next patch. Consider squashing patches 1 and 2 to avoid the intermediate misordering.
---
Generated by Claude Code Patch Reviewer
* Claude review: gpu: nova-core: Kconfig: Sort select statements alphabetically
2026-02-24 22:53 ` [PATCH v8 02/25] gpu: nova-core: Kconfig: Sort select statements alphabetically Joel Fernandes
@ 2026-02-27 4:25 ` Claude Code Review Bot
0 siblings, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Trivial cleanup. Would be cleaner if squashed into patch 1.
---
Generated by Claude Code Patch Reviewer
* Claude review: gpu: nova-core: gsp: Return GspStaticInfo and FbLayout from boot()
2026-02-24 22:53 ` [PATCH v8 03/25] gpu: nova-core: gsp: Return GspStaticInfo and FbLayout from boot() Joel Fernandes
@ 2026-02-27 4:25 ` Claude Code Review Bot
0 siblings, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Clean refactoring. The return type changes from `Result` to `Result<(GetGspStaticInfoReply, FbLayout)>`. The doc comment update is good.
Minor: The `// //` trailing comment in the import formatting:
```rust
Gsp, //
```
This `//` trailing comment pattern appears throughout the series as a rustfmt workaround. It's a known Rust pattern but might look odd to kernel reviewers unfamiliar with it.
In `gpu.rs`, accessing `.0` on the boot result:
```rust
gsp_static_info: { gsp.boot(pdev, bar, spec.chipset, gsp_falcon, sec2_falcon)?.0 },
```
The `FbLayout` (`.1`) is silently discarded here but needed later -- this is fixed in patch 11. Between patches 3 and 11, `FbLayout` is computed and thrown away, which is wasteful. Consider restructuring the series so that the boot() return value is used immediately.
---
Generated by Claude Code Patch Reviewer
* Claude review: gpu: nova-core: gsp: Extract usable FB region from GSP
2026-02-24 22:53 ` [PATCH v8 04/25] gpu: nova-core: gsp: Extract usable FB region from GSP Joel Fernandes
@ 2026-02-27 4:25 ` Claude Code Review Bot
0 siblings, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
The `first_usable_fb_region()` logic looks correct. One concern:
```rust
for i in 0..fb_info.numFBRegions.into_safe_cast() {
    if let Some(reg) = fb_info.fbRegion.get(i) {
```
The `into_safe_cast()` on `numFBRegions` converts to `usize` for the loop bound. If `numFBRegions` is larger than the `fbRegion` array, the `.get(i)` bounds check catches it. This is safe, but it would be clearer to also cap to `fb_info.fbRegion.len()`:
```rust
for i in 0..core::cmp::min(fb_info.numFBRegions.into_safe_cast(), fb_info.fbRegion.len()) {
```
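A userspace sketch of the two styles side by side (the array and firmware count are hypothetical stand-ins; `.get()` plays the role of the slice bounds check):

```rust
fn main() {
    // fbRegion is a fixed-size array; the region count comes from firmware
    // and is not trusted (values here are hypothetical).
    let fb_region = [10u32, 20, 30, 40];
    let num_fb_regions: u16 = 9;

    // The `.get(i)` bounds check already prevents out-of-range access...
    let visited = (0..usize::from(num_fb_regions))
        .filter_map(|i| fb_region.get(i))
        .count();
    assert_eq!(visited, 4);

    // ...but capping the loop bound states the intent up front, as suggested.
    let capped = usize::from(num_fb_regions).min(fb_region.len());
    assert_eq!(capped, 4);
}
```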
---
Generated by Claude Code Patch Reviewer
* Claude review: gpu: nova-core: fb: Add usable_vram field to FbLayout
2026-02-24 22:53 ` [PATCH v8 05/25] gpu: nova-core: fb: Add usable_vram field to FbLayout Joel Fernandes
@ 2026-02-27 4:25 ` Claude Code Review Bot
0 siblings, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
The two-phase initialization approach (`new()` + `set_usable_vram()`) is reasonable given the GSP boot sequence.
`#[allow(dead_code)]` on `usable_vram` is removed in patch 11 -- fine for incremental development.
`saturating_add` in `set_usable_vram`:
```rust
self.usable_vram = Some(base..base.saturating_add(size));
```
If `base + size` overflows, saturating would silently create a smaller-than-expected range. Consider returning an error on overflow instead.
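A sketch of the checked variant (userspace stand-in: a string error and std `Range` instead of the kernel's error codes):

```rust
use std::ops::Range;

/// Overflow-checked construction of the usable VRAM range. (Hypothetical
/// helper mirroring the suggested fix.)
fn usable_vram_range(base: u64, size: u64) -> Result<Range<u64>, &'static str> {
    let end = base.checked_add(size).ok_or("usable VRAM range overflows u64")?;
    Ok(base..end)
}

fn main() {
    assert_eq!(usable_vram_range(0x1000, 0x2000), Ok(0x1000..0x3000));
    // checked_add reports the overflow...
    assert!(usable_vram_range(u64::MAX - 4, 8).is_err());
    // ...where saturating_add would silently truncate the range end.
    assert_eq!((u64::MAX - 4).saturating_add(8), u64::MAX);
}
```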
---
Generated by Claude Code Patch Reviewer
* Claude review: gpu: nova-core: mm: Add support to use PRAMIN windows to write to VRAM
2026-02-24 22:53 ` [PATCH v8 06/25] gpu: nova-core: mm: Add support to use PRAMIN windows to write to VRAM Joel Fernandes
@ 2026-02-27 4:25 ` Claude Code Review Bot
0 siblings, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
This is a substantial and well-documented patch. The PRAMIN architecture is clearly explained. Review notes:
- **`#![expect(unused)]`** at module top is very broad. Consider more targeted suppressions.
- The `compute_window` check:
```rust
if offset_in_window + access_size > PRAMIN_SIZE {
    return Err(EINVAL);
}
```
Since the window is 64KB-aligned and 1MB in size, and accesses are at most 8 bytes, this can only fail if the VRAM offset happens to be right at the edge of a 1MB boundary. This is correct but perhaps an edge case worth documenting in the function comment.
- In `Drop for PraminWindow`:
```rust
if self.state.current_base != self.saved_base {
    if let Some(bar) = self.bar.try_access() {
        Self::write_window_base(&bar, self.saved_base);
```
If `try_access()` fails (device unbound), the window base is not restored. This is silently ignored. In a teardown scenario this is probably fine, but it might be worth a `pr_warn` or at least a comment explaining why the failure is harmless.
- The macros `define_pramin_read!` / `define_pramin_write!` are clean. The `bar.$name(bar_offset)` / `bar.$name(value, bar_offset)` pattern relies on the `Bar0` method names exactly matching, which couples the PRAMIN API to `Bar0`'s naming convention.
---
Generated by Claude Code Patch Reviewer
* Claude review: docs: gpu: nova-core: Document the PRAMIN aperture mechanism
2026-02-24 22:53 ` [PATCH v8 07/25] docs: gpu: nova-core: Document the PRAMIN aperture mechanism Joel Fernandes
@ 2026-02-27 4:25 ` Claude Code Review Bot
0 siblings, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Good documentation with ASCII diagrams. No issues.
---
Generated by Claude Code Patch Reviewer
* Claude review: gpu: nova-core: mm: Add common memory management types
2026-02-24 22:53 ` [PATCH v8 08/25] gpu: nova-core: mm: Add common memory management types Joel Fernandes
@ 2026-02-27 4:25 ` Claude Code Review Bot
0 siblings, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Well-structured bitfield types for `VramAddress`, `VirtualAddress`, `Pfn`, `Vfn`.
Concern with `VirtualAddress::level_index()`:
```rust
// L5 is only used by MMU v3 (PTE level).
5 => self.l4_index(),
_ => 0,
```
L5 returns `l4_index()` -- this seems like it could be a bug. If v3 has a 6-level hierarchy, L5 should have its own index bits. Looking at the `VirtualAddress` bitfield, there is no `l5_index` field defined. This means the v3 PTE level reuses L4's index bits. This may be intentional if v3 PTE pages have the same index extraction as v2's L4, but it deserves a more detailed comment explaining the rationale. The `_ => 0` catch-all silently returns 0 for any unexpected level, which could mask bugs.
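For comparison, explicit per-level extraction for v2 using the bit ranges cited elsewhere in the thread (L0 at bits 56:48 down to L4 at bits 20:12, 9 index bits per level). The `level_index_v2` helper is hypothetical; the driver folds this into `VirtualAddress` bitfields.

```rust
/// Per-level virtual-address index for MMU v2: level N uses bits
/// (56 - 9*N)..=(48 - 9*N), i.e. 9 index bits per level.
fn level_index_v2(va: u64, level: u32) -> u64 {
    assert!(level <= 4, "MMU v2 has levels 0..=4");
    (va >> (48 - 9 * level)) & 0x1FF
}

fn main() {
    let va = (3u64 << 48) | (5u64 << 39) | (0x1FFu64 << 12);
    assert_eq!(level_index_v2(va, 0), 3); // L0 index, bits 56:48
    assert_eq!(level_index_v2(va, 1), 5); // L1 index, bits 47:39
    assert_eq!(level_index_v2(va, 2), 0); // untouched levels read as 0
    assert_eq!(level_index_v2(va, 4), 0x1FF); // L4 (PTE) index, bits 20:12
}
```

Adding a distinct L5 entry (or asserting on it) would surface the aliasing concern instead of silently reusing L4's bits.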
---
Generated by Claude Code Patch Reviewer
* Claude review: gpu: nova-core: mm: Add TLB flush support
2026-02-24 22:53 ` [PATCH v8 09/25] gpu: nova-core: mm: Add TLB flush support Joel Fernandes
@ 2026-02-27 4:25 ` Claude Code Review Bot
0 siblings, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Clean implementation. The 2-second timeout via `read_poll_timeout` with `Delta::ZERO` sleep is correct (zero sleep = busy-poll, matching Nouveau).
The DMA fence signalling path comment on the mutex is good practice.
---
Generated by Claude Code Patch Reviewer
* Claude review: gpu: nova-core: mm: Add GpuMm centralized memory manager
2026-02-24 22:53 ` [PATCH v8 10/25] gpu: nova-core: mm: Add GpuMm centralized memory manager Joel Fernandes
@ 2026-02-27 4:25 ` Claude Code Review Bot
0 siblings, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Good ownership model. The placeholder `SZ_1M` buddy allocator is immediately replaced in patch 11.
```rust
physical_memory_size_bytes: SZ_1M as u64,
chunk_size_bytes: SZ_4K as u64,
```
These use `as u64` casts which the cover letter says are being replaced with `into_safe_cast()`. Patch 11 fixes the `SZ_4K` one but the `SZ_1M` goes away entirely. Fine for the transient state.
---
Generated by Claude Code Patch Reviewer
* Claude review: gpu: nova-core: mm: Use usable VRAM region for buddy allocator
2026-02-24 22:53 ` [PATCH v8 11/25] gpu: nova-core: mm: Use usable VRAM region for buddy allocator Joel Fernandes
@ 2026-02-27 4:25 ` Claude Code Review Bot
0 siblings, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
This is the key integration patch. The `Cell<BootParams>` pattern:
```rust
let boot_params: Cell<BootParams> = Cell::new(BootParams {
    usable_vram_start: 0,
    usable_vram_size: 0,
});
```
This is used to shuttle data from the `gsp_static_info` field initializer to the `mm` field initializer within `try_pin_init!`. It works because `Cell` is single-threaded and the init is sequential. However, this is an unusual pattern -- it would be clearer to refactor `Gpu::new()` into two phases: boot GSP first, then construct the Gpu struct with all the data available.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 55+ messages in thread
* Claude review: gpu: nova-core: mm: Add common types for all page table formats
2026-02-24 22:53 ` [PATCH v8 12/25] gpu: nova-core: mm: Add common types for all page table formats Joel Fernandes
@ 2026-02-27 4:25 ` Claude Code Review Bot
0 siblings, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Clean abstraction. The `MmuVersion::from(Architecture)` conversion currently only has v2 variants with a commented-out v3 fallback. This is fine for now.
The `AperturePte::from(u8)` catch-all:
```rust
_ => Self::VideoMemory,
```
Silently defaulting unknown values to `VideoMemory` could mask data corruption. Consider returning `Invalid` or panicking in debug builds.
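A standalone sketch of the suggested alternative: reject unknown encodings via `TryFrom` instead of defaulting. The variant set and field values here are illustrative, not the hardware encoding:

```rust
#[derive(Debug, PartialEq)]
enum AperturePte {
    VideoMemory,
    SystemCoherent,
    SystemNonCoherent,
}

impl TryFrom<u8> for AperturePte {
    type Error = u8;

    fn try_from(v: u8) -> Result<Self, u8> {
        match v {
            0 => Ok(Self::VideoMemory),
            2 => Ok(Self::SystemCoherent),
            3 => Ok(Self::SystemNonCoherent),
            // Surface unknown aperture values instead of masking
            // possible PTE corruption as VideoMemory.
            other => Err(other),
        }
    }
}

fn main() {
    assert_eq!(AperturePte::try_from(0), Ok(AperturePte::VideoMemory));
    assert!(AperturePte::try_from(7).is_err());
}
```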
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 55+ messages in thread
* Claude review: gpu: nova-core: mm: Add MMU v2 page table types
2026-02-24 22:53 ` [PATCH v8 13/25] gpu: nova-core: mm: Add MMU v2 page table types Joel Fernandes
@ 2026-02-27 4:25 ` Claude Code Review Bot
0 siblings, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Well-documented bitfield definitions matching the hardware register layout. The `DualPde::new_small()` hardware quirk comment is helpful:
```rust
// Hardware quirk: big PDE must be zero (not Pde::invalid()) for small-only PDEs
```
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 55+ messages in thread
* Claude review: gpu: nova-core: mm: Add MMU v3 page table types
2026-02-24 22:53 ` [PATCH v8 14/25] gpu: nova-core: mm: Add MMU v3 " Joel Fernandes
@ 2026-02-27 4:25 ` Claude Code Review Bot
0 siblings, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Good v3 support with PCF (Page Classification Field) abstraction. Since the v3 path is entirely untested so far, there is some risk of latent bugs. The `DualPdeBig::new_vram()` alignment validation is a nice touch.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 55+ messages in thread
* Claude review: gpu: nova-core: mm: Add unified page table entry wrapper enums
2026-02-24 22:53 ` [PATCH v8 15/25] gpu: nova-core: mm: Add unified page table entry wrapper enums Joel Fernandes
@ 2026-02-27 4:25 ` Claude Code Review Bot
0 siblings, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
The `pt_pages_upper_bound()` calculation in `MmuVersion` is important for pre-reservation. The computation walks from PTE level upward, dividing by 512 at each PDE level. Worth verifying with concrete numbers. For example, 1 page needs 1 PTE page + 1 page at each PDE level = 5 pages for v2 (PDB already exists). The implementation adds 1 at each level for the remainder, which is conservative (correct for an upper bound).
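The bound can be checked with a small standalone model of the computation as described (512-entry tables; the level count of 5 for v2 is taken from the worked example above, not the driver source):

```rust
// Conservative upper bound on page-table pages needed to map
// `n_pages` leaf pages: divide by 512 at each level, adding 1 for a
// range that straddles a table boundary. The root (PDB) is excluded
// since it already exists.
fn pt_pages_upper_bound(n_pages: u64, levels: u32) -> u64 {
    let mut entries = n_pages; // entries needed at the current level
    let mut total = 0;
    for _ in 0..levels {
        // Each table holds 512 entries; +1 keeps the bound safe when
        // the range is not aligned to a table boundary.
        let tables = entries / 512 + 1;
        total += tables;
        entries = tables; // each table needs one entry one level up
    }
    total
}

fn main() {
    // Matches the worked example: 1 page -> 5 table pages for v2.
    assert_eq!(pt_pages_upper_bound(1, 5), 5);
    // 512 pages may straddle a PTE-table boundary: 2 PT pages + 4.
    assert_eq!(pt_pages_upper_bound(512, 5), 6);
}
```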
The unified `Pte::read()` / `Pte::write()` methods that delegate to PRAMIN are clean.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 55+ messages in thread
* Claude review: gpu: nova-core: mm: Add page table walker for MMU v2/v3
2026-02-24 22:53 ` [PATCH v8 16/25] gpu: nova-core: mm: Add page table walker for MMU v2/v3 Joel Fernandes
2026-02-25 5:39 ` Gary Guo
@ 2026-02-27 4:25 ` Claude Code Review Bot
1 sibling, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
The walker design with `resolve_prepared` closure is elegant -- it allows the prepare phase to check in-memory (not-yet-installed) pages.
`entry_addr` computation:
```rust
fn entry_addr(&self, table_addr: VramAddress, index: u64, level: PageTableLevel) -> VramAddress {
```
The entry size dispatch (16 bytes for dual PDE, 8 bytes otherwise) is correct.
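The dispatch reduces to a one-line address computation; a standalone sketch (signature simplified to plain integers, with a bool standing in for the level check):

```rust
// Byte address of entry `index` within a page table at `table_addr`.
// Dual PDEs (big + small halves) are 16 bytes; all other entries
// (PTEs, single PDEs) are 8 bytes.
fn entry_addr(table_addr: u64, index: u64, dual_pde: bool) -> u64 {
    let entry_size: u64 = if dual_pde { 16 } else { 8 };
    table_addr + index * entry_size
}

fn main() {
    assert_eq!(entry_addr(0x1000, 2, false), 0x1010); // 8-byte PTE
    assert_eq!(entry_addr(0x1000, 2, true), 0x1020);  // 16-byte dual PDE
}
```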
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 55+ messages in thread
* Claude review: gpu: nova-core: mm: Add Virtual Memory Manager
2026-02-24 22:53 ` [PATCH v8 17/25] gpu: nova-core: mm: Add Virtual Memory Manager Joel Fernandes
@ 2026-02-27 4:25 ` Claude Code Review Bot
0 siblings, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
The `ENOTSUPP` for non-v2 is appropriate. Clean initial structure.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 55+ messages in thread
* Claude review: gpu: nova-core: mm: Add virtual address range tracking to VMM
2026-02-24 22:53 ` [PATCH v8 18/25] gpu: nova-core: mm: Add virtual address range tracking to VMM Joel Fernandes
@ 2026-02-27 4:25 ` Claude Code Review Bot
0 siblings, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Using `GpuBuddy` for VA space tracking is a reasonable approach. The `CONTIGUOUS_ALLOCATION` flag ensures contiguous VFN ranges.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 55+ messages in thread
* Claude review: gpu: nova-core: mm: Add multi-page mapping API to VMM
2026-02-24 22:53 ` [PATCH v8 19/25] gpu: nova-core: mm: Add multi-page mapping API " Joel Fernandes
@ 2026-02-27 4:25 ` Claude Code Review Bot
0 siblings, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
This is the most complex and important patch. Several observations:
1. **Per-page PDE walk in prepare phase**: `ensure_pte_path()` is called per-VFN in `prepare_map()`. For a large mapping, this re-walks from the root for each page. The RBTree of prepared pages helps avoid duplicate allocations, but the repeated root-to-leaf walks are O(N * depth). Consider optimizing by tracking the last-resolved PTE table and short-circuiting if consecutive VFNs share the same PTE page.
2. **`MustUnmapGuard` uses `Cell<bool>`**: This is fine for the current single-threaded usage but won't work if `MappedRange` is ever sent across threads. Consider `AtomicBool` or documenting the `!Sync` constraint.
3. In `execute_map()`, the cursor iteration over `pt_pages`:
```rust
let mut cursor = self.pt_pages.cursor_front_mut();
while let Some(c) = cursor {
let (next, node) = c.remove_current();
```
This drains all prepared pages, not just those for the current mapping. If multiple mappings were prepared before executing, this would install pages for all of them. The comment says "Shared by all pending maps in the `Vmm`, thus preventing races" which addresses this, but it means `execute_map()` must be called serially, one prepared mapping at a time. This constraint should be more prominently documented.
4. **Page table page leak on error**: If `execute_map()` fails partway through (e.g., after installing some PDEs but before writing all PTEs), the partially-installed PDEs are not rolled back. The VA allocation is freed (via `PreparedMapping` drop), but the page table pages are leaked in `page_table_allocs`. This is probably acceptable for now since partial failures are rare, but worth a TODO comment.
5. In `unmap_pages()`:
```rust
range._drop_guard.disarm(); // Unmap complete, Ok to drop MappedRange.
```
Accessing `_drop_guard` through the underscore-prefixed name works but is unusual. Consider naming it `drop_guard` without the underscore prefix.
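The short-circuit suggested in observation 1 can be sketched in isolation: cache the last-resolved PTE table and only re-walk when the VFN crosses into a new 512-entry table. `resolve_pte_table` is a hypothetical stand-in for the real root-to-leaf walk:

```rust
const ENTRIES_PER_TABLE: u64 = 512;

// Stand-in for the root-to-leaf walk; counts how often it runs.
fn resolve_pte_table(table_index: u64, walks: &mut u32) -> u64 {
    *walks += 1;
    0xdead_0000 + table_index // dummy table address
}

// Maps `count` consecutive VFNs, returning the number of full walks
// performed. Consecutive VFNs in the same PTE table reuse the cached
// table address instead of re-walking from the root.
fn map_range(vfn_start: u64, count: u64) -> u32 {
    let mut walks = 0;
    let mut cached: Option<(u64, u64)> = None; // (table_index, addr)
    for vfn in vfn_start..vfn_start + count {
        let idx = vfn / ENTRIES_PER_TABLE;
        let _table = match cached {
            Some((i, addr)) if i == idx => addr,
            _ => {
                let addr = resolve_pte_table(idx, &mut walks);
                cached = Some((idx, addr));
                addr
            }
        };
        // ... write the PTE for `vfn` into `_table` ...
    }
    walks
}

fn main() {
    // 1024 consecutive pages span two PTE tables: 2 walks, not 1024.
    assert_eq!(map_range(0, 1024), 2);
}
```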
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 55+ messages in thread
* Claude review: gpu: nova-core: Add BAR1 aperture type and size constant
2026-02-24 22:53 ` [PATCH v8 20/25] gpu: nova-core: Add BAR1 aperture type and size constant Joel Fernandes
@ 2026-02-27 4:25 ` Claude Code Review Bot
0 siblings, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Trivial. `BAR1_SIZE = SZ_256M` hardcoded -- the actual BAR1 size from PCI config may differ. Patch 23 later uses `pdev.resource_len(1)?` for the actual size, so this constant may only be used as the `pci::Bar<N>` type parameter (for MMIO mapping size). Worth verifying this matches hardware.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 55+ messages in thread
* Claude review: gpu: nova-core: gsp: Add BAR1 PDE base accessors
2026-02-24 22:53 ` [PATCH v8 21/25] gpu: nova-core: gsp: Add BAR1 PDE base accessors Joel Fernandes
@ 2026-02-27 4:25 ` Claude Code Review Bot
0 siblings, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Clean accessor addition. Reads `bar1PdeBase` from GSP firmware bindings.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 55+ messages in thread
* Claude review: gpu: nova-core: mm: Add BAR1 user interface
2026-02-24 22:53 ` [PATCH v8 22/25] gpu: nova-core: mm: Add BAR1 user interface Joel Fernandes
@ 2026-02-27 4:25 ` Claude Code Review Bot
0 siblings, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
The `BarAccess` design with read/write through BAR1 offset translation is clean. The `Drop` impl calling `unmap_pages()` is good RAII practice.
Concern: at first glance, `BarAccess::bar_offset()` appears to add `PRAMIN_BASE` (0x700000) to the calculated offset, which would be wrong for BAR1 access. On closer reading, it computes `(vfn_start * PAGE_SIZE) + local_offset` as the raw BAR1 offset, which is correct: the VFN corresponds to the BAR1 aperture's virtual address space.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 55+ messages in thread
* Claude review: gpu: nova-core: mm: Add BarUser to struct Gpu and create at boot
2026-02-24 22:53 ` [PATCH v8 23/25] gpu: nova-core: mm: Add BarUser to struct Gpu and create at boot Joel Fernandes
@ 2026-02-27 4:25 ` Claude Code Review Bot
0 siblings, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Integrates BAR1 into the Gpu struct initialization. Uses `pdev.resource_len(1)?` for BAR1 size which is correct.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 55+ messages in thread
* Claude review: gpu: nova-core: mm: Add BAR1 memory management self-tests
2026-02-24 22:53 ` [PATCH v8 24/25] gpu: nova-core: mm: Add BAR1 memory management self-tests Joel Fernandes
@ 2026-02-27 4:25 ` Claude Code Review Bot
0 siblings, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Comprehensive self-tests covering:
- PDB validation
- PRAMIN write -> BAR1 readback (cross-path verification)
- Two-phase prepare/execute API
- Range-constrained allocation with holes
Good test coverage. The tests are behind `CONFIG_NOVA_MM_SELFTESTS` (disabled by default), which is appropriate.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 55+ messages in thread
* Claude review: gpu: nova-core: mm: Add PRAMIN aperture self-tests
2026-02-24 22:53 ` [PATCH v8 25/25] gpu: nova-core: mm: Add PRAMIN aperture self-tests Joel Fernandes
@ 2026-02-27 4:25 ` Claude Code Review Bot
0 siblings, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Good coverage of alignment, byte-order, window repositioning, and error rejection. The misalignment rejection test (Test 5) is valuable for verifying MMIO access safety.
---
**Summary of actionable items:**
- Fix mbox patch ordering to match the cover letter's logical sequence
- Consider squashing patches 1+2
- `VirtualAddress::level_index()` L5 returning `l4_index()` needs clarification/verification
- `AperturePte::from(u8)` catch-all silently defaulting to `VideoMemory` is risky
- `Cell<BootParams>` pattern in `Gpu::new()` is fragile; consider refactoring
- `ensure_pte_path()` per-VFN O(N*depth) walk could be optimized
- `execute_map()` drains all prepared pages -- document serialization constraint
- `FbLayout::set_usable_vram()` should error on overflow rather than saturating
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 55+ messages in thread
* Claude review: gpu: nova-core: Add memory management support
2026-03-11 0:39 [PATCH v9 00/23] " Joel Fernandes
@ 2026-03-11 3:02 ` Claude Code Review Bot
0 siblings, 0 replies; 55+ messages in thread
From: Claude Code Review Bot @ 2026-03-11 3:02 UTC (permalink / raw)
To: dri-devel-reviews
Overall Series Review
Subject: gpu: nova-core: Add memory management support
Author: Joel Fernandes <joelagnelf@nvidia.com>
Patches: 24
Reviewed: 2026-03-11T13:02:22.553326
---
This is a substantial v9 patch series (23 patches, ~4400 lines of new Rust code) adding memory management infrastructure to the nova-core GPU driver. It covers VRAM allocation via GPU buddy allocator, PRAMIN aperture for pre-MMU VRAM access, GPU page table management (MMU v2/v3), a Virtual Memory Manager (VMM), BAR1 access, and self-tests.
**Strengths:**
- Well-structured layered architecture with clean separation between PRAMIN, page tables, VMM, and BAR1
- Careful attention to DMA fence signalling deadlock avoidance (documented lock ordering constraints, two-phase prepare/execute mapping)
- Good documentation including the PRAMIN RST doc and extensive inline doc comments
- Self-tests behind a Kconfig option provide validation without impacting production
- Proper use of Rust safety patterns (pin-init, mutex guards, type-safe register definitions)
**Concerns:**
- Patch ordering in the mbox is scrambled vs the logical series order (e.g., patch 00/23 appears after 01/23, patch 04 before 03, etc.) — this may just be mbox delivery ordering but could confuse reviewers
- Several `#[expect(dead_code)]` annotations are added then removed within the series, which is normal for incremental development but adds churn
- The `MustUnmapGuard` using `Cell<bool>` to track unmap state is functional but unconventional — a `Drop` warning is a weak enforcement mechanism
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 55+ messages in thread
end of thread, other threads:[~2026-03-11 3:02 UTC | newest]
Thread overview: 55+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-24 22:52 [PATCH v8 00/25] gpu: nova-core: Add memory management support Joel Fernandes
2026-02-24 22:52 ` [PATCH v8 01/25] gpu: nova-core: Select GPU_BUDDY for VRAM allocation Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 02/25] gpu: nova-core: Kconfig: Sort select statements alphabetically Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 03/25] gpu: nova-core: gsp: Return GspStaticInfo and FbLayout from boot() Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 04/25] gpu: nova-core: gsp: Extract usable FB region from GSP Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 05/25] gpu: nova-core: fb: Add usable_vram field to FbLayout Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 06/25] gpu: nova-core: mm: Add support to use PRAMIN windows to write to VRAM Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 07/25] docs: gpu: nova-core: Document the PRAMIN aperture mechanism Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 08/25] gpu: nova-core: mm: Add common memory management types Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 09/25] gpu: nova-core: mm: Add TLB flush support Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 10/25] gpu: nova-core: mm: Add GpuMm centralized memory manager Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 11/25] gpu: nova-core: mm: Use usable VRAM region for buddy allocator Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 12/25] gpu: nova-core: mm: Add common types for all page table formats Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 13/25] gpu: nova-core: mm: Add MMU v2 page table types Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 14/25] gpu: nova-core: mm: Add MMU v3 " Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 15/25] gpu: nova-core: mm: Add unified page table entry wrapper enums Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 16/25] gpu: nova-core: mm: Add page table walker for MMU v2/v3 Joel Fernandes
2026-02-25 5:39 ` Gary Guo
2026-02-25 14:26 ` Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 17/25] gpu: nova-core: mm: Add Virtual Memory Manager Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 18/25] gpu: nova-core: mm: Add virtual address range tracking to VMM Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 19/25] gpu: nova-core: mm: Add multi-page mapping API " Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 20/25] gpu: nova-core: Add BAR1 aperture type and size constant Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 21/25] gpu: nova-core: gsp: Add BAR1 PDE base accessors Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 22/25] gpu: nova-core: mm: Add BAR1 user interface Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 23/25] gpu: nova-core: mm: Add BarUser to struct Gpu and create at boot Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 24/25] gpu: nova-core: mm: Add BAR1 memory management self-tests Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-24 22:53 ` [PATCH v8 25/25] gpu: nova-core: mm: Add PRAMIN aperture self-tests Joel Fernandes
2026-02-27 4:25 ` Claude review: " Claude Code Review Bot
2026-02-27 4:25 ` Claude review: gpu: nova-core: Add memory management support Claude Code Review Bot
-- strict thread matches above, loose matches on Subject: below --
2026-03-11 0:39 [PATCH v9 00/23] " Joel Fernandes
2026-03-11 3:02 ` Claude review: " Claude Code Review Bot