* [PATCH v14 0/2] Rust GPU buddy allocator bindings
@ 2026-03-20 4:57 Joel Fernandes
2026-03-20 4:57 ` [PATCH v14 1/2] rust: gpu: Add " Joel Fernandes
` (2 more replies)
0 siblings, 3 replies; 15+ messages in thread
From: Joel Fernandes @ 2026-03-20 4:57 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, rust-for-linux, Nikola Djukic, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
Jonathan Corbet, Alex Deucher, Christian König, Jani Nikula,
Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, Huang Rui,
Matthew Auld, Matthew Brost, Lucas De Marchi,
Thomas Hellström, Helge Deller, Alex Gaynor, Boqun Feng,
John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
This series adds safe Rust abstractions over the Linux kernel's GPU buddy
allocator for physical memory management. The prerequisite infrastructure
patches (the DRM buddy code movement and the uninitialized-buddy fix) have been
absorbed into upstream -next, so this is now a set of standalone patches.
The series, along with all dependencies (including the clist and nova-core mm
patches), is available at:
git://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git (tag: buddy-bindings-v14-20260319)
Change log:
Changes from v13 to v14:
- Changed clist_create! call site to use explicit unsafe{} block.
- Changed range parameters to use the Range type (Alex).
- Renamed GpuBuddyParams::physical_memory_size to size (Alex).
- Changed GpuBuddyAllocFlags inner type from u32 to usize (Alex).
- Renamed free_memory() to avail() (Alex).
- Various nits (Alex).
Changes from v12 to v13:
- Split MAINTAINERS reviewer update into a separate patch (Danilo).
- Adjustments to use the Alignment type for chunk-size parameters (Alex).
- Fixed doctest assertion due to blocks landing at the top of the range.
- Changed chunk_size local vars to Alignment.
- Changed block size to u64 from usize.
- Renamed rust/kernel/gpu/mod.rs to rust/kernel/gpu.rs.
- Several other adjustments (Alex).
Changes from v11 to v12:
- Rebased on linux-next; this is now a standalone single patch as dependencies
are absorbed (but note that clist is a prerequisite).
- Redesigned allocation API (Alexandre Courbot) for better Rust ergonomics.
- Split single long example into 4 self-contained examples (Alexandre Courbot).
- Several safety and invariant comment changes (Danilo).
- MAINTAINERS changes (Arun, Matthew, Danilo, Dave).
- Fixed `#[cfg(CONFIG_GPU_BUDDY)]` to `#[cfg(CONFIG_GPU_BUDDY = "y")]` (Danilo Krummrich).
- Updated `ffi::clist::CListHead` to `interop::list::CListHead`.
Changes from v10 to v11:
- Dropped "rust: ffi: Convert pub use to pub mod and create ffi module" patch;
the ffi module restructuring will go through a different path.
- Dropped "rust: clist: Add support to interface with C linked lists" patch;
the clist module will be submitted separately.
- Dropped "nova-core: Kconfig: Sort select statements alphabetically" cosmetic
patch.
- Patches 1-3 (DRM buddy movement and fix) are included as reference only;
they are already being pulled into upstream via drm-misc-next.
- Removed clist patches as those can go in independently (Alice).
- Moved the Kconfig GPU_BUDDY selection patch to nova-core mm series to enable
it when it is actually used.
- Various nits to comments, etc.
Changes from v9 to v10:
- Absorbed the DRM buddy code movement patches into this series as patches 1-2.
Dave Airlie reworked these into two parts for better git history.
- Added "gpu: Fix uninitialized buddy for built-in drivers" fix by Koen Koning,
using subsys_initcall instead of module_init to fix NULL pointer dereference
when built-in drivers use the buddy allocator before initialization.
- Added "rust: ffi: Convert pub use to pub mod and create ffi module" to prepare
the ffi module for hosting clist as a sub-module.
- Moved clist from rust/kernel/clist.rs to rust/kernel/ffi/.
- Added "nova-core: Kconfig: Sort select statements alphabetically" (Danilo).
Changes from v8 to v9:
- Updated nova-core Kconfig patch: addressed sorting of Kconfig options.
- Added Daniel Almeida's Reviewed-by tag to clist patch.
- Minor refinements to GPU buddy bindings.
Changes from v7 to v8:
- Added nova-core Kconfig patch to select GPU_BUDDY for VRAM allocation.
- Various changes suggested by Danilo Krummrich, Gary Guo, and Daniel Almeida.
- Added Acked-by: Gary Guo for clist patch.
Changes from v6 to v7:
- Major restructuring: split the large 26-patch v6 RFC series. v7 only contains
the Rust infrastructure patches (clist + GPU buddy bindings), extracted from
the full nova-core MM series. The nova-core MM patches follow separately.
- Rebased on linux-next.
Changes from v5 to v6:
- Rebased on drm-rust-kernel/drm-rust-next.
- Expanded from 6 to 26 patches with full nova-core MM infrastructure including
page table walker, VMM, BAR1 user interface, TLB flush, and GpuMm manager.
Changes from v4 to v5:
- Added PRAMIN aperture support with documentation and self-tests.
- Improved buddy allocator bindings (fewer lines of code).
- Based on drm-rust-next instead of linux-next.
Changes from v3 to v4:
- Combined the clist and DRM buddy series into a single coherent series.
- Added DRM buddy allocator movement from drivers/gpu/drm/ up to drivers/gpu/,
renaming API from drm_buddy to gpu_buddy.
- Added Rust bindings for the GPU buddy allocator.
Changes from v2 to v3:
- Squashed 3 clist patches into one due to inter-dependencies.
- Changed Clist to Clist<'a, T> using const generic offset (Alex Courbot).
- Simplified C helpers to only list_add_tail (Alex Courbot, John Hubbard).
- Added init_list_head() Rust function (Alex Courbot).
- Added FusedIterator, PartialEq/Eq impls.
- Added MAINTAINERS entry (Miguel Ojeda).
Changes from v1 (RFC) to v2:
- Dropped DRM buddy allocator patches; series focuses solely on clist module.
- Dropped sample modules, replaced with doctests.
- Added proper lifetime management similar to scatterlist.
- Split clist into 3 separate patches.
Link to v13: https://lore.kernel.org/all/20260317220323.1909618-1-joelagnelf@nvidia.com/
Joel Fernandes (2):
rust: gpu: Add GPU buddy allocator bindings
MAINTAINERS: gpu: buddy: Update reviewer
MAINTAINERS | 8 +-
rust/bindings/bindings_helper.h | 11 +
rust/helpers/gpu.c | 23 ++
rust/helpers/helpers.c | 1 +
rust/kernel/gpu.rs | 6 +
rust/kernel/gpu/buddy.rs | 613 ++++++++++++++++++++++++++++++++
rust/kernel/lib.rs | 2 +
7 files changed, 663 insertions(+), 1 deletion(-)
create mode 100644 rust/helpers/gpu.c
create mode 100644 rust/kernel/gpu.rs
create mode 100644 rust/kernel/gpu/buddy.rs
--
2.34.1
^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH v14 1/2] rust: gpu: Add GPU buddy allocator bindings
2026-03-20 4:57 [PATCH v14 0/2] Rust GPU buddy allocator bindings Joel Fernandes
@ 2026-03-20 4:57 ` Joel Fernandes
2026-03-20 12:16 ` Alexandre Courbot
` (2 more replies)
2026-03-20 4:57 ` [PATCH v14 2/2] MAINTAINERS: gpu: buddy: Update reviewer Joel Fernandes
2026-03-21 17:56 ` Claude review: Rust GPU buddy allocator bindings Claude Code Review Bot
2 siblings, 3 replies; 15+ messages in thread
From: Joel Fernandes @ 2026-03-20 4:57 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, rust-for-linux, Nikola Djukic, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
Jonathan Corbet, Alex Deucher, Christian König, Jani Nikula,
Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, Huang Rui,
Matthew Auld, Matthew Brost, Lucas De Marchi,
Thomas Hellström, Helge Deller, Alex Gaynor, Boqun Feng,
John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
Add safe Rust abstractions over the Linux kernel's GPU buddy
allocator for physical memory management. The GPU buddy allocator
implements a binary buddy system useful for GPU physical memory
allocation. nova-core will use it for physical memory allocation.
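For reference, the size/order arithmetic a binary buddy allocator exposes
(block size = chunk_size << order, with requests rounded up to a
power-of-two number of chunks) can be sketched in standalone Rust; the
helper names below are hypothetical and not part of these bindings:

```rust
// Sketch of binary-buddy size/order arithmetic: each block covers
// `chunk_size << order` bytes, and a request is rounded up to the
// smallest power-of-two multiple of the chunk size that fits it.
fn order_for(chunk_size: u64, request: u64) -> u32 {
    // Number of chunks needed, rounded up.
    let chunks = request.div_ceil(chunk_size);
    // Exponent of the smallest power of two >= chunks.
    chunks.next_power_of_two().trailing_zeros()
}

fn block_size(chunk_size: u64, order: u32) -> u64 {
    chunk_size << order
}

fn main() {
    let chunk = 4096; // 4 KiB minimum allocation unit
    // A 16 MiB request with 4 KiB chunks needs 4096 chunks, i.e. order 12,
    // matching the doctest assertion `block.order() == 12` in this patch.
    let order = order_for(chunk, 16 << 20);
    assert_eq!(order, 12);
    assert_eq!(block_size(chunk, order), 16 << 20);
    println!("order={} size={}", order, block_size(chunk, order));
}
```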
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
MAINTAINERS | 6 +
rust/bindings/bindings_helper.h | 11 +
rust/helpers/gpu.c | 23 ++
rust/helpers/helpers.c | 1 +
rust/kernel/gpu.rs | 6 +
rust/kernel/gpu/buddy.rs | 613 ++++++++++++++++++++++++++++++++
rust/kernel/lib.rs | 2 +
7 files changed, 662 insertions(+)
create mode 100644 rust/helpers/gpu.c
create mode 100644 rust/kernel/gpu.rs
create mode 100644 rust/kernel/gpu/buddy.rs
diff --git a/MAINTAINERS b/MAINTAINERS
index e847099efcc2..cd9505d3be60 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8531,7 +8531,10 @@ T: git https://gitlab.freedesktop.org/drm/rust/kernel.git
F: drivers/gpu/drm/nova/
F: drivers/gpu/drm/tyr/
F: drivers/gpu/nova-core/
+F: rust/helpers/gpu.c
F: rust/kernel/drm/
+F: rust/kernel/gpu.rs
+F: rust/kernel/gpu/
DRM DRIVERS FOR ALLWINNER A10
M: Chen-Yu Tsai <wens@kernel.org>
@@ -8952,6 +8955,9 @@ F: drivers/gpu/drm/drm_buddy.c
F: drivers/gpu/tests/gpu_buddy_test.c
F: include/drm/drm_buddy.h
F: include/linux/gpu_buddy.h
+F: rust/helpers/gpu.c
+F: rust/kernel/gpu.rs
+F: rust/kernel/gpu/
DRM AUTOMATED TESTING
M: Helen Koike <helen.fornazier@gmail.com>
diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index 083cc44aa952..dbb765a9fdbd 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -29,6 +29,7 @@
#include <linux/hrtimer_types.h>
#include <linux/acpi.h>
+#include <linux/gpu_buddy.h>
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
@@ -146,6 +147,16 @@ const vm_flags_t RUST_CONST_HELPER_VM_MIXEDMAP = VM_MIXEDMAP;
const vm_flags_t RUST_CONST_HELPER_VM_HUGEPAGE = VM_HUGEPAGE;
const vm_flags_t RUST_CONST_HELPER_VM_NOHUGEPAGE = VM_NOHUGEPAGE;
+#if IS_ENABLED(CONFIG_GPU_BUDDY)
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_RANGE_ALLOCATION = GPU_BUDDY_RANGE_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_TOPDOWN_ALLOCATION = GPU_BUDDY_TOPDOWN_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CONTIGUOUS_ALLOCATION =
+ GPU_BUDDY_CONTIGUOUS_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CLEAR_ALLOCATION = GPU_BUDDY_CLEAR_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CLEARED = GPU_BUDDY_CLEARED;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_TRIM_DISABLE = GPU_BUDDY_TRIM_DISABLE;
+#endif
+
#if IS_ENABLED(CONFIG_ANDROID_BINDER_IPC_RUST)
#include "../../drivers/android/binder/rust_binder.h"
#include "../../drivers/android/binder/rust_binder_events.h"
diff --git a/rust/helpers/gpu.c b/rust/helpers/gpu.c
new file mode 100644
index 000000000000..38b1a4e6bef8
--- /dev/null
+++ b/rust/helpers/gpu.c
@@ -0,0 +1,23 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/gpu_buddy.h>
+
+#ifdef CONFIG_GPU_BUDDY
+
+__rust_helper u64 rust_helper_gpu_buddy_block_offset(const struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_offset(block);
+}
+
+__rust_helper unsigned int rust_helper_gpu_buddy_block_order(struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_order(block);
+}
+
+__rust_helper u64 rust_helper_gpu_buddy_block_size(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_size(mm, block);
+}
+
+#endif /* CONFIG_GPU_BUDDY */
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 724fcb8240ac..a53929ce52a3 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -32,6 +32,7 @@
#include "err.c"
#include "irq.c"
#include "fs.c"
+#include "gpu.c"
#include "io.c"
#include "jump_label.c"
#include "kunit.c"
diff --git a/rust/kernel/gpu.rs b/rust/kernel/gpu.rs
new file mode 100644
index 000000000000..1dc5d0c8c09d
--- /dev/null
+++ b/rust/kernel/gpu.rs
@@ -0,0 +1,6 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! GPU subsystem abstractions.
+
+#[cfg(CONFIG_GPU_BUDDY = "y")]
+pub mod buddy;
diff --git a/rust/kernel/gpu/buddy.rs b/rust/kernel/gpu/buddy.rs
new file mode 100644
index 000000000000..fdf1fcc2dcee
--- /dev/null
+++ b/rust/kernel/gpu/buddy.rs
@@ -0,0 +1,613 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! GPU buddy allocator bindings.
+//!
+//! C header: [`include/linux/gpu_buddy.h`](srctree/include/linux/gpu_buddy.h)
+//!
+//! This module provides Rust abstractions over the Linux kernel's GPU buddy
+//! allocator, which implements a binary buddy memory allocator.
+//!
+//! The buddy allocator manages a contiguous address space and allocates blocks
+//! in power-of-two sizes, useful for GPU physical memory management.
+//!
+//! # Examples
+//!
+//! Create a buddy allocator and perform a basic range allocation:
+//!
+//! ```
+//! use kernel::{
+//! gpu::buddy::{
+//! GpuBuddy,
+//! GpuBuddyAllocFlags,
+//! GpuBuddyAllocMode,
+//! GpuBuddyParams, //
+//! },
+//! prelude::*,
+//! ptr::Alignment,
+//! sizes::*, //
+//! };
+//!
+//! // Create a 1GB buddy allocator with 4KB minimum chunk size.
+//! let buddy = GpuBuddy::new(GpuBuddyParams {
+//! base_offset: 0,
+//! size: SZ_1G as u64,
+//! chunk_size: Alignment::new::<SZ_4K>(),
+//! })?;
+//!
+//! assert_eq!(buddy.size(), SZ_1G as u64);
+//! assert_eq!(buddy.chunk_size(), Alignment::new::<SZ_4K>());
+//! let initial_free = buddy.avail();
+//!
+//! // Allocate 16MB. Block lands at the top of the address range.
+//! let allocated = KBox::pin_init(
+//! buddy.alloc_blocks(
+//! GpuBuddyAllocMode::Simple,
+//! SZ_16M as u64,
+//! Alignment::new::<SZ_16M>(),
+//! GpuBuddyAllocFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//! assert_eq!(buddy.avail(), initial_free - SZ_16M as u64);
+//!
+//! let block = allocated.iter().next().expect("expected one block");
+//! assert_eq!(block.offset(), (SZ_1G - SZ_16M) as u64);
+//! assert_eq!(block.order(), 12); // 2^12 pages = 16MB
+//! assert_eq!(block.size(), SZ_16M as u64);
+//! assert_eq!(allocated.iter().count(), 1);
+//!
+//! // Dropping the allocation returns the range to the buddy allocator.
+//! drop(allocated);
+//! assert_eq!(buddy.avail(), initial_free);
+//! # Ok::<(), Error>(())
+//! ```
+//!
+//! Top-down allocation allocates from the highest addresses:
+//!
+//! ```
+//! # use kernel::{
+//! # gpu::buddy::{GpuBuddy, GpuBuddyAllocMode, GpuBuddyAllocFlags, GpuBuddyParams},
+//! # prelude::*,
+//! # ptr::Alignment,
+//! # sizes::*, //
+//! # };
+//! # let buddy = GpuBuddy::new(GpuBuddyParams {
+//! # base_offset: 0,
+//! # size: SZ_1G as u64,
+//! # chunk_size: Alignment::new::<SZ_4K>(),
+//! # })?;
+//! # let initial_free = buddy.avail();
+//! let topdown = KBox::pin_init(
+//! buddy.alloc_blocks(
+//! GpuBuddyAllocMode::TopDown,
+//! SZ_16M as u64,
+//! Alignment::new::<SZ_16M>(),
+//! GpuBuddyAllocFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//! assert_eq!(buddy.avail(), initial_free - SZ_16M as u64);
+//!
+//! let block = topdown.iter().next().expect("expected one block");
+//! assert_eq!(block.offset(), (SZ_1G - SZ_16M) as u64);
+//! assert_eq!(block.order(), 12);
+//! assert_eq!(block.size(), SZ_16M as u64);
+//!
+//! // Dropping the allocation returns the range to the buddy allocator.
+//! drop(topdown);
+//! assert_eq!(buddy.avail(), initial_free);
+//! # Ok::<(), Error>(())
+//! ```
+//!
+//! Non-contiguous allocation can fill fragmented memory by returning multiple
+//! blocks:
+//!
+//! ```
+//! # use kernel::{
+//! # gpu::buddy::{
+//! # GpuBuddy, GpuBuddyAllocFlags, GpuBuddyAllocMode, GpuBuddyParams,
+//! # },
+//! # prelude::*,
+//! # ptr::Alignment,
+//! # sizes::*, //
+//! # };
+//! # let buddy = GpuBuddy::new(GpuBuddyParams {
+//! # base_offset: 0,
+//! # size: SZ_1G as u64,
+//! # chunk_size: Alignment::new::<SZ_4K>(),
+//! # })?;
+//! # let initial_free = buddy.avail();
+//! // Create fragmentation by allocating 4MB blocks at [0,4M) and [8M,12M).
+//! let frag1 = KBox::pin_init(
+//! buddy.alloc_blocks(
+//! GpuBuddyAllocMode::Range(0..SZ_4M as u64),
+//! SZ_4M as u64,
+//! Alignment::new::<SZ_4M>(),
+//! GpuBuddyAllocFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//! assert_eq!(buddy.avail(), initial_free - SZ_4M as u64);
+//!
+//! let frag2 = KBox::pin_init(
+//! buddy.alloc_blocks(
+//! GpuBuddyAllocMode::Range(SZ_8M as u64..(SZ_8M + SZ_4M) as u64),
+//! SZ_4M as u64,
+//! Alignment::new::<SZ_4M>(),
+//! GpuBuddyAllocFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//! assert_eq!(buddy.avail(), initial_free - SZ_8M as u64);
+//!
+//! // Allocate 8MB, this returns 2 blocks from the holes.
+//! let fragmented = KBox::pin_init(
+//! buddy.alloc_blocks(
+//! GpuBuddyAllocMode::Range(0..SZ_16M as u64),
+//! SZ_8M as u64,
+//! Alignment::new::<SZ_4M>(),
+//! GpuBuddyAllocFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//! assert_eq!(buddy.avail(), initial_free - SZ_16M as u64);
+//!
+//! let (mut count, mut total) = (0u32, 0u64);
+//! for block in fragmented.iter() {
+//! assert_eq!(block.size(), SZ_4M as u64);
+//! total += block.size();
+//! count += 1;
+//! }
+//! assert_eq!(total, SZ_8M as u64);
+//! assert_eq!(count, 2);
+//! # Ok::<(), Error>(())
+//! ```
+//!
+//! Contiguous allocation fails when only fragmented space is available:
+//!
+//! ```
+//! # use kernel::{
+//! # gpu::buddy::{
+//! # GpuBuddy, GpuBuddyAllocFlag, GpuBuddyAllocFlags, GpuBuddyAllocMode, GpuBuddyParams,
+//! # },
+//! # prelude::*,
+//! # ptr::Alignment,
+//! # sizes::*, //
+//! # };
+//! // Create a small 16MB buddy allocator with fragmented memory.
+//! let small = GpuBuddy::new(GpuBuddyParams {
+//! base_offset: 0,
+//! size: SZ_16M as u64,
+//! chunk_size: Alignment::new::<SZ_4K>(),
+//! })?;
+//!
+//! let _hole1 = KBox::pin_init(
+//! small.alloc_blocks(
+//! GpuBuddyAllocMode::Range(0..SZ_4M as u64),
+//! SZ_4M as u64,
+//! Alignment::new::<SZ_4M>(),
+//! GpuBuddyAllocFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//!
+//! let _hole2 = KBox::pin_init(
+//! small.alloc_blocks(
+//! GpuBuddyAllocMode::Range(SZ_8M as u64..(SZ_8M + SZ_4M) as u64),
+//! SZ_4M as u64,
+//! Alignment::new::<SZ_4M>(),
+//! GpuBuddyAllocFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//!
+//! // 8MB contiguous should fail, only two non-contiguous 4MB holes exist.
+//! let result = KBox::pin_init(
+//! small.alloc_blocks(
+//! GpuBuddyAllocMode::Simple,
+//! SZ_8M as u64,
+//! Alignment::new::<SZ_4M>(),
+//! GpuBuddyAllocFlag::Contiguous,
+//! ),
+//! GFP_KERNEL,
+//! );
+//! assert!(result.is_err());
+//! # Ok::<(), Error>(())
+//! ```
+
+use core::ops::Range;
+
+use crate::{
+ bindings,
+ clist_create,
+ error::to_result,
+ interop::list::CListHead,
+ new_mutex,
+ prelude::*,
+ ptr::Alignment,
+ sync::{
+ lock::mutex::MutexGuard,
+ Arc,
+ Mutex, //
+ },
+ types::Opaque, //
+};
+
+/// Allocation mode for the GPU buddy allocator.
+///
+/// The mode determines the primary allocation strategy. Modes are mutually
+/// exclusive: an allocation is either simple, range-constrained, or top-down.
+///
+/// Orthogonal modifier flags (e.g., contiguous, clear) are specified separately
+/// via [`GpuBuddyAllocFlags`].
+#[derive(Clone, Debug, PartialEq, Eq)]
+pub enum GpuBuddyAllocMode {
+ /// Simple allocation without constraints.
+ Simple,
+ /// Range-based allocation within the given address range.
+ Range(Range<u64>),
+ /// Allocate from top of address space downward.
+ TopDown,
+}
+
+impl GpuBuddyAllocMode {
+ // Returns the C flags corresponding to the allocation mode.
+ fn as_flags(&self) -> usize {
+ match self {
+ Self::Simple => 0,
+ Self::Range(_) => bindings::GPU_BUDDY_RANGE_ALLOCATION,
+ Self::TopDown => bindings::GPU_BUDDY_TOPDOWN_ALLOCATION,
+ }
+ }
+
+ // Extracts the range start/end, defaulting to `(0, 0)` for non-range modes.
+ fn range(&self) -> (u64, u64) {
+ match self {
+ Self::Range(range) => (range.start, range.end),
+ _ => (0, 0),
+ }
+ }
+}
+
+crate::impl_flags!(
+ /// Modifier flags for GPU buddy allocation.
+ ///
+ /// These flags can be combined with any [`GpuBuddyAllocMode`] to control
+ /// additional allocation behavior.
+ #[derive(Clone, Copy, Default, PartialEq, Eq)]
+ pub struct GpuBuddyAllocFlags(usize);
+
+ /// Individual modifier flag for GPU buddy allocation.
+ #[derive(Clone, Copy, PartialEq, Eq)]
+ pub enum GpuBuddyAllocFlag {
+ /// Allocate physically contiguous blocks.
+ Contiguous = bindings::GPU_BUDDY_CONTIGUOUS_ALLOCATION,
+
+ /// Request allocation from cleared (zeroed) memory.
+ Clear = bindings::GPU_BUDDY_CLEAR_ALLOCATION,
+
+ /// Disable trimming of partially used blocks.
+ TrimDisable = bindings::GPU_BUDDY_TRIM_DISABLE,
+ }
+);
+
+/// Parameters for creating a GPU buddy allocator.
+pub struct GpuBuddyParams {
+ /// Base offset (in bytes) where the managed memory region starts.
+ /// Allocations will be offset by this value.
+ pub base_offset: u64,
+ /// Total size (in bytes) of the address space managed by the allocator.
+ pub size: u64,
+ /// Minimum allocation unit / chunk size, must be >= 4KB.
+ pub chunk_size: Alignment,
+}
+
+/// Inner structure holding the actual buddy allocator.
+///
+/// # Synchronization
+///
+/// The C `gpu_buddy` API requires synchronization (see `include/linux/gpu_buddy.h`).
+/// Internal locking ensures all allocator and free operations are properly
+/// synchronized, preventing races between concurrent allocations and the
+/// freeing that occurs when [`AllocatedBlocks`] is dropped.
+///
+/// # Invariants
+///
+/// The inner [`Opaque`] contains an initialized buddy allocator.
+#[pin_data(PinnedDrop)]
+struct GpuBuddyInner {
+ #[pin]
+ inner: Opaque<bindings::gpu_buddy>,
+
+ // TODO: Replace `Mutex<()>` with `Mutex<Opaque<..>>` once `Mutex::new()`
+ // accepts `impl PinInit<T>`.
+ #[pin]
+ lock: Mutex<()>,
+ /// Cached creation parameters (do not change after init).
+ params: GpuBuddyParams,
+}
+
+impl GpuBuddyInner {
+ /// Create a pin-initializer for the buddy allocator.
+ fn new(params: GpuBuddyParams) -> impl PinInit<Self, Error> {
+ let size = params.size;
+ let chunk_size = params.chunk_size;
+
+ // INVARIANT: `gpu_buddy_init` returns 0 on success, at which point the
+ // `gpu_buddy` structure is initialized and ready for use with all
+ // `gpu_buddy_*` APIs. `try_pin_init!` only completes if all fields succeed,
+ // so the invariant holds when construction finishes.
+ try_pin_init!(Self {
+ inner <- Opaque::try_ffi_init(|ptr| {
+ // SAFETY: `ptr` points to valid uninitialized memory from the pin-init
+ // infrastructure. `gpu_buddy_init` will initialize the structure.
+ to_result(unsafe { bindings::gpu_buddy_init(ptr, size, chunk_size.as_usize() as u64) })
+ }),
+ lock <- new_mutex!(()),
+ params,
+ })
+ }
+
+ /// Lock the mutex and return a guard for accessing the allocator.
+ fn lock(&self) -> GpuBuddyGuard<'_> {
+ GpuBuddyGuard {
+ inner: self,
+ _guard: self.lock.lock(),
+ }
+ }
+}
+
+#[pinned_drop]
+impl PinnedDrop for GpuBuddyInner {
+ fn drop(self: Pin<&mut Self>) {
+ let guard = self.lock();
+
+ // SAFETY: Per the type invariant, `inner` contains an initialized
+ // allocator. `guard` provides exclusive access.
+ unsafe {
+ bindings::gpu_buddy_fini(guard.as_raw());
+ }
+ }
+}
+
+// SAFETY: `GpuBuddyInner` can be sent between threads.
+unsafe impl Send for GpuBuddyInner {}
+
+// SAFETY: `GpuBuddyInner` is `Sync` because `GpuBuddyInner::lock`
+// serializes all access to the C allocator, preventing data races.
+unsafe impl Sync for GpuBuddyInner {}
+
+// Guard that proves the lock is held, enabling access to the allocator.
+// The `_guard` holds the lock for the duration of this guard's lifetime.
+struct GpuBuddyGuard<'a> {
+ inner: &'a GpuBuddyInner,
+ _guard: MutexGuard<'a, ()>,
+}
+
+impl GpuBuddyGuard<'_> {
+ /// Get a raw pointer to the underlying C `gpu_buddy` structure.
+ fn as_raw(&self) -> *mut bindings::gpu_buddy {
+ self.inner.inner.get()
+ }
+}
+
+/// GPU buddy allocator instance.
+///
+/// This structure wraps the C `gpu_buddy` allocator using reference counting.
+/// The allocator is automatically cleaned up when all references are dropped.
+///
+/// Refer to the module-level documentation for usage examples.
+pub struct GpuBuddy(Arc<GpuBuddyInner>);
+
+impl GpuBuddy {
+ /// Create a new buddy allocator.
+ ///
+ /// Creates a buddy allocator that manages a contiguous address space of the given
+ /// size, with the specified minimum allocation unit (chunk_size must be at least 4KB).
+ pub fn new(params: GpuBuddyParams) -> Result<Self> {
+ Arc::pin_init(GpuBuddyInner::new(params), GFP_KERNEL).map(Self)
+ }
+
+ /// Get the base offset for allocations.
+ pub fn base_offset(&self) -> u64 {
+ self.0.params.base_offset
+ }
+
+ /// Get the chunk size (minimum allocation unit).
+ pub fn chunk_size(&self) -> Alignment {
+ self.0.params.chunk_size
+ }
+
+ /// Get the total managed size.
+ pub fn size(&self) -> u64 {
+ self.0.params.size
+ }
+
+ /// Get the available (free) memory in bytes.
+ pub fn avail(&self) -> u64 {
+ let guard = self.0.lock();
+
+ // SAFETY: Per the type invariant, `inner` contains an initialized allocator.
+ // `guard` provides exclusive access.
+ unsafe { (*guard.as_raw()).avail }
+ }
+
+ /// Allocate blocks from the buddy allocator.
+ ///
+ /// Returns a pin-initializer for [`AllocatedBlocks`].
+ pub fn alloc_blocks(
+ &self,
+ mode: GpuBuddyAllocMode,
+ size: u64,
+ min_block_size: Alignment,
+ flags: impl Into<GpuBuddyAllocFlags>,
+ ) -> impl PinInit<AllocatedBlocks, Error> {
+ let buddy_arc = Arc::clone(&self.0);
+ let (start, end) = mode.range();
+ let mode_flags = mode.as_flags();
+ let modifier_flags = flags.into();
+
+ // Create pin-initializer that initializes list and allocates blocks.
+ try_pin_init!(AllocatedBlocks {
+ buddy: buddy_arc,
+ list <- CListHead::new(),
+ _: {
+ // Reject zero-sized or inverted ranges.
+ if let GpuBuddyAllocMode::Range(range) = &mode {
+ if range.is_empty() {
+ Err::<(), Error>(EINVAL)?;
+ }
+ }
+
+ // Lock while allocating to serialize with concurrent frees.
+ let guard = buddy.lock();
+
+ // SAFETY: Per the type invariant, `inner` contains an initialized
+ // allocator. `guard` provides exclusive access.
+ to_result(unsafe {
+ bindings::gpu_buddy_alloc_blocks(
+ guard.as_raw(),
+ start,
+ end,
+ size,
+ min_block_size.as_usize() as u64,
+ list.as_raw(),
+ mode_flags | usize::from(modifier_flags),
+ )
+ })?
+ }
+ })
+ }
+}
+
+/// Allocated blocks from the buddy allocator with automatic cleanup.
+///
+/// This structure owns a list of allocated blocks and ensures they are
+/// automatically freed when dropped. Use `iter()` to iterate over all
+/// allocated blocks.
+///
+/// # Invariants
+///
+/// - `list` is an initialized, valid list head containing allocated blocks.
+#[pin_data(PinnedDrop)]
+pub struct AllocatedBlocks {
+ #[pin]
+ list: CListHead,
+ buddy: Arc<GpuBuddyInner>,
+}
+
+impl AllocatedBlocks {
+ /// Check if the block list is empty.
+ pub fn is_empty(&self) -> bool {
+ // An empty list head points to itself.
+ !self.list.is_linked()
+ }
+
+ /// Iterate over allocated blocks.
+ ///
+ /// Returns an iterator yielding [`AllocatedBlock`] values. Each [`AllocatedBlock`]
+ /// borrows `self` and is only valid for the duration of that borrow.
+ pub fn iter(&self) -> impl Iterator<Item = AllocatedBlock<'_>> + '_ {
+ let head = self.list.as_raw();
+ // SAFETY: Per the type invariant, `list` is an initialized sentinel `list_head`
+ // and is not concurrently modified (we hold a `&self` borrow). The list contains
+ // `gpu_buddy_block` items linked via `__bindgen_anon_1.link`. `Block` is
+ // `#[repr(transparent)]` over `gpu_buddy_block`.
+ let clist = unsafe {
+ clist_create!(
+ head,
+ Block,
+ bindings::gpu_buddy_block,
+ __bindgen_anon_1.link
+ )
+ };
+
+ clist
+ .iter()
+ .map(|this| AllocatedBlock { this, blocks: self })
+ }
+}
+
+#[pinned_drop]
+impl PinnedDrop for AllocatedBlocks {
+ fn drop(self: Pin<&mut Self>) {
+ let guard = self.buddy.lock();
+
+ // SAFETY:
+ // - list is valid per the type's invariants.
+ // - guard provides exclusive access to the allocator.
+ unsafe {
+ bindings::gpu_buddy_free_list(guard.as_raw(), self.list.as_raw(), 0);
+ }
+ }
+}
+
+/// A GPU buddy block.
+///
+/// Transparent wrapper over C `gpu_buddy_block` structure. This type is returned
+/// as references during iteration over [`AllocatedBlocks`].
+///
+/// # Invariants
+///
+/// The inner [`Opaque`] contains a valid, allocated `gpu_buddy_block`.
+#[repr(transparent)]
+struct Block(Opaque<bindings::gpu_buddy_block>);
+
+impl Block {
+ /// Get a raw pointer to the underlying C block.
+ fn as_raw(&self) -> *mut bindings::gpu_buddy_block {
+ self.0.get()
+ }
+
+ /// Get the block's raw offset in the buddy address space (without base offset).
+ fn offset(&self) -> u64 {
+ // SAFETY: `self.as_raw()` is valid per the type's invariants.
+ unsafe { bindings::gpu_buddy_block_offset(self.as_raw()) }
+ }
+
+ /// Get the block order.
+ fn order(&self) -> u32 {
+ // SAFETY: `self.as_raw()` is valid per the type's invariants.
+ unsafe { bindings::gpu_buddy_block_order(self.as_raw()) }
+ }
+}
+
+// SAFETY: `Block` is a wrapper around `gpu_buddy_block` which can be
+// sent across threads safely.
+unsafe impl Send for Block {}
+
+// SAFETY: `Block` is only accessed through shared references after
+// allocation, and thus safe to access concurrently across threads.
+unsafe impl Sync for Block {}
+
+/// A buddy block paired with its owning [`AllocatedBlocks`] context.
+///
+/// Unlike a raw block, which only knows its offset within the buddy address
+/// space, an [`AllocatedBlock`] also has access to the allocator's `base_offset`
+/// and `chunk_size`, enabling it to compute absolute offsets and byte sizes.
+///
+/// Returned by [`AllocatedBlocks::iter()`].
+pub struct AllocatedBlock<'a> {
+ this: &'a Block,
+ blocks: &'a AllocatedBlocks,
+}
+
+impl AllocatedBlock<'_> {
+ /// Get the block's offset in the address space.
+ ///
+ /// Returns the absolute offset including the allocator's base offset.
+ /// This is the actual address to use for accessing the allocated memory.
+ pub fn offset(&self) -> u64 {
+ self.blocks.buddy.params.base_offset + self.this.offset()
+ }
+
+ /// Get the block order (size = chunk_size << order).
+ pub fn order(&self) -> u32 {
+ self.this.order()
+ }
+
+ /// Get the block's size in bytes.
+ pub fn size(&self) -> u64 {
+ (self.blocks.buddy.params.chunk_size.as_usize() as u64) << self.this.order()
+ }
+}
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index bdcf632050ee..2652933e585f 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -102,6 +102,8 @@
pub mod firmware;
pub mod fmt;
pub mod fs;
+#[cfg(CONFIG_GPU_BUDDY = "y")]
+pub mod gpu;
#[cfg(CONFIG_I2C = "y")]
pub mod i2c;
pub mod id_pool;
--
2.34.1
^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [PATCH v14 1/2] rust: gpu: Add GPU buddy allocator bindings
2026-03-20 4:57 ` [PATCH v14 1/2] rust: gpu: Add " Joel Fernandes
@ 2026-03-20 12:16 ` Alexandre Courbot
2026-03-20 13:04 ` Gary Guo
2026-03-21 17:56 ` Claude review: " Claude Code Review Bot
2 siblings, 0 replies; 15+ messages in thread
From: Alexandre Courbot @ 2026-03-20 12:16 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Andrea Righi, Andy Ritger, Zhi Wang, Balbir Singh,
Philipp Stanner, Elle Rhumsaa, alexeyi, Eliot Courtney, joel,
linux-doc, amd-gfx, intel-gfx, intel-xe, linux-fbdev
On Fri Mar 20, 2026 at 1:57 PM JST, Joel Fernandes wrote:
> Add safe Rust abstractions over the Linux kernel's GPU buddy
> allocator for physical memory management. The GPU buddy allocator
> implements a binary buddy system useful for GPU physical memory
> allocation. nova-core will use it for physical memory allocation.
>
> Cc: Nikola Djukic <ndjukic@nvidia.com>
> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
Reviewed-by: Alexandre Courbot <acourbot@nvidia.com>
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v14 1/2] rust: gpu: Add GPU buddy allocator bindings
2026-03-20 4:57 ` [PATCH v14 1/2] rust: gpu: Add " Joel Fernandes
2026-03-20 12:16 ` Alexandre Courbot
@ 2026-03-20 13:04 ` Gary Guo
2026-03-20 19:50 ` Joel Fernandes
2026-03-21 17:56 ` Claude review: " Claude Code Review Bot
2 siblings, 1 reply; 15+ messages in thread
From: Gary Guo @ 2026-03-20 13:04 UTC (permalink / raw)
To: Joel Fernandes, linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, rust-for-linux, Nikola Djukic, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
Jonathan Corbet, Alex Deucher, Christian König, Jani Nikula,
Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, Huang Rui,
Matthew Auld, Matthew Brost, Lucas De Marchi,
Thomas Hellström, Helge Deller, Alex Gaynor, Boqun Feng,
John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev
On Fri Mar 20, 2026 at 4:57 AM GMT, Joel Fernandes wrote:
> Add safe Rust abstractions over the Linux kernel's GPU buddy
> allocator for physical memory management. The GPU buddy allocator
> implements a binary buddy system useful for GPU physical memory
> allocation. nova-core will use it for physical memory allocation.
>
> Cc: Nikola Djukic <ndjukic@nvidia.com>
> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
> ---
> MAINTAINERS | 6 +
> rust/bindings/bindings_helper.h | 11 +
> rust/helpers/gpu.c | 23 ++
> rust/helpers/helpers.c | 1 +
> rust/kernel/gpu.rs | 6 +
> rust/kernel/gpu/buddy.rs | 613 ++++++++++++++++++++++++++++++++
> rust/kernel/lib.rs | 2 +
> 7 files changed, 662 insertions(+)
> create mode 100644 rust/helpers/gpu.c
> create mode 100644 rust/kernel/gpu.rs
> create mode 100644 rust/kernel/gpu/buddy.rs
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index e847099efcc2..cd9505d3be60 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -8531,7 +8531,10 @@ T: git https://gitlab.freedesktop.org/drm/rust/kernel.git
> F: drivers/gpu/drm/nova/
> F: drivers/gpu/drm/tyr/
> F: drivers/gpu/nova-core/
> +F: rust/helpers/gpu.c
> F: rust/kernel/drm/
> +F: rust/kernel/gpu.rs
> +F: rust/kernel/gpu/
>
> DRM DRIVERS FOR ALLWINNER A10
> M: Chen-Yu Tsai <wens@kernel.org>
> @@ -8952,6 +8955,9 @@ F: drivers/gpu/drm/drm_buddy.c
> F: drivers/gpu/tests/gpu_buddy_test.c
> F: include/drm/drm_buddy.h
> F: include/linux/gpu_buddy.h
> +F: rust/helpers/gpu.c
> +F: rust/kernel/gpu.rs
> +F: rust/kernel/gpu/
>
> DRM AUTOMATED TESTING
> M: Helen Koike <helen.fornazier@gmail.com>
> diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
> index 083cc44aa952..dbb765a9fdbd 100644
> --- a/rust/bindings/bindings_helper.h
> +++ b/rust/bindings/bindings_helper.h
> @@ -29,6 +29,7 @@
> #include <linux/hrtimer_types.h>
>
> #include <linux/acpi.h>
> +#include <linux/gpu_buddy.h>
> #include <drm/drm_device.h>
> #include <drm/drm_drv.h>
> #include <drm/drm_file.h>
> @@ -146,6 +147,16 @@ const vm_flags_t RUST_CONST_HELPER_VM_MIXEDMAP = VM_MIXEDMAP;
> const vm_flags_t RUST_CONST_HELPER_VM_HUGEPAGE = VM_HUGEPAGE;
> const vm_flags_t RUST_CONST_HELPER_VM_NOHUGEPAGE = VM_NOHUGEPAGE;
>
> +#if IS_ENABLED(CONFIG_GPU_BUDDY)
> +const unsigned long RUST_CONST_HELPER_GPU_BUDDY_RANGE_ALLOCATION = GPU_BUDDY_RANGE_ALLOCATION;
> +const unsigned long RUST_CONST_HELPER_GPU_BUDDY_TOPDOWN_ALLOCATION = GPU_BUDDY_TOPDOWN_ALLOCATION;
> +const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CONTIGUOUS_ALLOCATION =
> + GPU_BUDDY_CONTIGUOUS_ALLOCATION;
> +const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CLEAR_ALLOCATION = GPU_BUDDY_CLEAR_ALLOCATION;
> +const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CLEARED = GPU_BUDDY_CLEARED;
> +const unsigned long RUST_CONST_HELPER_GPU_BUDDY_TRIM_DISABLE = GPU_BUDDY_TRIM_DISABLE;
> +#endif
> +
> #if IS_ENABLED(CONFIG_ANDROID_BINDER_IPC_RUST)
> #include "../../drivers/android/binder/rust_binder.h"
> #include "../../drivers/android/binder/rust_binder_events.h"
> diff --git a/rust/helpers/gpu.c b/rust/helpers/gpu.c
> new file mode 100644
> index 000000000000..38b1a4e6bef8
> --- /dev/null
> +++ b/rust/helpers/gpu.c
> @@ -0,0 +1,23 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include <linux/gpu_buddy.h>
> +
> +#ifdef CONFIG_GPU_BUDDY
> +
> +__rust_helper u64 rust_helper_gpu_buddy_block_offset(const struct gpu_buddy_block *block)
> +{
> + return gpu_buddy_block_offset(block);
> +}
> +
> +__rust_helper unsigned int rust_helper_gpu_buddy_block_order(struct gpu_buddy_block *block)
> +{
> + return gpu_buddy_block_order(block);
> +}
> +
> +__rust_helper u64 rust_helper_gpu_buddy_block_size(struct gpu_buddy *mm,
> + struct gpu_buddy_block *block)
> +{
> + return gpu_buddy_block_size(mm, block);
> +}
From Sashiko:
https://sashiko.dev/#/patchset/20260320045711.43494-1-joelagnelf%40nvidia.com
Does the Rust wrapper use this helper? It looks like AllocatedBlock::size()
manually duplicates the bitwise logic (chunk_size << order) rather than
calling this helper, which could create a divergence risk if the underlying C
allocator implementation changes.
Many other review comments there seem to be false positives, but it might be
worth confirming.
Best,
Gary
> +
> +#endif /* CONFIG_GPU_BUDDY */
> diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
> index 724fcb8240ac..a53929ce52a3 100644
> --- a/rust/helpers/helpers.c
> +++ b/rust/helpers/helpers.c
> @@ -32,6 +32,7 @@
> #include "err.c"
> #include "irq.c"
> #include "fs.c"
> +#include "gpu.c"
> #include "io.c"
> #include "jump_label.c"
> #include "kunit.c"
> diff --git a/rust/kernel/gpu.rs b/rust/kernel/gpu.rs
> new file mode 100644
> index 000000000000..1dc5d0c8c09d
> --- /dev/null
> +++ b/rust/kernel/gpu.rs
> @@ -0,0 +1,6 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +//! GPU subsystem abstractions.
> +
> +#[cfg(CONFIG_GPU_BUDDY = "y")]
> +pub mod buddy;
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v14 1/2] rust: gpu: Add GPU buddy allocator bindings
2026-03-20 13:04 ` Gary Guo
@ 2026-03-20 19:50 ` Joel Fernandes
0 siblings, 0 replies; 15+ messages in thread
From: Joel Fernandes @ 2026-03-20 19:50 UTC (permalink / raw)
To: Gary Guo, linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Dave Airlie, Daniel Almeida, Koen Koning, dri-devel,
rust-for-linux, Nikola Djukic, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Simona Vetter, Jonathan Corbet,
Alex Deucher, Christian König, Jani Nikula, Joonas Lahtinen,
Rodrigo Vivi, Tvrtko Ursulin, Huang Rui, Matthew Auld,
Matthew Brost, Lucas De Marchi, Thomas Hellström,
Helge Deller, Alex Gaynor, Boqun Feng, John Hubbard,
Alistair Popple, Timur Tabi, Edwin Peer, Alexandre Courbot,
Andrea Righi, Andy Ritger, Zhi Wang, Balbir Singh,
Philipp Stanner, Elle Rhumsaa, alexeyi, Eliot Courtney, joel,
linux-doc, amd-gfx, intel-gfx, intel-xe, linux-fbdev
On 3/20/2026 9:04 AM, Gary Guo wrote:
> On Fri Mar 20, 2026 at 4:57 AM GMT, Joel Fernandes wrote:
>> Add safe Rust abstractions over the Linux kernel's GPU buddy
>> allocator for physical memory management. The GPU buddy allocator
>> implements a binary buddy system useful for GPU physical memory
>> allocation. nova-core will use it for physical memory allocation.
>>
>> Cc: Nikola Djukic <ndjukic@nvidia.com>
>> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
>> ---
>> MAINTAINERS | 6 +
>> rust/bindings/bindings_helper.h | 11 +
>> rust/helpers/gpu.c | 23 ++
>> rust/helpers/helpers.c | 1 +
>> rust/kernel/gpu.rs | 6 +
>> rust/kernel/gpu/buddy.rs | 613 ++++++++++++++++++++++++++++++++
>> rust/kernel/lib.rs | 2 +
>> 7 files changed, 662 insertions(+)
>> create mode 100644 rust/helpers/gpu.c
>> create mode 100644 rust/kernel/gpu.rs
>> create mode 100644 rust/kernel/gpu/buddy.rs
>>
>> diff --git a/MAINTAINERS b/MAINTAINERS
>> index e847099efcc2..cd9505d3be60 100644
>> --- a/MAINTAINERS
>> +++ b/MAINTAINERS
>> @@ -8531,7 +8531,10 @@ T: git https://gitlab.freedesktop.org/drm/rust/kernel.git
>> F: drivers/gpu/drm/nova/
>> F: drivers/gpu/drm/tyr/
>> F: drivers/gpu/nova-core/
>> +F: rust/helpers/gpu.c
>> F: rust/kernel/drm/
>> +F: rust/kernel/gpu.rs
>> +F: rust/kernel/gpu/
>>
>> DRM DRIVERS FOR ALLWINNER A10
>> M: Chen-Yu Tsai <wens@kernel.org>
>> @@ -8952,6 +8955,9 @@ F: drivers/gpu/drm/drm_buddy.c
>> F: drivers/gpu/tests/gpu_buddy_test.c
>> F: include/drm/drm_buddy.h
>> F: include/linux/gpu_buddy.h
>> +F: rust/helpers/gpu.c
>> +F: rust/kernel/gpu.rs
>> +F: rust/kernel/gpu/
>>
>> DRM AUTOMATED TESTING
>> M: Helen Koike <helen.fornazier@gmail.com>
>> diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
>> index 083cc44aa952..dbb765a9fdbd 100644
>> --- a/rust/bindings/bindings_helper.h
>> +++ b/rust/bindings/bindings_helper.h
>> @@ -29,6 +29,7 @@
>> #include <linux/hrtimer_types.h>
>>
>> #include <linux/acpi.h>
>> +#include <linux/gpu_buddy.h>
>> #include <drm/drm_device.h>
>> #include <drm/drm_drv.h>
>> #include <drm/drm_file.h>
>> @@ -146,6 +147,16 @@ const vm_flags_t RUST_CONST_HELPER_VM_MIXEDMAP = VM_MIXEDMAP;
>> const vm_flags_t RUST_CONST_HELPER_VM_HUGEPAGE = VM_HUGEPAGE;
>> const vm_flags_t RUST_CONST_HELPER_VM_NOHUGEPAGE = VM_NOHUGEPAGE;
>>
>> +#if IS_ENABLED(CONFIG_GPU_BUDDY)
>> +const unsigned long RUST_CONST_HELPER_GPU_BUDDY_RANGE_ALLOCATION = GPU_BUDDY_RANGE_ALLOCATION;
>> +const unsigned long RUST_CONST_HELPER_GPU_BUDDY_TOPDOWN_ALLOCATION = GPU_BUDDY_TOPDOWN_ALLOCATION;
>> +const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CONTIGUOUS_ALLOCATION =
>> + GPU_BUDDY_CONTIGUOUS_ALLOCATION;
>> +const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CLEAR_ALLOCATION = GPU_BUDDY_CLEAR_ALLOCATION;
>> +const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CLEARED = GPU_BUDDY_CLEARED;
>> +const unsigned long RUST_CONST_HELPER_GPU_BUDDY_TRIM_DISABLE = GPU_BUDDY_TRIM_DISABLE;
>> +#endif
>> +
>> #if IS_ENABLED(CONFIG_ANDROID_BINDER_IPC_RUST)
>> #include "../../drivers/android/binder/rust_binder.h"
>> #include "../../drivers/android/binder/rust_binder_events.h"
>> diff --git a/rust/helpers/gpu.c b/rust/helpers/gpu.c
>> new file mode 100644
>> index 000000000000..38b1a4e6bef8
>> --- /dev/null
>> +++ b/rust/helpers/gpu.c
>> @@ -0,0 +1,23 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +
>> +#include <linux/gpu_buddy.h>
>> +
>> +#ifdef CONFIG_GPU_BUDDY
>> +
>> +__rust_helper u64 rust_helper_gpu_buddy_block_offset(const struct gpu_buddy_block *block)
>> +{
>> + return gpu_buddy_block_offset(block);
>> +}
>> +
>> +__rust_helper unsigned int rust_helper_gpu_buddy_block_order(struct gpu_buddy_block *block)
>> +{
>> + return gpu_buddy_block_order(block);
>> +}
>> +
>> +__rust_helper u64 rust_helper_gpu_buddy_block_size(struct gpu_buddy *mm,
>> + struct gpu_buddy_block *block)
>> +{
>> + return gpu_buddy_block_size(mm, block);
>> +}
>
> From Sashiko:
> https://sashiko.dev/#/patchset/20260320045711.43494-1-joelagnelf%40nvidia.com
>
> Does the Rust wrapper use this helper? It looks like AllocatedBlock::size()
> manually duplicates the bitwise logic (chunk_size << order) rather than
> calling this helper, which could create a divergence risk if the underlying C
> allocator implementation changes.
>
> Many other review comments there seem to be false positives, but it might be
> worth confirming.
If Danilo is applying the patch, can we please remove the helper on apply? I
think in this case we do not need to call the helper. I don't think there is a
divergence risk here.
thanks,
--
Joel Fernandes
^ permalink raw reply [flat|nested] 15+ messages in thread
* Claude review: rust: gpu: Add GPU buddy allocator bindings
2026-03-20 4:57 ` [PATCH v14 1/2] rust: gpu: Add " Joel Fernandes
2026-03-20 12:16 ` Alexandre Courbot
2026-03-20 13:04 ` Gary Guo
@ 2026-03-21 17:56 ` Claude Code Review Bot
2 siblings, 0 replies; 15+ messages in thread
From: Claude Code Review Bot @ 2026-03-21 17:56 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
**Positive aspects:**
- Clean separation of allocation mode (enum) vs modifier flags (bitflags) — this is good Rust API design.
- Proper pin-init usage throughout, with correct `#[pin_data(PinnedDrop)]` annotations.
- The `Arc<GpuBuddyInner>` sharing between `GpuBuddy` and `AllocatedBlocks` correctly ensures the allocator outlives its allocations.
- Doctests are thorough, covering simple, top-down, range, fragmented, and contiguous-failure cases.
- SAFETY comments are present and generally adequate.
**Observations:**
1. **`base_offset` is never passed to C:** The `GpuBuddyParams::base_offset` field is stored in the Rust wrapper and used to adjust offsets returned by `AllocatedBlock::offset()`, but it is never passed to `gpu_buddy_init()`. This means the C allocator always thinks it starts at 0, and the Rust side adds the offset. This works for offset reporting, but if the caller ever passes a range to `GpuBuddyAllocMode::Range`, they would need to know that the range is relative to the buddy's internal 0-based space, not the base-offset-adjusted space. The doctest examples all use `base_offset: 0`, so this isn't tested. Consider documenting this contract explicitly, or adjusting `Range` start/end by subtracting `base_offset` before passing to C.
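A minimal plain-Rust sketch of the offset contract described above. All names here are illustrative (not the actual kernel bindings): the C allocator works in a 0-based space, the wrapper adds `base_offset` when reporting, and a hypothetical translation helper shows what subtracting `base_offset` before passing a range to C would look like:

```rust
use std::ops::Range;

// Illustrative stand-in for the wrapper's stored parameters.
struct Params {
    base_offset: u64,
}

// Offset as reported to the caller: the C allocator's 0-based offset
// plus the wrapper's base offset.
fn absolute_offset(params: &Params, internal_offset: u64) -> u64 {
    params.base_offset + internal_offset
}

// If Range mode accepted base-offset-adjusted bounds, the wrapper
// would translate them back into the allocator's internal 0-based
// space before calling into C (hypothetical helper).
fn to_internal_range(params: &Params, r: Range<u64>) -> Range<u64> {
    (r.start - params.base_offset)..(r.end - params.base_offset)
}

fn main() {
    let p = Params { base_offset: 0x1000 };
    assert_eq!(absolute_offset(&p, 0x200), 0x1200);
    assert_eq!(to_internal_range(&p, 0x1100..0x1400), 0x100..0x400);
}
```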
2. **Range validation is incomplete:** The `alloc_blocks` method checks `range.is_empty()` for `Range` mode but doesn't validate that the range falls within the allocator's `[0, size)` bounds. The C side may handle this, but a Rust-side bounds check would provide a clearer error:
```rust
if let GpuBuddyAllocMode::Range(range) = &mode {
if range.is_empty() {
Err::<(), Error>(EINVAL)?;
}
}
```
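For comparison, a self-contained sketch of the fuller bounds check suggested above, with both the emptiness test and the `[0, size)` containment test. Names and the error type are illustrative, not the kernel's:

```rust
use std::ops::Range;

#[derive(Debug, PartialEq)]
enum AllocError {
    Inval, // stand-in for EINVAL
}

// Validate a Range-mode request against the allocator's total size:
// reject empty ranges and ranges extending past the end.
fn validate_range(range: &Range<u64>, size: u64) -> Result<(), AllocError> {
    if range.is_empty() || range.end > size {
        return Err(AllocError::Inval);
    }
    Ok(())
}

fn main() {
    assert_eq!(validate_range(&(0..0x1000), 0x10000), Ok(()));
    assert_eq!(validate_range(&(0x100..0x100), 0x10000), Err(AllocError::Inval)); // empty
    assert_eq!(validate_range(&(0..0x20000), 0x10000), Err(AllocError::Inval)); // past end
}
```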
3. **`try_pin_init!` field reference across init:** In `alloc_blocks`, the `try_pin_init!` block references `buddy` and `list` inside the unnamed `_:` initializer field:
```rust
try_pin_init!(AllocatedBlocks {
buddy: buddy_arc,
list <- CListHead::new(),
_: {
...
let guard = buddy.lock();
...
list.as_raw(),
...
}
})
```
This relies on the `try_pin_init!` macro making previously-initialized fields available by name in subsequent field initializers. This is a known feature of the kernel's pin-init macros, but it's subtle — a brief comment noting this would help future readers.
4. **`gpu_buddy_block_size` helper is unused:** The C helper `rust_helper_gpu_buddy_block_size` is defined in `rust/helpers/gpu.c` but never called from Rust. The Rust `AllocatedBlock::size()` computes the size itself:
```rust
pub fn size(&self) -> u64 {
(self.blocks.buddy.params.chunk_size.as_usize() as u64) << self.this.order()
}
```
This duplicates the logic in `gpu_buddy_block_size()`. Consider either using the C helper (passing the `gpu_buddy*` pointer) for consistency, or removing the unused helper to avoid dead code. Using the C helper would also be more robust if the size computation ever changes in the C side.
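The relation both sides compute is the standard buddy one, `size = chunk_size << order`; a tiny standalone sketch (values illustrative) shows why the duplication is currently harmless but still a single point worth centralizing:

```rust
// Buddy block size relation shared by the C helper and the Rust
// wrapper: a block of order n covers 2^n minimum-size chunks.
fn block_size(chunk_size: u64, order: u32) -> u64 {
    chunk_size << order
}

fn main() {
    // With a 4 KiB minimum chunk:
    assert_eq!(block_size(4096, 0), 4096);   // order 0: one chunk
    assert_eq!(block_size(4096, 3), 32768);  // order 3: 8 chunks
    // Each increment of order doubles the block size.
    assert_eq!(block_size(4096, 4), 2 * block_size(4096, 3));
}
```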
5. **`Block::offset` takes `const` pointer, `Block::order` does not:** Looking at the C header:
- `gpu_buddy_block_offset(const struct gpu_buddy_block *block)` — const
- `gpu_buddy_block_order(struct gpu_buddy_block *block)` — non-const
The Rust helpers match this, but both are logically read-only. The non-const on `order` in the C side looks like an oversight in the C header (since it only reads `block->header`), but that's not something for this patch to fix.
6. **MAINTAINERS dual-listing of `rust/helpers/gpu.c`:** The patch adds `F: rust/helpers/gpu.c`, `F: rust/kernel/gpu.rs`, and `F: rust/kernel/gpu/` to both the "DRM DRIVERS AND GPU SUBSYSTEM - RUST" section and the "GPU BUDDY ALLOCATOR" section. This dual listing is intentional (both groups should review changes), but worth confirming this is the desired arrangement.
7. **`#[cfg(CONFIG_GPU_BUDDY = "y")]` vs `IS_ENABLED`:** The Rust module uses `CONFIG_GPU_BUDDY = "y"` (built-in only), while the C bindings_helper.h uses `IS_ENABLED(CONFIG_GPU_BUDDY)` (built-in or module). If `CONFIG_GPU_BUDDY` is ever `=m`, the C constants would be compiled but the Rust module wouldn't. This is probably fine for now (GPU_BUDDY is likely always `=y` when selected by a built-in driver like nova-core), but the inconsistency is worth noting.
8. **Minor nit — comment style:** The `GpuBuddyAllocMode` method comments use `//` instead of `///`:
```rust
// Returns the C flags corresponding to the allocation mode.
fn as_flags(&self) -> usize {
```
Since these are private methods, `//` is fine, but `///` would generate internal docs that could be useful during development.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH v14 2/2] MAINTAINERS: gpu: buddy: Update reviewer
2026-03-20 4:57 [PATCH v14 0/2] Rust GPU buddy allocator bindings Joel Fernandes
2026-03-20 4:57 ` [PATCH v14 1/2] rust: gpu: Add " Joel Fernandes
@ 2026-03-20 4:57 ` Joel Fernandes
2026-03-20 12:16 ` Alexandre Courbot
2026-03-21 17:56 ` Claude review: " Claude Code Review Bot
2026-03-21 17:56 ` Claude review: Rust GPU buddy allocator bindings Claude Code Review Bot
2 siblings, 2 replies; 15+ messages in thread
From: Joel Fernandes @ 2026-03-20 4:57 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, rust-for-linux, Nikola Djukic, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
Jonathan Corbet, Alex Deucher, Christian König, Jani Nikula,
Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, Huang Rui,
Matthew Auld, Matthew Brost, Lucas De Marchi,
Thomas Hellström, Helge Deller, Alex Gaynor, Boqun Feng,
John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
Christian Koenig mentioned he'd like to step down from the reviewer
role for the GPU buddy allocator. Joel Fernandes is stepping in as
reviewer with agreement from Matthew Auld and Arun Pravin.
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
MAINTAINERS | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index cd9505d3be60..3353cbf98be1 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8946,7 +8946,7 @@ F: include/drm/ttm/
GPU BUDDY ALLOCATOR
M: Matthew Auld <matthew.auld@intel.com>
M: Arun Pravin <arunpravin.paneerselvam@amd.com>
-R: Christian Koenig <christian.koenig@amd.com>
+R: Joel Fernandes <joelagnelf@nvidia.com>
L: dri-devel@lists.freedesktop.org
S: Maintained
T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
--
2.34.1
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [PATCH v14 2/2] MAINTAINERS: gpu: buddy: Update reviewer
2026-03-20 4:57 ` [PATCH v14 2/2] MAINTAINERS: gpu: buddy: Update reviewer Joel Fernandes
@ 2026-03-20 12:16 ` Alexandre Courbot
2026-03-21 0:06 ` Joel Fernandes
2026-03-21 17:56 ` Claude review: " Claude Code Review Bot
1 sibling, 1 reply; 15+ messages in thread
From: Alexandre Courbot @ 2026-03-20 12:16 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Andrea Righi, Andy Ritger, Zhi Wang, Balbir Singh,
Philipp Stanner, Elle Rhumsaa, alexeyi, Eliot Courtney, joel,
linux-doc, amd-gfx, intel-gfx, intel-xe, linux-fbdev
On Fri Mar 20, 2026 at 1:57 PM JST, Joel Fernandes wrote:
> Christian Koenig mentioned he'd like to step down from the reviewer
> role for the GPU buddy allocator. Joel Fernandes is stepping in as
> reviewer with agreement from Matthew Auld and Arun Pravin.
>
> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
This is missing the Acked-bys you collected on v13.
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v14 2/2] MAINTAINERS: gpu: buddy: Update reviewer
2026-03-20 12:16 ` Alexandre Courbot
@ 2026-03-21 0:06 ` Joel Fernandes
0 siblings, 0 replies; 15+ messages in thread
From: Joel Fernandes @ 2026-03-21 0:06 UTC (permalink / raw)
To: Alexandre Courbot
Cc: linux-kernel@vger.kernel.org, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel@lists.freedesktop.org,
rust-for-linux@vger.kernel.org, Nikola Djukic, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
Jonathan Corbet, Alex Deucher, Christian König, Jani Nikula,
Joonas Lahtinen, Vivi Rodrigo, Tvrtko Ursulin, Rui Huang,
Matthew Auld, Matthew Brost, Lucas De Marchi,
Thomas Hellström, Helge Deller, Alex Gaynor, Boqun Feng,
John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Andrea Righi, Andy Ritger, Zhi Wang, Balbir Singh,
Philipp Stanner, Elle Rhumsaa, Alexey Ivanov, Eliot Courtney,
joel@joelfernandes.org, linux-doc@vger.kernel.org,
amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
intel-xe@lists.freedesktop.org, linux-fbdev@vger.kernel.org
> On Mar 20, 2026, at 8:16 AM, Alexandre Courbot <acourbot@nvidia.com> wrote:
>
> On Fri Mar 20, 2026 at 1:57 PM JST, Joel Fernandes wrote:
>> Christian Koenig mentioned he'd like to step down from the reviewer
>> role for the GPU buddy allocator. Joel Fernandes is stepping in as
>> reviewer with agreement from Matthew Auld and Arun Pravin.
>>
>> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
>
> This is missing the Acked-bys you collected on v13.
Thank you, would Danilo be willing to add them on apply so I do not need to resend? Acks are from Matthew, Arun and Christian.
^ permalink raw reply [flat|nested] 15+ messages in thread
* Claude review: MAINTAINERS: gpu: buddy: Update reviewer
2026-03-20 4:57 ` [PATCH v14 2/2] MAINTAINERS: gpu: buddy: Update reviewer Joel Fernandes
2026-03-20 12:16 ` Alexandre Courbot
@ 2026-03-21 17:56 ` Claude Code Review Bot
1 sibling, 0 replies; 15+ messages in thread
From: Claude Code Review Bot @ 2026-03-21 17:56 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Straightforward reviewer swap — Christian König stepping down, Joel Fernandes stepping in. The commit message mentions agreement from Matthew Auld and Arun Pravin (the maintainers).
No technical issues. It would be good to have explicit Acked-by tags from Christian König (confirming he wants to step down) and ideally from Arun Pravin / Matthew Auld, though these may have been given on prior versions.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 15+ messages in thread
* Claude review: Rust GPU buddy allocator bindings
2026-03-20 4:57 [PATCH v14 0/2] Rust GPU buddy allocator bindings Joel Fernandes
2026-03-20 4:57 ` [PATCH v14 1/2] rust: gpu: Add " Joel Fernandes
2026-03-20 4:57 ` [PATCH v14 2/2] MAINTAINERS: gpu: buddy: Update reviewer Joel Fernandes
@ 2026-03-21 17:56 ` Claude Code Review Bot
2 siblings, 0 replies; 15+ messages in thread
From: Claude Code Review Bot @ 2026-03-21 17:56 UTC (permalink / raw)
To: dri-devel-reviews
Overall Series Review
Subject: Rust GPU buddy allocator bindings
Author: Joel Fernandes <joelagnelf@nvidia.com>
Patches: 2
Reviewed: 2026-03-22T03:56:35.891654
---
This is v14 of the Rust GPU buddy allocator bindings, a 2-patch series adding safe Rust abstractions over the kernel's `gpu_buddy` allocator plus a MAINTAINERS reviewer update. The series is well-structured and has clearly benefited from extensive review iterations. The Rust API design is clean, using an enum for mutually exclusive allocation modes and a separate flags type for orthogonal modifiers. The locking strategy (Mutex wrapping all allocator access) is correct per the C API requirements.
**Key dependency:** The patch depends on `interop::list::CListHead` and the `clist_create!` macro, which are not yet in the tree. This is noted in the cover letter but should be tracked carefully for merge ordering.
**Overall assessment:** The series looks good for its current state. A few minor observations follow.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH v12 0/1] Rust GPU buddy allocator bindings
@ 2026-03-08 18:04 Joel Fernandes
2026-03-08 18:04 ` [PATCH v12 1/1] rust: gpu: Add " Joel Fernandes
0 siblings, 1 reply; 15+ messages in thread
From: Joel Fernandes @ 2026-03-08 18:04 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
This patch adds safe Rust abstractions over the Linux kernel's GPU buddy
allocator for physical memory management. The prerequisite infrastructure
patches (DRM buddy code movement and the uninitialized buddy fix) have been
absorbed into upstream -next, so this is now a standalone patch.
The series along with all dependencies, including clist and nova-core mm
patches, are available at:
git://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git (tag: nova-mm-current-3-8)
This patch is also here:
https://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git/commit/?id=gpu-buddy-bindings-v12
Change log:
Changes from v11 to v12:
- Rebased on linux-next. This is now a standalone single patch as dependencies
  are absorbed (but note that clist is still a prerequisite).
- Redesigned allocation API (Alexandre Courbot) for better Rust ergonomics.
- Split single long example into 4 self-contained examples (Alexandre Courbot).
- Several safety and invariant comment changes (Danilo).
- MAINTAINERS changes (Arun, Matthew, Danilo, Dave).
- Fixed `#[cfg(CONFIG_GPU_BUDDY)]` to `#[cfg(CONFIG_GPU_BUDDY = "y")]` (Danilo Krummrich).
- Updated `ffi::clist::CListHead` to `interop::list::CListHead`.
Changes from v10 to v11:
- Dropped "rust: ffi: Convert pub use to pub mod and create ffi module" patch;
the ffi module restructuring will go through a different path.
- Dropped "rust: clist: Add support to interface with C linked lists" patch;
the clist module will be submitted separately.
- Dropped "nova-core: Kconfig: Sort select statements alphabetically" cosmetic
patch.
- Patches 1-3 (DRM buddy movement and fix) are included as reference only;
they are already being pulled into upstream via drm-misc-next.
- Removed clist patches as those can go in independently (Alice).
- Moved the Kconfig GPU_BUDDY selection patch to nova-core mm series to enable
it when it is actually used.
- Various nits to comments, etc.
Changes from v9 to v10:
- Absorbed the DRM buddy code movement patches into this series as patches 1-2.
Dave Airlie reworked these into two parts for better git history.
- Added "gpu: Fix uninitialized buddy for built-in drivers" fix by Koen Koning,
using subsys_initcall instead of module_init to fix NULL pointer dereference
when built-in drivers use the buddy allocator before initialization.
- Added "rust: ffi: Convert pub use to pub mod and create ffi module" to prepare
the ffi module for hosting clist as a sub-module.
- Moved clist from rust/kernel/clist.rs to rust/kernel/ffi/.
- Added "nova-core: Kconfig: Sort select statements alphabetically" (Danilo).
Changes from v8 to v9:
- Updated nova-core Kconfig patch: addressed sorting of Kconfig options.
- Added Daniel Almeida's Reviewed-by tag to clist patch.
- Minor refinements to GPU buddy bindings.
Changes from v7 to v8:
- Added nova-core Kconfig patch to select GPU_BUDDY for VRAM allocation.
- Various changes suggested by Danilo Krummrich, Gary Guo, and Daniel Almeida.
- Added Acked-by: Gary Guo for clist patch.
Changes from v6 to v7:
- Major restructuring: split the large 26-patch v6 RFC series. v7 only contains
the Rust infrastructure patches (clist + GPU buddy bindings), extracted from
the full nova-core MM series. The nova-core MM patches follow separately.
- Rebased on linux-next.
Changes from v5 to v6:
- Rebased on drm-rust-kernel/drm-rust-next.
- Expanded from 6 to 26 patches with full nova-core MM infrastructure including
page table walker, VMM, BAR1 user interface, TLB flush, and GpuMm manager.
Changes from v4 to v5:
- Added PRAMIN aperture support with documentation and self-tests.
- Improved buddy allocator bindings (fewer lines of code).
- Based on drm-rust-next instead of linux-next.
Changes from v3 to v4:
- Combined the clist and DRM buddy series into a single coherent series.
- Added DRM buddy allocator movement from drivers/gpu/drm/ up to drivers/gpu/,
renaming API from drm_buddy to gpu_buddy.
- Added Rust bindings for the GPU buddy allocator.
Changes from v2 to v3:
- Squashed 3 clist patches into one due to inter-dependencies.
- Changed Clist to Clist<'a, T> using const generic offset (Alex Courbot).
- Simplified C helpers to only list_add_tail (Alex Courbot, John Hubbard).
- Added init_list_head() Rust function (Alex Courbot).
- Added FusedIterator, PartialEq/Eq impls.
- Added MAINTAINERS entry (Miguel Ojeda).
Changes from v1 (RFC) to v2:
- Dropped DRM buddy allocator patches; series focuses solely on clist module.
- Dropped sample modules, replaced with doctests.
- Added proper lifetime management similar to scatterlist.
- Split clist into 3 separate patches.
Link to v11: https://lore.kernel.org/all/20260224224005.3232841-1-joelagnelf@nvidia.com/
Joel Fernandes (1):
rust: gpu: Add GPU buddy allocator bindings
MAINTAINERS | 6 +-
rust/bindings/bindings_helper.h | 11 +
rust/helpers/gpu.c | 23 ++
rust/helpers/helpers.c | 1 +
rust/kernel/gpu/buddy.rs | 611 ++++++++++++++++++++++++++++++++
rust/kernel/gpu/mod.rs | 5 +
rust/kernel/lib.rs | 2 +
7 files changed, 658 insertions(+), 1 deletion(-)
create mode 100644 rust/helpers/gpu.c
create mode 100644 rust/kernel/gpu/buddy.rs
create mode 100644 rust/kernel/gpu/mod.rs
--
2.34.1
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH v12 1/1] rust: gpu: Add GPU buddy allocator bindings
2026-03-08 18:04 [PATCH v12 0/1] " Joel Fernandes
@ 2026-03-08 18:04 ` Joel Fernandes
2026-03-08 21:42 ` Claude review: " Claude Code Review Bot
0 siblings, 1 reply; 15+ messages in thread
From: Joel Fernandes @ 2026-03-08 18:04 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
Add safe Rust abstractions over the Linux kernel's GPU buddy
allocator for physical memory management. The GPU buddy allocator
implements a binary buddy system for managing GPU physical memory;
nova-core will use it for its physical memory allocation.
Christian Koenig mentioned he'd like to step down from the reviewer role for
GPU buddy, so the MAINTAINERS entry is updated accordingly. Arun and Matthew
agree with the modified entry.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
MAINTAINERS | 6 +-
rust/bindings/bindings_helper.h | 11 +
rust/helpers/gpu.c | 23 ++
rust/helpers/helpers.c | 1 +
rust/kernel/gpu/buddy.rs | 611 ++++++++++++++++++++++++++++++++
rust/kernel/gpu/mod.rs | 5 +
rust/kernel/lib.rs | 2 +
7 files changed, 658 insertions(+), 1 deletion(-)
create mode 100644 rust/helpers/gpu.c
create mode 100644 rust/kernel/gpu/buddy.rs
create mode 100644 rust/kernel/gpu/mod.rs
diff --git a/MAINTAINERS b/MAINTAINERS
index 4c66f8261ff2..b2600dd05fc2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8513,7 +8513,9 @@ T: git https://gitlab.freedesktop.org/drm/rust/kernel.git
F: drivers/gpu/drm/nova/
F: drivers/gpu/drm/tyr/
F: drivers/gpu/nova-core/
+F: rust/helpers/gpu.c
F: rust/kernel/drm/
+F: rust/kernel/gpu/
DRM DRIVERS FOR ALLWINNER A10
M: Chen-Yu Tsai <wens@kernel.org>
@@ -8926,7 +8928,7 @@ F: include/drm/ttm/
GPU BUDDY ALLOCATOR
M: Matthew Auld <matthew.auld@intel.com>
M: Arun Pravin <arunpravin.paneerselvam@amd.com>
-R: Christian Koenig <christian.koenig@amd.com>
+R: Joel Fernandes <joelagnelf@nvidia.com>
L: dri-devel@lists.freedesktop.org
S: Maintained
T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
@@ -8935,6 +8937,8 @@ F: drivers/gpu/buddy.c
F: drivers/gpu/tests/gpu_buddy_test.c
F: include/linux/gpu_buddy.h
F: include/drm/drm_buddy.h
+F: rust/helpers/gpu.c
+F: rust/kernel/gpu/
DRM AUTOMATED TESTING
M: Helen Koike <helen.fornazier@gmail.com>
diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index 083cc44aa952..dbb765a9fdbd 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -29,6 +29,7 @@
#include <linux/hrtimer_types.h>
#include <linux/acpi.h>
+#include <linux/gpu_buddy.h>
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
@@ -146,6 +147,16 @@ const vm_flags_t RUST_CONST_HELPER_VM_MIXEDMAP = VM_MIXEDMAP;
const vm_flags_t RUST_CONST_HELPER_VM_HUGEPAGE = VM_HUGEPAGE;
const vm_flags_t RUST_CONST_HELPER_VM_NOHUGEPAGE = VM_NOHUGEPAGE;
+#if IS_ENABLED(CONFIG_GPU_BUDDY)
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_RANGE_ALLOCATION = GPU_BUDDY_RANGE_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_TOPDOWN_ALLOCATION = GPU_BUDDY_TOPDOWN_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CONTIGUOUS_ALLOCATION =
+ GPU_BUDDY_CONTIGUOUS_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CLEAR_ALLOCATION = GPU_BUDDY_CLEAR_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CLEARED = GPU_BUDDY_CLEARED;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_TRIM_DISABLE = GPU_BUDDY_TRIM_DISABLE;
+#endif
+
#if IS_ENABLED(CONFIG_ANDROID_BINDER_IPC_RUST)
#include "../../drivers/android/binder/rust_binder.h"
#include "../../drivers/android/binder/rust_binder_events.h"
diff --git a/rust/helpers/gpu.c b/rust/helpers/gpu.c
new file mode 100644
index 000000000000..38b1a4e6bef8
--- /dev/null
+++ b/rust/helpers/gpu.c
@@ -0,0 +1,23 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/gpu_buddy.h>
+
+#ifdef CONFIG_GPU_BUDDY
+
+__rust_helper u64 rust_helper_gpu_buddy_block_offset(const struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_offset(block);
+}
+
+__rust_helper unsigned int rust_helper_gpu_buddy_block_order(struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_order(block);
+}
+
+__rust_helper u64 rust_helper_gpu_buddy_block_size(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_size(mm, block);
+}
+
+#endif /* CONFIG_GPU_BUDDY */
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 724fcb8240ac..a53929ce52a3 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -32,6 +32,7 @@
#include "err.c"
#include "irq.c"
#include "fs.c"
+#include "gpu.c"
#include "io.c"
#include "jump_label.c"
#include "kunit.c"
diff --git a/rust/kernel/gpu/buddy.rs b/rust/kernel/gpu/buddy.rs
new file mode 100644
index 000000000000..082dc79ab247
--- /dev/null
+++ b/rust/kernel/gpu/buddy.rs
@@ -0,0 +1,611 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! GPU buddy allocator bindings.
+//!
+//! C header: [`include/linux/gpu_buddy.h`](srctree/include/linux/gpu_buddy.h)
+//!
+//! This module provides Rust abstractions over the Linux kernel's GPU buddy
+//! allocator, which implements a binary buddy memory allocator.
+//!
+//! The buddy allocator manages a contiguous address space and allocates blocks
+//! in power-of-two sizes, useful for GPU physical memory management.
+//!
+//! # Examples
+//!
+//! Create a buddy allocator and perform a basic range allocation:
+//!
+//! ```
+//! use kernel::{
+//! gpu::buddy::{GpuBuddy, GpuBuddyAllocMode, GpuBuddyAllocFlags, GpuBuddyParams},
+//! prelude::*,
+//! ptr::Alignment,
+//! sizes::*, //
+//! };
+//!
+//! // Create a 1GB buddy allocator with 4KB minimum chunk size.
+//! let buddy = GpuBuddy::new(GpuBuddyParams {
+//! base_offset: 0,
+//! physical_memory_size: SZ_1G as u64,
+//! chunk_size: SZ_4K,
+//! })?;
+//!
+//! assert_eq!(buddy.size(), SZ_1G as u64);
+//! assert_eq!(buddy.chunk_size(), SZ_4K);
+//! let initial_free = buddy.free_memory();
+//!
+//! // Allocate 16MB, results in a single 16MB block at offset 0.
+//! let allocated = KBox::pin_init(
+//! buddy.alloc_blocks(
+//! GpuBuddyAllocMode::Range { start: 0, end: 0 },
+//! SZ_16M,
+//! Alignment::new::<SZ_16M>(),
+//! GpuBuddyFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//! assert_eq!(buddy.free_memory(), initial_free - SZ_16M as u64);
+//!
+//! let block = allocated.iter().next().expect("expected one block");
+//! assert_eq!(block.offset(), 0);
+//! assert_eq!(block.order(), 12); // 2^12 pages = 16MB
+//! assert_eq!(block.size(), SZ_16M);
+//!
+//! // Dropping the allocation returns the memory to the buddy allocator.
+//! drop(allocated);
+//! assert_eq!(buddy.free_memory(), initial_free);
+//! # Ok::<(), Error>(())
+//! ```
+//!
+//! Top-down allocation allocates from the highest addresses:
+//!
+//! ```
+//! # use kernel::{
+//! # gpu::buddy::{GpuBuddy, GpuBuddyAllocMode, GpuBuddyAllocFlags, GpuBuddyParams},
+//! # prelude::*,
+//! # ptr::Alignment,
+//! # sizes::*, //
+//! # };
+//! # let buddy = GpuBuddy::new(GpuBuddyParams {
+//! # base_offset: 0,
+//! # physical_memory_size: SZ_1G as u64,
+//! # chunk_size: SZ_4K,
+//! # })?;
+//! # let initial_free = buddy.free_memory();
+//! let topdown = KBox::pin_init(
+//! buddy.alloc_blocks(
+//! GpuBuddyAllocMode::TopDown,
+//! SZ_16M,
+//! Alignment::new::<SZ_16M>(),
+//! GpuBuddyFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//! assert_eq!(buddy.free_memory(), initial_free - SZ_16M as u64);
+//!
+//! let block = topdown.iter().next().expect("expected one block");
+//! assert_eq!(block.offset(), (SZ_1G - SZ_16M) as u64);
+//! assert_eq!(block.order(), 12);
+//! assert_eq!(block.size(), SZ_16M);
+//!
+//! // Dropping the allocation returns the memory to the buddy allocator.
+//! drop(topdown);
+//! assert_eq!(buddy.free_memory(), initial_free);
+//! # Ok::<(), Error>(())
+//! ```
+//!
+//! Non-contiguous allocation can fill fragmented memory by returning multiple
+//! blocks:
+//!
+//! ```
+//! # use kernel::{
+//! # gpu::buddy::{
+//! # GpuBuddy, GpuBuddyAllocFlags, GpuBuddyAllocMode, GpuBuddyParams,
+//! # },
+//! # prelude::*,
+//! # ptr::Alignment,
+//! # sizes::*, //
+//! # };
+//! # let buddy = GpuBuddy::new(GpuBuddyParams {
+//! # base_offset: 0,
+//! # physical_memory_size: SZ_1G as u64,
+//! # chunk_size: SZ_4K,
+//! # })?;
+//! # let initial_free = buddy.free_memory();
+//! // Create fragmentation by allocating 4MB blocks at [0,4M) and [8M,12M).
+//! let frag1 = KBox::pin_init(
+//! buddy.alloc_blocks(
+//! GpuBuddyAllocMode::Range { start: 0, end: SZ_4M as u64 },
+//! SZ_4M,
+//! Alignment::new::<SZ_4M>(),
+//! GpuBuddyFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//! assert_eq!(buddy.free_memory(), initial_free - SZ_4M as u64);
+//!
+//! let frag2 = KBox::pin_init(
+//! buddy.alloc_blocks(
+//! GpuBuddyAllocMode::Range {
+//! start: SZ_8M as u64,
+//! end: (SZ_8M + SZ_4M) as u64,
+//! },
+//! SZ_4M,
+//! Alignment::new::<SZ_4M>(),
+//! GpuBuddyFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//! assert_eq!(buddy.free_memory(), initial_free - SZ_8M as u64);
+//!
+//! // Allocate 8MB, this returns 2 blocks from the holes.
+//! let fragmented = KBox::pin_init(
+//! buddy.alloc_blocks(
+//! GpuBuddyAllocMode::Range { start: 0, end: SZ_16M as u64 },
+//! SZ_8M,
+//! Alignment::new::<SZ_4M>(),
+//! GpuBuddyFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//! assert_eq!(buddy.free_memory(), initial_free - SZ_16M as u64);
+//!
+//! let (mut count, mut total) = (0u32, 0usize);
+//! for block in fragmented.iter() {
+//! assert_eq!(block.size(), SZ_4M);
+//! total += block.size();
+//! count += 1;
+//! }
+//! assert_eq!(total, SZ_8M);
+//! assert_eq!(count, 2);
+//! # Ok::<(), Error>(())
+//! ```
+//!
+//! Contiguous allocation fails when only fragmented space is available:
+//!
+//! ```
+//! # use kernel::{
+//! # gpu::buddy::{
+//! # GpuBuddy, GpuBuddyAllocFlag, GpuBuddyAllocMode, GpuBuddyParams,
+//! # },
+//! # prelude::*,
+//! # ptr::Alignment,
+//! # sizes::*, //
+//! # };
+//! // Create a small 16MB buddy allocator with fragmented memory.
+//! let small = GpuBuddy::new(GpuBuddyParams {
+//! base_offset: 0,
+//! physical_memory_size: SZ_16M as u64,
+//! chunk_size: SZ_4K,
+//! })?;
+//!
+//! let _hole1 = KBox::pin_init(
+//! small.alloc_blocks(
+//! GpuBuddyAllocMode::Range { start: 0, end: SZ_4M as u64 },
+//! SZ_4M,
+//! Alignment::new::<SZ_4M>(),
+//! GpuBuddyFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//!
+//! let _hole2 = KBox::pin_init(
+//! small.alloc_blocks(
+//! GpuBuddyAllocMode::Range {
+//! start: SZ_8M as u64,
+//! end: (SZ_8M + SZ_4M) as u64,
+//! },
+//! SZ_4M,
+//! Alignment::new::<SZ_4M>(),
+//! GpuBuddyFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//!
+//! // 8MB contiguous should fail, only two non-contiguous 4MB holes exist.
+//! let result = KBox::pin_init(
+//! small.alloc_blocks(
+//! GpuBuddyAllocMode::Simple,
+//! SZ_8M,
+//! Alignment::new::<SZ_4M>(),
+//! GpuBuddyAllocFlag::Contiguous.into(),
+//! ),
+//! GFP_KERNEL,
+//! );
+//! assert!(result.is_err());
+//! # Ok::<(), Error>(())
+//! ```
+
+use crate::{
+ bindings,
+ clist_create,
+ error::to_result,
+ interop::list::CListHead,
+ new_mutex,
+ prelude::*,
+ ptr::Alignment,
+ sync::{
+ lock::mutex::MutexGuard,
+ Arc,
+ Mutex, //
+ },
+ types::Opaque, //
+};
+
+/// Allocation mode for the GPU buddy allocator.
+///
+/// The mode determines the primary allocation strategy. Modes are mutually
+/// exclusive: an allocation is either simple, range-constrained, or top-down.
+///
+/// Orthogonal modifier flags (e.g., contiguous, clear) are specified separately
+/// via [`GpuBuddyAllocFlags`].
+#[derive(Copy, Clone, Debug, PartialEq, Eq)]
+pub enum GpuBuddyAllocMode {
+ /// Simple allocation without constraints.
+ Simple,
+ /// Range-based allocation between `start` and `end` addresses.
+ Range {
+ /// Start of the allocation range.
+ start: u64,
+ /// End of the allocation range.
+ end: u64,
+ },
+ /// Allocate from top of address space downward.
+ TopDown,
+}
+
+impl GpuBuddyAllocMode {
+ // Returns the C flags corresponding to the allocation mode.
+ fn into_flags(self) -> usize {
+ match self {
+ Self::Simple => 0,
+ Self::Range { .. } => bindings::GPU_BUDDY_RANGE_ALLOCATION as usize,
+ Self::TopDown => bindings::GPU_BUDDY_TOPDOWN_ALLOCATION as usize,
+ }
+ }
+
+ // Extracts the range start/end, defaulting to (0, 0) for non-range modes.
+ fn range(self) -> (u64, u64) {
+ match self {
+ Self::Range { start, end } => (start, end),
+ _ => (0, 0),
+ }
+ }
+}
+
+crate::impl_flags!(
+ /// Modifier flags for GPU buddy allocation.
+ ///
+ /// These flags can be combined with any [`GpuBuddyAllocMode`] to control
+ /// additional allocation behavior.
+ #[derive(Clone, Copy, Default, PartialEq, Eq)]
+ pub struct GpuBuddyAllocFlags(u32);
+
+ /// Individual modifier flag for GPU buddy allocation.
+ #[derive(Clone, Copy, PartialEq, Eq)]
+ pub enum GpuBuddyAllocFlag {
+ /// Allocate physically contiguous blocks.
+ Contiguous = bindings::GPU_BUDDY_CONTIGUOUS_ALLOCATION as u32,
+
+ /// Request allocation from cleared (zeroed) memory.
+ Clear = bindings::GPU_BUDDY_CLEAR_ALLOCATION as u32,
+
+ /// Disable trimming of partially used blocks.
+ TrimDisable = bindings::GPU_BUDDY_TRIM_DISABLE as u32,
+ }
+);
+
+/// Parameters for creating a GPU buddy allocator.
+pub struct GpuBuddyParams {
+ /// Base offset (in bytes) where the managed memory region starts.
+ /// Allocations will be offset by this value.
+ pub base_offset: u64,
+ /// Total physical memory size (in bytes) managed by the allocator.
+ pub physical_memory_size: u64,
+ /// Minimum allocation unit / chunk size (in bytes), must be >= 4KB.
+ pub chunk_size: usize,
+}
+
+/// Inner structure holding the actual buddy allocator.
+///
+/// # Synchronization
+///
+/// The C `gpu_buddy` API requires synchronization (see `include/linux/gpu_buddy.h`).
+/// [`GpuBuddyGuard`] ensures that the lock is held for all
+/// allocator and free operations, preventing races between concurrent allocations
+/// and the freeing that occurs when [`AllocatedBlocks`] is dropped.
+///
+/// # Invariants
+///
+/// The inner [`Opaque`] contains an initialized buddy allocator.
+#[pin_data(PinnedDrop)]
+struct GpuBuddyInner {
+ #[pin]
+ inner: Opaque<bindings::gpu_buddy>,
+
+ // TODO: Replace `Mutex<()>` with `Mutex<Opaque<..>>` once `Mutex::new()`
+ // accepts `impl PinInit<T>`.
+ #[pin]
+ lock: Mutex<()>,
+ /// Cached creation parameters (do not change after init).
+ params: GpuBuddyParams,
+}
+
+impl GpuBuddyInner {
+ /// Create a pin-initializer for the buddy allocator.
+ fn new(params: GpuBuddyParams) -> impl PinInit<Self, Error> {
+ let size = params.physical_memory_size;
+ let chunk_size = params.chunk_size;
+
+ // INVARIANT: `gpu_buddy_init` returns 0 on success, at which point the
+ // `gpu_buddy` structure is initialized and ready for use with all
+ // `gpu_buddy_*` APIs. `try_pin_init!` only completes if all fields succeed,
+ // so the invariant holds when construction finishes.
+ try_pin_init!(Self {
+ inner <- Opaque::try_ffi_init(|ptr| {
+ // SAFETY: `ptr` points to valid uninitialized memory from the pin-init
+ // infrastructure. `gpu_buddy_init` will initialize the structure.
+ to_result(unsafe { bindings::gpu_buddy_init(ptr, size, chunk_size as u64) })
+ }),
+ lock <- new_mutex!(()),
+ params,
+ })
+ }
+
+ /// Lock the mutex and return a guard for accessing the allocator.
+ fn lock(&self) -> GpuBuddyGuard<'_> {
+ GpuBuddyGuard {
+ inner: self,
+ _guard: self.lock.lock(),
+ }
+ }
+}
+
+#[pinned_drop]
+impl PinnedDrop for GpuBuddyInner {
+ fn drop(self: Pin<&mut Self>) {
+ let guard = self.lock();
+
+ // SAFETY: Per the type invariant, `inner` contains an initialized
+ // allocator. `guard` provides exclusive access.
+ unsafe {
+ bindings::gpu_buddy_fini(guard.as_raw());
+ }
+ }
+}
+
+// SAFETY: GpuBuddyInner can be sent between threads.
+unsafe impl Send for GpuBuddyInner {}
+
+// SAFETY: `GpuBuddyInner` is `Sync` because `GpuBuddyInner::lock`
+// serializes all access to the C allocator, preventing data races.
+unsafe impl Sync for GpuBuddyInner {}
+
+// Guard that proves the lock is held, enabling access to the allocator.
+// The `_guard` holds the lock for the duration of this guard's lifetime.
+struct GpuBuddyGuard<'a> {
+ inner: &'a GpuBuddyInner,
+ _guard: MutexGuard<'a, ()>,
+}
+
+impl GpuBuddyGuard<'_> {
+ /// Get a raw pointer to the underlying C `gpu_buddy` structure.
+ fn as_raw(&self) -> *mut bindings::gpu_buddy {
+ self.inner.inner.get()
+ }
+}
+
+/// GPU buddy allocator instance.
+///
+/// This structure wraps the C `gpu_buddy` allocator using reference counting.
+/// The allocator is automatically cleaned up when all references are dropped.
+///
+/// Refer to the module-level documentation for usage examples.
+pub struct GpuBuddy(Arc<GpuBuddyInner>);
+
+impl GpuBuddy {
+ /// Create a new buddy allocator.
+ ///
+ /// Creates a buddy allocator that manages a contiguous address space of the given
+ /// size, with the specified minimum allocation unit (chunk_size must be at least 4KB).
+ pub fn new(params: GpuBuddyParams) -> Result<Self> {
+ Ok(Self(Arc::pin_init(GpuBuddyInner::new(params), GFP_KERNEL)?))
+ }
+
+ /// Get the base offset for allocations.
+ pub fn base_offset(&self) -> u64 {
+ self.0.params.base_offset
+ }
+
+ /// Get the chunk size (minimum allocation unit).
+ pub fn chunk_size(&self) -> usize {
+ self.0.params.chunk_size
+ }
+
+ /// Get the total managed size.
+ pub fn size(&self) -> u64 {
+ self.0.params.physical_memory_size
+ }
+
+ /// Get the available (free) memory in bytes.
+ pub fn free_memory(&self) -> u64 {
+ let guard = self.0.lock();
+
+ // SAFETY: Per the type invariant, `inner` contains an initialized allocator.
+ // `guard` provides exclusive access.
+ unsafe { (*guard.as_raw()).avail }
+ }
+
+ /// Allocate blocks from the buddy allocator.
+ ///
+ /// Returns a pin-initializer for [`AllocatedBlocks`].
+ ///
+ /// Takes `&self` instead of `&mut self` because the internal [`Mutex`] provides
+ /// synchronization - no external `&mut` exclusivity needed.
+ pub fn alloc_blocks(
+ &self,
+ mode: GpuBuddyAllocMode,
+ size: usize,
+ min_block_size: Alignment,
+ flags: GpuBuddyAllocFlags,
+ ) -> impl PinInit<AllocatedBlocks, Error> {
+ let buddy_arc = Arc::clone(&self.0);
+ let (start, end) = mode.range();
+ let mode_flags = mode.into_flags();
+ let modifier_flags = u32::from(flags) as usize;
+
+ // Create pin-initializer that initializes list and allocates blocks.
+ try_pin_init!(AllocatedBlocks {
+ buddy: buddy_arc,
+ list <- CListHead::new(),
+ _: {
+ // Lock while allocating to serialize with concurrent frees.
+ let guard = buddy.lock();
+
+ // SAFETY: Per the type invariant, `inner` contains an initialized
+ // allocator. `guard` provides exclusive access.
+ to_result(unsafe {
+ bindings::gpu_buddy_alloc_blocks(
+ guard.as_raw(),
+ start,
+ end,
+ size as u64,
+ min_block_size.as_usize() as u64,
+ list.as_raw(),
+ mode_flags | modifier_flags,
+ )
+ })?
+ }
+ })
+ }
+}
+
+/// Allocated blocks from the buddy allocator with automatic cleanup.
+///
+/// This structure owns a list of allocated blocks and ensures they are
+/// automatically freed when dropped. Use `iter()` to iterate over all
+/// allocated blocks.
+///
+/// # Invariants
+///
+/// - `list` is an initialized, valid list head containing allocated blocks.
+#[pin_data(PinnedDrop)]
+pub struct AllocatedBlocks {
+ #[pin]
+ list: CListHead,
+ buddy: Arc<GpuBuddyInner>,
+}
+
+impl AllocatedBlocks {
+ /// Check if the block list is empty.
+ pub fn is_empty(&self) -> bool {
+ // An empty list head points to itself.
+ !self.list.is_linked()
+ }
+
+ /// Iterate over allocated blocks.
+ ///
+ /// Returns an iterator yielding [`AllocatedBlock`] values. Each [`AllocatedBlock`]
+ /// borrows `self` and is only valid for the duration of that borrow.
+ pub fn iter(&self) -> impl Iterator<Item = AllocatedBlock<'_>> + '_ {
+ // SAFETY:
+ // - Per the type invariant, `list` is an initialized sentinel `list_head`
+ // and is not concurrently modified (we hold a `&self` borrow).
+ // - The list contains `gpu_buddy_block` items linked via
+ // `__bindgen_anon_1.link`.
+ // - `Block` is `#[repr(transparent)]` over `gpu_buddy_block`.
+ let clist = clist_create!(unsafe {
+ self.list.as_raw(),
+ Block,
+ bindings::gpu_buddy_block,
+ __bindgen_anon_1.link
+ });
+
+ clist
+ .iter()
+ .map(|this| AllocatedBlock { this, blocks: self })
+ }
+}
+
+#[pinned_drop]
+impl PinnedDrop for AllocatedBlocks {
+ fn drop(self: Pin<&mut Self>) {
+ let guard = self.buddy.lock();
+
+ // SAFETY:
+ // - list is valid per the type's invariants.
+ // - guard provides exclusive access to the allocator.
+ unsafe {
+ bindings::gpu_buddy_free_list(guard.as_raw(), self.list.as_raw(), 0);
+ }
+ }
+}
+
+/// A GPU buddy block.
+///
+/// Transparent wrapper over C `gpu_buddy_block` structure. This type is returned
+/// as references during iteration over [`AllocatedBlocks`].
+///
+/// # Invariants
+///
+/// The inner [`Opaque`] contains a valid, allocated `gpu_buddy_block`.
+#[repr(transparent)]
+struct Block(Opaque<bindings::gpu_buddy_block>);
+
+impl Block {
+ /// Get a raw pointer to the underlying C block.
+ fn as_raw(&self) -> *mut bindings::gpu_buddy_block {
+ self.0.get()
+ }
+
+ /// Get the block's raw offset in the buddy address space (without base offset).
+ fn offset(&self) -> u64 {
+ // SAFETY: `self.as_raw()` is valid per the type's invariants.
+ unsafe { bindings::gpu_buddy_block_offset(self.as_raw()) }
+ }
+
+ /// Get the block order.
+ fn order(&self) -> u32 {
+ // SAFETY: `self.as_raw()` is valid per the type's invariants.
+ unsafe { bindings::gpu_buddy_block_order(self.as_raw()) }
+ }
+}
+
+// SAFETY: `Block` is a wrapper around `gpu_buddy_block` which can be
+// sent across threads safely.
+unsafe impl Send for Block {}
+
+// SAFETY: `Block` is only accessed through shared references after
+// allocation, and thus safe to access concurrently across threads.
+unsafe impl Sync for Block {}
+
+/// A buddy block paired with its owning [`AllocatedBlocks`] context.
+///
+/// Unlike a raw block, which only knows its offset within the buddy address
+/// space, an [`AllocatedBlock`] also has access to the allocator's `base_offset`
+/// and `chunk_size`, enabling it to compute absolute offsets and byte sizes.
+///
+/// Returned by [`AllocatedBlocks::iter()`].
+pub struct AllocatedBlock<'a> {
+ this: &'a Block,
+ blocks: &'a AllocatedBlocks,
+}
+
+impl AllocatedBlock<'_> {
+ /// Get the block's offset in the address space.
+ ///
+ /// Returns the absolute offset including the allocator's base offset.
+ /// This is the actual address to use for accessing the allocated memory.
+ pub fn offset(&self) -> u64 {
+ self.blocks.buddy.params.base_offset + self.this.offset()
+ }
+
+ /// Get the block order (size = chunk_size << order).
+ pub fn order(&self) -> u32 {
+ self.this.order()
+ }
+
+ /// Get the block's size in bytes.
+ pub fn size(&self) -> usize {
+ self.blocks.buddy.params.chunk_size << self.this.order()
+ }
+}
diff --git a/rust/kernel/gpu/mod.rs b/rust/kernel/gpu/mod.rs
new file mode 100644
index 000000000000..8f25e6367edc
--- /dev/null
+++ b/rust/kernel/gpu/mod.rs
@@ -0,0 +1,5 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! GPU subsystem abstractions.
+
+pub mod buddy;
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index bb741f1e0dfd..63e3f656eb6c 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -98,6 +98,8 @@
pub mod firmware;
pub mod fmt;
pub mod fs;
+#[cfg(CONFIG_GPU_BUDDY = "y")]
+pub mod gpu;
#[cfg(CONFIG_I2C = "y")]
pub mod i2c;
pub mod id_pool;
--
2.34.1
^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Claude review: rust: gpu: Add GPU buddy allocator bindings
2026-03-08 18:04 ` [PATCH v12 1/1] rust: gpu: Add " Joel Fernandes
@ 2026-03-08 21:42 ` Claude Code Review Bot
0 siblings, 0 replies; 15+ messages in thread
From: Claude Code Review Bot @ 2026-03-08 21:42 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
**Doc example type name mismatch (compile error)**
The doc examples import `GpuBuddyAllocFlags` but then use the non-existent type `GpuBuddyFlags`. This would cause doctest compilation failures. It appears in examples 1, 2, and 3:
```rust
// buddy.rs line 19 (example 1 import):
gpu::buddy::{GpuBuddy, GpuBuddyAllocMode, GpuBuddyAllocFlags, GpuBuddyParams},
// buddy.rs line 42 (example 1 usage):
GpuBuddyFlags::default(),
```
This should be `GpuBuddyAllocFlags::default()`. The same issue appears in examples 2 (line 79) and 3 (lines 120, 134, 146).
**Misleading `Range { start: 0, end: 0 }` in first example**
In the first doc example (buddy.rs line 39):
```rust
GpuBuddyAllocMode::Range { start: 0, end: 0 },
```
This sets `GPU_BUDDY_RANGE_ALLOCATION` with `end=0`. In the C code at `buddy.c:626`:
```c
end = end - 1;
```
This causes `end` to underflow to `U64_MAX`, making the range effectively unbounded. The example "works" because the allocator is fresh and happens to allocate at offset 0, but the semantics are wrong — a `Range` with `start: 0, end: 0` doesn't restrict the range at all. The example should either use `GpuBuddyAllocMode::Simple` or specify a proper end value like `end: SZ_1G as u64`.
**Unused C helper: `rust_helper_gpu_buddy_block_size`**
In `rust/helpers/gpu.c` (lines 16-20):
```c
__rust_helper u64 rust_helper_gpu_buddy_block_size(struct gpu_buddy *mm,
struct gpu_buddy_block *block)
{
return gpu_buddy_block_size(mm, block);
}
```
This helper is defined but never called from Rust. `AllocatedBlock::size()` (buddy.rs line 608-609) computes the size manually:
```rust
pub fn size(&self) -> usize {
self.blocks.buddy.params.chunk_size << self.this.order()
}
```
This is functionally equivalent and avoids needing the `gpu_buddy*` pointer, so the approach is fine, but the unused helper should be removed to avoid dead code.
**`AllocatedBlock::size()` returns `usize` — potential overflow on 32-bit**
At buddy.rs line 608:
```rust
pub fn size(&self) -> usize {
self.blocks.buddy.params.chunk_size << self.this.order()
}
```
If `chunk_size` is 4KB and `order` is large (the max order is 51 per `GPU_BUDDY_MAX_ORDER`), this would overflow `usize` on 32-bit platforms. While GPU drivers are primarily 64-bit, this could be a latent issue. Consider returning `u64` to match the C type (`gpu_buddy_block_size` returns `u64`).
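The arithmetic behind that concern, as a standalone plain-Rust sketch (the chunk size and max order are taken from the discussion above, not from kernel headers):

```rust
fn main() {
    let chunk_size: u64 = 4096; // 4 KiB minimum chunk, as in the examples
    let order: u32 = 12;

    // size = chunk_size << order, matching AllocatedBlock::size().
    let size = chunk_size << order;
    assert_eq!(size, 16 * 1024 * 1024); // order 12 with 4 KiB chunks is 16 MiB

    // At the maximum order of 51, the size is 2^63 bytes, which does not
    // fit in a 32-bit usize; hence the suggestion to return u64.
    let max_size = chunk_size.checked_shl(51).unwrap();
    assert!(max_size > u64::from(u32::MAX));
}
```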
**MAINTAINERS: `rust/helpers/gpu.c` listed under two entries**
The file `rust/helpers/gpu.c` is listed under both "DRM DRIVERS AND INFRASTRUCTURE" and "GPU BUDDY ALLOCATOR" sections. Similarly `rust/kernel/gpu/` appears in both. While dual-listing isn't necessarily wrong (especially if both groups should review changes), it's worth confirming this is intentional rather than accidental.
**Minor: `GpuBuddyInner::drop` takes lock during drop**
At buddy.rs lines 365-373:
```rust
fn drop(self: Pin<&mut Self>) {
let guard = self.lock();
unsafe {
bindings::gpu_buddy_fini(guard.as_raw());
}
}
```
The lock is taken during `drop()` for `gpu_buddy_fini`. At this point the `Arc` refcount has reached zero, so no other thread can hold a reference. The lock acquisition is technically unnecessary (there can be no contention), but it's not incorrect — it provides a safety margin. Worth a comment noting this is defensive.
**Minor: The `alloc_blocks` initializer field access pattern**
In buddy.rs lines 457-478, the `try_pin_init!` macro body references `buddy` and `list` as field names in the `_:` initializer:
```rust
try_pin_init!(AllocatedBlocks {
buddy: buddy_arc,
list <- CListHead::new(),
_: {
let guard = buddy.lock();
...
list.as_raw(),
...
}
})
```
This relies on `try_pin_init!` making previously-initialized fields available by name. This is a valid kernel Rust pattern but is quite implicit — a brief comment explaining that `buddy` and `list` refer to the already-initialized fields would help readability.
**Overall**: The architecture is clean and the safety story is well-reasoned. The main blockers are the `GpuBuddyFlags` typo (doctests won't compile) and the misleading `Range { start: 0, end: 0 }` example. The rest are minor cleanups.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH v11 0/4] Rust GPU buddy allocator bindings
@ 2026-02-24 22:40 Joel Fernandes
2026-02-24 22:40 ` [PATCH v11 4/4] rust: gpu: Add " Joel Fernandes
0 siblings, 1 reply; 15+ messages in thread
From: Joel Fernandes @ 2026-02-24 22:40 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
The series adds Rust GPU buddy allocator bindings for use by nova-core.
Note: Patches 1-3 are included for reference only and should not be merged
through this series. They are the DRM buddy allocator code movement patches
(already being pulled into upstream via drm-misc-next by Dave Airlie) and the
uninitialized buddy fix by Koen Koning (which is going in independently). They
are included here as they are prerequisites for the Rust GPU buddy bindings
(and could be further revised).
The series along with all dependencies, including clist and nova-core mm
patches, are available in:
https://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git/log/?h=nova/mm
Changes from v10 to v11:
- Dropped "rust: ffi: Convert pub use to pub mod and create ffi module" patch;
the ffi module restructuring will go through a different path.
- Dropped "rust: clist: Add support to interface with C linked lists" patch;
the clist module will be submitted separately.
- Dropped "nova-core: Kconfig: Sort select statements alphabetically" cosmetic
patch.
- Patches 1-3 (DRM buddy movement and fix) are included as reference only;
they are already being pulled into upstream via drm-misc-next.
- Removed clist patches as those can go in independently (Alice).
- Moved the Kconfig GPU_BUDDY selection patch to nova-core mm series to enable
it when it is actually used.
- Various nits to comments, etc.
Changes from v9 to v10:
- Absorbed the DRM buddy code movement patches into this series as patches 1-2.
Dave Airlie reworked these into two parts for better git history.
- Added "gpu: Fix uninitialized buddy for built-in drivers" fix by Koen Koning,
using subsys_initcall instead of module_init to fix NULL pointer dereference
when built-in drivers use the buddy allocator before initialization.
- Added "rust: ffi: Convert pub use to pub mod and create ffi module" to prepare
the ffi module for hosting clist as a sub-module.
- Moved clist from rust/kernel/clist.rs to rust/kernel/ffi/.
- Added "nova-core: Kconfig: Sort select statements alphabetically" (Danilo).
Changes from v8 to v9:
- Updated nova-core Kconfig patch: addressed sorting of Kconfig options.
- Added Daniel Almeida's Reviewed-by tag to clist patch.
- Minor refinements to GPU buddy bindings.
Changes from v7 to v8:
- Added nova-core Kconfig patch to select GPU_BUDDY for VRAM allocation.
- Various changes suggested by Danilo Krummrich, Gary Guo, and Daniel Almeida.
- Added Acked-by: Gary Guo for clist patch.
Changes from v6 to v7:
- Major restructuring: split the large 26-patch v6 RFC series. v7 only contains
the Rust infrastructure patches (clist + GPU buddy bindings), extracted from
the full nova-core MM series. The nova-core MM patches follow separately.
- Rebased on linux-next.
Changes from v5 to v6:
- Rebased on drm-rust-kernel/drm-rust-next.
- Expanded from 6 to 26 patches with full nova-core MM infrastructure including
page table walker, VMM, BAR1 user interface, TLB flush, and GpuMm manager.
Changes from v4 to v5:
- Added PRAMIN aperture support with documentation and self-tests.
- Improved buddy allocator bindings (fewer lines of code).
- Based on drm-rust-next instead of linux-next.
Changes from v3 to v4:
- Combined the clist and DRM buddy series into a single coherent series.
- Added DRM buddy allocator movement from drivers/gpu/drm/ up to drivers/gpu/,
renaming API from drm_buddy to gpu_buddy.
- Added Rust bindings for the GPU buddy allocator.
Changes from v2 to v3:
- Squashed 3 clist patches into one due to inter-dependencies.
- Changed Clist to Clist<'a, T> using const generic offset (Alex Courbot).
- Simplified C helpers to only list_add_tail (Alex Courbot, John Hubbard).
- Added init_list_head() Rust function (Alex Courbot).
- Added FusedIterator, PartialEq/Eq impls.
- Added MAINTAINERS entry (Miguel Ojeda).
Changes from v1 (RFC) to v2:
- Dropped DRM buddy allocator patches; series focuses solely on clist module.
- Dropped sample modules, replaced with doctests.
- Added proper lifetime management similar to scatterlist.
- Split clist into 3 separate patches.
Link to v10: https://lore.kernel.org/all/20260218205507.689429-1-joelagnelf@nvidia.com/
Joel Fernandes (3):
gpu: Move DRM buddy allocator one level up (part one)
gpu: Move DRM buddy allocator one level up (part two)
rust: gpu: Add GPU buddy allocator bindings
Koen Koning (1):
gpu: Fix uninitialized buddy for built-in drivers
Documentation/gpu/drm-mm.rst | 10 +-
MAINTAINERS | 16 +-
drivers/gpu/Kconfig | 13 +
drivers/gpu/Makefile | 3 +-
drivers/gpu/buddy.c | 1322 +++++++++++++++++
drivers/gpu/drm/Kconfig | 5 +-
drivers/gpu/drm/Kconfig.debug | 1 -
drivers/gpu/drm/Makefile | 1 -
drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c | 2 +-
.../gpu/drm/amd/amdgpu/amdgpu_res_cursor.h | 12 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c | 79 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h | 20 +-
drivers/gpu/drm/drm_buddy.c | 1277 +---------------
drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 2 +-
drivers/gpu/drm/i915/i915_scatterlist.c | 10 +-
drivers/gpu/drm/i915/i915_ttm_buddy_manager.c | 55 +-
drivers/gpu/drm/i915/i915_ttm_buddy_manager.h | 4 +-
.../drm/i915/selftests/intel_memory_region.c | 20 +-
drivers/gpu/drm/tests/Makefile | 1 -
drivers/gpu/drm/tests/drm_exec_test.c | 2 -
drivers/gpu/drm/tests/drm_mm_test.c | 2 -
.../gpu/drm/ttm/tests/ttm_bo_validate_test.c | 4 +-
drivers/gpu/drm/ttm/tests/ttm_mock_manager.c | 18 +-
drivers/gpu/drm/ttm/tests/ttm_mock_manager.h | 4 +-
drivers/gpu/drm/xe/xe_res_cursor.h | 34 +-
drivers/gpu/drm/xe/xe_svm.c | 12 +-
drivers/gpu/drm/xe/xe_ttm_vram_mgr.c | 71 +-
drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h | 4 +-
drivers/gpu/tests/Makefile | 4 +
.../gpu_buddy_test.c} | 416 +++---
.../lib/drm_random.c => tests/gpu_random.c} | 18 +-
.../lib/drm_random.h => tests/gpu_random.h} | 18 +-
drivers/video/Kconfig | 1 +
include/drm/drm_buddy.h | 163 +-
include/linux/gpu_buddy.h | 177 +++
rust/bindings/bindings_helper.h | 11 +
rust/helpers/gpu.c | 23 +
rust/helpers/helpers.c | 1 +
rust/kernel/gpu/buddy.rs | 536 +++++++
rust/kernel/gpu/mod.rs | 5 +
rust/kernel/lib.rs | 2 +
41 files changed, 2536 insertions(+), 1843 deletions(-)
create mode 100644 drivers/gpu/Kconfig
create mode 100644 drivers/gpu/buddy.c
create mode 100644 drivers/gpu/tests/Makefile
rename drivers/gpu/{drm/tests/drm_buddy_test.c => tests/gpu_buddy_test.c} (67%)
rename drivers/gpu/{drm/lib/drm_random.c => tests/gpu_random.c} (59%)
rename drivers/gpu/{drm/lib/drm_random.h => tests/gpu_random.h} (53%)
create mode 100644 include/linux/gpu_buddy.h
create mode 100644 rust/helpers/gpu.c
create mode 100644 rust/kernel/gpu/buddy.rs
create mode 100644 rust/kernel/gpu/mod.rs
base-commit: 2961f841b025fb234860bac26dfb7fa7cb0fb122
prerequisite-patch-id: f8778e0b72697243cfe40a8bfcc3ca1d66160b12
prerequisite-patch-id: 763b30c69f9806de97eb58bc8e3406feb6bf61b4
--
2.34.1
^ permalink raw reply [flat|nested] 15+ messages in thread

* [PATCH v11 4/4] rust: gpu: Add GPU buddy allocator bindings
2026-02-24 22:40 [PATCH v11 0/4] Rust " Joel Fernandes
@ 2026-02-24 22:40 ` Joel Fernandes
2026-02-27 4:31 ` Claude review: " Claude Code Review Bot
0 siblings, 1 reply; 15+ messages in thread
From: Joel Fernandes @ 2026-02-24 22:40 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
Add safe Rust abstractions over the Linux kernel's GPU buddy
allocator. The allocator implements a binary buddy system for
managing GPU physical memory; nova-core will use it for physical
memory allocation.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
MAINTAINERS | 8 +
rust/bindings/bindings_helper.h | 11 +
rust/helpers/gpu.c | 23 ++
rust/helpers/helpers.c | 1 +
rust/kernel/gpu/buddy.rs | 536 ++++++++++++++++++++++++++++++++
rust/kernel/gpu/mod.rs | 5 +
rust/kernel/lib.rs | 2 +
7 files changed, 586 insertions(+)
create mode 100644 rust/helpers/gpu.c
create mode 100644 rust/kernel/gpu/buddy.rs
create mode 100644 rust/kernel/gpu/mod.rs
diff --git a/MAINTAINERS b/MAINTAINERS
index f1e5a8011d2f..6befe3890f78 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -23191,6 +23191,14 @@ S: Maintained
T: git https://github.com/Rust-for-Linux/linux.git ffi-next
F: rust/kernel/ffi/
+RUST [GPU BUDDY]
+M: Joel Fernandes <joelagnelf@nvidia.com>
+L: dri-devel@lists.freedesktop.org
+L: rust-for-linux@vger.kernel.org
+S: Maintained
+F: rust/helpers/gpu.c
+F: rust/kernel/gpu/
+
RUST [NUM]
M: Alexandre Courbot <acourbot@nvidia.com>
R: Yury Norov <yury.norov@gmail.com>
diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index 083cc44aa952..dbb765a9fdbd 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -29,6 +29,7 @@
#include <linux/hrtimer_types.h>
#include <linux/acpi.h>
+#include <linux/gpu_buddy.h>
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
@@ -146,6 +147,16 @@ const vm_flags_t RUST_CONST_HELPER_VM_MIXEDMAP = VM_MIXEDMAP;
const vm_flags_t RUST_CONST_HELPER_VM_HUGEPAGE = VM_HUGEPAGE;
const vm_flags_t RUST_CONST_HELPER_VM_NOHUGEPAGE = VM_NOHUGEPAGE;
+#if IS_ENABLED(CONFIG_GPU_BUDDY)
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_RANGE_ALLOCATION = GPU_BUDDY_RANGE_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_TOPDOWN_ALLOCATION = GPU_BUDDY_TOPDOWN_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CONTIGUOUS_ALLOCATION =
+ GPU_BUDDY_CONTIGUOUS_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CLEAR_ALLOCATION = GPU_BUDDY_CLEAR_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CLEARED = GPU_BUDDY_CLEARED;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_TRIM_DISABLE = GPU_BUDDY_TRIM_DISABLE;
+#endif
+
#if IS_ENABLED(CONFIG_ANDROID_BINDER_IPC_RUST)
#include "../../drivers/android/binder/rust_binder.h"
#include "../../drivers/android/binder/rust_binder_events.h"
diff --git a/rust/helpers/gpu.c b/rust/helpers/gpu.c
new file mode 100644
index 000000000000..38b1a4e6bef8
--- /dev/null
+++ b/rust/helpers/gpu.c
@@ -0,0 +1,23 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/gpu_buddy.h>
+
+#ifdef CONFIG_GPU_BUDDY
+
+__rust_helper u64 rust_helper_gpu_buddy_block_offset(const struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_offset(block);
+}
+
+__rust_helper unsigned int rust_helper_gpu_buddy_block_order(struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_order(block);
+}
+
+__rust_helper u64 rust_helper_gpu_buddy_block_size(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_size(mm, block);
+}
+
+#endif /* CONFIG_GPU_BUDDY */
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 724fcb8240ac..a53929ce52a3 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -32,6 +32,7 @@
#include "err.c"
#include "irq.c"
#include "fs.c"
+#include "gpu.c"
#include "io.c"
#include "jump_label.c"
#include "kunit.c"
diff --git a/rust/kernel/gpu/buddy.rs b/rust/kernel/gpu/buddy.rs
new file mode 100644
index 000000000000..4383f82c0fc1
--- /dev/null
+++ b/rust/kernel/gpu/buddy.rs
@@ -0,0 +1,536 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! GPU buddy allocator bindings.
+//!
+//! C header: [`include/linux/gpu_buddy.h`](srctree/include/linux/gpu_buddy.h)
+//!
+//! This module provides Rust abstractions over the Linux kernel's GPU buddy
+//! allocator, which implements a binary buddy memory allocator.
+//!
+//! The buddy allocator manages a contiguous address space and allocates blocks
+//! in power-of-two sizes, useful for GPU physical memory management.
+//!
+//! # Examples
+//!
+//! ```
+//! use kernel::{
+//! gpu::buddy::{BuddyFlags, GpuBuddy, GpuBuddyAllocParams, GpuBuddyParams},
+//! prelude::*,
+//! sizes::*, //
+//! };
+//!
+//! // Create a 1GB buddy allocator with 4KB minimum chunk size.
+//! let buddy = GpuBuddy::new(GpuBuddyParams {
+//! base_offset_bytes: 0,
+//! physical_memory_size_bytes: SZ_1G as u64,
+//! chunk_size_bytes: SZ_4K as u64,
+//! })?;
+//!
+//! // Verify initial state.
+//! assert_eq!(buddy.size(), SZ_1G as u64);
+//! assert_eq!(buddy.chunk_size(), SZ_4K as u64);
+//! let initial_free = buddy.free_memory_bytes();
+//!
+//! // Base allocation params - mutated between calls for field overrides.
+//! let mut params = GpuBuddyAllocParams {
+//! start_range_address: 0,
+//! end_range_address: 0, // Entire range.
+//! size_bytes: SZ_16M as u64,
+//! min_block_size_bytes: SZ_16M as u64,
+//! buddy_flags: BuddyFlags::try_new(BuddyFlags::RANGE_ALLOCATION)?,
+//! };
+//!
+//! // Test top-down allocation (allocates from highest addresses).
+//! params.buddy_flags = BuddyFlags::try_new(BuddyFlags::TOPDOWN_ALLOCATION)?;
+//! let topdown = KBox::pin_init(buddy.alloc_blocks(¶ms), GFP_KERNEL)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_16M as u64);
+//!
+//! for block in topdown.iter() {
+//! assert_eq!(block.offset(), (SZ_1G - SZ_16M) as u64);
+//! assert_eq!(block.order(), 12); // 2^12 pages
+//! assert_eq!(block.size(), SZ_16M as u64);
+//! }
+//! drop(topdown);
+//! assert_eq!(buddy.free_memory_bytes(), initial_free);
+//!
+//! // Allocate 16MB - should result in a single 16MB block at offset 0.
+//! params.buddy_flags = BuddyFlags::try_new(BuddyFlags::RANGE_ALLOCATION)?;
+//! let allocated = KBox::pin_init(buddy.alloc_blocks(¶ms), GFP_KERNEL)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_16M as u64);
+//!
+//! for block in allocated.iter() {
+//! assert_eq!(block.offset(), 0);
+//! assert_eq!(block.order(), 12); // 2^12 pages
+//! assert_eq!(block.size(), SZ_16M as u64);
+//! }
+//! drop(allocated);
+//! assert_eq!(buddy.free_memory_bytes(), initial_free);
+//!
+//! // Test non-contiguous allocation with fragmented memory.
+//! // Create fragmentation by allocating 4MB blocks at [0,4M) and [8M,12M).
+//! params.end_range_address = SZ_4M as u64;
+//! params.size_bytes = SZ_4M as u64;
+//! params.min_block_size_bytes = SZ_4M as u64;
+//! let frag1 = KBox::pin_init(buddy.alloc_blocks(¶ms), GFP_KERNEL)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_4M as u64);
+//!
+//! params.start_range_address = SZ_8M as u64;
+//! params.end_range_address = (SZ_8M + SZ_4M) as u64;
+//! let frag2 = KBox::pin_init(buddy.alloc_blocks(¶ms), GFP_KERNEL)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_8M as u64);
+//!
+//! // Allocate 8MB without CONTIGUOUS - should return 2 blocks from the holes.
+//! params.start_range_address = 0;
+//! params.end_range_address = SZ_16M as u64;
+//! params.size_bytes = SZ_8M as u64;
+//! let fragmented = KBox::pin_init(buddy.alloc_blocks(¶ms), GFP_KERNEL)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - (SZ_16M) as u64);
+//!
+//! let (mut count, mut total) = (0u32, 0u64);
+//! for block in fragmented.iter() {
+//! // The 8MB allocation should return 2 blocks, each 4MB.
+//! assert_eq!(block.size(), SZ_4M as u64);
+//! total += block.size();
+//! count += 1;
+//! }
+//! assert_eq!(total, SZ_8M as u64);
+//! assert_eq!(count, 2);
+//! drop(fragmented);
+//! drop(frag2);
+//! drop(frag1);
+//! assert_eq!(buddy.free_memory_bytes(), initial_free);
+//!
+//! // Test CONTIGUOUS failure when only fragmented space available.
+//! // Create a small buddy allocator with only 16MB of memory.
+//! let small = GpuBuddy::new(GpuBuddyParams {
+//! base_offset_bytes: 0,
+//! physical_memory_size_bytes: SZ_16M as u64,
+//! chunk_size_bytes: SZ_4K as u64,
+//! })?;
+//!
+//! // Allocate 4MB blocks at [0,4M) and [8M,12M) to create fragmented memory.
+//! params.start_range_address = 0;
+//! params.end_range_address = SZ_4M as u64;
+//! params.size_bytes = SZ_4M as u64;
+//! let hole1 = KBox::pin_init(small.alloc_blocks(¶ms), GFP_KERNEL)?;
+//!
+//! params.start_range_address = SZ_8M as u64;
+//! params.end_range_address = (SZ_8M + SZ_4M) as u64;
+//! let hole2 = KBox::pin_init(small.alloc_blocks(¶ms), GFP_KERNEL)?;
+//!
+//! // 8MB contiguous should fail - only two non-contiguous 4MB holes exist.
+//! params.start_range_address = 0;
+//! params.end_range_address = 0;
+//! params.size_bytes = SZ_8M as u64;
+//! params.buddy_flags = BuddyFlags::try_new(BuddyFlags::CONTIGUOUS_ALLOCATION)?;
+//! let result = KBox::pin_init(small.alloc_blocks(¶ms), GFP_KERNEL);
+//! assert!(result.is_err());
+//! drop(hole2);
+//! drop(hole1);
+//!
+//! # Ok::<(), Error>(())
+//! ```
+
+use crate::{
+ bindings,
+ clist_create,
+ error::to_result,
+ ffi::clist::CListHead,
+ new_mutex,
+ prelude::*,
+ sync::{
+ lock::mutex::MutexGuard,
+ Arc,
+ Mutex, //
+ },
+ types::Opaque, //
+};
+
+/// Flags for GPU buddy allocator operations.
+///
+/// These flags control the allocation behavior of the buddy allocator.
+#[derive(Clone, Copy, Default, PartialEq, Eq)]
+pub struct BuddyFlags(usize);
+
+impl BuddyFlags {
+ /// Range-based allocation from start to end addresses.
+ pub const RANGE_ALLOCATION: usize = bindings::GPU_BUDDY_RANGE_ALLOCATION;
+
+ /// Allocate from top of address space downward.
+ pub const TOPDOWN_ALLOCATION: usize = bindings::GPU_BUDDY_TOPDOWN_ALLOCATION;
+
+ /// Allocate physically contiguous blocks.
+ pub const CONTIGUOUS_ALLOCATION: usize = bindings::GPU_BUDDY_CONTIGUOUS_ALLOCATION;
+
+ /// Request allocation from cleared (zeroed) memory. The zeroing is not
+ /// done by the allocator, but by the caller before freeing old blocks.
+ pub const CLEAR_ALLOCATION: usize = bindings::GPU_BUDDY_CLEAR_ALLOCATION;
+
+ /// Disable trimming of partially used blocks.
+ pub const TRIM_DISABLE: usize = bindings::GPU_BUDDY_TRIM_DISABLE;
+
+ /// Mark blocks as cleared (zeroed) when freeing. When set during free,
+ /// indicates that the caller has already zeroed the memory.
+ pub const CLEARED: usize = bindings::GPU_BUDDY_CLEARED;
+
+ /// Create [`BuddyFlags`] from a raw value with validation.
+ ///
+ /// Use `|` operator to combine flags if needed, before calling this method.
+ pub fn try_new(flags: usize) -> Result<Self> {
+ // Flags must not exceed u32::MAX to satisfy the GPU buddy allocator C API.
+ if flags > u32::MAX as usize {
+ return Err(EINVAL);
+ }
+
+ // `TOPDOWN_ALLOCATION` only works without `RANGE_ALLOCATION`. When both are
+ // set, `TOPDOWN_ALLOCATION` is silently ignored by the allocator. Reject this.
+ if (flags & Self::RANGE_ALLOCATION) != 0 && (flags & Self::TOPDOWN_ALLOCATION) != 0 {
+ return Err(EINVAL);
+ }
+
+ Ok(Self(flags))
+ }
+
+ /// Get raw value of the flags.
+ pub(crate) fn as_raw(self) -> usize {
+ self.0
+ }
+}
+
+/// Parameters for creating a GPU buddy allocator.
+pub struct GpuBuddyParams {
+ /// Base offset in bytes where the managed memory region starts.
+ /// Allocations will be offset by this value.
+ pub base_offset_bytes: u64,
+ /// Total physical memory size managed by the allocator in bytes.
+ pub physical_memory_size_bytes: u64,
+ /// Minimum allocation unit / chunk size in bytes, must be >= 4KB.
+ pub chunk_size_bytes: u64,
+}
+
+/// Parameters for allocating blocks from a GPU buddy allocator.
+pub struct GpuBuddyAllocParams {
+ /// Start of allocation range in bytes. Use 0 for beginning.
+ pub start_range_address: u64,
+ /// End of allocation range in bytes. Use 0 for entire range.
+ pub end_range_address: u64,
+ /// Total size to allocate in bytes.
+ pub size_bytes: u64,
+ /// Minimum block size for fragmented allocations in bytes.
+ pub min_block_size_bytes: u64,
+ /// Buddy allocator behavior flags.
+ pub buddy_flags: BuddyFlags,
+}
+
+/// Inner structure holding the actual buddy allocator.
+///
+/// # Synchronization
+///
+/// The C `gpu_buddy` API requires synchronization (see `include/linux/gpu_buddy.h`).
+/// The internal [`GpuBuddyGuard`] ensures that the lock is held for all
+/// allocator and free operations, preventing races between concurrent allocations
+/// and the freeing that occurs when [`AllocatedBlocks`] is dropped.
+///
+/// # Invariants
+///
+/// The inner [`Opaque`] contains a valid, initialized buddy allocator.
+#[pin_data(PinnedDrop)]
+struct GpuBuddyInner {
+ #[pin]
+ inner: Opaque<bindings::gpu_buddy>,
+
+ // TODO: Replace `Mutex<()>` with `Mutex<Opaque<..>>` once `Mutex::new()`
+ // accepts `impl PinInit<T>`.
+ #[pin]
+ lock: Mutex<()>,
+ /// Base offset for all allocations (does not change after init).
+ base_offset: u64,
+ /// Cached chunk size (does not change after init).
+ chunk_size: u64,
+ /// Cached total size (does not change after init).
+ size: u64,
+}
+
+impl GpuBuddyInner {
+ /// Create a pin-initializer for the buddy allocator.
+ fn new(params: GpuBuddyParams) -> impl PinInit<Self, Error> {
+ let base_offset = params.base_offset_bytes;
+ let size = params.physical_memory_size_bytes;
+ let chunk_size = params.chunk_size_bytes;
+
+ try_pin_init!(Self {
+ inner <- Opaque::try_ffi_init(|ptr| {
+ // SAFETY: ptr points to valid uninitialized memory from the pin-init
+ // infrastructure. gpu_buddy_init will initialize the structure.
+ to_result(unsafe { bindings::gpu_buddy_init(ptr, size, chunk_size) })
+ }),
+ lock <- new_mutex!(()),
+ base_offset: base_offset,
+ chunk_size: chunk_size,
+ size: size,
+ })
+ }
+
+ /// Lock the mutex and return a guard for accessing the allocator.
+ fn lock(&self) -> GpuBuddyGuard<'_> {
+ GpuBuddyGuard {
+ inner: self,
+ _guard: self.lock.lock(),
+ }
+ }
+}
+
+#[pinned_drop]
+impl PinnedDrop for GpuBuddyInner {
+ fn drop(self: Pin<&mut Self>) {
+ let guard = self.lock();
+
+ // SAFETY: guard provides exclusive access to the allocator.
+ unsafe {
+ bindings::gpu_buddy_fini(guard.as_raw());
+ }
+ }
+}
+
+// SAFETY: GpuBuddyInner can be sent between threads.
+unsafe impl Send for GpuBuddyInner {}
+
+// SAFETY: GpuBuddyInner is `Sync` because the internal GpuBuddyGuard
+// serializes all access to the C allocator, preventing data races.
+unsafe impl Sync for GpuBuddyInner {}
+
+/// Guard that proves the lock is held, enabling access to the allocator.
+///
+/// # Invariants
+///
+/// The inner `_guard` holds the lock for the duration of this guard's lifetime.
+pub(crate) struct GpuBuddyGuard<'a> {
+ inner: &'a GpuBuddyInner,
+ _guard: MutexGuard<'a, ()>,
+}
+
+impl GpuBuddyGuard<'_> {
+ /// Get a raw pointer to the underlying C `gpu_buddy` structure.
+ fn as_raw(&self) -> *mut bindings::gpu_buddy {
+ self.inner.inner.get()
+ }
+}
+
+/// GPU buddy allocator instance.
+///
+/// This structure wraps the C `gpu_buddy` allocator using reference counting.
+/// The allocator is automatically cleaned up when all references are dropped.
+///
+/// # Invariants
+///
+/// The inner [`Arc`] points to a valid, initialized GPU buddy allocator.
+pub struct GpuBuddy(Arc<GpuBuddyInner>);
+
+impl GpuBuddy {
+ /// Create a new buddy allocator.
+ ///
+ /// Creates a buddy allocator that manages a contiguous address space of the given
+ /// size, with the specified minimum allocation unit (chunk_size must be at least 4KB).
+ pub fn new(params: GpuBuddyParams) -> Result<Self> {
+ Ok(Self(Arc::pin_init(GpuBuddyInner::new(params), GFP_KERNEL)?))
+ }
+
+ /// Get the base offset for allocations.
+ pub fn base_offset(&self) -> u64 {
+ self.0.base_offset
+ }
+
+ /// Get the chunk size (minimum allocation unit).
+ pub fn chunk_size(&self) -> u64 {
+ self.0.chunk_size
+ }
+
+ /// Get the total managed size.
+ pub fn size(&self) -> u64 {
+ self.0.size
+ }
+
+ /// Get the available (free) memory in bytes.
+ pub fn free_memory_bytes(&self) -> u64 {
+ let guard = self.0.lock();
+
+ // SAFETY: guard provides exclusive access to the allocator.
+ unsafe { (*guard.as_raw()).avail }
+ }
+
+ /// Allocate blocks from the buddy allocator.
+ ///
+ /// Returns a pin-initializer for [`AllocatedBlocks`].
+ ///
+ /// Takes `&self` instead of `&mut self` because the internal [`Mutex`] provides
+ /// synchronization - no external `&mut` exclusivity needed.
+ pub fn alloc_blocks(
+ &self,
+ params: &GpuBuddyAllocParams,
+ ) -> impl PinInit<AllocatedBlocks, Error> {
+ let buddy_arc = Arc::clone(&self.0);
+ let start = params.start_range_address;
+ let end = params.end_range_address;
+ let size = params.size_bytes;
+ let min_block_size = params.min_block_size_bytes;
+ let flags = params.buddy_flags;
+
+ // Create pin-initializer that initializes list and allocates blocks.
+ try_pin_init!(AllocatedBlocks {
+ buddy: buddy_arc,
+ list <- CListHead::new(),
+ flags: flags,
+ _: {
+ // Lock while allocating to serialize with concurrent frees.
+ let guard = buddy.lock();
+
+ // SAFETY: `guard` provides exclusive access to the buddy allocator.
+ to_result(unsafe {
+ bindings::gpu_buddy_alloc_blocks(
+ guard.as_raw(),
+ start,
+ end,
+ size,
+ min_block_size,
+ list.as_raw(),
+ flags.as_raw(),
+ )
+ })?
+ }
+ })
+ }
+}
+
+/// Allocated blocks from the buddy allocator with automatic cleanup.
+///
+/// This structure owns a list of allocated blocks and ensures they are
+/// automatically freed when dropped. Use `iter()` to iterate over all
+/// allocated [`Block`] structures.
+///
+/// # Invariants
+///
+/// - `list` is an initialized, valid list head containing allocated blocks.
+#[pin_data(PinnedDrop)]
+pub struct AllocatedBlocks {
+ #[pin]
+ list: CListHead,
+ buddy: Arc<GpuBuddyInner>,
+ flags: BuddyFlags,
+}
+
+impl AllocatedBlocks {
+ /// Check if the block list is empty.
+ pub fn is_empty(&self) -> bool {
+ // An empty list head points to itself.
+ !self.list.is_linked()
+ }
+
+ /// Iterate over allocated blocks.
+ ///
+ /// Returns an iterator yielding [`AllocatedBlock`] values. Each [`AllocatedBlock`]
+ /// borrows `self` and is only valid for the duration of that borrow.
+ pub fn iter(&self) -> impl Iterator<Item = AllocatedBlock<'_>> + '_ {
+ // SAFETY: list contains gpu_buddy_block items linked via __bindgen_anon_1.link.
+ let clist = clist_create!(unsafe {
+ self.list.as_raw(),
+ Block,
+ bindings::gpu_buddy_block,
+ __bindgen_anon_1.link
+ });
+
+ clist
+ .iter()
+ .map(|block| AllocatedBlock { block, alloc: self })
+ }
+}
+
+#[pinned_drop]
+impl PinnedDrop for AllocatedBlocks {
+ fn drop(self: Pin<&mut Self>) {
+ let guard = self.buddy.lock();
+
+ // SAFETY:
+ // - list is valid per the type's invariants.
+ // - guard provides exclusive access to the allocator.
+ // CAST: BuddyFlags were validated to fit in u32 at construction.
+ unsafe {
+ bindings::gpu_buddy_free_list(
+ guard.as_raw(),
+ self.list.as_raw(),
+ self.flags.as_raw() as u32,
+ );
+ }
+ }
+}
+
+/// A GPU buddy block.
+///
+/// Transparent wrapper over C `gpu_buddy_block` structure. This type is returned
+/// as references during iteration over [`AllocatedBlocks`].
+///
+/// # Invariants
+///
+/// The inner [`Opaque`] contains a valid, allocated `gpu_buddy_block`.
+#[repr(transparent)]
+pub struct Block(Opaque<bindings::gpu_buddy_block>);
+
+impl Block {
+ /// Get a raw pointer to the underlying C block.
+ fn as_raw(&self) -> *mut bindings::gpu_buddy_block {
+ self.0.get()
+ }
+
+ /// Get the block's offset in the address space.
+ pub(crate) fn offset(&self) -> u64 {
+ // SAFETY: self.as_raw() is valid per the type's invariants.
+ unsafe { bindings::gpu_buddy_block_offset(self.as_raw()) }
+ }
+
+ /// Get the block order.
+ pub(crate) fn order(&self) -> u32 {
+ // SAFETY: self.as_raw() is valid per the type's invariants.
+ unsafe { bindings::gpu_buddy_block_order(self.as_raw()) }
+ }
+}
+
+// SAFETY: `Block` is not modified after allocation for the lifetime
+// of `AllocatedBlock`.
+unsafe impl Send for Block {}
+
+// SAFETY: `Block` is not modified after allocation for the lifetime
+// of `AllocatedBlock`.
+unsafe impl Sync for Block {}
+
+/// An allocated block with access to the GPU buddy allocator.
+///
+/// It is returned by [`AllocatedBlocks::iter()`] and provides access to the
+/// GPU buddy allocator required for some accessors.
+///
+/// # Invariants
+///
+/// - `block` is a valid reference to an allocated [`Block`].
+/// - `alloc` is a valid reference to the [`AllocatedBlocks`] that owns this block.
+pub struct AllocatedBlock<'a> {
+ block: &'a Block,
+ alloc: &'a AllocatedBlocks,
+}
+
+impl AllocatedBlock<'_> {
+ /// Get the block's offset in the address space.
+ ///
+ /// Returns the absolute offset including the allocator's base offset.
+ /// This is the actual address to use for accessing the allocated memory.
+ pub fn offset(&self) -> u64 {
+ self.alloc.buddy.base_offset + self.block.offset()
+ }
+
+ /// Get the block order (size = chunk_size << order).
+ pub fn order(&self) -> u32 {
+ self.block.order()
+ }
+
+ /// Get the block's size in bytes.
+ pub fn size(&self) -> u64 {
+ self.alloc.buddy.chunk_size << self.block.order()
+ }
+}
diff --git a/rust/kernel/gpu/mod.rs b/rust/kernel/gpu/mod.rs
new file mode 100644
index 000000000000..8f25e6367edc
--- /dev/null
+++ b/rust/kernel/gpu/mod.rs
@@ -0,0 +1,5 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! GPU subsystem abstractions.
+
+pub mod buddy;
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index 58dbb02c5197..b8287601f464 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -97,6 +97,8 @@
pub mod firmware;
pub mod fmt;
pub mod fs;
+#[cfg(CONFIG_GPU_BUDDY)]
+pub mod gpu;
#[cfg(CONFIG_I2C = "y")]
pub mod i2c;
pub mod id_pool;
--
2.34.1
^ permalink raw reply related [flat|nested] 15+ messages in thread

* Claude review: rust: gpu: Add GPU buddy allocator bindings
2026-02-24 22:40 ` [PATCH v11 4/4] rust: gpu: Add " Joel Fernandes
@ 2026-02-27 4:31 ` Claude Code Review Bot
0 siblings, 0 replies; 15+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:31 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Overall this is a well-written patch. Detailed comments below:
**1. Dead helper: `rust_helper_gpu_buddy_block_size` is defined but never called**
In `rust/helpers/gpu.c`:
```c
__rust_helper u64 rust_helper_gpu_buddy_block_size(struct gpu_buddy *mm,
                                                   struct gpu_buddy_block *block)
{
        return gpu_buddy_block_size(mm, block);
}
```
This helper is exported but the Rust code never uses it. `AllocatedBlock::size()` computes the same value inline:
```rust
pub fn size(&self) -> u64 {
    self.alloc.buddy.chunk_size << self.block.order()
}
```
Either use the C helper (for consistency, ensuring the Rust code tracks any future changes to the C computation) or remove the dead helper. The inline computation is fine in practice since `chunk_size << order` is the stable definition, but having an unused helper is confusing.
**2. `BuddyFlags` validation could check for unknown bits**
In `BuddyFlags::try_new()`:
```rust
pub fn try_new(flags: usize) -> Result<Self> {
    if flags > u32::MAX as usize {
        return Err(EINVAL);
    }
    if (flags & Self::RANGE_ALLOCATION) != 0 && (flags & Self::TOPDOWN_ALLOCATION) != 0 {
        return Err(EINVAL);
    }
    Ok(Self(flags))
}
```
This validates the range and the RANGE+TOPDOWN conflict, which is good. However, it doesn't reject unknown flag bits. If a caller passes e.g. `BIT(6) | BIT(7)` those would be silently accepted and passed to the C API. Consider adding a mask check:
```rust
const ALL_KNOWN: usize = Self::RANGE_ALLOCATION | Self::TOPDOWN_ALLOCATION
    | Self::CONTIGUOUS_ALLOCATION | Self::CLEAR_ALLOCATION
    | Self::TRIM_DISABLE | Self::CLEARED;
if flags & !ALL_KNOWN != 0 {
    return Err(EINVAL);
}
```
This prevents silent bugs if the flag constants are accidentally misused.
**3. `free_memory_bytes()` directly accesses the C struct field**
```rust
pub fn free_memory_bytes(&self) -> u64 {
    let guard = self.0.lock();
    unsafe { (*guard.as_raw()).avail }
}
```
This directly reads `gpu_buddy.avail` from the C struct. While this is fine today (the field is documented as public and static after init, except during alloc/free which are serialized by the lock), it bypasses the abstraction boundary. If the C struct layout changes, this would silently break. A C helper accessor (like the block helpers) might be more robust. This is a minor style point though.
**4. The `_:` syntax in `try_pin_init!` references fields by name**
```rust
try_pin_init!(AllocatedBlocks {
    buddy: buddy_arc,
    list <- CListHead::new(),
    flags: flags,
    _: {
        let guard = buddy.lock();
        to_result(unsafe {
            bindings::gpu_buddy_alloc_blocks(
                guard.as_raw(),
                start, end, size, min_block_size,
                list.as_raw(),
                flags.as_raw(),
            )
        })?
    }
})
```
This references `buddy`, `list`, and `flags` by their field names inside the `_:` block. This works because `try_pin_init!` makes previously-initialized fields accessible. This is a valid kernel Rust pattern, but it's worth noting that the order of field initialization matters here — `buddy` and `list` must be initialized before the `_:` block runs. The current order is correct.
One subtlety: if `gpu_buddy_alloc_blocks` fails, the `try_pin_init!` will unwind and the partially-initialized `list` and `buddy` will need to be dropped. Since `list` is a freshly initialized (empty) `CListHead` and `buddy` is an `Arc`, this should be safe — no blocks need freeing on error.
**5. `AllocatedBlocks::is_empty()` semantics**
```rust
pub fn is_empty(&self) -> bool {
    !self.list.is_linked()
}
```
The naming is somewhat confusing — `is_linked()` on a list head typically returns `true` when it has entries (is linked into some list / has elements). So `!is_linked()` means "empty." The logic appears correct, but it depends on the `CListHead` API contract. A comment clarifying this would help.
**6. `GpuBuddyInner::drop` acquires the lock defensively**
```rust
fn drop(self: Pin<&mut Self>) {
    let guard = self.lock();
    unsafe {
        bindings::gpu_buddy_fini(guard.as_raw());
    }
}
```
Since `GpuBuddyInner` is behind `Arc`, `drop` only runs when the last reference is released. At that point, no other thread can hold the lock (they'd need an `Arc` reference). The lock acquisition is therefore uncontested and defensive. This is fine — it's belt-and-suspenders safety — but a comment noting that the lock is uncontested at drop time would be helpful.
**7. `Block` `Send`/`Sync` safety comments are weak**
```rust
// SAFETY: `Block` is not modified after allocation for the lifetime
// of `AllocatedBlock`.
unsafe impl Send for Block {}
unsafe impl Sync for Block {}
```
The safety argument is that blocks aren't modified, but `Block` wraps `Opaque<gpu_buddy_block>` which contains raw pointers (`left`, `right`, `parent`, `link`). A stronger safety argument would be: "Block is only accessed through shared references (`&Block`) obtained via `AllocatedBlocks::iter()`, and the underlying data is immutable while the block is allocated. The block's link field is owned by the allocator during allocation and not modified until `gpu_buddy_free_list` is called, which requires exclusive access via the mutex."
**8. Doctest is comprehensive and well-structured**
The doctest at the top of the file is extensive and tests: basic allocation, top-down allocation, non-contiguous allocation with fragmentation, and contiguous allocation failure. This is excellent coverage for a doctest.
**9. Minor: `bindings_helper.h` include ordering**
```c
#include <linux/acpi.h>
+#include <linux/gpu_buddy.h>
#include <drm/drm_device.h>
```
The `linux/gpu_buddy.h` include is placed between `linux/acpi.h` and `drm/drm_device.h`. This is fine alphabetically within the `linux/` group, but it's inserted outside the existing `#include` block structure. Not a real issue, just noting the placement.
**10. Minor: `GpuBuddyAllocParams` end_range_address=0 convention**
```rust
/// End of allocation range in bytes. Use 0 for entire range.
pub end_range_address: u64,
```
The "use 0 for entire range" semantic is inherited from the C API where `end=0` means "use the full size." This is documented, which is good. The doctest demonstrates this with `end_range_address: 0`.
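The normalization the C side performs can be sketched as a trivial helper (hypothetical, not part of the patch):

```rust
// Sketch of the inherited C convention: an end address of 0 means
// "allocate from the entire managed range". `effective_end` is a
// hypothetical name, not an API in the patch.
fn effective_end(end_range_address: u64, total_size: u64) -> u64 {
    if end_range_address == 0 {
        total_size
    } else {
        end_range_address
    }
}
```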
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH -next v9 0/3] rust: Add CList and GPU buddy allocator bindings
@ 2026-02-10 23:32 Joel Fernandes
2026-02-10 23:32 ` [PATCH -next v9 2/3] rust: gpu: Add " Joel Fernandes
0 siblings, 1 reply; 15+ messages in thread
From: Joel Fernandes @ 2026-02-10 23:32 UTC (permalink / raw)
To: linux-kernel
Cc: Maarten Lankhorst, Maxime Ripard, Simona Vetter, Jonathan Corbet,
Alex Deucher, Christian König, Jani Nikula, Joonas Lahtinen,
Rodrigo Vivi, Tvrtko Ursulin, Huang Rui, Matthew Auld,
Matthew Brost, Lucas De Marchi, Thomas Hellström,
Helge Deller, Danilo Krummrich, Alice Ryhl, Miguel Ojeda,
Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, Alistair Popple,
Alexandre Courbot, Andrea Righi, Zhi Wang, Philipp Stanner,
Elle Rhumsaa, Daniel Almeida, David Airlie, Edwin Peer,
John Hubbard, Andy Ritger, Balbir Singh, Timur Tabi, joel,
nouveau, dri-devel, rust-for-linux, linux-doc, amd-gfx, intel-gfx,
intel-xe, linux-fbdev, Joel Fernandes
This series provides a CList module for interfacing with C linked lists, GPU
buddy allocator bindings for physical memory management in nova-core, and a
nova-core Kconfig change to select GPU_BUDDY.
The clist patch (patch 1) is independent and can go through any tree. The other 2
patches depend on the DRM buddy code movement patch as a prerequisite, a
version of which is now in drm-misc-next:
https://lore.kernel.org/all/20260206003451.1914130-1-joelagnelf@nvidia.com/
Based on linux-next.
The git tree with all patches can be found at:
git://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git (tag: clist-gpu-buddy-v9-20260210)
Link to v8: https://lore.kernel.org/all/20260209214246.2783990-1-joelagnelf@nvidia.com/
Link to v7: https://lore.kernel.org/all/20260206004110.1914814-1-joelagnelf@nvidia.com/
Joel Fernandes (3):
rust: clist: Add support to interface with C linked lists
rust: gpu: Add GPU buddy allocator bindings
nova-core: mm: Select GPU_BUDDY for VRAM allocation
MAINTAINERS | 7 +
drivers/gpu/nova-core/Kconfig | 3 +-
rust/bindings/bindings_helper.h | 11 +
rust/helpers/gpu.c | 23 ++
rust/helpers/helpers.c | 2 +
rust/helpers/list.c | 17 +
rust/kernel/clist.rs | 320 +++++++++++++++++++
rust/kernel/gpu/buddy.rs | 537 ++++++++++++++++++++++++++++++++
rust/kernel/gpu/mod.rs | 5 +
rust/kernel/lib.rs | 3 +
10 files changed, 927 insertions(+), 1 deletion(-)
create mode 100644 rust/helpers/gpu.c
create mode 100644 rust/helpers/list.c
create mode 100644 rust/kernel/clist.rs
create mode 100644 rust/kernel/gpu/buddy.rs
create mode 100644 rust/kernel/gpu/mod.rs
base-commit: fd9678829d6dd0c10fde080b536abf4b1121c346
prerequisite-patch-id: 51e9eb2490026debebe75b8a0a9ce0c3991cd580
--
2.34.1
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH -next v9 2/3] rust: gpu: Add GPU buddy allocator bindings
2026-02-10 23:32 [PATCH -next v9 0/3] rust: Add CList and " Joel Fernandes
@ 2026-02-10 23:32 ` Joel Fernandes
2026-02-12 20:27 ` Claude review: " Claude Code Review Bot
0 siblings, 1 reply; 15+ messages in thread
From: Joel Fernandes @ 2026-02-10 23:32 UTC (permalink / raw)
To: linux-kernel
Cc: Maarten Lankhorst, Maxime Ripard, Simona Vetter, Jonathan Corbet,
Alex Deucher, Christian König, Jani Nikula, Joonas Lahtinen,
Rodrigo Vivi, Tvrtko Ursulin, Huang Rui, Matthew Auld,
Matthew Brost, Lucas De Marchi, Thomas Hellström,
Helge Deller, Danilo Krummrich, Alice Ryhl, Miguel Ojeda,
Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, Alistair Popple,
Alexandre Courbot, Andrea Righi, Zhi Wang, Philipp Stanner,
Elle Rhumsaa, Daniel Almeida, David Airlie, Edwin Peer,
John Hubbard, Andy Ritger, Balbir Singh, Timur Tabi, joel,
nouveau, dri-devel, rust-for-linux, linux-doc, amd-gfx, intel-gfx,
intel-xe, linux-fbdev, Joel Fernandes
Add safe Rust abstractions over the Linux kernel's GPU buddy
allocator for physical memory management. The GPU buddy allocator
implements a binary buddy system suited to GPU physical memory
allocation; nova-core will use it for VRAM allocation.
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
rust/bindings/bindings_helper.h | 11 +
rust/helpers/gpu.c | 23 ++
rust/helpers/helpers.c | 1 +
rust/kernel/gpu/buddy.rs | 537 ++++++++++++++++++++++++++++++++
rust/kernel/gpu/mod.rs | 5 +
rust/kernel/lib.rs | 2 +
6 files changed, 579 insertions(+)
create mode 100644 rust/helpers/gpu.c
create mode 100644 rust/kernel/gpu/buddy.rs
create mode 100644 rust/kernel/gpu/mod.rs
diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index 083cc44aa952..dbb765a9fdbd 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -29,6 +29,7 @@
#include <linux/hrtimer_types.h>
#include <linux/acpi.h>
+#include <linux/gpu_buddy.h>
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
@@ -146,6 +147,16 @@ const vm_flags_t RUST_CONST_HELPER_VM_MIXEDMAP = VM_MIXEDMAP;
const vm_flags_t RUST_CONST_HELPER_VM_HUGEPAGE = VM_HUGEPAGE;
const vm_flags_t RUST_CONST_HELPER_VM_NOHUGEPAGE = VM_NOHUGEPAGE;
+#if IS_ENABLED(CONFIG_GPU_BUDDY)
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_RANGE_ALLOCATION = GPU_BUDDY_RANGE_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_TOPDOWN_ALLOCATION = GPU_BUDDY_TOPDOWN_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CONTIGUOUS_ALLOCATION =
+ GPU_BUDDY_CONTIGUOUS_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CLEAR_ALLOCATION = GPU_BUDDY_CLEAR_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CLEARED = GPU_BUDDY_CLEARED;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_TRIM_DISABLE = GPU_BUDDY_TRIM_DISABLE;
+#endif
+
#if IS_ENABLED(CONFIG_ANDROID_BINDER_IPC_RUST)
#include "../../drivers/android/binder/rust_binder.h"
#include "../../drivers/android/binder/rust_binder_events.h"
diff --git a/rust/helpers/gpu.c b/rust/helpers/gpu.c
new file mode 100644
index 000000000000..38b1a4e6bef8
--- /dev/null
+++ b/rust/helpers/gpu.c
@@ -0,0 +1,23 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/gpu_buddy.h>
+
+#ifdef CONFIG_GPU_BUDDY
+
+__rust_helper u64 rust_helper_gpu_buddy_block_offset(const struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_offset(block);
+}
+
+__rust_helper unsigned int rust_helper_gpu_buddy_block_order(struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_order(block);
+}
+
+__rust_helper u64 rust_helper_gpu_buddy_block_size(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_size(mm, block);
+}
+
+#endif /* CONFIG_GPU_BUDDY */
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 724fcb8240ac..a53929ce52a3 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -32,6 +32,7 @@
#include "err.c"
#include "irq.c"
#include "fs.c"
+#include "gpu.c"
#include "io.c"
#include "jump_label.c"
#include "kunit.c"
diff --git a/rust/kernel/gpu/buddy.rs b/rust/kernel/gpu/buddy.rs
new file mode 100644
index 000000000000..102414eba65a
--- /dev/null
+++ b/rust/kernel/gpu/buddy.rs
@@ -0,0 +1,537 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! GPU buddy allocator bindings.
+//!
+//! C header: [`include/linux/gpu_buddy.h`](srctree/include/linux/gpu_buddy.h)
+//!
+//! This module provides Rust abstractions over the Linux kernel's GPU buddy
+//! allocator, which implements a binary buddy memory allocator.
+//!
+//! The buddy allocator manages a contiguous address space and allocates blocks
+//! in power-of-two sizes, useful for GPU physical memory management.
+//!
+//! # Examples
+//!
+//! ```
+//! use kernel::{
+//! gpu::buddy::{BuddyFlags, GpuBuddy, GpuBuddyAllocParams, GpuBuddyParams},
+//! prelude::*,
+//! sizes::*, //
+//! };
+//!
+//! // Create a 1GB buddy allocator with 4KB minimum chunk size.
+//! let buddy = GpuBuddy::new(GpuBuddyParams {
+//! base_offset_bytes: 0,
+//! physical_memory_size_bytes: SZ_1G as u64,
+//! chunk_size_bytes: SZ_4K as u64,
+//! })?;
+//!
+//! // Verify initial state.
+//! assert_eq!(buddy.size(), SZ_1G as u64);
+//! assert_eq!(buddy.chunk_size(), SZ_4K as u64);
+//! let initial_free = buddy.free_memory_bytes();
+//!
+//! // Base allocation params - mutated between calls for field overrides.
+//! let mut params = GpuBuddyAllocParams {
+//! start_range_address: 0,
+//! end_range_address: 0, // Entire range.
+//! size_bytes: SZ_16M as u64,
+//! min_block_size_bytes: SZ_16M as u64,
+//! buddy_flags: BuddyFlags::try_new(BuddyFlags::RANGE_ALLOCATION)?,
+//! };
+//!
+//! // Test top-down allocation (allocates from highest addresses).
+//! params.buddy_flags = BuddyFlags::try_new(BuddyFlags::TOPDOWN_ALLOCATION)?;
+//! let topdown = KBox::pin_init(buddy.alloc_blocks(¶ms), GFP_KERNEL)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_16M as u64);
+//!
+//! for block in topdown.iter() {
+//! assert_eq!(block.offset(), (SZ_1G - SZ_16M) as u64);
+//! assert_eq!(block.order(), 12); // 2^12 pages
+//! assert_eq!(block.size(), SZ_16M as u64);
+//! }
+//! drop(topdown);
+//! assert_eq!(buddy.free_memory_bytes(), initial_free);
+//!
+//! // Allocate 16MB - should result in a single 16MB block at offset 0.
+//! params.buddy_flags = BuddyFlags::try_new(BuddyFlags::RANGE_ALLOCATION)?;
+//! let allocated = KBox::pin_init(buddy.alloc_blocks(¶ms), GFP_KERNEL)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_16M as u64);
+//!
+//! for block in allocated.iter() {
+//! assert_eq!(block.offset(), 0);
+//! assert_eq!(block.order(), 12); // 2^12 pages
+//! assert_eq!(block.size(), SZ_16M as u64);
+//! }
+//! drop(allocated);
+//! assert_eq!(buddy.free_memory_bytes(), initial_free);
+//!
+//! // Test non-contiguous allocation with fragmented memory.
+//! // Create fragmentation by allocating 4MB blocks at [0,4M) and [8M,12M).
+//! params.end_range_address = SZ_4M as u64;
+//! params.size_bytes = SZ_4M as u64;
+//! params.min_block_size_bytes = SZ_4M as u64;
+//! let frag1 = KBox::pin_init(buddy.alloc_blocks(¶ms), GFP_KERNEL)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_4M as u64);
+//!
+//! params.start_range_address = SZ_8M as u64;
+//! params.end_range_address = (SZ_8M + SZ_4M) as u64;
+//! let frag2 = KBox::pin_init(buddy.alloc_blocks(¶ms), GFP_KERNEL)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_8M as u64);
+//!
+//! // Allocate 8MB without CONTIGUOUS - should return 2 blocks from the holes.
+//! params.start_range_address = 0;
+//! params.end_range_address = SZ_16M as u64;
+//! params.size_bytes = SZ_8M as u64;
+//! let fragmented = KBox::pin_init(buddy.alloc_blocks(¶ms), GFP_KERNEL)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - (SZ_16M) as u64);
+//!
+//! let (mut count, mut total) = (0u32, 0u64);
+//! for block in fragmented.iter() {
+//! // The 8MB allocation should return 2 blocks, each 4MB.
+//! assert_eq!(block.size(), SZ_4M as u64);
+//! total += block.size();
+//! count += 1;
+//! }
+//! assert_eq!(total, SZ_8M as u64);
+//! assert_eq!(count, 2);
+//! drop(fragmented);
+//! drop(frag2);
+//! drop(frag1);
+//! assert_eq!(buddy.free_memory_bytes(), initial_free);
+//!
+//! // Test CONTIGUOUS failure when only fragmented space available.
+//! // Create a small buddy allocator with only 16MB of memory.
+//! let small = GpuBuddy::new(GpuBuddyParams {
+//! base_offset_bytes: 0,
+//! physical_memory_size_bytes: SZ_16M as u64,
+//! chunk_size_bytes: SZ_4K as u64,
+//! })?;
+//!
+//! // Allocate 4MB blocks at [0,4M) and [8M,12M) to create fragmented memory.
+//! params.start_range_address = 0;
+//! params.end_range_address = SZ_4M as u64;
+//! params.size_bytes = SZ_4M as u64;
+//! let hole1 = KBox::pin_init(small.alloc_blocks(¶ms), GFP_KERNEL)?;
+//!
+//! params.start_range_address = SZ_8M as u64;
+//! params.end_range_address = (SZ_8M + SZ_4M) as u64;
+//! let hole2 = KBox::pin_init(small.alloc_blocks(¶ms), GFP_KERNEL)?;
+//!
+//! // 8MB contiguous should fail - only two non-contiguous 4MB holes exist.
+//! params.start_range_address = 0;
+//! params.end_range_address = 0;
+//! params.size_bytes = SZ_8M as u64;
+//! params.buddy_flags = BuddyFlags::try_new(BuddyFlags::CONTIGUOUS_ALLOCATION)?;
+//! let result = KBox::pin_init(small.alloc_blocks(¶ms), GFP_KERNEL);
+//! assert!(result.is_err());
+//! drop(hole2);
+//! drop(hole1);
+//!
+//! # Ok::<(), Error>(())
+//! ```
+
+use crate::{
+ bindings,
+ clist::CListHead,
+ clist_create,
+ error::to_result,
+ new_mutex,
+ prelude::*,
+ sync::{
+ lock::mutex::MutexGuard,
+ Arc,
+ Mutex, //
+ },
+ types::Opaque,
+};
+
+/// Flags for GPU buddy allocator operations.
+///
+/// These flags control the allocation behavior of the buddy allocator.
+#[derive(Clone, Copy, Default, PartialEq, Eq)]
+pub struct BuddyFlags(usize);
+
+impl BuddyFlags {
+ /// Range-based allocation from start to end addresses.
+ pub const RANGE_ALLOCATION: usize = bindings::GPU_BUDDY_RANGE_ALLOCATION;
+
+ /// Allocate from top of address space downward.
+ pub const TOPDOWN_ALLOCATION: usize = bindings::GPU_BUDDY_TOPDOWN_ALLOCATION;
+
+ /// Allocate physically contiguous blocks.
+ pub const CONTIGUOUS_ALLOCATION: usize = bindings::GPU_BUDDY_CONTIGUOUS_ALLOCATION;
+
+ /// Request allocation from the cleared (zeroed) memory. The zero'ing is not
+ /// done by the allocator, but by the caller before freeing old blocks.
+ pub const CLEAR_ALLOCATION: usize = bindings::GPU_BUDDY_CLEAR_ALLOCATION;
+
+ /// Disable trimming of partially used blocks.
+ pub const TRIM_DISABLE: usize = bindings::GPU_BUDDY_TRIM_DISABLE;
+
+ /// Mark blocks as cleared (zeroed) when freeing. When set during free,
+ /// indicates that the caller has already zeroed the memory.
+ pub const CLEARED: usize = bindings::GPU_BUDDY_CLEARED;
+
+ /// Create [`BuddyFlags`] from a raw value with validation.
+ ///
+ /// Use `|` operator to combine flags if needed, before calling this method.
+ pub fn try_new(flags: usize) -> Result<Self> {
+ // Flags must not exceed u32::MAX to satisfy the GPU buddy allocator C API.
+ if flags > u32::MAX as usize {
+ return Err(EINVAL);
+ }
+
+ // `TOPDOWN_ALLOCATION` only works without `RANGE_ALLOCATION`. When both are
+ // set, `TOPDOWN_ALLOCATION` is silently ignored by the allocator. Reject this.
+ if (flags & Self::RANGE_ALLOCATION) != 0 && (flags & Self::TOPDOWN_ALLOCATION) != 0 {
+ return Err(EINVAL);
+ }
+
+ Ok(Self(flags))
+ }
+
+ /// Get raw value of the flags.
+ pub(crate) fn as_raw(self) -> usize {
+ self.0
+ }
+}
+
+/// Parameters for creating a GPU buddy allocator.
+pub struct GpuBuddyParams {
+ /// Base offset in bytes where the managed memory region starts.
+ /// Allocations will be offset by this value.
+ pub base_offset_bytes: u64,
+ /// Total physical memory size managed by the allocator in bytes.
+ pub physical_memory_size_bytes: u64,
+ /// Minimum allocation unit / chunk size in bytes, must be >= 4KB.
+ pub chunk_size_bytes: u64,
+}
+
+/// Parameters for allocating blocks from a GPU buddy allocator.
+pub struct GpuBuddyAllocParams {
+ /// Start of allocation range in bytes. Use 0 for beginning.
+ pub start_range_address: u64,
+ /// End of allocation range in bytes. Use 0 for entire range.
+ pub end_range_address: u64,
+ /// Total size to allocate in bytes.
+ pub size_bytes: u64,
+ /// Minimum block size for fragmented allocations in bytes.
+ pub min_block_size_bytes: u64,
+ /// Buddy allocator behavior flags.
+ pub buddy_flags: BuddyFlags,
+}
+
+/// Inner structure holding the actual buddy allocator.
+///
+/// # Synchronization
+///
+/// The C `gpu_buddy` API requires synchronization (see `include/linux/gpu_buddy.h`).
+/// The internal [`GpuBuddyGuard`] ensures that the lock is held for all
+/// allocator and free operations, preventing races between concurrent allocations
+/// and the freeing that occurs when [`AllocatedBlocks`] is dropped.
+///
+/// # Invariants
+///
+/// The inner [`Opaque`] contains a valid, initialized buddy allocator.
+#[pin_data(PinnedDrop)]
+struct GpuBuddyInner {
+ #[pin]
+ inner: Opaque<bindings::gpu_buddy>,
+ // TODO: Replace `Mutex<()>` with `Mutex<Opaque<..>>` once `Mutex::new()`
+ // accepts `impl PinInit<T>`.
+ #[pin]
+ lock: Mutex<()>,
+ /// Base offset for all allocations (does not change after init).
+ base_offset: u64,
+ /// Cached chunk size (does not change after init).
+ chunk_size: u64,
+ /// Cached total size (does not change after init).
+ size: u64,
+}
+
+impl GpuBuddyInner {
+ /// Create a pin-initializer for the buddy allocator.
+ fn new(params: GpuBuddyParams) -> impl PinInit<Self, Error> {
+ let base_offset = params.base_offset_bytes;
+ let size = params.physical_memory_size_bytes;
+ let chunk_size = params.chunk_size_bytes;
+
+ try_pin_init!(Self {
+ inner <- Opaque::try_ffi_init(|ptr| {
+ // SAFETY: ptr points to valid uninitialized memory from the pin-init
+ // infrastructure. gpu_buddy_init will initialize the structure.
+ to_result(unsafe { bindings::gpu_buddy_init(ptr, size, chunk_size) })
+ }),
+ lock <- new_mutex!(()),
+ base_offset: base_offset,
+ chunk_size: chunk_size,
+ size: size,
+ })
+ }
+
+ /// Lock the mutex and return a guard for accessing the allocator.
+ fn lock(&self) -> GpuBuddyGuard<'_> {
+ GpuBuddyGuard {
+ inner: self,
+ _guard: self.lock.lock(),
+ }
+ }
+}
+
+#[pinned_drop]
+impl PinnedDrop for GpuBuddyInner {
+ fn drop(self: Pin<&mut Self>) {
+ let guard = self.lock();
+
+ // SAFETY: guard provides exclusive access to the allocator.
+ unsafe {
+ bindings::gpu_buddy_fini(guard.as_raw());
+ }
+ }
+}
+
+// SAFETY: [`GpuBuddyInner`] can be sent between threads.
+unsafe impl Send for GpuBuddyInner {}
+
+// SAFETY: [`GpuBuddyInner`] is `Sync` because the internal [`GpuBuddyGuard`]
+// serializes all access to the C allocator, preventing data races.
+unsafe impl Sync for GpuBuddyInner {}
+
+/// Guard that proves the lock is held, enabling access to the allocator.
+///
+/// # Invariants
+///
+/// The inner `_guard` holds the lock for the duration of this guard's lifetime.
+pub(crate) struct GpuBuddyGuard<'a> {
+ inner: &'a GpuBuddyInner,
+ _guard: MutexGuard<'a, ()>,
+}
+
+impl GpuBuddyGuard<'_> {
+ /// Get a raw pointer to the underlying C `gpu_buddy` structure.
+ fn as_raw(&self) -> *mut bindings::gpu_buddy {
+ self.inner.inner.get()
+ }
+}
+
+/// GPU buddy allocator instance.
+///
+/// This structure wraps the C `gpu_buddy` allocator using reference counting.
+/// The allocator is automatically cleaned up when all references are dropped.
+///
+/// # Invariants
+///
+/// The inner [`Arc`] points to a valid, initialized GPU buddy allocator.
+pub struct GpuBuddy(Arc<GpuBuddyInner>);
+
+impl GpuBuddy {
+ /// Create a new buddy allocator.
+ ///
+ /// Creates a buddy allocator that manages a contiguous address space of the given
+ /// size, with the specified minimum allocation unit (chunk_size must be at least 4KB).
+ pub fn new(params: GpuBuddyParams) -> Result<Self> {
+ Ok(Self(Arc::pin_init(
+ GpuBuddyInner::new(params),
+ GFP_KERNEL,
+ )?))
+ }
+
+ /// Get the base offset for allocations.
+ pub fn base_offset(&self) -> u64 {
+ self.0.base_offset
+ }
+
+ /// Get the chunk size (minimum allocation unit).
+ pub fn chunk_size(&self) -> u64 {
+ self.0.chunk_size
+ }
+
+ /// Get the total managed size.
+ pub fn size(&self) -> u64 {
+ self.0.size
+ }
+
+ /// Get the available (free) memory in bytes.
+ pub fn free_memory_bytes(&self) -> u64 {
+ let guard = self.0.lock();
+ // SAFETY: guard provides exclusive access to the allocator.
+ unsafe { (*guard.as_raw()).avail }
+ }
+
+ /// Allocate blocks from the buddy allocator.
+ ///
+ /// Returns a pin-initializer for [`AllocatedBlocks`].
+ ///
+ /// Takes `&self` instead of `&mut self` because the internal [`Mutex`] provides
+ /// synchronization - no external `&mut` exclusivity needed.
+ pub fn alloc_blocks(
+ &self,
+ params: &GpuBuddyAllocParams,
+ ) -> impl PinInit<AllocatedBlocks, Error> {
+ let buddy_arc = Arc::clone(&self.0);
+ let start = params.start_range_address;
+ let end = params.end_range_address;
+ let size = params.size_bytes;
+ let min_block_size = params.min_block_size_bytes;
+ let flags = params.buddy_flags;
+
+ // Create pin-initializer that initializes list and allocates blocks.
+ try_pin_init!(AllocatedBlocks {
+ buddy: buddy_arc,
+ list <- CListHead::new(),
+ flags: flags,
+ _: {
+ // Lock while allocating to serialize with concurrent frees.
+ let guard = buddy.lock();
+
+ // SAFETY: `guard` provides exclusive access to the buddy allocator.
+ to_result(unsafe {
+ bindings::gpu_buddy_alloc_blocks(
+ guard.as_raw(),
+ start,
+ end,
+ size,
+ min_block_size,
+ list.as_raw(),
+ flags.as_raw(),
+ )
+ })?
+ }
+ })
+ }
+}
+
+/// Allocated blocks from the buddy allocator with automatic cleanup.
+///
+/// This structure owns a list of allocated blocks and ensures they are
+/// automatically freed when dropped. Use `iter()` to iterate over all
+/// allocated [`Block`] structures.
+///
+/// # Invariants
+///
+/// - `list` is an initialized, valid list head containing allocated blocks.
+/// - `buddy` references a valid [`GpuBuddyInner`].
+#[pin_data(PinnedDrop)]
+pub struct AllocatedBlocks {
+ #[pin]
+ list: CListHead,
+ buddy: Arc<GpuBuddyInner>,
+ flags: BuddyFlags,
+}
+
+impl AllocatedBlocks {
+ /// Check if the block list is empty.
+ pub fn is_empty(&self) -> bool {
+ // An empty list head points to itself.
+ !self.list.is_linked()
+ }
+
+ /// Iterate over allocated blocks.
+ ///
+ /// Returns an iterator yielding [`AllocatedBlock`] references. The blocks
+ /// are only valid for the duration of the borrow of `self`.
+ pub fn iter(&self) -> impl Iterator<Item = AllocatedBlock<'_>> + '_ {
+ // SAFETY: list contains gpu_buddy_block items linked via __bindgen_anon_1.link.
+ let clist = unsafe {
+ clist_create!(
+ self.list.as_raw(),
+ Block,
+ bindings::gpu_buddy_block,
+ __bindgen_anon_1.link
+ )
+ };
+
+ clist
+ .iter()
+ .map(|block| AllocatedBlock { block, alloc: self })
+ }
+}
+
+#[pinned_drop]
+impl PinnedDrop for AllocatedBlocks {
+ fn drop(self: Pin<&mut Self>) {
+ let guard = self.buddy.lock();
+
+ // SAFETY:
+ // - list is valid per the type's invariants.
+ // - guard provides exclusive access to the allocator.
+ // CAST: BuddyFlags were validated to fit in u32 at construction.
+ unsafe {
+ bindings::gpu_buddy_free_list(
+ guard.as_raw(),
+ self.list.as_raw(),
+ self.flags.as_raw() as u32,
+ );
+ }
+ }
+}
+
+/// A GPU buddy block.
+///
+/// Transparent wrapper over C `gpu_buddy_block` structure. This type is returned
+/// as references from [`CListIter`] during iteration over [`AllocatedBlocks`].
+///
+/// # Invariants
+///
+/// The inner [`Opaque`] contains a valid, allocated `gpu_buddy_block`.
+#[repr(transparent)]
+pub struct Block(Opaque<bindings::gpu_buddy_block>);
+
+impl Block {
+ /// Get a raw pointer to the underlying C block.
+ fn as_raw(&self) -> *mut bindings::gpu_buddy_block {
+ self.0.get()
+ }
+
+ /// Get the block's offset in the address space.
+ pub(crate) fn offset(&self) -> u64 {
+ // SAFETY: self.as_raw() is valid per the type's invariants.
+ unsafe { bindings::gpu_buddy_block_offset(self.as_raw()) }
+ }
+
+ /// Get the block order.
+ pub(crate) fn order(&self) -> u32 {
+ // SAFETY: self.as_raw() is valid per the type's invariants.
+ unsafe { bindings::gpu_buddy_block_order(self.as_raw()) }
+ }
+}
+
+// SAFETY: `Block` is a transparent wrapper over `gpu_buddy_block` which is not
+// modified after allocation. It can be safely sent between threads.
+unsafe impl Send for Block {}
+
+// SAFETY: `Block` is a transparent wrapper over `gpu_buddy_block` which is not
+// modified after allocation. It can be safely shared among threads.
+unsafe impl Sync for Block {}
+
+/// An allocated block with access to the allocation list.
+///
+/// # Invariants
+///
+/// - `block` is a valid reference to an allocated [`Block`].
+/// - `alloc` is a valid reference to the [`AllocatedBlocks`] that owns this block.
+pub struct AllocatedBlock<'a> {
+ block: &'a Block,
+ alloc: &'a AllocatedBlocks,
+}
+
+impl AllocatedBlock<'_> {
+ /// Get the block's offset in the address space.
+ ///
+ /// Returns the absolute offset including the allocator's base offset.
+ /// This is the actual address to use for accessing the allocated memory.
+ pub fn offset(&self) -> u64 {
+ self.alloc.buddy.base_offset + self.block.offset()
+ }
+
+ /// Get the block order (size = chunk_size << order).
+ pub fn order(&self) -> u32 {
+ self.block.order()
+ }
+
+ /// Get the block's size in bytes.
+ pub fn size(&self) -> u64 {
+ self.alloc.buddy.chunk_size << self.block.order()
+ }
+}
diff --git a/rust/kernel/gpu/mod.rs b/rust/kernel/gpu/mod.rs
new file mode 100644
index 000000000000..8f25e6367edc
--- /dev/null
+++ b/rust/kernel/gpu/mod.rs
@@ -0,0 +1,5 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! GPU subsystem abstractions.
+
+pub mod buddy;
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index fe711d34ca1e..02ec5b9b22c8 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -98,6 +98,8 @@
pub mod firmware;
pub mod fmt;
pub mod fs;
+#[cfg(CONFIG_GPU_BUDDY)]
+pub mod gpu;
#[cfg(CONFIG_I2C = "y")]
pub mod i2c;
pub mod id_pool;
--
2.34.1
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Claude review: rust: gpu: Add GPU buddy allocator bindings
2026-02-10 23:32 ` [PATCH -next v9 2/3] rust: gpu: Add " Joel Fernandes
@ 2026-02-12 20:27 ` Claude Code Review Bot
0 siblings, 0 replies; 15+ messages in thread
From: Claude Code Review Bot @ 2026-02-12 20:27 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
> +pub struct BuddyFlags(usize);
> +
> + pub fn try_new(flags: usize) -> Result<Self> {
> + // Flags must not exceed u32::MAX to satisfy the GPU buddy allocator C API.
> + if flags > u32::MAX as usize {
> + return Err(EINVAL);
> + }
The comment says "satisfy the GPU buddy allocator C API," but looking at the C API, `gpu_buddy_alloc_blocks` takes `unsigned long flags` (which on 64-bit is 8 bytes, same as `usize`), while `gpu_buddy_free_list` takes `unsigned int flags` (4 bytes). The validation against `u32::MAX` is therefore only necessary for the free path; the alloc path could accept the full `unsigned long` range. This is not a bug, since the check is merely more restrictive, but the comment is somewhat inaccurate about why the bound exists.
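The two checks `try_new()` performs can be reproduced as a standalone sketch (the flag values below are stand-ins, not the real `GPU_BUDDY_*` constants from bindings):

```rust
// Stand-in flag values for illustration; the patch takes these from bindgen.
const RANGE_ALLOCATION: usize = 1 << 0;
const TOPDOWN_ALLOCATION: usize = 1 << 1;

// Sketch of the validation in `BuddyFlags::try_new`.
fn try_new(flags: usize) -> Result<usize, &'static str> {
    // The free path passes flags as `unsigned int`, so reject anything wider.
    if flags > u32::MAX as usize {
        return Err("flags exceed u32 range");
    }
    // TOPDOWN is silently ignored by the C allocator when RANGE is also set;
    // the binding rejects the combination up front instead.
    if (flags & RANGE_ALLOCATION) != 0 && (flags & TOPDOWN_ALLOCATION) != 0 {
        return Err("RANGE and TOPDOWN are mutually exclusive");
    }
    Ok(flags)
}
```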
> + pub fn alloc_blocks(
> + &self,
> + params: &GpuBuddyAllocParams,
> + ) -> impl PinInit<AllocatedBlocks, Error> {
> + ...
> + try_pin_init!(AllocatedBlocks {
> + buddy: buddy_arc,
> + list <- CListHead::new(),
> + flags: flags,
> + _: {
> + let guard = buddy.lock();
> + to_result(unsafe {
> + bindings::gpu_buddy_alloc_blocks(
> + guard.as_raw(),
> + start,
> + end,
> + size,
> + min_block_size,
> + list.as_raw(),
> + flags.as_raw(),
> + )
> + })?
> + }
> + })
> + }
The `try_pin_init!` pattern with `_:` initializer block is interesting - it runs the allocation after the list and buddy are already initialized. If `gpu_buddy_alloc_blocks` fails, the `try_pin_init!` should properly drop the already-initialized fields. The `list` will have `INIT_LIST_HEAD` called on it, and the `buddy` `Arc` clone will be dropped. This looks correct.
One concern: `flags.as_raw()` returns `usize`, but the C function `gpu_buddy_alloc_blocks` takes `unsigned long flags`. On 64-bit Linux, `usize` and `unsigned long` are both 64-bit, so this is fine. But for the free path:
> +#[pinned_drop]
> +impl PinnedDrop for AllocatedBlocks {
> + fn drop(self: Pin<&mut Self>) {
> + let guard = self.buddy.lock();
> + unsafe {
> + bindings::gpu_buddy_free_list(
> + guard.as_raw(),
> + self.list.as_raw(),
> + self.flags.as_raw() as u32,
> + );
> + }
> + }
> +}
The `as u32` cast is safe because `try_new()` validated flags fit in u32. No issue here.
> +impl AllocatedBlock<'_> {
> + pub fn size(&self) -> u64 {
> + self.alloc.buddy.chunk_size << self.block.order()
> + }
> +}
This mirrors the C implementation `mm->chunk_size << drm_buddy_block_order(block)`. The `order()` returns `u32`. Shifting a `u64` left by a `u32` - if `order()` returned a value >= 64, this would be undefined behavior in C but would panic in Rust debug builds or produce 0 in release builds. However, since the buddy allocator constrains orders based on the address space size and chunk size, and `DRM_BUDDY_HEADER_ORDER` is masked to 6 bits (max 63), values above 63 should not occur in practice. For a 1GB allocator with 4KB chunk size, max order would be 18. This is not a real concern.
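The order bound computed above can be checked directly (assuming power-of-two sizes; `max_order` and `block_size` are illustrative helpers, not patch APIs):

```rust
// Maximum block order for a buddy space: log2(size / chunk_size).
// Assumes both arguments are powers of two, as the allocator requires.
fn max_order(size: u64, chunk_size: u64) -> u32 {
    (size / chunk_size).ilog2()
}

// Block size as computed in the patch: chunk_size << order.
fn block_size(chunk_size: u64, order: u32) -> u64 {
    chunk_size << order
}
```

For a 1 GiB space with 4 KiB chunks this gives a maximum order of 18, and an order-12 block is 16 MiB — both matching the doctest's assertions and comfortably below the 6-bit (max 63) order field.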
> +#[pinned_drop]
> +impl PinnedDrop for GpuBuddyInner {
> + fn drop(self: Pin<&mut Self>) {
> + let guard = self.lock();
> + unsafe {
> + bindings::gpu_buddy_fini(guard.as_raw());
> + }
> + }
> +}
This calls `lock()` on `self` inside `drop`. Since `drop` takes `Pin<&mut Self>`, and `lock()` takes `&self`, this works via auto-deref. The mutex lock is acquired to serialize with any concurrent `AllocatedBlocks` drops that might be running on other threads. However - if this is the last reference to the `Arc<GpuBuddyInner>` (which it must be, since `drop` is running), then no other thread can hold a reference, meaning there cannot be concurrent `AllocatedBlocks` drops. So the lock acquisition here is technically unnecessary but harmless - it provides defense-in-depth.
> + fn lock(&self) -> GpuBuddyGuard<'_> {
> + GpuBuddyGuard {
> + inner: self,
> + _guard: self.lock.lock(),
> + }
> + }
One readability note: the method is named `lock` and the field is also named `lock`. `self.lock` refers to the field (type `Mutex<()>`), and `self.lock.lock()` calls `Mutex::lock()` on that field. This is unambiguous — Rust resolves the field access first, then the method on the field's type — but it is confusing to read. Not a bug, just a naming observation.
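A minimal userspace demonstration (standard-library `Mutex`, not the kernel types) shows that a field and an inherent method may share the name `lock` without ambiguity:

```rust
use std::sync::{Mutex, MutexGuard};

// `Inner` is a stand-in for the kernel `GpuBuddyInner` pattern.
struct Inner {
    lock: Mutex<()>,
}

impl Inner {
    // Method named `lock` alongside a field named `lock`: `self.lock`
    // resolves to the field, and the trailing `.lock()` is then a method
    // call on the field's type (`Mutex::lock`).
    fn lock(&self) -> MutexGuard<'_, ()> {
        self.lock.lock().unwrap()
    }
}
```

Note that inside the method, `self.lock()` would recurse; the explicit `self.lock.lock()` form is what keeps the two resolutions distinct.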
The `rust_helper_gpu_buddy_block_offset` helper takes `const struct gpu_buddy_block *block`, but the Rust `Block::offset()` method calls it with `self.as_raw()` which returns `*mut`. Passing `*mut` where `*const` is expected is fine in C (implicit conversion), and bindgen should handle this correctly. No issue.
Overall this is a solid set of bindings. The synchronization model is sound, the lifetimes are correctly constrained, and the cleanup paths are properly handled.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH -next v8 0/3] rust: Add CList and GPU buddy allocator bindings
@ 2026-02-09 21:42 Joel Fernandes
2026-02-09 21:42 ` [PATCH -next v8 2/3] rust: gpu: Add " Joel Fernandes
0 siblings, 1 reply; 15+ messages in thread
From: Joel Fernandes @ 2026-02-09 21:42 UTC (permalink / raw)
To: linux-kernel
Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller,
Danilo Krummrich, Alice Ryhl, Miguel Ojeda, Alex Gaynor,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Trevor Gross, John Hubbard, Alistair Popple,
Timur Tabi, Edwin Peer, Alexandre Courbot, Andrea Righi,
Andy Ritger, Zhi Wang, Balbir Singh, Philipp Stanner,
Elle Rhumsaa, Daniel Almeida, joel, nouveau, dri-devel,
rust-for-linux, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
This series provides Rust infrastructure needed for nova-core memory management:
a CList module for interfacing with C linked lists, GPU buddy allocator bindings
for physical memory management in nova-core, and a nova-core Kconfig change to
select GPU_BUDDY.
The clist patch (patch 1) is independent and can go through any tree. The other
two patches depend on the DRM buddy code movement patch as a prerequisite, a
version of which is now in drm-misc-next:
https://lore.kernel.org/all/20260206003451.1914130-1-joelagnelf@nvidia.com/
Based on linux-next.
v7->v8:
Various changes suggested by Danilo, Gary, Daniel. Added tags.
Link to v7: https://lore.kernel.org/all/20260206004110.1914814-1-joelagnelf@nvidia.com/
Joel Fernandes (3):
rust: clist: Add support to interface with C linked lists
rust: gpu: Add GPU buddy allocator bindings
nova-core: mm: Select GPU_BUDDY for VRAM allocation
MAINTAINERS | 7 +
drivers/gpu/nova-core/Kconfig | 1 +
rust/bindings/bindings_helper.h | 11 +
rust/helpers/gpu.c | 23 ++
rust/helpers/helpers.c | 2 +
rust/helpers/list.c | 17 +
rust/kernel/clist.rs | 320 +++++++++++++++++++
rust/kernel/gpu/buddy.rs | 530 ++++++++++++++++++++++++++++++++
rust/kernel/gpu/mod.rs | 5 +
rust/kernel/lib.rs | 3 +
10 files changed, 919 insertions(+)
create mode 100644 rust/helpers/gpu.c
create mode 100644 rust/helpers/list.c
create mode 100644 rust/kernel/clist.rs
create mode 100644 rust/kernel/gpu/buddy.rs
create mode 100644 rust/kernel/gpu/mod.rs
--
2.34.1
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH -next v8 2/3] rust: gpu: Add GPU buddy allocator bindings
2026-02-09 21:42 [PATCH -next v8 0/3] rust: Add CList and " Joel Fernandes
@ 2026-02-09 21:42 ` Joel Fernandes
2026-02-11 6:52 ` Claude review: " Claude Code Review Bot
0 siblings, 1 reply; 15+ messages in thread
From: Joel Fernandes @ 2026-02-09 21:42 UTC (permalink / raw)
To: linux-kernel
Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller,
Danilo Krummrich, Alice Ryhl, Miguel Ojeda, Alex Gaynor,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Trevor Gross, John Hubbard, Alistair Popple,
Timur Tabi, Edwin Peer, Alexandre Courbot, Andrea Righi,
Andy Ritger, Zhi Wang, Balbir Singh, Philipp Stanner,
Elle Rhumsaa, Daniel Almeida, joel, nouveau, dri-devel,
rust-for-linux, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
Add safe Rust abstractions over the Linux kernel's GPU buddy
allocator for physical memory management. The GPU buddy allocator
implements a binary buddy system useful for GPU physical memory
allocation. nova-core will use it for physical memory allocation.
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
rust/bindings/bindings_helper.h | 11 +
rust/helpers/gpu.c | 23 ++
rust/helpers/helpers.c | 1 +
rust/kernel/gpu/buddy.rs | 530 ++++++++++++++++++++++++++++++++
rust/kernel/gpu/mod.rs | 5 +
rust/kernel/lib.rs | 2 +
6 files changed, 572 insertions(+)
create mode 100644 rust/helpers/gpu.c
create mode 100644 rust/kernel/gpu/buddy.rs
create mode 100644 rust/kernel/gpu/mod.rs
diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index 083cc44aa952..dbb765a9fdbd 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -29,6 +29,7 @@
#include <linux/hrtimer_types.h>
#include <linux/acpi.h>
+#include <linux/gpu_buddy.h>
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
@@ -146,6 +147,16 @@ const vm_flags_t RUST_CONST_HELPER_VM_MIXEDMAP = VM_MIXEDMAP;
const vm_flags_t RUST_CONST_HELPER_VM_HUGEPAGE = VM_HUGEPAGE;
const vm_flags_t RUST_CONST_HELPER_VM_NOHUGEPAGE = VM_NOHUGEPAGE;
+#if IS_ENABLED(CONFIG_GPU_BUDDY)
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_RANGE_ALLOCATION = GPU_BUDDY_RANGE_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_TOPDOWN_ALLOCATION = GPU_BUDDY_TOPDOWN_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CONTIGUOUS_ALLOCATION =
+ GPU_BUDDY_CONTIGUOUS_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CLEAR_ALLOCATION = GPU_BUDDY_CLEAR_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CLEARED = GPU_BUDDY_CLEARED;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_TRIM_DISABLE = GPU_BUDDY_TRIM_DISABLE;
+#endif
+
#if IS_ENABLED(CONFIG_ANDROID_BINDER_IPC_RUST)
#include "../../drivers/android/binder/rust_binder.h"
#include "../../drivers/android/binder/rust_binder_events.h"
diff --git a/rust/helpers/gpu.c b/rust/helpers/gpu.c
new file mode 100644
index 000000000000..38b1a4e6bef8
--- /dev/null
+++ b/rust/helpers/gpu.c
@@ -0,0 +1,23 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/gpu_buddy.h>
+
+#ifdef CONFIG_GPU_BUDDY
+
+__rust_helper u64 rust_helper_gpu_buddy_block_offset(const struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_offset(block);
+}
+
+__rust_helper unsigned int rust_helper_gpu_buddy_block_order(struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_order(block);
+}
+
+__rust_helper u64 rust_helper_gpu_buddy_block_size(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_size(mm, block);
+}
+
+#endif /* CONFIG_GPU_BUDDY */
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 724fcb8240ac..a53929ce52a3 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -32,6 +32,7 @@
#include "err.c"
#include "irq.c"
#include "fs.c"
+#include "gpu.c"
#include "io.c"
#include "jump_label.c"
#include "kunit.c"
diff --git a/rust/kernel/gpu/buddy.rs b/rust/kernel/gpu/buddy.rs
new file mode 100644
index 000000000000..00290ce53aeb
--- /dev/null
+++ b/rust/kernel/gpu/buddy.rs
@@ -0,0 +1,530 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! GPU buddy allocator bindings.
+//!
+//! C header: [`include/linux/gpu_buddy.h`](srctree/include/linux/gpu_buddy.h)
+//!
+//! This module provides Rust abstractions over the Linux kernel's GPU buddy
+//! allocator, which implements a binary buddy memory allocator.
+//!
+//! The buddy allocator manages a contiguous address space and allocates blocks
+//! in power-of-two sizes, useful for GPU physical memory management.
+//!
+//! # Examples
+//!
+//! ```
+//! use kernel::{
+//! gpu::buddy::{BuddyFlags, GpuBuddy, GpuBuddyAllocParams, GpuBuddyParams},
+//! prelude::*,
+//! sizes::*, //
+//! };
+//!
+//! // Create a 1GB buddy allocator with 4KB minimum chunk size.
+//! let buddy = GpuBuddy::new(&GpuBuddyParams {
+//! base_offset_bytes: 0,
+//! physical_memory_size_bytes: SZ_1G as u64,
+//! chunk_size_bytes: SZ_4K as u64,
+//! })?;
+//!
+//! // Verify initial state.
+//! assert_eq!(buddy.size(), SZ_1G as u64);
+//! assert_eq!(buddy.chunk_size(), SZ_4K as u64);
+//! let initial_free = buddy.free_memory_bytes();
+//!
+//! // Base allocation params - mutated between calls for field overrides.
+//! let mut params = GpuBuddyAllocParams {
+//! start_range_address: 0,
+//! end_range_address: 0, // Entire range.
+//! size_bytes: SZ_16M as u64,
+//! min_block_size_bytes: SZ_16M as u64,
+//! buddy_flags: BuddyFlags::try_new(BuddyFlags::RANGE_ALLOCATION)?,
+//! };
+//!
+//! // Test top-down allocation (allocates from highest addresses).
+//! params.buddy_flags = BuddyFlags::try_new(BuddyFlags::TOPDOWN_ALLOCATION)?;
+//! let topdown = buddy.alloc_blocks(¶ms)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_16M as u64);
+//!
+//! for block in topdown.iter() {
+//! assert_eq!(block.offset(), (SZ_1G - SZ_16M) as u64);
+//! assert_eq!(block.order(), 12); // 2^12 pages
+//! assert_eq!(block.size(), SZ_16M as u64);
+//! }
+//! drop(topdown);
+//! assert_eq!(buddy.free_memory_bytes(), initial_free);
+//!
+//! // Allocate 16MB - should result in a single 16MB block at offset 0.
+//! params.buddy_flags = BuddyFlags::try_new(BuddyFlags::RANGE_ALLOCATION)?;
+//! let allocated = buddy.alloc_blocks(¶ms)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_16M as u64);
+//!
+//! for block in allocated.iter() {
+//! assert_eq!(block.offset(), 0);
+//! assert_eq!(block.order(), 12); // 2^12 pages
+//! assert_eq!(block.size(), SZ_16M as u64);
+//! }
+//! drop(allocated);
+//! assert_eq!(buddy.free_memory_bytes(), initial_free);
+//!
+//! // Test non-contiguous allocation with fragmented memory.
+//! // Create fragmentation by allocating 4MB blocks at [0,4M) and [8M,12M).
+//! params.end_range_address = SZ_4M as u64;
+//! params.size_bytes = SZ_4M as u64;
+//! params.min_block_size_bytes = SZ_4M as u64;
+//! let frag1 = buddy.alloc_blocks(¶ms)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_4M as u64);
+//!
+//! params.start_range_address = SZ_8M as u64;
+//! params.end_range_address = (SZ_8M + SZ_4M) as u64;
+//! let frag2 = buddy.alloc_blocks(¶ms)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_8M as u64);
+//!
+//! // Allocate 8MB without CONTIGUOUS - should return 2 blocks from the holes.
+//! params.start_range_address = 0;
+//! params.end_range_address = SZ_16M as u64;
+//! params.size_bytes = SZ_8M as u64;
+//! let fragmented = buddy.alloc_blocks(¶ms)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - (SZ_16M) as u64);
+//!
+//! let (mut count, mut total) = (0u32, 0u64);
+//! for block in fragmented.iter() {
+//! // The 8MB allocation should return 2 blocks, each 4MB.
+//! assert_eq!(block.size(), SZ_4M as u64);
+//! total += block.size();
+//! count += 1;
+//! }
+//! assert_eq!(total, SZ_8M as u64);
+//! assert_eq!(count, 2);
+//! drop(fragmented);
+//! drop(frag2);
+//! drop(frag1);
+//! assert_eq!(buddy.free_memory_bytes(), initial_free);
+//!
+//! // Test CONTIGUOUS failure when only fragmented space available.
+//! // Create a small buddy allocator with only 16MB of memory.
+//! let small = GpuBuddy::new(&GpuBuddyParams {
+//! base_offset_bytes: 0,
+//! physical_memory_size_bytes: SZ_16M as u64,
+//! chunk_size_bytes: SZ_4K as u64,
+//! })?;
+//!
+//! // Allocate 4MB blocks at [0,4M) and [8M,12M) to create fragmented memory.
+//! params.start_range_address = 0;
+//! params.end_range_address = SZ_4M as u64;
+//! params.size_bytes = SZ_4M as u64;
+//! let hole1 = small.alloc_blocks(¶ms)?;
+//!
+//! params.start_range_address = SZ_8M as u64;
+//! params.end_range_address = (SZ_8M + SZ_4M) as u64;
+//! let hole2 = small.alloc_blocks(¶ms)?;
+//!
+//! // 8MB contiguous should fail - only two non-contiguous 4MB holes exist.
+//! params.start_range_address = 0;
+//! params.end_range_address = 0;
+//! params.size_bytes = SZ_8M as u64;
+//! params.buddy_flags = BuddyFlags::try_new(BuddyFlags::CONTIGUOUS_ALLOCATION)?;
+//! let result = small.alloc_blocks(¶ms);
+//! assert!(result.is_err());
+//! drop(hole2);
+//! drop(hole1);
+//!
+//! # Ok::<(), Error>(())
+//! ```
+
+use crate::{
+ bindings,
+ clist::CListHead,
+ clist_create,
+ error::to_result,
+ new_mutex,
+ prelude::*,
+ sync::{
+ lock::mutex::MutexGuard,
+ Arc,
+ Mutex, //
+ },
+ types::Opaque,
+};
+
+/// Flags for GPU buddy allocator operations.
+///
+/// These flags control the allocation behavior of the buddy allocator.
+#[derive(Clone, Copy, Default, PartialEq, Eq)]
+pub struct BuddyFlags(usize);
+
+impl BuddyFlags {
+ /// Range-based allocation from start to end addresses.
+ pub const RANGE_ALLOCATION: usize = bindings::GPU_BUDDY_RANGE_ALLOCATION;
+
+ /// Allocate from top of address space downward.
+ pub const TOPDOWN_ALLOCATION: usize = bindings::GPU_BUDDY_TOPDOWN_ALLOCATION;
+
+ /// Allocate physically contiguous blocks.
+ pub const CONTIGUOUS_ALLOCATION: usize = bindings::GPU_BUDDY_CONTIGUOUS_ALLOCATION;
+
+ /// Request allocation from the cleared (zeroed) memory. The zero'ing is not
+ /// done by the allocator, but by the caller before freeing old blocks.
+ pub const CLEAR_ALLOCATION: usize = bindings::GPU_BUDDY_CLEAR_ALLOCATION;
+
+ /// Disable trimming of partially used blocks.
+ pub const TRIM_DISABLE: usize = bindings::GPU_BUDDY_TRIM_DISABLE;
+
+ /// Mark blocks as cleared (zeroed) when freeing. When set during free,
+ /// indicates that the caller has already zeroed the memory.
+ pub const CLEARED: usize = bindings::GPU_BUDDY_CLEARED;
+
+ /// Create [`BuddyFlags`] from a raw value with validation.
+ ///
+ /// Use `|` operator to combine flags if needed, before calling this method.
+ pub fn try_new(flags: usize) -> Result<Self> {
+ // Flags must not exceed u32::MAX to satisfy the GPU buddy allocator C API.
+ if flags > u32::MAX as usize {
+ return Err(EINVAL);
+ }
+
+ // `TOPDOWN_ALLOCATION` only works without `RANGE_ALLOCATION`. When both are
+ // set, `TOPDOWN_ALLOCATION` is silently ignored by the allocator. Reject this.
+ if (flags & Self::RANGE_ALLOCATION) != 0 && (flags & Self::TOPDOWN_ALLOCATION) != 0 {
+ return Err(EINVAL);
+ }
+
+ Ok(Self(flags))
+ }
+
+ /// Get raw value of the flags.
+ pub(crate) fn as_raw(self) -> usize {
+ self.0
+ }
+}
+
+/// Parameters for creating a GPU buddy allocator.
+pub struct GpuBuddyParams {
+ /// Base offset in bytes where the managed memory region starts.
+ /// Allocations will be offset by this value.
+ pub base_offset_bytes: u64,
+ /// Total physical memory size managed by the allocator in bytes.
+ pub physical_memory_size_bytes: u64,
+ /// Minimum allocation unit / chunk size in bytes, must be >= 4KB.
+ pub chunk_size_bytes: u64,
+}
+
+/// Parameters for allocating blocks from a GPU buddy allocator.
+pub struct GpuBuddyAllocParams {
+ /// Start of allocation range in bytes. Use 0 for beginning.
+ pub start_range_address: u64,
+ /// End of allocation range in bytes. Use 0 for entire range.
+ pub end_range_address: u64,
+ /// Total size to allocate in bytes.
+ pub size_bytes: u64,
+ /// Minimum block size for fragmented allocations in bytes.
+ pub min_block_size_bytes: u64,
+ /// Buddy allocator behavior flags.
+ pub buddy_flags: BuddyFlags,
+}
+
+/// Inner structure holding the actual buddy allocator.
+///
+/// # Synchronization
+///
+/// The C `gpu_buddy` API requires synchronization (see `include/linux/gpu_buddy.h`).
+/// The internal [`GpuBuddyGuard`] ensures that the lock is held for all
+/// allocator and free operations, preventing races between concurrent allocations
+/// and the freeing that occurs when [`AllocatedBlocks`] is dropped.
+///
+/// # Invariants
+///
+/// The inner [`Opaque`] contains a valid, initialized buddy allocator.
+#[pin_data(PinnedDrop)]
+struct GpuBuddyInner {
+ #[pin]
+ inner: Opaque<bindings::gpu_buddy>,
+ #[pin]
+ lock: Mutex<()>,
+ /// Base offset for all allocations (does not change after init).
+ base_offset: u64,
+ /// Cached chunk size (does not change after init).
+ chunk_size: u64,
+ /// Cached total size (does not change after init).
+ size: u64,
+}
+
+impl GpuBuddyInner {
+ /// Create a pin-initializer for the buddy allocator.
+ fn new(params: &GpuBuddyParams) -> impl PinInit<Self, Error> {
+ let base_offset = params.base_offset_bytes;
+ let size = params.physical_memory_size_bytes;
+ let chunk_size = params.chunk_size_bytes;
+
+ try_pin_init!(Self {
+ inner <- Opaque::try_ffi_init(|ptr| {
+ // SAFETY: ptr points to valid uninitialized memory from the pin-init
+ // infrastructure. gpu_buddy_init will initialize the structure.
+ to_result(unsafe { bindings::gpu_buddy_init(ptr, size, chunk_size) })
+ }),
+ lock <- new_mutex!(()),
+ base_offset: base_offset,
+ chunk_size: chunk_size,
+ size: size,
+ })
+ }
+
+ /// Lock the mutex and return a guard for accessing the allocator.
+ fn lock(&self) -> GpuBuddyGuard<'_> {
+ GpuBuddyGuard {
+ inner: self,
+ _guard: self.lock.lock(),
+ }
+ }
+}
+
+#[pinned_drop]
+impl PinnedDrop for GpuBuddyInner {
+ fn drop(self: Pin<&mut Self>) {
+ let guard = self.lock();
+
+ // SAFETY: guard provides exclusive access to the allocator.
+ unsafe {
+ bindings::gpu_buddy_fini(guard.as_raw());
+ }
+ }
+}
+
+// SAFETY: [`GpuBuddyInner`] can be sent between threads.
+unsafe impl Send for GpuBuddyInner {}
+
+// SAFETY: [`GpuBuddyInner`] is `Sync` because the internal [`GpuBuddyGuard`]
+// serializes all access to the C allocator, preventing data races.
+unsafe impl Sync for GpuBuddyInner {}
+
+/// Guard that proves the lock is held, enabling access to the allocator.
+///
+/// # Invariants
+///
+/// The inner `_guard` holds the lock for the duration of this guard's lifetime.
+pub(crate) struct GpuBuddyGuard<'a> {
+ inner: &'a GpuBuddyInner,
+ _guard: MutexGuard<'a, ()>,
+}
+
+impl GpuBuddyGuard<'_> {
+ /// Get a raw pointer to the underlying C `gpu_buddy` structure.
+ fn as_raw(&self) -> *mut bindings::gpu_buddy {
+ self.inner.inner.get()
+ }
+}
+
+/// GPU buddy allocator instance.
+///
+/// This structure wraps the C `gpu_buddy` allocator using reference counting.
+/// The allocator is automatically cleaned up when all references are dropped.
+///
+/// # Invariants
+///
+/// The inner [`Arc`] points to a valid, initialized GPU buddy allocator.
+pub struct GpuBuddy(Arc<GpuBuddyInner>);
+
+impl GpuBuddy {
+ /// Create a new buddy allocator.
+ ///
+ /// Creates a buddy allocator that manages a contiguous address space of the given
+ /// size, with the specified minimum allocation unit (chunk_size must be at least 4KB).
+ pub fn new(params: &GpuBuddyParams) -> Result<Self> {
+ Ok(Self(Arc::pin_init(
+ GpuBuddyInner::new(params),
+ GFP_KERNEL,
+ )?))
+ }
+
+ /// Get the base offset for allocations.
+ pub fn base_offset(&self) -> u64 {
+ self.0.base_offset
+ }
+
+ /// Get the chunk size (minimum allocation unit).
+ pub fn chunk_size(&self) -> u64 {
+ self.0.chunk_size
+ }
+
+ /// Get the total managed size.
+ pub fn size(&self) -> u64 {
+ self.0.size
+ }
+
+ /// Get the available (free) memory in bytes.
+ pub fn free_memory_bytes(&self) -> u64 {
+ let guard = self.0.lock();
+ // SAFETY: guard provides exclusive access to the allocator.
+ unsafe { (*guard.as_raw()).avail }
+ }
+
+ /// Allocate blocks from the buddy allocator.
+ ///
+ /// Returns an [`Arc<AllocatedBlocks>`] structure that owns the allocated blocks
+ /// and automatically frees them when all references are dropped.
+ ///
+ /// Takes `&self` instead of `&mut self` because the internal [`Mutex`] provides
+ /// synchronization - no external `&mut` exclusivity needed.
+ pub fn alloc_blocks(&self, params: &GpuBuddyAllocParams) -> Result<Arc<AllocatedBlocks>> {
+ let buddy_arc = Arc::clone(&self.0);
+
+ // Create pin-initializer that initializes list and allocates blocks.
+ let init = try_pin_init!(AllocatedBlocks {
+ buddy: Arc::clone(&buddy_arc),
+ list <- CListHead::new(),
+ flags: params.buddy_flags,
+ _: {
+ // Lock while allocating to serialize with concurrent frees.
+ let guard = buddy.lock();
+
+ // SAFETY: `guard` provides exclusive access to the buddy allocator.
+ to_result(unsafe {
+ bindings::gpu_buddy_alloc_blocks(
+ guard.as_raw(),
+ params.start_range_address,
+ params.end_range_address,
+ params.size_bytes,
+ params.min_block_size_bytes,
+ list.as_raw(),
+ params.buddy_flags.as_raw(),
+ )
+ })?
+ }
+ });
+
+ Arc::pin_init(init, GFP_KERNEL)
+ }
+}
+
+/// Allocated blocks from the buddy allocator with automatic cleanup.
+///
+/// This structure owns a list of allocated blocks and ensures they are
+/// automatically freed when dropped. Use `iter()` to iterate over all
+/// allocated [`Block`] structures.
+///
+/// # Invariants
+///
+/// - `list` is an initialized, valid list head containing allocated blocks.
+/// - `buddy` references a valid [`GpuBuddyInner`].
+#[pin_data(PinnedDrop)]
+pub struct AllocatedBlocks {
+ #[pin]
+ list: CListHead,
+ buddy: Arc<GpuBuddyInner>,
+ flags: BuddyFlags,
+}
+
+impl AllocatedBlocks {
+ /// Check if the block list is empty.
+ pub fn is_empty(&self) -> bool {
+ // An empty list head points to itself.
+ !self.list.is_linked()
+ }
+
+ /// Iterate over allocated blocks.
+ ///
+ /// Returns an iterator yielding [`AllocatedBlock`] references. The blocks
+ /// are only valid for the duration of the borrow of `self`.
+ pub fn iter(&self) -> impl Iterator<Item = AllocatedBlock<'_>> + '_ {
+ // SAFETY: list contains gpu_buddy_block items linked via __bindgen_anon_1.link.
+ let clist = unsafe {
+ clist_create!(
+ self.list.as_raw(),
+ Block,
+ bindings::gpu_buddy_block,
+ __bindgen_anon_1.link
+ )
+ };
+
+ clist
+ .iter()
+ .map(|block| AllocatedBlock { block, alloc: self })
+ }
+}
+
+#[pinned_drop]
+impl PinnedDrop for AllocatedBlocks {
+ fn drop(self: Pin<&mut Self>) {
+ let guard = self.buddy.lock();
+
+ // SAFETY:
+ // - list is valid per the type's invariants.
+ // - guard provides exclusive access to the allocator.
+ // CAST: BuddyFlags were validated to fit in u32 at construction.
+ unsafe {
+ bindings::gpu_buddy_free_list(
+ guard.as_raw(),
+ self.list.as_raw(),
+ self.flags.as_raw() as u32,
+ );
+ }
+ }
+}
+
+/// A GPU buddy block.
+///
+/// Transparent wrapper over C `gpu_buddy_block` structure. This type is returned
+/// as references from [`CListIter`] during iteration over [`AllocatedBlocks`].
+///
+/// # Invariants
+///
+/// The inner [`Opaque`] contains a valid, allocated `gpu_buddy_block`.
+#[repr(transparent)]
+pub struct Block(Opaque<bindings::gpu_buddy_block>);
+
+impl Block {
+ /// Get a raw pointer to the underlying C block.
+ fn as_raw(&self) -> *mut bindings::gpu_buddy_block {
+ self.0.get()
+ }
+
+ /// Get the block's offset in the address space.
+ pub(crate) fn offset(&self) -> u64 {
+ // SAFETY: self.as_raw() is valid per the type's invariants.
+ unsafe { bindings::gpu_buddy_block_offset(self.as_raw()) }
+ }
+
+ /// Get the block order.
+ pub(crate) fn order(&self) -> u32 {
+ // SAFETY: self.as_raw() is valid per the type's invariants.
+ unsafe { bindings::gpu_buddy_block_order(self.as_raw()) }
+ }
+}
+
+// SAFETY: `Block` is a transparent wrapper over `gpu_buddy_block` which is not
+// modified after allocation. It can be safely sent between threads.
+unsafe impl Send for Block {}
+
+// SAFETY: `Block` is a transparent wrapper over `gpu_buddy_block` which is not
+// modified after allocation. It can be safely shared among threads.
+unsafe impl Sync for Block {}
+
+/// An allocated block with access to the allocation list.
+///
+/// # Invariants
+///
+/// - `block` is a valid reference to an allocated [`Block`].
+/// - `alloc` is a valid reference to the [`AllocatedBlocks`] that owns this block.
+pub struct AllocatedBlock<'a> {
+ block: &'a Block,
+ alloc: &'a AllocatedBlocks,
+}
+
+impl AllocatedBlock<'_> {
+ /// Get the block's offset in the address space.
+ ///
+ /// Returns the absolute offset including the allocator's base offset.
+ /// This is the actual address to use for accessing the allocated memory.
+ pub fn offset(&self) -> u64 {
+ self.alloc.buddy.base_offset + self.block.offset()
+ }
+
+ /// Get the block order (size = chunk_size << order).
+ pub fn order(&self) -> u32 {
+ self.block.order()
+ }
+
+ /// Get the block's size in bytes.
+ pub fn size(&self) -> u64 {
+ self.alloc.buddy.chunk_size << self.block.order()
+ }
+}
diff --git a/rust/kernel/gpu/mod.rs b/rust/kernel/gpu/mod.rs
new file mode 100644
index 000000000000..8f25e6367edc
--- /dev/null
+++ b/rust/kernel/gpu/mod.rs
@@ -0,0 +1,5 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! GPU subsystem abstractions.
+
+pub mod buddy;
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index fe711d34ca1e..02ec5b9b22c8 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -98,6 +98,8 @@
pub mod firmware;
pub mod fmt;
pub mod fs;
+#[cfg(CONFIG_GPU_BUDDY)]
+pub mod gpu;
#[cfg(CONFIG_I2C = "y")]
pub mod i2c;
pub mod id_pool;
--
2.34.1
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Claude review: rust: gpu: Add GPU buddy allocator bindings
2026-02-09 21:42 ` [PATCH -next v8 2/3] rust: gpu: Add " Joel Fernandes
@ 2026-02-11 6:52 ` Claude Code Review Bot
0 siblings, 0 replies; 15+ messages in thread
From: Claude Code Review Bot @ 2026-02-11 6:52 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
**Critical API Design Issue - Arc Return Type:**
```rust
+ pub fn alloc_blocks(&self, params: &GpuBuddyAllocParams) -> Result<Arc<AllocatedBlocks>> {
```
**Problem:** Forcing `Arc<AllocatedBlocks>` prevents drivers from embedding allocation metadata in larger structures, requiring two allocations where one should suffice (identified by Danilo Krummrich, acknowledged by author).
**Recommendation:** Change to:
```rust
pub fn alloc_blocks(&self, params: &GpuBuddyAllocParams) -> impl PinInit<AllocatedBlocks, Error>
```
This allows drivers to choose their container (Box, Arc, or embedded).
**Parameter Passing Pattern:**
```rust
+ fn new(params: &GpuBuddyParams) -> impl PinInit<Self, Error> {
```
**Problem:** `GpuBuddyParams` is constructed inline and never reused in examples. Passing by reference is unnecessarily restrictive.
**Recommendation:** Pass by value for better ownership semantics:
```rust
fn new(params: GpuBuddyParams) -> impl PinInit<Self, Error>
```
**Mutex Placement Question:**
```rust
+#[pin_data(PinnedDrop)]
+struct GpuBuddyInner {
+ #[pin]
+ inner: Opaque<bindings::gpu_buddy>,
+ #[pin]
+ lock: Mutex<()>,
```
**Question:** Why isn't the mutex wrapping the `Opaque<gpu_buddy>` instead of being a sibling? Author confirms it's because `Mutex` doesn't take `impl PinInit`.
**Recommendation:** Add TODO comment:
```rust
/// TODO: Ideally `lock` should wrap `inner` as `Mutex<Opaque<bindings::gpu_buddy>>`,
/// but `Mutex::new()` doesn't accept `impl PinInit` yet.
lock: Mutex<()>,
```
**Excellent Flags Validation:**
```rust
+ // `TOPDOWN_ALLOCATION` only works without `RANGE_ALLOCATION`. When both are
+ // set, `TOPDOWN_ALLOCATION` is silently ignored by the allocator. Reject this.
+ if (flags & Self::RANGE_ALLOCATION) != 0 && (flags & Self::TOPDOWN_ALLOCATION) != 0 {
+ return Err(EINVAL);
+ }
```
**Assessment:** This is exactly the kind of safety improvement Rust bindings should provide - catching incompatible combinations the C API would silently ignore.
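The rejection can be exercised in a standalone sketch (flag values hypothetical, not the real `GPU_BUDDY_*` constants):

```rust
const RANGE_ALLOCATION: usize = 1 << 0;
const TOPDOWN_ALLOCATION: usize = 1 << 1;

fn try_new(flags: usize) -> Result<usize, &'static str> {
    // Flags must fit in u32 for the C API.
    if flags > u32::MAX as usize {
        return Err("EINVAL");
    }
    // Incompatible combination: the C allocator would silently ignore TOPDOWN.
    if (flags & RANGE_ALLOCATION) != 0 && (flags & TOPDOWN_ALLOCATION) != 0 {
        return Err("EINVAL");
    }
    Ok(flags)
}

fn main() {
    assert!(try_new(RANGE_ALLOCATION).is_ok());
    assert!(try_new(TOPDOWN_ALLOCATION).is_ok());
    assert!(try_new(RANGE_ALLOCATION | TOPDOWN_ALLOCATION).is_err());
}
```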
**Synchronization Correctness:**
```rust
+ fn lock(&self) -> GpuBuddyGuard<'_> {
+ GpuBuddyGuard {
+ inner: self,
+ _guard: self.lock.lock(),
+ }
+ }
```
Guard pattern correctly ensures mutex is held for all C allocator access.
**Cleanup Correctness:**
```rust
+#[pinned_drop]
+impl PinnedDrop for AllocatedBlocks {
+ fn drop(self: Pin<&mut Self>) {
+ let guard = self.buddy.lock();
+ unsafe {
+ bindings::gpu_buddy_free_list(
+ guard.as_raw(),
+ self.list.as_raw(),
+ self.flags.as_raw() as u32,
+ );
+ }
+ }
+}
```
Correct - lock held during free, cast safe due to validation in `try_new()`.
**Verdict:** Core implementation is sound, but needs API design changes (return type, parameter passing) acknowledged by author.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 15+ messages in thread
end of thread, other threads:[~2026-03-21 17:56 UTC | newest]
Thread overview: 15+ messages
2026-03-20 4:57 [PATCH v14 0/2] Rust GPU buddy allocator bindings Joel Fernandes
2026-03-20 4:57 ` [PATCH v14 1/2] rust: gpu: Add " Joel Fernandes
2026-03-20 12:16 ` Alexandre Courbot
2026-03-20 13:04 ` Gary Guo
2026-03-20 19:50 ` Joel Fernandes
2026-03-21 17:56 ` Claude review: " Claude Code Review Bot
2026-03-20 4:57 ` [PATCH v14 2/2] MAINTAINERS: gpu: buddy: Update reviewer Joel Fernandes
2026-03-20 12:16 ` Alexandre Courbot
2026-03-21 0:06 ` Joel Fernandes
2026-03-21 17:56 ` Claude review: " Claude Code Review Bot
2026-03-21 17:56 ` Claude review: Rust GPU buddy allocator bindings Claude Code Review Bot
-- strict thread matches above, loose matches on Subject: below --
2026-03-08 18:04 [PATCH v12 0/1] " Joel Fernandes
2026-03-08 18:04 ` [PATCH v12 1/1] rust: gpu: Add " Joel Fernandes
2026-03-08 21:42 ` Claude review: " Claude Code Review Bot
2026-02-24 22:40 [PATCH v11 0/4] Rust " Joel Fernandes
2026-02-24 22:40 ` [PATCH v11 4/4] rust: gpu: Add " Joel Fernandes
2026-02-27 4:31 ` Claude review: " Claude Code Review Bot
2026-02-10 23:32 [PATCH -next v9 0/3] rust: Add CList and " Joel Fernandes
2026-02-10 23:32 ` [PATCH -next v9 2/3] rust: gpu: Add " Joel Fernandes
2026-02-12 20:27 ` Claude review: " Claude Code Review Bot
2026-02-09 21:42 [PATCH -next v8 0/3] rust: Add CList and " Joel Fernandes
2026-02-09 21:42 ` [PATCH -next v8 2/3] rust: gpu: Add " Joel Fernandes
2026-02-11 6:52 ` Claude review: " Claude Code Review Bot