* [PATCH -next v9 0/3] rust: Add CList and GPU buddy allocator bindings
@ 2026-02-10 23:32 Joel Fernandes
2026-02-10 23:32 ` [PATCH -next v9 1/3] rust: clist: Add support to interface with C linked lists Joel Fernandes
` (4 more replies)
0 siblings, 5 replies; 14+ messages in thread
From: Joel Fernandes @ 2026-02-10 23:32 UTC
To: linux-kernel
Cc: Maarten Lankhorst, Maxime Ripard, Simona Vetter, Jonathan Corbet,
Alex Deucher, Christian König, Jani Nikula, Joonas Lahtinen,
Rodrigo Vivi, Tvrtko Ursulin, Huang Rui, Matthew Auld,
Matthew Brost, Lucas De Marchi, Thomas Hellström,
Helge Deller, Danilo Krummrich, Alice Ryhl, Miguel Ojeda,
Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, Alistair Popple,
Alexandre Courbot, Andrea Righi, Zhi Wang, Philipp Stanner,
Elle Rhumsaa, Daniel Almeida, David Airlie, Edwin Peer,
John Hubbard, Andy Ritger, Balbir Singh, Timur Tabi, joel,
nouveau, dri-devel, rust-for-linux, linux-doc, amd-gfx, intel-gfx,
intel-xe, linux-fbdev, Joel Fernandes
This series provides a CList module for interfacing with C linked lists,
GPU buddy allocator bindings for physical memory management in nova-core,
and a nova-core Kconfig change to select GPU_BUDDY.
The clist patch (patch 1) is independent and can go through any tree. The
other two patches depend on the DRM buddy code movement patch as a
prerequisite, a version of which is now in drm-misc-next:
https://lore.kernel.org/all/20260206003451.1914130-1-joelagnelf@nvidia.com/
Based on linux-next.
The git tree with all patches can be found at:
git://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git (tag: clist-gpu-buddy-v9-20260210)
Link to v8: https://lore.kernel.org/all/20260209214246.2783990-1-joelagnelf@nvidia.com/
Link to v7: https://lore.kernel.org/all/20260206004110.1914814-1-joelagnelf@nvidia.com/
Joel Fernandes (3):
rust: clist: Add support to interface with C linked lists
rust: gpu: Add GPU buddy allocator bindings
nova-core: mm: Select GPU_BUDDY for VRAM allocation
MAINTAINERS | 7 +
drivers/gpu/nova-core/Kconfig | 3 +-
rust/bindings/bindings_helper.h | 11 +
rust/helpers/gpu.c | 23 ++
rust/helpers/helpers.c | 2 +
rust/helpers/list.c | 17 +
rust/kernel/clist.rs | 320 +++++++++++++++++++
rust/kernel/gpu/buddy.rs | 537 ++++++++++++++++++++++++++++++++
rust/kernel/gpu/mod.rs | 5 +
rust/kernel/lib.rs | 3 +
10 files changed, 927 insertions(+), 1 deletion(-)
create mode 100644 rust/helpers/gpu.c
create mode 100644 rust/helpers/list.c
create mode 100644 rust/kernel/clist.rs
create mode 100644 rust/kernel/gpu/buddy.rs
create mode 100644 rust/kernel/gpu/mod.rs
base-commit: fd9678829d6dd0c10fde080b536abf4b1121c346
prerequisite-patch-id: 51e9eb2490026debebe75b8a0a9ce0c3991cd580
--
2.34.1
* [PATCH -next v9 1/3] rust: clist: Add support to interface with C linked lists
2026-02-10 23:32 [PATCH -next v9 0/3] rust: Add CList and GPU buddy allocator bindings Joel Fernandes
@ 2026-02-10 23:32 ` Joel Fernandes
2026-02-11 9:23 ` Danilo Krummrich
2026-02-12 20:27 ` Claude review: " Claude Code Review Bot
2026-02-10 23:32 ` [PATCH -next v9 2/3] rust: gpu: Add GPU buddy allocator bindings Joel Fernandes
` (3 subsequent siblings)
4 siblings, 2 replies; 14+ messages in thread
From: Joel Fernandes @ 2026-02-10 23:32 UTC
To: linux-kernel
Cc: Maarten Lankhorst, Maxime Ripard, Simona Vetter, Jonathan Corbet,
Alex Deucher, Christian König, Jani Nikula, Joonas Lahtinen,
Rodrigo Vivi, Tvrtko Ursulin, Huang Rui, Matthew Auld,
Matthew Brost, Lucas De Marchi, Thomas Hellström,
Helge Deller, Danilo Krummrich, Alice Ryhl, Miguel Ojeda,
Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, Alistair Popple,
Alexandre Courbot, Andrea Righi, Zhi Wang, Philipp Stanner,
Elle Rhumsaa, Daniel Almeida, David Airlie, Edwin Peer,
John Hubbard, Andy Ritger, Balbir Singh, Timur Tabi, joel,
nouveau, dri-devel, rust-for-linux, linux-doc, amd-gfx, intel-gfx,
intel-xe, linux-fbdev, Joel Fernandes
Add a new module `clist` for working with C's circular doubly linked
lists. Low-level iteration over raw list nodes is provided, and typed
iteration over the actual items is available through the `clist_create`
macro, which assists in creating the `CList` type.
Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
Acked-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
MAINTAINERS | 7 +
rust/helpers/helpers.c | 1 +
rust/helpers/list.c | 17 +++
rust/kernel/clist.rs | 320 +++++++++++++++++++++++++++++++++++++++++
rust/kernel/lib.rs | 1 +
5 files changed, 346 insertions(+)
create mode 100644 rust/helpers/list.c
create mode 100644 rust/kernel/clist.rs
diff --git a/MAINTAINERS b/MAINTAINERS
index 7cfb766112cd..b0050b478dc9 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -23205,6 +23205,13 @@ S: Maintained
T: git https://github.com/Rust-for-Linux/linux.git rust-analyzer-next
F: scripts/generate_rust_analyzer.py
+RUST TO C LIST INTERFACES
+M: Joel Fernandes <joelagnelf@nvidia.com>
+M: Alexandre Courbot <acourbot@nvidia.com>
+L: rust-for-linux@vger.kernel.org
+S: Maintained
+F: rust/kernel/clist.rs
+
RXRPC SOCKETS (AF_RXRPC)
M: David Howells <dhowells@redhat.com>
M: Marc Dionne <marc.dionne@auristor.com>
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index a3c42e51f00a..724fcb8240ac 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -35,6 +35,7 @@
#include "io.c"
#include "jump_label.c"
#include "kunit.c"
+#include "list.c"
#include "maple_tree.c"
#include "mm.c"
#include "mutex.c"
diff --git a/rust/helpers/list.c b/rust/helpers/list.c
new file mode 100644
index 000000000000..4c1f9c111ec8
--- /dev/null
+++ b/rust/helpers/list.c
@@ -0,0 +1,17 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Helpers for the C circular doubly linked list implementation.
+ */
+
+#include <linux/list.h>
+
+__rust_helper void rust_helper_INIT_LIST_HEAD(struct list_head *list)
+{
+ INIT_LIST_HEAD(list);
+}
+
+__rust_helper void rust_helper_list_add_tail(struct list_head *new, struct list_head *head)
+{
+ list_add_tail(new, head);
+}
diff --git a/rust/kernel/clist.rs b/rust/kernel/clist.rs
new file mode 100644
index 000000000000..8aa72b5d54be
--- /dev/null
+++ b/rust/kernel/clist.rs
@@ -0,0 +1,320 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! A C circular doubly linked intrusive list interface for Rust code.
+//!
+//! # Examples
+//!
+//! ```
+//! use kernel::{
+//! bindings,
+//! clist_create,
+//! types::Opaque, //
+//! };
+//! # // Create test list with values (0, 10, 20) - normally done by C code but it is
+//! # // emulated here for doctests using the C bindings.
+//! # use core::mem::MaybeUninit;
+//! #
+//! # /// C struct with embedded `list_head` (typically will be allocated by C code).
+//! # #[repr(C)]
+//! # pub struct SampleItemC {
+//! # pub value: i32,
+//! # pub link: bindings::list_head,
+//! # }
+//! #
+//! # let mut head = MaybeUninit::<bindings::list_head>::uninit();
+//! #
+//! # let head = head.as_mut_ptr();
+//! # // SAFETY: head and all the items are test objects allocated in this scope.
+//! # unsafe { bindings::INIT_LIST_HEAD(head) };
+//! #
+//! # let mut items = [
+//! # MaybeUninit::<SampleItemC>::uninit(),
+//! # MaybeUninit::<SampleItemC>::uninit(),
+//! # MaybeUninit::<SampleItemC>::uninit(),
+//! # ];
+//! #
+//! # for (i, item) in items.iter_mut().enumerate() {
+//! # let ptr = item.as_mut_ptr();
+//! # // SAFETY: pointers are to allocated test objects with a list_head field.
+//! # unsafe {
+//! # (*ptr).value = i as i32 * 10;
+//! # // &raw mut computes address of link directly as link is uninitialized.
+//! # bindings::INIT_LIST_HEAD(&raw mut (*ptr).link);
+//! # bindings::list_add_tail(&mut (*ptr).link, head);
+//! # }
+//! # }
+//!
+//! // Rust wrapper for the C struct.
+//! // The list item struct in this example is defined in C code as:
+//! // struct SampleItemC {
+//! // int value;
+//! // struct list_head link;
+//! // };
+//! //
+//! #[repr(transparent)]
+//! pub struct Item(Opaque<SampleItemC>);
+//!
+//! impl Item {
+//! pub fn value(&self) -> i32 {
+//! // SAFETY: [`Item`] has same layout as [`SampleItemC`].
+//! unsafe { (*self.0.get()).value }
+//! }
+//! }
+//!
+//! // Create typed [`CList`] from sentinel head.
+//! // SAFETY: head is valid, items are [`SampleItemC`] with embedded `link` field.
+//! let list = unsafe { clist_create!(head, Item, SampleItemC, link) };
+//!
+//! // Iterate directly over typed items.
+//! let mut found_0 = false;
+//! let mut found_10 = false;
+//! let mut found_20 = false;
+//!
+//! for item in list.iter() {
+//! let val = item.value();
+//! if val == 0 { found_0 = true; }
+//! if val == 10 { found_10 = true; }
+//! if val == 20 { found_20 = true; }
+//! }
+//!
+//! assert!(found_0 && found_10 && found_20);
+//! ```
+
+use core::{
+ iter::FusedIterator,
+ marker::PhantomData, //
+};
+
+use crate::{
+ bindings,
+ types::Opaque, //
+};
+
+use pin_init::{
+ pin_data,
+ pin_init,
+ PinInit //
+};
+
+/// Wraps a `list_head` object for use in intrusive linked lists.
+///
+/// # Invariants
+///
+/// - [`CListHead`] represents an allocated and valid `list_head` structure.
+#[pin_data]
+#[repr(transparent)]
+pub struct CListHead {
+ #[pin]
+ inner: Opaque<bindings::list_head>,
+}
+
+impl CListHead {
+ /// Create a `&CListHead` reference from a raw `list_head` pointer.
+ ///
+ /// # Safety
+ ///
+ /// - `ptr` must be a valid pointer to an allocated and initialized `list_head` structure.
+ /// - `ptr` must remain valid and unmodified for the lifetime `'a`.
+ /// - The list and all linked `list_head` nodes must not be modified by non-Rust code
+ /// for the lifetime `'a`.
+ #[inline]
+ pub unsafe fn from_raw<'a>(ptr: *mut bindings::list_head) -> &'a Self {
+ // SAFETY:
+ // - [`CListHead`] has same layout as `list_head`.
+ // - `ptr` is valid and unmodified for 'a per caller guarantees.
+ unsafe { &*ptr.cast() }
+ }
+
+ /// Get the raw `list_head` pointer.
+ #[inline]
+ pub fn as_raw(&self) -> *mut bindings::list_head {
+ self.inner.get()
+ }
+
+ /// Get the next [`CListHead`] in the list.
+ #[inline]
+ pub fn next(&self) -> &Self {
+ let raw = self.as_raw();
+ // SAFETY:
+ // - `self.as_raw()` is valid per type invariants.
+ // - The `next` pointer is guaranteed to be non-NULL.
+ unsafe { Self::from_raw((*raw).next) }
+ }
+
+ /// Check if this node is linked in a list (not isolated).
+ #[inline]
+ pub fn is_linked(&self) -> bool {
+ let raw = self.as_raw();
+ // SAFETY: self.as_raw() is valid per type invariants.
+ unsafe { (*raw).next != raw && (*raw).prev != raw }
+ }
+
+ /// Pin-initializer that initializes the list head.
+ pub fn new() -> impl PinInit<Self> {
+ pin_init!(Self {
+ // SAFETY: `INIT_LIST_HEAD` initializes `slot` to a valid empty list.
+ inner <- Opaque::ffi_init(|slot| unsafe { bindings::INIT_LIST_HEAD(slot) }),
+ })
+ }
+}
+
+// SAFETY: [`CListHead`] can be sent to any thread.
+unsafe impl Send for CListHead {}
+
+// SAFETY: [`CListHead`] can be shared among threads as it is not modified
+// by non-Rust code per safety requirements of [`CListHead::from_raw`].
+unsafe impl Sync for CListHead {}
+
+impl PartialEq for CListHead {
+ #[inline]
+ fn eq(&self, other: &Self) -> bool {
+ core::ptr::eq(self, other)
+ }
+}
+
+impl Eq for CListHead {}
+
+/// Low-level iterator over `list_head` nodes.
+///
+/// An iterator used to iterate over a C intrusive linked list (`list_head`). Caller has to
+/// perform conversion of returned [`CListHead`] to an item (using `container_of` macro or similar).
+///
+/// # Invariants
+///
+/// [`CListHeadIter`] is iterating over an allocated, initialized and valid list.
+struct CListHeadIter<'a> {
+ /// Current position in the list.
+ current: &'a CListHead,
+ /// The sentinel head (used to detect end of iteration).
+ sentinel: &'a CListHead,
+}
+
+impl<'a> Iterator for CListHeadIter<'a> {
+ type Item = &'a CListHead;
+
+ #[inline]
+ fn next(&mut self) -> Option<Self::Item> {
+ // Check if we've reached the sentinel (end of list).
+ if self.current == self.sentinel {
+ return None;
+ }
+
+ let item = self.current;
+ self.current = item.next();
+ Some(item)
+ }
+}
+
+impl<'a> FusedIterator for CListHeadIter<'a> {}
+
+/// A typed C linked list with a sentinel head.
+///
+/// The sentinel head represents the entire linked list and can be used for
+/// iteration over items of type `T`; it is not associated with any specific item.
+///
+/// The const generic `OFFSET` specifies the byte offset of the `list_head` field within
+/// the struct that `T` wraps.
+///
+/// # Invariants
+///
+/// - The [`CListHead`] is an allocated and valid sentinel C `list_head` structure.
+/// - `OFFSET` is the byte offset of the `list_head` field within the struct that `T` wraps.
+/// - All the list's `list_head` nodes are allocated and have valid next/prev pointers.
+#[repr(transparent)]
+pub struct CList<T, const OFFSET: usize>(CListHead, PhantomData<T>);
+
+impl<T, const OFFSET: usize> CList<T, OFFSET> {
+ /// Create a typed [`CList`] reference from a raw sentinel `list_head` pointer.
+ ///
+ /// # Safety
+ ///
+ /// - `ptr` must be a valid pointer to an allocated and initialized `list_head` structure
+ /// representing a list sentinel.
+ /// - `ptr` must remain valid and unmodified for the lifetime `'a`.
+ /// - The list must contain items where the `list_head` field is at byte offset `OFFSET`.
+ /// - `T` must be `#[repr(transparent)]` over the C struct.
+ #[inline]
+ pub unsafe fn from_raw<'a>(ptr: *mut bindings::list_head) -> &'a Self {
+ // SAFETY:
+ // - [`CList`] has same layout as [`CListHead`] due to repr(transparent).
+ // - Caller guarantees `ptr` is a valid, sentinel `list_head` object.
+ unsafe { &*ptr.cast() }
+ }
+
+ /// Check if the list is empty.
+ #[inline]
+ pub fn is_empty(&self) -> bool {
+ !self.0.is_linked()
+ }
+
+ /// Create an iterator over typed items.
+ #[inline]
+ pub fn iter(&self) -> CListIter<'_, T, OFFSET> {
+ let head = &self.0;
+ CListIter {
+ head_iter: CListHeadIter {
+ current: head.next(),
+ sentinel: head,
+ },
+ _phantom: PhantomData,
+ }
+ }
+}
+
+/// High-level iterator over typed list items.
+pub struct CListIter<'a, T, const OFFSET: usize> {
+ head_iter: CListHeadIter<'a>,
+ _phantom: PhantomData<&'a T>,
+}
+
+impl<'a, T, const OFFSET: usize> Iterator for CListIter<'a, T, OFFSET> {
+ type Item = &'a T;
+
+ fn next(&mut self) -> Option<Self::Item> {
+ let head = self.head_iter.next()?;
+
+ // Convert to item using OFFSET.
+ // SAFETY: per the type invariants, `OFFSET` (computed via `offset_of!`) is the
+ // byte offset of the `list_head` field, so subtracting it yields a valid `T`.
+ Some(unsafe { &*head.as_raw().byte_sub(OFFSET).cast::<T>() })
+ }
+}
+
+impl<'a, T, const OFFSET: usize> FusedIterator for CListIter<'a, T, OFFSET> {}
+
+/// Create a C doubly-circular linked list interface `CList` from a raw `list_head` pointer.
+///
+/// This macro creates a `CList<T, OFFSET>` that can iterate over items of type `$rust_type`
+/// linked via the `$field` field in the underlying C struct `$c_type`.
+///
+/// # Arguments
+///
+/// - `$head`: Raw pointer to the sentinel `list_head` object (`*mut bindings::list_head`).
+/// - `$rust_type`: Each item's rust wrapper type.
+/// - `$c_type`: Each item's C struct type that contains the embedded `list_head`.
+/// - `$field`: The name of the `list_head` field within the C struct.
+///
+/// # Safety
+///
+/// This is an unsafe macro. The caller must ensure:
+///
+/// - `$head` is a valid, initialized sentinel `list_head` pointing to a list that remains
+/// unmodified for the lifetime of the Rust `CList`.
+/// - The list contains items of type `$c_type` linked via an embedded `$field`.
+/// - `$rust_type` is `#[repr(transparent)]` over `$c_type` or has compatible layout.
+///
+/// # Examples
+///
+/// Refer to the examples in this module's documentation.
+#[macro_export]
+macro_rules! clist_create {
+ ($head:expr, $rust_type:ty, $c_type:ty, $($field:tt).+) => {{
+ // Compile-time check that field path is a list_head.
+ let _: fn(*const $c_type) -> *const $crate::bindings::list_head =
+ |p| &raw const (*p).$($field).+;
+
+ // Calculate offset and create `CList`.
+ const OFFSET: usize = ::core::mem::offset_of!($c_type, $($field).+);
+ $crate::clist::CList::<$rust_type, OFFSET>::from_raw($head)
+ }};
+}
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index 3da92f18f4ee..fe711d34ca1e 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -75,6 +75,7 @@
pub mod bug;
#[doc(hidden)]
pub mod build_assert;
+pub mod clist;
pub mod clk;
#[cfg(CONFIG_CONFIGFS_FS)]
pub mod configfs;
--
2.34.1
* [PATCH -next v9 2/3] rust: gpu: Add GPU buddy allocator bindings
2026-02-10 23:32 [PATCH -next v9 0/3] rust: Add CList and GPU buddy allocator bindings Joel Fernandes
2026-02-10 23:32 ` [PATCH -next v9 1/3] rust: clist: Add support to interface with C linked lists Joel Fernandes
@ 2026-02-10 23:32 ` Joel Fernandes
2026-02-12 20:27 ` Claude review: " Claude Code Review Bot
2026-02-10 23:32 ` [PATCH -next v9 3/3] nova-core: mm: Select GPU_BUDDY for VRAM allocation Joel Fernandes
` (2 subsequent siblings)
4 siblings, 1 reply; 14+ messages in thread
From: Joel Fernandes @ 2026-02-10 23:32 UTC
To: linux-kernel
Cc: Maarten Lankhorst, Maxime Ripard, Simona Vetter, Jonathan Corbet,
Alex Deucher, Christian König, Jani Nikula, Joonas Lahtinen,
Rodrigo Vivi, Tvrtko Ursulin, Huang Rui, Matthew Auld,
Matthew Brost, Lucas De Marchi, Thomas Hellström,
Helge Deller, Danilo Krummrich, Alice Ryhl, Miguel Ojeda,
Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, Alistair Popple,
Alexandre Courbot, Andrea Righi, Zhi Wang, Philipp Stanner,
Elle Rhumsaa, Daniel Almeida, David Airlie, Edwin Peer,
John Hubbard, Andy Ritger, Balbir Singh, Timur Tabi, joel,
nouveau, dri-devel, rust-for-linux, linux-doc, amd-gfx, intel-gfx,
intel-xe, linux-fbdev, Joel Fernandes
Add safe Rust abstractions over the Linux kernel's GPU buddy
allocator. The allocator implements a binary buddy system suited to
managing GPU physical memory; nova-core will use these bindings for
VRAM allocation.
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
rust/bindings/bindings_helper.h | 11 +
rust/helpers/gpu.c | 23 ++
rust/helpers/helpers.c | 1 +
rust/kernel/gpu/buddy.rs | 537 ++++++++++++++++++++++++++++++++
rust/kernel/gpu/mod.rs | 5 +
rust/kernel/lib.rs | 2 +
6 files changed, 579 insertions(+)
create mode 100644 rust/helpers/gpu.c
create mode 100644 rust/kernel/gpu/buddy.rs
create mode 100644 rust/kernel/gpu/mod.rs
diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index 083cc44aa952..dbb765a9fdbd 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -29,6 +29,7 @@
#include <linux/hrtimer_types.h>
#include <linux/acpi.h>
+#include <linux/gpu_buddy.h>
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
@@ -146,6 +147,16 @@ const vm_flags_t RUST_CONST_HELPER_VM_MIXEDMAP = VM_MIXEDMAP;
const vm_flags_t RUST_CONST_HELPER_VM_HUGEPAGE = VM_HUGEPAGE;
const vm_flags_t RUST_CONST_HELPER_VM_NOHUGEPAGE = VM_NOHUGEPAGE;
+#if IS_ENABLED(CONFIG_GPU_BUDDY)
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_RANGE_ALLOCATION = GPU_BUDDY_RANGE_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_TOPDOWN_ALLOCATION = GPU_BUDDY_TOPDOWN_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CONTIGUOUS_ALLOCATION =
+ GPU_BUDDY_CONTIGUOUS_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CLEAR_ALLOCATION = GPU_BUDDY_CLEAR_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CLEARED = GPU_BUDDY_CLEARED;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_TRIM_DISABLE = GPU_BUDDY_TRIM_DISABLE;
+#endif
+
#if IS_ENABLED(CONFIG_ANDROID_BINDER_IPC_RUST)
#include "../../drivers/android/binder/rust_binder.h"
#include "../../drivers/android/binder/rust_binder_events.h"
diff --git a/rust/helpers/gpu.c b/rust/helpers/gpu.c
new file mode 100644
index 000000000000..38b1a4e6bef8
--- /dev/null
+++ b/rust/helpers/gpu.c
@@ -0,0 +1,23 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/gpu_buddy.h>
+
+#ifdef CONFIG_GPU_BUDDY
+
+__rust_helper u64 rust_helper_gpu_buddy_block_offset(const struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_offset(block);
+}
+
+__rust_helper unsigned int rust_helper_gpu_buddy_block_order(struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_order(block);
+}
+
+__rust_helper u64 rust_helper_gpu_buddy_block_size(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_size(mm, block);
+}
+
+#endif /* CONFIG_GPU_BUDDY */
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 724fcb8240ac..a53929ce52a3 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -32,6 +32,7 @@
#include "err.c"
#include "irq.c"
#include "fs.c"
+#include "gpu.c"
#include "io.c"
#include "jump_label.c"
#include "kunit.c"
diff --git a/rust/kernel/gpu/buddy.rs b/rust/kernel/gpu/buddy.rs
new file mode 100644
index 000000000000..102414eba65a
--- /dev/null
+++ b/rust/kernel/gpu/buddy.rs
@@ -0,0 +1,537 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! GPU buddy allocator bindings.
+//!
+//! C header: [`include/linux/gpu_buddy.h`](srctree/include/linux/gpu_buddy.h)
+//!
+//! This module provides Rust abstractions over the Linux kernel's GPU buddy
+//! allocator, which implements a binary buddy memory allocator.
+//!
+//! The buddy allocator manages a contiguous address space and allocates blocks
+//! in power-of-two sizes, useful for GPU physical memory management.
+//!
+//! # Examples
+//!
+//! ```
+//! use kernel::{
+//! gpu::buddy::{BuddyFlags, GpuBuddy, GpuBuddyAllocParams, GpuBuddyParams},
+//! prelude::*,
+//! sizes::*, //
+//! };
+//!
+//! // Create a 1GB buddy allocator with 4KB minimum chunk size.
+//! let buddy = GpuBuddy::new(GpuBuddyParams {
+//! base_offset_bytes: 0,
+//! physical_memory_size_bytes: SZ_1G as u64,
+//! chunk_size_bytes: SZ_4K as u64,
+//! })?;
+//!
+//! // Verify initial state.
+//! assert_eq!(buddy.size(), SZ_1G as u64);
+//! assert_eq!(buddy.chunk_size(), SZ_4K as u64);
+//! let initial_free = buddy.free_memory_bytes();
+//!
+//! // Base allocation params - mutated between calls for field overrides.
+//! let mut params = GpuBuddyAllocParams {
+//! start_range_address: 0,
+//! end_range_address: 0, // Entire range.
+//! size_bytes: SZ_16M as u64,
+//! min_block_size_bytes: SZ_16M as u64,
+//! buddy_flags: BuddyFlags::try_new(BuddyFlags::RANGE_ALLOCATION)?,
+//! };
+//!
+//! // Test top-down allocation (allocates from highest addresses).
+//! params.buddy_flags = BuddyFlags::try_new(BuddyFlags::TOPDOWN_ALLOCATION)?;
+//! let topdown = KBox::pin_init(buddy.alloc_blocks(¶ms), GFP_KERNEL)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_16M as u64);
+//!
+//! for block in topdown.iter() {
+//! assert_eq!(block.offset(), (SZ_1G - SZ_16M) as u64);
+//! assert_eq!(block.order(), 12); // 2^12 pages
+//! assert_eq!(block.size(), SZ_16M as u64);
+//! }
+//! drop(topdown);
+//! assert_eq!(buddy.free_memory_bytes(), initial_free);
+//!
+//! // Allocate 16MB - should result in a single 16MB block at offset 0.
+//! params.buddy_flags = BuddyFlags::try_new(BuddyFlags::RANGE_ALLOCATION)?;
+//! let allocated = KBox::pin_init(buddy.alloc_blocks(¶ms), GFP_KERNEL)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_16M as u64);
+//!
+//! for block in allocated.iter() {
+//! assert_eq!(block.offset(), 0);
+//! assert_eq!(block.order(), 12); // 2^12 pages
+//! assert_eq!(block.size(), SZ_16M as u64);
+//! }
+//! drop(allocated);
+//! assert_eq!(buddy.free_memory_bytes(), initial_free);
+//!
+//! // Test non-contiguous allocation with fragmented memory.
+//! // Create fragmentation by allocating 4MB blocks at [0,4M) and [8M,12M).
+//! params.end_range_address = SZ_4M as u64;
+//! params.size_bytes = SZ_4M as u64;
+//! params.min_block_size_bytes = SZ_4M as u64;
+//! let frag1 = KBox::pin_init(buddy.alloc_blocks(¶ms), GFP_KERNEL)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_4M as u64);
+//!
+//! params.start_range_address = SZ_8M as u64;
+//! params.end_range_address = (SZ_8M + SZ_4M) as u64;
+//! let frag2 = KBox::pin_init(buddy.alloc_blocks(¶ms), GFP_KERNEL)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_8M as u64);
+//!
+//! // Allocate 8MB without CONTIGUOUS - should return 2 blocks from the holes.
+//! params.start_range_address = 0;
+//! params.end_range_address = SZ_16M as u64;
+//! params.size_bytes = SZ_8M as u64;
+//! let fragmented = KBox::pin_init(buddy.alloc_blocks(¶ms), GFP_KERNEL)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - (SZ_16M) as u64);
+//!
+//! let (mut count, mut total) = (0u32, 0u64);
+//! for block in fragmented.iter() {
+//! // The 8MB allocation should return 2 blocks, each 4MB.
+//! assert_eq!(block.size(), SZ_4M as u64);
+//! total += block.size();
+//! count += 1;
+//! }
+//! assert_eq!(total, SZ_8M as u64);
+//! assert_eq!(count, 2);
+//! drop(fragmented);
+//! drop(frag2);
+//! drop(frag1);
+//! assert_eq!(buddy.free_memory_bytes(), initial_free);
+//!
+//! // Test CONTIGUOUS failure when only fragmented space available.
+//! // Create a small buddy allocator with only 16MB of memory.
+//! let small = GpuBuddy::new(GpuBuddyParams {
+//! base_offset_bytes: 0,
+//! physical_memory_size_bytes: SZ_16M as u64,
+//! chunk_size_bytes: SZ_4K as u64,
+//! })?;
+//!
+//! // Allocate 4MB blocks at [0,4M) and [8M,12M) to create fragmented memory.
+//! params.start_range_address = 0;
+//! params.end_range_address = SZ_4M as u64;
+//! params.size_bytes = SZ_4M as u64;
+//! let hole1 = KBox::pin_init(small.alloc_blocks(¶ms), GFP_KERNEL)?;
+//!
+//! params.start_range_address = SZ_8M as u64;
+//! params.end_range_address = (SZ_8M + SZ_4M) as u64;
+//! let hole2 = KBox::pin_init(small.alloc_blocks(¶ms), GFP_KERNEL)?;
+//!
+//! // 8MB contiguous should fail - only two non-contiguous 4MB holes exist.
+//! params.start_range_address = 0;
+//! params.end_range_address = 0;
+//! params.size_bytes = SZ_8M as u64;
+//! params.buddy_flags = BuddyFlags::try_new(BuddyFlags::CONTIGUOUS_ALLOCATION)?;
+//! let result = KBox::pin_init(small.alloc_blocks(¶ms), GFP_KERNEL);
+//! assert!(result.is_err());
+//! drop(hole2);
+//! drop(hole1);
+//!
+//! # Ok::<(), Error>(())
+//! ```
+
+use crate::{
+ bindings,
+ clist::CListHead,
+ clist_create,
+ error::to_result,
+ new_mutex,
+ prelude::*,
+ sync::{
+ lock::mutex::MutexGuard,
+ Arc,
+ Mutex, //
+ },
+ types::Opaque,
+};
+
+/// Flags for GPU buddy allocator operations.
+///
+/// These flags control the allocation behavior of the buddy allocator.
+#[derive(Clone, Copy, Default, PartialEq, Eq)]
+pub struct BuddyFlags(usize);
+
+impl BuddyFlags {
+ /// Range-based allocation from start to end addresses.
+ pub const RANGE_ALLOCATION: usize = bindings::GPU_BUDDY_RANGE_ALLOCATION;
+
+ /// Allocate from top of address space downward.
+ pub const TOPDOWN_ALLOCATION: usize = bindings::GPU_BUDDY_TOPDOWN_ALLOCATION;
+
+ /// Allocate physically contiguous blocks.
+ pub const CONTIGUOUS_ALLOCATION: usize = bindings::GPU_BUDDY_CONTIGUOUS_ALLOCATION;
+
+ /// Request allocation from cleared (zeroed) memory. The zeroing is not
+ /// done by the allocator, but by the caller before freeing old blocks.
+ pub const CLEAR_ALLOCATION: usize = bindings::GPU_BUDDY_CLEAR_ALLOCATION;
+
+ /// Disable trimming of partially used blocks.
+ pub const TRIM_DISABLE: usize = bindings::GPU_BUDDY_TRIM_DISABLE;
+
+ /// Mark blocks as cleared (zeroed) when freeing. When set during free,
+ /// indicates that the caller has already zeroed the memory.
+ pub const CLEARED: usize = bindings::GPU_BUDDY_CLEARED;
+
+ /// Create [`BuddyFlags`] from a raw value with validation.
+ ///
+ /// Use `|` operator to combine flags if needed, before calling this method.
+ pub fn try_new(flags: usize) -> Result<Self> {
+ // Flags must not exceed u32::MAX to satisfy the GPU buddy allocator C API.
+ if flags > u32::MAX as usize {
+ return Err(EINVAL);
+ }
+
+ // `TOPDOWN_ALLOCATION` only works without `RANGE_ALLOCATION`. When both are
+ // set, `TOPDOWN_ALLOCATION` is silently ignored by the allocator. Reject this.
+ if (flags & Self::RANGE_ALLOCATION) != 0 && (flags & Self::TOPDOWN_ALLOCATION) != 0 {
+ return Err(EINVAL);
+ }
+
+ Ok(Self(flags))
+ }
+
+ /// Get raw value of the flags.
+ pub(crate) fn as_raw(self) -> usize {
+ self.0
+ }
+}
+
+/// Parameters for creating a GPU buddy allocator.
+pub struct GpuBuddyParams {
+ /// Base offset in bytes where the managed memory region starts.
+ /// Allocations will be offset by this value.
+ pub base_offset_bytes: u64,
+ /// Total physical memory size managed by the allocator in bytes.
+ pub physical_memory_size_bytes: u64,
+ /// Minimum allocation unit / chunk size in bytes, must be >= 4KB.
+ pub chunk_size_bytes: u64,
+}
+
+/// Parameters for allocating blocks from a GPU buddy allocator.
+pub struct GpuBuddyAllocParams {
+ /// Start of allocation range in bytes. Use 0 for beginning.
+ pub start_range_address: u64,
+ /// End of allocation range in bytes. Use 0 for entire range.
+ pub end_range_address: u64,
+ /// Total size to allocate in bytes.
+ pub size_bytes: u64,
+ /// Minimum block size for fragmented allocations in bytes.
+ pub min_block_size_bytes: u64,
+ /// Buddy allocator behavior flags.
+ pub buddy_flags: BuddyFlags,
+}
+
+/// Inner structure holding the actual buddy allocator.
+///
+/// # Synchronization
+///
+/// The C `gpu_buddy` API requires synchronization (see `include/linux/gpu_buddy.h`).
+/// The internal [`GpuBuddyGuard`] ensures that the lock is held for all
+/// allocator and free operations, preventing races between concurrent allocations
+/// and the freeing that occurs when [`AllocatedBlocks`] is dropped.
+///
+/// # Invariants
+///
+/// The inner [`Opaque`] contains a valid, initialized buddy allocator.
+#[pin_data(PinnedDrop)]
+struct GpuBuddyInner {
+ #[pin]
+ inner: Opaque<bindings::gpu_buddy>,
+ // TODO: Replace `Mutex<()>` with `Mutex<Opaque<..>>` once `Mutex::new()`
+ // accepts `impl PinInit<T>`.
+ #[pin]
+ lock: Mutex<()>,
+ /// Base offset for all allocations (does not change after init).
+ base_offset: u64,
+ /// Cached chunk size (does not change after init).
+ chunk_size: u64,
+ /// Cached total size (does not change after init).
+ size: u64,
+}
+
+impl GpuBuddyInner {
+ /// Create a pin-initializer for the buddy allocator.
+ fn new(params: GpuBuddyParams) -> impl PinInit<Self, Error> {
+ let base_offset = params.base_offset_bytes;
+ let size = params.physical_memory_size_bytes;
+ let chunk_size = params.chunk_size_bytes;
+
+ try_pin_init!(Self {
+ inner <- Opaque::try_ffi_init(|ptr| {
+ // SAFETY: ptr points to valid uninitialized memory from the pin-init
+ // infrastructure. gpu_buddy_init will initialize the structure.
+ to_result(unsafe { bindings::gpu_buddy_init(ptr, size, chunk_size) })
+ }),
+ lock <- new_mutex!(()),
+ base_offset,
+ chunk_size,
+ size,
+ })
+ }
+
+ /// Lock the mutex and return a guard for accessing the allocator.
+ fn lock(&self) -> GpuBuddyGuard<'_> {
+ GpuBuddyGuard {
+ inner: self,
+ _guard: self.lock.lock(),
+ }
+ }
+}
+
+#[pinned_drop]
+impl PinnedDrop for GpuBuddyInner {
+ fn drop(self: Pin<&mut Self>) {
+ let guard = self.lock();
+
+ // SAFETY: guard provides exclusive access to the allocator.
+ unsafe {
+ bindings::gpu_buddy_fini(guard.as_raw());
+ }
+ }
+}
+
+// SAFETY: [`GpuBuddyInner`] can be sent between threads.
+unsafe impl Send for GpuBuddyInner {}
+
+// SAFETY: [`GpuBuddyInner`] is `Sync` because the internal [`GpuBuddyGuard`]
+// serializes all access to the C allocator, preventing data races.
+unsafe impl Sync for GpuBuddyInner {}
+
+/// Guard that proves the lock is held, enabling access to the allocator.
+///
+/// # Invariants
+///
+/// The inner `_guard` holds the lock for the duration of this guard's lifetime.
+pub(crate) struct GpuBuddyGuard<'a> {
+ inner: &'a GpuBuddyInner,
+ _guard: MutexGuard<'a, ()>,
+}
+
+impl GpuBuddyGuard<'_> {
+ /// Get a raw pointer to the underlying C `gpu_buddy` structure.
+ fn as_raw(&self) -> *mut bindings::gpu_buddy {
+ self.inner.inner.get()
+ }
+}
+
+/// GPU buddy allocator instance.
+///
+/// This structure wraps the C `gpu_buddy` allocator using reference counting.
+/// The allocator is automatically cleaned up when all references are dropped.
+///
+/// # Invariants
+///
+/// The inner [`Arc`] points to a valid, initialized GPU buddy allocator.
+pub struct GpuBuddy(Arc<GpuBuddyInner>);
+
+impl GpuBuddy {
+ /// Create a new buddy allocator.
+ ///
+ /// Creates a buddy allocator that manages a contiguous address space of the given
+ /// size, with the specified minimum allocation unit (chunk_size must be at least 4KB).
+ pub fn new(params: GpuBuddyParams) -> Result<Self> {
+ Ok(Self(Arc::pin_init(
+ GpuBuddyInner::new(params),
+ GFP_KERNEL,
+ )?))
+ }
+
+ /// Get the base offset for allocations.
+ pub fn base_offset(&self) -> u64 {
+ self.0.base_offset
+ }
+
+ /// Get the chunk size (minimum allocation unit).
+ pub fn chunk_size(&self) -> u64 {
+ self.0.chunk_size
+ }
+
+ /// Get the total managed size.
+ pub fn size(&self) -> u64 {
+ self.0.size
+ }
+
+ /// Get the available (free) memory in bytes.
+ pub fn free_memory_bytes(&self) -> u64 {
+ let guard = self.0.lock();
+ // SAFETY: guard provides exclusive access to the allocator.
+ unsafe { (*guard.as_raw()).avail }
+ }
+
+ /// Allocate blocks from the buddy allocator.
+ ///
+ /// Returns a pin-initializer for [`AllocatedBlocks`].
+ ///
+ /// Takes `&self` instead of `&mut self` because the internal [`Mutex`] provides
+ /// synchronization, so no external `&mut` exclusivity is needed.
+ pub fn alloc_blocks(
+ &self,
+ params: &GpuBuddyAllocParams,
+ ) -> impl PinInit<AllocatedBlocks, Error> {
+ let buddy_arc = Arc::clone(&self.0);
+ let start = params.start_range_address;
+ let end = params.end_range_address;
+ let size = params.size_bytes;
+ let min_block_size = params.min_block_size_bytes;
+ let flags = params.buddy_flags;
+
+ // Create pin-initializer that initializes list and allocates blocks.
+ try_pin_init!(AllocatedBlocks {
+ buddy: buddy_arc,
+ list <- CListHead::new(),
+ flags,
+ _: {
+ // Lock while allocating to serialize with concurrent frees.
+ let guard = buddy.lock();
+
+ // SAFETY: `guard` provides exclusive access to the buddy allocator.
+ to_result(unsafe {
+ bindings::gpu_buddy_alloc_blocks(
+ guard.as_raw(),
+ start,
+ end,
+ size,
+ min_block_size,
+ list.as_raw(),
+ flags.as_raw(),
+ )
+ })?
+ }
+ })
+ }
+}
+
+/// Allocated blocks from the buddy allocator with automatic cleanup.
+///
+/// This structure owns a list of allocated blocks and ensures they are
+/// automatically freed when dropped. Use `iter()` to iterate over all
+/// allocated [`Block`] structures.
+///
+/// # Invariants
+///
+/// - `list` is an initialized, valid list head containing allocated blocks.
+/// - `buddy` references a valid [`GpuBuddyInner`].
+#[pin_data(PinnedDrop)]
+pub struct AllocatedBlocks {
+ #[pin]
+ list: CListHead,
+ buddy: Arc<GpuBuddyInner>,
+ flags: BuddyFlags,
+}
+
+impl AllocatedBlocks {
+ /// Check if the block list is empty.
+ pub fn is_empty(&self) -> bool {
+ // An empty list head points to itself.
+ !self.list.is_linked()
+ }
+
+ /// Iterate over allocated blocks.
+ ///
+ /// Returns an iterator yielding [`AllocatedBlock`] references. The blocks
+ /// are only valid for the duration of the borrow of `self`.
+ pub fn iter(&self) -> impl Iterator<Item = AllocatedBlock<'_>> + '_ {
+ // SAFETY: list contains gpu_buddy_block items linked via __bindgen_anon_1.link.
+ let clist = unsafe {
+ clist_create!(
+ self.list.as_raw(),
+ Block,
+ bindings::gpu_buddy_block,
+ __bindgen_anon_1.link
+ )
+ };
+
+ clist
+ .iter()
+ .map(|block| AllocatedBlock { block, alloc: self })
+ }
+}
+
+#[pinned_drop]
+impl PinnedDrop for AllocatedBlocks {
+ fn drop(self: Pin<&mut Self>) {
+ let guard = self.buddy.lock();
+
+ // SAFETY:
+ // - list is valid per the type's invariants.
+ // - guard provides exclusive access to the allocator.
+ // CAST: BuddyFlags were validated to fit in u32 at construction.
+ unsafe {
+ bindings::gpu_buddy_free_list(
+ guard.as_raw(),
+ self.list.as_raw(),
+ self.flags.as_raw() as u32,
+ );
+ }
+ }
+}
+
+/// A GPU buddy block.
+///
+/// Transparent wrapper over C `gpu_buddy_block` structure. This type is returned
+/// as references from [`CListIter`] during iteration over [`AllocatedBlocks`].
+///
+/// # Invariants
+///
+/// The inner [`Opaque`] contains a valid, allocated `gpu_buddy_block`.
+#[repr(transparent)]
+pub struct Block(Opaque<bindings::gpu_buddy_block>);
+
+impl Block {
+ /// Get a raw pointer to the underlying C block.
+ fn as_raw(&self) -> *mut bindings::gpu_buddy_block {
+ self.0.get()
+ }
+
+ /// Get the block's offset in the address space.
+ pub(crate) fn offset(&self) -> u64 {
+ // SAFETY: self.as_raw() is valid per the type's invariants.
+ unsafe { bindings::gpu_buddy_block_offset(self.as_raw()) }
+ }
+
+ /// Get the block order.
+ pub(crate) fn order(&self) -> u32 {
+ // SAFETY: self.as_raw() is valid per the type's invariants.
+ unsafe { bindings::gpu_buddy_block_order(self.as_raw()) }
+ }
+}
+
+// SAFETY: `Block` is a transparent wrapper over `gpu_buddy_block` which is not
+// modified after allocation. It can be safely sent between threads.
+unsafe impl Send for Block {}
+
+// SAFETY: `Block` is a transparent wrapper over `gpu_buddy_block` which is not
+// modified after allocation. It can be safely shared among threads.
+unsafe impl Sync for Block {}
+
+/// An allocated block with access to the allocation list.
+///
+/// # Invariants
+///
+/// - `block` is a valid reference to an allocated [`Block`].
+/// - `alloc` is a valid reference to the [`AllocatedBlocks`] that owns this block.
+pub struct AllocatedBlock<'a> {
+ block: &'a Block,
+ alloc: &'a AllocatedBlocks,
+}
+
+impl AllocatedBlock<'_> {
+ /// Get the block's offset in the address space.
+ ///
+ /// Returns the absolute offset including the allocator's base offset.
+ /// This is the actual address to use for accessing the allocated memory.
+ pub fn offset(&self) -> u64 {
+ self.alloc.buddy.base_offset + self.block.offset()
+ }
+
+ /// Get the block order (size = chunk_size << order).
+ pub fn order(&self) -> u32 {
+ self.block.order()
+ }
+
+ /// Get the block's size in bytes.
+ pub fn size(&self) -> u64 {
+ self.alloc.buddy.chunk_size << self.block.order()
+ }
+}
diff --git a/rust/kernel/gpu/mod.rs b/rust/kernel/gpu/mod.rs
new file mode 100644
index 000000000000..8f25e6367edc
--- /dev/null
+++ b/rust/kernel/gpu/mod.rs
@@ -0,0 +1,5 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! GPU subsystem abstractions.
+
+pub mod buddy;
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index fe711d34ca1e..02ec5b9b22c8 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -98,6 +98,8 @@
pub mod firmware;
pub mod fmt;
pub mod fs;
+#[cfg(CONFIG_GPU_BUDDY)]
+pub mod gpu;
#[cfg(CONFIG_I2C = "y")]
pub mod i2c;
pub mod id_pool;
--
2.34.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH -next v9 3/3] nova-core: mm: Select GPU_BUDDY for VRAM allocation
2026-02-10 23:32 [PATCH -next v9 0/3] rust: Add CList and GPU buddy allocator bindings Joel Fernandes
2026-02-10 23:32 ` [PATCH -next v9 1/3] rust: clist: Add support to interface with C linked lists Joel Fernandes
2026-02-10 23:32 ` [PATCH -next v9 2/3] rust: gpu: Add GPU buddy allocator bindings Joel Fernandes
@ 2026-02-10 23:32 ` Joel Fernandes
2026-02-11 9:21 ` Danilo Krummrich
2026-02-12 20:27 ` Claude review: " Claude Code Review Bot
2026-02-11 6:13 ` Claude review: rust: Add CList and GPU buddy allocator bindings Claude Code Review Bot
2026-02-11 9:19 ` [PATCH -next v9 0/3] " Danilo Krummrich
4 siblings, 2 replies; 14+ messages in thread
From: Joel Fernandes @ 2026-02-10 23:32 UTC (permalink / raw)
To: linux-kernel
Cc: Maarten Lankhorst, Maxime Ripard, Simona Vetter, Jonathan Corbet,
Alex Deucher, Christian König, Jani Nikula, Joonas Lahtinen,
Rodrigo Vivi, Tvrtko Ursulin, Huang Rui, Matthew Auld,
Matthew Brost, Lucas De Marchi, Thomas Hellström,
Helge Deller, Danilo Krummrich, Alice Ryhl, Miguel Ojeda,
Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, Alistair Popple,
Alexandre Courbot, Andrea Righi, Zhi Wang, Philipp Stanner,
Elle Rhumsaa, Daniel Almeida, David Airlie, Edwin Peer,
John Hubbard, Andy Ritger, Balbir Singh, Timur Tabi, joel,
nouveau, dri-devel, rust-for-linux, linux-doc, amd-gfx, intel-gfx,
intel-xe, linux-fbdev, Joel Fernandes
nova-core will use the GPU buddy allocator for physical VRAM management.
Enable it in Kconfig.
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/Kconfig | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/nova-core/Kconfig b/drivers/gpu/nova-core/Kconfig
index 527920f9c4d3..6513007bf66f 100644
--- a/drivers/gpu/nova-core/Kconfig
+++ b/drivers/gpu/nova-core/Kconfig
@@ -3,8 +3,9 @@ config NOVA_CORE
depends on 64BIT
depends on PCI
depends on RUST
- select RUST_FW_LOADER_ABSTRACTIONS
select AUXILIARY_BUS
+ select GPU_BUDDY
+ select RUST_FW_LOADER_ABSTRACTIONS
default n
help
Choose this if you want to build the Nova Core driver for Nvidia
--
2.34.1
* Claude review: rust: Add CList and GPU buddy allocator bindings
2026-02-10 23:32 [PATCH -next v9 0/3] rust: Add CList and GPU buddy allocator bindings Joel Fernandes
` (2 preceding siblings ...)
2026-02-10 23:32 ` [PATCH -next v9 3/3] nova-core: mm: Select GPU_BUDDY for VRAM allocation Joel Fernandes
@ 2026-02-11 6:13 ` Claude Code Review Bot
2026-02-11 9:19 ` [PATCH -next v9 0/3] " Danilo Krummrich
4 siblings, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-02-11 6:13 UTC (permalink / raw)
To: dri-devel-reviews
Overall Series Review
Subject: rust: Add CList and GPU buddy allocator bindings
Author: Joel Fernandes <joelagnelf@nvidia.com>
Patches: 4
Reviewed: 2026-02-11T16:13:42.504972
---
Generated by Claude Code Patch Reviewer
* Claude review: rust: clist: Add support to interface with C linked lists
2026-02-09 21:42 ` [PATCH -next v8 1/3] rust: clist: Add support to interface with C linked lists Joel Fernandes
@ 2026-02-11 6:52 ` Claude Code Review Bot
0 siblings, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-02-11 6:52 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
**File Organization Issue:**
```rust
// Location: rust/kernel/clist.rs
```
**Problem:** Should be moved to `rust/kernel/ffi/clist.rs` to make it obvious this is FFI infrastructure (raised by Danilo Krummrich).
**Recommendation:** Move to `rust/kernel/ffi/` directory structure.
**Documentation Clarity:**
```rust
+//! A C doubly circular intrusive linked list interface for rust code.
```
**Problem:** Doesn't explicitly warn against using this in pure Rust code.
**Recommendation:** Add explicit FFI-only warning:
```rust
//! **FFI Use Only**: This module provides interfaces to C linked lists for FFI
//! scenarios only. Pure Rust code should use standard Rust collection types.
```
**Critical Logic Error in `is_linked()`:**
```rust
+ /// Check if this node is linked in a list (not isolated).
+ #[inline]
+ pub fn is_linked(&self) -> bool {
+ let raw = self.as_raw();
+ // SAFETY: self.as_raw() is valid per type invariants.
+ unsafe { (*raw).next != raw && (*raw).prev != raw }
+ }
```
**Problem:** This logic is incorrect for empty list heads (sentinels). An initialized empty list has `next == prev == &self`, which this would report as "not linked". The function conflates:
1. A sentinel head that is initialized (empty or not)
2. A node that is part of a list
For an empty list: `head->next == head->prev == head` should still be considered valid.
**Recommendation:** Rename and clarify semantics:
```rust
/// Returns true if this node is linked to other nodes (not a singleton).
/// An initialized empty list head returns false.
pub fn has_nodes(&self) -> bool { ... }
```
**Soundness Concern with Sync:**
```rust
+// SAFETY: [`CListHead`] can be shared among threads as it is not modified
+// by non-Rust code per safety requirements of [`CListHead::from_raw`].
+unsafe impl Sync for CListHead {}
```
**Problem:** The `Sync` implementation relies on "not modified by non-Rust code" guarantee. In kernel FFI contexts where C code may modify lists concurrently, this could be unsound.
**Recommendation:** Document this limitation clearly or add synchronization primitives.
**Offset Calculation (Correct):**
```rust
+ Some(unsafe { &*head.as_raw().byte_sub(OFFSET).cast::<T>() })
```
This is correct usage of `offset_of!` and pointer arithmetic with adequate safety justification.
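As a standalone illustration (toy types and values, not the kernel's): the same `offset_of!` plus `byte_sub` pattern can be exercised in plain Rust to recover a container from a pointer to an embedded member:

```rust
use core::mem::offset_of;
use core::ptr;

// Toy stand-ins for a list node and its containing item (hypothetical types).
#[repr(C)]
struct ListHead {
    next: *mut ListHead,
    prev: *mut ListHead,
}

#[repr(C)]
struct Item {
    value: u64,
    link: ListHead,
}

// container_of-style recovery: subtract the member offset from the member
// pointer and cast back to the container type.
fn value_via_link(head: *mut ListHead) -> u64 {
    const OFFSET: usize = offset_of!(Item, link);
    // SAFETY: the caller passes a pointer to the `link` field of a live
    // `Item`, so `byte_sub(OFFSET)` stays within that allocation.
    unsafe { (*head.byte_sub(OFFSET).cast::<Item>()).value }
}

fn main() {
    let mut item = Item {
        value: 42,
        link: ListHead { next: ptr::null_mut(), prev: ptr::null_mut() },
    };
    let head: *mut ListHead = &mut item.link;
    assert_eq!(value_via_link(head), 42);
}
```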
**Verdict:** Needs fixes for organization, documentation, and the `is_linked()` logic error before merge.
---
Generated by Claude Code Patch Reviewer
* Re: [PATCH -next v9 0/3] rust: Add CList and GPU buddy allocator bindings
2026-02-10 23:32 [PATCH -next v9 0/3] rust: Add CList and GPU buddy allocator bindings Joel Fernandes
` (3 preceding siblings ...)
2026-02-11 6:13 ` Claude review: rust: Add CList and GPU buddy allocator bindings Claude Code Review Bot
@ 2026-02-11 9:19 ` Danilo Krummrich
2026-02-11 17:30 ` Joel Fernandes
4 siblings, 1 reply; 14+ messages in thread
From: Danilo Krummrich @ 2026-02-11 9:19 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Maarten Lankhorst, Maxime Ripard, Simona Vetter,
Jonathan Corbet, Alex Deucher, Christian König, Jani Nikula,
Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, Huang Rui,
Matthew Auld, Matthew Brost, Lucas De Marchi,
Thomas Hellström, Helge Deller, Alice Ryhl, Miguel Ojeda,
Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, Alistair Popple,
Alexandre Courbot, Andrea Righi, Zhi Wang, Philipp Stanner,
Elle Rhumsaa, Daniel Almeida, David Airlie, Edwin Peer,
John Hubbard, Andy Ritger, Balbir Singh, Timur Tabi, joel,
nouveau, dri-devel, rust-for-linux, linux-doc, amd-gfx, intel-gfx,
intel-xe, linux-fbdev
On Wed Feb 11, 2026 at 12:32 AM CET, Joel Fernandes wrote:
> Link to v8: https://lore.kernel.org/all/20260209214246.2783990-1-joelagnelf@nvidia.com/
> Link to v7: https://lore.kernel.org/all/20260206004110.1914814-1-joelagnelf@nvidia.com/
I already asked about this in [1], can you please add a changelog to your series?
[1] https://lore.kernel.org/all/DG0MRL6T7ACW.25G3GLJMP7PN1@kernel.org/
* Re: [PATCH -next v9 3/3] nova-core: mm: Select GPU_BUDDY for VRAM allocation
2026-02-10 23:32 ` [PATCH -next v9 3/3] nova-core: mm: Select GPU_BUDDY for VRAM allocation Joel Fernandes
@ 2026-02-11 9:21 ` Danilo Krummrich
2026-02-12 20:27 ` Claude review: " Claude Code Review Bot
1 sibling, 0 replies; 14+ messages in thread
From: Danilo Krummrich @ 2026-02-11 9:21 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Maarten Lankhorst, Maxime Ripard, Simona Vetter,
Jonathan Corbet, Alex Deucher, Christian König, Jani Nikula,
Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, Huang Rui,
Matthew Auld, Matthew Brost, Lucas De Marchi,
Thomas Hellström, Helge Deller, Alice Ryhl, Miguel Ojeda,
Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, Alistair Popple,
Alexandre Courbot, Andrea Righi, Zhi Wang, Philipp Stanner,
Elle Rhumsaa, Daniel Almeida, David Airlie, Edwin Peer,
John Hubbard, Andy Ritger, Balbir Singh, Timur Tabi, joel,
nouveau, dri-devel, rust-for-linux, linux-doc, amd-gfx, intel-gfx,
intel-xe, linux-fbdev
On Wed Feb 11, 2026 at 12:32 AM CET, Joel Fernandes wrote:
> nova-core will use the GPU buddy allocator for physical VRAM management.
> Enable it in Kconfig.
>
> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
> ---
> drivers/gpu/nova-core/Kconfig | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/nova-core/Kconfig b/drivers/gpu/nova-core/Kconfig
> index 527920f9c4d3..6513007bf66f 100644
> --- a/drivers/gpu/nova-core/Kconfig
> +++ b/drivers/gpu/nova-core/Kconfig
> @@ -3,8 +3,9 @@ config NOVA_CORE
> depends on 64BIT
> depends on PCI
> depends on RUST
> - select RUST_FW_LOADER_ABSTRACTIONS
I think moving this should be a separate patch.
> select AUXILIARY_BUS
> + select GPU_BUDDY
> + select RUST_FW_LOADER_ABSTRACTIONS
> default n
> help
> Choose this if you want to build the Nova Core driver for Nvidia
> --
> 2.34.1
* Re: [PATCH -next v9 1/3] rust: clist: Add support to interface with C linked lists
2026-02-10 23:32 ` [PATCH -next v9 1/3] rust: clist: Add support to interface with C linked lists Joel Fernandes
@ 2026-02-11 9:23 ` Danilo Krummrich
2026-02-11 17:28 ` Joel Fernandes
2026-02-12 20:27 ` Claude review: " Claude Code Review Bot
1 sibling, 1 reply; 14+ messages in thread
From: Danilo Krummrich @ 2026-02-11 9:23 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Maarten Lankhorst, Maxime Ripard, Simona Vetter,
Jonathan Corbet, Alex Deucher, Christian König, Jani Nikula,
Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, Huang Rui,
Matthew Auld, Matthew Brost, Lucas De Marchi,
Thomas Hellström, Helge Deller, Alice Ryhl, Miguel Ojeda,
Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, Alistair Popple,
Alexandre Courbot, Andrea Righi, Zhi Wang, Philipp Stanner,
Elle Rhumsaa, Daniel Almeida, David Airlie, Edwin Peer,
John Hubbard, Andy Ritger, Balbir Singh, Timur Tabi, joel,
nouveau, dri-devel, rust-for-linux, linux-doc, amd-gfx, intel-gfx,
intel-xe, linux-fbdev
On Wed Feb 11, 2026 at 12:32 AM CET, Joel Fernandes wrote:
> Add a new module `clist` for working with C's doubly circular linked
> lists. Provide low-level iteration over list nodes.
>
> Typed iteration over actual items is provided with a `clist_create`
> macro to assist in creation of the `CList` type.
>
> Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
> Acked-by: Gary Guo <gary@garyguo.net>
> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
I think you forgot to address my comments from [1].
[1] https://lore.kernel.org/all/DGB75XMWML8M.DFZY5L52EBQF@kernel.org/
* Re: [PATCH -next v9 1/3] rust: clist: Add support to interface with C linked lists
2026-02-11 9:23 ` Danilo Krummrich
@ 2026-02-11 17:28 ` Joel Fernandes
0 siblings, 0 replies; 14+ messages in thread
From: Joel Fernandes @ 2026-02-11 17:28 UTC (permalink / raw)
To: Danilo Krummrich
Cc: linux-kernel, Maarten Lankhorst, Maxime Ripard, Simona Vetter,
Jonathan Corbet, Alex Deucher, Christian König, Jani Nikula,
Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, Huang Rui,
Matthew Auld, Matthew Brost, Lucas De Marchi,
Thomas Hellström, Helge Deller, Alice Ryhl, Miguel Ojeda,
Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, Alistair Popple,
Alexandre Courbot, Andrea Righi, Zhi Wang, Philipp Stanner,
Elle Rhumsaa, Daniel Almeida, David Airlie, Edwin Peer,
John Hubbard, Andy Ritger, Balbir Singh, Timur Tabi, joel,
nouveau, dri-devel, rust-for-linux, linux-doc, amd-gfx, intel-gfx,
intel-xe, linux-fbdev
On 2/11/2026 4:23 AM, Danilo Krummrich wrote:
> On Wed Feb 11, 2026 at 12:32 AM CET, Joel Fernandes wrote:
>> Add a new module `clist` for working with C's doubly circular linked
>> lists. Provide low-level iteration over list nodes.
>>
>> Typed iteration over actual items is provided with a `clist_create`
>> macro to assist in creation of the `CList` type.
>>
>> Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
>> Acked-by: Gary Guo <gary@garyguo.net>
>> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
>
> I think you forgot to address my comments from [1].
>
> [1] https://lore.kernel.org/all/DGB75XMWML8M.DFZY5L52EBQF@kernel.org/
Indeed I missed that reply, sorry. Will address them.
Thanks.
--
Joel Fernandes
* Re: [PATCH -next v9 0/3] rust: Add CList and GPU buddy allocator bindings
2026-02-11 9:19 ` [PATCH -next v9 0/3] " Danilo Krummrich
@ 2026-02-11 17:30 ` Joel Fernandes
0 siblings, 0 replies; 14+ messages in thread
From: Joel Fernandes @ 2026-02-11 17:30 UTC (permalink / raw)
To: Danilo Krummrich
Cc: linux-kernel, Maarten Lankhorst, Maxime Ripard, Simona Vetter,
Jonathan Corbet, Alex Deucher, Christian König, Jani Nikula,
Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, Huang Rui,
Matthew Auld, Matthew Brost, Lucas De Marchi,
Thomas Hellström, Helge Deller, Alice Ryhl, Miguel Ojeda,
Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, Alistair Popple,
Alexandre Courbot, Andrea Righi, Zhi Wang, Philipp Stanner,
Elle Rhumsaa, Daniel Almeida, David Airlie, Edwin Peer,
John Hubbard, Andy Ritger, Balbir Singh, Timur Tabi, joel,
nouveau, dri-devel, rust-for-linux, linux-doc, amd-gfx, intel-gfx,
intel-xe, linux-fbdev
On 2/11/2026 4:19 AM, Danilo Krummrich wrote:
> On Wed Feb 11, 2026 at 12:32 AM CET, Joel Fernandes wrote:
>> Link to v8: https://lore.kernel.org/all/20260209214246.2783990-1-joelagnelf@nvidia.com/
>> Link to v7: https://lore.kernel.org/all/20260206004110.1914814-1-joelagnelf@nvidia.com/
>
> I already asked about this in [1], can you please add a changlog to your series?
Sure, I will do that. My bad. (I did include good cover-letters in earlier
series, but missed on this one). Thanks for pointing it out.
--
Joel Fernandes
* Claude review: rust: clist: Add support to interface with C linked lists
2026-02-10 23:32 ` [PATCH -next v9 1/3] rust: clist: Add support to interface with C linked lists Joel Fernandes
2026-02-11 9:23 ` Danilo Krummrich
@ 2026-02-12 20:27 ` Claude Code Review Bot
1 sibling, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-02-12 20:27 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
> + pub fn is_linked(&self) -> bool {
> + let raw = self.as_raw();
> + // SAFETY: self.as_raw() is valid per type invariants.
> + unsafe { (*raw).next != raw && (*raw).prev != raw }
> + }
The kernel's `list_empty()` checks `head->next == head`. An isolated/empty node has both `next` and `prev` pointing to itself after `INIT_LIST_HEAD`. Using `&&` here means a node where only `next` has been corrupted to point elsewhere (but `prev` still points to self) would be considered "linked." This seems unlikely in practice, but it might be worth noting that the kernel's own `list_empty` only checks `next`. For the actual use in this series (`is_empty` on `CList` returns `!is_linked()`), the behavior is functionally equivalent for properly-formed lists, so this is a minor observation.
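The divergence is easy to model in isolation (a toy sketch in plain Rust, using indices in place of pointers, not the kernel's `list_head`): for a self-linked sentinel and for a well-formed list both checks agree; they only differ when `next` and `prev` disagree:

```rust
// Toy model of a circular list node; indices stand in for pointers.
#[derive(Clone, Copy)]
struct Node {
    next: usize,
    prev: usize,
}

// Kernel-style `list_empty()`: inspects `next` only.
fn list_empty(nodes: &[Node], i: usize) -> bool {
    nodes[i].next == i
}

// The patch's `is_linked()`: requires BOTH links to point elsewhere.
fn is_linked(nodes: &[Node], i: usize) -> bool {
    nodes[i].next != i && nodes[i].prev != i
}

fn main() {
    // After INIT_LIST_HEAD the node points at itself; both checks agree.
    let isolated = [Node { next: 0, prev: 0 }];
    assert!(list_empty(&isolated, 0) && !is_linked(&isolated, 0));

    // Well-formed two-node list head(0) <-> item(1): both checks agree.
    let two = [Node { next: 1, prev: 1 }, Node { next: 0, prev: 0 }];
    assert!(!list_empty(&two, 0) && is_linked(&two, 0));

    // Corrupted node (`next` diverted, `prev` still self): the only case
    // where the `&&` form differs from the `next`-only check.
    let corrupt = [Node { next: 1, prev: 0 }, Node { next: 0, prev: 0 }];
    assert!(!list_empty(&corrupt, 0) && !is_linked(&corrupt, 0));
}
```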
> + pub fn next(&self) -> &Self {
> + let raw = self.as_raw();
> + // SAFETY:
> + // - `self.as_raw()` is valid per type invariants.
> + // - The `next` pointer is guaranteed to be non-NULL.
> + unsafe { Self::from_raw((*raw).next) }
> + }
The `next()` method returns `&Self` with the same lifetime as the `from_raw` call. Since `from_raw` binds the lifetime to the caller's choice via `'a`, and `next()` borrows `&self`, the returned reference is correctly constrained to the lifetime of `self`. No issue here.
> +macro_rules! clist_create {
> + ($head:expr, $rust_type:ty, $c_type:ty, $($field:tt).+) => {{
> + // Compile-time check that field path is a list_head.
> + let _: fn(*const $c_type) -> *const $crate::bindings::list_head =
> + |p| &raw const (*p).$($field).+;
> +
> + // Calculate offset and create `CList`.
> + const OFFSET: usize = ::core::mem::offset_of!($c_type, $($field).+);
> + $crate::clist::CList::<$rust_type, OFFSET>::from_raw($head)
> + }};
> +}
The macro calls `CList::from_raw($head)` which is an `unsafe` function, but the macro itself is documented as unsafe. The compile-time field type check is a nice touch. However, the type-checking closure `let _: fn(...) -> ...` constructs a function value that is never used - it could arguably be a `const _` or similar, but as-is it works for compile-time verification.
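The closure trick generalizes; a minimal standalone demonstration (toy types, and the stable `addr_of!` macro in place of the patch's `&raw const` syntax): the non-capturing closure coerces to the annotated `fn` pointer type only if the field really has the expected type:

```rust
use core::mem::offset_of;
use core::ptr::addr_of;

struct Inner(u8);

#[repr(C)]
struct Outer {
    other: u32,
    field: Inner,
}

fn main() {
    // Compile-time field-type check: this only compiles because
    // `addr_of!((*p).field)` yields a `*const Inner`. Annotating the
    // return type as, say, `*const u32` would be a compile error.
    let _check: fn(*const Outer) -> *const Inner =
        |p| unsafe { addr_of!((*p).field) };

    // The offset the macro would then compute for this repr(C) layout:
    // `other` occupies bytes 0..4, so `field` starts at 4.
    assert_eq!(offset_of!(Outer, field), 4);
}
```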
The `CListHeadIter` implements `FusedIterator`, which is correct: after returning `None` (when `current == sentinel`), it will continue to return `None`, since `current` is never advanced past the sentinel.
No significant issues found. This is a clean implementation of a read-only C list iteration interface.
---
Generated by Claude Code Patch Reviewer
* Claude review: rust: gpu: Add GPU buddy allocator bindings
2026-02-10 23:32 ` [PATCH -next v9 2/3] rust: gpu: Add GPU buddy allocator bindings Joel Fernandes
@ 2026-02-12 20:27 ` Claude Code Review Bot
0 siblings, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-02-12 20:27 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
> +pub struct BuddyFlags(usize);
> +
> + pub fn try_new(flags: usize) -> Result<Self> {
> + // Flags must not exceed u32::MAX to satisfy the GPU buddy allocator C API.
> + if flags > u32::MAX as usize {
> + return Err(EINVAL);
> + }
The comment says "satisfy the GPU buddy allocator C API," but looking at the C API, `drm_buddy_alloc_blocks` takes `unsigned long flags` (which on 64-bit is 8 bytes, same as `usize`), while `drm_buddy_free_list` takes `unsigned int flags` (4 bytes). The validation to `u32::MAX` is actually only necessary for the free path. The alloc path could accept the full `unsigned long` range. This is not a bug since it is more restrictive, but the comment is somewhat inaccurate about why.
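The narrowing rule can be sketched in isolation (a hypothetical `validate_flags` helper, assuming a 64-bit target where `usize` matches C's `unsigned long`):

```rust
// Only the free path (C `unsigned int flags`) forces the u32 bound; the
// alloc path (C `unsigned long`) could accept the full usize range.
fn validate_flags(flags: usize) -> Result<u32, &'static str> {
    u32::try_from(flags).map_err(|_| "EINVAL: flags do not fit in u32")
}

fn main() {
    // Small flag values pass through unchanged.
    assert_eq!(validate_flags(0b11), Ok(0b11));
    // Anything wider than 32 bits is rejected (usize::MAX on 64-bit).
    assert!(validate_flags(usize::MAX).is_err());
}
```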
> + pub fn alloc_blocks(
> + &self,
> + params: &GpuBuddyAllocParams,
> + ) -> impl PinInit<AllocatedBlocks, Error> {
> + ...
> + try_pin_init!(AllocatedBlocks {
> + buddy: buddy_arc,
> + list <- CListHead::new(),
> + flags: flags,
> + _: {
> + let guard = buddy.lock();
> + to_result(unsafe {
> + bindings::gpu_buddy_alloc_blocks(
> + guard.as_raw(),
> + start,
> + end,
> + size,
> + min_block_size,
> + list.as_raw(),
> + flags.as_raw(),
> + )
> + })?
> + }
> + })
> + }
The `try_pin_init!` pattern with `_:` initializer block is interesting - it runs the allocation after the list and buddy are already initialized. If `gpu_buddy_alloc_blocks` fails, the `try_pin_init!` should properly drop the already-initialized fields. The `list` will have `INIT_LIST_HEAD` called on it, and the `buddy` `Arc` clone will be dropped. This looks correct.
One concern: `flags.as_raw()` returns `usize`, but the C function `gpu_buddy_alloc_blocks` takes `unsigned long flags`. On 64-bit Linux, `usize` and `unsigned long` are both 64-bit, so this is fine. But for the free path:
> +#[pinned_drop]
> +impl PinnedDrop for AllocatedBlocks {
> + fn drop(self: Pin<&mut Self>) {
> + let guard = self.buddy.lock();
> + unsafe {
> + bindings::gpu_buddy_free_list(
> + guard.as_raw(),
> + self.list.as_raw(),
> + self.flags.as_raw() as u32,
> + );
> + }
> + }
> +}
The `as u32` cast is safe because `try_new()` validated flags fit in u32. No issue here.
> +impl AllocatedBlock<'_> {
> + pub fn size(&self) -> u64 {
> + self.alloc.buddy.chunk_size << self.block.order()
> + }
> +}
This mirrors the C implementation `mm->chunk_size << drm_buddy_block_order(block)`. The `order()` returns `u32`. Shifting a `u64` left by a `u32` - if `order()` returned a value >= 64, this would be undefined behavior in C but would panic in Rust debug builds or produce 0 in release builds. However, since the buddy allocator constrains orders based on the address space size and chunk size, and `DRM_BUDDY_HEADER_ORDER` is masked to 6 bits (max 63), values above 63 should not occur in practice. For a 1GB allocator with 4KB chunk size, max order would be 18. This is not a real concern.
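The order bound is easy to check numerically; a small sketch using the review's example figures (1 GiB space, 4 KiB chunks; `max_order` is an illustrative helper, not part of the patch):

```rust
// Maximum buddy order for a space of `size` bytes with `chunk_size`-byte
// minimum blocks: the log2 of the chunk count.
fn max_order(size: u64, chunk_size: u64) -> u32 {
    (size / chunk_size).ilog2()
}

fn main() {
    let chunk = 4096u64; // 4 KiB minimum allocation unit
    let size = 1u64 << 30; // 1 GiB managed space
    let order = max_order(size, chunk);
    assert_eq!(order, 18); // matches the review's "max order would be 18"
    // `chunk_size << order` recovers the whole space at the top order,
    // with the shift amount far below u64's width of 64.
    assert_eq!(chunk << order, size);
}
```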
> +#[pinned_drop]
> +impl PinnedDrop for GpuBuddyInner {
> + fn drop(self: Pin<&mut Self>) {
> + let guard = self.lock();
> + unsafe {
> + bindings::gpu_buddy_fini(guard.as_raw());
> + }
> + }
> +}
This calls `lock()` on `self` inside `drop`. Since `drop` takes `Pin<&mut Self>`, and `lock()` takes `&self`, this works via auto-deref. The mutex lock is acquired to serialize with any concurrent `AllocatedBlocks` drops that might be running on other threads. However, if this is the last reference to the `Arc<GpuBuddyInner>` (which it must be, since `drop` is running), then no other thread can hold a reference, meaning there cannot be concurrent `AllocatedBlocks` drops. So the lock acquisition here is technically unnecessary but harmless; it provides defense-in-depth.
> + fn lock(&self) -> GpuBuddyGuard<'_> {
> + GpuBuddyGuard {
> + inner: self,
> + _guard: self.lock.lock(),
> + }
> + }
Note the shadowed naming here: the method is named `lock` and the field is also named `lock`. `self.lock` refers to the field (type `Mutex<()>`), and `self.lock.lock()` calls `Mutex::lock()` on that field. This is not ambiguous because Rust resolves the method call on the concrete type, but it is confusing to read. Not a bug, just a naming observation.
The `rust_helper_gpu_buddy_block_offset` helper takes `const struct gpu_buddy_block *block`, but the Rust `Block::offset()` method calls it with `self.as_raw()` which returns `*mut`. Passing `*mut` where `*const` is expected is fine in C (implicit conversion), and bindgen should handle this correctly. No issue.
Overall this is a solid set of bindings. The synchronization model is sound, the lifetimes are correctly constrained, and the cleanup paths are properly handled.
---
Generated by Claude Code Patch Reviewer
* Claude review: nova-core: mm: Select GPU_BUDDY for VRAM allocation
2026-02-10 23:32 ` [PATCH -next v9 3/3] nova-core: mm: Select GPU_BUDDY for VRAM allocation Joel Fernandes
2026-02-11 9:21 ` Danilo Krummrich
@ 2026-02-12 20:27 ` Claude Code Review Bot
1 sibling, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-02-12 20:27 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
> - select RUST_FW_LOADER_ABSTRACTIONS
> select AUXILIARY_BUS
> + select GPU_BUDDY
> + select RUST_FW_LOADER_ABSTRACTIONS
This patch adds `select GPU_BUDDY` and alphabetizes the `select` lines. As Danilo noted in his reply, the alphabetization of `RUST_FW_LOADER_ABSTRACTIONS` should be a separate patch from the functional change of adding `GPU_BUDDY`. The actual functional change (adding `select GPU_BUDDY`) is straightforward and correct - nova-core will use the buddy allocator so it needs the Kconfig dependency.
No other issues found. This is a trivial Kconfig change.
---
Generated by Claude Code Patch Reviewer