* Claude review: rust: gpu: Add GPU buddy allocator bindings
2026-02-09 21:42 ` [PATCH -next v8 2/3] rust: gpu: Add " Joel Fernandes
@ 2026-02-11 6:52 ` Claude Code Review Bot
0 siblings, 0 replies; 10+ messages in thread
From: Claude Code Review Bot @ 2026-02-11 6:52 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
**Critical API Design Issue - Arc Return Type:**
```rust
+ pub fn alloc_blocks(&self, params: &GpuBuddyAllocParams) -> Result<Arc<AllocatedBlocks>> {
```
**Problem:** Forcing `Arc<AllocatedBlocks>` prevents drivers from embedding allocation metadata in larger structures, requiring two allocations where one should suffice (identified by Danilo Krummrich, acknowledged by author).
**Recommendation:** Change to:
```rust
pub fn alloc_blocks(&self, params: &GpuBuddyAllocParams) -> impl PinInit<AllocatedBlocks, Error>
```
This allows drivers to choose their container (Box, Arc, or embedded).
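To see why the initializer-returning signature is more flexible, here is a standalone analogy in plain Rust (not kernel code): a closure stands in for `impl PinInit<T, E>`, which is enough to show how the caller, rather than the callee, picks the container. All names here are illustrative.

```rust
// Standalone analogy (plain Rust, not kernel code): instead of returning a
// fixed container such as Arc<T>, return an initializer that the caller
// materializes wherever it wants. In kernel Rust, `impl PinInit<T, E>` plays
// the role the closure plays here.
use std::sync::Arc;

struct AllocatedBlocks {
    offset: u64,
    size: u64,
}

// Returns an initializer rather than a finished, heap-allocated object.
fn alloc_blocks(offset: u64, size: u64) -> impl FnOnce() -> AllocatedBlocks {
    move || AllocatedBlocks { offset, size }
}

fn main() {
    // The caller picks the container: a plain Box...
    let boxed: Box<AllocatedBlocks> = Box::new(alloc_blocks(0, 4096)());
    // ...or an Arc...
    let shared: Arc<AllocatedBlocks> = Arc::new(alloc_blocks(0, 4096)());
    // ...or embedding in a larger driver structure, with no extra allocation.
    struct DriverObject {
        blocks: AllocatedBlocks,
    }
    let embedded = DriverObject { blocks: alloc_blocks(0, 4096)() };

    assert_eq!(boxed.size, 4096);
    assert_eq!(shared.offset, 0);
    assert_eq!(embedded.blocks.size, 4096);
}
```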
**Parameter Passing Pattern:**
```rust
+ fn new(params: &GpuBuddyParams) -> impl PinInit<Self, Error> {
```
**Problem:** `GpuBuddyParams` is constructed inline and never reused in examples. Passing by reference is unnecessarily restrictive.
**Recommendation:** Pass by value for better ownership semantics:
```rust
fn new(params: GpuBuddyParams) -> impl PinInit<Self, Error>
```
**Mutex Placement Question:**
```rust
+#[pin_data(PinnedDrop)]
+struct GpuBuddyInner {
+ #[pin]
+ inner: Opaque<bindings::gpu_buddy>,
+ #[pin]
+ lock: Mutex<()>,
```
**Question:** Why isn't the mutex wrapping the `Opaque<gpu_buddy>` instead of being a sibling? Author confirms it's because `Mutex` doesn't take `impl PinInit`.
**Recommendation:** Add TODO comment:
```rust
/// TODO: Ideally `lock` should wrap `inner` as `Mutex<Opaque<bindings::gpu_buddy>>`,
/// but `Mutex::new()` doesn't accept `impl PinInit` yet.
lock: Mutex<()>,
```
**Excellent Flags Validation:**
```rust
+ // `TOPDOWN_ALLOCATION` only works without `RANGE_ALLOCATION`. When both are
+ // set, `TOPDOWN_ALLOCATION` is silently ignored by the allocator. Reject this.
+ if (flags & Self::RANGE_ALLOCATION) != 0 && (flags & Self::TOPDOWN_ALLOCATION) != 0 {
+ return Err(EINVAL);
+ }
```
**Assessment:** This is exactly the kind of safety improvement Rust bindings should provide - catching incompatible combinations the C API would silently ignore.
**Synchronization Correctness:**
```rust
+ fn lock(&self) -> GpuBuddyGuard<'_> {
+ GpuBuddyGuard {
+ inner: self,
+ _guard: self.lock.lock(),
+ }
+ }
```
Guard pattern correctly ensures mutex is held for all C allocator access.
**Cleanup Correctness:**
```rust
+#[pinned_drop]
+impl PinnedDrop for AllocatedBlocks {
+ fn drop(self: Pin<&mut Self>) {
+ let guard = self.buddy.lock();
+ unsafe {
+ bindings::gpu_buddy_free_list(
+ guard.as_raw(),
+ self.list.as_raw(),
+ self.flags.as_raw() as u32,
+ );
+ }
+ }
+}
```
Correct - lock held during free, cast safe due to validation in `try_new()`.
**Verdict:** Core implementation is sound, but needs API design changes (return type, parameter passing) acknowledged by author.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 10+ messages in thread
* Claude review: rust: gpu: Add GPU buddy allocator bindings
2026-02-10 23:32 ` [PATCH -next v9 2/3] rust: gpu: Add " Joel Fernandes
@ 2026-02-12 20:27 ` Claude Code Review Bot
0 siblings, 0 replies; 10+ messages in thread
From: Claude Code Review Bot @ 2026-02-12 20:27 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
> +pub struct BuddyFlags(usize);
> +
> + pub fn try_new(flags: usize) -> Result<Self> {
> + // Flags must not exceed u32::MAX to satisfy the GPU buddy allocator C API.
> + if flags > u32::MAX as usize {
> + return Err(EINVAL);
> + }
The comment says "satisfy the GPU buddy allocator C API," but looking at the C API, `drm_buddy_alloc_blocks` takes `unsigned long flags` (which on 64-bit is 8 bytes, same as `usize`), while `drm_buddy_free_list` takes `unsigned int flags` (4 bytes). The validation to `u32::MAX` is actually only necessary for the free path. The alloc path could accept the full `unsigned long` range. This is not a bug since it is more restrictive, but the comment is somewhat inaccurate about why.
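If the stricter-than-necessary bound is kept, the intent could be made self-documenting by deriving the check from the free-path type rather than comparing against `u32::MAX` by hand. A standalone sketch under that assumption (hypothetical function name, not the patch's code):

```rust
// Hypothetical standalone sketch: derive the flags bound from the narrowest
// consumer (the free path takes a 32-bit value), so the check and the reason
// for it cannot drift apart.
fn validate_flags(flags: usize) -> Result<u32, &'static str> {
    // Equivalent to `flags > u32::MAX as usize`, but ties the bound to the type.
    u32::try_from(flags).map_err(|_| "flags must fit the 32-bit free-path API")
}

fn main() {
    assert_eq!(validate_flags(0b11), Ok(3));
    // On 64-bit targets, oversized values are rejected.
    if usize::BITS > 32 {
        assert!(validate_flags(usize::MAX).is_err());
    }
}
```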
> + pub fn alloc_blocks(
> + &self,
> + params: &GpuBuddyAllocParams,
> + ) -> impl PinInit<AllocatedBlocks, Error> {
> + ...
> + try_pin_init!(AllocatedBlocks {
> + buddy: buddy_arc,
> + list <- CListHead::new(),
> + flags: flags,
> + _: {
> + let guard = buddy.lock();
> + to_result(unsafe {
> + bindings::gpu_buddy_alloc_blocks(
> + guard.as_raw(),
> + start,
> + end,
> + size,
> + min_block_size,
> + list.as_raw(),
> + flags.as_raw(),
> + )
> + })?
> + }
> + })
> + }
The `try_pin_init!` pattern with `_:` initializer block is interesting - it runs the allocation after the list and buddy are already initialized. If `gpu_buddy_alloc_blocks` fails, the `try_pin_init!` should properly drop the already-initialized fields. The `list` will have `INIT_LIST_HEAD` called on it, and the `buddy` `Arc` clone will be dropped. This looks correct.
One concern: `flags.as_raw()` returns `usize`, but the C function `gpu_buddy_alloc_blocks` takes `unsigned long flags`. On 64-bit Linux, `usize` and `unsigned long` are both 64-bit, so this is fine. But for the free path:
> +#[pinned_drop]
> +impl PinnedDrop for AllocatedBlocks {
> + fn drop(self: Pin<&mut Self>) {
> + let guard = self.buddy.lock();
> + unsafe {
> + bindings::gpu_buddy_free_list(
> + guard.as_raw(),
> + self.list.as_raw(),
> + self.flags.as_raw() as u32,
> + );
> + }
> + }
> +}
The `as u32` cast is safe because `try_new()` validated flags fit in u32. No issue here.
> +impl AllocatedBlock<'_> {
> + pub fn size(&self) -> u64 {
> + self.alloc.buddy.chunk_size << self.block.order()
> + }
> +}
This mirrors the C implementation `mm->chunk_size << drm_buddy_block_order(block)`. The `order()` returns `u32`. Shifting a `u64` left by a `u32` - if `order()` returned a value >= 64, this would be undefined behavior in C; in Rust it would panic in debug builds, while release builds mask the shift amount to the low 6 bits (a wrong result, but not UB). However, since the buddy allocator constrains orders based on the address space size and chunk size, and `DRM_BUDDY_HEADER_ORDER` is masked to 6 bits (max 63), values above 63 should not occur in practice. For a 1GB allocator with 4KB chunk size, max order would be 18. This is not a real concern.
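For reference, if defense against out-of-range orders were ever wanted, `u64::checked_shl` makes the failure explicit instead of relying on the debug/release distinction. A standalone sketch (not the patch's code):

```rust
// Standalone sketch of the size computation with an explicit shift guard.
// `checked_shl` returns `None` when the shift amount is >= 64, so an
// impossible order surfaces as an error instead of a panic or a masked shift.
fn block_size(chunk_size: u64, order: u32) -> Option<u64> {
    chunk_size.checked_shl(order)
}

fn main() {
    // 4KB chunks, order 12: 4096 << 12 = 16MB.
    assert_eq!(block_size(4096, 12), Some(16 << 20));
    // Max order for a 1GB / 4KB allocator is 18: 4096 << 18 = 1GB.
    assert_eq!(block_size(4096, 18), Some(1 << 30));
    // An out-of-range order is reported rather than hitting overflow rules.
    assert_eq!(block_size(4096, 64), None);
}
```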
> +#[pinned_drop]
> +impl PinnedDrop for GpuBuddyInner {
> + fn drop(self: Pin<&mut Self>) {
> + let guard = self.lock();
> + unsafe {
> + bindings::gpu_buddy_fini(guard.as_raw());
> + }
> + }
> +}
This calls `lock()` on `self` inside `drop`. Since `drop` takes `Pin<&mut Self>`, and `lock()` takes `&self`, this works via auto-deref. The mutex lock is acquired to serialize with any concurrent `AllocatedBlocks` drops that might be running on other threads. However - if this is the last reference to the `Arc<GpuBuddyInner>` (which it must be, since `drop` is running), then no other thread can hold a reference, meaning there cannot be concurrent `AllocatedBlocks` drops. So the lock acquisition here is technically unnecessary but harmless - it provides defense-in-depth.
> + fn lock(&self) -> GpuBuddyGuard<'_> {
> + GpuBuddyGuard {
> + inner: self,
> + _guard: self.lock.lock(),
> + }
> + }
A naming observation: the method is named `lock` and the field is also named `lock`. `self.lock` refers to the field (type `Mutex<()>`), and `self.lock.lock()` calls `Mutex::lock()` on that field. This is unambiguous because Rust resolves the field access before the method call, but it is mildly confusing to read. Not a bug.
The `rust_helper_gpu_buddy_block_offset` helper takes `const struct gpu_buddy_block *block`, but the Rust `Block::offset()` method calls it with `self.as_raw()` which returns `*mut`. Passing `*mut` where `*const` is expected is fine in C (implicit conversion), and bindgen should handle this correctly. No issue.
Overall this is a solid set of bindings. The synchronization model is sound, the lifetimes are correctly constrained, and the cleanup paths are properly handled.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 10+ messages in thread
* Claude review: rust: gpu: Add GPU buddy allocator bindings
2026-02-24 22:40 ` [PATCH v11 4/4] rust: gpu: Add " Joel Fernandes
@ 2026-02-27 4:31 ` Claude Code Review Bot
0 siblings, 0 replies; 10+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 4:31 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Overall this is a well-written patch. Detailed comments below:
**1. Dead helper: `rust_helper_gpu_buddy_block_size` is defined but never called**
In `rust/helpers/gpu.c`:
```c
__rust_helper u64 rust_helper_gpu_buddy_block_size(struct gpu_buddy *mm,
struct gpu_buddy_block *block)
{
return gpu_buddy_block_size(mm, block);
}
```
This helper is exported but the Rust code never uses it. `AllocatedBlock::size()` computes the same value inline:
```rust
pub fn size(&self) -> u64 {
self.alloc.buddy.chunk_size << self.block.order()
}
```
Either use the C helper (for consistency, ensuring the Rust code tracks any future changes to the C computation) or remove the dead helper. The inline computation is fine in practice since `chunk_size << order` is the stable definition, but having an unused helper is confusing.
**2. `BuddyFlags` validation could check for unknown bits**
In `BuddyFlags::try_new()`:
```rust
pub fn try_new(flags: usize) -> Result<Self> {
if flags > u32::MAX as usize {
return Err(EINVAL);
}
if (flags & Self::RANGE_ALLOCATION) != 0 && (flags & Self::TOPDOWN_ALLOCATION) != 0 {
return Err(EINVAL);
}
Ok(Self(flags))
}
```
This validates the range and the RANGE+TOPDOWN conflict, which is good. However, it doesn't reject unknown flag bits. If a caller passes e.g. `BIT(6) | BIT(7)` those would be silently accepted and passed to the C API. Consider adding a mask check:
```rust
const ALL_KNOWN: usize = Self::RANGE_ALLOCATION | Self::TOPDOWN_ALLOCATION
| Self::CONTIGUOUS_ALLOCATION | Self::CLEAR_ALLOCATION
| Self::TRIM_DISABLE | Self::CLEARED;
if flags & !ALL_KNOWN != 0 {
return Err(EINVAL);
}
```
This prevents silent bugs if the flag constants are accidentally misused.
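The effect of the proposed mask check can be demonstrated in a standalone sketch; the bit positions below are illustrative placeholders, not the kernel's `GPU_BUDDY_*` values:

```rust
// Standalone sketch of rejecting unknown flag bits. The constants are
// illustrative placeholders, not the kernel's GPU_BUDDY_* values.
const RANGE_ALLOCATION: usize = 1 << 0;
const TOPDOWN_ALLOCATION: usize = 1 << 1;
const CONTIGUOUS_ALLOCATION: usize = 1 << 2;
const ALL_KNOWN: usize = RANGE_ALLOCATION | TOPDOWN_ALLOCATION | CONTIGUOUS_ALLOCATION;

fn try_new(flags: usize) -> Result<usize, &'static str> {
    // Reject any bit outside the known set instead of silently forwarding it.
    if flags & !ALL_KNOWN != 0 {
        return Err("unknown flag bits");
    }
    // Mirror the existing RANGE + TOPDOWN conflict check.
    if flags & RANGE_ALLOCATION != 0 && flags & TOPDOWN_ALLOCATION != 0 {
        return Err("RANGE and TOPDOWN are mutually exclusive");
    }
    Ok(flags)
}

fn main() {
    assert!(try_new(CONTIGUOUS_ALLOCATION).is_ok());
    // An unknown bit (e.g. BIT(6)) is rejected, not passed to the C API.
    assert!(try_new(1 << 6).is_err());
    assert!(try_new(RANGE_ALLOCATION | TOPDOWN_ALLOCATION).is_err());
}
```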
**3. `free_memory_bytes()` directly accesses the C struct field**
```rust
pub fn free_memory_bytes(&self) -> u64 {
let guard = self.0.lock();
unsafe { (*guard.as_raw()).avail }
}
```
This directly reads `gpu_buddy.avail` from the C struct. While this is fine today (the field is documented as public and static after init, except during alloc/free which are serialized by the lock), it bypasses the abstraction boundary. If the C struct layout changes, this would silently break. A C helper accessor (like the block helpers) might be more robust. This is a minor style point though.
**4. The `_:` syntax in `try_pin_init!` references fields by name**
```rust
try_pin_init!(AllocatedBlocks {
buddy: buddy_arc,
list <- CListHead::new(),
flags: flags,
_: {
let guard = buddy.lock();
to_result(unsafe {
bindings::gpu_buddy_alloc_blocks(
guard.as_raw(),
start, end, size, min_block_size,
list.as_raw(),
flags.as_raw(),
)
})?
}
})
```
This references `buddy`, `list`, and `flags` by their field names inside the `_:` block. This works because `try_pin_init!` makes previously-initialized fields accessible. This is a valid kernel Rust pattern, but it's worth noting that the order of field initialization matters here — `buddy` and `list` must be initialized before the `_:` block runs. The current order is correct.
One subtlety: if `gpu_buddy_alloc_blocks` fails, `try_pin_init!` returns the error and drops the already-initialized `list` and `buddy` fields. Since `list` is a freshly initialized (empty) `CListHead` and `buddy` is an `Arc`, this is safe — no blocks need freeing on error.
**5. `AllocatedBlocks::is_empty()` semantics**
```rust
pub fn is_empty(&self) -> bool {
!self.list.is_linked()
}
```
The naming is somewhat confusing — `is_linked()` on a list head typically returns `true` when it has entries (is linked into some list / has elements). So `!is_linked()` means "empty." The logic appears correct, but it depends on the `CListHead` API contract. A comment clarifying this would help.
**6. `GpuBuddyInner::drop` acquires the lock defensively**
```rust
fn drop(self: Pin<&mut Self>) {
let guard = self.lock();
unsafe {
bindings::gpu_buddy_fini(guard.as_raw());
}
}
```
Since `GpuBuddyInner` is behind `Arc`, `drop` only runs when the last reference is released. At that point, no other thread can hold the lock (they'd need an `Arc` reference). The lock acquisition is therefore uncontested and defensive. This is fine — it's belt-and-suspenders safety — but a comment noting that the lock is uncontested at drop time would be helpful.
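The "uncontested at drop time" reasoning can be seen in miniature with std types. This is an analogy only; the kernel's `Arc` and `Mutex` differ in detail:

```rust
// Miniature analogy using std types: once the reference count is 1, no other
// thread can reach the mutex, so the final lock acquisition cannot block.
use std::sync::{Arc, Mutex};

fn main() {
    let inner = Arc::new(Mutex::new(0u32));
    let clone = Arc::clone(&inner);
    drop(clone);
    // Last remaining reference: the situation inside PinnedDrop.
    assert_eq!(Arc::strong_count(&inner), 1);
    // try_lock() succeeds immediately; contention is impossible here.
    assert!(inner.try_lock().is_ok());
}
```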
**7. `Block` `Send`/`Sync` safety comments are weak**
```rust
// SAFETY: `Block` is not modified after allocation for the lifetime
// of `AllocatedBlock`.
unsafe impl Send for Block {}
unsafe impl Sync for Block {}
```
The safety argument is that blocks aren't modified, but `Block` wraps `Opaque<gpu_buddy_block>` which contains raw pointers (`left`, `right`, `parent`, `link`). A stronger safety argument would be: "Block is only accessed through shared references (`&Block`) obtained via `AllocatedBlocks::iter()`, and the underlying data is immutable while the block is allocated. The block's link field is owned by the allocator during allocation and not modified until `gpu_buddy_free_list` is called, which requires exclusive access via the mutex."
**8. Doctest is comprehensive and well-structured**
The doctest at the top of the file is extensive and tests: basic allocation, top-down allocation, non-contiguous allocation with fragmentation, and contiguous allocation failure. This is excellent coverage for a doctest.
**9. Minor: `bindings_helper.h` include ordering**
```c
#include <linux/acpi.h>
+#include <linux/gpu_buddy.h>
#include <drm/drm_device.h>
```
The `linux/gpu_buddy.h` include is placed between `linux/acpi.h` and `drm/drm_device.h`. This is fine alphabetically within the `linux/` group, but it's inserted outside the existing `#include` block structure. Not a real issue, just noting the placement.
**10. Minor: `GpuBuddyAllocParams` end_range_address=0 convention**
```rust
/// End of allocation range in bytes. Use 0 for entire range.
pub end_range_address: u64,
```
The "use 0 for entire range" semantic is inherited from the C API where `end=0` means "use the full size." This is documented, which is good. The doctest demonstrates this with `end_range_address: 0`.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH v12 0/1] Rust GPU buddy allocator bindings
@ 2026-03-08 18:04 Joel Fernandes
2026-03-08 18:04 ` [PATCH v12 1/1] rust: gpu: Add " Joel Fernandes
` (2 more replies)
0 siblings, 3 replies; 10+ messages in thread
From: Joel Fernandes @ 2026-03-08 18:04 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
This patch adds safe Rust abstractions over the Linux kernel's GPU buddy
allocator for physical memory management. The prerequisite infrastructure
patches (DRM buddy code movement and the uninitialized buddy fix) have been
absorbed into upstream -next, so this is now a standalone patch.
The series along with all dependencies, including clist and nova-core mm
patches, are available at:
git://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git (tag: nova-mm-current-3-8)
This patch is also here:
https://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git/commit/?id=gpu-buddy-bindings-v12
Change log:
Changes from v11 to v12:
- Rebased on linux-next. This is now a standalone single patch as the dependencies
have been absorbed (note that clist is still a prerequisite).
- Redesigned allocation API (Alexandre Courbot) for better Rust ergonomics.
- Split single long example into 4 self-contained examples (Alexandre Courbot).
- Several safety and invariant comment changes (Danilo).
- MAINTAINERS changes (Arun, Matthew, Danilo, Dave).
- Fixed `#[cfg(CONFIG_GPU_BUDDY)]` to `#[cfg(CONFIG_GPU_BUDDY = "y")]` (Danilo Krummrich).
- Updated `ffi::clist::CListHead` to `interop::list::CListHead`.
Changes from v10 to v11:
- Dropped "rust: ffi: Convert pub use to pub mod and create ffi module" patch;
the ffi module restructuring will go through a different path.
- Dropped "rust: clist: Add support to interface with C linked lists" patch;
the clist module will be submitted separately.
- Dropped "nova-core: Kconfig: Sort select statements alphabetically" cosmetic
patch.
- Patches 1-3 (DRM buddy movement and fix) are included as reference only;
they are already being pulled into upstream via drm-misc-next.
- Removed clist patches as those can go in independently (Alice).
- Moved the Kconfig GPU_BUDDY selection patch to nova-core mm series to enable
it when it is actually used.
- Various nits to comments, etc.
Changes from v9 to v10:
- Absorbed the DRM buddy code movement patches into this series as patches 1-2.
Dave Airlie reworked these into two parts for better git history.
- Added "gpu: Fix uninitialized buddy for built-in drivers" fix by Koen Koning,
using subsys_initcall instead of module_init to fix NULL pointer dereference
when built-in drivers use the buddy allocator before initialization.
- Added "rust: ffi: Convert pub use to pub mod and create ffi module" to prepare
the ffi module for hosting clist as a sub-module.
- Moved clist from rust/kernel/clist.rs to rust/kernel/ffi/.
- Added "nova-core: Kconfig: Sort select statements alphabetically" (Danilo).
Changes from v8 to v9:
- Updated nova-core Kconfig patch: addressed sorting of Kconfig options.
- Added Daniel Almeida's Reviewed-by tag to clist patch.
- Minor refinements to GPU buddy bindings.
Changes from v7 to v8:
- Added nova-core Kconfig patch to select GPU_BUDDY for VRAM allocation.
- Various changes suggested by Danilo Krummrich, Gary Guo, and Daniel Almeida.
- Added Acked-by: Gary Guo for clist patch.
Changes from v6 to v7:
- Major restructuring: split the large 26-patch v6 RFC series. v7 only contains
the Rust infrastructure patches (clist + GPU buddy bindings), extracted from
the full nova-core MM series. The nova-core MM patches follow separately.
- Rebased on linux-next.
Changes from v5 to v6:
- Rebased on drm-rust-kernel/drm-rust-next.
- Expanded from 6 to 26 patches with full nova-core MM infrastructure including
page table walker, VMM, BAR1 user interface, TLB flush, and GpuMm manager.
Changes from v4 to v5:
- Added PRAMIN aperture support with documentation and self-tests.
- Improved buddy allocator bindings (fewer lines of code).
- Based on drm-rust-next instead of linux-next.
Changes from v3 to v4:
- Combined the clist and DRM buddy series into a single coherent series.
- Added DRM buddy allocator movement from drivers/gpu/drm/ up to drivers/gpu/,
renaming API from drm_buddy to gpu_buddy.
- Added Rust bindings for the GPU buddy allocator.
Changes from v2 to v3:
- Squashed 3 clist patches into one due to inter-dependencies.
- Changed Clist to Clist<'a, T> using const generic offset (Alex Courbot).
- Simplified C helpers to only list_add_tail (Alex Courbot, John Hubbard).
- Added init_list_head() Rust function (Alex Courbot).
- Added FusedIterator, PartialEq/Eq impls.
- Added MAINTAINERS entry (Miguel Ojeda).
Changes from v1 (RFC) to v2:
- Dropped DRM buddy allocator patches; series focuses solely on clist module.
- Dropped sample modules, replaced with doctests.
- Added proper lifetime management similar to scatterlist.
- Split clist into 3 separate patches.
Link to v11: https://lore.kernel.org/all/20260224224005.3232841-1-joelagnelf@nvidia.com/
Joel Fernandes (1):
rust: gpu: Add GPU buddy allocator bindings
MAINTAINERS | 6 +-
rust/bindings/bindings_helper.h | 11 +
rust/helpers/gpu.c | 23 ++
rust/helpers/helpers.c | 1 +
rust/kernel/gpu/buddy.rs | 611 ++++++++++++++++++++++++++++++++
rust/kernel/gpu/mod.rs | 5 +
rust/kernel/lib.rs | 2 +
7 files changed, 658 insertions(+), 1 deletion(-)
create mode 100644 rust/helpers/gpu.c
create mode 100644 rust/kernel/gpu/buddy.rs
create mode 100644 rust/kernel/gpu/mod.rs
--
2.34.1
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH v12 1/1] rust: gpu: Add GPU buddy allocator bindings
2026-03-08 18:04 [PATCH v12 0/1] Rust GPU buddy allocator bindings Joel Fernandes
@ 2026-03-08 18:04 ` Joel Fernandes
2026-03-08 21:42 ` Claude review: " Claude Code Review Bot
2026-03-08 21:42 ` Claude review: Rust " Claude Code Review Bot
2026-03-09 13:53 ` [PATCH v12.1 0/1] " Joel Fernandes
2 siblings, 1 reply; 10+ messages in thread
From: Joel Fernandes @ 2026-03-08 18:04 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
Add safe Rust abstractions over the Linux kernel's GPU buddy
allocator for physical memory management. The GPU buddy allocator
implements a binary buddy system useful for GPU physical memory
allocation. nova-core will use it for physical memory allocation.
Christian Koenig mentioned he'd like to step down from the reviewer role for
GPU buddy, so the MAINTAINERS entry is updated accordingly. Arun and Matthew
agree on the modified entry.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
MAINTAINERS | 6 +-
rust/bindings/bindings_helper.h | 11 +
rust/helpers/gpu.c | 23 ++
rust/helpers/helpers.c | 1 +
rust/kernel/gpu/buddy.rs | 611 ++++++++++++++++++++++++++++++++
rust/kernel/gpu/mod.rs | 5 +
rust/kernel/lib.rs | 2 +
7 files changed, 658 insertions(+), 1 deletion(-)
create mode 100644 rust/helpers/gpu.c
create mode 100644 rust/kernel/gpu/buddy.rs
create mode 100644 rust/kernel/gpu/mod.rs
diff --git a/MAINTAINERS b/MAINTAINERS
index 4c66f8261ff2..b2600dd05fc2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8513,7 +8513,9 @@ T: git https://gitlab.freedesktop.org/drm/rust/kernel.git
F: drivers/gpu/drm/nova/
F: drivers/gpu/drm/tyr/
F: drivers/gpu/nova-core/
+F: rust/helpers/gpu.c
F: rust/kernel/drm/
+F: rust/kernel/gpu/
DRM DRIVERS FOR ALLWINNER A10
M: Chen-Yu Tsai <wens@kernel.org>
@@ -8926,7 +8928,7 @@ F: include/drm/ttm/
GPU BUDDY ALLOCATOR
M: Matthew Auld <matthew.auld@intel.com>
M: Arun Pravin <arunpravin.paneerselvam@amd.com>
-R: Christian Koenig <christian.koenig@amd.com>
+R: Joel Fernandes <joelagnelf@nvidia.com>
L: dri-devel@lists.freedesktop.org
S: Maintained
T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
@@ -8935,6 +8937,8 @@ F: drivers/gpu/buddy.c
F: drivers/gpu/tests/gpu_buddy_test.c
F: include/linux/gpu_buddy.h
F: include/drm/drm_buddy.h
+F: rust/helpers/gpu.c
+F: rust/kernel/gpu/
DRM AUTOMATED TESTING
M: Helen Koike <helen.fornazier@gmail.com>
diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index 083cc44aa952..dbb765a9fdbd 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -29,6 +29,7 @@
#include <linux/hrtimer_types.h>
#include <linux/acpi.h>
+#include <linux/gpu_buddy.h>
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
@@ -146,6 +147,16 @@ const vm_flags_t RUST_CONST_HELPER_VM_MIXEDMAP = VM_MIXEDMAP;
const vm_flags_t RUST_CONST_HELPER_VM_HUGEPAGE = VM_HUGEPAGE;
const vm_flags_t RUST_CONST_HELPER_VM_NOHUGEPAGE = VM_NOHUGEPAGE;
+#if IS_ENABLED(CONFIG_GPU_BUDDY)
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_RANGE_ALLOCATION = GPU_BUDDY_RANGE_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_TOPDOWN_ALLOCATION = GPU_BUDDY_TOPDOWN_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CONTIGUOUS_ALLOCATION =
+ GPU_BUDDY_CONTIGUOUS_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CLEAR_ALLOCATION = GPU_BUDDY_CLEAR_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CLEARED = GPU_BUDDY_CLEARED;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_TRIM_DISABLE = GPU_BUDDY_TRIM_DISABLE;
+#endif
+
#if IS_ENABLED(CONFIG_ANDROID_BINDER_IPC_RUST)
#include "../../drivers/android/binder/rust_binder.h"
#include "../../drivers/android/binder/rust_binder_events.h"
diff --git a/rust/helpers/gpu.c b/rust/helpers/gpu.c
new file mode 100644
index 000000000000..38b1a4e6bef8
--- /dev/null
+++ b/rust/helpers/gpu.c
@@ -0,0 +1,23 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/gpu_buddy.h>
+
+#ifdef CONFIG_GPU_BUDDY
+
+__rust_helper u64 rust_helper_gpu_buddy_block_offset(const struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_offset(block);
+}
+
+__rust_helper unsigned int rust_helper_gpu_buddy_block_order(struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_order(block);
+}
+
+__rust_helper u64 rust_helper_gpu_buddy_block_size(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_size(mm, block);
+}
+
+#endif /* CONFIG_GPU_BUDDY */
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 724fcb8240ac..a53929ce52a3 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -32,6 +32,7 @@
#include "err.c"
#include "irq.c"
#include "fs.c"
+#include "gpu.c"
#include "io.c"
#include "jump_label.c"
#include "kunit.c"
diff --git a/rust/kernel/gpu/buddy.rs b/rust/kernel/gpu/buddy.rs
new file mode 100644
index 000000000000..082dc79ab247
--- /dev/null
+++ b/rust/kernel/gpu/buddy.rs
@@ -0,0 +1,611 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! GPU buddy allocator bindings.
+//!
+//! C header: [`include/linux/gpu_buddy.h`](srctree/include/linux/gpu_buddy.h)
+//!
+//! This module provides Rust abstractions over the Linux kernel's GPU buddy
+//! allocator, which implements a binary buddy memory allocator.
+//!
+//! The buddy allocator manages a contiguous address space and allocates blocks
+//! in power-of-two sizes, useful for GPU physical memory management.
+//!
+//! # Examples
+//!
+//! Create a buddy allocator and perform a basic range allocation:
+//!
+//! ```
+//! use kernel::{
+//! gpu::buddy::{GpuBuddy, GpuBuddyAllocMode, GpuBuddyAllocFlags, GpuBuddyParams},
+//! prelude::*,
+//! ptr::Alignment,
+//! sizes::*, //
+//! };
+//!
+//! // Create a 1GB buddy allocator with 4KB minimum chunk size.
+//! let buddy = GpuBuddy::new(GpuBuddyParams {
+//! base_offset: 0,
+//! physical_memory_size: SZ_1G as u64,
+//! chunk_size: SZ_4K,
+//! })?;
+//!
+//! assert_eq!(buddy.size(), SZ_1G as u64);
+//! assert_eq!(buddy.chunk_size(), SZ_4K);
+//! let initial_free = buddy.free_memory();
+//!
+//! // Allocate 16MB, results in a single 16MB block at offset 0.
+//! let allocated = KBox::pin_init(
+//! buddy.alloc_blocks(
+//! GpuBuddyAllocMode::Range { start: 0, end: 0 },
+//! SZ_16M,
+//! Alignment::new::<SZ_16M>(),
+//! GpuBuddyAllocFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//! assert_eq!(buddy.free_memory(), initial_free - SZ_16M as u64);
+//!
+//! let block = allocated.iter().next().expect("expected one block");
+//! assert_eq!(block.offset(), 0);
+//! assert_eq!(block.order(), 12); // 2^12 chunks of 4KB = 16MB
+//! assert_eq!(block.size(), SZ_16M);
+//!
+//! // Dropping the allocation returns the memory to the buddy allocator.
+//! drop(allocated);
+//! assert_eq!(buddy.free_memory(), initial_free);
+//! # Ok::<(), Error>(())
+//! ```
+//!
+//! Top-down allocation allocates from the highest addresses:
+//!
+//! ```
+//! # use kernel::{
+//! # gpu::buddy::{GpuBuddy, GpuBuddyAllocMode, GpuBuddyAllocFlags, GpuBuddyParams},
+//! # prelude::*,
+//! # ptr::Alignment,
+//! # sizes::*, //
+//! # };
+//! # let buddy = GpuBuddy::new(GpuBuddyParams {
+//! # base_offset: 0,
+//! # physical_memory_size: SZ_1G as u64,
+//! # chunk_size: SZ_4K,
+//! # })?;
+//! # let initial_free = buddy.free_memory();
+//! let topdown = KBox::pin_init(
+//! buddy.alloc_blocks(
+//! GpuBuddyAllocMode::TopDown,
+//! SZ_16M,
+//! Alignment::new::<SZ_16M>(),
+//! GpuBuddyFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//! assert_eq!(buddy.free_memory(), initial_free - SZ_16M as u64);
+//!
+//! let block = topdown.iter().next().expect("expected one block");
+//! assert_eq!(block.offset(), (SZ_1G - SZ_16M) as u64);
+//! assert_eq!(block.order(), 12);
+//! assert_eq!(block.size(), SZ_16M);
+//!
+//! // Dropping the allocation returns the memory to the buddy allocator.
+//! drop(topdown);
+//! assert_eq!(buddy.free_memory(), initial_free);
+//! # Ok::<(), Error>(())
+//! ```
+//!
+//! Non-contiguous allocation can fill fragmented memory by returning multiple
+//! blocks:
+//!
+//! ```
+//! # use kernel::{
+//! # gpu::buddy::{
+//! # GpuBuddy, GpuBuddyAllocFlags, GpuBuddyAllocMode, GpuBuddyParams,
+//! # },
+//! # prelude::*,
+//! # ptr::Alignment,
+//! # sizes::*, //
+//! # };
+//! # let buddy = GpuBuddy::new(GpuBuddyParams {
+//! # base_offset: 0,
+//! # physical_memory_size: SZ_1G as u64,
+//! # chunk_size: SZ_4K,
+//! # })?;
+//! # let initial_free = buddy.free_memory();
+//! // Create fragmentation by allocating 4MB blocks at [0,4M) and [8M,12M).
+//! let frag1 = KBox::pin_init(
+//! buddy.alloc_blocks(
+//! GpuBuddyAllocMode::Range { start: 0, end: SZ_4M as u64 },
+//! SZ_4M,
+//! Alignment::new::<SZ_4M>(),
+//! GpuBuddyAllocFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//! assert_eq!(buddy.free_memory(), initial_free - SZ_4M as u64);
+//!
+//! let frag2 = KBox::pin_init(
+//! buddy.alloc_blocks(
+//! GpuBuddyAllocMode::Range {
+//! start: SZ_8M as u64,
+//! end: (SZ_8M + SZ_4M) as u64,
+//! },
+//! SZ_4M,
+//! Alignment::new::<SZ_4M>(),
+//! GpuBuddyAllocFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//! assert_eq!(buddy.free_memory(), initial_free - SZ_8M as u64);
+//!
+//! // Allocate 8MB, this returns 2 blocks from the holes.
+//! let fragmented = KBox::pin_init(
+//! buddy.alloc_blocks(
+//! GpuBuddyAllocMode::Range { start: 0, end: SZ_16M as u64 },
+//! SZ_8M,
+//! Alignment::new::<SZ_4M>(),
+//! GpuBuddyAllocFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//! assert_eq!(buddy.free_memory(), initial_free - SZ_16M as u64);
+//!
+//! let (mut count, mut total) = (0u32, 0usize);
+//! for block in fragmented.iter() {
+//! assert_eq!(block.size(), SZ_4M);
+//! total += block.size();
+//! count += 1;
+//! }
+//! assert_eq!(total, SZ_8M);
+//! assert_eq!(count, 2);
+//! # Ok::<(), Error>(())
+//! ```
+//!
+//! Contiguous allocation fails when only fragmented space is available:
+//!
+//! ```
+//! # use kernel::{
+//! # gpu::buddy::{
+//! # GpuBuddy, GpuBuddyAllocFlag, GpuBuddyAllocFlags, GpuBuddyAllocMode, GpuBuddyParams,
+//! # },
+//! # prelude::*,
+//! # ptr::Alignment,
+//! # sizes::*, //
+//! # };
+//! // Create a small 16MB buddy allocator with fragmented memory.
+//! let small = GpuBuddy::new(GpuBuddyParams {
+//! base_offset: 0,
+//! physical_memory_size: SZ_16M as u64,
+//! chunk_size: SZ_4K,
+//! })?;
+//!
+//! let _hole1 = KBox::pin_init(
+//! small.alloc_blocks(
+//! GpuBuddyAllocMode::Range { start: 0, end: SZ_4M as u64 },
+//! SZ_4M,
+//! Alignment::new::<SZ_4M>(),
+//! GpuBuddyAllocFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//!
+//! let _hole2 = KBox::pin_init(
+//! small.alloc_blocks(
+//! GpuBuddyAllocMode::Range {
+//! start: SZ_8M as u64,
+//! end: (SZ_8M + SZ_4M) as u64,
+//! },
+//! SZ_4M,
+//! Alignment::new::<SZ_4M>(),
+//! GpuBuddyAllocFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//!
+//! // 8MB contiguous should fail, only two non-contiguous 4MB holes exist.
+//! let result = KBox::pin_init(
+//! small.alloc_blocks(
+//! GpuBuddyAllocMode::Simple,
+//! SZ_8M,
+//! Alignment::new::<SZ_4M>(),
+//! GpuBuddyAllocFlag::Contiguous.into(),
+//! ),
+//! GFP_KERNEL,
+//! );
+//! assert!(result.is_err());
+//! # Ok::<(), Error>(())
+//! ```
+
+use crate::{
+ bindings,
+ clist_create,
+ error::to_result,
+ interop::list::CListHead,
+ new_mutex,
+ prelude::*,
+ ptr::Alignment,
+ sync::{
+ lock::mutex::MutexGuard,
+ Arc,
+ Mutex, //
+ },
+ types::Opaque, //
+};
+
+/// Allocation mode for the GPU buddy allocator.
+///
+/// The mode determines the primary allocation strategy. Modes are mutually
+/// exclusive: an allocation is either simple, range-constrained, or top-down.
+///
+/// Orthogonal modifier flags (e.g., contiguous, clear) are specified separately
+/// via [`GpuBuddyAllocFlags`].
+#[derive(Copy, Clone, Debug, PartialEq, Eq)]
+pub enum GpuBuddyAllocMode {
+ /// Simple allocation without constraints.
+ Simple,
+ /// Range-based allocation between `start` and `end` addresses.
+ Range {
+ /// Start of the allocation range.
+ start: u64,
+ /// End of the allocation range.
+ end: u64,
+ },
+ /// Allocate from top of address space downward.
+ TopDown,
+}
+
+impl GpuBuddyAllocMode {
+ // Returns the C flags corresponding to the allocation mode.
+ fn into_flags(self) -> usize {
+ match self {
+ Self::Simple => 0,
+ Self::Range { .. } => bindings::GPU_BUDDY_RANGE_ALLOCATION as usize,
+ Self::TopDown => bindings::GPU_BUDDY_TOPDOWN_ALLOCATION as usize,
+ }
+ }
+
+ // Extracts the range start/end, defaulting to (0, 0) for non-range modes.
+ fn range(self) -> (u64, u64) {
+ match self {
+ Self::Range { start, end } => (start, end),
+ _ => (0, 0),
+ }
+ }
+}
+
+crate::impl_flags!(
+ /// Modifier flags for GPU buddy allocation.
+ ///
+ /// These flags can be combined with any [`GpuBuddyAllocMode`] to control
+ /// additional allocation behavior.
+ #[derive(Clone, Copy, Default, PartialEq, Eq)]
+ pub struct GpuBuddyAllocFlags(u32);
+
+ /// Individual modifier flag for GPU buddy allocation.
+ #[derive(Clone, Copy, PartialEq, Eq)]
+ pub enum GpuBuddyAllocFlag {
+ /// Allocate physically contiguous blocks.
+ Contiguous = bindings::GPU_BUDDY_CONTIGUOUS_ALLOCATION as u32,
+
+ /// Request allocation from cleared (zeroed) memory.
+ Clear = bindings::GPU_BUDDY_CLEAR_ALLOCATION as u32,
+
+ /// Disable trimming of partially used blocks.
+ TrimDisable = bindings::GPU_BUDDY_TRIM_DISABLE as u32,
+ }
+);
+
+/// Parameters for creating a GPU buddy allocator.
+pub struct GpuBuddyParams {
+ /// Base offset (in bytes) where the managed memory region starts.
+ /// Allocations will be offset by this value.
+ pub base_offset: u64,
+ /// Total physical memory size (in bytes) managed by the allocator.
+ pub physical_memory_size: u64,
+ /// Minimum allocation unit / chunk size (in bytes), must be >= 4KB.
+ pub chunk_size: usize,
+}
+
+/// Inner structure holding the actual buddy allocator.
+///
+/// # Synchronization
+///
+/// The C `gpu_buddy` API requires synchronization (see `include/linux/gpu_buddy.h`).
+/// [`GpuBuddyGuard`] ensures that the lock is held for all
+/// allocator and free operations, preventing races between concurrent allocations
+/// and the freeing that occurs when [`AllocatedBlocks`] is dropped.
+///
+/// # Invariants
+///
+/// The inner [`Opaque`] contains an initialized buddy allocator.
+#[pin_data(PinnedDrop)]
+struct GpuBuddyInner {
+ #[pin]
+ inner: Opaque<bindings::gpu_buddy>,
+
+ // TODO: Replace `Mutex<()>` with `Mutex<Opaque<..>>` once `Mutex::new()`
+ // accepts `impl PinInit<T>`.
+ #[pin]
+ lock: Mutex<()>,
+ /// Cached creation parameters (do not change after init).
+ params: GpuBuddyParams,
+}
+
+impl GpuBuddyInner {
+ /// Create a pin-initializer for the buddy allocator.
+ fn new(params: GpuBuddyParams) -> impl PinInit<Self, Error> {
+ let size = params.physical_memory_size;
+ let chunk_size = params.chunk_size;
+
+ // INVARIANT: `gpu_buddy_init` returns 0 on success, at which point the
+ // `gpu_buddy` structure is initialized and ready for use with all
+ // `gpu_buddy_*` APIs. `try_pin_init!` only completes if all fields succeed,
+ // so the invariant holds when construction finishes.
+ try_pin_init!(Self {
+ inner <- Opaque::try_ffi_init(|ptr| {
+ // SAFETY: `ptr` points to valid uninitialized memory from the pin-init
+ // infrastructure. `gpu_buddy_init` will initialize the structure.
+ to_result(unsafe { bindings::gpu_buddy_init(ptr, size, chunk_size as u64) })
+ }),
+ lock <- new_mutex!(()),
+ params,
+ })
+ }
+
+ /// Lock the mutex and return a guard for accessing the allocator.
+ fn lock(&self) -> GpuBuddyGuard<'_> {
+ GpuBuddyGuard {
+ inner: self,
+ _guard: self.lock.lock(),
+ }
+ }
+}
+
+#[pinned_drop]
+impl PinnedDrop for GpuBuddyInner {
+ fn drop(self: Pin<&mut Self>) {
+ // Defensive: the refcount has reached zero, so no other thread can
+ // contend for the lock; taking it keeps all allocator access uniform.
+ let guard = self.lock();
+
+ // SAFETY: Per the type invariant, `inner` contains an initialized
+ // allocator. `guard` provides exclusive access.
+ unsafe {
+ bindings::gpu_buddy_fini(guard.as_raw());
+ }
+ }
+}
+
+// SAFETY: `GpuBuddyInner` owns its C `gpu_buddy` structure, which is not
+// tied to any particular thread, so the type can be sent between threads.
+unsafe impl Send for GpuBuddyInner {}
+
+// SAFETY: `GpuBuddyInner` is `Sync` because `GpuBuddyInner::lock`
+// serializes all access to the C allocator, preventing data races.
+unsafe impl Sync for GpuBuddyInner {}
+
+// Guard that proves the lock is held, enabling access to the allocator.
+// The `_guard` holds the lock for the duration of this guard's lifetime.
+struct GpuBuddyGuard<'a> {
+ inner: &'a GpuBuddyInner,
+ _guard: MutexGuard<'a, ()>,
+}
+
+impl GpuBuddyGuard<'_> {
+ /// Get a raw pointer to the underlying C `gpu_buddy` structure.
+ fn as_raw(&self) -> *mut bindings::gpu_buddy {
+ self.inner.inner.get()
+ }
+}
+
+/// GPU buddy allocator instance.
+///
+/// This structure wraps the C `gpu_buddy` allocator using reference counting.
+/// The allocator is automatically cleaned up when all references are dropped.
+///
+/// Refer to the module-level documentation for usage examples.
+pub struct GpuBuddy(Arc<GpuBuddyInner>);
+
+impl GpuBuddy {
+ /// Create a new buddy allocator.
+ ///
+ /// Creates a buddy allocator that manages a contiguous address space of the given
+ /// size, with the specified minimum allocation unit (chunk_size must be at least 4KB).
+ pub fn new(params: GpuBuddyParams) -> Result<Self> {
+ Ok(Self(Arc::pin_init(GpuBuddyInner::new(params), GFP_KERNEL)?))
+ }
+
+ /// Get the base offset for allocations.
+ pub fn base_offset(&self) -> u64 {
+ self.0.params.base_offset
+ }
+
+ /// Get the chunk size (minimum allocation unit).
+ pub fn chunk_size(&self) -> usize {
+ self.0.params.chunk_size
+ }
+
+ /// Get the total managed size.
+ pub fn size(&self) -> u64 {
+ self.0.params.physical_memory_size
+ }
+
+ /// Get the available (free) memory in bytes.
+ pub fn free_memory(&self) -> u64 {
+ let guard = self.0.lock();
+
+ // SAFETY: Per the type invariant, `inner` contains an initialized allocator.
+ // `guard` provides exclusive access.
+ unsafe { (*guard.as_raw()).avail }
+ }
+
+ /// Allocate blocks from the buddy allocator.
+ ///
+ /// Returns a pin-initializer for [`AllocatedBlocks`].
+ ///
+ /// Takes `&self` instead of `&mut self` because the internal [`Mutex`] provides
+ /// the required synchronization; no external `&mut` exclusivity is needed.
+ pub fn alloc_blocks(
+ &self,
+ mode: GpuBuddyAllocMode,
+ size: usize,
+ min_block_size: Alignment,
+ flags: GpuBuddyAllocFlags,
+ ) -> impl PinInit<AllocatedBlocks, Error> {
+ let buddy_arc = Arc::clone(&self.0);
+ let (start, end) = mode.range();
+ let mode_flags = mode.into_flags();
+ let modifier_flags = u32::from(flags) as usize;
+
+ // Create pin-initializer that initializes list and allocates blocks.
+ try_pin_init!(AllocatedBlocks {
+ buddy: buddy_arc,
+ list <- CListHead::new(),
+ _: {
+ // `buddy` and `list` here refer to the fields initialized above;
+ // `try_pin_init!` makes previously-initialized fields available by name.
+ // Lock while allocating to serialize with concurrent frees.
+ let guard = buddy.lock();
+
+ // SAFETY: Per the type invariant, `inner` contains an initialized
+ // allocator. `guard` provides exclusive access.
+ to_result(unsafe {
+ bindings::gpu_buddy_alloc_blocks(
+ guard.as_raw(),
+ start,
+ end,
+ size as u64,
+ min_block_size.as_usize() as u64,
+ list.as_raw(),
+ mode_flags | modifier_flags,
+ )
+ })?
+ }
+ })
+ }
+}
+
+/// Allocated blocks from the buddy allocator with automatic cleanup.
+///
+/// This structure owns a list of allocated blocks and ensures they are
+/// automatically freed when dropped. Use `iter()` to iterate over all
+/// allocated blocks.
+///
+/// # Invariants
+///
+/// - `list` is an initialized, valid list head containing allocated blocks.
+#[pin_data(PinnedDrop)]
+pub struct AllocatedBlocks {
+ #[pin]
+ list: CListHead,
+ buddy: Arc<GpuBuddyInner>,
+}
+
+impl AllocatedBlocks {
+ /// Check if the block list is empty.
+ pub fn is_empty(&self) -> bool {
+ // An empty list head points to itself.
+ !self.list.is_linked()
+ }
+
+ /// Iterate over allocated blocks.
+ ///
+ /// Returns an iterator yielding [`AllocatedBlock`] values. Each [`AllocatedBlock`]
+ /// borrows `self` and is only valid for the duration of that borrow.
+ pub fn iter(&self) -> impl Iterator<Item = AllocatedBlock<'_>> + '_ {
+ // SAFETY:
+ // - Per the type invariant, `list` is an initialized sentinel `list_head`
+ // and is not concurrently modified (we hold a `&self` borrow).
+ // - The list contains `gpu_buddy_block` items linked via
+ // `__bindgen_anon_1.link`.
+ // - `Block` is `#[repr(transparent)]` over `gpu_buddy_block`.
+ let clist = clist_create!(unsafe {
+ self.list.as_raw(),
+ Block,
+ bindings::gpu_buddy_block,
+ __bindgen_anon_1.link
+ });
+
+ clist
+ .iter()
+ .map(|this| AllocatedBlock { this, blocks: self })
+ }
+}
+
+#[pinned_drop]
+impl PinnedDrop for AllocatedBlocks {
+ fn drop(self: Pin<&mut Self>) {
+ let guard = self.buddy.lock();
+
+ // SAFETY:
+ // - list is valid per the type's invariants.
+ // - guard provides exclusive access to the allocator.
+ unsafe {
+ bindings::gpu_buddy_free_list(guard.as_raw(), self.list.as_raw(), 0);
+ }
+ }
+}
+
+/// A GPU buddy block.
+///
+/// Transparent wrapper over C `gpu_buddy_block` structure. This type is returned
+/// as references during iteration over [`AllocatedBlocks`].
+///
+/// # Invariants
+///
+/// The inner [`Opaque`] contains a valid, allocated `gpu_buddy_block`.
+#[repr(transparent)]
+struct Block(Opaque<bindings::gpu_buddy_block>);
+
+impl Block {
+ /// Get a raw pointer to the underlying C block.
+ fn as_raw(&self) -> *mut bindings::gpu_buddy_block {
+ self.0.get()
+ }
+
+ /// Get the block's raw offset in the buddy address space (without base offset).
+ fn offset(&self) -> u64 {
+ // SAFETY: `self.as_raw()` is valid per the type's invariants.
+ unsafe { bindings::gpu_buddy_block_offset(self.as_raw()) }
+ }
+
+ /// Get the block order.
+ fn order(&self) -> u32 {
+ // SAFETY: `self.as_raw()` is valid per the type's invariants.
+ unsafe { bindings::gpu_buddy_block_order(self.as_raw()) }
+ }
+}
+
+// SAFETY: `Block` is a wrapper around `gpu_buddy_block` which can be
+// sent across threads safely.
+unsafe impl Send for Block {}
+
+// SAFETY: `Block` is only accessed through shared references after
+// allocation, and thus safe to access concurrently across threads.
+unsafe impl Sync for Block {}
+
+/// A buddy block paired with its owning [`AllocatedBlocks`] context.
+///
+/// Unlike a raw block, which only knows its offset within the buddy address
+/// space, an [`AllocatedBlock`] also has access to the allocator's `base_offset`
+/// and `chunk_size`, enabling it to compute absolute offsets and byte sizes.
+///
+/// Returned by [`AllocatedBlocks::iter()`].
+pub struct AllocatedBlock<'a> {
+ this: &'a Block,
+ blocks: &'a AllocatedBlocks,
+}
+
+impl AllocatedBlock<'_> {
+ /// Get the block's offset in the address space.
+ ///
+ /// Returns the absolute offset including the allocator's base offset.
+ /// This is the actual address to use for accessing the allocated memory.
+ pub fn offset(&self) -> u64 {
+ self.blocks.buddy.params.base_offset + self.this.offset()
+ }
+
+ /// Get the block order (size = chunk_size << order).
+ pub fn order(&self) -> u32 {
+ self.this.order()
+ }
+
+ /// Get the block's size in bytes.
+ pub fn size(&self) -> usize {
+ self.blocks.buddy.params.chunk_size << self.this.order()
+ }
+}
diff --git a/rust/kernel/gpu/mod.rs b/rust/kernel/gpu/mod.rs
new file mode 100644
index 000000000000..8f25e6367edc
--- /dev/null
+++ b/rust/kernel/gpu/mod.rs
@@ -0,0 +1,5 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! GPU subsystem abstractions.
+
+pub mod buddy;
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index bb741f1e0dfd..63e3f656eb6c 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -98,6 +98,8 @@
pub mod firmware;
pub mod fmt;
pub mod fs;
+#[cfg(CONFIG_GPU_BUDDY = "y")]
+pub mod gpu;
#[cfg(CONFIG_I2C = "y")]
pub mod i2c;
pub mod id_pool;
--
2.34.1
^ permalink raw reply related [flat|nested] 10+ messages in thread
* Claude review: Rust GPU buddy allocator bindings
2026-03-08 18:04 [PATCH v12 0/1] Rust GPU buddy allocator bindings Joel Fernandes
2026-03-08 18:04 ` [PATCH v12 1/1] rust: gpu: Add " Joel Fernandes
@ 2026-03-08 21:42 ` Claude Code Review Bot
2026-03-09 13:53 ` [PATCH v12.1 0/1] " Joel Fernandes
2 siblings, 0 replies; 10+ messages in thread
From: Claude Code Review Bot @ 2026-03-08 21:42 UTC (permalink / raw)
To: dri-devel-reviews
Overall Series Review
Subject: Rust GPU buddy allocator bindings
Author: Joel Fernandes <joelagnelf@nvidia.com>
Patches: 2
Reviewed: 2026-03-09T07:42:19.323243
---
This is v12 of a single-patch series adding safe Rust abstractions over the kernel's GPU buddy allocator (`gpu_buddy`), intended for use by nova-core. The overall design is solid: a `GpuBuddy` wrapper using `Arc` for shared ownership, a `Mutex` for synchronization (as required by the C API), RAII-based `AllocatedBlocks` with automatic cleanup on drop, and a clean `GpuBuddyAllocMode` enum that maps to C flags. The pin-init integration is well done and follows kernel Rust patterns.
There are several issues in the doc examples that would prevent doctests from compiling, and one semantically misleading example. The core implementation code looks correct.
---
Generated by Claude Code Patch Reviewer
* Claude review: rust: gpu: Add GPU buddy allocator bindings
2026-03-08 18:04 ` [PATCH v12 1/1] rust: gpu: Add " Joel Fernandes
@ 2026-03-08 21:42 ` Claude Code Review Bot
0 siblings, 0 replies; 10+ messages in thread
From: Claude Code Review Bot @ 2026-03-08 21:42 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
**Doc example type name mismatch (compile error)**
The doc examples import `GpuBuddyAllocFlags` but then use the non-existent type `GpuBuddyFlags`. This would cause doctest compilation failures. It appears in examples 1, 2, and 3:
```rust
// buddy.rs line 19 (example 1 import):
gpu::buddy::{GpuBuddy, GpuBuddyAllocMode, GpuBuddyAllocFlags, GpuBuddyParams},
// buddy.rs line 42 (example 1 usage):
GpuBuddyFlags::default(),
```
This should be `GpuBuddyAllocFlags::default()`. The same issue appears in examples 2 (line 79) and 3 (lines 120, 134, 146).
**Misleading `Range { start: 0, end: 0 }` in first example**
In the first doc example (buddy.rs line 39):
```rust
GpuBuddyAllocMode::Range { start: 0, end: 0 },
```
This sets `GPU_BUDDY_RANGE_ALLOCATION` with `end=0`. In the C code at `buddy.c:626`:
```c
end = end - 1;
```
This causes `end` to underflow to `U64_MAX`, making the range effectively unbounded. The example "works" because the allocator is fresh and happens to allocate at offset 0, but the semantics are wrong — a `Range` with `start: 0, end: 0` doesn't restrict the range at all. The example should either use `GpuBuddyAllocMode::Simple` or specify a proper end value like `end: SZ_1G as u64`.
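The wraparound itself is easy to demonstrate outside the kernel. This standalone sketch mirrors the unsigned C statement `end = end - 1` using Rust's explicit `wrapping_sub`:

```rust
fn main() {
    // Mirror of the C statement `end = end - 1;` where `end` is a u64.
    // C unsigned arithmetic wraps on underflow; `wrapping_sub` makes
    // that behavior explicit in Rust.
    let end: u64 = 0;
    let adjusted = end.wrapping_sub(1);

    // `end = 0` wraps to U64_MAX, so the "range" spans the whole address space.
    assert_eq!(adjusted, u64::MAX);

    // With a real upper bound (e.g. 1 GiB) no wrap occurs.
    let sz_1g: u64 = 1 << 30;
    assert_eq!(sz_1g.wrapping_sub(1), sz_1g - 1);
}
```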
**Unused C helper: `rust_helper_gpu_buddy_block_size`**
In `rust/helpers/gpu.c` (lines 16-20):
```c
__rust_helper u64 rust_helper_gpu_buddy_block_size(struct gpu_buddy *mm,
struct gpu_buddy_block *block)
{
return gpu_buddy_block_size(mm, block);
}
```
This helper is defined but never called from Rust. `AllocatedBlock::size()` (buddy.rs line 608-609) computes the size manually:
```rust
pub fn size(&self) -> usize {
self.blocks.buddy.params.chunk_size << self.this.order()
}
```
This is functionally equivalent and avoids needing the `gpu_buddy*` pointer, so the approach is fine, but the unused helper should be removed to avoid dead code.
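The functional equivalence is straightforward to sanity-check in isolation (the values below are illustrative, not taken from a real allocator):

```rust
fn main() {
    // Illustrative values: 4 KiB chunks, a block of order 12.
    let chunk_size: usize = 4096;
    let order: u32 = 12;

    // Rust-side computation used by `AllocatedBlock::size()`...
    let rust_size = chunk_size << order;

    // ...matches the C definition: block size = chunk_size * 2^order.
    assert_eq!(rust_size, chunk_size * (1usize << order));
    assert_eq!(rust_size, 16 << 20); // 16 MiB
}
```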
**`AllocatedBlock::size()` returns `usize` — potential overflow on 32-bit**
At buddy.rs line 608:
```rust
pub fn size(&self) -> usize {
self.blocks.buddy.params.chunk_size << self.this.order()
}
```
If `chunk_size` is 4KB and `order` is large (the max order is 51 per `GPU_BUDDY_MAX_ORDER`), this would overflow `usize` on 32-bit platforms. While GPU drivers are primarily 64-bit, this could be a latent issue. Consider returning `u64` to match the C type (`gpu_buddy_block_size` returns `u64`).
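With explicit integer widths the concern becomes concrete (order 51 is the maximum cited above; the sketch is standalone):

```rust
fn main() {
    let chunk_size: u64 = 4096; // 4 KiB = 2^12
    let max_order: u32 = 51; // maximum order cited above

    // In u64 the largest block is 2^63 bytes and is representable.
    let size = chunk_size << max_order;
    assert_eq!(size, 1u64 << 63);

    // A 32-bit usize cannot hold it: the checked conversion fails,
    // which is exactly what a plain `as usize` cast would silently truncate.
    assert!(u32::try_from(size).is_err());
}
```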
**MAINTAINERS: `rust/helpers/gpu.c` listed under two entries**
The file `rust/helpers/gpu.c` is listed under both "DRM DRIVERS AND INFRASTRUCTURE" and "GPU BUDDY ALLOCATOR" sections. Similarly `rust/kernel/gpu/` appears in both. While dual-listing isn't necessarily wrong (especially if both groups should review changes), it's worth confirming this is intentional rather than accidental.
**Minor: `GpuBuddyInner::drop` takes lock during drop**
At buddy.rs lines 365-373:
```rust
fn drop(self: Pin<&mut Self>) {
let guard = self.lock();
unsafe {
bindings::gpu_buddy_fini(guard.as_raw());
}
}
```
The lock is taken during `drop()` for `gpu_buddy_fini`. At this point the `Arc` refcount has reached zero, so no other thread can hold a reference. The lock acquisition is technically unnecessary (there can be no contention), but it's not incorrect — it provides a safety margin. Worth a comment noting this is defensive.
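The uncontended-lock-in-drop pattern can be sketched with `std::sync::Mutex` (illustrative only; the kernel `Mutex` API differs):

```rust
use std::sync::Mutex;

struct Allocator {
    // Stands in for the lock-protected allocator state.
    state: Mutex<u32>,
}

impl Drop for Allocator {
    fn drop(&mut self) {
        // Defensive: by the time Drop runs, this is the only reference,
        // so the lock cannot be contended, but taking it keeps the
        // "all access goes through the lock" discipline uniform.
        let state = self.state.lock().unwrap();
        assert_eq!(*state, 0); // e.g. verify everything was freed
    }
}

fn main() {
    let a = Allocator { state: Mutex::new(0) };
    drop(a); // Drop acquires the lock without contention.
}
```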
**Minor: The `alloc_blocks` initializer field access pattern**
In buddy.rs lines 457-478, the `try_pin_init!` macro body references `buddy` and `list` as field names in the `_:` initializer:
```rust
try_pin_init!(AllocatedBlocks {
buddy: buddy_arc,
list <- CListHead::new(),
_: {
let guard = buddy.lock();
...
list.as_raw(),
...
}
})
```
This relies on `try_pin_init!` making previously-initialized fields available by name. This is a valid kernel Rust pattern but is quite implicit — a brief comment explaining that `buddy` and `list` refer to the already-initialized fields would help readability.
**Overall**: The architecture is clean and the safety story is well-reasoned. The main blockers are the `GpuBuddyFlags` typo (doctests won't compile) and the misleading `Range { start: 0, end: 0 }` example. The rest are minor cleanups.
---
Generated by Claude Code Patch Reviewer
* [PATCH v12.1 0/1] Rust GPU buddy allocator bindings
2026-03-08 18:04 [PATCH v12 0/1] Rust GPU buddy allocator bindings Joel Fernandes
2026-03-08 18:04 ` [PATCH v12 1/1] rust: gpu: Add " Joel Fernandes
2026-03-08 21:42 ` Claude review: Rust " Claude Code Review Bot
@ 2026-03-09 13:53 ` Joel Fernandes
2026-03-09 13:53 ` [PATCH v12.1 1/1] rust: gpu: Add " Joel Fernandes
2 siblings, 1 reply; 10+ messages in thread
From: Joel Fernandes @ 2026-03-09 13:53 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
This patch adds safe Rust abstractions over the Linux kernel's GPU buddy
allocator for physical memory management. The prerequisite infrastructure
patches (DRM buddy code movement and the uninitialized buddy fix) have been
absorbed into upstream -next, so this is now a standalone patch.
The git tree with all patches can be found at:
git://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git (tag: gpu-buddy-bindings-v12.1-20260309)
Change log:
Changes from v12 to v12.1:
- Fixed undeclared type `GpuBuddyFlags` in doc examples: renamed to
`GpuBuddyAllocFlags` in all seven doc test examples (found by kernel
test robot).
Changes from v11 to v12:
- Rebased on linux-next; this is now a standalone single patch as dependencies
are absorbed (but clist is a prerequisite).
- Redesigned allocation API (Alexandre Courbot) for better Rust ergonomics.
- Split single long example into 4 self-contained examples (Alexandre Courbot).
- Several safety and invariant comment changes (Danilo).
- MAINTAINERS changes (Arun, Matthew, Danilo, Dave).
- Fixed `#[cfg(CONFIG_GPU_BUDDY)]` to `#[cfg(CONFIG_GPU_BUDDY = "y")]` (Danilo Krummrich).
- Updated `ffi::clist::CListHead` to `interop::list::CListHead`.
Changes from v10 to v11:
- Dropped "rust: ffi: Convert pub use to pub mod and create ffi module" patch;
the ffi module restructuring will go through a different path.
- Dropped "rust: clist: Add support to interface with C linked lists" patch;
the clist module will be submitted separately.
- Dropped "nova-core: Kconfig: Sort select statements alphabetically" cosmetic
patch.
- Patches 1-3 (DRM buddy movement and fix) are included as reference only;
they are already being pulled into upstream via drm-misc-next.
- Removed clist patches as those can go in independently (Alice).
- Moved the Kconfig GPU_BUDDY selection patch to nova-core mm series to enable
it when it is actually used.
- Various nits to comments, etc.
Changes from v9 to v10:
- Absorbed the DRM buddy code movement patches into this series as patches 1-2.
Dave Airlie reworked these into two parts for better git history.
- Added "gpu: Fix uninitialized buddy for built-in drivers" fix by Koen Koning,
using subsys_initcall instead of module_init to fix NULL pointer dereference
when built-in drivers use the buddy allocator before initialization.
- Added "rust: ffi: Convert pub use to pub mod and create ffi module" to prepare
the ffi module for hosting clist as a sub-module.
- Moved clist from rust/kernel/clist.rs to rust/kernel/ffi/.
- Added "nova-core: Kconfig: Sort select statements alphabetically" (Danilo).
Changes from v8 to v9:
- Updated nova-core Kconfig patch: addressed sorting of Kconfig options.
- Added Daniel Almeida's Reviewed-by tag to clist patch.
- Minor refinements to GPU buddy bindings.
Changes from v7 to v8:
- Added nova-core Kconfig patch to select GPU_BUDDY for VRAM allocation.
- Various changes suggested by Danilo Krummrich, Gary Guo, and Daniel Almeida.
- Added Acked-by: Gary Guo for clist patch.
Changes from v6 to v7:
- Major restructuring: split the large 26-patch v6 RFC series. v7 only contains
the Rust infrastructure patches (clist + GPU buddy bindings), extracted from
the full nova-core MM series. The nova-core MM patches follow separately.
- Rebased on linux-next.
Link to v12: https://lore.kernel.org/all/20260308180407.3988286-1-joelagnelf@nvidia.com/
Link to v11: https://lore.kernel.org/all/20260224224005.3232841-1-joelagnelf@nvidia.com/
Joel Fernandes (1):
rust: gpu: Add GPU buddy allocator bindings
MAINTAINERS | 6 +-
rust/bindings/bindings_helper.h | 11 +
rust/helpers/gpu.c | 23 ++
rust/helpers/helpers.c | 1 +
rust/kernel/gpu/buddy.rs | 611 ++++++++++++++++++++++++++++++++
rust/kernel/gpu/mod.rs | 5 +
rust/kernel/lib.rs | 2 +
7 files changed, 658 insertions(+), 1 deletion(-)
create mode 100644 rust/helpers/gpu.c
create mode 100644 rust/kernel/gpu/buddy.rs
create mode 100644 rust/kernel/gpu/mod.rs
--
2.34.1
* [PATCH v12.1 1/1] rust: gpu: Add GPU buddy allocator bindings
2026-03-09 13:53 ` [PATCH v12.1 0/1] " Joel Fernandes
@ 2026-03-09 13:53 ` Joel Fernandes
0 siblings, 0 replies; 10+ messages in thread
From: Joel Fernandes @ 2026-03-09 13:53 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
Add safe Rust abstractions over the Linux kernel's GPU buddy
allocator for physical memory management. The GPU buddy allocator
implements a binary buddy system useful for GPU physical memory
allocation. nova-core will use it for physical memory allocation.
Christian Koenig mentioned he'd like to step down from the reviewer role for
GPU buddy, so the MAINTAINERS entry is updated accordingly. Arun and Matthew
agree on the modified entry.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
MAINTAINERS | 6 +-
rust/bindings/bindings_helper.h | 11 +
rust/helpers/gpu.c | 23 ++
rust/helpers/helpers.c | 1 +
rust/kernel/gpu/buddy.rs | 611 ++++++++++++++++++++++++++++++++
rust/kernel/gpu/mod.rs | 5 +
rust/kernel/lib.rs | 2 +
7 files changed, 658 insertions(+), 1 deletion(-)
create mode 100644 rust/helpers/gpu.c
create mode 100644 rust/kernel/gpu/buddy.rs
create mode 100644 rust/kernel/gpu/mod.rs
diff --git a/MAINTAINERS b/MAINTAINERS
index 4c66f8261ff2..b2600dd05fc2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8513,7 +8513,9 @@ T: git https://gitlab.freedesktop.org/drm/rust/kernel.git
F: drivers/gpu/drm/nova/
F: drivers/gpu/drm/tyr/
F: drivers/gpu/nova-core/
+F: rust/helpers/gpu.c
F: rust/kernel/drm/
+F: rust/kernel/gpu/
DRM DRIVERS FOR ALLWINNER A10
M: Chen-Yu Tsai <wens@kernel.org>
@@ -8926,7 +8928,7 @@ F: include/drm/ttm/
GPU BUDDY ALLOCATOR
M: Matthew Auld <matthew.auld@intel.com>
M: Arun Pravin <arunpravin.paneerselvam@amd.com>
-R: Christian Koenig <christian.koenig@amd.com>
+R: Joel Fernandes <joelagnelf@nvidia.com>
L: dri-devel@lists.freedesktop.org
S: Maintained
T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
@@ -8935,6 +8937,8 @@ F: drivers/gpu/buddy.c
F: drivers/gpu/tests/gpu_buddy_test.c
F: include/linux/gpu_buddy.h
F: include/drm/drm_buddy.h
+F: rust/helpers/gpu.c
+F: rust/kernel/gpu/
DRM AUTOMATED TESTING
M: Helen Koike <helen.fornazier@gmail.com>
diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index 083cc44aa952..dbb765a9fdbd 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -29,6 +29,7 @@
#include <linux/hrtimer_types.h>
#include <linux/acpi.h>
+#include <linux/gpu_buddy.h>
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
@@ -146,6 +147,16 @@ const vm_flags_t RUST_CONST_HELPER_VM_MIXEDMAP = VM_MIXEDMAP;
const vm_flags_t RUST_CONST_HELPER_VM_HUGEPAGE = VM_HUGEPAGE;
const vm_flags_t RUST_CONST_HELPER_VM_NOHUGEPAGE = VM_NOHUGEPAGE;
+#if IS_ENABLED(CONFIG_GPU_BUDDY)
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_RANGE_ALLOCATION = GPU_BUDDY_RANGE_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_TOPDOWN_ALLOCATION = GPU_BUDDY_TOPDOWN_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CONTIGUOUS_ALLOCATION =
+ GPU_BUDDY_CONTIGUOUS_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CLEAR_ALLOCATION = GPU_BUDDY_CLEAR_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CLEARED = GPU_BUDDY_CLEARED;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_TRIM_DISABLE = GPU_BUDDY_TRIM_DISABLE;
+#endif
+
#if IS_ENABLED(CONFIG_ANDROID_BINDER_IPC_RUST)
#include "../../drivers/android/binder/rust_binder.h"
#include "../../drivers/android/binder/rust_binder_events.h"
diff --git a/rust/helpers/gpu.c b/rust/helpers/gpu.c
new file mode 100644
index 000000000000..38b1a4e6bef8
--- /dev/null
+++ b/rust/helpers/gpu.c
@@ -0,0 +1,23 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/gpu_buddy.h>
+
+#ifdef CONFIG_GPU_BUDDY
+
+__rust_helper u64 rust_helper_gpu_buddy_block_offset(const struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_offset(block);
+}
+
+__rust_helper unsigned int rust_helper_gpu_buddy_block_order(struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_order(block);
+}
+
+__rust_helper u64 rust_helper_gpu_buddy_block_size(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_size(mm, block);
+}
+
+#endif /* CONFIG_GPU_BUDDY */
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 724fcb8240ac..a53929ce52a3 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -32,6 +32,7 @@
#include "err.c"
#include "irq.c"
#include "fs.c"
+#include "gpu.c"
#include "io.c"
#include "jump_label.c"
#include "kunit.c"
diff --git a/rust/kernel/gpu/buddy.rs b/rust/kernel/gpu/buddy.rs
new file mode 100644
index 000000000000..9027c9a7778f
--- /dev/null
+++ b/rust/kernel/gpu/buddy.rs
@@ -0,0 +1,611 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! GPU buddy allocator bindings.
+//!
+//! C header: [`include/linux/gpu_buddy.h`](srctree/include/linux/gpu_buddy.h)
+//!
+//! This module provides Rust abstractions over the Linux kernel's GPU buddy
+//! allocator, which implements a binary buddy memory allocator.
+//!
+//! The buddy allocator manages a contiguous address space and allocates blocks
+//! in power-of-two sizes, useful for GPU physical memory management.
+//!
+//! # Examples
+//!
+//! Create a buddy allocator and perform a basic range allocation:
+//!
+//! ```
+//! use kernel::{
+//! gpu::buddy::{GpuBuddy, GpuBuddyAllocMode, GpuBuddyAllocFlags, GpuBuddyParams},
+//! prelude::*,
+//! ptr::Alignment,
+//! sizes::*, //
+//! };
+//!
+//! // Create a 1GB buddy allocator with 4KB minimum chunk size.
+//! let buddy = GpuBuddy::new(GpuBuddyParams {
+//! base_offset: 0,
+//! physical_memory_size: SZ_1G as u64,
+//! chunk_size: SZ_4K,
+//! })?;
+//!
+//! assert_eq!(buddy.size(), SZ_1G as u64);
+//! assert_eq!(buddy.chunk_size(), SZ_4K);
+//! let initial_free = buddy.free_memory();
+//!
+//! // Allocate 16MB, results in a single 16MB block at offset 0.
+//! let allocated = KBox::pin_init(
+//! buddy.alloc_blocks(
+//! GpuBuddyAllocMode::Range { start: 0, end: SZ_1G as u64 },
+//! SZ_16M,
+//! Alignment::new::<SZ_16M>(),
+//! GpuBuddyAllocFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//! assert_eq!(buddy.free_memory(), initial_free - SZ_16M as u64);
+//!
+//! let block = allocated.iter().next().expect("expected one block");
+//! assert_eq!(block.offset(), 0);
+//! assert_eq!(block.order(), 12); // 2^12 pages = 16MB
+//! assert_eq!(block.size(), SZ_16M);
+//!
+//! // Dropping the allocation returns the memory to the buddy allocator.
+//! drop(allocated);
+//! assert_eq!(buddy.free_memory(), initial_free);
+//! # Ok::<(), Error>(())
+//! ```
+//!
+//! Top-down allocation allocates from the highest addresses:
+//!
+//! ```
+//! # use kernel::{
+//! # gpu::buddy::{GpuBuddy, GpuBuddyAllocMode, GpuBuddyAllocFlags, GpuBuddyParams},
+//! # prelude::*,
+//! # ptr::Alignment,
+//! # sizes::*, //
+//! # };
+//! # let buddy = GpuBuddy::new(GpuBuddyParams {
+//! # base_offset: 0,
+//! # physical_memory_size: SZ_1G as u64,
+//! # chunk_size: SZ_4K,
+//! # })?;
+//! # let initial_free = buddy.free_memory();
+//! let topdown = KBox::pin_init(
+//! buddy.alloc_blocks(
+//! GpuBuddyAllocMode::TopDown,
+//! SZ_16M,
+//! Alignment::new::<SZ_16M>(),
+//! GpuBuddyAllocFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//! assert_eq!(buddy.free_memory(), initial_free - SZ_16M as u64);
+//!
+//! let block = topdown.iter().next().expect("expected one block");
+//! assert_eq!(block.offset(), (SZ_1G - SZ_16M) as u64);
+//! assert_eq!(block.order(), 12);
+//! assert_eq!(block.size(), SZ_16M);
+//!
+//! // Dropping the allocation returns the memory to the buddy allocator.
+//! drop(topdown);
+//! assert_eq!(buddy.free_memory(), initial_free);
+//! # Ok::<(), Error>(())
+//! ```
+//!
+//! Non-contiguous allocation can fill fragmented memory by returning multiple
+//! blocks:
+//!
+//! ```
+//! # use kernel::{
+//! # gpu::buddy::{
+//! # GpuBuddy, GpuBuddyAllocFlags, GpuBuddyAllocMode, GpuBuddyParams,
+//! # },
+//! # prelude::*,
+//! # ptr::Alignment,
+//! # sizes::*, //
+//! # };
+//! # let buddy = GpuBuddy::new(GpuBuddyParams {
+//! # base_offset: 0,
+//! # physical_memory_size: SZ_1G as u64,
+//! # chunk_size: SZ_4K,
+//! # })?;
+//! # let initial_free = buddy.free_memory();
+//! // Create fragmentation by allocating 4MB blocks at [0,4M) and [8M,12M).
+//! let frag1 = KBox::pin_init(
+//! buddy.alloc_blocks(
+//! GpuBuddyAllocMode::Range { start: 0, end: SZ_4M as u64 },
+//! SZ_4M,
+//! Alignment::new::<SZ_4M>(),
+//! GpuBuddyAllocFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//! assert_eq!(buddy.free_memory(), initial_free - SZ_4M as u64);
+//!
+//! let frag2 = KBox::pin_init(
+//! buddy.alloc_blocks(
+//! GpuBuddyAllocMode::Range {
+//! start: SZ_8M as u64,
+//! end: (SZ_8M + SZ_4M) as u64,
+//! },
+//! SZ_4M,
+//! Alignment::new::<SZ_4M>(),
+//! GpuBuddyAllocFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//! assert_eq!(buddy.free_memory(), initial_free - SZ_8M as u64);
+//!
+//! // Allocate 8MB; this returns two 4MB blocks from the holes.
+//! let fragmented = KBox::pin_init(
+//! buddy.alloc_blocks(
+//! GpuBuddyAllocMode::Range { start: 0, end: SZ_16M as u64 },
+//! SZ_8M,
+//! Alignment::new::<SZ_4M>(),
+//! GpuBuddyAllocFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//! assert_eq!(buddy.free_memory(), initial_free - SZ_16M as u64);
+//!
+//! let (mut count, mut total) = (0u32, 0usize);
+//! for block in fragmented.iter() {
+//! assert_eq!(block.size(), SZ_4M);
+//! total += block.size();
+//! count += 1;
+//! }
+//! assert_eq!(total, SZ_8M);
+//! assert_eq!(count, 2);
+//! # Ok::<(), Error>(())
+//! ```
+//!
+//! Contiguous allocation fails when only fragmented space is available:
+//!
+//! ```
+//! # use kernel::{
+//! # gpu::buddy::{
+//! # GpuBuddy, GpuBuddyAllocFlag, GpuBuddyAllocFlags, GpuBuddyAllocMode, GpuBuddyParams,
+//! # },
+//! # prelude::*,
+//! # ptr::Alignment,
+//! # sizes::*, //
+//! # };
+//! // Create a small 16MB buddy allocator with fragmented memory.
+//! let small = GpuBuddy::new(GpuBuddyParams {
+//! base_offset: 0,
+//! physical_memory_size: SZ_16M as u64,
+//! chunk_size: SZ_4K,
+//! })?;
+//!
+//! let _hole1 = KBox::pin_init(
+//! small.alloc_blocks(
+//! GpuBuddyAllocMode::Range { start: 0, end: SZ_4M as u64 },
+//! SZ_4M,
+//! Alignment::new::<SZ_4M>(),
+//! GpuBuddyAllocFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//!
+//! let _hole2 = KBox::pin_init(
+//! small.alloc_blocks(
+//! GpuBuddyAllocMode::Range {
+//! start: SZ_8M as u64,
+//! end: (SZ_8M + SZ_4M) as u64,
+//! },
+//! SZ_4M,
+//! Alignment::new::<SZ_4M>(),
+//! GpuBuddyAllocFlags::default(),
+//! ),
+//! GFP_KERNEL,
+//! )?;
+//!
+//! // An 8MB contiguous allocation should fail; only two non-contiguous 4MB holes exist.
+//! let result = KBox::pin_init(
+//! small.alloc_blocks(
+//! GpuBuddyAllocMode::Simple,
+//! SZ_8M,
+//! Alignment::new::<SZ_4M>(),
+//! GpuBuddyAllocFlag::Contiguous.into(),
+//! ),
+//! GFP_KERNEL,
+//! );
+//! assert!(result.is_err());
+//! # Ok::<(), Error>(())
+//! ```
+
+use crate::{
+ bindings,
+ clist_create,
+ error::to_result,
+ interop::list::CListHead,
+ new_mutex,
+ prelude::*,
+ ptr::Alignment,
+ sync::{
+ lock::mutex::MutexGuard,
+ Arc,
+ Mutex, //
+ },
+ types::Opaque, //
+};
+
+/// Allocation mode for the GPU buddy allocator.
+///
+/// The mode determines the primary allocation strategy. Modes are mutually
+/// exclusive: an allocation is either simple, range-constrained, or top-down.
+///
+/// Orthogonal modifier flags (e.g., contiguous, clear) are specified separately
+/// via [`GpuBuddyAllocFlags`].
+#[derive(Copy, Clone, Debug, PartialEq, Eq)]
+pub enum GpuBuddyAllocMode {
+ /// Simple allocation without constraints.
+ Simple,
+ /// Range-based allocation between `start` and `end` addresses.
+ Range {
+ /// Start of the allocation range.
+ start: u64,
+ /// End of the allocation range.
+ end: u64,
+ },
+ /// Allocate from top of address space downward.
+ TopDown,
+}
+
+impl GpuBuddyAllocMode {
+ // Returns the C flags corresponding to the allocation mode.
+ fn into_flags(self) -> usize {
+ match self {
+ Self::Simple => 0,
+ Self::Range { .. } => bindings::GPU_BUDDY_RANGE_ALLOCATION as usize,
+ Self::TopDown => bindings::GPU_BUDDY_TOPDOWN_ALLOCATION as usize,
+ }
+ }
+
+ // Extracts the range start/end, defaulting to (0, 0) for non-range modes.
+ fn range(self) -> (u64, u64) {
+ match self {
+ Self::Range { start, end } => (start, end),
+ _ => (0, 0),
+ }
+ }
+}
+
+crate::impl_flags!(
+ /// Modifier flags for GPU buddy allocation.
+ ///
+ /// These flags can be combined with any [`GpuBuddyAllocMode`] to control
+ /// additional allocation behavior.
+ #[derive(Clone, Copy, Default, PartialEq, Eq)]
+ pub struct GpuBuddyAllocFlags(u32);
+
+ /// Individual modifier flag for GPU buddy allocation.
+ #[derive(Clone, Copy, PartialEq, Eq)]
+ pub enum GpuBuddyAllocFlag {
+ /// Allocate physically contiguous blocks.
+ Contiguous = bindings::GPU_BUDDY_CONTIGUOUS_ALLOCATION as u32,
+
+ /// Request allocation from cleared (zeroed) memory.
+ Clear = bindings::GPU_BUDDY_CLEAR_ALLOCATION as u32,
+
+ /// Disable trimming of partially used blocks.
+ TrimDisable = bindings::GPU_BUDDY_TRIM_DISABLE as u32,
+ }
+);
+
+/// Parameters for creating a GPU buddy allocator.
+pub struct GpuBuddyParams {
+ /// Base offset (in bytes) where the managed memory region starts.
+ /// Allocations will be offset by this value.
+ pub base_offset: u64,
+ /// Total physical memory size (in bytes) managed by the allocator.
+ pub physical_memory_size: u64,
+ /// Minimum allocation unit / chunk size (in bytes), must be >= 4KB.
+ pub chunk_size: usize,
+}
+
+/// Inner structure holding the actual buddy allocator.
+///
+/// # Synchronization
+///
+/// The C `gpu_buddy` API requires synchronization (see `include/linux/gpu_buddy.h`).
+/// [`GpuBuddyGuard`] ensures that the lock is held for all allocation and
+/// free operations, preventing races between concurrent allocations and the
+/// freeing that occurs when [`AllocatedBlocks`] is dropped.
+///
+/// # Invariants
+///
+/// The inner [`Opaque`] contains an initialized buddy allocator.
+#[pin_data(PinnedDrop)]
+struct GpuBuddyInner {
+ #[pin]
+ inner: Opaque<bindings::gpu_buddy>,
+
+ // TODO: Replace `Mutex<()>` with `Mutex<Opaque<..>>` once `Mutex::new()`
+ // accepts `impl PinInit<T>`.
+ #[pin]
+ lock: Mutex<()>,
+ /// Cached creation parameters (do not change after init).
+ params: GpuBuddyParams,
+}
+
+impl GpuBuddyInner {
+ /// Create a pin-initializer for the buddy allocator.
+ fn new(params: GpuBuddyParams) -> impl PinInit<Self, Error> {
+ let size = params.physical_memory_size;
+ let chunk_size = params.chunk_size;
+
+ // INVARIANT: `gpu_buddy_init` returns 0 on success, at which point the
+ // `gpu_buddy` structure is initialized and ready for use with all
+ // `gpu_buddy_*` APIs. `try_pin_init!` only completes if all fields succeed,
+ // so the invariant holds when construction finishes.
+ try_pin_init!(Self {
+ inner <- Opaque::try_ffi_init(|ptr| {
+ // SAFETY: `ptr` points to valid uninitialized memory from the pin-init
+ // infrastructure. `gpu_buddy_init` will initialize the structure.
+ to_result(unsafe { bindings::gpu_buddy_init(ptr, size, chunk_size as u64) })
+ }),
+ lock <- new_mutex!(()),
+ params,
+ })
+ }
+
+ /// Lock the mutex and return a guard for accessing the allocator.
+ fn lock(&self) -> GpuBuddyGuard<'_> {
+ GpuBuddyGuard {
+ inner: self,
+ _guard: self.lock.lock(),
+ }
+ }
+}
+
+#[pinned_drop]
+impl PinnedDrop for GpuBuddyInner {
+ fn drop(self: Pin<&mut Self>) {
+ let guard = self.lock();
+
+ // SAFETY: Per the type invariant, `inner` contains an initialized
+ // allocator. `guard` provides exclusive access.
+ unsafe {
+ bindings::gpu_buddy_fini(guard.as_raw());
+ }
+ }
+}
+
+// SAFETY: All fields of `GpuBuddyInner` are `Send`, so it can be sent between threads.
+unsafe impl Send for GpuBuddyInner {}
+
+// SAFETY: `GpuBuddyInner` is `Sync` because `GpuBuddyInner::lock`
+// serializes all access to the C allocator, preventing data races.
+unsafe impl Sync for GpuBuddyInner {}
+
+// Guard that proves the lock is held, enabling access to the allocator.
+// The `_guard` holds the lock for the duration of this guard's lifetime.
+struct GpuBuddyGuard<'a> {
+ inner: &'a GpuBuddyInner,
+ _guard: MutexGuard<'a, ()>,
+}
+
+impl GpuBuddyGuard<'_> {
+ /// Get a raw pointer to the underlying C `gpu_buddy` structure.
+ fn as_raw(&self) -> *mut bindings::gpu_buddy {
+ self.inner.inner.get()
+ }
+}
+
+/// GPU buddy allocator instance.
+///
+/// This structure wraps the C `gpu_buddy` allocator using reference counting.
+/// The allocator is automatically cleaned up when all references are dropped.
+///
+/// Refer to the module-level documentation for usage examples.
+pub struct GpuBuddy(Arc<GpuBuddyInner>);
+
+impl GpuBuddy {
+ /// Create a new buddy allocator.
+ ///
+ /// Creates a buddy allocator that manages a contiguous address space of the given
+ /// size, with the specified minimum allocation unit (`chunk_size` must be at least 4KB).
+ pub fn new(params: GpuBuddyParams) -> Result<Self> {
+ Ok(Self(Arc::pin_init(GpuBuddyInner::new(params), GFP_KERNEL)?))
+ }
+
+ /// Get the base offset for allocations.
+ pub fn base_offset(&self) -> u64 {
+ self.0.params.base_offset
+ }
+
+ /// Get the chunk size (minimum allocation unit).
+ pub fn chunk_size(&self) -> usize {
+ self.0.params.chunk_size
+ }
+
+ /// Get the total managed size.
+ pub fn size(&self) -> u64 {
+ self.0.params.physical_memory_size
+ }
+
+ /// Get the available (free) memory in bytes.
+ pub fn free_memory(&self) -> u64 {
+ let guard = self.0.lock();
+
+ // SAFETY: Per the type invariant, `inner` contains an initialized allocator.
+ // `guard` provides exclusive access.
+ unsafe { (*guard.as_raw()).avail }
+ }
+
+ /// Allocate blocks from the buddy allocator.
+ ///
+ /// Returns a pin-initializer for [`AllocatedBlocks`].
+ ///
+ /// Takes `&self` instead of `&mut self` because the internal [`Mutex`] already
+ /// provides synchronization; no external `&mut` exclusivity is needed.
+ pub fn alloc_blocks(
+ &self,
+ mode: GpuBuddyAllocMode,
+ size: usize,
+ min_block_size: Alignment,
+ flags: GpuBuddyAllocFlags,
+ ) -> impl PinInit<AllocatedBlocks, Error> {
+ let buddy_arc = Arc::clone(&self.0);
+ let (start, end) = mode.range();
+ let mode_flags = mode.into_flags();
+ let modifier_flags = u32::from(flags) as usize;
+
+ // Create pin-initializer that initializes list and allocates blocks.
+ try_pin_init!(AllocatedBlocks {
+ buddy: buddy_arc,
+ list <- CListHead::new(),
+ _: {
+ // Lock while allocating to serialize with concurrent frees.
+ let guard = buddy.lock();
+
+ // SAFETY: Per the type invariant, `inner` contains an initialized
+ // allocator. `guard` provides exclusive access.
+ to_result(unsafe {
+ bindings::gpu_buddy_alloc_blocks(
+ guard.as_raw(),
+ start,
+ end,
+ size as u64,
+ min_block_size.as_usize() as u64,
+ list.as_raw(),
+ mode_flags | modifier_flags,
+ )
+ })?
+ }
+ })
+ }
+}
+
+/// Allocated blocks from the buddy allocator with automatic cleanup.
+///
+/// This structure owns a list of allocated blocks and ensures they are
+/// automatically freed when dropped. Use `iter()` to iterate over all
+/// allocated blocks.
+///
+/// # Invariants
+///
+/// - `list` is an initialized, valid list head containing allocated blocks.
+#[pin_data(PinnedDrop)]
+pub struct AllocatedBlocks {
+ #[pin]
+ list: CListHead,
+ buddy: Arc<GpuBuddyInner>,
+}
+
+impl AllocatedBlocks {
+ /// Check if the block list is empty.
+ pub fn is_empty(&self) -> bool {
+ // An empty list head points to itself.
+ !self.list.is_linked()
+ }
+
+ /// Iterate over allocated blocks.
+ ///
+ /// Returns an iterator yielding [`AllocatedBlock`] values. Each [`AllocatedBlock`]
+ /// borrows `self` and is only valid for the duration of that borrow.
+ pub fn iter(&self) -> impl Iterator<Item = AllocatedBlock<'_>> + '_ {
+ // SAFETY:
+ // - Per the type invariant, `list` is an initialized sentinel `list_head`
+ // and is not concurrently modified (we hold a `&self` borrow).
+ // - The list contains `gpu_buddy_block` items linked via
+ // `__bindgen_anon_1.link`.
+ // - `Block` is `#[repr(transparent)]` over `gpu_buddy_block`.
+ let clist = clist_create!(unsafe {
+ self.list.as_raw(),
+ Block,
+ bindings::gpu_buddy_block,
+ __bindgen_anon_1.link
+ });
+
+ clist
+ .iter()
+ .map(|this| AllocatedBlock { this, blocks: self })
+ }
+}
+
+#[pinned_drop]
+impl PinnedDrop for AllocatedBlocks {
+ fn drop(self: Pin<&mut Self>) {
+ let guard = self.buddy.lock();
+
+ // SAFETY:
+ // - `list` is valid per the type's invariants.
+ // - `guard` provides exclusive access to the allocator.
+ unsafe {
+ bindings::gpu_buddy_free_list(guard.as_raw(), self.list.as_raw(), 0);
+ }
+ }
+}
+
+/// A GPU buddy block.
+///
+/// Transparent wrapper over C `gpu_buddy_block` structure. This type is returned
+/// as references during iteration over [`AllocatedBlocks`].
+///
+/// # Invariants
+///
+/// The inner [`Opaque`] contains a valid, allocated `gpu_buddy_block`.
+#[repr(transparent)]
+struct Block(Opaque<bindings::gpu_buddy_block>);
+
+impl Block {
+ /// Get a raw pointer to the underlying C block.
+ fn as_raw(&self) -> *mut bindings::gpu_buddy_block {
+ self.0.get()
+ }
+
+ /// Get the block's raw offset in the buddy address space (without base offset).
+ fn offset(&self) -> u64 {
+ // SAFETY: `self.as_raw()` is valid per the type's invariants.
+ unsafe { bindings::gpu_buddy_block_offset(self.as_raw()) }
+ }
+
+ /// Get the block order.
+ fn order(&self) -> u32 {
+ // SAFETY: `self.as_raw()` is valid per the type's invariants.
+ unsafe { bindings::gpu_buddy_block_order(self.as_raw()) }
+ }
+}
+
+// SAFETY: `Block` is a wrapper around `gpu_buddy_block` which can be
+// sent across threads safely.
+unsafe impl Send for Block {}
+
+// SAFETY: `Block` is only accessed through shared references after
+// allocation, and thus safe to access concurrently across threads.
+unsafe impl Sync for Block {}
+
+/// A buddy block paired with its owning [`AllocatedBlocks`] context.
+///
+/// Unlike a raw block, which only knows its offset within the buddy address
+/// space, an [`AllocatedBlock`] also has access to the allocator's `base_offset`
+/// and `chunk_size`, enabling it to compute absolute offsets and byte sizes.
+///
+/// Returned by [`AllocatedBlocks::iter()`].
+pub struct AllocatedBlock<'a> {
+ this: &'a Block,
+ blocks: &'a AllocatedBlocks,
+}
+
+impl AllocatedBlock<'_> {
+ /// Get the block's offset in the address space.
+ ///
+ /// Returns the absolute offset including the allocator's base offset.
+ /// This is the actual address to use for accessing the allocated memory.
+ pub fn offset(&self) -> u64 {
+ self.blocks.buddy.params.base_offset + self.this.offset()
+ }
+
+ /// Get the block order (size = chunk_size << order).
+ pub fn order(&self) -> u32 {
+ self.this.order()
+ }
+
+ /// Get the block's size in bytes.
+ pub fn size(&self) -> usize {
+ self.blocks.buddy.params.chunk_size << self.this.order()
+ }
+}
diff --git a/rust/kernel/gpu/mod.rs b/rust/kernel/gpu/mod.rs
new file mode 100644
index 000000000000..8f25e6367edc
--- /dev/null
+++ b/rust/kernel/gpu/mod.rs
@@ -0,0 +1,5 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! GPU subsystem abstractions.
+
+pub mod buddy;
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index bb741f1e0dfd..63e3f656eb6c 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -98,6 +98,8 @@
pub mod firmware;
pub mod fmt;
pub mod fs;
+#[cfg(CONFIG_GPU_BUDDY = "y")]
+pub mod gpu;
#[cfg(CONFIG_I2C = "y")]
pub mod i2c;
pub mod id_pool;
--
2.34.1
* Claude review: rust: gpu: Add GPU buddy allocator bindings
2026-03-20 4:57 ` [PATCH v14 1/2] rust: gpu: Add " Joel Fernandes
@ 2026-03-21 17:56 ` Claude Code Review Bot
0 siblings, 0 replies; 10+ messages in thread
From: Claude Code Review Bot @ 2026-03-21 17:56 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
**Positive aspects:**
- Clean separation of allocation mode (enum) vs modifier flags (bitflags) — this is good Rust API design.
- Proper pin-init usage throughout, with correct `#[pin_data(PinnedDrop)]` annotations.
- The `Arc<GpuBuddyInner>` sharing between `GpuBuddy` and `AllocatedBlocks` correctly ensures the allocator outlives its allocations.
- Doctests are thorough, covering simple, top-down, range, fragmented, and contiguous-failure cases.
- SAFETY comments are present and generally adequate.
**Observations:**
1. **`base_offset` is never passed to C:** The `GpuBuddyParams::base_offset` field is stored in the Rust wrapper and used to adjust offsets returned by `AllocatedBlock::offset()`, but it is never passed to `gpu_buddy_init()`. This means the C allocator always thinks it starts at 0, and the Rust side adds the offset. This works for offset reporting, but if the caller ever passes a range to `GpuBuddyAllocMode::Range`, they would need to know that the range is relative to the buddy's internal 0-based space, not the base-offset-adjusted space. The doctest examples all use `base_offset: 0`, so this isn't tested. Consider documenting this contract explicitly, or adjusting `Range` start/end by subtracting `base_offset` before passing to C.
2. **Range validation is incomplete:** The `alloc_blocks` method checks `range.is_empty()` for `Range` mode but doesn't validate that the range falls within the allocator's `[0, size)` bounds. The C side may handle this, but a Rust-side bounds check would give a clearer error, for example:
```rust
if let GpuBuddyAllocMode::Range(range) = &mode {
    if range.is_empty() || range.end > self.size() {
        Err::<(), Error>(EINVAL)?;
    }
}
```
3. **`try_pin_init!` field reference across init:** In `alloc_blocks`, the `try_pin_init!` block references `buddy` and `list` inside the unnamed `_:` initializer field:
```rust
try_pin_init!(AllocatedBlocks {
buddy: buddy_arc,
list <- CListHead::new(),
_: {
...
let guard = buddy.lock();
...
list.as_raw(),
...
}
})
```
This relies on the `try_pin_init!` macro making previously-initialized fields available by name in subsequent field initializers. This is a known feature of the kernel's pin-init macros, but it's subtle — a brief comment noting this would help future readers.
4. **`gpu_buddy_block_size` helper is unused:** The C helper `rust_helper_gpu_buddy_block_size` is defined in `rust/helpers/gpu.c` but never called from Rust. The Rust `AllocatedBlock::size()` computes the size itself:
```rust
pub fn size(&self) -> u64 {
(self.blocks.buddy.params.chunk_size.as_usize() as u64) << self.this.order()
}
```
This duplicates the logic in `gpu_buddy_block_size()`. Consider either using the C helper (passing the `gpu_buddy*` pointer) for consistency, or removing the unused helper to avoid dead code. Using the C helper would also be more robust if the size computation ever changes in the C side.
5. **`Block::offset` takes `const` pointer, `Block::order` does not:** Looking at the C header:
- `gpu_buddy_block_offset(const struct gpu_buddy_block *block)` — const
- `gpu_buddy_block_order(struct gpu_buddy_block *block)` — non-const
The Rust helpers match this, but both are logically read-only. The non-const on `order` in the C side looks like an oversight in the C header (since it only reads `block->header`), but that's not something for this patch to fix.
6. **MAINTAINERS dual-listing of `rust/helpers/gpu.c`:** The patch adds `F: rust/helpers/gpu.c`, `F: rust/kernel/gpu.rs`, and `F: rust/kernel/gpu/` to both the "DRM DRIVERS AND GPU SUBSYSTEM - RUST" section and the "GPU BUDDY ALLOCATOR" section. This dual listing is intentional (both groups should review changes), but worth confirming this is the desired arrangement.
7. **`#[cfg(CONFIG_GPU_BUDDY = "y")]` vs `IS_ENABLED`:** The Rust module uses `CONFIG_GPU_BUDDY = "y"` (built-in only), while the C bindings_helper.h uses `IS_ENABLED(CONFIG_GPU_BUDDY)` (built-in or module). If `CONFIG_GPU_BUDDY` is ever `=m`, the C constants would be compiled but the Rust module wouldn't. This is probably fine for now (GPU_BUDDY is likely always `=y` when selected by a built-in driver like nova-core), but the inconsistency is worth noting.
8. **Minor nit — comment style:** The `GpuBuddyAllocMode` method comments use `//` instead of `///`:
```rust
// Returns the C flags corresponding to the allocation mode.
fn as_flags(&self) -> usize {
```
Since these are private methods, `//` is fine, but `///` would generate internal docs that could be useful during development.
---
Generated by Claude Code Patch Reviewer
Thread overview: 10+ messages
2026-03-08 18:04 [PATCH v12 0/1] Rust GPU buddy allocator bindings Joel Fernandes
2026-03-08 18:04 ` [PATCH v12 1/1] rust: gpu: Add " Joel Fernandes
2026-03-08 21:42 ` Claude review: " Claude Code Review Bot
2026-03-08 21:42 ` Claude review: Rust " Claude Code Review Bot
2026-03-09 13:53 ` [PATCH v12.1 0/1] " Joel Fernandes
2026-03-09 13:53 ` [PATCH v12.1 1/1] rust: gpu: Add " Joel Fernandes
-- strict thread matches above, loose matches on Subject: below --
2026-03-20 4:57 [PATCH v14 0/2] Rust " Joel Fernandes
2026-03-20 4:57 ` [PATCH v14 1/2] rust: gpu: Add " Joel Fernandes
2026-03-21 17:56 ` Claude review: " Claude Code Review Bot
2026-02-24 22:40 [PATCH v11 0/4] Rust " Joel Fernandes
2026-02-24 22:40 ` [PATCH v11 4/4] rust: gpu: Add " Joel Fernandes
2026-02-27 4:31 ` Claude review: " Claude Code Review Bot
2026-02-10 23:32 [PATCH -next v9 0/3] rust: Add CList and " Joel Fernandes
2026-02-10 23:32 ` [PATCH -next v9 2/3] rust: gpu: Add " Joel Fernandes
2026-02-12 20:27 ` Claude review: " Claude Code Review Bot
2026-02-09 21:42 [PATCH -next v8 0/3] rust: Add CList and " Joel Fernandes
2026-02-09 21:42 ` [PATCH -next v8 2/3] rust: gpu: Add " Joel Fernandes
2026-02-11 6:52 ` Claude review: " Claude Code Review Bot