* [PATCH v2 01/11] rust: io: generalize `MmioRaw` to pointer to arbitrary type
2026-04-21 14:56 [PATCH v2 00/11] rust: I/O type generalization and projection Gary Guo
@ 2026-04-21 14:56 ` Gary Guo
2026-04-22 22:25 ` Claude review: " Claude Code Review Bot
2026-04-21 14:56 ` [PATCH v2 02/11] rust: io: generalize `Mmio` " Gary Guo
` (10 subsequent siblings)
11 siblings, 1 reply; 24+ messages in thread
From: Gary Guo @ 2026-04-21 14:56 UTC (permalink / raw)
To: Greg Kroah-Hartman, Rafael J. Wysocki, Danilo Krummrich,
Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Daniel Almeida, Bjorn Helgaas, Krzysztof Wilczyński,
Abdiel Janulgue, Robin Murphy, Alexandre Courbot, David Airlie,
Simona Vetter
Cc: driver-core, rust-for-linux, linux-kernel, linux-pci, nouveau,
dri-devel
Conceptually, `MmioRaw` is just `__iomem *`, so it should work for any
type. The existing use case, where it represents a region of compile-time
known minimum size and run-time known actual size, is moved to a custom
dynamically sized type `Region<SIZE>` instead. The `maxsize` method is
also renamed to `size` to reflect that it returns the actual size of the
region, not an upper bound.
Signed-off-by: Gary Guo <gary@garyguo.net>
---
rust/kernel/devres.rs | 7 +++--
rust/kernel/io.rs | 84 ++++++++++++++++++++++++++++++++++++++++-----------
rust/kernel/io/mem.rs | 4 +--
rust/kernel/pci/io.rs | 4 +--
4 files changed, 74 insertions(+), 25 deletions(-)
diff --git a/rust/kernel/devres.rs b/rust/kernel/devres.rs
index 9e5f93aed20c..65a4082122af 100644
--- a/rust/kernel/devres.rs
+++ b/rust/kernel/devres.rs
@@ -71,14 +71,15 @@ struct Inner<T> {
/// IoKnownSize,
/// Mmio,
/// MmioRaw,
-/// PhysAddr, //
+/// PhysAddr,
+/// Region, //
/// },
/// prelude::*,
/// };
/// use core::ops::Deref;
///
/// // See also [`pci::Bar`] for a real example.
-/// struct IoMem<const SIZE: usize>(MmioRaw<SIZE>);
+/// struct IoMem<const SIZE: usize>(MmioRaw<Region<SIZE>>);
///
/// impl<const SIZE: usize> IoMem<SIZE> {
/// /// # Safety
@@ -93,7 +94,7 @@ struct Inner<T> {
/// return Err(ENOMEM);
/// }
///
-/// Ok(IoMem(MmioRaw::new(addr as usize, SIZE)?))
+/// Ok(IoMem(MmioRaw::new_region(addr as usize, SIZE)?))
/// }
/// }
///
diff --git a/rust/kernel/io.rs b/rust/kernel/io.rs
index fcc7678fd9e3..d7f2145fa9b9 100644
--- a/rust/kernel/io.rs
+++ b/rust/kernel/io.rs
@@ -6,7 +6,8 @@
use crate::{
bindings,
- prelude::*, //
+ prelude::*,
+ ptr::KnownSize, //
};
pub mod mem;
@@ -31,39 +32,85 @@
/// `CONFIG_PHYS_ADDR_T_64BIT`, and it can be a u64 even on 32-bit architectures.
pub type ResourceSize = bindings::resource_size_t;
+/// Untyped I/O region.
+///
+/// This type can be used when an I/O region without known type information has a compile-time known
+/// minimum size (and a runtime known actual size).
+///
+/// The `SIZE` generic parameter indicates the minimum size of the region.
+#[repr(transparent)]
+pub struct Region<const SIZE: usize = 0> {
+ inner: [u8],
+}
+
+impl<const SIZE: usize> KnownSize for Region<SIZE> {
+ #[inline(always)]
+ fn size(p: *const Self) -> usize {
+ (p as *const [u8]).len()
+ }
+}
+
/// Raw representation of an MMIO region.
///
+/// `MmioRaw<T>` is equivalent to `T __iomem *` in C.
+///
/// By itself, the existence of an instance of this structure does not provide any guarantees that
/// the represented MMIO region does exist or is properly mapped.
///
/// Instead, the bus specific MMIO implementation must convert this raw representation into an
/// `Mmio` instance providing the actual memory accessors. Only by the conversion into an `Mmio`
/// structure any guarantees are given.
-pub struct MmioRaw<const SIZE: usize = 0> {
- addr: usize,
- maxsize: usize,
+pub struct MmioRaw<T: ?Sized> {
+ /// Pointer is in I/O address space.
+ ///
+ /// The provenance does not matter, only the address and metadata do.
+ addr: *mut T,
}
-impl<const SIZE: usize> MmioRaw<SIZE> {
- /// Returns a new `MmioRaw` instance on success, an error otherwise.
- pub fn new(addr: usize, maxsize: usize) -> Result<Self> {
- if maxsize < SIZE {
+// SAFETY: `MmioRaw` is just an address, so is thread-safe.
+unsafe impl<T: ?Sized> Send for MmioRaw<T> {}
+// SAFETY: `MmioRaw` is just an address, so is thread-safe.
+unsafe impl<T: ?Sized> Sync for MmioRaw<T> {}
+
+impl<T> MmioRaw<T> {
+ /// Creates a `MmioRaw` from an address.
+ #[inline]
+ pub fn new(addr: usize) -> Self {
+ Self {
+ addr: core::ptr::without_provenance_mut(addr),
+ }
+ }
+}
+
+impl<const SIZE: usize> MmioRaw<Region<SIZE>> {
+ /// Creates a `MmioRaw` representing an I/O region with the given size.
+ ///
+ /// The size is checked against the minimum size specified via const generics.
+ #[inline]
+ pub fn new_region(addr: usize, size: usize) -> Result<Self> {
+ if size < SIZE {
return Err(EINVAL);
}
- Ok(Self { addr, maxsize })
+ let addr = core::ptr::slice_from_raw_parts_mut::<u8>(
+ core::ptr::without_provenance_mut(addr),
+ size,
+ ) as *mut Region<SIZE>;
+ Ok(Self { addr })
}
+}
+impl<T: ?Sized + KnownSize> MmioRaw<T> {
/// Returns the base address of the MMIO region.
#[inline]
pub fn addr(&self) -> usize {
- self.addr
+ self.addr.addr()
}
- /// Returns the maximum size of the MMIO region.
+ /// Returns the size of the MMIO region.
#[inline]
- pub fn maxsize(&self) -> usize {
- self.maxsize
+ pub fn size(&self) -> usize {
+ KnownSize::size(self.addr)
}
}
@@ -89,12 +136,13 @@ pub fn maxsize(&self) -> usize {
/// Mmio,
/// MmioRaw,
/// PhysAddr,
+/// Region,
/// },
/// };
/// use core::ops::Deref;
///
/// // See also `pci::Bar` for a real example.
-/// struct IoMem<const SIZE: usize>(MmioRaw<SIZE>);
+/// struct IoMem<const SIZE: usize>(MmioRaw<Region<SIZE>>);
///
/// impl<const SIZE: usize> IoMem<SIZE> {
/// /// # Safety
@@ -109,7 +157,7 @@ pub fn maxsize(&self) -> usize {
/// return Err(ENOMEM);
/// }
///
-/// Ok(IoMem(MmioRaw::new(addr as usize, SIZE)?))
+/// Ok(IoMem(MmioRaw::new_region(addr as usize, SIZE)?))
/// }
/// }
///
@@ -139,7 +187,7 @@ pub fn maxsize(&self) -> usize {
/// # }
/// ```
#[repr(transparent)]
-pub struct Mmio<const SIZE: usize = 0>(MmioRaw<SIZE>);
+pub struct Mmio<const SIZE: usize = 0>(MmioRaw<Region<SIZE>>);
/// Checks whether an access of type `U` at the given `offset`
/// is valid within this region.
@@ -767,7 +815,7 @@ fn addr(&self) -> usize {
/// Returns the maximum size of this mapping.
#[inline]
fn maxsize(&self) -> usize {
- self.0.maxsize()
+ self.0.size()
}
}
@@ -782,7 +830,7 @@ impl<const SIZE: usize> Mmio<SIZE> {
///
/// Callers must ensure that `addr` is the start of a valid I/O mapped memory region of size
/// `maxsize`.
- pub unsafe fn from_raw(raw: &MmioRaw<SIZE>) -> &Self {
+ pub unsafe fn from_raw(raw: &MmioRaw<Region<SIZE>>) -> &Self {
// SAFETY: `Mmio` is a transparent wrapper around `MmioRaw`.
unsafe { &*core::ptr::from_ref(raw).cast() }
}
diff --git a/rust/kernel/io/mem.rs b/rust/kernel/io/mem.rs
index 7dc78d547f7a..9117d417f99c 100644
--- a/rust/kernel/io/mem.rs
+++ b/rust/kernel/io/mem.rs
@@ -231,7 +231,7 @@ fn deref(&self) -> &Self::Target {
/// [`IoMem`] always holds an [`MmioRaw`] instance that holds a valid pointer to the
/// start of the I/O memory mapped region.
pub struct IoMem<const SIZE: usize = 0> {
- io: MmioRaw<SIZE>,
+ io: MmioRaw<super::Region<SIZE>>,
}
impl<const SIZE: usize> IoMem<SIZE> {
@@ -266,7 +266,7 @@ fn ioremap(resource: &Resource) -> Result<Self> {
return Err(ENOMEM);
}
- let io = MmioRaw::new(addr as usize, size)?;
+ let io = MmioRaw::new_region(addr as usize, size)?;
let io = IoMem { io };
Ok(io)
diff --git a/rust/kernel/pci/io.rs b/rust/kernel/pci/io.rs
index ae78676c927f..0335b5068f69 100644
--- a/rust/kernel/pci/io.rs
+++ b/rust/kernel/pci/io.rs
@@ -148,7 +148,7 @@ impl<'a, S: ConfigSpaceKind> IoKnownSize for ConfigSpace<'a, S> {
/// memory mapped PCI BAR and its size.
pub struct Bar<const SIZE: usize = 0> {
pdev: ARef<Device>,
- io: MmioRaw<SIZE>,
+ io: MmioRaw<crate::io::Region<SIZE>>,
num: i32,
}
@@ -184,7 +184,7 @@ pub(super) fn new(pdev: &Device, num: u32, name: &CStr) -> Result<Self> {
return Err(ENOMEM);
}
- let io = match MmioRaw::new(ioptr, len as usize) {
+ let io = match MmioRaw::new_region(ioptr, len as usize) {
Ok(io) => io,
Err(err) => {
// SAFETY:
--
2.51.2
^ permalink raw reply related [flat|nested] 24+ messages in thread
* Claude review: rust: io: generalize `MmioRaw` to pointer to arbitrary type
2026-04-21 14:56 ` [PATCH v2 01/11] rust: io: generalize `MmioRaw` to pointer to arbitrary type Gary Guo
@ 2026-04-22 22:25 ` Claude Code Review Bot
0 siblings, 0 replies; 24+ messages in thread
From: Claude Code Review Bot @ 2026-04-22 22:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Introduces the `Region<SIZE>` DST and makes `MmioRaw` generic over `T: ?Sized`. The old `MmioRaw<SIZE>` becomes `MmioRaw<Region<SIZE>>`.
The `Send`/`Sync` impls are appropriate since `MmioRaw` only stores an address:
```rust
// SAFETY: `MmioRaw` is just an address, so is thread-safe.
unsafe impl<T: ?Sized> Send for MmioRaw<T> {}
unsafe impl<T: ?Sized> Sync for MmioRaw<T> {}
```
The use of `core::ptr::without_provenance_mut` is correct for I/O addresses that are not in normal memory. The `new_region` constructor properly checks `size < SIZE` before constructing the fat pointer via `slice_from_raw_parts_mut`.
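The metadata trick can be checked outside the kernel crate with a standalone sketch (plain `as` casts stand in for `without_provenance_mut` for portability; names and shapes are illustrative, not the kernel's exact API):

```rust
// Standalone sketch of the `Region` idea: a DST whose fat-pointer metadata
// records the runtime size of an I/O region, checked against a
// compile-time minimum.
#[repr(transparent)]
struct Region<const SIZE: usize> {
    _inner: [u8],
}

fn new_region<const SIZE: usize>(addr: usize, size: usize) -> Option<*mut Region<SIZE>> {
    if size < SIZE {
        return None; // runtime size must cover the compile-time minimum
    }
    // Build a fat pointer: address plus length metadata. The pointer is
    // never dereferenced here, so its provenance does not matter.
    Some(core::ptr::slice_from_raw_parts_mut(addr as *mut u8, size) as *mut Region<SIZE>)
}

fn region_size<const SIZE: usize>(p: *const Region<SIZE>) -> usize {
    (p as *const [u8]).len() // read the length back out of the metadata
}

fn main() {
    let r = new_region::<0x100>(0xd000_0000, 0x1000).expect("size >= minimum");
    assert_eq!(r as *mut u8 as usize, 0xd000_0000); // address survives
    assert_eq!(region_size(r), 0x1000); // metadata survives
    assert!(new_region::<0x100>(0xd000_0000, 0x10).is_none()); // too small
    println!("ok");
}
```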
Clean patch, no issues.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v2 02/11] rust: io: generalize `Mmio` to arbitrary type
2026-04-21 14:56 [PATCH v2 00/11] rust: I/O type generalization and projection Gary Guo
2026-04-21 14:56 ` [PATCH v2 01/11] rust: io: generalize `MmioRaw` to pointer to arbitrary type Gary Guo
@ 2026-04-21 14:56 ` Gary Guo
2026-04-22 22:25 ` Claude review: " Claude Code Review Bot
2026-04-21 14:56 ` [PATCH v2 03/11] rust: io: use pointer types instead of address Gary Guo
` (9 subsequent siblings)
11 siblings, 1 reply; 24+ messages in thread
From: Gary Guo @ 2026-04-21 14:56 UTC (permalink / raw)
To: Greg Kroah-Hartman, Rafael J. Wysocki, Danilo Krummrich,
Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Daniel Almeida, Bjorn Helgaas, Krzysztof Wilczyński,
Abdiel Janulgue, Robin Murphy, Alexandre Courbot, David Airlie,
Simona Vetter
Cc: driver-core, rust-for-linux, linux-kernel, linux-pci, nouveau,
dri-devel
Currently, `io::Mmio` always represents an untyped region of a
compile-time known minimum size, which is roughly equivalent to
`void __iomem *` (but with bounds checks). However, it is useful to also
be able to represent I/O memory of a specific type, e.g. `u32 __iomem *`
or `struct foo __iomem *`.
Thus, make `Mmio` generic over an arbitrary `T`, where `T` is either a
sized type or a DST that implements `KnownSize`. As with the `MmioRaw`
change, the existing behaviour is preserved in the form of
`Mmio<Region<SIZE>>`. This change brings MMIO closer to our DMA coherent
allocation types, which are already typed.
To be able to implement `IoKnownSize`, add a `MIN_SIZE` constant to the
`KnownSize` trait representing the compile-time known minimum size of a
specific type.
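A property this change leans on (and that `Mmio::from_raw` and `Mmio::relaxed` already use) is that a `#[repr(transparent)]` wrapper can be reborrowed from a reference to its inner type; a standalone sketch with illustrative names:

```rust
// Sketch of the `#[repr(transparent)]` reference-cast pattern: `Wrapper`
// has exactly the layout of `Raw`, so a `&Raw` can be reinterpreted as a
// `&Wrapper` without copying. Names are illustrative.
#[repr(transparent)]
struct Raw(usize);

#[repr(transparent)]
struct Wrapper(Raw);

impl Wrapper {
    fn from_raw(raw: &Raw) -> &Wrapper {
        // SAFETY: `Wrapper` is `#[repr(transparent)]` over `Raw`, so the
        // two types have identical layout and validity.
        unsafe { &*core::ptr::from_ref(raw).cast::<Wrapper>() }
    }
}

fn main() {
    let raw = Raw(0xdead_beef);
    let w = Wrapper::from_raw(&raw);
    assert_eq!((w.0).0, 0xdead_beef); // same object, seen through the wrapper
    println!("ok");
}
```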
Acked-by: Miguel Ojeda <ojeda@kernel.org>
Signed-off-by: Gary Guo <gary@garyguo.net>
---
rust/kernel/devres.rs | 2 +-
rust/kernel/io.rs | 63 +++++++++++++++++++++++++++-------------------
rust/kernel/io/mem.rs | 4 +--
rust/kernel/io/poll.rs | 6 +++--
rust/kernel/io/register.rs | 19 ++++++++------
rust/kernel/pci/io.rs | 2 +-
rust/kernel/ptr.rs | 7 ++++++
7 files changed, 64 insertions(+), 39 deletions(-)
diff --git a/rust/kernel/devres.rs b/rust/kernel/devres.rs
index 65a4082122af..3e22c63efb98 100644
--- a/rust/kernel/devres.rs
+++ b/rust/kernel/devres.rs
@@ -106,7 +106,7 @@ struct Inner<T> {
/// }
///
/// impl<const SIZE: usize> Deref for IoMem<SIZE> {
-/// type Target = Mmio<SIZE>;
+/// type Target = Mmio<Region<SIZE>>;
///
/// fn deref(&self) -> &Self::Target {
/// // SAFETY: The memory range stored in `self` has been properly mapped in `Self::new`.
diff --git a/rust/kernel/io.rs b/rust/kernel/io.rs
index d7f2145fa9b9..0b9c97c0a1d7 100644
--- a/rust/kernel/io.rs
+++ b/rust/kernel/io.rs
@@ -44,6 +44,8 @@ pub struct Region<const SIZE: usize = 0> {
}
impl<const SIZE: usize> KnownSize for Region<SIZE> {
+ const MIN_SIZE: usize = SIZE;
+
#[inline(always)]
fn size(p: *const Self) -> usize {
(p as *const [u8]).len()
@@ -169,7 +171,7 @@ pub fn size(&self) -> usize {
/// }
///
/// impl<const SIZE: usize> Deref for IoMem<SIZE> {
-/// type Target = Mmio<SIZE>;
+/// type Target = Mmio<Region<SIZE>>;
///
/// fn deref(&self) -> &Self::Target {
/// // SAFETY: The memory range stored in `self` has been properly mapped in `Self::new`.
@@ -187,7 +189,7 @@ pub fn size(&self) -> usize {
/// # }
/// ```
#[repr(transparent)]
-pub struct Mmio<const SIZE: usize = 0>(MmioRaw<Region<SIZE>>);
+pub struct Mmio<T: ?Sized>(MmioRaw<T>);
/// Checks whether an access of type `U` at the given `offset`
/// is valid within this region.
@@ -462,9 +464,10 @@ fn write64(&self, value: u64, offset: usize)
/// use kernel::io::{
/// Io,
/// Mmio,
+ /// Region,
/// };
///
- /// fn do_reads(io: &Mmio) -> Result {
+ /// fn do_reads(io: &Mmio<Region>) -> Result {
/// // 32-bit read from address `0x10`.
/// let v: u32 = io.try_read(0x10)?;
///
@@ -496,9 +499,10 @@ fn try_read<T, L>(&self, location: L) -> Result<T>
/// use kernel::io::{
/// Io,
/// Mmio,
+ /// Region,
/// };
///
- /// fn do_writes(io: &Mmio) -> Result {
+ /// fn do_writes(io: &Mmio<Region>) -> Result {
/// // 32-bit write of value `1` at address `0x10`.
/// io.try_write(0x10, 1u32)?;
///
@@ -534,6 +538,7 @@ fn try_write<T, L>(&self, location: L, value: T) -> Result
/// register,
/// Io,
/// Mmio,
+ /// Region,
/// };
///
/// register! {
@@ -549,7 +554,7 @@ fn try_write<T, L>(&self, location: L, value: T) -> Result
/// }
/// }
///
- /// fn do_write_reg(io: &Mmio) -> Result {
+ /// fn do_write_reg(io: &Mmio<Region>) -> Result {
///
/// io.try_write_reg(VERSION::new(1, 0))
/// }
@@ -579,9 +584,10 @@ fn try_write_reg<T, L, V>(&self, value: V) -> Result
/// use kernel::io::{
/// Io,
/// Mmio,
+ /// Region,
/// };
///
- /// fn do_update(io: &Mmio<0x1000>) -> Result {
+ /// fn do_update(io: &Mmio<Region<0x1000>>) -> Result {
/// io.try_update(0x10, |v: u32| {
/// v + 1
/// })
@@ -616,9 +622,10 @@ fn try_update<T, L, F>(&self, location: L, f: F) -> Result
/// use kernel::io::{
/// Io,
/// Mmio,
+ /// Region,
/// };
///
- /// fn do_reads(io: &Mmio<0x1000>) {
+ /// fn do_reads(io: &Mmio<Region<0x1000>>) {
/// // 32-bit read from address `0x10`.
/// let v: u32 = io.read(0x10);
///
@@ -648,9 +655,10 @@ fn read<T, L>(&self, location: L) -> T
/// use kernel::io::{
/// Io,
/// Mmio,
+ /// Region,
/// };
///
- /// fn do_writes(io: &Mmio<0x1000>) {
+ /// fn do_writes(io: &Mmio<Region<0x1000>>) {
/// // 32-bit write of value `1` at address `0x10`.
/// io.write(0x10, 1u32);
///
@@ -682,6 +690,7 @@ fn write<T, L>(&self, location: L, value: T)
/// register,
/// Io,
/// Mmio,
+ /// Region,
/// };
///
/// register! {
@@ -697,7 +706,7 @@ fn write<T, L>(&self, location: L, value: T)
/// }
/// }
///
- /// fn do_write_reg(io: &Mmio<0x1000>) {
+ /// fn do_write_reg(io: &Mmio<Region<0x1000>>) {
/// io.write_reg(VERSION::new(1, 0));
/// }
/// ```
@@ -726,9 +735,10 @@ fn write_reg<T, L, V>(&self, value: V)
/// use kernel::io::{
/// Io,
/// Mmio,
+ /// Region,
/// };
///
- /// fn do_update(io: &Mmio<0x1000>) {
+ /// fn do_update(io: &Mmio<Region<0x1000>>) {
/// io.update(0x10, |v: u32| {
/// v + 1
/// })
@@ -778,7 +788,7 @@ fn io_addr_assert<U>(&self, offset: usize) -> usize {
macro_rules! impl_mmio_io_capable {
($mmio:ident, $(#[$attr:meta])* $ty:ty, $read_fn:ident, $write_fn:ident) => {
$(#[$attr])*
- impl<const SIZE: usize> IoCapable<$ty> for $mmio<SIZE> {
+ impl<T: ?Sized> IoCapable<$ty> for $mmio<T> {
unsafe fn io_read(&self, address: usize) -> $ty {
// SAFETY: By the trait invariant `address` is a valid address for MMIO operations.
unsafe { bindings::$read_fn(address as *const c_void) }
@@ -805,7 +815,7 @@ unsafe fn io_write(&self, value: $ty, address: usize) {
writeq
);
-impl<const SIZE: usize> Io for Mmio<SIZE> {
+impl<T: ?Sized + KnownSize> Io for Mmio<T> {
/// Returns the base address of this mapping.
#[inline]
fn addr(&self) -> usize {
@@ -819,18 +829,18 @@ fn maxsize(&self) -> usize {
}
}
-impl<const SIZE: usize> IoKnownSize for Mmio<SIZE> {
- const MIN_SIZE: usize = SIZE;
+impl<T: ?Sized + KnownSize> IoKnownSize for Mmio<T> {
+ const MIN_SIZE: usize = T::MIN_SIZE;
}
-impl<const SIZE: usize> Mmio<SIZE> {
+impl<T: ?Sized + KnownSize> Mmio<T> {
/// Converts an `MmioRaw` into an `Mmio` instance, providing the accessors to the MMIO mapping.
///
/// # Safety
///
/// Callers must ensure that `addr` is the start of a valid I/O mapped memory region of size
- /// `maxsize`.
- pub unsafe fn from_raw(raw: &MmioRaw<Region<SIZE>>) -> &Self {
+ /// `addr.size()`.
+ pub unsafe fn from_raw(raw: &MmioRaw<T>) -> &Self {
// SAFETY: `Mmio` is a transparent wrapper around `MmioRaw`.
unsafe { &*core::ptr::from_ref(raw).cast() }
}
@@ -843,9 +853,9 @@ pub unsafe fn from_raw(raw: &MmioRaw<Region<SIZE>>) -> &Self {
///
/// See [`Mmio::relaxed`] for a usage example.
#[repr(transparent)]
-pub struct RelaxedMmio<const SIZE: usize = 0>(Mmio<SIZE>);
+pub struct RelaxedMmio<T: ?Sized>(Mmio<T>);
-impl<const SIZE: usize> Io for RelaxedMmio<SIZE> {
+impl<T: ?Sized + KnownSize> Io for RelaxedMmio<T> {
#[inline]
fn addr(&self) -> usize {
self.0.addr()
@@ -857,11 +867,11 @@ fn maxsize(&self) -> usize {
}
}
-impl<const SIZE: usize> IoKnownSize for RelaxedMmio<SIZE> {
- const MIN_SIZE: usize = SIZE;
+impl<T: ?Sized + KnownSize> IoKnownSize for RelaxedMmio<T> {
+ const MIN_SIZE: usize = T::MIN_SIZE;
}
-impl<const SIZE: usize> Mmio<SIZE> {
+impl<T: ?Sized> Mmio<T> {
/// Returns a [`RelaxedMmio`] reference that performs relaxed I/O operations.
///
/// Relaxed accessors do not provide ordering guarantees with respect to DMA or memory accesses
@@ -873,18 +883,19 @@ impl<const SIZE: usize> Mmio<SIZE> {
/// use kernel::io::{
/// Io,
/// Mmio,
+ /// Region,
/// RelaxedMmio,
/// };
///
- /// fn do_io(io: &Mmio<0x100>) {
+ /// fn do_io(io: &Mmio<Region<0x100>>) {
/// // The access is performed using `readl_relaxed` instead of `readl`.
/// let v = io.relaxed().read32(0x10);
/// }
///
/// ```
- pub fn relaxed(&self) -> &RelaxedMmio<SIZE> {
- // SAFETY: `RelaxedMmio` is `#[repr(transparent)]` over `Mmio`, so `Mmio<SIZE>` and
- // `RelaxedMmio<SIZE>` have identical layout.
+ pub fn relaxed(&self) -> &RelaxedMmio<T> {
+ // SAFETY: `RelaxedMmio` is `#[repr(transparent)]` over `Mmio`, so `Mmio<T>` and
+ // `RelaxedMmio<T>` have identical layout.
unsafe { core::mem::transmute(self) }
}
}
diff --git a/rust/kernel/io/mem.rs b/rust/kernel/io/mem.rs
index 9117d417f99c..a6292f4ebfa4 100644
--- a/rust/kernel/io/mem.rs
+++ b/rust/kernel/io/mem.rs
@@ -214,7 +214,7 @@ pub fn new<'a>(io_request: IoRequest<'a>) -> impl PinInit<Devres<Self>, Error> +
}
impl<const SIZE: usize> Deref for ExclusiveIoMem<SIZE> {
- type Target = Mmio<SIZE>;
+ type Target = Mmio<super::Region<SIZE>>;
fn deref(&self) -> &Self::Target {
&self.iomem
@@ -289,7 +289,7 @@ fn drop(&mut self) {
}
impl<const SIZE: usize> Deref for IoMem<SIZE> {
- type Target = Mmio<SIZE>;
+ type Target = Mmio<super::Region<SIZE>>;
fn deref(&self) -> &Self::Target {
// SAFETY: Safe as by the invariant of `IoMem`.
diff --git a/rust/kernel/io/poll.rs b/rust/kernel/io/poll.rs
index 75d1b3e8596c..2dce2b24b5ff 100644
--- a/rust/kernel/io/poll.rs
+++ b/rust/kernel/io/poll.rs
@@ -48,13 +48,14 @@
/// use kernel::io::{
/// Io,
/// Mmio,
+/// Region,
/// poll::read_poll_timeout, //
/// };
/// use kernel::time::Delta;
///
/// const HW_READY: u16 = 0x01;
///
-/// fn wait_for_hardware<const SIZE: usize>(io: &Mmio<SIZE>) -> Result {
+/// fn wait_for_hardware<const SIZE: usize>(io: &Mmio<Region<SIZE>>) -> Result {
/// read_poll_timeout(
/// // The `op` closure reads the value of a specific status register.
/// || io.try_read16(0x1000),
@@ -135,13 +136,14 @@ pub fn read_poll_timeout<Op, Cond, T>(
/// use kernel::io::{
/// Io,
/// Mmio,
+/// Region,
/// poll::read_poll_timeout_atomic, //
/// };
/// use kernel::time::Delta;
///
/// const HW_READY: u16 = 0x01;
///
-/// fn wait_for_hardware<const SIZE: usize>(io: &Mmio<SIZE>) -> Result {
+/// fn wait_for_hardware<const SIZE: usize>(io: &Mmio<Region<SIZE>>) -> Result {
/// read_poll_timeout_atomic(
/// // The `op` closure reads the value of a specific status register.
/// || io.try_read16(0x1000),
diff --git a/rust/kernel/io/register.rs b/rust/kernel/io/register.rs
index abc49926abfe..1a407fc35edc 100644
--- a/rust/kernel/io/register.rs
+++ b/rust/kernel/io/register.rs
@@ -55,6 +55,7 @@
//! register,
//! Io,
//! IoLoc,
+//! Region,
//! },
//! num::Bounded,
//! };
@@ -66,7 +67,7 @@
//! # 3:0 minor_revision;
//! # }
//! # }
-//! # fn test(io: &Mmio<0x1000>) {
+//! # fn test(io: &Mmio<Region<0x1000>>) {
//! # fn obtain_vendor_id() -> u8 { 0xff }
//!
//! // Read from the register's defined offset (0x100).
@@ -441,6 +442,7 @@ fn into_io_op(self) -> (FixedRegisterLoc<T>, T) {
/// io::{
/// register,
/// Io,
+/// Region,
/// },
/// };
/// # use kernel::io::Mmio;
@@ -452,7 +454,7 @@ fn into_io_op(self) -> (FixedRegisterLoc<T>, T) {
/// }
/// }
///
-/// # fn test(io: &Mmio<0x1000>) {
+/// # fn test(io: &Mmio<Region<0x1000>>) {
/// let val = io.read(FIXED_REG);
///
/// // Write from an already-existing value.
@@ -554,6 +556,7 @@ fn into_io_op(self) -> (FixedRegisterLoc<T>, T) {
/// WithBase,
/// },
/// Io,
+/// Region,
/// },
/// };
/// # use kernel::io::Mmio;
@@ -581,7 +584,7 @@ fn into_io_op(self) -> (FixedRegisterLoc<T>, T) {
/// }
/// }
///
-/// # fn test(io: Mmio<0x1000>) {
+/// # fn test(io: Mmio<Region<0x1000>>) {
/// // Read the status of `Cpu0`.
/// let cpu0_started = io.read(CPU_CTL::of::<Cpu0>());
///
@@ -598,7 +601,7 @@ fn into_io_op(self) -> (FixedRegisterLoc<T>, T) {
/// }
/// }
///
-/// # fn test2(io: Mmio<0x1000>) {
+/// # fn test2(io: Mmio<Region<0x1000>>) {
/// // Start the aliased `CPU0`, leaving its other fields untouched.
/// io.update(CPU_CTL_ALIAS::of::<Cpu0>(), |r| r.with_alias_start(true));
/// # }
@@ -633,6 +636,7 @@ fn into_io_op(self) -> (FixedRegisterLoc<T>, T) {
/// register,
/// register::Array,
/// Io,
+/// Region,
/// },
/// };
/// # use kernel::io::Mmio;
@@ -648,7 +652,7 @@ fn into_io_op(self) -> (FixedRegisterLoc<T>, T) {
/// }
/// }
///
-/// # fn test(io: &Mmio<0x1000>)
+/// # fn test(io: &Mmio<Region<0x1000>>)
/// # -> Result<(), Error>{
/// // Read scratch register 0, i.e. I/O address `0x80`.
/// let scratch_0 = io.read(SCRATCH::at(0)).value();
@@ -719,6 +723,7 @@ fn into_io_op(self) -> (FixedRegisterLoc<T>, T) {
/// WithBase,
/// },
/// Io,
+/// Region,
/// },
/// };
/// # use kernel::io::Mmio;
@@ -749,7 +754,7 @@ fn into_io_op(self) -> (FixedRegisterLoc<T>, T) {
/// }
/// }
///
-/// # fn test(io: &Mmio<0x1000>) -> Result<(), Error> {
+/// # fn test(io: &Mmio<Region<0x1000>>) -> Result<(), Error> {
/// // Read scratch register 0 of CPU0.
/// let scratch = io.read(CPU_SCRATCH::of::<Cpu0>().at(0));
///
@@ -791,7 +796,7 @@ fn into_io_op(self) -> (FixedRegisterLoc<T>, T) {
/// }
/// }
///
-/// # fn test2(io: &Mmio<0x1000>) -> Result<(), Error> {
+/// # fn test2(io: &Mmio<Region<0x1000>>) -> Result<(), Error> {
/// let cpu0_status = io.read(CPU_FIRMWARE_STATUS::of::<Cpu0>()).status();
/// # Ok(())
/// # }
diff --git a/rust/kernel/pci/io.rs b/rust/kernel/pci/io.rs
index 0335b5068f69..e048370fb8c1 100644
--- a/rust/kernel/pci/io.rs
+++ b/rust/kernel/pci/io.rs
@@ -238,7 +238,7 @@ fn drop(&mut self) {
}
impl<const SIZE: usize> Deref for Bar<SIZE> {
- type Target = Mmio<SIZE>;
+ type Target = Mmio<crate::io::Region<SIZE>>;
fn deref(&self) -> &Self::Target {
// SAFETY: By the type invariant of `Self`, the MMIO range in `self.io` is properly mapped.
diff --git a/rust/kernel/ptr.rs b/rust/kernel/ptr.rs
index 3f3e529e9f58..566e68f567a2 100644
--- a/rust/kernel/ptr.rs
+++ b/rust/kernel/ptr.rs
@@ -235,11 +235,16 @@ fn align_up(self, alignment: Alignment) -> Option<Self> {
///
/// This is a generalization of [`size_of`] that works for dynamically sized types.
pub trait KnownSize {
+ /// Minimum size of this type known at compile-time.
+ const MIN_SIZE: usize;
+
/// Get the size of an object of this type in bytes, with the metadata of the given pointer.
fn size(p: *const Self) -> usize;
}
impl<T> KnownSize for T {
+ const MIN_SIZE: usize = core::mem::size_of::<T>();
+
#[inline(always)]
fn size(_: *const Self) -> usize {
size_of::<T>()
@@ -247,6 +252,8 @@ fn size(_: *const Self) -> usize {
}
impl<T> KnownSize for [T] {
+ const MIN_SIZE: usize = 0;
+
#[inline(always)]
fn size(p: *const Self) -> usize {
p.len() * size_of::<T>()
--
2.51.2
^ permalink raw reply related [flat|nested] 24+ messages in thread
* Claude review: rust: io: generalize `Mmio` to arbitrary type
2026-04-21 14:56 ` [PATCH v2 02/11] rust: io: generalize `Mmio` " Gary Guo
@ 2026-04-22 22:25 ` Claude Code Review Bot
0 siblings, 0 replies; 24+ messages in thread
From: Claude Code Review Bot @ 2026-04-22 22:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Changes `Mmio<const SIZE: usize>` to `Mmio<T: ?Sized>`. Adds `MIN_SIZE` to `KnownSize` trait. Mostly mechanical updates to doc examples changing `Mmio<SIZE>` to `Mmio<Region<SIZE>>`.
The `MIN_SIZE` additions are sensible:
```rust
impl<T> KnownSize for T {
const MIN_SIZE: usize = core::mem::size_of::<T>();
}
impl<T> KnownSize for [T] {
const MIN_SIZE: usize = 0;
}
```
This is correct -- sized types have a fixed known size, slices have minimum 0.
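The coherence split can also be verified standalone: the blanket impl only covers `Sized` types, so the `[T]` impl does not overlap with it (trait and impls copied in shape, not verbatim):

```rust
// Minimal sketch of `KnownSize` with the new `MIN_SIZE` constant. The
// blanket impl covers all `Sized` types; the slice impl computes the size
// from the pointer's length metadata.
trait KnownSize {
    const MIN_SIZE: usize;
    fn size(p: *const Self) -> usize;
}

impl<T> KnownSize for T {
    // `T` is implicitly `Sized` here, so the size is fully static.
    const MIN_SIZE: usize = core::mem::size_of::<T>();
    fn size(_: *const Self) -> usize {
        core::mem::size_of::<T>()
    }
}

impl<T> KnownSize for [T] {
    // A slice may be empty, so no minimum can be promised at compile time.
    const MIN_SIZE: usize = 0;
    fn size(p: *const Self) -> usize {
        p.len() * core::mem::size_of::<T>()
    }
}

fn main() {
    assert_eq!(<u32 as KnownSize>::MIN_SIZE, 4);
    assert_eq!(<[u16] as KnownSize>::MIN_SIZE, 0);
    // A null data pointer is fine: only the metadata is inspected.
    let s: *const [u16] = core::ptr::slice_from_raw_parts(core::ptr::null(), 8);
    assert_eq!(<[u16] as KnownSize>::size(s), 16);
    println!("ok");
}
```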
Clean patch.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v2 03/11] rust: io: use pointer types instead of address
2026-04-21 14:56 [PATCH v2 00/11] rust: I/O type generalization and projection Gary Guo
2026-04-21 14:56 ` [PATCH v2 01/11] rust: io: generalize `MmioRaw` to pointer to arbitrary type Gary Guo
2026-04-21 14:56 ` [PATCH v2 02/11] rust: io: generalize `Mmio` " Gary Guo
@ 2026-04-21 14:56 ` Gary Guo
2026-04-22 22:25 ` Claude review: " Claude Code Review Bot
2026-04-21 14:56 ` [PATCH v2 04/11] rust: io: add missing safety requirement in `IoCapable` methods Gary Guo
` (8 subsequent siblings)
11 siblings, 1 reply; 24+ messages in thread
From: Gary Guo @ 2026-04-21 14:56 UTC (permalink / raw)
To: Greg Kroah-Hartman, Rafael J. Wysocki, Danilo Krummrich,
Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Daniel Almeida, Bjorn Helgaas, Krzysztof Wilczyński,
Abdiel Janulgue, Robin Murphy, Alexandre Courbot, David Airlie,
Simona Vetter
Cc: driver-core, rust-for-linux, linux-kernel, linux-pci, nouveau,
dri-devel
This carries the size information in the pointer type and metadata, makes
it possible to use I/O projections, and paves the way for I/O view types.
With this change, minimum size information becomes available through the
types, so `KnownSize::MIN_SIZE` can be used and the `IoKnownSize` trait is
no longer necessary. The trait is kept for compatibility and can be
removed once users stop using it for bounds.
PCI config space uses only offsets, not pointers as MMIO does; for this,
null pointers (with proper size metadata) are used. This is okay because
the `Io` trait impl and I/O projections can operate on invalid pointers,
and for PCI config space we only use the address information and ignore
the provenance.
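The offset validation against metadata-carried size can be sketched in plain Rust; helper names are illustrative, and the exact checks of the kernel's `offset_valid` may differ:

```rust
// Sketch of pointer-based bounds checking: the region size lives in the
// fat pointer's metadata, and a validated offset is applied with a
// wrapping byte add, so the base needs no provenance (it may even be
// null, as for PCI config space).
fn offset_valid<U>(offset: usize, size: usize) -> bool {
    let width = core::mem::size_of::<U>();
    // Assumed check: naturally aligned offset whose access fits in `size`.
    offset % width == 0 && offset.checked_add(width).is_some_and(|end| end <= size)
}

fn io_addr<U>(base: *mut [u8], offset: usize) -> Option<*mut U> {
    if !offset_valid::<U>(offset, base.len()) {
        return None;
    }
    Some(base.cast::<u8>().wrapping_add(offset).cast())
}

fn main() {
    // Null pointer with size metadata, as used for config space offsets.
    let cfg: *mut [u8] = core::ptr::slice_from_raw_parts_mut(core::ptr::null_mut(), 256);
    assert!(io_addr::<u32>(cfg, 0x10).is_some()); // aligned, in range
    assert!(io_addr::<u32>(cfg, 0xfe).is_none()); // misaligned
    assert!(io_addr::<u32>(cfg, 0x100).is_none()); // past the end
    println!("ok");
}
```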
Signed-off-by: Gary Guo <gary@garyguo.net>
---
rust/kernel/devres.rs | 2 +-
rust/kernel/io.rs | 123 +++++++++++++++++++++-----------------------------
rust/kernel/io/mem.rs | 2 +-
rust/kernel/pci/io.rs | 74 ++++++++++++++++++------------
4 files changed, 99 insertions(+), 102 deletions(-)
diff --git a/rust/kernel/devres.rs b/rust/kernel/devres.rs
index 3e22c63efb98..ea86e9c62cdf 100644
--- a/rust/kernel/devres.rs
+++ b/rust/kernel/devres.rs
@@ -101,7 +101,7 @@ struct Inner<T> {
/// impl<const SIZE: usize> Drop for IoMem<SIZE> {
/// fn drop(&mut self) {
/// // SAFETY: `self.0.addr()` is guaranteed to be properly mapped by `Self::new`.
-/// unsafe { bindings::iounmap(self.0.addr() as *mut c_void); };
+/// unsafe { bindings::iounmap(self.0.as_ptr().cast()); };
/// }
/// }
///
diff --git a/rust/kernel/io.rs b/rust/kernel/io.rs
index 0b9c97c0a1d7..1682f2a0d20d 100644
--- a/rust/kernel/io.rs
+++ b/rust/kernel/io.rs
@@ -105,8 +105,8 @@ pub fn new_region(addr: usize, size: usize) -> Result<Self> {
impl<T: ?Sized + KnownSize> MmioRaw<T> {
/// Returns the base address of the MMIO region.
#[inline]
- pub fn addr(&self) -> usize {
- self.addr.addr()
+ pub fn as_ptr(&self) -> *mut T {
+ self.addr
}
/// Returns the size of the MMIO region.
@@ -166,7 +166,7 @@ pub fn size(&self) -> usize {
/// impl<const SIZE: usize> Drop for IoMem<SIZE> {
/// fn drop(&mut self) {
/// // SAFETY: `self.0.addr()` is guaranteed to be properly mapped by `Self::new`.
-/// unsafe { bindings::iounmap(self.0.addr() as *mut c_void); };
+/// unsafe { bindings::iounmap(self.0.as_ptr().cast()); };
/// }
/// }
///
@@ -217,14 +217,14 @@ pub trait IoCapable<T> {
/// # Safety
///
/// The range `[address..address + size_of::<T>()]` must be within the bounds of `Self`.
- unsafe fn io_read(&self, address: usize) -> T;
+ unsafe fn io_read(&self, address: *mut T) -> T;
/// Performs an I/O write of `value` at `address`.
///
/// # Safety
///
/// The range `[address..address + size_of::<T>()]` must be within the bounds of `Self`.
- unsafe fn io_write(&self, value: T, address: usize);
+ unsafe fn io_write(&self, value: T, address: *mut T);
}
/// Describes a given I/O location: its offset, width, and type to convert the raw value from and
@@ -291,23 +291,35 @@ fn offset(self) -> usize {
/// For MMIO regions, all widths (u8, u16, u32, and u64 on 64-bit systems) are typically
/// supported. For PCI configuration space, u8, u16, and u32 are supported but u64 is not.
pub trait Io {
- /// Returns the base address of this mapping.
- fn addr(&self) -> usize;
+ /// Type of this I/O region. For untyped I/O regions, the [`Region`] type can be used.
+ type Type: ?Sized + KnownSize;
+
+ /// Returns the base pointer of this mapping.
+ ///
+ /// This is a pointer to capture metadata. The specific meaning of the pointer depends on
+ /// the I/O backend, and the pointer is not necessarily valid.
+ fn as_ptr(&self) -> *mut Self::Type;
+
+ /// Returns the absolute I/O address for a given `offset`,
+ /// performing compile-time bound checks.
+ // Always inline to optimize out error path of `build_assert`.
+ #[inline(always)]
+ fn io_addr_assert<U>(&self, offset: usize) -> *mut U {
+ build_assert!(offset_valid::<U>(offset, Self::Type::MIN_SIZE));
- /// Returns the maximum size of this mapping.
- fn maxsize(&self) -> usize;
+ self.as_ptr().wrapping_byte_add(offset).cast()
+ }
/// Returns the absolute I/O address for a given `offset`,
/// performing runtime bound checks.
#[inline]
- fn io_addr<U>(&self, offset: usize) -> Result<usize> {
- if !offset_valid::<U>(offset, self.maxsize()) {
+ fn io_addr<U>(&self, offset: usize) -> Result<*mut U> {
+ let ptr = self.as_ptr();
+ if !offset_valid::<U>(offset, Self::Type::size(ptr)) {
return Err(EINVAL);
}
- // Probably no need to check, since the safety requirements of `Self::new` guarantee that
- // this can't overflow.
- self.addr().checked_add(offset).ok_or(EINVAL)
+ Ok(ptr.wrapping_byte_add(offset).cast())
}
/// Fallible 8-bit read with runtime bounds check.
@@ -386,7 +398,7 @@ fn try_write64(&self, value: u64, offset: usize) -> Result
#[inline(always)]
fn read8(&self, offset: usize) -> u8
where
- Self: IoKnownSize + IoCapable<u8>,
+ Self: IoCapable<u8>,
{
self.read(offset)
}
@@ -395,7 +407,7 @@ fn read8(&self, offset: usize) -> u8
#[inline(always)]
fn read16(&self, offset: usize) -> u16
where
- Self: IoKnownSize + IoCapable<u16>,
+ Self: IoCapable<u16>,
{
self.read(offset)
}
@@ -404,7 +416,7 @@ fn read16(&self, offset: usize) -> u16
#[inline(always)]
fn read32(&self, offset: usize) -> u32
where
- Self: IoKnownSize + IoCapable<u32>,
+ Self: IoCapable<u32>,
{
self.read(offset)
}
@@ -413,7 +425,7 @@ fn read32(&self, offset: usize) -> u32
#[inline(always)]
fn read64(&self, offset: usize) -> u64
where
- Self: IoKnownSize + IoCapable<u64>,
+ Self: IoCapable<u64>,
{
self.read(offset)
}
@@ -422,7 +434,7 @@ fn read64(&self, offset: usize) -> u64
#[inline(always)]
fn write8(&self, value: u8, offset: usize)
where
- Self: IoKnownSize + IoCapable<u8>,
+ Self: IoCapable<u8>,
{
self.write(offset, value)
}
@@ -431,7 +443,7 @@ fn write8(&self, value: u8, offset: usize)
#[inline(always)]
fn write16(&self, value: u16, offset: usize)
where
- Self: IoKnownSize + IoCapable<u16>,
+ Self: IoCapable<u16>,
{
self.write(offset, value)
}
@@ -440,7 +452,7 @@ fn write16(&self, value: u16, offset: usize)
#[inline(always)]
fn write32(&self, value: u32, offset: usize)
where
- Self: IoKnownSize + IoCapable<u32>,
+ Self: IoCapable<u32>,
{
self.write(offset, value)
}
@@ -449,7 +461,7 @@ fn write32(&self, value: u32, offset: usize)
#[inline(always)]
fn write64(&self, value: u64, offset: usize)
where
- Self: IoKnownSize + IoCapable<u64>,
+ Self: IoCapable<u64>,
{
self.write(offset, value)
}
@@ -637,7 +649,7 @@ fn try_update<T, L, F>(&self, location: L, f: F) -> Result
fn read<T, L>(&self, location: L) -> T
where
L: IoLoc<T>,
- Self: IoKnownSize + IoCapable<L::IoType>,
+ Self: IoCapable<L::IoType>,
{
let address = self.io_addr_assert::<L::IoType>(location.offset());
@@ -670,7 +682,7 @@ fn read<T, L>(&self, location: L) -> T
fn write<T, L>(&self, location: L, value: T)
where
L: IoLoc<T>,
- Self: IoKnownSize + IoCapable<L::IoType>,
+ Self: IoCapable<L::IoType>,
{
let address = self.io_addr_assert::<L::IoType>(location.offset());
let io_value = value.into();
@@ -715,7 +727,7 @@ fn write_reg<T, L, V>(&self, value: V)
where
L: IoLoc<T>,
V: LocatedRegister<Location = L, Value = T>,
- Self: IoKnownSize + IoCapable<L::IoType>,
+ Self: IoCapable<L::IoType>,
{
let (location, value) = value.into_io_op();
@@ -748,7 +760,7 @@ fn write_reg<T, L, V>(&self, value: V)
fn update<T, L, F>(&self, location: L, f: F)
where
L: IoLoc<T>,
- Self: IoKnownSize + IoCapable<L::IoType> + Sized,
+ Self: IoCapable<L::IoType>,
F: FnOnce(T) -> T,
{
let address = self.io_addr_assert::<L::IoType>(location.offset());
@@ -762,41 +774,25 @@ fn update<T, L, F>(&self, location: L, f: F)
}
}
-/// Trait for types with a known size at compile time.
-///
-/// This trait is implemented by I/O backends that have a compile-time known size,
-/// enabling the use of infallible I/O accessors with compile-time bounds checking.
-///
-/// Types implementing this trait can use the infallible methods in [`Io`] trait
-/// (e.g., `read8`, `write32`), which require `Self: IoKnownSize` bound.
-pub trait IoKnownSize: Io {
- /// Minimum usable size of this region.
- const MIN_SIZE: usize;
-
- /// Returns the absolute I/O address for a given `offset`,
- /// performing compile-time bound checks.
- // Always inline to optimize out error path of `build_assert`.
- #[inline(always)]
- fn io_addr_assert<U>(&self, offset: usize) -> usize {
- build_assert!(offset_valid::<U>(offset, Self::MIN_SIZE));
+// For compatibility only.
+#[doc(hidden)]
+pub trait IoKnownSize: Io {}
- self.addr() + offset
- }
-}
+impl<T: Io + ?Sized> IoKnownSize for T {}
/// Implements [`IoCapable`] on `$mmio` for `$ty` using `$read_fn` and `$write_fn`.
macro_rules! impl_mmio_io_capable {
($mmio:ident, $(#[$attr:meta])* $ty:ty, $read_fn:ident, $write_fn:ident) => {
$(#[$attr])*
impl<T: ?Sized> IoCapable<$ty> for $mmio<T> {
- unsafe fn io_read(&self, address: usize) -> $ty {
+ unsafe fn io_read(&self, address: *mut $ty) -> $ty {
// SAFETY: By the trait invariant `address` is a valid address for MMIO operations.
unsafe { bindings::$read_fn(address as *const c_void) }
}
- unsafe fn io_write(&self, value: $ty, address: usize) {
+ unsafe fn io_write(&self, value: $ty, address: *mut $ty) {
// SAFETY: By the trait invariant `address` is a valid address for MMIO operations.
- unsafe { bindings::$write_fn(value, address as *mut c_void) }
+ unsafe { bindings::$write_fn(value, address.cast()) }
}
}
};
@@ -816,23 +812,15 @@ unsafe fn io_write(&self, value: $ty, address: usize) {
);
impl<T: ?Sized + KnownSize> Io for Mmio<T> {
- /// Returns the base address of this mapping.
- #[inline]
- fn addr(&self) -> usize {
- self.0.addr()
- }
+ type Type = T;
- /// Returns the maximum size of this mapping.
+ /// Returns the base address of this mapping.
#[inline]
- fn maxsize(&self) -> usize {
- self.0.size()
+ fn as_ptr(&self) -> *mut T {
+ self.0.as_ptr()
}
}
-impl<T: ?Sized + KnownSize> IoKnownSize for Mmio<T> {
- const MIN_SIZE: usize = T::MIN_SIZE;
-}
-
impl<T: ?Sized + KnownSize> Mmio<T> {
/// Converts an `MmioRaw` into an `Mmio` instance, providing the accessors to the MMIO mapping.
///
@@ -856,21 +844,14 @@ pub unsafe fn from_raw(raw: &MmioRaw<T>) -> &Self {
pub struct RelaxedMmio<T: ?Sized>(Mmio<T>);
impl<T: ?Sized + KnownSize> Io for RelaxedMmio<T> {
- #[inline]
- fn addr(&self) -> usize {
- self.0.addr()
- }
+ type Type = T;
#[inline]
- fn maxsize(&self) -> usize {
- self.0.maxsize()
+ fn as_ptr(&self) -> *mut T {
+ self.0.as_ptr()
}
}
-impl<T: ?Sized + KnownSize> IoKnownSize for RelaxedMmio<T> {
- const MIN_SIZE: usize = T::MIN_SIZE;
-}
-
impl<T: ?Sized> Mmio<T> {
/// Returns a [`RelaxedMmio`] reference that performs relaxed I/O operations.
///
diff --git a/rust/kernel/io/mem.rs b/rust/kernel/io/mem.rs
index a6292f4ebfa4..f87a29f90e79 100644
--- a/rust/kernel/io/mem.rs
+++ b/rust/kernel/io/mem.rs
@@ -284,7 +284,7 @@ pub fn new<'a>(io_request: IoRequest<'a>) -> impl PinInit<Devres<Self>, Error> +
impl<const SIZE: usize> Drop for IoMem<SIZE> {
fn drop(&mut self) {
// SAFETY: Safe as by the invariant of `Io`.
- unsafe { bindings::iounmap(self.io.addr() as *mut c_void) }
+ unsafe { bindings::iounmap(self.io.as_ptr().cast()) }
}
}
diff --git a/rust/kernel/pci/io.rs b/rust/kernel/pci/io.rs
index e048370fb8c1..63c35eaab6c3 100644
--- a/rust/kernel/pci/io.rs
+++ b/rust/kernel/pci/io.rs
@@ -10,11 +10,11 @@
io::{
Io,
IoCapable,
- IoKnownSize,
Mmio,
MmioRaw, //
},
prelude::*,
+ ptr::KnownSize,
sync::aref::ARef, //
};
use core::{
@@ -60,14 +60,37 @@ pub const fn into_raw(self) -> usize {
pub trait ConfigSpaceKind {
/// The size of this configuration space in bytes.
const SIZE: usize;
+
+ /// Region type for this kind of config space. This should be [`crate::io::Region<SIZE>`].
+ type Region: ?Sized + KnownSize;
+
+ /// Obtain pointer with actual size information.
+ fn null_ptr_from_size(size: ConfigSpaceSize) -> *mut Self::Region;
}
impl ConfigSpaceKind for Normal {
const SIZE: usize = 256;
+
+ type Region = crate::io::Region<256>;
+
+ #[inline]
+ fn null_ptr_from_size(size: ConfigSpaceSize) -> *mut Self::Region {
+ core::ptr::slice_from_raw_parts_mut(core::ptr::null_mut::<u8>(), size.into_raw())
+ as *mut Self::Region
+ }
}
impl ConfigSpaceKind for Extended {
const SIZE: usize = 4096;
+
+ type Region = crate::io::Region<4096>;
+
+ #[inline]
+ fn null_ptr_from_size(_: ConfigSpaceSize) -> *mut Self::Region {
+ // Small optimization. For an extended config space, we already know that the size
+ // is 4096.
+ core::ptr::slice_from_raw_parts_mut(core::ptr::null_mut::<u8>(), 4096) as *mut Self::Region
+ }
}
/// The PCI configuration space of a device.
@@ -87,28 +110,28 @@ pub struct ConfigSpace<'a, S: ConfigSpaceKind = Extended> {
macro_rules! impl_config_space_io_capable {
($ty:ty, $read_fn:ident, $write_fn:ident) => {
impl<'a, S: ConfigSpaceKind> IoCapable<$ty> for ConfigSpace<'a, S> {
- unsafe fn io_read(&self, address: usize) -> $ty {
+ unsafe fn io_read(&self, address: *mut $ty) -> $ty {
+ // CAST: The offset is cast to `i32` because the C functions expect a 32-bit
+ // signed offset parameter. PCI configuration space size is at most 4096 bytes,
+ // so the value always fits within `i32` without truncation or sign change.
+ let addr = address.addr() as i32;
let mut val: $ty = 0;
// Return value from C function is ignored in infallible accessors.
- let _ret =
- // SAFETY: By the type invariant `self.pdev` is a valid address.
- // CAST: The offset is cast to `i32` because the C functions expect a 32-bit
- // signed offset parameter. PCI configuration space size is at most 4096 bytes,
- // so the value always fits within `i32` without truncation or sign change.
- unsafe { bindings::$read_fn(self.pdev.as_raw(), address as i32, &mut val) };
-
+ // SAFETY: By the type invariant `self.pdev` is a valid address.
+ let _ = unsafe { bindings::$read_fn(self.pdev.as_raw(), addr, &mut val) };
val
}
- unsafe fn io_write(&self, value: $ty, address: usize) {
+ unsafe fn io_write(&self, value: $ty, address: *mut $ty) {
+ // CAST: The offset is cast to `i32` because the C functions expect a 32-bit
+ // signed offset parameter. PCI configuration space size is at most 4096 bytes,
+ // so the value always fits within `i32` without truncation or sign change.
+ let addr = address.addr() as i32;
+
// Return value from C function is ignored in infallible accessors.
- let _ret =
- // SAFETY: By the type invariant `self.pdev` is a valid address.
- // CAST: The offset is cast to `i32` because the C functions expect a 32-bit
- // signed offset parameter. PCI configuration space size is at most 4096 bytes,
- // so the value always fits within `i32` without truncation or sign change.
- unsafe { bindings::$write_fn(self.pdev.as_raw(), address as i32, value) };
+ // SAFETY: By the type invariant `self.pdev` is a valid address.
+ let _ = unsafe { bindings::$write_fn(self.pdev.as_raw(), addr, value) };
}
}
};
@@ -120,23 +143,15 @@ unsafe fn io_write(&self, value: $ty, address: usize) {
impl_config_space_io_capable!(u32, pci_read_config_dword, pci_write_config_dword);
impl<'a, S: ConfigSpaceKind> Io for ConfigSpace<'a, S> {
- /// Returns the base address of the I/O region. It is always 0 for configuration space.
- #[inline]
- fn addr(&self) -> usize {
- 0
- }
+ type Type = S::Region;
- /// Returns the maximum size of the configuration space.
+ /// Returns the base address of the I/O region. It is always 0 for configuration space.
#[inline]
- fn maxsize(&self) -> usize {
- self.pdev.cfg_size().into_raw()
+ fn as_ptr(&self) -> *mut Self::Type {
+ S::null_ptr_from_size(self.pdev.cfg_size())
}
}
-impl<'a, S: ConfigSpaceKind> IoKnownSize for ConfigSpace<'a, S> {
- const MIN_SIZE: usize = S::SIZE;
-}
-
/// A PCI BAR to perform I/O-Operations on.
///
/// I/O backend assumes that the device is little-endian and will automatically
@@ -219,7 +234,7 @@ unsafe fn do_release(pdev: &Device, ioptr: usize, num: i32) {
fn release(&self) {
// SAFETY: The safety requirements are guaranteed by the type invariant of `self.pdev`.
- unsafe { Self::do_release(&self.pdev, self.io.addr(), self.num) };
+ unsafe { Self::do_release(&self.pdev, self.io.as_ptr().addr(), self.num) };
}
}
@@ -267,6 +282,7 @@ pub fn iomap_region<'a>(
}
/// Returns the size of configuration space.
+ #[inline]
pub fn cfg_size(&self) -> ConfigSpaceSize {
// SAFETY: `self.as_raw` is a valid pointer to a `struct pci_dev`.
let size = unsafe { (*self.as_raw()).cfg_size };
--
2.51.2
^ permalink raw reply related [flat|nested] 24+ messages in thread

* Claude review: rust: io: use pointer types instead of address
2026-04-21 14:56 ` [PATCH v2 03/11] rust: io: use pointer types instead of address Gary Guo
@ 2026-04-22 22:25 ` Claude Code Review Bot
0 siblings, 0 replies; 24+ messages in thread
From: Claude Code Review Bot @ 2026-04-22 22:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
This is the biggest refactor of the series: it replaces `addr() -> usize` / `maxsize() -> usize` with `as_ptr() -> *mut Self::Type` on the `Io` trait, and makes `IoCapable` take `*mut T` instead of `usize`. `IoKnownSize` is eliminated (kept only as a hidden blanket impl for compatibility).
The PCI config space handling is notable. Since PCI config space uses offsets rather than real pointers, null pointers carrying size metadata are used:
```rust
fn null_ptr_from_size(size: ConfigSpaceSize) -> *mut Self::Region {
core::ptr::slice_from_raw_parts_mut(core::ptr::null_mut::<u8>(), size.into_raw())
as *mut Self::Region
}
```
This is documented as intentional since `IoCapable` impls for PCI config space only use `address.addr()` (the offset), never dereferencing the pointer. The `io_addr` method now uses `wrapping_byte_add`, which is appropriate for potentially-null pointers.
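The null-pointer-with-metadata trick can be illustrated standalone. The sketch below is userspace Rust with hypothetical names (`null_region` stands in for the patch's `null_ptr_from_size`); it only shows the pointer mechanics, not the kernel API:

```rust
use core::ptr;

// A null "region" pointer whose slice metadata records the size. The data
// address is null, but the size is still recoverable from the fat pointer.
fn null_region(size: usize) -> *mut [u8] {
    ptr::slice_from_raw_parts_mut(ptr::null_mut::<u8>(), size)
}

fn main() {
    let region = null_region(4096);
    // The data pointer is null, yet the metadata carries the size.
    assert!(region.is_null());
    assert_eq!(region.len(), 4096);

    // `wrapping_byte_add` keeps offset arithmetic well-defined even on a
    // null pointer, unlike `add`, which requires an in-bounds pointer.
    let reg = (region as *mut u8).wrapping_byte_add(0x40);
    assert_eq!(reg as usize, 0x40);
    println!("ok");
}
```

Reading the offset back via the address, as the `IoCapable` impls do, works precisely because the base is null.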
One minor observation: the `io_addr_assert` method now uses `build_assert!` with `Self::Type::MIN_SIZE` -- this means it checks against the compile-time minimum, while `io_addr` checks against the runtime `KnownSize::size(ptr)`. This is consistent with the intent (compile-time vs runtime bounds checking).
The `IoKnownSize` deprecation is handled gracefully with a blanket impl and `#[doc(hidden)]`.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v2 04/11] rust: io: add missing safety requirement in `IoCapable` methods
2026-04-21 14:56 [PATCH v2 00/11] rust: I/O type generalization and projection Gary Guo
` (2 preceding siblings ...)
2026-04-21 14:56 ` [PATCH v2 03/11] rust: io: use pointer types instead of address Gary Guo
@ 2026-04-21 14:56 ` Gary Guo
2026-04-22 22:25 ` Claude review: " Claude Code Review Bot
2026-04-21 14:56 ` [PATCH v2 05/11] rust: io: restrict untyped IO access and `register!` to `Region` Gary Guo
` (7 subsequent siblings)
11 siblings, 1 reply; 24+ messages in thread
From: Gary Guo @ 2026-04-21 14:56 UTC (permalink / raw)
To: Greg Kroah-Hartman, Rafael J. Wysocki, Danilo Krummrich,
Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Daniel Almeida, Bjorn Helgaas, Krzysztof Wilczyński,
Abdiel Janulgue, Robin Murphy, Alexandre Courbot, David Airlie,
Simona Vetter
Cc: driver-core, rust-for-linux, linux-kernel, linux-pci, nouveau,
dri-devel
The current safety comments on `io_read`/`io_write` do not mention
alignment, even though it is guaranteed by the checks in `Io`. Spell it out
so that implementors of `IoCapable` can rely on it.
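For reference, the bounds-plus-alignment check that `Io` performs before calling into `IoCapable` can be sketched as follows (a hypothetical `offset_valid`-style helper in userspace Rust, mirroring the described guarantee rather than the actual kernel code):

```rust
// An access of type `U` at `offset` must be aligned for `U` and must end
// within `size` bytes of the region.
fn offset_valid<U>(offset: usize, size: usize) -> bool {
    let n = core::mem::size_of::<U>();
    offset % core::mem::align_of::<U>() == 0
        && offset.checked_add(n).is_some_and(|end| end <= size)
}

fn main() {
    assert!(offset_valid::<u32>(4, 16)); // aligned and in bounds
    assert!(!offset_valid::<u32>(2, 16)); // misaligned for u32
    assert!(!offset_valid::<u32>(14, 16)); // would read past the end
    println!("ok");
}
```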
Signed-off-by: Gary Guo <gary@garyguo.net>
---
rust/kernel/io.rs | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/rust/kernel/io.rs b/rust/kernel/io.rs
index 1682f2a0d20d..c6d30c5b4e10 100644
--- a/rust/kernel/io.rs
+++ b/rust/kernel/io.rs
@@ -216,14 +216,16 @@ pub trait IoCapable<T> {
///
/// # Safety
///
- /// The range `[address..address + size_of::<T>()]` must be within the bounds of `Self`.
+ /// - The range `[address..address + size_of::<T>()]` must be within the bounds of `Self`.
+ /// - `address` must be aligned.
unsafe fn io_read(&self, address: *mut T) -> T;
/// Performs an I/O write of `value` at `address`.
///
/// # Safety
///
- /// The range `[address..address + size_of::<T>()]` must be within the bounds of `Self`.
+ /// - The range `[address..address + size_of::<T>()]` must be within the bounds of `Self`.
+ /// - `address` must be aligned.
unsafe fn io_write(&self, value: T, address: *mut T);
}
--
2.51.2
^ permalink raw reply related [flat|nested] 24+ messages in thread

* [PATCH v2 05/11] rust: io: restrict untyped IO access and `register!` to `Region`
2026-04-21 14:56 [PATCH v2 00/11] rust: I/O type generalization and projection Gary Guo
` (3 preceding siblings ...)
2026-04-21 14:56 ` [PATCH v2 04/11] rust: io: add missing safety requirement in `IoCapable` methods Gary Guo
@ 2026-04-21 14:56 ` Gary Guo
2026-04-22 22:25 ` Claude review: " Claude Code Review Bot
2026-04-21 14:56 ` [PATCH v2 06/11] rust: io: add view type Gary Guo
` (6 subsequent siblings)
11 siblings, 1 reply; 24+ messages in thread
From: Gary Guo @ 2026-04-21 14:56 UTC (permalink / raw)
To: Greg Kroah-Hartman, Rafael J. Wysocki, Danilo Krummrich,
Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Daniel Almeida, Bjorn Helgaas, Krzysztof Wilczyński,
Abdiel Janulgue, Robin Murphy, Alexandre Courbot, David Airlie,
Simona Vetter
Cc: driver-core, rust-for-linux, linux-kernel, linux-pci, nouveau,
dri-devel
Currently the `Io` trait exposes a bunch of untyped IO accesses, but if the
`Io` region itself is typed, then it might be weird to have
let io: Mmio<u32> = /* ... */;
io.read8(1);
While not unsound, this is surely strange. Thus, restrict the untyped methods
and the `register!` macro to the `Region` type only.
The way it is implemented is by adding a generic type to `IoLoc`. This also
paves the way to add typed register blocks in the future; for example, we
could use this mechanism to block driver A's `register!()` generated macro
from being used on driver B's MMIO. The same mechanism could be used for
relative IO registers. These are future opportunities, and for this patch I
simply restrict everything to require `IoLoc<Region<SIZE>, _>`.
Suggested-by: Alexandre Courbot <acourbot@nvidia.com>
Link: https://lore.kernel.org/rust-for-linux/DHLB3RO3OSF5.2R7F27U99BKLN@nvidia.com/
Signed-off-by: Gary Guo <gary@garyguo.net>
---
rust/kernel/io.rs | 47 +++++++++++++++++++++++++++++++---------------
rust/kernel/io/register.rs | 21 ++++++++++++---------
2 files changed, 44 insertions(+), 24 deletions(-)
diff --git a/rust/kernel/io.rs b/rust/kernel/io.rs
index c6d30c5b4e10..a13be8c5fd2d 100644
--- a/rust/kernel/io.rs
+++ b/rust/kernel/io.rs
@@ -238,15 +238,16 @@ pub trait IoCapable<T> {
/// (for primitive types like [`u32`]) and typed ones (like those generated by the [`register!`]
/// macro).
///
-/// An `IoLoc<T>` carries three pieces of information:
+/// An `IoLoc<Base, T>` carries the following pieces of information:
///
+/// - The valid `Base` to operate on. For most registers, this should be [`Region`].
/// - The offset to access (returned by [`IoLoc::offset`]),
/// - The width of the access (determined by [`IoLoc::IoType`]),
/// - The type `T` in which the raw data is returned or provided.
///
/// `T` and `IoLoc::IoType` may differ: for instance, a typed register has `T` = the register type
/// with its bitfields, and `IoType` = its backing primitive (e.g. `u32`).
-pub trait IoLoc<T> {
+pub trait IoLoc<Base: ?Sized, T> {
/// Size ([`u8`], [`u16`], etc) of the I/O performed on the returned [`offset`](IoLoc::offset).
type IoType: Into<T> + From<T>;
@@ -254,12 +255,12 @@ pub trait IoLoc<T> {
fn offset(self) -> usize;
}
-/// Implements [`IoLoc<$ty>`] for [`usize`], allowing [`usize`] to be used as a parameter of
-/// [`Io::read`] and [`Io::write`].
+/// Implements [`IoLoc<Region<SIZE>, $ty>`] for [`usize`], allowing [`usize`] to be used as a
+/// parameter of [`Io::read`] and [`Io::write`].
macro_rules! impl_usize_ioloc {
($($ty:ty),*) => {
$(
- impl IoLoc<$ty> for usize {
+ impl<const SIZE: usize> IoLoc<Region<SIZE>, $ty> for usize {
type IoType = $ty;
#[inline(always)]
@@ -328,6 +329,7 @@ fn io_addr<U>(&self, offset: usize) -> Result<*mut U> {
#[inline(always)]
fn try_read8(&self, offset: usize) -> Result<u8>
where
+ usize: IoLoc<Self::Type, u8, IoType = u8>,
Self: IoCapable<u8>,
{
self.try_read(offset)
@@ -337,6 +339,7 @@ fn try_read8(&self, offset: usize) -> Result<u8>
#[inline(always)]
fn try_read16(&self, offset: usize) -> Result<u16>
where
+ usize: IoLoc<Self::Type, u16, IoType = u16>,
Self: IoCapable<u16>,
{
self.try_read(offset)
@@ -346,6 +349,7 @@ fn try_read16(&self, offset: usize) -> Result<u16>
#[inline(always)]
fn try_read32(&self, offset: usize) -> Result<u32>
where
+ usize: IoLoc<Self::Type, u32, IoType = u32>,
Self: IoCapable<u32>,
{
self.try_read(offset)
@@ -355,6 +359,7 @@ fn try_read32(&self, offset: usize) -> Result<u32>
#[inline(always)]
fn try_read64(&self, offset: usize) -> Result<u64>
where
+ usize: IoLoc<Self::Type, u64, IoType = u64>,
Self: IoCapable<u64>,
{
self.try_read(offset)
@@ -364,6 +369,7 @@ fn try_read64(&self, offset: usize) -> Result<u64>
#[inline(always)]
fn try_write8(&self, value: u8, offset: usize) -> Result
where
+ usize: IoLoc<Self::Type, u8, IoType = u8>,
Self: IoCapable<u8>,
{
self.try_write(offset, value)
@@ -373,6 +379,7 @@ fn try_write8(&self, value: u8, offset: usize) -> Result
#[inline(always)]
fn try_write16(&self, value: u16, offset: usize) -> Result
where
+ usize: IoLoc<Self::Type, u16, IoType = u16>,
Self: IoCapable<u16>,
{
self.try_write(offset, value)
@@ -382,6 +389,7 @@ fn try_write16(&self, value: u16, offset: usize) -> Result
#[inline(always)]
fn try_write32(&self, value: u32, offset: usize) -> Result
where
+ usize: IoLoc<Self::Type, u32, IoType = u32>,
Self: IoCapable<u32>,
{
self.try_write(offset, value)
@@ -391,6 +399,7 @@ fn try_write32(&self, value: u32, offset: usize) -> Result
#[inline(always)]
fn try_write64(&self, value: u64, offset: usize) -> Result
where
+ usize: IoLoc<Self::Type, u64, IoType = u64>,
Self: IoCapable<u64>,
{
self.try_write(offset, value)
@@ -400,6 +409,7 @@ fn try_write64(&self, value: u64, offset: usize) -> Result
#[inline(always)]
fn read8(&self, offset: usize) -> u8
where
+ usize: IoLoc<Self::Type, u8, IoType = u8>,
Self: IoCapable<u8>,
{
self.read(offset)
@@ -409,6 +419,7 @@ fn read8(&self, offset: usize) -> u8
#[inline(always)]
fn read16(&self, offset: usize) -> u16
where
+ usize: IoLoc<Self::Type, u16, IoType = u16>,
Self: IoCapable<u16>,
{
self.read(offset)
@@ -418,6 +429,7 @@ fn read16(&self, offset: usize) -> u16
#[inline(always)]
fn read32(&self, offset: usize) -> u32
where
+ usize: IoLoc<Self::Type, u32, IoType = u32>,
Self: IoCapable<u32>,
{
self.read(offset)
@@ -427,6 +439,7 @@ fn read32(&self, offset: usize) -> u32
#[inline(always)]
fn read64(&self, offset: usize) -> u64
where
+ usize: IoLoc<Self::Type, u64, IoType = u64>,
Self: IoCapable<u64>,
{
self.read(offset)
@@ -436,6 +449,7 @@ fn read64(&self, offset: usize) -> u64
#[inline(always)]
fn write8(&self, value: u8, offset: usize)
where
+ usize: IoLoc<Self::Type, u8, IoType = u8>,
Self: IoCapable<u8>,
{
self.write(offset, value)
@@ -445,6 +459,7 @@ fn write8(&self, value: u8, offset: usize)
#[inline(always)]
fn write16(&self, value: u16, offset: usize)
where
+ usize: IoLoc<Self::Type, u16, IoType = u16>,
Self: IoCapable<u16>,
{
self.write(offset, value)
@@ -454,6 +469,7 @@ fn write16(&self, value: u16, offset: usize)
#[inline(always)]
fn write32(&self, value: u32, offset: usize)
where
+ usize: IoLoc<Self::Type, u32, IoType = u32>,
Self: IoCapable<u32>,
{
self.write(offset, value)
@@ -463,6 +479,7 @@ fn write32(&self, value: u32, offset: usize)
#[inline(always)]
fn write64(&self, value: u64, offset: usize)
where
+ usize: IoLoc<Self::Type, u64, IoType = u64>,
Self: IoCapable<u64>,
{
self.write(offset, value)
@@ -494,7 +511,7 @@ fn write64(&self, value: u64, offset: usize)
#[inline(always)]
fn try_read<T, L>(&self, location: L) -> Result<T>
where
- L: IoLoc<T>,
+ L: IoLoc<Self::Type, T>,
Self: IoCapable<L::IoType>,
{
let address = self.io_addr::<L::IoType>(location.offset())?;
@@ -529,7 +546,7 @@ fn try_read<T, L>(&self, location: L) -> Result<T>
#[inline(always)]
fn try_write<T, L>(&self, location: L, value: T) -> Result
where
- L: IoLoc<T>,
+ L: IoLoc<Self::Type, T>,
Self: IoCapable<L::IoType>,
{
let address = self.io_addr::<L::IoType>(location.offset())?;
@@ -576,8 +593,8 @@ fn try_write<T, L>(&self, location: L, value: T) -> Result
#[inline(always)]
fn try_write_reg<T, L, V>(&self, value: V) -> Result
where
- L: IoLoc<T>,
- V: LocatedRegister<Location = L, Value = T>,
+ L: IoLoc<Self::Type, T>,
+ V: LocatedRegister<Self::Type, Location = L, Value = T>,
Self: IoCapable<L::IoType>,
{
let (location, value) = value.into_io_op();
@@ -610,7 +627,7 @@ fn try_write_reg<T, L, V>(&self, value: V) -> Result
#[inline(always)]
fn try_update<T, L, F>(&self, location: L, f: F) -> Result
where
- L: IoLoc<T>,
+ L: IoLoc<Self::Type, T>,
Self: IoCapable<L::IoType>,
F: FnOnce(T) -> T,
{
@@ -650,7 +667,7 @@ fn try_update<T, L, F>(&self, location: L, f: F) -> Result
#[inline(always)]
fn read<T, L>(&self, location: L) -> T
where
- L: IoLoc<T>,
+ L: IoLoc<Self::Type, T>,
Self: IoCapable<L::IoType>,
{
let address = self.io_addr_assert::<L::IoType>(location.offset());
@@ -683,7 +700,7 @@ fn read<T, L>(&self, location: L) -> T
#[inline(always)]
fn write<T, L>(&self, location: L, value: T)
where
- L: IoLoc<T>,
+ L: IoLoc<Self::Type, T>,
Self: IoCapable<L::IoType>,
{
let address = self.io_addr_assert::<L::IoType>(location.offset());
@@ -727,8 +744,8 @@ fn write<T, L>(&self, location: L, value: T)
#[inline(always)]
fn write_reg<T, L, V>(&self, value: V)
where
- L: IoLoc<T>,
- V: LocatedRegister<Location = L, Value = T>,
+ L: IoLoc<Self::Type, T>,
+ V: LocatedRegister<Self::Type, Location = L, Value = T>,
Self: IoCapable<L::IoType>,
{
let (location, value) = value.into_io_op();
@@ -761,7 +778,7 @@ fn write_reg<T, L, V>(&self, value: V)
#[inline(always)]
fn update<T, L, F>(&self, location: L, f: F)
where
- L: IoLoc<T>,
+ L: IoLoc<Self::Type, T>,
Self: IoCapable<L::IoType>,
F: FnOnce(T) -> T,
{
diff --git a/rust/kernel/io/register.rs b/rust/kernel/io/register.rs
index 1a407fc35edc..5a7d44c51477 100644
--- a/rust/kernel/io/register.rs
+++ b/rust/kernel/io/register.rs
@@ -113,6 +113,8 @@
use kernel::build_assert;
+use super::Region;
+
/// Trait implemented by all registers.
pub trait Register: Sized {
/// Backing primitive type of the register.
@@ -129,7 +131,7 @@ pub trait FixedRegister: Register {}
/// Allows `()` to be used as the `location` parameter of [`Io::write`](super::Io::write) when
/// passing a [`FixedRegister`] value.
-impl<T> IoLoc<T> for ()
+impl<const SIZE: usize, T> IoLoc<Region<SIZE>, T> for ()
where
T: FixedRegister,
{
@@ -143,7 +145,7 @@ fn offset(self) -> usize {
/// A [`FixedRegister`] carries its location in its type. Thus `FixedRegister` values can be used
/// as an [`IoLoc`].
-impl<T> IoLoc<T> for T
+impl<const SIZE: usize, T> IoLoc<Region<SIZE>, T> for T
where
T: FixedRegister,
{
@@ -168,7 +170,7 @@ pub const fn new() -> Self {
}
}
-impl<T> IoLoc<T> for FixedRegisterLoc<T>
+impl<const SIZE: usize, T> IoLoc<Region<SIZE>, T> for FixedRegisterLoc<T>
where
T: FixedRegister,
{
@@ -239,7 +241,8 @@ const fn offset(self) -> usize {
}
}
-impl<T, B> IoLoc<T> for RelativeRegisterLoc<T, B>
+// FIXME: Make use of `Base` type parameter of `Region` directly.
+impl<const SIZE: usize, T, B> IoLoc<Region<SIZE>, T> for RelativeRegisterLoc<T, B>
where
T: RelativeRegister,
B: RegisterBase<T::BaseFamily> + ?Sized,
@@ -283,7 +286,7 @@ pub fn try_new(idx: usize) -> Option<Self> {
}
}
-impl<T> IoLoc<T> for RegisterArrayLoc<T>
+impl<const SIZE: usize, T> IoLoc<Region<SIZE>, T> for RegisterArrayLoc<T>
where
T: RegisterArray,
{
@@ -370,7 +373,7 @@ pub fn try_at(self, idx: usize) -> Option<RelativeRegisterArrayLoc<T, B>> {
}
}
-impl<T, B> IoLoc<T> for RelativeRegisterArrayLoc<T, B>
+impl<const SIZE: usize, T, B> IoLoc<Region<SIZE>, T> for RelativeRegisterArrayLoc<T, B>
where
T: RelativeRegisterArray,
B: RegisterBase<T::BaseFamily> + ?Sized,
@@ -387,18 +390,18 @@ fn offset(self) -> usize {
/// which to write it.
///
/// Implementors can be used with [`Io::write_reg`](super::Io::write_reg).
-pub trait LocatedRegister {
+pub trait LocatedRegister<Base: ?Sized> {
/// Register value to write.
type Value: Register;
/// Full location information at which to write the value.
- type Location: IoLoc<Self::Value>;
+ type Location: IoLoc<Base, Self::Value>;
/// Consumes `self` and returns a `(location, value)` tuple describing a valid I/O write
/// operation.
fn into_io_op(self) -> (Self::Location, Self::Value);
}
-impl<T> LocatedRegister for T
+impl<const SIZE: usize, T> LocatedRegister<Region<SIZE>> for T
where
T: FixedRegister,
{
--
2.51.2
^ permalink raw reply related [flat|nested] 24+ messages in thread

* Claude review: rust: io: restrict untyped IO access and `register!` to `Region`
2026-04-21 14:56 ` [PATCH v2 05/11] rust: io: restrict untyped IO access and `register!` to `Region` Gary Guo
@ 2026-04-22 22:25 ` Claude Code Review Bot
0 siblings, 0 replies; 24+ messages in thread
From: Claude Code Review Bot @ 2026-04-22 22:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Adds a `Base` type parameter to `IoLoc` so that untyped offset-based I/O (like `read32(0x10)`) only works on `Region`-typed I/O, not on typed I/O regions. This prevents nonsensical operations like `io.read8(1)` on an `Mmio<u32>`.
The `FIXME` comment is honest:
```rust
// FIXME: Make use of `Base` type parameter of `Region` directly.
```
The approach is reasonable and extensible to future register block typing. Implementation is mechanical but correct.
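The restriction pattern can be sketched in plain Rust (hypothetical, heavily simplified names; the real trait carries far more machinery): a `usize` offset implements the location trait only for the untyped `Region` base, so offset-based accessors simply fail to resolve on typed regions.

```rust
use std::marker::PhantomData;

// Untyped region marker with a compile-time minimum size.
struct Region<const SIZE: usize>;

// A location is only valid for a given `Base` region type.
trait IoLoc<Base: ?Sized> {
    fn offset(self) -> usize;
}

// Plain `usize` offsets are locations only for untyped `Region`s.
impl<const SIZE: usize> IoLoc<Region<SIZE>> for usize {
    fn offset(self) -> usize {
        self
    }
}

struct Mmio<T: ?Sized>(PhantomData<T>);

impl<T: ?Sized> Mmio<T> {
    // Dummy accessor: only available when `usize` is a valid location for
    // this region type.
    fn read8(&self, off: usize) -> u8
    where
        usize: IoLoc<T>,
    {
        off.offset() as u8 // stand-in for a real MMIO read
    }
}

fn main() {
    let untyped: Mmio<Region<16>> = Mmio(PhantomData);
    assert_eq!(untyped.read8(1), 1); // compiles: untyped region

    // let typed: Mmio<u32> = Mmio(PhantomData);
    // typed.read8(1); // error: `usize: IoLoc<u32>` is not satisfied
    println!("ok");
}
```

The commented-out lines show the compile error the patch is after: `io.read8(1)` on an `Mmio<u32>` no longer typechecks.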
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v2 06/11] rust: io: add view type
2026-04-21 14:56 [PATCH v2 00/11] rust: I/O type generalization and projection Gary Guo
` (4 preceding siblings ...)
2026-04-21 14:56 ` [PATCH v2 05/11] rust: io: restrict untyped IO access and `register!` to `Region` Gary Guo
@ 2026-04-21 14:56 ` Gary Guo
2026-04-22 22:25 ` Claude review: " Claude Code Review Bot
2026-04-21 14:56 ` [PATCH v2 07/11] rust: dma: add methods to unsafely create reference from subview Gary Guo
` (5 subsequent siblings)
11 siblings, 1 reply; 24+ messages in thread
From: Gary Guo @ 2026-04-21 14:56 UTC (permalink / raw)
To: Greg Kroah-Hartman, Rafael J. Wysocki, Danilo Krummrich,
Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Daniel Almeida, Bjorn Helgaas, Krzysztof Wilczyński,
Abdiel Janulgue, Robin Murphy, Alexandre Courbot, David Airlie,
Simona Vetter
Cc: driver-core, rust-for-linux, linux-kernel, linux-pci, nouveau,
dri-devel
A view may be created statically via I/O projection using the `io_project!()`
macro, which performs compile-time checks, or by type-casting an existing
view with the `try_cast()` function, where the size and alignment checks
are performed at runtime.
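The two runtime checks behind `try_cast()` can be sketched standalone (a hypothetical helper with illustrative names; not the kernel API):

```rust
// `try_cast` succeeds only if the target type fits within the source view
// and the view's address is aligned for the target type.
fn cast_ok<U>(addr: usize, src_size: usize) -> bool {
    core::mem::size_of::<U>() <= src_size && addr % core::mem::align_of::<U>() == 0
}

fn main() {
    assert!(cast_ok::<u32>(0x1000, 8)); // fits and aligned
    assert!(!cast_ok::<u64>(0x1000, 4)); // u64 does not fit in 4 bytes
    assert!(!cast_ok::<u32>(0x1002, 8)); // address only 2-byte aligned
    println!("ok");
}
```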
Reviewed-by: Andreas Hindborg <a.hindborg@kernel.org>
Signed-off-by: Gary Guo <gary@garyguo.net>
---
rust/kernel/io.rs | 147 +++++++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 146 insertions(+), 1 deletion(-)
diff --git a/rust/kernel/io.rs b/rust/kernel/io.rs
index a13be8c5fd2d..869071d47a13 100644
--- a/rust/kernel/io.rs
+++ b/rust/kernel/io.rs
@@ -7,7 +7,11 @@
use crate::{
bindings,
prelude::*,
- ptr::KnownSize, //
+ ptr::KnownSize,
+ transmute::{
+ AsBytes,
+ FromBytes, //
+ }, //
};
pub mod mem;
@@ -297,6 +301,13 @@ pub trait Io {
/// Type of this I/O region. For untyped I/O regions, [`Region`] type can be used.
type Type: ?Sized + KnownSize;
+ /// Get a [`View`] covering the entire region.
+ #[inline]
+ fn as_view(&self) -> View<'_, Self, Self::Type> {
+ // SAFETY: This is an empty projection, so it trivially satisfies the invariant.
+ unsafe { View::new_unchecked(self, self.as_ptr()) }
+ }
+
/// Returns the base pointer of this mapping.
///
/// This is a pointer to capture metadata. The specific meaning of the pointer depends on
@@ -912,3 +923,137 @@ pub fn relaxed(&self) -> &RelaxedMmio<T> {
readq_relaxed,
writeq_relaxed
);
+
+/// A view into an I/O region.
+///
+/// # Invariants
+///
+/// - `ptr` is aligned for `T`
+/// - `ptr` has same provenance as `io.as_ptr()`
+/// - `ptr.byte_offset_from(io.as_ptr())` is between 0 and
+/// `KnownSize::size(io.as_ptr()) - KnownSize::size(ptr)`.
+///
+/// These invariants are trivially satisfied if the pointer is created via pointer projection.
+pub struct View<'a, IO: ?Sized, T: ?Sized> {
+ io: &'a IO,
+ ptr: *mut T,
+}
+
+impl<'a, IO: ?Sized, T: ?Sized> View<'a, IO, T> {
+ // For `io_project!` macro use only.
+ #[doc(hidden)]
+ #[inline]
+ pub fn as_view(&self) -> Self {
+ *self
+ }
+
+ /// Create a view of a provided I/O region.
+ ///
+ /// # Safety
+ ///
+ /// `ptr` must satisfy the invariants of the view type.
+ #[inline]
+ pub unsafe fn new_unchecked(io: &'a IO, ptr: *mut T) -> Self {
+ // INVARIANT: Per function safety requirement.
+ Self { io, ptr }
+ }
+
+ /// Obtain the underlying I/O region.
+ #[inline]
+ pub fn io(self) -> &'a IO {
+ self.io
+ }
+
+ /// Obtain a pointer to the subview.
+ ///
+ /// The interpretation of the pointer depends on the underlying I/O region.
+ #[inline]
+ pub fn as_ptr(self) -> *mut T {
+ self.ptr
+ }
+}
+
+impl<IO: ?Sized, T: ?Sized> Clone for View<'_, IO, T> {
+ #[inline]
+ fn clone(&self) -> Self {
+ *self
+ }
+}
+
+impl<IO: ?Sized, T: ?Sized> Copy for View<'_, IO, T> {}
+
+impl<'a, IO: ?Sized, T: ?Sized> View<'a, IO, T> {
+ /// Try to convert this view into a different typed I/O view.
+ ///
+ /// The target type must be the same size as or smaller than the current type, and the current view must
+ /// be properly aligned for the target type.
+ #[inline]
+ pub fn try_cast<U>(self) -> Result<View<'a, IO, U>>
+ where
+ T: KnownSize + FromBytes + AsBytes,
+ U: FromBytes + AsBytes,
+ {
+ if size_of::<U>() > KnownSize::size(self.ptr) {
+ return Err(EINVAL);
+ }
+
+ if self.ptr.addr() % align_of::<U>() != 0 {
+ return Err(EINVAL);
+ }
+
+ // INVARIANT: We have checked bounds and alignment.
+ Ok(View {
+ io: self.io,
+ ptr: self.ptr.cast(),
+ })
+ }
+}
+
+/// Project an I/O type to a subview of it.
+///
+/// The syntax is of the form `io_project!(io, proj)`, where `io` is an expression evaluating to a
+/// type that implements [`Io`] and `proj` is a [projection specification](kernel::ptr::project!).
+///
+/// In addition to projecting from [`Io`], you may also project from a [`View`] of an [`Io`].
+///
+/// # Examples
+///
+/// ```
+/// use kernel::io::{
+/// io_project,
+/// Mmio,
+/// View,
+/// };
+/// struct MyStruct { field: u32, }
+///
+/// // SAFETY: All bit patterns are acceptable values for `MyStruct`.
+/// unsafe impl kernel::transmute::FromBytes for MyStruct{};
+/// // SAFETY: Instances of `MyStruct` have no uninitialized portions.
+/// unsafe impl kernel::transmute::AsBytes for MyStruct{};
+///
+/// # fn test(mmio: &Mmio<[MyStruct]>) -> Result {
+/// // let mmio: Mmio<[MyStruct]>;
+/// let field: View<'_, _, u32> = io_project!(mmio, [try: 1].field);
+/// let whole: View<'_, _, MyStruct> = io_project!(mmio, [try: 2]);
+/// let nested: View<'_, Mmio<_>, u32> = io_project!(whole, .field);
+/// # Ok::<(), Error>(()) }
+/// ```
+#[macro_export]
+#[doc(hidden)]
+macro_rules! io_project {
+ ($io:expr, $($proj:tt)*) => {{
+ // Bring `as_view` into scope.
+ use $crate::io::Io as _;
+
+ // Convert IO to view for unified handling.
+ // This also takes advantage of deref coercion.
+ let view: $crate::io::View<'_, _, _> = $io.as_view();
+ let ptr = $crate::ptr::project!(
+ mut view.as_ptr(), $($proj)*
+ );
+ // SAFETY: projection of a projection is still a valid projection.
+ unsafe { $crate::io::View::new_unchecked(view.io(), ptr) }
+ }};
+}
+
+#[doc(inline)]
+pub use crate::io_project;
--
2.51.2
^ permalink raw reply related [flat|nested] 24+ messages in thread
* Claude review: rust: io: add view type
2026-04-21 14:56 ` [PATCH v2 06/11] rust: io: add view type Gary Guo
@ 2026-04-22 22:25 ` Claude Code Review Bot
0 siblings, 0 replies; 24+ messages in thread
From: Claude Code Review Bot @ 2026-04-22 22:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Introduces `View<'a, IO, T>` -- the core new abstraction. A view borrows an I/O region and holds a pointer to a sub-region within it.
The invariants are well-stated:
```rust
/// - `ptr` is aligned for `T`
/// - `ptr` has same provenance as `io.as_ptr()`
/// - `ptr.byte_offset_from(io.as_ptr())` is between 0 to
/// `KnownSize::size(io.as_ptr()) - KnownSize::size(ptr)`.
```
The `io_project!` macro leverages the existing `project!` pointer projection infrastructure and wraps the result in a `View`, which is clean.
The `try_cast` method requires `FromBytes + AsBytes` bounds, which correctly ensures safe reinterpretation. It checks both size and alignment:
```rust
if size_of::<U>() > KnownSize::size(self.ptr) {
return Err(EINVAL);
}
if self.ptr.addr() % align_of::<U>() != 0 {
return Err(EINVAL);
}
```
One design note: `View` is `Copy`, which is appropriate since it's just a reference + pointer pair, but means it can't enforce unique access. This is fine since I/O is inherently shared and the safety contracts are on the callers.
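The reference-plus-pointer shape described above can be illustrated with a minimal standalone sketch (plain Rust, no kernel crate; all names are illustrative, not the patch's API):

```rust
// Minimal stand-in for the patch's `View`: a shared borrow of a region plus
// a raw pointer into it.
struct Region {
    data: [u32; 4],
}

struct View<'a, T> {
    _region: &'a Region,
    ptr: *const T,
}

// Manual impls mirror the patch: `#[derive(Clone, Copy)]` would add an
// unwanted `T: Copy` bound even though only the pointer is copied.
impl<'a, T> Clone for View<'a, T> {
    fn clone(&self) -> Self {
        *self
    }
}
impl<'a, T> Copy for View<'a, T> {}

fn view_elem(region: &Region, index: usize) -> View<'_, u32> {
    View {
        _region: region,
        ptr: &region.data[index],
    }
}

fn read(view: View<'_, u32>) -> u32 {
    // SAFETY: `ptr` points into `_region.data`, which outlives `view`.
    unsafe { *view.ptr }
}

fn main() {
    let region = Region { data: [10, 20, 30, 40] };
    let v = view_elem(&region, 2);
    let w = v; // `Copy`: copying a view does not invalidate the original.
    assert_eq!(read(v) + read(w), 60);
}
```

As the review notes, `Copy` here is cheap duplication of a borrow and a pointer; uniqueness of access is deliberately not tracked by the type.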
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v2 07/11] rust: dma: add methods to unsafely create reference from subview
2026-04-21 14:56 [PATCH v2 00/11] rust: I/O type generalization and projection Gary Guo
` (5 preceding siblings ...)
2026-04-21 14:56 ` [PATCH v2 06/11] rust: io: add view type Gary Guo
@ 2026-04-21 14:56 ` Gary Guo
2026-04-22 22:25 ` Claude review: " Claude Code Review Bot
2026-04-21 14:56 ` [PATCH v2 08/11] rust: io: add `read_val` and `write_val` function on I/O view Gary Guo
` (4 subsequent siblings)
11 siblings, 1 reply; 24+ messages in thread
From: Gary Guo @ 2026-04-21 14:56 UTC (permalink / raw)
To: Greg Kroah-Hartman, Rafael J. Wysocki, Danilo Krummrich,
Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Daniel Almeida, Bjorn Helgaas, Krzysztof Wilczyński,
Abdiel Janulgue, Robin Murphy, Alexandre Courbot, David Airlie,
Simona Vetter
Cc: driver-core, rust-for-linux, linux-kernel, linux-pci, nouveau,
dri-devel
Implement `Io` for `Coherent`, so that `dma::Coherent` can now be used
with I/O projections.
This allows the `as_ref()` and `as_mut()` APIs to be used on a region
smaller than the whole DMA allocation. For example, if a ring buffer is
shared between the GPU and the CPU, users may now use the `io_project!`
API to obtain a view of the part of the buffer that is uniquely owned by
the CPU and get a reference to it.
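A standalone sketch of that usage pattern (illustrative only; `SharedBuf`, `cpu_half` and `view_as_ref` are made-up names, not the kernel's `dma::Coherent` API):

```rust
// A buffer shared with a device where, by protocol, the first half is
// exclusively owned by the CPU. Handing out a reference to just that
// sub-range is then sound even though the whole allocation is shared.
struct SharedBuf {
    bytes: [u8; 8],
}

struct View<'a, T: ?Sized> {
    _buf: &'a SharedBuf,
    ptr: *const T,
}

fn cpu_half(buf: &SharedBuf) -> View<'_, [u8]> {
    View {
        _buf: buf,
        ptr: &buf.bytes[..4],
    }
}

/// # Safety
///
/// The caller asserts nothing else reads or writes the viewed range while
/// the returned reference is live (the role `as_ref` plays in the patch).
unsafe fn view_as_ref<'a, T: ?Sized>(view: View<'a, T>) -> &'a T {
    // SAFETY: validity and alignment hold by construction; aliasing is the
    // caller's responsibility per this function's safety contract.
    unsafe { &*view.ptr }
}

fn main() {
    let buf = SharedBuf { bytes: [1, 2, 3, 4, 5, 6, 7, 8] };
    let half = cpu_half(&buf);
    // SAFETY: `buf` is a local; nothing else accesses it concurrently.
    let cpu_bytes = unsafe { view_as_ref(half) };
    assert_eq!(cpu_bytes, &[1u8, 2, 3, 4][..]);
}
```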
Signed-off-by: Gary Guo <gary@garyguo.net>
---
rust/kernel/dma.rs | 54 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 54 insertions(+)
diff --git a/rust/kernel/dma.rs b/rust/kernel/dma.rs
index 642ccff465c8..92db0e58f364 100644
--- a/rust/kernel/dma.rs
+++ b/rust/kernel/dma.rs
@@ -14,6 +14,7 @@
},
error::to_result,
fs::file,
+ io::Io,
prelude::*,
ptr::KnownSize,
sync::aref::ARef,
@@ -989,6 +990,59 @@ unsafe impl<T: KnownSize + Send + ?Sized> Send for Coherent<T> {}
// The safe methods only return metadata or raw pointers whose use requires `unsafe`.
unsafe impl<T: KnownSize + ?Sized + AsBytes + FromBytes + Sync> Sync for Coherent<T> {}
+impl<T: ?Sized + KnownSize> Io for Coherent<T> {
+ type Type = T;
+
+ #[inline]
+ fn as_ptr(&self) -> *mut Self::Type {
+ self.as_mut_ptr()
+ }
+}
+
+impl<'a, B: ?Sized + KnownSize, T: ?Sized> crate::io::View<'a, Coherent<B>, T> {
+ /// Returns a DMA handle which may be given to the device as the DMA address base of
+ /// the region.
+ #[inline]
+ pub fn dma_handle(&self) -> DmaAddress {
+ let base = self.io();
+ let offset = self.as_ptr().addr() - base.as_ptr().addr();
+ // CAST: The offset DMA address can never overflow.
+ base.dma_handle() + offset as DmaAddress
+ }
+
+ /// Returns a reference to the data in the region.
+ ///
+ /// # Safety
+ ///
+ /// * Callers must ensure that the device does not read/write to/from memory while the returned
+ /// reference is live.
+ /// * Callers must ensure that this call does not race with a write to the same region while
+ /// the returned reference is live.
+ #[inline]
+ pub unsafe fn as_ref(self) -> &'a T {
+ let ptr = self.as_ptr();
+ // SAFETY: pointer is aligned and valid per type invariant of `View`. Aliasing rule is
+ // satisfied per safety requirement.
+ unsafe { &*ptr }
+ }
+
+ /// Returns a mutable reference to the data in the region.
+ ///
+ /// # Safety
+ ///
+ /// * Callers must ensure that the device does not read/write to/from memory while the returned
+ /// reference is live.
+ /// * Callers must ensure that this call does not race with a read or write to the same region
+ /// while the returned reference is live.
+ #[inline]
+ pub unsafe fn as_mut(self) -> &'a mut T {
+ let ptr = self.as_ptr();
+ // SAFETY: pointer is aligned and valid per type invariant of `View`. Aliasing rule is
+ // satisfied per safety requirement.
+ unsafe { &mut *ptr }
+ }
+}
+
impl<T: KnownSize + AsBytes + ?Sized> debugfs::BinaryWriter for Coherent<T> {
fn write_to_slice(
&self,
--
2.51.2
^ permalink raw reply related [flat|nested] 24+ messages in thread
* Claude review: rust: dma: add methods to unsafely create reference from subview
2026-04-21 14:56 ` [PATCH v2 07/11] rust: dma: add methods to unsafely create reference from subview Gary Guo
@ 2026-04-22 22:25 ` Claude Code Review Bot
0 siblings, 0 replies; 24+ messages in thread
From: Claude Code Review Bot @ 2026-04-22 22:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Implements `Io` for `Coherent<T>` and adds `as_ref`/`as_mut`/`dma_handle` methods on `View<'_, Coherent<B>, T>`.
The safety contracts on `as_ref`/`as_mut` are appropriate:
```rust
/// * Callers must ensure that the device does not read/write to/from memory
/// * Callers must ensure that this call does not race with a write
```
Minor note: the safety comment for `as_ref` ("pointer is aligned and valid per type invariant of `View`") is accurate given the stated `View` invariants.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v2 08/11] rust: io: add `read_val` and `write_val` function on I/O view
2026-04-21 14:56 [PATCH v2 00/11] rust: I/O type generalization and projection Gary Guo
` (6 preceding siblings ...)
2026-04-21 14:56 ` [PATCH v2 07/11] rust: dma: add methods to unsafely create reference from subview Gary Guo
@ 2026-04-21 14:56 ` Gary Guo
2026-04-22 22:25 ` Claude review: " Claude Code Review Bot
2026-04-21 14:56 ` [PATCH v2 09/11] gpu: nova-core: use I/O projection for cleaner encapsulation Gary Guo
` (3 subsequent siblings)
11 siblings, 1 reply; 24+ messages in thread
From: Gary Guo @ 2026-04-21 14:56 UTC (permalink / raw)
To: Greg Kroah-Hartman, Rafael J. Wysocki, Danilo Krummrich,
Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Daniel Almeida, Bjorn Helgaas, Krzysztof Wilczyński,
Abdiel Janulgue, Robin Murphy, Alexandre Courbot, David Airlie,
Simona Vetter
Cc: driver-core, rust-for-linux, linux-kernel, linux-pci, nouveau,
dri-devel
When a view is narrowed to a single primitive, these functions provide a
way to access it using the `IoCapable` trait. They are used to provide the
`io_read!` and `io_write!` macros, which are generalized versions of the
current `dma_read!` and `dma_write!` macros (which work on `Coherent`
only, not on subviews of it or on other I/O backends).
For DMA coherent objects, `IoCapable` is only implemented for atomically
accessible primitives, because `read_volatile`/`write_volatile` can behave
undesirably for aggregates: LLVM may split an access into multiple
instructions that read parts and reassemble them, or combine several
accesses into a single instruction.
The ability to read/write aggregates (when atomicity is of no concern)
could be implemented in the future with copying primitives (e.g.
memcpy_{from,to}io).
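The primitive-only restriction can be seen with plain `core::ptr` volatile accesses (a standalone sketch, not kernel code):

```rust
use core::ptr;

// Volatile accesses to primitive integers compile to single loads/stores
// on common targets, which is why the patch implements `IoCapable` for
// `Coherent` only on u8/u16/u32 (plus u64 on 64-bit). For aggregates the
// compiler may split or merge the accesses, so they are excluded.
fn volatile_roundtrip(slot: &mut u32, value: u32) -> u32 {
    let p: *mut u32 = slot;
    // SAFETY: `p` is derived from a live `&mut u32`: valid, aligned, and
    // (in this single-threaded sketch) free of data races.
    unsafe {
        ptr::write_volatile(p, value);
        ptr::read_volatile(p)
    }
}

fn main() {
    let mut slot = 0u32;
    assert_eq!(volatile_roundtrip(&mut slot, 0xdead_beef), 0xdead_beef);
    assert_eq!(slot, 0xdead_beef);
}
```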
Signed-off-by: Gary Guo <gary@garyguo.net>
---
rust/kernel/dma.rs | 46 ++++++++++++++++++++++++++-
rust/kernel/io.rs | 92 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 137 insertions(+), 1 deletion(-)
diff --git a/rust/kernel/dma.rs b/rust/kernel/dma.rs
index 92db0e58f364..d0b86aeebfe2 100644
--- a/rust/kernel/dma.rs
+++ b/rust/kernel/dma.rs
@@ -14,7 +14,10 @@
},
error::to_result,
fs::file,
- io::Io,
+ io::{
+ Io,
+ IoCapable, //
+ },
prelude::*,
ptr::KnownSize,
sync::aref::ARef,
@@ -999,6 +1002,47 @@ fn as_ptr(&self) -> *mut Self::Type {
}
}
+/// Implements [`IoCapable`] on `Coherent` for `$ty` using `read_volatile` and `write_volatile`.
+macro_rules! impl_coherent_io_capable {
+ ($(#[$attr:meta])* $ty:ty) => {
+ $(#[$attr])*
+ impl<T: ?Sized + KnownSize> IoCapable<$ty> for Coherent<T> {
+ #[inline]
+ unsafe fn io_read(&self, address: *mut $ty) -> $ty {
+ // SAFETY:
+ // - By the safety precondition, the address is within bounds of the allocation and
+ // aligned.
+ // - Using read_volatile() here so that race with hardware is well-defined.
+ // - Using read_volatile() here is not sound if it races with other CPU per Rust
+ // rules, but this is allowed per LKMM.
+ // - The macro is only used on primitives so all bit patterns are valid.
+ unsafe { address.read_volatile() }
+ }
+
+ #[inline]
+ unsafe fn io_write(&self, value: $ty, address: *mut $ty) {
+ // SAFETY:
+ // - By the safety precondition, the address is within bounds of the allocation and
+ // aligned.
+ // - Using write_volatile() here so that race with hardware is well-defined.
+ // - Using write_volatile() here is not sound if it races with other CPU per Rust
+ // rules, but this is allowed per LKMM.
+ unsafe { address.write_volatile(value) }
+ }
+ }
+ };
+}
+
+// DMA regions support atomic 8, 16, and 32-bit accesses.
+impl_coherent_io_capable!(u8);
+impl_coherent_io_capable!(u16);
+impl_coherent_io_capable!(u32);
+// DMA regions on 64-bit systems also support atomic 64-bit accesses.
+impl_coherent_io_capable!(
+ #[cfg(CONFIG_64BIT)]
+ u64
+);
+
impl<'a, B: ?Sized + KnownSize, T: ?Sized> crate::io::View<'a, Coherent<B>, T> {
/// Returns a DMA handle which may be given to the device as the DMA address base of
/// the region.
diff --git a/rust/kernel/io.rs b/rust/kernel/io.rs
index 869071d47a13..efcd7e6741d7 100644
--- a/rust/kernel/io.rs
+++ b/rust/kernel/io.rs
@@ -1009,6 +1009,26 @@ pub fn try_cast<U>(self) -> Result<View<'a, IO, U>>
}
}
+impl<T, IO: ?Sized + Io + IoCapable<T>> View<'_, IO, T> {
+ /// Read from I/O memory.
+ ///
+ /// This is only supported on types that are directly supported by `IO` with [`IoCapable`].
+ #[inline]
+ pub fn read_val(&self) -> T {
+ // SAFETY: Per type invariant.
+ unsafe { self.io.io_read(self.ptr) }
+ }
+
+ /// Write to I/O memory.
+ ///
+ /// This is only supported on types that are directly supported by `IO` with [`IoCapable`].
+ #[inline]
+ pub fn write_val(&self, value: T) {
+ // SAFETY: Per type invariant.
+ unsafe { self.io.io_write(value, self.ptr) }
+ }
+}
+
/// Project an I/O type to a subview of it.
///
/// The syntax is of form `io_project!(io, proj)` where `io` is an expression to a type that
@@ -1057,3 +1077,75 @@ macro_rules! io_project {
#[doc(inline)]
pub use crate::io_project;
+
+/// Read from I/O memory.
+///
+/// The syntax is of the form `io_read!(io, proj)`, where `io` is an expression evaluating to a
+/// type that implements [`Io`] and `proj` is a [projection specification](kernel::ptr::project!).
+///
+/// # Examples
+///
+/// ```
+/// # use kernel::io::View;
+/// struct MyStruct { field: u32, }
+///
+/// // SAFETY: All bit patterns are acceptable values for `MyStruct`.
+/// unsafe impl kernel::transmute::FromBytes for MyStruct{};
+/// // SAFETY: Instances of `MyStruct` have no uninitialized portions.
+/// unsafe impl kernel::transmute::AsBytes for MyStruct{};
+///
+/// # fn test(mmio: &kernel::io::Mmio<[MyStruct]>) -> Result {
+/// // let mmio: Mmio<[MyStruct]>;
+/// let field: u32 = kernel::io::io_read!(mmio, [try: 2].field);
+/// # Ok::<(), Error>(()) }
+/// ```
+#[macro_export]
+#[doc(hidden)]
+macro_rules! io_read {
+ ($io:expr, $($proj:tt)*) => {
+ $crate::io_project!($io, $($proj)*).read_val()
+ };
+}
+
+#[doc(inline)]
+pub use crate::io_read;
+
+/// Writes to I/O memory.
+///
+/// The syntax is of the form `io_write!(io, proj, val)`, where `io` is an expression evaluating
+/// to a type that implements [`Io`], `proj` is a [projection specification](kernel::ptr::project!),
+/// and `val` is the value to be written to the projected location.
+///
+/// # Examples
+///
+/// ```
+/// # use kernel::io::View;
+/// struct MyStruct { field: u32, }
+///
+/// // SAFETY: All bit patterns are acceptable values for `MyStruct`.
+/// unsafe impl kernel::transmute::FromBytes for MyStruct{};
+/// // SAFETY: Instances of `MyStruct` have no uninitialized portions.
+/// unsafe impl kernel::transmute::AsBytes for MyStruct{};
+///
+/// # fn test(mmio: &kernel::io::Mmio<[MyStruct]>) -> Result {
+/// // let mmio: Mmio<[MyStruct]>;
+/// kernel::io::io_write!(mmio, [try: 2].field, 10);
+/// # Ok::<(), Error>(()) }
+/// ```
+#[macro_export]
+#[doc(hidden)]
+macro_rules! io_write {
+ (@parse [$io:expr] [$($proj:tt)*] [, $val:expr]) => {
+ $crate::io_project!($io, $($proj)*).write_val($val)
+ };
+ (@parse [$io:expr] [$($proj:tt)*] [.$field:tt $($rest:tt)*]) => {
+ $crate::io_write!(@parse [$io] [$($proj)* .$field] [$($rest)*])
+ };
+ (@parse [$io:expr] [$($proj:tt)*] [[$flavor:ident: $index:expr] $($rest:tt)*]) => {
+ $crate::io_write!(@parse [$io] [$($proj)* [$flavor: $index]] [$($rest)*])
+ };
+ ($io:expr, $($rest:tt)*) => {
+ $crate::io_write!(@parse [$io] [] [$($rest)*])
+ };
+}
+
+#[doc(inline)]
+pub use crate::io_write;
--
2.51.2
^ permalink raw reply related [flat|nested] 24+ messages in thread
* Claude review: rust: io: add `read_val` and `write_val` function on I/O view
2026-04-21 14:56 ` [PATCH v2 08/11] rust: io: add `read_val` and `write_val` function on I/O view Gary Guo
@ 2026-04-22 22:25 ` Claude Code Review Bot
0 siblings, 0 replies; 24+ messages in thread
From: Claude Code Review Bot @ 2026-04-22 22:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Adds `io_read!` and `io_write!` macros built on `View::read_val`/`write_val` which call through `IoCapable`. Also implements `IoCapable` for `Coherent` for primitive types (u8, u16, u32, and u64 on 64-bit).
The `io_write!` macro uses the same `@parse` recursive pattern as the old `dma_write!` to separate the projection from the value argument. This is a known Rust macro technique (a "tt-muncher") for resolving the ambiguity between the projection tokens and the trailing value expression.
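The `@parse` trick can be demonstrated standalone: a tt-muncher moves projection tokens into an accumulator until it reaches the comma that introduces the value, then emits both parts. The toy macro below only mimics the shape of the kernel's `io_write!`; it is not the kernel macro itself:

```rust
// Toy tt-muncher: split `.a.b[i], value` into (projection, value).
macro_rules! split_proj {
    // Terminal case: a leading comma means the remainder is the value.
    (@parse [$($proj:tt)*] [, $val:expr]) => {
        (stringify!($($proj)*), $val)
    };
    // Munch one `.field` component into the accumulator.
    (@parse [$($proj:tt)*] [.$field:tt $($rest:tt)*]) => {
        split_proj!(@parse [$($proj)* .$field] [$($rest)*])
    };
    // Munch one `[index]` component into the accumulator.
    (@parse [$($proj:tt)*] [[$index:expr] $($rest:tt)*]) => {
        split_proj!(@parse [$($proj)* [$index]] [$($rest)*])
    };
    // Entry point: start with an empty accumulator.
    ($($input:tt)*) => {
        split_proj!(@parse [] [$($input)*])
    };
}

fn main() {
    let (proj, val) = split_proj!(.cpuq.tx[0], 10);
    assert_eq!(val, 10);
    // The exact spacing of `stringify!` output is unspecified; just check
    // that the components survived the munching.
    assert!(proj.contains("cpuq") && proj.contains("tx") && proj.contains("0"));
}
```

The `@parse` prefix keeps the internal munching rules from colliding with the public entry arm, which is why the kernel macro uses the same marker.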
The safety comments for the DMA `IoCapable` implementation appropriately document the LKMM exception:
```rust
// - Using read_volatile() here is not sound if it races with other CPU per Rust
// rules, but this is allowed per LKMM.
```
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v2 09/11] gpu: nova-core: use I/O projection for cleaner encapsulation
2026-04-21 14:56 [PATCH v2 00/11] rust: I/O type generalization and projection Gary Guo
` (7 preceding siblings ...)
2026-04-21 14:56 ` [PATCH v2 08/11] rust: io: add `read_val` and `write_val` function on I/O view Gary Guo
@ 2026-04-21 14:56 ` Gary Guo
2026-04-22 22:25 ` Claude review: " Claude Code Review Bot
2026-04-21 14:56 ` [PATCH v2 10/11] rust: dma: drop `dma_read!` and `dma_write!` API Gary Guo
` (2 subsequent siblings)
11 siblings, 1 reply; 24+ messages in thread
From: Gary Guo @ 2026-04-21 14:56 UTC (permalink / raw)
To: Greg Kroah-Hartman, Rafael J. Wysocki, Danilo Krummrich,
Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Daniel Almeida, Bjorn Helgaas, Krzysztof Wilczyński,
Abdiel Janulgue, Robin Murphy, Alexandre Courbot, David Airlie,
Simona Vetter
Cc: driver-core, rust-for-linux, linux-kernel, linux-pci, nouveau,
dri-devel
Use `io_project!` for the PTE array and the message queues to remove the
breakage of encapsulation.
The remaining `dma_read!` and `dma_write!` uses now only act on
primitives; replace them with `io_read!` and `io_write!`.
Signed-off-by: Gary Guo <gary@garyguo.net>
---
drivers/gpu/nova-core/gsp.rs | 44 +++++++++++---------
drivers/gpu/nova-core/gsp/cmdq.rs | 65 +++++++++++++++++-------------
drivers/gpu/nova-core/gsp/fw.rs | 84 +++++++++++++++------------------------
3 files changed, 94 insertions(+), 99 deletions(-)
diff --git a/drivers/gpu/nova-core/gsp.rs b/drivers/gpu/nova-core/gsp.rs
index ba5b7f990031..225a77a2f464 100644
--- a/drivers/gpu/nova-core/gsp.rs
+++ b/drivers/gpu/nova-core/gsp.rs
@@ -10,8 +10,14 @@
CoherentBox,
DmaAddress, //
},
+ io::{
+ self,
+ io_project,
+ io_write, //
+ },
pci,
prelude::*,
+ ptr::KnownSize,
transmute::{
AsBytes,
FromBytes, //
@@ -55,12 +61,20 @@ unsafe impl<const NUM_ENTRIES: usize> FromBytes for PteArray<NUM_ENTRIES> {}
unsafe impl<const NUM_ENTRIES: usize> AsBytes for PteArray<NUM_ENTRIES> {}
impl<const NUM_PAGES: usize> PteArray<NUM_PAGES> {
- /// Returns the page table entry for `index`, for a mapping starting at `start`.
- // TODO: Replace with `IoView` projection once available.
- fn entry(start: DmaAddress, index: usize) -> Result<u64> {
- start
- .checked_add(num::usize_as_u64(index) << GSP_PAGE_SHIFT)
- .ok_or(EOVERFLOW)
+ /// Initialize a new page table array mapping `NUM_PAGES` GSP pages starting at address `start`.
+ fn init<T: KnownSize + ?Sized>(
+ view: io::View<'_, Coherent<T>, Self>,
+ start: DmaAddress,
+ ) -> Result<()> {
+ for i in 0..NUM_PAGES {
+ io_write!(view, .0[build: i],
+ start
+ .checked_add(num::usize_as_u64(i) << GSP_PAGE_SHIFT)
+ .ok_or(EOVERFLOW)?
+ );
+ }
+
+ Ok(())
}
}
@@ -86,18 +100,12 @@ fn new(dev: &device::Device<device::Bound>) -> Result<Self> {
let obj = Self(Coherent::zeroed(dev, GFP_KERNEL)?);
let start_addr = obj.0.dma_handle();
-
- // SAFETY: `obj` has just been created and we are its sole user.
- let pte_region = unsafe {
- &mut obj.0.as_mut()[size_of::<u64>()..][..RM_LOG_BUFFER_NUM_PAGES * size_of::<u64>()]
- };
-
- // Write values one by one to avoid an on-stack instance of `PteArray`.
- for (i, chunk) in pte_region.chunks_exact_mut(size_of::<u64>()).enumerate() {
- let pte_value = PteArray::<0>::entry(start_addr, i)?;
-
- chunk.copy_from_slice(&pte_value.to_ne_bytes());
- }
+ let pte_view = io_project!(
+ obj.0,
+ [build: size_of::<u64>()..][build: ..RM_LOG_BUFFER_NUM_PAGES * size_of::<u64>()]
+ )
+ .try_cast::<PteArray<RM_LOG_BUFFER_NUM_PAGES>>()?;
+ PteArray::init(pte_view, start_addr)?;
Ok(obj)
}
diff --git a/drivers/gpu/nova-core/gsp/cmdq.rs b/drivers/gpu/nova-core/gsp/cmdq.rs
index 0b51e10e2cfc..d424d047c970 100644
--- a/drivers/gpu/nova-core/gsp/cmdq.rs
+++ b/drivers/gpu/nova-core/gsp/cmdq.rs
@@ -2,16 +2,23 @@
mod continuation;
-use core::mem;
+use core::{
+ mem,
+ sync::atomic::{
+ fence,
+ Ordering, //
+ },
+};
use kernel::{
device,
dma::{
Coherent,
+ CoherentBox,
DmaAddress, //
},
- dma_write,
io::{
+ io_project,
poll::read_poll_timeout,
Io, //
},
@@ -171,20 +178,18 @@ struct MsgqData {
#[repr(C)]
// There is no struct defined for this in the open-gpu-kernel-source headers.
// Instead it is defined by code in `GspMsgQueuesInit()`.
-// TODO: Revert to private once `IoView` projections replace the `gsp_mem` module.
-pub(super) struct Msgq {
+struct Msgq {
/// Header for sending messages, including the write pointer.
- pub(super) tx: MsgqTxHeader,
+ tx: MsgqTxHeader,
/// Header for receiving messages, including the read pointer.
- pub(super) rx: MsgqRxHeader,
+ rx: MsgqRxHeader,
/// The message queue proper.
msgq: MsgqData,
}
/// Structure shared between the driver and the GSP and containing the command and message queues.
#[repr(C)]
-// TODO: Revert to private once `IoView` projections replace the `gsp_mem` module.
-pub(super) struct GspMem {
+struct GspMem {
/// Self-mapping page table entries.
ptes: PteArray<{ Self::PTE_ARRAY_SIZE }>,
/// CPU queue: the driver writes commands here, and the GSP reads them. It also contains the
@@ -192,13 +197,13 @@ pub(super) struct GspMem {
/// index into the GSP queue.
///
/// This member is read-only for the GSP.
- pub(super) cpuq: Msgq,
+ cpuq: Msgq,
/// GSP queue: the GSP writes messages here, and the driver reads them. It also contains the
/// write and read pointers that the GSP updates. This means that the read pointer here is an
/// index into the CPU queue.
///
/// This member is read-only for the driver.
- pub(super) gspq: Msgq,
+ gspq: Msgq,
}
impl GspMem {
@@ -232,20 +237,13 @@ fn new(dev: &device::Device<device::Bound>) -> Result<Self> {
const MSGQ_SIZE: u32 = num::usize_into_u32::<{ size_of::<Msgq>() }>();
const RX_HDR_OFF: u32 = num::usize_into_u32::<{ mem::offset_of!(Msgq, rx) }>();
- let gsp_mem = Coherent::<GspMem>::zeroed(dev, GFP_KERNEL)?;
+ let mut gsp_mem = CoherentBox::<GspMem>::zeroed(dev, GFP_KERNEL)?;
+ gsp_mem.cpuq.tx = MsgqTxHeader::new(MSGQ_SIZE, RX_HDR_OFF, MSGQ_NUM_PAGES);
+ gsp_mem.cpuq.rx = MsgqRxHeader::new();
+ let gsp_mem: Coherent<_> = gsp_mem.into();
let start = gsp_mem.dma_handle();
- // Write values one by one to avoid an on-stack instance of `PteArray`.
- for i in 0..GspMem::PTE_ARRAY_SIZE {
- dma_write!(gsp_mem, .ptes.0[build: i], PteArray::<0>::entry(start, i)?);
- }
-
- dma_write!(
- gsp_mem,
- .cpuq.tx,
- MsgqTxHeader::new(MSGQ_SIZE, RX_HDR_OFF, MSGQ_NUM_PAGES)
- );
- dma_write!(gsp_mem, .cpuq.rx, MsgqRxHeader::new());
+ PteArray::init(io_project!(gsp_mem, .ptes), start)?;
Ok(Self(gsp_mem))
}
@@ -420,7 +418,7 @@ fn allocate_command(&mut self, size: usize, timeout: Delta) -> Result<GspCommand
//
// - The returned value is within `0..MSGQ_NUM_PAGES`.
fn gsp_write_ptr(&self) -> u32 {
- super::fw::gsp_mem::gsp_write_ptr(&self.0)
+ MsgqTxHeader::write_ptr(io_project!(self.0, .gspq.tx)) % MSGQ_NUM_PAGES
}
// Returns the index of the memory page the GSP will read the next command from.
@@ -429,7 +427,7 @@ fn gsp_write_ptr(&self) -> u32 {
//
// - The returned value is within `0..MSGQ_NUM_PAGES`.
fn gsp_read_ptr(&self) -> u32 {
- super::fw::gsp_mem::gsp_read_ptr(&self.0)
+ MsgqRxHeader::read_ptr(io_project!(self.0, .gspq.rx)) % MSGQ_NUM_PAGES
}
// Returns the index of the memory page the CPU can read the next message from.
@@ -438,12 +436,18 @@ fn gsp_read_ptr(&self) -> u32 {
//
// - The returned value is within `0..MSGQ_NUM_PAGES`.
fn cpu_read_ptr(&self) -> u32 {
- super::fw::gsp_mem::cpu_read_ptr(&self.0)
+ MsgqRxHeader::read_ptr(io_project!(self.0, .cpuq.rx)) % MSGQ_NUM_PAGES
}
// Informs the GSP that it can send `elem_count` new pages into the message queue.
fn advance_cpu_read_ptr(&mut self, elem_count: u32) {
- super::fw::gsp_mem::advance_cpu_read_ptr(&self.0, elem_count)
+ let rx = io_project!(self.0, .cpuq.rx);
+ let rptr = MsgqRxHeader::read_ptr(rx).wrapping_add(elem_count) % MSGQ_NUM_PAGES;
+
+ // Ensure read pointer is properly ordered.
+ fence(Ordering::SeqCst);
+
+ MsgqRxHeader::set_read_ptr(rx, rptr)
}
// Returns the index of the memory page the CPU can write the next command to.
@@ -452,12 +456,17 @@ fn advance_cpu_read_ptr(&mut self, elem_count: u32) {
//
// - The returned value is within `0..MSGQ_NUM_PAGES`.
fn cpu_write_ptr(&self) -> u32 {
- super::fw::gsp_mem::cpu_write_ptr(&self.0)
+ MsgqTxHeader::write_ptr(io_project!(self.0, .cpuq.tx)) % MSGQ_NUM_PAGES
}
// Informs the GSP that it can process `elem_count` new pages from the command queue.
fn advance_cpu_write_ptr(&mut self, elem_count: u32) {
- super::fw::gsp_mem::advance_cpu_write_ptr(&self.0, elem_count)
+ let tx = io_project!(self.0, .cpuq.tx);
+ let wptr = MsgqTxHeader::write_ptr(tx).wrapping_add(elem_count) % MSGQ_NUM_PAGES;
+ MsgqTxHeader::set_write_ptr(tx, wptr);
+
+ // Ensure all command data is visible before triggering the GSP read.
+ fence(Ordering::SeqCst);
}
}
diff --git a/drivers/gpu/nova-core/gsp/fw.rs b/drivers/gpu/nova-core/gsp/fw.rs
index 0c8a74f0e8ac..f2ba2f13e415 100644
--- a/drivers/gpu/nova-core/gsp/fw.rs
+++ b/drivers/gpu/nova-core/gsp/fw.rs
@@ -10,6 +10,11 @@
use kernel::{
dma::Coherent,
+ io::{
+ self,
+ io_read,
+ io_write, //
+ },
prelude::*,
ptr::{
Alignable,
@@ -40,59 +45,6 @@
},
};
-// TODO: Replace with `IoView` projections once available.
-pub(super) mod gsp_mem {
- use core::sync::atomic::{
- fence,
- Ordering, //
- };
-
- use kernel::{
- dma::Coherent,
- dma_read,
- dma_write, //
- };
-
- use crate::gsp::cmdq::{
- GspMem,
- MSGQ_NUM_PAGES, //
- };
-
- pub(in crate::gsp) fn gsp_write_ptr(qs: &Coherent<GspMem>) -> u32 {
- dma_read!(qs, .gspq.tx.0.writePtr) % MSGQ_NUM_PAGES
- }
-
- pub(in crate::gsp) fn gsp_read_ptr(qs: &Coherent<GspMem>) -> u32 {
- dma_read!(qs, .gspq.rx.0.readPtr) % MSGQ_NUM_PAGES
- }
-
- pub(in crate::gsp) fn cpu_read_ptr(qs: &Coherent<GspMem>) -> u32 {
- dma_read!(qs, .cpuq.rx.0.readPtr) % MSGQ_NUM_PAGES
- }
-
- pub(in crate::gsp) fn advance_cpu_read_ptr(qs: &Coherent<GspMem>, count: u32) {
- let rptr = cpu_read_ptr(qs).wrapping_add(count) % MSGQ_NUM_PAGES;
-
- // Ensure read pointer is properly ordered.
- fence(Ordering::SeqCst);
-
- dma_write!(qs, .cpuq.rx.0.readPtr, rptr);
- }
-
- pub(in crate::gsp) fn cpu_write_ptr(qs: &Coherent<GspMem>) -> u32 {
- dma_read!(qs, .cpuq.tx.0.writePtr) % MSGQ_NUM_PAGES
- }
-
- pub(in crate::gsp) fn advance_cpu_write_ptr(qs: &Coherent<GspMem>, count: u32) {
- let wptr = cpu_write_ptr(qs).wrapping_add(count) % MSGQ_NUM_PAGES;
-
- dma_write!(qs, .cpuq.tx.0.writePtr, wptr);
-
- // Ensure all command data is visible before triggering the GSP read.
- fence(Ordering::SeqCst);
- }
-}
-
/// Maximum size of a single GSP message queue element in bytes.
pub(crate) const GSP_MSG_QUEUE_ELEMENT_SIZE_MAX: usize =
num::u32_as_usize(bindings::GSP_MSG_QUEUE_ELEMENT_SIZE_MAX);
@@ -706,6 +658,19 @@ pub(crate) fn new(msgq_size: u32, rx_hdr_offset: u32, msg_count: u32) -> Self {
entryOff: num::usize_into_u32::<GSP_PAGE_SIZE>(),
})
}
+
+ /// Returns the value of the write pointer for this queue.
+ pub(crate) fn write_ptr<T: KnownSize + ?Sized>(this: io::View<'_, Coherent<T>, Self>) -> u32 {
+ io_read!(this, .0.writePtr)
+ }
+
+ /// Sets the value of the write pointer for this queue.
+ pub(crate) fn set_write_ptr<T: KnownSize + ?Sized>(
+ this: io::View<'_, Coherent<T>, Self>,
+ val: u32,
+ ) {
+ io_write!(this, .0.writePtr, val)
+ }
}
// SAFETY: Padding is explicit and does not contain uninitialized data.
@@ -721,6 +686,19 @@ impl MsgqRxHeader {
pub(crate) fn new() -> Self {
Self(Default::default())
}
+
+ /// Returns the value of the read pointer for this queue.
+ pub(crate) fn read_ptr<T: KnownSize + ?Sized>(this: io::View<'_, Coherent<T>, Self>) -> u32 {
+ io_read!(this, .0.readPtr)
+ }
+
+ /// Sets the value of the read pointer for this queue.
+ pub(crate) fn set_read_ptr<T: KnownSize + ?Sized>(
+ this: io::View<'_, Coherent<T>, Self>,
+ val: u32,
+ ) {
+ io_write!(this, .0.readPtr, val)
+ }
}
// SAFETY: Padding is explicit and does not contain uninitialized data.
--
2.51.2
^ permalink raw reply related [flat|nested] 24+ messages in thread
* Claude review: gpu: nova-core: use I/O projection for cleaner encapsulation
2026-04-21 14:56 ` [PATCH v2 09/11] gpu: nova-core: use I/O projection for cleaner encapsulation Gary Guo
@ 2026-04-22 22:25 ` Claude Code Review Bot
0 siblings, 0 replies; 24+ messages in thread
From: Claude Code Review Bot @ 2026-04-22 22:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
This is the practical payoff: nova-core's `gsp_mem` module (which was a workaround for accessing shared DMA memory fields) is deleted entirely, and replaced by `io_project!` + `io_read!`/`io_write!` calls. The `GspMem` and `Msgq` structs can become private again, removing the `pub(super)` visibility that was only needed for the workaround module.
The `PteArray::init` method now takes a `View` directly:
```rust
fn init<T: KnownSize + ?Sized>(
view: io::View<'_, Coherent<T>, Self>,
start: DmaAddress,
) -> Result<()> {
```
This is much cleaner than the old slice-based approach. The fence operations are preserved correctly.
One notable change: initialization of `cpuq.tx` and `cpuq.rx` now uses `CoherentBox` (exclusive access) before converting to `Coherent`:
```rust
let mut gsp_mem = CoherentBox::<GspMem>::zeroed(dev, GFP_KERNEL)?;
gsp_mem.cpuq.tx = MsgqTxHeader::new(MSGQ_SIZE, RX_HDR_OFF, MSGQ_NUM_PAGES);
gsp_mem.cpuq.rx = MsgqRxHeader::new();
let gsp_mem: Coherent<_> = gsp_mem.into();
```
This is better than the old `dma_write!` approach since the headers are fully initialized before the DMA region becomes shared.
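The same initialize-exclusively-then-share shape exists in plain Rust with `Box` and `Arc` (a loose analogy for the `CoherentBox` to `Coherent` conversion; the types and names below are illustrative, not the DMA API):

```rust
use std::sync::Arc;

#[derive(Default)]
struct Headers {
    tx: u32,
    rx: u32,
}

fn build() -> Arc<Headers> {
    // Exclusive phase: ordinary `&mut` writes; nothing else can observe
    // the allocation yet, so no volatile access is needed.
    let mut headers = Box::new(Headers::default());
    headers.tx = 42;
    headers.rx = 7;
    // Shared phase: from here on, only the shared-access discipline (the
    // analogue of `io_read!`/`io_write!` in the patch) may touch fields.
    Arc::from(headers)
}

fn main() {
    let shared = build();
    assert_eq!((shared.tx, shared.rx), (42, 7));
}
```

The conversion point is where the ownership story changes, which is exactly what makes the pre-conversion writes race-free by construction.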
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v2 10/11] rust: dma: drop `dma_read!` and `dma_write!` API
2026-04-21 14:56 [PATCH v2 00/11] rust: I/O type generalization and projection Gary Guo
` (8 preceding siblings ...)
2026-04-21 14:56 ` [PATCH v2 09/11] gpu: nova-core: use I/O projection for cleaner encapsulation Gary Guo
@ 2026-04-21 14:56 ` Gary Guo
2026-04-22 22:25 ` Claude review: " Claude Code Review Bot
2026-04-21 14:56 ` [PATCH v2 11/11] rust: io: add copying methods Gary Guo
2026-04-22 22:25 ` Claude review: rust: I/O type generalization and projection Claude Code Review Bot
11 siblings, 1 reply; 24+ messages in thread
From: Gary Guo @ 2026-04-21 14:56 UTC (permalink / raw)
To: Greg Kroah-Hartman, Rafael J. Wysocki, Danilo Krummrich,
Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Daniel Almeida, Bjorn Helgaas, Krzysztof Wilczyński,
Abdiel Janulgue, Robin Murphy, Alexandre Courbot, David Airlie,
Simona Vetter
Cc: driver-core, rust-for-linux, linux-kernel, linux-pci, nouveau,
dri-devel
The primitive read/write use case is covered by the `io_read!` and
`io_write!` macros. The non-primitive use cases were finicky; they should
either be handled using `CoherentBox` or `as_ref()`/`as_mut()` to assert
the lack of concurrent access, or use memcpy-like APIs to express their
non-atomic and tearable nature.
Signed-off-by: Gary Guo <gary@garyguo.net>
---
rust/kernel/dma.rs | 128 -----------------------------------------------
samples/rust/rust_dma.rs | 14 +++---
2 files changed, 8 insertions(+), 134 deletions(-)
diff --git a/rust/kernel/dma.rs b/rust/kernel/dma.rs
index d0b86aeebfe2..bbdeb117c145 100644
--- a/rust/kernel/dma.rs
+++ b/rust/kernel/dma.rs
@@ -658,52 +658,6 @@ pub unsafe fn as_mut(&self) -> &mut T {
// SAFETY: per safety requirement.
unsafe { &mut *self.as_mut_ptr() }
}
-
- /// Reads the value of `field` and ensures that its type is [`FromBytes`].
- ///
- /// # Safety
- ///
- /// This must be called from the [`dma_read`] macro which ensures that the `field` pointer is
- /// validated beforehand.
- ///
- /// Public but hidden since it should only be used from [`dma_read`] macro.
- #[doc(hidden)]
- pub unsafe fn field_read<F: FromBytes>(&self, field: *const F) -> F {
- // SAFETY:
- // - By the safety requirements field is valid.
- // - Using read_volatile() here is not sound as per the usual rules, the usage here is
- // a special exception with the following notes in place. When dealing with a potential
- // race from a hardware or code outside kernel (e.g. user-space program), we need that
- // read on a valid memory is not UB. Currently read_volatile() is used for this, and the
- // rationale behind is that it should generate the same code as READ_ONCE() which the
- // kernel already relies on to avoid UB on data races. Note that the usage of
- // read_volatile() is limited to this particular case, it cannot be used to prevent
- // the UB caused by racing between two kernel functions nor do they provide atomicity.
- unsafe { field.read_volatile() }
- }
-
- /// Writes a value to `field` and ensures that its type is [`AsBytes`].
- ///
- /// # Safety
- ///
- /// This must be called from the [`dma_write`] macro which ensures that the `field` pointer is
- /// validated beforehand.
- ///
- /// Public but hidden since it should only be used from [`dma_write`] macro.
- #[doc(hidden)]
- pub unsafe fn field_write<F: AsBytes>(&self, field: *mut F, val: F) {
- // SAFETY:
- // - By the safety requirements field is valid.
- // - Using write_volatile() here is not sound as per the usual rules, the usage here is
- // a special exception with the following notes in place. When dealing with a potential
- // race from a hardware or code outside kernel (e.g. user-space program), we need that
- // write on a valid memory is not UB. Currently write_volatile() is used for this, and the
- // rationale behind is that it should generate the same code as WRITE_ONCE() which the
- // kernel already relies on to avoid UB on data races. Note that the usage of
- // write_volatile() is limited to this particular case, it cannot be used to prevent
- // the UB caused by racing between two kernel functions nor do they provide atomicity.
- unsafe { field.write_volatile(val) }
- }
}
impl<T: AsBytes + FromBytes> Coherent<T> {
@@ -1230,85 +1184,3 @@ unsafe impl Send for CoherentHandle {}
// operations on `&CoherentHandle` are reading the DMA handle and size, both of which are
// plain `Copy` values.
unsafe impl Sync for CoherentHandle {}
-
-/// Reads a field of an item from an allocated region of structs.
-///
-/// The syntax is of the form `kernel::dma_read!(dma, proj)` where `dma` is an expression evaluating
-/// to a [`Coherent`] and `proj` is a [projection specification](kernel::ptr::project!).
-///
-/// # Examples
-///
-/// ```
-/// use kernel::device::Device;
-/// use kernel::dma::{attrs::*, Coherent};
-///
-/// struct MyStruct { field: u32, }
-///
-/// // SAFETY: All bit patterns are acceptable values for `MyStruct`.
-/// unsafe impl kernel::transmute::FromBytes for MyStruct{};
-/// // SAFETY: Instances of `MyStruct` have no uninitialized portions.
-/// unsafe impl kernel::transmute::AsBytes for MyStruct{};
-///
-/// # fn test(alloc: &kernel::dma::Coherent<[MyStruct]>) -> Result {
-/// let whole = kernel::dma_read!(alloc, [try: 2]);
-/// let field = kernel::dma_read!(alloc, [panic: 1].field);
-/// # Ok::<(), Error>(()) }
-/// ```
-#[macro_export]
-macro_rules! dma_read {
- ($dma:expr, $($proj:tt)*) => {{
- let dma = &$dma;
- let ptr = $crate::ptr::project!(
- $crate::dma::Coherent::as_ptr(dma), $($proj)*
- );
- // SAFETY: The pointer created by the projection is within the DMA region.
- unsafe { $crate::dma::Coherent::field_read(dma, ptr) }
- }};
-}
-
-/// Writes to a field of an item from an allocated region of structs.
-///
-/// The syntax is of the form `kernel::dma_write!(dma, proj, val)` where `dma` is an expression
-/// evaluating to a [`Coherent`], `proj` is a
-/// [projection specification](kernel::ptr::project!), and `val` is the value to be written to the
-/// projected location.
-///
-/// # Examples
-///
-/// ```
-/// use kernel::device::Device;
-/// use kernel::dma::{attrs::*, Coherent};
-///
-/// struct MyStruct { member: u32, }
-///
-/// // SAFETY: All bit patterns are acceptable values for `MyStruct`.
-/// unsafe impl kernel::transmute::FromBytes for MyStruct{};
-/// // SAFETY: Instances of `MyStruct` have no uninitialized portions.
-/// unsafe impl kernel::transmute::AsBytes for MyStruct{};
-///
-/// # fn test(alloc: &kernel::dma::Coherent<[MyStruct]>) -> Result {
-/// kernel::dma_write!(alloc, [try: 2].member, 0xf);
-/// kernel::dma_write!(alloc, [panic: 1], MyStruct { member: 0xf });
-/// # Ok::<(), Error>(()) }
-/// ```
-#[macro_export]
-macro_rules! dma_write {
- (@parse [$dma:expr] [$($proj:tt)*] [, $val:expr]) => {{
- let dma = &$dma;
- let ptr = $crate::ptr::project!(
- mut $crate::dma::Coherent::as_mut_ptr(dma), $($proj)*
- );
- let val = $val;
- // SAFETY: The pointer created by the projection is within the DMA region.
- unsafe { $crate::dma::Coherent::field_write(dma, ptr, val) }
- }};
- (@parse [$dma:expr] [$($proj:tt)*] [.$field:tt $($rest:tt)*]) => {
- $crate::dma_write!(@parse [$dma] [$($proj)* .$field] [$($rest)*])
- };
- (@parse [$dma:expr] [$($proj:tt)*] [[$flavor:ident: $index:expr] $($rest:tt)*]) => {
- $crate::dma_write!(@parse [$dma] [$($proj)* [$flavor: $index]] [$($rest)*])
- };
- ($dma:expr, $($rest:tt)*) => {
- $crate::dma_write!(@parse [$dma] [] [$($rest)*])
- };
-}
diff --git a/samples/rust/rust_dma.rs b/samples/rust/rust_dma.rs
index a2c34bb74273..9250ed6f6673 100644
--- a/samples/rust/rust_dma.rs
+++ b/samples/rust/rust_dma.rs
@@ -8,10 +8,12 @@
device::Core,
dma::{
Coherent,
+ CoherentBox,
DataDirection,
Device,
DmaMask, //
},
+ io::io_read,
page, pci,
prelude::*,
scatterlist::{Owned, SGTable},
@@ -69,11 +71,11 @@ fn probe(pdev: &pci::Device<Core>, _info: &Self::IdInfo) -> impl PinInit<Self, E
// SAFETY: There are no concurrent calls to DMA allocation and mapping primitives.
unsafe { pdev.dma_set_mask_and_coherent(mask)? };
- let ca: Coherent<[MyStruct]> =
- Coherent::zeroed_slice(pdev.as_ref(), TEST_VALUES.len(), GFP_KERNEL)?;
+ let mut ca: CoherentBox<[MyStruct]> =
+ CoherentBox::zeroed_slice(pdev.as_ref(), TEST_VALUES.len(), GFP_KERNEL)?;
for (i, value) in TEST_VALUES.into_iter().enumerate() {
- kernel::dma_write!(ca, [try: i], MyStruct::new(value.0, value.1));
+ ca.init_at(i, MyStruct::new(value.0, value.1))?;
}
let size = 4 * page::PAGE_SIZE;
@@ -83,7 +85,7 @@ fn probe(pdev: &pci::Device<Core>, _info: &Self::IdInfo) -> impl PinInit<Self, E
Ok(try_pin_init!(Self {
pdev: pdev.into(),
- ca,
+ ca: ca.into(),
sgt <- sgt,
}))
})
@@ -93,8 +95,8 @@ fn probe(pdev: &pci::Device<Core>, _info: &Self::IdInfo) -> impl PinInit<Self, E
impl DmaSampleDriver {
fn check_dma(&self) {
for (i, value) in TEST_VALUES.into_iter().enumerate() {
- let val0 = kernel::dma_read!(self.ca, [panic: i].h);
- let val1 = kernel::dma_read!(self.ca, [panic: i].b);
+ let val0 = io_read!(self.ca, [panic: i].h);
+ let val1 = io_read!(self.ca, [panic: i].b);
assert_eq!(val0, value.0);
assert_eq!(val1, value.1);
--
2.51.2
^ permalink raw reply related [flat|nested] 24+ messages in thread
* [PATCH v2 11/11] rust: io: add copying methods
2026-04-21 14:56 [PATCH v2 00/11] rust: I/O type generalization and projection Gary Guo
` (9 preceding siblings ...)
2026-04-21 14:56 ` [PATCH v2 10/11] rust: dma: drop `dma_read!` and `dma_write!` API Gary Guo
@ 2026-04-21 14:56 ` Gary Guo
2026-04-22 22:25 ` Claude review: " Claude Code Review Bot
2026-04-22 22:25 ` Claude review: rust: I/O type generalization and projection Claude Code Review Bot
11 siblings, 1 reply; 24+ messages in thread
From: Gary Guo @ 2026-04-21 14:56 UTC (permalink / raw)
To: Greg Kroah-Hartman, Rafael J. Wysocki, Danilo Krummrich,
Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Daniel Almeida, Bjorn Helgaas, Krzysztof Wilczyński,
Abdiel Janulgue, Robin Murphy, Alexandre Courbot, David Airlie,
Simona Vetter
Cc: driver-core, rust-for-linux, linux-kernel, linux-pci, nouveau,
dri-devel
One feature that was lost from the old `dma_read!()` and `dma_write!()`
when moving to `io_read!()` and `io_write!()` was the ability to read/write
large structs. However, the semantics were unclear to begin with, as there
was no guarantee of atomicity even for structs small enough to fit in a
u32. Re-introduce the capability in the form of copying
methods:
dma_read!(foo, bar) -> io_project!(foo, bar).copy_read()
dma_write!(foo, bar, baz) -> io_project!(foo, bar).copy_write(baz)
The semantics for these are modelled after memcpy, so users have a clear
expectation that there is no atomicity. As an additional benefit of this
change, this now works for MMIO as well, mapping to `memcpy_{from,to}io`.
For slices, which are unsized so the API above cannot work,
`copy_from_slice` and `copy_to_slice` were added to copy from/to normal
memory, and `copy_from_io_slice` and `copy_to_io_slice` were added to copy
from/to other `Io` regions. These are optimized when at least one end is
mapped to system memory; when neither is, the copy goes through an
intermediate stack buffer.
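The stack-buffer fallback can be sketched in plain Rust. This is a simplified userland mock, not the kernel code: the two slices stand in for two unmapped `Io` regions, and the explicit chunked loop through a 256-byte stack buffer mirrors the structure of `copy_from_io_slice_via_buffer` in the patch:

```rust
// Minimal sketch of the stack-buffer fallback for copying between two
// regions that are each only reachable through their own accessors.
// Chunks through a 256-byte stack buffer, like the kernel code.
fn copy_via_buffer(src: &[u8], dst: &mut [u8]) {
    assert_eq!(src.len(), dst.len());
    let mut buf = [0u8; 256];
    let mut off = 0;
    while off < src.len() {
        let n = (src.len() - off).min(buf.len());
        // "copy_from_io": read a chunk out of the source region.
        buf[..n].copy_from_slice(&src[off..off + n]);
        // "copy_to_io": write the chunk into the destination region.
        dst[off..off + n].copy_from_slice(&buf[..n]);
        off += n;
    }
}
```

The loop never holds more than 256 bytes of intermediate state, which is why this path is only taken when neither end is CPU-mapped.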
Signed-off-by: Gary Guo <gary@garyguo.net>
---
rust/kernel/dma.rs | 8 +-
rust/kernel/io.rs | 231 +++++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 238 insertions(+), 1 deletion(-)
diff --git a/rust/kernel/dma.rs b/rust/kernel/dma.rs
index bbdeb117c145..307f5769ca0a 100644
--- a/rust/kernel/dma.rs
+++ b/rust/kernel/dma.rs
@@ -16,7 +16,8 @@
fs::file,
io::{
Io,
- IoCapable, //
+ IoCapable,
+ IoCopyable, //
},
prelude::*,
ptr::KnownSize,
@@ -997,6 +998,11 @@ unsafe fn io_write(&self, value: $ty, address: *mut $ty) {
u64
);
+// SAFETY: `Coherent` is mapped to CPU address space.
+unsafe impl<T: ?Sized + KnownSize> IoCopyable for Coherent<T> {
+ const IS_MAPPED: bool = true;
+}
+
impl<'a, B: ?Sized + KnownSize, T: ?Sized> crate::io::View<'a, Coherent<B>, T> {
/// Returns a DMA handle which may be given to the device as the DMA address base of
/// the region.
diff --git a/rust/kernel/io.rs b/rust/kernel/io.rs
index efcd7e6741d7..0b1ed68c0f9b 100644
--- a/rust/kernel/io.rs
+++ b/rust/kernel/io.rs
@@ -4,6 +4,8 @@
//!
//! C header: [`include/asm-generic/io.h`](srctree/include/asm-generic/io.h)
+use core::mem::MaybeUninit;
+
use crate::{
bindings,
prelude::*,
@@ -233,6 +235,55 @@ pub trait IoCapable<T> {
unsafe fn io_write(&self, value: T, address: *mut T);
}
+/// Trait indicating that an I/O backend supports memory copy operations.
+///
+/// # Safety
+///
+/// If [`IS_MAPPED`] is overridden to true, it must be correct per documentation.
+pub unsafe trait IoCopyable {
+ /// Whether the pointers for this I/O backend are in the CPU address space, and are coherently
+ /// mapped.
+ ///
+ /// When this is true, it means that memory can be accessed with byte-wise atomic memory copy.
+ const IS_MAPPED: bool = false;
+
+ /// Copy `size` bytes from `address` to `buffer`.
+ ///
+ /// # Safety
+ ///
+ /// - The range `[address..address + size]` must be within the bounds of `Self`.
+ /// - `buffer` is valid for write for `size` bytes.
+ #[inline]
+ unsafe fn copy_from_io(&self, address: *mut u8, buffer: *mut u8, size: usize) {
+ const_assert!(Self::IS_MAPPED);
+
+ // Use `bindings::memcpy` instead of copy_nonoverlapping for volatile.
+ // SAFETY:
+ // - `buffer` is valid for write for `size` bytes.
+ // - `IS_MAPPED` guarantees `address` is in CPU address space, with safety requirements
+ // `address` is valid for read for `size` bytes.
+ unsafe { bindings::memcpy(buffer.cast(), address.cast(), size) };
+ }
+
+ /// Copy `size` bytes from `buffer` to `address`.
+ ///
+ /// # Safety
+ ///
+ /// - The range `[address..address + size]` must be within the bounds of `Self`.
+ /// - `buffer` is valid for read for `size` bytes.
+ #[inline]
+ unsafe fn copy_to_io(&self, address: *mut u8, buffer: *const u8, size: usize) {
+ const_assert!(Self::IS_MAPPED);
+
+ // Use `bindings::memcpy` instead of copy_nonoverlapping for volatile.
+ // SAFETY:
+ // - `IS_MAPPED` guarantees `address` is in CPU address space, with safety requirements
+ // `address` is valid for write for `size` bytes.
+ // - `buffer` is valid for read for `size` bytes.
+ unsafe { bindings::memcpy(address.cast(), buffer.cast(), size) };
+ }
+}
+
/// Describes a given I/O location: its offset, width, and type to convert the raw value from and
/// into.
///
@@ -841,6 +892,19 @@ unsafe fn io_write(&self, value: $ty, address: *mut $ty) {
writeq
);
+// SAFETY: `IS_MAPPED` is not overridden.
+unsafe impl<T: ?Sized + KnownSize> IoCopyable for Mmio<T> {
+ unsafe fn copy_from_io(&self, address: *mut u8, buffer: *mut u8, size: usize) {
+ // SAFETY: Per safety requirement.
+ unsafe { bindings::memcpy_fromio(buffer.cast(), address.cast(), size) };
+ }
+
+ unsafe fn copy_to_io(&self, address: *mut u8, buffer: *const u8, size: usize) {
+ // SAFETY: Per safety requirement.
+ unsafe { bindings::memcpy_toio(address.cast(), buffer.cast(), size) };
+ }
+}
+
impl<T: ?Sized + KnownSize> Io for Mmio<T> {
type Type = T;
@@ -1029,6 +1093,173 @@ pub fn write_val(&self, value: T) {
}
}
+impl<'a, IO: ?Sized, T> View<'a, IO, [T]> {
+ /// Returns the length of the slice in number of elements.
+ #[inline]
+ pub fn len(self) -> usize {
+ self.as_ptr().len()
+ }
+
+ /// Returns `true` if the slice has a length of 0.
+ #[inline]
+ pub fn is_empty(self) -> bool {
+ self.len() == 0
+ }
+}
+
+impl<T, IO: ?Sized + Io + IoCopyable> View<'_, IO, T> {
+ /// Copy-read from I/O memory.
+ ///
+ /// There is no atomicity gurantee.
+ #[inline]
+ pub fn copy_read(self) -> T
+ where
+ T: FromBytes,
+ {
+ // Optimized path if I/O backend is CPU mapped.
+ if IO::IS_MAPPED {
+ // SAFETY:
+ // - `IS_MAPPED` guarantees `self.ptr` is in CPU address space, with type invariants
+ // `self.ptr` is valid for read for `size` bytes.
+ // - `T: FromBytes` guarantee that all bit patterns are valid.
+ // - Using read_volatile() here so that race with hardware is well-defined.
+ // - Using read_volatile() here is not sound if it races with other CPU per Rust
+ // rules, but this is allowed per LKMM.
+ return unsafe { self.ptr.read_volatile() };
+ }
+
+ let mut buf = MaybeUninit::<T>::uninit();
+ // SAFETY:
+ // - Per type invariants.
+ // - `buf.as_mut_ptr()` is valid for write for `size_of::<T>()` bytes.
+ unsafe {
+ self.io
+ .copy_from_io(self.ptr.cast(), buf.as_mut_ptr().cast(), size_of::<T>())
+ };
+ // SAFETY: T: FromBytes` guarantee that all bit patterns are valid.
+ unsafe { buf.assume_init() }
+ }
+
+ /// Write to I/O memory.
+ ///
+ /// There is no atomicity gurantee.
+ #[inline]
+ pub fn copy_write(self, value: T)
+ where
+ T: AsBytes,
+ {
+ // Optimized path if I/O backend is CPU mapped.
+ if IO::IS_MAPPED {
+ // SAFETY:
+ // - `IS_MAPPED` guarantees `self.ptr` is in CPU address space, with safety requirements
+ // `self.ptr` is valid for write for `size` bytes.
+ // - Using write_volatile() here so that race with hardware is well-defined.
+ // - Using write_volatile() here is not sound if it races with other CPU per Rust
+ // rules, but this is allowed per LKMM.
+ unsafe { self.ptr.write_volatile(value) };
+ return;
+ }
+
+ // SAFETY:
+ // - Per type invariants.
+ // - `&raw const value` is valid for read for `size_of::<T>()` bytes.
+ unsafe {
+ self.io
+ .copy_to_io(self.ptr.cast(), (&raw const value).cast(), size_of::<T>())
+ };
+ core::mem::forget(value);
+ }
+}
+
+impl<IO: ?Sized + Io + IoCopyable> View<'_, IO, [u8]> {
+ /// Copy bytes from slice to I/O memory.
+ #[inline]
+ pub fn copy_from_slice(self, data: &[u8]) {
+ assert_eq!(self.len(), data.len());
+
+ // SAFETY:
+ // - Per type invariants.
+ // - `data.as_ptr()` is valid for read for `data.len()` bytes.
+ unsafe {
+ self.io
+ .copy_to_io(self.ptr.cast(), data.as_ptr(), data.len())
+ }
+ }
+
+ /// Copy bytes from I/O memory to slice.
+ #[inline]
+ pub fn copy_to_slice(self, data: &mut [u8]) {
+ assert_eq!(self.len(), data.len());
+
+ // SAFETY:
+ // - Per type invariants.
+ // - `data.as_mut_ptr()` is valid for write for `data.len()` bytes.
+ unsafe {
+ self.io
+ .copy_from_io(self.ptr.cast(), data.as_mut_ptr(), data.len())
+ }
+ }
+
+ fn copy_from_io_slice_via_buffer<T: ?Sized + Io + IoCopyable>(&self, data: View<'_, T, [u8]>) {
+ let mut buf = MaybeUninit::<[u8; 256]>::uninit();
+
+ let mut dst_ptr = self.ptr.cast::<u8>();
+ let mut src_ptr = data.ptr.cast::<u8>();
+ let mut len = self.len();
+
+ while len != 0 {
+ let copy_len = core::cmp::min(len, 256);
+ // SAFETY:
+ // - `src_ptr` is valid for I/O read of `copy_len` bytes per type invariants of `data`.
+ // - `buf.as_mut_ptr()` is valid for write for `copy_len` bytes as `copy_len <= 256`.
+ unsafe {
+ data.io
+ .copy_from_io(src_ptr, buf.as_mut_ptr().cast(), copy_len)
+ };
+
+ // SAFETY:
+ // - `dst_ptr` is valid for I/O write of `copy_len` bytes per type invariants of `self`.
+ // - `buf.as_mut_ptr()` is valid for write for `copy_len` bytes as `copy_len <= 256`.
+ unsafe { self.io.copy_to_io(dst_ptr, buf.as_ptr().cast(), copy_len) };
+
+ dst_ptr = dst_ptr.wrapping_add(copy_len);
+ src_ptr = src_ptr.wrapping_add(copy_len);
+ len -= copy_len;
+ }
+ }
+
+ /// Copy bytes from `data` I/O slice to the `self` I/O slice.
+ pub fn copy_from_io_slice<T: ?Sized + Io + IoCopyable>(&self, data: View<'_, T, [u8]>) {
+ assert_eq!(self.len(), data.len());
+
+ if T::IS_MAPPED {
+ // SAFETY:
+ // - Per type invariants.
+ // - `data.as_ptr()` is valid for read for `data.len()` bytes.
+ unsafe {
+ self.io
+ .copy_to_io(self.ptr.cast(), data.as_ptr().cast(), data.len())
+ }
+ } else if IO::IS_MAPPED {
+ // SAFETY:
+ // - Per type invariants.
+ // - `self.as_ptr()` is valid for write for `self.len()` bytes.
+ unsafe {
+ data.io
+ .copy_from_io(data.ptr.cast(), self.as_ptr().cast(), self.len())
+ }
+ } else {
+ self.copy_from_io_slice_via_buffer(data)
+ }
+ }
+
+ /// Copy bytes from `self` I/O slice to the `data` I/O slice.
+ #[inline]
+ pub fn copy_to_io_slice<T: ?Sized + Io + IoCopyable>(self, data: View<'_, T, [u8]>) {
+ data.copy_from_io_slice(self)
+ }
+}
+
/// Project an I/O type to a subview of it.
///
/// The syntax is of form `io_project!(io, proj)` where `io` is an expression to a type that
--
2.51.2
^ permalink raw reply related [flat|nested] 24+ messages in thread
* Claude review: rust: io: add copying methods
2026-04-21 14:56 ` [PATCH v2 11/11] rust: io: add copying methods Gary Guo
@ 2026-04-22 22:25 ` Claude Code Review Bot
0 siblings, 0 replies; 24+ messages in thread
From: Claude Code Review Bot @ 2026-04-22 22:25 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Adds the `IoCopyable` trait and copy methods on `View`: `copy_read`, `copy_write`, `copy_from_slice`, `copy_to_slice`, and cross-I/O-region copying (`copy_from_io_slice`).
The `IoCopyable` trait differentiates between mapped (CPU-accessible) and unmapped (MMIO) backends:
- `Coherent`: `IS_MAPPED = true`, uses `memcpy`
- `Mmio`: `IS_MAPPED = false` (default), uses `memcpy_fromio`/`memcpy_toio`
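The mapped/unmapped split can be illustrated with an associated-const dispatch in plain Rust. The trait and type names here are illustrative stand-ins, not the kernel's `IoCopyable`; the point is that generic code branches on a compile-time constant, so the untaken path is optimized out per backend:

```rust
// Sketch of associated-const dispatch: each backend declares whether its
// pointers are CPU-mapped, and generic code branches on the constant.
trait Copyable {
    const IS_MAPPED: bool;
    fn copy_out(&self, buf: &mut [u8]);
}

struct Mapped(Vec<u8>);   // stand-in for a Coherent-like backend
struct Unmapped(Vec<u8>); // stand-in for an Mmio-like backend

impl Copyable for Mapped {
    const IS_MAPPED: bool = true;
    fn copy_out(&self, buf: &mut [u8]) { buf.copy_from_slice(&self.0); }
}
impl Copyable for Unmapped {
    const IS_MAPPED: bool = false;
    fn copy_out(&self, buf: &mut [u8]) { buf.copy_from_slice(&self.0); }
}

// Generic code picks a copy strategy at compile time per backend.
fn path_for<T: Copyable>(_io: &T) -> &'static str {
    if T::IS_MAPPED { "direct memcpy" } else { "memcpy_fromio" }
}
```

Because `IS_MAPPED` is a `const`, each monomorphization of `path_for` collapses to a single branch, which is what makes the `copy_read`/`copy_write` fast path in the patch free for `Coherent`.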
The cross-I/O copy with a stack buffer fallback is a pragmatic approach:
```rust
fn copy_from_io_slice_via_buffer<T: ...>(&self, data: View<'_, T, [u8]>) {
let mut buf = MaybeUninit::<[u8; 256]>::uninit();
// ... copy in 256-byte chunks via the stack buffer
}
```
Two minor observations:
1. Typo in doc comments: "gurantee" appears twice (in `copy_read` and `copy_write`), should be "guarantee".
2. In `copy_from_io_slice_via_buffer`, the safety comment on the second `unsafe` block claims `buf.as_mut_ptr()` is "valid for write", but that block is `copy_to_io`, which only reads from the buffer (it writes to `dst_ptr`). The comment should say "valid for read":
```rust
// - `buf.as_mut_ptr()` is valid for write for `copy_len` bytes as `copy_len <= 256`.
unsafe { self.io.copy_to_io(dst_ptr, buf.as_ptr().cast(), copy_len) };
```
The `.as_ptr()` call itself is correct (reading from the buffer); the comment appears copy-pasted from the block above. Minor, but worth fixing for accuracy.
Overall, this is a well-designed and well-executed series that significantly improves the Rust I/O abstractions in the kernel. The type safety improvements and the elimination of the `gsp_mem` workaround module demonstrate clear practical value.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 24+ messages in thread
* Claude review: rust: I/O type generalization and projection
2026-04-21 14:56 [PATCH v2 00/11] rust: I/O type generalization and projection Gary Guo
` (10 preceding siblings ...)
2026-04-21 14:56 ` [PATCH v2 11/11] rust: io: add copying methods Gary Guo
@ 2026-04-22 22:25 ` Claude Code Review Bot
11 siblings, 0 replies; 24+ messages in thread
From: Claude Code Review Bot @ 2026-04-22 22:25 UTC (permalink / raw)
To: dri-devel-reviews
Overall Series Review
Subject: rust: I/O type generalization and projection
Author: Gary Guo <gary@garyguo.net>
Patches: 12
Reviewed: 2026-04-23T08:25:44
---
This is a well-structured series by Gary Guo that generalizes the Rust I/O subsystem in the kernel from untyped fixed-size regions to typed pointer-based representations. The key idea is that `MmioRaw<T>` becomes analogous to `T __iomem *` in C, enabling type-safe I/O projections via a new `View` type and `io_project!` macro. This unifies MMIO and DMA coherent memory handling, removes the `IoKnownSize` trait, and enables cleaner encapsulation in nova-core as a concrete demonstration.
The series is logically decomposed and each patch builds on the previous ones. The design is sound: moving from `(addr: usize, maxsize: usize)` to `*mut T` with metadata carrying the size is a clean abstraction. The `View` type provides safe sub-projections, and the `io_read!`/`io_write!` macros are a good generalization of `dma_read!`/`dma_write!`.
A few concerns worth discussing, but nothing blocking.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 24+ messages in thread