* [PATCH v4 1/9] gpu: nova-core: gsp: sort MsgFunction variants alphabetically
2026-03-02 11:42 [PATCH v4 0/9] gpu: nova-core: gsp: add continuation record support Eliot Courtney
@ 2026-03-02 11:42 ` Eliot Courtney
2026-03-02 13:47 ` Gary Guo
2026-03-03 3:34 ` Claude review: " Claude Code Review Bot
2026-03-02 11:42 ` [PATCH v4 2/9] gpu: nova-core: gsp: add mechanism to wait for space on command queue Eliot Courtney
` (8 subsequent siblings)
9 siblings, 2 replies; 30+ messages in thread
From: Eliot Courtney @ 2026-03-02 11:42 UTC (permalink / raw)
To: Danilo Krummrich, Alice Ryhl, Alexandre Courbot, David Airlie,
Simona Vetter, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Trevor Gross
Cc: Zhi Wang, John Hubbard, Alistair Popple, Joel Fernandes,
Timur Tabi, nouveau, dri-devel, linux-kernel, rust-for-linux,
Eliot Courtney
There is no particular order required here, and keeping them alphabetical
will help prevent future mistakes.
Tested-by: Zhi Wang <zhiw@nvidia.com>
Signed-off-by: Eliot Courtney <ecourtney@nvidia.com>
---
drivers/gpu/nova-core/gsp/fw.rs | 67 +++++++++++++++++++++--------------------
1 file changed, 35 insertions(+), 32 deletions(-)
diff --git a/drivers/gpu/nova-core/gsp/fw.rs b/drivers/gpu/nova-core/gsp/fw.rs
index f1797e1f0d9d..4b998485360b 100644
--- a/drivers/gpu/nova-core/gsp/fw.rs
+++ b/drivers/gpu/nova-core/gsp/fw.rs
@@ -191,34 +191,34 @@ pub(crate) fn new(gsp_firmware: &GspFirmware, fb_layout: &FbLayout) -> Self {
#[repr(u32)]
pub(crate) enum MsgFunction {
// Common function codes
- Nop = bindings::NV_VGPU_MSG_FUNCTION_NOP,
- SetGuestSystemInfo = bindings::NV_VGPU_MSG_FUNCTION_SET_GUEST_SYSTEM_INFO,
- AllocRoot = bindings::NV_VGPU_MSG_FUNCTION_ALLOC_ROOT,
+ AllocChannelDma = bindings::NV_VGPU_MSG_FUNCTION_ALLOC_CHANNEL_DMA,
+ AllocCtxDma = bindings::NV_VGPU_MSG_FUNCTION_ALLOC_CTX_DMA,
AllocDevice = bindings::NV_VGPU_MSG_FUNCTION_ALLOC_DEVICE,
AllocMemory = bindings::NV_VGPU_MSG_FUNCTION_ALLOC_MEMORY,
- AllocCtxDma = bindings::NV_VGPU_MSG_FUNCTION_ALLOC_CTX_DMA,
- AllocChannelDma = bindings::NV_VGPU_MSG_FUNCTION_ALLOC_CHANNEL_DMA,
- MapMemory = bindings::NV_VGPU_MSG_FUNCTION_MAP_MEMORY,
- BindCtxDma = bindings::NV_VGPU_MSG_FUNCTION_BIND_CTX_DMA,
AllocObject = bindings::NV_VGPU_MSG_FUNCTION_ALLOC_OBJECT,
+ AllocRoot = bindings::NV_VGPU_MSG_FUNCTION_ALLOC_ROOT,
+ BindCtxDma = bindings::NV_VGPU_MSG_FUNCTION_BIND_CTX_DMA,
Free = bindings::NV_VGPU_MSG_FUNCTION_FREE,
- Log = bindings::NV_VGPU_MSG_FUNCTION_LOG,
GetGspStaticInfo = bindings::NV_VGPU_MSG_FUNCTION_GET_GSP_STATIC_INFO,
- SetRegistry = bindings::NV_VGPU_MSG_FUNCTION_SET_REGISTRY,
- GspSetSystemInfo = bindings::NV_VGPU_MSG_FUNCTION_GSP_SET_SYSTEM_INFO,
+ GetStaticInfo = bindings::NV_VGPU_MSG_FUNCTION_GET_STATIC_INFO,
GspInitPostObjGpu = bindings::NV_VGPU_MSG_FUNCTION_GSP_INIT_POST_OBJGPU,
GspRmControl = bindings::NV_VGPU_MSG_FUNCTION_GSP_RM_CONTROL,
- GetStaticInfo = bindings::NV_VGPU_MSG_FUNCTION_GET_STATIC_INFO,
+ GspSetSystemInfo = bindings::NV_VGPU_MSG_FUNCTION_GSP_SET_SYSTEM_INFO,
+ Log = bindings::NV_VGPU_MSG_FUNCTION_LOG,
+ MapMemory = bindings::NV_VGPU_MSG_FUNCTION_MAP_MEMORY,
+ Nop = bindings::NV_VGPU_MSG_FUNCTION_NOP,
+ SetGuestSystemInfo = bindings::NV_VGPU_MSG_FUNCTION_SET_GUEST_SYSTEM_INFO,
+ SetRegistry = bindings::NV_VGPU_MSG_FUNCTION_SET_REGISTRY,
// Event codes
GspInitDone = bindings::NV_VGPU_MSG_EVENT_GSP_INIT_DONE,
+ GspLockdownNotice = bindings::NV_VGPU_MSG_EVENT_GSP_LOCKDOWN_NOTICE,
+ GspPostNoCat = bindings::NV_VGPU_MSG_EVENT_GSP_POST_NOCAT_RECORD,
GspRunCpuSequencer = bindings::NV_VGPU_MSG_EVENT_GSP_RUN_CPU_SEQUENCER,
- PostEvent = bindings::NV_VGPU_MSG_EVENT_POST_EVENT,
- RcTriggered = bindings::NV_VGPU_MSG_EVENT_RC_TRIGGERED,
MmuFaultQueued = bindings::NV_VGPU_MSG_EVENT_MMU_FAULT_QUEUED,
OsErrorLog = bindings::NV_VGPU_MSG_EVENT_OS_ERROR_LOG,
- GspPostNoCat = bindings::NV_VGPU_MSG_EVENT_GSP_POST_NOCAT_RECORD,
- GspLockdownNotice = bindings::NV_VGPU_MSG_EVENT_GSP_LOCKDOWN_NOTICE,
+ PostEvent = bindings::NV_VGPU_MSG_EVENT_POST_EVENT,
+ RcTriggered = bindings::NV_VGPU_MSG_EVENT_RC_TRIGGERED,
UcodeLibOsPrint = bindings::NV_VGPU_MSG_EVENT_UCODE_LIBOS_PRINT,
}
@@ -227,38 +227,41 @@ impl TryFrom<u32> for MsgFunction {
fn try_from(value: u32) -> Result<MsgFunction> {
match value {
- bindings::NV_VGPU_MSG_FUNCTION_NOP => Ok(MsgFunction::Nop),
- bindings::NV_VGPU_MSG_FUNCTION_SET_GUEST_SYSTEM_INFO => {
- Ok(MsgFunction::SetGuestSystemInfo)
- }
- bindings::NV_VGPU_MSG_FUNCTION_ALLOC_ROOT => Ok(MsgFunction::AllocRoot),
+ // Common function codes
+ bindings::NV_VGPU_MSG_FUNCTION_ALLOC_CHANNEL_DMA => Ok(MsgFunction::AllocChannelDma),
+ bindings::NV_VGPU_MSG_FUNCTION_ALLOC_CTX_DMA => Ok(MsgFunction::AllocCtxDma),
bindings::NV_VGPU_MSG_FUNCTION_ALLOC_DEVICE => Ok(MsgFunction::AllocDevice),
bindings::NV_VGPU_MSG_FUNCTION_ALLOC_MEMORY => Ok(MsgFunction::AllocMemory),
- bindings::NV_VGPU_MSG_FUNCTION_ALLOC_CTX_DMA => Ok(MsgFunction::AllocCtxDma),
- bindings::NV_VGPU_MSG_FUNCTION_ALLOC_CHANNEL_DMA => Ok(MsgFunction::AllocChannelDma),
- bindings::NV_VGPU_MSG_FUNCTION_MAP_MEMORY => Ok(MsgFunction::MapMemory),
- bindings::NV_VGPU_MSG_FUNCTION_BIND_CTX_DMA => Ok(MsgFunction::BindCtxDma),
bindings::NV_VGPU_MSG_FUNCTION_ALLOC_OBJECT => Ok(MsgFunction::AllocObject),
+ bindings::NV_VGPU_MSG_FUNCTION_ALLOC_ROOT => Ok(MsgFunction::AllocRoot),
+ bindings::NV_VGPU_MSG_FUNCTION_BIND_CTX_DMA => Ok(MsgFunction::BindCtxDma),
bindings::NV_VGPU_MSG_FUNCTION_FREE => Ok(MsgFunction::Free),
- bindings::NV_VGPU_MSG_FUNCTION_LOG => Ok(MsgFunction::Log),
bindings::NV_VGPU_MSG_FUNCTION_GET_GSP_STATIC_INFO => Ok(MsgFunction::GetGspStaticInfo),
- bindings::NV_VGPU_MSG_FUNCTION_SET_REGISTRY => Ok(MsgFunction::SetRegistry),
- bindings::NV_VGPU_MSG_FUNCTION_GSP_SET_SYSTEM_INFO => Ok(MsgFunction::GspSetSystemInfo),
+ bindings::NV_VGPU_MSG_FUNCTION_GET_STATIC_INFO => Ok(MsgFunction::GetStaticInfo),
bindings::NV_VGPU_MSG_FUNCTION_GSP_INIT_POST_OBJGPU => {
Ok(MsgFunction::GspInitPostObjGpu)
}
bindings::NV_VGPU_MSG_FUNCTION_GSP_RM_CONTROL => Ok(MsgFunction::GspRmControl),
- bindings::NV_VGPU_MSG_FUNCTION_GET_STATIC_INFO => Ok(MsgFunction::GetStaticInfo),
+ bindings::NV_VGPU_MSG_FUNCTION_GSP_SET_SYSTEM_INFO => Ok(MsgFunction::GspSetSystemInfo),
+ bindings::NV_VGPU_MSG_FUNCTION_LOG => Ok(MsgFunction::Log),
+ bindings::NV_VGPU_MSG_FUNCTION_MAP_MEMORY => Ok(MsgFunction::MapMemory),
+ bindings::NV_VGPU_MSG_FUNCTION_NOP => Ok(MsgFunction::Nop),
+ bindings::NV_VGPU_MSG_FUNCTION_SET_GUEST_SYSTEM_INFO => {
+ Ok(MsgFunction::SetGuestSystemInfo)
+ }
+ bindings::NV_VGPU_MSG_FUNCTION_SET_REGISTRY => Ok(MsgFunction::SetRegistry),
+
+ // Event codes
bindings::NV_VGPU_MSG_EVENT_GSP_INIT_DONE => Ok(MsgFunction::GspInitDone),
+ bindings::NV_VGPU_MSG_EVENT_GSP_LOCKDOWN_NOTICE => Ok(MsgFunction::GspLockdownNotice),
+ bindings::NV_VGPU_MSG_EVENT_GSP_POST_NOCAT_RECORD => Ok(MsgFunction::GspPostNoCat),
bindings::NV_VGPU_MSG_EVENT_GSP_RUN_CPU_SEQUENCER => {
Ok(MsgFunction::GspRunCpuSequencer)
}
- bindings::NV_VGPU_MSG_EVENT_POST_EVENT => Ok(MsgFunction::PostEvent),
- bindings::NV_VGPU_MSG_EVENT_RC_TRIGGERED => Ok(MsgFunction::RcTriggered),
bindings::NV_VGPU_MSG_EVENT_MMU_FAULT_QUEUED => Ok(MsgFunction::MmuFaultQueued),
bindings::NV_VGPU_MSG_EVENT_OS_ERROR_LOG => Ok(MsgFunction::OsErrorLog),
- bindings::NV_VGPU_MSG_EVENT_GSP_POST_NOCAT_RECORD => Ok(MsgFunction::GspPostNoCat),
- bindings::NV_VGPU_MSG_EVENT_GSP_LOCKDOWN_NOTICE => Ok(MsgFunction::GspLockdownNotice),
+ bindings::NV_VGPU_MSG_EVENT_POST_EVENT => Ok(MsgFunction::PostEvent),
+ bindings::NV_VGPU_MSG_EVENT_RC_TRIGGERED => Ok(MsgFunction::RcTriggered),
bindings::NV_VGPU_MSG_EVENT_UCODE_LIBOS_PRINT => Ok(MsgFunction::UcodeLibOsPrint),
_ => Err(EINVAL),
}
--
2.53.0
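The invariant this patch establishes by hand — variant names in alphabetical order — can be checked mechanically. A minimal userspace sketch (names copied from the enum above; the `is_sorted` helper is illustrative, not part of the driver):

```rust
// The MsgFunction function-code variant names, in the order the patch leaves
// them. A windows(2) scan verifies the alphabetical invariant.
const FUNCTION_VARIANTS: &[&str] = &[
    "AllocChannelDma", "AllocCtxDma", "AllocDevice", "AllocMemory",
    "AllocObject", "AllocRoot", "BindCtxDma", "Free", "GetGspStaticInfo",
    "GetStaticInfo", "GspInitPostObjGpu", "GspRmControl", "GspSetSystemInfo",
    "Log", "MapMemory", "Nop", "SetGuestSystemInfo", "SetRegistry",
];

fn is_sorted(names: &[&str]) -> bool {
    // Every adjacent pair must be in non-decreasing lexicographic order.
    names.windows(2).all(|w| w[0] <= w[1])
}

fn main() {
    assert!(is_sorted(FUNCTION_VARIANTS));
    println!("alphabetical");
}
```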
^ permalink raw reply related [flat|nested] 30+ messages in thread

* Re: [PATCH v4 1/9] gpu: nova-core: gsp: sort MsgFunction variants alphabetically
2026-03-02 11:42 ` [PATCH v4 1/9] gpu: nova-core: gsp: sort MsgFunction variants alphabetically Eliot Courtney
@ 2026-03-02 13:47 ` Gary Guo
2026-03-02 14:49 ` Alexandre Courbot
2026-03-03 3:34 ` Claude review: " Claude Code Review Bot
1 sibling, 1 reply; 30+ messages in thread
From: Gary Guo @ 2026-03-02 13:47 UTC (permalink / raw)
To: Eliot Courtney, Danilo Krummrich, Alice Ryhl, Alexandre Courbot,
David Airlie, Simona Vetter, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Trevor Gross
Cc: Zhi Wang, John Hubbard, Alistair Popple, Joel Fernandes,
Timur Tabi, nouveau, dri-devel, linux-kernel, rust-for-linux
On Mon Mar 2, 2026 at 11:42 AM GMT, Eliot Courtney wrote:
> There is no particular order required here, and keeping them alphabetical
> will help prevent future mistakes.
Looks like the current order is in increasing opcode order. Granted, currently
they're generated in bindings and then included as such, but perhaps eventually
Rust code can be generated directly so the ordering could make sense?
Best,
Gary
>
> Tested-by: Zhi Wang <zhiw@nvidia.com>
> Signed-off-by: Eliot Courtney <ecourtney@nvidia.com>
> ---
> drivers/gpu/nova-core/gsp/fw.rs | 67 +++++++++++++++++++++--------------------
> 1 file changed, 35 insertions(+), 32 deletions(-)
>
> diff --git a/drivers/gpu/nova-core/gsp/fw.rs b/drivers/gpu/nova-core/gsp/fw.rs
> index f1797e1f0d9d..4b998485360b 100644
> --- a/drivers/gpu/nova-core/gsp/fw.rs
> +++ b/drivers/gpu/nova-core/gsp/fw.rs
> @@ -191,34 +191,34 @@ pub(crate) fn new(gsp_firmware: &GspFirmware, fb_layout: &FbLayout) -> Self {
> #[repr(u32)]
> pub(crate) enum MsgFunction {
> // Common function codes
> - Nop = bindings::NV_VGPU_MSG_FUNCTION_NOP,
> - SetGuestSystemInfo = bindings::NV_VGPU_MSG_FUNCTION_SET_GUEST_SYSTEM_INFO,
> - AllocRoot = bindings::NV_VGPU_MSG_FUNCTION_ALLOC_ROOT,
> + AllocChannelDma = bindings::NV_VGPU_MSG_FUNCTION_ALLOC_CHANNEL_DMA,
> + AllocCtxDma = bindings::NV_VGPU_MSG_FUNCTION_ALLOC_CTX_DMA,
> AllocDevice = bindings::NV_VGPU_MSG_FUNCTION_ALLOC_DEVICE,
> AllocMemory = bindings::NV_VGPU_MSG_FUNCTION_ALLOC_MEMORY,
> - AllocCtxDma = bindings::NV_VGPU_MSG_FUNCTION_ALLOC_CTX_DMA,
> - AllocChannelDma = bindings::NV_VGPU_MSG_FUNCTION_ALLOC_CHANNEL_DMA,
> - MapMemory = bindings::NV_VGPU_MSG_FUNCTION_MAP_MEMORY,
> - BindCtxDma = bindings::NV_VGPU_MSG_FUNCTION_BIND_CTX_DMA,
> AllocObject = bindings::NV_VGPU_MSG_FUNCTION_ALLOC_OBJECT,
> + AllocRoot = bindings::NV_VGPU_MSG_FUNCTION_ALLOC_ROOT,
> + BindCtxDma = bindings::NV_VGPU_MSG_FUNCTION_BIND_CTX_DMA,
> Free = bindings::NV_VGPU_MSG_FUNCTION_FREE,
> - Log = bindings::NV_VGPU_MSG_FUNCTION_LOG,
> GetGspStaticInfo = bindings::NV_VGPU_MSG_FUNCTION_GET_GSP_STATIC_INFO,
> - SetRegistry = bindings::NV_VGPU_MSG_FUNCTION_SET_REGISTRY,
> - GspSetSystemInfo = bindings::NV_VGPU_MSG_FUNCTION_GSP_SET_SYSTEM_INFO,
> + GetStaticInfo = bindings::NV_VGPU_MSG_FUNCTION_GET_STATIC_INFO,
> GspInitPostObjGpu = bindings::NV_VGPU_MSG_FUNCTION_GSP_INIT_POST_OBJGPU,
> GspRmControl = bindings::NV_VGPU_MSG_FUNCTION_GSP_RM_CONTROL,
> - GetStaticInfo = bindings::NV_VGPU_MSG_FUNCTION_GET_STATIC_INFO,
> + GspSetSystemInfo = bindings::NV_VGPU_MSG_FUNCTION_GSP_SET_SYSTEM_INFO,
> + Log = bindings::NV_VGPU_MSG_FUNCTION_LOG,
> + MapMemory = bindings::NV_VGPU_MSG_FUNCTION_MAP_MEMORY,
> + Nop = bindings::NV_VGPU_MSG_FUNCTION_NOP,
> + SetGuestSystemInfo = bindings::NV_VGPU_MSG_FUNCTION_SET_GUEST_SYSTEM_INFO,
> + SetRegistry = bindings::NV_VGPU_MSG_FUNCTION_SET_REGISTRY,
>
> // Event codes
> GspInitDone = bindings::NV_VGPU_MSG_EVENT_GSP_INIT_DONE,
> + GspLockdownNotice = bindings::NV_VGPU_MSG_EVENT_GSP_LOCKDOWN_NOTICE,
> + GspPostNoCat = bindings::NV_VGPU_MSG_EVENT_GSP_POST_NOCAT_RECORD,
> GspRunCpuSequencer = bindings::NV_VGPU_MSG_EVENT_GSP_RUN_CPU_SEQUENCER,
> - PostEvent = bindings::NV_VGPU_MSG_EVENT_POST_EVENT,
> - RcTriggered = bindings::NV_VGPU_MSG_EVENT_RC_TRIGGERED,
> MmuFaultQueued = bindings::NV_VGPU_MSG_EVENT_MMU_FAULT_QUEUED,
> OsErrorLog = bindings::NV_VGPU_MSG_EVENT_OS_ERROR_LOG,
> - GspPostNoCat = bindings::NV_VGPU_MSG_EVENT_GSP_POST_NOCAT_RECORD,
> - GspLockdownNotice = bindings::NV_VGPU_MSG_EVENT_GSP_LOCKDOWN_NOTICE,
> + PostEvent = bindings::NV_VGPU_MSG_EVENT_POST_EVENT,
> + RcTriggered = bindings::NV_VGPU_MSG_EVENT_RC_TRIGGERED,
> UcodeLibOsPrint = bindings::NV_VGPU_MSG_EVENT_UCODE_LIBOS_PRINT,
> }
>
> @@ -227,38 +227,41 @@ impl TryFrom<u32> for MsgFunction {
>
> fn try_from(value: u32) -> Result<MsgFunction> {
> match value {
> - bindings::NV_VGPU_MSG_FUNCTION_NOP => Ok(MsgFunction::Nop),
> - bindings::NV_VGPU_MSG_FUNCTION_SET_GUEST_SYSTEM_INFO => {
> - Ok(MsgFunction::SetGuestSystemInfo)
> - }
> - bindings::NV_VGPU_MSG_FUNCTION_ALLOC_ROOT => Ok(MsgFunction::AllocRoot),
> + // Common function codes
> + bindings::NV_VGPU_MSG_FUNCTION_ALLOC_CHANNEL_DMA => Ok(MsgFunction::AllocChannelDma),
> + bindings::NV_VGPU_MSG_FUNCTION_ALLOC_CTX_DMA => Ok(MsgFunction::AllocCtxDma),
> bindings::NV_VGPU_MSG_FUNCTION_ALLOC_DEVICE => Ok(MsgFunction::AllocDevice),
> bindings::NV_VGPU_MSG_FUNCTION_ALLOC_MEMORY => Ok(MsgFunction::AllocMemory),
> - bindings::NV_VGPU_MSG_FUNCTION_ALLOC_CTX_DMA => Ok(MsgFunction::AllocCtxDma),
> - bindings::NV_VGPU_MSG_FUNCTION_ALLOC_CHANNEL_DMA => Ok(MsgFunction::AllocChannelDma),
> - bindings::NV_VGPU_MSG_FUNCTION_MAP_MEMORY => Ok(MsgFunction::MapMemory),
> - bindings::NV_VGPU_MSG_FUNCTION_BIND_CTX_DMA => Ok(MsgFunction::BindCtxDma),
> bindings::NV_VGPU_MSG_FUNCTION_ALLOC_OBJECT => Ok(MsgFunction::AllocObject),
> + bindings::NV_VGPU_MSG_FUNCTION_ALLOC_ROOT => Ok(MsgFunction::AllocRoot),
> + bindings::NV_VGPU_MSG_FUNCTION_BIND_CTX_DMA => Ok(MsgFunction::BindCtxDma),
> bindings::NV_VGPU_MSG_FUNCTION_FREE => Ok(MsgFunction::Free),
> - bindings::NV_VGPU_MSG_FUNCTION_LOG => Ok(MsgFunction::Log),
> bindings::NV_VGPU_MSG_FUNCTION_GET_GSP_STATIC_INFO => Ok(MsgFunction::GetGspStaticInfo),
> - bindings::NV_VGPU_MSG_FUNCTION_SET_REGISTRY => Ok(MsgFunction::SetRegistry),
> - bindings::NV_VGPU_MSG_FUNCTION_GSP_SET_SYSTEM_INFO => Ok(MsgFunction::GspSetSystemInfo),
> + bindings::NV_VGPU_MSG_FUNCTION_GET_STATIC_INFO => Ok(MsgFunction::GetStaticInfo),
> bindings::NV_VGPU_MSG_FUNCTION_GSP_INIT_POST_OBJGPU => {
> Ok(MsgFunction::GspInitPostObjGpu)
> }
> bindings::NV_VGPU_MSG_FUNCTION_GSP_RM_CONTROL => Ok(MsgFunction::GspRmControl),
> - bindings::NV_VGPU_MSG_FUNCTION_GET_STATIC_INFO => Ok(MsgFunction::GetStaticInfo),
> + bindings::NV_VGPU_MSG_FUNCTION_GSP_SET_SYSTEM_INFO => Ok(MsgFunction::GspSetSystemInfo),
> + bindings::NV_VGPU_MSG_FUNCTION_LOG => Ok(MsgFunction::Log),
> + bindings::NV_VGPU_MSG_FUNCTION_MAP_MEMORY => Ok(MsgFunction::MapMemory),
> + bindings::NV_VGPU_MSG_FUNCTION_NOP => Ok(MsgFunction::Nop),
> + bindings::NV_VGPU_MSG_FUNCTION_SET_GUEST_SYSTEM_INFO => {
> + Ok(MsgFunction::SetGuestSystemInfo)
> + }
> + bindings::NV_VGPU_MSG_FUNCTION_SET_REGISTRY => Ok(MsgFunction::SetRegistry),
> +
> + // Event codes
> bindings::NV_VGPU_MSG_EVENT_GSP_INIT_DONE => Ok(MsgFunction::GspInitDone),
> + bindings::NV_VGPU_MSG_EVENT_GSP_LOCKDOWN_NOTICE => Ok(MsgFunction::GspLockdownNotice),
> + bindings::NV_VGPU_MSG_EVENT_GSP_POST_NOCAT_RECORD => Ok(MsgFunction::GspPostNoCat),
> bindings::NV_VGPU_MSG_EVENT_GSP_RUN_CPU_SEQUENCER => {
> Ok(MsgFunction::GspRunCpuSequencer)
> }
> - bindings::NV_VGPU_MSG_EVENT_POST_EVENT => Ok(MsgFunction::PostEvent),
> - bindings::NV_VGPU_MSG_EVENT_RC_TRIGGERED => Ok(MsgFunction::RcTriggered),
> bindings::NV_VGPU_MSG_EVENT_MMU_FAULT_QUEUED => Ok(MsgFunction::MmuFaultQueued),
> bindings::NV_VGPU_MSG_EVENT_OS_ERROR_LOG => Ok(MsgFunction::OsErrorLog),
> - bindings::NV_VGPU_MSG_EVENT_GSP_POST_NOCAT_RECORD => Ok(MsgFunction::GspPostNoCat),
> - bindings::NV_VGPU_MSG_EVENT_GSP_LOCKDOWN_NOTICE => Ok(MsgFunction::GspLockdownNotice),
> + bindings::NV_VGPU_MSG_EVENT_POST_EVENT => Ok(MsgFunction::PostEvent),
> + bindings::NV_VGPU_MSG_EVENT_RC_TRIGGERED => Ok(MsgFunction::RcTriggered),
> bindings::NV_VGPU_MSG_EVENT_UCODE_LIBOS_PRINT => Ok(MsgFunction::UcodeLibOsPrint),
> _ => Err(EINVAL),
> }
* Re: [PATCH v4 1/9] gpu: nova-core: gsp: sort MsgFunction variants alphabetically
2026-03-02 13:47 ` Gary Guo
@ 2026-03-02 14:49 ` Alexandre Courbot
0 siblings, 0 replies; 30+ messages in thread
From: Alexandre Courbot @ 2026-03-02 14:49 UTC (permalink / raw)
To: Gary Guo
Cc: Eliot Courtney, Danilo Krummrich, Alice Ryhl, David Airlie,
Simona Vetter, Miguel Ojeda, Boqun Feng, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, Zhi Wang,
Alistair Popple, Joel Fernandes, nouveau, dri-devel, linux-kernel,
rust-for-linux
On Mon Mar 2, 2026 at 10:47 PM JST, Gary Guo wrote:
> On Mon Mar 2, 2026 at 11:42 AM GMT, Eliot Courtney wrote:
>> There is no particular order required here, and keeping them alphabetical
>> will help prevent future mistakes.
>
> Looks like the current order is in increasing opcode order. Granted, currently
> they're generated in bindings and then included as such, but perhaps eventually
> Rust code can be generated directly so the ordering could make sense?
Yes, there are plans to get more code generated and that would include
this as well.
For now alphabetical order makes sense to me as the opcodes come from
the bindings and have no guarantee of being stable.
* Claude review: gpu: nova-core: gsp: sort MsgFunction variants alphabetically
2026-03-02 11:42 ` [PATCH v4 1/9] gpu: nova-core: gsp: sort MsgFunction variants alphabetically Eliot Courtney
2026-03-02 13:47 ` Gary Guo
@ 2026-03-03 3:34 ` Claude Code Review Bot
1 sibling, 0 replies; 30+ messages in thread
From: Claude Code Review Bot @ 2026-03-03 3:34 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Pure code reorganization. Both the `MsgFunction` enum definition and the `TryFrom<u32>` match arms are sorted alphabetically, and section headers (`// Common function codes`, `// Event codes`) are added to the match arms.
No issues. This is a clean housekeeping change that makes future additions less error-prone.
---
Generated by Claude Code Patch Reviewer
* [PATCH v4 2/9] gpu: nova-core: gsp: add mechanism to wait for space on command queue
2026-03-02 11:42 [PATCH v4 0/9] gpu: nova-core: gsp: add continuation record support Eliot Courtney
2026-03-02 11:42 ` [PATCH v4 1/9] gpu: nova-core: gsp: sort MsgFunction variants alphabetically Eliot Courtney
@ 2026-03-02 11:42 ` Eliot Courtney
2026-03-02 14:18 ` Gary Guo
2026-03-03 3:34 ` Claude review: " Claude Code Review Bot
2026-03-02 11:42 ` [PATCH v4 3/9] rust: add EMSGSIZE error code Eliot Courtney
` (7 subsequent siblings)
9 siblings, 2 replies; 30+ messages in thread
From: Eliot Courtney @ 2026-03-02 11:42 UTC (permalink / raw)
To: Danilo Krummrich, Alice Ryhl, Alexandre Courbot, David Airlie,
Simona Vetter, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Trevor Gross
Cc: Zhi Wang, John Hubbard, Alistair Popple, Joel Fernandes,
Timur Tabi, nouveau, dri-devel, linux-kernel, rust-for-linux,
Eliot Courtney
Add a timeout to `allocate_command` which waits for space on the GSP
command queue. It uses a similar timeout to nouveau.
This lets `send_command` wait for space to free up in the command queue.
This is required to support continuation records which can fill up the
queue.
Tested-by: Zhi Wang <zhiw@nvidia.com>
Signed-off-by: Eliot Courtney <ecourtney@nvidia.com>
---
drivers/gpu/nova-core/gsp/cmdq.rs | 39 +++++++++++++++++++++++++++------------
1 file changed, 27 insertions(+), 12 deletions(-)
diff --git a/drivers/gpu/nova-core/gsp/cmdq.rs b/drivers/gpu/nova-core/gsp/cmdq.rs
index 87dbbd6d1be9..efbbc89f4d8a 100644
--- a/drivers/gpu/nova-core/gsp/cmdq.rs
+++ b/drivers/gpu/nova-core/gsp/cmdq.rs
@@ -250,6 +250,19 @@ fn new(dev: &device::Device<device::Bound>) -> Result<Self> {
}
}
+ /// Returns the size of the region of the CPU message queue that the driver is currently allowed
+ /// to write to, in bytes.
+ fn driver_write_area_size(&self) -> usize {
+ let tx = self.cpu_write_ptr();
+ let rx = self.gsp_read_ptr();
+
+ // `rx` and `tx` are both in `0..MSGQ_NUM_PAGES` per the invariants of `gsp_read_ptr` and
+ // `cpu_write_ptr`. The minimum value case is where `rx == 0` and `tx == MSGQ_NUM_PAGES -
+ // 1`, which gives `0 + MSGQ_NUM_PAGES - (MSGQ_NUM_PAGES - 1) - 1 == 0`.
+ let slots = (rx + MSGQ_NUM_PAGES - tx - 1) % MSGQ_NUM_PAGES;
+ num::u32_as_usize(slots) * GSP_PAGE_SIZE
+ }
+
/// Returns the region of the GSP message queue that the driver is currently allowed to read
/// from.
///
@@ -281,15 +294,22 @@ fn new(dev: &device::Device<device::Bound>) -> Result<Self> {
}
/// Allocates a region on the command queue that is large enough to send a command of `size`
- /// bytes.
+ /// bytes, waiting for space to become available based on the provided timeout.
///
/// This returns a [`GspCommand`] ready to be written to by the caller.
///
/// # Errors
///
- /// - `EAGAIN` if the driver area is too small to hold the requested command.
+ /// - `ETIMEDOUT` if space does not become available within the timeout.
/// - `EIO` if the command header is not properly aligned.
- fn allocate_command(&mut self, size: usize) -> Result<GspCommand<'_>> {
+ fn allocate_command(&mut self, size: usize, timeout: Delta) -> Result<GspCommand<'_>> {
+ read_poll_timeout(
+ || Ok(self.driver_write_area_size()),
+ |available_bytes| *available_bytes >= size_of::<GspMsgElement>() + size,
+ Delta::ZERO,
+ timeout,
+ )?;
+
// Get the current writable area as an array of bytes.
let (slice_1, slice_2) = {
let (slice_1, slice_2) = self.driver_write_area();
@@ -298,13 +318,6 @@ fn allocate_command(&mut self, size: usize) -> Result<GspCommand<'_>> {
(slice_1.as_flattened_mut(), slice_2.as_flattened_mut())
};
- // If the GSP is still processing previous messages the shared region
- // may be full in which case we will have to retry once the GSP has
- // processed the existing commands.
- if size_of::<GspMsgElement>() + size > slice_1.len() + slice_2.len() {
- return Err(EAGAIN);
- }
-
// Extract area for the `GspMsgElement`.
let (header, slice_1) = GspMsgElement::from_bytes_mut_prefix(slice_1).ok_or(EIO)?;
@@ -497,7 +510,7 @@ fn notify_gsp(bar: &Bar0) {
///
/// # Errors
///
- /// - `EAGAIN` if there was not enough space in the command queue to send the command.
+ /// - `ETIMEDOUT` if space does not become available within the timeout.
/// - `EIO` if the variable payload requested by the command has not been entirely
/// written to by its [`CommandToGsp::init_variable_payload`] method.
///
@@ -509,7 +522,9 @@ pub(crate) fn send_command<M>(&mut self, bar: &Bar0, command: M) -> Result
Error: From<M::InitError>,
{
let command_size = size_of::<M::Command>() + command.variable_payload_len();
- let dst = self.gsp_mem.allocate_command(command_size)?;
+ let dst = self
+ .gsp_mem
+ .allocate_command(command_size, Delta::from_secs(1))?;
// Extract area for the command itself.
let (cmd, payload_1) = M::Command::from_bytes_mut_prefix(dst.contents.0).ok_or(EIO)?;
--
2.53.0
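The free-space formula in `driver_write_area_size` is the classic ring-buffer calculation with one slot reserved. A userspace sketch of just that arithmetic (the constants' values are assumed here — the real `MSGQ_NUM_PAGES` and `GSP_PAGE_SIZE` live in the driver — and the pointers are taken as plain arguments rather than read from the shared queue):

```rust
const MSGQ_NUM_PAGES: u32 = 128; // assumed queue size for illustration
const GSP_PAGE_SIZE: usize = 4096; // assumed GSP page size for illustration

/// Writable bytes given `tx` (CPU write pointer) and `rx` (GSP read pointer),
/// both in `0..MSGQ_NUM_PAGES`. One slot is always kept empty so that a full
/// ring (`tx` just behind `rx`) is distinguishable from an empty one
/// (`tx == rx`).
fn driver_write_area_size(tx: u32, rx: u32) -> usize {
    let slots = (rx + MSGQ_NUM_PAGES - tx - 1) % MSGQ_NUM_PAGES;
    slots as usize * GSP_PAGE_SIZE
}

fn main() {
    // Empty ring: all but the reserved slot are writable.
    assert_eq!(driver_write_area_size(0, 0), 127 * GSP_PAGE_SIZE);
    // The minimum case from the patch comment: rx == 0 and
    // tx == MSGQ_NUM_PAGES - 1 yields zero writable bytes.
    assert_eq!(driver_write_area_size(MSGQ_NUM_PAGES - 1, 0), 0);
    println!("ok");
}
```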
* Re: [PATCH v4 2/9] gpu: nova-core: gsp: add mechanism to wait for space on command queue
2026-03-02 11:42 ` [PATCH v4 2/9] gpu: nova-core: gsp: add mechanism to wait for space on command queue Eliot Courtney
@ 2026-03-02 14:18 ` Gary Guo
2026-03-03 3:07 ` Eliot Courtney
2026-03-03 3:34 ` Claude review: " Claude Code Review Bot
1 sibling, 1 reply; 30+ messages in thread
From: Gary Guo @ 2026-03-02 14:18 UTC (permalink / raw)
To: Eliot Courtney, Danilo Krummrich, Alice Ryhl, Alexandre Courbot,
David Airlie, Simona Vetter, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Trevor Gross
Cc: Zhi Wang, John Hubbard, Alistair Popple, Joel Fernandes,
Timur Tabi, nouveau, dri-devel, linux-kernel, rust-for-linux
On Mon Mar 2, 2026 at 11:42 AM GMT, Eliot Courtney wrote:
> Add a timeout to `allocate_command` which waits for space on the GSP
> command queue. It uses a similar timeout to nouveau.
>
> This lets `send_command` wait for space to free up in the command queue.
> This is required to support continuation records which can fill up the
> queue.
Any reason this isn't implemented as a `poll_allocate_command` that just
returns `EAGAIN`, plus a wrapper function that retries until space becomes
available? That way the logic is cleaner.
(I write this with Rust async in mind)
>
> Tested-by: Zhi Wang <zhiw@nvidia.com>
> Signed-off-by: Eliot Courtney <ecourtney@nvidia.com>
> ---
> drivers/gpu/nova-core/gsp/cmdq.rs | 39 +++++++++++++++++++++++++++------------
> 1 file changed, 27 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/gpu/nova-core/gsp/cmdq.rs b/drivers/gpu/nova-core/gsp/cmdq.rs
> index 87dbbd6d1be9..efbbc89f4d8a 100644
> --- a/drivers/gpu/nova-core/gsp/cmdq.rs
> +++ b/drivers/gpu/nova-core/gsp/cmdq.rs
> @@ -250,6 +250,19 @@ fn new(dev: &device::Device<device::Bound>) -> Result<Self> {
> }
> }
>
> + /// Returns the size of the region of the CPU message queue that the driver is currently allowed
> + /// to write to, in bytes.
> + fn driver_write_area_size(&self) -> usize {
> + let tx = self.cpu_write_ptr();
> + let rx = self.gsp_read_ptr();
> +
> + // `rx` and `tx` are both in `0..MSGQ_NUM_PAGES` per the invariants of `gsp_read_ptr` and
> + // `cpu_write_ptr`. The minimum value case is where `rx == 0` and `tx == MSGQ_NUM_PAGES -
> + // 1`, which gives `0 + MSGQ_NUM_PAGES - (MSGQ_NUM_PAGES - 1) - 1 == 0`.
> + let slots = (rx + MSGQ_NUM_PAGES - tx - 1) % MSGQ_NUM_PAGES;
> + num::u32_as_usize(slots) * GSP_PAGE_SIZE
> + }
> +
> /// Returns the region of the GSP message queue that the driver is currently allowed to read
> /// from.
> ///
> @@ -281,15 +294,22 @@ fn new(dev: &device::Device<device::Bound>) -> Result<Self> {
> }
>
> /// Allocates a region on the command queue that is large enough to send a command of `size`
> - /// bytes.
> + /// bytes, waiting for space to become available based on the provided timeout.
> ///
> /// This returns a [`GspCommand`] ready to be written to by the caller.
> ///
> /// # Errors
> ///
> - /// - `EAGAIN` if the driver area is too small to hold the requested command.
> + /// - `ETIMEDOUT` if space does not become available within the timeout.
> /// - `EIO` if the command header is not properly aligned.
> - fn allocate_command(&mut self, size: usize) -> Result<GspCommand<'_>> {
> + fn allocate_command(&mut self, size: usize, timeout: Delta) -> Result<GspCommand<'_>> {
> + read_poll_timeout(
> + || Ok(self.driver_write_area_size()),
> + |available_bytes| *available_bytes >= size_of::<GspMsgElement>() + size,
> + Delta::ZERO,
> + timeout,
> + )?;
> +
> // Get the current writable area as an array of bytes.
> let (slice_1, slice_2) = {
> let (slice_1, slice_2) = self.driver_write_area();
> @@ -298,13 +318,6 @@ fn allocate_command(&mut self, size: usize) -> Result<GspCommand<'_>> {
> (slice_1.as_flattened_mut(), slice_2.as_flattened_mut())
> };
>
> - // If the GSP is still processing previous messages the shared region
> - // may be full in which case we will have to retry once the GSP has
> - // processed the existing commands.
> - if size_of::<GspMsgElement>() + size > slice_1.len() + slice_2.len() {
> - return Err(EAGAIN);
> - }
> -
> // Extract area for the `GspMsgElement`.
> let (header, slice_1) = GspMsgElement::from_bytes_mut_prefix(slice_1).ok_or(EIO)?;
>
> @@ -497,7 +510,7 @@ fn notify_gsp(bar: &Bar0) {
> ///
> /// # Errors
> ///
> - /// - `EAGAIN` if there was not enough space in the command queue to send the command.
> + /// - `ETIMEDOUT` if space does not become available within the timeout.
> /// - `EIO` if the variable payload requested by the command has not been entirely
> /// written to by its [`CommandToGsp::init_variable_payload`] method.
> ///
> @@ -509,7 +522,9 @@ pub(crate) fn send_command<M>(&mut self, bar: &Bar0, command: M) -> Result
> Error: From<M::InitError>,
> {
> let command_size = size_of::<M::Command>() + command.variable_payload_len();
> - let dst = self.gsp_mem.allocate_command(command_size)?;
> + let dst = self
> + .gsp_mem
> + .allocate_command(command_size, Delta::from_secs(1))?;
Let's make these consts, not magic numbers.
Best,
Gary
>
> // Extract area for the command itself.
> let (cmd, payload_1) = M::Command::from_bytes_mut_prefix(dst.contents.0).ok_or(EIO)?;
* Re: [PATCH v4 2/9] gpu: nova-core: gsp: add mechanism to wait for space on command queue
2026-03-02 14:18 ` Gary Guo
@ 2026-03-03 3:07 ` Eliot Courtney
0 siblings, 0 replies; 30+ messages in thread
From: Eliot Courtney @ 2026-03-03 3:07 UTC (permalink / raw)
To: Gary Guo, Eliot Courtney, Danilo Krummrich, Alice Ryhl,
Alexandre Courbot, David Airlie, Simona Vetter, Miguel Ojeda,
Boqun Feng, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Trevor Gross
Cc: Zhi Wang, Alistair Popple, Joel Fernandes, nouveau, dri-devel,
linux-kernel, rust-for-linux
On Mon Mar 2, 2026 at 11:18 PM JST, Gary Guo wrote:
> On Mon Mar 2, 2026 at 11:42 AM GMT, Eliot Courtney wrote:
>> Add a timeout to `allocate_command` which waits for space on the GSP
>> command queue. It uses a similar timeout to nouveau.
>>
>> This lets `send_command` wait for space to free up in the command queue.
>> This is required to support continuation records which can fill up the
>> queue.
>
> Any reason this isn't implemented as a `poll_allocate_command` that just
> returns `EAGAIN`, plus a wrapper function that retries until space becomes
> available? That way the logic is cleaner.
This kind of structure is what I originally did (see
https://lore.kernel.org/all/DGHRDFE9M6P7.L7JEOCLL3VS9@nvidia.com/)
although it wasn't quite as nice since `allocate_command` takes
&mut self and isn't callable from `read_poll_timeout`. Alex suggested
merging these into one as it simplifies a few things. But maybe I am
missing some better way to do it.
* Claude review: gpu: nova-core: gsp: add mechanism to wait for space on command queue
2026-03-02 11:42 ` [PATCH v4 2/9] gpu: nova-core: gsp: add mechanism to wait for space on command queue Eliot Courtney
2026-03-02 14:18 ` Gary Guo
@ 2026-03-03 3:34 ` Claude Code Review Bot
1 sibling, 0 replies; 30+ messages in thread
From: Claude Code Review Bot @ 2026-03-03 3:34 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Adds `driver_write_area_size()` and changes `allocate_command()` to poll-wait with a timeout instead of immediately returning `EAGAIN`.
The available-space calculation looks correct:
```rust
let slots = (rx + MSGQ_NUM_PAGES - tx - 1) % MSGQ_NUM_PAGES;
num::u32_as_usize(slots) * GSP_PAGE_SIZE
```
The comment accurately describes the minimum case (rx==0, tx==MSGQ_NUM_PAGES-1 → 0 slots).
**Minor observation:** The poll interval is `Delta::ZERO`:
```rust
read_poll_timeout(
|| Ok(self.driver_write_area_size()),
|available_bytes| *available_bytes >= size_of::<GspMsgElement>() + size,
Delta::ZERO,
timeout,
)?;
```
This means busy-polling with no sleep between checks. For the typical case where the queue has space, this returns immediately (one iteration). For the rare case where the queue is full, this busy-spins for up to 1 second. Depending on how `read_poll_timeout` is implemented in the Rust kernel bindings, this could consume significant CPU. Consider whether a small sleep interval (e.g., `Delta::from_millis(1)`, matching the pattern used in `wait_for_msg`) would be more appropriate for the queue-full case.
The 1-second timeout at the call site (`Delta::from_secs(1)`) seems reasonable and matches nouveau's approach.
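For readers unfamiliar with the helper, a userspace sketch of `read_poll_timeout`-style semantics (std-only; the in-kernel version differs in signature and returns `ETIMEDOUT`) shows why a zero interval busy-spins while a nonzero one sleeps between attempts:

```rust
use std::time::{Duration, Instant};

// Polls `op` until `cond` holds or `timeout` elapses, sleeping `interval`
// between attempts. A zero `interval` busy-spins, which is the behavior
// discussed above.
fn read_poll_timeout<T, E>(
    mut op: impl FnMut() -> Result<T, E>,
    cond: impl Fn(&T) -> bool,
    interval: Duration,
    timeout: Duration,
) -> Result<T, &'static str> {
    let deadline = Instant::now() + timeout;
    loop {
        if let Ok(v) = op() {
            if cond(&v) {
                return Ok(v);
            }
        }
        if Instant::now() >= deadline {
            return Err("timed out"); // stands in for ETIMEDOUT
        }
        if !interval.is_zero() {
            std::thread::sleep(interval);
        }
    }
}

fn main() {
    // Condition already true: returns on the first iteration, no sleeping.
    let ready = read_poll_timeout(|| Ok::<u32, ()>(7), |v| *v == 7,
                                  Duration::ZERO, Duration::from_millis(10));
    assert_eq!(ready.unwrap(), 7);

    // Condition never true: sleeps 1 ms per attempt until the ~5 ms deadline.
    let full = read_poll_timeout(|| Ok::<u32, ()>(0), |v| *v == 7,
                                 Duration::from_millis(1), Duration::from_millis(5));
    assert!(full.is_err());
    println!("ok");
}
```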
---
Generated by Claude Code Patch Reviewer
* [PATCH v4 3/9] rust: add EMSGSIZE error code
2026-03-02 11:42 [PATCH v4 0/9] gpu: nova-core: gsp: add continuation record support Eliot Courtney
2026-03-02 11:42 ` [PATCH v4 1/9] gpu: nova-core: gsp: sort MsgFunction variants alphabetically Eliot Courtney
2026-03-02 11:42 ` [PATCH v4 2/9] gpu: nova-core: gsp: add mechanism to wait for space on command queue Eliot Courtney
@ 2026-03-02 11:42 ` Eliot Courtney
2026-03-02 14:19 ` Gary Guo
2026-03-03 3:34 ` Claude review: " Claude Code Review Bot
2026-03-02 11:42 ` [PATCH v4 4/9] gpu: nova-core: gsp: add checking oversized commands Eliot Courtney
` (6 subsequent siblings)
9 siblings, 2 replies; 30+ messages in thread
From: Eliot Courtney @ 2026-03-02 11:42 UTC (permalink / raw)
To: Danilo Krummrich, Alice Ryhl, Alexandre Courbot, David Airlie,
Simona Vetter, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Trevor Gross
Cc: Zhi Wang, John Hubbard, Alistair Popple, Joel Fernandes,
Timur Tabi, nouveau, dri-devel, linux-kernel, rust-for-linux,
Eliot Courtney
Add the EMSGSIZE error code, which indicates that a message is too
long.
Tested-by: Zhi Wang <zhiw@nvidia.com>
Acked-by: Miguel Ojeda <ojeda@kernel.org>
Signed-off-by: Eliot Courtney <ecourtney@nvidia.com>
---
rust/kernel/error.rs | 1 +
1 file changed, 1 insertion(+)
diff --git a/rust/kernel/error.rs b/rust/kernel/error.rs
index 258b12afdcba..10fcf1f0404d 100644
--- a/rust/kernel/error.rs
+++ b/rust/kernel/error.rs
@@ -67,6 +67,7 @@ macro_rules! declare_err {
declare_err!(EDOM, "Math argument out of domain of func.");
declare_err!(ERANGE, "Math result not representable.");
declare_err!(EOVERFLOW, "Value too large for defined data type.");
+ declare_err!(EMSGSIZE, "Message too long.");
declare_err!(ETIMEDOUT, "Connection timed out.");
declare_err!(ERESTARTSYS, "Restart the system call.");
declare_err!(ERESTARTNOINTR, "System call was interrupted by a signal and will be restarted.");
--
2.53.0
^ permalink raw reply related [flat|nested] 30+ messages in thread
* Re: [PATCH v4 3/9] rust: add EMSGSIZE error code
2026-03-02 11:42 ` [PATCH v4 3/9] rust: add EMSGSIZE error code Eliot Courtney
@ 2026-03-02 14:19 ` Gary Guo
2026-03-03 3:34 ` Claude review: " Claude Code Review Bot
1 sibling, 0 replies; 30+ messages in thread
From: Gary Guo @ 2026-03-02 14:19 UTC (permalink / raw)
To: Eliot Courtney, Danilo Krummrich, Alice Ryhl, Alexandre Courbot,
David Airlie, Simona Vetter, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Trevor Gross
Cc: Zhi Wang, John Hubbard, Alistair Popple, Joel Fernandes,
Timur Tabi, nouveau, dri-devel, linux-kernel, rust-for-linux
On Mon Mar 2, 2026 at 11:42 AM GMT, Eliot Courtney wrote:
> Add the EMSGSIZE error code, which indicates that a message is too
> long.
>
> Tested-by: Zhi Wang <zhiw@nvidia.com>
> Acked-by: Miguel Ojeda <ojeda@kernel.org>
> Signed-off-by: Eliot Courtney <ecourtney@nvidia.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
> ---
> rust/kernel/error.rs | 1 +
> 1 file changed, 1 insertion(+)
^ permalink raw reply [flat|nested] 30+ messages in thread
* Claude review: rust: add EMSGSIZE error code
2026-03-02 11:42 ` [PATCH v4 3/9] rust: add EMSGSIZE error code Eliot Courtney
2026-03-02 14:19 ` Gary Guo
@ 2026-03-03 3:34 ` Claude Code Review Bot
1 sibling, 0 replies; 30+ messages in thread
From: Claude Code Review Bot @ 2026-03-03 3:34 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Trivial one-liner adding `EMSGSIZE` to the Rust kernel error code declarations. Already has `Acked-by: Miguel Ojeda`. No issues.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH v4 4/9] gpu: nova-core: gsp: add checking oversized commands
2026-03-02 11:42 [PATCH v4 0/9] gpu: nova-core: gsp: add continuation record support Eliot Courtney
` (2 preceding siblings ...)
2026-03-02 11:42 ` [PATCH v4 3/9] rust: add EMSGSIZE error code Eliot Courtney
@ 2026-03-02 11:42 ` Eliot Courtney
2026-03-03 3:34 ` Claude review: " Claude Code Review Bot
2026-03-02 11:42 ` [PATCH v4 5/9] gpu: nova-core: gsp: clarify invariant on command queue Eliot Courtney
` (5 subsequent siblings)
9 siblings, 1 reply; 30+ messages in thread
From: Eliot Courtney @ 2026-03-02 11:42 UTC (permalink / raw)
To: Danilo Krummrich, Alice Ryhl, Alexandre Courbot, David Airlie,
Simona Vetter, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Trevor Gross
Cc: Zhi Wang, John Hubbard, Alistair Popple, Joel Fernandes,
Timur Tabi, nouveau, dri-devel, linux-kernel, rust-for-linux,
Eliot Courtney
The limit for a single command sent to the GSP is 16 pages. Return an
error if `allocate_command` is called with a size that is too large.
Tested-by: Zhi Wang <zhiw@nvidia.com>
Signed-off-by: Eliot Courtney <ecourtney@nvidia.com>
---
drivers/gpu/nova-core/gsp/cmdq.rs | 7 ++++++-
drivers/gpu/nova-core/gsp/fw.rs | 4 ++++
drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs | 1 +
3 files changed, 11 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/nova-core/gsp/cmdq.rs b/drivers/gpu/nova-core/gsp/cmdq.rs
index efbbc89f4d8a..3c2652be74bf 100644
--- a/drivers/gpu/nova-core/gsp/cmdq.rs
+++ b/drivers/gpu/nova-core/gsp/cmdq.rs
@@ -32,7 +32,8 @@
GspMsgElement,
MsgFunction,
MsgqRxHeader,
- MsgqTxHeader, //
+ MsgqTxHeader,
+ GSP_MSG_QUEUE_ELEMENT_SIZE_MAX, //
},
PteArray,
GSP_PAGE_SHIFT,
@@ -300,9 +301,13 @@ fn driver_write_area_size(&self) -> usize {
///
/// # Errors
///
+ /// - `EMSGSIZE` if the command is larger than [`GSP_MSG_QUEUE_ELEMENT_SIZE_MAX`].
/// - `ETIMEDOUT` if space does not become available within the timeout.
/// - `EIO` if the command header is not properly aligned.
fn allocate_command(&mut self, size: usize, timeout: Delta) -> Result<GspCommand<'_>> {
+ if size_of::<GspMsgElement>() + size > GSP_MSG_QUEUE_ELEMENT_SIZE_MAX {
+ return Err(EMSGSIZE);
+ }
read_poll_timeout(
|| Ok(self.driver_write_area_size()),
|available_bytes| *available_bytes >= size_of::<GspMsgElement>() + size,
diff --git a/drivers/gpu/nova-core/gsp/fw.rs b/drivers/gpu/nova-core/gsp/fw.rs
index 4b998485360b..6005362450cb 100644
--- a/drivers/gpu/nova-core/gsp/fw.rs
+++ b/drivers/gpu/nova-core/gsp/fw.rs
@@ -39,6 +39,10 @@
},
};
+/// Maximum size of a single GSP message queue element in bytes.
+pub(crate) const GSP_MSG_QUEUE_ELEMENT_SIZE_MAX: usize =
+ num::u32_as_usize(bindings::GSP_MSG_QUEUE_ELEMENT_SIZE_MAX);
+
/// Empty type to group methods related to heap parameters for running the GSP firmware.
enum GspFwHeapParams {}
diff --git a/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs b/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
index 6d25fe0bffa9..334e8be5fde8 100644
--- a/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
+++ b/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
@@ -43,6 +43,7 @@ fn fmt(&self, fmt: &mut ::core::fmt::Formatter<'_>) -> ::core::fmt::Result {
pub const GSP_FW_WPR_META_REVISION: u32 = 1;
pub const GSP_FW_WPR_META_MAGIC: i64 = -2577556379034558285;
pub const REGISTRY_TABLE_ENTRY_TYPE_DWORD: u32 = 1;
+pub const GSP_MSG_QUEUE_ELEMENT_SIZE_MAX: u32 = 65536;
pub type __u8 = ffi::c_uchar;
pub type __u16 = ffi::c_ushort;
pub type __u32 = ffi::c_uint;
--
2.53.0
^ permalink raw reply related [flat|nested] 30+ messages in thread
* Claude review: gpu: nova-core: gsp: add checking oversized commands
2026-03-02 11:42 ` [PATCH v4 4/9] gpu: nova-core: gsp: add checking oversized commands Eliot Courtney
@ 2026-03-03 3:34 ` Claude Code Review Bot
0 siblings, 0 replies; 30+ messages in thread
From: Claude Code Review Bot @ 2026-03-03 3:34 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Adds the `GSP_MSG_QUEUE_ELEMENT_SIZE_MAX` constant (65536 = 16 pages) and a guard in `allocate_command` to reject commands that exceed this limit:
```rust
if size_of::<GspMsgElement>() + size > GSP_MSG_QUEUE_ELEMENT_SIZE_MAX {
return Err(EMSGSIZE);
}
```
This is correct. The check is placed before the poll-wait, which avoids a pointless 1-second timeout for commands that can never fit. The constant is properly sourced from the bindings.
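The guard's arithmetic can be sketched standalone; the 64-byte header size below is an assumption used only to make the example self-contained (the real `GspMsgElement` layout lives in fw.rs):

```rust
// Standalone sketch of the oversize guard; header size is assumed.
const GSP_MSG_QUEUE_ELEMENT_SIZE_MAX: usize = 65536; // 16 pages of 4 KiB
const MSG_ELEMENT_HEADER_SIZE: usize = 64;

fn check_command_size(size: usize) -> Result<(), &'static str> {
    // Header plus payload must fit in a single queue element.
    if MSG_ELEMENT_HEADER_SIZE + size > GSP_MSG_QUEUE_ELEMENT_SIZE_MAX {
        return Err("EMSGSIZE");
    }
    Ok(())
}

fn main() {
    // Largest command that still fits alongside the header.
    assert!(check_command_size(GSP_MSG_QUEUE_ELEMENT_SIZE_MAX - MSG_ELEMENT_HEADER_SIZE).is_ok());
    // One byte more is rejected before any poll-wait is attempted.
    assert!(check_command_size(GSP_MSG_QUEUE_ELEMENT_SIZE_MAX - MSG_ELEMENT_HEADER_SIZE + 1).is_err());
}
```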
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH v4 5/9] gpu: nova-core: gsp: clarify invariant on command queue
2026-03-02 11:42 [PATCH v4 0/9] gpu: nova-core: gsp: add continuation record support Eliot Courtney
` (3 preceding siblings ...)
2026-03-02 11:42 ` [PATCH v4 4/9] gpu: nova-core: gsp: add checking oversized commands Eliot Courtney
@ 2026-03-02 11:42 ` Eliot Courtney
2026-03-03 3:34 ` Claude review: " Claude Code Review Bot
2026-03-02 11:42 ` [PATCH v4 6/9] gpu: nova-core: gsp: unconditionally call variable payload handling Eliot Courtney
` (4 subsequent siblings)
9 siblings, 1 reply; 30+ messages in thread
From: Eliot Courtney @ 2026-03-02 11:42 UTC (permalink / raw)
To: Danilo Krummrich, Alice Ryhl, Alexandre Courbot, David Airlie,
Simona Vetter, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Trevor Gross
Cc: Zhi Wang, John Hubbard, Alistair Popple, Joel Fernandes,
Timur Tabi, nouveau, dri-devel, linux-kernel, rust-for-linux,
Eliot Courtney
Clarify why using only the first returned slice from allocate_command
for the message headers is okay.
Tested-by: Zhi Wang <zhiw@nvidia.com>
Signed-off-by: Eliot Courtney <ecourtney@nvidia.com>
---
drivers/gpu/nova-core/gsp/cmdq.rs | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/nova-core/gsp/cmdq.rs b/drivers/gpu/nova-core/gsp/cmdq.rs
index 3c2652be74bf..3653212efa7a 100644
--- a/drivers/gpu/nova-core/gsp/cmdq.rs
+++ b/drivers/gpu/nova-core/gsp/cmdq.rs
@@ -531,7 +531,9 @@ pub(crate) fn send_command<M>(&mut self, bar: &Bar0, command: M) -> Result
.gsp_mem
.allocate_command(command_size, Delta::from_secs(1))?;
- // Extract area for the command itself.
+ // Extract area for the command itself. The GSP message header and the command header
+ // together are guaranteed to fit entirely into a single page, so it's ok to only look
+ // at `dst.contents.0` here.
let (cmd, payload_1) = M::Command::from_bytes_mut_prefix(dst.contents.0).ok_or(EIO)?;
// Fill the header and command in-place.
--
2.53.0
^ permalink raw reply related [flat|nested] 30+ messages in thread
* Claude review: gpu: nova-core: gsp: clarify invariant on command queue
2026-03-02 11:42 ` [PATCH v4 5/9] gpu: nova-core: gsp: clarify invariant on command queue Eliot Courtney
@ 2026-03-03 3:34 ` Claude Code Review Bot
0 siblings, 0 replies; 30+ messages in thread
From: Claude Code Review Bot @ 2026-03-03 3:34 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Adds a comment explaining why only `dst.contents.0` is used for the command header extraction:
```rust
// Extract area for the command itself. The GSP message header and the command header
// together are guaranteed to fit entirely into a single page, so it's ok to only look
// at `dst.contents.0` here.
```
Good clarification. Since `GspMsgElement` is 64 bytes and command headers are small, they'll always fit within the first contiguous slice of the circular buffer.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH v4 6/9] gpu: nova-core: gsp: unconditionally call variable payload handling
2026-03-02 11:42 [PATCH v4 0/9] gpu: nova-core: gsp: add continuation record support Eliot Courtney
` (4 preceding siblings ...)
2026-03-02 11:42 ` [PATCH v4 5/9] gpu: nova-core: gsp: clarify invariant on command queue Eliot Courtney
@ 2026-03-02 11:42 ` Eliot Courtney
2026-03-03 3:34 ` Claude review: " Claude Code Review Bot
2026-03-02 11:42 ` [PATCH v4 7/9] gpu: nova-core: gsp: add command_size helper Eliot Courtney
` (3 subsequent siblings)
9 siblings, 1 reply; 30+ messages in thread
From: Eliot Courtney @ 2026-03-02 11:42 UTC (permalink / raw)
To: Danilo Krummrich, Alice Ryhl, Alexandre Courbot, David Airlie,
Simona Vetter, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Trevor Gross
Cc: Zhi Wang, John Hubbard, Alistair Popple, Joel Fernandes,
Timur Tabi, nouveau, dri-devel, linux-kernel, rust-for-linux,
Eliot Courtney
Unconditionally call the variable-length payload code. This is a no-op
if there is no such payload, but it can defensively catch some coding
errors, e.g. by checking that the allocated size is completely filled.
Tested-by: Zhi Wang <zhiw@nvidia.com>
Signed-off-by: Eliot Courtney <ecourtney@nvidia.com>
---
drivers/gpu/nova-core/gsp/cmdq.rs | 14 ++++++--------
1 file changed, 6 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/nova-core/gsp/cmdq.rs b/drivers/gpu/nova-core/gsp/cmdq.rs
index 3653212efa7a..9f74f0898d90 100644
--- a/drivers/gpu/nova-core/gsp/cmdq.rs
+++ b/drivers/gpu/nova-core/gsp/cmdq.rs
@@ -545,16 +545,14 @@ pub(crate) fn send_command<M>(&mut self, bar: &Bar0, command: M) -> Result
command.init().__init(core::ptr::from_mut(cmd))?;
}
- // Fill the variable-length payload.
- if command_size > size_of::<M::Command>() {
- let mut sbuffer =
- SBufferIter::new_writer([&mut payload_1[..], &mut dst.contents.1[..]]);
- command.init_variable_payload(&mut sbuffer)?;
+ // Fill the variable-length payload, which may be empty.
+ let mut sbuffer = SBufferIter::new_writer([&mut payload_1[..], &mut dst.contents.1[..]]);
+ command.init_variable_payload(&mut sbuffer)?;
- if !sbuffer.is_empty() {
- return Err(EIO);
- }
+ if !sbuffer.is_empty() {
+ return Err(EIO);
}
+ drop(sbuffer);
// Compute checksum now that the whole message is ready.
dst.header
--
2.53.0
^ permalink raw reply related [flat|nested] 30+ messages in thread
* Claude review: gpu: nova-core: gsp: unconditionally call variable payload handling
2026-03-02 11:42 ` [PATCH v4 6/9] gpu: nova-core: gsp: unconditionally call variable payload handling Eliot Courtney
@ 2026-03-03 3:34 ` Claude Code Review Bot
0 siblings, 0 replies; 30+ messages in thread
From: Claude Code Review Bot @ 2026-03-03 3:34 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Removes the `if command_size > size_of::<M::Command>()` guard around `init_variable_payload`, making it always run even for commands with no variable payload. The explicit `drop(sbuffer)` is added to release the mutable borrows before the checksum computation.
This is a sensible defensive change. For zero-payload commands, `init_variable_payload` is a no-op (default implementation returns `Ok(())`), and `sbuffer.is_empty()` will be true since the iterator has zero bytes remaining. The `drop(sbuffer)` is necessary for borrow checker correctness.
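The borrow pattern can be illustrated with a minimal standalone writer type (hypothetical, not the driver's `SBufferIter`): a writer holding a mutable borrow of a buffer must go out of scope before the buffer can be read again, e.g. for a checksum pass.

```rust
// Minimal standalone illustration of the drop-before-reborrow pattern.
struct Writer<'a> {
    buf: &'a mut [u8],
    pos: usize,
}

impl<'a> Writer<'a> {
    fn write_all(&mut self, data: &[u8]) {
        self.buf[self.pos..self.pos + data.len()].copy_from_slice(data);
        self.pos += data.len();
    }
    /// True once the whole buffer has been filled.
    fn is_empty(&self) -> bool {
        self.pos == self.buf.len()
    }
}

fn main() {
    let mut page = [0u8; 4];
    let mut writer = Writer { buf: &mut page, pos: 0 };
    writer.write_all(&[1, 2, 3, 4]);
    assert!(writer.is_empty());
    drop(writer); // releases the mutable borrow of `page`
    // `page` can now be borrowed again, e.g. to compute a checksum.
    let checksum: u32 = page.iter().map(|&b| u32::from(b)).sum();
    assert_eq!(checksum, 10);
}
```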
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH v4 7/9] gpu: nova-core: gsp: add command_size helper
2026-03-02 11:42 [PATCH v4 0/9] gpu: nova-core: gsp: add continuation record support Eliot Courtney
` (5 preceding siblings ...)
2026-03-02 11:42 ` [PATCH v4 6/9] gpu: nova-core: gsp: unconditionally call variable payload handling Eliot Courtney
@ 2026-03-02 11:42 ` Eliot Courtney
2026-03-02 14:22 ` Gary Guo
2026-03-03 3:34 ` Claude review: " Claude Code Review Bot
2026-03-02 11:42 ` [PATCH v4 8/9] gpu: nova-core: gsp: support large RPCs via continuation record Eliot Courtney
` (2 subsequent siblings)
9 siblings, 2 replies; 30+ messages in thread
From: Eliot Courtney @ 2026-03-02 11:42 UTC (permalink / raw)
To: Danilo Krummrich, Alice Ryhl, Alexandre Courbot, David Airlie,
Simona Vetter, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Trevor Gross
Cc: Zhi Wang, John Hubbard, Alistair Popple, Joel Fernandes,
Timur Tabi, nouveau, dri-devel, linux-kernel, rust-for-linux,
Eliot Courtney
Add a helper function which computes the size of a command.
Tested-by: Zhi Wang <zhiw@nvidia.com>
Signed-off-by: Eliot Courtney <ecourtney@nvidia.com>
---
drivers/gpu/nova-core/gsp/cmdq.rs | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/nova-core/gsp/cmdq.rs b/drivers/gpu/nova-core/gsp/cmdq.rs
index 9f74f0898d90..4a663a5b3437 100644
--- a/drivers/gpu/nova-core/gsp/cmdq.rs
+++ b/drivers/gpu/nova-core/gsp/cmdq.rs
@@ -450,6 +450,15 @@ struct GspMessage<'a> {
contents: (&'a [u8], &'a [u8]),
}
+/// Computes the total size of the command (including its variable-length payload) without the
+/// [`GspMsgElement`] header.
+fn command_size<M>(command: &M) -> usize
+where
+ M: CommandToGsp,
+{
+ size_of::<M::Command>() + command.variable_payload_len()
+}
+
/// GSP command queue.
///
/// Provides the ability to send commands and receive messages from the GSP using a shared memory
@@ -526,7 +535,7 @@ pub(crate) fn send_command<M>(&mut self, bar: &Bar0, command: M) -> Result
// This allows all error types, including `Infallible`, to be used for `M::InitError`.
Error: From<M::InitError>,
{
- let command_size = size_of::<M::Command>() + command.variable_payload_len();
+ let command_size = command_size(&command);
let dst = self
.gsp_mem
.allocate_command(command_size, Delta::from_secs(1))?;
--
2.53.0
^ permalink raw reply related [flat|nested] 30+ messages in thread
* Re: [PATCH v4 7/9] gpu: nova-core: gsp: add command_size helper
2026-03-02 11:42 ` [PATCH v4 7/9] gpu: nova-core: gsp: add command_size helper Eliot Courtney
@ 2026-03-02 14:22 ` Gary Guo
2026-03-03 3:12 ` Eliot Courtney
2026-03-03 3:34 ` Claude review: " Claude Code Review Bot
1 sibling, 1 reply; 30+ messages in thread
From: Gary Guo @ 2026-03-02 14:22 UTC (permalink / raw)
To: Eliot Courtney, Danilo Krummrich, Alice Ryhl, Alexandre Courbot,
David Airlie, Simona Vetter, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Trevor Gross
Cc: Zhi Wang, John Hubbard, Alistair Popple, Joel Fernandes,
Timur Tabi, nouveau, dri-devel, linux-kernel, rust-for-linux
On Mon Mar 2, 2026 at 11:42 AM GMT, Eliot Courtney wrote:
> Add a helper function which computes the size of a command.
>
> Tested-by: Zhi Wang <zhiw@nvidia.com>
> Signed-off-by: Eliot Courtney <ecourtney@nvidia.com>
> ---
> drivers/gpu/nova-core/gsp/cmdq.rs | 11 ++++++++++-
> 1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/nova-core/gsp/cmdq.rs b/drivers/gpu/nova-core/gsp/cmdq.rs
> index 9f74f0898d90..4a663a5b3437 100644
> --- a/drivers/gpu/nova-core/gsp/cmdq.rs
> +++ b/drivers/gpu/nova-core/gsp/cmdq.rs
> @@ -450,6 +450,15 @@ struct GspMessage<'a> {
> contents: (&'a [u8], &'a [u8]),
> }
>
> +/// Computes the total size of the command (including its variable-length payload) without the
> +/// [`GspMsgElement`] header.
> +fn command_size<M>(command: &M) -> usize
> +where
> + M: CommandToGsp,
> +{
> + size_of::<M::Command>() + command.variable_payload_len()
> +}
> +
This could just be a provided method on `CommandToGsp`?
Best,
Gary
> /// GSP command queue.
> ///
> /// Provides the ability to send commands and receive messages from the GSP using a shared memory
> @@ -526,7 +535,7 @@ pub(crate) fn send_command<M>(&mut self, bar: &Bar0, command: M) -> Result
> // This allows all error types, including `Infallible`, to be used for `M::InitError`.
> Error: From<M::InitError>,
> {
> - let command_size = size_of::<M::Command>() + command.variable_payload_len();
> + let command_size = command_size(&command);
> let dst = self
> .gsp_mem
> .allocate_command(command_size, Delta::from_secs(1))?;
^ permalink raw reply [flat|nested] 30+ messages in thread* Re: [PATCH v4 7/9] gpu: nova-core: gsp: add command_size helper
2026-03-02 14:22 ` Gary Guo
@ 2026-03-03 3:12 ` Eliot Courtney
0 siblings, 0 replies; 30+ messages in thread
From: Eliot Courtney @ 2026-03-03 3:12 UTC (permalink / raw)
To: Gary Guo, Eliot Courtney, Danilo Krummrich, Alice Ryhl,
Alexandre Courbot, David Airlie, Simona Vetter, Miguel Ojeda,
Boqun Feng, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Trevor Gross
Cc: Zhi Wang, Alistair Popple, Joel Fernandes, nouveau, dri-devel,
linux-kernel, rust-for-linux
On Mon Mar 2, 2026 at 11:22 PM JST, Gary Guo wrote:
> On Mon Mar 2, 2026 at 11:42 AM GMT, Eliot Courtney wrote:
>> Add a helper function which computes the size of a command.
>>
>> Tested-by: Zhi Wang <zhiw@nvidia.com>
>> Signed-off-by: Eliot Courtney <ecourtney@nvidia.com>
>> ---
>> drivers/gpu/nova-core/gsp/cmdq.rs | 11 ++++++++++-
>> 1 file changed, 10 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/gpu/nova-core/gsp/cmdq.rs b/drivers/gpu/nova-core/gsp/cmdq.rs
>> index 9f74f0898d90..4a663a5b3437 100644
>> --- a/drivers/gpu/nova-core/gsp/cmdq.rs
>> +++ b/drivers/gpu/nova-core/gsp/cmdq.rs
>> @@ -450,6 +450,15 @@ struct GspMessage<'a> {
>> contents: (&'a [u8], &'a [u8]),
>> }
>>
>> +/// Computes the total size of the command (including its variable-length payload) without the
>> +/// [`GspMsgElement`] header.
>> +fn command_size<M>(command: &M) -> usize
>> +where
>> + M: CommandToGsp,
>> +{
>> + size_of::<M::Command>() + command.variable_payload_len()
>> +}
>> +
>
> This could just be a provided method on `CommandToGsp`?
I discussed this with Alex before[1], and my idea was that it's odd to
add it as a default trait method, since implementors could override it
when they really shouldn't be able to: it needs to agree with the size
of the command and the variable payload length.
WDYT? Is there a clear convention for doing this across the kernel? It
just feels odd to me, although I agree it's useful. It would be perfect
if there were an easy way to prevent it from being overridden.
[1]: https://lore.kernel.org/all/DGHSGO2E0U9F.2M8MOSKBNA9JY@nvidia.com
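For illustration, the concern can be sketched in plain Rust with a hypothetical trait (not the actual `CommandToGsp`): a provided method that is supposed to uphold an invariant can be silently overridden by an implementor.

```rust
// Hypothetical trait, for illustration only.
trait CommandLen {
    fn header_len(&self) -> usize;
    fn payload_len(&self) -> usize;
    // Provided method: implementors *can* override this, even though it
    // must always equal header_len() + payload_len().
    fn total_len(&self) -> usize {
        self.header_len() + self.payload_len()
    }
}

struct Good;
impl CommandLen for Good {
    fn header_len(&self) -> usize { 16 }
    fn payload_len(&self) -> usize { 100 }
}

struct Bad;
impl CommandLen for Bad {
    fn header_len(&self) -> usize { 16 }
    fn payload_len(&self) -> usize { 100 }
    fn total_len(&self) -> usize { 0 } // invariant silently broken
}

fn main() {
    assert_eq!(Good.total_len(), 116);
    assert_eq!(Bad.total_len(), 0); // compiles fine, invariant violated
}
```

A free function (as in the patch) sidesteps this, at the cost of not being discoverable on the trait itself.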
^ permalink raw reply [flat|nested] 30+ messages in thread
* Claude review: gpu: nova-core: gsp: add command_size helper
2026-03-02 11:42 ` [PATCH v4 7/9] gpu: nova-core: gsp: add command_size helper Eliot Courtney
2026-03-02 14:22 ` Gary Guo
@ 2026-03-03 3:34 ` Claude Code Review Bot
1 sibling, 0 replies; 30+ messages in thread
From: Claude Code Review Bot @ 2026-03-03 3:34 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Extracts the command size calculation into a standalone function:
```rust
pub(crate) fn command_size<M>(command: &M) -> usize
where
M: CommandToGsp,
{
size_of::<M::Command>() + command.variable_payload_len()
}
```
Clean extraction. Made `pub(crate)` for reuse in the continuation module.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH v4 8/9] gpu: nova-core: gsp: support large RPCs via continuation record
2026-03-02 11:42 [PATCH v4 0/9] gpu: nova-core: gsp: add continuation record support Eliot Courtney
` (6 preceding siblings ...)
2026-03-02 11:42 ` [PATCH v4 7/9] gpu: nova-core: gsp: add command_size helper Eliot Courtney
@ 2026-03-02 11:42 ` Eliot Courtney
2026-03-02 13:06 ` Alexandre Courbot
2026-03-03 3:34 ` Claude review: " Claude Code Review Bot
2026-03-02 11:42 ` [PATCH v4 9/9] gpu: nova-core: gsp: add tests for continuation records Eliot Courtney
2026-03-03 3:34 ` Claude review: gpu: nova-core: gsp: add continuation record support Claude Code Review Bot
9 siblings, 2 replies; 30+ messages in thread
From: Eliot Courtney @ 2026-03-02 11:42 UTC (permalink / raw)
To: Danilo Krummrich, Alice Ryhl, Alexandre Courbot, David Airlie,
Simona Vetter, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Trevor Gross
Cc: Zhi Wang, John Hubbard, Alistair Popple, Joel Fernandes,
Timur Tabi, nouveau, dri-devel, linux-kernel, rust-for-linux,
Eliot Courtney
Split large RPCs if necessary and send the remaining parts using
continuation records. RPCs that do not need continuation records
continue to write directly into the command buffer; those that do
write into a staging buffer first, so there is one extra copy.
Supporting continuation records on receive is not necessary at the
moment because those replies do not need to be read, and they are
currently drained by retrying `receive_msg` on ERANGE.
Signed-off-by: Eliot Courtney <ecourtney@nvidia.com>
---
drivers/gpu/nova-core/gsp.rs | 1 +
drivers/gpu/nova-core/gsp/cmdq.rs | 41 +++++++-
drivers/gpu/nova-core/gsp/continuation.rs | 167 ++++++++++++++++++++++++++++++
drivers/gpu/nova-core/gsp/fw.rs | 4 +
4 files changed, 210 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/nova-core/gsp.rs b/drivers/gpu/nova-core/gsp.rs
index 174feaca0a6b..ccf56f1ad246 100644
--- a/drivers/gpu/nova-core/gsp.rs
+++ b/drivers/gpu/nova-core/gsp.rs
@@ -16,6 +16,7 @@
pub(crate) mod cmdq;
pub(crate) mod commands;
+mod continuation;
mod fw;
mod sequencer;
diff --git a/drivers/gpu/nova-core/gsp/cmdq.rs b/drivers/gpu/nova-core/gsp/cmdq.rs
index 4a663a5b3437..4caf5917de64 100644
--- a/drivers/gpu/nova-core/gsp/cmdq.rs
+++ b/drivers/gpu/nova-core/gsp/cmdq.rs
@@ -28,6 +28,10 @@
use crate::{
driver::Bar0,
gsp::{
+ continuation::{
+ ContinuationRecord,
+ SplitState, //
+ },
fw::{
GspMsgElement,
MsgFunction,
@@ -452,7 +456,7 @@ struct GspMessage<'a> {
/// Computes the total size of the command (including its variable-length payload) without the
/// [`GspMsgElement`] header.
-fn command_size<M>(command: &M) -> usize
+pub(crate) fn command_size<M>(command: &M) -> usize
where
M: CommandToGsp,
{
@@ -520,7 +524,7 @@ fn notify_gsp(bar: &Bar0) {
.write(bar);
}
- /// Sends `command` to the GSP.
+ /// Sends `command` to the GSP, without splitting it.
///
/// # Errors
///
@@ -529,7 +533,7 @@ fn notify_gsp(bar: &Bar0) {
/// written to by its [`CommandToGsp::init_variable_payload`] method.
///
/// Error codes returned by the command initializers are propagated as-is.
- pub(crate) fn send_command<M>(&mut self, bar: &Bar0, command: M) -> Result
+ fn send_single_command<M>(&mut self, bar: &Bar0, command: M) -> Result
where
M: CommandToGsp,
// This allows all error types, including `Infallible`, to be used for `M::InitError`.
@@ -588,6 +592,37 @@ pub(crate) fn send_command<M>(&mut self, bar: &Bar0, command: M) -> Result
Ok(())
}
+ /// Sends `command` to the GSP.
+ ///
+ /// The command may be split into multiple messages if it is large.
+ ///
+ /// # Errors
+ ///
+ /// - `ETIMEDOUT` if space does not become available within the timeout.
+ /// - `EIO` if the variable payload requested by the command has not been entirely
+ /// written to by its [`CommandToGsp::init_variable_payload`] method.
+ ///
+ /// Error codes returned by the command initializers are propagated as-is.
+ pub(crate) fn send_command<M>(&mut self, bar: &Bar0, command: M) -> Result
+ where
+ M: CommandToGsp,
+ Error: From<M::InitError>,
+ {
+ match SplitState::new(command)? {
+ SplitState::Single(command) => self.send_single_command(bar, command),
+ SplitState::Split(command, mut continuations) => {
+ self.send_single_command(bar, command)?;
+
+ while let Some(continuation) = continuations.next() {
+ // Turbofish needed because the compiler cannot infer M here.
+ self.send_single_command::<ContinuationRecord<'_>>(bar, continuation)?;
+ }
+
+ Ok(())
+ }
+ }
+ }
+
/// Wait for a message to become available on the message queue.
///
/// This works purely at the transport layer and does not interpret or validate the message
diff --git a/drivers/gpu/nova-core/gsp/continuation.rs b/drivers/gpu/nova-core/gsp/continuation.rs
new file mode 100644
index 000000000000..cf6d950a0e3f
--- /dev/null
+++ b/drivers/gpu/nova-core/gsp/continuation.rs
@@ -0,0 +1,167 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Support for splitting large GSP commands across continuation records.
+
+use core::convert::Infallible;
+
+use kernel::prelude::*;
+
+use crate::{
+ gsp::{
+ cmdq::{
+ command_size,
+ CommandToGsp, //
+ },
+ fw::{
+ GspMsgElement,
+ MsgFunction,
+ GSP_MSG_QUEUE_ELEMENT_SIZE_MAX, //
+ },
+ },
+ sbuffer::SBufferIter,
+};
+
+/// Maximum command size that fits in a single queue element.
+const MAX_CMD_SIZE: usize = GSP_MSG_QUEUE_ELEMENT_SIZE_MAX - size_of::<GspMsgElement>();
+
+/// Acts as an iterator over the continuation records for a split command.
+pub(super) struct ContinuationRecords {
+ payload: KVVec<u8>,
+ offset: usize,
+}
+
+impl ContinuationRecords {
+ /// Creates a new iterator over continuation records for the given payload.
+ fn new(payload: KVVec<u8>) -> Self {
+ Self { payload, offset: 0 }
+ }
+
+ /// Returns the next continuation record, or [`None`] if there are no more.
+ pub(super) fn next(&mut self) -> Option<ContinuationRecord<'_>> {
+ let remaining = self.payload.len() - self.offset;
+
+ if remaining > 0 {
+ let chunk_size = remaining.min(MAX_CMD_SIZE);
+ let record =
+ ContinuationRecord::new(&self.payload[self.offset..(self.offset + chunk_size)]);
+ self.offset += chunk_size;
+ Some(record)
+ } else {
+ None
+ }
+ }
+}
+
+/// The [`ContinuationRecord`] command.
+pub(super) struct ContinuationRecord<'a> {
+ data: &'a [u8],
+}
+
+impl<'a> ContinuationRecord<'a> {
+ /// Creates a new [`ContinuationRecord`] command with the given data.
+ fn new(data: &'a [u8]) -> Self {
+ Self { data }
+ }
+}
+
+impl<'a> CommandToGsp for ContinuationRecord<'a> {
+ const FUNCTION: MsgFunction = MsgFunction::ContinuationRecord;
+ type Command = ();
+ type InitError = Infallible;
+
+ fn init(&self) -> impl Init<Self::Command, Self::InitError> {
+ <()>::init_zeroed()
+ }
+
+ fn variable_payload_len(&self) -> usize {
+ self.data.len()
+ }
+
+ fn init_variable_payload(
+ &self,
+ dst: &mut SBufferIter<core::array::IntoIter<&mut [u8], 2>>,
+ ) -> Result {
+ dst.write_all(self.data)
+ }
+}
+
+/// Whether a command needs to be split across continuation records or not.
+pub(super) enum SplitState<C: CommandToGsp> {
+ /// A command that fits in a single queue element.
+ Single(C),
+ /// A command split across continuation records.
+ Split(SplitCommand<C>, ContinuationRecords),
+}
+
+impl<C: CommandToGsp> SplitState<C> {
+ /// Maximum variable payload size that fits in the first command alongside the command header.
+ const MAX_FIRST_PAYLOAD: usize = MAX_CMD_SIZE - size_of::<C::Command>();
+
+ /// Creates a new [`SplitState`] for the given command.
+ ///
+ /// If the command is too large, it will be split into a main command and some number of
+ /// continuation records.
+ pub(super) fn new(command: C) -> Result<Self> {
+ let payload_len = command.variable_payload_len();
+
+ if command_size(&command) > MAX_CMD_SIZE {
+ let mut command_payload =
+ KVVec::<u8>::from_elem(0u8, payload_len.min(Self::MAX_FIRST_PAYLOAD), GFP_KERNEL)?;
+ let mut continuation_payload =
+ KVVec::<u8>::from_elem(0u8, payload_len - command_payload.len(), GFP_KERNEL)?;
+ let mut sbuffer = SBufferIter::new_writer([
+ command_payload.as_mut_slice(),
+ continuation_payload.as_mut_slice(),
+ ]);
+
+ command.init_variable_payload(&mut sbuffer)?;
+ if !sbuffer.is_empty() {
+ return Err(EIO);
+ }
+ drop(sbuffer);
+
+ Ok(Self::Split(
+ SplitCommand::new(command, command_payload),
+ ContinuationRecords::new(continuation_payload),
+ ))
+ } else {
+ Ok(Self::Single(command))
+ }
+ }
+}
+
+/// A command that has been truncated to the maximum accepted length of the command queue.
+///
+/// The remainder of its payload is expected to be sent using [`ContinuationRecords`].
+pub(super) struct SplitCommand<C: CommandToGsp> {
+ command: C,
+ payload: KVVec<u8>,
+}
+
+impl<C: CommandToGsp> SplitCommand<C> {
+ /// Creates a new [`SplitCommand`] wrapping `command` with the given truncated payload.
+ fn new(command: C, payload: KVVec<u8>) -> Self {
+ Self { command, payload }
+ }
+}
+
+impl<C: CommandToGsp> CommandToGsp for SplitCommand<C> {
+ const FUNCTION: MsgFunction = C::FUNCTION;
+ type Command = C::Command;
+ type InitError = C::InitError;
+
+ fn init(&self) -> impl Init<Self::Command, Self::InitError> {
+ self.command.init()
+ }
+
+ fn variable_payload_len(&self) -> usize {
+ self.payload.len()
+ }
+
+ fn init_variable_payload(
+ &self,
+ dst: &mut SBufferIter<core::array::IntoIter<&mut [u8], 2>>,
+ ) -> Result {
+ dst.write_all(&self.payload)
+ }
+}
diff --git a/drivers/gpu/nova-core/gsp/fw.rs b/drivers/gpu/nova-core/gsp/fw.rs
index 6005362450cb..25fca1f6db2c 100644
--- a/drivers/gpu/nova-core/gsp/fw.rs
+++ b/drivers/gpu/nova-core/gsp/fw.rs
@@ -202,6 +202,7 @@ pub(crate) enum MsgFunction {
AllocObject = bindings::NV_VGPU_MSG_FUNCTION_ALLOC_OBJECT,
AllocRoot = bindings::NV_VGPU_MSG_FUNCTION_ALLOC_ROOT,
BindCtxDma = bindings::NV_VGPU_MSG_FUNCTION_BIND_CTX_DMA,
+ ContinuationRecord = bindings::NV_VGPU_MSG_FUNCTION_CONTINUATION_RECORD,
Free = bindings::NV_VGPU_MSG_FUNCTION_FREE,
GetGspStaticInfo = bindings::NV_VGPU_MSG_FUNCTION_GET_GSP_STATIC_INFO,
GetStaticInfo = bindings::NV_VGPU_MSG_FUNCTION_GET_STATIC_INFO,
@@ -239,6 +240,9 @@ fn try_from(value: u32) -> Result<MsgFunction> {
bindings::NV_VGPU_MSG_FUNCTION_ALLOC_OBJECT => Ok(MsgFunction::AllocObject),
bindings::NV_VGPU_MSG_FUNCTION_ALLOC_ROOT => Ok(MsgFunction::AllocRoot),
bindings::NV_VGPU_MSG_FUNCTION_BIND_CTX_DMA => Ok(MsgFunction::BindCtxDma),
+ bindings::NV_VGPU_MSG_FUNCTION_CONTINUATION_RECORD => {
+ Ok(MsgFunction::ContinuationRecord)
+ }
bindings::NV_VGPU_MSG_FUNCTION_FREE => Ok(MsgFunction::Free),
bindings::NV_VGPU_MSG_FUNCTION_GET_GSP_STATIC_INFO => Ok(MsgFunction::GetGspStaticInfo),
bindings::NV_VGPU_MSG_FUNCTION_GET_STATIC_INFO => Ok(MsgFunction::GetStaticInfo),
--
2.53.0
^ permalink raw reply related [flat|nested] 30+ messages in thread

* Re: [PATCH v4 8/9] gpu: nova-core: gsp: support large RPCs via continuation record
2026-03-02 11:42 ` [PATCH v4 8/9] gpu: nova-core: gsp: support large RPCs via continuation record Eliot Courtney
@ 2026-03-02 13:06 ` Alexandre Courbot
2026-03-03 3:34 ` Claude review: " Claude Code Review Bot
1 sibling, 0 replies; 30+ messages in thread
From: Alexandre Courbot @ 2026-03-02 13:06 UTC (permalink / raw)
To: Eliot Courtney
Cc: Danilo Krummrich, Alice Ryhl, David Airlie, Simona Vetter,
Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, Zhi Wang,
Alistair Popple, Joel Fernandes, nouveau, dri-devel, linux-kernel,
rust-for-linux
On Mon Mar 2, 2026 at 8:42 PM JST, Eliot Courtney wrote:
> Splits large RPCs if necessary and sends the remaining parts using
> continuation records. RPCs that do not need continuation records
> continue to write directly into the command buffer. Ones that do write
> into a staging buffer first, so there is one copy.
>
> Continuation record for receive is not necessary to support at the
> moment because those replies do not need to be read and are currently
> drained by retrying `receive_msg` on ERANGE.
>
> Signed-off-by: Eliot Courtney <ecourtney@nvidia.com>
> ---
> drivers/gpu/nova-core/gsp.rs | 1 +
> drivers/gpu/nova-core/gsp/cmdq.rs | 41 +++++++-
> drivers/gpu/nova-core/gsp/continuation.rs | 167 ++++++++++++++++++++++++++++++
> drivers/gpu/nova-core/gsp/fw.rs | 4 +
> 4 files changed, 210 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/nova-core/gsp.rs b/drivers/gpu/nova-core/gsp.rs
> index 174feaca0a6b..ccf56f1ad246 100644
> --- a/drivers/gpu/nova-core/gsp.rs
> +++ b/drivers/gpu/nova-core/gsp.rs
> @@ -16,6 +16,7 @@
>
> pub(crate) mod cmdq;
> pub(crate) mod commands;
> +mod continuation;
Looking at this series it seems more logical to have `continuation`
under `cmdq` than just `gsp`.
Nothing in `gsp` makes use of it, as it is an implementation detail of
the command queue. And that way the `pub(super)` exports would be
perfectly scoped to their user.
Re-reading my review of v3, I am the one who suggested to put it under
`gsp` - so this is my mistake.
No need to resend just for that, I can fix when applying and this should
have no impact on the patches that come above it.
Giving it a few days to gather reviews, but after a first skim this
looks ready to me.
^ permalink raw reply [flat|nested] 30+ messages in thread
* Claude review: gpu: nova-core: gsp: support large RPCs via continuation record
2026-03-02 11:42 ` [PATCH v4 8/9] gpu: nova-core: gsp: support large RPCs via continuation record Eliot Courtney
2026-03-02 13:06 ` Alexandre Courbot
@ 2026-03-03 3:34 ` Claude Code Review Bot
1 sibling, 0 replies; 30+ messages in thread
From: Claude Code Review Bot @ 2026-03-03 3:34 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
This is the core patch. The design is solid:
- `ContinuationRecord<'a>` wraps a `&[u8]` slice, uses `MsgFunction::ContinuationRecord`, and has `type Command = ()` (zero-sized header). This is correct since continuation records are pure payload.
- `ContinuationRecords` provides a lending-iterator pattern via a custom `next()` method rather than implementing `Iterator`, which is the right approach since `ContinuationRecord` borrows from the parent.
- `SplitState<C>` cleanly separates the single vs. split cases. The associated constant `MAX_FIRST_PAYLOAD` properly accounts for the command header:
```rust
const MAX_FIRST_PAYLOAD: usize = MAX_CMD_SIZE - size_of::<C::Command>();
```
- `SplitCommand<C>` delegates `init()` to the original command but replaces the variable payload with the pre-filled staging buffer.
**Observation on the send_command flow:** For split commands, `SplitState::new()` calls `command.init_variable_payload()` to fill the staging buffer. Then `send_single_command` calls `SplitCommand.init_variable_payload()` which copies from the staging buffer into the command queue. This means the payload is initialized once and copied once (for the first chunk) — which is correct and matches the cover letter's description.
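To make the flow concrete, here is a hedged, standalone sketch of the split layout: the first chunk travels with the command header, and the remainder is cut into continuation-sized chunks. The constants (`MAX_CMD_SIZE`, `HEADER`) are illustrative stand-ins, not the driver's real values:

```rust
// Illustrative constants; the real values live in the nova-core driver.
const MAX_CMD_SIZE: usize = 4096;
const HEADER: usize = 64; // stand-in for size_of::<C::Command>()
const MAX_FIRST_PAYLOAD: usize = MAX_CMD_SIZE - HEADER;

/// Splits `payload` into the first chunk (sent alongside the command header)
/// and the continuation chunks, each at most `MAX_CMD_SIZE` bytes.
fn split(payload: &[u8]) -> (&[u8], Vec<&[u8]>) {
    let first_len = payload.len().min(MAX_FIRST_PAYLOAD);
    let (first, rest) = payload.split_at(first_len);
    (first, rest.chunks(MAX_CMD_SIZE).collect())
}

fn main() {
    // One byte past a full second chunk forces a second continuation record.
    let data = vec![0xab_u8; MAX_FIRST_PAYLOAD + MAX_CMD_SIZE + 1];
    let (first, conts) = split(&data);
    assert_eq!(first.len(), MAX_FIRST_PAYLOAD);
    assert_eq!(conts.len(), 2);
    // Reassembly covers the whole original payload.
    let total = first.len() + conts.iter().map(|c| c.len()).sum::<usize>();
    assert_eq!(total, data.len());
}
```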
**Observation on ContinuationRecord being sent through send_single_command:** The `allocate_command` size check uses `size_of::<GspMsgElement>() + size`. For ContinuationRecord, the size is `0 (Command=()) + data.len()`, and `data.len() <= MAX_CMD_SIZE`, so `size_of::<GspMsgElement>() + data.len() <= GSP_MSG_QUEUE_ELEMENT_SIZE_MAX`. The check passes correctly.
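The bound above can be checked in isolation. This sketch mirrors the size check with illustrative constants (`MSG_ELEMENT_HEADER` is a stand-in for `size_of::<GspMsgElement>()`, not the real value):

```rust
// Illustrative constants; assumptions for this sketch only.
const GSP_MSG_QUEUE_ELEMENT_SIZE_MAX: usize = 16 * 4096; // 16 pages
const MSG_ELEMENT_HEADER: usize = 32; // stand-in for size_of::<GspMsgElement>()
const MAX_CMD_SIZE: usize = GSP_MSG_QUEUE_ELEMENT_SIZE_MAX - MSG_ELEMENT_HEADER;

/// A continuation record has a zero-sized `Command` header, so its total
/// queue footprint is just the element header plus its data.
fn allocation_fits(data_len: usize) -> bool {
    MSG_ELEMENT_HEADER + data_len <= GSP_MSG_QUEUE_ELEMENT_SIZE_MAX
}

fn main() {
    // Any chunk bounded by MAX_CMD_SIZE passes the allocation check.
    assert!(allocation_fits(0));
    assert!(allocation_fits(MAX_CMD_SIZE));
    assert!(!allocation_fits(MAX_CMD_SIZE + 1));
}
```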
**Minor style observation:** The trailing `//` in imports is a rustfmt formatting workaround:
```rust
ContinuationRecord,
SplitState, //
```
This is a known pattern but could be confusing to newcomers. Not a blocking issue.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH v4 9/9] gpu: nova-core: gsp: add tests for continuation records
2026-03-02 11:42 [PATCH v4 0/9] gpu: nova-core: gsp: add continuation record support Eliot Courtney
` (7 preceding siblings ...)
2026-03-02 11:42 ` [PATCH v4 8/9] gpu: nova-core: gsp: support large RPCs via continuation record Eliot Courtney
@ 2026-03-02 11:42 ` Eliot Courtney
2026-03-03 3:34 ` Claude review: " Claude Code Review Bot
2026-03-03 3:34 ` Claude review: gpu: nova-core: gsp: add continuation record support Claude Code Review Bot
9 siblings, 1 reply; 30+ messages in thread
From: Eliot Courtney @ 2026-03-02 11:42 UTC (permalink / raw)
To: Danilo Krummrich, Alice Ryhl, Alexandre Courbot, David Airlie,
Simona Vetter, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Trevor Gross
Cc: Zhi Wang, John Hubbard, Alistair Popple, Joel Fernandes,
Timur Tabi, nouveau, dri-devel, linux-kernel, rust-for-linux,
Eliot Courtney
Add tests for continuation record splitting. They cover boundary
conditions at the split points to make sure the right number of
continuation records is created, and check that the reassembled data
matches the original payload.
Tested-by: Zhi Wang <zhiw@nvidia.com>
Signed-off-by: Eliot Courtney <ecourtney@nvidia.com>
---
drivers/gpu/nova-core/gsp/continuation.rs | 138 ++++++++++++++++++++++++++++++
1 file changed, 138 insertions(+)
diff --git a/drivers/gpu/nova-core/gsp/continuation.rs b/drivers/gpu/nova-core/gsp/continuation.rs
index cf6d950a0e3f..6d02334f8266 100644
--- a/drivers/gpu/nova-core/gsp/continuation.rs
+++ b/drivers/gpu/nova-core/gsp/continuation.rs
@@ -165,3 +165,141 @@ fn init_variable_payload(
dst.write_all(&self.payload)
}
}
+
+#[kunit_tests(nova_core_gsp_continuation)]
+mod tests {
+ use super::*;
+
+ use kernel::transmute::{
+ AsBytes,
+ FromBytes, //
+ };
+
+ /// Non-zero-sized command header for testing.
+ #[repr(C)]
+ #[derive(Clone, Copy, Zeroable)]
+ struct TestHeader([u8; 64]);
+
+ // SAFETY: `TestHeader` is a plain array of bytes for which all bit patterns are valid.
+ unsafe impl FromBytes for TestHeader {}
+
+ // SAFETY: `TestHeader` is a plain array of bytes for which all bit patterns are valid.
+ unsafe impl AsBytes for TestHeader {}
+
+ struct TestPayload {
+ data: KVVec<u8>,
+ }
+
+ impl TestPayload {
+ fn generate_pattern(len: usize) -> Result<KVVec<u8>> {
+ let mut data = KVVec::with_capacity(len, GFP_KERNEL)?;
+ for i in 0..len {
+ // Mix in higher bits so the pattern does not repeat every 256 bytes.
+ data.push((i ^ (i >> 8)) as u8, GFP_KERNEL)?;
+ }
+ Ok(data)
+ }
+
+ fn new(len: usize) -> Result<Self> {
+ Ok(Self {
+ data: Self::generate_pattern(len)?,
+ })
+ }
+ }
+
+ impl CommandToGsp for TestPayload {
+ const FUNCTION: MsgFunction = MsgFunction::Nop;
+ type Command = TestHeader;
+ type InitError = Infallible;
+
+ fn init(&self) -> impl Init<Self::Command, Self::InitError> {
+ TestHeader::init_zeroed()
+ }
+
+ fn variable_payload_len(&self) -> usize {
+ self.data.len()
+ }
+
+ fn init_variable_payload(
+ &self,
+ dst: &mut SBufferIter<core::array::IntoIter<&mut [u8], 2>>,
+ ) -> Result {
+ dst.write_all(self.data.as_slice())
+ }
+ }
+
+ /// Maximum variable payload size that fits in the first command alongside the header.
+ const MAX_FIRST_PAYLOAD: usize = SplitState::<TestPayload>::MAX_FIRST_PAYLOAD;
+
+ fn read_payload(cmd: impl CommandToGsp) -> Result<KVVec<u8>> {
+ let len = cmd.variable_payload_len();
+ let mut buf = KVVec::from_elem(0u8, len, GFP_KERNEL)?;
+ let mut sbuf = SBufferIter::new_writer([buf.as_mut_slice(), &mut []]);
+ cmd.init_variable_payload(&mut sbuf)?;
+ drop(sbuf);
+ Ok(buf)
+ }
+
+ struct SplitTest {
+ payload_size: usize,
+ num_continuations: usize,
+ }
+
+ fn check_split(t: SplitTest) -> Result {
+ let payload = TestPayload::new(t.payload_size)?;
+ let mut num_continuations = 0;
+
+ let buf = match SplitState::new(payload)? {
+ SplitState::Single(cmd) => read_payload(cmd)?,
+ SplitState::Split(cmd, mut continuations) => {
+ let mut buf = read_payload(cmd)?;
+ assert!(size_of::<TestHeader>() + buf.len() <= MAX_CMD_SIZE);
+
+ while let Some(cont) = continuations.next() {
+ let payload = read_payload(cont)?;
+ assert!(payload.len() <= MAX_CMD_SIZE);
+ buf.extend_from_slice(&payload, GFP_KERNEL)?;
+ num_continuations += 1;
+ }
+
+ buf
+ }
+ };
+
+ assert_eq!(num_continuations, t.num_continuations);
+ assert_eq!(
+ buf.as_slice(),
+ TestPayload::generate_pattern(t.payload_size)?.as_slice()
+ );
+ Ok(())
+ }
+
+ #[test]
+ fn split_command() -> Result {
+ check_split(SplitTest {
+ payload_size: 0,
+ num_continuations: 0,
+ })?;
+ check_split(SplitTest {
+ payload_size: MAX_FIRST_PAYLOAD,
+ num_continuations: 0,
+ })?;
+ check_split(SplitTest {
+ payload_size: MAX_FIRST_PAYLOAD + 1,
+ num_continuations: 1,
+ })?;
+ check_split(SplitTest {
+ payload_size: MAX_FIRST_PAYLOAD + MAX_CMD_SIZE,
+ num_continuations: 1,
+ })?;
+ check_split(SplitTest {
+ payload_size: MAX_FIRST_PAYLOAD + MAX_CMD_SIZE + 1,
+ num_continuations: 2,
+ })?;
+ check_split(SplitTest {
+ payload_size: MAX_FIRST_PAYLOAD + MAX_CMD_SIZE * 3 + MAX_CMD_SIZE / 2,
+ num_continuations: 4,
+ })?;
+ Ok(())
+ }
+}
--
2.53.0
^ permalink raw reply related [flat|nested] 30+ messages in thread

* Claude review: gpu: nova-core: gsp: add tests for continuation records
2026-03-02 11:42 ` [PATCH v4 9/9] gpu: nova-core: gsp: add tests for continuation records Eliot Courtney
@ 2026-03-03 3:34 ` Claude Code Review Bot
0 siblings, 0 replies; 30+ messages in thread
From: Claude Code Review Bot @ 2026-03-03 3:34 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Good test coverage using kunit_tests. The tests:
1. Use a non-zero-sized `TestHeader` (64 bytes) to exercise the `MAX_FIRST_PAYLOAD` calculation realistically.
2. Generate a non-trivial test pattern (`i ^ (i >> 8)`) that doesn't repeat every 256 bytes.
3. Cover boundary conditions precisely:
- 0 payload → 0 continuations
- `MAX_FIRST_PAYLOAD` → 0 continuations (exact fit)
- `MAX_FIRST_PAYLOAD + 1` → 1 continuation (just over)
- `MAX_FIRST_PAYLOAD + MAX_CMD_SIZE` → 1 continuation (exact 2nd chunk)
- `MAX_FIRST_PAYLOAD + MAX_CMD_SIZE + 1` → 2 continuations (just over 2 chunks)
- `MAX_FIRST_PAYLOAD + 3.5 * MAX_CMD_SIZE` → 4 continuations
4. Verify both the number of continuations AND that the reassembled data matches the original pattern.
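The boundary cases above all follow one formula: the expected continuation count is ceil((len - MAX_FIRST_PAYLOAD) / MAX_CMD_SIZE), clamped at zero. A minimal sketch with illustrative constants (not the driver's real values):

```rust
// Illustrative constants; assumptions for this sketch only.
const MAX_CMD_SIZE: usize = 4096;
const MAX_FIRST_PAYLOAD: usize = MAX_CMD_SIZE - 64;

/// Expected number of continuation records for a payload of `len` bytes:
/// whatever does not fit in the first command is cut into MAX_CMD_SIZE chunks.
fn num_continuations(len: usize) -> usize {
    len.saturating_sub(MAX_FIRST_PAYLOAD).div_ceil(MAX_CMD_SIZE)
}

fn main() {
    // Matches the boundary cases exercised by the tests.
    assert_eq!(num_continuations(0), 0);
    assert_eq!(num_continuations(MAX_FIRST_PAYLOAD), 0);
    assert_eq!(num_continuations(MAX_FIRST_PAYLOAD + 1), 1);
    assert_eq!(num_continuations(MAX_FIRST_PAYLOAD + MAX_CMD_SIZE), 1);
    assert_eq!(num_continuations(MAX_FIRST_PAYLOAD + MAX_CMD_SIZE + 1), 2);
    assert_eq!(
        num_continuations(MAX_FIRST_PAYLOAD + MAX_CMD_SIZE * 3 + MAX_CMD_SIZE / 2),
        4
    );
}
```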
The `read_payload` helper correctly creates an SBufferIter with an empty second slice (`&mut []`), which works because each individual chunk (split command payload or continuation record) fits within a single buffer.
**Minor observation:** `TestHeader` manually implements `FromBytes` and `AsBytes` with `// SAFETY` comments. This is correct for `[u8; 64]`, but if the `derive` macros from zerocopy/kernel::transmute were available for this type, it would be cleaner. However, this is likely a constraint of the kernel's transmute API and is fine as-is.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 30+ messages in thread
* Claude review: gpu: nova-core: gsp: add continuation record support
2026-03-02 11:42 [PATCH v4 0/9] gpu: nova-core: gsp: add continuation record support Eliot Courtney
` (8 preceding siblings ...)
2026-03-02 11:42 ` [PATCH v4 9/9] gpu: nova-core: gsp: add tests for continuation records Eliot Courtney
@ 2026-03-03 3:34 ` Claude Code Review Bot
9 siblings, 0 replies; 30+ messages in thread
From: Claude Code Review Bot @ 2026-03-03 3:34 UTC (permalink / raw)
To: dri-devel-reviews
Overall Series Review
Subject: gpu: nova-core: gsp: add continuation record support
Author: Eliot Courtney <ecourtney@nvidia.com>
Patches: 9
Reviewed: 2026-03-03T13:34:46.541034
---
This is a well-structured 9-patch series adding continuation record support for the nova-core GSP command queue, enabling RPCs larger than 16 pages (64KB) to be split and sent across multiple command queue elements. The series builds incrementally: preparatory cleanups (patches 1, 5-7), infrastructure additions (patches 2-4), the core splitting logic (patch 8), and tests (patch 9).
The design is clean. Small commands continue to be written directly into the command queue with zero overhead. Large commands go through a staging buffer (one extra copy), which is an acceptable tradeoff for correctness and simplicity. The `SplitState`/`SplitCommand`/`ContinuationRecord` type decomposition is well-chosen and uses Rust's type system effectively.
The series is in good shape overall. I have a few observations below, but nothing blocking.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 30+ messages in thread