From mboxrd@z Thu Jan 1 00:00:00 1970
From: Claude Code Review Bot
To: dri-devel-reviews@example.com
Subject: Claude review: gpu: nova-core: gsp: add mutex locking to Cmdq
Date: Fri, 27 Feb 2026 13:18:04 +1000
Message-ID:
In-Reply-To: <20260225-cmdq-locking-v1-4-bbf6b4156706@nvidia.com>
References: <20260225-cmdq-locking-v1-0-bbf6b4156706@nvidia.com> <20260225-cmdq-locking-v1-4-bbf6b4156706@nvidia.com>
X-Mailer: Claude Code Patch Reviewer
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Patch Review

This is the main patch. It splits `Cmdq` into an outer struct (with `dev` and the `Mutex`) and `CmdqInner` (with `seq` and `gsp_mem`).

**The split is clean:**

```rust
struct CmdqInner {
    seq: u32,
    gsp_mem: DmaGspMem,
}

pub(crate) struct Cmdq {
    dev: ARef<device::Device>,
    #[pin]
    inner: Mutex<CmdqInner>,
}
```

**Methods that need `&mut` go to `CmdqInner`; stateless helpers stay on `Cmdq`.** The `dev` reference is passed as a parameter to `CmdqInner` methods. This is a reasonable pattern to avoid putting `dev` inside the mutex.

**`send_sync_command` holds the lock across send+receive:**

```rust
pub(crate) fn send_sync_command<M: GspCommandToGsp>(&self, bar: &Bar0, command: M) -> Result<M::Reply> {
    let mut inner = self.inner.lock();

    inner.send_command(&self.dev, bar, command)?;

    loop {
        match inner.receive_msg::<M::Reply>(&self.dev, Delta::from_secs(10)) {
            Ok(reply) => break Ok(reply),
            Err(ERANGE) => continue,
            Err(e) => break Err(e),
        }
    }
}
```

This is the correct approach for reply integrity. As noted above, it does block other queue access while waiting. The `read_poll_timeout` inside `wait_for_msg` will sleep (via its `Delta::from_millis(1)` polling interval), so at least the mutex won't be hot-spinning: the kernel `Mutex` is a sleeping lock. This is important and correct.
**`send_async_command` only holds the lock for the send:**

```rust
pub(crate) fn send_async_command<M: GspCommandToGsp>(&self, bar: &Bar0, command: M) -> Result {
    self.inner.lock().send_command(&self.dev, bar, command)
}
```

Good: the lock is dropped immediately after sending.

**`receive_msg` on the outer `Cmdq`:**

```rust
pub(crate) fn receive_msg<M: GspMessageFromGsp>(&self, timeout: Delta) -> Result<M> {
    self.inner.lock().receive_msg(&self.dev, timeout)
}
```

Used by `wait_gsp_init_done` and the sequencer. The lock is held for the duration of the poll timeout, the same concern as above. For the init-done case this is a single-threaded initialization path, so it's fine.

**`dma_handle()` concern:**

```rust
pub(crate) fn dma_handle(&self) -> DmaAddress {
    self.inner.lock().gsp_mem.0.dma_handle()
}
```

As noted in the overall review, this takes the lock to read an immutable value. It is a minor issue: it could be optimized by storing the DMA handle (or the entire `DmaGspMem`) outside the mutex, but it is not a correctness problem and can be addressed later.

**All callers updated from `&mut self` / `&mut Cmdq` to `&self` / `&Cmdq`:**

```rust
-        mut self: Pin<&mut Self>,
+        self: Pin<&mut Self>,
```

```rust
-pub(crate) fn wait_gsp_init_done(cmdq: &mut Cmdq) -> Result {
+pub(crate) fn wait_gsp_init_done(cmdq: &Cmdq) -> Result {
```

```rust
-    pub(crate) fn run(cmdq: &Cmdq, ...) -> Result {
+    pub(crate) fn run(cmdq: &Cmdq, ...) -> Result {
```

All consistent and correct.

**Minor formatting nit:** in the `try_pin_init!` invocation, the `})` closing `new_mutex!` has inconsistent indentation:

```rust
try_pin_init!(Self {
    inner <- new_mutex!(CmdqInner {
        gsp_mem: DmaGspMem::new(dev)?,
        seq: 0,
        }),
    dev: dev.into(),
})
```

The `})` closing `new_mutex!` should probably be at the same level as `inner`, i.e.:

```rust
    inner <- new_mutex!(CmdqInner {
        gsp_mem: DmaGspMem::new(dev)?,
        seq: 0,
    }),
```

This is purely cosmetic.

Overall a solid patch.
**Reviewed-by worthy** with the optional consideration of moving `dma_handle` out of the locked state.

---
Generated by Claude Code Patch Reviewer