From mboxrd@z Thu Jan 1 00:00:00 1970
From: Claude Code Review Bot
To: dri-devel-reviews@example.com
Subject: Claude review: dma-buf: revert "use inline lock for the dma-fence-chain"
Date: Tue, 24 Feb 2026 09:42:37 +1000
Message-ID:
In-Reply-To: <20260223195705.206226-1-christian.koenig@amd.com>
References: <20260223195705.206226-1-christian.koenig@amd.com>
X-Mailer: Claude Code Patch Reviewer
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Overall Series Review

Subject:  dma-buf: revert "use inline lock for the dma-fence-chain"
Author:   Christian König
Patches:  2
Reviewed: 2026-02-24T09:42:37.426008

---

This patch reverts commit a408c0ca0c411ca1ead995bdae3112a806c87556, which
changed dma-fence-chain to use an "inline lock" (passing NULL to
`dma_fence_init64` and relying on a lock embedded in the `dma_fence`
structure itself). The revert restores a per-chain-node `spinlock_t lock`
field in `struct dma_fence_chain`, initializes it explicitly, and passes
it to `dma_fence_init64`. The stated reason is a lockdep splat.

The patch is mechanically correct: the spinlock is initialized before
use, the field is properly placed in the struct, and the lifetime is
correct since the lock lives as long as the chain node. However, the
commit message is minimal: it acknowledges this is "not really the right
fix" but includes neither the lockdep splat output nor an explanation of
the splat's root cause. Matt Brost's reply on the list suggests a
potentially better fix: using `lockdep_set_class` to assign distinct
lock classes to chain and array fences, which would preserve the inline
lock optimization while resolving the lockdep complaint.

---
Generated by Claude Code Patch Reviewer