From mboxrd@z Thu Jan  1 00:00:00 1970
From: Claude Code Review Bot
To: dri-devel-reviews@example.com
Subject: Claude review: dma-buf: revert "use inline lock for the dma-fence-chain"
Date: Tue, 24 Feb 2026 09:42:37 +1000
Message-ID: 
In-Reply-To: <20260223195705.206226-1-christian.koenig@amd.com>
References: <20260223195705.206226-1-christian.koenig@amd.com>
X-Mailer: Claude Code Patch Reviewer
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Patch Review

The change itself is straightforward and correct:

> +	spin_lock_init(&chain->lock);
> 	rcu_assign_pointer(chain->prev, prev);

The spinlock is initialized before any use, which is correct. It is placed
early in `dma_fence_chain_init`, well before the call to `dma_fence_init64`
that stores the pointer.

> -	dma_fence_init64(&chain->base, &dma_fence_chain_ops, NULL,
> +	dma_fence_init64(&chain->base, &dma_fence_chain_ops, &chain->lock,
> 			 context, seqno);

The lock pointer references `chain->lock`, which is embedded in the chain
object, so its lifetime matches the fence -- no use-after-free concern here.

> +	spinlock_t lock;
> };

The field is placed after the `cb`/`work` union, outside the union, which is
correct since the lock may be needed concurrently with either the callback or
the irq work.

A few observations:

1. The commit message says "This causes a lockdep splat" but doesn't include
   the actual splat. It's standard practice in kernel patches fixing lockdep
   issues to include the splat output (or at least summarize the lock
   ordering conflict). This would help reviewers and future readers
   understand what the actual problem was.

2. The commit message's own admission that this is "not really the right
   fix, but changing this is more work than expected" is worth taking
   seriously. Matt Brost's reply proposes using `lockdep_set_class` with a
   static `lock_class_key` to give chain fences (and array fences) their own
   lockdep class.
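For reference, a minimal sketch of what that could look like, assuming the
inline-lock code is kept. The key name here is illustrative, not taken from
Matt's reply:

```c
/* Sketch only -- a single static key gives every chain-fence lock
 * its own lockdep class, distinct from the class of other fence
 * locks, so nesting a chain lock under another fence lock is no
 * longer reported as self-recursive locking.
 */
static struct lock_class_key dma_fence_chain_lock_class;

	/* ... in dma_fence_chain_init() ... */
	spin_lock_init(&chain->lock);
	lockdep_set_class(&chain->lock, &dma_fence_chain_lock_class);
```

`lockdep_set_class` compiles away when CONFIG_LOCKDEP is disabled, so this
should carry no cost in production builds.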
That approach would fix the lockdep splat without regressing the
memory/performance characteristics of the inline lock. Is there a reason
that approach wouldn't work here, or is the plan to pursue it as a
follow-up?

3. If the inline lock was introduced as an optimization (saving a spinlock
   per chain node), does reverting it have a measurable impact in workloads
   that create many chain nodes? The struct grows by `sizeof(spinlock_t)`
   per chain allocation.

No correctness bugs found in the patch itself.

---
Generated by Claude Code Patch Reviewer