From: Claude Code Review Bot <claude-review@example.com>
To: dri-devel-reviews@example.com
Subject: Claude review: dma-buf: revert "use inline lock for the dma-fence-chain"
Date: Tue, 24 Feb 2026 09:42:37 +1000
Message-ID: <review-patch1-20260223195705.206226-1-christian.koenig@amd.com>
In-Reply-To: <20260223195705.206226-1-christian.koenig@amd.com>
Patch Review
The change itself is straightforward and correct:
> + spin_lock_init(&chain->lock);
> rcu_assign_pointer(chain->prev, prev);
The spinlock is initialized before any use: it is set up early in `dma_fence_chain_init`, well before the call to `dma_fence_init64` that publishes a pointer to it in the base fence.
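For reference, the resulting init order can be sketched like this (compressed from the quoted hunks; the elided lines and the `context` variable are assumptions about the surrounding function body, not code from the patch):

```c
void dma_fence_chain_init(struct dma_fence_chain *chain,
			  struct dma_fence *prev,
			  struct dma_fence *fence,
			  uint64_t seqno)
{
	spin_lock_init(&chain->lock);		/* lock is ready first */
	rcu_assign_pointer(chain->prev, prev);
	/* ... remaining chain setup elided ... */

	/* only now is &chain->lock published via the base fence */
	dma_fence_init64(&chain->base, &dma_fence_chain_ops,
			 &chain->lock, context, seqno);
}
```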
> - dma_fence_init64(&chain->base, &dma_fence_chain_ops, NULL,
> + dma_fence_init64(&chain->base, &dma_fence_chain_ops, &chain->lock,
> context, seqno);
The lock pointer references `chain->lock`, which is embedded in the chain object, so its lifetime matches the fence's; no use-after-free concern here.
> + spinlock_t lock;
> };
The field is placed after, not inside, the `cb`/`work` union, which is correct: the lock may be needed concurrently with either the callback or the irq work.
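Putting the hunks together, the layout this restores looks roughly like the following (member names other than those in the quoted hunks are my reconstruction and may differ in detail from the actual header):

```c
struct dma_fence_chain {
	struct dma_fence base;
	struct dma_fence __rcu *prev;
	u64 prev_seqno;
	struct dma_fence *fence;
	union {
		struct dma_fence_cb cb;
		struct irq_work work;
	};
	spinlock_t lock;	/* outside the union: stays valid while
				 * either cb or work is in use */
};
```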
A few observations:
1. The commit message says "This causes a lockdep splat" but doesn't include the actual splat. It's standard practice in kernel patches fixing lockdep issues to include the splat output (or at least summarize the lock ordering conflict). This would help reviewers and future readers understand what the actual problem was.
2. The commit message's own admission that this is "not really the right fix, but changing this is more work than expected" is worth taking seriously. Matt Brost's reply proposes using `lockdep_set_class` with a static `lock_class_key` to give chain fences (and array fences) their own lockdep class. That approach would fix the lockdep splat without regressing on the memory/performance characteristics of the inline lock. Is there a reason that approach wouldn't work here, or is the plan to pursue it as a follow-up?
3. If the inline lock was introduced as an optimization (saving a spinlock per chain node), does reverting it have a measurable impact in workloads that create many chain nodes? The struct grows by `sizeof(spinlock_t)` per chain allocation.
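Regarding observation 2, a rough sketch of the alternative from Matt Brost's reply (not code from this patch; the exact field holding the fence's inline lock isn't shown in the hunks, so `chain->base.lock` below is illustrative):

```c
/* one class shared by all chain fences, distinct from other fences */
static struct lock_class_key dma_fence_chain_lock_class;

	/* keep the inline lock by passing NULL ... */
	dma_fence_init64(&chain->base, &dma_fence_chain_ops, NULL,
			 context, seqno);
	/* ... but reclassify it so lockdep tracks chain fences
	 * separately and the nesting no longer looks recursive */
	lockdep_set_class(chain->base.lock, &dma_fence_chain_lock_class);
```

This would keep the per-node memory savings while still resolving the false-positive lock-ordering report, which is why it seems worth weighing against the revert.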
No correctness bugs found in the patch itself.
---
Generated by Claude Code Patch Reviewer
Thread overview: 4+ messages
2026-02-23 19:57 [PATCH] dma-buf: revert "use inline lock for the dma-fence-chain" Christian König
2026-02-23 20:18 ` Matthew Brost
2026-02-23 23:42 ` Claude Code Review Bot [this message]
2026-02-23 23:42 ` Claude review: " Claude Code Review Bot