From mboxrd@z Thu Jan  1 00:00:00 1970
From: Claude Code Review Bot
To: dri-devel-reviews@example.com
Subject: Claude review: mm/mmu_notifier: Allow two-pass struct mmu_interval_notifiers
Date: Tue, 03 Mar 2026 13:05:44 +1000
Message-ID: 
In-Reply-To: <20260302163248.105454-2-thomas.hellstrom@linux.intel.com>
References: <20260302163248.105454-1-thomas.hellstrom@linux.intel.com>
 <20260302163248.105454-2-thomas.hellstrom@linux.intel.com>
X-Mailer: Claude Code Patch Reviewer
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Patch Review

This is the core infrastructure patch. The approach of using a lockless list (`llist`) to collect finish callbacks during the first-pass tree walk and then iterating them for the second pass is clean and avoids a redundant second interval tree traversal.

**Design observations:**

The documentation in `struct mmu_interval_notifier_finish` is good about recommending GFP_NOWAIT allocation and pools. However, the xe driver (in patch 2) embeds the struct instead, which is the superior approach for a single-notifier-at-a-time design.

**Concern: `invalidate_start` vs `invalidate` mutual exclusivity not enforced:**

The ops struct allows both `invalidate` and `invalidate_start` to be set simultaneously. The code checks `invalidate_start` first (lines 364, 404), so `invalidate` would never be called if both are present. This is fine in practice, but a `WARN_ON(ops->invalidate && ops->invalidate_start)` during registration would be defensive.

**Correctness of error path in `mn_itree_invalidate`:**

```c
if (!ret) {
	if (WARN_ON(mmu_notifier_range_blockable(range)))
		continue;
	err = -EAGAIN;
	break;
}
```

When a non-blockable notifier returns false, the loop breaks, and then:

```c
mn_itree_finish_pass(&finish_passes);
if (err)
	mn_itree_inv_end(subscriptions);
```

This correctly calls the finish callbacks for any notifiers that already succeeded before the failure. Good.
**Minor: llist reversal for ordering:**

```c
struct llist_node *first =
	llist_reverse_order(__llist_del_all(finish_passes));
```

Since `__llist_add` prepends, the list is in reverse iteration order, and the reversal here restores the original order, which is the right thing to do. However, the cover letter says this uses a "lockless list for list management"; note that `__llist_add` (double underscore) is *not* the lockless/atomic version, it is the non-atomic helper. This is fine, since the list is local to each invocation, but the v2 changelog entry "Use lockless list for list management" is slightly misleading: the code uses the `llist_node`/`llist_head` types without any of the atomic operations.

**Nit:** The v2 changelog entry "Update documentation (GitHub Copilot:claude-sonnet-4.6)" and the matching Assisted-by tag both use "GitHub Copilot:claude-sonnet-4.6", which is an unusual attribution format.

---
Generated by Claude Code Patch Reviewer