From mboxrd@z Thu Jan  1 00:00:00 1970
From: Claude Code Review Bot
To: dri-devel-reviews@example.com
Subject: Claude review: drm/xe/userptr: Convert invalidation to two-pass MMU notifier
Date: Wed, 04 Mar 2026 07:09:16 +1000
Message-ID:
In-Reply-To: <20260303133409.11609-3-thomas.hellstrom@linux.intel.com>
References: <20260303133409.11609-1-thomas.hellstrom@linux.intel.com>
 <20260303133409.11609-3-thomas.hellstrom@linux.intel.com>
X-Mailer: Claude Code Patch Reviewer
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Patch Review

This converts the xe userptr notifier from a single `invalidate` callback to the new `invalidate_start`/`invalidate_finish` two-pass model.

**BUG: static local variable in `xe_vma_userptr_force_invalidate`**

```c
void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma)
{
	static struct mmu_interval_notifier_finish *finish;
```

This declares `finish` as `static`, meaning it is shared across all calls to this function. This is almost certainly wrong; it should be a plain local variable. Using `static` here would cause data races if this function were ever called concurrently. Even though the function is under `CONFIG_DRM_XE_USERPTR_INVAL_INJECT` and may currently be single-threaded in testing paths, `static` is still incorrect.

Should be:

```c
	struct mmu_interval_notifier_finish *finish;
```

**Design observation: `finish_inuse` flag**

The `finish_inuse` bool prevents two concurrent invalidations from both trying to use the embedded `struct mmu_interval_notifier_finish`. When `finish_inuse` is set, the second caller falls back to the synchronous single-pass path. This is a pragmatic approach that avoids dynamic allocation.

However, the lock protection for `finish_inuse` should be reviewed carefully.
It's documented as:

```
Protected by struct xe_vm::svm.gpusvm.notifier_lock in write mode
alternatively by the same lock in read mode *and* the vm resv held.
```

In `xe_vma_userptr_invalidate_start`, the notifier_lock is held in write mode when `xe_vma_userptr_invalidate_pass1` reads and sets `finish_inuse`, so that's correct. In `xe_vma_userptr_invalidate_finish`, the notifier_lock is also taken in write mode, so clearing `finish_inuse` via `xe_vma_userptr_do_inval(vm, uvma, true)` is also correct.

**Locking assertion `xe_userptr_assert_in_notifier`:**

```c
static void xe_userptr_assert_in_notifier(struct xe_vm *vm)
{
	lockdep_assert(lockdep_is_held_type(&vm->svm.gpusvm.notifier_lock, 0) ||
		       (lockdep_is_held(&vm->lock) &&
			lockdep_is_held_type(&vm->svm.gpusvm.notifier_lock, 1) &&
			dma_resv_held(xe_vm_resv(vm))));
}
```

This asserts either: notifier_lock in write mode, OR (vm->lock held AND notifier_lock in read mode AND resv held). This matches the documented locking for the state members and looks correct.

**Minor nit:** `xe_vma_userptr_do_inval` unconditionally calls `dma_resv_wait_timeout` with `MAX_SCHEDULE_TIMEOUT` and a non-blockable `false` parameter. This was inherited from the original code, but it means this path always sleeps. The `mmu_notifier_range_blockable` check happens earlier, in `xe_vma_userptr_invalidate_start`, which returns `false` if the range is non-blockable, so this is fine.

Overall: **One bug (static variable), otherwise looks correct.**

---
Generated by Claude Code Patch Reviewer