From mboxrd@z Thu Jan 1 00:00:00 1970
From: Claude Code Review Bot
To: dri-devel-reviews@example.com
Subject: Claude review: drm/pagemap_util: Ensure proper cache lock management on free
Date: Fri, 27 Feb 2026 15:07:14 +1000
Message-ID:
In-Reply-To: <20260224154832.7249-2-jonathan.cavitt@intel.com>
References: <20260224154832.7249-2-jonathan.cavitt@intel.com>
X-Mailer: Claude Code Patch Reviewer
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Patch Review

**The bug: missing `cache->dpagemap = NULL`.**

The original code sets `cache->dpagemap = NULL` inside the
`drm_pagemap_shrinker_cancel()` success path (line 74):

```c
if (drm_pagemap_shrinker_cancel(dpagemap)) {
	cache->dpagemap = NULL;	// <-- this is present in the original
	spin_unlock(&cache->lock);
	drm_pagemap_destroy(dpagemap, false);
}
```

The new code drops this assignment entirely:

```c
if (dpagemap && !drm_pagemap_shrinker_cancel(dpagemap))
	dpagemap = NULL;
spin_unlock(&cache->lock);
if (dpagemap)
	drm_pagemap_destroy(dpagemap, false);
```

The local variable `dpagemap` is set to NULL, but `cache->dpagemap` is never
cleared. The cache is about to be freed (the function proceeds to
`kfree(cache)`), but the original code explicitly NULLed the field while
still holding the spinlock, which prevented any concurrent accessor from
seeing a stale pointer between the `spin_unlock` and the `kfree`. Since
`drm_pagemap_get_from_cache_if_active()` does:

```c
spin_lock(&cache->lock);
dpagemap = drm_pagemap_get_unless_zero(cache->dpagemap);
spin_unlock(&cache->lock);
```

there is a small window in which a concurrent reader could grab
`cache->dpagemap` after the lock is dropped but before `kfree(cache)`. The
original code avoided this by setting `cache->dpagemap = NULL` under the
lock.
The new code should do:

```c
if (dpagemap && !drm_pagemap_shrinker_cancel(dpagemap))
	dpagemap = NULL;
cache->dpagemap = NULL;	/* clear before unlock */
spin_unlock(&cache->lock);
```

That said, this is a `devm` fini callback: at device teardown time,
concurrent cache accesses would themselves be a bug. So in practice the
omission may be benign, but for correctness under the stated goal of
"consistency", the `cache->dpagemap = NULL` assignment should be preserved.

**The original bug being fixed:**

In the existing code, when `drm_pagemap_shrinker_cancel()` returns `false`,
the lock is **never released**: control falls through without calling
`spin_unlock`, then jumps to `out:`, where `mutex_destroy` and `kfree`
happen with the spinlock still held. This is a genuine bug (a lock leak),
so the patch is fixing a real issue, not just a static-analysis aesthetic.

```c
// Original: if shrinker_cancel returns false, no unlock!
if (drm_pagemap_shrinker_cancel(dpagemap)) {
	cache->dpagemap = NULL;
	spin_unlock(&cache->lock);	// only unlocked here
	drm_pagemap_destroy(dpagemap, false);
}
// falls through to out: without unlock
```

The new code correctly ensures the lock is always released on a single
path, which is good.

**Minor style note:** The negated condition
`!drm_pagemap_shrinker_cancel(dpagemap)` makes the logic slightly harder to
follow ("if dpagemap exists and cancel *failed*, then null out dpagemap"),
but this matches the flow requested by Maarten per the v2 changelog, so it
is a reasonable style trade-off.

**Summary:** The patch fixes a real `spin_unlock` omission bug but
introduces a minor regression by dropping the `cache->dpagemap = NULL`
assignment. That assignment should be added back (unconditionally, before
the unlock) to maintain the original defensive clearing behavior.

---
Generated by Claude Code Patch Reviewer