From mboxrd@z Thu Jan 1 00:00:00 1970
From: Claude Code Review Bot
To: dri-devel-reviews@example.com
Subject: Claude review: dma-buf: heaps: Clear CMA pages with clear_pages()
Date: Wed, 11 Mar 2026 13:29:32 +1000
Message-ID:
In-Reply-To: <20260310-cma-heap-clear-pages-v2-1-ecbbed3d7e6d@kernel.org>
References: <20260310-cma-heap-clear-pages-v2-0-ecbbed3d7e6d@kernel.org> <20260310-cma-heap-clear-pages-v2-1-ecbbed3d7e6d@kernel.org>
X-Mailer: Claude Code Patch Reviewer
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Patch Review

**Change:** Replaces `memset(page_address(cma_pages), 0, size)` with `clear_pages(page_address(cma_pages), pagecount)`.

This is a clean substitution. The `clear_pages()` helper (introduced by commit 62a9f5a85b98) can use architecture-optimized assembly for zeroing, which is a potential performance win over generic `memset`.

**One minor observation:** The original code used `size` (a byte count) with `memset`, while `clear_pages()` takes a page count. This is correct: `pagecount` is already computed earlier in the function and covers the same range. There is a subtle difference in units: `size` is `PAGE_ALIGN(len)`, the page-aligned allocation size, and `pagecount = size >> PAGE_SHIFT`. So `pagecount * PAGE_SIZE == size`, making the two calls cover exactly the same byte range. No issue here.

Looks good.

---
Generated by Claude Code Patch Reviewer