From mboxrd@z Thu Jan 1 00:00:00 1970
From: Claude Code Review Bot
To: dri-devel-reviews@example.com
Subject: Claude review: dma-buf: heaps: Clear CMA pages with clear_pages()
Date: Thu, 05 Mar 2026 14:00:37 +1000
Message-ID:
In-Reply-To: <20260304-cma-heap-clear-pages-v1-1-6ff59da716d3@kernel.org>
References: <20260304-cma-heap-clear-pages-v1-1-6ff59da716d3@kernel.org>
X-Mailer: Claude Code Patch Reviewer
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0

Patch Review

**Correctness of the API change**: The conversion from `memset(..., 0, size)` to `clear_pages(addr, pagecount)` is semantically correct: both zero out the same memory range, and the `page_address()` return value is a suitable argument for `clear_pages()`.

**Preemption latency concern**: The `clear_pages()` documentation in `include/linux/mm.h:4410-4417` explicitly states:

> clear_pages() does not (and on architectures where it reduces to a few long-running instructions, might not be able to) call cond_resched() to check if rescheduling is required.
> ...the caller is expected to limit @npages to no more than PROCESS_PAGES_NON_PREEMPT_BATCH.

Here, `pagecount` is derived directly from the user-supplied `len` parameter at line 303-304:

```c
size_t size = PAGE_ALIGN(len);
pgoff_t pagecount = size >> PAGE_SHIFT;
```

CMA allocations can be quite large (tens or hundreds of MB). On architectures that define `clear_pages`, `PROCESS_PAGES_NON_PREEMPT_BATCH` is `SZ_32M >> PAGE_SHIFT` (8192 pages). If a user requests, say, a 256MB buffer, `pagecount` would be 65536, far exceeding the recommended batch limit.

The original `memset()` had the same theoretical issue (no `cond_resched()`), so this isn't a regression in that sense. But adopting `clear_pages()` while ignoring its documented calling convention is worth noting.
Consider batching the clear in a loop with `cond_resched()` calls, similar to how the HighMem path iterates page-by-page with a `fatal_signal_pending()` check. For example:

```c
unsigned long remaining = pagecount;
void *addr = page_address(cma_pages);

while (remaining > 0) {
	unsigned long batch = min_t(unsigned long, remaining,
				    PROCESS_PAGES_NON_PREEMPT_BATCH);

	clear_pages(addr, batch);
	addr += batch * PAGE_SIZE;
	remaining -= batch;
	cond_resched();
	if (fatal_signal_pending(current))
		goto free_cma;
}
```

This would also bring parity with the HighMem path's `fatal_signal_pending()` check, which the non-HighMem path currently lacks (and has always lacked, even before this patch).

**Missing include**: Verify that `cma_heap.c` pulls in the `clear_pages()` declaration. It's defined in `<linux/mm.h>`, which is likely already included transitively, but it's worth confirming.

**Overall**: The patch is a reasonable micro-optimization. The main suggestion is to respect the `PROCESS_PAGES_NON_PREEMPT_BATCH` limit documented in the `clear_pages()` API, and optionally add the `fatal_signal_pending()` check that the HighMem path already has.

---
Generated by Claude Code Patch Reviewer