* [PATCH] dma-buf: heaps: Clear CMA pages with clear_pages()
@ 2026-03-04 0:25 Linus Walleij
2026-03-05 4:00 ` Claude review: " Claude Code Review Bot
2026-03-05 4:00 ` Claude Code Review Bot
0 siblings, 2 replies; 4+ messages in thread
From: Linus Walleij @ 2026-03-04 0:25 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König
Cc: linux-media, dri-devel, linaro-mm-sig, Linus Walleij
As of commit 62a9f5a85b98 ("mm: introduce clear_pages() and
clear_user_pages()"), we can clear a range of pages with a
potentially assembly-optimized call.
Instead of using a memset, use this helper to clear the whole
range of pages from the CMA allocation.
Signed-off-by: Linus Walleij <linusw@kernel.org>
---
drivers/dma-buf/heaps/cma_heap.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
index bd3370b9a3f6..f0bacf25ed9d 100644
--- a/drivers/dma-buf/heaps/cma_heap.c
+++ b/drivers/dma-buf/heaps/cma_heap.c
@@ -343,7 +343,7 @@ static struct dma_buf *cma_heap_allocate(struct dma_heap *heap,
 			nr_clear_pages--;
 		}
 	} else {
-		memset(page_address(cma_pages), 0, size);
+		clear_pages(page_address(cma_pages), pagecount);
 	}
 
 	buffer->pages = kmalloc_objs(*buffer->pages, pagecount);
---
base-commit: 6de23f81a5e08be8fbf5e8d7e9febc72a5b5f27f
change-id: 20260303-cma-heap-clear-pages-540f3ac9f734
Best regards,
--
Linus Walleij <linusw@kernel.org>
^ permalink raw reply related [flat|nested] 4+ messages in thread
* Claude review: dma-buf: heaps: Clear CMA pages with clear_pages()
2026-03-04 0:25 [PATCH] dma-buf: heaps: Clear CMA pages with clear_pages() Linus Walleij
2026-03-05 4:00 ` Claude review: " Claude Code Review Bot
@ 2026-03-05 4:00 ` Claude Code Review Bot
1 sibling, 0 replies; 4+ messages in thread
From: Claude Code Review Bot @ 2026-03-05 4:00 UTC (permalink / raw)
To: dri-devel-reviews
Overall Series Review
Subject: dma-buf: heaps: Clear CMA pages with clear_pages()
Author: Linus Walleij <linusw@kernel.org>
Patches: 1
Reviewed: 2026-03-05T14:00:37.741553
---
This is a single-patch series that replaces `memset(page_address(cma_pages), 0, size)` with `clear_pages(page_address(cma_pages), pagecount)` in the CMA heap allocator's non-highmem path. The change leverages the newer `clear_pages()` API (introduced by commit 62a9f5a85b98) which can use architecture-optimized page-clearing instructions.
The conversion is straightforward and functionally correct. However, there is one concern regarding preemption latency that should be addressed.
---
Generated by Claude Code Patch Reviewer
* Claude review: dma-buf: heaps: Clear CMA pages with clear_pages()
2026-03-04 0:25 [PATCH] dma-buf: heaps: Clear CMA pages with clear_pages() Linus Walleij
@ 2026-03-05 4:00 ` Claude Code Review Bot
2026-03-05 4:00 ` Claude Code Review Bot
1 sibling, 0 replies; 4+ messages in thread
From: Claude Code Review Bot @ 2026-03-05 4:00 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
**Correctness of the API change**: The conversion from `memset(..., 0, size)` to `clear_pages(addr, pagecount)` is semantically correct — both zero out the same memory range. The `page_address()` return value is suitable for `clear_pages()`.
**Preemption latency concern**: The `clear_pages()` documentation in `include/linux/mm.h:4410-4417` explicitly states:
> clear_pages() does not (and on architectures where it reduces to a few long-running instructions, might not be able to) call cond_resched() to check if rescheduling is required.
> ...the caller is expected to limit @npages to no more than PROCESS_PAGES_NON_PREEMPT_BATCH.
Here, `pagecount` is derived directly from the user-supplied `len` parameter at lines 303-304:
```c
	size_t size = PAGE_ALIGN(len);
	pgoff_t pagecount = size >> PAGE_SHIFT;
```
CMA allocations can be quite large (tens or hundreds of MB). On architectures that define `clear_pages`, `PROCESS_PAGES_NON_PREEMPT_BATCH` is `SZ_32M >> PAGE_SHIFT` (8192 pages). If a user requests, say, a 256MB buffer, `pagecount` would be 65536, far exceeding the recommended batch limit.
The original `memset()` had the same theoretical issue (no `cond_resched()`), so this isn't a regression in that sense. But adopting `clear_pages()` while ignoring its documented calling convention is worth noting. Consider batching the clear in a loop with `cond_resched()` checks, similar to how the HighMem path iterates page-by-page with a `fatal_signal_pending()` check. For example:
```c
	unsigned long remaining = pagecount;
	void *addr = page_address(cma_pages);

	while (remaining > 0) {
		unsigned long batch = min_t(unsigned long, remaining,
					    PROCESS_PAGES_NON_PREEMPT_BATCH);

		clear_pages(addr, batch);
		addr += batch * PAGE_SIZE;
		remaining -= batch;

		cond_resched();
		if (fatal_signal_pending(current))
			goto free_cma;
	}
```
This would also bring parity with the HighMem path's `fatal_signal_pending()` check, which the non-HighMem path currently lacks (and has always lacked, even before this patch).
**Missing include**: Verify that `cma_heap.c` pulls in the `clear_pages()` declaration. It's defined in `<linux/mm.h>`, which is likely already included transitively, but it's worth confirming.
**Overall**: The patch is a reasonable micro-optimization. The main suggestion is to respect the `PROCESS_PAGES_NON_PREEMPT_BATCH` limit documented in the `clear_pages()` API, and optionally add the `fatal_signal_pending()` check that the HighMem path already has.
---
Generated by Claude Code Patch Reviewer
* Claude review: dma-buf: heaps: Clear CMA pages with clear_pages()
2026-03-10 8:53 ` [PATCH v2 1/2] dma-buf: heaps: Clear CMA pages with clear_pages() Linus Walleij
@ 2026-03-11 3:29 ` Claude Code Review Bot
0 siblings, 0 replies; 4+ messages in thread
From: Claude Code Review Bot @ 2026-03-11 3:29 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
**Change:** Replaces `memset(page_address(cma_pages), 0, size)` with `clear_pages(page_address(cma_pages), pagecount)`.
This is a clean substitution. The `clear_pages()` helper (introduced by commit 62a9f5a85b98) can use architecture-optimized assembly for zeroing, which is a potential performance win over generic `memset`.
**One minor observation:** The original code passed `size` (a byte count) to `memset`, while `clear_pages()` takes a page count. The two are equivalent here: `size` is `PAGE_ALIGN(len)` and `pagecount = size >> PAGE_SHIFT`, so `pagecount * PAGE_SIZE == size` and exactly the same range is cleared. No issue.
Looks good.
---
Generated by Claude Code Patch Reviewer
end of thread, other threads:[~2026-03-11 3:29 UTC | newest]
Thread overview: 4+ messages
-- links below jump to the message on this page --
2026-03-04 0:25 [PATCH] dma-buf: heaps: Clear CMA pages with clear_pages() Linus Walleij
2026-03-05 4:00 ` Claude review: " Claude Code Review Bot
2026-03-05 4:00 ` Claude Code Review Bot
-- strict thread matches above, loose matches on Subject: below --
2026-03-10 8:53 [PATCH v2 0/2] dma-buf: heaps: Use page clearing helpers Linus Walleij
2026-03-10 8:53 ` [PATCH v2 1/2] dma-buf: heaps: Clear CMA pages with clear_pages() Linus Walleij
2026-03-11 3:29 ` Claude review: " Claude Code Review Bot