From mboxrd@z Thu Jan 1 00:00:00 1970
From: Claude Code Review Bot
To: dri-devel-reviews@example.com
Subject: Claude review: dma-buf: heaps: Add Coherent heap to dmabuf heaps
Date: Wed, 04 Mar 2026 07:19:20 +1000
Message-ID: 
In-Reply-To: <20260303-b4-dmabuf-heap-coherent-rmem-v2-4-65a4653b3378@redhat.com>
References: <20260303-b4-dmabuf-heap-coherent-rmem-v2-0-65a4653b3378@redhat.com> <20260303-b4-dmabuf-heap-coherent-rmem-v2-4-65a4653b3378@redhat.com>
X-Mailer: Claude Code Patch Reviewer
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Patch Review

The main patch -- a 426-line new file `coherent_heap.c`. This is the core of the series.

**Issues:**

1. **`virt_to_page()` on DMA coherent memory is not universally safe.** The allocation path does:

```c
+	buffer->alloc_vaddr = dma_alloc_coherent(heap_dev, buffer->len,
+						 &buffer->dma_addr, GFP_KERNEL);
...
+	for (pg = 0; pg < pagecount; pg++)
+		buffer->pages[pg] = virt_to_page((char *)buffer->alloc_vaddr +
+						 (pg * PAGE_SIZE));
```

`dma_alloc_coherent()` may return memory from vmalloc space (especially on architectures that need uncached mappings), in which case `virt_to_page()` is incorrect -- it only works for direct-mapped (lowmem) kernel addresses. On ARM with `no-map` regions (which these coherent regions typically use), the return from `dma_alloc_coherent()` goes through `ioremap()` / `memremap()` paths, making `virt_to_page()` invalid. This could lead to returning bogus page structs.

Consider using `vmalloc_to_page()` or `pfn_to_page(PHYS_PFN(dma_to_phys(dev, dma_addr)))` instead (similar to how some other drivers handle this).

2. **`begin_cpu_access` / `end_cpu_access` syncs are unnecessary for coherent memory.** The whole point of coherent DMA memory is that it is cache-coherent and doesn't need explicit sync. The implementations:

```c
+static int coherent_heap_dma_buf_begin_cpu_access(...)
+{
+	...
+	list_for_each_entry(a, &buffer->attachments, list) {
+		if (!a->mapped)
+			continue;
+		dma_sync_sgtable_for_cpu(a->dev, &a->table, direction);
+	}
```

These `dma_sync_sgtable_for_cpu/device` calls should be no-ops for coherent memory, but iterating all attachments under a mutex for a no-op is wasteful. The `invalidate_kernel_vmap_range` / `flush_kernel_vmap_range` calls are also questionable since the vmap in `coherent_heap_do_vmap()` uses `PAGE_KERNEL` (cached), which could actually conflict with the uncached mapping from `dma_alloc_coherent()`. This is a potential aliasing problem -- having both a cached vmap and an uncached DMA mapping to the same physical pages will cause coherency issues on architectures without hardware coherence.

3. **Exported but unused `dma_heap_coherent_register()`.** The header adds:

```c
+#if IS_ENABLED(CONFIG_DMABUF_HEAPS_COHERENT)
+int dma_heap_coherent_register(struct reserved_mem *rmem);
+#else
+static inline int dma_heap_coherent_register(struct reserved_mem *rmem)
+...
```

But no implementation of `dma_heap_coherent_register()` exists in this patch. The actual function is `__coherent_heap_register()` (static). This declaration is never used by patch 5 either (which calls `dma_coherent_get_reserved_region()` instead). This appears to be dead code left over from a previous iteration.

4. **`dma_coerce_mask_and_coherent()` usage.** The function:

```c
+static int coherent_heap_init_dma_mask(struct device *dev)
+{
+	ret = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(64));
```

`dma_coerce_mask_and_coherent()` is documented as being for platform code / bus code only, not for drivers. The DMA mask should match the actual memory region. Since the reserved region has a known physical address and size, the mask should be derived from `rmem->base + rmem->size - 1` to ensure correctness.

5.
**`coherent_heap_map_dma_buf` loses the actual error code:**

```c
+	ret = dma_map_sgtable(attachment->dev, table, direction, 0);
+	if (ret)
+		return ERR_PTR(-ENOMEM);
```

The actual error from `dma_map_sgtable()` is discarded and replaced with `-ENOMEM`. Should be `return ERR_PTR(ret)`.

6. **The heap name comes from `rmem->name`**, which is the DT node name (e.g., `"framebuffer@80000000"`). This could contain characters that are invalid for device node names. No sanitization is done.

7. **No `MODULE_LICENSE` in this patch** -- the module_init is added but `MODULE_LICENSE("GPL")` is deferred to patch 6. This means this patch doesn't compile as a module in isolation (though patch 4's Kconfig has it as `bool`, so it only matters for bisect-then-change-Kconfig scenarios).

---

Generated by Claude Code Patch Reviewer
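Appendix for issue 1 -- a hedged kernel-code sketch (not standalone-compilable) of what the suggested allocation path could look like, deriving the page array from the DMA handle rather than the returned virtual address. Field names (`buffer->dma_addr`, `buffer->pages`, `heap_dev`, `pagecount`) are taken from the patch under review; note `dma_to_phys()` is only valid for direct-mapped DMA, so this assumes the reserved region is not behind an IOMMU:

```
	/* Translate the DMA handle back to a physical address instead of
	 * trusting that alloc_vaddr is a lowmem (direct-map) address. */
	phys_addr_t paddr = dma_to_phys(heap_dev, buffer->dma_addr);

	for (pg = 0; pg < pagecount; pg++)
		buffer->pages[pg] = pfn_to_page(PHYS_PFN(paddr) + pg);
```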