From mboxrd@z Thu Jan 1 00:00:00 1970
From: Claude Code Review Bot
To: dri-devel-reviews@example.com
Subject: Claude review: dma-buf: heaps: Add Coherent heap to dmabuf heaps
Date: Mon, 09 Mar 2026 09:02:40 +1000
Message-ID:
In-Reply-To: <20260306-b4-dmabuf-heap-coherent-rmem-v3-5-3d00d36c9bc4@redhat.com>
References: <20260306-b4-dmabuf-heap-coherent-rmem-v3-0-3d00d36c9bc4@redhat.com> <20260306-b4-dmabuf-heap-coherent-rmem-v3-5-3d00d36c9bc4@redhat.com>
X-Mailer: Claude Code Patch Reviewer
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Patch Review

This is the main heap implementation.

**Issue: `begin_cpu_access` / `end_cpu_access` syncing for coherent memory**

```c
static int coherent_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
						  enum dma_data_direction direction)
{
	...
	if (buffer->vmap_cnt)
		invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);

	list_for_each_entry(a, &buffer->attachments, list) {
		if (!a->mapped)
			continue;
		dma_sync_sgtable_for_cpu(a->dev, &a->table, direction);
	}
```

For *coherent* memory, the CPU and the device see the same data without cache maintenance. The `dma_sync_sgtable_for_cpu/device()` calls and the `invalidate_kernel_vmap_range()`/`flush_kernel_vmap_range()` calls are unnecessary for coherent allocations and add overhead. The CMA heap performs these operations because CMA memory is not inherently coherent, but coherent reserved-memory is, by definition, coherent. These sync operations should either be documented as to why they are present (perhaps for non-cache-coherent architectures?) or reconsidered.

**Issue: `virt_to_page()` on the `dma_alloc_coherent()` return value**

```c
for (pg = 0; pg < pagecount; pg++)
	buffer->pages[pg] = virt_to_page((char *)buffer->alloc_vaddr +
					 (pg * PAGE_SIZE));
```

`dma_alloc_coherent()` returns a virtual address, but on some architectures (notably ARM with `no-map` regions) this may be an ioremap'd address rather than an address in the kernel linear map.
Calling `virt_to_page()` on such an address is **undefined behavior** and will produce garbage `struct page *` pointers. This is a significant concern on ARM platforms, where these coherent reserved-memory regions are commonly used with `no-map`. The pages obtained this way are later passed to `sg_alloc_table_from_pages()` during attach, which would corrupt the scatterlist.

Consider using `pfn_to_page(PHYS_PFN(buffer->dma_addr + pg * PAGE_SIZE))`, or computing the pages directly from the physical address of the rmem region instead.

**Issue: `coherent_heap_map_dma_buf` returns `-ENOMEM` for all errors**

```c
ret = dma_map_sgtable(attachment->dev, table, direction, 0);
if (ret)
	return ERR_PTR(-ENOMEM);
```

This should return `ERR_PTR(ret)` to preserve the actual error code.

**Issue: Double free of `coh_heap->name` on registration failure**

In `__coherent_heap_register()`:

```c
destroy_heap:
	dma_heap_destroy(coh_heap->heap);
	coh_heap->heap = NULL;
free_name:
	kfree(coh_heap->name);
```

But `dma_heap_destroy()` → `put_device()` → `dma_heap_dev_release()` will `kfree(heap->name)`, which is the **same pointer** as `coh_heap->name` (since `exp_info.name = coh_heap->name` and `dma_heap_create()` stores it). So `free_name` performs a double free. Either `dma_heap_dev_release()` should not free the name (see the patch 2 issue), or this `kfree(coh_heap->name)` should be removed.

**Issue: No `module_exit`, intentional but the resulting warning is not suppressed**

The commit message says the heap cannot be unloaded. When built as a module (patch 6), the lack of `module_exit` will generate a compiler warning. Consider adding a comment or an empty `module_exit()` stub.

**Minor: Missing newline in `pr_warn()`**

```c
pr_warn("Failed to add coherent heap %s",
	rmem->name ? rmem->name : "unknown");
```

The format string is missing a trailing `\n`.

---

Generated by Claude Code Patch Reviewer