From: Claude Code Review Bot <claude-review@example.com>
To: dri-devel-reviews@example.com
Subject: Claude review: dma-buf: heaps: Add Coherent heap to dmabuf heaps
Date: Fri, 27 Feb 2026 15:38:23 +1000
Message-ID: <review-patch1-20260224-b4-dmabuf-heap-coherent-rmem-v1-1-dffef43298ac@redhat.com>
In-Reply-To: <20260224-b4-dmabuf-heap-coherent-rmem-v1-1-dffef43298ac@redhat.com>
# Patch Review
#### Critical Issues
**1. `virt_to_page()` on `dma_alloc_coherent()` return value is unsafe**
```c
for (pg = 0; pg < pagecount; pg++)
	buffer->pages[pg] = virt_to_page((char *)buffer->alloc_vaddr +
					 (pg * PAGE_SIZE));
```
`dma_alloc_coherent()` on a device backed by reserved coherent memory goes through `dma_alloc_from_dev_coherent()`, which returns a pointer into `mem->virt_base`. That virtual base comes from `memremap(phys_addr, size, MEMREMAP_WC)` (in `dma_init_coherent_memory()` at `kernel/dma/coherent.c`). On architectures where `memremap()` returns a vmalloc/ioremap-space address rather than a linear-map address, `virt_to_page()` will produce garbage or crash.
The existing `coherent.c` code avoids this — it stores `pfn_base = PFN_DOWN(phys_addr)` and uses PFN arithmetic directly in `remap_pfn_range()` calls. The correct approach here would be to derive PFNs from the DMA address:
```c
/* dma_to_phys() comes from <linux/dma-direct.h> */
phys_addr_t phys = dma_to_phys(coh_heap->dev, buffer->dma_addr);

for (pg = 0; pg < pagecount; pg++)
	buffer->pages[pg] = pfn_to_page(PFN_DOWN(phys) + pg);
```
This is the single most critical issue in the patch.
**2. Missing memory zeroing — information leak to userspace**
The CMA heap explicitly zeroes allocated pages before exporting them:
```c
/* CMA heap does: */
memset(page_address(cma_pages), 0, size);
```
The coherent heap does not zero the memory returned by `dma_alloc_coherent()`. For reserved-memory coherent pools, `dma_alloc_from_dev_coherent()` returns a pointer into pre-mapped memory without zeroing it. Between allocate/free cycles, stale data from prior users is exposed. Since these buffers are exported to userspace, this is a security concern. A `memset(buffer->alloc_vaddr, 0, buffer->len)` is needed after allocation.
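A minimal sketch of the fix, placed right after a successful allocation in `coherent_heap_allocate()` (field names follow the patch; the error-label name is invented here):

```c
buffer->alloc_vaddr = dma_alloc_coherent(coh_heap->dev, size,
					 &buffer->dma_addr, GFP_KERNEL);
if (!buffer->alloc_vaddr) {
	ret = -ENOMEM;
	goto free_pages;	/* hypothetical label */
}
/* Reserved coherent pools hand back unzeroed, pre-mapped memory */
memset(buffer->alloc_vaddr, 0, size);
```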
#### Architectural Issues
**3. Reverse dependency: `kernel/dma/coherent.c` calling into dma-buf heaps**
```c
/* kernel/dma/coherent.c: */
+#include <linux/dma-heap.h>
...
+ if (IS_ENABLED(CONFIG_DMABUF_HEAPS_COHERENT)) {
+ int ret = dma_heap_coherent_register(rmem);
```
This creates a layering violation — core DMA infrastructure now calls up into the dma-buf heaps subsystem. The CMA heap does not do this; it uses `cma_for_each_area()` to iterate and register CMA areas autonomously from `drivers/dma-buf/heaps/cma_heap.c`, without touching any CMA core code.
The coherent heap should similarly find reserved-memory regions on its own at `late_initcall` time (e.g., by iterating the reserved-memory DT nodes), rather than hooking into `rmem_dma_setup()`. This would keep the modification entirely within `drivers/dma-buf/heaps/` and avoid touching `kernel/dma/coherent.c`.
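A self-contained registration pass could look roughly like this; `__coherent_heap_register()` is the patch's helper, everything else is a sketch:

```c
#include <linux/init.h>
#include <linux/of.h>
#include <linux/of_reserved_mem.h>

static int __init coherent_heaps_init(void)
{
	struct device_node *parent, *child;
	struct reserved_mem *rmem;

	parent = of_find_node_by_path("/reserved-memory");
	if (!parent)
		return 0;

	for_each_child_of_node(parent, child) {
		if (!of_device_is_compatible(child, "shared-dma-pool"))
			continue;
		/* "reusable" pools are CMA, already covered by the CMA heap */
		if (of_property_read_bool(child, "reusable"))
			continue;

		rmem = of_reserved_mem_lookup(child);
		if (rmem)
			__coherent_heap_register(rmem);
	}
	of_node_put(parent);

	return 0;
}
late_initcall(coherent_heaps_init);
```

Since slab and the driver core are fully up by `late_initcall` time, this also removes the need for the deferred-registration machinery (see point 8).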
**4. All coherent reserved-memory regions are unconditionally exposed to userspace**
Every `shared-dma-pool` (non-reusable) region automatically gets a dma-buf heap. Some of these regions may be intended exclusively for specific kernel drivers. A malicious or buggy userspace process could exhaust a device-specific reserved pool, starving the kernel driver. There should be a DT opt-in mechanism (e.g., a `dma-buf-heap` property) to control which regions are exposed, rather than exposing all of them.
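One possible shape, using a hypothetical `dma-buf-heap` opt-in property (the binding name is invented here for illustration, not an existing one):

```c
#include <linux/of.h>

/*
 * Device-tree side (hypothetical binding):
 *
 *	multimedia_pool: buffer@80000000 {
 *		compatible = "shared-dma-pool";
 *		reg = <0x80000000 0x4000000>;
 *		dma-buf-heap;	// opt in to userspace export
 *	};
 */
static bool rmem_node_wants_heap(struct device_node *np)
{
	return of_property_read_bool(np, "dma-buf-heap");
}
```

Regions without the property would then never be registered as heaps and remain kernel-only.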
#### Code Quality Issues
**5. Unnecessary `#if IS_ENABLED(CONFIG_CGROUP_DMEM)` guards throughout**
The header `linux/cgroup_dmem.h` already provides no-op stubs when `CONFIG_CGROUP_DMEM` is disabled (`dmem_cgroup_try_charge()` returns 0, `dmem_cgroup_uncharge()` is a no-op, etc.). All the `#if IS_ENABLED(CONFIG_CGROUP_DMEM)` blocks in the patch are unnecessary and clutter the code. The entire cgroup charging path can be written without any `#if` guards:
```c
/* In struct coherent_heap — no #if needed: */
struct dmem_cgroup_region *cg;
/* In allocate — no #if needed: */
if (mem_accounting) {
	ret = dmem_cgroup_try_charge(coh_heap->cg, size,
				     &buffer->pool, NULL);
	if (ret)
		goto free_buffer;
}
```
The stubs will compile to nothing when disabled. This would significantly clean up `coherent_heap_allocate()`, `coherent_heap_dma_buf_release()`, `__coherent_heap_register()`, and the error paths.
**6. `cg_unregister` error label structure is confusing**
```c
cg_unregister:
#if IS_ENABLED(CONFIG_CGROUP_DMEM)
dmem_cgroup_unregister_region(coh_heap->cg);
#endif
pdev_unregister:
```
When `CONFIG_CGROUP_DMEM` is disabled, `cg_unregister` is an empty label that falls through to `pdev_unregister`. This compiles but is confusing. Removing the ifdefs (as per point 5) would fix this naturally.
**7. Missing `device_release()` call in error/cleanup paths**
In `__coherent_heap_register()`, after a successful `rmem->ops->device_init()` call, the error paths only call `platform_device_unregister()` but never call `rmem->ops->device_release()` to undo the init:
```c
if (rmem->ops && rmem->ops->device_init) {
	ret = rmem->ops->device_init(rmem, &coh_heap->pdev->dev);
	if (ret)
		goto pdev_unregister;
}
/* ... later errors goto cg_unregister or pdev_unregister ... */
/* but never call rmem->ops->device_release() */
```
If `dma_heap_add()` or `dmem_cgroup_register_region()` fails after a successful `device_init()`, the reserved-memory binding is leaked. A matching `rmem->ops->device_release()` call is needed in the cleanup path.
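A sketch of an unwind order that pairs `device_init()` with `device_release()` (`cg_unregister` and `pdev_unregister` follow the patch; the `device_release` label is new here):

```c
cg_unregister:
	dmem_cgroup_unregister_region(coh_heap->cg);
device_release:
	if (rmem->ops && rmem->ops->device_release)
		rmem->ops->device_release(rmem, &coh_heap->pdev->dev);
pdev_unregister:
	platform_device_unregister(coh_heap->pdev);
```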
**8. Deferred registration only on `-ENOMEM` is fragile**
```c
int dma_heap_coherent_register(struct reserved_mem *rmem)
{
	int ret;

	ret = __coherent_heap_register(rmem);
	if (ret == -ENOMEM)
		return coherent_heap_add_deferred(rmem);
	return ret;
}
```
`rmem_dma_setup()` runs from `RESERVEDMEM_OF_DECLARE`, which is during very early boot before slab is available. The assumption is that `kzalloc()` will fail with `-ENOMEM` and trigger deferral. But `platform_device_register_simple()` or other calls could fail with different error codes (e.g., `-ENODEV`, `-EPROBE_DEFER`) before the allocator is ready. If the intent is to unconditionally defer during early boot, just always defer — or better yet, don't hook into `rmem_dma_setup()` at all (see point 3).
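If early-boot deferral really is the intent, the unconditional form is simpler (a sketch reusing the patch's helper):

```c
int dma_heap_coherent_register(struct reserved_mem *rmem)
{
	/*
	 * rmem_dma_setup() runs before the allocator is up; always queue
	 * the region and let the late_initcall drain the deferred list.
	 */
	return coherent_heap_add_deferred(rmem);
}
```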
**9. `COHERENT_AREAS_DEFERRED` Kconfig naming is inconsistent**
```
config COHERENT_AREAS_DEFERRED
	int "Max deferred coherent reserved-memory regions"
```
This name doesn't follow the `DMABUF_HEAPS_` prefix convention used by other options in this Kconfig. Should be something like `DMABUF_HEAPS_COHERENT_MAX_DEFERRED`. Also, making this a Kconfig integer for what is essentially an implementation detail of early boot deferral seems like unnecessary user-facing configuration.
**10. `begin_cpu_access`/`end_cpu_access` sync correctness for coherent memory**
```c
static int coherent_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
						  enum dma_data_direction direction)
{
	...
	list_for_each_entry(a, &buffer->attachments, list) {
		if (!a->mapped)
			continue;
		dma_sync_sgtable_for_cpu(a->dev, &a->table, direction);
	}
```
This is copied from the CMA heap, but the semantics are subtly different. The underlying allocation is DMA-coherent (uncached/write-combining), so the sync on the allocation-side device is unnecessary. The sync on *attaching* devices' sg_tables may still be needed if those devices' DMA mappings are not coherent. The code works but a comment explaining this would be helpful, since the reader may wonder why coherent memory needs sync at all.
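Something along these lines above the sync loop would do (wording is only a suggestion):

```c
/*
 * The heap's backing memory is DMA-coherent (uncached/write-combining),
 * so the allocating device needs no sync. Importing devices may still
 * map the buffer non-coherently, which is why each attachment's
 * sg_table is synced here.
 */
```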
**11. Dependency on unmerged patches**
The `mem_accounting` variable and the dmem cgroup charging pattern are from Eric's series ([1] in the cover letter) which is not yet merged. This should be clearly stated as a dependency, and ideally this patch should be part of or follow that series.
#### Minor Issues
- `kmalloc_array()` is used for `buffer->pages` while `kzalloc_obj()` is used elsewhere; this is also inconsistent with the CMA heap, which uses `kmalloc_objs()` for the same purpose.
- The `pr_warn` in `coherent_heap_register_deferred()` is missing a newline: `pr_warn("Failed to add coherent heap %s",` → should have `\n`.
- No `MODULE_LICENSE()` (though since it's `bool` not `tristate`, this is cosmetic — but `MODULE_DESCRIPTION` without `MODULE_LICENSE` is inconsistent).
---
Generated by Claude Code Patch Reviewer