* [PATCH v2 0/6] dma-buf: heaps: add coherent reserved-memory heap
@ 2026-03-03 12:33 Albert Esteve
2026-03-03 12:33 ` [PATCH v2 1/6] dma-buf: dma-heap: Keep track of the heap device struct Albert Esteve
` (7 more replies)
0 siblings, 8 replies; 22+ messages in thread
From: Albert Esteve @ 2026-03-03 12:33 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan
Cc: linux-kernel, linux-media, dri-devel, linaro-mm-sig, iommu,
devicetree, Albert Esteve, echanude, mripard, John Stultz
This series introduces a new heap driver that exposes DT non-reusable
"shared-dma-pool" coherent regions as dma-buf heaps, so userspace can
allocate buffers from each reserved, named region.
Because these regions are device-dependent, each heap instance binds a
heap device to its reserved-mem region via a newly introduced helper
function, of_reserved_mem_device_init_with_mem(), so that coherent
allocations use the correct dev->dma_mem.
Charging to cgroups for these buffers is intentionally left out to keep
review focused on the new heap; I plan to follow up based on Eric’s [1]
and Maxime’s [2] work on dmem charging from userspace.
This series also makes the new heap driver modular, in line with the CMA
heap change in [3].
[1] https://lore.kernel.org/all/20260218-dmabuf-heap-cma-dmem-v2-0-b249886fb7b2@redhat.com/
[2] https://lore.kernel.org/all/20250310-dmem-cgroups-v1-0-2984c1bc9312@kernel.org/
[3] https://lore.kernel.org/all/20260303-dma-buf-heaps-as-modules-v3-0-24344812c707@kernel.org/
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
Changes in v2:
- Removed dmem charging parts
- Moved coherent heap registering logic to coherent.c
- Made heap device a member of struct dma_heap
- Split dma_heap_add logic into create/register, to be able to
access the stored heap device before it is registered.
- Avoid platform device in favour of heap device
- Added a wrapper to rmem device_init() op
- Switched from late_initcall() to module_init()
- Made the coherent heap driver modular
- Link to v1: https://lore.kernel.org/r/20260224-b4-dmabuf-heap-coherent-rmem-v1-1-dffef43298ac@redhat.com
---
Albert Esteve (5):
dma-buf: dma-heap: split dma_heap_add
of_reserved_mem: add a helper for rmem device_init op
dma-buf: heaps: Add Coherent heap to dmabuf heaps
dma: coherent: register to coherent heap
dma-buf: heaps: coherent: Turn heap into a module
John Stultz (1):
dma-buf: dma-heap: Keep track of the heap device struct
drivers/dma-buf/dma-heap.c | 138 +++++++++--
drivers/dma-buf/heaps/Kconfig | 9 +
drivers/dma-buf/heaps/Makefile | 1 +
drivers/dma-buf/heaps/coherent_heap.c | 429 ++++++++++++++++++++++++++++++++++
drivers/of/of_reserved_mem.c | 27 ++-
include/linux/dma-heap.h | 16 ++
include/linux/dma-map-ops.h | 7 +
include/linux/of_reserved_mem.h | 8 +
kernel/dma/coherent.c | 34 +++
9 files changed, 642 insertions(+), 27 deletions(-)
---
base-commit: 6de23f81a5e08be8fbf5e8d7e9febc72a5b5f27f
change-id: 20260223-b4-dmabuf-heap-coherent-rmem-91fd3926afe9
Best regards,
--
Albert Esteve <aesteve@redhat.com>
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v2 1/6] dma-buf: dma-heap: Keep track of the heap device struct
2026-03-03 12:33 [PATCH v2 0/6] dma-buf: heaps: add coherent reserved-memory heap Albert Esteve
@ 2026-03-03 12:33 ` Albert Esteve
2026-03-03 13:10 ` Maxime Ripard
2026-03-03 21:19 ` Claude review: " Claude Code Review Bot
2026-03-03 12:33 ` [PATCH v2 2/6] dma-buf: dma-heap: split dma_heap_add Albert Esteve
` (6 subsequent siblings)
7 siblings, 2 replies; 22+ messages in thread
From: Albert Esteve @ 2026-03-03 12:33 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan
Cc: linux-kernel, linux-media, dri-devel, linaro-mm-sig, iommu,
devicetree, Albert Esteve, echanude, mripard, John Stultz
From: John Stultz <john.stultz@linaro.org>
Keep track of the heap device struct.
This will be useful for special DMA allocations
and actions.
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
drivers/dma-buf/dma-heap.c | 34 ++++++++++++++++++++++++++--------
include/linux/dma-heap.h | 2 ++
2 files changed, 28 insertions(+), 8 deletions(-)
diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
index ac5f8685a6494..1124d63eb1398 100644
--- a/drivers/dma-buf/dma-heap.c
+++ b/drivers/dma-buf/dma-heap.c
@@ -31,6 +31,7 @@
* @heap_devt: heap device node
* @list: list head connecting to list of heaps
* @heap_cdev: heap char device
+ * @heap_dev: heap device
*
* Represents a heap of memory from which buffers can be made.
*/
@@ -41,6 +42,7 @@ struct dma_heap {
dev_t heap_devt;
struct list_head list;
struct cdev heap_cdev;
+ struct device *heap_dev;
};
static LIST_HEAD(heap_list);
@@ -223,6 +225,19 @@ const char *dma_heap_get_name(struct dma_heap *heap)
}
EXPORT_SYMBOL_NS_GPL(dma_heap_get_name, "DMA_BUF_HEAP");
+/**
+ * dma_heap_get_dev() - get device struct for the heap
+ * @heap: DMA-Heap to retrieve device struct from
+ *
+ * Returns:
+ * The device struct for the heap.
+ */
+struct device *dma_heap_get_dev(struct dma_heap *heap)
+{
+ return heap->heap_dev;
+}
+EXPORT_SYMBOL_NS_GPL(dma_heap_get_dev, "DMA_BUF_HEAP");
+
/**
* dma_heap_add - adds a heap to dmabuf heaps
* @exp_info: information needed to register this heap
@@ -230,7 +245,6 @@ EXPORT_SYMBOL_NS_GPL(dma_heap_get_name, "DMA_BUF_HEAP");
struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
{
struct dma_heap *heap, *h, *err_ret;
- struct device *dev_ret;
unsigned int minor;
int ret;
@@ -272,14 +286,14 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
goto err1;
}
- dev_ret = device_create(dma_heap_class,
- NULL,
- heap->heap_devt,
- NULL,
- heap->name);
- if (IS_ERR(dev_ret)) {
+ heap->heap_dev = device_create(dma_heap_class,
+ NULL,
+ heap->heap_devt,
+ NULL,
+ heap->name);
+ if (IS_ERR(heap->heap_dev)) {
pr_err("dma_heap: Unable to create device\n");
- err_ret = ERR_CAST(dev_ret);
+ err_ret = ERR_CAST(heap->heap_dev);
goto err2;
}
@@ -295,6 +309,10 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
}
}
+ /* Make sure it doesn't disappear on us */
+ heap->heap_dev = get_device(heap->heap_dev);
+
+
/* Add heap to the list */
list_add(&heap->list, &heap_list);
mutex_unlock(&heap_list_lock);
diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
index 648328a64b27e..493085e69b70e 100644
--- a/include/linux/dma-heap.h
+++ b/include/linux/dma-heap.h
@@ -12,6 +12,7 @@
#include <linux/types.h>
struct dma_heap;
+struct device;
/**
* struct dma_heap_ops - ops to operate on a given heap
@@ -43,6 +44,7 @@ struct dma_heap_export_info {
void *dma_heap_get_drvdata(struct dma_heap *heap);
const char *dma_heap_get_name(struct dma_heap *heap);
+struct device *dma_heap_get_dev(struct dma_heap *heap);
struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info);
--
2.52.0
^ permalink raw reply related [flat|nested] 22+ messages in thread
* Re: [PATCH v2 1/6] dma-buf: dma-heap: Keep track of the heap device struct
2026-03-03 12:33 ` [PATCH v2 1/6] dma-buf: dma-heap: Keep track of the heap device struct Albert Esteve
@ 2026-03-03 13:10 ` Maxime Ripard
2026-03-03 21:19 ` Claude review: " Claude Code Review Bot
1 sibling, 0 replies; 22+ messages in thread
From: Maxime Ripard @ 2026-03-03 13:10 UTC (permalink / raw)
To: Albert Esteve
Cc: devicetree, dri-devel, echanude, iommu, linaro-mm-sig,
linux-kernel, linux-media, mripard, Benjamin Gaignard,
Brian Starkey, Christian König, John Stultz, John Stultz,
Marek Szyprowski, Maxime Ripard, Rob Herring, Robin Murphy,
Saravana Kannan, Sumit Semwal, T.J. Mercier
On Tue, 3 Mar 2026 13:33:44 +0100, Albert Esteve wrote:
> From: John Stultz <john.stultz@linaro.org>
>
> Keep track of the heap device struct.
>
> This will be useful for special DMA allocations
>
> [ ... ]
Reviewed-by: Maxime Ripard <mripard@kernel.org>
Thanks!
Maxime
^ permalink raw reply [flat|nested] 22+ messages in thread
* Claude review: dma-buf: dma-heap: Keep track of the heap device struct
2026-03-03 12:33 ` [PATCH v2 1/6] dma-buf: dma-heap: Keep track of the heap device struct Albert Esteve
2026-03-03 13:10 ` Maxime Ripard
@ 2026-03-03 21:19 ` Claude Code Review Bot
1 sibling, 0 replies; 22+ messages in thread
From: Claude Code Review Bot @ 2026-03-03 21:19 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
**From John Stultz, carried by Albert Esteve.**
This patch stores the `device *` returned by `device_create()` in `heap->heap_dev` instead of a local variable, and adds a `dma_heap_get_dev()` accessor.
**Issues:**
1. **Spurious `get_device()` and extra blank line.** The patch adds:
```c
+ /* Make sure it doesn't disappear on us */
+ heap->heap_dev = get_device(heap->heap_dev);
+
+
/* Add heap to the list */
```
There's a double blank line here (minor style). More importantly, this `get_device()` takes an extra reference on a device that was just created by `device_create()` -- the heap already holds a reference from creation, and there is no corresponding `put_device()` in any error or teardown path in this patch. This leaks a refcount. Patch 2 restructures things and removes this `get_device()`, but in the meantime patch 1 is broken in isolation. Ideally each patch should be correct standalone (bisectability).
2. **The `heap_dev` pointer is not cleaned up on the `err3` path** (duplicate name). If `device_create()` succeeds but the name check fails, `heap_dev` is stored in the struct but `device_destroy()` cleans up by devt. The field is then freed with `kfree(heap)`. This is fine functionally, but the extra `get_device()` above means the device won't actually be freed. Again, this is resolved in patch 2, but patch 1 in isolation has a refcount leak.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v2 2/6] dma-buf: dma-heap: split dma_heap_add
2026-03-03 12:33 [PATCH v2 0/6] dma-buf: heaps: add coherent reserved-memory heap Albert Esteve
2026-03-03 12:33 ` [PATCH v2 1/6] dma-buf: dma-heap: Keep track of the heap device struct Albert Esteve
@ 2026-03-03 12:33 ` Albert Esteve
2026-03-03 21:19 ` Claude review: " Claude Code Review Bot
2026-03-03 12:33 ` [PATCH v2 3/6] of_reserved_mem: add a helper for rmem device_init op Albert Esteve
` (5 subsequent siblings)
7 siblings, 1 reply; 22+ messages in thread
From: Albert Esteve @ 2026-03-03 12:33 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan
Cc: linux-kernel, linux-media, dri-devel, linaro-mm-sig, iommu,
devicetree, Albert Esteve, echanude, mripard
Split dma_heap_add() into creation and registration
phases while preserving the ordering between
cdev_add() and device_add(), and ensuring all
device fields are initialised.
This will allow access to the heap_dev before
it is registered and becomes available to
userspace, making error handling easier.
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
drivers/dma-buf/dma-heap.c | 126 +++++++++++++++++++++++++++++++++++----------
include/linux/dma-heap.h | 3 ++
2 files changed, 103 insertions(+), 26 deletions(-)
diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
index 1124d63eb1398..88189d4e48561 100644
--- a/drivers/dma-buf/dma-heap.c
+++ b/drivers/dma-buf/dma-heap.c
@@ -238,15 +238,30 @@ struct device *dma_heap_get_dev(struct dma_heap *heap)
}
EXPORT_SYMBOL_NS_GPL(dma_heap_get_dev, "DMA_BUF_HEAP");
+static void dma_heap_dev_release(struct device *dev)
+{
+ struct dma_heap *heap;
+
+ pr_debug("heap device: '%s': %s\n", dev_name(dev), __func__);
+ heap = dev_get_drvdata(dev);
+ kfree(heap->name);
+ kfree(heap);
+ kfree(dev);
+}
+
/**
- * dma_heap_add - adds a heap to dmabuf heaps
- * @exp_info: information needed to register this heap
+ * dma_heap_create() - allocate and initialize a heap object
+ * @exp_info: information needed to create a heap
+ *
+ * Creates a heap instance but does not register it or create device nodes.
+ * Use dma_heap_register() to make it visible to userspace, or
+ * dma_heap_destroy() to release it.
+ *
+ * Returns a heap on success or ERR_PTR(-errno) on failure.
*/
-struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
+struct dma_heap *dma_heap_create(const struct dma_heap_export_info *exp_info)
{
- struct dma_heap *heap, *h, *err_ret;
- unsigned int minor;
- int ret;
+ struct dma_heap *heap;
if (!exp_info->name || !strcmp(exp_info->name, "")) {
pr_err("dma_heap: Cannot add heap without a name\n");
@@ -265,13 +280,41 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
heap->name = exp_info->name;
heap->ops = exp_info->ops;
heap->priv = exp_info->priv;
+ heap->heap_dev = kzalloc_obj(*heap->heap_dev);
+ if (!heap->heap_dev) {
+ kfree(heap);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ device_initialize(heap->heap_dev);
+ dev_set_drvdata(heap->heap_dev, heap);
+
+ dev_set_name(heap->heap_dev, heap->name);
+ heap->heap_dev->class = dma_heap_class;
+ heap->heap_dev->release = dma_heap_dev_release;
+
+ return heap;
+}
+EXPORT_SYMBOL_NS_GPL(dma_heap_create, "DMA_BUF_HEAP");
+
+/**
+ * dma_heap_register() - register a heap with the dma-heap framework
+ * @heap: heap instance created with dma_heap_create()
+ *
+ * Registers the heap, creating its device node and adding it to the heap
+ * list. Returns 0 on success or a negative error code on failure.
+ */
+int dma_heap_register(struct dma_heap *heap)
+{
+ struct dma_heap *h;
+ unsigned int minor;
+ int ret;
/* Find unused minor number */
ret = xa_alloc(&dma_heap_minors, &minor, heap,
XA_LIMIT(0, NUM_HEAP_MINORS - 1), GFP_KERNEL);
if (ret < 0) {
pr_err("dma_heap: Unable to get minor number for heap\n");
- err_ret = ERR_PTR(ret);
goto err0;
}
@@ -282,42 +325,34 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
ret = cdev_add(&heap->heap_cdev, heap->heap_devt, 1);
if (ret < 0) {
pr_err("dma_heap: Unable to add char device\n");
- err_ret = ERR_PTR(ret);
goto err1;
}
- heap->heap_dev = device_create(dma_heap_class,
- NULL,
- heap->heap_devt,
- NULL,
- heap->name);
- if (IS_ERR(heap->heap_dev)) {
- pr_err("dma_heap: Unable to create device\n");
- err_ret = ERR_CAST(heap->heap_dev);
+ heap->heap_dev->devt = heap->heap_devt;
+
+ ret = device_add(heap->heap_dev);
+ if (ret) {
+ pr_err("dma_heap: Unable to add device\n");
goto err2;
}
mutex_lock(&heap_list_lock);
/* check the name is unique */
list_for_each_entry(h, &heap_list, list) {
- if (!strcmp(h->name, exp_info->name)) {
+ if (!strcmp(h->name, heap->name)) {
mutex_unlock(&heap_list_lock);
pr_err("dma_heap: Already registered heap named %s\n",
- exp_info->name);
- err_ret = ERR_PTR(-EINVAL);
+ heap->name);
+ ret = -EINVAL;
goto err3;
}
}
- /* Make sure it doesn't disappear on us */
- heap->heap_dev = get_device(heap->heap_dev);
-
-
/* Add heap to the list */
list_add(&heap->list, &heap_list);
mutex_unlock(&heap_list_lock);
- return heap;
+ return 0;
err3:
device_destroy(dma_heap_class, heap->heap_devt);
@@ -326,8 +361,47 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
err1:
xa_erase(&dma_heap_minors, minor);
err0:
- kfree(heap);
- return err_ret;
+ dma_heap_destroy(heap);
+ return ret;
+}
+EXPORT_SYMBOL_NS_GPL(dma_heap_register, "DMA_BUF_HEAP");
+
+/**
+ * dma_heap_destroy() - release a heap created by dma_heap_create()
+ * @heap: heap instance to release
+ *
+ * Drops the heap device reference; the heap and its device are freed in the
+ * device release path when the last reference is gone.
+ */
+void dma_heap_destroy(struct dma_heap *heap)
+{
+ put_device(heap->heap_dev);
+}
+EXPORT_SYMBOL_NS_GPL(dma_heap_destroy, "DMA_BUF_HEAP");
+
+/**
+ * dma_heap_add - adds a heap to dmabuf heaps
+ * @exp_info: information needed to register this heap
+ */
+struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
+{
+ struct dma_heap *heap;
+ int ret;
+
+ heap = dma_heap_create(exp_info);
+ if (IS_ERR(heap)) {
+ pr_err("dma_heap: failed to create heap (%d)\n", PTR_ERR(heap));
+ return PTR_ERR(heap);
+ }
+
+ ret = dma_heap_register(heap);
+ if (ret) {
+ pr_err("dma_heap: failed to register heap (%d)\n", ret);
+ dma_heap_destroy(heap);
+ return ERR_PTR(ret);
+ }
+
+ return heap;
}
EXPORT_SYMBOL_NS_GPL(dma_heap_add, "DMA_BUF_HEAP");
diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
index 493085e69b70e..1b0ea43ba66c3 100644
--- a/include/linux/dma-heap.h
+++ b/include/linux/dma-heap.h
@@ -46,6 +46,9 @@ void *dma_heap_get_drvdata(struct dma_heap *heap);
const char *dma_heap_get_name(struct dma_heap *heap);
struct device *dma_heap_get_dev(struct dma_heap *heap);
+struct dma_heap *dma_heap_create(const struct dma_heap_export_info *exp_info);
+int dma_heap_register(struct dma_heap *heap);
+void dma_heap_destroy(struct dma_heap *heap);
struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info);
extern bool mem_accounting;
--
2.52.0
^ permalink raw reply related [flat|nested] 22+ messages in thread
* Claude review: dma-buf: dma-heap: split dma_heap_add
2026-03-03 12:33 ` [PATCH v2 2/6] dma-buf: dma-heap: split dma_heap_add Albert Esteve
@ 2026-03-03 21:19 ` Claude Code Review Bot
0 siblings, 0 replies; 22+ messages in thread
From: Claude Code Review Bot @ 2026-03-03 21:19 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
This splits `dma_heap_add()` into `dma_heap_create()` + `dma_heap_register()`, with a `dma_heap_destroy()` that calls `put_device()`, and a `dma_heap_dev_release()` callback.
**Issues:**
1. **Type error / wrong return in refactored `dma_heap_add()`.** The refactored `dma_heap_add()` has:
```c
+ heap = dma_heap_create(exp_info);
+ if (IS_ERR(heap)) {
+ pr_err("dma_heap: failed to create heap (%d)\n", PTR_ERR(heap));
+ return PTR_ERR(heap);
+ }
```
`dma_heap_add()` returns `struct dma_heap *`, but `PTR_ERR(heap)` returns `long`. This is a type mismatch -- it should be `return ERR_CAST(heap)` or just `return heap`. Note: Patch 4 fixes this exact bug by changing it to `ERR_CAST(heap)` and the format specifier to `%ld`, but that means patch 2 is broken in isolation (it will produce a compiler warning and return a truncated pointer, not an error-encoded pointer, on 64-bit).
2. **`dma_heap_register()` error path calls `dma_heap_destroy()` after `device_destroy()`.** In `dma_heap_register()`:
```c
err3:
device_destroy(dma_heap_class, heap->heap_devt);
...
err0:
- kfree(heap);
- return err_ret;
+ dma_heap_destroy(heap);
+ return ret;
```
`dma_heap_destroy()` calls `put_device(heap->heap_dev)`, which may trigger `dma_heap_dev_release()` and free the device. But `device_destroy()` on `err3` already destroys the device by class+devt. If `device_add()` succeeded (err3 path), calling both `device_destroy()` and then `put_device()` would be a double-free/use-after-free. The err3 path should probably just call `device_del()` (remove from sysfs) and let the `put_device()` in `dma_heap_destroy()` handle the final free, or the destroy should skip the put if device_destroy already handled it.
Actually, looking more carefully: `device_destroy()` calls `device_unregister()` which is `device_del()` + `put_device()`. So after `device_destroy()`, the device reference from `device_add()` is dropped. Then `dma_heap_destroy()` -> `put_device()` drops the reference from `device_initialize()`. Since `device_add()` takes an extra ref, and `device_initialize()` sets refcount to 1, this sequence might actually be correct (the initialize ref + the add ref = 2, device_destroy drops 1, dma_heap_destroy drops 1). But this is fragile and non-obvious -- it relies on `device_add()` taking a get_device. This deserves a comment at minimum.
3. **`dma_heap_dev_release()` frees `heap->name` and `heap`.** In `dma_heap_create()`, `heap->name = exp_info->name` just copies the pointer. If the caller allocated the name (as coherent_heap does with `kstrdup`), freeing it in the device release is correct. But for existing callers that pass string literals (e.g., the system heap passes `"system"`), `kfree(heap->name)` on a string literal would be undefined behavior. The existing `dma_heap_add()` wrapper doesn't copy the name either. This is a latent bug for any code path that triggers device release on a heap created via `dma_heap_add()`.
4. **Minor:** `dma_heap_create()` uses `kzalloc_obj(*heap->heap_dev)` which allocates a bare `struct device`. This works but is unusual -- most kernel subsystems embed the device in a larger struct or use specialized device allocation. The `kfree(dev)` in the release function confirms this is a standalone allocation.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v2 3/6] of_reserved_mem: add a helper for rmem device_init op
2026-03-03 12:33 [PATCH v2 0/6] dma-buf: heaps: add coherent reserved-memory heap Albert Esteve
2026-03-03 12:33 ` [PATCH v2 1/6] dma-buf: dma-heap: Keep track of the heap device struct Albert Esteve
2026-03-03 12:33 ` [PATCH v2 2/6] dma-buf: dma-heap: split dma_heap_add Albert Esteve
@ 2026-03-03 12:33 ` Albert Esteve
2026-03-03 13:13 ` Maxime Ripard
2026-03-03 21:19 ` Claude review: " Claude Code Review Bot
2026-03-03 12:33 ` [PATCH v2 4/6] dma-buf: heaps: Add Coherent heap to dmabuf heaps Albert Esteve
` (4 subsequent siblings)
7 siblings, 2 replies; 22+ messages in thread
From: Albert Esteve @ 2026-03-03 12:33 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan
Cc: linux-kernel, linux-media, dri-devel, linaro-mm-sig, iommu,
devicetree, Albert Esteve, echanude, mripard
Add a helper function wrapping the internal reserved-memory
device_init call, and expose it externally.
Use the new helper function within of_reserved_mem_device_init_by_idx().
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
drivers/of/of_reserved_mem.c | 27 +++++++++++++++++++++++----
include/linux/of_reserved_mem.h | 8 ++++++++
2 files changed, 31 insertions(+), 4 deletions(-)
diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
index 1fd28f8056108..3a350bef8f11e 100644
--- a/drivers/of/of_reserved_mem.c
+++ b/drivers/of/of_reserved_mem.c
@@ -605,6 +605,28 @@ struct rmem_assigned_device {
static LIST_HEAD(of_rmem_assigned_device_list);
static DEFINE_MUTEX(of_rmem_assigned_device_mutex);
+/**
+ * of_reserved_mem_device_init_with_mem() - assign reserved memory region to
+ * given device
+ * @dev: Pointer to the device to configure
+ * @rmem: Reserved memory region to assign
+ *
+ * This function assigns respective DMA-mapping operations based on the
+ * reserved memory region already provided in @rmem to the @dev device,
+ * without walking DT nodes.
+ *
+ * Returns error code or zero on success.
+ */
+int of_reserved_mem_device_init_with_mem(struct device *dev,
+ struct reserved_mem *rmem)
+{
+ if (!dev || !rmem || !rmem->ops || !rmem->ops->device_init)
+ return -EINVAL;
+
+ return rmem->ops->device_init(rmem, dev);
+}
+EXPORT_SYMBOL_GPL(of_reserved_mem_device_init_with_mem);
+
/**
* of_reserved_mem_device_init_by_idx() - assign reserved memory region to
* given device
@@ -643,14 +665,11 @@ int of_reserved_mem_device_init_by_idx(struct device *dev,
rmem = of_reserved_mem_lookup(target);
of_node_put(target);
- if (!rmem || !rmem->ops || !rmem->ops->device_init)
- return -EINVAL;
-
rd = kmalloc_obj(struct rmem_assigned_device);
if (!rd)
return -ENOMEM;
- ret = rmem->ops->device_init(rmem, dev);
+ ret = of_reserved_mem_device_init_with_mem(dev, rmem);
if (ret == 0) {
rd->dev = dev;
rd->rmem = rmem;
diff --git a/include/linux/of_reserved_mem.h b/include/linux/of_reserved_mem.h
index f573423359f48..12f7ddb7ee61f 100644
--- a/include/linux/of_reserved_mem.h
+++ b/include/linux/of_reserved_mem.h
@@ -32,6 +32,8 @@ typedef int (*reservedmem_of_init_fn)(struct reserved_mem *rmem);
#define RESERVEDMEM_OF_DECLARE(name, compat, init) \
_OF_DECLARE(reservedmem, name, compat, init, reservedmem_of_init_fn)
+int of_reserved_mem_device_init_with_mem(struct device *dev,
+ struct reserved_mem *rmem);
int of_reserved_mem_device_init_by_idx(struct device *dev,
struct device_node *np, int idx);
int of_reserved_mem_device_init_by_name(struct device *dev,
@@ -51,6 +53,12 @@ int of_reserved_mem_region_count(const struct device_node *np);
#define RESERVEDMEM_OF_DECLARE(name, compat, init) \
_OF_DECLARE_STUB(reservedmem, name, compat, init, reservedmem_of_init_fn)
+static inline int of_reserved_mem_device_init_with_mem(struct device *dev,
+ struct reserved_mem *rmem)
+{
+ return -EOPNOTSUPP;
+}
+
static inline int of_reserved_mem_device_init_by_idx(struct device *dev,
struct device_node *np, int idx)
{
--
2.52.0
^ permalink raw reply related [flat|nested] 22+ messages in thread
* Re: [PATCH v2 3/6] of_reserved_mem: add a helper for rmem device_init op
2026-03-03 12:33 ` [PATCH v2 3/6] of_reserved_mem: add a helper for rmem device_init op Albert Esteve
@ 2026-03-03 13:13 ` Maxime Ripard
2026-03-03 21:19 ` Claude review: " Claude Code Review Bot
1 sibling, 0 replies; 22+ messages in thread
From: Maxime Ripard @ 2026-03-03 13:13 UTC (permalink / raw)
To: Albert Esteve
Cc: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan, linux-kernel,
linux-media, dri-devel, linaro-mm-sig, iommu, devicetree,
echanude
On Tue, Mar 03, 2026 at 01:33:46PM +0100, Albert Esteve wrote:
> Add a helper function wrapping internal reserved memory
> device_init call and expose it externally.
>
> Use the new helper function within of_reserved_mem_device_init_by_idx().
>
> Signed-off-by: Albert Esteve <aesteve@redhat.com>
> ---
> drivers/of/of_reserved_mem.c | 27 +++++++++++++++++++++++----
> include/linux/of_reserved_mem.h | 8 ++++++++
> 2 files changed, 31 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
> index 1fd28f8056108..3a350bef8f11e 100644
> --- a/drivers/of/of_reserved_mem.c
> +++ b/drivers/of/of_reserved_mem.c
> @@ -605,6 +605,28 @@ struct rmem_assigned_device {
> static LIST_HEAD(of_rmem_assigned_device_list);
> static DEFINE_MUTEX(of_rmem_assigned_device_mutex);
>
> +/**
> + * of_reserved_mem_device_init_with_mem() - assign reserved memory region to
> + * given device
> + * @dev: Pointer to the device to configure
> + * @rmem: Reserved memory region to assign
> + *
> + * This function assigns respective DMA-mapping operations based on the
> + * reserved memory region already provided in @rmem to the @dev device,
> + * without walking DT nodes.
> + *
> + * Returns error code or zero on success.
> + */
> +int of_reserved_mem_device_init_with_mem(struct device *dev,
> + struct reserved_mem *rmem)
> +{
> + if (!dev || !rmem || !rmem->ops || !rmem->ops->device_init)
> + return -EINVAL;
> +
> + return rmem->ops->device_init(rmem, dev);
> +}
> +EXPORT_SYMBOL_GPL(of_reserved_mem_device_init_with_mem);
> +
> /**
> * of_reserved_mem_device_init_by_idx() - assign reserved memory region to
> * given device
> @@ -643,14 +665,11 @@ int of_reserved_mem_device_init_by_idx(struct device *dev,
> rmem = of_reserved_mem_lookup(target);
> of_node_put(target);
>
> - if (!rmem || !rmem->ops || !rmem->ops->device_init)
> - return -EINVAL;
> -
> rd = kmalloc_obj(struct rmem_assigned_device);
> if (!rd)
> return -ENOMEM;
>
> - ret = rmem->ops->device_init(rmem, dev);
> + ret = of_reserved_mem_device_init_with_mem(dev, rmem);
> if (ret == 0) {
> rd->dev = dev;
> rd->rmem = rmem;
I think you need to take the allocation of rd, and everything below.
Otherwise, your device, despite being attached, wouldn't be listed
anywhere.
Maxime
^ permalink raw reply [flat|nested] 22+ messages in thread
* Claude review: of_reserved_mem: add a helper for rmem device_init op
2026-03-03 12:33 ` [PATCH v2 3/6] of_reserved_mem: add a helper for rmem device_init op Albert Esteve
2026-03-03 13:13 ` Maxime Ripard
@ 2026-03-03 21:19 ` Claude Code Review Bot
1 sibling, 0 replies; 22+ messages in thread
From: Claude Code Review Bot @ 2026-03-03 21:19 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Adds `of_reserved_mem_device_init_with_mem()` that calls `rmem->ops->device_init()` directly, and refactors `of_reserved_mem_device_init_by_idx()` to use it.
**Issues:**
1. **Memory leak introduced in `of_reserved_mem_device_init_by_idx()`.** The original code checks `if (!rmem || !rmem->ops || !rmem->ops->device_init) return -EINVAL;` **before** the `kmalloc_obj()`. The refactored code moves the validation into `of_reserved_mem_device_init_with_mem()`, but now `rd` is allocated first:
```c
- if (!rmem || !rmem->ops || !rmem->ops->device_init)
- return -EINVAL;
-
rd = kmalloc_obj(struct rmem_assigned_device);
if (!rd)
return -ENOMEM;
- ret = rmem->ops->device_init(rmem, dev);
+ ret = of_reserved_mem_device_init_with_mem(dev, rmem);
```
If `rmem` is NULL or has no ops, the new helper returns `-EINVAL`, but `rd` was already allocated and is only freed in the `else` branch of `if (ret == 0)`. Looking at the original code after the hunk: `if (ret == 0) { ... } else { kfree(rd); }` -- so `rd` does get freed on the error path. This is actually fine. However, there's a subtler issue: the original code never reached `kmalloc_obj()` when `rmem` was NULL, but now it does, wasting memory only to immediately free it. Not a bug per se, but a minor inefficiency change.
2. **Good:** The `#ifndef CONFIG_OF_RESERVED_MEM` stub returns `-EOPNOTSUPP` which is appropriate.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v2 4/6] dma-buf: heaps: Add Coherent heap to dmabuf heaps
2026-03-03 12:33 [PATCH v2 0/6] dma-buf: heaps: add coherent reserved-memory heap Albert Esteve
` (2 preceding siblings ...)
2026-03-03 12:33 ` [PATCH v2 3/6] of_reserved_mem: add a helper for rmem device_init op Albert Esteve
@ 2026-03-03 12:33 ` Albert Esteve
2026-03-03 13:20 ` Maxime Ripard
2026-03-03 21:19 ` Claude review: " Claude Code Review Bot
2026-03-03 12:33 ` [PATCH v2 5/6] dma: coherent: register to coherent heap Albert Esteve
` (3 subsequent siblings)
7 siblings, 2 replies; 22+ messages in thread
From: Albert Esteve @ 2026-03-03 12:33 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan
Cc: linux-kernel, linux-media, dri-devel, linaro-mm-sig, iommu,
devicetree, Albert Esteve, echanude, mripard
Add a dma-buf heap for DT coherent reserved-memory
(i.e., 'shared-dma-pool' without 'reusable' property),
exposing one heap per region for userspace buffers.
The heap binds the heap device to each memory region so
coherent allocations use the correct dev->dma_mem, and
it defers registration until module_init when normal
allocators are available.
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
drivers/dma-buf/dma-heap.c | 4 +-
drivers/dma-buf/heaps/Kconfig | 9 +
drivers/dma-buf/heaps/Makefile | 1 +
drivers/dma-buf/heaps/coherent_heap.c | 426 ++++++++++++++++++++++++++++++++++
include/linux/dma-heap.h | 11 +
include/linux/dma-map-ops.h | 7 +
6 files changed, 456 insertions(+), 2 deletions(-)
diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
index 88189d4e48561..ba87e5ac16ae2 100644
--- a/drivers/dma-buf/dma-heap.c
+++ b/drivers/dma-buf/dma-heap.c
@@ -390,8 +390,8 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
heap = dma_heap_create(exp_info);
if (IS_ERR(heap)) {
- pr_err("dma_heap: failed to create heap (%d)\n", PTR_ERR(heap));
- return PTR_ERR(heap);
+ pr_err("dma_heap: failed to create heap (%ld)\n", PTR_ERR(heap));
+ return ERR_CAST(heap);
}
ret = dma_heap_register(heap);
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index a5eef06c42264..aeb475e585048 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -12,3 +12,12 @@ config DMABUF_HEAPS_CMA
Choose this option to enable dma-buf CMA heap. This heap is backed
by the Contiguous Memory Allocator (CMA). If your system has these
regions, you should say Y here.
+
+config DMABUF_HEAPS_COHERENT
+ bool "DMA-BUF Coherent Reserved-Memory Heap"
+ depends on DMABUF_HEAPS && OF_RESERVED_MEM && DMA_DECLARE_COHERENT
+ help
+ Choose this option to enable coherent reserved-memory dma-buf heaps.
+ This heap is backed by non-reusable DT "shared-dma-pool" regions.
+ If your system defines coherent reserved-memory regions, you should
+ say Y here.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index 974467791032f..96bda7a65f041 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,3 +1,4 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o
+obj-$(CONFIG_DMABUF_HEAPS_COHERENT) += coherent_heap.o
diff --git a/drivers/dma-buf/heaps/coherent_heap.c b/drivers/dma-buf/heaps/coherent_heap.c
new file mode 100644
index 0000000000000..d033d737bb9df
--- /dev/null
+++ b/drivers/dma-buf/heaps/coherent_heap.c
@@ -0,0 +1,426 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMABUF heap for coherent reserved-memory regions
+ *
+ * Copyright (C) 2026 Red Hat, Inc.
+ * Author: Albert Esteve <aesteve@redhat.com>
+ *
+ */
+
+#include <linux/dma-buf.h>
+#include <linux/dma-heap.h>
+#include <linux/dma-map-ops.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/highmem.h>
+#include <linux/iosys-map.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+
+struct coherent_heap {
+ struct dma_heap *heap;
+ struct reserved_mem *rmem;
+ char *name;
+};
+
+struct coherent_heap_buffer {
+ struct coherent_heap *heap;
+ struct list_head attachments;
+ struct mutex lock;
+ unsigned long len;
+ dma_addr_t dma_addr;
+ void *alloc_vaddr;
+ struct page **pages;
+ pgoff_t pagecount;
+ int vmap_cnt;
+ void *vaddr;
+};
+
+struct dma_heap_attachment {
+ struct device *dev;
+ struct sg_table table;
+ struct list_head list;
+ bool mapped;
+};
+
+static int coherent_heap_attach(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *attachment)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a;
+ int ret;
+
+ a = kzalloc_obj(*a);
+ if (!a)
+ return -ENOMEM;
+
+ ret = sg_alloc_table_from_pages(&a->table, buffer->pages,
+ buffer->pagecount, 0,
+ buffer->pagecount << PAGE_SHIFT,
+ GFP_KERNEL);
+ if (ret) {
+ kfree(a);
+ return ret;
+ }
+
+ a->dev = attachment->dev;
+ INIT_LIST_HEAD(&a->list);
+ a->mapped = false;
+
+ attachment->priv = a;
+
+ mutex_lock(&buffer->lock);
+ list_add(&a->list, &buffer->attachments);
+ mutex_unlock(&buffer->lock);
+
+ return 0;
+}
+
+static void coherent_heap_detach(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *attachment)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a = attachment->priv;
+
+ mutex_lock(&buffer->lock);
+ list_del(&a->list);
+ mutex_unlock(&buffer->lock);
+
+ sg_free_table(&a->table);
+ kfree(a);
+}
+
+static struct sg_table *coherent_heap_map_dma_buf(struct dma_buf_attachment *attachment,
+ enum dma_data_direction direction)
+{
+ struct dma_heap_attachment *a = attachment->priv;
+ struct sg_table *table = &a->table;
+ int ret;
+
+ ret = dma_map_sgtable(attachment->dev, table, direction, 0);
+ if (ret)
+ return ERR_PTR(-ENOMEM);
+ a->mapped = true;
+
+ return table;
+}
+
+static void coherent_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
+ struct sg_table *table,
+ enum dma_data_direction direction)
+{
+ struct dma_heap_attachment *a = attachment->priv;
+
+ a->mapped = false;
+ dma_unmap_sgtable(attachment->dev, table, direction, 0);
+}
+
+static int coherent_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+ enum dma_data_direction direction)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a;
+
+ mutex_lock(&buffer->lock);
+ if (buffer->vmap_cnt)
+ invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+ list_for_each_entry(a, &buffer->attachments, list) {
+ if (!a->mapped)
+ continue;
+ dma_sync_sgtable_for_cpu(a->dev, &a->table, direction);
+ }
+ mutex_unlock(&buffer->lock);
+
+ return 0;
+}
+
+static int coherent_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
+ enum dma_data_direction direction)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a;
+
+ mutex_lock(&buffer->lock);
+ if (buffer->vmap_cnt)
+ flush_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+ list_for_each_entry(a, &buffer->attachments, list) {
+ if (!a->mapped)
+ continue;
+ dma_sync_sgtable_for_device(a->dev, &a->table, direction);
+ }
+ mutex_unlock(&buffer->lock);
+
+ return 0;
+}
+
+static int coherent_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct coherent_heap *coh_heap = buffer->heap;
+ struct device *heap_dev = dma_heap_get_dev(coh_heap->heap);
+
+ return dma_mmap_coherent(heap_dev, vma, buffer->alloc_vaddr,
+ buffer->dma_addr, buffer->len);
+}
+
+static void *coherent_heap_do_vmap(struct coherent_heap_buffer *buffer)
+{
+ void *vaddr;
+
+ vaddr = vmap(buffer->pages, buffer->pagecount, VM_MAP, PAGE_KERNEL);
+ if (!vaddr)
+ return ERR_PTR(-ENOMEM);
+
+ return vaddr;
+}
+
+static int coherent_heap_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ void *vaddr;
+ int ret = 0;
+
+ mutex_lock(&buffer->lock);
+ if (buffer->vmap_cnt) {
+ buffer->vmap_cnt++;
+ iosys_map_set_vaddr(map, buffer->vaddr);
+ goto out;
+ }
+
+ vaddr = coherent_heap_do_vmap(buffer);
+ if (IS_ERR(vaddr)) {
+ ret = PTR_ERR(vaddr);
+ goto out;
+ }
+
+ buffer->vaddr = vaddr;
+ buffer->vmap_cnt++;
+ iosys_map_set_vaddr(map, buffer->vaddr);
+out:
+ mutex_unlock(&buffer->lock);
+
+ return ret;
+}
+
+static void coherent_heap_vunmap(struct dma_buf *dmabuf, struct iosys_map *map)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+
+ mutex_lock(&buffer->lock);
+ if (!--buffer->vmap_cnt) {
+ vunmap(buffer->vaddr);
+ buffer->vaddr = NULL;
+ }
+ mutex_unlock(&buffer->lock);
+ iosys_map_clear(map);
+}
+
+static void coherent_heap_dma_buf_release(struct dma_buf *dmabuf)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct coherent_heap *coh_heap = buffer->heap;
+ struct device *heap_dev = dma_heap_get_dev(coh_heap->heap);
+
+ if (buffer->vmap_cnt > 0) {
+ WARN(1, "%s: buffer still mapped in the kernel\n", __func__);
+ vunmap(buffer->vaddr);
+ buffer->vaddr = NULL;
+ buffer->vmap_cnt = 0;
+ }
+
+ if (buffer->alloc_vaddr)
+ dma_free_coherent(heap_dev, buffer->len, buffer->alloc_vaddr,
+ buffer->dma_addr);
+ kfree(buffer->pages);
+ kfree(buffer);
+}
+
+static const struct dma_buf_ops coherent_heap_buf_ops = {
+ .attach = coherent_heap_attach,
+ .detach = coherent_heap_detach,
+ .map_dma_buf = coherent_heap_map_dma_buf,
+ .unmap_dma_buf = coherent_heap_unmap_dma_buf,
+ .begin_cpu_access = coherent_heap_dma_buf_begin_cpu_access,
+ .end_cpu_access = coherent_heap_dma_buf_end_cpu_access,
+ .mmap = coherent_heap_mmap,
+ .vmap = coherent_heap_vmap,
+ .vunmap = coherent_heap_vunmap,
+ .release = coherent_heap_dma_buf_release,
+};
+
+static struct dma_buf *coherent_heap_allocate(struct dma_heap *heap,
+ unsigned long len,
+ u32 fd_flags,
+ u64 heap_flags)
+{
+ struct coherent_heap *coh_heap;
+ struct coherent_heap_buffer *buffer;
+ struct device *heap_dev;
+ DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+ size_t size = PAGE_ALIGN(len);
+ pgoff_t pagecount = size >> PAGE_SHIFT;
+ struct dma_buf *dmabuf;
+ int ret = -ENOMEM;
+ pgoff_t pg;
+
+ coh_heap = dma_heap_get_drvdata(heap);
+ if (!coh_heap)
+ return ERR_PTR(-EINVAL);
+
+ heap_dev = dma_heap_get_dev(coh_heap->heap);
+ if (!heap_dev)
+ return ERR_PTR(-ENODEV);
+
+ buffer = kzalloc_obj(*buffer);
+ if (!buffer)
+ return ERR_PTR(-ENOMEM);
+
+ INIT_LIST_HEAD(&buffer->attachments);
+ mutex_init(&buffer->lock);
+ buffer->len = size;
+ buffer->heap = coh_heap;
+ buffer->pagecount = pagecount;
+
+ buffer->alloc_vaddr = dma_alloc_coherent(heap_dev, buffer->len,
+ &buffer->dma_addr, GFP_KERNEL);
+ if (!buffer->alloc_vaddr) {
+ ret = -ENOMEM;
+ goto free_buffer;
+ }
+
+ buffer->pages = kmalloc_array(pagecount, sizeof(*buffer->pages),
+ GFP_KERNEL);
+ if (!buffer->pages) {
+ ret = -ENOMEM;
+ goto free_dma;
+ }
+
+ for (pg = 0; pg < pagecount; pg++)
+ buffer->pages[pg] = virt_to_page((char *)buffer->alloc_vaddr +
+ (pg * PAGE_SIZE));
+
+ /* create the dmabuf */
+ exp_info.exp_name = dma_heap_get_name(heap);
+ exp_info.ops = &coherent_heap_buf_ops;
+ exp_info.size = buffer->len;
+ exp_info.flags = fd_flags;
+ exp_info.priv = buffer;
+ dmabuf = dma_buf_export(&exp_info);
+ if (IS_ERR(dmabuf)) {
+ ret = PTR_ERR(dmabuf);
+ goto free_pages;
+ }
+ return dmabuf;
+
+free_pages:
+ kfree(buffer->pages);
+free_dma:
+ dma_free_coherent(heap_dev, buffer->len, buffer->alloc_vaddr,
+ buffer->dma_addr);
+free_buffer:
+ kfree(buffer);
+ return ERR_PTR(ret);
+}
+
+static const struct dma_heap_ops coherent_heap_ops = {
+ .allocate = coherent_heap_allocate,
+};
+
+static int coherent_heap_init_dma_mask(struct device *dev)
+{
+ int ret;
+
+ ret = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(64));
+ if (!ret)
+ return 0;
+
+ /* Fallback to 32-bit DMA mask */
+ return dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(32));
+}
+
+static int __coherent_heap_register(struct reserved_mem *rmem)
+{
+ struct dma_heap_export_info exp_info;
+ struct coherent_heap *coh_heap;
+ struct device *heap_dev;
+ int ret;
+
+ if (!rmem || !rmem->name)
+ return -EINVAL;
+
+ coh_heap = kzalloc_obj(*coh_heap);
+ if (!coh_heap)
+ return -ENOMEM;
+
+ coh_heap->rmem = rmem;
+ coh_heap->name = kstrdup(rmem->name, GFP_KERNEL);
+ if (!coh_heap->name) {
+ ret = -ENOMEM;
+ goto free_coherent_heap;
+ }
+
+ exp_info.name = coh_heap->name;
+ exp_info.ops = &coherent_heap_ops;
+ exp_info.priv = coh_heap;
+
+ coh_heap->heap = dma_heap_create(&exp_info);
+ if (IS_ERR(coh_heap->heap)) {
+ ret = PTR_ERR(coh_heap->heap);
+ goto free_name;
+ }
+
+ heap_dev = dma_heap_get_dev(coh_heap->heap);
+ ret = coherent_heap_init_dma_mask(heap_dev);
+ if (ret) {
+ pr_err("coherent_heap: failed to set DMA mask (%d)\n", ret);
+ goto destroy_heap;
+ }
+
+ ret = of_reserved_mem_device_init_with_mem(heap_dev, rmem);
+ if (ret) {
+ pr_err("coherent_heap: failed to initialize memory (%d)\n", ret);
+ goto destroy_heap;
+ }
+
+ ret = dma_heap_register(coh_heap->heap);
+ if (ret) {
+ pr_err("coherent_heap: failed to register heap (%d)\n", ret);
+ goto destroy_heap;
+ }
+
+ return 0;
+
+destroy_heap:
+ dma_heap_destroy(coh_heap->heap);
+ coh_heap->heap = NULL;
+free_name:
+ kfree(coh_heap->name);
+free_coherent_heap:
+ kfree(coh_heap);
+
+ return ret;
+}
+
+static int __init coherent_heap_register(void)
+{
+ struct reserved_mem *rmem;
+ unsigned int i;
+ int ret;
+
+ for (i = 0; (rmem = dma_coherent_get_reserved_region(i)) != NULL; i++) {
+ ret = __coherent_heap_register(rmem);
+ if (ret) {
+ pr_warn("Failed to add coherent heap %s (%d)\n",
+ rmem->name ? rmem->name : "unknown", ret);
+ continue;
+ }
+ }
+
+ return 0;
+}
+module_init(coherent_heap_register);
+MODULE_DESCRIPTION("DMA-BUF heap for coherent reserved-memory regions");
diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
index 1b0ea43ba66c3..77e6cb66ffce1 100644
--- a/include/linux/dma-heap.h
+++ b/include/linux/dma-heap.h
@@ -9,10 +9,12 @@
#ifndef _DMA_HEAPS_H
#define _DMA_HEAPS_H
+#include <linux/errno.h>
#include <linux/types.h>
struct dma_heap;
struct device;
+struct reserved_mem;
/**
* struct dma_heap_ops - ops to operate on a given heap
@@ -53,4 +55,13 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info);
extern bool mem_accounting;
+#if IS_ENABLED(CONFIG_DMABUF_HEAPS_COHERENT)
+int dma_heap_coherent_register(struct reserved_mem *rmem);
+#else
+static inline int dma_heap_coherent_register(struct reserved_mem *rmem)
+{
+ return -EOPNOTSUPP;
+}
+#endif
+
#endif /* _DMA_HEAPS_H */
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 60b63756df821..c87e5e44e5383 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -12,6 +12,7 @@
struct cma;
struct iommu_ops;
+struct reserved_mem;
struct dma_map_ops {
void *(*alloc)(struct device *dev, size_t size,
@@ -161,6 +162,7 @@ int dma_alloc_from_dev_coherent(struct device *dev, ssize_t size,
int dma_release_from_dev_coherent(struct device *dev, int order, void *vaddr);
int dma_mmap_from_dev_coherent(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, size_t size, int *ret);
+struct reserved_mem *dma_coherent_get_reserved_region(unsigned int idx);
#else
static inline int dma_declare_coherent_memory(struct device *dev,
phys_addr_t phys_addr, dma_addr_t device_addr, size_t size)
@@ -172,6 +174,11 @@ static inline int dma_declare_coherent_memory(struct device *dev,
#define dma_release_from_dev_coherent(dev, order, vaddr) (0)
#define dma_mmap_from_dev_coherent(dev, vma, vaddr, order, ret) (0)
static inline void dma_release_coherent_memory(struct device *dev) { }
+static inline
+struct reserved_mem *dma_coherent_get_reserved_region(unsigned int idx)
+{
+ return NULL;
+}
#endif /* CONFIG_DMA_DECLARE_COHERENT */
#ifdef CONFIG_DMA_GLOBAL_POOL
--
2.52.0
^ permalink raw reply related [flat|nested] 22+ messages in thread
* Re: [PATCH v2 4/6] dma-buf: heaps: Add Coherent heap to dmabuf heaps
2026-03-03 12:33 ` [PATCH v2 4/6] dma-buf: heaps: Add Coherent heap to dmabuf heaps Albert Esteve
@ 2026-03-03 13:20 ` Maxime Ripard
2026-03-03 14:47 ` Albert Esteve
2026-03-03 21:19 ` Claude review: " Claude Code Review Bot
1 sibling, 1 reply; 22+ messages in thread
From: Maxime Ripard @ 2026-03-03 13:20 UTC (permalink / raw)
To: Albert Esteve
Cc: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan, linux-kernel,
linux-media, dri-devel, linaro-mm-sig, iommu, devicetree,
echanude
[-- Attachment #1: Type: text/plain, Size: 6033 bytes --]
Hi,
On Tue, Mar 03, 2026 at 01:33:47PM +0100, Albert Esteve wrote:
> Add a dma-buf heap for DT coherent reserved-memory
> (i.e., 'shared-dma-pool' without 'reusable' property),
> exposing one heap per region for userspace buffers.
>
> The heap binds the heap device to each memory region so
> coherent allocations use the correct dev->dma_mem, and
> it defers registration until module_init when normal
> allocators are available.
>
> Signed-off-by: Albert Esteve <aesteve@redhat.com>
> ---
> drivers/dma-buf/dma-heap.c | 4 +-
> drivers/dma-buf/heaps/Kconfig | 9 +
> drivers/dma-buf/heaps/Makefile | 1 +
> drivers/dma-buf/heaps/coherent_heap.c | 426 ++++++++++++++++++++++++++++++++++
> include/linux/dma-heap.h | 11 +
> include/linux/dma-map-ops.h | 7 +
> 6 files changed, 456 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
> index 88189d4e48561..ba87e5ac16ae2 100644
> --- a/drivers/dma-buf/dma-heap.c
> +++ b/drivers/dma-buf/dma-heap.c
> @@ -390,8 +390,8 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
>
> heap = dma_heap_create(exp_info);
> if (IS_ERR(heap)) {
> - pr_err("dma_heap: failed to create heap (%d)\n", PTR_ERR(heap));
> - return PTR_ERR(heap);
> + pr_err("dma_heap: failed to create heap (%ld)\n", PTR_ERR(heap));
> + return ERR_CAST(heap);
This looks unrelated and should possibly be squashed into the previous
patch that introduces dma_heap_create()?
> +static int coherent_heap_init_dma_mask(struct device *dev)
> +{
> + int ret;
> +
> + ret = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(64));
> + if (!ret)
> + return 0;
> +
> + /* Fallback to 32-bit DMA mask */
> + return dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(32));
> +}
Why do you need to mess with the DMA mask? I'd expect that device to be
able to access everything.
> +static int __coherent_heap_register(struct reserved_mem *rmem)
> +{
> + struct dma_heap_export_info exp_info;
> + struct coherent_heap *coh_heap;
> + struct device *heap_dev;
> + int ret;
> +
> + if (!rmem || !rmem->name)
> + return -EINVAL;
> +
> + coh_heap = kzalloc_obj(*coh_heap);
> + if (!coh_heap)
> + return -ENOMEM;
> +
> + coh_heap->rmem = rmem;
> + coh_heap->name = kstrdup(rmem->name, GFP_KERNEL);
> + if (!coh_heap->name) {
> + ret = -ENOMEM;
> + goto free_coherent_heap;
> + }
> +
> + exp_info.name = coh_heap->name;
> + exp_info.ops = &coherent_heap_ops;
> + exp_info.priv = coh_heap;
> +
> + coh_heap->heap = dma_heap_create(&exp_info);
> + if (IS_ERR(coh_heap->heap)) {
> + ret = PTR_ERR(coh_heap->heap);
> + goto free_name;
> + }
> +
> + heap_dev = dma_heap_get_dev(coh_heap->heap);
> + ret = coherent_heap_init_dma_mask(heap_dev);
> + if (ret) {
> + pr_err("coherent_heap: failed to set DMA mask (%d)\n", ret);
> + goto destroy_heap;
> + }
> +
> + ret = of_reserved_mem_device_init_with_mem(heap_dev, rmem);
> + if (ret) {
> + pr_err("coherent_heap: failed to initialize memory (%d)\n", ret);
> + goto destroy_heap;
> + }
> +
> + ret = dma_heap_register(coh_heap->heap);
> + if (ret) {
> + pr_err("coherent_heap: failed to register heap (%d)\n", ret);
> + goto destroy_heap;
> + }
I guess it's more of a comment about your previous patch, but it's not
clear to me why you needed to split dma_heap_add into dma_heap_create /
_register. Can you expand a bit?
> diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
> index 1b0ea43ba66c3..77e6cb66ffce1 100644
> --- a/include/linux/dma-heap.h
> +++ b/include/linux/dma-heap.h
> @@ -9,10 +9,12 @@
> #ifndef _DMA_HEAPS_H
> #define _DMA_HEAPS_H
>
> +#include <linux/errno.h>
> #include <linux/types.h>
>
> struct dma_heap;
> struct device;
> +struct reserved_mem;
>
> /**
> * struct dma_heap_ops - ops to operate on a given heap
> @@ -53,4 +55,13 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info);
>
> extern bool mem_accounting;
>
> +#if IS_ENABLED(CONFIG_DMABUF_HEAPS_COHERENT)
> +int dma_heap_coherent_register(struct reserved_mem *rmem);
> +#else
> +static inline int dma_heap_coherent_register(struct reserved_mem *rmem)
> +{
> + return -EOPNOTSUPP;
> +}
> +#endif
> +
> #endif /* _DMA_HEAPS_H */
Do you still need that now that you switched to an iterator-like
function?
> diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
> index 60b63756df821..c87e5e44e5383 100644
> --- a/include/linux/dma-map-ops.h
> +++ b/include/linux/dma-map-ops.h
> @@ -12,6 +12,7 @@
>
> struct cma;
> struct iommu_ops;
> +struct reserved_mem;
>
> struct dma_map_ops {
> void *(*alloc)(struct device *dev, size_t size,
> @@ -161,6 +162,7 @@ int dma_alloc_from_dev_coherent(struct device *dev, ssize_t size,
> int dma_release_from_dev_coherent(struct device *dev, int order, void *vaddr);
> int dma_mmap_from_dev_coherent(struct device *dev, struct vm_area_struct *vma,
> void *cpu_addr, size_t size, int *ret);
> +struct reserved_mem *dma_coherent_get_reserved_region(unsigned int idx);
> #else
> static inline int dma_declare_coherent_memory(struct device *dev,
> phys_addr_t phys_addr, dma_addr_t device_addr, size_t size)
> @@ -172,6 +174,11 @@ static inline int dma_declare_coherent_memory(struct device *dev,
> #define dma_release_from_dev_coherent(dev, order, vaddr) (0)
> #define dma_mmap_from_dev_coherent(dev, vma, vaddr, order, ret) (0)
> static inline void dma_release_coherent_memory(struct device *dev) { }
> +static inline
> +struct reserved_mem *dma_coherent_get_reserved_region(unsigned int idx)
> +{
> + return NULL;
> +}
> #endif /* CONFIG_DMA_DECLARE_COHERENT */
>
> #ifdef CONFIG_DMA_GLOBAL_POOL
To preserve bisectability, you shouldn't do it that way. Introduce this
function into a preliminary patch, and then use it in this one.
Maxime
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 273 bytes --]
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH v2 4/6] dma-buf: heaps: Add Coherent heap to dmabuf heaps
2026-03-03 13:20 ` Maxime Ripard
@ 2026-03-03 14:47 ` Albert Esteve
0 siblings, 0 replies; 22+ messages in thread
From: Albert Esteve @ 2026-03-03 14:47 UTC (permalink / raw)
To: Maxime Ripard
Cc: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan, linux-kernel,
linux-media, dri-devel, linaro-mm-sig, iommu, devicetree,
echanude
On Tue, Mar 3, 2026 at 2:20 PM Maxime Ripard <mripard@redhat.com> wrote:
>
> Hi,
>
> On Tue, Mar 03, 2026 at 01:33:47PM +0100, Albert Esteve wrote:
> > Add a dma-buf heap for DT coherent reserved-memory
> > (i.e., 'shared-dma-pool' without 'reusable' property),
> > exposing one heap per region for userspace buffers.
> >
> > The heap binds the heap device to each memory region so
> > coherent allocations use the correct dev->dma_mem, and
> > it defers registration until module_init when normal
> > allocators are available.
> >
> > Signed-off-by: Albert Esteve <aesteve@redhat.com>
> > ---
> > drivers/dma-buf/dma-heap.c | 4 +-
> > drivers/dma-buf/heaps/Kconfig | 9 +
> > drivers/dma-buf/heaps/Makefile | 1 +
> > drivers/dma-buf/heaps/coherent_heap.c | 426 ++++++++++++++++++++++++++++++++++
> > include/linux/dma-heap.h | 11 +
> > include/linux/dma-map-ops.h | 7 +
> > 6 files changed, 456 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
> > index 88189d4e48561..ba87e5ac16ae2 100644
> > --- a/drivers/dma-buf/dma-heap.c
> > +++ b/drivers/dma-buf/dma-heap.c
> > @@ -390,8 +390,8 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
> >
> > heap = dma_heap_create(exp_info);
> > if (IS_ERR(heap)) {
> > - pr_err("dma_heap: failed to create heap (%d)\n", PTR_ERR(heap));
> > - return PTR_ERR(heap);
> > + pr_err("dma_heap: failed to create heap (%ld)\n", PTR_ERR(heap));
> > + return ERR_CAST(heap);
>
> This looks unrelated and should possibly be squashed into the previous
> patch that introduces dma_heap_create()?
>
> > +static int coherent_heap_init_dma_mask(struct device *dev)
> > +{
> > + int ret;
> > +
> > + ret = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(64));
> > + if (!ret)
> > + return 0;
> > +
> > + /* Fallback to 32-bit DMA mask */
> > + return dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(32));
> > +}
>
> Why do you need to mess with the DMA mask? I'd expect that device to be
> able to access everything.
When I tested, I was getting "reserved memory is beyond device's set
DMA address range", so I checked whether it was fixed with
dma_coerce_mask_and_coherent() and/or dma_set_mask_coherent(). I did
not debug the value of coherent_dma_mask, but given the error I assume
it was not set properly. Ultimately, using the 64-bit mask fixed it,
and I added a 32-bit fallback to ensure support for 32-bit systems.
>
> > +static int __coherent_heap_register(struct reserved_mem *rmem)
> > +{
> > + struct dma_heap_export_info exp_info;
> > + struct coherent_heap *coh_heap;
> > + struct device *heap_dev;
> > + int ret;
> > +
> > + if (!rmem || !rmem->name)
> > + return -EINVAL;
> > +
> > + coh_heap = kzalloc_obj(*coh_heap);
> > + if (!coh_heap)
> > + return -ENOMEM;
> > +
> > + coh_heap->rmem = rmem;
> > + coh_heap->name = kstrdup(rmem->name, GFP_KERNEL);
> > + if (!coh_heap->name) {
> > + ret = -ENOMEM;
> > + goto free_coherent_heap;
> > + }
> > +
> > + exp_info.name = coh_heap->name;
> > + exp_info.ops = &coherent_heap_ops;
> > + exp_info.priv = coh_heap;
> > +
> > + coh_heap->heap = dma_heap_create(&exp_info);
> > + if (IS_ERR(coh_heap->heap)) {
> > + ret = PTR_ERR(coh_heap->heap);
> > + goto free_name;
> > + }
> > +
> > + heap_dev = dma_heap_get_dev(coh_heap->heap);
> > + ret = coherent_heap_init_dma_mask(heap_dev);
> > + if (ret) {
> > + pr_err("coherent_heap: failed to set DMA mask (%d)\n", ret);
> > + goto destroy_heap;
> > + }
> > +
> > + ret = of_reserved_mem_device_init_with_mem(heap_dev, rmem);
> > + if (ret) {
> > + pr_err("coherent_heap: failed to initialize memory (%d)\n", ret);
> > + goto destroy_heap;
> > + }
> > +
> > + ret = dma_heap_register(coh_heap->heap);
> > + if (ret) {
> > + pr_err("coherent_heap: failed to register heap (%d)\n", ret);
> > + goto destroy_heap;
> > + }
>
> I guess it's more of a comment about your previous patch, but it's not
> clear to me why you needed to split dma_heap_add into dma_heap_create /
> _register. Can you expand a bit?
So first I tried to just use dma_heap_add() and then use the heap_dev
afterward to call of_reserved_mem_device_init_with_mem(), but if that
call failed, the error path required some kind of dma_heap_remove()
function, as the heap was already registered by then.
In the CMA heap, for example, dma_heap_add() is invoked at the end of
the `init` function, so you do not have this issue: if it fails, the
heap was not added and you just need to clean up everything else.
However, performing a remove() does not sound like something that can
be done safely. I've spent some time thinking about alternatives, but
splitting felt like the best pattern.
This way I can:
1. Create the device
2. Call of_reserved_mem_device_init_with_mem
3. Register the heap
This places registration at the end, making every error path and
cleanup easy to handle.
Also, the `dma_heap_add()` code already seemed to handle these two
parts/phases implicitly with device_create(), so splitting felt
architecturally sound.
>
> > diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
> > index 1b0ea43ba66c3..77e6cb66ffce1 100644
> > --- a/include/linux/dma-heap.h
> > +++ b/include/linux/dma-heap.h
> > @@ -9,10 +9,12 @@
> > #ifndef _DMA_HEAPS_H
> > #define _DMA_HEAPS_H
> >
> > +#include <linux/errno.h>
> > #include <linux/types.h>
> >
> > struct dma_heap;
> > struct device;
> > +struct reserved_mem;
> >
> > /**
> > * struct dma_heap_ops - ops to operate on a given heap
> > @@ -53,4 +55,13 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info);
> >
> > extern bool mem_accounting;
> >
> > +#if IS_ENABLED(CONFIG_DMABUF_HEAPS_COHERENT)
> > +int dma_heap_coherent_register(struct reserved_mem *rmem);
> > +#else
> > +static inline int dma_heap_coherent_register(struct reserved_mem *rmem)
> > +{
> > + return -EOPNOTSUPP;
> > +}
> > +#endif
> > +
> > #endif /* _DMA_HEAPS_H */
>
> Do you still need that now that you switched to an iterator-like
> function?
>
> > diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
> > index 60b63756df821..c87e5e44e5383 100644
> > --- a/include/linux/dma-map-ops.h
> > +++ b/include/linux/dma-map-ops.h
> > @@ -12,6 +12,7 @@
> >
> > struct cma;
> > struct iommu_ops;
> > +struct reserved_mem;
> >
> > struct dma_map_ops {
> > void *(*alloc)(struct device *dev, size_t size,
> > @@ -161,6 +162,7 @@ int dma_alloc_from_dev_coherent(struct device *dev, ssize_t size,
> > int dma_release_from_dev_coherent(struct device *dev, int order, void *vaddr);
> > int dma_mmap_from_dev_coherent(struct device *dev, struct vm_area_struct *vma,
> > void *cpu_addr, size_t size, int *ret);
> > +struct reserved_mem *dma_coherent_get_reserved_region(unsigned int idx);
> > #else
> > static inline int dma_declare_coherent_memory(struct device *dev,
> > phys_addr_t phys_addr, dma_addr_t device_addr, size_t size)
> > @@ -172,6 +174,11 @@ static inline int dma_declare_coherent_memory(struct device *dev,
> > #define dma_release_from_dev_coherent(dev, order, vaddr) (0)
> > #define dma_mmap_from_dev_coherent(dev, vma, vaddr, order, ret) (0)
> > static inline void dma_release_coherent_memory(struct device *dev) { }
> > +static inline
> > +struct reserved_mem *dma_coherent_get_reserved_region(unsigned int idx)
> > +{
> > + return NULL;
> > +}
> > #endif /* CONFIG_DMA_DECLARE_COHERENT */
> >
> > #ifdef CONFIG_DMA_GLOBAL_POOL
>
> To preserve bisectability, you shouldn't do it that way. Introduce this
> function into a preliminary patch, and then use it in this one.
>
> Maxime
^ permalink raw reply [flat|nested] 22+ messages in thread
* Claude review: dma-buf: heaps: Add Coherent heap to dmabuf heaps
2026-03-03 12:33 ` [PATCH v2 4/6] dma-buf: heaps: Add Coherent heap to dmabuf heaps Albert Esteve
2026-03-03 13:20 ` Maxime Ripard
@ 2026-03-03 21:19 ` Claude Code Review Bot
1 sibling, 0 replies; 22+ messages in thread
From: Claude Code Review Bot @ 2026-03-03 21:19 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
The main patch adds a 426-line new file, `coherent_heap.c`, which is the core of the series.
**Issues:**
1. **`virt_to_page()` on DMA coherent memory is not universally safe.** The allocation path does:
```c
+ buffer->alloc_vaddr = dma_alloc_coherent(heap_dev, buffer->len,
+ &buffer->dma_addr, GFP_KERNEL);
...
+ for (pg = 0; pg < pagecount; pg++)
+ buffer->pages[pg] = virt_to_page((char *)buffer->alloc_vaddr +
+ (pg * PAGE_SIZE));
```
`dma_alloc_coherent()` may return memory from vmalloc space (especially on architectures that need uncached mappings), in which case `virt_to_page()` is incorrect -- it only works for direct-mapped (lowmem) kernel addresses. On ARM with `no-map` regions (which these coherent regions typically use), the return from `dma_alloc_coherent()` goes through `ioremap()` / `memremap()` paths, making `virt_to_page()` invalid. This could lead to returning bogus page structs. Consider using `vmalloc_to_page()` or `pfn_to_page(PHYS_PFN(dma_to_phys(dev, dma_addr)))` instead (similar to how some other drivers handle this).
2. **`begin_cpu_access` / `end_cpu_access` syncs are unnecessary for coherent memory.** The whole point of coherent DMA memory is that it is cache-coherent and doesn't need explicit sync. The implementations:
```c
+static int coherent_heap_dma_buf_begin_cpu_access(...)
+{
+ ...
+ list_for_each_entry(a, &buffer->attachments, list) {
+ if (!a->mapped)
+ continue;
+ dma_sync_sgtable_for_cpu(a->dev, &a->table, direction);
+ }
```
These `dma_sync_sgtable_for_cpu/device` calls should be no-ops for coherent memory, but iterating all attachments under a mutex for a no-op is wasteful. The `invalidate_kernel_vmap_range` / `flush_kernel_vmap_range` calls are also questionable since the vmap in `coherent_heap_do_vmap()` uses `PAGE_KERNEL` (cached), which could actually conflict with the uncached mapping from `dma_alloc_coherent()`. This is a potential aliasing problem -- having both a cached vmap and an uncached DMA mapping to the same physical pages will cause coherency issues on architectures without hardware coherence.
3. **Exported but unused `dma_heap_coherent_register()`.** The header adds:
```c
+#if IS_ENABLED(CONFIG_DMABUF_HEAPS_COHERENT)
+int dma_heap_coherent_register(struct reserved_mem *rmem);
+#else
+static inline int dma_heap_coherent_register(struct reserved_mem *rmem)
+...
```
But no implementation of `dma_heap_coherent_register()` exists in this patch. The actual function is `__coherent_heap_register()` (static). This declaration is never used by patch 5 either (which calls `dma_coherent_get_reserved_region()` instead). This appears to be dead code left over from a previous iteration.
4. **`dma_coerce_mask_and_coherent()` usage.** The function:
```c
+static int coherent_heap_init_dma_mask(struct device *dev)
+{
+ ret = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(64));
```
`dma_coerce_mask_and_coherent()` is documented as being for platform code / bus code only, not for drivers. The DMA mask should match the actual memory region. Since the reserved region has a known physical address and size, the mask should be derived from `rmem->base + rmem->size - 1` to ensure correctness.
5. **`coherent_heap_map_dma_buf` loses the actual error code:**
```c
+ ret = dma_map_sgtable(attachment->dev, table, direction, 0);
+ if (ret)
+ return ERR_PTR(-ENOMEM);
```
The actual error from `dma_map_sgtable()` is discarded and replaced with `-ENOMEM`. Should be `return ERR_PTR(ret)`.
6. **The heap name comes from `rmem->name`** which is the DT node name (e.g., `"framebuffer@80000000"`). This could contain characters that are invalid for device node names. No sanitization is done.
7. **No `MODULE_LICENSE` in this patch** -- the module_init is added but `MODULE_LICENSE("GPL")` is deferred to patch 6. This means this patch doesn't compile as a module in isolation (though patch 4's Kconfig has it as `bool`, so it only matters for bisect-then-change-Kconfig scenarios).
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v2 5/6] dma: coherent: register to coherent heap
2026-03-03 12:33 [PATCH v2 0/6] dma-buf: heaps: add coherent reserved-memory heap Albert Esteve
` (3 preceding siblings ...)
2026-03-03 12:33 ` [PATCH v2 4/6] dma-buf: heaps: Add Coherent heap to dmabuf heaps Albert Esteve
@ 2026-03-03 12:33 ` Albert Esteve
2026-03-03 21:19 ` Claude review: " Claude Code Review Bot
2026-03-03 12:33 ` [PATCH v2 6/6] dma-buf: heaps: coherent: Turn heap into a module Albert Esteve
` (2 subsequent siblings)
7 siblings, 1 reply; 22+ messages in thread
From: Albert Esteve @ 2026-03-03 12:33 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan
Cc: linux-kernel, linux-media, dri-devel, linaro-mm-sig, iommu,
devicetree, Albert Esteve, echanude, mripard
Add dma_heap_coherent_register() call within
reserved memory DMA setup logic for non-reusable
DT nodes.
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
kernel/dma/coherent.c | 34 ++++++++++++++++++++++++++++++++++
1 file changed, 34 insertions(+)
diff --git a/kernel/dma/coherent.c b/kernel/dma/coherent.c
index 1147497bc512c..d0d0979ffb153 100644
--- a/kernel/dma/coherent.c
+++ b/kernel/dma/coherent.c
@@ -9,6 +9,7 @@
#include <linux/module.h>
#include <linux/dma-direct.h>
#include <linux/dma-map-ops.h>
+#include <linux/dma-heap.h>
struct dma_coherent_mem {
void *virt_base;
@@ -334,6 +335,31 @@ static phys_addr_t dma_reserved_default_memory_base __initdata;
static phys_addr_t dma_reserved_default_memory_size __initdata;
#endif
+#define MAX_COHERENT_REGIONS 64
+
+static struct reserved_mem *rmem_coherent_areas[MAX_COHERENT_REGIONS];
+static unsigned int rmem_coherent_areas_num;
+
+static int rmem_coherent_insert_area(struct reserved_mem *rmem)
+{
+ if (rmem_coherent_areas_num >= MAX_COHERENT_REGIONS) {
+ pr_warn("Deferred heap areas list full, dropping %s\n",
+ rmem->name ? rmem->name : "unknown");
+ return -EINVAL;
+ }
+ rmem_coherent_areas[rmem_coherent_areas_num++] = rmem;
+ return 0;
+}
+
+struct reserved_mem *dma_coherent_get_reserved_region(unsigned int idx)
+{
+ if (idx >= rmem_coherent_areas_num)
+ return NULL;
+
+ return rmem_coherent_areas[idx];
+}
+EXPORT_SYMBOL_GPL(dma_coherent_get_reserved_region);
+
static int rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev)
{
struct dma_coherent_mem *mem = rmem->priv;
@@ -393,6 +419,14 @@ static int __init rmem_dma_setup(struct reserved_mem *rmem)
rmem->ops = &rmem_dma_ops;
pr_info("Reserved memory: created DMA memory pool at %pa, size %ld MiB\n",
&rmem->base, (unsigned long)rmem->size / SZ_1M);
+
+ if (IS_ENABLED(CONFIG_DMABUF_HEAPS_COHERENT)) {
+ int ret = rmem_coherent_insert_area(rmem);
+
+ if (ret)
+ pr_warn("Reserved memory: failed to store coherent area for %s (%d)\n",
+ rmem->name ? rmem->name : "unknown", ret);
+ }
return 0;
}
--
2.52.0
^ permalink raw reply related [flat|nested] 22+ messages in thread
* Claude review: dma: coherent: register to coherent heap
2026-03-03 12:33 ` [PATCH v2 5/6] dma: coherent: register to coherent heap Albert Esteve
@ 2026-03-03 21:19 ` Claude Code Review Bot
0 siblings, 0 replies; 22+ messages in thread
From: Claude Code Review Bot @ 2026-03-03 21:19 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Adds a static array `rmem_coherent_areas[MAX_COHERENT_REGIONS]` in `kernel/dma/coherent.c` to record reserved regions during early boot, then exposes them via `dma_coherent_get_reserved_region()`.
**Issues:**
1. **Fixed-size static array of 64 entries is a design concern.** The `MAX_COHERENT_REGIONS` of 64 is hardcoded with no Kconfig tunability:
```c
+#define MAX_COHERENT_REGIONS 64
+static struct reserved_mem *rmem_coherent_areas[MAX_COHERENT_REGIONS];
```
This wastes memory when unused (most systems have 0-2 coherent regions) and is an arbitrary limit. A dynamic list or leveraging the existing reserved_mem infrastructure would be cleaner. This is similar to how `cma_heap.c` uses `MAX_CMA_AREAS`, but `MAX_CMA_AREAS` is a well-known kernel-wide constant, while `MAX_COHERENT_REGIONS` is newly invented here.
2. **`rmem_coherent_insert_area()` returns `-EINVAL` on overflow** but `-ENOSPC` or `-ENOMEM` would be more semantically correct.
3. **The array is not `__initdata`.** Unlike the CMA heap's `dma_areas`, which is `__initdata` (freed after init), `rmem_coherent_areas` persists forever. Since `dma_coherent_get_reserved_region()` is called from `module_init` (which can run after the `__init` sections are freed when the heap is built as a module), keeping the array around is actually necessary, but the permanent memory cost is worth noting.
4. **`IS_ENABLED(CONFIG_DMABUF_HEAPS_COHERENT)` guard in `rmem_dma_setup()`:**
```c
+ if (IS_ENABLED(CONFIG_DMABUF_HEAPS_COHERENT)) {
```
This is `__init` code. If the coherent heap is built as a module (`=m`), `IS_ENABLED()` returns true, so regions are collected. Good -- this is the correct behavior to support the modular case.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v2 6/6] dma-buf: heaps: coherent: Turn heap into a module
2026-03-03 12:33 [PATCH v2 0/6] dma-buf: heaps: add coherent reserved-memory heap Albert Esteve
` (4 preceding siblings ...)
2026-03-03 12:33 ` [PATCH v2 5/6] dma: coherent: register to coherent heap Albert Esteve
@ 2026-03-03 12:33 ` Albert Esteve
2026-03-03 21:19 ` Claude review: " Claude Code Review Bot
2026-03-03 20:55 ` [PATCH v2 0/6] dma-buf: heaps: add coherent reserved-memory heap John Stultz
2026-03-03 21:19 ` Claude review: " Claude Code Review Bot
7 siblings, 1 reply; 22+ messages in thread
From: Albert Esteve @ 2026-03-03 12:33 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan
Cc: linux-kernel, linux-media, dri-devel, linaro-mm-sig, iommu,
devicetree, Albert Esteve, echanude, mripard
Following the current effort to make the CMA heap a module,
we can do the same and turn the Coherent heap into a module
as well, by changing the Kconfig option into a tristate and
importing the proper dma-buf namespaces.
This heap cannot be unloaded (the same applies to the CMA
heap), since a large part of the infrastructure needed to
make unloading safe is still missing.
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
drivers/dma-buf/heaps/Kconfig | 2 +-
drivers/dma-buf/heaps/coherent_heap.c | 3 +++
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index aeb475e585048..2f84a1018b900 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -14,7 +14,7 @@ config DMABUF_HEAPS_CMA
regions, you should say Y here.
config DMABUF_HEAPS_COHERENT
- bool "DMA-BUF Coherent Reserved-Memory Heap"
+ tristate "DMA-BUF Coherent Reserved-Memory Heap"
depends on DMABUF_HEAPS && OF_RESERVED_MEM && DMA_DECLARE_COHERENT
help
Choose this option to enable coherent reserved-memory dma-buf heaps.
diff --git a/drivers/dma-buf/heaps/coherent_heap.c b/drivers/dma-buf/heaps/coherent_heap.c
index d033d737bb9df..cdf8efa6c1564 100644
--- a/drivers/dma-buf/heaps/coherent_heap.c
+++ b/drivers/dma-buf/heaps/coherent_heap.c
@@ -424,3 +424,6 @@ static int __init coherent_heap_register(void)
}
module_init(coherent_heap_register);
MODULE_DESCRIPTION("DMA-BUF heap for coherent reserved-memory regions");
+MODULE_LICENSE("GPL");
+MODULE_IMPORT_NS("DMA_BUF");
+MODULE_IMPORT_NS("DMA_BUF_HEAP");
--
2.52.0
^ permalink raw reply related [flat|nested] 22+ messages in thread
* Claude review: dma-buf: heaps: coherent: Turn heap into a module
2026-03-03 12:33 ` [PATCH v2 6/6] dma-buf: heaps: coherent: Turn heap into a module Albert Esteve
@ 2026-03-03 21:19 ` Claude Code Review Bot
0 siblings, 0 replies; 22+ messages in thread
From: Claude Code Review Bot @ 2026-03-03 21:19 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Changes Kconfig from `bool` to `tristate` and adds `MODULE_LICENSE`/`MODULE_IMPORT_NS`.
**Issues:**
1. **No module unload support.** The commit message says "This heap won't be able to unload", and the patch achieves this implicitly: no `module_exit` is defined, so the module cannot be removed (`rmmod` fails and `lsmod` reports it as `[permanent]`). This is acceptable, but a short comment would make the intent explicit.
2. **`MODULE_LICENSE("GPL")` should have been in patch 4** where `module_init()` was first added. The split makes patch 4 not compile as a standalone module (though it's `bool` in that patch).
3. **Overall looks fine** -- the namespace imports are correct.
---
**Summary of key concerns (by severity):**
- **High:** `virt_to_page()` on `dma_alloc_coherent()` memory is not portable (patch 4)
- **High:** Type error in `dma_heap_add()` returning `PTR_ERR()` as a pointer (patch 2, fixed in patch 4 but breaks bisect)
- **Medium:** Potential double-release in `dma_heap_register()` error path with `device_destroy()` + `dma_heap_destroy()` (patch 2)
- **Medium:** `kfree(heap->name)` in `dma_heap_dev_release()` is unsafe for string literal names from existing callers (patch 2)
- **Medium:** Cache aliasing between `vmap(PAGE_KERNEL)` and uncached DMA coherent mapping (patch 4)
- **Medium:** Dead `dma_heap_coherent_register()` declaration with no implementation (patch 4)
- **Low:** Refcount leak from `get_device()` in patch 1 (fixed by patch 2, but breaks bisect)
- **Low:** `dma_map_sgtable` error code discarded (patch 4)
- **Low:** `dma_coerce_mask_and_coherent()` misuse (patch 4)
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH v2 0/6] dma-buf: heaps: add coherent reserved-memory heap
2026-03-03 12:33 [PATCH v2 0/6] dma-buf: heaps: add coherent reserved-memory heap Albert Esteve
` (5 preceding siblings ...)
2026-03-03 12:33 ` [PATCH v2 6/6] dma-buf: heaps: coherent: Turn heap into a module Albert Esteve
@ 2026-03-03 20:55 ` John Stultz
2026-03-03 21:19 ` Claude review: " Claude Code Review Bot
7 siblings, 0 replies; 22+ messages in thread
From: John Stultz @ 2026-03-03 20:55 UTC (permalink / raw)
To: Albert Esteve
Cc: Sumit Semwal, Benjamin Gaignard, Brian Starkey, T.J. Mercier,
Christian König, Marek Szyprowski, Robin Murphy, Rob Herring,
Saravana Kannan, linux-kernel, linux-media, dri-devel,
linaro-mm-sig, iommu, devicetree, echanude, mripard, John Stultz
On Tue, Mar 3, 2026 at 4:34 AM Albert Esteve <aesteve@redhat.com> wrote:
>
> This patch introduces a new heap driver to expose DT non‑reusable
> "shared-dma-pool" coherent regions as dma-buf heaps, so userspace can
> allocate buffers from each reserved, named region.
Just a nit here: Might be good to provide some higher level context as
to why this is wanted, and what it enables.
Also, "shared-dma-pool" is also used for CMA regions, so it might be
unclear initially how this is different from the CMA heap (you do
mention non-reusable, but that's a pretty subtle detail).
Might be good to add some of the rationale to the patch adding the
heap implementation as well so it makes it into the git history.
thanks
-john
^ permalink raw reply [flat|nested] 22+ messages in thread
* Claude review: dma-buf: heaps: add coherent reserved-memory heap
2026-03-03 12:33 [PATCH v2 0/6] dma-buf: heaps: add coherent reserved-memory heap Albert Esteve
` (6 preceding siblings ...)
2026-03-03 20:55 ` [PATCH v2 0/6] dma-buf: heaps: add coherent reserved-memory heap John Stultz
@ 2026-03-03 21:19 ` Claude Code Review Bot
7 siblings, 0 replies; 22+ messages in thread
From: Claude Code Review Bot @ 2026-03-03 21:19 UTC (permalink / raw)
To: dri-devel-reviews
Overall Series Review
Subject: dma-buf: heaps: add coherent reserved-memory heap
Author: Albert Esteve <aesteve@redhat.com>
Patches: 12
Reviewed: 2026-03-04T07:19:19.962760
---
This v2 series adds a new dma-buf heap driver that exposes DT non-reusable "shared-dma-pool" coherent reserved-memory regions as dma-buf heaps. The series is logically organized: infrastructure changes first (patches 1-3), the core heap driver (patch 4), wiring it up (patch 5), and making it modular (patch 6). The overall design is reasonable -- the create/register split allows the heap device to be configured with the correct `dma_mem` before being exposed to userspace.
There are several issues worth addressing: a type-confusion bug in the refactored `dma_heap_add()`, a memory leak in the `of_reserved_mem` refactoring, questionable use of `virt_to_page()` on coherent DMA memory, redundant `begin_cpu_access`/`end_cpu_access` syncs for coherent memory, and some API design concerns around the fixed-size static array used to track regions.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v3 0/6] dma-buf: heaps: add coherent reserved-memory heap
@ 2026-03-06 10:36 Albert Esteve
2026-03-06 10:36 ` [PATCH v3 5/6] dma-buf: heaps: Add Coherent heap to dmabuf heaps Albert Esteve
0 siblings, 1 reply; 22+ messages in thread
From: Albert Esteve @ 2026-03-06 10:36 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan
Cc: linux-kernel, linux-media, dri-devel, linaro-mm-sig, iommu,
devicetree, Albert Esteve, mripard, echanude, John Stultz,
Maxime Ripard
This patch series adds a new dma-buf heap driver that exposes coherent,
non‑reusable reserved-memory regions as named heaps, so userspace can
explicitly allocate buffers from those device‑specific pools.
Motivation: we want cgroup accounting for all userspace‑visible buffer
allocations (DRM, v4l2, dma‑buf heaps, etc.). That’s hard to do when
drivers call dma_alloc_attrs() directly because the accounting controller
(memcg vs dmem) is ambiguous. The long‑term plan is to steer those paths
toward dma‑buf heaps, where each heap can unambiguously charge a single
controller. To reach that goal, we need a heap backend for each
dma_alloc_attrs() memory type. CMA and system heaps already exist;
coherent reserved‑memory was the missing piece, since many SoCs define
dedicated, device‑local coherent pools in DT under /reserved-memory using
"shared-dma-pool" with non‑reusable regions (i.e., not CMA) that are
carved out exclusively for coherent DMA and are currently only usable by
in‑kernel drivers.
Because these regions are device‑dependent, each heap instance binds a
heap device to its reserved‑mem region via a newly introduced helper
function -namely, of_reserved_mem_device_init_with_mem()- so coherent
allocations use the correct dev->dma_mem.
Charging to cgroups for these buffers is intentionally left out to keep
review focused on the new heap; I plan to follow up based on Eric’s [1]
and Maxime’s [2] work on dmem charging from userspace.
This series also makes the new heap driver modular, in line with the CMA
heap change in [3].
[1] https://lore.kernel.org/all/20260218-dmabuf-heap-cma-dmem-v2-0-b249886fb7b2@redhat.com/
[2] https://lore.kernel.org/all/20250310-dmem-cgroups-v1-0-2984c1bc9312@kernel.org/
[3] https://lore.kernel.org/all/20260303-dma-buf-heaps-as-modules-v3-0-24344812c707@kernel.org/
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
Changes in v3:
- Reorganized changesets among patches to ensure bisectability
- Removed unused dma_heap_coherent_register() leftover
- Removed fallback when setting mask in coherent heap dev, since
dma_set_mask() already truncates to supported masks
- Moved struct rmem_assigned_device (rd) logic to
of_reserved_mem_device_init_with_mem() to allow listing the device
- Link to v2: https://lore.kernel.org/r/20260303-b4-dmabuf-heap-coherent-rmem-v2-0-65a4653b3378@redhat.com
Changes in v2:
- Removed dmem charging parts
- Moved coherent heap registering logic to coherent.c
- Made heap device a member of struct dma_heap
- Split dma_heap_add logic into create/register, to be able to
access the stored heap device before registered.
- Avoid platform device in favour of heap device
- Added a wrapper to rmem device_init() op
- Switched from late_initcall() to module_init()
- Made the coherent heap driver modular
- Link to v1: https://lore.kernel.org/r/20260224-b4-dmabuf-heap-coherent-rmem-v1-1-dffef43298ac@redhat.com
---
Albert Esteve (5):
dma-buf: dma-heap: split dma_heap_add
of_reserved_mem: add a helper for rmem device_init op
dma: coherent: store reserved memory coherent regions
dma-buf: heaps: Add Coherent heap to dmabuf heaps
dma-buf: heaps: coherent: Turn heap into a module
John Stultz (1):
dma-buf: dma-heap: Keep track of the heap device struct
drivers/dma-buf/dma-heap.c | 138 +++++++++--
drivers/dma-buf/heaps/Kconfig | 9 +
drivers/dma-buf/heaps/Makefile | 1 +
drivers/dma-buf/heaps/coherent_heap.c | 417 ++++++++++++++++++++++++++++++++++
drivers/of/of_reserved_mem.c | 68 ++++--
include/linux/dma-heap.h | 5 +
include/linux/dma-map-ops.h | 7 +
include/linux/of_reserved_mem.h | 8 +
kernel/dma/coherent.c | 34 +++
9 files changed, 640 insertions(+), 47 deletions(-)
---
base-commit: 6de23f81a5e08be8fbf5e8d7e9febc72a5b5f27f
change-id: 20260223-b4-dmabuf-heap-coherent-rmem-91fd3926afe9
Best regards,
--
Albert Esteve <aesteve@redhat.com>
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v3 5/6] dma-buf: heaps: Add Coherent heap to dmabuf heaps
2026-03-06 10:36 [PATCH v3 0/6] " Albert Esteve
@ 2026-03-06 10:36 ` Albert Esteve
2026-03-08 23:02 ` Claude review: " Claude Code Review Bot
0 siblings, 1 reply; 22+ messages in thread
From: Albert Esteve @ 2026-03-06 10:36 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan
Cc: linux-kernel, linux-media, dri-devel, linaro-mm-sig, iommu,
devicetree, Albert Esteve, mripard, echanude
Expose DT coherent reserved-memory pools ("shared-dma-pool"
without "reusable") as dma-buf heaps, creating one heap per
region so userspace can allocate from the exact device-local
pool intended for coherent DMA.
This is a missing backend in the long-term effort to steer
userspace buffer allocations (DRM, v4l2, dma-buf heaps)
through heaps for clearer cgroup accounting. CMA and system
heaps already exist; non-reusable coherent reserved memory
did not.
The heap binds the heap device to each memory region so
coherent allocations use the correct dev->dma_mem, and
it defers registration until module_init when normal
allocators are available.
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
drivers/dma-buf/heaps/Kconfig | 9 +
drivers/dma-buf/heaps/Makefile | 1 +
drivers/dma-buf/heaps/coherent_heap.c | 414 ++++++++++++++++++++++++++++++++++
3 files changed, 424 insertions(+)
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index a5eef06c42264..aeb475e585048 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -12,3 +12,12 @@ config DMABUF_HEAPS_CMA
Choose this option to enable dma-buf CMA heap. This heap is backed
by the Contiguous Memory Allocator (CMA). If your system has these
regions, you should say Y here.
+
+config DMABUF_HEAPS_COHERENT
+ bool "DMA-BUF Coherent Reserved-Memory Heap"
+ depends on DMABUF_HEAPS && OF_RESERVED_MEM && DMA_DECLARE_COHERENT
+ help
+ Choose this option to enable coherent reserved-memory dma-buf heaps.
+ This heap is backed by non-reusable DT "shared-dma-pool" regions.
+ If your system defines coherent reserved-memory regions, you should
+ say Y here.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index 974467791032f..96bda7a65f041 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,3 +1,4 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o
+obj-$(CONFIG_DMABUF_HEAPS_COHERENT) += coherent_heap.o
diff --git a/drivers/dma-buf/heaps/coherent_heap.c b/drivers/dma-buf/heaps/coherent_heap.c
new file mode 100644
index 0000000000000..55f53f87c4c15
--- /dev/null
+++ b/drivers/dma-buf/heaps/coherent_heap.c
@@ -0,0 +1,414 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMABUF heap for coherent reserved-memory regions
+ *
+ * Copyright (C) 2026 Red Hat, Inc.
+ * Author: Albert Esteve <aesteve@redhat.com>
+ *
+ */
+
+#include <linux/dma-buf.h>
+#include <linux/dma-heap.h>
+#include <linux/dma-map-ops.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/highmem.h>
+#include <linux/iosys-map.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+
+struct coherent_heap {
+ struct dma_heap *heap;
+ struct reserved_mem *rmem;
+ char *name;
+};
+
+struct coherent_heap_buffer {
+ struct coherent_heap *heap;
+ struct list_head attachments;
+ struct mutex lock;
+ unsigned long len;
+ dma_addr_t dma_addr;
+ void *alloc_vaddr;
+ struct page **pages;
+ pgoff_t pagecount;
+ int vmap_cnt;
+ void *vaddr;
+};
+
+struct dma_heap_attachment {
+ struct device *dev;
+ struct sg_table table;
+ struct list_head list;
+ bool mapped;
+};
+
+static int coherent_heap_attach(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *attachment)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a;
+ int ret;
+
+ a = kzalloc_obj(*a);
+ if (!a)
+ return -ENOMEM;
+
+ ret = sg_alloc_table_from_pages(&a->table, buffer->pages,
+ buffer->pagecount, 0,
+ buffer->pagecount << PAGE_SHIFT,
+ GFP_KERNEL);
+ if (ret) {
+ kfree(a);
+ return ret;
+ }
+
+ a->dev = attachment->dev;
+ INIT_LIST_HEAD(&a->list);
+ a->mapped = false;
+
+ attachment->priv = a;
+
+ mutex_lock(&buffer->lock);
+ list_add(&a->list, &buffer->attachments);
+ mutex_unlock(&buffer->lock);
+
+ return 0;
+}
+
+static void coherent_heap_detach(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *attachment)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a = attachment->priv;
+
+ mutex_lock(&buffer->lock);
+ list_del(&a->list);
+ mutex_unlock(&buffer->lock);
+
+ sg_free_table(&a->table);
+ kfree(a);
+}
+
+static struct sg_table *coherent_heap_map_dma_buf(struct dma_buf_attachment *attachment,
+ enum dma_data_direction direction)
+{
+ struct dma_heap_attachment *a = attachment->priv;
+ struct sg_table *table = &a->table;
+ int ret;
+
+ ret = dma_map_sgtable(attachment->dev, table, direction, 0);
+ if (ret)
+ return ERR_PTR(-ENOMEM);
+ a->mapped = true;
+
+ return table;
+}
+
+static void coherent_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
+ struct sg_table *table,
+ enum dma_data_direction direction)
+{
+ struct dma_heap_attachment *a = attachment->priv;
+
+ a->mapped = false;
+ dma_unmap_sgtable(attachment->dev, table, direction, 0);
+}
+
+static int coherent_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+ enum dma_data_direction direction)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a;
+
+ mutex_lock(&buffer->lock);
+ if (buffer->vmap_cnt)
+ invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+ list_for_each_entry(a, &buffer->attachments, list) {
+ if (!a->mapped)
+ continue;
+ dma_sync_sgtable_for_cpu(a->dev, &a->table, direction);
+ }
+ mutex_unlock(&buffer->lock);
+
+ return 0;
+}
+
+static int coherent_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
+ enum dma_data_direction direction)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a;
+
+ mutex_lock(&buffer->lock);
+ if (buffer->vmap_cnt)
+ flush_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+ list_for_each_entry(a, &buffer->attachments, list) {
+ if (!a->mapped)
+ continue;
+ dma_sync_sgtable_for_device(a->dev, &a->table, direction);
+ }
+ mutex_unlock(&buffer->lock);
+
+ return 0;
+}
+
+static int coherent_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct coherent_heap *coh_heap = buffer->heap;
+ struct device *heap_dev = dma_heap_get_dev(coh_heap->heap);
+
+ return dma_mmap_coherent(heap_dev, vma, buffer->alloc_vaddr,
+ buffer->dma_addr, buffer->len);
+}
+
+static void *coherent_heap_do_vmap(struct coherent_heap_buffer *buffer)
+{
+ void *vaddr;
+
+ vaddr = vmap(buffer->pages, buffer->pagecount, VM_MAP, PAGE_KERNEL);
+ if (!vaddr)
+ return ERR_PTR(-ENOMEM);
+
+ return vaddr;
+}
+
+static int coherent_heap_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ void *vaddr;
+ int ret = 0;
+
+ mutex_lock(&buffer->lock);
+ if (buffer->vmap_cnt) {
+ buffer->vmap_cnt++;
+ iosys_map_set_vaddr(map, buffer->vaddr);
+ goto out;
+ }
+
+ vaddr = coherent_heap_do_vmap(buffer);
+ if (IS_ERR(vaddr)) {
+ ret = PTR_ERR(vaddr);
+ goto out;
+ }
+
+ buffer->vaddr = vaddr;
+ buffer->vmap_cnt++;
+ iosys_map_set_vaddr(map, buffer->vaddr);
+out:
+ mutex_unlock(&buffer->lock);
+
+ return ret;
+}
+
+static void coherent_heap_vunmap(struct dma_buf *dmabuf, struct iosys_map *map)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+
+ mutex_lock(&buffer->lock);
+ if (!--buffer->vmap_cnt) {
+ vunmap(buffer->vaddr);
+ buffer->vaddr = NULL;
+ }
+ mutex_unlock(&buffer->lock);
+ iosys_map_clear(map);
+}
+
+static void coherent_heap_dma_buf_release(struct dma_buf *dmabuf)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct coherent_heap *coh_heap = buffer->heap;
+ struct device *heap_dev = dma_heap_get_dev(coh_heap->heap);
+
+ if (buffer->vmap_cnt > 0) {
+ WARN(1, "%s: buffer still mapped in the kernel\n", __func__);
+ vunmap(buffer->vaddr);
+ buffer->vaddr = NULL;
+ buffer->vmap_cnt = 0;
+ }
+
+ if (buffer->alloc_vaddr)
+ dma_free_coherent(heap_dev, buffer->len, buffer->alloc_vaddr,
+ buffer->dma_addr);
+ kfree(buffer->pages);
+ kfree(buffer);
+}
+
+static const struct dma_buf_ops coherent_heap_buf_ops = {
+ .attach = coherent_heap_attach,
+ .detach = coherent_heap_detach,
+ .map_dma_buf = coherent_heap_map_dma_buf,
+ .unmap_dma_buf = coherent_heap_unmap_dma_buf,
+ .begin_cpu_access = coherent_heap_dma_buf_begin_cpu_access,
+ .end_cpu_access = coherent_heap_dma_buf_end_cpu_access,
+ .mmap = coherent_heap_mmap,
+ .vmap = coherent_heap_vmap,
+ .vunmap = coherent_heap_vunmap,
+ .release = coherent_heap_dma_buf_release,
+};
+
+static struct dma_buf *coherent_heap_allocate(struct dma_heap *heap,
+ unsigned long len,
+ u32 fd_flags,
+ u64 heap_flags)
+{
+ struct coherent_heap *coh_heap;
+ struct coherent_heap_buffer *buffer;
+ struct device *heap_dev;
+ DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+ size_t size = PAGE_ALIGN(len);
+ pgoff_t pagecount = size >> PAGE_SHIFT;
+ struct dma_buf *dmabuf;
+ int ret = -ENOMEM;
+ pgoff_t pg;
+
+ coh_heap = dma_heap_get_drvdata(heap);
+ if (!coh_heap)
+ return ERR_PTR(-EINVAL);
+
+ heap_dev = dma_heap_get_dev(coh_heap->heap);
+ if (!heap_dev)
+ return ERR_PTR(-ENODEV);
+
+ buffer = kzalloc_obj(*buffer);
+ if (!buffer)
+ return ERR_PTR(-ENOMEM);
+
+ INIT_LIST_HEAD(&buffer->attachments);
+ mutex_init(&buffer->lock);
+ buffer->len = size;
+ buffer->heap = coh_heap;
+ buffer->pagecount = pagecount;
+
+ buffer->alloc_vaddr = dma_alloc_coherent(heap_dev, buffer->len,
+ &buffer->dma_addr, GFP_KERNEL);
+ if (!buffer->alloc_vaddr) {
+ ret = -ENOMEM;
+ goto free_buffer;
+ }
+
+ buffer->pages = kmalloc_array(pagecount, sizeof(*buffer->pages),
+ GFP_KERNEL);
+ if (!buffer->pages) {
+ ret = -ENOMEM;
+ goto free_dma;
+ }
+
+ for (pg = 0; pg < pagecount; pg++)
+ buffer->pages[pg] = virt_to_page((char *)buffer->alloc_vaddr +
+ (pg * PAGE_SIZE));
+
+ /* create the dmabuf */
+ exp_info.exp_name = dma_heap_get_name(heap);
+ exp_info.ops = &coherent_heap_buf_ops;
+ exp_info.size = buffer->len;
+ exp_info.flags = fd_flags;
+ exp_info.priv = buffer;
+ dmabuf = dma_buf_export(&exp_info);
+ if (IS_ERR(dmabuf)) {
+ ret = PTR_ERR(dmabuf);
+ goto free_pages;
+ }
+ return dmabuf;
+
+free_pages:
+ kfree(buffer->pages);
+free_dma:
+ dma_free_coherent(heap_dev, buffer->len, buffer->alloc_vaddr,
+ buffer->dma_addr);
+free_buffer:
+ kfree(buffer);
+ return ERR_PTR(ret);
+}
+
+static const struct dma_heap_ops coherent_heap_ops = {
+ .allocate = coherent_heap_allocate,
+};
+
+static int __coherent_heap_register(struct reserved_mem *rmem)
+{
+ struct dma_heap_export_info exp_info;
+ struct coherent_heap *coh_heap;
+ struct device *heap_dev;
+ int ret;
+
+ if (!rmem || !rmem->name)
+ return -EINVAL;
+
+ coh_heap = kzalloc_obj(*coh_heap);
+ if (!coh_heap)
+ return -ENOMEM;
+
+ coh_heap->rmem = rmem;
+ coh_heap->name = kstrdup(rmem->name, GFP_KERNEL);
+ if (!coh_heap->name) {
+ ret = -ENOMEM;
+ goto free_coherent_heap;
+ }
+
+ exp_info.name = coh_heap->name;
+ exp_info.ops = &coherent_heap_ops;
+ exp_info.priv = coh_heap;
+
+ coh_heap->heap = dma_heap_create(&exp_info);
+ if (IS_ERR(coh_heap->heap)) {
+ ret = PTR_ERR(coh_heap->heap);
+ goto free_name;
+ }
+
+ heap_dev = dma_heap_get_dev(coh_heap->heap);
+ ret = dma_coerce_mask_and_coherent(heap_dev, DMA_BIT_MASK(64));
+ if (ret) {
+ pr_err("coherent_heap: failed to set DMA mask (%d)\n", ret);
+ goto destroy_heap;
+ }
+
+ ret = of_reserved_mem_device_init_with_mem(heap_dev, rmem);
+ if (ret) {
+ pr_err("coherent_heap: failed to initialize memory (%d)\n", ret);
+ goto destroy_heap;
+ }
+
+ ret = dma_heap_register(coh_heap->heap);
+ if (ret) {
+ pr_err("coherent_heap: failed to register heap (%d)\n", ret);
+ goto destroy_heap;
+ }
+
+ return 0;
+
+destroy_heap:
+ dma_heap_destroy(coh_heap->heap);
+ coh_heap->heap = NULL;
+free_name:
+ kfree(coh_heap->name);
+free_coherent_heap:
+ kfree(coh_heap);
+
+ return ret;
+}
+
+static int __init coherent_heap_register(void)
+{
+ struct reserved_mem *rmem;
+ unsigned int i;
+ int ret;
+
+ for (i = 0; (rmem = dma_coherent_get_reserved_region(i)) != NULL; i++) {
+ ret = __coherent_heap_register(rmem);
+ if (ret) {
+ pr_warn("Failed to add coherent heap %s",
+ rmem->name ? rmem->name : "unknown");
+ continue;
+ }
+ }
+
+ return 0;
+}
+module_init(coherent_heap_register);
+MODULE_DESCRIPTION("DMA-BUF heap for coherent reserved-memory regions");
--
2.52.0
^ permalink raw reply related [flat|nested] 22+ messages in thread
* Claude review: dma-buf: heaps: Add Coherent heap to dmabuf heaps
2026-03-06 10:36 ` [PATCH v3 5/6] dma-buf: heaps: Add Coherent heap to dmabuf heaps Albert Esteve
@ 2026-03-08 23:02 ` Claude Code Review Bot
0 siblings, 0 replies; 22+ messages in thread
From: Claude Code Review Bot @ 2026-03-08 23:02 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
This is the main heap implementation.
**Issue: `begin_cpu_access` / `end_cpu_access` syncing for coherent memory**
```c
static int coherent_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
enum dma_data_direction direction)
{
...
if (buffer->vmap_cnt)
invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
list_for_each_entry(a, &buffer->attachments, list) {
if (!a->mapped)
continue;
dma_sync_sgtable_for_cpu(a->dev, &a->table, direction);
}
```
For *coherent* memory, the CPU and device see the same data without cache maintenance. The `dma_sync_sgtable_for_cpu/device()` calls and `invalidate_kernel_vmap_range`/`flush_kernel_vmap_range` are unnecessary for coherent allocations and add overhead. The CMA heap does this because CMA memory isn't inherently coherent, but coherent reserved-memory is, by definition, coherent. Either document why these sync operations are present (perhaps for non-cache-coherent architectures?) or reconsider them.
**Issue: `virt_to_page()` on `dma_alloc_coherent()` return value**
```c
for (pg = 0; pg < pagecount; pg++)
buffer->pages[pg] = virt_to_page((char *)buffer->alloc_vaddr +
(pg * PAGE_SIZE));
```
`dma_alloc_coherent()` returns a virtual address, but on some architectures (notably ARM with `no-map` regions), this might return an ioremap'd address rather than a regular kernel linear map address. `virt_to_page()` on such addresses is **undefined behavior** and will produce garbage `struct page *` pointers. This is a significant concern on ARM platforms where these coherent reserved-memory regions are commonly used with `no-map`. The pages obtained this way are then used in `sg_alloc_table_from_pages()` during attach, which would corrupt the scatterlist.
Consider using `pfn_to_page(PHYS_PFN(buffer->dma_addr + pg * PAGE_SIZE))` or directly computing pages from the physical address of the rmem region instead.
**Issue: `coherent_heap_map_dma_buf` returns `-ENOMEM` for all errors**
```c
ret = dma_map_sgtable(attachment->dev, table, direction, 0);
if (ret)
return ERR_PTR(-ENOMEM);
```
Should return `ERR_PTR(ret)` to preserve the actual error code.
**Issue: Memory leak of `coh_heap->name` on registration failure**
In `__coherent_heap_register()`:
```c
destroy_heap:
dma_heap_destroy(coh_heap->heap);
coh_heap->heap = NULL;
free_name:
kfree(coh_heap->name);
```
But `dma_heap_destroy()` → `put_device()` → `dma_heap_dev_release()` will `kfree(heap->name)`, which is the **same pointer** as `coh_heap->name` (since `exp_info.name = coh_heap->name` and `dma_heap_create` stores it). So `free_name` does a double-free. Either `dma_heap_dev_release()` should not free the name (see patch 2 issue), or this `kfree(coh_heap->name)` should be removed.
**Issue: No `module_exit` — intentional but no suppression of warning**
The commit message says the heap can't unload. When built as a module (patch 6), the missing `module_exit` means the module can never be removed once loaded, without anything making that explicit. Consider adding a comment or an empty `module_exit()` stub.
**Minor: Missing newline in pr_warn**
```c
pr_warn("Failed to add coherent heap %s",
rmem->name ? rmem->name : "unknown");
```
Missing `\n` at the end of the format string.
---
Generated by Claude Code Patch Reviewer
* [PATCH] dma-buf: heaps: Add Coherent heap to dmabuf heaps
@ 2026-02-24 7:57 Albert Esteve
2026-02-27 5:38 ` Claude review: " Claude Code Review Bot
2026-02-27 5:38 ` Claude Code Review Bot
0 siblings, 2 replies; 22+ messages in thread
From: Albert Esteve @ 2026-02-24 7:57 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy
Cc: linux-kernel, linux-media, dri-devel, linaro-mm-sig, iommu,
echanude, mripard, Albert Esteve
Add a dma-buf heap for DT coherent reserved-memory
(i.e., 'shared-dma-pool' without 'reusable' property),
exposing one heap per region for userspace buffers.
The heap binds a synthetic platform device to each region
so coherent allocations use the correct dev->dma_mem,
and it defers registration until late_initcall when
normal allocators are available.
This patch includes charging of the coherent heap
allocator to the dmem cgroup.
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
This patch introduces a new driver to expose DT coherent reserved-memory
regions as dma-buf heaps, allowing userspace buffers to be created.
Since these regions are device-dependent, we bind a synthetic platform
device to each region so coherent allocations use the correct dev->dma_mem.
Following Eric’s [1] and Maxime’s [2] work on charging DMA buffers
allocated from userspace to cgroups (dmem), this patch adds the same
charging pattern used by CMA heaps patch. Charging is done only through
the dma-buf heap interface so it can be attributed to a userspace allocator.
This allows each device-specific reserved-memory region to enforce its
own limits.
[1] https://lore.kernel.org/all/20260218-dmabuf-heap-cma-dmem-v2-0-b249886fb7b2@redhat.com/
[2] https://lore.kernel.org/all/20250310-dmem-cgroups-v1-0-2984c1bc9312@kernel.org/
---
drivers/dma-buf/heaps/Kconfig | 17 ++
drivers/dma-buf/heaps/Makefile | 1 +
drivers/dma-buf/heaps/coherent_heap.c | 485 ++++++++++++++++++++++++++++++++++
include/linux/dma-heap.h | 11 +
kernel/dma/coherent.c | 9 +
5 files changed, 523 insertions(+)
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index a5eef06c42264..93765dca164e3 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -12,3 +12,20 @@ config DMABUF_HEAPS_CMA
Choose this option to enable dma-buf CMA heap. This heap is backed
by the Contiguous Memory Allocator (CMA). If your system has these
regions, you should say Y here.
+
+config DMABUF_HEAPS_COHERENT
+ bool "DMA-BUF Coherent Reserved-Memory Heap"
+ depends on DMABUF_HEAPS && OF_RESERVED_MEM && DMA_DECLARE_COHERENT
+ help
+ Choose this option to enable coherent reserved-memory dma-buf heaps.
+ This heap is backed by non-reusable DT "shared-dma-pool" regions.
+ If your system defines coherent reserved-memory regions, you should
+ say Y here.
+
+config COHERENT_AREAS_DEFERRED
+ int "Max deferred coherent reserved-memory regions"
+ depends on DMABUF_HEAPS_COHERENT
+ default 16
+ help
+ Maximum number of coherent reserved-memory regions that can be
+ deferred for later registration during early boot.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index 974467791032f..96bda7a65f041 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,3 +1,4 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o
+obj-$(CONFIG_DMABUF_HEAPS_COHERENT) += coherent_heap.o
diff --git a/drivers/dma-buf/heaps/coherent_heap.c b/drivers/dma-buf/heaps/coherent_heap.c
new file mode 100644
index 0000000000000..870b2b89aefcb
--- /dev/null
+++ b/drivers/dma-buf/heaps/coherent_heap.c
@@ -0,0 +1,485 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMABUF heap for coherent reserved-memory regions
+ *
+ * Copyright (C) 2026 Red Hat, Inc.
+ * Author: Albert Esteve <aesteve@redhat.com>
+ *
+ */
+
+#include <linux/cgroup_dmem.h>
+#include <linux/dma-heap.h>
+#include <linux/dma-buf.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/highmem.h>
+#include <linux/iosys-map.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/platform_device.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+
+#define DEFERRED_AREAS_MAX CONFIG_COHERENT_AREAS_DEFERRED
+
+/*
+ * Early init can't use normal memory management yet (memblock is used
+ * instead), so keep a small deferred list and retry at late_initcall.
+ */
+static struct reserved_mem *rmem_areas_deferred[DEFERRED_AREAS_MAX];
+static unsigned int rmem_areas_deferred_num;
+
+static int coherent_heap_add_deferred(struct reserved_mem *rmem)
+{
+ if (rmem_areas_deferred_num >= DEFERRED_AREAS_MAX) {
+ pr_warn("Deferred heap areas list full, dropping %s\n",
+ rmem->name ? rmem->name : "unknown");
+ return -EINVAL;
+ }
+ rmem_areas_deferred[rmem_areas_deferred_num++] = rmem;
+ return 0;
+}
+
+struct coherent_heap {
+ struct dma_heap *heap;
+ struct reserved_mem *rmem;
+ char *name;
+ struct device *dev;
+ struct platform_device *pdev;
+#if IS_ENABLED(CONFIG_CGROUP_DMEM)
+ struct dmem_cgroup_region *cg;
+#endif
+};
+
+struct coherent_heap_buffer {
+ struct coherent_heap *heap;
+ struct list_head attachments;
+ struct mutex lock;
+ unsigned long len;
+ dma_addr_t dma_addr;
+ void *alloc_vaddr;
+ struct page **pages;
+ pgoff_t pagecount;
+ int vmap_cnt;
+ void *vaddr;
+#if IS_ENABLED(CONFIG_CGROUP_DMEM)
+ struct dmem_cgroup_pool_state *pool;
+#endif
+};
+
+struct dma_heap_attachment {
+ struct device *dev;
+ struct sg_table table;
+ struct list_head list;
+ bool mapped;
+};
+
+static int coherent_heap_attach(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *attachment)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a;
+ int ret;
+
+ a = kzalloc_obj(*a);
+ if (!a)
+ return -ENOMEM;
+
+ ret = sg_alloc_table_from_pages(&a->table, buffer->pages,
+ buffer->pagecount, 0,
+ buffer->pagecount << PAGE_SHIFT,
+ GFP_KERNEL);
+ if (ret) {
+ kfree(a);
+ return ret;
+ }
+
+ a->dev = attachment->dev;
+ INIT_LIST_HEAD(&a->list);
+ a->mapped = false;
+
+ attachment->priv = a;
+
+ mutex_lock(&buffer->lock);
+ list_add(&a->list, &buffer->attachments);
+ mutex_unlock(&buffer->lock);
+
+ return 0;
+}
+
+static void coherent_heap_detach(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *attachment)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a = attachment->priv;
+
+ mutex_lock(&buffer->lock);
+ list_del(&a->list);
+ mutex_unlock(&buffer->lock);
+
+ sg_free_table(&a->table);
+ kfree(a);
+}
+
+static struct sg_table *coherent_heap_map_dma_buf(struct dma_buf_attachment *attachment,
+ enum dma_data_direction direction)
+{
+ struct dma_heap_attachment *a = attachment->priv;
+ struct sg_table *table = &a->table;
+ int ret;
+
+ ret = dma_map_sgtable(attachment->dev, table, direction, 0);
+ if (ret)
+ return ERR_PTR(-ENOMEM);
+ a->mapped = true;
+
+ return table;
+}
+
+static void coherent_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
+ struct sg_table *table,
+ enum dma_data_direction direction)
+{
+ struct dma_heap_attachment *a = attachment->priv;
+
+ a->mapped = false;
+ dma_unmap_sgtable(attachment->dev, table, direction, 0);
+}
+
+static int coherent_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+ enum dma_data_direction direction)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a;
+
+ mutex_lock(&buffer->lock);
+ if (buffer->vmap_cnt)
+ invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+ list_for_each_entry(a, &buffer->attachments, list) {
+ if (!a->mapped)
+ continue;
+ dma_sync_sgtable_for_cpu(a->dev, &a->table, direction);
+ }
+ mutex_unlock(&buffer->lock);
+
+ return 0;
+}
+
+static int coherent_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
+ enum dma_data_direction direction)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a;
+
+ mutex_lock(&buffer->lock);
+ if (buffer->vmap_cnt)
+ flush_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+ list_for_each_entry(a, &buffer->attachments, list) {
+ if (!a->mapped)
+ continue;
+ dma_sync_sgtable_for_device(a->dev, &a->table, direction);
+ }
+ mutex_unlock(&buffer->lock);
+
+ return 0;
+}
+
+static int coherent_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct coherent_heap *coh_heap = buffer->heap;
+
+ return dma_mmap_coherent(coh_heap->dev, vma, buffer->alloc_vaddr,
+ buffer->dma_addr, buffer->len);
+}
+
+static void *coherent_heap_do_vmap(struct coherent_heap_buffer *buffer)
+{
+ void *vaddr;
+
+ vaddr = vmap(buffer->pages, buffer->pagecount, VM_MAP, PAGE_KERNEL);
+ if (!vaddr)
+ return ERR_PTR(-ENOMEM);
+
+ return vaddr;
+}
+
+static int coherent_heap_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ void *vaddr;
+ int ret = 0;
+
+ mutex_lock(&buffer->lock);
+ if (buffer->vmap_cnt) {
+ buffer->vmap_cnt++;
+ iosys_map_set_vaddr(map, buffer->vaddr);
+ goto out;
+ }
+
+ vaddr = coherent_heap_do_vmap(buffer);
+ if (IS_ERR(vaddr)) {
+ ret = PTR_ERR(vaddr);
+ goto out;
+ }
+
+ buffer->vaddr = vaddr;
+ buffer->vmap_cnt++;
+ iosys_map_set_vaddr(map, buffer->vaddr);
+out:
+ mutex_unlock(&buffer->lock);
+
+ return ret;
+}
+
+static void coherent_heap_vunmap(struct dma_buf *dmabuf, struct iosys_map *map)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+
+ mutex_lock(&buffer->lock);
+ if (!--buffer->vmap_cnt) {
+ vunmap(buffer->vaddr);
+ buffer->vaddr = NULL;
+ }
+ mutex_unlock(&buffer->lock);
+ iosys_map_clear(map);
+}
+
+static void coherent_heap_dma_buf_release(struct dma_buf *dmabuf)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct coherent_heap *coh_heap = buffer->heap;
+
+ if (buffer->vmap_cnt > 0) {
+ WARN(1, "%s: buffer still mapped in the kernel\n", __func__);
+ vunmap(buffer->vaddr);
+ buffer->vaddr = NULL;
+ buffer->vmap_cnt = 0;
+ }
+
+ if (buffer->alloc_vaddr)
+ dma_free_coherent(coh_heap->dev, buffer->len, buffer->alloc_vaddr,
+ buffer->dma_addr);
+ kfree(buffer->pages);
+#if IS_ENABLED(CONFIG_CGROUP_DMEM)
+ dmem_cgroup_uncharge(buffer->pool, buffer->len);
+#endif
+ kfree(buffer);
+}
+
+static const struct dma_buf_ops coherent_heap_buf_ops = {
+ .attach = coherent_heap_attach,
+ .detach = coherent_heap_detach,
+ .map_dma_buf = coherent_heap_map_dma_buf,
+ .unmap_dma_buf = coherent_heap_unmap_dma_buf,
+ .begin_cpu_access = coherent_heap_dma_buf_begin_cpu_access,
+ .end_cpu_access = coherent_heap_dma_buf_end_cpu_access,
+ .mmap = coherent_heap_mmap,
+ .vmap = coherent_heap_vmap,
+ .vunmap = coherent_heap_vunmap,
+ .release = coherent_heap_dma_buf_release,
+};
+
+static struct dma_buf *coherent_heap_allocate(struct dma_heap *heap,
+ unsigned long len,
+ u32 fd_flags,
+ u64 heap_flags)
+{
+ struct coherent_heap *coh_heap;
+ struct coherent_heap_buffer *buffer;
+ DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+ size_t size = PAGE_ALIGN(len);
+ pgoff_t pagecount = size >> PAGE_SHIFT;
+ struct dma_buf *dmabuf;
+ int ret = -ENOMEM;
+ pgoff_t pg;
+
+ coh_heap = dma_heap_get_drvdata(heap);
+ if (!coh_heap)
+ return ERR_PTR(-EINVAL);
+ if (!coh_heap->dev)
+ return ERR_PTR(-ENODEV);
+
+ buffer = kzalloc_obj(*buffer);
+ if (!buffer)
+ return ERR_PTR(-ENOMEM);
+
+ INIT_LIST_HEAD(&buffer->attachments);
+ mutex_init(&buffer->lock);
+ buffer->len = size;
+ buffer->heap = coh_heap;
+ buffer->pagecount = pagecount;
+
+#if IS_ENABLED(CONFIG_CGROUP_DMEM)
+ if (mem_accounting) {
+ ret = dmem_cgroup_try_charge(coh_heap->cg, size,
+ &buffer->pool, NULL);
+ if (ret)
+ goto free_buffer;
+ }
+#endif
+
+ buffer->alloc_vaddr = dma_alloc_coherent(coh_heap->dev, buffer->len,
+ &buffer->dma_addr, GFP_KERNEL);
+ if (!buffer->alloc_vaddr) {
+ ret = -ENOMEM;
+#if IS_ENABLED(CONFIG_CGROUP_DMEM)
+ goto uncharge_cgroup;
+#else
+ goto free_buffer;
+#endif
+ }
+
+ buffer->pages = kmalloc_array(pagecount, sizeof(*buffer->pages),
+ GFP_KERNEL);
+ if (!buffer->pages) {
+ ret = -ENOMEM;
+ goto free_dma;
+ }
+
+ for (pg = 0; pg < pagecount; pg++)
+ buffer->pages[pg] = virt_to_page((char *)buffer->alloc_vaddr +
+ (pg * PAGE_SIZE));
+
+ /* create the dmabuf */
+ exp_info.exp_name = dma_heap_get_name(heap);
+ exp_info.ops = &coherent_heap_buf_ops;
+ exp_info.size = buffer->len;
+ exp_info.flags = fd_flags;
+ exp_info.priv = buffer;
+ dmabuf = dma_buf_export(&exp_info);
+ if (IS_ERR(dmabuf)) {
+ ret = PTR_ERR(dmabuf);
+ goto free_pages;
+ }
+ return dmabuf;
+
+free_pages:
+ kfree(buffer->pages);
+free_dma:
+ dma_free_coherent(coh_heap->dev, buffer->len, buffer->alloc_vaddr,
+ buffer->dma_addr);
+#if IS_ENABLED(CONFIG_CGROUP_DMEM)
+uncharge_cgroup:
+ dmem_cgroup_uncharge(buffer->pool, size);
+#endif
+free_buffer:
+ kfree(buffer);
+ return ERR_PTR(ret);
+}
+
+static const struct dma_heap_ops coherent_heap_ops = {
+ .allocate = coherent_heap_allocate,
+};
+
+static int __coherent_heap_register(struct reserved_mem *rmem)
+{
+ struct dma_heap_export_info exp_info;
+ struct coherent_heap *coh_heap;
+#if IS_ENABLED(CONFIG_CGROUP_DMEM)
+ struct dmem_cgroup_region *region;
+#endif
+ const char *rmem_name;
+ int ret;
+
+ if (!rmem)
+ return -EINVAL;
+
+ rmem_name = rmem->name ? rmem->name : "unknown";
+
+ coh_heap = kzalloc_obj(*coh_heap);
+ if (!coh_heap)
+ return -ENOMEM;
+
+ coh_heap->name = kasprintf(GFP_KERNEL, "coherent_%s", rmem_name);
+ if (!coh_heap->name) {
+ ret = -ENOMEM;
+ goto free_coherent_heap;
+ }
+
+ coh_heap->rmem = rmem;
+
+ /* create a platform device per rmem and bind it */
+ coh_heap->pdev = platform_device_register_simple("coherent-heap",
+ PLATFORM_DEVID_AUTO,
+ NULL, 0);
+ if (IS_ERR(coh_heap->pdev)) {
+ ret = PTR_ERR(coh_heap->pdev);
+ goto free_name;
+ }
+
+ if (rmem->ops && rmem->ops->device_init) {
+ ret = rmem->ops->device_init(rmem, &coh_heap->pdev->dev);
+ if (ret)
+ goto pdev_unregister;
+ }
+
+ coh_heap->dev = &coh_heap->pdev->dev;
+#if IS_ENABLED(CONFIG_CGROUP_DMEM)
+ region = dmem_cgroup_register_region(rmem->size, "coh/%s", rmem_name);
+ if (IS_ERR(region)) {
+ ret = PTR_ERR(region);
+ goto pdev_unregister;
+ }
+ coh_heap->cg = region;
+#endif
+
+ exp_info.name = coh_heap->name;
+ exp_info.ops = &coherent_heap_ops;
+ exp_info.priv = coh_heap;
+
+ coh_heap->heap = dma_heap_add(&exp_info);
+ if (IS_ERR(coh_heap->heap)) {
+ ret = PTR_ERR(coh_heap->heap);
+ goto cg_unregister;
+ }
+
+ return 0;
+
+cg_unregister:
+#if IS_ENABLED(CONFIG_CGROUP_DMEM)
+ dmem_cgroup_unregister_region(coh_heap->cg);
+#endif
+pdev_unregister:
+ platform_device_unregister(coh_heap->pdev);
+ coh_heap->pdev = NULL;
+free_name:
+ kfree(coh_heap->name);
+free_coherent_heap:
+ kfree(coh_heap);
+
+ return ret;
+}
+
+int dma_heap_coherent_register(struct reserved_mem *rmem)
+{
+ int ret;
+
+ ret = __coherent_heap_register(rmem);
+ if (ret == -ENOMEM)
+ return coherent_heap_add_deferred(rmem);
+ return ret;
+}
+
+static int __init coherent_heap_register_deferred(void)
+{
+ unsigned int i;
+ int ret;
+
+ for (i = 0; i < rmem_areas_deferred_num; i++) {
+ struct reserved_mem *rmem = rmem_areas_deferred[i];
+
+ ret = __coherent_heap_register(rmem);
+ if (ret) {
+ pr_warn("Failed to add coherent heap %s",
+ rmem->name ? rmem->name : "unknown");
+ continue;
+ }
+ }
+
+ return 0;
+}
+late_initcall(coherent_heap_register_deferred);
+MODULE_DESCRIPTION("DMA-BUF heap for coherent reserved-memory regions");
diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
index 648328a64b27e..e894cfa1ecf1a 100644
--- a/include/linux/dma-heap.h
+++ b/include/linux/dma-heap.h
@@ -9,9 +9,11 @@
#ifndef _DMA_HEAPS_H
#define _DMA_HEAPS_H
+#include <linux/errno.h>
#include <linux/types.h>
struct dma_heap;
+struct reserved_mem;
/**
* struct dma_heap_ops - ops to operate on a given heap
@@ -48,4 +50,13 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info);
extern bool mem_accounting;
+#if IS_ENABLED(CONFIG_DMABUF_HEAPS_COHERENT)
+int dma_heap_coherent_register(struct reserved_mem *rmem);
+#else
+static inline int dma_heap_coherent_register(struct reserved_mem *rmem)
+{
+ return -EOPNOTSUPP;
+}
+#endif
+
#endif /* _DMA_HEAPS_H */
diff --git a/kernel/dma/coherent.c b/kernel/dma/coherent.c
index 1147497bc512c..f49d13e460e4b 100644
--- a/kernel/dma/coherent.c
+++ b/kernel/dma/coherent.c
@@ -9,6 +9,7 @@
#include <linux/module.h>
#include <linux/dma-direct.h>
#include <linux/dma-map-ops.h>
+#include <linux/dma-heap.h>
struct dma_coherent_mem {
void *virt_base;
@@ -393,6 +394,14 @@ static int __init rmem_dma_setup(struct reserved_mem *rmem)
rmem->ops = &rmem_dma_ops;
pr_info("Reserved memory: created DMA memory pool at %pa, size %ld MiB\n",
&rmem->base, (unsigned long)rmem->size / SZ_1M);
+
+ if (IS_ENABLED(CONFIG_DMABUF_HEAPS_COHERENT)) {
+ int ret = dma_heap_coherent_register(rmem);
+
+ if (ret)
+ pr_warn("Reserved memory: failed to register coherent heap for %s (%d)\n",
+ rmem->name ? rmem->name : "unknown", ret);
+ }
return 0;
}
---
base-commit: 6de23f81a5e08be8fbf5e8d7e9febc72a5b5f27f
change-id: 20260223-b4-dmabuf-heap-coherent-rmem-91fd3926afe9
Best regards,
--
Albert Esteve <aesteve@redhat.com>
* Claude review: dma-buf: heaps: Add Coherent heap to dmabuf heaps
2026-02-24 7:57 [PATCH] " Albert Esteve
@ 2026-02-27 5:38 ` Claude Code Review Bot
2026-02-27 5:38 ` Claude Code Review Bot
1 sibling, 0 replies; 22+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 5:38 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
#### Critical Issues
**1. `virt_to_page()` on `dma_alloc_coherent()` return value is unsafe**
```c
for (pg = 0; pg < pagecount; pg++)
buffer->pages[pg] = virt_to_page((char *)buffer->alloc_vaddr +
(pg * PAGE_SIZE));
```
`dma_alloc_coherent()` on a device backed by reserved coherent memory goes through `dma_alloc_from_coherent()`, which returns a pointer into `mem->virt_base`. That virtual base comes from `memremap(phys_addr, size, MEMREMAP_WC)` (in `dma_init_coherent_memory()` at `kernel/dma/coherent.c`). On architectures where `memremap()` returns a vmalloc/ioremap-space address rather than a linear-map address, `virt_to_page()` will produce garbage or crash.
The existing `coherent.c` code avoids this — it stores `pfn_base = PFN_DOWN(phys_addr)` and uses PFN arithmetic directly in `remap_pfn_range()` calls. The correct approach here would be to derive PFNs from the DMA address:
```c
phys_addr_t phys = dma_to_phys(coh_heap->dev, buffer->dma_addr);
for (pg = 0; pg < pagecount; pg++)
buffer->pages[pg] = pfn_to_page(PFN_DOWN(phys) + pg);
```
This is the single most critical issue in the patch.
**2. Missing memory zeroing — information leak to userspace**
The CMA heap explicitly zeroes allocated pages before exporting them:
```c
/* CMA heap does: */
memset(page_address(cma_pages), 0, size);
```
The coherent heap does not zero the memory returned by `dma_alloc_coherent()`. For reserved-memory coherent pools, `dma_alloc_from_coherent()` returns a pointer into pre-mapped memory without zeroing it. Between allocate/free cycles, stale data from prior users is exposed. Since these buffers are exported to userspace, this is a security concern. A `memset(buffer->alloc_vaddr, 0, buffer->len)` is needed after allocation.
#### Architectural Issues
**3. Reverse dependency: `kernel/dma/coherent.c` calling into dma-buf heaps**
```c
/* kernel/dma/coherent.c: */
+#include <linux/dma-heap.h>
...
+ if (IS_ENABLED(CONFIG_DMABUF_HEAPS_COHERENT)) {
+ int ret = dma_heap_coherent_register(rmem);
```
This creates a layering violation — core DMA infrastructure now calls up into the dma-buf heaps subsystem. The CMA heap does not do this; it uses `cma_for_each_area()` to iterate and register CMA areas autonomously from `drivers/dma-buf/heaps/cma_heap.c`, without touching any CMA core code.
The coherent heap should similarly find reserved-memory regions on its own at `late_initcall` time (e.g., by iterating the reserved-memory DT nodes), rather than hooking into `rmem_dma_setup()`. This would keep the modification entirely within `drivers/dma-buf/heaps/` and avoid touching `kernel/dma/coherent.c`.
**4. All coherent reserved-memory regions are unconditionally exposed to userspace**
Every `shared-dma-pool` (non-reusable) region automatically gets a dma-buf heap. Some of these regions may be intended exclusively for specific kernel drivers. A malicious or buggy userspace process could exhaust a device-specific reserved pool, starving the kernel driver. There should be a DT opt-in mechanism (e.g., a `dma-buf-heap` property) to control which regions are exposed, rather than exposing all of them.
#### Code Quality Issues
**5. Unnecessary `#if IS_ENABLED(CONFIG_CGROUP_DMEM)` guards throughout**
The header `linux/cgroup_dmem.h` already provides no-op stubs when `CONFIG_CGROUP_DMEM` is disabled (`dmem_cgroup_try_charge()` returns 0, `dmem_cgroup_uncharge()` is a no-op, etc.). All the `#if IS_ENABLED(CONFIG_CGROUP_DMEM)` blocks in the patch are unnecessary and clutter the code. The entire cgroup charging path can be written without any `#if` guards:
```c
/* In struct coherent_heap — no #if needed: */
struct dmem_cgroup_region *cg;
/* In allocate — no #if needed: */
if (mem_accounting) {
ret = dmem_cgroup_try_charge(coh_heap->cg, size,
&buffer->pool, NULL);
if (ret)
goto free_buffer;
}
```
The stubs will compile to nothing when disabled. This would significantly clean up `coherent_heap_allocate()`, `coherent_heap_dma_buf_release()`, `__coherent_heap_register()`, and the error paths.
**6. `cg_unregister` error label structure is confusing**
```c
cg_unregister:
#if IS_ENABLED(CONFIG_CGROUP_DMEM)
dmem_cgroup_unregister_region(coh_heap->cg);
#endif
pdev_unregister:
```
When `CONFIG_CGROUP_DMEM` is disabled, `cg_unregister` is an empty label that falls through to `pdev_unregister`. This compiles but is confusing. Removing the ifdefs (as per point 5) would fix this naturally.
**7. Missing `device_release()` call in error/cleanup paths**
In `__coherent_heap_register()`, after a successful `rmem->ops->device_init()` call, the error paths only call `platform_device_unregister()` but never call `rmem->ops->device_release()` to undo the init:
```c
if (rmem->ops && rmem->ops->device_init) {
ret = rmem->ops->device_init(rmem, &coh_heap->pdev->dev);
if (ret)
goto pdev_unregister;
}
/* ... later errors goto cg_unregister or pdev_unregister ... */
/* but never call rmem->ops->device_release() */
```
If `dma_heap_add()` or `dmem_cgroup_register_region()` fails after a successful `device_init()`, the reserved memory binding is leaked. Need a `device_release` cleanup step.
**8. Deferred registration only on `-ENOMEM` is fragile**
```c
int dma_heap_coherent_register(struct reserved_mem *rmem)
{
int ret;
ret = __coherent_heap_register(rmem);
if (ret == -ENOMEM)
return coherent_heap_add_deferred(rmem);
return ret;
}
```
`rmem_dma_setup()` runs from `RESERVEDMEM_OF_DECLARE`, which is during very early boot before slab is available. The assumption is that `kzalloc()` will fail with `-ENOMEM` and trigger deferral. But `platform_device_register_simple()` or other calls could fail with different error codes (e.g., `-ENODEV`, `-EPROBE_DEFER`) before the allocator is ready. If the intent is to unconditionally defer during early boot, just always defer — or better yet, don't hook into `rmem_dma_setup()` at all (see point 3).
**9. `COHERENT_AREAS_DEFERRED` Kconfig naming is inconsistent**
```
config COHERENT_AREAS_DEFERRED
int "Max deferred coherent reserved-memory regions"
```
This name doesn't follow the `DMABUF_HEAPS_` prefix convention used by other options in this Kconfig. Should be something like `DMABUF_HEAPS_COHERENT_MAX_DEFERRED`. Also, making this a Kconfig integer for what is essentially an implementation detail of early boot deferral seems like unnecessary user-facing configuration.
**10. `begin_cpu_access`/`end_cpu_access` sync correctness for coherent memory**
```c
static int coherent_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
enum dma_data_direction direction)
{
...
list_for_each_entry(a, &buffer->attachments, list) {
if (!a->mapped)
continue;
dma_sync_sgtable_for_cpu(a->dev, &a->table, direction);
}
```
This is copied from the CMA heap, but the semantics are subtly different. The underlying allocation is DMA-coherent (uncached/write-combining), so the sync on the allocation-side device is unnecessary. The sync on *attaching* devices' sg_tables may still be needed if those devices' DMA mappings are not coherent. The code works but a comment explaining this would be helpful, since the reader may wonder why coherent memory needs sync at all.
**11. Dependency on unmerged patches**
The `mem_accounting` variable and the dmem cgroup charging pattern are from Eric's series ([1] in the cover letter) which is not yet merged. This should be clearly stated as a dependency, and ideally this patch should be part of or follow that series.
#### Minor Issues
- `kmalloc_array()` is used for `buffer->pages` while `kzalloc_obj()` is used elsewhere — inconsistent with the CMA heap which uses `kmalloc_objs()` for the same purpose.
- The `pr_warn` in `coherent_heap_register_deferred()` is missing a newline: `pr_warn("Failed to add coherent heap %s",` → should have `\n`.
- No `MODULE_LICENSE()` (though since it's `bool` not `tristate`, this is cosmetic — but `MODULE_DESCRIPTION` without `MODULE_LICENSE` is inconsistent).
---
Generated by Claude Code Patch Reviewer
* Claude review: dma-buf: heaps: Add Coherent heap to dmabuf heaps
2026-02-24 7:57 [PATCH] " Albert Esteve
2026-02-27 5:38 ` Claude review: " Claude Code Review Bot
@ 2026-02-27 5:38 ` Claude Code Review Bot
1 sibling, 0 replies; 22+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 5:38 UTC (permalink / raw)
To: dri-devel-reviews
Overall Series Review
Subject: dma-buf: heaps: Add Coherent heap to dmabuf heaps
Author: Albert Esteve <aesteve@redhat.com>
Patches: 3
Reviewed: 2026-02-27T15:38:23.597909
---
This is a single patch adding a new dma-buf heap driver for DT coherent reserved-memory (`shared-dma-pool` without `reusable`). The goal is to expose these regions to userspace via the dma-buf heaps interface, with dmem cgroup accounting.
**Overall assessment: Needs significant rework.** The patch has several critical issues including unsafe use of `virt_to_page()` on `dma_alloc_coherent()` return values, missing memory zeroing (information leak to userspace), and an architectural concern where core DMA code is modified to call into the dma-buf heap subsystem. The `#if IS_ENABLED(CONFIG_CGROUP_DMEM)` ifdefs throughout are unnecessary since the header provides stubs, and there is no mechanism to prevent unconditional exposure of all coherent reserved-memory to userspace.
---
Generated by Claude Code Patch Reviewer
end of thread, other threads:[~2026-03-08 23:02 UTC | newest]
Thread overview: 22+ messages
2026-03-03 12:33 [PATCH v2 0/6] dma-buf: heaps: add coherent reserved-memory heap Albert Esteve
2026-03-03 12:33 ` [PATCH v2 1/6] dma-buf: dma-heap: Keep track of the heap device struct Albert Esteve
2026-03-03 13:10 ` Maxime Ripard
2026-03-03 21:19 ` Claude review: " Claude Code Review Bot
2026-03-03 12:33 ` [PATCH v2 2/6] dma-buf: dma-heap: split dma_heap_add Albert Esteve
2026-03-03 21:19 ` Claude review: " Claude Code Review Bot
2026-03-03 12:33 ` [PATCH v2 3/6] of_reserved_mem: add a helper for rmem device_init op Albert Esteve
2026-03-03 13:13 ` Maxime Ripard
2026-03-03 21:19 ` Claude review: " Claude Code Review Bot
2026-03-03 12:33 ` [PATCH v2 4/6] dma-buf: heaps: Add Coherent heap to dmabuf heaps Albert Esteve
2026-03-03 13:20 ` Maxime Ripard
2026-03-03 14:47 ` Albert Esteve
2026-03-03 21:19 ` Claude review: " Claude Code Review Bot
2026-03-03 12:33 ` [PATCH v2 5/6] dma: coherent: register to coherent heap Albert Esteve
2026-03-03 21:19 ` Claude review: " Claude Code Review Bot
2026-03-03 12:33 ` [PATCH v2 6/6] dma-buf: heaps: coherent: Turn heap into a module Albert Esteve
2026-03-03 21:19 ` Claude review: " Claude Code Review Bot
2026-03-03 20:55 ` [PATCH v2 0/6] dma-buf: heaps: add coherent reserved-memory heap John Stultz
2026-03-03 21:19 ` Claude review: " Claude Code Review Bot
-- strict thread matches above, loose matches on Subject: below --
2026-03-06 10:36 [PATCH v3 0/6] " Albert Esteve
2026-03-06 10:36 ` [PATCH v3 5/6] dma-buf: heaps: Add Coherent heap to dmabuf heaps Albert Esteve
2026-03-08 23:02 ` Claude review: " Claude Code Review Bot
2026-02-24 7:57 [PATCH] " Albert Esteve
2026-02-27 5:38 ` Claude review: " Claude Code Review Bot
2026-02-27 5:38 ` Claude Code Review Bot