* [PATCH v3 0/6] dma-buf: heaps: add coherent reserved-memory heap
@ 2026-03-06 10:36 Albert Esteve
2026-03-06 10:36 ` [PATCH v3 1/6] dma-buf: dma-heap: Keep track of the heap device struct Albert Esteve
` (6 more replies)
0 siblings, 7 replies; 17+ messages in thread
From: Albert Esteve @ 2026-03-06 10:36 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan
Cc: linux-kernel, linux-media, dri-devel, linaro-mm-sig, iommu,
devicetree, Albert Esteve, mripard, echanude, John Stultz,
Maxime Ripard
This patch series adds a new dma-buf heap driver that exposes coherent,
non‑reusable reserved-memory regions as named heaps, so userspace can
explicitly allocate buffers from those device‑specific pools.
Motivation: we want cgroup accounting for all userspace‑visible buffer
allocations (DRM, v4l2, dma‑buf heaps, etc.). That’s hard to do when
drivers call dma_alloc_attrs() directly because the accounting controller
(memcg vs dmem) is ambiguous. The long‑term plan is to steer those paths
toward dma‑buf heaps, where each heap can unambiguously charge a single
controller. To reach that goal, we need a heap backend for each
dma_alloc_attrs() memory type. CMA and system heaps already exist;
coherent reserved‑memory was the missing piece, since many SoCs define
dedicated, device‑local coherent pools in DT under /reserved-memory using
"shared-dma-pool" with non‑reusable regions (i.e., not CMA) that are
carved out exclusively for coherent DMA and are currently only usable by
in‑kernel drivers.
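For illustration, such a pool might be declared in DT roughly as follows (node name, label, and addresses are made up; the absence of a "reusable" property is what keeps the region out of CMA):

```dts
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	/* hypothetical device-local coherent pool, not CMA
	 * (no "reusable" property) */
	vpu_dma_pool: vpu-pool@9f000000 {
		compatible = "shared-dma-pool";
		reg = <0x0 0x9f000000 0x0 0x1000000>;
		no-map;
	};
};
```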
Because these regions are device‑dependent, each heap instance binds a
heap device to its reserved‑mem region via a newly introduced helper
function, of_reserved_mem_device_init_with_mem(), so that coherent
allocations use the correct dev->dma_mem.
Charging to cgroups for these buffers is intentionally left out to keep
review focused on the new heap; I plan to follow up based on Eric’s [1]
and Maxime’s [2] work on dmem charging from userspace.
This series also makes the new heap driver modular, in line with the CMA
heap change in [3].
[1] https://lore.kernel.org/all/20260218-dmabuf-heap-cma-dmem-v2-0-b249886fb7b2@redhat.com/
[2] https://lore.kernel.org/all/20250310-dmem-cgroups-v1-0-2984c1bc9312@kernel.org/
[3] https://lore.kernel.org/all/20260303-dma-buf-heaps-as-modules-v3-0-24344812c707@kernel.org/
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
Changes in v3:
- Reorganized changesets among patches to ensure bisectability
- Removed unused dma_heap_coherent_register() leftover
- Removed fallback when setting mask in coherent heap dev, since
dma_set_mask() already truncates to supported masks
- Moved struct rmem_assigned_device (rd) logic to
of_reserved_mem_device_init_with_mem() to allow listing the device
- Link to v2: https://lore.kernel.org/r/20260303-b4-dmabuf-heap-coherent-rmem-v2-0-65a4653b3378@redhat.com
Changes in v2:
- Removed dmem charging parts
- Moved coherent heap registering logic to coherent.c
- Made heap device a member of struct dma_heap
- Split dma_heap_add logic into create/register, to be able to
access the stored heap device before it is registered.
- Avoid platform device in favour of heap device
- Added a wrapper to rmem device_init() op
- Switched from late_initcall() to module_init()
- Made the coherent heap driver modular
- Link to v1: https://lore.kernel.org/r/20260224-b4-dmabuf-heap-coherent-rmem-v1-1-dffef43298ac@redhat.com
---
Albert Esteve (5):
dma-buf: dma-heap: split dma_heap_add
of_reserved_mem: add a helper for rmem device_init op
dma: coherent: store reserved memory coherent regions
dma-buf: heaps: Add Coherent heap to dmabuf heaps
dma-buf: heaps: coherent: Turn heap into a module
John Stultz (1):
dma-buf: dma-heap: Keep track of the heap device struct
drivers/dma-buf/dma-heap.c | 138 +++++++++--
drivers/dma-buf/heaps/Kconfig | 9 +
drivers/dma-buf/heaps/Makefile | 1 +
drivers/dma-buf/heaps/coherent_heap.c | 417 ++++++++++++++++++++++++++++++++++
drivers/of/of_reserved_mem.c | 68 ++++--
include/linux/dma-heap.h | 5 +
include/linux/dma-map-ops.h | 7 +
include/linux/of_reserved_mem.h | 8 +
kernel/dma/coherent.c | 34 +++
9 files changed, 640 insertions(+), 47 deletions(-)
---
base-commit: 6de23f81a5e08be8fbf5e8d7e9febc72a5b5f27f
change-id: 20260223-b4-dmabuf-heap-coherent-rmem-91fd3926afe9
Best regards,
--
Albert Esteve <aesteve@redhat.com>
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v3 1/6] dma-buf: dma-heap: Keep track of the heap device struct
2026-03-06 10:36 [PATCH v3 0/6] dma-buf: heaps: add coherent reserved-memory heap Albert Esteve
@ 2026-03-06 10:36 ` Albert Esteve
2026-03-08 23:02 ` Claude review: " Claude Code Review Bot
2026-03-06 10:36 ` [PATCH v3 2/6] dma-buf: dma-heap: split dma_heap_add Albert Esteve
` (5 subsequent siblings)
6 siblings, 1 reply; 17+ messages in thread
From: Albert Esteve @ 2026-03-06 10:36 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan
Cc: linux-kernel, linux-media, dri-devel, linaro-mm-sig, iommu,
devicetree, Albert Esteve, mripard, echanude, John Stultz,
Maxime Ripard
From: John Stultz <john.stultz@linaro.org>
Keep track of the heap device struct.
This will be useful for special DMA allocations
and actions.
Signed-off-by: John Stultz <john.stultz@linaro.org>
Reviewed-by: Maxime Ripard <mripard@kernel.org>
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
drivers/dma-buf/dma-heap.c | 34 ++++++++++++++++++++++++++--------
include/linux/dma-heap.h | 2 ++
2 files changed, 28 insertions(+), 8 deletions(-)
diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
index ac5f8685a6494..1124d63eb1398 100644
--- a/drivers/dma-buf/dma-heap.c
+++ b/drivers/dma-buf/dma-heap.c
@@ -31,6 +31,7 @@
* @heap_devt: heap device node
* @list: list head connecting to list of heaps
* @heap_cdev: heap char device
+ * @heap_dev: heap device
*
* Represents a heap of memory from which buffers can be made.
*/
@@ -41,6 +42,7 @@ struct dma_heap {
dev_t heap_devt;
struct list_head list;
struct cdev heap_cdev;
+ struct device *heap_dev;
};
static LIST_HEAD(heap_list);
@@ -223,6 +225,19 @@ const char *dma_heap_get_name(struct dma_heap *heap)
}
EXPORT_SYMBOL_NS_GPL(dma_heap_get_name, "DMA_BUF_HEAP");
+/**
+ * dma_heap_get_dev() - get device struct for the heap
+ * @heap: DMA-Heap to retrieve device struct from
+ *
+ * Returns:
+ * The device struct for the heap.
+ */
+struct device *dma_heap_get_dev(struct dma_heap *heap)
+{
+ return heap->heap_dev;
+}
+EXPORT_SYMBOL_NS_GPL(dma_heap_get_dev, "DMA_BUF_HEAP");
+
/**
* dma_heap_add - adds a heap to dmabuf heaps
* @exp_info: information needed to register this heap
@@ -230,7 +245,6 @@ EXPORT_SYMBOL_NS_GPL(dma_heap_get_name, "DMA_BUF_HEAP");
struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
{
struct dma_heap *heap, *h, *err_ret;
- struct device *dev_ret;
unsigned int minor;
int ret;
@@ -272,14 +286,14 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
goto err1;
}
- dev_ret = device_create(dma_heap_class,
- NULL,
- heap->heap_devt,
- NULL,
- heap->name);
- if (IS_ERR(dev_ret)) {
+ heap->heap_dev = device_create(dma_heap_class,
+ NULL,
+ heap->heap_devt,
+ NULL,
+ heap->name);
+ if (IS_ERR(heap->heap_dev)) {
pr_err("dma_heap: Unable to create device\n");
- err_ret = ERR_CAST(dev_ret);
+ err_ret = ERR_CAST(heap->heap_dev);
goto err2;
}
@@ -295,6 +309,10 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
}
}
+ /* Make sure it doesn't disappear on us */
+ heap->heap_dev = get_device(heap->heap_dev);
+
+
/* Add heap to the list */
list_add(&heap->list, &heap_list);
mutex_unlock(&heap_list_lock);
diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
index 648328a64b27e..493085e69b70e 100644
--- a/include/linux/dma-heap.h
+++ b/include/linux/dma-heap.h
@@ -12,6 +12,7 @@
#include <linux/types.h>
struct dma_heap;
+struct device;
/**
* struct dma_heap_ops - ops to operate on a given heap
@@ -43,6 +44,7 @@ struct dma_heap_export_info {
void *dma_heap_get_drvdata(struct dma_heap *heap);
const char *dma_heap_get_name(struct dma_heap *heap);
+struct device *dma_heap_get_dev(struct dma_heap *heap);
struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info);
--
2.52.0
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Claude review: dma-buf: dma-heap: Keep track of the heap device struct
2026-03-06 10:36 ` [PATCH v3 1/6] dma-buf: dma-heap: Keep track of the heap device struct Albert Esteve
@ 2026-03-08 23:02 ` Claude Code Review Bot
0 siblings, 0 replies; 17+ messages in thread
From: Claude Code Review Bot @ 2026-03-08 23:02 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
This patch stores `heap->heap_dev` and adds a `dma_heap_get_dev()` accessor. Originally from John Stultz with Maxime's Reviewed-by.
**Issue: Double blank line**
```c
+ /* Make sure it doesn't disappear on us */
+ heap->heap_dev = get_device(heap->heap_dev);
+
+
/* Add heap to the list */
```
There's an extra blank line (cosmetic, minor). This gets removed in patch 2 anyway, but still worth noting for the patch itself.
**Observation**: The `get_device()` call is done *after* the duplicate-name check loop, which means if the name check passes, we do `get_device()`. But if we fail anywhere after `device_create()` in err3, we do `device_destroy()` without a corresponding `put_device()`. This asymmetry is benign in the current patch because `device_destroy()` does the cleanup, but the patch 2 refactoring changes this significantly. Fine for bisectability since patch 2 immediately reworks this.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v3 2/6] dma-buf: dma-heap: split dma_heap_add
2026-03-06 10:36 [PATCH v3 0/6] dma-buf: heaps: add coherent reserved-memory heap Albert Esteve
2026-03-06 10:36 ` [PATCH v3 1/6] dma-buf: dma-heap: Keep track of the heap device struct Albert Esteve
@ 2026-03-06 10:36 ` Albert Esteve
2026-03-08 23:02 ` Claude review: " Claude Code Review Bot
2026-03-06 10:36 ` [PATCH v3 3/6] of_reserved_mem: add a helper for rmem device_init op Albert Esteve
` (4 subsequent siblings)
6 siblings, 1 reply; 17+ messages in thread
From: Albert Esteve @ 2026-03-06 10:36 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan
Cc: linux-kernel, linux-media, dri-devel, linaro-mm-sig, iommu,
devicetree, Albert Esteve, mripard, echanude
Split dma_heap_add() into creation and registration
phases while preserving the ordering between
cdev_add() and device_add(), and ensuring all
device fields are initialised.
This lets callers build a heap and its device,
bind reserved memory, and cleanly unwind on failure
before the heap is registered. It also avoids a window
where userspace can see a heap that exists but isn’t
fully functional. The coherent heap will need this to
bind rmem to the heap device prior to registration.
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
drivers/dma-buf/dma-heap.c | 126 +++++++++++++++++++++++++++++++++++----------
include/linux/dma-heap.h | 3 ++
2 files changed, 103 insertions(+), 26 deletions(-)
diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
index 1124d63eb1398..ba87e5ac16ae2 100644
--- a/drivers/dma-buf/dma-heap.c
+++ b/drivers/dma-buf/dma-heap.c
@@ -238,15 +238,30 @@ struct device *dma_heap_get_dev(struct dma_heap *heap)
}
EXPORT_SYMBOL_NS_GPL(dma_heap_get_dev, "DMA_BUF_HEAP");
+static void dma_heap_dev_release(struct device *dev)
+{
+ struct dma_heap *heap;
+
+ pr_debug("heap device: '%s': %s\n", dev_name(dev), __func__);
+ heap = dev_get_drvdata(dev);
+ kfree(heap->name);
+ kfree(heap);
+ kfree(dev);
+}
+
/**
- * dma_heap_add - adds a heap to dmabuf heaps
- * @exp_info: information needed to register this heap
+ * dma_heap_create() - allocate and initialize a heap object
+ * @exp_info: information needed to create a heap
+ *
+ * Creates a heap instance but does not register it or create device nodes.
+ * Use dma_heap_register() to make it visible to userspace, or
+ * dma_heap_destroy() to release it.
+ *
+ * Returns a heap on success or ERR_PTR(-errno) on failure.
*/
-struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
+struct dma_heap *dma_heap_create(const struct dma_heap_export_info *exp_info)
{
- struct dma_heap *heap, *h, *err_ret;
- unsigned int minor;
- int ret;
+ struct dma_heap *heap;
if (!exp_info->name || !strcmp(exp_info->name, "")) {
pr_err("dma_heap: Cannot add heap without a name\n");
@@ -265,13 +280,41 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
heap->name = exp_info->name;
heap->ops = exp_info->ops;
heap->priv = exp_info->priv;
+ heap->heap_dev = kzalloc_obj(*heap->heap_dev);
+ if (!heap->heap_dev) {
+ kfree(heap);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ device_initialize(heap->heap_dev);
+ dev_set_drvdata(heap->heap_dev, heap);
+
+ dev_set_name(heap->heap_dev, heap->name);
+ heap->heap_dev->class = dma_heap_class;
+ heap->heap_dev->release = dma_heap_dev_release;
+
+ return heap;
+}
+EXPORT_SYMBOL_NS_GPL(dma_heap_create, "DMA_BUF_HEAP");
+
+/**
+ * dma_heap_register() - register a heap with the dma-heap framework
+ * @heap: heap instance created with dma_heap_create()
+ *
+ * Registers the heap, creating its device node and adding it to the heap
+ * list. Returns 0 on success or a negative error code on failure.
+ */
+int dma_heap_register(struct dma_heap *heap)
+{
+ struct dma_heap *h;
+ unsigned int minor;
+ int ret;
/* Find unused minor number */
ret = xa_alloc(&dma_heap_minors, &minor, heap,
XA_LIMIT(0, NUM_HEAP_MINORS - 1), GFP_KERNEL);
if (ret < 0) {
pr_err("dma_heap: Unable to get minor number for heap\n");
- err_ret = ERR_PTR(ret);
goto err0;
}
@@ -282,42 +325,34 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
ret = cdev_add(&heap->heap_cdev, heap->heap_devt, 1);
if (ret < 0) {
pr_err("dma_heap: Unable to add char device\n");
- err_ret = ERR_PTR(ret);
goto err1;
}
- heap->heap_dev = device_create(dma_heap_class,
- NULL,
- heap->heap_devt,
- NULL,
- heap->name);
- if (IS_ERR(heap->heap_dev)) {
- pr_err("dma_heap: Unable to create device\n");
- err_ret = ERR_CAST(heap->heap_dev);
+ heap->heap_dev->devt = heap->heap_devt;
+
+ ret = device_add(heap->heap_dev);
+ if (ret) {
+ pr_err("dma_heap: Unable to add device\n");
goto err2;
}
mutex_lock(&heap_list_lock);
/* check the name is unique */
list_for_each_entry(h, &heap_list, list) {
- if (!strcmp(h->name, exp_info->name)) {
+ if (!strcmp(h->name, heap->name)) {
mutex_unlock(&heap_list_lock);
pr_err("dma_heap: Already registered heap named %s\n",
- exp_info->name);
- err_ret = ERR_PTR(-EINVAL);
+ heap->name);
+ ret = -EINVAL;
goto err3;
}
}
- /* Make sure it doesn't disappear on us */
- heap->heap_dev = get_device(heap->heap_dev);
-
-
/* Add heap to the list */
list_add(&heap->list, &heap_list);
mutex_unlock(&heap_list_lock);
- return heap;
+ return 0;
err3:
device_destroy(dma_heap_class, heap->heap_devt);
@@ -326,8 +361,47 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
err1:
xa_erase(&dma_heap_minors, minor);
err0:
- kfree(heap);
- return err_ret;
+ dma_heap_destroy(heap);
+ return ret;
+}
+EXPORT_SYMBOL_NS_GPL(dma_heap_register, "DMA_BUF_HEAP");
+
+/**
+ * dma_heap_destroy() - release a heap created by dma_heap_create()
+ * @heap: heap instance to release
+ *
+ * Drops the heap device reference; the heap and its device are freed in the
+ * device release path when the last reference is gone.
+ */
+void dma_heap_destroy(struct dma_heap *heap)
+{
+ put_device(heap->heap_dev);
+}
+EXPORT_SYMBOL_NS_GPL(dma_heap_destroy, "DMA_BUF_HEAP");
+
+/**
+ * dma_heap_add - adds a heap to dmabuf heaps
+ * @exp_info: information needed to register this heap
+ */
+struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
+{
+ struct dma_heap *heap;
+ int ret;
+
+ heap = dma_heap_create(exp_info);
+ if (IS_ERR(heap)) {
+ pr_err("dma_heap: failed to create heap (%ld)\n", PTR_ERR(heap));
+ return ERR_CAST(heap);
+ }
+
+ ret = dma_heap_register(heap);
+ if (ret) {
+ pr_err("dma_heap: failed to register heap (%d)\n", ret);
+ dma_heap_destroy(heap);
+ return ERR_PTR(ret);
+ }
+
+ return heap;
}
EXPORT_SYMBOL_NS_GPL(dma_heap_add, "DMA_BUF_HEAP");
diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
index 493085e69b70e..1b0ea43ba66c3 100644
--- a/include/linux/dma-heap.h
+++ b/include/linux/dma-heap.h
@@ -46,6 +46,9 @@ void *dma_heap_get_drvdata(struct dma_heap *heap);
const char *dma_heap_get_name(struct dma_heap *heap);
struct device *dma_heap_get_dev(struct dma_heap *heap);
+struct dma_heap *dma_heap_create(const struct dma_heap_export_info *exp_info);
+int dma_heap_register(struct dma_heap *heap);
+void dma_heap_destroy(struct dma_heap *heap);
struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info);
extern bool mem_accounting;
--
2.52.0
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Claude review: dma-buf: dma-heap: split dma_heap_add
2026-03-06 10:36 ` [PATCH v3 2/6] dma-buf: dma-heap: split dma_heap_add Albert Esteve
@ 2026-03-08 23:02 ` Claude Code Review Bot
0 siblings, 0 replies; 17+ messages in thread
From: Claude Code Review Bot @ 2026-03-08 23:02 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
This is the core refactoring, splitting `dma_heap_add()` into `dma_heap_create()` / `dma_heap_register()` / `dma_heap_destroy()`.
**Issue: `heap->name` is not duplicated — double-free risk**
In `dma_heap_create()`:
```c
heap->name = exp_info->name;
```
The name is stored as a raw pointer from `exp_info`. But `dma_heap_dev_release()` does:
```c
kfree(heap->name);
```
If the caller passed a string literal or stack-allocated name (which `dma_heap_add()` callers like system_heap and cma_heap do with string literals), this `kfree()` will corrupt memory. Looking at the coherent heap (patch 5), `coh_heap->name = kstrdup(rmem->name, GFP_KERNEL)` is passed in, so it's safe for that caller. But *any existing caller* of `dma_heap_add()` will now have their name kfree'd in the release path. The original code (pre-series) never freed `heap->name` — the heap struct was freed directly in the error path, and heaps were never removed. This is a **regression for existing callers**.
Either `dma_heap_create()` should `kstrdup()` the name, or `dma_heap_dev_release()` should not free it.
**Issue: `dma_heap_register()` error path calls `dma_heap_destroy()` — double destroy possible**
In `dma_heap_register()` error paths, `err0` calls `dma_heap_destroy(heap)`:
```c
err0:
dma_heap_destroy(heap);
return ret;
```
And `dma_heap_add()` on register failure also calls:
```c
ret = dma_heap_register(heap);
if (ret) {
pr_err("dma_heap: failed to register heap (%d)\n", ret);
dma_heap_destroy(heap);
return ERR_PTR(ret);
}
```
So `dma_heap_destroy()` (which is `put_device()`) is called **twice** — once inside `dma_heap_register()` and once in `dma_heap_add()`. After `device_initialize()`, the refcount is 1. The first `put_device()` drops it to 0 and triggers the release callback, freeing the heap. The second `put_device()` is a UAF. This is a **bug**.
**Issue: `err3` path does `device_destroy()` then falls through to `dma_heap_destroy()`**
```c
err3:
device_destroy(dma_heap_class, heap->heap_devt);
err2:
cdev_del(&heap->heap_cdev);
err1:
xa_erase(&dma_heap_minors, minor);
err0:
dma_heap_destroy(heap);
```
After `device_add()` succeeds, `err3` calls `device_destroy()` which does `device_unregister()` → `device_del()` + `put_device()`. Then execution falls through to `err0` which calls `dma_heap_destroy()` → `put_device()` again. Since `device_initialize()` sets refcount to 1 and `device_add()` doesn't bump it, the `device_destroy()`'s `put_device()` drops to 0 and frees, then `dma_heap_destroy()` is UAF. This needs restructuring.
**Minor: `kzalloc_obj` usage**
```c
heap->heap_dev = kzalloc_obj(*heap->heap_dev);
```
This appears to be a custom macro. It works but is unconventional — `kzalloc(sizeof(*heap->heap_dev), GFP_KERNEL)` is the standard pattern.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v3 3/6] of_reserved_mem: add a helper for rmem device_init op
2026-03-06 10:36 [PATCH v3 0/6] dma-buf: heaps: add coherent reserved-memory heap Albert Esteve
2026-03-06 10:36 ` [PATCH v3 1/6] dma-buf: dma-heap: Keep track of the heap device struct Albert Esteve
2026-03-06 10:36 ` [PATCH v3 2/6] dma-buf: dma-heap: split dma_heap_add Albert Esteve
@ 2026-03-06 10:36 ` Albert Esteve
2026-03-08 23:02 ` Claude review: " Claude Code Review Bot
2026-03-06 10:36 ` [PATCH v3 4/6] dma: coherent: store reserved memory coherent regions Albert Esteve
` (3 subsequent siblings)
6 siblings, 1 reply; 17+ messages in thread
From: Albert Esteve @ 2026-03-06 10:36 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan
Cc: linux-kernel, linux-media, dri-devel, linaro-mm-sig, iommu,
devicetree, Albert Esteve, mripard, echanude
Add a helper function wrapping the internal reserved-memory
device_init call and expose it externally.
Use the new helper function within of_reserved_mem_device_init_by_idx().
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
drivers/of/of_reserved_mem.c | 68 ++++++++++++++++++++++++++---------------
include/linux/of_reserved_mem.h | 8 +++++
2 files changed, 52 insertions(+), 24 deletions(-)
diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
index 1fd28f8056108..26ca871f7f919 100644
--- a/drivers/of/of_reserved_mem.c
+++ b/drivers/of/of_reserved_mem.c
@@ -605,6 +605,49 @@ struct rmem_assigned_device {
static LIST_HEAD(of_rmem_assigned_device_list);
static DEFINE_MUTEX(of_rmem_assigned_device_mutex);
+/**
+ * of_reserved_mem_device_init_with_mem() - assign reserved memory region to
+ * given device
+ * @dev: Pointer to the device to configure
+ * @rmem: Reserved memory region to assign
+ *
+ * This function assigns respective DMA-mapping operations based on the
+ * reserved memory region already provided in @rmem to the @dev device,
+ * without walking DT nodes.
+ *
+ * Returns error code or zero on success.
+ */
+int of_reserved_mem_device_init_with_mem(struct device *dev,
+ struct reserved_mem *rmem)
+{
+ struct rmem_assigned_device *rd;
+ int ret;
+
+ if (!dev || !rmem || !rmem->ops || !rmem->ops->device_init)
+ return -EINVAL;
+
+ rd = kmalloc_obj(struct rmem_assigned_device);
+ if (!rd)
+ return -ENOMEM;
+
+ ret = rmem->ops->device_init(rmem, dev);
+ if (ret == 0) {
+ rd->dev = dev;
+ rd->rmem = rmem;
+
+ mutex_lock(&of_rmem_assigned_device_mutex);
+ list_add(&rd->list, &of_rmem_assigned_device_list);
+ mutex_unlock(&of_rmem_assigned_device_mutex);
+
+ dev_info(dev, "assigned reserved memory node %s\n", rmem->name);
+ } else {
+ kfree(rd);
+ }
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(of_reserved_mem_device_init_with_mem);
+
/**
* of_reserved_mem_device_init_by_idx() - assign reserved memory region to
* given device
@@ -623,10 +666,8 @@ static DEFINE_MUTEX(of_rmem_assigned_device_mutex);
int of_reserved_mem_device_init_by_idx(struct device *dev,
struct device_node *np, int idx)
{
- struct rmem_assigned_device *rd;
struct device_node *target;
struct reserved_mem *rmem;
- int ret;
if (!np || !dev)
return -EINVAL;
@@ -643,28 +684,7 @@ int of_reserved_mem_device_init_by_idx(struct device *dev,
rmem = of_reserved_mem_lookup(target);
of_node_put(target);
- if (!rmem || !rmem->ops || !rmem->ops->device_init)
- return -EINVAL;
-
- rd = kmalloc_obj(struct rmem_assigned_device);
- if (!rd)
- return -ENOMEM;
-
- ret = rmem->ops->device_init(rmem, dev);
- if (ret == 0) {
- rd->dev = dev;
- rd->rmem = rmem;
-
- mutex_lock(&of_rmem_assigned_device_mutex);
- list_add(&rd->list, &of_rmem_assigned_device_list);
- mutex_unlock(&of_rmem_assigned_device_mutex);
-
- dev_info(dev, "assigned reserved memory node %s\n", rmem->name);
- } else {
- kfree(rd);
- }
-
- return ret;
+ return of_reserved_mem_device_init_with_mem(dev, rmem);
}
EXPORT_SYMBOL_GPL(of_reserved_mem_device_init_by_idx);
diff --git a/include/linux/of_reserved_mem.h b/include/linux/of_reserved_mem.h
index f573423359f48..12f7ddb7ee61f 100644
--- a/include/linux/of_reserved_mem.h
+++ b/include/linux/of_reserved_mem.h
@@ -32,6 +32,8 @@ typedef int (*reservedmem_of_init_fn)(struct reserved_mem *rmem);
#define RESERVEDMEM_OF_DECLARE(name, compat, init) \
_OF_DECLARE(reservedmem, name, compat, init, reservedmem_of_init_fn)
+int of_reserved_mem_device_init_with_mem(struct device *dev,
+ struct reserved_mem *rmem);
int of_reserved_mem_device_init_by_idx(struct device *dev,
struct device_node *np, int idx);
int of_reserved_mem_device_init_by_name(struct device *dev,
@@ -51,6 +53,12 @@ int of_reserved_mem_region_count(const struct device_node *np);
#define RESERVEDMEM_OF_DECLARE(name, compat, init) \
_OF_DECLARE_STUB(reservedmem, name, compat, init, reservedmem_of_init_fn)
+static inline int of_reserved_mem_device_init_with_mem(struct device *dev,
+ struct reserved_mem *rmem)
+{
+ return -EOPNOTSUPP;
+}
+
static inline int of_reserved_mem_device_init_by_idx(struct device *dev,
struct device_node *np, int idx)
{
--
2.52.0
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Claude review: of_reserved_mem: add a helper for rmem device_init op
2026-03-06 10:36 ` [PATCH v3 3/6] of_reserved_mem: add a helper for rmem device_init op Albert Esteve
@ 2026-03-08 23:02 ` Claude Code Review Bot
0 siblings, 0 replies; 17+ messages in thread
From: Claude Code Review Bot @ 2026-03-08 23:02 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Clean refactoring. The new `of_reserved_mem_device_init_with_mem()` extracts the core logic from `of_reserved_mem_device_init_by_idx()`.
**Minor: NULL check change in behavior**
The original code in `of_reserved_mem_device_init_by_idx()` checked:
```c
if (!rmem || !rmem->ops || !rmem->ops->device_init)
return -EINVAL;
```
Now `of_reserved_mem_device_init_by_idx()` calls the new helper which does these checks, but `of_reserved_mem_lookup()` can return NULL, and the new helper also checks `!rmem`, so the behavior is preserved. The stub for `!CONFIG_OF_RESERVED_MEM` returns `-EOPNOTSUPP` instead of the previous `-ENOSYS` (from `of_reserved_mem_device_init_by_idx`'s stub) — this is fine and arguably more correct.
No issues found. Well done.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v3 4/6] dma: coherent: store reserved memory coherent regions
2026-03-06 10:36 [PATCH v3 0/6] dma-buf: heaps: add coherent reserved-memory heap Albert Esteve
` (2 preceding siblings ...)
2026-03-06 10:36 ` [PATCH v3 3/6] of_reserved_mem: add a helper for rmem device_init op Albert Esteve
@ 2026-03-06 10:36 ` Albert Esteve
2026-03-08 23:02 ` Claude review: " Claude Code Review Bot
2026-03-06 10:36 ` [PATCH v3 5/6] dma-buf: heaps: Add Coherent heap to dmabuf heaps Albert Esteve
` (2 subsequent siblings)
6 siblings, 1 reply; 17+ messages in thread
From: Albert Esteve @ 2026-03-06 10:36 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan
Cc: linux-kernel, linux-media, dri-devel, linaro-mm-sig, iommu,
devicetree, Albert Esteve, mripard, echanude
Create the logic to store coherent reserved memory regions
within the coherent DMA code, and provide an iterator,
dma_coherent_get_reserved_region(), that lets consumers
of this API retrieve the regions.
Note: since the consumer of this iterator will be the
coherent-memory dmabuf heap module, this commit introduces
a check for CONFIG_DMABUF_HEAPS_COHERENT (defined in the
subsequent patch) to maintain a clean split between core
kernel code and heap module code.
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
include/linux/dma-map-ops.h | 7 +++++++
kernel/dma/coherent.c | 34 ++++++++++++++++++++++++++++++++++
2 files changed, 41 insertions(+)
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 60b63756df821..c87e5e44e5383 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -12,6 +12,7 @@
struct cma;
struct iommu_ops;
+struct reserved_mem;
struct dma_map_ops {
void *(*alloc)(struct device *dev, size_t size,
@@ -161,6 +162,7 @@ int dma_alloc_from_dev_coherent(struct device *dev, ssize_t size,
int dma_release_from_dev_coherent(struct device *dev, int order, void *vaddr);
int dma_mmap_from_dev_coherent(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, size_t size, int *ret);
+struct reserved_mem *dma_coherent_get_reserved_region(unsigned int idx);
#else
static inline int dma_declare_coherent_memory(struct device *dev,
phys_addr_t phys_addr, dma_addr_t device_addr, size_t size)
@@ -172,6 +174,11 @@ static inline int dma_declare_coherent_memory(struct device *dev,
#define dma_release_from_dev_coherent(dev, order, vaddr) (0)
#define dma_mmap_from_dev_coherent(dev, vma, vaddr, order, ret) (0)
static inline void dma_release_coherent_memory(struct device *dev) { }
+static inline
+struct reserved_mem *dma_coherent_get_reserved_region(unsigned int idx)
+{
+ return NULL;
+}
#endif /* CONFIG_DMA_DECLARE_COHERENT */
#ifdef CONFIG_DMA_GLOBAL_POOL
diff --git a/kernel/dma/coherent.c b/kernel/dma/coherent.c
index 1147497bc512c..d0d0979ffb153 100644
--- a/kernel/dma/coherent.c
+++ b/kernel/dma/coherent.c
@@ -9,6 +9,7 @@
#include <linux/module.h>
#include <linux/dma-direct.h>
#include <linux/dma-map-ops.h>
+#include <linux/dma-heap.h>
struct dma_coherent_mem {
void *virt_base;
@@ -334,6 +335,31 @@ static phys_addr_t dma_reserved_default_memory_base __initdata;
static phys_addr_t dma_reserved_default_memory_size __initdata;
#endif
+#define MAX_COHERENT_REGIONS 64
+
+static struct reserved_mem *rmem_coherent_areas[MAX_COHERENT_REGIONS];
+static unsigned int rmem_coherent_areas_num;
+
+static int rmem_coherent_insert_area(struct reserved_mem *rmem)
+{
+ if (rmem_coherent_areas_num >= MAX_COHERENT_REGIONS) {
+ pr_warn("Deferred heap areas list full, dropping %s\n",
+ rmem->name ? rmem->name : "unknown");
+ return -EINVAL;
+ }
+ rmem_coherent_areas[rmem_coherent_areas_num++] = rmem;
+ return 0;
+}
+
+struct reserved_mem *dma_coherent_get_reserved_region(unsigned int idx)
+{
+ if (idx >= rmem_coherent_areas_num)
+ return NULL;
+
+ return rmem_coherent_areas[idx];
+}
+EXPORT_SYMBOL_GPL(dma_coherent_get_reserved_region);
+
static int rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev)
{
struct dma_coherent_mem *mem = rmem->priv;
@@ -393,6 +419,14 @@ static int __init rmem_dma_setup(struct reserved_mem *rmem)
rmem->ops = &rmem_dma_ops;
pr_info("Reserved memory: created DMA memory pool at %pa, size %ld MiB\n",
&rmem->base, (unsigned long)rmem->size / SZ_1M);
+
+ if (IS_ENABLED(CONFIG_DMABUF_HEAPS_COHERENT)) {
+ int ret = rmem_coherent_insert_area(rmem);
+
+ if (ret)
+ pr_warn("Reserved memory: failed to store coherent area for %s (%d)\n",
+ rmem->name ? rmem->name : "unknown", ret);
+ }
return 0;
}
--
2.52.0
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Claude review: dma: coherent: store reserved memory coherent regions
2026-03-06 10:36 ` [PATCH v3 4/6] dma: coherent: store reserved memory coherent regions Albert Esteve
@ 2026-03-08 23:02 ` Claude Code Review Bot
0 siblings, 0 replies; 17+ messages in thread
From: Claude Code Review Bot @ 2026-03-08 23:02 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
**Issue: Static array with hardcoded limit**
```c
#define MAX_COHERENT_REGIONS 64
static struct reserved_mem *rmem_coherent_areas[MAX_COHERENT_REGIONS];
static unsigned int rmem_coherent_areas_num;
```
64 regions is probably sufficient in practice, but a static array is fragile if a platform ever defines more. Since `rmem_dma_setup()` is called early (from `__reservedmem_of_table`), dynamic allocation may not be available yet, so the static array is acceptable. However, the error return value should be `-ENOSPC` rather than `-EINVAL`:
```c
if (rmem_coherent_areas_num >= MAX_COHERENT_REGIONS) {
...
return -EINVAL;
}
```
`-ENOSPC` (or `-ENOMEM`) describes the actual failure: the list is full, not that the input was invalid.
**Issue: `#include <linux/dma-heap.h>` added to coherent.c but not used**
```c
+#include <linux/dma-heap.h>
```
This header is added but nothing from it is used in this file. The `IS_ENABLED(CONFIG_DMABUF_HEAPS_COHERENT)` check doesn't require it. This include should be dropped.
**Design concern: `IS_ENABLED()` check couples DMA core to heap config**
```c
if (IS_ENABLED(CONFIG_DMABUF_HEAPS_COHERENT)) {
int ret = rmem_coherent_insert_area(rmem);
```
This is noted in the commit message. `IS_ENABLED()` is a compile-time constant, so the guarded branch costs nothing at runtime, but it still couples `kernel/dma/coherent.c` to a dma-buf heap Kconfig symbol, and the storage array and insertion helper are built unconditionally. An alternative that avoids the coupling: since the reserved_mem structs remain available after init, the heap module could walk the reserved-memory regions itself instead of relying on a side array populated by `rmem_dma_setup()`. That said, the approach in the patch works.
**Thread safety**: The array is populated at `__init` time (single-threaded) and read later by the module init. No races possible. Fine.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v3 5/6] dma-buf: heaps: Add Coherent heap to dmabuf heaps
2026-03-06 10:36 [PATCH v3 0/6] dma-buf: heaps: add coherent reserved-memory heap Albert Esteve
` (3 preceding siblings ...)
2026-03-06 10:36 ` [PATCH v3 4/6] dma: coherent: store reserved memory coherent regions Albert Esteve
@ 2026-03-06 10:36 ` Albert Esteve
2026-03-08 23:02 ` Claude review: " Claude Code Review Bot
2026-03-06 10:36 ` [PATCH v3 6/6] dma-buf: heaps: coherent: Turn heap into a module Albert Esteve
2026-03-08 23:02 ` Claude review: dma-buf: heaps: add coherent reserved-memory heap Claude Code Review Bot
6 siblings, 1 reply; 17+ messages in thread
From: Albert Esteve @ 2026-03-06 10:36 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan
Cc: linux-kernel, linux-media, dri-devel, linaro-mm-sig, iommu,
devicetree, Albert Esteve, mripard, echanude
Expose DT coherent reserved-memory pools ("shared-dma-pool"
without "reusable") as dma-buf heaps, creating one heap per
region so userspace can allocate from the exact device-local
pool intended for coherent DMA.
This is a missing backend in the long-term effort to steer
userspace buffer allocations (DRM, v4l2, dma-buf heaps)
through heaps for clearer cgroup accounting. CMA and system
heaps already exist; non-reusable coherent reserved memory
did not.
The heap binds the heap device to each memory region so
coherent allocations use the correct dev->dma_mem, and
it defers registration until module_init when normal
allocators are available.
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
drivers/dma-buf/heaps/Kconfig | 9 +
drivers/dma-buf/heaps/Makefile | 1 +
drivers/dma-buf/heaps/coherent_heap.c | 414 ++++++++++++++++++++++++++++++++++
3 files changed, 424 insertions(+)
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index a5eef06c42264..aeb475e585048 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -12,3 +12,12 @@ config DMABUF_HEAPS_CMA
Choose this option to enable dma-buf CMA heap. This heap is backed
by the Contiguous Memory Allocator (CMA). If your system has these
regions, you should say Y here.
+
+config DMABUF_HEAPS_COHERENT
+ bool "DMA-BUF Coherent Reserved-Memory Heap"
+ depends on DMABUF_HEAPS && OF_RESERVED_MEM && DMA_DECLARE_COHERENT
+ help
+ Choose this option to enable coherent reserved-memory dma-buf heaps.
+ This heap is backed by non-reusable DT "shared-dma-pool" regions.
+ If your system defines coherent reserved-memory regions, you should
+ say Y here.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index 974467791032f..96bda7a65f041 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,3 +1,4 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o
+obj-$(CONFIG_DMABUF_HEAPS_COHERENT) += coherent_heap.o
diff --git a/drivers/dma-buf/heaps/coherent_heap.c b/drivers/dma-buf/heaps/coherent_heap.c
new file mode 100644
index 0000000000000..55f53f87c4c15
--- /dev/null
+++ b/drivers/dma-buf/heaps/coherent_heap.c
@@ -0,0 +1,414 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMABUF heap for coherent reserved-memory regions
+ *
+ * Copyright (C) 2026 Red Hat, Inc.
+ * Author: Albert Esteve <aesteve@redhat.com>
+ *
+ */
+
+#include <linux/dma-buf.h>
+#include <linux/dma-heap.h>
+#include <linux/dma-map-ops.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/highmem.h>
+#include <linux/iosys-map.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+
+struct coherent_heap {
+ struct dma_heap *heap;
+ struct reserved_mem *rmem;
+ char *name;
+};
+
+struct coherent_heap_buffer {
+ struct coherent_heap *heap;
+ struct list_head attachments;
+ struct mutex lock;
+ unsigned long len;
+ dma_addr_t dma_addr;
+ void *alloc_vaddr;
+ struct page **pages;
+ pgoff_t pagecount;
+ int vmap_cnt;
+ void *vaddr;
+};
+
+struct dma_heap_attachment {
+ struct device *dev;
+ struct sg_table table;
+ struct list_head list;
+ bool mapped;
+};
+
+static int coherent_heap_attach(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *attachment)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a;
+ int ret;
+
+ a = kzalloc_obj(*a);
+ if (!a)
+ return -ENOMEM;
+
+ ret = sg_alloc_table_from_pages(&a->table, buffer->pages,
+ buffer->pagecount, 0,
+ buffer->pagecount << PAGE_SHIFT,
+ GFP_KERNEL);
+ if (ret) {
+ kfree(a);
+ return ret;
+ }
+
+ a->dev = attachment->dev;
+ INIT_LIST_HEAD(&a->list);
+ a->mapped = false;
+
+ attachment->priv = a;
+
+ mutex_lock(&buffer->lock);
+ list_add(&a->list, &buffer->attachments);
+ mutex_unlock(&buffer->lock);
+
+ return 0;
+}
+
+static void coherent_heap_detach(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *attachment)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a = attachment->priv;
+
+ mutex_lock(&buffer->lock);
+ list_del(&a->list);
+ mutex_unlock(&buffer->lock);
+
+ sg_free_table(&a->table);
+ kfree(a);
+}
+
+static struct sg_table *coherent_heap_map_dma_buf(struct dma_buf_attachment *attachment,
+ enum dma_data_direction direction)
+{
+ struct dma_heap_attachment *a = attachment->priv;
+ struct sg_table *table = &a->table;
+ int ret;
+
+ ret = dma_map_sgtable(attachment->dev, table, direction, 0);
+ if (ret)
+ return ERR_PTR(-ENOMEM);
+ a->mapped = true;
+
+ return table;
+}
+
+static void coherent_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
+ struct sg_table *table,
+ enum dma_data_direction direction)
+{
+ struct dma_heap_attachment *a = attachment->priv;
+
+ a->mapped = false;
+ dma_unmap_sgtable(attachment->dev, table, direction, 0);
+}
+
+static int coherent_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+ enum dma_data_direction direction)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a;
+
+ mutex_lock(&buffer->lock);
+ if (buffer->vmap_cnt)
+ invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+ list_for_each_entry(a, &buffer->attachments, list) {
+ if (!a->mapped)
+ continue;
+ dma_sync_sgtable_for_cpu(a->dev, &a->table, direction);
+ }
+ mutex_unlock(&buffer->lock);
+
+ return 0;
+}
+
+static int coherent_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
+ enum dma_data_direction direction)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a;
+
+ mutex_lock(&buffer->lock);
+ if (buffer->vmap_cnt)
+ flush_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+ list_for_each_entry(a, &buffer->attachments, list) {
+ if (!a->mapped)
+ continue;
+ dma_sync_sgtable_for_device(a->dev, &a->table, direction);
+ }
+ mutex_unlock(&buffer->lock);
+
+ return 0;
+}
+
+static int coherent_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct coherent_heap *coh_heap = buffer->heap;
+ struct device *heap_dev = dma_heap_get_dev(coh_heap->heap);
+
+ return dma_mmap_coherent(heap_dev, vma, buffer->alloc_vaddr,
+ buffer->dma_addr, buffer->len);
+}
+
+static void *coherent_heap_do_vmap(struct coherent_heap_buffer *buffer)
+{
+ void *vaddr;
+
+ vaddr = vmap(buffer->pages, buffer->pagecount, VM_MAP, PAGE_KERNEL);
+ if (!vaddr)
+ return ERR_PTR(-ENOMEM);
+
+ return vaddr;
+}
+
+static int coherent_heap_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ void *vaddr;
+ int ret = 0;
+
+ mutex_lock(&buffer->lock);
+ if (buffer->vmap_cnt) {
+ buffer->vmap_cnt++;
+ iosys_map_set_vaddr(map, buffer->vaddr);
+ goto out;
+ }
+
+ vaddr = coherent_heap_do_vmap(buffer);
+ if (IS_ERR(vaddr)) {
+ ret = PTR_ERR(vaddr);
+ goto out;
+ }
+
+ buffer->vaddr = vaddr;
+ buffer->vmap_cnt++;
+ iosys_map_set_vaddr(map, buffer->vaddr);
+out:
+ mutex_unlock(&buffer->lock);
+
+ return ret;
+}
+
+static void coherent_heap_vunmap(struct dma_buf *dmabuf, struct iosys_map *map)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+
+ mutex_lock(&buffer->lock);
+ if (!--buffer->vmap_cnt) {
+ vunmap(buffer->vaddr);
+ buffer->vaddr = NULL;
+ }
+ mutex_unlock(&buffer->lock);
+ iosys_map_clear(map);
+}
+
+static void coherent_heap_dma_buf_release(struct dma_buf *dmabuf)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct coherent_heap *coh_heap = buffer->heap;
+ struct device *heap_dev = dma_heap_get_dev(coh_heap->heap);
+
+ if (buffer->vmap_cnt > 0) {
+ WARN(1, "%s: buffer still mapped in the kernel\n", __func__);
+ vunmap(buffer->vaddr);
+ buffer->vaddr = NULL;
+ buffer->vmap_cnt = 0;
+ }
+
+ if (buffer->alloc_vaddr)
+ dma_free_coherent(heap_dev, buffer->len, buffer->alloc_vaddr,
+ buffer->dma_addr);
+ kfree(buffer->pages);
+ kfree(buffer);
+}
+
+static const struct dma_buf_ops coherent_heap_buf_ops = {
+ .attach = coherent_heap_attach,
+ .detach = coherent_heap_detach,
+ .map_dma_buf = coherent_heap_map_dma_buf,
+ .unmap_dma_buf = coherent_heap_unmap_dma_buf,
+ .begin_cpu_access = coherent_heap_dma_buf_begin_cpu_access,
+ .end_cpu_access = coherent_heap_dma_buf_end_cpu_access,
+ .mmap = coherent_heap_mmap,
+ .vmap = coherent_heap_vmap,
+ .vunmap = coherent_heap_vunmap,
+ .release = coherent_heap_dma_buf_release,
+};
+
+static struct dma_buf *coherent_heap_allocate(struct dma_heap *heap,
+ unsigned long len,
+ u32 fd_flags,
+ u64 heap_flags)
+{
+ struct coherent_heap *coh_heap;
+ struct coherent_heap_buffer *buffer;
+ struct device *heap_dev;
+ DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+ size_t size = PAGE_ALIGN(len);
+ pgoff_t pagecount = size >> PAGE_SHIFT;
+ struct dma_buf *dmabuf;
+ int ret = -ENOMEM;
+ pgoff_t pg;
+
+ coh_heap = dma_heap_get_drvdata(heap);
+ if (!coh_heap)
+ return ERR_PTR(-EINVAL);
+
+ heap_dev = dma_heap_get_dev(coh_heap->heap);
+ if (!heap_dev)
+ return ERR_PTR(-ENODEV);
+
+ buffer = kzalloc_obj(*buffer);
+ if (!buffer)
+ return ERR_PTR(-ENOMEM);
+
+ INIT_LIST_HEAD(&buffer->attachments);
+ mutex_init(&buffer->lock);
+ buffer->len = size;
+ buffer->heap = coh_heap;
+ buffer->pagecount = pagecount;
+
+ buffer->alloc_vaddr = dma_alloc_coherent(heap_dev, buffer->len,
+ &buffer->dma_addr, GFP_KERNEL);
+ if (!buffer->alloc_vaddr) {
+ ret = -ENOMEM;
+ goto free_buffer;
+ }
+
+ buffer->pages = kmalloc_array(pagecount, sizeof(*buffer->pages),
+ GFP_KERNEL);
+ if (!buffer->pages) {
+ ret = -ENOMEM;
+ goto free_dma;
+ }
+
+ for (pg = 0; pg < pagecount; pg++)
+ buffer->pages[pg] = virt_to_page((char *)buffer->alloc_vaddr +
+ (pg * PAGE_SIZE));
+
+ /* create the dmabuf */
+ exp_info.exp_name = dma_heap_get_name(heap);
+ exp_info.ops = &coherent_heap_buf_ops;
+ exp_info.size = buffer->len;
+ exp_info.flags = fd_flags;
+ exp_info.priv = buffer;
+ dmabuf = dma_buf_export(&exp_info);
+ if (IS_ERR(dmabuf)) {
+ ret = PTR_ERR(dmabuf);
+ goto free_pages;
+ }
+ return dmabuf;
+
+free_pages:
+ kfree(buffer->pages);
+free_dma:
+ dma_free_coherent(heap_dev, buffer->len, buffer->alloc_vaddr,
+ buffer->dma_addr);
+free_buffer:
+ kfree(buffer);
+ return ERR_PTR(ret);
+}
+
+static const struct dma_heap_ops coherent_heap_ops = {
+ .allocate = coherent_heap_allocate,
+};
+
+static int __coherent_heap_register(struct reserved_mem *rmem)
+{
+ struct dma_heap_export_info exp_info;
+ struct coherent_heap *coh_heap;
+ struct device *heap_dev;
+ int ret;
+
+ if (!rmem || !rmem->name)
+ return -EINVAL;
+
+ coh_heap = kzalloc_obj(*coh_heap);
+ if (!coh_heap)
+ return -ENOMEM;
+
+ coh_heap->rmem = rmem;
+ coh_heap->name = kstrdup(rmem->name, GFP_KERNEL);
+ if (!coh_heap->name) {
+ ret = -ENOMEM;
+ goto free_coherent_heap;
+ }
+
+ exp_info.name = coh_heap->name;
+ exp_info.ops = &coherent_heap_ops;
+ exp_info.priv = coh_heap;
+
+ coh_heap->heap = dma_heap_create(&exp_info);
+ if (IS_ERR(coh_heap->heap)) {
+ ret = PTR_ERR(coh_heap->heap);
+ goto free_name;
+ }
+
+ heap_dev = dma_heap_get_dev(coh_heap->heap);
+ ret = dma_coerce_mask_and_coherent(heap_dev, DMA_BIT_MASK(64));
+ if (ret) {
+ pr_err("coherent_heap: failed to set DMA mask (%d)\n", ret);
+ goto destroy_heap;
+ }
+
+ ret = of_reserved_mem_device_init_with_mem(heap_dev, rmem);
+ if (ret) {
+ pr_err("coherent_heap: failed to initialize memory (%d)\n", ret);
+ goto destroy_heap;
+ }
+
+ ret = dma_heap_register(coh_heap->heap);
+ if (ret) {
+ pr_err("coherent_heap: failed to register heap (%d)\n", ret);
+ goto destroy_heap;
+ }
+
+ return 0;
+
+destroy_heap:
+ dma_heap_destroy(coh_heap->heap);
+ coh_heap->heap = NULL;
+free_name:
+ kfree(coh_heap->name);
+free_coherent_heap:
+ kfree(coh_heap);
+
+ return ret;
+}
+
+static int __init coherent_heap_register(void)
+{
+ struct reserved_mem *rmem;
+ unsigned int i;
+ int ret;
+
+ for (i = 0; (rmem = dma_coherent_get_reserved_region(i)) != NULL; i++) {
+ ret = __coherent_heap_register(rmem);
+ if (ret) {
+ pr_warn("Failed to add coherent heap %s",
+ rmem->name ? rmem->name : "unknown");
+ continue;
+ }
+ }
+
+ return 0;
+}
+module_init(coherent_heap_register);
+MODULE_DESCRIPTION("DMA-BUF heap for coherent reserved-memory regions");
--
2.52.0
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Claude review: dma-buf: heaps: Add Coherent heap to dmabuf heaps
2026-03-06 10:36 ` [PATCH v3 5/6] dma-buf: heaps: Add Coherent heap to dmabuf heaps Albert Esteve
@ 2026-03-08 23:02 ` Claude Code Review Bot
0 siblings, 0 replies; 17+ messages in thread
From: Claude Code Review Bot @ 2026-03-08 23:02 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
This is the main heap implementation.
**Issue: `begin_cpu_access` / `end_cpu_access` syncing for coherent memory**
```c
static int coherent_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
enum dma_data_direction direction)
{
...
if (buffer->vmap_cnt)
invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
list_for_each_entry(a, &buffer->attachments, list) {
if (!a->mapped)
continue;
dma_sync_sgtable_for_cpu(a->dev, &a->table, direction);
}
```
For *coherent* memory, the CPU and the device see the same data without cache maintenance, so the `dma_sync_sgtable_for_cpu/device()` calls and the `invalidate_kernel_vmap_range()`/`flush_kernel_vmap_range()` calls are unnecessary overhead for these allocations. The CMA heap needs them because CMA memory isn't inherently coherent, but coherent reserved-memory is coherent by definition. Either document why the sync operations are present (are they needed on architectures where the pool isn't actually cache-coherent?) or drop them.
**Issue: `virt_to_page()` on `dma_alloc_coherent()` return value**
```c
for (pg = 0; pg < pagecount; pg++)
buffer->pages[pg] = virt_to_page((char *)buffer->alloc_vaddr +
(pg * PAGE_SIZE));
```
`dma_alloc_coherent()` returns a virtual address, but on some architectures (notably ARM with `no-map` regions), this might return an ioremap'd address rather than a regular kernel linear map address. `virt_to_page()` on such addresses is **undefined behavior** and will produce garbage `struct page *` pointers. This is a significant concern on ARM platforms where these coherent reserved-memory regions are commonly used with `no-map`. The pages obtained this way are then used in `sg_alloc_table_from_pages()` during attach, which would corrupt the scatterlist.
Consider using `pfn_to_page(PHYS_PFN(buffer->dma_addr + pg * PAGE_SIZE))` or directly computing pages from the physical address of the rmem region instead.
**Issue: `coherent_heap_map_dma_buf` returns `-ENOMEM` for all errors**
```c
ret = dma_map_sgtable(attachment->dev, table, direction, 0);
if (ret)
return ERR_PTR(-ENOMEM);
```
Should return `ERR_PTR(ret)` to preserve the actual error code.
**Issue: Memory leak of `coh_heap->name` on registration failure**
In `__coherent_heap_register()`:
```c
destroy_heap:
dma_heap_destroy(coh_heap->heap);
coh_heap->heap = NULL;
free_name:
kfree(coh_heap->name);
```
But `dma_heap_destroy()` → `put_device()` → `dma_heap_dev_release()` will `kfree(heap->name)`, which is the **same pointer** as `coh_heap->name` (since `exp_info.name = coh_heap->name` and `dma_heap_create` stores it). So `free_name` does a double-free. Either `dma_heap_dev_release()` should not free the name (see patch 2 issue), or this `kfree(coh_heap->name)` should be removed.
**Issue: No `module_exit` — intentional, but undocumented**
The commit message (patch 6) says the heap can't be unloaded. Without a `module_exit()` handler the module is permanently pinned once loaded, which is the intent here, but a brief comment explaining why unloading is unsafe would help future readers.
**Minor: Missing newline in pr_warn**
```c
pr_warn("Failed to add coherent heap %s",
rmem->name ? rmem->name : "unknown");
```
Missing `\n` at the end of the format string.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v3 6/6] dma-buf: heaps: coherent: Turn heap into a module
2026-03-06 10:36 [PATCH v3 0/6] dma-buf: heaps: add coherent reserved-memory heap Albert Esteve
` (4 preceding siblings ...)
2026-03-06 10:36 ` [PATCH v3 5/6] dma-buf: heaps: Add Coherent heap to dmabuf heaps Albert Esteve
@ 2026-03-06 10:36 ` Albert Esteve
2026-03-08 23:02 ` Claude review: " Claude Code Review Bot
2026-03-08 23:02 ` Claude review: dma-buf: heaps: add coherent reserved-memory heap Claude Code Review Bot
6 siblings, 1 reply; 17+ messages in thread
From: Albert Esteve @ 2026-03-06 10:36 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan
Cc: linux-kernel, linux-media, dri-devel, linaro-mm-sig, iommu,
devicetree, Albert Esteve, mripard, echanude
Following the current effort to make the CMA heap a module,
do the same for the Coherent heap: change the Kconfig
option to a tristate and import the required dma-buf
symbol namespaces.
This heap cannot be unloaded (as with the CMA heap),
since much of the infrastructure needed to make unloading
safe is still missing.
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
drivers/dma-buf/heaps/Kconfig | 2 +-
drivers/dma-buf/heaps/coherent_heap.c | 3 +++
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index aeb475e585048..2f84a1018b900 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -14,7 +14,7 @@ config DMABUF_HEAPS_CMA
regions, you should say Y here.
config DMABUF_HEAPS_COHERENT
- bool "DMA-BUF Coherent Reserved-Memory Heap"
+ tristate "DMA-BUF Coherent Reserved-Memory Heap"
depends on DMABUF_HEAPS && OF_RESERVED_MEM && DMA_DECLARE_COHERENT
help
Choose this option to enable coherent reserved-memory dma-buf heaps.
diff --git a/drivers/dma-buf/heaps/coherent_heap.c b/drivers/dma-buf/heaps/coherent_heap.c
index 55f53f87c4c15..fdb3f5d907e88 100644
--- a/drivers/dma-buf/heaps/coherent_heap.c
+++ b/drivers/dma-buf/heaps/coherent_heap.c
@@ -412,3 +412,6 @@ static int __init coherent_heap_register(void)
}
module_init(coherent_heap_register);
MODULE_DESCRIPTION("DMA-BUF heap for coherent reserved-memory regions");
+MODULE_LICENSE("GPL");
+MODULE_IMPORT_NS("DMA_BUF");
+MODULE_IMPORT_NS("DMA_BUF_HEAP");
--
2.52.0
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Claude review: dma-buf: heaps: coherent: Turn heap into a module
2026-03-06 10:36 ` [PATCH v3 6/6] dma-buf: heaps: coherent: Turn heap into a module Albert Esteve
@ 2026-03-08 23:02 ` Claude Code Review Bot
0 siblings, 0 replies; 17+ messages in thread
From: Claude Code Review Bot @ 2026-03-08 23:02 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Straightforward change from `bool` to `tristate` and adding `MODULE_LICENSE`/`MODULE_IMPORT_NS`.
**Issue: Missing `MODULE_LICENSE` should have been in patch 5**
Patch 5 adds `MODULE_DESCRIPTION` but not `MODULE_LICENSE("GPL")`. Because the Kconfig option is still `bool` in patch 5, the code only ever builds in and nothing breaks, but it is good practice (and keeps bisection clean) to include the license from the start.
**No `module_exit`**: As noted above, the module cannot be unloaded once loaded. The CMA heap series presumably handles this the same way, but the limitation should be called out explicitly.
---
### Summary of key issues:
1. **Bug (patch 2)**: Double `put_device()` in `dma_heap_register()` error path + `dma_heap_add()` error path causes UAF
2. **Bug (patch 2)**: `dma_heap_dev_release()` does `kfree(heap->name)` but `dma_heap_create()` doesn't `kstrdup()` the name — existing callers pass string literals
3. **Bug (patch 2 + 5)**: `err3` path in `dma_heap_register()` does `device_destroy()` + `dma_heap_destroy()` — double put/free
4. **Bug (patch 5)**: Double-free of `coh_heap->name` in `__coherent_heap_register()` error path
5. **Concern (patch 5)**: `virt_to_page()` on `dma_alloc_coherent()` return is unsafe on architectures with `no-map` coherent regions
6. **Minor (patch 5)**: Unnecessary cache sync ops for coherent memory, swallowed error code in `map_dma_buf`, missing `\n` in `pr_warn`
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 17+ messages in thread
* Claude review: dma-buf: heaps: add coherent reserved-memory heap
2026-03-06 10:36 [PATCH v3 0/6] dma-buf: heaps: add coherent reserved-memory heap Albert Esteve
` (5 preceding siblings ...)
2026-03-06 10:36 ` [PATCH v3 6/6] dma-buf: heaps: coherent: Turn heap into a module Albert Esteve
@ 2026-03-08 23:02 ` Claude Code Review Bot
6 siblings, 0 replies; 17+ messages in thread
From: Claude Code Review Bot @ 2026-03-08 23:02 UTC (permalink / raw)
To: dri-devel-reviews
Overall Series Review
Subject: dma-buf: heaps: add coherent reserved-memory heap
Author: Albert Esteve <aesteve@redhat.com>
Patches: 7
Reviewed: 2026-03-09T09:02:39.213764
---
This v3 series adds a new dma-buf heap backed by coherent reserved-memory regions (DT "shared-dma-pool" without "reusable"). The motivation is sound: enabling cgroup accounting for coherent reserved-memory allocations by routing them through dma-buf heaps. The series is well-structured and splits the work into logical steps: tracking the heap device, splitting heap creation/registration, adding an rmem helper, storing coherent regions, the actual heap driver, and modularization.
However, there are several issues, ranging from a potential memory-corruption/UAF bug in the dma-heap core refactoring, to questionable design choices in the coherent region tracking, to missing safety checks in the coherent heap driver itself.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v2 0/6] dma-buf: heaps: add coherent reserved-memory heap
@ 2026-03-03 12:33 Albert Esteve
2026-03-03 12:33 ` [PATCH v2 4/6] dma-buf: heaps: Add Coherent heap to dmabuf heaps Albert Esteve
0 siblings, 1 reply; 17+ messages in thread
From: Albert Esteve @ 2026-03-03 12:33 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan
Cc: linux-kernel, linux-media, dri-devel, linaro-mm-sig, iommu,
devicetree, Albert Esteve, echanude, mripard, John Stultz
This patch series introduces a new heap driver to expose DT non‑reusable
"shared-dma-pool" coherent regions as dma-buf heaps, so userspace can
allocate buffers from each reserved, named region.
Because these regions are device‑dependent, each heap instance binds a
heap device to its reserved‑mem region via a newly introduced helper
function -namely, of_reserved_mem_device_init_with_mem()- so coherent
allocations use the correct dev->dma_mem.
Charging to cgroups for these buffers is intentionally left out to keep
review focused on the new heap; I plan to follow up based on Eric’s [1]
and Maxime’s [2] work on dmem charging from userspace.
This series also makes the new heap driver modular, in line with the CMA
heap change in [3].
[1] https://lore.kernel.org/all/20260218-dmabuf-heap-cma-dmem-v2-0-b249886fb7b2@redhat.com/
[2] https://lore.kernel.org/all/20250310-dmem-cgroups-v1-0-2984c1bc9312@kernel.org/
[3] https://lore.kernel.org/all/20260303-dma-buf-heaps-as-modules-v3-0-24344812c707@kernel.org/
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
Changes in v2:
- Removed dmem charging parts
- Moved coherent heap registering logic to coherent.c
- Made heap device a member of struct dma_heap
- Split dma_heap_add logic into create/register, to be able to
access the stored heap device before registered.
- Avoid platform device in favour of heap device
- Added a wrapper to rmem device_init() op
- Switched from late_initcall() to module_init()
- Made the coherent heap driver modular
- Link to v1: https://lore.kernel.org/r/20260224-b4-dmabuf-heap-coherent-rmem-v1-1-dffef43298ac@redhat.com
---
Albert Esteve (5):
dma-buf: dma-heap: split dma_heap_add
of_reserved_mem: add a helper for rmem device_init op
dma-buf: heaps: Add Coherent heap to dmabuf heaps
dma: coherent: register to coherent heap
dma-buf: heaps: coherent: Turn heap into a module
John Stultz (1):
dma-buf: dma-heap: Keep track of the heap device struct
drivers/dma-buf/dma-heap.c | 138 +++++++++--
drivers/dma-buf/heaps/Kconfig | 9 +
drivers/dma-buf/heaps/Makefile | 1 +
drivers/dma-buf/heaps/coherent_heap.c | 429 ++++++++++++++++++++++++++++++++++
drivers/of/of_reserved_mem.c | 27 ++-
include/linux/dma-heap.h | 16 ++
include/linux/dma-map-ops.h | 7 +
include/linux/of_reserved_mem.h | 8 +
kernel/dma/coherent.c | 34 +++
9 files changed, 642 insertions(+), 27 deletions(-)
---
base-commit: 6de23f81a5e08be8fbf5e8d7e9febc72a5b5f27f
change-id: 20260223-b4-dmabuf-heap-coherent-rmem-91fd3926afe9
Best regards,
--
Albert Esteve <aesteve@redhat.com>
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v2 4/6] dma-buf: heaps: Add Coherent heap to dmabuf heaps
2026-03-03 12:33 [PATCH v2 0/6] " Albert Esteve
@ 2026-03-03 12:33 ` Albert Esteve
2026-03-03 21:19 ` Claude review: " Claude Code Review Bot
0 siblings, 1 reply; 17+ messages in thread
From: Albert Esteve @ 2026-03-03 12:33 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan
Cc: linux-kernel, linux-media, dri-devel, linaro-mm-sig, iommu,
devicetree, Albert Esteve, echanude, mripard
Add a dma-buf heap for DT coherent reserved-memory
(i.e., 'shared-dma-pool' without 'reusable' property),
exposing one heap per region for userspace buffers.
The heap binds the heap device to each memory region so
coherent allocations use the correct dev->dma_mem, and
it defers registration until module_init when normal
allocators are available.
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
drivers/dma-buf/dma-heap.c | 4 +-
drivers/dma-buf/heaps/Kconfig | 9 +
drivers/dma-buf/heaps/Makefile | 1 +
drivers/dma-buf/heaps/coherent_heap.c | 426 ++++++++++++++++++++++++++++++++++
include/linux/dma-heap.h | 11 +
include/linux/dma-map-ops.h | 7 +
6 files changed, 456 insertions(+), 2 deletions(-)
diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
index 88189d4e48561..ba87e5ac16ae2 100644
--- a/drivers/dma-buf/dma-heap.c
+++ b/drivers/dma-buf/dma-heap.c
@@ -390,8 +390,8 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
heap = dma_heap_create(exp_info);
if (IS_ERR(heap)) {
- pr_err("dma_heap: failed to create heap (%d)\n", PTR_ERR(heap));
- return PTR_ERR(heap);
+ pr_err("dma_heap: failed to create heap (%ld)\n", PTR_ERR(heap));
+ return ERR_CAST(heap);
}
ret = dma_heap_register(heap);
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index a5eef06c42264..aeb475e585048 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -12,3 +12,12 @@ config DMABUF_HEAPS_CMA
Choose this option to enable dma-buf CMA heap. This heap is backed
by the Contiguous Memory Allocator (CMA). If your system has these
regions, you should say Y here.
+
+config DMABUF_HEAPS_COHERENT
+ bool "DMA-BUF Coherent Reserved-Memory Heap"
+ depends on DMABUF_HEAPS && OF_RESERVED_MEM && DMA_DECLARE_COHERENT
+ help
+ Choose this option to enable coherent reserved-memory dma-buf heaps.
+ This heap is backed by non-reusable DT "shared-dma-pool" regions.
+ If your system defines coherent reserved-memory regions, you should
+ say Y here.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index 974467791032f..96bda7a65f041 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,3 +1,4 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o
+obj-$(CONFIG_DMABUF_HEAPS_COHERENT) += coherent_heap.o
diff --git a/drivers/dma-buf/heaps/coherent_heap.c b/drivers/dma-buf/heaps/coherent_heap.c
new file mode 100644
index 0000000000000..d033d737bb9df
--- /dev/null
+++ b/drivers/dma-buf/heaps/coherent_heap.c
@@ -0,0 +1,426 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMABUF heap for coherent reserved-memory regions
+ *
+ * Copyright (C) 2026 Red Hat, Inc.
+ * Author: Albert Esteve <aesteve@redhat.com>
+ *
+ */
+
+#include <linux/dma-buf.h>
+#include <linux/dma-heap.h>
+#include <linux/dma-map-ops.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/highmem.h>
+#include <linux/iosys-map.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+
+struct coherent_heap {
+ struct dma_heap *heap;
+ struct reserved_mem *rmem;
+ char *name;
+};
+
+struct coherent_heap_buffer {
+ struct coherent_heap *heap;
+ struct list_head attachments;
+ struct mutex lock;
+ unsigned long len;
+ dma_addr_t dma_addr;
+ void *alloc_vaddr;
+ struct page **pages;
+ pgoff_t pagecount;
+ int vmap_cnt;
+ void *vaddr;
+};
+
+struct dma_heap_attachment {
+ struct device *dev;
+ struct sg_table table;
+ struct list_head list;
+ bool mapped;
+};
+
+static int coherent_heap_attach(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *attachment)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a;
+ int ret;
+
+ a = kzalloc(sizeof(*a), GFP_KERNEL);
+ if (!a)
+ return -ENOMEM;
+
+ ret = sg_alloc_table_from_pages(&a->table, buffer->pages,
+ buffer->pagecount, 0,
+ buffer->pagecount << PAGE_SHIFT,
+ GFP_KERNEL);
+ if (ret) {
+ kfree(a);
+ return ret;
+ }
+
+ a->dev = attachment->dev;
+ INIT_LIST_HEAD(&a->list);
+ a->mapped = false;
+
+ attachment->priv = a;
+
+ mutex_lock(&buffer->lock);
+ list_add(&a->list, &buffer->attachments);
+ mutex_unlock(&buffer->lock);
+
+ return 0;
+}
+
+static void coherent_heap_detach(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *attachment)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a = attachment->priv;
+
+ mutex_lock(&buffer->lock);
+ list_del(&a->list);
+ mutex_unlock(&buffer->lock);
+
+ sg_free_table(&a->table);
+ kfree(a);
+}
+
+static struct sg_table *coherent_heap_map_dma_buf(struct dma_buf_attachment *attachment,
+ enum dma_data_direction direction)
+{
+ struct dma_heap_attachment *a = attachment->priv;
+ struct sg_table *table = &a->table;
+ int ret;
+
+ ret = dma_map_sgtable(attachment->dev, table, direction, 0);
+ if (ret)
+ return ERR_PTR(ret);
+ a->mapped = true;
+
+ return table;
+}
+
+static void coherent_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
+ struct sg_table *table,
+ enum dma_data_direction direction)
+{
+ struct dma_heap_attachment *a = attachment->priv;
+
+ a->mapped = false;
+ dma_unmap_sgtable(attachment->dev, table, direction, 0);
+}
+
+static int coherent_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+ enum dma_data_direction direction)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a;
+
+ mutex_lock(&buffer->lock);
+ if (buffer->vmap_cnt)
+ invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+ list_for_each_entry(a, &buffer->attachments, list) {
+ if (!a->mapped)
+ continue;
+ dma_sync_sgtable_for_cpu(a->dev, &a->table, direction);
+ }
+ mutex_unlock(&buffer->lock);
+
+ return 0;
+}
+
+static int coherent_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
+ enum dma_data_direction direction)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a;
+
+ mutex_lock(&buffer->lock);
+ if (buffer->vmap_cnt)
+ flush_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+ list_for_each_entry(a, &buffer->attachments, list) {
+ if (!a->mapped)
+ continue;
+ dma_sync_sgtable_for_device(a->dev, &a->table, direction);
+ }
+ mutex_unlock(&buffer->lock);
+
+ return 0;
+}
+
+static int coherent_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct coherent_heap *coh_heap = buffer->heap;
+ struct device *heap_dev = dma_heap_get_dev(coh_heap->heap);
+
+ return dma_mmap_coherent(heap_dev, vma, buffer->alloc_vaddr,
+ buffer->dma_addr, buffer->len);
+}
+
+static void *coherent_heap_do_vmap(struct coherent_heap_buffer *buffer)
+{
+ void *vaddr;
+
+ vaddr = vmap(buffer->pages, buffer->pagecount, VM_MAP, PAGE_KERNEL);
+ if (!vaddr)
+ return ERR_PTR(-ENOMEM);
+
+ return vaddr;
+}
+
+static int coherent_heap_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ void *vaddr;
+ int ret = 0;
+
+ mutex_lock(&buffer->lock);
+ if (buffer->vmap_cnt) {
+ buffer->vmap_cnt++;
+ iosys_map_set_vaddr(map, buffer->vaddr);
+ goto out;
+ }
+
+ vaddr = coherent_heap_do_vmap(buffer);
+ if (IS_ERR(vaddr)) {
+ ret = PTR_ERR(vaddr);
+ goto out;
+ }
+
+ buffer->vaddr = vaddr;
+ buffer->vmap_cnt++;
+ iosys_map_set_vaddr(map, buffer->vaddr);
+out:
+ mutex_unlock(&buffer->lock);
+
+ return ret;
+}
+
+static void coherent_heap_vunmap(struct dma_buf *dmabuf, struct iosys_map *map)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+
+ mutex_lock(&buffer->lock);
+ if (!--buffer->vmap_cnt) {
+ vunmap(buffer->vaddr);
+ buffer->vaddr = NULL;
+ }
+ mutex_unlock(&buffer->lock);
+ iosys_map_clear(map);
+}
+
+static void coherent_heap_dma_buf_release(struct dma_buf *dmabuf)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct coherent_heap *coh_heap = buffer->heap;
+ struct device *heap_dev = dma_heap_get_dev(coh_heap->heap);
+
+ if (buffer->vmap_cnt > 0) {
+ WARN(1, "%s: buffer still mapped in the kernel\n", __func__);
+ vunmap(buffer->vaddr);
+ buffer->vaddr = NULL;
+ buffer->vmap_cnt = 0;
+ }
+
+ if (buffer->alloc_vaddr)
+ dma_free_coherent(heap_dev, buffer->len, buffer->alloc_vaddr,
+ buffer->dma_addr);
+ kfree(buffer->pages);
+ kfree(buffer);
+}
+
+static const struct dma_buf_ops coherent_heap_buf_ops = {
+ .attach = coherent_heap_attach,
+ .detach = coherent_heap_detach,
+ .map_dma_buf = coherent_heap_map_dma_buf,
+ .unmap_dma_buf = coherent_heap_unmap_dma_buf,
+ .begin_cpu_access = coherent_heap_dma_buf_begin_cpu_access,
+ .end_cpu_access = coherent_heap_dma_buf_end_cpu_access,
+ .mmap = coherent_heap_mmap,
+ .vmap = coherent_heap_vmap,
+ .vunmap = coherent_heap_vunmap,
+ .release = coherent_heap_dma_buf_release,
+};
+
+static struct dma_buf *coherent_heap_allocate(struct dma_heap *heap,
+ unsigned long len,
+ u32 fd_flags,
+ u64 heap_flags)
+{
+ struct coherent_heap *coh_heap;
+ struct coherent_heap_buffer *buffer;
+ struct device *heap_dev;
+ DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+ size_t size = PAGE_ALIGN(len);
+ pgoff_t pagecount = size >> PAGE_SHIFT;
+ struct dma_buf *dmabuf;
+ int ret = -ENOMEM;
+ pgoff_t pg;
+
+ coh_heap = dma_heap_get_drvdata(heap);
+ if (!coh_heap)
+ return ERR_PTR(-EINVAL);
+
+ heap_dev = dma_heap_get_dev(coh_heap->heap);
+ if (!heap_dev)
+ return ERR_PTR(-ENODEV);
+
+ buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
+ if (!buffer)
+ return ERR_PTR(-ENOMEM);
+
+ INIT_LIST_HEAD(&buffer->attachments);
+ mutex_init(&buffer->lock);
+ buffer->len = size;
+ buffer->heap = coh_heap;
+ buffer->pagecount = pagecount;
+
+ buffer->alloc_vaddr = dma_alloc_coherent(heap_dev, buffer->len,
+ &buffer->dma_addr, GFP_KERNEL);
+ if (!buffer->alloc_vaddr) {
+ ret = -ENOMEM;
+ goto free_buffer;
+ }
+
+ buffer->pages = kmalloc_array(pagecount, sizeof(*buffer->pages),
+ GFP_KERNEL);
+ if (!buffer->pages) {
+ ret = -ENOMEM;
+ goto free_dma;
+ }
+
+ for (pg = 0; pg < pagecount; pg++)
+ buffer->pages[pg] = virt_to_page((char *)buffer->alloc_vaddr +
+ (pg * PAGE_SIZE));
+
+ /* create the dmabuf */
+ exp_info.exp_name = dma_heap_get_name(heap);
+ exp_info.ops = &coherent_heap_buf_ops;
+ exp_info.size = buffer->len;
+ exp_info.flags = fd_flags;
+ exp_info.priv = buffer;
+ dmabuf = dma_buf_export(&exp_info);
+ if (IS_ERR(dmabuf)) {
+ ret = PTR_ERR(dmabuf);
+ goto free_pages;
+ }
+ return dmabuf;
+
+free_pages:
+ kfree(buffer->pages);
+free_dma:
+ dma_free_coherent(heap_dev, buffer->len, buffer->alloc_vaddr,
+ buffer->dma_addr);
+free_buffer:
+ kfree(buffer);
+ return ERR_PTR(ret);
+}
+
+static const struct dma_heap_ops coherent_heap_ops = {
+ .allocate = coherent_heap_allocate,
+};
+
+static int coherent_heap_init_dma_mask(struct device *dev)
+{
+ int ret;
+
+ ret = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(64));
+ if (!ret)
+ return 0;
+
+ /* Fallback to 32-bit DMA mask */
+ return dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(32));
+}
+
+static int __coherent_heap_register(struct reserved_mem *rmem)
+{
+ struct dma_heap_export_info exp_info;
+ struct coherent_heap *coh_heap;
+ struct device *heap_dev;
+ int ret;
+
+ if (!rmem || !rmem->name)
+ return -EINVAL;
+
+ coh_heap = kzalloc(sizeof(*coh_heap), GFP_KERNEL);
+ if (!coh_heap)
+ return -ENOMEM;
+
+ coh_heap->rmem = rmem;
+ coh_heap->name = kstrdup(rmem->name, GFP_KERNEL);
+ if (!coh_heap->name) {
+ ret = -ENOMEM;
+ goto free_coherent_heap;
+ }
+
+ exp_info.name = coh_heap->name;
+ exp_info.ops = &coherent_heap_ops;
+ exp_info.priv = coh_heap;
+
+ coh_heap->heap = dma_heap_create(&exp_info);
+ if (IS_ERR(coh_heap->heap)) {
+ ret = PTR_ERR(coh_heap->heap);
+ goto free_name;
+ }
+
+ heap_dev = dma_heap_get_dev(coh_heap->heap);
+ ret = coherent_heap_init_dma_mask(heap_dev);
+ if (ret) {
+ pr_err("coherent_heap: failed to set DMA mask (%d)\n", ret);
+ goto destroy_heap;
+ }
+
+ ret = of_reserved_mem_device_init_with_mem(heap_dev, rmem);
+ if (ret) {
+ pr_err("coherent_heap: failed to initialize memory (%d)\n", ret);
+ goto destroy_heap;
+ }
+
+ ret = dma_heap_register(coh_heap->heap);
+ if (ret) {
+ pr_err("coherent_heap: failed to register heap (%d)\n", ret);
+ goto destroy_heap;
+ }
+
+ return 0;
+
+destroy_heap:
+ dma_heap_destroy(coh_heap->heap);
+ coh_heap->heap = NULL;
+free_name:
+ kfree(coh_heap->name);
+free_coherent_heap:
+ kfree(coh_heap);
+
+ return ret;
+}
+
+static int __init coherent_heap_register(void)
+{
+ struct reserved_mem *rmem;
+ unsigned int i;
+ int ret;
+
+ for (i = 0; (rmem = dma_coherent_get_reserved_region(i)) != NULL; i++) {
+ ret = __coherent_heap_register(rmem);
+ if (ret) {
+ pr_warn("Failed to add coherent heap %s\n",
+ rmem->name ? rmem->name : "unknown");
+ continue;
+ }
+ }
+
+ return 0;
+}
+module_init(coherent_heap_register);
+MODULE_DESCRIPTION("DMA-BUF heap for coherent reserved-memory regions");
diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
index 1b0ea43ba66c3..77e6cb66ffce1 100644
--- a/include/linux/dma-heap.h
+++ b/include/linux/dma-heap.h
@@ -9,10 +9,12 @@
#ifndef _DMA_HEAPS_H
#define _DMA_HEAPS_H
+#include <linux/errno.h>
#include <linux/types.h>
struct dma_heap;
struct device;
+struct reserved_mem;
/**
* struct dma_heap_ops - ops to operate on a given heap
@@ -53,4 +55,13 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info);
extern bool mem_accounting;
+#if IS_ENABLED(CONFIG_DMABUF_HEAPS_COHERENT)
+int dma_heap_coherent_register(struct reserved_mem *rmem);
+#else
+static inline int dma_heap_coherent_register(struct reserved_mem *rmem)
+{
+ return -EOPNOTSUPP;
+}
+#endif
+
#endif /* _DMA_HEAPS_H */
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 60b63756df821..c87e5e44e5383 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -12,6 +12,7 @@
struct cma;
struct iommu_ops;
+struct reserved_mem;
struct dma_map_ops {
void *(*alloc)(struct device *dev, size_t size,
@@ -161,6 +162,7 @@ int dma_alloc_from_dev_coherent(struct device *dev, ssize_t size,
int dma_release_from_dev_coherent(struct device *dev, int order, void *vaddr);
int dma_mmap_from_dev_coherent(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, size_t size, int *ret);
+struct reserved_mem *dma_coherent_get_reserved_region(unsigned int idx);
#else
static inline int dma_declare_coherent_memory(struct device *dev,
phys_addr_t phys_addr, dma_addr_t device_addr, size_t size)
@@ -172,6 +174,11 @@ static inline int dma_declare_coherent_memory(struct device *dev,
#define dma_release_from_dev_coherent(dev, order, vaddr) (0)
#define dma_mmap_from_dev_coherent(dev, vma, vaddr, order, ret) (0)
static inline void dma_release_coherent_memory(struct device *dev) { }
+static inline
+struct reserved_mem *dma_coherent_get_reserved_region(unsigned int idx)
+{
+ return NULL;
+}
#endif /* CONFIG_DMA_DECLARE_COHERENT */
#ifdef CONFIG_DMA_GLOBAL_POOL
--
2.52.0
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Claude review: dma-buf: heaps: Add Coherent heap to dmabuf heaps
2026-03-03 12:33 ` [PATCH v2 4/6] dma-buf: heaps: Add Coherent heap to dmabuf heaps Albert Esteve
@ 2026-03-03 21:19 ` Claude Code Review Bot
0 siblings, 0 replies; 17+ messages in thread
From: Claude Code Review Bot @ 2026-03-03 21:19 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
The main patch adds a 426-line new file, `coherent_heap.c`, which is the core of the series.
**Issues:**
1. **`virt_to_page()` on DMA coherent memory is not universally safe.** The allocation path does:
```c
+ buffer->alloc_vaddr = dma_alloc_coherent(heap_dev, buffer->len,
+ &buffer->dma_addr, GFP_KERNEL);
...
+ for (pg = 0; pg < pagecount; pg++)
+ buffer->pages[pg] = virt_to_page((char *)buffer->alloc_vaddr +
+ (pg * PAGE_SIZE));
```
`dma_alloc_coherent()` may return memory from vmalloc space (especially on architectures that need uncached mappings), in which case `virt_to_page()` is incorrect -- it only works for direct-mapped (lowmem) kernel addresses. On ARM with `no-map` regions (which these coherent regions typically use), the return from `dma_alloc_coherent()` goes through `ioremap()` / `memremap()` paths, making `virt_to_page()` invalid. This could lead to returning bogus page structs. Consider using `vmalloc_to_page()` or `pfn_to_page(PHYS_PFN(dma_to_phys(dev, dma_addr)))` instead (similar to how some other drivers handle this).
2. **`begin_cpu_access` / `end_cpu_access` syncs are unnecessary for coherent memory.** The whole point of coherent DMA memory is that it is cache-coherent and doesn't need explicit sync. The implementations:
```c
+static int coherent_heap_dma_buf_begin_cpu_access(...)
+{
+ ...
+ list_for_each_entry(a, &buffer->attachments, list) {
+ if (!a->mapped)
+ continue;
+ dma_sync_sgtable_for_cpu(a->dev, &a->table, direction);
+ }
```
These `dma_sync_sgtable_for_cpu/device` calls should be no-ops for coherent memory, but iterating all attachments under a mutex for a no-op is wasteful. The `invalidate_kernel_vmap_range` / `flush_kernel_vmap_range` calls are also questionable since the vmap in `coherent_heap_do_vmap()` uses `PAGE_KERNEL` (cached), which could actually conflict with the uncached mapping from `dma_alloc_coherent()`. This is a potential aliasing problem -- having both a cached vmap and an uncached DMA mapping to the same physical pages will cause coherency issues on architectures without hardware coherence.
3. **Exported but unused `dma_heap_coherent_register()`.** The header adds:
```c
+#if IS_ENABLED(CONFIG_DMABUF_HEAPS_COHERENT)
+int dma_heap_coherent_register(struct reserved_mem *rmem);
+#else
+static inline int dma_heap_coherent_register(struct reserved_mem *rmem)
+...
```
But no implementation of `dma_heap_coherent_register()` exists in this patch. The actual function is `__coherent_heap_register()` (static). This declaration is never used by patch 5 either (which calls `dma_coherent_get_reserved_region()` instead). This appears to be dead code left over from a previous iteration.
4. **`dma_coerce_mask_and_coherent()` usage.** The function:
```c
+static int coherent_heap_init_dma_mask(struct device *dev)
+{
+ ret = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(64));
```
`dma_coerce_mask_and_coherent()` is documented as being for platform code / bus code only, not for drivers. The DMA mask should match the actual memory region. Since the reserved region has a known physical address and size, the mask should be derived from `rmem->base + rmem->size - 1` to ensure correctness.
5. **`coherent_heap_map_dma_buf` loses the actual error code:**
```c
+ ret = dma_map_sgtable(attachment->dev, table, direction, 0);
+ if (ret)
+ return ERR_PTR(-ENOMEM);
```
The actual error from `dma_map_sgtable()` is discarded and replaced with `-ENOMEM`. Should be `return ERR_PTR(ret)`.
6. **The heap name comes from `rmem->name`** which is the DT node name (e.g., `"framebuffer@80000000"`). This could contain characters that are invalid for device node names. No sanitization is done.
7. **No `MODULE_LICENSE` in this patch** -- the module_init is added but `MODULE_LICENSE("GPL")` is deferred to patch 6. This means this patch doesn't compile as a module in isolation (though patch 4's Kconfig has it as `bool`, so it only matters for bisect-then-change-Kconfig scenarios).
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH] dma-buf: heaps: Add Coherent heap to dmabuf heaps
@ 2026-02-24 7:57 Albert Esteve
2026-02-27 5:38 ` Claude review: " Claude Code Review Bot
2026-02-27 5:38 ` Claude Code Review Bot
0 siblings, 2 replies; 17+ messages in thread
From: Albert Esteve @ 2026-02-24 7:57 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy
Cc: linux-kernel, linux-media, dri-devel, linaro-mm-sig, iommu,
echanude, mripard, Albert Esteve
Add a dma-buf heap for DT coherent reserved-memory
(i.e., 'shared-dma-pool' without 'reusable' property),
exposing one heap per region for userspace buffers.
The heap binds a synthetic platform device to each region
so coherent allocations use the correct dev->dma_mem,
and it defers registration until late_initcall when
normal allocators are available.
This patch includes charging of the coherent heap
allocator to the dmem cgroup.
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
This patch introduces a new driver to expose DT coherent reserved-memory
regions as dma-buf heaps, allowing userspace buffers to be created.
Since these regions are device-dependent, we bind a synthetic platform
device to each region so coherent allocations use the correct dev->dma_mem.
Following Eric’s [1] and Maxime’s [2] work on charging DMA buffers
allocated from userspace to cgroups (dmem), this patch adds the same
charging pattern used by the CMA heap patch. Charging is done only through
the dma-buf heap interface so it can be attributed to a userspace allocator.
This allows each device-specific reserved-memory region to enforce its
own limits.
[1] https://lore.kernel.org/all/20260218-dmabuf-heap-cma-dmem-v2-0-b249886fb7b2@redhat.com/
[2] https://lore.kernel.org/all/20250310-dmem-cgroups-v1-0-2984c1bc9312@kernel.org/
---
drivers/dma-buf/heaps/Kconfig | 17 ++
drivers/dma-buf/heaps/Makefile | 1 +
drivers/dma-buf/heaps/coherent_heap.c | 485 ++++++++++++++++++++++++++++++++++
include/linux/dma-heap.h | 11 +
kernel/dma/coherent.c | 9 +
5 files changed, 523 insertions(+)
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index a5eef06c42264..93765dca164e3 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -12,3 +12,20 @@ config DMABUF_HEAPS_CMA
Choose this option to enable dma-buf CMA heap. This heap is backed
by the Contiguous Memory Allocator (CMA). If your system has these
regions, you should say Y here.
+
+config DMABUF_HEAPS_COHERENT
+ bool "DMA-BUF Coherent Reserved-Memory Heap"
+ depends on DMABUF_HEAPS && OF_RESERVED_MEM && DMA_DECLARE_COHERENT
+ help
+ Choose this option to enable coherent reserved-memory dma-buf heaps.
+ This heap is backed by non-reusable DT "shared-dma-pool" regions.
+ If your system defines coherent reserved-memory regions, you should
+ say Y here.
+
+config COHERENT_AREAS_DEFERRED
+ int "Max deferred coherent reserved-memory regions"
+ depends on DMABUF_HEAPS_COHERENT
+ default 16
+ help
+ Maximum number of coherent reserved-memory regions that can be
+ deferred for later registration during early boot.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index 974467791032f..96bda7a65f041 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,3 +1,4 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o
+obj-$(CONFIG_DMABUF_HEAPS_COHERENT) += coherent_heap.o
diff --git a/drivers/dma-buf/heaps/coherent_heap.c b/drivers/dma-buf/heaps/coherent_heap.c
new file mode 100644
index 0000000000000..870b2b89aefcb
--- /dev/null
+++ b/drivers/dma-buf/heaps/coherent_heap.c
@@ -0,0 +1,485 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMABUF heap for coherent reserved-memory regions
+ *
+ * Copyright (C) 2026 Red Hat, Inc.
+ * Author: Albert Esteve <aesteve@redhat.com>
+ *
+ */
+
+#include <linux/cgroup_dmem.h>
+#include <linux/dma-heap.h>
+#include <linux/dma-buf.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/highmem.h>
+#include <linux/iosys-map.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/platform_device.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+
+#define DEFERRED_AREAS_MAX CONFIG_COHERENT_AREAS_DEFERRED
+
+/*
+ * Early init can't use normal memory management yet (memblock is used
+ * instead), so keep a small deferred list and retry at late_initcall.
+ */
+static struct reserved_mem *rmem_areas_deferred[DEFERRED_AREAS_MAX];
+static unsigned int rmem_areas_deferred_num;
+
+static int coherent_heap_add_deferred(struct reserved_mem *rmem)
+{
+ if (rmem_areas_deferred_num >= DEFERRED_AREAS_MAX) {
+ pr_warn("Deferred heap areas list full, dropping %s\n",
+ rmem->name ? rmem->name : "unknown");
+ return -EINVAL;
+ }
+ rmem_areas_deferred[rmem_areas_deferred_num++] = rmem;
+ return 0;
+}
+
+struct coherent_heap {
+ struct dma_heap *heap;
+ struct reserved_mem *rmem;
+ char *name;
+ struct device *dev;
+ struct platform_device *pdev;
+#if IS_ENABLED(CONFIG_CGROUP_DMEM)
+ struct dmem_cgroup_region *cg;
+#endif
+};
+
+struct coherent_heap_buffer {
+ struct coherent_heap *heap;
+ struct list_head attachments;
+ struct mutex lock;
+ unsigned long len;
+ dma_addr_t dma_addr;
+ void *alloc_vaddr;
+ struct page **pages;
+ pgoff_t pagecount;
+ int vmap_cnt;
+ void *vaddr;
+#if IS_ENABLED(CONFIG_CGROUP_DMEM)
+ struct dmem_cgroup_pool_state *pool;
+#endif
+};
+
+struct dma_heap_attachment {
+ struct device *dev;
+ struct sg_table table;
+ struct list_head list;
+ bool mapped;
+};
+
+static int coherent_heap_attach(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *attachment)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a;
+ int ret;
+
+ a = kzalloc(sizeof(*a), GFP_KERNEL);
+ if (!a)
+ return -ENOMEM;
+
+ ret = sg_alloc_table_from_pages(&a->table, buffer->pages,
+ buffer->pagecount, 0,
+ buffer->pagecount << PAGE_SHIFT,
+ GFP_KERNEL);
+ if (ret) {
+ kfree(a);
+ return ret;
+ }
+
+ a->dev = attachment->dev;
+ INIT_LIST_HEAD(&a->list);
+ a->mapped = false;
+
+ attachment->priv = a;
+
+ mutex_lock(&buffer->lock);
+ list_add(&a->list, &buffer->attachments);
+ mutex_unlock(&buffer->lock);
+
+ return 0;
+}
+
+static void coherent_heap_detach(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *attachment)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a = attachment->priv;
+
+ mutex_lock(&buffer->lock);
+ list_del(&a->list);
+ mutex_unlock(&buffer->lock);
+
+ sg_free_table(&a->table);
+ kfree(a);
+}
+
+static struct sg_table *coherent_heap_map_dma_buf(struct dma_buf_attachment *attachment,
+ enum dma_data_direction direction)
+{
+ struct dma_heap_attachment *a = attachment->priv;
+ struct sg_table *table = &a->table;
+ int ret;
+
+ ret = dma_map_sgtable(attachment->dev, table, direction, 0);
+ if (ret)
+ return ERR_PTR(-ENOMEM);
+ a->mapped = true;
+
+ return table;
+}
+
+static void coherent_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
+ struct sg_table *table,
+ enum dma_data_direction direction)
+{
+ struct dma_heap_attachment *a = attachment->priv;
+
+ a->mapped = false;
+ dma_unmap_sgtable(attachment->dev, table, direction, 0);
+}
+
+static int coherent_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+ enum dma_data_direction direction)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a;
+
+ mutex_lock(&buffer->lock);
+ if (buffer->vmap_cnt)
+ invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+ list_for_each_entry(a, &buffer->attachments, list) {
+ if (!a->mapped)
+ continue;
+ dma_sync_sgtable_for_cpu(a->dev, &a->table, direction);
+ }
+ mutex_unlock(&buffer->lock);
+
+ return 0;
+}
+
+static int coherent_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
+ enum dma_data_direction direction)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a;
+
+ mutex_lock(&buffer->lock);
+ if (buffer->vmap_cnt)
+ flush_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+ list_for_each_entry(a, &buffer->attachments, list) {
+ if (!a->mapped)
+ continue;
+ dma_sync_sgtable_for_device(a->dev, &a->table, direction);
+ }
+ mutex_unlock(&buffer->lock);
+
+ return 0;
+}
+
+static int coherent_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct coherent_heap *coh_heap = buffer->heap;
+
+ return dma_mmap_coherent(coh_heap->dev, vma, buffer->alloc_vaddr,
+ buffer->dma_addr, buffer->len);
+}
+
+static void *coherent_heap_do_vmap(struct coherent_heap_buffer *buffer)
+{
+ void *vaddr;
+
+ vaddr = vmap(buffer->pages, buffer->pagecount, VM_MAP, PAGE_KERNEL);
+ if (!vaddr)
+ return ERR_PTR(-ENOMEM);
+
+ return vaddr;
+}
+
+static int coherent_heap_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ void *vaddr;
+ int ret = 0;
+
+ mutex_lock(&buffer->lock);
+ if (buffer->vmap_cnt) {
+ buffer->vmap_cnt++;
+ iosys_map_set_vaddr(map, buffer->vaddr);
+ goto out;
+ }
+
+ vaddr = coherent_heap_do_vmap(buffer);
+ if (IS_ERR(vaddr)) {
+ ret = PTR_ERR(vaddr);
+ goto out;
+ }
+
+ buffer->vaddr = vaddr;
+ buffer->vmap_cnt++;
+ iosys_map_set_vaddr(map, buffer->vaddr);
+out:
+ mutex_unlock(&buffer->lock);
+
+ return ret;
+}
+
+static void coherent_heap_vunmap(struct dma_buf *dmabuf, struct iosys_map *map)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+
+ mutex_lock(&buffer->lock);
+ if (!--buffer->vmap_cnt) {
+ vunmap(buffer->vaddr);
+ buffer->vaddr = NULL;
+ }
+ mutex_unlock(&buffer->lock);
+ iosys_map_clear(map);
+}
+
+static void coherent_heap_dma_buf_release(struct dma_buf *dmabuf)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct coherent_heap *coh_heap = buffer->heap;
+
+ if (buffer->vmap_cnt > 0) {
+ WARN(1, "%s: buffer still mapped in the kernel\n", __func__);
+ vunmap(buffer->vaddr);
+ buffer->vaddr = NULL;
+ buffer->vmap_cnt = 0;
+ }
+
+ if (buffer->alloc_vaddr)
+ dma_free_coherent(coh_heap->dev, buffer->len, buffer->alloc_vaddr,
+ buffer->dma_addr);
+ kfree(buffer->pages);
+#if IS_ENABLED(CONFIG_CGROUP_DMEM)
+ dmem_cgroup_uncharge(buffer->pool, buffer->len);
+#endif
+ kfree(buffer);
+}
+
+static const struct dma_buf_ops coherent_heap_buf_ops = {
+ .attach = coherent_heap_attach,
+ .detach = coherent_heap_detach,
+ .map_dma_buf = coherent_heap_map_dma_buf,
+ .unmap_dma_buf = coherent_heap_unmap_dma_buf,
+ .begin_cpu_access = coherent_heap_dma_buf_begin_cpu_access,
+ .end_cpu_access = coherent_heap_dma_buf_end_cpu_access,
+ .mmap = coherent_heap_mmap,
+ .vmap = coherent_heap_vmap,
+ .vunmap = coherent_heap_vunmap,
+ .release = coherent_heap_dma_buf_release,
+};
+
+static struct dma_buf *coherent_heap_allocate(struct dma_heap *heap,
+ unsigned long len,
+ u32 fd_flags,
+ u64 heap_flags)
+{
+ struct coherent_heap *coh_heap;
+ struct coherent_heap_buffer *buffer;
+ DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+ size_t size = PAGE_ALIGN(len);
+ pgoff_t pagecount = size >> PAGE_SHIFT;
+ struct dma_buf *dmabuf;
+ int ret = -ENOMEM;
+ pgoff_t pg;
+
+ coh_heap = dma_heap_get_drvdata(heap);
+ if (!coh_heap)
+ return ERR_PTR(-EINVAL);
+ if (!coh_heap->dev)
+ return ERR_PTR(-ENODEV);
+
+ buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
+ if (!buffer)
+ return ERR_PTR(-ENOMEM);
+
+ INIT_LIST_HEAD(&buffer->attachments);
+ mutex_init(&buffer->lock);
+ buffer->len = size;
+ buffer->heap = coh_heap;
+ buffer->pagecount = pagecount;
+
+#if IS_ENABLED(CONFIG_CGROUP_DMEM)
+ if (mem_accounting) {
+ ret = dmem_cgroup_try_charge(coh_heap->cg, size,
+ &buffer->pool, NULL);
+ if (ret)
+ goto free_buffer;
+ }
+#endif
+
+ buffer->alloc_vaddr = dma_alloc_coherent(coh_heap->dev, buffer->len,
+ &buffer->dma_addr, GFP_KERNEL);
+ if (!buffer->alloc_vaddr) {
+ ret = -ENOMEM;
+#if IS_ENABLED(CONFIG_CGROUP_DMEM)
+ goto uncharge_cgroup;
+#else
+ goto free_buffer;
+#endif
+ }
+
+ buffer->pages = kmalloc_array(pagecount, sizeof(*buffer->pages),
+ GFP_KERNEL);
+ if (!buffer->pages) {
+ ret = -ENOMEM;
+ goto free_dma;
+ }
+
+ for (pg = 0; pg < pagecount; pg++)
+ buffer->pages[pg] = virt_to_page((char *)buffer->alloc_vaddr +
+ (pg * PAGE_SIZE));
+
+ /* create the dmabuf */
+ exp_info.exp_name = dma_heap_get_name(heap);
+ exp_info.ops = &coherent_heap_buf_ops;
+ exp_info.size = buffer->len;
+ exp_info.flags = fd_flags;
+ exp_info.priv = buffer;
+ dmabuf = dma_buf_export(&exp_info);
+ if (IS_ERR(dmabuf)) {
+ ret = PTR_ERR(dmabuf);
+ goto free_pages;
+ }
+ return dmabuf;
+
+free_pages:
+ kfree(buffer->pages);
+free_dma:
+ dma_free_coherent(coh_heap->dev, buffer->len, buffer->alloc_vaddr,
+ buffer->dma_addr);
+#if IS_ENABLED(CONFIG_CGROUP_DMEM)
+uncharge_cgroup:
+ dmem_cgroup_uncharge(buffer->pool, size);
+#endif
+free_buffer:
+ kfree(buffer);
+ return ERR_PTR(ret);
+}
+
+static const struct dma_heap_ops coherent_heap_ops = {
+ .allocate = coherent_heap_allocate,
+};
+
+static int __coherent_heap_register(struct reserved_mem *rmem)
+{
+ struct dma_heap_export_info exp_info;
+ struct coherent_heap *coh_heap;
+#if IS_ENABLED(CONFIG_CGROUP_DMEM)
+ struct dmem_cgroup_region *region;
+#endif
+ const char *rmem_name;
+ int ret;
+
+ if (!rmem)
+ return -EINVAL;
+
+ rmem_name = rmem->name ? rmem->name : "unknown";
+
+ coh_heap = kzalloc(sizeof(*coh_heap), GFP_KERNEL);
+ if (!coh_heap)
+ return -ENOMEM;
+
+ coh_heap->name = kasprintf(GFP_KERNEL, "coherent_%s", rmem_name);
+ if (!coh_heap->name) {
+ ret = -ENOMEM;
+ goto free_coherent_heap;
+ }
+
+ coh_heap->rmem = rmem;
+
+ /* create a platform device per rmem and bind it */
+ coh_heap->pdev = platform_device_register_simple("coherent-heap",
+ PLATFORM_DEVID_AUTO,
+ NULL, 0);
+ if (IS_ERR(coh_heap->pdev)) {
+ ret = PTR_ERR(coh_heap->pdev);
+ goto free_name;
+ }
+
+ if (rmem->ops && rmem->ops->device_init) {
+ ret = rmem->ops->device_init(rmem, &coh_heap->pdev->dev);
+ if (ret)
+ goto pdev_unregister;
+ }
+
+ coh_heap->dev = &coh_heap->pdev->dev;
+#if IS_ENABLED(CONFIG_CGROUP_DMEM)
+ region = dmem_cgroup_register_region(rmem->size, "coh/%s", rmem_name);
+ if (IS_ERR(region)) {
+ ret = PTR_ERR(region);
+ goto pdev_unregister;
+ }
+ coh_heap->cg = region;
+#endif
+
+ exp_info.name = coh_heap->name;
+ exp_info.ops = &coherent_heap_ops;
+ exp_info.priv = coh_heap;
+
+ coh_heap->heap = dma_heap_add(&exp_info);
+ if (IS_ERR(coh_heap->heap)) {
+ ret = PTR_ERR(coh_heap->heap);
+ goto cg_unregister;
+ }
+
+ return 0;
+
+cg_unregister:
+#if IS_ENABLED(CONFIG_CGROUP_DMEM)
+ dmem_cgroup_unregister_region(coh_heap->cg);
+#endif
+pdev_unregister:
+ platform_device_unregister(coh_heap->pdev);
+ coh_heap->pdev = NULL;
+free_name:
+ kfree(coh_heap->name);
+free_coherent_heap:
+ kfree(coh_heap);
+
+ return ret;
+}
+
+int dma_heap_coherent_register(struct reserved_mem *rmem)
+{
+ int ret;
+
+ ret = __coherent_heap_register(rmem);
+ if (ret == -ENOMEM)
+ return coherent_heap_add_deferred(rmem);
+ return ret;
+}
+
+static int __init coherent_heap_register_deferred(void)
+{
+ unsigned int i;
+ int ret;
+
+ for (i = 0; i < rmem_areas_deferred_num; i++) {
+ struct reserved_mem *rmem = rmem_areas_deferred[i];
+
+ ret = __coherent_heap_register(rmem);
+ if (ret) {
+ pr_warn("Failed to add coherent heap %s\n",
+ rmem->name ? rmem->name : "unknown");
+ continue;
+ }
+ }
+
+ return 0;
+}
+late_initcall(coherent_heap_register_deferred);
+MODULE_DESCRIPTION("DMA-BUF heap for coherent reserved-memory regions");
diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
index 648328a64b27e..e894cfa1ecf1a 100644
--- a/include/linux/dma-heap.h
+++ b/include/linux/dma-heap.h
@@ -9,9 +9,11 @@
#ifndef _DMA_HEAPS_H
#define _DMA_HEAPS_H
+#include <linux/errno.h>
#include <linux/types.h>
struct dma_heap;
+struct reserved_mem;
/**
* struct dma_heap_ops - ops to operate on a given heap
@@ -48,4 +50,13 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info);
extern bool mem_accounting;
+#if IS_ENABLED(CONFIG_DMABUF_HEAPS_COHERENT)
+int dma_heap_coherent_register(struct reserved_mem *rmem);
+#else
+static inline int dma_heap_coherent_register(struct reserved_mem *rmem)
+{
+ return -EOPNOTSUPP;
+}
+#endif
+
#endif /* _DMA_HEAPS_H */
diff --git a/kernel/dma/coherent.c b/kernel/dma/coherent.c
index 1147497bc512c..f49d13e460e4b 100644
--- a/kernel/dma/coherent.c
+++ b/kernel/dma/coherent.c
@@ -9,6 +9,7 @@
#include <linux/module.h>
#include <linux/dma-direct.h>
#include <linux/dma-map-ops.h>
+#include <linux/dma-heap.h>
struct dma_coherent_mem {
void *virt_base;
@@ -393,6 +394,14 @@ static int __init rmem_dma_setup(struct reserved_mem *rmem)
rmem->ops = &rmem_dma_ops;
pr_info("Reserved memory: created DMA memory pool at %pa, size %ld MiB\n",
&rmem->base, (unsigned long)rmem->size / SZ_1M);
+
+ if (IS_ENABLED(CONFIG_DMABUF_HEAPS_COHERENT)) {
+ int ret = dma_heap_coherent_register(rmem);
+
+ if (ret)
+ pr_warn("Reserved memory: failed to register coherent heap for %s (%d)\n",
+ rmem->name ? rmem->name : "unknown", ret);
+ }
return 0;
}
---
base-commit: 6de23f81a5e08be8fbf5e8d7e9febc72a5b5f27f
change-id: 20260223-b4-dmabuf-heap-coherent-rmem-91fd3926afe9
Best regards,
--
Albert Esteve <aesteve@redhat.com>
^ permalink raw reply related [flat|nested] 17+ messages in thread

* Claude review: dma-buf: heaps: Add Coherent heap to dmabuf heaps
2026-02-24 7:57 [PATCH] " Albert Esteve
@ 2026-02-27 5:38 ` Claude Code Review Bot
2026-02-27 5:38 ` Claude Code Review Bot
1 sibling, 0 replies; 17+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 5:38 UTC (permalink / raw)
To: dri-devel-reviews
Overall Series Review
Subject: dma-buf: heaps: Add Coherent heap to dmabuf heaps
Author: Albert Esteve <aesteve@redhat.com>
Patches: 3
Reviewed: 2026-02-27T15:38:23.597909
---
This is a single patch adding a new dma-buf heap driver for DT coherent reserved-memory (`shared-dma-pool` without `reusable`). The goal is to expose these regions to userspace via the dma-buf heaps interface, with dmem cgroup accounting.
**Overall assessment: Needs significant rework.** The patch has several critical issues including unsafe use of `virt_to_page()` on `dma_alloc_coherent()` return values, missing memory zeroing (information leak to userspace), and an architectural concern where core DMA code is modified to call into the dma-buf heap subsystem. The `#if IS_ENABLED(CONFIG_CGROUP_DMEM)` ifdefs throughout are unnecessary since the header provides stubs, and there is no mechanism to prevent unconditional exposure of all coherent reserved-memory to userspace.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 17+ messages in thread
* Claude review: dma-buf: heaps: Add Coherent heap to dmabuf heaps
2026-02-24 7:57 [PATCH] " Albert Esteve
2026-02-27 5:38 ` Claude review: " Claude Code Review Bot
@ 2026-02-27 5:38 ` Claude Code Review Bot
1 sibling, 0 replies; 17+ messages in thread
From: Claude Code Review Bot @ 2026-02-27 5:38 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
#### Critical Issues
**1. `virt_to_page()` on `dma_alloc_coherent()` return value is unsafe**
```c
for (pg = 0; pg < pagecount; pg++)
buffer->pages[pg] = virt_to_page((char *)buffer->alloc_vaddr +
(pg * PAGE_SIZE));
```
`dma_alloc_coherent()` on a device backed by reserved coherent memory goes through `dma_alloc_from_coherent()`, which returns a pointer into `mem->virt_base`. That virtual base comes from `memremap(phys_addr, size, MEMREMAP_WC)` (in `dma_init_coherent_memory()` at `kernel/dma/coherent.c`). On architectures where `memremap()` returns a vmalloc/ioremap-space address rather than a linear-map address, `virt_to_page()` will produce garbage or crash.
The existing `coherent.c` code avoids this — it stores `pfn_base = PFN_DOWN(phys_addr)` and uses PFN arithmetic directly in `remap_pfn_range()` calls. The correct approach here would be to derive PFNs from the DMA address:
```c
phys_addr_t phys = dma_to_phys(coh_heap->dev, buffer->dma_addr);
for (pg = 0; pg < pagecount; pg++)
buffer->pages[pg] = pfn_to_page(PFN_DOWN(phys) + pg);
```
This is the single most critical issue in the patch.
**2. Missing memory zeroing — information leak to userspace**
The CMA heap explicitly zeroes allocated pages before exporting them:
```c
/* CMA heap does: */
memset(page_address(cma_pages), 0, size);
```
The coherent heap does not zero the memory returned by `dma_alloc_coherent()`. For reserved-memory coherent pools, `dma_alloc_from_coherent()` returns a pointer into pre-mapped memory without zeroing it. Between allocate/free cycles, stale data from prior users is exposed. Since these buffers are exported to userspace, this is a security concern. A `memset(buffer->alloc_vaddr, 0, buffer->len)` is needed after allocation.
#### Architectural Issues
**3. Reverse dependency: `kernel/dma/coherent.c` calling into dma-buf heaps**
```c
/* kernel/dma/coherent.c: */
+#include <linux/dma-heap.h>
...
+ if (IS_ENABLED(CONFIG_DMABUF_HEAPS_COHERENT)) {
+ int ret = dma_heap_coherent_register(rmem);
```
This creates a layering violation — core DMA infrastructure now calls up into the dma-buf heaps subsystem. The CMA heap does not do this; it uses `cma_for_each_area()` to iterate and register CMA areas autonomously from `drivers/dma-buf/heaps/cma_heap.c`, without touching any CMA core code.
The coherent heap should similarly find reserved-memory regions on its own at `late_initcall` time (e.g., by iterating the reserved-memory DT nodes), rather than hooking into `rmem_dma_setup()`. This would keep the modification entirely within `drivers/dma-buf/heaps/` and avoid touching `kernel/dma/coherent.c`.
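One possible shape for such autonomous discovery, mirroring the CMA heap's `cma_for_each_area()` pattern — this is a sketch only, built from the real `for_each_compatible_node()` and `of_reserved_mem_lookup()` APIs plus the `dma_heap_coherent_register()` function this patch introduces:

```c
/* Sketch: register heaps from within drivers/dma-buf/heaps/ at
 * late_initcall time, without touching kernel/dma/coherent.c.
 */
static int __init coherent_heap_scan(void)
{
	struct device_node *np;

	for_each_compatible_node(np, NULL, "shared-dma-pool") {
		struct reserved_mem *rmem;

		/* "reusable" regions are CMA; the CMA heap covers those */
		if (of_property_read_bool(np, "reusable"))
			continue;

		rmem = of_reserved_mem_lookup(np);
		if (!rmem)
			continue;

		if (dma_heap_coherent_register(rmem))
			pr_warn("coherent heap: failed to register %s\n",
				rmem->name ? rmem->name : "unknown");
	}
	return 0;
}
late_initcall(coherent_heap_scan);
```

By `late_initcall` time the unflattened device tree and slab allocator are both available, which would also make the early-boot deferral machinery (point 8) unnecessary.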
**4. All coherent reserved-memory regions are unconditionally exposed to userspace**
Every `shared-dma-pool` (non-reusable) region automatically gets a dma-buf heap. Some of these regions may be intended exclusively for specific kernel drivers. A malicious or buggy userspace process could exhaust a device-specific reserved pool, starving the kernel driver. There should be a DT opt-in mechanism (e.g., a `dma-buf-heap` property) to control which regions are exposed, rather than exposing all of them.
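Since `rmem_dma_setup()` runs with only the flattened device tree available, an opt-in check at that point would use the flat-DT API. The property name below is purely illustrative — a real binding would need devicetree maintainer review:

```c
/* Hypothetical opt-in gate before registering a region as a heap.
 * "export-to-dma-heap" is an invented name, not an existing binding.
 */
if (!of_get_flat_dt_prop(rmem->fdt_node, "export-to-dma-heap", NULL))
	return 0;	/* region not opted in; keep it kernel-only */
```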
#### Code Quality Issues
**5. Unnecessary `#if IS_ENABLED(CONFIG_CGROUP_DMEM)` guards throughout**
The header `linux/cgroup_dmem.h` already provides no-op stubs when `CONFIG_CGROUP_DMEM` is disabled (`dmem_cgroup_try_charge()` returns 0, `dmem_cgroup_uncharge()` is a no-op, etc.). All the `#if IS_ENABLED(CONFIG_CGROUP_DMEM)` blocks in the patch are unnecessary and clutter the code. The entire cgroup charging path can be written without any `#if` guards:
```c
/* In struct coherent_heap — no #if needed: */
struct dmem_cgroup_region *cg;
/* In allocate — no #if needed: */
if (mem_accounting) {
ret = dmem_cgroup_try_charge(coh_heap->cg, size,
&buffer->pool, NULL);
if (ret)
goto free_buffer;
}
```
The stubs will compile to nothing when disabled. This would significantly clean up `coherent_heap_allocate()`, `coherent_heap_dma_buf_release()`, `__coherent_heap_register()`, and the error paths.
**6. `cg_unregister` error label structure is confusing**
```c
cg_unregister:
#if IS_ENABLED(CONFIG_CGROUP_DMEM)
dmem_cgroup_unregister_region(coh_heap->cg);
#endif
pdev_unregister:
```
When `CONFIG_CGROUP_DMEM` is disabled, `cg_unregister` is an empty label that falls through to `pdev_unregister`. This compiles but is confusing. Removing the ifdefs (as per point 5) would fix this naturally.
**7. Missing `device_release()` call in error/cleanup paths**
In `__coherent_heap_register()`, after a successful `rmem->ops->device_init()` call, the error paths only call `platform_device_unregister()` but never call `rmem->ops->device_release()` to undo the init:
```c
if (rmem->ops && rmem->ops->device_init) {
ret = rmem->ops->device_init(rmem, &coh_heap->pdev->dev);
if (ret)
goto pdev_unregister;
}
/* ... later errors goto cg_unregister or pdev_unregister ... */
/* but never call rmem->ops->device_release() */
```
If `dma_heap_add()` or `dmem_cgroup_register_region()` fails after a successful `device_init()`, the reserved memory binding is leaked. Need a `device_release` cleanup step.
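One way to plug in the missing unwind step, using the label structure quoted above (label names and ordering are assumptions based on the posted snippets):

```c
	coh_heap->heap = dma_heap_add(&exp_info);
	if (IS_ERR(coh_heap->heap)) {
		ret = PTR_ERR(coh_heap->heap);
		goto cg_unregister;
	}

	return 0;

cg_unregister:
	dmem_cgroup_unregister_region(coh_heap->cg);
rmem_release:
	/* undo the earlier successful rmem->ops->device_init() */
	if (rmem->ops && rmem->ops->device_release)
		rmem->ops->device_release(rmem, &coh_heap->pdev->dev);
pdev_unregister:
	platform_device_unregister(coh_heap->pdev);
```

The cgroup-registration failure path would then need to `goto rmem_release` rather than jumping straight to `pdev_unregister`.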
**8. Deferred registration only on `-ENOMEM` is fragile**
```c
int dma_heap_coherent_register(struct reserved_mem *rmem)
{
int ret;
ret = __coherent_heap_register(rmem);
if (ret == -ENOMEM)
return coherent_heap_add_deferred(rmem);
return ret;
}
```
`rmem_dma_setup()` runs from `RESERVEDMEM_OF_DECLARE`, which is during very early boot before slab is available. The assumption is that `kzalloc()` will fail with `-ENOMEM` and trigger deferral. But `platform_device_register_simple()` or other calls could fail with different error codes (e.g., `-ENODEV`, `-EPROBE_DEFER`) before the allocator is ready. If the intent is to unconditionally defer during early boot, just always defer — or better yet, don't hook into `rmem_dma_setup()` at all (see point 3).
**9. `COHERENT_AREAS_DEFERRED` Kconfig naming is inconsistent**
```
config COHERENT_AREAS_DEFERRED
int "Max deferred coherent reserved-memory regions"
```
This name doesn't follow the `DMABUF_HEAPS_` prefix convention used by other options in this Kconfig. Should be something like `DMABUF_HEAPS_COHERENT_MAX_DEFERRED`. Also, making this a Kconfig integer for what is essentially an implementation detail of early boot deferral seems like unnecessary user-facing configuration.
**10. `begin_cpu_access`/`end_cpu_access` sync correctness for coherent memory**
```c
static int coherent_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
enum dma_data_direction direction)
{
...
list_for_each_entry(a, &buffer->attachments, list) {
if (!a->mapped)
continue;
dma_sync_sgtable_for_cpu(a->dev, &a->table, direction);
}
```
This is copied from the CMA heap, but the semantics are subtly different. The underlying allocation is DMA-coherent (uncached/write-combining), so the sync on the allocation-side device is unnecessary. The sync on *attaching* devices' sg_tables may still be needed if those devices' DMA mappings are not coherent. The code works but a comment explaining this would be helpful, since the reader may wonder why coherent memory needs sync at all.
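The kind of comment the review is asking for might read:

```c
	/*
	 * The backing memory is already coherent for the heap device, so
	 * no sync is needed on the allocation side. Syncing each attached
	 * device's sg_table here covers importers whose own DMA mappings
	 * may be non-coherent.
	 */
```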
**11. Dependency on unmerged patches**
The `mem_accounting` variable and the dmem cgroup charging pattern are from Eric's series ([1] in the cover letter) which is not yet merged. This should be clearly stated as a dependency, and ideally this patch should be part of or follow that series.
#### Minor Issues
- `kmalloc_array()` is used for `buffer->pages` while `kzalloc_obj()` is used elsewhere — inconsistent with the CMA heap which uses `kmalloc_objs()` for the same purpose.
- The `pr_warn` in `coherent_heap_register_deferred()` is missing a newline: `pr_warn("Failed to add coherent heap %s",` → should have `\n`.
- No `MODULE_LICENSE()` (though since it's `bool` not `tristate`, this is cosmetic — but `MODULE_DESCRIPTION` without `MODULE_LICENSE` is inconsistent).
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 17+ messages in thread
Thread overview: 17+ messages (newest: 2026-03-08 23:02 UTC)
2026-03-06 10:36 [PATCH v3 0/6] dma-buf: heaps: add coherent reserved-memory heap Albert Esteve
2026-03-06 10:36 ` [PATCH v3 1/6] dma-buf: dma-heap: Keep track of the heap device struct Albert Esteve
2026-03-08 23:02 ` Claude review: " Claude Code Review Bot
2026-03-06 10:36 ` [PATCH v3 2/6] dma-buf: dma-heap: split dma_heap_add Albert Esteve
2026-03-08 23:02 ` Claude review: " Claude Code Review Bot
2026-03-06 10:36 ` [PATCH v3 3/6] of_reserved_mem: add a helper for rmem device_init op Albert Esteve
2026-03-08 23:02 ` Claude review: " Claude Code Review Bot
2026-03-06 10:36 ` [PATCH v3 4/6] dma: coherent: store reserved memory coherent regions Albert Esteve
2026-03-08 23:02 ` Claude review: " Claude Code Review Bot
2026-03-06 10:36 ` [PATCH v3 5/6] dma-buf: heaps: Add Coherent heap to dmabuf heaps Albert Esteve
2026-03-08 23:02 ` Claude review: " Claude Code Review Bot
2026-03-06 10:36 ` [PATCH v3 6/6] dma-buf: heaps: coherent: Turn heap into a module Albert Esteve
2026-03-08 23:02 ` Claude review: " Claude Code Review Bot
2026-03-08 23:02 ` Claude review: dma-buf: heaps: add coherent reserved-memory heap Claude Code Review Bot
-- strict thread matches above, loose matches on Subject: below --
2026-03-03 12:33 [PATCH v2 0/6] " Albert Esteve
2026-03-03 12:33 ` [PATCH v2 4/6] dma-buf: heaps: Add Coherent heap to dmabuf heaps Albert Esteve
2026-03-03 21:19 ` Claude review: " Claude Code Review Bot
2026-02-24 7:57 [PATCH] " Albert Esteve
2026-02-27 5:38 ` Claude review: " Claude Code Review Bot
2026-02-27 5:38 ` Claude Code Review Bot