* [PATCH v3 1/6] dma-buf: dma-heap: Keep track of the heap device struct
2026-03-06 10:36 [PATCH v3 0/6] dma-buf: heaps: add coherent reserved-memory heap Albert Esteve
@ 2026-03-06 10:36 ` Albert Esteve
2026-03-08 23:02 ` Claude review: " Claude Code Review Bot
2026-03-06 10:36 ` [PATCH v3 2/6] dma-buf: dma-heap: split dma_heap_add Albert Esteve
` (5 subsequent siblings)
6 siblings, 1 reply; 15+ messages in thread
From: Albert Esteve @ 2026-03-06 10:36 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan
Cc: linux-kernel, linux-media, dri-devel, linaro-mm-sig, iommu,
devicetree, Albert Esteve, mripard, echanude, John Stultz,
Maxime Ripard
From: John Stultz <john.stultz@linaro.org>
Keep track of the heap device struct. This will be needed
for DMA API allocations and other device-scoped operations.
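As context for reviewers (not part of the patch), the accessor this series adds is expected to be used roughly as follows; a hedged sketch in kernel context, where example_alloc() is a hypothetical caller:

```c
/* Sketch only: kernel context, not a standalone program. */
static int example_alloc(struct dma_heap *heap, size_t size)
{
	struct device *dev = dma_heap_get_dev(heap);
	dma_addr_t dma_handle;
	void *vaddr;

	/* The allocation is attributed to the heap's own device. */
	vaddr = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
	if (!vaddr)
		return -ENOMEM;

	dma_free_coherent(dev, size, vaddr, dma_handle);
	return 0;
}
```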
Signed-off-by: John Stultz <john.stultz@linaro.org>
Reviewed-by: Maxime Ripard <mripard@kernel.org>
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
drivers/dma-buf/dma-heap.c | 34 ++++++++++++++++++++++++++--------
include/linux/dma-heap.h | 2 ++
2 files changed, 28 insertions(+), 8 deletions(-)
diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
index ac5f8685a6494..1124d63eb1398 100644
--- a/drivers/dma-buf/dma-heap.c
+++ b/drivers/dma-buf/dma-heap.c
@@ -31,6 +31,7 @@
* @heap_devt: heap device node
* @list: list head connecting to list of heaps
* @heap_cdev: heap char device
+ * @heap_dev: heap device
*
* Represents a heap of memory from which buffers can be made.
*/
@@ -41,6 +42,7 @@ struct dma_heap {
dev_t heap_devt;
struct list_head list;
struct cdev heap_cdev;
+ struct device *heap_dev;
};
static LIST_HEAD(heap_list);
@@ -223,6 +225,19 @@ const char *dma_heap_get_name(struct dma_heap *heap)
}
EXPORT_SYMBOL_NS_GPL(dma_heap_get_name, "DMA_BUF_HEAP");
+/**
+ * dma_heap_get_dev() - get device struct for the heap
+ * @heap: DMA-Heap to retrieve device struct from
+ *
+ * Returns:
+ * The device struct for the heap.
+ */
+struct device *dma_heap_get_dev(struct dma_heap *heap)
+{
+ return heap->heap_dev;
+}
+EXPORT_SYMBOL_NS_GPL(dma_heap_get_dev, "DMA_BUF_HEAP");
+
/**
* dma_heap_add - adds a heap to dmabuf heaps
* @exp_info: information needed to register this heap
@@ -230,7 +245,6 @@ EXPORT_SYMBOL_NS_GPL(dma_heap_get_name, "DMA_BUF_HEAP");
struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
{
struct dma_heap *heap, *h, *err_ret;
- struct device *dev_ret;
unsigned int minor;
int ret;
@@ -272,14 +286,14 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
goto err1;
}
- dev_ret = device_create(dma_heap_class,
- NULL,
- heap->heap_devt,
- NULL,
- heap->name);
- if (IS_ERR(dev_ret)) {
+ heap->heap_dev = device_create(dma_heap_class,
+ NULL,
+ heap->heap_devt,
+ NULL,
+ heap->name);
+ if (IS_ERR(heap->heap_dev)) {
pr_err("dma_heap: Unable to create device\n");
- err_ret = ERR_CAST(dev_ret);
+ err_ret = ERR_CAST(heap->heap_dev);
goto err2;
}
@@ -295,6 +309,10 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
}
}
+ /* Make sure it doesn't disappear on us */
+ heap->heap_dev = get_device(heap->heap_dev);
+
+
/* Add heap to the list */
list_add(&heap->list, &heap_list);
mutex_unlock(&heap_list_lock);
diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
index 648328a64b27e..493085e69b70e 100644
--- a/include/linux/dma-heap.h
+++ b/include/linux/dma-heap.h
@@ -12,6 +12,7 @@
#include <linux/types.h>
struct dma_heap;
+struct device;
/**
* struct dma_heap_ops - ops to operate on a given heap
@@ -43,6 +44,7 @@ struct dma_heap_export_info {
void *dma_heap_get_drvdata(struct dma_heap *heap);
const char *dma_heap_get_name(struct dma_heap *heap);
+struct device *dma_heap_get_dev(struct dma_heap *heap);
struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info);
--
2.52.0
^ permalink raw reply related [flat|nested] 15+ messages in thread

* Claude review: dma-buf: dma-heap: Keep track of the heap device struct
2026-03-06 10:36 ` [PATCH v3 1/6] dma-buf: dma-heap: Keep track of the heap device struct Albert Esteve
@ 2026-03-08 23:02 ` Claude Code Review Bot
0 siblings, 0 replies; 15+ messages in thread
From: Claude Code Review Bot @ 2026-03-08 23:02 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
This patch stores `heap->heap_dev` and adds a `dma_heap_get_dev()` accessor. Originally from John Stultz with Maxime's Reviewed-by.
**Issue: Double blank line**
```c
+ /* Make sure it doesn't disappear on us */
+ heap->heap_dev = get_device(heap->heap_dev);
+
+
/* Add heap to the list */
```
There's an extra blank line (cosmetic, minor). This gets removed in patch 2 anyway, but still worth noting for the patch itself.
**Observation**: The `get_device()` call is done *after* the duplicate-name check loop, which means if the name check passes, we do `get_device()`. But if we fail anywhere after `device_create()` in err3, we do `device_destroy()` without a corresponding `put_device()`. This asymmetry is benign in the current patch because `device_destroy()` does the cleanup, but the patch 2 refactoring changes this significantly. Fine for bisectability since patch 2 immediately reworks this.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH v3 2/6] dma-buf: dma-heap: split dma_heap_add
2026-03-06 10:36 [PATCH v3 0/6] dma-buf: heaps: add coherent reserved-memory heap Albert Esteve
2026-03-06 10:36 ` [PATCH v3 1/6] dma-buf: dma-heap: Keep track of the heap device struct Albert Esteve
@ 2026-03-06 10:36 ` Albert Esteve
2026-03-08 23:02 ` Claude review: " Claude Code Review Bot
2026-03-06 10:36 ` [PATCH v3 3/6] of_reserved_mem: add a helper for rmem device_init op Albert Esteve
` (4 subsequent siblings)
6 siblings, 1 reply; 15+ messages in thread
From: Albert Esteve @ 2026-03-06 10:36 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan
Cc: linux-kernel, linux-media, dri-devel, linaro-mm-sig, iommu,
devicetree, Albert Esteve, mripard, echanude
Split dma_heap_add() into creation and registration
phases while preserving the ordering between
cdev_add() and device_add(), and ensuring all
device fields are initialised.
This lets callers build a heap and its device,
bind reserved memory, and cleanly unwind on failure
before the heap is registered. It also avoids a window
where userspace can see a heap that exists but isn’t
fully functional. The coherent heap will need this to
bind rmem to the heap device prior to registration.
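The intended calling pattern, sketched in kernel context (example_heap_setup() is hypothetical; of_reserved_mem_device_init_with_mem() is introduced later in this series):

```c
static int example_heap_setup(const struct dma_heap_export_info *exp_info,
			      struct reserved_mem *rmem)
{
	struct dma_heap *heap;
	int ret;

	heap = dma_heap_create(exp_info);
	if (IS_ERR(heap))
		return PTR_ERR(heap);

	/* Bind resources to the heap device before it becomes visible. */
	ret = of_reserved_mem_device_init_with_mem(dma_heap_get_dev(heap),
						   rmem);
	if (ret) {
		dma_heap_destroy(heap);
		return ret;
	}

	/*
	 * As posted, dma_heap_register() already destroys the heap on its
	 * own error paths, so the caller must not drop the reference again.
	 */
	return dma_heap_register(heap);
}
```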
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
drivers/dma-buf/dma-heap.c | 126 +++++++++++++++++++++++++++++++++++----------
include/linux/dma-heap.h | 3 ++
2 files changed, 103 insertions(+), 26 deletions(-)
diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
index 1124d63eb1398..ba87e5ac16ae2 100644
--- a/drivers/dma-buf/dma-heap.c
+++ b/drivers/dma-buf/dma-heap.c
@@ -238,15 +238,30 @@ struct device *dma_heap_get_dev(struct dma_heap *heap)
}
EXPORT_SYMBOL_NS_GPL(dma_heap_get_dev, "DMA_BUF_HEAP");
+static void dma_heap_dev_release(struct device *dev)
+{
+ struct dma_heap *heap;
+
+ pr_debug("heap device: '%s': %s\n", dev_name(dev), __func__);
+ heap = dev_get_drvdata(dev);
+ kfree(heap->name);
+ kfree(heap);
+ kfree(dev);
+}
+
/**
- * dma_heap_add - adds a heap to dmabuf heaps
- * @exp_info: information needed to register this heap
+ * dma_heap_create() - allocate and initialize a heap object
+ * @exp_info: information needed to create a heap
+ *
+ * Creates a heap instance but does not register it or create device nodes.
+ * Use dma_heap_register() to make it visible to userspace, or
+ * dma_heap_destroy() to release it.
+ *
+ * Returns a heap on success or ERR_PTR(-errno) on failure.
*/
-struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
+struct dma_heap *dma_heap_create(const struct dma_heap_export_info *exp_info)
{
- struct dma_heap *heap, *h, *err_ret;
- unsigned int minor;
- int ret;
+ struct dma_heap *heap;
if (!exp_info->name || !strcmp(exp_info->name, "")) {
pr_err("dma_heap: Cannot add heap without a name\n");
@@ -265,13 +280,41 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
heap->name = exp_info->name;
heap->ops = exp_info->ops;
heap->priv = exp_info->priv;
+ heap->heap_dev = kzalloc_obj(*heap->heap_dev);
+ if (!heap->heap_dev) {
+ kfree(heap);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ device_initialize(heap->heap_dev);
+ dev_set_drvdata(heap->heap_dev, heap);
+
+ dev_set_name(heap->heap_dev, heap->name);
+ heap->heap_dev->class = dma_heap_class;
+ heap->heap_dev->release = dma_heap_dev_release;
+
+ return heap;
+}
+EXPORT_SYMBOL_NS_GPL(dma_heap_create, "DMA_BUF_HEAP");
+
+/**
+ * dma_heap_register() - register a heap with the dma-heap framework
+ * @heap: heap instance created with dma_heap_create()
+ *
+ * Registers the heap, creating its device node and adding it to the heap
+ * list. Returns 0 on success or a negative error code on failure.
+ */
+int dma_heap_register(struct dma_heap *heap)
+{
+ struct dma_heap *h;
+ unsigned int minor;
+ int ret;
/* Find unused minor number */
ret = xa_alloc(&dma_heap_minors, &minor, heap,
XA_LIMIT(0, NUM_HEAP_MINORS - 1), GFP_KERNEL);
if (ret < 0) {
pr_err("dma_heap: Unable to get minor number for heap\n");
- err_ret = ERR_PTR(ret);
goto err0;
}
@@ -282,42 +325,34 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
ret = cdev_add(&heap->heap_cdev, heap->heap_devt, 1);
if (ret < 0) {
pr_err("dma_heap: Unable to add char device\n");
- err_ret = ERR_PTR(ret);
goto err1;
}
- heap->heap_dev = device_create(dma_heap_class,
- NULL,
- heap->heap_devt,
- NULL,
- heap->name);
- if (IS_ERR(heap->heap_dev)) {
- pr_err("dma_heap: Unable to create device\n");
- err_ret = ERR_CAST(heap->heap_dev);
+ heap->heap_dev->devt = heap->heap_devt;
+
+ ret = device_add(heap->heap_dev);
+ if (ret) {
+ pr_err("dma_heap: Unable to add device\n");
goto err2;
}
mutex_lock(&heap_list_lock);
/* check the name is unique */
list_for_each_entry(h, &heap_list, list) {
- if (!strcmp(h->name, exp_info->name)) {
+ if (!strcmp(h->name, heap->name)) {
mutex_unlock(&heap_list_lock);
pr_err("dma_heap: Already registered heap named %s\n",
- exp_info->name);
- err_ret = ERR_PTR(-EINVAL);
+ heap->name);
+ ret = -EINVAL;
goto err3;
}
}
- /* Make sure it doesn't disappear on us */
- heap->heap_dev = get_device(heap->heap_dev);
-
-
/* Add heap to the list */
list_add(&heap->list, &heap_list);
mutex_unlock(&heap_list_lock);
- return heap;
+ return 0;
err3:
device_destroy(dma_heap_class, heap->heap_devt);
@@ -326,8 +361,47 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
err1:
xa_erase(&dma_heap_minors, minor);
err0:
- kfree(heap);
- return err_ret;
+ dma_heap_destroy(heap);
+ return ret;
+}
+EXPORT_SYMBOL_NS_GPL(dma_heap_register, "DMA_BUF_HEAP");
+
+/**
+ * dma_heap_destroy() - release a heap created by dma_heap_create()
+ * @heap: heap instance to release
+ *
+ * Drops the heap device reference; the heap and its device are freed in the
+ * device release path when the last reference is gone.
+ */
+void dma_heap_destroy(struct dma_heap *heap)
+{
+ put_device(heap->heap_dev);
+}
+EXPORT_SYMBOL_NS_GPL(dma_heap_destroy, "DMA_BUF_HEAP");
+
+/**
+ * dma_heap_add - adds a heap to dmabuf heaps
+ * @exp_info: information needed to register this heap
+ */
+struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
+{
+ struct dma_heap *heap;
+ int ret;
+
+ heap = dma_heap_create(exp_info);
+ if (IS_ERR(heap)) {
+ pr_err("dma_heap: failed to create heap (%ld)\n", PTR_ERR(heap));
+ return ERR_CAST(heap);
+ }
+
+ ret = dma_heap_register(heap);
+ if (ret) {
+ pr_err("dma_heap: failed to register heap (%d)\n", ret);
+ dma_heap_destroy(heap);
+ return ERR_PTR(ret);
+ }
+
+ return heap;
}
EXPORT_SYMBOL_NS_GPL(dma_heap_add, "DMA_BUF_HEAP");
diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
index 493085e69b70e..1b0ea43ba66c3 100644
--- a/include/linux/dma-heap.h
+++ b/include/linux/dma-heap.h
@@ -46,6 +46,9 @@ void *dma_heap_get_drvdata(struct dma_heap *heap);
const char *dma_heap_get_name(struct dma_heap *heap);
struct device *dma_heap_get_dev(struct dma_heap *heap);
+struct dma_heap *dma_heap_create(const struct dma_heap_export_info *exp_info);
+int dma_heap_register(struct dma_heap *heap);
+void dma_heap_destroy(struct dma_heap *heap);
struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info);
extern bool mem_accounting;
--
2.52.0
^ permalink raw reply related [flat|nested] 15+ messages in thread

* Claude review: dma-buf: dma-heap: split dma_heap_add
2026-03-06 10:36 ` [PATCH v3 2/6] dma-buf: dma-heap: split dma_heap_add Albert Esteve
@ 2026-03-08 23:02 ` Claude Code Review Bot
0 siblings, 0 replies; 15+ messages in thread
From: Claude Code Review Bot @ 2026-03-08 23:02 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
This is the core refactoring, splitting `dma_heap_add()` into `dma_heap_create()` / `dma_heap_register()` / `dma_heap_destroy()`.
**Issue: `heap->name` is not duplicated — double-free risk**
In `dma_heap_create()`:
```c
heap->name = exp_info->name;
```
The name is stored as a raw pointer from `exp_info`. But `dma_heap_dev_release()` does:
```c
kfree(heap->name);
```
If the caller passed a string literal or stack-allocated name (which `dma_heap_add()` callers like system_heap and cma_heap do with string literals), this `kfree()` will corrupt memory. Looking at the coherent heap (patch 5), `coh_heap->name = kstrdup(rmem->name, GFP_KERNEL)` is passed in, so it's safe for that caller. But *any existing caller* of `dma_heap_add()` will now have their name kfree'd in the release path. The original code (pre-series) never freed `heap->name` — the heap struct was freed directly in the error path, and heaps were never removed. This is a **regression for existing callers**.
The `dma_heap_create()` should `kstrdup()` the name, or `dma_heap_dev_release()` should not free it.
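A minimal sketch of the first option, assuming `dma_heap_create()` takes ownership of a private copy of the name (untested):

```c
	/* In dma_heap_create(), instead of heap->name = exp_info->name: */
	heap->name = kstrdup(exp_info->name, GFP_KERNEL);
	if (!heap->name) {
		kfree(heap);
		return ERR_PTR(-ENOMEM);
	}

	/*
	 * dma_heap_dev_release() may then kfree(heap->name) safely, no
	 * matter how the caller allocated exp_info->name.
	 */
```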
**Issue: `dma_heap_register()` error path calls `dma_heap_destroy()` — double destroy possible**
In `dma_heap_register()` error paths, `err0` calls `dma_heap_destroy(heap)`:
```c
err0:
dma_heap_destroy(heap);
return ret;
```
And `dma_heap_add()` on register failure also calls:
```c
ret = dma_heap_register(heap);
if (ret) {
pr_err("dma_heap: failed to register heap (%d)\n", ret);
dma_heap_destroy(heap);
return ERR_PTR(ret);
}
```
So `dma_heap_destroy()` (which is `put_device()`) is called **twice** — once inside `dma_heap_register()` and once in `dma_heap_add()`. After `device_initialize()`, the refcount is 1. The first `put_device()` drops it to 0 and triggers the release callback, freeing the heap. The second `put_device()` is a UAF. This is a **bug**.
**Issue: `err3` path does `device_destroy()` then falls through to `dma_heap_destroy()`**
```c
err3:
device_destroy(dma_heap_class, heap->heap_devt);
err2:
cdev_del(&heap->heap_cdev);
err1:
xa_erase(&dma_heap_minors, minor);
err0:
dma_heap_destroy(heap);
```
After `device_add()` succeeds, `err3` calls `device_destroy()` which does `device_unregister()` → `device_del()` + `put_device()`. Then execution falls through to `err0` which calls `dma_heap_destroy()` → `put_device()` again. Since `device_initialize()` sets refcount to 1 and `device_add()` doesn't bump it, the `device_destroy()`'s `put_device()` drops to 0 and frees, then `dma_heap_destroy()` is UAF. This needs restructuring.
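One possible restructuring (sketch only, untested): undo `device_add()` with `device_del()` so every error path leaves exactly one reference for the caller to drop:

```c
err3:
	device_del(heap->heap_dev);	/* undo device_add() only, no put */
err2:
	cdev_del(&heap->heap_cdev);
err1:
	xa_erase(&dma_heap_minors, minor);
err0:
	return ret;	/* caller drops the single remaining reference */
```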
**Minor: `kzalloc_obj` usage**
```c
heap->heap_dev = kzalloc_obj(*heap->heap_dev);
```
This appears to be a custom macro. It works but is unconventional — `kzalloc(sizeof(*heap->heap_dev), GFP_KERNEL)` is the standard pattern.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH v3 3/6] of_reserved_mem: add a helper for rmem device_init op
2026-03-06 10:36 [PATCH v3 0/6] dma-buf: heaps: add coherent reserved-memory heap Albert Esteve
2026-03-06 10:36 ` [PATCH v3 1/6] dma-buf: dma-heap: Keep track of the heap device struct Albert Esteve
2026-03-06 10:36 ` [PATCH v3 2/6] dma-buf: dma-heap: split dma_heap_add Albert Esteve
@ 2026-03-06 10:36 ` Albert Esteve
2026-03-08 23:02 ` Claude review: " Claude Code Review Bot
2026-03-06 10:36 ` [PATCH v3 4/6] dma: coherent: store reserved memory coherent regions Albert Esteve
` (3 subsequent siblings)
6 siblings, 1 reply; 15+ messages in thread
From: Albert Esteve @ 2026-03-06 10:36 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan
Cc: linux-kernel, linux-media, dri-devel, linaro-mm-sig, iommu,
devicetree, Albert Esteve, mripard, echanude
Add a helper function that wraps the internal reserved-memory
device_init call and expose it externally.
Use the new helper within of_reserved_mem_device_init_by_idx().
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
drivers/of/of_reserved_mem.c | 68 ++++++++++++++++++++++++++---------------
include/linux/of_reserved_mem.h | 8 +++++
2 files changed, 52 insertions(+), 24 deletions(-)
diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
index 1fd28f8056108..26ca871f7f919 100644
--- a/drivers/of/of_reserved_mem.c
+++ b/drivers/of/of_reserved_mem.c
@@ -605,6 +605,49 @@ struct rmem_assigned_device {
static LIST_HEAD(of_rmem_assigned_device_list);
static DEFINE_MUTEX(of_rmem_assigned_device_mutex);
+/**
+ * of_reserved_mem_device_init_with_mem() - assign reserved memory region to
+ * given device
+ * @dev: Pointer to the device to configure
+ * @rmem: Reserved memory region to assign
+ *
+ * This function assigns respective DMA-mapping operations based on the
+ * reserved memory region already provided in @rmem to the @dev device,
+ * without walking DT nodes.
+ *
+ * Returns error code or zero on success.
+ */
+int of_reserved_mem_device_init_with_mem(struct device *dev,
+ struct reserved_mem *rmem)
+{
+ struct rmem_assigned_device *rd;
+ int ret;
+
+ if (!dev || !rmem || !rmem->ops || !rmem->ops->device_init)
+ return -EINVAL;
+
+ rd = kmalloc_obj(struct rmem_assigned_device);
+ if (!rd)
+ return -ENOMEM;
+
+ ret = rmem->ops->device_init(rmem, dev);
+ if (ret == 0) {
+ rd->dev = dev;
+ rd->rmem = rmem;
+
+ mutex_lock(&of_rmem_assigned_device_mutex);
+ list_add(&rd->list, &of_rmem_assigned_device_list);
+ mutex_unlock(&of_rmem_assigned_device_mutex);
+
+ dev_info(dev, "assigned reserved memory node %s\n", rmem->name);
+ } else {
+ kfree(rd);
+ }
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(of_reserved_mem_device_init_with_mem);
+
/**
* of_reserved_mem_device_init_by_idx() - assign reserved memory region to
* given device
@@ -623,10 +666,8 @@ static DEFINE_MUTEX(of_rmem_assigned_device_mutex);
int of_reserved_mem_device_init_by_idx(struct device *dev,
struct device_node *np, int idx)
{
- struct rmem_assigned_device *rd;
struct device_node *target;
struct reserved_mem *rmem;
- int ret;
if (!np || !dev)
return -EINVAL;
@@ -643,28 +684,7 @@ int of_reserved_mem_device_init_by_idx(struct device *dev,
rmem = of_reserved_mem_lookup(target);
of_node_put(target);
- if (!rmem || !rmem->ops || !rmem->ops->device_init)
- return -EINVAL;
-
- rd = kmalloc_obj(struct rmem_assigned_device);
- if (!rd)
- return -ENOMEM;
-
- ret = rmem->ops->device_init(rmem, dev);
- if (ret == 0) {
- rd->dev = dev;
- rd->rmem = rmem;
-
- mutex_lock(&of_rmem_assigned_device_mutex);
- list_add(&rd->list, &of_rmem_assigned_device_list);
- mutex_unlock(&of_rmem_assigned_device_mutex);
-
- dev_info(dev, "assigned reserved memory node %s\n", rmem->name);
- } else {
- kfree(rd);
- }
-
- return ret;
+ return of_reserved_mem_device_init_with_mem(dev, rmem);
}
EXPORT_SYMBOL_GPL(of_reserved_mem_device_init_by_idx);
diff --git a/include/linux/of_reserved_mem.h b/include/linux/of_reserved_mem.h
index f573423359f48..12f7ddb7ee61f 100644
--- a/include/linux/of_reserved_mem.h
+++ b/include/linux/of_reserved_mem.h
@@ -32,6 +32,8 @@ typedef int (*reservedmem_of_init_fn)(struct reserved_mem *rmem);
#define RESERVEDMEM_OF_DECLARE(name, compat, init) \
_OF_DECLARE(reservedmem, name, compat, init, reservedmem_of_init_fn)
+int of_reserved_mem_device_init_with_mem(struct device *dev,
+ struct reserved_mem *rmem);
int of_reserved_mem_device_init_by_idx(struct device *dev,
struct device_node *np, int idx);
int of_reserved_mem_device_init_by_name(struct device *dev,
@@ -51,6 +53,12 @@ int of_reserved_mem_region_count(const struct device_node *np);
#define RESERVEDMEM_OF_DECLARE(name, compat, init) \
_OF_DECLARE_STUB(reservedmem, name, compat, init, reservedmem_of_init_fn)
+static inline int of_reserved_mem_device_init_with_mem(struct device *dev,
+ struct reserved_mem *rmem)
+{
+ return -EOPNOTSUPP;
+}
+
static inline int of_reserved_mem_device_init_by_idx(struct device *dev,
struct device_node *np, int idx)
{
--
2.52.0
^ permalink raw reply related [flat|nested] 15+ messages in thread

* Claude review: of_reserved_mem: add a helper for rmem device_init op
2026-03-06 10:36 ` [PATCH v3 3/6] of_reserved_mem: add a helper for rmem device_init op Albert Esteve
@ 2026-03-08 23:02 ` Claude Code Review Bot
0 siblings, 0 replies; 15+ messages in thread
From: Claude Code Review Bot @ 2026-03-08 23:02 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Clean refactoring. The new `of_reserved_mem_device_init_with_mem()` extracts the core logic from `of_reserved_mem_device_init_by_idx()`.
**Minor: NULL check change in behavior**
The original code in `of_reserved_mem_device_init_by_idx()` checked:
```c
if (!rmem || !rmem->ops || !rmem->ops->device_init)
return -EINVAL;
```
Now `of_reserved_mem_device_init_by_idx()` calls the new helper which does these checks, but `of_reserved_mem_lookup()` can return NULL, and the new helper also checks `!rmem`, so the behavior is preserved. The stub for `!CONFIG_OF_RESERVED_MEM` returns `-EOPNOTSUPP` instead of the previous `-ENOSYS` (from `of_reserved_mem_device_init_by_idx`'s stub) — this is fine and arguably more correct.
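For reference, a sketch of how a caller holding a `struct reserved_mem` directly (e.g. one obtained from the iterator added later in this series) would use the new helper; example_bind() is hypothetical:

```c
static int example_bind(struct device *dev, struct reserved_mem *rmem)
{
	int ret;

	/* -EINVAL covers NULL arguments and a missing device_init op */
	ret = of_reserved_mem_device_init_with_mem(dev, rmem);
	if (ret)
		return ret;

	/* dev->dma_mem now points at the region's coherent pool */
	return 0;
}
```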
No issues found. Well done.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH v3 4/6] dma: coherent: store reserved memory coherent regions
2026-03-06 10:36 [PATCH v3 0/6] dma-buf: heaps: add coherent reserved-memory heap Albert Esteve
` (2 preceding siblings ...)
2026-03-06 10:36 ` [PATCH v3 3/6] of_reserved_mem: add a helper for rmem device_init op Albert Esteve
@ 2026-03-06 10:36 ` Albert Esteve
2026-03-08 23:02 ` Claude review: " Claude Code Review Bot
2026-03-06 10:36 ` [PATCH v3 5/6] dma-buf: heaps: Add Coherent heap to dmabuf heaps Albert Esteve
` (2 subsequent siblings)
6 siblings, 1 reply; 15+ messages in thread
From: Albert Esteve @ 2026-03-06 10:36 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan
Cc: linux-kernel, linux-media, dri-devel, linaro-mm-sig, iommu,
devicetree, Albert Esteve, mripard, echanude
Add the logic to store coherent reserved-memory regions within
the coherent DMA code, and provide an iterator
(dma_coherent_get_reserved_region()) that lets consumers of
this API retrieve the regions.
Note: since the consumer of this iterator will be the coherent
memory dmabuf heap module, this commit introduces a check for
CONFIG_DMABUF_HEAPS_COHERENT, which is defined in the
subsequent patch, to maintain a clean split between the core
kernel code and the heap module code.
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
include/linux/dma-map-ops.h | 7 +++++++
kernel/dma/coherent.c | 34 ++++++++++++++++++++++++++++++++++
2 files changed, 41 insertions(+)
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 60b63756df821..c87e5e44e5383 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -12,6 +12,7 @@
struct cma;
struct iommu_ops;
+struct reserved_mem;
struct dma_map_ops {
void *(*alloc)(struct device *dev, size_t size,
@@ -161,6 +162,7 @@ int dma_alloc_from_dev_coherent(struct device *dev, ssize_t size,
int dma_release_from_dev_coherent(struct device *dev, int order, void *vaddr);
int dma_mmap_from_dev_coherent(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, size_t size, int *ret);
+struct reserved_mem *dma_coherent_get_reserved_region(unsigned int idx);
#else
static inline int dma_declare_coherent_memory(struct device *dev,
phys_addr_t phys_addr, dma_addr_t device_addr, size_t size)
@@ -172,6 +174,11 @@ static inline int dma_declare_coherent_memory(struct device *dev,
#define dma_release_from_dev_coherent(dev, order, vaddr) (0)
#define dma_mmap_from_dev_coherent(dev, vma, vaddr, order, ret) (0)
static inline void dma_release_coherent_memory(struct device *dev) { }
+static inline
+struct reserved_mem *dma_coherent_get_reserved_region(unsigned int idx)
+{
+ return NULL;
+}
#endif /* CONFIG_DMA_DECLARE_COHERENT */
#ifdef CONFIG_DMA_GLOBAL_POOL
diff --git a/kernel/dma/coherent.c b/kernel/dma/coherent.c
index 1147497bc512c..d0d0979ffb153 100644
--- a/kernel/dma/coherent.c
+++ b/kernel/dma/coherent.c
@@ -9,6 +9,7 @@
#include <linux/module.h>
#include <linux/dma-direct.h>
#include <linux/dma-map-ops.h>
+#include <linux/dma-heap.h>
struct dma_coherent_mem {
void *virt_base;
@@ -334,6 +335,31 @@ static phys_addr_t dma_reserved_default_memory_base __initdata;
static phys_addr_t dma_reserved_default_memory_size __initdata;
#endif
+#define MAX_COHERENT_REGIONS 64
+
+static struct reserved_mem *rmem_coherent_areas[MAX_COHERENT_REGIONS];
+static unsigned int rmem_coherent_areas_num;
+
+static int rmem_coherent_insert_area(struct reserved_mem *rmem)
+{
+ if (rmem_coherent_areas_num >= MAX_COHERENT_REGIONS) {
+ pr_warn("Deferred heap areas list full, dropping %s\n",
+ rmem->name ? rmem->name : "unknown");
+ return -EINVAL;
+ }
+ rmem_coherent_areas[rmem_coherent_areas_num++] = rmem;
+ return 0;
+}
+
+struct reserved_mem *dma_coherent_get_reserved_region(unsigned int idx)
+{
+ if (idx >= rmem_coherent_areas_num)
+ return NULL;
+
+ return rmem_coherent_areas[idx];
+}
+EXPORT_SYMBOL_GPL(dma_coherent_get_reserved_region);
+
static int rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev)
{
struct dma_coherent_mem *mem = rmem->priv;
@@ -393,6 +419,14 @@ static int __init rmem_dma_setup(struct reserved_mem *rmem)
rmem->ops = &rmem_dma_ops;
pr_info("Reserved memory: created DMA memory pool at %pa, size %ld MiB\n",
&rmem->base, (unsigned long)rmem->size / SZ_1M);
+
+ if (IS_ENABLED(CONFIG_DMABUF_HEAPS_COHERENT)) {
+ int ret = rmem_coherent_insert_area(rmem);
+
+ if (ret)
+ pr_warn("Reserved memory: failed to store coherent area for %s (%d)\n",
+ rmem->name ? rmem->name : "unknown", ret);
+ }
return 0;
}
--
2.52.0
^ permalink raw reply related [flat|nested] 15+ messages in thread

* Claude review: dma: coherent: store reserved memory coherent regions
2026-03-06 10:36 ` [PATCH v3 4/6] dma: coherent: store reserved memory coherent regions Albert Esteve
@ 2026-03-08 23:02 ` Claude Code Review Bot
0 siblings, 0 replies; 15+ messages in thread
From: Claude Code Review Bot @ 2026-03-08 23:02 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
**Issue: Static array with hardcoded limit**
```c
#define MAX_COHERENT_REGIONS 64
static struct reserved_mem *rmem_coherent_areas[MAX_COHERENT_REGIONS];
static unsigned int rmem_coherent_areas_num;
```
64 regions is probably sufficient for practical purposes, but using a static array feels fragile. Since `rmem_dma_setup()` is called early (from `__reservedmem_of_table`), dynamic allocation may not be available, so this is acceptable. However, the error return value should be `-ENOSPC` rather than `-EINVAL`:
```c
if (rmem_coherent_areas_num >= MAX_COHERENT_REGIONS) {
...
return -EINVAL;
}
```
`-ENOSPC` or `-ENOMEM` would be more descriptive.
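The suggested variant, modeled as a self-contained userspace mock (the array and limit mirror the patch; logging is omitted and `struct reserved_mem` is left opaque):

```c
#include <assert.h>
#include <errno.h>

#define MAX_COHERENT_REGIONS 64

struct reserved_mem;	/* opaque for this mock */

static struct reserved_mem *rmem_coherent_areas[MAX_COHERENT_REGIONS];
static unsigned int rmem_coherent_areas_num;

static int rmem_coherent_insert_area(struct reserved_mem *rmem)
{
	if (rmem_coherent_areas_num >= MAX_COHERENT_REGIONS)
		return -ENOSPC;	/* "no space left", not "bad argument" */

	rmem_coherent_areas[rmem_coherent_areas_num++] = rmem;
	return 0;
}
```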
**Issue: `#include <linux/dma-heap.h>` added to coherent.c but not used**
```c
+#include <linux/dma-heap.h>
```
This header is added but nothing from it is used in this file. The `IS_ENABLED(CONFIG_DMABUF_HEAPS_COHERENT)` check doesn't require it. This include should be dropped.
**Design concern: `IS_ENABLED()` check couples DMA core to heap config**
```c
if (IS_ENABLED(CONFIG_DMABUF_HEAPS_COHERENT)) {
int ret = rmem_coherent_insert_area(rmem);
```
This is noted in the commit message. The `IS_ENABLED()` approach means the storage array and insertion code are always compiled in but guarded at runtime. Since this is `__init` code, it's not a runtime concern, but it does add code to `kernel/dma/coherent.c` that only matters when the heap module is loaded. An alternative would be to have the heap module iterate reserved memory regions itself, avoiding coupling. However, since `rmem_dma_setup()` is an `__init` function and the rmem structs are available post-init, the module could potentially walk the reserved_mem array directly. That said, this approach works.
**Thread safety**: The array is populated at `__init` time (single-threaded) and read later by the module init. No races possible. Fine.
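The consumer-side pattern the accessor implies, modeled in userspace with stub data (in the real series the caller is the heap module's init path):

```c
#include <assert.h>
#include <stddef.h>

struct reserved_mem { const char *name; };

static struct reserved_mem areas[] = { { "pool-a" }, { "pool-b" } };
static const unsigned int areas_num = 2;

/* Mirrors dma_coherent_get_reserved_region(): NULL past the end. */
static struct reserved_mem *get_region(unsigned int idx)
{
	return idx < areas_num ? &areas[idx] : NULL;
}

static unsigned int count_regions(void)
{
	struct reserved_mem *rmem;
	unsigned int i, n = 0;

	for (i = 0; (rmem = get_region(i)); i++)
		n++;	/* module init would create one heap per region */
	return n;
}
```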
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH v3 5/6] dma-buf: heaps: Add Coherent heap to dmabuf heaps
2026-03-06 10:36 [PATCH v3 0/6] dma-buf: heaps: add coherent reserved-memory heap Albert Esteve
` (3 preceding siblings ...)
2026-03-06 10:36 ` [PATCH v3 4/6] dma: coherent: store reserved memory coherent regions Albert Esteve
@ 2026-03-06 10:36 ` Albert Esteve
2026-03-08 23:02 ` Claude review: " Claude Code Review Bot
2026-03-06 10:36 ` [PATCH v3 6/6] dma-buf: heaps: coherent: Turn heap into a module Albert Esteve
2026-03-08 23:02 ` Claude review: dma-buf: heaps: add coherent reserved-memory heap Claude Code Review Bot
6 siblings, 1 reply; 15+ messages in thread
From: Albert Esteve @ 2026-03-06 10:36 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan
Cc: linux-kernel, linux-media, dri-devel, linaro-mm-sig, iommu,
devicetree, Albert Esteve, mripard, echanude
Expose DT coherent reserved-memory pools ("shared-dma-pool"
without "reusable") as dma-buf heaps, creating one heap per
region so userspace can allocate from the exact device-local
pool intended for coherent DMA.
This is a missing backend in the long-term effort to steer
userspace buffer allocations (DRM, v4l2, dma-buf heaps)
through heaps for clearer cgroup accounting. CMA and system
heaps already exist; non-reusable coherent reserved memory
did not.
The heap binds the heap device to each memory region so
coherent allocations use the correct dev->dma_mem, and
it defers registration until module_init when normal
allocators are available.
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
drivers/dma-buf/heaps/Kconfig | 9 +
drivers/dma-buf/heaps/Makefile | 1 +
drivers/dma-buf/heaps/coherent_heap.c | 414 ++++++++++++++++++++++++++++++++++
3 files changed, 424 insertions(+)
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index a5eef06c42264..aeb475e585048 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -12,3 +12,12 @@ config DMABUF_HEAPS_CMA
Choose this option to enable dma-buf CMA heap. This heap is backed
by the Contiguous Memory Allocator (CMA). If your system has these
regions, you should say Y here.
+
+config DMABUF_HEAPS_COHERENT
+ bool "DMA-BUF Coherent Reserved-Memory Heap"
+ depends on DMABUF_HEAPS && OF_RESERVED_MEM && DMA_DECLARE_COHERENT
+ help
+ Choose this option to enable coherent reserved-memory dma-buf heaps.
+ This heap is backed by non-reusable DT "shared-dma-pool" regions.
+ If your system defines coherent reserved-memory regions, you should
+ say Y here.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index 974467791032f..96bda7a65f041 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,3 +1,4 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o
+obj-$(CONFIG_DMABUF_HEAPS_COHERENT) += coherent_heap.o
diff --git a/drivers/dma-buf/heaps/coherent_heap.c b/drivers/dma-buf/heaps/coherent_heap.c
new file mode 100644
index 0000000000000..55f53f87c4c15
--- /dev/null
+++ b/drivers/dma-buf/heaps/coherent_heap.c
@@ -0,0 +1,414 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMABUF heap for coherent reserved-memory regions
+ *
+ * Copyright (C) 2026 Red Hat, Inc.
+ * Author: Albert Esteve <aesteve@redhat.com>
+ *
+ */
+
+#include <linux/dma-buf.h>
+#include <linux/dma-heap.h>
+#include <linux/dma-map-ops.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/highmem.h>
+#include <linux/iosys-map.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+
+struct coherent_heap {
+ struct dma_heap *heap;
+ struct reserved_mem *rmem;
+ char *name;
+};
+
+struct coherent_heap_buffer {
+ struct coherent_heap *heap;
+ struct list_head attachments;
+ struct mutex lock;
+ unsigned long len;
+ dma_addr_t dma_addr;
+ void *alloc_vaddr;
+ struct page **pages;
+ pgoff_t pagecount;
+ int vmap_cnt;
+ void *vaddr;
+};
+
+struct dma_heap_attachment {
+ struct device *dev;
+ struct sg_table table;
+ struct list_head list;
+ bool mapped;
+};
+
+static int coherent_heap_attach(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *attachment)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a;
+ int ret;
+
+ a = kzalloc_obj(*a);
+ if (!a)
+ return -ENOMEM;
+
+ ret = sg_alloc_table_from_pages(&a->table, buffer->pages,
+ buffer->pagecount, 0,
+ buffer->pagecount << PAGE_SHIFT,
+ GFP_KERNEL);
+ if (ret) {
+ kfree(a);
+ return ret;
+ }
+
+ a->dev = attachment->dev;
+ INIT_LIST_HEAD(&a->list);
+ a->mapped = false;
+
+ attachment->priv = a;
+
+ mutex_lock(&buffer->lock);
+ list_add(&a->list, &buffer->attachments);
+ mutex_unlock(&buffer->lock);
+
+ return 0;
+}
+
+static void coherent_heap_detach(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *attachment)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a = attachment->priv;
+
+ mutex_lock(&buffer->lock);
+ list_del(&a->list);
+ mutex_unlock(&buffer->lock);
+
+ sg_free_table(&a->table);
+ kfree(a);
+}
+
+static struct sg_table *coherent_heap_map_dma_buf(struct dma_buf_attachment *attachment,
+ enum dma_data_direction direction)
+{
+ struct dma_heap_attachment *a = attachment->priv;
+ struct sg_table *table = &a->table;
+ int ret;
+
+ ret = dma_map_sgtable(attachment->dev, table, direction, 0);
+ if (ret)
+ return ERR_PTR(-ENOMEM);
+ a->mapped = true;
+
+ return table;
+}
+
+static void coherent_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
+ struct sg_table *table,
+ enum dma_data_direction direction)
+{
+ struct dma_heap_attachment *a = attachment->priv;
+
+ a->mapped = false;
+ dma_unmap_sgtable(attachment->dev, table, direction, 0);
+}
+
+static int coherent_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+ enum dma_data_direction direction)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a;
+
+ mutex_lock(&buffer->lock);
+ if (buffer->vmap_cnt)
+ invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+ list_for_each_entry(a, &buffer->attachments, list) {
+ if (!a->mapped)
+ continue;
+ dma_sync_sgtable_for_cpu(a->dev, &a->table, direction);
+ }
+ mutex_unlock(&buffer->lock);
+
+ return 0;
+}
+
+static int coherent_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
+ enum dma_data_direction direction)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct dma_heap_attachment *a;
+
+ mutex_lock(&buffer->lock);
+ if (buffer->vmap_cnt)
+ flush_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+ list_for_each_entry(a, &buffer->attachments, list) {
+ if (!a->mapped)
+ continue;
+ dma_sync_sgtable_for_device(a->dev, &a->table, direction);
+ }
+ mutex_unlock(&buffer->lock);
+
+ return 0;
+}
+
+static int coherent_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct coherent_heap *coh_heap = buffer->heap;
+ struct device *heap_dev = dma_heap_get_dev(coh_heap->heap);
+
+ return dma_mmap_coherent(heap_dev, vma, buffer->alloc_vaddr,
+ buffer->dma_addr, buffer->len);
+}
+
+static void *coherent_heap_do_vmap(struct coherent_heap_buffer *buffer)
+{
+ void *vaddr;
+
+ vaddr = vmap(buffer->pages, buffer->pagecount, VM_MAP, PAGE_KERNEL);
+ if (!vaddr)
+ return ERR_PTR(-ENOMEM);
+
+ return vaddr;
+}
+
+static int coherent_heap_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ void *vaddr;
+ int ret = 0;
+
+ mutex_lock(&buffer->lock);
+ if (buffer->vmap_cnt) {
+ buffer->vmap_cnt++;
+ iosys_map_set_vaddr(map, buffer->vaddr);
+ goto out;
+ }
+
+ vaddr = coherent_heap_do_vmap(buffer);
+ if (IS_ERR(vaddr)) {
+ ret = PTR_ERR(vaddr);
+ goto out;
+ }
+
+ buffer->vaddr = vaddr;
+ buffer->vmap_cnt++;
+ iosys_map_set_vaddr(map, buffer->vaddr);
+out:
+ mutex_unlock(&buffer->lock);
+
+ return ret;
+}
+
+static void coherent_heap_vunmap(struct dma_buf *dmabuf, struct iosys_map *map)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+
+ mutex_lock(&buffer->lock);
+ if (!--buffer->vmap_cnt) {
+ vunmap(buffer->vaddr);
+ buffer->vaddr = NULL;
+ }
+ mutex_unlock(&buffer->lock);
+ iosys_map_clear(map);
+}
+
+static void coherent_heap_dma_buf_release(struct dma_buf *dmabuf)
+{
+ struct coherent_heap_buffer *buffer = dmabuf->priv;
+ struct coherent_heap *coh_heap = buffer->heap;
+ struct device *heap_dev = dma_heap_get_dev(coh_heap->heap);
+
+ if (buffer->vmap_cnt > 0) {
+ WARN(1, "%s: buffer still mapped in the kernel\n", __func__);
+ vunmap(buffer->vaddr);
+ buffer->vaddr = NULL;
+ buffer->vmap_cnt = 0;
+ }
+
+ if (buffer->alloc_vaddr)
+ dma_free_coherent(heap_dev, buffer->len, buffer->alloc_vaddr,
+ buffer->dma_addr);
+ kfree(buffer->pages);
+ kfree(buffer);
+}
+
+static const struct dma_buf_ops coherent_heap_buf_ops = {
+ .attach = coherent_heap_attach,
+ .detach = coherent_heap_detach,
+ .map_dma_buf = coherent_heap_map_dma_buf,
+ .unmap_dma_buf = coherent_heap_unmap_dma_buf,
+ .begin_cpu_access = coherent_heap_dma_buf_begin_cpu_access,
+ .end_cpu_access = coherent_heap_dma_buf_end_cpu_access,
+ .mmap = coherent_heap_mmap,
+ .vmap = coherent_heap_vmap,
+ .vunmap = coherent_heap_vunmap,
+ .release = coherent_heap_dma_buf_release,
+};
+
+static struct dma_buf *coherent_heap_allocate(struct dma_heap *heap,
+ unsigned long len,
+ u32 fd_flags,
+ u64 heap_flags)
+{
+ struct coherent_heap *coh_heap;
+ struct coherent_heap_buffer *buffer;
+ struct device *heap_dev;
+ DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+ size_t size = PAGE_ALIGN(len);
+ pgoff_t pagecount = size >> PAGE_SHIFT;
+ struct dma_buf *dmabuf;
+ int ret = -ENOMEM;
+ pgoff_t pg;
+
+ coh_heap = dma_heap_get_drvdata(heap);
+ if (!coh_heap)
+ return ERR_PTR(-EINVAL);
+
+ heap_dev = dma_heap_get_dev(coh_heap->heap);
+ if (!heap_dev)
+ return ERR_PTR(-ENODEV);
+
+ buffer = kzalloc_obj(*buffer);
+ if (!buffer)
+ return ERR_PTR(-ENOMEM);
+
+ INIT_LIST_HEAD(&buffer->attachments);
+ mutex_init(&buffer->lock);
+ buffer->len = size;
+ buffer->heap = coh_heap;
+ buffer->pagecount = pagecount;
+
+ buffer->alloc_vaddr = dma_alloc_coherent(heap_dev, buffer->len,
+ &buffer->dma_addr, GFP_KERNEL);
+ if (!buffer->alloc_vaddr) {
+ ret = -ENOMEM;
+ goto free_buffer;
+ }
+
+ buffer->pages = kmalloc_array(pagecount, sizeof(*buffer->pages),
+ GFP_KERNEL);
+ if (!buffer->pages) {
+ ret = -ENOMEM;
+ goto free_dma;
+ }
+
+ for (pg = 0; pg < pagecount; pg++)
+ buffer->pages[pg] = virt_to_page((char *)buffer->alloc_vaddr +
+ (pg * PAGE_SIZE));
+
+ /* create the dmabuf */
+ exp_info.exp_name = dma_heap_get_name(heap);
+ exp_info.ops = &coherent_heap_buf_ops;
+ exp_info.size = buffer->len;
+ exp_info.flags = fd_flags;
+ exp_info.priv = buffer;
+ dmabuf = dma_buf_export(&exp_info);
+ if (IS_ERR(dmabuf)) {
+ ret = PTR_ERR(dmabuf);
+ goto free_pages;
+ }
+ return dmabuf;
+
+free_pages:
+ kfree(buffer->pages);
+free_dma:
+ dma_free_coherent(heap_dev, buffer->len, buffer->alloc_vaddr,
+ buffer->dma_addr);
+free_buffer:
+ kfree(buffer);
+ return ERR_PTR(ret);
+}
+
+static const struct dma_heap_ops coherent_heap_ops = {
+ .allocate = coherent_heap_allocate,
+};
+
+static int __coherent_heap_register(struct reserved_mem *rmem)
+{
+ struct dma_heap_export_info exp_info;
+ struct coherent_heap *coh_heap;
+ struct device *heap_dev;
+ int ret;
+
+ if (!rmem || !rmem->name)
+ return -EINVAL;
+
+ coh_heap = kzalloc_obj(*coh_heap);
+ if (!coh_heap)
+ return -ENOMEM;
+
+ coh_heap->rmem = rmem;
+ coh_heap->name = kstrdup(rmem->name, GFP_KERNEL);
+ if (!coh_heap->name) {
+ ret = -ENOMEM;
+ goto free_coherent_heap;
+ }
+
+ exp_info.name = coh_heap->name;
+ exp_info.ops = &coherent_heap_ops;
+ exp_info.priv = coh_heap;
+
+ coh_heap->heap = dma_heap_create(&exp_info);
+ if (IS_ERR(coh_heap->heap)) {
+ ret = PTR_ERR(coh_heap->heap);
+ goto free_name;
+ }
+
+ heap_dev = dma_heap_get_dev(coh_heap->heap);
+ ret = dma_coerce_mask_and_coherent(heap_dev, DMA_BIT_MASK(64));
+ if (ret) {
+ pr_err("coherent_heap: failed to set DMA mask (%d)\n", ret);
+ goto destroy_heap;
+ }
+
+ ret = of_reserved_mem_device_init_with_mem(heap_dev, rmem);
+ if (ret) {
+ pr_err("coherent_heap: failed to initialize memory (%d)\n", ret);
+ goto destroy_heap;
+ }
+
+ ret = dma_heap_register(coh_heap->heap);
+ if (ret) {
+ pr_err("coherent_heap: failed to register heap (%d)\n", ret);
+ goto destroy_heap;
+ }
+
+ return 0;
+
+destroy_heap:
+ dma_heap_destroy(coh_heap->heap);
+ coh_heap->heap = NULL;
+free_name:
+ kfree(coh_heap->name);
+free_coherent_heap:
+ kfree(coh_heap);
+
+ return ret;
+}
+
+static int __init coherent_heap_register(void)
+{
+ struct reserved_mem *rmem;
+ unsigned int i;
+ int ret;
+
+ for (i = 0; (rmem = dma_coherent_get_reserved_region(i)) != NULL; i++) {
+ ret = __coherent_heap_register(rmem);
+ if (ret) {
+ pr_warn("Failed to add coherent heap %s",
+ rmem->name ? rmem->name : "unknown");
+ continue;
+ }
+ }
+
+ return 0;
+}
+module_init(coherent_heap_register);
+MODULE_DESCRIPTION("DMA-BUF heap for coherent reserved-memory regions");
--
2.52.0
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Claude review: dma-buf: heaps: Add Coherent heap to dmabuf heaps
2026-03-06 10:36 ` [PATCH v3 5/6] dma-buf: heaps: Add Coherent heap to dmabuf heaps Albert Esteve
@ 2026-03-08 23:02 ` Claude Code Review Bot
0 siblings, 0 replies; 15+ messages in thread
From: Claude Code Review Bot @ 2026-03-08 23:02 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
This is the main heap implementation.
**Issue: `begin_cpu_access` / `end_cpu_access` syncing for coherent memory**
```c
static int coherent_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
enum dma_data_direction direction)
{
...
if (buffer->vmap_cnt)
invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
list_for_each_entry(a, &buffer->attachments, list) {
if (!a->mapped)
continue;
dma_sync_sgtable_for_cpu(a->dev, &a->table, direction);
}
```
For *coherent* memory, the CPU and device see the same data without cache maintenance. The `dma_sync_sgtable_for_cpu/device()` calls and `invalidate_kernel_vmap_range`/`flush_kernel_vmap_range` are unnecessary for coherent allocations and add overhead. The CMA heap does this because CMA memory isn't inherently coherent, but coherent reserved-memory is, by definition, coherent. These sync operations should either be documented (are they needed for non-cache-coherent architectures?) or reconsidered.
**Issue: `virt_to_page()` on `dma_alloc_coherent()` return value**
```c
for (pg = 0; pg < pagecount; pg++)
buffer->pages[pg] = virt_to_page((char *)buffer->alloc_vaddr +
(pg * PAGE_SIZE));
```
`dma_alloc_coherent()` returns a virtual address, but on some architectures (notably ARM with `no-map` regions), this might return an ioremap'd address rather than a regular kernel linear map address. `virt_to_page()` on such addresses is **undefined behavior** and will produce garbage `struct page *` pointers. This is a significant concern on ARM platforms where these coherent reserved-memory regions are commonly used with `no-map`. The pages obtained this way are then used in `sg_alloc_table_from_pages()` during attach, which would corrupt the scatterlist.
Consider using `pfn_to_page(PHYS_PFN(buffer->dma_addr + pg * PAGE_SIZE))` or directly computing pages from the physical address of the rmem region instead.
**Issue: `coherent_heap_map_dma_buf` returns `-ENOMEM` for all errors**
```c
ret = dma_map_sgtable(attachment->dev, table, direction, 0);
if (ret)
return ERR_PTR(-ENOMEM);
```
Should return `ERR_PTR(ret)` to preserve the actual error code.
**Issue: Memory leak of `coh_heap->name` on registration failure**
In `__coherent_heap_register()`:
```c
destroy_heap:
dma_heap_destroy(coh_heap->heap);
coh_heap->heap = NULL;
free_name:
kfree(coh_heap->name);
```
But `dma_heap_destroy()` → `put_device()` → `dma_heap_dev_release()` will `kfree(heap->name)`, which is the **same pointer** as `coh_heap->name` (since `exp_info.name = coh_heap->name` and `dma_heap_create` stores it). So `free_name` does a double-free. Either `dma_heap_dev_release()` should not free the name (see patch 2 issue), or this `kfree(coh_heap->name)` should be removed.
**Issue: No `module_exit` — intentional, but undocumented**
The commit message says the heap can't unload. When built as a module (patch 6), the lack of `module_exit` permanently pins the module once loaded. Consider adding a comment in the code explaining why unloading is unsupported.
**Minor: Missing newline in pr_warn**
```c
pr_warn("Failed to add coherent heap %s",
rmem->name ? rmem->name : "unknown");
```
Missing `\n` at the end of the format string.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH v3 6/6] dma-buf: heaps: coherent: Turn heap into a module
2026-03-06 10:36 [PATCH v3 0/6] dma-buf: heaps: add coherent reserved-memory heap Albert Esteve
` (4 preceding siblings ...)
2026-03-06 10:36 ` [PATCH v3 5/6] dma-buf: heaps: Add Coherent heap to dmabuf heaps Albert Esteve
@ 2026-03-06 10:36 ` Albert Esteve
2026-03-08 23:02 ` Claude review: " Claude Code Review Bot
2026-03-08 23:02 ` Claude review: dma-buf: heaps: add coherent reserved-memory heap Claude Code Review Bot
6 siblings, 1 reply; 15+ messages in thread
From: Albert Esteve @ 2026-03-06 10:36 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Marek Szyprowski,
Robin Murphy, Rob Herring, Saravana Kannan
Cc: linux-kernel, linux-media, dri-devel, linaro-mm-sig, iommu,
devicetree, Albert Esteve, mripard, echanude
Following the current effort to turn the CMA heap into a
module, do the same for the Coherent heap by changing the
Kconfig entry to a tristate and importing the proper
dma-buf namespaces.
This heap cannot be unloaded (same as the CMA heap), since
we're missing a big part of the infrastructure that would
be needed to make unloading safe.
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
drivers/dma-buf/heaps/Kconfig | 2 +-
drivers/dma-buf/heaps/coherent_heap.c | 3 +++
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index aeb475e585048..2f84a1018b900 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -14,7 +14,7 @@ config DMABUF_HEAPS_CMA
regions, you should say Y here.
config DMABUF_HEAPS_COHERENT
- bool "DMA-BUF Coherent Reserved-Memory Heap"
+ tristate "DMA-BUF Coherent Reserved-Memory Heap"
depends on DMABUF_HEAPS && OF_RESERVED_MEM && DMA_DECLARE_COHERENT
help
Choose this option to enable coherent reserved-memory dma-buf heaps.
diff --git a/drivers/dma-buf/heaps/coherent_heap.c b/drivers/dma-buf/heaps/coherent_heap.c
index 55f53f87c4c15..fdb3f5d907e88 100644
--- a/drivers/dma-buf/heaps/coherent_heap.c
+++ b/drivers/dma-buf/heaps/coherent_heap.c
@@ -412,3 +412,6 @@ static int __init coherent_heap_register(void)
}
module_init(coherent_heap_register);
MODULE_DESCRIPTION("DMA-BUF heap for coherent reserved-memory regions");
+MODULE_LICENSE("GPL");
+MODULE_IMPORT_NS("DMA_BUF");
+MODULE_IMPORT_NS("DMA_BUF_HEAP");
--
2.52.0
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Claude review: dma-buf: heaps: coherent: Turn heap into a module
2026-03-06 10:36 ` [PATCH v3 6/6] dma-buf: heaps: coherent: Turn heap into a module Albert Esteve
@ 2026-03-08 23:02 ` Claude Code Review Bot
0 siblings, 0 replies; 15+ messages in thread
From: Claude Code Review Bot @ 2026-03-08 23:02 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Straightforward change from `bool` to `tristate` and adding `MODULE_LICENSE`/`MODULE_IMPORT_NS`.
**Issue: Missing `MODULE_LICENSE` should have been in patch 5**
Patch 5 adds `MODULE_DESCRIPTION` but not `MODULE_LICENSE("GPL")`. Even for built-in code, it's good practice. Since the Kconfig entry is `bool` in patch 5, the code can't be built as a module there, so nothing actually breaks, but it's cleaner to include the license from the start.
**No `module_exit`**: As noted above, the module can never be unloaded once inserted. The CMA heap series presumably handles this the same way, but the limitation should be explicitly noted in the code.
---
### Summary of key issues:
1. **Bug (patch 2)**: Double `put_device()` in `dma_heap_register()` error path + `dma_heap_add()` error path causes UAF
2. **Bug (patch 2)**: `dma_heap_dev_release()` does `kfree(heap->name)` but `dma_heap_create()` doesn't `kstrdup()` the name — existing callers pass string literals
3. **Bug (patch 2 + 5)**: `err3` path in `dma_heap_register()` does `device_destroy()` + `dma_heap_destroy()` — double put/free
4. **Bug (patch 5)**: Double-free of `coh_heap->name` in `__coherent_heap_register()` error path
5. **Concern (patch 5)**: `virt_to_page()` on `dma_alloc_coherent()` return is unsafe on architectures with `no-map` coherent regions
6. **Minor (patch 5)**: Unnecessary cache sync ops for coherent memory, swallowed error code in `map_dma_buf`, missing `\n` in `pr_warn`
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 15+ messages in thread
* Claude review: dma-buf: heaps: add coherent reserved-memory heap
2026-03-06 10:36 [PATCH v3 0/6] dma-buf: heaps: add coherent reserved-memory heap Albert Esteve
` (5 preceding siblings ...)
2026-03-06 10:36 ` [PATCH v3 6/6] dma-buf: heaps: coherent: Turn heap into a module Albert Esteve
@ 2026-03-08 23:02 ` Claude Code Review Bot
6 siblings, 0 replies; 15+ messages in thread
From: Claude Code Review Bot @ 2026-03-08 23:02 UTC (permalink / raw)
To: dri-devel-reviews
Overall Series Review
Subject: dma-buf: heaps: add coherent reserved-memory heap
Author: Albert Esteve <aesteve@redhat.com>
Patches: 7
Reviewed: 2026-03-09T09:02:39.213764
---
This v3 series adds a new dma-buf heap backed by coherent reserved-memory regions (DT "shared-dma-pool" without "reusable"). The motivation is sound: enabling cgroup accounting for coherent reserved-memory allocations by routing them through dma-buf heaps. The series is well-structured and splits the work into logical steps: tracking the heap device, splitting heap creation/registration, adding an rmem helper, storing coherent regions, the actual heap driver, and modularization.
However, there are several issues, ranging from potential memory corruption/UAF in the dma-heap core refactoring to questionable design choices in the coherent region tracking and missing safety checks in the coherent heap driver itself.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 15+ messages in thread