public inbox for drm-ai-reviews@public-inbox.freedesktop.org
* Claude review: drm/ttm: Use common ancestor of evictor and evictee as limit pool
  2026-02-25 12:10 ` [PATCH v4 6/6] drm/ttm: Use common ancestor of evictor and evictee as limit pool Natalie Vock
@ 2026-02-27  3:36   ` Claude Code Review Bot
  0 siblings, 0 replies; 21+ messages in thread
From: Claude Code Review Bot @ 2026-02-27  3:36 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

This patch correctly implements recursive protection semantics for competition between sibling cgroups. The commit message explanation is excellent.

The self-eviction guard is important and well-commented:

```c
+	if (evict_walk->alloc_state->only_evict_unprotected &&
+			bo->resource->css == evict_walk->alloc_state->charge_pool)
+		return 0;
```

Without this, `dmem_cgroup_state_evict_valuable` would allow a cgroup to evict from itself (since a cgroup is always allowed to self-evict), which would defeat the purpose of `only_evict_unprotected`.

The common ancestor fallback logic is correct:

```c
+	if (!limit_pool)
+		limit_pool = dmem_cgroup_common_ancestor(bo->resource->css,
+							 evict_walk->alloc_state->charge_pool);
```

The extensive comment explaining why the common ancestor matters for `effective_protection` calculation is very helpful.

**One observation**: The `below_min`/`below_low` checks in Patch 5 use `NULL` (the global root), while eviction decisions in this patch use the common ancestor. These operate at different logical levels: the global check determines the policy (should we attempt aggressive eviction at all?), while the per-candidate check determines the mechanism (is this specific buffer evictable?). This design makes sense.

No blocking issues with this patch.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Claude review: drm/ttm: Use common ancestor of evictor and evictee as limit pool
  2026-03-02 12:37 ` [PATCH v5 6/6] drm/ttm: Use common ancestor of evictor and evictee as limit pool Natalie Vock
@ 2026-03-03  3:29   ` Claude Code Review Bot
  0 siblings, 0 replies; 21+ messages in thread
From: Claude Code Review Bot @ 2026-03-03  3:29 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**Purpose:** Uses the common ancestor of evictor and evictee as the root for protection calculation, enabling correct behavior for sibling cgroup prioritization.

**Self-eviction guard:**
```c
	if (evict_walk->alloc_state->only_evict_unprotected &&
			bo->resource->css == evict_walk->alloc_state->charge_pool)
		return 0;
```
This prevents a protected cgroup from evicting its own buffers during the "only unprotected" pass. The pointer comparison matches only the exact same pool, not parent/child relationships. This is intentional: a cgroup should be allowed to evict from its own children, but not from itself, while it is trying to enforce protection.

**Common ancestor logic:** Well-motivated by the comment:
```c
	/*
	 * ...Recursive protection distributes cgroup protection afforded
	 * to a parent cgroup but not used explicitly by a child cgroup between
	 * all child cgroups...
	 */
```
Without this, two sibling cgroups where one is protected and the other isn't would both appear equally protected (due to recursive distribution from their common parent), preventing the protected one from stealing from the unprotected one.

**Reference management:**
```c
	ancestor = dmem_cgroup_common_ancestor(bo->resource->css,
					       evict_walk->alloc_state->charge_pool);
	limit_pool = ancestor;
	...
	evict_valuable = dmem_cgroup_state_evict_valuable(limit_pool, ...);
	if (ancestor)
		dmem_cgroup_pool_state_put(ancestor);
```
The `put` after use is correct, but relies on `dmem_cgroup_common_ancestor()` returning a referenced pool (see concern in patch 2 review).

**Formatting nit:** The self-eviction check uses non-standard indentation:
```c
	if (evict_walk->alloc_state->only_evict_unprotected &&
			bo->resource->css == evict_walk->alloc_state->charge_pool)
```
Kernel style typically aligns continuation lines to the opening parenthesis, or uses a single tab indent. The double-tab here is inconsistent with the rest of the series.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v6 0/6] cgroup/dmem,drm/ttm: Improve protection in contended cases
@ 2026-03-13 11:39 Natalie Vock
  2026-03-13 11:40 ` [PATCH v6 1/6] cgroup/dmem: Add queries for protection values Natalie Vock
                   ` (6 more replies)
  0 siblings, 7 replies; 21+ messages in thread
From: Natalie Vock @ 2026-03-13 11:39 UTC (permalink / raw)
  To: Maarten Lankhorst, Maxime Ripard, Tejun Heo, Johannes Weiner,
	Michal Koutný, Christian Koenig, Huang Rui, Matthew Auld,
	Matthew Brost, Maarten Lankhorst, Thomas Zimmermann, David Airlie,
	Simona Vetter, Tvrtko Ursulin
  Cc: cgroups, dri-devel, Natalie Vock, Tvrtko Ursulin

Hi all,

I've been looking into some cases where dmem protection fails to prevent
allocations from ending up in GTT when VRAM gets scarce and apps start
competing hard.

In short, this is because other (unprotected) applications end up
filling VRAM before protected applications do. This causes TTM to back
off and try allocating in GTT before anything else, and that is where
the allocation is placed in the end. The existing eviction protection
cannot prevent this, because no attempt at evicting is ever made
(although you could consider the backing-off as an immediate eviction to
GTT).

This series tries to alleviate this by adding a special case when the
allocation is protected by cgroups: Instead of backing off immediately,
TTM will try evicting unprotected buffers from the domain to make space
for the protected one. This ensures that applications can actually use
all the memory protection awarded to them by the system, without being
prone to ping-ponging (only protected allocations can evict unprotected
ones, never the other way around).

The first two patches just add a few small utilities needed to implement
this to the dmem controller. The other patches are the TTM implementation:

"drm/ttm: Be more aggressive..." decouples cgroup charging from resource
allocation to allow us to hold on to the charge even if allocation fails
on first try, and adds a path to call ttm_bo_evict_alloc when the
charged allocation falls within min/low protection limits.

"drm/ttm: Use common ancestor..." is a more general improvement in
correctly implementing cgroup protection semantics. With recursive
protection rules, unused memory protection afforded to a parent node is
transferred to children recursively, which helps protect entire
subtrees from stealing each other's memory without needing to protect
each cgroup individually. This doesn't apply when considering direct
siblings inside the same subtree, so in order to not break
prioritization between these siblings, we need to consider the
relationship of evictor and evictee when calculating protection.
In practice, this fixes cases where a protected cgroup cannot steal
memory from unprotected siblings (which, in turn, leads to eviction
failures and new allocations being placed in GTT).

Thanks,
Natalie

Signed-off-by: Natalie Vock <natalie.vock@gmx.de>
---
Changes in v6:
- Rename dmem_cgroup_common_ancestor to dmem_cgroup_get_common_ancestor
  (Maarten)
- Use ttm_bo_alloc_at_place in eviction cb as well (Tvrtko)
- Clean up the more aggressive eviction logic (Tvrtko)
- Link to v5: https://lore.kernel.org/r/20260302-dmemcg-aggressive-protect-v5-0-ffd3a2602309@gmx.de

Changes in v5:
- Added cgroup_common_ancestor helper to use with
  dmem_cgroup_common_ancestor (Tejun)
- Note: "drm/ttm: Use common ancestor..." needed minor changes since
  dmem_cgroup_common_ancestor now grabs a reference to the ancestor
  pool which needs to be dropped after use
- Removed extraneous whitespaces in "drm/ttm: Split cgroup charge..."
  and unnecessary changes done in "drm/ttm: Extract code..." (Tvrtko)
- Applied a comment from v3 about below_low not needing to be
  initialized in "drm/ttm: Be more aggressive..." (Tvrtko)
- Fixed uncharging the cgroup on allocation failure (Tvrtko)
- Fixed a typo in the message of "drm/ttm: Split cgroup charge..."
  (Tvrtko)
- Added case in ttm_bo_evict_cb for when charging fails, since we need
  to retry the charge (found myself)
- Link to v4: https://lore.kernel.org/r/20260225-dmemcg-aggressive-protect-v4-0-de847ab35184@gmx.de

Changes in v4:
- Split cgroup charge decoupling and eviction logic changes into
  separate commits (Tvrtko)
- Fix two cases of errno handling in ttm_bo_alloc_place and its caller
  (Tvrtko)
- Improve commit message/description of "drm/ttm: Make a helper..." (now
  "drm/ttm: Extract code...") (Tvrtko)
- Documentation improvements for new TTM eviction logic (Tvrtko)
- Formatting fixes (Tvrtko)
- Link to v3: https://lore.kernel.org/r/20251110-dmemcg-aggressive-protect-v3-0-219ffcfc54e9@gmx.de

Changes in v3:
- Improved documentation around cgroup queries and TTM eviction helpers
  (Maarten)
- Fixed up ttm_alloc_at_place charge failure logic to return either
  -EBUSY or -ENOSPC, not -EAGAIN (found this myself)
- Link to v2: https://lore.kernel.org/r/20251015-dmemcg-aggressive-protect-v2-0-36644fb4e37f@gmx.de

Changes in v2:
- Factored out the ttm logic for charging/allocating/evicting into a
  separate helper to keep things simpler
- Link to v1: https://lore.kernel.org/r/20250915-dmemcg-aggressive-protect-v1-0-2f3353bfcdac@gmx.de

---
Natalie Vock (6):
      cgroup/dmem: Add queries for protection values
      cgroup,cgroup/dmem: Add (dmem_)cgroup_common_ancestor helper
      drm/ttm: Extract code for attempting allocation in a place
      drm/ttm: Split cgroup charge and resource allocation
      drm/ttm: Be more aggressive when allocating below protection limit
      drm/ttm: Use common ancestor of evictor and evictee as limit pool

 drivers/gpu/drm/ttm/ttm_bo.c       | 221 ++++++++++++++++++++++++++++++++-----
 drivers/gpu/drm/ttm/ttm_resource.c |  48 +++++---
 include/drm/ttm/ttm_resource.h     |   6 +-
 include/linux/cgroup.h             |  21 ++++
 include/linux/cgroup_dmem.h        |  25 +++++
 kernel/cgroup/dmem.c               |  90 +++++++++++++++
 6 files changed, 368 insertions(+), 43 deletions(-)
---
base-commit: 61c0f69a2ff79c8f388a9e973abb4853be467127
change-id: 20250915-dmemcg-aggressive-protect-5cf37f717cdb

Best regards,
-- 
Natalie Vock <natalie.vock@gmx.de>


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v6 1/6] cgroup/dmem: Add queries for protection values
  2026-03-13 11:39 [PATCH v6 0/6] cgroup/dmem,drm/ttm: Improve protection in contended cases Natalie Vock
@ 2026-03-13 11:40 ` Natalie Vock
  2026-03-13 21:10   ` Claude review: " Claude Code Review Bot
  2026-03-13 11:40 ` [PATCH v6 2/6] cgroup,cgroup/dmem: Add (dmem_)cgroup_common_ancestor helper Natalie Vock
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 21+ messages in thread
From: Natalie Vock @ 2026-03-13 11:40 UTC (permalink / raw)
  To: Maarten Lankhorst, Maxime Ripard, Tejun Heo, Johannes Weiner,
	Michal Koutný, Christian Koenig, Huang Rui, Matthew Auld,
	Matthew Brost, Maarten Lankhorst, Thomas Zimmermann, David Airlie,
	Simona Vetter, Tvrtko Ursulin
  Cc: cgroups, dri-devel, Natalie Vock

Callers can use this feedback to be more aggressive in making space for
allocations of a cgroup if they know it is protected.

These are counterparts to memcg's mem_cgroup_below_{min,low}.

Signed-off-by: Natalie Vock <natalie.vock@gmx.de>
---
 include/linux/cgroup_dmem.h | 16 ++++++++++++
 kernel/cgroup/dmem.c        | 62 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 78 insertions(+)

diff --git a/include/linux/cgroup_dmem.h b/include/linux/cgroup_dmem.h
index dd4869f1d736e..1a88cd0c9eb00 100644
--- a/include/linux/cgroup_dmem.h
+++ b/include/linux/cgroup_dmem.h
@@ -24,6 +24,10 @@ void dmem_cgroup_uncharge(struct dmem_cgroup_pool_state *pool, u64 size);
 bool dmem_cgroup_state_evict_valuable(struct dmem_cgroup_pool_state *limit_pool,
 				      struct dmem_cgroup_pool_state *test_pool,
 				      bool ignore_low, bool *ret_hit_low);
+bool dmem_cgroup_below_min(struct dmem_cgroup_pool_state *root,
+			   struct dmem_cgroup_pool_state *test);
+bool dmem_cgroup_below_low(struct dmem_cgroup_pool_state *root,
+			   struct dmem_cgroup_pool_state *test);
 
 void dmem_cgroup_pool_state_put(struct dmem_cgroup_pool_state *pool);
 #else
@@ -59,6 +63,18 @@ bool dmem_cgroup_state_evict_valuable(struct dmem_cgroup_pool_state *limit_pool,
 	return true;
 }
 
+static inline bool dmem_cgroup_below_min(struct dmem_cgroup_pool_state *root,
+					 struct dmem_cgroup_pool_state *test)
+{
+	return false;
+}
+
+static inline bool dmem_cgroup_below_low(struct dmem_cgroup_pool_state *root,
+					 struct dmem_cgroup_pool_state *test)
+{
+	return false;
+}
+
 static inline void dmem_cgroup_pool_state_put(struct dmem_cgroup_pool_state *pool)
 { }
 
diff --git a/kernel/cgroup/dmem.c b/kernel/cgroup/dmem.c
index 9d95824dc6fa0..28227405f7cfe 100644
--- a/kernel/cgroup/dmem.c
+++ b/kernel/cgroup/dmem.c
@@ -694,6 +694,68 @@ int dmem_cgroup_try_charge(struct dmem_cgroup_region *region, u64 size,
 }
 EXPORT_SYMBOL_GPL(dmem_cgroup_try_charge);
 
+/**
+ * dmem_cgroup_below_min() - Tests whether current usage is within min limit.
+ *
+ * @root: Root of the subtree to calculate protection for, or NULL to calculate global protection.
+ * @test: The pool to test the usage/min limit of.
+ *
+ * Return: true if usage is below min and the cgroup is protected, false otherwise.
+ */
+bool dmem_cgroup_below_min(struct dmem_cgroup_pool_state *root,
+			   struct dmem_cgroup_pool_state *test)
+{
+	if (root == test || !pool_parent(test))
+		return false;
+
+	if (!root) {
+		for (root = test; pool_parent(root); root = pool_parent(root))
+			{}
+	}
+
+	/*
+ * In mem_cgroup_below_min(), the memcg counterpart, this call is missing.
+	 * mem_cgroup_below_min() gets called during traversal of the cgroup tree, where
+	 * protection is already calculated as part of the traversal. dmem cgroup eviction
+	 * does not traverse the cgroup tree, so we need to recalculate effective protection
+	 * here.
+	 */
+	dmem_cgroup_calculate_protection(root, test);
+	return page_counter_read(&test->cnt) <= READ_ONCE(test->cnt.emin);
+}
+EXPORT_SYMBOL_GPL(dmem_cgroup_below_min);
+
+/**
+ * dmem_cgroup_below_low() - Tests whether current usage is within low limit.
+ *
+ * @root: Root of the subtree to calculate protection for, or NULL to calculate global protection.
+ * @test: The pool to test the usage/low limit of.
+ *
+ * Return: true if usage is below low and the cgroup is protected, false otherwise.
+ */
+bool dmem_cgroup_below_low(struct dmem_cgroup_pool_state *root,
+			   struct dmem_cgroup_pool_state *test)
+{
+	if (root == test || !pool_parent(test))
+		return false;
+
+	if (!root) {
+		for (root = test; pool_parent(root); root = pool_parent(root))
+			{}
+	}
+
+	/*
+ * In mem_cgroup_below_low(), the memcg counterpart, this call is missing.
+	 * mem_cgroup_below_low() gets called during traversal of the cgroup tree, where
+	 * protection is already calculated as part of the traversal. dmem cgroup eviction
+	 * does not traverse the cgroup tree, so we need to recalculate effective protection
+	 * here.
+	 */
+	dmem_cgroup_calculate_protection(root, test);
+	return page_counter_read(&test->cnt) <= READ_ONCE(test->cnt.elow);
+}
+EXPORT_SYMBOL_GPL(dmem_cgroup_below_low);
+
 static int dmem_cgroup_region_capacity_show(struct seq_file *sf, void *v)
 {
 	struct dmem_cgroup_region *region;

-- 
2.53.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v6 2/6] cgroup,cgroup/dmem: Add (dmem_)cgroup_common_ancestor helper
  2026-03-13 11:39 [PATCH v6 0/6] cgroup/dmem,drm/ttm: Improve protection in contended cases Natalie Vock
  2026-03-13 11:40 ` [PATCH v6 1/6] cgroup/dmem: Add queries for protection values Natalie Vock
@ 2026-03-13 11:40 ` Natalie Vock
  2026-03-13 14:16   ` Michal Koutný
  2026-03-13 21:10   ` Claude review: " Claude Code Review Bot
  2026-03-13 11:40 ` [PATCH v6 3/6] drm/ttm: Extract code for attempting allocation in a place Natalie Vock
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 21+ messages in thread
From: Natalie Vock @ 2026-03-13 11:40 UTC (permalink / raw)
  To: Maarten Lankhorst, Maxime Ripard, Tejun Heo, Johannes Weiner,
	Michal Koutný, Christian Koenig, Huang Rui, Matthew Auld,
	Matthew Brost, Maarten Lankhorst, Thomas Zimmermann, David Airlie,
	Simona Vetter, Tvrtko Ursulin
  Cc: cgroups, dri-devel, Natalie Vock

This helps to find a common subtree of two resources, which is important
when determining whether it's helpful to evict one resource in favor of
another.

To facilitate this, add a common helper to find the ancestor of two
cgroups using each cgroup's ancestor array.

Signed-off-by: Natalie Vock <natalie.vock@gmx.de>
---
 include/linux/cgroup.h      | 21 +++++++++++++++++++++
 include/linux/cgroup_dmem.h |  9 +++++++++
 kernel/cgroup/dmem.c        | 28 ++++++++++++++++++++++++++++
 3 files changed, 58 insertions(+)

diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index bc892e3b37eea..560ae995e3a54 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -561,6 +561,27 @@ static inline struct cgroup *cgroup_ancestor(struct cgroup *cgrp,
 	return cgrp->ancestors[ancestor_level];
 }
 
+/**
+ * cgroup_common_ancestor - find common ancestor of two cgroups
+ * @a: first cgroup to find common ancestor of
+ * @b: second cgroup to find common ancestor of
+ *
+ * Find the first cgroup that is an ancestor of both @a and @b, if it exists
+ * and return a pointer to it. If such a cgroup doesn't exist, return NULL.
+ *
+ * This function is safe to call as long as both @a and @b are accessible.
+ */
+static inline struct cgroup *cgroup_common_ancestor(struct cgroup *a,
+						    struct cgroup *b)
+{
+	int level;
+
+	for (level = min(a->level, b->level); level >= 0; level--)
+		if (a->ancestors[level] == b->ancestors[level])
+			return a->ancestors[level];
+	return NULL;
+}
+
 /**
  * task_under_cgroup_hierarchy - test task's membership of cgroup ancestry
  * @task: the task to be tested
diff --git a/include/linux/cgroup_dmem.h b/include/linux/cgroup_dmem.h
index 1a88cd0c9eb00..9d72457c4cb9d 100644
--- a/include/linux/cgroup_dmem.h
+++ b/include/linux/cgroup_dmem.h
@@ -28,6 +28,8 @@ bool dmem_cgroup_below_min(struct dmem_cgroup_pool_state *root,
 			   struct dmem_cgroup_pool_state *test);
 bool dmem_cgroup_below_low(struct dmem_cgroup_pool_state *root,
 			   struct dmem_cgroup_pool_state *test);
+struct dmem_cgroup_pool_state *dmem_cgroup_get_common_ancestor(struct dmem_cgroup_pool_state *a,
+							       struct dmem_cgroup_pool_state *b);
 
 void dmem_cgroup_pool_state_put(struct dmem_cgroup_pool_state *pool);
 #else
@@ -75,6 +77,13 @@ static inline bool dmem_cgroup_below_low(struct dmem_cgroup_pool_state *root,
 	return false;
 }
 
+static inline
+struct dmem_cgroup_pool_state *dmem_cgroup_get_common_ancestor(struct dmem_cgroup_pool_state *a,
+							       struct dmem_cgroup_pool_state *b)
+{
+	return NULL;
+}
+
 static inline void dmem_cgroup_pool_state_put(struct dmem_cgroup_pool_state *pool)
 { }
 
diff --git a/kernel/cgroup/dmem.c b/kernel/cgroup/dmem.c
index 28227405f7cfe..9ae085a7fcb73 100644
--- a/kernel/cgroup/dmem.c
+++ b/kernel/cgroup/dmem.c
@@ -756,6 +756,34 @@ bool dmem_cgroup_below_low(struct dmem_cgroup_pool_state *root,
 }
 EXPORT_SYMBOL_GPL(dmem_cgroup_below_low);
 
+/**
+ * dmem_cgroup_get_common_ancestor() - Find the first common ancestor of two pools.
+ * @a: First pool to find the common ancestor of.
+ * @b: Second pool to find the common ancestor of.
+ *
+ * Return: The first pool that is a parent of both @a and @b, or NULL if either @a or @b are NULL,
+ * or if such a pool does not exist. A reference to the returned pool is grabbed and must be
+ * released by the caller when it is done using the pool.
+ */
+struct dmem_cgroup_pool_state *dmem_cgroup_get_common_ancestor(struct dmem_cgroup_pool_state *a,
+							       struct dmem_cgroup_pool_state *b)
+{
+	struct cgroup *ancestor_cgroup;
+	struct cgroup_subsys_state *ancestor_css;
+
+	if (!a || !b)
+		return NULL;
+
+	ancestor_cgroup = cgroup_common_ancestor(a->cs->css.cgroup, b->cs->css.cgroup);
+	if (!ancestor_cgroup)
+		return NULL;
+
+	ancestor_css = cgroup_e_css(ancestor_cgroup, &dmem_cgrp_subsys);
+
+	return get_cg_pool_unlocked(css_to_dmemcs(ancestor_css), a->region);
+}
+EXPORT_SYMBOL_GPL(dmem_cgroup_get_common_ancestor);
+
 static int dmem_cgroup_region_capacity_show(struct seq_file *sf, void *v)
 {
 	struct dmem_cgroup_region *region;

-- 
2.53.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v6 3/6] drm/ttm: Extract code for attempting allocation in a place
  2026-03-13 11:39 [PATCH v6 0/6] cgroup/dmem,drm/ttm: Improve protection in contended cases Natalie Vock
  2026-03-13 11:40 ` [PATCH v6 1/6] cgroup/dmem: Add queries for protection values Natalie Vock
  2026-03-13 11:40 ` [PATCH v6 2/6] cgroup,cgroup/dmem: Add (dmem_)cgroup_common_ancestor helper Natalie Vock
@ 2026-03-13 11:40 ` Natalie Vock
  2026-03-13 21:10   ` Claude review: " Claude Code Review Bot
  2026-03-13 11:40 ` [PATCH v6 4/6] drm/ttm: Split cgroup charge and resource allocation Natalie Vock
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 21+ messages in thread
From: Natalie Vock @ 2026-03-13 11:40 UTC (permalink / raw)
  To: Maarten Lankhorst, Maxime Ripard, Tejun Heo, Johannes Weiner,
	Michal Koutný, Christian Koenig, Huang Rui, Matthew Auld,
	Matthew Brost, Maarten Lankhorst, Thomas Zimmermann, David Airlie,
	Simona Vetter, Tvrtko Ursulin
  Cc: cgroups, dri-devel, Natalie Vock, Tvrtko Ursulin

Move all code for attempting allocation for a specific place to
ttm_bo_alloc_at_place. With subsequent patches, this logic is going to get
more complicated, so it helps readability to have this separate.

ttm_bo_alloc_at_place takes a pointer to a struct ttm_bo_alloc_state.
This struct holds various state produced by the allocation (e.g. cgroup
resource associated with the allocation) that the caller needs to keep
track of (and potentially dispose of). This is just the limiting cgroup
pool for now, but future patches will add more state needing to be tracked.

ttm_bo_alloc_at_place also communicates via return codes if eviction
using ttm_bo_evict_alloc should be attempted. This is preparation for
attempting eviction in more cases than just force_space being set.

No functional change intended.

Signed-off-by: Natalie Vock <natalie.vock@gmx.de>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
---
 drivers/gpu/drm/ttm/ttm_bo.c | 101 +++++++++++++++++++++++++++++++++----------
 1 file changed, 79 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index acb9197db8798..5cca0d6edbaf6 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -489,6 +489,62 @@ int ttm_bo_evict_first(struct ttm_device *bdev, struct ttm_resource_manager *man
 	return ret;
 }
 
+struct ttm_bo_alloc_state {
+	/** @limit_pool: Which pool limit we should test against */
+	struct dmem_cgroup_pool_state *limit_pool;
+};
+
+/**
+ * ttm_bo_alloc_at_place - Attempt allocating a BO's backing store in a place
+ *
+ * @bo: The buffer to allocate the backing store of
+ * @place: The place to attempt allocation in
+ * @ctx: ttm_operation_ctx associated with this allocation
+ * @force_space: If we should evict buffers to force space
+ * @res: On allocation success, the resulting struct ttm_resource.
+ * @alloc_state: Object holding allocation state such as charged cgroups.
+ *
+ * Returns:
+ * -EBUSY: No space available, but allocation should be retried with ttm_bo_evict_alloc.
+ * -ENOSPC: No space available, allocation should not be retried.
+ * -ERESTARTSYS: An interruptible sleep was interrupted by a signal.
+ *
+ */
+static int ttm_bo_alloc_at_place(struct ttm_buffer_object *bo,
+				 const struct ttm_place *place,
+				 struct ttm_operation_ctx *ctx,
+				 bool force_space,
+				 struct ttm_resource **res,
+				 struct ttm_bo_alloc_state *alloc_state)
+{
+	bool may_evict;
+	int ret;
+
+	may_evict = force_space && place->mem_type != TTM_PL_SYSTEM;
+
+	ret = ttm_resource_alloc(bo, place, res,
+				 force_space ? &alloc_state->limit_pool : NULL);
+
+	if (ret) {
+		/*
+		 * -EAGAIN means the charge failed, which we treat like an
+		 * allocation failure. Therefore, return an error code indicating
+		 * the allocation failed - either -EBUSY if the allocation should
+		 * be retried with eviction, or -ENOSPC if there should be no second
+		 * attempt.
+		 */
+		if (ret == -EAGAIN)
+			return may_evict ? -EBUSY : -ENOSPC;
+
+		if (ret == -ENOSPC && may_evict)
+			return -EBUSY;
+
+		return ret;
+	}
+
+	return 0;
+}
+
 /**
  * struct ttm_bo_evict_walk - Parameters for the evict walk.
  */
@@ -504,12 +560,13 @@ struct ttm_bo_evict_walk {
 	/** @evicted: Number of successful evictions. */
 	unsigned long evicted;
 
-	/** @limit_pool: Which pool limit we should test against */
-	struct dmem_cgroup_pool_state *limit_pool;
 	/** @try_low: Whether we should attempt to evict BO's with low watermark threshold */
 	bool try_low;
 	/** @hit_low: If we cannot evict a bo when @try_low is false (first pass) */
 	bool hit_low;
+
+	/** @alloc_state: State associated with the allocation attempt. */
+	struct ttm_bo_alloc_state *alloc_state;
 };
 
 static s64 ttm_bo_evict_cb(struct ttm_lru_walk *walk, struct ttm_buffer_object *bo)
@@ -518,8 +575,9 @@ static s64 ttm_bo_evict_cb(struct ttm_lru_walk *walk, struct ttm_buffer_object *
 		container_of(walk, typeof(*evict_walk), walk);
 	s64 lret;
 
-	if (!dmem_cgroup_state_evict_valuable(evict_walk->limit_pool, bo->resource->css,
-					      evict_walk->try_low, &evict_walk->hit_low))
+	if (!dmem_cgroup_state_evict_valuable(evict_walk->alloc_state->limit_pool,
+					      bo->resource->css, evict_walk->try_low,
+					      &evict_walk->hit_low))
 		return 0;
 
 	if (bo->pin_count || !bo->bdev->funcs->eviction_valuable(bo, evict_walk->place))
@@ -561,7 +619,7 @@ static int ttm_bo_evict_alloc(struct ttm_device *bdev,
 			      struct ttm_operation_ctx *ctx,
 			      struct ww_acquire_ctx *ticket,
 			      struct ttm_resource **res,
-			      struct dmem_cgroup_pool_state *limit_pool)
+			      struct ttm_bo_alloc_state *state)
 {
 	struct ttm_bo_evict_walk evict_walk = {
 		.walk = {
@@ -574,7 +632,7 @@ static int ttm_bo_evict_alloc(struct ttm_device *bdev,
 		.place = place,
 		.evictor = evictor,
 		.res = res,
-		.limit_pool = limit_pool,
+		.alloc_state = state,
 	};
 	s64 lret;
 
@@ -725,9 +783,8 @@ static int ttm_bo_alloc_resource(struct ttm_buffer_object *bo,
 
 	for (i = 0; i < placement->num_placement; ++i) {
 		const struct ttm_place *place = &placement->placement[i];
-		struct dmem_cgroup_pool_state *limit_pool = NULL;
+		struct ttm_bo_alloc_state alloc_state = {};
 		struct ttm_resource_manager *man;
-		bool may_evict;
 
 		man = ttm_manager_type(bdev, place->mem_type);
 		if (!man || !ttm_resource_manager_used(man))
@@ -737,25 +794,25 @@ static int ttm_bo_alloc_resource(struct ttm_buffer_object *bo,
 				    TTM_PL_FLAG_FALLBACK))
 			continue;
 
-		may_evict = (force_space && place->mem_type != TTM_PL_SYSTEM);
-		ret = ttm_resource_alloc(bo, place, res, force_space ? &limit_pool : NULL);
-		if (ret) {
-			if (ret != -ENOSPC && ret != -EAGAIN) {
-				dmem_cgroup_pool_state_put(limit_pool);
-				return ret;
-			}
-			if (!may_evict) {
-				dmem_cgroup_pool_state_put(limit_pool);
-				continue;
-			}
+		ret = ttm_bo_alloc_at_place(bo, place, ctx, force_space,
+				res, &alloc_state);
 
+		if (ret == -ENOSPC) {
+			dmem_cgroup_pool_state_put(alloc_state.limit_pool);
+			continue;
+		} else if (ret == -EBUSY) {
 			ret = ttm_bo_evict_alloc(bdev, man, place, bo, ctx,
-						 ticket, res, limit_pool);
-			dmem_cgroup_pool_state_put(limit_pool);
+						 ticket, res, &alloc_state);
+
+			dmem_cgroup_pool_state_put(alloc_state.limit_pool);
+
 			if (ret == -EBUSY)
 				continue;
-			if (ret)
+			else if (ret)
 				return ret;
+		} else if (ret) {
+			dmem_cgroup_pool_state_put(alloc_state.limit_pool);
+			return ret;
 		}
 
 		ret = ttm_bo_add_pipelined_eviction_fences(bo, man, ctx->no_wait_gpu);

-- 
2.53.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v6 4/6] drm/ttm: Split cgroup charge and resource allocation
  2026-03-13 11:39 [PATCH v6 0/6] cgroup/dmem,drm/ttm: Improve protection in contended cases Natalie Vock
                   ` (2 preceding siblings ...)
  2026-03-13 11:40 ` [PATCH v6 3/6] drm/ttm: Extract code for attempting allocation in a place Natalie Vock
@ 2026-03-13 11:40 ` Natalie Vock
  2026-03-13 12:53   ` Tvrtko Ursulin
  2026-03-13 21:10   ` Claude review: " Claude Code Review Bot
  2026-03-13 11:40 ` [PATCH v6 5/6] drm/ttm: Be more aggressive when allocating below protection limit Natalie Vock
                   ` (2 subsequent siblings)
  6 siblings, 2 replies; 21+ messages in thread
From: Natalie Vock @ 2026-03-13 11:40 UTC (permalink / raw)
  To: Maarten Lankhorst, Maxime Ripard, Tejun Heo, Johannes Weiner,
	Michal Koutný, Christian Koenig, Huang Rui, Matthew Auld,
	Matthew Brost, Maarten Lankhorst, Thomas Zimmermann, David Airlie,
	Simona Vetter, Tvrtko Ursulin
  Cc: cgroups, dri-devel, Natalie Vock

Coupling resource allocation and cgroup charging is racy when charging
succeeds, but subsequent resource allocation fails. Certain eviction
decisions are made on the basis of whether the allocating cgroup is
protected, i.e. within its min/low limits, but with the charge being
tied to resource allocation (and uncharged when the resource allocation
fails), this check is done at a point where the allocation is not actually
charged to the cgroup.

This is subtly wrong if the allocation were to cause the cgroup to exceed
the min/low protection, but it's even more wrong if the same cgroup tries
allocating multiple buffers concurrently: In this case, the min/low
protection may pass for all allocation attempts when the real min/low
protection covers only some, or potentially none of the allocated
buffers.

Instead, charge the allocation to the cgroup once and keep the charge
for as long as we try to allocate a ttm_resource, and only undo the charge
if allocating the resource is ultimately unsuccessful and we move on to
a different ttm_place.

Signed-off-by: Natalie Vock <natalie.vock@gmx.de>
---
 drivers/gpu/drm/ttm/ttm_bo.c       | 66 ++++++++++++++++++++++++++------------
 drivers/gpu/drm/ttm/ttm_resource.c | 48 +++++++++++++++++++--------
 include/drm/ttm/ttm_resource.h     |  6 +++-
 3 files changed, 85 insertions(+), 35 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 5cca0d6edbaf6..4adc9b80cba4a 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -490,8 +490,12 @@ int ttm_bo_evict_first(struct ttm_device *bdev, struct ttm_resource_manager *man
 }
 
 struct ttm_bo_alloc_state {
+	/** @charge_pool: The memory pool the resource is charged to */
+	struct dmem_cgroup_pool_state *charge_pool;
 	/** @limit_pool: Which pool limit we should test against */
 	struct dmem_cgroup_pool_state *limit_pool;
+	/** @in_evict: Whether we are currently evicting buffers */
+	bool in_evict;
 };
 
 /**
@@ -520,28 +524,39 @@ static int ttm_bo_alloc_at_place(struct ttm_buffer_object *bo,
 	bool may_evict;
 	int ret;
 
-	may_evict = force_space && place->mem_type != TTM_PL_SYSTEM;
-
-	ret = ttm_resource_alloc(bo, place, res,
-				 force_space ? &alloc_state->limit_pool : NULL);
+	may_evict = !alloc_state->in_evict && force_space &&
+		    place->mem_type != TTM_PL_SYSTEM;
+	if (!alloc_state->charge_pool) {
+		ret = ttm_resource_try_charge(bo, place, &alloc_state->charge_pool,
+					      force_space ? &alloc_state->limit_pool
+							  : NULL);
+		if (ret) {
+			/*
+			 * -EAGAIN means the charge failed, which we treat
+			 * like an allocation failure. Therefore, return an
+			 * error code indicating the allocation failed -
+			 * either -EBUSY if the allocation should be
+			 * retried with eviction, or -ENOSPC if there should
+			 * be no second attempt.
+			 */
+			if (ret == -EAGAIN)
+				ret = may_evict ? -EBUSY : -ENOSPC;
+			return ret;
+		}
+	}
 
+	ret = ttm_resource_alloc(bo, place, res, alloc_state->charge_pool);
 	if (ret) {
-		/*
-		 * -EAGAIN means the charge failed, which we treat like an
-		 * allocation failure. Therefore, return an error code indicating
-		 * the allocation failed - either -EBUSY if the allocation should
-		 * be retried with eviction, or -ENOSPC if there should be no second
-		 * attempt.
-		 */
-		if (ret == -EAGAIN)
-			return may_evict ? -EBUSY : -ENOSPC;
-
 		if (ret == -ENOSPC && may_evict)
-			return -EBUSY;
-
+			ret = -EBUSY;
 		return ret;
 	}
 
+	/*
+	 * Ownership of charge_pool has been transferred to the TTM resource,
+	 * don't make the caller think we still hold a reference to it.
+	 */
+	alloc_state->charge_pool = NULL;
 	return 0;
 }
 
@@ -596,8 +611,9 @@ static s64 ttm_bo_evict_cb(struct ttm_lru_walk *walk, struct ttm_buffer_object *
 
 	evict_walk->evicted++;
 	if (evict_walk->res)
-		lret = ttm_resource_alloc(evict_walk->evictor, evict_walk->place,
-					  evict_walk->res, NULL);
+		lret = ttm_bo_alloc_at_place(evict_walk->evictor, evict_walk->place,
+					     walk->arg.ctx, false, evict_walk->res,
+					     evict_walk->alloc_state);
 	if (lret == 0)
 		return 1;
 out:
@@ -636,6 +652,8 @@ static int ttm_bo_evict_alloc(struct ttm_device *bdev,
 	};
 	s64 lret;
 
+	state->in_evict = true;
+
 	evict_walk.walk.arg.trylock_only = true;
 	lret = ttm_lru_walk_for_evict(&evict_walk.walk, bdev, man, 1);
 
@@ -666,6 +684,7 @@ static int ttm_bo_evict_alloc(struct ttm_device *bdev,
 		goto retry;
 	}
 out:
+	state->in_evict = false;
 	if (lret < 0)
 		return lret;
 	if (lret == 0)
@@ -798,6 +817,7 @@ static int ttm_bo_alloc_resource(struct ttm_buffer_object *bo,
 				res, &alloc_state);
 
 		if (ret == -ENOSPC) {
+			dmem_cgroup_uncharge(alloc_state.charge_pool, bo->base.size);
 			dmem_cgroup_pool_state_put(alloc_state.limit_pool);
 			continue;
 		} else if (ret == -EBUSY) {
@@ -806,11 +826,15 @@ static int ttm_bo_alloc_resource(struct ttm_buffer_object *bo,
 
 			dmem_cgroup_pool_state_put(alloc_state.limit_pool);
 
-			if (ret == -EBUSY)
-				continue;
-			else if (ret)
+			if (ret) {
+				dmem_cgroup_uncharge(alloc_state.charge_pool,
+						bo->base.size);
+				if (ret == -EBUSY)
+					continue;
 				return ret;
+			}
 		} else if (ret) {
+			dmem_cgroup_uncharge(alloc_state.charge_pool, bo->base.size);
 			dmem_cgroup_pool_state_put(alloc_state.limit_pool);
 			return ret;
 		}
diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c
index 192fca24f37e4..a8a836f6e376a 100644
--- a/drivers/gpu/drm/ttm/ttm_resource.c
+++ b/drivers/gpu/drm/ttm/ttm_resource.c
@@ -373,30 +373,52 @@ void ttm_resource_fini(struct ttm_resource_manager *man,
 }
 EXPORT_SYMBOL(ttm_resource_fini);
 
+/**
+ * ttm_resource_try_charge - charge a resource manager's cgroup pool
+ * @bo: buffer for which an allocation should be charged
+ * @place: where the allocation is attempted to be placed
+ * @ret_pool: on charge success, the pool that was charged
+ * @ret_limit_pool: on charge failure, the pool responsible for the failure
+ *
+ * Should be used to charge cgroups before attempting resource allocation.
+ * When charging succeeds, the value of ret_pool should be passed to
+ * ttm_resource_alloc.
+ *
+ * Returns: 0 on charge success, negative errno on failure.
+ */
+int ttm_resource_try_charge(struct ttm_buffer_object *bo,
+			    const struct ttm_place *place,
+			    struct dmem_cgroup_pool_state **ret_pool,
+			    struct dmem_cgroup_pool_state **ret_limit_pool)
+{
+	struct ttm_resource_manager *man =
+		ttm_manager_type(bo->bdev, place->mem_type);
+
+	if (!man->cg) {
+		*ret_pool = NULL;
+		if (ret_limit_pool)
+			*ret_limit_pool = NULL;
+		return 0;
+	}
+
+	return dmem_cgroup_try_charge(man->cg, bo->base.size, ret_pool,
+				      ret_limit_pool);
+}
+
 int ttm_resource_alloc(struct ttm_buffer_object *bo,
 		       const struct ttm_place *place,
 		       struct ttm_resource **res_ptr,
-		       struct dmem_cgroup_pool_state **ret_limit_pool)
+		       struct dmem_cgroup_pool_state *charge_pool)
 {
 	struct ttm_resource_manager *man =
 		ttm_manager_type(bo->bdev, place->mem_type);
-	struct dmem_cgroup_pool_state *pool = NULL;
 	int ret;
 
-	if (man->cg) {
-		ret = dmem_cgroup_try_charge(man->cg, bo->base.size, &pool, ret_limit_pool);
-		if (ret)
-			return ret;
-	}
-
 	ret = man->func->alloc(man, bo, place, res_ptr);
-	if (ret) {
-		if (pool)
-			dmem_cgroup_uncharge(pool, bo->base.size);
+	if (ret)
 		return ret;
-	}
 
-	(*res_ptr)->css = pool;
+	(*res_ptr)->css = charge_pool;
 
 	spin_lock(&bo->bdev->lru_lock);
 	ttm_resource_add_bulk_move(*res_ptr, bo);
diff --git a/include/drm/ttm/ttm_resource.h b/include/drm/ttm/ttm_resource.h
index 33e80f30b8b82..549b5b796884d 100644
--- a/include/drm/ttm/ttm_resource.h
+++ b/include/drm/ttm/ttm_resource.h
@@ -456,10 +456,14 @@ void ttm_resource_init(struct ttm_buffer_object *bo,
 void ttm_resource_fini(struct ttm_resource_manager *man,
 		       struct ttm_resource *res);
 
+int ttm_resource_try_charge(struct ttm_buffer_object *bo,
+			    const struct ttm_place *place,
+			    struct dmem_cgroup_pool_state **ret_pool,
+			    struct dmem_cgroup_pool_state **ret_limit_pool);
 int ttm_resource_alloc(struct ttm_buffer_object *bo,
 		       const struct ttm_place *place,
 		       struct ttm_resource **res,
-		       struct dmem_cgroup_pool_state **ret_limit_pool);
+		       struct dmem_cgroup_pool_state *charge_pool);
 void ttm_resource_free(struct ttm_buffer_object *bo, struct ttm_resource **res);
 bool ttm_resource_intersects(struct ttm_device *bdev,
 			     struct ttm_resource *res,

-- 
2.53.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v6 5/6] drm/ttm: Be more aggressive when allocating below protection limit
  2026-03-13 11:39 [PATCH v6 0/6] cgroup/dmem,drm/ttm: Improve protection in contended cases Natalie Vock
                   ` (3 preceding siblings ...)
  2026-03-13 11:40 ` [PATCH v6 4/6] drm/ttm: Split cgroup charge and resource allocation Natalie Vock
@ 2026-03-13 11:40 ` Natalie Vock
  2026-03-13 13:29   ` Tvrtko Ursulin
  2026-03-13 21:10   ` Claude review: " Claude Code Review Bot
  2026-03-13 11:40 ` [PATCH v6 6/6] drm/ttm: Use common ancestor of evictor and evictee as limit pool Natalie Vock
  2026-03-13 21:10 ` Claude review: cgroup/dmem,drm/ttm: Improve protection in contended cases Claude Code Review Bot
  6 siblings, 2 replies; 21+ messages in thread
From: Natalie Vock @ 2026-03-13 11:40 UTC (permalink / raw)
  To: Maarten Lankhorst, Maxime Ripard, Tejun Heo, Johannes Weiner,
	Michal Koutný, Christian Koenig, Huang Rui, Matthew Auld,
	Matthew Brost, Maarten Lankhorst, Thomas Zimmermann, David Airlie,
	Simona Vetter, Tvrtko Ursulin
  Cc: cgroups, dri-devel, Natalie Vock

When the cgroup's memory usage is below the low/min limit and allocation
fails, try evicting some unprotected buffers to make space. Otherwise,
application buffers may be forced to go into GTT even though usage is
below the corresponding low/min limit, if other applications filled VRAM
with their allocations first.

Signed-off-by: Natalie Vock <natalie.vock@gmx.de>
---
 drivers/gpu/drm/ttm/ttm_bo.c | 51 +++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 48 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 4adc9b80cba4a..7300b91b77dd3 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -496,6 +496,10 @@ struct ttm_bo_alloc_state {
 	struct dmem_cgroup_pool_state *limit_pool;
 	/** @in_evict: Whether we are currently evicting buffers */
 	bool in_evict;
+	/** @may_try_low: Whether BOs protected by dmem 'low' (but not 'min')
+	 *  may be evicted. If false, only unprotected BOs, i.e. BOs whose
+	 *  cgroup exceeds its dmem low/min protection, are considered.
+	 */
+	bool may_try_low;
 };
 
 /**
@@ -545,6 +549,42 @@ static int ttm_bo_alloc_at_place(struct ttm_buffer_object *bo,
 		}
 	}
 
+	/*
+	 * cgroup protection plays a special role in eviction.
+	 * Conceptually, protection of memory via the dmem cgroup controller
+	 * entitles the protected cgroup to use a certain amount of memory.
+	 * There are two types of protection - the 'low' limit is a
+	 * "best-effort" protection, whereas the 'min' limit provides a hard
+	 * guarantee that memory within the cgroup's allowance will not be
+	 * evicted under any circumstance.
+	 *
+	 * To faithfully model this concept in TTM, we also need to take cgroup
+	 * protection into account when allocating. When allocation in one
+	 * place fails, TTM will default to trying other places first before
+	 * evicting.
+	 * If the allocation is covered by dmem cgroup protection, however,
+	 * this prevents the allocation from using the memory it is "entitled"
+	 * to. To make sure unprotected allocations cannot push new protected
+	 * allocations out of places they are "entitled" to use, we should
+	 * evict buffers not covered by any cgroup protection, if this
+	 * allocation is covered by cgroup protection.
+	 *
+	 * Buffers covered by 'min' protection are a special case - the 'min'
+	 * limit is a stronger guarantee than 'low', and thus buffers protected
+	 * by 'low' but not 'min' should also be considered for eviction.
+	 * Buffers protected by 'min' will never be considered for eviction
+	 * anyway, so the regular eviction path should be triggered here.
+	 * Buffers protected by 'low' but not 'min' will take a special
+	 * eviction path that only evicts buffers covered by neither 'low'
+	 * nor 'min' protection.
+	 */
+	if (!alloc_state->in_evict) {
+		may_evict |= dmem_cgroup_below_min(NULL, alloc_state->charge_pool);
+		alloc_state->may_try_low = may_evict;
+
+		may_evict |= dmem_cgroup_below_low(NULL, alloc_state->charge_pool);
+	}
+
 	ret = ttm_resource_alloc(bo, place, res, alloc_state->charge_pool);
 	if (ret) {
 		if (ret == -ENOSPC && may_evict)
@@ -657,8 +697,12 @@ static int ttm_bo_evict_alloc(struct ttm_device *bdev,
 	evict_walk.walk.arg.trylock_only = true;
 	lret = ttm_lru_walk_for_evict(&evict_walk.walk, bdev, man, 1);
 
-	/* One more attempt if we hit low limit? */
-	if (!lret && evict_walk.hit_low) {
+	/* If we failed to find enough BOs to evict, but we skipped over
+	 * some BOs because they were covered by dmem low protection, retry
+	 * evicting these protected BOs too, unless we're told not to
+	 * consider protected BOs at all.
+	 */
+	if (!lret && evict_walk.hit_low && state->may_try_low) {
 		evict_walk.try_low = true;
 		lret = ttm_lru_walk_for_evict(&evict_walk.walk, bdev, man, 1);
 	}
@@ -679,7 +723,8 @@ static int ttm_bo_evict_alloc(struct ttm_device *bdev,
 	} while (!lret && evict_walk.evicted);
 
 	/* We hit the low limit? Try once more */
-	if (!lret && evict_walk.hit_low && !evict_walk.try_low) {
+	if (!lret && evict_walk.hit_low && !evict_walk.try_low &&
+			state->may_try_low) {
 		evict_walk.try_low = true;
 		goto retry;
 	}

-- 
2.53.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v6 6/6] drm/ttm: Use common ancestor of evictor and evictee as limit pool
  2026-03-13 11:39 [PATCH v6 0/6] cgroup/dmem,drm/ttm: Improve protection in contended cases Natalie Vock
                   ` (4 preceding siblings ...)
  2026-03-13 11:40 ` [PATCH v6 5/6] drm/ttm: Be more aggressive when allocating below protection limit Natalie Vock
@ 2026-03-13 11:40 ` Natalie Vock
  2026-03-13 14:11   ` Tvrtko Ursulin
                     ` (2 more replies)
  2026-03-13 21:10 ` Claude review: cgroup/dmem,drm/ttm: Improve protection in contended cases Claude Code Review Bot
  6 siblings, 3 replies; 21+ messages in thread
From: Natalie Vock @ 2026-03-13 11:40 UTC (permalink / raw)
  To: Maarten Lankhorst, Maxime Ripard, Tejun Heo, Johannes Weiner,
	Michal Koutný, Christian Koenig, Huang Rui, Matthew Auld,
	Matthew Brost, Maarten Lankhorst, Thomas Zimmermann, David Airlie,
	Simona Vetter, Tvrtko Ursulin
  Cc: cgroups, dri-devel, Natalie Vock

When checking whether to skip certain buffers because they're protected
by dmem.low, we're checking the effective protection of the evictee's
cgroup, but depending on how the evictor's cgroup relates to the
evictee's, the semantics of effective protection values change.

When testing against cgroups from different subtrees, page_counter's
recursive protection propagates memory protection afforded to a parent
down to the child cgroups, even if the children were not explicitly
protected. This prevents cgroups whose parents were afforded no
protection from stealing memory from cgroups whose parents were afforded
more protection, without users having to explicitly propagate this
protection.

However, if we always calculate protection from the root cgroup, this
breaks prioritization of sibling cgroups: If one cgroup was explicitly
protected and its siblings were not, the protected cgroup should get
higher priority, i.e. the protected cgroup should be able to steal from
unprotected siblings. This only works if we restrict the protection
calculation to the subtree shared by evictor and evictee.

Signed-off-by: Natalie Vock <natalie.vock@gmx.de>
---
 drivers/gpu/drm/ttm/ttm_bo.c | 43 ++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 40 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 7300b91b77dd3..df4f4633a3a53 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -628,11 +628,48 @@ static s64 ttm_bo_evict_cb(struct ttm_lru_walk *walk, struct ttm_buffer_object *
 {
 	struct ttm_bo_evict_walk *evict_walk =
 		container_of(walk, typeof(*evict_walk), walk);
+	struct dmem_cgroup_pool_state *limit_pool, *ancestor = NULL;
+	bool evict_valuable;
 	s64 lret;
 
-	if (!dmem_cgroup_state_evict_valuable(evict_walk->alloc_state->limit_pool,
-					      bo->resource->css, evict_walk->try_low,
-					      &evict_walk->hit_low))
+	/*
+	 * If may_try_low is not set, then we're trying to evict unprotected
+	 * buffers in favor of a protected allocation for charge_pool. Explicitly skip
+	 * buffers belonging to the same cgroup here - that cgroup is definitely protected,
+	 * even though dmem_cgroup_state_evict_valuable would allow the eviction because a
+	 * cgroup is always allowed to evict from itself even if it is protected.
+	 */
+	if (!evict_walk->alloc_state->may_try_low &&
+			bo->resource->css == evict_walk->alloc_state->charge_pool)
+		return 0;
+
+	limit_pool = evict_walk->alloc_state->limit_pool;
+	/*
+	 * If there is no explicit limit pool, find the root of the shared subtree between
+	 * evictor and evictee. This is important so that recursive protection rules can
+	 * apply properly: Recursive protection distributes cgroup protection afforded
+	 * to a parent cgroup but not used explicitly by a child cgroup between all child
+	 * cgroups (see docs of effective_protection in mm/page_counter.c). However, when
+	 * direct siblings compete for memory, siblings that were explicitly protected
+	 * should get prioritized over siblings that weren't. This only happens correctly
+	 * when the root of the shared subtree is passed to
+	 * dmem_cgroup_state_evict_valuable. Otherwise, the effective-protection
+	 * calculation cannot distinguish direct siblings from unrelated subtrees and the
+	 * calculated protection ends up wrong.
+	 */
+	if (!limit_pool) {
+		ancestor = dmem_cgroup_get_common_ancestor(bo->resource->css,
+							   evict_walk->alloc_state->charge_pool);
+		limit_pool = ancestor;
+	}
+
+	evict_valuable = dmem_cgroup_state_evict_valuable(limit_pool, bo->resource->css,
+							  evict_walk->try_low,
+							  &evict_walk->hit_low);
+	if (ancestor)
+		dmem_cgroup_pool_state_put(ancestor);
+
+	if (!evict_valuable)
 		return 0;
 
 	if (bo->pin_count || !bo->bdev->funcs->eviction_valuable(bo, evict_walk->place))

-- 
2.53.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH v6 4/6] drm/ttm: Split cgroup charge and resource allocation
  2026-03-13 11:40 ` [PATCH v6 4/6] drm/ttm: Split cgroup charge and resource allocation Natalie Vock
@ 2026-03-13 12:53   ` Tvrtko Ursulin
  2026-03-13 21:10   ` Claude review: " Claude Code Review Bot
  1 sibling, 0 replies; 21+ messages in thread
From: Tvrtko Ursulin @ 2026-03-13 12:53 UTC (permalink / raw)
  To: Natalie Vock, Maarten Lankhorst, Maxime Ripard, Tejun Heo,
	Johannes Weiner, Michal Koutný, Christian Koenig, Huang Rui,
	Matthew Auld, Matthew Brost, Maarten Lankhorst, Thomas Zimmermann,
	David Airlie, Simona Vetter
  Cc: cgroups, dri-devel


On 13/03/2026 11:40, Natalie Vock wrote:
> Coupling resource allocation and cgroup charging is racy when charging
> succeeds, but subsequent resource allocation fails. Certain eviction
> decisions are made on the basis of whether the allocating cgroup is
> protected, i.e. within its min/low limits, but with the charge being
> tied to resource allocation (and uncharged when the resource allocation
> fails), this check is done at a point where the allocation is not actually
> charged to the cgroup.
> 
> This is subtly wrong if the allocation were to cause the cgroup to exceed
> the min/low protection, but it's even more wrong if the same cgroup tries
> allocating multiple buffers concurrently: In this case, the min/low
> protection may pass for all allocation attempts when the real min/low
> protection covers only some, or potentially none of the allocated
> buffers.
> 
> Instead, charge the allocation to the cgroup once and keep the charge
> for as long as we try to allocate a ttm_resource, and only undo the charge
> if allocating the resource is ultimately unsuccessful and we move on to
> a different ttm_place.
> 
> Signed-off-by: Natalie Vock <natalie.vock@gmx.de>
> ---
>   drivers/gpu/drm/ttm/ttm_bo.c       | 66 ++++++++++++++++++++++++++------------
>   drivers/gpu/drm/ttm/ttm_resource.c | 48 +++++++++++++++++++--------
>   include/drm/ttm/ttm_resource.h     |  6 +++-
>   3 files changed, 85 insertions(+), 35 deletions(-)
> 
> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> index 5cca0d6edbaf6..4adc9b80cba4a 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> @@ -490,8 +490,12 @@ int ttm_bo_evict_first(struct ttm_device *bdev, struct ttm_resource_manager *man
>   }
>   
>   struct ttm_bo_alloc_state {
> +	/** @charge_pool: The memory pool the resource is charged to */
> +	struct dmem_cgroup_pool_state *charge_pool;
>   	/** @limit_pool: Which pool limit we should test against */
>   	struct dmem_cgroup_pool_state *limit_pool;
> +	/** @in_evict: Whether we are currently evicting buffers */
> +	bool in_evict;
>   };
>   
>   /**
> @@ -520,28 +524,39 @@ static int ttm_bo_alloc_at_place(struct ttm_buffer_object *bo,
>   	bool may_evict;
>   	int ret;
>   
> -	may_evict = force_space && place->mem_type != TTM_PL_SYSTEM;
> -
> -	ret = ttm_resource_alloc(bo, place, res,
> -				 force_space ? &alloc_state->limit_pool : NULL);
> +	may_evict = !alloc_state->in_evict && force_space &&
> +		    place->mem_type != TTM_PL_SYSTEM;
> +	if (!alloc_state->charge_pool) {
> +		ret = ttm_resource_try_charge(bo, place, &alloc_state->charge_pool,
> +					      force_space ? &alloc_state->limit_pool
> +							  : NULL);
> +		if (ret) {
> +			/*
> +			 * -EAGAIN means the charge failed, which we treat
> +			 * like an allocation failure. Therefore, return an
> +			 * error code indicating the allocation failed -
> +			 * either -EBUSY if the allocation should be
> +			 * retried with eviction, or -ENOSPC if there should
> +			 * be no second attempt.
> +			 */
> +			if (ret == -EAGAIN)
> +				ret = may_evict ? -EBUSY : -ENOSPC;
> +			return ret;
> +		}
> +	}
>   
> +	ret = ttm_resource_alloc(bo, place, res, alloc_state->charge_pool);
>   	if (ret) {
> -		/*
> -		 * -EAGAIN means the charge failed, which we treat like an
> -		 * allocation failure. Therefore, return an error code indicating
> -		 * the allocation failed - either -EBUSY if the allocation should
> -		 * be retried with eviction, or -ENOSPC if there should be no second
> -		 * attempt.
> -		 */
> -		if (ret == -EAGAIN)
> -			return may_evict ? -EBUSY : -ENOSPC;
> -
>   		if (ret == -ENOSPC && may_evict)
> -			return -EBUSY;
> -
> +			ret = -EBUSY;
>   		return ret;
>   	}
>   
> +	/*
> +	 * Ownership of charge_pool has been transferred to the TTM resource,
> +	 * don't make the caller think we still hold a reference to it.
> +	 */
> +	alloc_state->charge_pool = NULL;
>   	return 0;
>   }
>   
> @@ -596,8 +611,9 @@ static s64 ttm_bo_evict_cb(struct ttm_lru_walk *walk, struct ttm_buffer_object *
>   
>   	evict_walk->evicted++;
>   	if (evict_walk->res)
> -		lret = ttm_resource_alloc(evict_walk->evictor, evict_walk->place,
> -					  evict_walk->res, NULL);
> +		lret = ttm_bo_alloc_at_place(evict_walk->evictor, evict_walk->place,
> +					     walk->arg.ctx, false, evict_walk->res,
> +					     evict_walk->alloc_state);

Oh I am glad this worked out. I will not go so far as to say the TTM 
eviction logic is now easy to follow, but at least the new state machine 
logic is consolidated. Anyway, I went back and forth many, many times and 
it all looks good to me.

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>

Btw, have you looked at the TTM kunit tests? Do any cover the new paths, 
or would it make sense to add some coverage with this series, or as a 
follow-up?

Regards,

Tvrtko

>   	if (lret == 0)
>   		return 1;
>   out:
> @@ -636,6 +652,8 @@ static int ttm_bo_evict_alloc(struct ttm_device *bdev,
>   	};
>   	s64 lret;
>   
> +	state->in_evict = true;
> +
>   	evict_walk.walk.arg.trylock_only = true;
>   	lret = ttm_lru_walk_for_evict(&evict_walk.walk, bdev, man, 1);
>   
> @@ -666,6 +684,7 @@ static int ttm_bo_evict_alloc(struct ttm_device *bdev,
>   		goto retry;
>   	}
>   out:
> +	state->in_evict = false;
>   	if (lret < 0)
>   		return lret;
>   	if (lret == 0)
> @@ -798,6 +817,7 @@ static int ttm_bo_alloc_resource(struct ttm_buffer_object *bo,
>   				res, &alloc_state);
>   
>   		if (ret == -ENOSPC) {
> +			dmem_cgroup_uncharge(alloc_state.charge_pool, bo->base.size);
>   			dmem_cgroup_pool_state_put(alloc_state.limit_pool);
>   			continue;
>   		} else if (ret == -EBUSY) {
> @@ -806,11 +826,15 @@ static int ttm_bo_alloc_resource(struct ttm_buffer_object *bo,
>   
>   			dmem_cgroup_pool_state_put(alloc_state.limit_pool);
>   
> -			if (ret == -EBUSY)
> -				continue;
> -			else if (ret)
> +			if (ret) {
> +				dmem_cgroup_uncharge(alloc_state.charge_pool,
> +						bo->base.size);
> +				if (ret == -EBUSY)
> +					continue;
>   				return ret;
> +			}
>   		} else if (ret) {
> +			dmem_cgroup_uncharge(alloc_state.charge_pool, bo->base.size);
>   			dmem_cgroup_pool_state_put(alloc_state.limit_pool);
>   			return ret;
>   		}
> diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c
> index 192fca24f37e4..a8a836f6e376a 100644
> --- a/drivers/gpu/drm/ttm/ttm_resource.c
> +++ b/drivers/gpu/drm/ttm/ttm_resource.c
> @@ -373,30 +373,52 @@ void ttm_resource_fini(struct ttm_resource_manager *man,
>   }
>   EXPORT_SYMBOL(ttm_resource_fini);
>   
> +/**
> + * ttm_resource_try_charge - charge a resource manager's cgroup pool
> + * @bo: buffer for which an allocation should be charged
> + * @place: where the allocation is attempted to be placed
> + * @ret_pool: on charge success, the pool that was charged
> + * @ret_limit_pool: on charge failure, the pool responsible for the failure
> + *
> + * Should be used to charge cgroups before attempting resource allocation.
> + * When charging succeeds, the value of ret_pool should be passed to
> + * ttm_resource_alloc.
> + *
> + * Returns: 0 on charge success, negative errno on failure.
> + */
> +int ttm_resource_try_charge(struct ttm_buffer_object *bo,
> +			    const struct ttm_place *place,
> +			    struct dmem_cgroup_pool_state **ret_pool,
> +			    struct dmem_cgroup_pool_state **ret_limit_pool)
> +{
> +	struct ttm_resource_manager *man =
> +		ttm_manager_type(bo->bdev, place->mem_type);
> +
> +	if (!man->cg) {
> +		*ret_pool = NULL;
> +		if (ret_limit_pool)
> +			*ret_limit_pool = NULL;
> +		return 0;
> +	}
> +
> +	return dmem_cgroup_try_charge(man->cg, bo->base.size, ret_pool,
> +				      ret_limit_pool);
> +}
> +
>   int ttm_resource_alloc(struct ttm_buffer_object *bo,
>   		       const struct ttm_place *place,
>   		       struct ttm_resource **res_ptr,
> -		       struct dmem_cgroup_pool_state **ret_limit_pool)
> +		       struct dmem_cgroup_pool_state *charge_pool)
>   {
>   	struct ttm_resource_manager *man =
>   		ttm_manager_type(bo->bdev, place->mem_type);
> -	struct dmem_cgroup_pool_state *pool = NULL;
>   	int ret;
>   
> -	if (man->cg) {
> -		ret = dmem_cgroup_try_charge(man->cg, bo->base.size, &pool, ret_limit_pool);
> -		if (ret)
> -			return ret;
> -	}
> -
>   	ret = man->func->alloc(man, bo, place, res_ptr);
> -	if (ret) {
> -		if (pool)
> -			dmem_cgroup_uncharge(pool, bo->base.size);
> +	if (ret)
>   		return ret;
> -	}
>   
> -	(*res_ptr)->css = pool;
> +	(*res_ptr)->css = charge_pool;
>   
>   	spin_lock(&bo->bdev->lru_lock);
>   	ttm_resource_add_bulk_move(*res_ptr, bo);
> diff --git a/include/drm/ttm/ttm_resource.h b/include/drm/ttm/ttm_resource.h
> index 33e80f30b8b82..549b5b796884d 100644
> --- a/include/drm/ttm/ttm_resource.h
> +++ b/include/drm/ttm/ttm_resource.h
> @@ -456,10 +456,14 @@ void ttm_resource_init(struct ttm_buffer_object *bo,
>   void ttm_resource_fini(struct ttm_resource_manager *man,
>   		       struct ttm_resource *res);
>   
> +int ttm_resource_try_charge(struct ttm_buffer_object *bo,
> +			    const struct ttm_place *place,
> +			    struct dmem_cgroup_pool_state **ret_pool,
> +			    struct dmem_cgroup_pool_state **ret_limit_pool);
>   int ttm_resource_alloc(struct ttm_buffer_object *bo,
>   		       const struct ttm_place *place,
>   		       struct ttm_resource **res,
> -		       struct dmem_cgroup_pool_state **ret_limit_pool);
> +		       struct dmem_cgroup_pool_state *charge_pool);
>   void ttm_resource_free(struct ttm_buffer_object *bo, struct ttm_resource **res);
>   bool ttm_resource_intersects(struct ttm_device *bdev,
>   			     struct ttm_resource *res,
> 


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v6 5/6] drm/ttm: Be more aggressive when allocating below protection limit
  2026-03-13 11:40 ` [PATCH v6 5/6] drm/ttm: Be more aggressive when allocating below protection limit Natalie Vock
@ 2026-03-13 13:29   ` Tvrtko Ursulin
  2026-03-13 21:10   ` Claude review: " Claude Code Review Bot
  1 sibling, 0 replies; 21+ messages in thread
From: Tvrtko Ursulin @ 2026-03-13 13:29 UTC (permalink / raw)
  To: Natalie Vock, Maarten Lankhorst, Maxime Ripard, Tejun Heo,
	Johannes Weiner, Michal Koutný, Christian Koenig, Huang Rui,
	Matthew Auld, Matthew Brost, Maarten Lankhorst, Thomas Zimmermann,
	David Airlie, Simona Vetter
  Cc: cgroups, dri-devel


On 13/03/2026 11:40, Natalie Vock wrote:
> When the cgroup's memory usage is below the low/min limit and allocation
> fails, try evicting some unprotected buffers to make space. Otherwise,
> application buffers may be forced to go into GTT even though usage is
> below the corresponding low/min limit, if other applications filled VRAM
> with their allocations first.
> 
> Signed-off-by: Natalie Vock <natalie.vock@gmx.de>
> ---
>   drivers/gpu/drm/ttm/ttm_bo.c | 51 +++++++++++++++++++++++++++++++++++++++++---
>   1 file changed, 48 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> index 4adc9b80cba4a..7300b91b77dd3 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> @@ -496,6 +496,10 @@ struct ttm_bo_alloc_state {
>   	struct dmem_cgroup_pool_state *limit_pool;
>   	/** @in_evict: Whether we are currently evicting buffers */
>   	bool in_evict;
> +	/** @may_try_low: If only unprotected BOs, i.e. BOs whose cgroup
> +	 *  is exceeding its dmem low/min protection, should be considered for eviction
> +	 */
> +	bool may_try_low;
>   };
>   
>   /**
> @@ -545,6 +549,42 @@ static int ttm_bo_alloc_at_place(struct ttm_buffer_object *bo,
>   		}
>   	}
>   
> +	/*
> +	 * cgroup protection plays a special role in eviction.
> +	 * Conceptually, protection of memory via the dmem cgroup controller
> +	 * entitles the protected cgroup to use a certain amount of memory.
> +	 * There are two types of protection - the 'low' limit is a
> +	 * "best-effort" protection, whereas the 'min' limit provides a hard
> +	 * guarantee that memory within the cgroup's allowance will not be
> +	 * evicted under any circumstance.
> +	 *
> +	 * To faithfully model this concept in TTM, we also need to take cgroup
> +	 * protection into account when allocating. When allocation in one
> +	 * place fails, TTM will default to trying other places first before
> +	 * evicting.
> +	 * If the allocation is covered by dmem cgroup protection, however,
> +	 * this prevents the allocation from using the memory it is "entitled"
> +	 * to. To make sure unprotected allocations cannot push new protected
> +	 * allocations out of places they are "entitled" to use, we should
> +	 * evict buffers not covered by any cgroup protection, if this
> +	 * allocation is covered by cgroup protection.
> +	 *
> +	 * Buffers covered by 'min' protection are a special case - the 'min'
> +	 * limit is a stronger guarantee than 'low', and thus buffers protected
> +	 * by 'low' but not 'min' should also be considered for eviction.
> +	 * Buffers protected by 'min' will never be considered for eviction
> +	 * anyway, so the regular eviction path should be triggered here.
> +	 * Buffers protected by 'low' but not 'min' will take a special
> +	 * eviction path that only evicts buffers covered by neither 'low' or
> +	 * 'min' protections.
> +	 */
> +	if (!alloc_state->in_evict) {
> +		may_evict |= dmem_cgroup_below_min(NULL, alloc_state->charge_pool);
> +		alloc_state->may_try_low = may_evict;
> +
> +		may_evict |= dmem_cgroup_below_low(NULL, alloc_state->charge_pool);

For some value of optimisation, you could combine the two calls to 
dmem_cgroup_below_min/low into a single helper that returns both min 
and low (given this is the only caller), saving a pair of function 
calls and tree-traversal calculations. Or... I am not sure whether 
simply exporting dmem_cgroup_calculate_protection() and making it 
return page_counters would be okay, or a copy of them, or whether a 
new data structure would need to be added.

Anyway, that can be discussed separately. In the meantime this patch 
looks good to me:

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>

Regards,

Tvrtko

> +	}
> +
>   	ret = ttm_resource_alloc(bo, place, res, alloc_state->charge_pool);
>   	if (ret) {
>   		if (ret == -ENOSPC && may_evict)
> @@ -657,8 +697,12 @@ static int ttm_bo_evict_alloc(struct ttm_device *bdev,
>   	evict_walk.walk.arg.trylock_only = true;
>   	lret = ttm_lru_walk_for_evict(&evict_walk.walk, bdev, man, 1);
>   
> -	/* One more attempt if we hit low limit? */
> -	if (!lret && evict_walk.hit_low) {
> +	/* If we failed to find enough BOs to evict, but we skipped over
> +	 * some BOs because they were covered by dmem low protection, retry
> +	 * evicting these protected BOs too, except if we're told not to
> +	 * consider protected BOs at all.
> +	 */
> +	if (!lret && evict_walk.hit_low && state->may_try_low) {
>   		evict_walk.try_low = true;
>   		lret = ttm_lru_walk_for_evict(&evict_walk.walk, bdev, man, 1);
>   	}
> @@ -679,7 +723,8 @@ static int ttm_bo_evict_alloc(struct ttm_device *bdev,
>   	} while (!lret && evict_walk.evicted);
>   
>   	/* We hit the low limit? Try once more */
> -	if (!lret && evict_walk.hit_low && !evict_walk.try_low) {
> +	if (!lret && evict_walk.hit_low && !evict_walk.try_low &&
> +			state->may_try_low) {
>   		evict_walk.try_low = true;
>   		goto retry;
>   	}
> 


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v6 6/6] drm/ttm: Use common ancestor of evictor and evictee as limit pool
  2026-03-13 11:40 ` [PATCH v6 6/6] drm/ttm: Use common ancestor of evictor and evictee as limit pool Natalie Vock
@ 2026-03-13 14:11   ` Tvrtko Ursulin
  2026-03-13 14:16   ` Michal Koutný
  2026-03-13 21:10   ` Claude review: " Claude Code Review Bot
  2 siblings, 0 replies; 21+ messages in thread
From: Tvrtko Ursulin @ 2026-03-13 14:11 UTC (permalink / raw)
  To: Natalie Vock, Maarten Lankhorst, Maxime Ripard, Tejun Heo,
	Johannes Weiner, Michal Koutný, Christian Koenig, Huang Rui,
	Matthew Auld, Matthew Brost, Maarten Lankhorst, Thomas Zimmermann,
	David Airlie, Simona Vetter
  Cc: cgroups, dri-devel


On 13/03/2026 11:40, Natalie Vock wrote:
> When checking whether to skip certain buffers because they're protected
> by dmem.low, we're checking the effective protection of the evictee's
> cgroup, but depending on how the evictor's cgroup relates to the
> evictee's, the semantics of effective protection values change.
> 
> When testing against cgroups from different subtrees, page_counter's
> recursive protection propagates memory protection afforded to a parent
> down to the child cgroups, even if the children were not explicitly
> protected. This prevents cgroups whose parents were afforded no
> protection from stealing memory from cgroups whose parents were afforded
> more protection, without users having to explicitly propagate this
> protection.
> 
> However, if we always calculate protection from the root cgroup, this
> breaks prioritization of sibling cgroups: If one cgroup was explicitly
> protected and its siblings were not, the protected cgroup should get
> higher priority, i.e. the protected cgroup should be able to steal from
> unprotected siblings. This only works if we restrict the protection
> calculation to the subtree shared by evictor and evictee.
> 
> Signed-off-by: Natalie Vock <natalie.vock@gmx.de>
> ---
>   drivers/gpu/drm/ttm/ttm_bo.c | 43 ++++++++++++++++++++++++++++++++++++++++---
>   1 file changed, 40 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> index 7300b91b77dd3..df4f4633a3a53 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> @@ -628,11 +628,48 @@ static s64 ttm_bo_evict_cb(struct ttm_lru_walk *walk, struct ttm_buffer_object *
>   {
>   	struct ttm_bo_evict_walk *evict_walk =
>   		container_of(walk, typeof(*evict_walk), walk);
> +	struct dmem_cgroup_pool_state *limit_pool, *ancestor = NULL;
> +	bool evict_valuable;
>   	s64 lret;
>   
> -	if (!dmem_cgroup_state_evict_valuable(evict_walk->alloc_state->limit_pool,
> -					      bo->resource->css, evict_walk->try_low,
> -					      &evict_walk->hit_low))
> +	/*
> +	 * If may_try_low is not set, then we're trying to evict unprotected
> +	 * buffers in favor of a protected allocation for charge_pool. Explicitly skip
> +	 * buffers belonging to the same cgroup here - that cgroup is definitely protected,
> +	 * even though dmem_cgroup_state_evict_valuable would allow the eviction because a
> +	 * cgroup is always allowed to evict from itself even if it is protected.
> +	 */
> +	if (!evict_walk->alloc_state->may_try_low &&
> +			bo->resource->css == evict_walk->alloc_state->charge_pool)
> +		return 0;

Hm.. should this hunk go into the previous patch?

> +
> +	limit_pool = evict_walk->alloc_state->limit_pool;
> +	/*
> +	 * If there is no explicit limit pool, find the root of the shared subtree between
> +	 * evictor and evictee. This is important so that recursive protection rules can
> +	 * apply properly: Recursive protection distributes cgroup protection afforded
> +	 * to a parent cgroup but not used explicitly by a child cgroup between all child
> +	 * cgroups (see docs of effective_protection in mm/page_counter.c). However, when
> +	 * direct siblings compete for memory, siblings that were explicitly protected
> +	 * should get prioritized over siblings that weren't. This only happens correctly
> +	 * when the root of the shared subtree is passed to
> +	 * dmem_cgroup_state_evict_valuable. Otherwise, the effective-protection
> +	 * calculation cannot distinguish direct siblings from unrelated subtrees and the
> +	 * calculated protection ends up wrong.
> +	 */
> +	if (!limit_pool) {
> +		ancestor = dmem_cgroup_get_common_ancestor(bo->resource->css,
> +							   evict_walk->alloc_state->charge_pool);
> +		limit_pool = ancestor;
> +	}
> +
> +	evict_valuable = dmem_cgroup_state_evict_valuable(limit_pool, bo->resource->css,
> +							  evict_walk->try_low,
> +							  &evict_walk->hit_low);
> +	if (ancestor)
> +		dmem_cgroup_pool_state_put(ancestor);
> +
> +	if (!evict_valuable)

This part is probably better reviewed by someone more familiar with the 
dmem controller. One question I have though is whether this patch is 
independent from the rest of the series or it really makes sense for it 
to be last?

Regards,

Tvrtko

>   		return 0;
>   
>   	if (bo->pin_count || !bo->bdev->funcs->eviction_valuable(bo, evict_walk->place))
> 


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v6 2/6] cgroup,cgroup/dmem: Add (dmem_)cgroup_common_ancestor helper
  2026-03-13 11:40 ` [PATCH v6 2/6] cgroup,cgroup/dmem: Add (dmem_)cgroup_common_ancestor helper Natalie Vock
@ 2026-03-13 14:16   ` Michal Koutný
  2026-03-13 21:10   ` Claude review: " Claude Code Review Bot
  1 sibling, 0 replies; 21+ messages in thread
From: Michal Koutný @ 2026-03-13 14:16 UTC (permalink / raw)
  To: Natalie Vock
  Cc: Maarten Lankhorst, Maxime Ripard, Tejun Heo, Johannes Weiner,
	Christian Koenig, Huang Rui, Matthew Auld, Matthew Brost,
	Maarten Lankhorst, Thomas Zimmermann, David Airlie, Simona Vetter,
	Tvrtko Ursulin, cgroups, dri-devel

[-- Attachment #1: Type: text/plain, Size: 867 bytes --]

On Fri, Mar 13, 2026 at 12:40:01PM +0100, Natalie Vock <natalie.vock@gmx.de> wrote:
> This helps to find a common subtree of two resources, which is important
> when determining whether it's helpful to evict one resource in favor of
> another.
> 
> To facilitate this, add a common helper to find the ancestor of two
> cgroups using each cgroup's ancestor array.
> 
> Signed-off-by: Natalie Vock <natalie.vock@gmx.de>
> ---
>  include/linux/cgroup.h      | 21 +++++++++++++++++++++
>  include/linux/cgroup_dmem.h |  9 +++++++++
>  kernel/cgroup/dmem.c        | 28 ++++++++++++++++++++++++++++
>  3 files changed, 58 insertions(+)

When the helper is added, the idiom in
kernel/cgroup/cgroup.c:cgroup_procs_write_permission() could perhaps be
switched to it too (structured in commits) to "optimize" migrations from
large depths.

Thanks,
Michal

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 265 bytes --]

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v6 6/6] drm/ttm: Use common ancestor of evictor and evictee as limit pool
  2026-03-13 11:40 ` [PATCH v6 6/6] drm/ttm: Use common ancestor of evictor and evictee as limit pool Natalie Vock
  2026-03-13 14:11   ` Tvrtko Ursulin
@ 2026-03-13 14:16   ` Michal Koutný
  2026-03-13 21:10   ` Claude review: " Claude Code Review Bot
  2 siblings, 0 replies; 21+ messages in thread
From: Michal Koutný @ 2026-03-13 14:16 UTC (permalink / raw)
  To: Natalie Vock
  Cc: Maarten Lankhorst, Maxime Ripard, Tejun Heo, Johannes Weiner,
	Christian Koenig, Huang Rui, Matthew Auld, Matthew Brost,
	Maarten Lankhorst, Thomas Zimmermann, David Airlie, Simona Vetter,
	Tvrtko Ursulin, cgroups, dri-devel

[-- Attachment #1: Type: text/plain, Size: 1044 bytes --]

Hi.

On Fri, Mar 13, 2026 at 12:40:05PM +0100, Natalie Vock <natalie.vock@gmx.de> wrote:
> However, if we always calculate protection from the root cgroup, this
> breaks prioritization of sibling cgroups: If one cgroup was explicitly
> protected and its siblings were not, the protected cgroup should get
> higher priority, i.e. the protected cgroup should be able to steal from
> unprotected siblings. This only works if we restrict the protection
> calculation to the subtree shared by evictor and evictee.

When there are three siblings A, B, C where A has protection and C is
doing a new allocation (evictor) but hits a limit on L, what effective
value would be applied to A in each of the cases below?

Case 1)

  L    dmem.max
  `- A dmem.low
  `- B
  `- C (alloc)

Case 2)

  L       dmem.max
  `- M    // dmem.low=0
     `- A dmem.low
     `- B
     `- C (alloc)

I think it should be the configured A:dmem.low in the first case but
zero in the latter case because M has no protection configured. -- Is
that correct?

Thanks,
Michal

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 265 bytes --]

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Claude review: cgroup/dmem,drm/ttm: Improve protection in contended cases
  2026-03-13 11:39 [PATCH v6 0/6] cgroup/dmem,drm/ttm: Improve protection in contended cases Natalie Vock
                   ` (5 preceding siblings ...)
  2026-03-13 11:40 ` [PATCH v6 6/6] drm/ttm: Use common ancestor of evictor and evictee as limit pool Natalie Vock
@ 2026-03-13 21:10 ` Claude Code Review Bot
  6 siblings, 0 replies; 21+ messages in thread
From: Claude Code Review Bot @ 2026-03-13 21:10 UTC (permalink / raw)
  To: dri-devel-reviews

Overall Series Review

Subject: cgroup/dmem,drm/ttm: Improve protection in contended cases
Author: Natalie Vock <natalie.vock@gmx.de>
Patches: 12
Reviewed: 2026-03-14T07:10:15.097178

---

This is v6 of a series by Natalie Vock that improves dmem cgroup protection enforcement in TTM. The core problem is well-motivated: when VRAM is filled by unprotected applications, TTM falls back to GTT for protected allocations without ever attempting eviction, defeating the purpose of cgroup protection. The series adds infrastructure (protection queries, common ancestor helpers), refactors TTM allocation flow, decouples cgroup charging from resource allocation, and then uses all of this to trigger eviction for protected allocations even when `force_space` is not set.

The series is well-structured with good incremental patches, clear commit messages, and has clearly benefited from multiple rounds of review (Tvrtko's R-b on patch 3). The overall approach is sound — it correctly models the memcg protection semantics (min vs low) and handles the subtree/sibling distinction properly.

A few issues are noted below, mostly minor.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Claude review: cgroup/dmem: Add queries for protection values
  2026-03-13 11:40 ` [PATCH v6 1/6] cgroup/dmem: Add queries for protection values Natalie Vock
@ 2026-03-13 21:10   ` Claude Code Review Bot
  0 siblings, 0 replies; 21+ messages in thread
From: Claude Code Review Bot @ 2026-03-13 21:10 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

This patch adds `dmem_cgroup_below_min()` and `dmem_cgroup_below_low()` as counterparts to memcg's `mem_cgroup_below_{min,low}`.

The implementation looks correct. Both functions walk up to find the root when `root` is NULL, call `dmem_cgroup_calculate_protection()` to refresh effective values, and then compare usage against `emin`/`elow`.

**Minor nit**: The comment says "the memcg pendant" — "pendant" is a calque (the word means "counterpart" in several European languages, but in English a pendant is an ornament). Suggest s/pendant/counterpart/. This appears twice, since both functions carry the same comment.

The `<=` comparison is consistent with `dmem_cgroup_state_evict_valuable()` which uses `used <= min` to mean "protected". No issues here.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Claude review: cgroup,cgroup/dmem: Add (dmem_)cgroup_common_ancestor helper
  2026-03-13 11:40 ` [PATCH v6 2/6] cgroup,cgroup/dmem: Add (dmem_)cgroup_common_ancestor helper Natalie Vock
  2026-03-13 14:16   ` Michal Koutný
@ 2026-03-13 21:10   ` Claude Code Review Bot
  1 sibling, 0 replies; 21+ messages in thread
From: Claude Code Review Bot @ 2026-03-13 21:10 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

Adds `cgroup_common_ancestor()` to `include/linux/cgroup.h` and `dmem_cgroup_get_common_ancestor()` to dmem.

The `cgroup_common_ancestor()` implementation is clean — walks from `min(a->level, b->level)` downward checking `ancestors[]`. This is correct since two cgroups in the same hierarchy share ancestors at every level up to and including their LCA.

**Doc typo in `dmem_cgroup_get_common_ancestor()`**:
```
 * @b: First pool to find the common ancestor of.
```
Should say `@b: Second pool...`.

**Potential concern**: `dmem_cgroup_get_common_ancestor()` calls `cgroup_e_css()` and then `get_cg_pool_unlocked()`. The `cgroup_e_css()` call is done without holding `rcu_read_lock()`. Looking at the kernel, `cgroup_e_css()` dereferences `cgroup->subsys[]` which is RCU-protected. This should be called under `rcu_read_lock()`, or the caller should hold a reference that prevents the css from going away. The css obtained from `cgroup_e_css()` is passed to `css_to_dmemcs()` and then to `get_cg_pool_unlocked()` — if the css could be freed between these calls, this is a use-after-free. This warrants investigation: does the fact that `a` and `b` are alive (and hold references to their respective css) guarantee that the common ancestor's css is also alive? In cgroup v2 with a unified hierarchy this should be the case since parents can't be destroyed while children exist, but this should be documented or protected with `rcu_read_lock()` + `css_get()`.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Claude review: drm/ttm: Extract code for attempting allocation in a place
  2026-03-13 11:40 ` [PATCH v6 3/6] drm/ttm: Extract code for attempting allocation in a place Natalie Vock
@ 2026-03-13 21:10   ` Claude Code Review Bot
  0 siblings, 0 replies; 21+ messages in thread
From: Claude Code Review Bot @ 2026-03-13 21:10 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

Clean refactoring that extracts `ttm_bo_alloc_at_place()` and introduces `struct ttm_bo_alloc_state`. The `limit_pool` is moved from `ttm_bo_evict_walk` into `ttm_bo_alloc_state`, and `ttm_bo_evict_walk` gets a pointer to the alloc_state.

The error handling mapping is preserved correctly:
- `-EAGAIN` (charge fail) → `-EBUSY` if eviction possible, `-ENOSPC` otherwise
- `-ENOSPC` (alloc fail) + `may_evict` → `-EBUSY`

Reviewed-by from Tvrtko. Looks good, no functional change intended and none introduced.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Claude review: drm/ttm: Split cgroup charge and resource allocation
  2026-03-13 11:40 ` [PATCH v6 4/6] drm/ttm: Split cgroup charge and resource allocation Natalie Vock
  2026-03-13 12:53   ` Tvrtko Ursulin
@ 2026-03-13 21:10   ` Claude Code Review Bot
  1 sibling, 0 replies; 21+ messages in thread
From: Claude Code Review Bot @ 2026-03-13 21:10 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

This is the most complex refactoring patch. It splits `ttm_resource_alloc()` into `ttm_resource_try_charge()` + `ttm_resource_alloc()`, keeping the charge alive across eviction retries.

**Key design**: `charge_pool` is charged once, and ownership transfers to the `ttm_resource` on success (set to NULL in alloc_state). On failure to place, the charge is explicitly undone with `dmem_cgroup_uncharge()`.

**The `in_evict` flag** prevents recursive eviction — `may_evict` is forced false during the evict callback. This is important since `ttm_bo_alloc_at_place` is now called from `ttm_bo_evict_cb`.

**Review of uncharge paths in `ttm_bo_alloc_resource()`**:
- `-ENOSPC` path: uncharges + puts limit_pool + `continue` ✓
- `-EBUSY` path with evict failure: uncharges + `continue` or `return` ✓  
- Other error path: uncharges + puts limit_pool + `return` ✓
- Success path: charge transferred to resource ✓

This looks correct. The `ttm_resource_alloc()` signature change from `**ret_limit_pool` to `*charge_pool` is clean.

**One concern**: In the evict callback path (`ttm_bo_evict_cb`), `ttm_bo_alloc_at_place` is called with `force_space=false`. Since `charge_pool` is already set (charged earlier), the `if (!alloc_state->charge_pool)` block is skipped, going straight to `ttm_resource_alloc()`. This is correct — we don't want to re-charge. But `may_evict` will be false (both `in_evict` prevents it and `force_space` is false), so allocation failure returns the raw error. Since `lret` in the callback treats `0` as success and negative as "keep trying", this should be fine.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Claude review: drm/ttm: Be more aggressive when allocating below protection limit
  2026-03-13 11:40 ` [PATCH v6 5/6] drm/ttm: Be more aggressive when allocating below protection limit Natalie Vock
  2026-03-13 13:29   ` Tvrtko Ursulin
@ 2026-03-13 21:10   ` Claude Code Review Bot
  1 sibling, 0 replies; 21+ messages in thread
From: Claude Code Review Bot @ 2026-03-13 21:10 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

This is the core behavioral change. When an allocation is protected by min/low and resource allocation fails, TTM now attempts eviction of unprotected buffers.

**Logic flow**:
```c
if (!alloc_state->in_evict) {
    may_evict |= dmem_cgroup_below_min(NULL, alloc_state->charge_pool);
    alloc_state->may_try_low = may_evict;
    may_evict |= dmem_cgroup_below_low(NULL, alloc_state->charge_pool);
}
```

This is subtly clever:
- If below **min**: `may_evict=true`, `may_try_low=true` (can try evicting low-protected BOs too)
- If below **low** (but not min): `may_evict=true`, `may_try_low=false` (only evict unprotected BOs)
- If neither: no change to `may_evict`, no aggressive eviction

The `may_try_low` flag gates the "retry with low" logic in `ttm_bo_evict_alloc()`, preventing low-but-not-min protected allocations from evicting low-protected buffers. This correctly models the min > low priority.

**Comment style**: The block comment at line 2133 starts with `/* If we failed...` — kernel style prefers the opening `/*` on its own line for multi-line comments. Minor style nit.

**The `may_try_low` field naming** could be slightly confusing since setting it to `true` means "yes, we may try evicting low-protected BOs" but the semantics in `ttm_bo_evict_alloc` are "skip the low retry if may_try_low is false". This is consistent but requires careful reading.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Claude review: drm/ttm: Use common ancestor of evictor and evictee as limit pool
  2026-03-13 11:40 ` [PATCH v6 6/6] drm/ttm: Use common ancestor of evictor and evictee as limit pool Natalie Vock
  2026-03-13 14:11   ` Tvrtko Ursulin
  2026-03-13 14:16   ` Michal Koutný
@ 2026-03-13 21:10   ` Claude Code Review Bot
  2 siblings, 0 replies; 21+ messages in thread
From: Claude Code Review Bot @ 2026-03-13 21:10 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

This patch fixes protection calculation for sibling cgroups by using their common ancestor as the limit pool root. Without this, recursive protection propagation makes all children of a protected parent appear protected, preventing a protected sibling from evicting an unprotected sibling.

**The self-eviction guard**:
```c
if (!evict_walk->alloc_state->may_try_low &&
        bo->resource->css == evict_walk->alloc_state->charge_pool)
    return 0;
```
This prevents a protected cgroup from evicting its own buffers during the "aggressive protected" eviction path. The pointer comparison works because `charge_pool` was obtained from `get_current_dmemcs()` and `bo->resource->css` is the pool charged during allocation. However, this is a **pointer comparison** — if two different pool_state objects refer to the same cgroup (e.g., obtained at different times), this could fail to match. Looking at `dmem_cgroup_try_charge`, the pool is obtained via `get_cg_pool_unlocked` which caches pool states per cgroup, so the same cgroup should yield the same pointer. This should be safe.

**The ancestor lookup**:
```c
if (!limit_pool) {
    ancestor = dmem_cgroup_get_common_ancestor(bo->resource->css,
                           evict_walk->alloc_state->charge_pool);
    limit_pool = ancestor;
}
```
When there's no explicit limit_pool (the aggressive protection path), the common ancestor is computed per-eviction-candidate. This is O(depth) per candidate, which seems acceptable for typical cgroup hierarchies. The reference is properly put after the `dmem_cgroup_state_evict_valuable` call.

**Indentation**: The `&&` continuation at line 1852 uses tabs for alignment rather than aligning under the `(`. This may or may not match local style; the existing code in this file seems to use tab-based continuation, which is kernel style.

Overall this is a well-thought-out series. The main actionable items are:
1. The `@b: First pool` → `@b: Second pool` typo in patch 2
2. The potential RCU concern around `cgroup_e_css()` usage in `dmem_cgroup_get_common_ancestor()` in patch 2
3. The "pendant" → "counterpart" wording in patch 1

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2026-03-13 21:10 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-13 11:39 [PATCH v6 0/6] cgroup/dmem,drm/ttm: Improve protection in contended cases Natalie Vock
2026-03-13 11:40 ` [PATCH v6 1/6] cgroup/dmem: Add queries for protection values Natalie Vock
2026-03-13 21:10   ` Claude review: " Claude Code Review Bot
2026-03-13 11:40 ` [PATCH v6 2/6] cgroup,cgroup/dmem: Add (dmem_)cgroup_common_ancestor helper Natalie Vock
2026-03-13 14:16   ` Michal Koutný
2026-03-13 21:10   ` Claude review: " Claude Code Review Bot
2026-03-13 11:40 ` [PATCH v6 3/6] drm/ttm: Extract code for attempting allocation in a place Natalie Vock
2026-03-13 21:10   ` Claude review: " Claude Code Review Bot
2026-03-13 11:40 ` [PATCH v6 4/6] drm/ttm: Split cgroup charge and resource allocation Natalie Vock
2026-03-13 12:53   ` Tvrtko Ursulin
2026-03-13 21:10   ` Claude review: " Claude Code Review Bot
2026-03-13 11:40 ` [PATCH v6 5/6] drm/ttm: Be more aggressive when allocating below protection limit Natalie Vock
2026-03-13 13:29   ` Tvrtko Ursulin
2026-03-13 21:10   ` Claude review: " Claude Code Review Bot
2026-03-13 11:40 ` [PATCH v6 6/6] drm/ttm: Use common ancestor of evictor and evictee as limit pool Natalie Vock
2026-03-13 14:11   ` Tvrtko Ursulin
2026-03-13 14:16   ` Michal Koutný
2026-03-13 21:10   ` Claude review: " Claude Code Review Bot
2026-03-13 21:10 ` Claude review: cgroup/dmem,drm/ttm: Improve protection in contended cases Claude Code Review Bot
  -- strict thread matches above, loose matches on Subject: below --
2026-03-02 12:37 [PATCH v5 0/6] " Natalie Vock
2026-03-02 12:37 ` [PATCH v5 6/6] drm/ttm: Use common ancestor of evictor and evictee as limit pool Natalie Vock
2026-03-03  3:29   ` Claude review: " Claude Code Review Bot
2026-02-25 12:10 [PATCH v4 0/6] cgroup/dmem,drm/ttm: Improve protection in contended cases Natalie Vock
2026-02-25 12:10 ` [PATCH v4 6/6] drm/ttm: Use common ancestor of evictor and evictee as limit pool Natalie Vock
2026-02-27  3:36   ` Claude review: " Claude Code Review Bot

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox