public inbox for drm-ai-reviews@public-inbox.freedesktop.org
 help / color / mirror / Atom feed
* [PATCH v2] drm/ttm: Fix GPU MM stats during pool shrinking
@ 2026-05-02  6:53 Matthew Brost
       [not found] ` <7f927ef8-2d3d-4609-926d-b18535794f72@panix.com>
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Matthew Brost @ 2026-05-02  6:53 UTC (permalink / raw)
  To: intel-xe, dri-devel
  Cc: Kenneth Crudup, Christian Koenig, Huang Rui, Matthew Auld,
	David Airlie

TTM pool shrinking frees pages by calling __free_pages() directly,
which bypasses updates to NR_GPU_ACTIVE and leaves GPU MM accounting
out of sync.

Introduce a helper, __free_pages_gpu_account(), and use it for all page
frees in ttm_pool.c so GPU MM statistics are updated consistently.

Reported-by: Kenneth Crudup <kenny@panix.com>
Fixes: ae80122f3896 ("drm/ttm: use gpu mm stats to track gpu memory allocations. (v4)")
Cc: Christian Koenig <christian.koenig@amd.com>
Cc: Huang Rui <ray.huang@amd.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: David Airlie <airlied@gmail.com>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

---

v2:
 - Replace put_page() with __free_pages_gpu_account() (Kenneth)
---
 drivers/gpu/drm/ttm/ttm_pool.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index 26a3689e5fd9..278bbe7a11ad 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -206,6 +206,14 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
 	return NULL;
 }
 
+static void __free_pages_gpu_account(struct page *p, unsigned int order,
+				     bool reclaim)
+{
+	mod_lruvec_page_state(p, reclaim ? NR_GPU_RECLAIM : NR_GPU_ACTIVE,
+			      -(1 << order));
+	__free_pages(p, order);
+}
+
 /* Reset the caching and pages of size 1 << order */
 static void ttm_pool_free_page(struct ttm_pool *pool, enum ttm_caching caching,
 			       unsigned int order, struct page *p, bool reclaim)
@@ -223,9 +231,7 @@ static void ttm_pool_free_page(struct ttm_pool *pool, enum ttm_caching caching,
 #endif
 
 	if (!pool || !ttm_pool_uses_dma_alloc(pool)) {
-		mod_lruvec_page_state(p, reclaim ? NR_GPU_RECLAIM : NR_GPU_ACTIVE,
-				      -(1 << order));
-		__free_pages(p, order);
+		__free_pages_gpu_account(p, order, reclaim);
 		return;
 	}
 
@@ -606,7 +612,7 @@ static int ttm_pool_restore_commit(struct ttm_pool_tt_restore *restore,
 			 */
 			ttm_pool_split_for_swap(restore->pool, p);
 			copy_highpage(restore->alloced_page + i, p);
-			__free_pages(p, 0);
+			__free_pages_gpu_account(p, 0, false);
 		}
 
 		restore->restored_pages++;
@@ -1068,7 +1074,7 @@ long ttm_pool_backup(struct ttm_pool *pool, struct ttm_tt *tt,
 			if (flags->purge) {
 				shrunken += num_pages;
 				page->private = 0;
-				__free_pages(page, order);
+				__free_pages_gpu_account(page, order, false);
 				memset(tt->pages + i, 0,
 				       num_pages * sizeof(*tt->pages));
 			}
@@ -1109,7 +1115,7 @@ long ttm_pool_backup(struct ttm_pool *pool, struct ttm_tt *tt,
 		}
 		handle = shandle;
 		tt->pages[i] = ttm_backup_handle_to_page_ptr(handle);
-		put_page(page);
+		__free_pages_gpu_account(page, 0, false);
 		shrunken++;
 	}
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH v2] drm/ttm: Fix GPU MM stats during pool shrinking
       [not found] ` <7f927ef8-2d3d-4609-926d-b18535794f72@panix.com>
@ 2026-05-04  3:56   ` Dave Airlie
  2026-05-04  4:09     ` Matthew Brost
  0 siblings, 1 reply; 7+ messages in thread
From: Dave Airlie @ 2026-05-04  3:56 UTC (permalink / raw)
  To: Kenneth Crudup
  Cc: Matthew Brost, intel-xe, dri-devel, Christian Koenig, Huang Rui,
	Matthew Auld

On Sat, 2 May 2026 at 18:14, Kenneth Crudup <kenny@panix.com> wrote:
>
>
>
> On 5/1/26 23:53, Matthew Brost wrote:
> > TTM pool shrinking frees pages by calling __free_pages() directly,
> > which bypasses updates to NR_GPU_ACTIVE and leaves GPU MM accounting
> > out of sync.
> >
> > Introduce a helper, __free_pages_gpu_account(), and use it for all page
> > frees in ttm_pool.c so GPU MM statistics are updated consistently.
> >
> > Reported-by: Kenneth Crudup <kenny@panix.com>
> > Fixes: ae80122f3896 ("drm/ttm: use gpu mm stats to track gpu memory allocations. (v4)")
> > Cc: Christian Koenig <christian.koenig@amd.com>
> > Cc: Huang Rui <ray.huang@amd.com>
> > Cc: Matthew Auld <matthew.auld@intel.com>
> > Cc: David Airlie <airlied@gmail.com>
> > Cc: dri-devel@lists.freedesktop.org
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> >
> > ---
> >
> > v2:

Oops, good catch,

Reviewed-by: Dave Airlie <airlied@redhat.com>

Dave.

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH v2] drm/ttm: Fix GPU MM stats during pool shrinking
  2026-05-04  3:56   ` Dave Airlie
@ 2026-05-04  4:09     ` Matthew Brost
  0 siblings, 0 replies; 7+ messages in thread
From: Matthew Brost @ 2026-05-04  4:09 UTC (permalink / raw)
  To: Dave Airlie
  Cc: Kenneth Crudup, intel-xe, dri-devel, Christian Koenig, Huang Rui,
	Matthew Auld

On Mon, May 04, 2026 at 01:56:25PM +1000, Dave Airlie wrote:
> On Sat, 2 May 2026 at 18:14, Kenneth Crudup <kenny@panix.com> wrote:
> >
> >
> >
> > On 5/1/26 23:53, Matthew Brost wrote:
> > > TTM pool shrinking frees pages by calling __free_pages() directly,
> > > which bypasses updates to NR_GPU_ACTIVE and leaves GPU MM accounting
> > > out of sync.
> > >
> > > Introduce a helper, __free_pages_gpu_account(), and use it for all page
> > > frees in ttm_pool.c so GPU MM statistics are updated consistently.
> > >
> > > Reported-by: Kenneth Crudup <kenny@panix.com>
> > > Fixes: ae80122f3896 ("drm/ttm: use gpu mm stats to track gpu memory allocations. (v4)")
> > > Cc: Christian Koenig <christian.koenig@amd.com>
> > > Cc: Huang Rui <ray.huang@amd.com>
> > > Cc: Matthew Auld <matthew.auld@intel.com>
> > > Cc: David Airlie <airlied@gmail.com>
> > > Cc: dri-devel@lists.freedesktop.org
> > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > >
> > > ---
> > >
> > > v2:
> 
> Oops, good catch,
> 
> Reviewed-by: Dave Airlie <airlied@redhat.com>

Thanks! Merged to drm-misc-fixes.

Matt

> 
> Dave.

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Claude review: drm/ttm: Fix GPU MM stats during pool shrinking
  2026-05-02  6:53 [PATCH v2] drm/ttm: Fix GPU MM stats during pool shrinking Matthew Brost
       [not found] ` <7f927ef8-2d3d-4609-926d-b18535794f72@panix.com>
@ 2026-05-04 22:58 ` Claude Code Review Bot
  2026-05-04 22:58 ` Claude Code Review Bot
  2 siblings, 0 replies; 7+ messages in thread
From: Claude Code Review Bot @ 2026-05-04 22:58 UTC (permalink / raw)
  To: dri-devel-reviews

Overall Series Review

Subject: drm/ttm: Fix GPU MM stats during pool shrinking
Author: Matthew Brost <matthew.brost@intel.com>
Patches: 1
Reviewed: 2026-05-05T08:58:24.455662

---

This is a single patch (v2) fixing a real bug in TTM's GPU memory accounting. When TTM pool backup/purge/restore frees pages via `__free_pages()` or `put_page()` directly, it bypasses the `NR_GPU_ACTIVE` / `NR_GPU_RECLAIM` stat updates that were established in commit ae80122f3896. This causes the GPU MM statistics reported via `/proc/meminfo` (or equivalent) to drift out of sync over time as pages are shrunk/backed up.

The fix is clean and well-scoped: a small helper function is introduced and applied to both the existing inline code in `ttm_pool_free_page()` (refactor) and the three buggy call sites. The approach is correct — all three fixed sites free pages that are in `NR_GPU_ACTIVE` state, and `reclaim=false` is the right choice for each.

**Verdict: Looks good.** One minor observation below, but nothing blocking.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Claude review: drm/ttm: Fix GPU MM stats during pool shrinking
  2026-05-02  6:53 [PATCH v2] drm/ttm: Fix GPU MM stats during pool shrinking Matthew Brost
       [not found] ` <7f927ef8-2d3d-4609-926d-b18535794f72@panix.com>
  2026-05-04 22:58 ` Claude review: " Claude Code Review Bot
@ 2026-05-04 22:58 ` Claude Code Review Bot
  2 siblings, 0 replies; 7+ messages in thread
From: Claude Code Review Bot @ 2026-05-04 22:58 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**The helper function (lines +105..+111):**

```c
static void __free_pages_gpu_account(struct page *p, unsigned int order,
				     bool reclaim)
{
	mod_lruvec_page_state(p, reclaim ? NR_GPU_RECLAIM : NR_GPU_ACTIVE,
			      -(1 << order));
	__free_pages(p, order);
}
```

This correctly mirrors the pattern already used in `ttm_pool_free_page()`. The ordering (stat update before `__free_pages`) matches the existing code and is safe since `mod_lruvec_page_state` only needs the page struct to look up the lruvec, not the page contents.

**Refactor in `ttm_pool_free_page()` (lines -120..+123):**

Clean replacement of the inline `mod_lruvec_page_state` + `__free_pages` with the new helper. No functional change. Good.

**Fix in `ttm_pool_restore_commit()` (line +132):**

```c
__free_pages_gpu_account(p, 0, false);
```

Correct. This is the restore path where old retained pages from `tt->pages[]` are copied and freed. These pages were allocated via `ttm_pool_alloc_page()` which marks them `NR_GPU_ACTIVE` (line 175 of current tree). They were never moved to the reclaim pool, so `reclaim=false` is right. The order is 0 because `ttm_pool_split_for_swap()` was called just above.

**Fix in `ttm_pool_backup()` purge path (line +141):**

```c
__free_pages_gpu_account(page, order, false);
```

Correct. In the purge path, pages from `tt->pages[]` are freed directly at their original order (no split). These are `NR_GPU_ACTIVE` pages. `reclaim=false` is correct. The `page->private = 0` reset on the line above clears the order stored in private, and the helper reads the order from the parameter rather than from `page->private`, so ordering is fine.

**Fix in `ttm_pool_backup()` swap path (line +150):**

```c
__free_pages_gpu_account(page, 0, false);
```

Replacing `put_page(page)` — this is the v2 change per Kenneth's feedback. After `ttm_pool_split_for_swap()`, these are individual order-0 non-compound pages, so `__free_pages(p, 0)` is functionally equivalent to `put_page(page)` here (both do `put_page_testzero` + free). `reclaim=false` is correct; these pages are `NR_GPU_ACTIVE`.

**Minor observation:** The `reclaim` parameter is always `false` at all three new call sites. The only caller that passes `true` is the existing `ttm_pool_free_page()` code path. This is expected — `ttm_pool_backup()` returns `-EBUSY` for DMA alloc pools, and pages in `tt->pages[]` are never in the reclaim pool (they'd need to go through `ttm_pool_type_add()` for that). So the `reclaim` parameter is justified by the `ttm_pool_free_page()` usage, but worth noting that the *new* sites are all `NR_GPU_ACTIVE` decrements.

**No issues found.** The patch is a correct and minimal fix for a real accounting bug. The Fixes tag correctly references the commit that introduced the GPU MM stats.

Reviewed-by: Dave Airlie <airlied@gmail.com>

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Claude review: drm/ttm: Fix GPU MM stats during pool shrinking
  2026-05-01 22:30 [PATCH] " Matthew Brost
  2026-05-04 23:02 ` Claude review: " Claude Code Review Bot
@ 2026-05-04 23:02 ` Claude Code Review Bot
  1 sibling, 0 replies; 7+ messages in thread
From: Claude Code Review Bot @ 2026-05-04 23:02 UTC (permalink / raw)
  To: dri-devel-reviews

Overall Series Review

Subject: drm/ttm: Fix GPU MM stats during pool shrinking
Author: Matthew Brost <matthew.brost@intel.com>
Patches: 1
Reviewed: 2026-05-05T09:02:45.099837

---

This is a single-patch fix for a GPU memory accounting bug in TTM pool management. The issue is real and the fix is correct: two call sites in `ttm_pool.c` were calling `__free_pages()` directly without decrementing `NR_GPU_ACTIVE`, causing GPU MM stats to drift upward over time as pages are freed through the backup/restore and purge paths. The approach of extracting a helper (`__free_pages_gpu_account()`) is clean and ensures consistency across all three `__free_pages` call sites in the file.

The patch is minimal, well-scoped, and the Fixes tag correctly identifies the commit that introduced the GPU MM accounting.

**Recommendation: Approve.** One minor question worth confirming below, but it shouldn't block the patch.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Claude review: drm/ttm: Fix GPU MM stats during pool shrinking
  2026-05-01 22:30 [PATCH] " Matthew Brost
@ 2026-05-04 23:02 ` Claude Code Review Bot
  2026-05-04 23:02 ` Claude Code Review Bot
  1 sibling, 0 replies; 7+ messages in thread
From: Claude Code Review Bot @ 2026-05-04 23:02 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**Correctness: Good.**

The patch introduces:
```c
static void __free_pages_gpu_account(struct page *p, unsigned int order,
				     bool reclaim)
{
	mod_lruvec_page_state(p, reclaim ? NR_GPU_RECLAIM : NR_GPU_ACTIVE,
			      -(1 << order));
	__free_pages(p, order);
}
```

And applies it to all three `__free_pages` call sites:

1. **`ttm_pool_free_page()` (line 225)** -- Pure refactor, replaces identical inline code. Already guarded by `if (!pool || !ttm_pool_uses_dma_alloc(pool))`, so it only runs for non-DMA pages that were accounted at allocation. Correct.

2. **`ttm_pool_restore_commit()` (line 609)** -- This frees an old retained page after splitting and copying to a new allocation. These pages were previously allocated through `ttm_pool_alloc_page()` which accounts them as `NR_GPU_ACTIVE` (line 175 in the current tree). Passing `reclaim=false` correctly decrements `NR_GPU_ACTIVE`. This was a real leak.

3. **`ttm_pool_backup()` (line 1071)** -- The purge path frees pages directly from `tt->pages`. These are actively-used pages counted in `NR_GPU_ACTIVE` (either from initial allocation or from `ttm_pool_type_take()` which moves pages back to ACTIVE state). Passing `reclaim=false` is correct. This was the main leak reported by the user.

**Coverage check:** I verified that these are the only three `__free_pages` call sites in `ttm_pool.c`:
- Line 228: handled (refactored)
- Line 609: handled (bug fix)
- Line 1071: handled (bug fix)

The shrinker dispose path (`ttm_pool_dispose_list` -> `ttm_pool_free_page` with `reclaim=true`) was already correct since it went through `ttm_pool_free_page()`.

**One question worth confirming:** At lines 609 and 1071, the `__free_pages()` call is not guarded by a `!ttm_pool_uses_dma_alloc()` check (unlike `ttm_pool_free_page` which has that guard). If pages from a DMA-alloc pool could reach these paths, calling `__free_pages_gpu_account()` would undercount `NR_GPU_ACTIVE` (since DMA-alloc pages never increment it at allocation). However, this would also mean the pre-existing `__free_pages()` call was itself wrong for DMA pages (leaking the DMA allocation), so this patch doesn't make it worse -- it's a separate pre-existing concern. It's worth confirming that these paths are indeed only reachable for non-DMA pools, or if not, whether there's a separate fix needed for the DMA-alloc case.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2026-05-04 23:02 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-05-02  6:53 [PATCH v2] drm/ttm: Fix GPU MM stats during pool shrinking Matthew Brost
     [not found] ` <7f927ef8-2d3d-4609-926d-b18535794f72@panix.com>
2026-05-04  3:56   ` Dave Airlie
2026-05-04  4:09     ` Matthew Brost
2026-05-04 22:58 ` Claude review: " Claude Code Review Bot
2026-05-04 22:58 ` Claude Code Review Bot
  -- strict thread matches above, loose matches on Subject: below --
2026-05-01 22:30 [PATCH] " Matthew Brost
2026-05-04 23:02 ` Claude review: " Claude Code Review Bot
2026-05-04 23:02 ` Claude Code Review Bot

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox