* [PATCH 0/3] drm/ttm, drm/xe: Avoid reclaim/eviction loops under fragmentation
@ 2026-04-21 1:26 Matthew Brost
2026-04-21 1:26 ` [PATCH 1/3] drm/ttm: Issue direct reclaim at beneficial_order Matthew Brost
` (3 more replies)
0 siblings, 4 replies; 18+ messages in thread
From: Matthew Brost @ 2026-04-21 1:26 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: Thomas Hellström, Carlos Santa, Christian Koenig, Huang Rui,
Matthew Auld, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
David Airlie, Simona Vetter, Daniel Colascione
TTM allocations at higher orders can drive Xe into a pathological
reclaim loop when memory is fragmented:
kswapd → shrinker → eviction → rebind (exec ioctl) → repeat
In this state, reclaim is triggered despite substantial free memory,
but fails to produce contiguous higher-order pages. The Xe shrinker then
evicts active buffer objects, increasing faulting and rebind activity
and further feeding the loop. The result is high CPU overhead and poor
GPU forward progress.
This issue was first reported in [1] and independently observed
internally and by Google.
A simple reproducer is:
- Boot an iGPU system with mem=8G
- Launch 10 Chrome tabs running the WebGL aquarium demo
- Configure each tab with ~5k fish
Under this workload, ftrace shows a continuous loop of:
xe_shrinker_scan (kswapd)
xe_vma_rebind_exec
Performance degrades significantly, with each tab dropping to ~2 FPS on
PTL.
At the same time, /proc/buddyinfo shows substantial free memory but no
higher-order availability. For example, the Normal zone:
Count: 4063 4595 3455 3400 3139 2762 2293 1655 643 0 0
This corresponds to ~2.8GB free memory, but no order-9 (2MB) blocks,
indicating severe fragmentation.
This series addresses the issue in two ways:
TTM: Restrict direct reclaim to beneficial_order. Larger allocations
use __GFP_NORETRY to fail quickly rather than triggering reclaim.
Xe: Introduce a heuristic in the shrinker to avoid eviction when
running under kswapd and the system appears memory-rich but
fragmented.
With these changes, the reclaim/eviction loop is eliminated. The same
workload improves to ~10 FPS per tab, and kswapd activity subsides.
Buddyinfo after applying this series shows restored higher-order
availability:
Count: 8526 7067 3092 1959 1292 660 194 28 20 13 1
Matt
[1] https://patchwork.freedesktop.org/patch/716404/?series=164353&rev=1
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Carlos Santa <carlos.santa@intel.com>
Cc: Christian Koenig <christian.koenig@amd.com>
Cc: Huang Rui <ray.huang@amd.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: David Airlie <airlied@gmail.com>
Cc: Simona Vetter <simona@ffwll.ch>
CC: dri-devel@lists.freedesktop.org
Cc: Daniel Colascione <dancol@dancol.org>
Matthew Brost (3):
drm/ttm: Issue direct reclaim at beneficial_order
drm/xe: Set TTM device beneficial_order to 9 (2M)
drm/xe: Avoid shrinker reclaim from kswapd under fragmentation
drivers/gpu/drm/ttm/ttm_pool.c | 4 ++--
drivers/gpu/drm/xe/xe_device.c | 3 ++-
drivers/gpu/drm/xe/xe_shrinker.c | 13 +++++++++++++
3 files changed, 17 insertions(+), 3 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 18+ messages in thread

* [PATCH 1/3] drm/ttm: Issue direct reclaim at beneficial_order
2026-04-21 1:26 [PATCH 0/3] drm/ttm, drm/xe: Avoid reclaim/eviction loops under fragmentation Matthew Brost
@ 2026-04-21 1:26 ` Matthew Brost
2026-04-21 6:11 ` Christian König
` (2 more replies)
2026-04-21 1:26 ` [PATCH 2/3] drm/xe: Set TTM device beneficial_order to 9 (2M) Matthew Brost
` (2 subsequent siblings)
3 siblings, 3 replies; 18+ messages in thread
From: Matthew Brost @ 2026-04-21 1:26 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: Thomas Hellström, Carlos Santa, Christian Koenig, Huang Rui,
Matthew Auld, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
David Airlie, Simona Vetter, Daniel Colascione
Triggering kswap at an order higher than beneficial_order makes little
sense, as the driver has already indicated the optimal order at which
reclaim is effective. Similarly, issuing direct reclaim or triggering
kswap at a lower order than beneficial_order is ineffective, since the
driver does not benefit from reclaiming lower-order pages.
As a result, direct reclaim should only be issued with __GFP_NORETRY at
exactly beneficial_order, or as a fallback, direct reclaim without
__GFP_NORETRY at order 0 when failure is not an option.
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Carlos Santa <carlos.santa@intel.com>
Cc: Christian Koenig <christian.koenig@amd.com>
Cc: Huang Rui <ray.huang@amd.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: David Airlie <airlied@gmail.com>
Cc: Simona Vetter <simona@ffwll.ch>
CC: dri-devel@lists.freedesktop.org
Cc: Daniel Colascione <dancol@dancol.org>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/ttm/ttm_pool.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index 26a3689e5fd9..8425dbcc6c68 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -165,8 +165,8 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
* Do not add latency to the allocation path for allocations orders
* device tolds us do not bring them additional performance gains.
*/
- if (beneficial_order && order > beneficial_order)
- gfp_flags &= ~__GFP_DIRECT_RECLAIM;
+ if (order && beneficial_order && order != beneficial_order)
+ gfp_flags &= ~__GFP_RECLAIM;
if (!ttm_pool_uses_dma_alloc(pool)) {
p = alloc_pages_node(pool->nid, gfp_flags, order);
--
2.34.1
^ permalink raw reply related [flat|nested] 18+ messages in thread

* Re: [PATCH 1/3] drm/ttm: Issue direct reclaim at beneficial_order
2026-04-21 1:26 ` [PATCH 1/3] drm/ttm: Issue direct reclaim at beneficial_order Matthew Brost
@ 2026-04-21 6:11 ` Christian König
2026-04-22 4:12 ` Matthew Brost
2026-04-22 7:32 ` Tvrtko Ursulin
2026-04-22 23:01 ` Claude review: " Claude Code Review Bot
2 siblings, 1 reply; 18+ messages in thread
From: Christian König @ 2026-04-21 6:11 UTC (permalink / raw)
To: Matthew Brost, intel-xe, dri-devel
Cc: Thomas Hellström, Carlos Santa, Huang Rui, Matthew Auld,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Daniel Colascione
On 4/21/26 03:26, Matthew Brost wrote:
> Triggering kswap at an order higher than beneficial_order makes little
> sense, as the driver has already indicated the optimal order at which
> reclaim is effective. Similarly, issuing direct reclaim or triggering
> kswap at a lower order than beneficial_order is ineffective, since the
> driver does not benefit from reclaiming lower-order pages.
>
> As a result, direct reclaim should only be issued with __GFP_NORETRY at
> exactly beneficial_order, or as a fallback, direct reclaim without
> __GFP_NORETRY at order 0 when failure is not an option.
>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Carlos Santa <carlos.santa@intel.com>
> Cc: Christian Koenig <christian.koenig@amd.com>
> Cc: Huang Rui <ray.huang@amd.com>
> Cc: Matthew Auld <matthew.auld@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Cc: Maxime Ripard <mripard@kernel.org>
> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> Cc: David Airlie <airlied@gmail.com>
> Cc: Simona Vetter <simona@ffwll.ch>
> CC: dri-devel@lists.freedesktop.org
> Cc: Daniel Colascione <dancol@dancol.org>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
> ---
> drivers/gpu/drm/ttm/ttm_pool.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
> index 26a3689e5fd9..8425dbcc6c68 100644
> --- a/drivers/gpu/drm/ttm/ttm_pool.c
> +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> @@ -165,8 +165,8 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
> * Do not add latency to the allocation path for allocations orders
> * device tolds us do not bring them additional performance gains.
> */
> - if (beneficial_order && order > beneficial_order)
> - gfp_flags &= ~__GFP_DIRECT_RECLAIM;
> + if (order && beneficial_order && order != beneficial_order)
> + gfp_flags &= ~__GFP_RECLAIM;
>
> if (!ttm_pool_uses_dma_alloc(pool)) {
> p = alloc_pages_node(pool->nid, gfp_flags, order);
^ permalink raw reply [flat|nested] 18+ messages in thread

* Re: [PATCH 1/3] drm/ttm: Issue direct reclaim at beneficial_order
2026-04-21 6:11 ` Christian König
@ 2026-04-22 4:12 ` Matthew Brost
2026-04-22 6:41 ` Christian König
0 siblings, 1 reply; 18+ messages in thread
From: Matthew Brost @ 2026-04-22 4:12 UTC (permalink / raw)
To: Christian König
Cc: intel-xe, dri-devel, Thomas Hellström, Carlos Santa,
Huang Rui, Matthew Auld, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Simona Vetter, Daniel Colascione
On Tue, Apr 21, 2026 at 08:11:17AM +0200, Christian König wrote:
> On 4/21/26 03:26, Matthew Brost wrote:
> > Triggering kswap at an order higher than beneficial_order makes little
> > sense, as the driver has already indicated the optimal order at which
> > reclaim is effective. Similarly, issuing direct reclaim or triggering
> > kswap at a lower order than beneficial_order is ineffective, since the
> > driver does not benefit from reclaiming lower-order pages.
> >
> > As a result, direct reclaim should only be issued with __GFP_NORETRY at
> > exactly beneficial_order, or as a fallback, direct reclaim without
> > __GFP_NORETRY at order 0 when failure is not an option.
> >
> > Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > Cc: Carlos Santa <carlos.santa@intel.com>
> > Cc: Christian Koenig <christian.koenig@amd.com>
> > Cc: Huang Rui <ray.huang@amd.com>
> > Cc: Matthew Auld <matthew.auld@intel.com>
> > Cc: Matthew Brost <matthew.brost@intel.com>
> > Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> > Cc: Maxime Ripard <mripard@kernel.org>
> > Cc: Thomas Zimmermann <tzimmermann@suse.de>
> > Cc: David Airlie <airlied@gmail.com>
> > Cc: Simona Vetter <simona@ffwll.ch>
> > CC: dri-devel@lists.freedesktop.org
> > Cc: Daniel Colascione <dancol@dancol.org>
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>
> Reviewed-by: Christian König <christian.koenig@amd.com>
>
Thanks! I'm going to merge this patch independently to drm-misc-next
unless you object - the Xe-side shrinker heuristics will take a bit
longer to land on an agreed-upon design.
Matt
> > ---
> > drivers/gpu/drm/ttm/ttm_pool.c | 4 ++--
> > 1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
> > index 26a3689e5fd9..8425dbcc6c68 100644
> > --- a/drivers/gpu/drm/ttm/ttm_pool.c
> > +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> > @@ -165,8 +165,8 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
> > * Do not add latency to the allocation path for allocations orders
> > * device tolds us do not bring them additional performance gains.
> > */
> > - if (beneficial_order && order > beneficial_order)
> > - gfp_flags &= ~__GFP_DIRECT_RECLAIM;
> > + if (order && beneficial_order && order != beneficial_order)
> > + gfp_flags &= ~__GFP_RECLAIM;
> >
> > if (!ttm_pool_uses_dma_alloc(pool)) {
> > p = alloc_pages_node(pool->nid, gfp_flags, order);
>
^ permalink raw reply [flat|nested] 18+ messages in thread

* Re: [PATCH 1/3] drm/ttm: Issue direct reclaim at beneficial_order
2026-04-22 4:12 ` Matthew Brost
@ 2026-04-22 6:41 ` Christian König
0 siblings, 0 replies; 18+ messages in thread
From: Christian König @ 2026-04-22 6:41 UTC (permalink / raw)
To: Matthew Brost
Cc: intel-xe, dri-devel, Thomas Hellström, Carlos Santa,
Huang Rui, Matthew Auld, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Simona Vetter, Daniel Colascione
On 4/22/26 06:12, Matthew Brost wrote:
> On Tue, Apr 21, 2026 at 08:11:17AM +0200, Christian König wrote:
>> On 4/21/26 03:26, Matthew Brost wrote:
>>> Triggering kswap at an order higher than beneficial_order makes little
>>> sense, as the driver has already indicated the optimal order at which
>>> reclaim is effective. Similarly, issuing direct reclaim or triggering
>>> kswap at a lower order than beneficial_order is ineffective, since the
>>> driver does not benefit from reclaiming lower-order pages.
>>>
>>> As a result, direct reclaim should only be issued with __GFP_NORETRY at
>>> exactly beneficial_order, or as a fallback, direct reclaim without
>>> __GFP_NORETRY at order 0 when failure is not an option.
>>>
>>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>> Cc: Carlos Santa <carlos.santa@intel.com>
>>> Cc: Christian Koenig <christian.koenig@amd.com>
>>> Cc: Huang Rui <ray.huang@amd.com>
>>> Cc: Matthew Auld <matthew.auld@intel.com>
>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
>>> Cc: Maxime Ripard <mripard@kernel.org>
>>> Cc: Thomas Zimmermann <tzimmermann@suse.de>
>>> Cc: David Airlie <airlied@gmail.com>
>>> Cc: Simona Vetter <simona@ffwll.ch>
>>> CC: dri-devel@lists.freedesktop.org
>>> Cc: Daniel Colascione <dancol@dancol.org>
>>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>>
>> Reviewed-by: Christian König <christian.koenig@amd.com>
>>
>
> Thanks! I'm going to merge this patch to independently to drm-misc-next
> unless you object - the Xe side heuristics of the shrinker will take a
> bit longer to land on an agreed upon design.
Yeah, feel free to push it upstream through the XE tree; the two-liner is probably small enough that it won't cause conflicts.
Christian
>
> Matt
>
>>> ---
>>> drivers/gpu/drm/ttm/ttm_pool.c | 4 ++--
>>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
>>> index 26a3689e5fd9..8425dbcc6c68 100644
>>> --- a/drivers/gpu/drm/ttm/ttm_pool.c
>>> +++ b/drivers/gpu/drm/ttm/ttm_pool.c
>>> @@ -165,8 +165,8 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
>>> * Do not add latency to the allocation path for allocations orders
>>> * device tolds us do not bring them additional performance gains.
>>> */
>>> - if (beneficial_order && order > beneficial_order)
>>> - gfp_flags &= ~__GFP_DIRECT_RECLAIM;
>>> + if (order && beneficial_order && order != beneficial_order)
>>> + gfp_flags &= ~__GFP_RECLAIM;
>>>
>>> if (!ttm_pool_uses_dma_alloc(pool)) {
>>> p = alloc_pages_node(pool->nid, gfp_flags, order);
>>
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 1/3] drm/ttm: Issue direct reclaim at beneficial_order
2026-04-21 1:26 ` [PATCH 1/3] drm/ttm: Issue direct reclaim at beneficial_order Matthew Brost
2026-04-21 6:11 ` Christian König
@ 2026-04-22 7:32 ` Tvrtko Ursulin
2026-04-22 7:41 ` Christian König
2026-04-22 23:01 ` Claude review: " Claude Code Review Bot
2 siblings, 1 reply; 18+ messages in thread
From: Tvrtko Ursulin @ 2026-04-22 7:32 UTC (permalink / raw)
To: Matthew Brost, intel-xe, dri-devel
Cc: Thomas Hellström, Carlos Santa, Christian Koenig, Huang Rui,
Matthew Auld, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
David Airlie, Simona Vetter, Daniel Colascione
On 21/04/2026 02:26, Matthew Brost wrote:
> Triggering kswap at an order higher than beneficial_order makes little
> sense, as the driver has already indicated the optimal order at which
> reclaim is effective. Similarly, issuing direct reclaim or triggering
> kswap at a lower order than beneficial_order is ineffective, since the
> driver does not benefit from reclaiming lower-order pages.
>
> As a result, direct reclaim should only be issued with __GFP_NORETRY at
> exactly beneficial_order, or as a fallback, direct reclaim without
> __GFP_NORETRY at order 0 when failure is not an option.
>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Carlos Santa <carlos.santa@intel.com>
> Cc: Christian Koenig <christian.koenig@amd.com>
> Cc: Huang Rui <ray.huang@amd.com>
> Cc: Matthew Auld <matthew.auld@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Cc: Maxime Ripard <mripard@kernel.org>
> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> Cc: David Airlie <airlied@gmail.com>
> Cc: Simona Vetter <simona@ffwll.ch>
> CC: dri-devel@lists.freedesktop.org
> Cc: Daniel Colascione <dancol@dancol.org>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/ttm/ttm_pool.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
> index 26a3689e5fd9..8425dbcc6c68 100644
> --- a/drivers/gpu/drm/ttm/ttm_pool.c
> +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> @@ -165,8 +165,8 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
> * Do not add latency to the allocation path for allocations orders
> * device tolds us do not bring them additional performance gains.
> */
> - if (beneficial_order && order > beneficial_order)
> - gfp_flags &= ~__GFP_DIRECT_RECLAIM;
> + if (order && beneficial_order && order != beneficial_order)
> + gfp_flags &= ~__GFP_RECLAIM;
>
> if (!ttm_pool_uses_dma_alloc(pool)) {
> p = alloc_pages_node(pool->nid, gfp_flags, order);
I missed this conversation so I don't know if this was discussed -
is having fewer 64k pages not a concern? I mean slightly higher TLB
pressure etc. on hardware which supports this PTE size.
Also, does clearing __GFP_RECLAIM disable compaction completely and is
that wanted?
Regards,
Tvrtko
^ permalink raw reply [flat|nested] 18+ messages in thread

* Re: [PATCH 1/3] drm/ttm: Issue direct reclaim at beneficial_order
2026-04-22 7:32 ` Tvrtko Ursulin
@ 2026-04-22 7:41 ` Christian König
2026-04-22 20:41 ` Matthew Brost
0 siblings, 1 reply; 18+ messages in thread
From: Christian König @ 2026-04-22 7:41 UTC (permalink / raw)
To: Tvrtko Ursulin, Matthew Brost, intel-xe, dri-devel
Cc: Thomas Hellström, Carlos Santa, Huang Rui, Matthew Auld,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Daniel Colascione
On 4/22/26 09:32, Tvrtko Ursulin wrote:
>
> On 21/04/2026 02:26, Matthew Brost wrote:
>> Triggering kswap at an order higher than beneficial_order makes little
>> sense, as the driver has already indicated the optimal order at which
>> reclaim is effective. Similarly, issuing direct reclaim or triggering
>> kswap at a lower order than beneficial_order is ineffective, since the
>> driver does not benefit from reclaiming lower-order pages.
>>
>> As a result, direct reclaim should only be issued with __GFP_NORETRY at
>> exactly beneficial_order, or as a fallback, direct reclaim without
>> __GFP_NORETRY at order 0 when failure is not an option.
>>
>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> Cc: Carlos Santa <carlos.santa@intel.com>
>> Cc: Christian Koenig <christian.koenig@amd.com>
>> Cc: Huang Rui <ray.huang@amd.com>
>> Cc: Matthew Auld <matthew.auld@intel.com>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
>> Cc: Maxime Ripard <mripard@kernel.org>
>> Cc: Thomas Zimmermann <tzimmermann@suse.de>
>> Cc: David Airlie <airlied@gmail.com>
>> Cc: Simona Vetter <simona@ffwll.ch>
>> CC: dri-devel@lists.freedesktop.org
>> Cc: Daniel Colascione <dancol@dancol.org>
>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>> ---
>> drivers/gpu/drm/ttm/ttm_pool.c | 4 ++--
>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
>> index 26a3689e5fd9..8425dbcc6c68 100644
>> --- a/drivers/gpu/drm/ttm/ttm_pool.c
>> +++ b/drivers/gpu/drm/ttm/ttm_pool.c
>> @@ -165,8 +165,8 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
>> * Do not add latency to the allocation path for allocations orders
>> * device tolds us do not bring them additional performance gains.
>> */
>> - if (beneficial_order && order > beneficial_order)
>> - gfp_flags &= ~__GFP_DIRECT_RECLAIM;
>> + if (order && beneficial_order && order != beneficial_order)
>> + gfp_flags &= ~__GFP_RECLAIM;
>> if (!ttm_pool_uses_dma_alloc(pool)) {
>> p = alloc_pages_node(pool->nid, gfp_flags, order);
>
> I missed this conversation so don't know if this was discussed -
> having less of 64k pages is not a concern? I mean slightly higher TLB pressure etc on hardware which supports this PTE size.
At least for AMD GPUs 64k doesn't matter at all.
There was a large push from the Windows side to use that size, but we have more than enough evidence to prove that this size is actually completely nonsense for almost all use cases.
I have no idea how we ended up with that in the first place.
It could be that there is still HW out there which can only handle that size, but in that case such HW should just set beneficial_order to 64k.
> Also, does clearing __GFP_RECLAIM disable compaction completely and is that wanted?
Oh good point, most likely not.
Regards,
Christian.
>
> Regards,
>
> Tvrtko
>
^ permalink raw reply [flat|nested] 18+ messages in thread

* Re: [PATCH 1/3] drm/ttm: Issue direct reclaim at beneficial_order
2026-04-22 7:41 ` Christian König
@ 2026-04-22 20:41 ` Matthew Brost
0 siblings, 0 replies; 18+ messages in thread
From: Matthew Brost @ 2026-04-22 20:41 UTC (permalink / raw)
To: Christian König
Cc: Tvrtko Ursulin, intel-xe, dri-devel, Thomas Hellström,
Carlos Santa, Huang Rui, Matthew Auld, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
Daniel Colascione
On Wed, Apr 22, 2026 at 09:41:54AM +0200, Christian König wrote:
> On 4/22/26 09:32, Tvrtko Ursulin wrote:
> >
> > On 21/04/2026 02:26, Matthew Brost wrote:
> >> Triggering kswap at an order higher than beneficial_order makes little
> >> sense, as the driver has already indicated the optimal order at which
> >> reclaim is effective. Similarly, issuing direct reclaim or triggering
> >> kswap at a lower order than beneficial_order is ineffective, since the
> >> driver does not benefit from reclaiming lower-order pages.
> >>
> >> As a result, direct reclaim should only be issued with __GFP_NORETRY at
> >> exactly beneficial_order, or as a fallback, direct reclaim without
> >> __GFP_NORETRY at order 0 when failure is not an option.
> >>
> >> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> >> Cc: Carlos Santa <carlos.santa@intel.com>
> >> Cc: Christian Koenig <christian.koenig@amd.com>
> >> Cc: Huang Rui <ray.huang@amd.com>
> >> Cc: Matthew Auld <matthew.auld@intel.com>
> >> Cc: Matthew Brost <matthew.brost@intel.com>
> >> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> >> Cc: Maxime Ripard <mripard@kernel.org>
> >> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> >> Cc: David Airlie <airlied@gmail.com>
> >> Cc: Simona Vetter <simona@ffwll.ch>
> >> CC: dri-devel@lists.freedesktop.org
> >> Cc: Daniel Colascione <dancol@dancol.org>
> >> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> >> ---
> >> drivers/gpu/drm/ttm/ttm_pool.c | 4 ++--
> >> 1 file changed, 2 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
> >> index 26a3689e5fd9..8425dbcc6c68 100644
> >> --- a/drivers/gpu/drm/ttm/ttm_pool.c
> >> +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> >> @@ -165,8 +165,8 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
> >> * Do not add latency to the allocation path for allocations orders
> >> * device tolds us do not bring them additional performance gains.
> >> */
> >> - if (beneficial_order && order > beneficial_order)
> >> - gfp_flags &= ~__GFP_DIRECT_RECLAIM;
> >> + if (order && beneficial_order && order != beneficial_order)
> >> + gfp_flags &= ~__GFP_RECLAIM;
> >> if (!ttm_pool_uses_dma_alloc(pool)) {
> >> p = alloc_pages_node(pool->nid, gfp_flags, order);
> >
> > I missed this conversation so don't know if this was discussed -
I meant to CC you here, but missed including you.
> > having less of 64k pages is not a concern? I mean slightly higher TLB pressure etc on hardware which supports this PTE size.
>
> At least for AMD GPUs 64k doesn't matter at all.
>
Same on Intel GPUs for system memory mappings - it is either 4k or 2M
GPU pages. VRAM can use 64k pages but that isn't involved here.
> There was a large push from the Windows side to use that size, but we have more than enough evidence to prove that this size is actually completely nonsense for almost all use cases.
>
> I have no idea how we ended up with that in the first place.
>
> It could be that there is still HW out there which can only handle that size, but in that case such HW should just set beneficial_order to 64k.
>
Or we could move to a table-based config if we find drivers with
multiple beneficial_order values.
> > Also, does clearing __GFP_RECLAIM disable compaction completely and is that wanted?
>
> Oh good point, most likely not.
>
Without completely reverse engineering the core MM, I'm not sure here.
I just read the kernel doc for __GFP_KSWAPD_RECLAIM [1] and it seems
to indicate that if this flag is clear, compaction won't be entered.
Matt
[1] https://elixir.bootlin.com/linux/v7.0/source/include/linux/gfp_types.h#L198
> Regards,
> Christian.
>
> >
> > Regards,
> >
> > Tvrtko
> >
>
^ permalink raw reply [flat|nested] 18+ messages in thread
* Claude review: drm/ttm: Issue direct reclaim at beneficial_order
2026-04-21 1:26 ` [PATCH 1/3] drm/ttm: Issue direct reclaim at beneficial_order Matthew Brost
2026-04-21 6:11 ` Christian König
2026-04-22 7:32 ` Tvrtko Ursulin
@ 2026-04-22 23:01 ` Claude Code Review Bot
2 siblings, 0 replies; 18+ messages in thread
From: Claude Code Review Bot @ 2026-04-22 23:01 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
This patch changes the reclaim behavior in `ttm_pool_alloc_page()` for non-beneficial orders.
**The change:**
```c
- if (beneficial_order && order > beneficial_order)
- gfp_flags &= ~__GFP_DIRECT_RECLAIM;
+ if (order && beneficial_order && order != beneficial_order)
+ gfp_flags &= ~__GFP_RECLAIM;
```
There are two distinct changes here that should be clearly called out:
**1. Condition broadened from `order > beneficial_order` to `order != beneficial_order`:**
The old code only disabled reclaim for orders *above* beneficial_order. The new code also disables reclaim for orders *below* beneficial_order (e.g., orders 1-8 when beneficial_order is 9). The rationale in the commit message — "issuing direct reclaim or triggering kswap at a lower order than beneficial_order is ineffective, since the driver does not benefit from reclaiming lower-order pages" — is correct but understates the significance. This is a meaningful behavioral change for the entire fallback path.
Looking at the allocation loop in `__ttm_pool_alloc()` (ttm_pool.c:786-829), when order-9 allocation fails, the allocator falls back through orders 8, 7, ... 1, 0. Under the old code, these fallback orders would still trigger reclaim. Under the new code, only order 0 retains reclaim. This is correct in the fragmentation scenario (plenty of free pages, just not contiguous), but worth verifying that it doesn't hurt the genuine-memory-pressure scenario where even order-0 allocations are tight.
In the genuine pressure case: order 9 with `__GFP_NORETRY` would attempt limited reclaim and fail, orders 8-1 would fail immediately (no reclaim), and order 0 would succeed with full reclaim (no `__GFP_NORETRY`, full `__GFP_RECLAIM`). This seems acceptable since order-0 is the ultimate fallback and handles true pressure.
**2. Flag change from `~__GFP_DIRECT_RECLAIM` to `~__GFP_RECLAIM`:**
`__GFP_RECLAIM` is `__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM` (gfp_types.h:259). The old code only disabled synchronous direct reclaim but still allowed waking kswapd. The new code disables *both*. This is more aggressive — non-beneficial-order allocations won't even wake kswapd. The commit message says "direct reclaim should only be issued with __GFP_NORETRY at exactly beneficial_order" but doesn't explicitly mention that kswapd wakeup is also now suppressed for all non-beneficial orders.
**Suggestion:** The commit message should explicitly call out both changes: (a) reclaim is now disabled for orders *below* beneficial_order (not just above), and (b) kswapd wakeup is also suppressed (not just direct reclaim). These are individually defensible but together represent a substantially more aggressive policy than the old code.
**Minor nit:** The existing comment above this code says "device tolds us" — pre-existing typo, but if you're touching adjacent lines, consider fixing to "device told us".
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH 2/3] drm/xe: Set TTM device beneficial_order to 9 (2M)
2026-04-21 1:26 [PATCH 0/3] drm/ttm, drm/xe: Avoid reclaim/eviction loops under fragmentation Matthew Brost
2026-04-21 1:26 ` [PATCH 1/3] drm/ttm: Issue direct reclaim at beneficial_order Matthew Brost
@ 2026-04-21 1:26 ` Matthew Brost
2026-04-22 23:01 ` Claude review: " Claude Code Review Bot
2026-04-21 1:26 ` [PATCH 3/3] drm/xe: Avoid shrinker reclaim from kswapd under fragmentation Matthew Brost
2026-04-22 23:01 ` Claude review: drm/ttm, drm/xe: Avoid reclaim/eviction loops " Claude Code Review Bot
3 siblings, 1 reply; 18+ messages in thread
From: Matthew Brost @ 2026-04-21 1:26 UTC (permalink / raw)
To: intel-xe, dri-devel; +Cc: Thomas Hellström, Carlos Santa, Matthew Auld
Set the TTM device beneficial_order to 9 (2M), which is the sweet
spot for Xe when attempting reclaim on system memory BOs, as it matches
the large GPU page size. This ensures reclaim is attempted at the most
effective order for the driver.
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Carlos Santa <carlos.santa@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_device.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 4b45b617a039..3f719ab08d1c 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -500,7 +500,8 @@ struct xe_device *xe_device_create(struct pci_dev *pdev,
err = ttm_device_init(&xe->ttm, &xe_ttm_funcs, xe->drm.dev,
xe->drm.anon_inode->i_mapping,
- xe->drm.vma_offset_manager, 0);
+ xe->drm.vma_offset_manager,
+ TTM_ALLOCATION_POOL_BENEFICIAL_ORDER(get_order(SZ_2M)));
if (WARN_ON(err))
return ERR_PTR(err);
--
2.34.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* Claude review: drm/xe: Set TTM device beneficial_order to 9 (2M)
2026-04-21 1:26 ` [PATCH 2/3] drm/xe: Set TTM device beneficial_order to 9 (2M) Matthew Brost
@ 2026-04-22 23:01 ` Claude Code Review Bot
0 siblings, 0 replies; 18+ messages in thread
From: Claude Code Review Bot @ 2026-04-22 23:01 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
```c
- xe->drm.vma_offset_manager, 0);
+ xe->drm.vma_offset_manager,
+ TTM_ALLOCATION_POOL_BENEFICIAL_ORDER(get_order(SZ_2M)));
```
Straightforward and correct. `get_order(SZ_2M)` returns 9 (2MB / 4KB = 512 = 2^9). The macro `TTM_ALLOCATION_POOL_BENEFICIAL_ORDER` masks to the lower 8 bits, which is fine for value 9.
This correctly enables the Patch 1 behavior for Xe. The 2MB sweet spot matches the large GPU page size, which makes sense for Xe's memory management.
No concerns with this patch.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH 3/3] drm/xe: Avoid shrinker reclaim from kswapd under fragmentation
2026-04-21 1:26 [PATCH 0/3] drm/ttm, drm/xe: Avoid reclaim/eviction loops under fragmentation Matthew Brost
2026-04-21 1:26 ` [PATCH 1/3] drm/ttm: Issue direct reclaim at beneficial_order Matthew Brost
2026-04-21 1:26 ` [PATCH 2/3] drm/xe: Set TTM device beneficial_order to 9 (2M) Matthew Brost
@ 2026-04-21 1:26 ` Matthew Brost
2026-04-22 8:22 ` Thomas Hellström
2026-04-22 23:01 ` Claude review: " Claude Code Review Bot
2026-04-22 23:01 ` Claude review: drm/ttm, drm/xe: Avoid reclaim/eviction loops " Claude Code Review Bot
3 siblings, 2 replies; 18+ messages in thread
From: Matthew Brost @ 2026-04-21 1:26 UTC (permalink / raw)
To: intel-xe, dri-devel; +Cc: Thomas Hellström, Carlos Santa, Matthew Auld
When the Xe shrinker is invoked from kswapd, a large amount of free
memory in ZONE_NORMAL relative to the high watermark is a strong signal
that reclaim is being driven by fragmentation rather than true memory
pressure.
In this case, shrinking Xe memory is unlikely to help kswapd make
forward progress. Instead it can evict active GPU memory despite the
system still having substantial free memory, increasing residency churn
and reducing GPU forward progress.
Detect this case and bail out early from the Xe shrinker when running in
kswapd and ZONE_NORMAL has more than 2x its high watermark free.
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Carlos Santa <carlos.santa@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_shrinker.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_shrinker.c b/drivers/gpu/drm/xe/xe_shrinker.c
index 83374cd57660..e56afde83de6 100644
--- a/drivers/gpu/drm/xe/xe_shrinker.c
+++ b/drivers/gpu/drm/xe/xe_shrinker.c
@@ -236,6 +236,19 @@ static unsigned long xe_shrinker_scan(struct shrinker *shrink, struct shrink_con
if (nr_scanned >= nr_to_scan || !can_backup)
goto out;
+ if (current_is_kswapd()) {
+ struct zone *zone = &NODE_DATA(sc->nid)->node_zones[ZONE_NORMAL];
+ unsigned long free_pages = zone_page_state(zone, NR_FREE_PAGES);
+ unsigned long high_wmark = high_wmark_pages(zone);
+
+ /*
+ * If we have 2x the high watermark free, this is definitely
+ * fragmentation
+ */
+ if (free_pages > (high_wmark * 2))
+ goto out;
+ }
+
/* If we didn't wake before, try to do it now if needed. */
if (!runtime_pm)
runtime_pm = xe_shrinker_runtime_pm_get(shrinker, true, 0, can_backup);
--
2.34.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH 3/3] drm/xe: Avoid shrinker reclaim from kswapd under fragmentation
2026-04-21 1:26 ` [PATCH 3/3] drm/xe: Avoid shrinker reclaim from kswapd under fragmentation Matthew Brost
@ 2026-04-22 8:22 ` Thomas Hellström
2026-04-22 20:27 ` Matthew Brost
2026-04-22 23:01 ` Claude review: " Claude Code Review Bot
1 sibling, 1 reply; 18+ messages in thread
From: Thomas Hellström @ 2026-04-22 8:22 UTC (permalink / raw)
To: Matthew Brost, intel-xe, dri-devel; +Cc: Carlos Santa, Matthew Auld
Hi,
On Mon, 2026-04-20 at 18:26 -0700, Matthew Brost wrote:
> When the Xe shrinker is invoked from kswapd, a large amount of free
> memory in ZONE_NORMAL relative to the high watermark is a strong
> signal
> that reclaim is being driven by fragmentation rather than true memory
> pressure.
>
> In this case, shrinking Xe memory is unlikely to help kswapd make
> forward progress. Instead it can evict active GPU memory despite the
> system still having substantial free memory, increasing residency
> churn
> and reducing GPU forward progress.
>
> Detect this case and bail out early from the Xe shrinker when running
> in
> kswapd and ZONE_NORMAL has more than 2x its high watermark free.
>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Carlos Santa <carlos.santa@intel.com>
> Cc: Matthew Auld <matthew.auld@intel.com>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/xe/xe_shrinker.c | 13 +++++++++++++
> 1 file changed, 13 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_shrinker.c
> b/drivers/gpu/drm/xe/xe_shrinker.c
> index 83374cd57660..e56afde83de6 100644
> --- a/drivers/gpu/drm/xe/xe_shrinker.c
> +++ b/drivers/gpu/drm/xe/xe_shrinker.c
> @@ -236,6 +236,19 @@ static unsigned long xe_shrinker_scan(struct
> shrinker *shrink, struct shrink_con
> if (nr_scanned >= nr_to_scan || !can_backup)
> goto out;
>
> + if (current_is_kswapd()) {
> + struct zone *zone = &NODE_DATA(sc->nid)-
> >node_zones[ZONE_NORMAL];
> + unsigned long free_pages = zone_page_state(zone,
> NR_FREE_PAGES);
> + unsigned long high_wmark = high_wmark_pages(zone);
> +
> + /*
> + * If we have 2x the high watermark free, this is
> definitely
> + * fragmentation
> + */
> + if (free_pages > (high_wmark * 2))
> + goto out;
> + }
> +
While this or a similar check might make sense, it should ideally live
in the TTM shrinker helpers. And we should probably ask core mm for a
proper indication of whether this is indeed fragmentation-driven.
Thanks,
Thomas
> /* If we didn't wake before, try to do it now if needed. */
> if (!runtime_pm)
> runtime_pm = xe_shrinker_runtime_pm_get(shrinker,
> true, 0, can_backup);
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 3/3] drm/xe: Avoid shrinker reclaim from kswapd under fragmentation
2026-04-22 8:22 ` Thomas Hellström
@ 2026-04-22 20:27 ` Matthew Brost
0 siblings, 0 replies; 18+ messages in thread
From: Matthew Brost @ 2026-04-22 20:27 UTC (permalink / raw)
To: Thomas Hellström; +Cc: intel-xe, dri-devel, Carlos Santa, Matthew Auld
On Wed, Apr 22, 2026 at 10:22:56AM +0200, Thomas Hellström wrote:
> Hi,
>
> On Mon, 2026-04-20 at 18:26 -0700, Matthew Brost wrote:
> > When the Xe shrinker is invoked from kswapd, a large amount of free
> > memory in ZONE_NORMAL relative to the high watermark is a strong
> > signal
> > that reclaim is being driven by fragmentation rather than true memory
> > pressure.
> >
> > In this case, shrinking Xe memory is unlikely to help kswapd make
> > forward progress. Instead it can evict active GPU memory despite the
> > system still having substantial free memory, increasing residency
> > churn
> > and reducing GPU forward progress.
> >
> > Detect this case and bail out early from the Xe shrinker when running
> > in
> > kswapd and ZONE_NORMAL has more than 2x its high watermark free.
> >
> > Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > Cc: Carlos Santa <carlos.santa@intel.com>
> > Cc: Matthew Auld <matthew.auld@intel.com>
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > ---
> > drivers/gpu/drm/xe/xe_shrinker.c | 13 +++++++++++++
> > 1 file changed, 13 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/xe/xe_shrinker.c
> > b/drivers/gpu/drm/xe/xe_shrinker.c
> > index 83374cd57660..e56afde83de6 100644
> > --- a/drivers/gpu/drm/xe/xe_shrinker.c
> > +++ b/drivers/gpu/drm/xe/xe_shrinker.c
> > @@ -236,6 +236,19 @@ static unsigned long xe_shrinker_scan(struct
> > shrinker *shrink, struct shrink_con
> > if (nr_scanned >= nr_to_scan || !can_backup)
> > goto out;
> >
> > + if (current_is_kswapd()) {
> > + struct zone *zone = &NODE_DATA(sc->nid)-
> > >node_zones[ZONE_NORMAL];
> > + unsigned long free_pages = zone_page_state(zone,
> > NR_FREE_PAGES);
> > + unsigned long high_wmark = high_wmark_pages(zone);
> > +
> > + /*
> > + * If we have 2x the high watermark free, this is
> > definitely
> > + * fragmentation
> > + */
> > + if (free_pages > (high_wmark * 2))
> > + goto out;
> > + }
> > +
>
> While this or a similar check might make sense, That should ideally be
> in the TTM shrinker helpers. And probably we should ask core mm for a
> proper indication whether this is indeed fragmentation-driven.
>
Yes, moving this to a TTM shrinker helper, or even a core MM helper,
makes more sense. This is the best heuristic I could come up with based
on what is available in the core MM today. However, it is still not
100% correct: I don't think ZONE_NORMAL is used on systems with less
than 4GB of memory; instead, ZONE_DMA32 is used. This might get even
worse when considering Kconfig combinations. Asking the core MM via a
helper whether memory is fragmented is likely the best solution.
Matt
> Thanks,
> Thomas
>
>
>
> > /* If we didn't wake before, try to do it now if needed. */
> > if (!runtime_pm)
> > runtime_pm = xe_shrinker_runtime_pm_get(shrinker,
> > true, 0, can_backup);
^ permalink raw reply [flat|nested] 18+ messages in thread
* Claude review: drm/xe: Avoid shrinker reclaim from kswapd under fragmentation
2026-04-21 1:26 ` [PATCH 3/3] drm/xe: Avoid shrinker reclaim from kswapd under fragmentation Matthew Brost
2026-04-22 8:22 ` Thomas Hellström
@ 2026-04-22 23:01 ` Claude Code Review Bot
1 sibling, 0 replies; 18+ messages in thread
From: Claude Code Review Bot @ 2026-04-22 23:01 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
This adds a fragmentation detection heuristic in `xe_shrinker_scan()`:
```c
+ if (current_is_kswapd()) {
+ struct zone *zone = &NODE_DATA(sc->nid)->node_zones[ZONE_NORMAL];
+ unsigned long free_pages = zone_page_state(zone, NR_FREE_PAGES);
+ unsigned long high_wmark = high_wmark_pages(zone);
+
+ /*
+ * If we have 2x the high watermark free, this is definitely
+ * fragmentation
+ */
+ if (free_pages > (high_wmark * 2))
+ goto out;
+ }
```
**Placement is good:** The check is positioned after the purgeable-objects walk but before the expensive eviction/writeback walk. This means cheap-to-free purgeable objects are still reclaimed, but expensive eviction of active BOs is avoided under fragmentation. This is the right trade-off.
**Concerns:**
1. **Hardcoded ZONE_NORMAL:** This assumes ZONE_NORMAL is the relevant zone to check. On x86_64 systems with limited memory (e.g., all RAM < 4GB), ZONE_NORMAL could be empty (0 managed pages, 0 free pages, 0 high watermark). In that case `free_pages > (high_wmark * 2)` evaluates to `0 > 0` which is false, so the heuristic wouldn't trigger — this is a safe failure mode (no false bailout). However, the heuristic would also never provide its intended benefit on such systems. Consider checking the zone that actually has managed pages, or checking `managed_zone(zone)` as a guard. Since Xe targets Intel iGPUs which are almost always on x86_64 systems with >4GB RAM, this is unlikely to matter in practice but is still a fragile assumption.
2. **The 2x multiplier is a magic number:** The comment says "this is definitely fragmentation" but 2x is an arbitrary threshold. It would be good to document why 2x was chosen rather than, say, 1.5x or 3x. If the high watermark is, say, 128MB and we have 260MB free, that's "definitely fragmentation" per this heuristic — but it's not an overwhelming amount of free memory either. Some justification or at least a named constant would improve readability.
3. **NUMA considerations:** On multi-node systems, `sc->nid` identifies the node being reclaimed. Using `NODE_DATA(sc->nid)->node_zones[ZONE_NORMAL]` checks the right node's ZONE_NORMAL. This is correct for NUMA-aware reclaim.
4. **Return value semantics:** When bailing out via `goto out`, if `nr_scanned` is 0, the function returns `SHRINK_STOP`. This tells the VM that the shrinker can't free anything, which should prevent repeated calls. If some purgeable pages were already freed (`nr_scanned > 0`), it returns `freed`, correctly reporting partial work. Both cases seem correct.
5. **Comment accuracy:** The comment says "this is definitely fragmentation" but it's a heuristic, not a guarantee. Having lots of free memory in ZONE_NORMAL while kswapd is running *could* also indicate pressure in other zones (DMA, DMA32, MOVABLE). Consider softening to "this is likely fragmentation" or "this is a strong signal of fragmentation" (which matches the commit message wording).
**Overall:** The heuristic is a pragmatic solution to a real problem. The ZONE_NORMAL assumption and magic constant are defensible for the Intel iGPU target but should be documented better.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 18+ messages in thread
* Claude review: drm/ttm, drm/xe: Avoid reclaim/eviction loops under fragmentation
2026-04-21 1:26 [PATCH 0/3] drm/ttm, drm/xe: Avoid reclaim/eviction loops under fragmentation Matthew Brost
` (2 preceding siblings ...)
2026-04-21 1:26 ` [PATCH 3/3] drm/xe: Avoid shrinker reclaim from kswapd under fragmentation Matthew Brost
@ 2026-04-22 23:01 ` Claude Code Review Bot
3 siblings, 0 replies; 18+ messages in thread
From: Claude Code Review Bot @ 2026-04-22 23:01 UTC (permalink / raw)
To: dri-devel-reviews
Overall Series Review
Subject: drm/ttm, drm/xe: Avoid reclaim/eviction loops under fragmentation
Author: Matthew Brost <matthew.brost@intel.com>
Patches: 3
Reviewed: 2026-04-23T09:01:58.435382
---
This is a well-motivated 3-patch series addressing a real and well-documented pathological reclaim loop on Xe iGPU systems under memory fragmentation. The cover letter clearly describes the problem (kswapd → shrinker → eviction → rebind loop at ~2 FPS) and demonstrates measurable improvement (~10 FPS, restored buddyinfo).
The approach is sound: (1) teach TTM to only do reclaim at the order the driver actually benefits from, (2) tell TTM that Xe cares about order 9 (2MB), and (3) add a heuristic in the Xe shrinker to detect fragmentation-driven kswapd invocations and bail out early.
However, Patch 1 has a significant semantic expansion beyond what the commit message emphasizes, and Patch 3 has some robustness concerns around the ZONE_NORMAL hardcoding. Overall the series is reasonable but warrants discussion on these points.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH v3 0/6] mm, drm/ttm, drm/xe: Avoid reclaim/eviction loops under fragmentation
@ 2026-04-30 18:23 Matthew Brost
2026-04-30 18:23 ` [PATCH v3 5/6] drm/xe: Set TTM device beneficial_order to 9 (2M) Matthew Brost
0 siblings, 1 reply; 18+ messages in thread
From: Matthew Brost @ 2026-04-30 18:23 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: Dave Chinner, Qi Zheng, Roman Gushchin, Johannes Weiner,
Shakeel Butt, Kairui Song, Barry Song, Axel Rasmussen,
Yuanchu Xie, Wei Xu, Tvrtko Ursulin, Thomas Hellström,
Carlos Santa, Christian Koenig, Huang Rui, Matthew Auld,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Daniel Colascione, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
linux-mm, linux-kernel
TTM allocations at higher orders can drive Xe into a pathological
reclaim loop when memory is fragmented:
kswapd → shrinker → eviction → rebind (exec ioctl) → repeat
In this state, reclaim is triggered despite substantial free memory,
but fails to produce contiguous higher-order pages. The Xe shrinker then
evicts active buffer objects, increasing faulting and rebind activity
and further feeding the loop. The result is high CPU overhead and poor
GPU forward progress.
This issue was first reported in [1] and independently observed
internally and by Google.
A simple reproducer is:
- Boot an iGPU system with mem=8G
- Launch 10 Chrome tabs running the WebGL aquarium demo
- Configure each tab with ~5k fish
Under this workload, ftrace shows a continuous loop of:
xe_shrinker_scan (kswapd)
xe_vma_rebind_exec
Performance degrades significantly, with each tab dropping to ~2 FPS on
PTL (Ubuntu 24.04).
At the same time, /proc/buddyinfo shows substantial free memory but no
higher-order availability. For example, the Normal zone:
Count: 4063 4595 3455 3400 3139 2762 2293 1655 643 0 0
This corresponds to ~2.8GB free memory, but no order-9 (2MB) blocks,
indicating severe fragmentation.
This series addresses the issue in two ways:
TTM: Restrict direct reclaim to beneficial_order. Larger allocations
use __GFP_NORETRY to fail quickly rather than triggering reclaim.
Xe: Introduce a heuristic in the shrinker to avoid eviction when
running under kswapd and the system appears memory-rich but
fragmented.
With these changes, the reclaim/eviction loop is eliminated. The same
workload improves to ~10 FPS per tab (Ubuntu 24.04) or ~15 FPS per tab
(Ubuntu 24.10), and kswapd activity subsides.
Buddyinfo after applying this series shows restored higher-order
availability:
Count: 8526 7067 3092 1959 1292 660 194 28 20 13 1
Matt
v2:
- Layer with core MM / TTM helpers (Thomas)
[1] https://patchwork.freedesktop.org/patch/716404/?series=164353&rev=1
Cc: Dave Chinner <david@fromorbit.com>
Cc: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Kairui Song <kasong@tencent.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: Yuanchu Xie <yuanchu@google.com>
Cc: Wei Xu <weixugc@google.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Carlos Santa <carlos.santa@intel.com>
Cc: Christian Koenig <christian.koenig@amd.com>
Cc: Huang Rui <ray.huang@amd.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: David Airlie <airlied@gmail.com>
Cc: Simona Vetter <simona@ffwll.ch>
CC: dri-devel@lists.freedesktop.org
Cc: Daniel Colascione <dancol@dancol.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: David Hildenbrand <david@kernel.org>
Cc: Lorenzo Stoakes <ljs@kernel.org>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: Vlastimil Babka <vbabka@kernel.org>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Matthew Brost (6):
mm: Wire up order in shrink_control
mm: Introduce zone_maybe_fragmented_in_shrinker()
drm/ttm: Issue direct reclaim at beneficial_order
drm/ttm: Introduce ttm_bo_shrink_kswap_maybe_fragmented()
drm/xe: Set TTM device beneficial_order to 9 (2M)
drm/xe: Avoid shrinker reclaim from kswapd under fragmentation
drivers/gpu/drm/ttm/ttm_bo_util.c | 38 +++++++++++++++++++++++++++++++
drivers/gpu/drm/ttm/ttm_pool.c | 4 ++--
drivers/gpu/drm/xe/xe_device.c | 3 ++-
drivers/gpu/drm/xe/xe_shrinker.c | 3 +++
include/drm/ttm/ttm_bo.h | 2 ++
include/linux/shrinker.h | 3 +++
include/linux/vmstat.h | 12 ++++++++++
mm/internal.h | 4 ++--
mm/shrinker.c | 11 +++++----
mm/vmscan.c | 7 +++---
10 files changed, 75 insertions(+), 12 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH v3 5/6] drm/xe: Set TTM device beneficial_order to 9 (2M)
2026-04-30 18:23 [PATCH v3 0/6] mm, " Matthew Brost
@ 2026-04-30 18:23 ` Matthew Brost
2026-05-05 0:13 ` Claude review: " Claude Code Review Bot
0 siblings, 1 reply; 18+ messages in thread
From: Matthew Brost @ 2026-04-30 18:23 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: Thomas Hellström, Carlos Santa, Matthew Auld, Andi Shyti
Set the TTM device beneficial_order to 9 (2M), which is the sweet
spot for Xe when attempting reclaim on system memory BOs, as it matches
the large GPU page size. This ensures reclaim is attempted at the most
effective order for the driver.
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Carlos Santa <carlos.santa@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
---
drivers/gpu/drm/xe/xe_device.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 4b45b617a039..3f719ab08d1c 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -500,7 +500,8 @@ struct xe_device *xe_device_create(struct pci_dev *pdev,
err = ttm_device_init(&xe->ttm, &xe_ttm_funcs, xe->drm.dev,
xe->drm.anon_inode->i_mapping,
- xe->drm.vma_offset_manager, 0);
+ xe->drm.vma_offset_manager,
+ TTM_ALLOCATION_POOL_BENEFICIAL_ORDER(get_order(SZ_2M)));
if (WARN_ON(err))
return ERR_PTR(err);
--
2.34.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* Claude review: drm/xe: Set TTM device beneficial_order to 9 (2M)
2026-04-30 18:23 ` [PATCH v3 5/6] drm/xe: Set TTM device beneficial_order to 9 (2M) Matthew Brost
@ 2026-05-05 0:13 ` Claude Code Review Bot
0 siblings, 0 replies; 18+ messages in thread
From: Claude Code Review Bot @ 2026-05-05 0:13 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
**Overall:** Simple one-liner that sets the beneficial order for Xe's TTM device to order 9 (2M pages).
```c
- xe->drm.vma_offset_manager, 0);
+ xe->drm.vma_offset_manager,
+ TTM_ALLOCATION_POOL_BENEFICIAL_ORDER(get_order(SZ_2M)));
```
**Observations:**
- `get_order(SZ_2M)` = `get_order(2*1024*1024)` = 9 on systems with 4K pages. On 16K page systems (some ARM), this would be different. Using `get_order()` is correct and portable.
- `TTM_ALLOCATION_POOL_BENEFICIAL_ORDER` is a simple mask `((n) & 0xff)` so this is just `9 & 0xff = 9`.
- Already has Reviewed-by from Andi Shyti.
- Clean and correct.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH v4 0/6] mm, drm/ttm, drm/xe: Avoid reclaim/eviction loops under fragmentation
@ 2026-04-30 19:18 Matthew Brost
2026-04-30 19:18 ` [PATCH v4 5/6] drm/xe: Set TTM device beneficial_order to 9 (2M) Matthew Brost
0 siblings, 1 reply; 18+ messages in thread
From: Matthew Brost @ 2026-04-30 19:18 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: Dave Chinner, Qi Zheng, Roman Gushchin, Johannes Weiner,
Shakeel Butt, Kairui Song, Barry Song, Axel Rasmussen,
Yuanchu Xie, Wei Xu, Tvrtko Ursulin, Thomas Hellström,
Carlos Santa, Christian Koenig, Huang Rui, Matthew Auld,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Daniel Colascione, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
linux-mm, linux-kernel
TTM allocations at higher orders can drive Xe into a pathological
reclaim loop when memory is fragmented:
kswapd → shrinker → eviction → rebind (exec ioctl) → repeat
In this state, reclaim is triggered despite substantial free memory,
but fails to produce contiguous higher-order pages. The Xe shrinker then
evicts active buffer objects, increasing faulting and rebind activity
and further feeding the loop. The result is high CPU overhead and poor
GPU forward progress.
This issue was first reported in [1] and independently observed
internally and by Google.
A simple reproducer is:
- Boot an iGPU system with mem=8G
- Launch 10 Chrome tabs running the WebGL aquarium demo
- Configure each tab with ~5k fish
Under this workload, ftrace shows a continuous loop of:
xe_shrinker_scan (kswapd)
xe_vma_rebind_exec
Performance degrades significantly, with each tab dropping to ~2 FPS on
PTL (Ubuntu 24.04).
At the same time, /proc/buddyinfo shows substantial free memory but no
higher-order availability. For example, the Normal zone:
Count: 4063 4595 3455 3400 3139 2762 2293 1655 643 0 0
This corresponds to ~2.8GB free memory, but no order-9 (2MB) blocks,
indicating severe fragmentation.
This series addresses the issue in two ways:
TTM: Restrict direct reclaim to beneficial_order. Larger allocations
use __GFP_NORETRY to fail quickly rather than triggering reclaim.
Xe: Introduce a heuristic in the shrinker to avoid eviction when
running under kswapd and the system appears memory-rich but
fragmented.
With these changes, the reclaim/eviction loop is eliminated. The same
workload improves to ~10 FPS per tab (Ubuntu 24.04) or ~15 FPS per tab
(Ubuntu 24.10), and kswapd activity subsides.
Buddyinfo after applying this series shows restored higher-order
availability:
Count: 8526 7067 3092 1959 1292 660 194 28 20 13 1
Matt
v2:
- Layer with core MM / TTM helpers (Thomas)
v4:
- Fix build (CI)
[1] https://patchwork.freedesktop.org/patch/716404/?series=164353&rev=1
Cc: Dave Chinner <david@fromorbit.com>
Cc: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Kairui Song <kasong@tencent.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: Yuanchu Xie <yuanchu@google.com>
Cc: Wei Xu <weixugc@google.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Carlos Santa <carlos.santa@intel.com>
Cc: Christian Koenig <christian.koenig@amd.com>
Cc: Huang Rui <ray.huang@amd.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: David Airlie <airlied@gmail.com>
Cc: Simona Vetter <simona@ffwll.ch>
CC: dri-devel@lists.freedesktop.org
Cc: Daniel Colascione <dancol@dancol.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: David Hildenbrand <david@kernel.org>
Cc: Lorenzo Stoakes <ljs@kernel.org>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: Vlastimil Babka <vbabka@kernel.org>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Matthew Brost (6):
mm: Wire up order in shrink_control
mm: Introduce zone_maybe_fragmented_in_shrinker()
drm/ttm: Issue direct reclaim at beneficial_order
drm/ttm: Introduce ttm_bo_shrink_kswap_maybe_fragmented()
drm/xe: Set TTM device beneficial_order to 9 (2M)
drm/xe: Avoid shrinker reclaim from kswapd under fragmentation
drivers/gpu/drm/ttm/ttm_bo_util.c | 38 +++++++++++++++++++++++++++++++
drivers/gpu/drm/ttm/ttm_pool.c | 4 ++--
drivers/gpu/drm/xe/xe_device.c | 3 ++-
drivers/gpu/drm/xe/xe_shrinker.c | 3 +++
include/drm/ttm/ttm_bo.h | 2 ++
include/linux/shrinker.h | 3 +++
include/linux/vmstat.h | 12 ++++++++++
mm/internal.h | 4 ++--
mm/shrinker.c | 13 +++++++----
mm/vmscan.c | 7 +++---
10 files changed, 76 insertions(+), 13 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH v4 5/6] drm/xe: Set TTM device beneficial_order to 9 (2M)
2026-04-30 19:18 [PATCH v4 0/6] mm, drm/ttm, drm/xe: Avoid reclaim/eviction loops under fragmentation Matthew Brost
@ 2026-04-30 19:18 ` Matthew Brost
2026-05-05 0:00 ` Claude review: " Claude Code Review Bot
0 siblings, 1 reply; 18+ messages in thread
From: Matthew Brost @ 2026-04-30 19:18 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: Thomas Hellström, Carlos Santa, Matthew Auld, Andi Shyti
Set the TTM device beneficial_order to 9 (2M), which is the sweet
spot for Xe when attempting reclaim on system memory BOs, as it matches
the large GPU page size. This ensures reclaim is attempted at the most
effective order for the driver.
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Carlos Santa <carlos.santa@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
---
drivers/gpu/drm/xe/xe_device.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 4b45b617a039..3f719ab08d1c 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -500,7 +500,8 @@ struct xe_device *xe_device_create(struct pci_dev *pdev,
err = ttm_device_init(&xe->ttm, &xe_ttm_funcs, xe->drm.dev,
xe->drm.anon_inode->i_mapping,
- xe->drm.vma_offset_manager, 0);
+ xe->drm.vma_offset_manager,
+ TTM_ALLOCATION_POOL_BENEFICIAL_ORDER(get_order(SZ_2M)));
if (WARN_ON(err))
return ERR_PTR(err);
--
2.34.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* Claude review: drm/xe: Set TTM device beneficial_order to 9 (2M)
2026-04-30 19:18 ` [PATCH v4 5/6] drm/xe: Set TTM device beneficial_order to 9 (2M) Matthew Brost
@ 2026-05-05 0:00 ` Claude Code Review Bot
0 siblings, 0 replies; 18+ messages in thread
From: Claude Code Review Bot @ 2026-05-05 0:00 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
**Subject:** `[PATCH v4 5/6] drm/xe: Set TTM device beneficial_order to 9 (2M)`
```c
- xe->drm.vma_offset_manager, 0);
+ xe->drm.vma_offset_manager,
+ TTM_ALLOCATION_POOL_BENEFICIAL_ORDER(get_order(SZ_2M)));
```
**This is straightforward and correct.** `get_order(SZ_2M)` returns 9 on 4K-page systems. The `TTM_ALLOCATION_POOL_BENEFICIAL_ORDER` macro packs this into the `alloc_flags` parameter. Has Andi's R-b already.
One observation: on architectures with non-4K base page sizes (e.g., 64K pages on arm64), `get_order(SZ_2M)` would return a different value (e.g., order 5 for 64K pages). The commit message says "9 (2M)" but the code uses `get_order(SZ_2M)`, which is the correct portable approach. The commit message should perhaps say "order matching 2M" rather than hardcoding "9".
**No functional issues.**
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 18+ messages in thread
end of thread, other threads:[~2026-05-05 0:13 UTC | newest]
Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-21 1:26 [PATCH 0/3] drm/ttm, drm/xe: Avoid reclaim/eviction loops under fragmentation Matthew Brost
2026-04-21 1:26 ` [PATCH 1/3] drm/ttm: Issue direct reclaim at beneficial_order Matthew Brost
2026-04-21 6:11 ` Christian König
2026-04-22 4:12 ` Matthew Brost
2026-04-22 6:41 ` Christian König
2026-04-22 7:32 ` Tvrtko Ursulin
2026-04-22 7:41 ` Christian König
2026-04-22 20:41 ` Matthew Brost
2026-04-22 23:01 ` Claude review: " Claude Code Review Bot
2026-04-21 1:26 ` [PATCH 2/3] drm/xe: Set TTM device beneficial_order to 9 (2M) Matthew Brost
2026-04-22 23:01 ` Claude review: " Claude Code Review Bot
2026-04-21 1:26 ` [PATCH 3/3] drm/xe: Avoid shrinker reclaim from kswapd under fragmentation Matthew Brost
2026-04-22 8:22 ` Thomas Hellström
2026-04-22 20:27 ` Matthew Brost
2026-04-22 23:01 ` Claude review: " Claude Code Review Bot
2026-04-22 23:01 ` Claude review: drm/ttm, drm/xe: Avoid reclaim/eviction loops " Claude Code Review Bot
-- strict thread matches above, loose matches on Subject: below --
2026-04-30 18:23 [PATCH v3 0/6] mm, " Matthew Brost
2026-04-30 18:23 ` [PATCH v3 5/6] drm/xe: Set TTM device beneficial_order to 9 (2M) Matthew Brost
2026-05-05 0:13 ` Claude review: " Claude Code Review Bot
2026-04-30 19:18 [PATCH v4 0/6] mm, drm/ttm, drm/xe: Avoid reclaim/eviction loops under fragmentation Matthew Brost
2026-04-30 19:18 ` [PATCH v4 5/6] drm/xe: Set TTM device beneficial_order to 9 (2M) Matthew Brost
2026-05-05 0:00 ` Claude review: " Claude Code Review Bot
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox