public inbox for drm-ai-reviews@public-inbox.freedesktop.org
* Claude review: drm/xe/xe_vm: Add per VM fault info
  2026-02-23 17:21 ` [PATCH v35 3/4] drm/xe/xe_vm: Add per VM fault info Jonathan Cavitt
@ 2026-02-24  0:00   ` Claude Code Review Bot
  0 siblings, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-02-24  0:00 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**Build-breaking bug.** The `xe_vm_add_fault_entry_pf` function references `pf->consumer.fault_type_level` and uses `FIELD_GET` with `XE_PAGEFAULT_TYPE_MASK` and `XE_PAGEFAULT_LEVEL_MASK`:

> +	e->fault_type = FIELD_GET(XE_PAGEFAULT_TYPE_MASK,
> +				  pf->consumer.fault_type_level),
> +	e->fault_level = FIELD_GET(XE_PAGEFAULT_LEVEL_MASK,
> +				   pf->consumer.fault_type_level),

The `struct xe_pagefault` has separate `fault_type` and `fault_level` fields (both `u8`), not a combined `fault_type_level` field. Neither `XE_PAGEFAULT_TYPE_MASK` nor `XE_PAGEFAULT_LEVEL_MASK` is defined anywhere in the tree. This code will not compile. It appears to be based on an older version of the `xe_pagefault` struct. The fix is straightforward:

```c
e->fault_type = pf->consumer.fault_type;
e->fault_level = pf->consumer.fault_level;
```

**Comma instead of semicolon.** Those same two lines end with commas instead of semicolons. Due to C's comma operator, the two assignment expressions and the subsequent `list_add_tail` call end up chained into a single expression-statement. While this would execute correctly (all three expressions evaluate left to right), it is clearly unintentional and should use semicolons.

**Allocation under potential pressure.** The function calls `kzalloc(sizeof(*e), GFP_KERNEL)` from within `xe_pagefault_queue_work`, a workqueue handler that processes page faults. If the system is under memory pressure (which is exactly when page faults are likely), this allocation could sleep and delay fault processing. Worth considering whether `GFP_NOWAIT` or a pre-allocated pool would be more appropriate here, though this is a design consideration rather than a bug.

**Race window on VM close.** `xe_pagefault_save_to_vm` does `xe_vm_get(vm)` under `xe->usm.lock`, then calls `xe_vm_add_fault_entry_pf`, then `xe_vm_put`. Meanwhile, `xe_vm_close_and_put` calls `xe_vm_clear_fault_entries` after releasing `xe->usm.lock`. The per-fault spinlock (`vm->faults.lock`) protects the list operations, so no corruption occurs, but a fault could be added to a VM that is in the process of being torn down, only to be immediately freed by `xe_vm_clear_fault_entries`. This is harmless but worth noting.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Claude review: drm/xe/xe_vm: Add per VM fault info
  2026-03-06 15:56 ` [PATCH v36 3/4] drm/xe/xe_vm: Add per VM fault info Jonathan Cavitt
@ 2026-03-08 22:42   ` Claude Code Review Bot
  0 siblings, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-03-08 22:42 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

1. **Comma instead of semicolon (bug):**

```c
e->fault_type = FIELD_GET(XE_PAGEFAULT_TYPE_MASK,
              pf->consumer.fault_type_level),
e->fault_level = FIELD_GET(XE_PAGEFAULT_LEVEL_MASK,
               pf->consumer.fault_type_level),
```

Both of these lines end with `,` (comma operator) instead of `;`. While this compiles and works because the comma operator evaluates both sides, it's definitely a bug/typo. Both should end with `;`.

2. **`access_type` stored without masking:**

```c
e->access_type = pf->consumer.access_type;
```

As noted in patch 1, `pf->consumer.access_type` has `XE_PAGEFAULT_ACCESS_PREFETCH` packed into bit 7. This will leak the prefetch bit into the stored access_type, which then gets exposed to userspace. It should be masked:

```c
e->access_type = pf->consumer.access_type & XE_PAGEFAULT_ACCESS_TYPE_MASK;
```

3. **Allocation under potential memory pressure:** `xe_pagefault_save_to_vm` calls `kzalloc(GFP_KERNEL)` in the page fault handler path. The fault handling path may already be under memory pressure (the fault may have been triggered by reclaim). Consider using `GFP_NOWAIT` or at least `GFP_KERNEL | __GFP_NOWARN`.

4. **Typo in comment:** "asociated" should be "associated" in the comment in `xe_pagefault_save_to_vm`.

5. **Fault list never cleared by userspace:** Faults accumulate up to 50, after which new ones are silently discarded. There is no mechanism for userspace to clear the fault list after reading it, so once 50 faults have been recorded, no new faults will ever be captured. This is a significant design limitation: userspace can read the same stale faults forever and never see new ones.

6. **`xe_vm_clear_fault_entries` only called on VM close:** The fault list is only cleaned up on `xe_vm_close_and_put`. If userspace never closes the VM, these 50 allocations persist forever (which is fine, it's bounded, but the list is never useful after filling up).

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Claude review: drm/xe/xe_vm: Add per VM fault info
  2026-03-10 14:49 ` [PATCH v37 3/4] drm/xe/xe_vm: Add per VM fault info Jonathan Cavitt
@ 2026-03-11  3:13   ` Claude Code Review Bot
  0 siblings, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-03-11  3:13 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**Comma operator bug.** In `xe_vm_add_fault_entry_pf()`:
```c
e->fault_type = FIELD_GET(XE_PAGEFAULT_TYPE_MASK,
              pf->consumer.fault_type_level),
e->fault_level = FIELD_GET(XE_PAGEFAULT_LEVEL_MASK,
               pf->consumer.fault_type_level),
```
Both lines end with `,` (comma) instead of `;` (semicolon). While this compiles and works in this specific context (comma operator evaluates left-to-right and the result is discarded by the next statement), it looks like a typo and could confuse readers. Should be semicolons.

**Memory allocation under potential pressure.** `xe_pagefault_save_to_vm()` calls `kzalloc(sizeof(*e), GFP_KERNEL)` from a workqueue context processing page faults. This should be fine since `GFP_KERNEL` is appropriate here, but the allocation is done before checking `MAX_FAULTS_SAVED_PER_VM`; the check happens inside the spinlock. Moving the length check before the allocation would avoid wasted allocations:

```c
/* Hypothetical reordering: take the lock, bail out if the list is
 * already full, and only then allocate. */
spin_lock(&vm->faults.lock);
if (vm->faults.len >= MAX_FAULTS_SAVED_PER_VM) {
	spin_unlock(&vm->faults.lock);
	return;
}
spin_unlock(&vm->faults.lock);
/* ... kzalloc() and re-check under the lock, as the patch does ... */
```

That said, the current approach (allocate, then check under the lock) avoids holding the lock during allocation, which is the right tradeoff; the wasted allocation on a full list is minor.

**`access_type` stored raw.** `e->access_type = pf->consumer.access_type` stores the raw value including the prefetch bit. The conversion in patch 4 (`xe_to_user_access_type`) then masks it, so the internal representation differs from the uAPI representation. This works but is fragile; it would be cleaner to mask at storage time in `xe_vm_add_fault_entry_pf()`.

**Prefetch faults are saved.** `xe_pagefault_save_to_vm()` is called for all failed faults, but back in `xe_pagefault_queue_work()`, prefetch faults that fail are treated as benign (only a debug log). Saving prefetch faults to the VM fault list and exposing them to userspace seems inconsistent with the stated goal of only reporting meaningful failed faults. Consider filtering out prefetch faults in `xe_pagefault_save_to_vm()`.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Claude review: drm/xe/xe_vm: Add per VM fault info
  2026-03-20 18:35 ` [PATCH v38 3/4] drm/xe/xe_vm: Add per VM fault info Jonathan Cavitt
@ 2026-03-21 17:27   ` Claude Code Review Bot
  0 siblings, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-03-21 17:27 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**Comma operator instead of semicolons**: Lines 629-631 in `xe_vm_add_fault_entry_pf()`:
```c
e->fault_type = FIELD_GET(XE_PAGEFAULT_TYPE_MASK,
			  pf->consumer.fault_type_level),
e->fault_level = FIELD_GET(XE_PAGEFAULT_LEVEL_MASK,
			   pf->consumer.fault_type_level),
```
Both assignments end with commas (`,`) instead of semicolons (`;`). While the comma operator makes this technically work (the assignment still happens), it's a style error that could confuse readers and static analysis tools. Should be semicolons.

**kzalloc before spinlock then kfree under spinlock**: The pattern of allocating `e` before the lock, then potentially doing `kfree(e)` inside the `guard(spinlock)` scope if `len >= MAX_FAULTS_SAVED_PER_VM`, is fine since `kfree()` is safe to call with a spinlock held (it never sleeps). However, it means an allocation and free happen on every fault after the 50th, which could be frequent. Consider checking the length before allocating:

```c
/* Quick unlocked check to avoid unnecessary allocation */
if (READ_ONCE(vm->faults.len) >= MAX_FAULTS_SAVED_PER_VM)
    return;
```

**Typo in comment**: Line 482: "asociated" should be "associated".

**No mechanism to clear faults**: Faults accumulate up to 50 and are never cleared until VM destruction (`xe_vm_close_and_put`). The ioctl lets userspace read them but not consume/clear them, so once 50 faults are reached, no new faults are ever recorded. This is a significant design limitation: userspace has no way to "acknowledge" faults and make room for new ones. It should at least be documented as a known limitation or addressed with a clear/reset mechanism.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v39 0/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl
@ 2026-03-24 15:29 Jonathan Cavitt
  2026-03-24 15:29 ` [PATCH v39 1/4] drm/xe/xe_pagefault: Disallow writes to read-only VMAs Jonathan Cavitt
                   ` (4 more replies)
  0 siblings, 5 replies; 14+ messages in thread
From: Jonathan Cavitt @ 2026-03-24 15:29 UTC (permalink / raw)
  To: intel-xe
  Cc: saurabhg.gupta, alex.zuo, jonathan.cavitt, joonas.lahtinen,
	matthew.brost, jianxun.zhang, shuicheng.lin, dri-devel,
	Michal.Wajdeczko, michal.mrozek, raag.jadav, ivan.briano,
	matthew.auld, thomas.hellstrom

Add additional information to each VM so it can report up to the first
50 faults it has seen.  Only pagefaults are saved this way currently,
though in the future all fault types should be tracked by the VM for
reporting.

Additionally, only failed pagefaults are saved, since successful
pagefaults recover silently and need not be reported to userspace.

To allow userspace to access these faults, a new ioctl -
xe_vm_get_property_ioctl - was created.

v2: (Matt Brost)
- Break full ban list request into a separate property.
- Reformat drm_xe_vm_get_property struct.
- Remove need for drm_xe_faults helper struct.
- Separate data pointer and scalar return value in ioctl.
- Get address type on pagefault report and save it to the pagefault.
- Correctly reject writes to read-only VMAs.
- Miscellaneous formatting fixes.

v3: (Matt Brost)
- Only allow querying of failed pagefaults

v4:
- Remove unnecessary size parameter from helper function, as it
  is a property of the arguments. (jcavitt)
- Remove unnecessary copy_from_user (Jianxun)
- Set address_precision to 1 (Jianxun)
- Report max size instead of dynamic size for memory allocation
  purposes.  Total memory usage is reported separately.

v5:
- Return int from xe_vm_get_property_size (Shuicheng)
- Fix memory leak (Shuicheng)
- Remove unnecessary size variable (jcavitt)

v6:
- Free vm after use (Shuicheng)
- Compress pf copy logic (Shuicheng)
- Update fault_unsuccessful before storing (Shuicheng)
- Fix old struct name in comments (Shuicheng)
- Keep first 50 pagefaults instead of last 50 (Jianxun)
- Rename ioctl to xe_vm_get_faults_ioctl (jcavitt)

v7:
- Avoid unnecessary execution by checking MAX_PFS earlier (jcavitt)
- Fix double-locking error (jcavitt)
- Assert kmemdup is successful (Shuicheng)
- Repair and move fill_faults break condition (Dan Carpenter)
- Free vm after use (jcavitt)
- Combine assertions (jcavitt)
- Expand size check in xe_vm_get_faults_ioctl (jcavitt)
- Remove return mask from fill_faults, as return is already -EFAULT or 0
  (jcavitt)

v8:
- Revert back to using drm_xe_vm_get_property_ioctl
- s/Migrate/Move (Michal)
- s/xe_pagefault/xe_gt_pagefault (Michal)
- Create new header file, xe_gt_pagefault_types.h (Michal)
- Add and fix kernel docs (Michal)
- Rename xe_vm.pfs to xe_vm.faults (jcavitt)
- Store fault data and not pagefault in xe_vm faults list (jcavitt)
- Store address, address type, and address precision per fault (jcavitt)
- Store engine class and instance data per fault (Jianxun)
- Properly handle kzalloc error (Michal W)
- s/MAX_PFS/MAX_FAULTS_SAVED_PER_VM (Michal W)
- Store fault level per fault (Michal M)
- Apply better copy_to_user logic (jcavitt)

v9:
- More kernel doc fixes (Michal W, Jianxun)
- Better error handling (jcavitt)

v10:
- Convert enums to defines in regs folder (Michal W)
- Move xe_guc_pagefault_desc to regs folder (Michal W)
- Future-proof size logic for zero-size properties (jcavitt)
- Replace address type extern with access type (Jianxun)
- Add fault type to xe_drm_fault (Jianxun)

v11:
- Remove unnecessary switch case logic (Raag)
- Compress size get, size validation, and property fill functions into a
  single helper function (jcavitt)
- Assert valid size (jcavitt)
- Store pagefaults in non-fault-mode VMs as well (Jianxun)

v12:
- Remove unnecessary else condition
- Correct backwards helper function size logic (jcavitt)
- Fix kernel docs and comments (Michal W)

v13:
- Move xe and user engine class mapping arrays to header (John H)

v14:
- Fix double locking issue (Jianxun)
- Use size_t instead of int (Raag)
- Remove unnecessary includes (jcavitt)

v15:
- Do not report faults from reserved engines (Jianxun)

v16:
- Remove engine class and instance (Ivan)

v17:
- Map access type, fault type, and fault level to user macros (Matt
  Brost, Ivan)

v18:
- Add uAPI merge request to this cover letter

v19:
- Perform kzalloc outside of lock (Auld)

v20:
- Fix inconsistent use of whitespace in defines

v21:
- Remove unnecessary size assertion (jcavitt)

v22:
- Fix xe_vm_fault_entry kernel docs (Shuicheng)

v23:
- Nit fixes (Matt Brost)

v24:
- s/xe_pagefault_desc.h/xe_guc_pagefault_desc.h (Dafna)
- Move PF_MSG_LEN_DW to regs folder (Dafna)

v25:
- Revert changes from last revision (John H)
- Add missing bspec (Michal W)

v26:
- Rebase on top of latest change to xe_pagefault layer (jcavitt)

v27:
- Apply max line length (Matt Brost)
- Correctly ignore fault mode in save_pagefault_to_vm (jcavitt)

v28:
- Do not copy_to_user in critical section (Matt Brost)
- Assert args->size is multiple of sizeof(struct xe_vm_fault) (Matt
  Brost)
- s/save_pagefault_to_vm/xe_pagefault_save_to_vm (Matt Brost)
- Use guard instead of spin_lock/unlock (Matt Brost)
- GT was added to xe_pagefault struct.  Use xe_gt_hw_engine
  instead of creating a new helper function (Matt Brost)

v29:
- Track address precision separately and report it accurately (Matt
  Brost)
- Remove unnecessary memset (Matt Brost)

v30:
- Keep u8 values together (Matt Brost)

v31:
- Rebase (jcavitt)

v32:
- Rebase (jcavitt)

v33:
- Rebase (jcavitt)

v34:
- Rebase (jcavitt)
- Save space for future expansion in pagefault struct (Matt Brost)

v35:
- Revert v34
- Rebase (jcavitt)
- Remove fixed value addr_precision (Matt Brost)
  - Since address precision is fixed, remove debug print statement as
    well (jcavitt)

v36:
- Rebase (jcavitt)
- s/Refactor/Rebase where needed in cover letter (jcavitt)

v37:
- Report canonicalized address (Jose)
- Mask out prefetch data from access type (Jose, jcavitt)

v38:
- Clarify address is canonical (José)

v39:
- s/uAPI/Link in the commit log links
- Use kzalloc_obj
- Fix debug parameter alignment

Link: https://github.com/intel/compute-runtime/pull/878
Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Suggested-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Cc: Zhang Jianxun <jianxun.zhang@intel.com>
Cc: Shuicheng Lin <shuicheng.lin@intel.com>
Cc: Michal Wajdeczko <Michal.Wajdeczko@intel.com>
Cc: Michal Mrozek <michal.mrozek@intel.com>
Cc: Raag Jadav <raag.jadav@intel.com>
Cc: John Harrison <john.c.harrison@intel.com>
Cc: Ivan Briano <ivan.briano@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Dafna Hirschfeld <dafna.hirschfeld@intel.com>
Cc: Jose Souza <jose.souza@intel.com>

Jonathan Cavitt (4):
  drm/xe/xe_pagefault: Disallow writes to read-only VMAs
  drm/xe/uapi: Define drm_xe_vm_get_property
  drm/xe/xe_vm: Add per VM fault info
  drm/xe/xe_vm: Implement xe_vm_get_property_ioctl

 drivers/gpu/drm/xe/xe_device.c    |   2 +
 drivers/gpu/drm/xe/xe_pagefault.c |  32 +++++
 drivers/gpu/drm/xe/xe_vm.c        | 191 ++++++++++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_vm.h        |  12 ++
 drivers/gpu/drm/xe/xe_vm_types.h  |  29 +++++
 include/uapi/drm/xe_drm.h         |  86 ++++++++++++++
 6 files changed, 352 insertions(+)

-- 
2.43.0


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v39 1/4] drm/xe/xe_pagefault: Disallow writes to read-only VMAs
  2026-03-24 15:29 [PATCH v39 0/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
@ 2026-03-24 15:29 ` Jonathan Cavitt
  2026-03-24 20:46   ` Claude review: " Claude Code Review Bot
  2026-03-24 15:29 ` [PATCH v39 2/4] drm/xe/uapi: Define drm_xe_vm_get_property Jonathan Cavitt
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 14+ messages in thread
From: Jonathan Cavitt @ 2026-03-24 15:29 UTC (permalink / raw)
  To: intel-xe
  Cc: saurabhg.gupta, alex.zuo, jonathan.cavitt, joonas.lahtinen,
	matthew.brost, jianxun.zhang, shuicheng.lin, dri-devel,
	Michal.Wajdeczko, michal.mrozek, raag.jadav, ivan.briano,
	matthew.auld, thomas.hellstrom

The page fault handler should reject write/atomic access to read-only
VMAs.  Add code to handle this in xe_pagefault_service after the VMA
lookup.

v2:
- Apply max line length (Matthew)

Fixes: fb544b844508 ("drm/xe: Implement xe_pagefault_queue_work")
Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Cc: Shuicheng Lin <shuicheng.lin@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_pagefault.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_pagefault.c b/drivers/gpu/drm/xe/xe_pagefault.c
index ea4857acf28d..918d595d1c1b 100644
--- a/drivers/gpu/drm/xe/xe_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_pagefault.c
@@ -187,6 +187,12 @@ static int xe_pagefault_service(struct xe_pagefault *pf)
 		goto unlock_vm;
 	}
 
+	if (xe_vma_read_only(vma) &&
+	    pf->consumer.access_type != XE_PAGEFAULT_ACCESS_TYPE_READ) {
+		err = -EPERM;
+		goto unlock_vm;
+	}
+
 	atomic = xe_pagefault_access_is_atomic(pf->consumer.access_type);
 
 	if (xe_vma_is_cpu_addr_mirror(vma))
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v39 2/4] drm/xe/uapi: Define drm_xe_vm_get_property
  2026-03-24 15:29 [PATCH v39 0/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
  2026-03-24 15:29 ` [PATCH v39 1/4] drm/xe/xe_pagefault: Disallow writes to read-only VMAs Jonathan Cavitt
@ 2026-03-24 15:29 ` Jonathan Cavitt
  2026-03-24 20:46   ` Claude review: " Claude Code Review Bot
  2026-03-24 15:29 ` [PATCH v39 3/4] drm/xe/xe_vm: Add per VM fault info Jonathan Cavitt
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 14+ messages in thread
From: Jonathan Cavitt @ 2026-03-24 15:29 UTC (permalink / raw)
  To: intel-xe
  Cc: saurabhg.gupta, alex.zuo, jonathan.cavitt, joonas.lahtinen,
	matthew.brost, jianxun.zhang, shuicheng.lin, dri-devel,
	Michal.Wajdeczko, michal.mrozek, raag.jadav, ivan.briano,
	matthew.auld, thomas.hellstrom

Add initial declarations for the drm_xe_vm_get_property ioctl.

v2:
- Expand kernel docs for drm_xe_vm_get_property (Jianxun)

v3:
- Remove address type external definitions (Jianxun)
- Add fault type to xe_drm_fault struct (Jianxun)

v4:
- Remove engine class and instance (Ivan)

v5:
- Add declares for fault type, access type, and fault level (Matt Brost,
  Ivan)

v6:
- Fix inconsistent use of whitespace in defines

v7:
- Rebase and refactor (jcavitt)

v8:
- Rebase (jcavitt)

v9:
- Clarify address is canonical (José)

v10:
- s/uAPI/Link in the commit log links

Link: https://github.com/intel/compute-runtime/pull/878
Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Reviewed-by: Shuicheng Lin <shuicheng.lin@intel.com>
Acked-by: Matthew Brost <matthew.brost@intel.com>
Acked-by: Ivan Briano <ivan.briano@intel.com>
Acked-by: José Roberto de Souza <jose.souza@intel.com>
Cc: Zhang Jianxun <jianxun.zhang@intel.com>
Cc: Ivan Briano <ivan.briano@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
---
 include/uapi/drm/xe_drm.h | 86 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 86 insertions(+)

diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index 7014dde1c9c4..6c99514a85e1 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -83,6 +83,7 @@ extern "C" {
  *  - &DRM_IOCTL_XE_OBSERVATION
  *  - &DRM_IOCTL_XE_MADVISE
  *  - &DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS
+ *  - &DRM_IOCTL_XE_VM_GET_PROPERTY
  */
 
 /*
@@ -107,6 +108,7 @@ extern "C" {
 #define DRM_XE_MADVISE			0x0c
 #define DRM_XE_VM_QUERY_MEM_RANGE_ATTRS	0x0d
 #define DRM_XE_EXEC_QUEUE_SET_PROPERTY	0x0e
+#define DRM_XE_VM_GET_PROPERTY		0x0f
 
 /* Must be kept compact -- no holes */
 
@@ -125,6 +127,7 @@ extern "C" {
 #define DRM_IOCTL_XE_MADVISE			DRM_IOW(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
 #define DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS	DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_MEM_RANGE_ATTRS, struct drm_xe_vm_query_mem_range_attr)
 #define DRM_IOCTL_XE_EXEC_QUEUE_SET_PROPERTY	DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC_QUEUE_SET_PROPERTY, struct drm_xe_exec_queue_set_property)
+#define DRM_IOCTL_XE_VM_GET_PROPERTY		DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_GET_PROPERTY, struct drm_xe_vm_get_property)
 
 /**
  * DOC: Xe IOCTL Extensions
@@ -1263,6 +1266,89 @@ struct drm_xe_vm_bind {
 	__u64 reserved[2];
 };
 
+/** struct xe_vm_fault - Describes faults for %DRM_XE_VM_GET_PROPERTY_FAULTS */
+struct xe_vm_fault {
+	/** @address: Canonical address of the fault */
+	__u64 address;
+	/** @address_precision: Precision of faulted address */
+	__u32 address_precision;
+	/** @access_type: Type of address access that resulted in fault */
+#define FAULT_ACCESS_TYPE_READ		0
+#define FAULT_ACCESS_TYPE_WRITE		1
+#define FAULT_ACCESS_TYPE_ATOMIC	2
+	__u8 access_type;
+	/** @fault_type: Type of fault reported */
+#define FAULT_TYPE_NOT_PRESENT		0
+#define FAULT_TYPE_WRITE_ACCESS		1
+#define FAULT_TYPE_ATOMIC_ACCESS	2
+	__u8 fault_type;
+	/** @fault_level: fault level of the fault */
+#define FAULT_LEVEL_PTE		0
+#define FAULT_LEVEL_PDE		1
+#define FAULT_LEVEL_PDP		2
+#define FAULT_LEVEL_PML4	3
+#define FAULT_LEVEL_PML5	4
+	__u8 fault_level;
+	/** @pad: MBZ */
+	__u8 pad;
+	/** @reserved: MBZ */
+	__u64 reserved[4];
+};
+
+/**
+ * struct drm_xe_vm_get_property - Input of &DRM_IOCTL_XE_VM_GET_PROPERTY
+ *
+ * The user provides a VM and a property to query among DRM_XE_VM_GET_PROPERTY_*,
+ * and sets the values in the vm_id and property members, respectively.  This
+ * determines both the VM to get the property of, as well as the property to
+ * report.
+ *
+ * If size is set to 0, the driver fills it with the required size for the
+ * requested property.  The user is expected here to allocate memory for the
+ * property structure and to provide a pointer to the allocated memory using the
+ * data member.  For some properties, this may be zero, in which case, the
+ * value of the property will be saved to the value member and size will remain
+ * zero on return.
+ *
+ * If size is not zero, then the IOCTL will attempt to copy the requested
+ * property into the data member.
+ *
+ * The IOCTL will return -ENOENT if the VM could not be identified from the
+ * provided VM ID, or -EINVAL if the IOCTL fails for any other reason, such as
+ * providing an invalid size for the given property or if the property data
+ * could not be copied to the memory allocated to the data member.
+ *
+ * The property member can be:
+ *  - %DRM_XE_VM_GET_PROPERTY_FAULTS
+ */
+struct drm_xe_vm_get_property {
+	/** @extensions: Pointer to the first extension struct, if any */
+	__u64 extensions;
+
+	/** @vm_id: The ID of the VM to query the properties of */
+	__u32 vm_id;
+
+#define DRM_XE_VM_GET_PROPERTY_FAULTS		0
+	/** @property: property to get */
+	__u32 property;
+
+	/** @size: Size to allocate for @data */
+	__u32 size;
+
+	/** @pad: MBZ */
+	__u32 pad;
+
+	union {
+		/** @data: Pointer to user-defined array of flexible size and type */
+		__u64 data;
+		/** @value: Return value for scalar queries */
+		__u64 value;
+	};
+
+	/** @reserved: MBZ */
+	__u64 reserved[3];
+};
+
 /**
  * struct drm_xe_exec_queue_create - Input of &DRM_IOCTL_XE_EXEC_QUEUE_CREATE
  *
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v39 3/4] drm/xe/xe_vm: Add per VM fault info
  2026-03-24 15:29 [PATCH v39 0/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
  2026-03-24 15:29 ` [PATCH v39 1/4] drm/xe/xe_pagefault: Disallow writes to read-only VMAs Jonathan Cavitt
  2026-03-24 15:29 ` [PATCH v39 2/4] drm/xe/uapi: Define drm_xe_vm_get_property Jonathan Cavitt
@ 2026-03-24 15:29 ` Jonathan Cavitt
  2026-03-24 20:46   ` Claude review: " Claude Code Review Bot
  2026-03-24 15:29 ` [PATCH v39 4/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
  2026-03-24 20:46 ` Claude Code Review Bot
  4 siblings, 1 reply; 14+ messages in thread
From: Jonathan Cavitt @ 2026-03-24 15:29 UTC (permalink / raw)
  To: intel-xe
  Cc: saurabhg.gupta, alex.zuo, jonathan.cavitt, joonas.lahtinen,
	matthew.brost, jianxun.zhang, shuicheng.lin, dri-devel,
	Michal.Wajdeczko, michal.mrozek, raag.jadav, ivan.briano,
	matthew.auld, thomas.hellstrom

Add additional information to each VM so it can report up to the first
50 faults it has seen.  Only pagefaults are saved this way currently,
though in the future all fault types should be tracked by the VM for
reporting.

Additionally, only failed pagefaults are saved, since successful
pagefaults recover silently and need not be reported to userspace.

v2:
- Free vm after use (Shuicheng)
- Compress pf copy logic (Shuicheng)
- Update fault_unsuccessful before storing (Shuicheng)
- Fix old struct name in comments (Shuicheng)
- Keep first 50 pagefaults instead of last 50 (Jianxun)

v3:
- Avoid unnecessary execution by checking MAX_PFS earlier (jcavitt)
- Fix double-locking error (jcavitt)
- Assert kmemdup is successful (Shuicheng)

v4:
- Rename xe_vm.pfs to xe_vm.faults (jcavitt)
- Store fault data and not pagefault in xe_vm faults list (jcavitt)
- Store address, address type, and address precision per fault (jcavitt)
- Store engine class and instance data per fault (Jianxun)
- Add and fix kernel docs (Michal W)
- Properly handle kzalloc error (Michal W)
- s/MAX_PFS/MAX_FAULTS_SAVED_PER_VM (Michal W)
- Store fault level per fault (Michal M)

v5:
- Store fault and access type instead of address type (Jianxun)

v6:
- Store pagefaults in non-fault-mode VMs as well (Jianxun)

v7:
- Fix kernel docs and comments (Michal W)

v8:
- Fix double-locking issue (Jianxun)

v9:
- Do not report faults from reserved engines (Jianxun)

v10:
- Remove engine class and instance (Ivan)

v11:
- Perform kzalloc outside of lock (Auld)

v12:
- Fix xe_vm_fault_entry kernel docs (Shuicheng)

v13:
- Rebase and refactor (jcavitt)

v14:
- Correctly ignore fault mode in save_pagefault_to_vm (jcavitt)

v15:
- s/save_pagefault_to_vm/xe_pagefault_save_to_vm (Matt Brost)
- Use guard instead of spin_lock/unlock (Matt Brost)
- GT was added to xe_pagefault struct.  Use xe_gt_hw_engine
  instead of creating a new helper function (Matt Brost)

v16:
- Set address precision programmatically (Matt Brost)

v17:
- Set address precision to fixed value (Matt Brost)

v18:
- s/uAPI/Link in commit log links
- Use kzalloc_obj

Link: https://github.com/intel/compute-runtime/pull/878
Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Cc: Shuicheng Lin <shuicheng.lin@intel.com>
Cc: Jianxun Zhang <jianxun.zhang@intel.com>
Cc: Michal Wajdeczko <Michal.Wajdeczko@intel.com>
Cc: Michal Mrozek <michal.mrozek@intel.com>
Cc: Ivan Briano <ivan.briano@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_pagefault.c | 26 +++++++++++
 drivers/gpu/drm/xe/xe_vm.c        | 74 +++++++++++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_vm.h        |  9 ++++
 drivers/gpu/drm/xe/xe_vm_types.h  | 29 ++++++++++++
 4 files changed, 138 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_pagefault.c b/drivers/gpu/drm/xe/xe_pagefault.c
index 918d595d1c1b..2fd55d7c98f9 100644
--- a/drivers/gpu/drm/xe/xe_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_pagefault.c
@@ -250,6 +250,31 @@ static void xe_pagefault_print(struct xe_pagefault *pf)
 		   pf->consumer.engine_instance);
 }
 
+static void xe_pagefault_save_to_vm(struct xe_device *xe, struct xe_pagefault *pf)
+{
+	struct xe_vm *vm;
+
+	/*
+	 * Pagefault may be asociated to VM that is not in fault mode.
+	 * Perform asid_to_vm behavior, except if VM is not in fault
+	 * mode, return VM anyways.
+	 */
+	down_read(&xe->usm.lock);
+	vm = xa_load(&xe->usm.asid_to_vm, pf->consumer.asid);
+	if (vm)
+		xe_vm_get(vm);
+	else
+		vm = ERR_PTR(-EINVAL);
+	up_read(&xe->usm.lock);
+
+	if (IS_ERR(vm))
+		return;
+
+	xe_vm_add_fault_entry_pf(vm, pf);
+
+	xe_vm_put(vm);
+}
+
 static void xe_pagefault_queue_work(struct work_struct *w)
 {
 	struct xe_pagefault_queue *pf_queue =
@@ -268,6 +293,7 @@ static void xe_pagefault_queue_work(struct work_struct *w)
 
 		err = xe_pagefault_service(&pf);
 		if (err) {
+			xe_pagefault_save_to_vm(gt_to_xe(pf.gt), &pf);
 			if (!(pf.consumer.access_type & XE_PAGEFAULT_ACCESS_PREFETCH)) {
 				xe_pagefault_print(&pf);
 				xe_gt_info(pf.gt, "Fault response: Unsuccessful %pe\n",
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index e24436287786..682ead62b34a 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -27,6 +27,7 @@
 #include "xe_device.h"
 #include "xe_drm_client.h"
 #include "xe_exec_queue.h"
+#include "xe_gt.h"
 #include "xe_migrate.h"
 #include "xe_pat.h"
 #include "xe_pm.h"
@@ -577,6 +578,74 @@ static void preempt_rebind_work_func(struct work_struct *w)
 	trace_xe_vm_rebind_worker_exit(vm);
 }
 
+/**
+ * xe_vm_add_fault_entry_pf() - Add pagefault to vm fault list
+ * @vm: The VM.
+ * @pf: The pagefault.
+ *
+ * This function takes the data from the pagefault @pf and saves it to @vm->faults.list.
+ *
+ * The function exits silently if the list is full, and reports a warning if the pagefault
+ * could not be saved to the list.
+ */
+void xe_vm_add_fault_entry_pf(struct xe_vm *vm, struct xe_pagefault *pf)
+{
+	struct xe_vm_fault_entry *e;
+	struct xe_hw_engine *hwe;
+
+	/* Do not report faults on reserved engines */
+	hwe = xe_gt_hw_engine(pf->gt, pf->consumer.engine_class,
+			      pf->consumer.engine_instance, false);
+	if (!hwe || xe_hw_engine_is_reserved(hwe))
+		return;
+
+	e = kzalloc_obj(*e);
+	if (!e) {
+		drm_warn(&vm->xe->drm,
+			 "Could not allocate memory for fault!\n");
+		return;
+	}
+
+	guard(spinlock)(&vm->faults.lock);
+
+	/*
+	 * Limit the number of faults in the fault list to prevent
+	 * memory overuse.
+	 */
+	if (vm->faults.len >= MAX_FAULTS_SAVED_PER_VM) {
+		kfree(e);
+		return;
+	}
+
+	e->address = pf->consumer.page_addr;
+	/*
+	 * TODO:
+	 * Address precision is currently always SZ_4K, but this may change
+	 * in the future.
+	 */
+	e->address_precision = SZ_4K;
+	e->access_type = pf->consumer.access_type;
+	e->fault_type = FIELD_GET(XE_PAGEFAULT_TYPE_MASK,
+				  pf->consumer.fault_type_level),
+	e->fault_level = FIELD_GET(XE_PAGEFAULT_LEVEL_MASK,
+				   pf->consumer.fault_type_level),
+
+	list_add_tail(&e->list, &vm->faults.list);
+	vm->faults.len++;
+}
+
+static void xe_vm_clear_fault_entries(struct xe_vm *vm)
+{
+	struct xe_vm_fault_entry *e, *tmp;
+
+	guard(spinlock)(&vm->faults.lock);
+	list_for_each_entry_safe(e, tmp, &vm->faults.list, list) {
+		list_del(&e->list);
+		kfree(e);
+	}
+	vm->faults.len = 0;
+}
+
 static int xe_vma_ops_alloc(struct xe_vma_ops *vops, bool array_of_binds)
 {
 	int i;
@@ -1538,6 +1607,9 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags, struct xe_file *xef)
 	INIT_LIST_HEAD(&vm->userptr.invalidated);
 	spin_lock_init(&vm->userptr.invalidated_lock);
 
+	INIT_LIST_HEAD(&vm->faults.list);
+	spin_lock_init(&vm->faults.lock);
+
 	ttm_lru_bulk_move_init(&vm->lru_bulk_move);
 
 	INIT_WORK(&vm->destroy_work, vm_destroy_work_func);
@@ -1854,6 +1926,8 @@ void xe_vm_close_and_put(struct xe_vm *vm)
 	}
 	up_write(&xe->usm.lock);
 
+	xe_vm_clear_fault_entries(vm);
+
 	for_each_tile(tile, xe, id)
 		xe_range_fence_tree_fini(&vm->rftree[id]);
 
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 0bc7ed23eeae..42767d2aebac 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -12,6 +12,12 @@
 #include "xe_map.h"
 #include "xe_vm_types.h"
 
+/**
+ * MAX_FAULTS_SAVED_PER_VM - Maximum number of faults each vm can store before future
+ * faults are discarded to prevent memory overuse
+ */
+#define MAX_FAULTS_SAVED_PER_VM	50
+
 struct drm_device;
 struct drm_printer;
 struct drm_file;
@@ -22,6 +28,7 @@ struct dma_fence;
 
 struct xe_exec_queue;
 struct xe_file;
+struct xe_pagefault;
 struct xe_sync_entry;
 struct xe_svm_range;
 struct drm_exec;
@@ -318,6 +325,8 @@ void xe_vm_snapshot_capture_delayed(struct xe_vm_snapshot *snap);
 void xe_vm_snapshot_print(struct xe_vm_snapshot *snap, struct drm_printer *p);
 void xe_vm_snapshot_free(struct xe_vm_snapshot *snap);
 
+void xe_vm_add_fault_entry_pf(struct xe_vm *vm, struct xe_pagefault *pf);
+
 /**
  * xe_vm_set_validating() - Register this task as currently making bos resident
  * @allow_res_evict: Allow eviction of buffer objects bound to @vm when
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 69e80c94138a..d5e48203b57e 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -24,6 +24,7 @@
 struct drm_pagemap;
 
 struct xe_bo;
+struct xe_pagefault;
 struct xe_svm_range;
 struct xe_sync_entry;
 struct xe_user_fence;
@@ -176,6 +177,24 @@ struct xe_userptr_vma {
 
 struct xe_device;
 
+/**
+ * struct xe_vm_fault_entry - Elements of vm->faults.list
+ * @list: link into @xe_vm.faults.list
+ * @address: address of the fault
+ * @address_precision: precision of faulted address
+ * @access_type: type of address access that resulted in fault
+ * @fault_type: type of fault reported
+ * @fault_level: fault level of the fault
+ */
+struct xe_vm_fault_entry {
+	struct list_head list;
+	u64 address;
+	u32 address_precision;
+	u8 access_type;
+	u8 fault_type;
+	u8 fault_level;
+};
+
 struct xe_vm {
 	/** @gpuvm: base GPUVM used to track VMAs */
 	struct drm_gpuvm gpuvm;
@@ -333,6 +352,16 @@ struct xe_vm {
 		bool capture_once;
 	} error_capture;
 
+	/** @faults: List of all faults associated with this VM */
+	struct {
+		/** @faults.lock: lock protecting @faults.list */
+		spinlock_t lock;
+		/** @faults.list: list of xe_vm_fault_entry entries */
+		struct list_head list;
+		/** @faults.len: length of @faults.list */
+		unsigned int len;
+	} faults;
+
 	/**
 	 * @validation: Validation data only valid with the vm resv held.
 	 * Note: This is really task state of the task holding the vm resv,
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v39 4/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl
  2026-03-24 15:29 [PATCH v39 0/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
                   ` (2 preceding siblings ...)
  2026-03-24 15:29 ` [PATCH v39 3/4] drm/xe/xe_vm: Add per VM fault info Jonathan Cavitt
@ 2026-03-24 15:29 ` Jonathan Cavitt
  2026-03-24 20:46   ` Claude review: " Claude Code Review Bot
  2026-03-24 20:46 ` Claude Code Review Bot
  4 siblings, 1 reply; 14+ messages in thread
From: Jonathan Cavitt @ 2026-03-24 15:29 UTC (permalink / raw)
  To: intel-xe
  Cc: saurabhg.gupta, alex.zuo, jonathan.cavitt, joonas.lahtinen,
	matthew.brost, jianxun.zhang, shuicheng.lin, dri-devel,
	Michal.Wajdeczko, michal.mrozek, raag.jadav, ivan.briano,
	matthew.auld, thomas.hellstrom

Add support for userspace to request a list of observed faults
from a specified VM.

v2:
- Only allow querying of failed pagefaults (Matt Brost)

v3:
- Remove unnecessary size parameter from helper function, as it
  is a property of the arguments. (jcavitt)
- Remove unnecessary copy_from_user (Jianxun)
- Set address_precision to 1 (Jianxun)
- Report max size instead of dynamic size for memory allocation
  purposes.  Total memory usage is reported separately.

v4:
- Return int from xe_vm_get_property_size (Shuicheng)
- Fix memory leak (Shuicheng)
- Remove unnecessary size variable (jcavitt)

v5:
- Rename ioctl to xe_vm_get_faults_ioctl (jcavitt)
- Update fill_property_pfs to eliminate need for kzalloc (Jianxun)

v6:
- Repair and move fill_faults break condition (Dan Carpenter)
- Free vm after use (jcavitt)
- Combine assertions (jcavitt)
- Expand size check in xe_vm_get_faults_ioctl (jcavitt)
- Remove return mask from fill_faults, as return is already -EFAULT or 0
  (jcavitt)

v7:
- Revert back to using xe_vm_get_property_ioctl
- Apply better copy_to_user logic (jcavitt)

v8:
- Fix and clean up error value handling in ioctl (jcavitt)
- Reapply return mask for fill_faults (jcavitt)

v9:
- Future-proof size logic for zero-size properties (jcavitt)
- Add access and fault types (Jianxun)
- Remove address type (Jianxun)

v10:
- Remove unnecessary switch case logic (Raag)
- Compress size get, size validation, and property fill functions into a
  single helper function (jcavitt)
- Assert valid size (jcavitt)

v11:
- Remove unnecessary else condition
- Correct backwards helper function size logic (jcavitt)

v12:
- Use size_t instead of int (Raag)

v13:
- Remove engine class and instance (Ivan)

v14:
- Map access type, fault type, and fault level to user macros (Matt
  Brost, Ivan)

v15:
- Remove unnecessary size assertion (jcavitt)

v16:
- Nit fixes (Matt Brost)

v17:
- Rebase and refactor (jcavitt)

v18:
- Do not copy_to_user in critical section (Matt Brost)
- Assert args->size is multiple of sizeof(struct xe_vm_fault) (Matt
  Brost)

v19:
- Remove unnecessary memset (Matt Brost)

v20:
- Report canonicalized address (Jose)
- Mask out prefetch data from access type (Jose, jcavitt)

v21:
- s/uAPI/Link/ in the commit log links
- Align debug parameters

Link: https://github.com/intel/compute-runtime/pull/878
Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Acked-by: Michal Mrozek <michal.mrozek@intel.com>
Cc: Jianxun Zhang <jianxun.zhang@intel.com>
Cc: Shuicheng Lin <shuicheng.lin@intel.com>
Cc: Raag Jadav <raag.jadav@intel.com>
Cc: Ivan Briano <ivan.briano@intel.com>
Cc: Jose Souza <jose.souza@intel.com>
---
 drivers/gpu/drm/xe/xe_device.c |   2 +
 drivers/gpu/drm/xe/xe_vm.c     | 117 +++++++++++++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_vm.h     |   3 +
 3 files changed, 122 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index c8e91ff464a7..b17d4a878686 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -211,6 +211,8 @@ static const struct drm_ioctl_desc xe_ioctls[] = {
 			  DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(XE_EXEC_QUEUE_SET_PROPERTY, xe_exec_queue_set_property_ioctl,
 			  DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(XE_VM_GET_PROPERTY, xe_vm_get_property_ioctl,
+			  DRM_RENDER_ALLOW),
 };
 
 static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 682ead62b34a..4660d1132f28 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -3960,6 +3960,123 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 	return err;
 }
 
+/*
+ * Map access type, fault type, and fault level from current bspec
+ * specification to user spec abstraction.  The current mapping is
+ * approximately 1-to-1, with access type being the only notable
+ * exception as it carries additional data with respect to prefetch
+ * status that needs to be masked out.
+ */
+static u8 xe_to_user_access_type(u8 access_type)
+{
+	return access_type & XE_PAGEFAULT_ACCESS_TYPE_MASK;
+}
+
+static u8 xe_to_user_fault_type(u8 fault_type)
+{
+	return fault_type;
+}
+
+static u8 xe_to_user_fault_level(u8 fault_level)
+{
+	return fault_level;
+}
+
+static int fill_faults(struct xe_vm *vm,
+		       struct drm_xe_vm_get_property *args)
+{
+	struct xe_vm_fault __user *usr_ptr = u64_to_user_ptr(args->data);
+	struct xe_vm_fault *fault_list, fault_entry = { 0 };
+	struct xe_vm_fault_entry *entry;
+	int ret = 0, i = 0, count, entry_size;
+
+	entry_size = sizeof(struct xe_vm_fault);
+	count = args->size / entry_size;
+
+	fault_list = kcalloc(count, sizeof(struct xe_vm_fault), GFP_KERNEL);
+	if (!fault_list)
+		return -ENOMEM;
+
+	spin_lock(&vm->faults.lock);
+	list_for_each_entry(entry, &vm->faults.list, list) {
+		if (i == count)
+			break;
+
+		fault_entry.address = xe_device_canonicalize_addr(vm->xe, entry->address);
+		fault_entry.address_precision = entry->address_precision;
+
+		fault_entry.access_type = xe_to_user_access_type(entry->access_type);
+		fault_entry.fault_type = xe_to_user_fault_type(entry->fault_type);
+		fault_entry.fault_level = xe_to_user_fault_level(entry->fault_level);
+
+		memcpy(&fault_list[i], &fault_entry, entry_size);
+
+		i++;
+	}
+	spin_unlock(&vm->faults.lock);
+
+	ret = copy_to_user(usr_ptr, fault_list, args->size);
+
+	kfree(fault_list);
+	return ret ? -EFAULT : 0;
+}
+
+static int xe_vm_get_property_helper(struct xe_vm *vm,
+				     struct drm_xe_vm_get_property *args)
+{
+	size_t size;
+
+	switch (args->property) {
+	case DRM_XE_VM_GET_PROPERTY_FAULTS:
+		spin_lock(&vm->faults.lock);
+		size = size_mul(sizeof(struct xe_vm_fault), vm->faults.len);
+		spin_unlock(&vm->faults.lock);
+
+		if (!args->size) {
+			args->size = size;
+			return 0;
+		}
+
+		/*
+		 * Number of faults may increase between calls to
+		 * xe_vm_get_property_ioctl, so just report the number of
+		 * faults the user requests if it's less than or equal to
+		 * the number of faults in the VM fault array.
+		 *
+		 * We should also at least assert that the args->size value
+		 * is a multiple of the xe_vm_fault struct size.
+		 */
+		if (args->size > size || args->size % sizeof(struct xe_vm_fault))
+			return -EINVAL;
+
+		return fill_faults(vm, args);
+	}
+	return -EINVAL;
+}
+
+int xe_vm_get_property_ioctl(struct drm_device *drm, void *data,
+			     struct drm_file *file)
+{
+	struct xe_device *xe = to_xe_device(drm);
+	struct xe_file *xef = to_xe_file(file);
+	struct drm_xe_vm_get_property *args = data;
+	struct xe_vm *vm;
+	int ret = 0;
+
+	if (XE_IOCTL_DBG(xe, (args->reserved[0] || args->reserved[1] ||
+			      args->reserved[2])))
+		return -EINVAL;
+
+	vm = xe_vm_lookup(xef, args->vm_id);
+	if (XE_IOCTL_DBG(xe, !vm))
+		return -ENOENT;
+
+	ret = xe_vm_get_property_helper(vm, args);
+
+	xe_vm_put(vm);
+	return ret;
+}
+
 /**
  * xe_vm_bind_kernel_bo - bind a kernel BO to a VM
  * @vm: VM to bind the BO to
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 42767d2aebac..c5b900f38ded 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -210,6 +210,9 @@ int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
 int xe_vm_bind_ioctl(struct drm_device *dev, void *data,
 		     struct drm_file *file);
 int xe_vm_query_vmas_attrs_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
+int xe_vm_get_property_ioctl(struct drm_device *dev, void *data,
+			     struct drm_file *file);
+
 void xe_vm_close_and_put(struct xe_vm *vm);
 
 static inline bool xe_vm_in_fault_mode(struct xe_vm *vm)
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Claude review: drm/xe/xe_vm: Implement xe_vm_get_property_ioctl
  2026-03-24 15:29 [PATCH v39 0/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
                   ` (3 preceding siblings ...)
  2026-03-24 15:29 ` [PATCH v39 4/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
@ 2026-03-24 20:46 ` Claude Code Review Bot
  4 siblings, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-03-24 20:46 UTC (permalink / raw)
  To: dri-devel-reviews

Overall Series Review

Subject: drm/xe/xe_vm: Implement xe_vm_get_property_ioctl
Author: Jonathan Cavitt <jonathan.cavitt@intel.com>
Patches: 5
Reviewed: 2026-03-25T06:46:43.451179

---

This v39 series adds a new `DRM_IOCTL_XE_VM_GET_PROPERTY` ioctl to allow userspace to query up to 50 failed page faults per VM. The series is well-structured across 4 patches: a preparatory bug fix, uAPI definitions, internal fault storage, and the ioctl implementation. After 39 revisions, the code is generally clean, but there are several issues worth flagging — most notably missing validation of the `pad` field in the ioctl, comma-instead-of-semicolon bugs, a TOCTOU race in the size check, and the `copy_to_user` copying more data than was actually populated.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Claude review: drm/xe/xe_pagefault: Disallow writes to read-only VMAs
  2026-03-24 15:29 ` [PATCH v39 1/4] drm/xe/xe_pagefault: Disallow writes to read-only VMAs Jonathan Cavitt
@ 2026-03-24 20:46   ` Claude Code Review Bot
  0 siblings, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-03-24 20:46 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

Clean, straightforward bug fix. Has a `Reviewed-by` and `Fixes` tag.

```c
if (xe_vma_read_only(vma) &&
    pf->consumer.access_type != XE_PAGEFAULT_ACCESS_TYPE_READ) {
    err = -EPERM;
    goto unlock_vm;
}
```

This correctly catches both WRITE and ATOMIC access to read-only VMAs. The placement after the VMA lookup and before the atomic check is appropriate. No issues.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Claude review: drm/xe/uapi: Define drm_xe_vm_get_property
  2026-03-24 15:29 ` [PATCH v39 2/4] drm/xe/uapi: Define drm_xe_vm_get_property Jonathan Cavitt
@ 2026-03-24 20:46   ` Claude Code Review Bot
  0 siblings, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-03-24 20:46 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**uAPI naming concern:** The `struct xe_vm_fault` does not follow the `drm_xe_` prefix convention used by all other uAPI structs in this header (e.g., `drm_xe_vm_create`, `drm_xe_vm_bind`). Similarly, the `FAULT_ACCESS_TYPE_*`, `FAULT_TYPE_*`, and `FAULT_LEVEL_*` defines lack a `DRM_XE_` prefix, which risks namespace collisions. Every other define in this header uses `DRM_XE_` prefixing. This is a uAPI that will be set in stone once merged — the naming should be consistent.

**Struct padding/alignment:** `struct xe_vm_fault` lays out `__u64 address`, then `__u32 address_precision`, then three `__u8` fields plus one `__u8 pad`, then `__u64 reserved[4]`. The `__u32` and the four `__u8` fields together occupy exactly the 8 bytes following the leading `__u64`, so `reserved` lands at offset 16 and there is no implicit padding anywhere in the struct. Good.

**`data`/`value` union:** The union of `data` and `value` in `drm_xe_vm_get_property` is reasonable for supporting both pointer and scalar property types, though currently only `data` (pointer) is used.

**Size field is `__u32`:** The `size` field is `__u32`, which limits the maximum reportable faults. With `sizeof(struct xe_vm_fault)` being 48 bytes and MAX_FAULTS_SAVED_PER_VM=50, the max is 2400 bytes — well within `__u32` range. Fine, but somewhat inconsistent with other ioctls that use `__u64` for sizes.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Claude review: drm/xe/xe_vm: Add per VM fault info
  2026-03-24 15:29 ` [PATCH v39 3/4] drm/xe/xe_vm: Add per VM fault info Jonathan Cavitt
@ 2026-03-24 20:46   ` Claude Code Review Bot
  0 siblings, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-03-24 20:46 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**Comma instead of semicolon (bug):** In `xe_vm_add_fault_entry_pf`:
```c
e->fault_type = FIELD_GET(XE_PAGEFAULT_TYPE_MASK,
              pf->consumer.fault_type_level),
e->fault_level = FIELD_GET(XE_PAGEFAULT_LEVEL_MASK,
               pf->consumer.fault_type_level),
```
Both lines end with `,` instead of `;`. While this compiles (the comma operator evaluates left-to-right and discards the result of the left operand), it's clearly a typo. The `fault_type` assignment becomes the left operand of a comma expression whose result is the `fault_level` assignment. In practice this works, but it's wrong and confusing. These should be semicolons.

**Typo in comment:** "asociated" should be "associated" in `xe_pagefault_save_to_vm`.

**Duplicate forward declaration:** `struct xe_pagefault;` is forward-declared in both `xe_vm_types.h` and `xe_vm.h`. The one in `xe_vm_types.h` is unnecessary since nothing in that file references `xe_pagefault`.

**No mechanism to clear/consume faults:** Faults accumulate up to 50 and are never cleared until the VM is destroyed. There's no way for userspace to "consume" or reset the fault list after reading it. Once 50 faults are saved, no new faults are recorded. This seems like a significant design limitation — userspace can only ever see the first 50 faults in the VM's lifetime.

**Memory allocation before checking MAX:** `kzalloc_obj(*e)` allocates memory before checking `vm->faults.len >= MAX_FAULTS_SAVED_PER_VM`. If the list is full, the allocation is immediately freed. A minor inefficiency — checking `vm->faults.len` first (even without the lock, as a hint) would avoid the allocation in the common "full" case.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Claude review: drm/xe/xe_vm: Implement xe_vm_get_property_ioctl
  2026-03-24 15:29 ` [PATCH v39 4/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
@ 2026-03-24 20:46   ` Claude Code Review Bot
  0 siblings, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-03-24 20:46 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**Missing `pad` field validation:** The ioctl validates `args->reserved[0..2]` but does NOT validate `args->pad`. All other xe ioctls validate their pad fields (as seen by `args->pad` checks in `xe_vm_bind_ioctl` etc.). This is a uAPI bug — once shipped, a non-zero pad value will silently be accepted, preventing future use of this field for extensions.

```c
if (XE_IOCTL_DBG(xe, (args->reserved[0] || args->reserved[1] ||
              args->reserved[2])))
    return -EINVAL;
// Missing: args->pad check
```

**Missing `extensions` field validation:** `args->extensions` is not checked for being 0. If extensions aren't supported yet, it should be validated as MBZ.

**TOCTOU race in size validation:** In `xe_vm_get_property_helper`, the fault count is read under the lock:
```c
spin_lock(&vm->faults.lock);
size = size_mul(sizeof(struct xe_vm_fault), vm->faults.len);
spin_unlock(&vm->faults.lock);
```
Then `args->size > size` is checked. But `vm->faults.len` can increase between this check and the actual `fill_faults` call, which does its own iteration. The comment acknowledges this ("Number of faults may increase between calls"), but the validation is backwards — it allows `args->size <= size` at check time, but by the time `fill_faults` runs, there could be more faults. The `fill_faults` function itself has `count = args->size / entry_size` and iterates `min(count, actual_list_len)`, so this is safe in practice, but the size validation and the comment are misleading. The check `args->size > size` could reject valid requests if faults were added between the size query and the fill call.

**`copy_to_user` copies more than populated:** In `fill_faults`:
```c
ret = copy_to_user(usr_ptr, fault_list, args->size);
```
This copies `args->size` bytes, but only `i` entries were actually filled. If `args->size` corresponds to, say, 10 entries but only 5 faults exist, it copies 10 entries worth of data — the last 5 are zero-initialized (from `kcalloc`), so it's not a data leak, but it's wasteful and confusing. Consider copying only `i * entry_size` bytes and updating `args->size` to reflect the actual amount returned.

**Trivial wrappers `xe_to_user_fault_type` and `xe_to_user_fault_level`:** These are identity functions that just return their input. While the comment says the mapping is "approximately 1-to-1", having no-op wrappers adds code without value. If the mapping changes in the future, these can be added then.

**`fault_entry` memcpy unnecessary:** In `fill_faults`, a local `fault_entry` is populated field-by-field, then `memcpy`'d into `fault_list[i]`. Just assign directly to `fault_list[i]` — the memcpy adds nothing since the types are the same.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2026-03-24 20:46 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2026-03-24 15:29 [PATCH v39 0/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
2026-03-24 15:29 ` [PATCH v39 1/4] drm/xe/xe_pagefault: Disallow writes to read-only VMAs Jonathan Cavitt
2026-03-24 20:46   ` Claude review: " Claude Code Review Bot
2026-03-24 15:29 ` [PATCH v39 2/4] drm/xe/uapi: Define drm_xe_vm_get_property Jonathan Cavitt
2026-03-24 20:46   ` Claude review: " Claude Code Review Bot
2026-03-24 15:29 ` [PATCH v39 3/4] drm/xe/xe_vm: Add per VM fault info Jonathan Cavitt
2026-03-24 20:46   ` Claude review: " Claude Code Review Bot
2026-03-24 15:29 ` [PATCH v39 4/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
2026-03-24 20:46   ` Claude review: " Claude Code Review Bot
2026-03-24 20:46 ` Claude Code Review Bot
  -- strict thread matches above, loose matches on Subject: below --
2026-03-20 18:35 [PATCH v38 0/4] " Jonathan Cavitt
2026-03-20 18:35 ` [PATCH v38 3/4] drm/xe/xe_vm: Add per VM fault info Jonathan Cavitt
2026-03-21 17:27   ` Claude review: " Claude Code Review Bot
2026-03-10 14:49 [PATCH v37 0/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
2026-03-10 14:49 ` [PATCH v37 3/4] drm/xe/xe_vm: Add per VM fault info Jonathan Cavitt
2026-03-11  3:13   ` Claude review: " Claude Code Review Bot
2026-03-06 15:55 [PATCH v36 0/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
2026-03-06 15:56 ` [PATCH v36 3/4] drm/xe/xe_vm: Add per VM fault info Jonathan Cavitt
2026-03-08 22:42   ` Claude review: " Claude Code Review Bot
2026-02-23 17:21 [PATCH v35 0/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
2026-02-23 17:21 ` [PATCH v35 3/4] drm/xe/xe_vm: Add per VM fault info Jonathan Cavitt
2026-02-24  0:00   ` Claude review: " Claude Code Review Bot

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox