public inbox for drm-ai-reviews@public-inbox.freedesktop.org
 help / color / mirror / Atom feed
* [PATCH v35 0/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl
@ 2026-02-23 17:21 Jonathan Cavitt
  2026-02-23 17:21 ` [PATCH v35 1/4] drm/xe/xe_pagefault: Disallow writes to read-only VMAs Jonathan Cavitt
                   ` (4 more replies)
  0 siblings, 5 replies; 14+ messages in thread
From: Jonathan Cavitt @ 2026-02-23 17:21 UTC (permalink / raw)
  To: intel-xe
  Cc: saurabhg.gupta, alex.zuo, jonathan.cavitt, joonas.lahtinen,
	matthew.brost, jianxun.zhang, shuicheng.lin, dri-devel,
	Michal.Wajdeczko, michal.mrozek, raag.jadav, ivan.briano,
	matthew.auld, dafna.hirschfeld

Add additional information to each VM so they can report up to the first
50 seen faults.  Only pagefaults are saved this way currently, though in
the future, all faults should be tracked by the VM for future reporting.

Additionally, of the pagefaults reported, only failed pagefaults are
saved this way, as successful pagefaults should recover silently and not
need to be reported to userspace.

To allow userspace to access these faults, a new ioctl -
xe_vm_get_property_ioct - was created.

v2: (Matt Brost)
- Break full ban list request into a separate property.
- Reformat drm_xe_vm_get_property struct.
- Remove need for drm_xe_faults helper struct.
- Separate data pointer and scalar return value in ioctl.
- Get address type on pagefault report and save it to the pagefault.
- Correctly reject writes to read-only VMAs.
- Miscellaneous formatting fixes.

v3: (Matt Brost)
- Only allow querying of failed pagefaults

v4:
- Remove unnecessary size parameter from helper function, as it
  is a property of the arguments. (jcavitt)
- Remove unnecessary copy_from_user (Jainxun)
- Set address_precision to 1 (Jainxun)
- Report max size instead of dynamic size for memory allocation
  purposes.  Total memory usage is reported separately.

v5:
- Return int from xe_vm_get_property_size (Shuicheng)
- Fix memory leak (Shuicheng)
- Remove unnecessary size variable (jcavitt)

v6:
- Free vm after use (Shuicheng)
- Compress pf copy logic (Shuicheng)
- Update fault_unsuccessful before storing (Shuicheng)
- Fix old struct name in comments (Shuicheng)
- Keep first 50 pagefaults instead of last 50 (Jianxun)
- Rename ioctl to xe_vm_get_faults_ioctl (jcavitt)

v7:
- Avoid unnecessary execution by checking MAX_PFS earlier (jcavitt)
- Fix double-locking error (jcavitt)
- Assert kmemdump is successful (Shuicheng)
- Repair and move fill_faults break condition (Dan Carpenter)
- Free vm after use (jcavitt)
- Combine assertions (jcavitt)
- Expand size check in xe_vm_get_faults_ioctl (jcavitt)
- Remove return mask from fill_faults, as return is already -EFAULT or 0
  (jcavitt)

v8:
- Revert back to using drm_xe_vm_get_property_ioctl
- s/Migrate/Move (Michal)
- s/xe_pagefault/xe_gt_pagefault (Michal)
- Create new header file, xe_gt_pagefault_types.h (Michal)
- Add and fix kernel docs (Michal)
- Rename xe_vm.pfs to xe_vm.faults (jcavitt)
- Store fault data and not pagefault in xe_vm faults list (jcavitt)
- Store address, address type, and address precision per fault (jcavitt)
- Store engine class and instance data per fault (Jianxun)
- Properly handle kzalloc error (Michal W)
- s/MAX_PFS/MAX_FAULTS_SAVED_PER_VM (Michal W)
- Store fault level per fault (Micahl M)
- Apply better copy_to_user logic (jcavitt)

v9:
- More kernel doc fixes (Michal W, Jianxun)
- Better error handling (jcavitt)

v10:
- Convert enums to defines in regs folder (Michal W)
- Move xe_guc_pagefault_desc to regs folder (Michal W)
- Future-proof size logic for zero-size properties (jcavitt)
- Replace address type extern with access type (Jianxun)
- Add fault type to xe_drm_fault (Jianxun)

v11:
- Remove unnecessary switch case logic (Raag)
- Compress size get, size validation, and property fill functions into a
  single helper function (jcavitt)
- Assert valid size (jcavitt)
- Store pagefaults in non-fault-mode VMs as well (Jianxun)

v12:
- Remove unnecessary else condition
- Correct backwards helper function size logic (jcavitt)
- Fix kernel docs and comments (Michal W)

v13:
- Move xe and user engine class mapping arrays to header (John H)

v14:
- Fix double locking issue (Jianxun)
- Use size_t instead of int (Raag)
- Remove unnecessary includes (jcavitt)

v15:
- Do not report faults from reserved engines (Jianxun)

v16:
- Remove engine class and instance (Ivan)

v17:
- Map access type, fault type, and fault level to user macros (Matt
  Brost, Ivan)

v18:
- Add uAPI merge request to this cover letter

v19:
- Perform kzalloc outside of lock (Auld)

v20:
- Fix inconsistent use of whitespace in defines

v21:
- Remove unnecessary size assertion (jcavitt)

v22:
- Fix xe_vm_fault_entry kernel docs (Shuicheng)

v23:
- Nit fixes (Matt Brost)

v24:
- s/xe_pagefault_desc.h/xe_guc_pagefault_desc.h (Dafna)
- Move PF_MSG_LEN_DW to regs folder (Dafna)

v25:
- Revert changes from last revision (John H)
- Add missing bspec (Michal W)

v26:
- Rebase and refactor on top of latest change to xe_pagefault layer
  (jcavitt)

v27:
- Apply max line length (Matt Brost)
- Correctly ignore fault mode in save_pagefault_to_vm (jcavitt)

v28:
- Do not copy_to_user in critical section (Matt Brost)
- Assert args->size is multiple of sizeof(struct xe_vm_fault) (Matt
  Brost)
- s/save_pagefault_to_vm/xe_pagefault_save_to_vm (Matt Brost)
- Use guard instead of spin_lock/unlock (Matt Brost)
- GT was added to xe_pagefault struct.  Use xe_gt_hw_engine
  instead of creating a new helper function (Matt Brost)

v29:
- Track address precision separately and report it accurately (Matt
  Brost)
- Remove unnecessary memset (Matt Brost)

v30:
- Keep u8 values together (Matt Brost)

v31:
- Refactor (jcavitt)

v32:
- Refactor (jcavitt)

v33:
- Refactor (jcavitt)

v34:
- Refactor (jcavitt)
- Save space for future expansion in pagefault struct (Matt Brost)

v35:
- Revert v34
- Refactor (jcavitt)
- Remove fixed value addr_precision (Matt Brost)
  - Since address precision is fixed, remove debug print statement as
    well (jcavitt)

uAPI: https://github.com/intel/compute-runtime/pull/878
Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Suggested-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Cc: Zhang Jianxun <jianxun.zhang@intel.com>
Cc: Shuicheng Lin <shuicheng.lin@intel.com>
Cc: Michal Wajdeczko <Michal.Wajdeczko@intel.com>
Cc: Michal Mrozek <michal.mrozek@intel.com>
Cc: Raag Jadav <raag.jadav@intel.com>
Cc: John Harrison <john.c.harrison@intel.com>
Cc: Ivan Briano <ivan.briano@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Dafna Hirschfeld <dafna.hirschfeld@intel.com>

Jonathan Cavitt (4):
  drm/xe/xe_pagefault: Disallow writes to read-only VMAs
  drm/xe/uapi: Define drm_xe_vm_get_property
  drm/xe/xe_vm: Add per VM fault info
  drm/xe/xe_vm: Implement xe_vm_get_property_ioctl

 drivers/gpu/drm/xe/xe_device.c    |   2 +
 drivers/gpu/drm/xe/xe_pagefault.c |  32 +++++
 drivers/gpu/drm/xe/xe_vm.c        | 191 ++++++++++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_vm.h        |  12 ++
 drivers/gpu/drm/xe/xe_vm_types.h  |  29 +++++
 include/uapi/drm/xe_drm.h         |  86 ++++++++++++++
 6 files changed, 352 insertions(+)

-- 
2.43.0


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v35 1/4] drm/xe/xe_pagefault: Disallow writes to read-only VMAs
  2026-02-23 17:21 [PATCH v35 0/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
@ 2026-02-23 17:21 ` Jonathan Cavitt
  2026-02-24  0:00   ` Claude review: " Claude Code Review Bot
  2026-02-23 17:21 ` [PATCH v35 2/4] drm/xe/uapi: Define drm_xe_vm_get_property Jonathan Cavitt
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 14+ messages in thread
From: Jonathan Cavitt @ 2026-02-23 17:21 UTC (permalink / raw)
  To: intel-xe
  Cc: saurabhg.gupta, alex.zuo, jonathan.cavitt, joonas.lahtinen,
	matthew.brost, jianxun.zhang, shuicheng.lin, dri-devel,
	Michal.Wajdeczko, michal.mrozek, raag.jadav, ivan.briano,
	matthew.auld, dafna.hirschfeld

The page fault handler should reject write/atomic access to read only
VMAs.  Add code to handle this in xe_pagefault_service after the VMA
lookup.

v2:
- Apply max line length (Matthew)

Fixes: fb544b844508 ("drm/xe: Implement xe_pagefault_queue_work")
Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Cc: Shuicheng Lin <shuicheng.lin@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_pagefault.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_pagefault.c b/drivers/gpu/drm/xe/xe_pagefault.c
index 72f589fd2b64..0281b5b6d4ab 100644
--- a/drivers/gpu/drm/xe/xe_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_pagefault.c
@@ -187,6 +187,12 @@ static int xe_pagefault_service(struct xe_pagefault *pf)
 		goto unlock_vm;
 	}
 
+	if (xe_vma_read_only(vma) &&
+	    pf->consumer.access_type != XE_PAGEFAULT_ACCESS_TYPE_READ) {
+		err = -EPERM;
+		goto unlock_vm;
+	}
+
 	atomic = xe_pagefault_access_is_atomic(pf->consumer.access_type);
 
 	if (xe_vma_is_cpu_addr_mirror(vma))
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v35 2/4] drm/xe/uapi: Define drm_xe_vm_get_property
  2026-02-23 17:21 [PATCH v35 0/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
  2026-02-23 17:21 ` [PATCH v35 1/4] drm/xe/xe_pagefault: Disallow writes to read-only VMAs Jonathan Cavitt
@ 2026-02-23 17:21 ` Jonathan Cavitt
  2026-02-24  0:00   ` Claude review: " Claude Code Review Bot
  2026-02-23 17:21 ` [PATCH v35 3/4] drm/xe/xe_vm: Add per VM fault info Jonathan Cavitt
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 14+ messages in thread
From: Jonathan Cavitt @ 2026-02-23 17:21 UTC (permalink / raw)
  To: intel-xe
  Cc: saurabhg.gupta, alex.zuo, jonathan.cavitt, joonas.lahtinen,
	matthew.brost, jianxun.zhang, shuicheng.lin, dri-devel,
	Michal.Wajdeczko, michal.mrozek, raag.jadav, ivan.briano,
	matthew.auld, dafna.hirschfeld

Add initial declarations for the drm_xe_vm_get_property ioctl.

v2:
- Expand kernel docs for drm_xe_vm_get_property (Jianxun)

v3:
- Remove address type external definitions (Jianxun)
- Add fault type to xe_drm_fault struct (Jianxun)

v4:
- Remove engine class and instance (Ivan)

v5:
- Add declares for fault type, access type, and fault level (Matt Brost,
  Ivan)

v6:
- Fix inconsistent use of whitespace in defines

v7:
- Rebase and refactor (jcavitt)

v8:
- Rebase (jcavitt)

uAPI: https://github.com/intel/compute-runtime/pull/878
Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Reviewed-by: Shuicheng Lin <shuicheng.lin@intel.com>
Acked-by: Matthew Brost <matthew.brost@intel.com>
Acked-by: Ivan Briano <ivan.briano@intel.com>
Cc: Zhang Jianxun <jianxun.zhang@intel.com>
Cc: Ivan Briano <ivan.briano@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
---
 include/uapi/drm/xe_drm.h | 86 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 86 insertions(+)

diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index c9e70f78e723..56a7482ebd79 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -83,6 +83,7 @@ extern "C" {
  *  - &DRM_IOCTL_XE_OBSERVATION
  *  - &DRM_IOCTL_XE_MADVISE
  *  - &DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS
+ *  - &DRM_IOCTL_XE_VM_GET_PROPERTY
  */
 
 /*
@@ -107,6 +108,7 @@ extern "C" {
 #define DRM_XE_MADVISE			0x0c
 #define DRM_XE_VM_QUERY_MEM_RANGE_ATTRS	0x0d
 #define DRM_XE_EXEC_QUEUE_SET_PROPERTY	0x0e
+#define DRM_XE_VM_GET_PROPERTY		0x0f
 
 /* Must be kept compact -- no holes */
 
@@ -125,6 +127,7 @@ extern "C" {
 #define DRM_IOCTL_XE_MADVISE			DRM_IOW(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
 #define DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS	DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_MEM_RANGE_ATTRS, struct drm_xe_vm_query_mem_range_attr)
 #define DRM_IOCTL_XE_EXEC_QUEUE_SET_PROPERTY	DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC_QUEUE_SET_PROPERTY, struct drm_xe_exec_queue_set_property)
+#define DRM_IOCTL_XE_VM_GET_PROPERTY		DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_GET_PROPERTY, struct drm_xe_vm_get_property)
 
 /**
  * DOC: Xe IOCTL Extensions
@@ -1243,6 +1246,89 @@ struct drm_xe_vm_bind {
 	__u64 reserved[2];
 };
 
+/** struct xe_vm_fault - Describes faults for %DRM_XE_VM_GET_PROPERTY_FAULTS */
+struct xe_vm_fault {
+	/** @address: Address of the fault */
+	__u64 address;
+	/** @address_precision: Precision of faulted address */
+	__u32 address_precision;
+	/** @access_type: Type of address access that resulted in fault */
+#define FAULT_ACCESS_TYPE_READ		0
+#define FAULT_ACCESS_TYPE_WRITE		1
+#define FAULT_ACCESS_TYPE_ATOMIC	2
+	__u8 access_type;
+	/** @fault_type: Type of fault reported */
+#define FAULT_TYPE_NOT_PRESENT		0
+#define FAULT_TYPE_WRITE_ACCESS		1
+#define FAULT_TYPE_ATOMIC_ACCESS	2
+	__u8 fault_type;
+	/** @fault_level: fault level of the fault */
+#define FAULT_LEVEL_PTE		0
+#define FAULT_LEVEL_PDE		1
+#define FAULT_LEVEL_PDP		2
+#define FAULT_LEVEL_PML4	3
+#define FAULT_LEVEL_PML5	4
+	__u8 fault_level;
+	/** @pad: MBZ */
+	__u8 pad;
+	/** @reserved: MBZ */
+	__u64 reserved[4];
+};
+
+/**
+ * struct drm_xe_vm_get_property - Input of &DRM_IOCTL_XE_VM_GET_PROPERTY
+ *
+ * The user provides a VM and a property to query among DRM_XE_VM_GET_PROPERTY_*,
+ * and sets the values in the vm_id and property members, respectively.  This
+ * determines both the VM to get the property of, as well as the property to
+ * report.
+ *
+ * If size is set to 0, the driver fills it with the required size for the
+ * requested property.  The user is expected here to allocate memory for the
+ * property structure and to provide a pointer to the allocated memory using the
+ * data member.  For some properties, this may be zero, in which case, the
+ * value of the property will be saved to the value member and size will remain
+ * zero on return.
+ *
+ * If size is not zero, then the IOCTL will attempt to copy the requested
+ * property into the data member.
+ *
+ * The IOCTL will return -ENOENT if the VM could not be identified from the
+ * provided VM ID, or -EINVAL if the IOCTL fails for any other reason, such as
+ * providing an invalid size for the given property or if the property data
+ * could not be copied to the memory allocated to the data member.
+ *
+ * The property member can be:
+ *  - %DRM_XE_VM_GET_PROPERTY_FAULTS
+ */
+struct drm_xe_vm_get_property {
+	/** @extensions: Pointer to the first extension struct, if any */
+	__u64 extensions;
+
+	/** @vm_id: The ID of the VM to query the properties of */
+	__u32 vm_id;
+
+#define DRM_XE_VM_GET_PROPERTY_FAULTS		0
+	/** @property: property to get */
+	__u32 property;
+
+	/** @size: Size to allocate for @data */
+	__u32 size;
+
+	/** @pad: MBZ */
+	__u32 pad;
+
+	union {
+		/** @data: Pointer to user-defined array of flexible size and type */
+		__u64 data;
+		/** @value: Return value for scalar queries */
+		__u64 value;
+	};
+
+	/** @reserved: MBZ */
+	__u64 reserved[3];
+};
+
 /**
  * struct drm_xe_exec_queue_create - Input of &DRM_IOCTL_XE_EXEC_QUEUE_CREATE
  *
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v35 3/4] drm/xe/xe_vm: Add per VM fault info
  2026-02-23 17:21 [PATCH v35 0/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
  2026-02-23 17:21 ` [PATCH v35 1/4] drm/xe/xe_pagefault: Disallow writes to read-only VMAs Jonathan Cavitt
  2026-02-23 17:21 ` [PATCH v35 2/4] drm/xe/uapi: Define drm_xe_vm_get_property Jonathan Cavitt
@ 2026-02-23 17:21 ` Jonathan Cavitt
  2026-02-24  0:00   ` Claude review: " Claude Code Review Bot
  2026-02-23 17:21 ` [PATCH v35 4/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
  2026-02-23 23:59 ` Claude Code Review Bot
  4 siblings, 1 reply; 14+ messages in thread
From: Jonathan Cavitt @ 2026-02-23 17:21 UTC (permalink / raw)
  To: intel-xe
  Cc: saurabhg.gupta, alex.zuo, jonathan.cavitt, joonas.lahtinen,
	matthew.brost, jianxun.zhang, shuicheng.lin, dri-devel,
	Michal.Wajdeczko, michal.mrozek, raag.jadav, ivan.briano,
	matthew.auld, dafna.hirschfeld

Add additional information to each VM so they can report up to the first
50 seen faults.  Only pagefaults are saved this way currently, though in
the future, all faults should be tracked by the VM for future reporting.

Additionally, of the pagefaults reported, only failed pagefaults are
saved this way, as successful pagefaults should recover silently and not
need to be reported to userspace.

v2:
- Free vm after use (Shuicheng)
- Compress pf copy logic (Shuicheng)
- Update fault_unsuccessful before storing (Shuicheng)
- Fix old struct name in comments (Shuicheng)
- Keep first 50 pagefaults instead of last 50 (Jianxun)

v3:
- Avoid unnecessary execution by checking MAX_PFS earlier (jcavitt)
- Fix double-locking error (jcavitt)
- Assert kmemdump is successful (Shuicheng)

v4:
- Rename xe_vm.pfs to xe_vm.faults (jcavitt)
- Store fault data and not pagefault in xe_vm faults list (jcavitt)
- Store address, address type, and address precision per fault (jcavitt)
- Store engine class and instance data per fault (Jianxun)
- Add and fix kernel docs (Michal W)
- Properly handle kzalloc error (Michal W)
- s/MAX_PFS/MAX_FAULTS_SAVED_PER_VM (Michal W)
- Store fault level per fault (Micahl M)

v5:
- Store fault and access type instead of address type (Jianxun)

v6:
- Store pagefaults in non-fault-mode VMs as well (Jianxun)

v7:
- Fix kernel docs and comments (Michal W)

v8:
- Fix double-locking issue (Jianxun)

v9:
- Do not report faults from reserved engines (Jianxun)

v10:
- Remove engine class and instance (Ivan)

v11:
- Perform kzalloc outside of lock (Auld)

v12:
- Fix xe_vm_fault_entry kernel docs (Shuicheng)

v13:
- Rebase and refactor (jcavitt)

v14:
- Correctly ignore fault mode in save_pagefault_to_vm (jcavitt)

v15:
- s/save_pagefault_to_vm/xe_pagefault_save_to_vm (Matt Brost)
- Use guard instead of spin_lock/unlock (Matt Brost)
- GT was added to xe_pagefault struct.  Use xe_gt_hw_engine
  instead of creating a new helper function (Matt Brost)

v16:
- Set address precision programmatically (Matt Brost)

v17:
- Set address precision to fixed value (Matt Brost)

uAPI: https://github.com/intel/compute-runtime/pull/878
Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Cc: Shuicheng Lin <shuicheng.lin@intel.com>
Cc: Jianxun Zhang <jianxun.zhang@intel.com>
Cc: Michal Wajdeczko <Michal.Wajdeczko@intel.com>
Cc: Michal Mzorek <michal.mzorek@intel.com>
Cc: Ivan Briano <ivan.briano@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_pagefault.c | 26 +++++++++++
 drivers/gpu/drm/xe/xe_vm.c        | 74 +++++++++++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_vm.h        |  9 ++++
 drivers/gpu/drm/xe/xe_vm_types.h  | 29 ++++++++++++
 4 files changed, 138 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_pagefault.c b/drivers/gpu/drm/xe/xe_pagefault.c
index 0281b5b6d4ab..4bb1a3ff13e4 100644
--- a/drivers/gpu/drm/xe/xe_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_pagefault.c
@@ -249,6 +249,31 @@ static void xe_pagefault_print(struct xe_pagefault *pf)
 		   pf->consumer.engine_instance);
 }
 
+static void xe_pagefault_save_to_vm(struct xe_device *xe, struct xe_pagefault *pf)
+{
+	struct xe_vm *vm;
+
+	/*
+	 * Pagefault may be asociated to VM that is not in fault mode.
+	 * Perform asid_to_vm behavior, except if VM is not in fault
+	 * mode, return VM anyways.
+	 */
+	down_read(&xe->usm.lock);
+	vm = xa_load(&xe->usm.asid_to_vm, pf->consumer.asid);
+	if (vm)
+		xe_vm_get(vm);
+	else
+		vm = ERR_PTR(-EINVAL);
+	up_read(&xe->usm.lock);
+
+	if (IS_ERR(vm))
+		return;
+
+	xe_vm_add_fault_entry_pf(vm, pf);
+
+	xe_vm_put(vm);
+}
+
 static void xe_pagefault_queue_work(struct work_struct *w)
 {
 	struct xe_pagefault_queue *pf_queue =
@@ -268,6 +293,7 @@ static void xe_pagefault_queue_work(struct work_struct *w)
 		err = xe_pagefault_service(&pf);
 		if (err) {
 			xe_pagefault_print(&pf);
+			xe_pagefault_save_to_vm(gt_to_xe(pf.gt), &pf);
 			xe_gt_info(pf.gt, "Fault response: Unsuccessful %pe\n",
 				   ERR_PTR(err));
 		}
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 5adabfd5dc30..e2f30c1c1669 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -27,6 +27,7 @@
 #include "xe_device.h"
 #include "xe_drm_client.h"
 #include "xe_exec_queue.h"
+#include "xe_gt.h"
 #include "xe_migrate.h"
 #include "xe_pat.h"
 #include "xe_pm.h"
@@ -577,6 +578,74 @@ static void preempt_rebind_work_func(struct work_struct *w)
 	trace_xe_vm_rebind_worker_exit(vm);
 }
 
+/**
+ * xe_vm_add_fault_entry_pf() - Add pagefault to vm fault list
+ * @vm: The VM.
+ * @pf: The pagefault.
+ *
+ * This function takes the data from the pagefault @pf and saves it to @vm->faults.list.
+ *
+ * The function exits silently if the list is full, and reports a warning if the pagefault
+ * could not be saved to the list.
+ */
+void xe_vm_add_fault_entry_pf(struct xe_vm *vm, struct xe_pagefault *pf)
+{
+	struct xe_vm_fault_entry *e = NULL;
+	struct xe_hw_engine *hwe;
+
+	/* Do not report faults on reserved engines */
+	hwe = xe_gt_hw_engine(pf->gt, pf->consumer.engine_class,
+			      pf->consumer.engine_instance, false);
+	if (!hwe || xe_hw_engine_is_reserved(hwe))
+		return;
+
+	e = kzalloc(sizeof(*e), GFP_KERNEL);
+	if (!e) {
+		drm_warn(&vm->xe->drm,
+			 "Could not allocate memory for fault!\n");
+		return;
+	}
+
+	guard(spinlock)(&vm->faults.lock);
+
+	/*
+	 * Limit the number of faults in the fault list to prevent
+	 * memory overuse.
+	 */
+	if (vm->faults.len >= MAX_FAULTS_SAVED_PER_VM) {
+		kfree(e);
+		return;
+	}
+
+	e->address = pf->consumer.page_addr;
+	/*
+	 * TODO:
+	 * Address precision is currently always SZ_4K, but this may change
+	 * in the future.
+	 */
+	e->address_precision = SZ_4K;
+	e->access_type = pf->consumer.access_type;
+	e->fault_type = FIELD_GET(XE_PAGEFAULT_TYPE_MASK,
+				  pf->consumer.fault_type_level),
+	e->fault_level = FIELD_GET(XE_PAGEFAULT_LEVEL_MASK,
+				   pf->consumer.fault_type_level),
+
+	list_add_tail(&e->list, &vm->faults.list);
+	vm->faults.len++;
+}
+
+static void xe_vm_clear_fault_entries(struct xe_vm *vm)
+{
+	struct xe_vm_fault_entry *e, *tmp;
+
+	guard(spinlock)(&vm->faults.lock);
+	list_for_each_entry_safe(e, tmp, &vm->faults.list, list) {
+		list_del(&e->list);
+		kfree(e);
+	}
+	vm->faults.len = 0;
+}
+
 static int xe_vma_ops_alloc(struct xe_vma_ops *vops, bool array_of_binds)
 {
 	int i;
@@ -1538,6 +1607,9 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags, struct xe_file *xef)
 	INIT_LIST_HEAD(&vm->userptr.invalidated);
 	spin_lock_init(&vm->userptr.invalidated_lock);
 
+	INIT_LIST_HEAD(&vm->faults.list);
+	spin_lock_init(&vm->faults.lock);
+
 	ttm_lru_bulk_move_init(&vm->lru_bulk_move);
 
 	INIT_WORK(&vm->destroy_work, vm_destroy_work_func);
@@ -1854,6 +1926,8 @@ void xe_vm_close_and_put(struct xe_vm *vm)
 	}
 	up_write(&xe->usm.lock);
 
+	xe_vm_clear_fault_entries(vm);
+
 	for_each_tile(tile, xe, id)
 		xe_range_fence_tree_fini(&vm->rftree[id]);
 
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 288115c7844a..fd3fc60f92bb 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -12,6 +12,12 @@
 #include "xe_map.h"
 #include "xe_vm_types.h"
 
+/**
+ * MAX_FAULTS_SAVED_PER_VM - Maximum number of faults each vm can store before future
+ * faults are discarded to prevent memory overuse
+ */
+#define MAX_FAULTS_SAVED_PER_VM	50
+
 struct drm_device;
 struct drm_printer;
 struct drm_file;
@@ -22,6 +28,7 @@ struct dma_fence;
 
 struct xe_exec_queue;
 struct xe_file;
+struct xe_pagefault;
 struct xe_sync_entry;
 struct xe_svm_range;
 struct drm_exec;
@@ -312,6 +319,8 @@ void xe_vm_snapshot_capture_delayed(struct xe_vm_snapshot *snap);
 void xe_vm_snapshot_print(struct xe_vm_snapshot *snap, struct drm_printer *p);
 void xe_vm_snapshot_free(struct xe_vm_snapshot *snap);
 
+void xe_vm_add_fault_entry_pf(struct xe_vm *vm, struct xe_pagefault *pf);
+
 /**
  * xe_vm_set_validating() - Register this task as currently making bos resident
  * @allow_res_evict: Allow eviction of buffer objects bound to @vm when
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 43203e90ee3e..07546953cb19 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -23,6 +23,7 @@
 struct drm_pagemap;
 
 struct xe_bo;
+struct xe_pagefault;
 struct xe_svm_range;
 struct xe_sync_entry;
 struct xe_user_fence;
@@ -175,6 +176,24 @@ struct xe_userptr_vma {
 
 struct xe_device;
 
+/**
+ * struct xe_vm_fault_entry - Elements of vm->faults.list
+ * @list: link into @xe_vm.faults.list
+ * @address: address of the fault
+ * @address_precision: precision of faulted address
+ * @access_type: type of address access that resulted in fault
+ * @fault_type: type of fault reported
+ * @fault_level: fault level of the fault
+ */
+struct xe_vm_fault_entry {
+	struct list_head list;
+	u64 address;
+	u32 address_precision;
+	u8 access_type;
+	u8 fault_type;
+	u8 fault_level;
+};
+
 struct xe_vm {
 	/** @gpuvm: base GPUVM used to track VMAs */
 	struct drm_gpuvm gpuvm;
@@ -331,6 +350,16 @@ struct xe_vm {
 		bool capture_once;
 	} error_capture;
 
+	/** @faults: List of all faults associated with this VM */
+	struct {
+		/** @faults.lock: lock protecting @faults.list */
+		spinlock_t lock;
+		/** @faults.list: list of xe_vm_fault_entry entries */
+		struct list_head list;
+		/** @faults.len: length of @faults.list */
+		unsigned int len;
+	} faults;
+
 	/**
 	 * @validation: Validation data only valid with the vm resv held.
 	 * Note: This is really task state of the task holding the vm resv,
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v35 4/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl
  2026-02-23 17:21 [PATCH v35 0/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
                   ` (2 preceding siblings ...)
  2026-02-23 17:21 ` [PATCH v35 3/4] drm/xe/xe_vm: Add per VM fault info Jonathan Cavitt
@ 2026-02-23 17:21 ` Jonathan Cavitt
  2026-02-24  0:00   ` Claude review: " Claude Code Review Bot
  2026-02-23 23:59 ` Claude Code Review Bot
  4 siblings, 1 reply; 14+ messages in thread
From: Jonathan Cavitt @ 2026-02-23 17:21 UTC (permalink / raw)
  To: intel-xe
  Cc: saurabhg.gupta, alex.zuo, jonathan.cavitt, joonas.lahtinen,
	matthew.brost, jianxun.zhang, shuicheng.lin, dri-devel,
	Michal.Wajdeczko, michal.mrozek, raag.jadav, ivan.briano,
	matthew.auld, dafna.hirschfeld

Add support for userspace to request a list of observed faults
from a specified VM.

v2:
- Only allow querying of failed pagefaults (Matt Brost)

v3:
- Remove unnecessary size parameter from helper function, as it
  is a property of the arguments. (jcavitt)
- Remove unnecessary copy_from_user (Jainxun)
- Set address_precision to 1 (Jainxun)
- Report max size instead of dynamic size for memory allocation
  purposes.  Total memory usage is reported separately.

v4:
- Return int from xe_vm_get_property_size (Shuicheng)
- Fix memory leak (Shuicheng)
- Remove unnecessary size variable (jcavitt)

v5:
- Rename ioctl to xe_vm_get_faults_ioctl (jcavitt)
- Update fill_property_pfs to eliminate need for kzalloc (Jianxun)

v6:
- Repair and move fill_faults break condition (Dan Carpenter)
- Free vm after use (jcavitt)
- Combine assertions (jcavitt)
- Expand size check in xe_vm_get_faults_ioctl (jcavitt)
- Remove return mask from fill_faults, as return is already -EFAULT or 0
  (jcavitt)

v7:
- Revert back to using xe_vm_get_property_ioctl
- Apply better copy_to_user logic (jcavitt)

v8:
- Fix and clean up error value handling in ioctl (jcavitt)
- Reapply return mask for fill_faults (jcavitt)

v9:
- Future-proof size logic for zero-size properties (jcavitt)
- Add access and fault types (Jianxun)
- Remove address type (Jianxun)

v10:
- Remove unnecessary switch case logic (Raag)
- Compress size get, size validation, and property fill functions into a
  single helper function (jcavitt)
- Assert valid size (jcavitt)

v11:
- Remove unnecessary else condition
- Correct backwards helper function size logic (jcavitt)

v12:
- Use size_t instead of int (Raag)

v13:
- Remove engine class and instance (Ivan)

v14:
- Map access type, fault type, and fault level to user macros (Matt
  Brost, Ivan)

v15:
- Remove unnecessary size assertion (jcavitt)

v16:
- Nit fixes (Matt Brost)

v17:
- Rebase and refactor (jcavitt)

v18:
- Do not copy_to_user in critical section (Matt Brost)
- Assert args->size is multiple of sizeof(struct xe_vm_fault) (Matt
  Brost)

v19:
- Remove unnecessary memset (Matt Brost)

uAPI: https://github.com/intel/compute-runtime/pull/878
Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Cc: Jainxun Zhang <jianxun.zhang@intel.com>
Cc: Shuicheng Lin <shuicheng.lin@intel.com>
Cc: Raag Jadav <raag.jadav@intel.com>
Cc: Ivan Briano <ivan.briano@intel.com>
---
 drivers/gpu/drm/xe/xe_device.c |   2 +
 drivers/gpu/drm/xe/xe_vm.c     | 117 +++++++++++++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_vm.h     |   3 +
 3 files changed, 122 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 3462645ca13c..5c63721e9e02 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -211,6 +211,8 @@ static const struct drm_ioctl_desc xe_ioctls[] = {
 			  DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(XE_EXEC_QUEUE_SET_PROPERTY, xe_exec_queue_set_property_ioctl,
 			  DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(XE_VM_GET_PROPERTY, xe_vm_get_property_ioctl,
+			  DRM_RENDER_ALLOW),
 };
 
 static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index e2f30c1c1669..98e663b1624f 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -3926,6 +3926,123 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 	return err;
 }
 
+/*
+ * Map access type, fault type, and fault level from current bspec
+ * specification to user spec abstraction.  The current mapping is
+ * 1-to-1, but if there is ever a hardware change, we will need
+ * this abstraction layer to maintain API stability through the
+ * hardware change.
+ */
+static u8 xe_to_user_access_type(u8 access_type)
+{
+	return access_type;
+}
+
+static u8 xe_to_user_fault_type(u8 fault_type)
+{
+	return fault_type;
+}
+
+static u8 xe_to_user_fault_level(u8 fault_level)
+{
+	return fault_level;
+}
+
+static int fill_faults(struct xe_vm *vm,
+		       struct drm_xe_vm_get_property *args)
+{
+	struct xe_vm_fault __user *usr_ptr = u64_to_user_ptr(args->data);
+	struct xe_vm_fault *fault_list, fault_entry = { 0 };
+	struct xe_vm_fault_entry *entry;
+	int ret = 0, i = 0, count, entry_size;
+
+	entry_size = sizeof(struct xe_vm_fault);
+	count = args->size / entry_size;
+
+	fault_list = kcalloc(count, sizeof(struct xe_vm_fault), GFP_KERNEL);
+	if (!fault_list)
+		return -ENOMEM;
+
+	spin_lock(&vm->faults.lock);
+	list_for_each_entry(entry, &vm->faults.list, list) {
+		if (i == count)
+			break;
+
+		fault_entry.address = entry->address;
+		fault_entry.address_precision = entry->address_precision;
+
+		fault_entry.access_type = xe_to_user_access_type(entry->access_type);
+		fault_entry.fault_type = xe_to_user_fault_type(entry->fault_type);
+		fault_entry.fault_level = xe_to_user_fault_level(entry->fault_level);
+
+		memcpy(&fault_list[i], &fault_entry, entry_size);
+
+		i++;
+	}
+	spin_unlock(&vm->faults.lock);
+
+	ret = copy_to_user(usr_ptr, fault_list, args->size);
+
+	kfree(fault_list);
+	return ret ? -EFAULT : 0;
+}
+
+static int xe_vm_get_property_helper(struct xe_vm *vm,
+				     struct drm_xe_vm_get_property *args)
+{
+	size_t size;
+
+	switch (args->property) {
+	case DRM_XE_VM_GET_PROPERTY_FAULTS:
+		spin_lock(&vm->faults.lock);
+		size = size_mul(sizeof(struct xe_vm_fault), vm->faults.len);
+		spin_unlock(&vm->faults.lock);
+
+		if (!args->size) {
+			args->size = size;
+			return 0;
+		}
+
+		/*
+		 * Number of faults may increase between calls to
+		 * xe_vm_get_property_ioctl, so just report the number of
+		 * faults the user requests if it's less than or equal to
+		 * the number of faults in the VM fault array.
+		 *
+		 * We should also at least assert that the args->size value
+		 * is a multiple of the xe_vm_fault struct size.
+		 */
+		if (args->size > size || args->size % sizeof(struct xe_vm_fault))
+			return -EINVAL;
+
+		return fill_faults(vm, args);
+	}
+	return -EINVAL;
+}
+
+int xe_vm_get_property_ioctl(struct drm_device *drm, void *data,
+			     struct drm_file *file)
+{
+	struct xe_device *xe = to_xe_device(drm);
+	struct xe_file *xef = to_xe_file(file);
+	struct drm_xe_vm_get_property *args = data;
+	struct xe_vm *vm;
+	int ret = 0;
+
+	if (XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1] ||
+			     args->reserved[2]))
+		return -EINVAL;
+
+	vm = xe_vm_lookup(xef, args->vm_id);
+	if (XE_IOCTL_DBG(xe, !vm))
+		return -ENOENT;
+
+	ret = xe_vm_get_property_helper(vm, args);
+
+	xe_vm_put(vm);
+	return ret;
+}
+
 /**
  * xe_vm_bind_kernel_bo - bind a kernel BO to a VM
  * @vm: VM to bind the BO to
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index fd3fc60f92bb..0412391265dc 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -210,6 +210,9 @@ int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
 int xe_vm_bind_ioctl(struct drm_device *dev, void *data,
 		     struct drm_file *file);
 int xe_vm_query_vmas_attrs_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
+int xe_vm_get_property_ioctl(struct drm_device *dev, void *data,
+			     struct drm_file *file);
+
 void xe_vm_close_and_put(struct xe_vm *vm);
 
 static inline bool xe_vm_in_fault_mode(struct xe_vm *vm)
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Claude review: drm/xe/xe_vm: Implement xe_vm_get_property_ioctl
  2026-02-23 17:21 [PATCH v35 0/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
                   ` (3 preceding siblings ...)
  2026-02-23 17:21 ` [PATCH v35 4/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
@ 2026-02-23 23:59 ` Claude Code Review Bot
  4 siblings, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-02-23 23:59 UTC (permalink / raw)
  To: dri-devel-reviews

Overall Series Review

Subject: drm/xe/xe_vm: Implement xe_vm_get_property_ioctl
Author: Jonathan Cavitt <jonathan.cavitt@intel.com>
Patches: 5
Reviewed: 2026-02-24T09:59:59.947544

---

This is v35 of a 4-patch series that adds a new `DRM_IOCTL_XE_VM_GET_PROPERTY` ioctl to the xe driver, allowing userspace to query per-VM properties, starting with a list of failed page faults. The series is structured as: (1) a bugfix rejecting write/atomic access to read-only VMAs during page fault handling, (2) UAPI definitions, (3) kernel-side fault recording infrastructure, and (4) the ioctl implementation.

The overall design is reasonable: failed page faults are captured into a per-VM list capped at 50 entries, and userspace queries them through a two-phase ioctl (first query size, then retrieve data). However, the series has a critical build-breaking bug in patch 3 that references non-existent struct fields and undefined macros. Additionally, the UAPI definitions have naming convention issues -- the `struct xe_vm_fault` and its associated `FAULT_*` defines don't follow the established `drm_xe_` / `DRM_XE_` naming convention used throughout the rest of xe_drm.h. The ioctl handler in patch 4 also fails to validate the `pad` and `extensions` fields of the input struct, which is a UAPI contract violation that must be fixed before merge.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Claude review: drm/xe/xe_pagefault: Disallow writes to read-only VMAs
  2026-02-23 17:21 ` [PATCH v35 1/4] drm/xe/xe_pagefault: Disallow writes to read-only VMAs Jonathan Cavitt
@ 2026-02-24  0:00   ` Claude Code Review Bot
  0 siblings, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-02-24  0:00 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

This is a straightforward bugfix. The check is placed correctly after the VMA lookup:

> +	if (xe_vma_read_only(vma) &&
> +	    pf->consumer.access_type != XE_PAGEFAULT_ACCESS_TYPE_READ) {
> +		err = -EPERM;
> +		goto unlock_vm;
> +	}

This correctly rejects both write and atomic access to read-only VMAs. The `goto unlock_vm` follows the established error path in `xe_pagefault_service`. The `Fixes:` tag is appropriate. No issues found.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Claude review: drm/xe/uapi: Define drm_xe_vm_get_property
  2026-02-23 17:21 ` [PATCH v35 2/4] drm/xe/uapi: Define drm_xe_vm_get_property Jonathan Cavitt
@ 2026-02-24  0:00   ` Claude Code Review Bot
  0 siblings, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-02-24  0:00 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

Several UAPI concerns:

**Naming convention violation.** Every other UAPI struct in xe_drm.h uses the `drm_xe_` prefix: `drm_xe_vm_create`, `drm_xe_vm_bind`, `drm_xe_exec_queue_create`, etc. The fault struct is named just `xe_vm_fault`:

> +/** struct xe_vm_fault - Describes faults for %DRM_XE_VM_GET_PROPERTY_FAULTS */
> +struct xe_vm_fault {

This should be `drm_xe_vm_fault` to be consistent with the rest of the header.

Similarly, all defines in xe_drm.h use `DRM_XE_` prefixes, but the fault-related defines use bare `FAULT_` prefixes:

> +#define FAULT_ACCESS_TYPE_READ		0
> +#define FAULT_ACCESS_TYPE_WRITE		1
> +#define FAULT_ACCESS_TYPE_ATOMIC	2
> ...
> +#define FAULT_TYPE_NOT_PRESENT		0
> +#define FAULT_LEVEL_PTE		0

These are in the global UAPI namespace. They should be `DRM_XE_FAULT_ACCESS_TYPE_READ`, `DRM_XE_FAULT_TYPE_NOT_PRESENT`, etc., to avoid namespace collisions with other subsystems.

**Documentation says `-EINVAL` for copy failures.** The doc says "...or -EINVAL if the IOCTL fails for any other reason, such as... if the property data could not be copied to the memory allocated to the data member." But the implementation returns `-EFAULT` for copy_to_user failures (which is the correct return code). The documentation should be updated to match.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Claude review: drm/xe/xe_vm: Add per VM fault info
  2026-02-23 17:21 ` [PATCH v35 3/4] drm/xe/xe_vm: Add per VM fault info Jonathan Cavitt
@ 2026-02-24  0:00   ` Claude Code Review Bot
  0 siblings, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-02-24  0:00 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**Build-breaking bug.** The `xe_vm_add_fault_entry_pf` function references `pf->consumer.fault_type_level` and uses `FIELD_GET` with `XE_PAGEFAULT_TYPE_MASK` and `XE_PAGEFAULT_LEVEL_MASK`:

> +	e->fault_type = FIELD_GET(XE_PAGEFAULT_TYPE_MASK,
> +				  pf->consumer.fault_type_level),
> +	e->fault_level = FIELD_GET(XE_PAGEFAULT_LEVEL_MASK,
> +				   pf->consumer.fault_type_level),

The `struct xe_pagefault` has separate `fault_type` and `fault_level` fields (both `u8`), not a combined `fault_type_level` field. Neither `XE_PAGEFAULT_TYPE_MASK` nor `XE_PAGEFAULT_LEVEL_MASK` is defined anywhere in the tree. This code will not compile. It appears to be based on an older version of the `xe_pagefault` struct. The fix is straightforward:

```c
e->fault_type = pf->consumer.fault_type;
e->fault_level = pf->consumer.fault_level;
```

**Comma instead of semicolon.** Those same two lines end with commas instead of semicolons. Due to C's comma operator, the two assignment expressions and the subsequent `list_add_tail` call end up chained into a single expression-statement. While this would execute correctly (all three expressions evaluate left to right), it is clearly unintentional and should use semicolons.

**Allocation under potential pressure.** The function calls `kzalloc(sizeof(*e), GFP_KERNEL)` from within `xe_pagefault_queue_work`, a workqueue handler that processes page faults. If the system is under memory pressure (which is exactly when page faults are likely), this allocation could sleep and delay fault processing. Worth considering whether `GFP_NOWAIT` or a pre-allocated pool would be more appropriate here, though this is a design consideration rather than a bug.

**Race window on VM close.** `xe_pagefault_save_to_vm` does `xe_vm_get(vm)` under `xe->usm.lock`, then calls `xe_vm_add_fault_entry_pf`, then `xe_vm_put`. Meanwhile, `xe_vm_close_and_put` calls `xe_vm_clear_fault_entries` after releasing `xe->usm.lock`. The per-fault spinlock (`vm->faults.lock`) protects the list operations, so no corruption occurs, but a fault could be added to a VM that is in the process of being torn down, only to be immediately freed by `xe_vm_clear_fault_entries`. This is harmless but worth noting.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Claude review: drm/xe/xe_vm: Implement xe_vm_get_property_ioctl
  2026-02-23 17:21 ` [PATCH v35 4/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
@ 2026-02-24  0:00   ` Claude Code Review Bot
  0 siblings, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-02-24  0:00 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**Missing UAPI field validation.** The ioctl handler validates `reserved[]` but does not check `pad` or `extensions`:

> +	if (XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1] ||
> +			     args->reserved[2]))
> +		return -EINVAL;

Both `pad` (marked MBZ in the UAPI struct) and `extensions` must be validated to be zero for future extensibility. Compare with `xe_vm_create_ioctl` and other xe ioctl handlers which all validate their `extensions` field. This should be:

```c
if (XE_IOCTL_DBG(xe, args->extensions))
    return -EINVAL;
if (XE_IOCTL_DBG(xe, args->pad || args->reserved[0] ||
                 args->reserved[1] || args->reserved[2]))
    return -EINVAL;
```

Failing to reject non-zero `extensions` now means the field cannot be used for extensions in the future without breaking backward compatibility with applications that happen to pass garbage in that field today.

**TOCTOU between size check and fill.** In `xe_vm_get_property_helper`, the faults lock is acquired to read `vm->faults.len`, then released, then `fill_faults` acquires the lock again:

> +		spin_lock(&vm->faults.lock);
> +		size = size_mul(sizeof(struct xe_vm_fault), vm->faults.len);
> +		spin_unlock(&vm->faults.lock);
> +
> +		if (!args->size) {
> +			args->size = size;
> +			return 0;
> +		}
> +
> +		if (args->size > size || args->size % sizeof(struct xe_vm_fault))
> +			return -EINVAL;
> +
> +		return fill_faults(vm, args);

Since faults can only be added (not removed except during VM teardown), and `fill_faults` caps its iteration at `count = args->size / entry_size`, this is safe in practice: there will always be at least `count` entries by the time `fill_faults` runs. However, if `xe_vm_close_and_put` runs between the size check and `fill_faults`, the list could be cleared, resulting in `fill_faults` copying fewer entries than expected (the rest being zero-filled from `kcalloc`). The user would receive zero-filled fault entries with no way to distinguish them from a legitimate fault at address 0. Documenting this race or returning the actual number of entries copied would be an improvement.

**No way to report actual count.** Related to the above, the ioctl provides no mechanism for the kernel to tell userspace how many fault entries were actually copied. If the user queries the size, allocates memory, then calls again, new faults may have been added in between. The user gets back exactly the number they requested, but has no way to know if there are additional faults available. For a query interface, a pattern like "return the total available count in a field so the user can re-query" would be more useful.

**`xe_vm_fault` reserved fields not validated on read-back.** The `struct xe_vm_fault` has `pad` and `reserved` MBZ fields, but since these are output-only structs (kernel writes them, user reads them), validation isn't needed. The kernel correctly zero-fills them via `kcalloc` and zero-initialized `fault_entry`. This is fine.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Claude review: drm/xe/uapi: Define drm_xe_vm_get_property
  2026-03-06 15:55 ` [PATCH v36 2/4] drm/xe/uapi: Define drm_xe_vm_get_property Jonathan Cavitt
@ 2026-03-08 22:42   ` Claude Code Review Bot
  0 siblings, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-03-08 22:42 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

Several uAPI concerns:

1. **Namespace violation for `struct xe_vm_fault`:** The struct is in the DRM uAPI header but uses the `xe_vm_fault` name without the `drm_` prefix. Every other uAPI struct in `xe_drm.h` uses `drm_xe_*` naming. This should be `struct drm_xe_vm_fault`.

2. **Non-namespaced defines:** The defines `FAULT_ACCESS_TYPE_*`, `FAULT_TYPE_*`, and `FAULT_LEVEL_*` are very generic names in a uAPI header. They should be namespaced, e.g., `DRM_XE_FAULT_ACCESS_TYPE_READ`. These are likely to collide with other kernel or userspace defines.

3. **`address_precision` semantics unclear:** The field is documented as "Precision of faulted address" but the meaning isn't specified. From patch 3, it's always `SZ_4K`. The doc should clarify what the value represents (e.g., "the granularity in bytes of the faulted address").

4. **The data/value union:** The union of `data` and `value` shares the same 8 bytes. When `size == 0`, the ioctl fills in `size` but also potentially returns a `value`. This dual-use is documented but could be confusing. Currently only `DRM_XE_VM_GET_PROPERTY_FAULTS` exists which never uses `value`, so the `value` field is dead code.

5. **Missing `pad` in `struct xe_vm_fault`:** There's a single `__u8 pad` after three `__u8` fields. The struct layout is: `__u64 address` (8), `__u32 address_precision` (4), three `__u8` fields (3), `__u8 pad` (1), `__u64 reserved[4]` (32) = 48 bytes total. This is correct for alignment but the `reserved` array at 4 entries of `__u64` seems very generous for future expansion.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Claude review: drm/xe/uapi: Define drm_xe_vm_get_property
  2026-03-10 14:49 ` [PATCH v37 2/4] drm/xe/uapi: Define drm_xe_vm_get_property Jonathan Cavitt
@ 2026-03-11  3:12   ` Claude Code Review Bot
  0 siblings, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-03-11  3:12 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**uAPI naming concern: `struct xe_vm_fault` lacks `drm_xe_` prefix.** All other xe uAPI structs use `drm_xe_` prefix (e.g., `drm_xe_vm_bind`, `drm_xe_vm_get_property`). The `struct xe_vm_fault` should be `struct drm_xe_vm_fault` for consistency. Same for the `FAULT_*` defines — they are very generic namespace polluters and should have a `DRM_XE_` or similar prefix (e.g., `DRM_XE_FAULT_ACCESS_TYPE_READ`).

**Struct padding/alignment concern.** In `struct xe_vm_fault`:
```c
__u64 address;
__u32 address_precision;
__u8 access_type;
__u8 fault_type;
__u8 fault_level;
__u8 pad;
__u64 reserved[4];
```
The layout is fine from a padding perspective (4+1+1+1+1 = 8, aligns to the next `__u64`). Total size is 48 bytes, which is reasonable.

**The `data`/`value` union semantics are unusual.** The documentation says if `size` is 0, the driver fills in the required size. But there's also a `value` field for "scalar queries" — yet no scalar property is defined. This adds complexity for a hypothetical future use case. Not a blocker, but worth questioning whether the union is needed in v1 of this uAPI.

**Missing `pad` validation.** The ioctl handler in patch 4 checks `reserved[0..2]` but does not check `args->pad`. For uAPI, MBZ fields should be validated to ensure forward compatibility.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Claude review: drm/xe/uapi: Define drm_xe_vm_get_property
  2026-03-20 18:35 ` [PATCH v38 2/4] drm/xe/uapi: Define drm_xe_vm_get_property Jonathan Cavitt
@ 2026-03-21 17:27   ` Claude Code Review Bot
  0 siblings, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-03-21 17:27 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**uAPI naming convention issue**: The `struct xe_vm_fault` and defines like `FAULT_ACCESS_TYPE_READ`, `FAULT_TYPE_NOT_PRESENT`, `FAULT_LEVEL_PTE` don't follow the `drm_xe_` / `DRM_XE_` naming convention used everywhere else in `xe_drm.h`. These generic names risk collisions with other kernel/userspace headers. They should be:
- `struct drm_xe_vm_fault`
- `DRM_XE_VM_FAULT_ACCESS_TYPE_READ` (etc.)
- `DRM_XE_VM_FAULT_TYPE_NOT_PRESENT` (etc.)
- `DRM_XE_VM_FAULT_LEVEL_PTE` (etc.)

**Missing `pad` MBZ validation in struct docs**: The `pad` field in `struct xe_vm_fault` is documented as MBZ. The ioctl handler (patch 4) never validates that `pad` is zero on output, nor does it validate `extensions` in `struct drm_xe_vm_get_property`. These should be checked to preserve forward compatibility.

**Struct size**: `struct xe_vm_fault` is `8 + 4 + 1 + 1 + 1 + 1 + 32 = 48` bytes, which is nicely aligned. The `reserved[4]` provides 32 bytes for future expansion which seems generous.

The two-phase size query / data fetch pattern is a reasonable approach for the ioctl design.

---

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Claude review: drm/xe/uapi: Define drm_xe_vm_get_property
  2026-03-24 15:29 ` [PATCH v39 2/4] drm/xe/uapi: Define drm_xe_vm_get_property Jonathan Cavitt
@ 2026-03-24 20:46   ` Claude Code Review Bot
  0 siblings, 0 replies; 14+ messages in thread
From: Claude Code Review Bot @ 2026-03-24 20:46 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

**uAPI naming concern:** The `struct xe_vm_fault` does not follow the `drm_xe_` prefix convention used by all other uAPI structs in this header (e.g., `drm_xe_vm_create`, `drm_xe_vm_bind`). Similarly, the `FAULT_ACCESS_TYPE_*`, `FAULT_TYPE_*`, and `FAULT_LEVEL_*` defines lack a `DRM_XE_` prefix, which risks namespace collisions. Every other define in this header uses `DRM_XE_` prefixing. This is a uAPI that will be set in stone once merged — the naming should be consistent.

**Struct padding/alignment:** `struct xe_vm_fault` has `__u64 address`, then `__u32 address_precision`, then three `__u8` fields, then one `__u8 pad`, then `__u64 reserved[4]`. Between `pad` and `reserved` there are 0 bytes of implicit padding since the three u8 + pad = 4 bytes, and address_precision (u32) + 4 bytes of u8s = 8 bytes after the u64, so alignment is fine. Good.

**`data`/`value` union:** The union of `data` and `value` in `drm_xe_vm_get_property` is reasonable for supporting both pointer and scalar property types, though currently only `data` (pointer) is used.

**Size field is `__u32`:** The `size` field is `__u32`, which limits the maximum reportable faults. With `sizeof(struct xe_vm_fault)` being 48 bytes and MAX_FAULTS_SAVED_PER_VM=50, the max is 2400 bytes — well within `__u32` range. Fine, but somewhat inconsistent with other ioctls that use `__u64` for sizes.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2026-03-24 20:46 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2026-02-23 17:21 [PATCH v35 0/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
2026-02-23 17:21 ` [PATCH v35 1/4] drm/xe/xe_pagefault: Disallow writes to read-only VMAs Jonathan Cavitt
2026-02-24  0:00   ` Claude review: " Claude Code Review Bot
2026-02-23 17:21 ` [PATCH v35 2/4] drm/xe/uapi: Define drm_xe_vm_get_property Jonathan Cavitt
2026-02-24  0:00   ` Claude review: " Claude Code Review Bot
2026-02-23 17:21 ` [PATCH v35 3/4] drm/xe/xe_vm: Add per VM fault info Jonathan Cavitt
2026-02-24  0:00   ` Claude review: " Claude Code Review Bot
2026-02-23 17:21 ` [PATCH v35 4/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
2026-02-24  0:00   ` Claude review: " Claude Code Review Bot
2026-02-23 23:59 ` Claude Code Review Bot
  -- strict thread matches above, loose matches on Subject: below --
2026-03-06 15:55 [PATCH v36 0/4] " Jonathan Cavitt
2026-03-06 15:55 ` [PATCH v36 2/4] drm/xe/uapi: Define drm_xe_vm_get_property Jonathan Cavitt
2026-03-08 22:42   ` Claude review: " Claude Code Review Bot
2026-03-10 14:49 [PATCH v37 0/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
2026-03-10 14:49 ` [PATCH v37 2/4] drm/xe/uapi: Define drm_xe_vm_get_property Jonathan Cavitt
2026-03-11  3:12   ` Claude review: " Claude Code Review Bot
2026-03-20 18:35 [PATCH v38 0/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
2026-03-20 18:35 ` [PATCH v38 2/4] drm/xe/uapi: Define drm_xe_vm_get_property Jonathan Cavitt
2026-03-21 17:27   ` Claude review: " Claude Code Review Bot
2026-03-24 15:29 [PATCH v39 0/4] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
2026-03-24 15:29 ` [PATCH v39 2/4] drm/xe/uapi: Define drm_xe_vm_get_property Jonathan Cavitt
2026-03-24 20:46   ` Claude review: " Claude Code Review Bot

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox