From: Jonathan Cavitt <jonathan.cavitt@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: saurabhg.gupta@intel.com, alex.zuo@intel.com, jonathan.cavitt@intel.com,
    joonas.lahtinen@linux.intel.com, matthew.brost@intel.com,
    jianxun.zhang@intel.com, shuicheng.lin@intel.com,
    dri-devel@lists.freedesktop.org, Michal.Wajdeczko@intel.com,
    michal.mrozek@intel.com, raag.jadav@intel.com, ivan.briano@intel.com,
    matthew.auld@intel.com, dafna.hirschfeld@intel.com
Subject: [PATCH v35 3/4] drm/xe/xe_vm: Add per VM fault info
Date: Mon, 23 Feb 2026 17:21:23 +0000
Message-ID: <20260223172120.98961-9-jonathan.cavitt@intel.com>
In-Reply-To: <20260223172120.98961-6-jonathan.cavitt@intel.com>
References: <20260223172120.98961-6-jonathan.cavitt@intel.com>

Add additional information to each VM so that it can report up to the
first 50 faults it has seen.

Only pagefaults are saved this way currently, though in the future all
faults should be tracked by the VM for reporting. Additionally, of the
pagefaults seen, only failed pagefaults are saved this way, as
successful pagefaults recover silently and do not need to be reported
to userspace.
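In outline, the capture path this patch adds is as follows (a condensed
sketch of the code in the diff below, not a literal excerpt):

	/* xe_pagefault_queue_work(), when servicing a fault fails: */
	err = xe_pagefault_service(&pf);
	if (err) {
		xe_pagefault_print(&pf);
		/*
		 * Look up the VM by ASID and append an xe_vm_fault_entry,
		 * capped at MAX_FAULTS_SAVED_PER_VM (50) entries per VM.
		 */
		xe_pagefault_save_to_vm(gt_to_xe(pf.gt), &pf);
	}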
v2:
- Free vm after use (Shuicheng)
- Compress pf copy logic (Shuicheng)
- Update fault_unsuccessful before storing (Shuicheng)
- Fix old struct name in comments (Shuicheng)
- Keep first 50 pagefaults instead of last 50 (Jianxun)

v3:
- Avoid unnecessary execution by checking MAX_PFS earlier (jcavitt)
- Fix double-locking error (jcavitt)
- Assert kmemdup is successful (Shuicheng)

v4:
- Rename xe_vm.pfs to xe_vm.faults (jcavitt)
- Store fault data and not pagefault in xe_vm faults list (jcavitt)
- Store address, address type, and address precision per fault (jcavitt)
- Store engine class and instance data per fault (Jianxun)
- Add and fix kernel docs (Michal W)
- Properly handle kzalloc error (Michal W)
- s/MAX_PFS/MAX_FAULTS_SAVED_PER_VM (Michal W)
- Store fault level per fault (Michal M)

v5:
- Store fault and access type instead of address type (Jianxun)

v6:
- Store pagefaults in non-fault-mode VMs as well (Jianxun)

v7:
- Fix kernel docs and comments (Michal W)

v8:
- Fix double-locking issue (Jianxun)

v9:
- Do not report faults from reserved engines (Jianxun)

v10:
- Remove engine class and instance (Ivan)

v11:
- Perform kzalloc outside of lock (Auld)

v12:
- Fix xe_vm_fault_entry kernel docs (Shuicheng)

v13:
- Rebase and refactor (jcavitt)

v14:
- Correctly ignore fault mode in save_pagefault_to_vm (jcavitt)

v15:
- s/save_pagefault_to_vm/xe_pagefault_save_to_vm (Matt Brost)
- Use guard instead of spin_lock/unlock (Matt Brost)
- GT was added to the xe_pagefault struct; use xe_gt_hw_engine instead
  of creating a new helper function (Matt Brost)

v16:
- Set address precision programmatically (Matt Brost)

v17:
- Set address precision to fixed value (Matt Brost)

uAPI: https://github.com/intel/compute-runtime/pull/878

Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Cc: Shuicheng Lin <shuicheng.lin@intel.com>
Cc: Jianxun Zhang <jianxun.zhang@intel.com>
Cc: Michal Wajdeczko <Michal.Wajdeczko@intel.com>
Cc: Michal Mrozek <michal.mrozek@intel.com>
Cc: Ivan Briano <ivan.briano@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_pagefault.c | 26 +++++++++++
 drivers/gpu/drm/xe/xe_vm.c        | 74 +++++++++++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_vm.h        |  9 ++++
 drivers/gpu/drm/xe/xe_vm_types.h  | 29 ++++++++++++
 4 files changed, 138 insertions(+)
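A note on the shape of xe_vm_add_fault_entry_pf() in the diff below
(placed here, after the '---' cut line, so it stays out of the commit
message): the entry is allocated before the spinlock is taken, because
kzalloc(GFP_KERNEL) may sleep (see v11 above), and guard(spinlock) from
linux/cleanup.h releases the lock automatically on every return path
(see v15). A condensed sketch of that shape, not a literal excerpt:

	e = kzalloc(sizeof(*e), GFP_KERNEL);	/* may sleep: allocate outside the lock */
	if (!e)
		return;

	guard(spinlock)(&vm->faults.lock);	/* auto-released on all return paths */

	if (vm->faults.len >= MAX_FAULTS_SAVED_PER_VM) {
		kfree(e);	/* list already holds the first 50 faults; drop this one */
		return;
	}

	/* ... fill in *e and list_add_tail() it under the lock ... */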
diff --git a/drivers/gpu/drm/xe/xe_pagefault.c b/drivers/gpu/drm/xe/xe_pagefault.c
index 0281b5b6d4ab..4bb1a3ff13e4 100644
--- a/drivers/gpu/drm/xe/xe_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_pagefault.c
@@ -249,6 +249,31 @@ static void xe_pagefault_print(struct xe_pagefault *pf)
 		  pf->consumer.engine_instance);
 }
 
+static void xe_pagefault_save_to_vm(struct xe_device *xe, struct xe_pagefault *pf)
+{
+	struct xe_vm *vm;
+
+	/*
+	 * The pagefault may be associated with a VM that is not in fault
+	 * mode. Perform the asid_to_vm lookup by hand so that the VM is
+	 * returned even when it is not in fault mode.
+	 */
+	down_read(&xe->usm.lock);
+	vm = xa_load(&xe->usm.asid_to_vm, pf->consumer.asid);
+	if (vm)
+		xe_vm_get(vm);
+	else
+		vm = ERR_PTR(-EINVAL);
+	up_read(&xe->usm.lock);
+
+	if (IS_ERR(vm))
+		return;
+
+	xe_vm_add_fault_entry_pf(vm, pf);
+
+	xe_vm_put(vm);
+}
+
 static void xe_pagefault_queue_work(struct work_struct *w)
 {
 	struct xe_pagefault_queue *pf_queue =
@@ -268,6 +293,7 @@ static void xe_pagefault_queue_work(struct work_struct *w)
 		err = xe_pagefault_service(&pf);
 		if (err) {
 			xe_pagefault_print(&pf);
+			xe_pagefault_save_to_vm(gt_to_xe(pf.gt), &pf);
 			xe_gt_info(pf.gt, "Fault response: Unsuccessful %pe\n",
 				   ERR_PTR(err));
 		}
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 5adabfd5dc30..e2f30c1c1669 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -27,6 +27,7 @@
 #include "xe_device.h"
 #include "xe_drm_client.h"
 #include "xe_exec_queue.h"
+#include "xe_gt.h"
 #include "xe_migrate.h"
 #include "xe_pat.h"
 #include "xe_pm.h"
@@ -577,6 +578,74 @@ static void preempt_rebind_work_func(struct work_struct *w)
 	trace_xe_vm_rebind_worker_exit(vm);
 }
 
+/**
+ * xe_vm_add_fault_entry_pf() - Add pagefault to vm fault list
+ * @vm: The VM.
+ * @pf: The pagefault.
+ *
+ * This function takes the data from the pagefault @pf and saves it to
+ * @vm->faults.list.
+ *
+ * The function exits silently if the list is full, and warns if memory
+ * for the entry could not be allocated.
+ */
+void xe_vm_add_fault_entry_pf(struct xe_vm *vm, struct xe_pagefault *pf)
+{
+	struct xe_vm_fault_entry *e = NULL;
+	struct xe_hw_engine *hwe;
+
+	/* Do not report faults on reserved engines */
+	hwe = xe_gt_hw_engine(pf->gt, pf->consumer.engine_class,
+			      pf->consumer.engine_instance, false);
+	if (!hwe || xe_hw_engine_is_reserved(hwe))
+		return;
+
+	e = kzalloc(sizeof(*e), GFP_KERNEL);
+	if (!e) {
+		drm_warn(&vm->xe->drm,
+			 "Could not allocate memory for fault!\n");
+		return;
+	}
+
+	guard(spinlock)(&vm->faults.lock);
+
+	/*
+	 * Limit the number of faults in the fault list to prevent
+	 * memory overuse.
+	 */
+	if (vm->faults.len >= MAX_FAULTS_SAVED_PER_VM) {
+		kfree(e);
+		return;
+	}
+
+	e->address = pf->consumer.page_addr;
+	/*
+	 * TODO:
+	 * Address precision is currently always SZ_4K, but this may change
+	 * in the future.
+	 */
+	e->address_precision = SZ_4K;
+	e->access_type = pf->consumer.access_type;
+	e->fault_type = FIELD_GET(XE_PAGEFAULT_TYPE_MASK,
+				  pf->consumer.fault_type_level);
+	e->fault_level = FIELD_GET(XE_PAGEFAULT_LEVEL_MASK,
+				   pf->consumer.fault_type_level);
+
+	list_add_tail(&e->list, &vm->faults.list);
+	vm->faults.len++;
+}
+
+static void xe_vm_clear_fault_entries(struct xe_vm *vm)
+{
+	struct xe_vm_fault_entry *e, *tmp;
+
+	guard(spinlock)(&vm->faults.lock);
+	list_for_each_entry_safe(e, tmp, &vm->faults.list, list) {
+		list_del(&e->list);
+		kfree(e);
+	}
+	vm->faults.len = 0;
+}
+
 static int xe_vma_ops_alloc(struct xe_vma_ops *vops, bool array_of_binds)
 {
 	int i;
@@ -1538,6 +1607,9 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags, struct xe_file *xef)
 	INIT_LIST_HEAD(&vm->userptr.invalidated);
 	spin_lock_init(&vm->userptr.invalidated_lock);
 
+	INIT_LIST_HEAD(&vm->faults.list);
+	spin_lock_init(&vm->faults.lock);
+
 	ttm_lru_bulk_move_init(&vm->lru_bulk_move);
 
 	INIT_WORK(&vm->destroy_work, vm_destroy_work_func);
@@ -1854,6 +1926,8 @@ void xe_vm_close_and_put(struct xe_vm *vm)
 	}
 	up_write(&xe->usm.lock);
 
+	xe_vm_clear_fault_entries(vm);
+
 	for_each_tile(tile, xe, id)
 		xe_range_fence_tree_fini(&vm->rftree[id]);
 
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 288115c7844a..fd3fc60f92bb 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -12,6 +12,12 @@
 #include "xe_map.h"
 #include "xe_vm_types.h"
 
+/**
+ * MAX_FAULTS_SAVED_PER_VM - Maximum number of faults each vm can store
+ * before future faults are discarded to prevent memory overuse
+ */
+#define MAX_FAULTS_SAVED_PER_VM	50
+
 struct drm_device;
 struct drm_printer;
 struct drm_file;
@@ -22,6 +28,7 @@ struct dma_fence;
 
 struct xe_exec_queue;
 struct xe_file;
+struct xe_pagefault;
 struct xe_sync_entry;
 struct xe_svm_range;
 struct drm_exec;
@@ -312,6 +319,8 @@ void xe_vm_snapshot_capture_delayed(struct xe_vm_snapshot *snap);
 void xe_vm_snapshot_print(struct xe_vm_snapshot *snap, struct drm_printer *p);
 void xe_vm_snapshot_free(struct xe_vm_snapshot *snap);
 
+void xe_vm_add_fault_entry_pf(struct xe_vm *vm, struct xe_pagefault *pf);
+
 /**
  * xe_vm_set_validating() - Register this task as currently making bos resident
  * @allow_res_evict: Allow eviction of buffer objects bound to @vm when
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 43203e90ee3e..07546953cb19 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -23,6 +23,7 @@
 struct drm_pagemap;
 
 struct xe_bo;
+struct xe_pagefault;
 struct xe_svm_range;
 struct xe_sync_entry;
 struct xe_user_fence;
@@ -175,6 +176,24 @@ struct xe_userptr_vma {
 
 struct xe_device;
 
+/**
+ * struct xe_vm_fault_entry - Elements of vm->faults.list
+ * @list: link into @xe_vm.faults.list
+ * @address: address of the fault
+ * @address_precision: precision of the faulted address
+ * @access_type: type of address access that resulted in the fault
+ * @fault_type: type of fault reported
+ * @fault_level: fault level of the fault
+ */
+struct xe_vm_fault_entry {
+	struct list_head list;
+	u64 address;
+	u32 address_precision;
+	u8 access_type;
+	u8 fault_type;
+	u8 fault_level;
+};
+
 struct xe_vm {
 	/** @gpuvm: base GPUVM used to track VMAs */
 	struct drm_gpuvm gpuvm;
@@ -331,6 +350,16 @@ struct xe_vm {
 		bool capture_once;
 	} error_capture;
 
+	/** @faults: List of all faults associated with this VM */
+	struct {
+		/** @faults.lock: lock protecting @faults.list */
+		spinlock_t lock;
+		/** @faults.list: list of xe_vm_fault_entry entries */
+		struct list_head list;
+		/** @faults.len: length of @faults.list */
+		unsigned int len;
+	} faults;
+
 	/**
 	 * @validation: Validation data only valid with the vm resv held.
 	 * Note: This is really task state of the task holding the vm resv,
-- 
2.43.0
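A consumer of the saved entries (presumably the query uAPI that the
linked compute-runtime pull request exercises, wired up elsewhere in
this series) would walk vm->faults.list under vm->faults.lock. A
minimal hypothetical reader, not part of this patch, might look like:

	/* Hypothetical helper, for illustration only. */
	static void xe_vm_print_fault_entries(struct xe_vm *vm,
					      struct drm_printer *p)
	{
		struct xe_vm_fault_entry *e;

		guard(spinlock)(&vm->faults.lock);
		list_for_each_entry(e, &vm->faults.list, list)
			drm_printf(p, "fault: addr=%#llx precision=%u access=%u type=%u level=%u\n",
				   e->address, e->address_precision,
				   e->access_type, e->fault_type,
				   e->fault_level);
	}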