From: Lizhi Hou <lizhi.hou@amd.com>
To: <ogabbay@kernel.org>, <quic_jhugo@quicinc.com>,
<dri-devel@lists.freedesktop.org>, <mario.limonciello@amd.com>,
<maciej.falkowski@linux.intel.com>
Cc: Lizhi Hou <lizhi.hou@amd.com>, <linux-kernel@vger.kernel.org>,
<max.zhen@amd.com>, <sonal.santan@amd.com>
Subject: [PATCH V1] accel/amdxdna: Expose per-client BO memory usage via fdinfo
Date: Thu, 9 Apr 2026 08:22:59 -0700
Message-ID: <20260409152259.176883-1-lizhi.hou@amd.com>

Implement amdxdna_show_fdinfo() to report per-client memory usage,
including the following driver-specific memory stats:

- heap allocations (amdxdna_accel_driver-heap-alloc)
- internal BO allocations (amdxdna_accel_driver-internal-alloc)
- external BO allocations (amdxdna_accel_driver-external-alloc)

Hook the implementation into the DRM fdinfo infrastructure via
drm_driver.show_fdinfo, while continuing to expose the standard DRM
memory stats through drm_show_memory_stats().

This improves the observability of per-process memory usage and aligns
with the fdinfo reporting mechanisms used by other drivers.
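For reference, a userspace consumer can read these key/value pairs from
/proc/<pid>/fdinfo/<fd>. The sketch below is illustrative only, not part
of the patch; the sample text and key names are copied from the example
output in the documentation hunk, and the unit handling assumes the
"<value> [KiB|MiB|GiB]" format used by the DRM fdinfo size helpers:

```python
# Illustrative parser for amdxdna fdinfo output (not part of the patch).
# Key names are taken from the documentation example in this series.
SAMPLE = """\
drm-driver:\tamdxdna_accel_driver
drm-client-id:\t3219
drm-pdev:\t0000:c5:00.1
amdxdna_accel_driver-heap-alloc:\t60 KiB
amdxdna_accel_driver-internal-alloc:\t67588 KiB
amdxdna_accel_driver-external-alloc:\t0
drm-total-memory:\t67632 KiB
"""

# Assumed unit suffixes, following the "KiB/MiB/GiB" style shown above.
UNITS = {"": 1, "KiB": 1024, "MiB": 1024 ** 2, "GiB": 1024 ** 3}

def parse_fdinfo(text):
    """Map fdinfo keys to byte counts (sizes) or raw strings (other fields)."""
    stats = {}
    for line in text.splitlines():
        key, _, value = line.partition(":")
        parts = value.strip().split()
        if parts and parts[0].isdigit():
            # Numeric value, optionally followed by a unit suffix.
            unit = parts[1] if len(parts) > 1 else ""
            stats[key] = int(parts[0]) * UNITS.get(unit, 1)
        else:
            stats[key] = value.strip()
    return stats

stats = parse_fdinfo(SAMPLE)
# stats["amdxdna_accel_driver-heap-alloc"] == 61440 (60 KiB in bytes)
```

In practice the text would come from reading /proc/<pid>/fdinfo/<fd> for an
fd whose drm-driver field matches "amdxdna_accel_driver".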
Signed-off-by: Max Zhen <max.zhen@amd.com>
Signed-off-by: Lizhi Hou <lizhi.hou@amd.com>
---
Documentation/accel/amdxdna/amdnpu.rst | 25 +++++++++++++++++++
Documentation/gpu/drm-usage-stats.rst | 1 +
drivers/accel/amdxdna/amdxdna_pci_drv.c | 32 ++++++++++++++++++++++++-
3 files changed, 57 insertions(+), 1 deletion(-)
diff --git a/Documentation/accel/amdxdna/amdnpu.rst b/Documentation/accel/amdxdna/amdnpu.rst
index 42e54904f9a8..064973bf4893 100644
--- a/Documentation/accel/amdxdna/amdnpu.rst
+++ b/Documentation/accel/amdxdna/amdnpu.rst
@@ -270,6 +270,31 @@ MERT can report various kinds of telemetry information like the following:
 * Deep Sleep counter
 * etc.
 
+.. _amdxdna-usage-stats:
+
+Amdxdna DRM client usage stats implementation
+=============================================
+
+The amdxdna driver implements the DRM client usage stats specification as
+documented in :ref:`drm-client-usage-stats`.
+
+Example of the output showing the implemented key value pairs:
+
+::
+
+ pos: 0
+ flags: 0100002
+ mnt_id: 29
+ ino: 939
+ drm-driver: amdxdna_accel_driver
+ drm-client-id: 3219
+ drm-pdev: 0000:c5:00.1
+ amdxdna_accel_driver-heap-alloc: 60 KiB
+ amdxdna_accel_driver-internal-alloc: 67588 KiB
+ amdxdna_accel_driver-external-alloc: 0
+ drm-total-memory: 67632 KiB
+ drm-shared-memory: 0
+
 
 References
 ==========
diff --git a/Documentation/gpu/drm-usage-stats.rst b/Documentation/gpu/drm-usage-stats.rst
index 63d6b2abe5ad..24d3012ca7a6 100644
--- a/Documentation/gpu/drm-usage-stats.rst
+++ b/Documentation/gpu/drm-usage-stats.rst
@@ -215,3 +215,4 @@ Driver specific implementations
 * :ref:`panfrost-usage-stats`
 * :ref:`panthor-usage-stats`
 * :ref:`xe-usage-stats`
+* :ref:`amdxdna-usage-stats`
diff --git a/drivers/accel/amdxdna/amdxdna_pci_drv.c b/drivers/accel/amdxdna/amdxdna_pci_drv.c
index 09d7d88bb6f1..21eddfc538d0 100644
--- a/drivers/accel/amdxdna/amdxdna_pci_drv.c
+++ b/drivers/accel/amdxdna/amdxdna_pci_drv.c
@@ -226,6 +226,35 @@ static const struct drm_ioctl_desc amdxdna_drm_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(AMDXDNA_SET_STATE, amdxdna_drm_set_state_ioctl, DRM_ROOT_ONLY),
 };
 
+static void amdxdna_show_fdinfo(struct drm_printer *p, struct drm_file *filp)
+{
+	struct amdxdna_client *client = filp->driver_priv;
+	size_t heap_usage, external_usage, internal_usage;
+	char *drv_name = filp->minor->dev->driver->name;
+
+	mutex_lock(&client->mm_lock);
+
+	heap_usage = client->heap_usage;
+	internal_usage = client->total_int_bo_usage;
+	external_usage = client->total_bo_usage - internal_usage;
+
+	mutex_unlock(&client->mm_lock);
+
+	/*
+	 * Driver-specific BO memory usage stats:
+	 * total allocated memory = internal-alloc + external-alloc
+	 */
+	drm_fdinfo_print_size(p, drv_name, "heap", "alloc", heap_usage);
+	drm_fdinfo_print_size(p, drv_name, "internal", "alloc", internal_usage);
+	drm_fdinfo_print_size(p, drv_name, "external", "alloc", external_usage);
+	/*
+	 * Standard DRM BO memory stats:
+	 * drm-total-memory counts both DEV BOs and HEAP BOs;
+	 * drm-shared-memory counts imported BOs.
+	 */
+	drm_show_memory_stats(p, filp);
+}
+
 static const struct file_operations amdxdna_fops = {
 	.owner = THIS_MODULE,
 	.open = accel_open,
@@ -236,6 +265,7 @@ static const struct file_operations amdxdna_fops = {
 	.read = drm_read,
 	.llseek = noop_llseek,
 	.mmap = drm_gem_mmap,
+	.show_fdinfo = drm_show_fdinfo,
 	.fop_flags = FOP_UNSIGNED_OFFSET,
 };
@@ -251,7 +281,7 @@ const struct drm_driver amdxdna_drm_drv = {
 	.postclose = amdxdna_drm_close,
 	.ioctls = amdxdna_drm_ioctls,
 	.num_ioctls = ARRAY_SIZE(amdxdna_drm_ioctls),
-
+	.show_fdinfo = amdxdna_show_fdinfo,
 	.gem_create_object = amdxdna_gem_create_shmem_object_cb,
 	.gem_prime_import = amdxdna_gem_prime_import,
 };
--
2.34.1