* Claude review: drm/msm: Add PERFCNTR_CONFIG ioctl
2026-04-20 22:25 [PATCH 00/13] " Rob Clark
2026-04-20 22:25 ` [PATCH 13/13] " Rob Clark
@ 2026-04-22 23:13 ` Claude Code Review Bot
1 sibling, 0 replies; 34+ messages in thread
From: Claude Code Review Bot @ 2026-04-22 23:13 UTC (permalink / raw)
To: dri-devel-reviews
Overall Series Review
Subject: drm/msm: Add PERFCNTR_CONFIG ioctl
Author: Rob Clark <robin.clark@oss.qualcomm.com>
Patches: 32
Reviewed: 2026-04-23T09:13:13.309094
---
This is a 13-patch series from Rob Clark adding a new `PERFCNTR_CONFIG` ioctl to the MSM DRM driver. The series removes the old debugfs-based performance counter infrastructure and replaces it with a proper ioctl-based system supporting two modes: (1) global counter collection via an fd-based stream (requires `perfmon_capable()`), and (2) per-context counter reservation for local (within `GEM_SUBMIT`) counter collection. The design is well thought out -- global counters allocate from highest to lowest to avoid conflicts with old userspace, the stream fd model is clean, and the UABI is reasonably future-proof via `group_stride`.
However, there are several bugs that need fixing before merging, including a **critical loop variable typo** that can cause an infinite loop/ringbuffer corruption, a **stack buffer overflow** from unchecked `copy_from_user` size, and **NULL pointer dereferences** on GPUs without perfcntr support.
Overall the architecture is sound. The series needs a respin to fix the bugs identified below.
---
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 34+ messages in thread
* Claude review: drm/msm: Add PERFCNTR_CONFIG ioctl
2026-04-20 22:25 ` [PATCH 13/13] " Rob Clark
@ 2026-04-22 23:13 ` Claude Code Review Bot
0 siblings, 0 replies; 34+ messages in thread
From: Claude Code Review Bot @ 2026-04-22 23:13 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
The main ioctl implementation. Well-structured with clear separation between validation, allocation, and setup phases. The fd-based stream with `DRM_IOW` (returning fd via ioctl return value) is a nice design choice, well-documented in the UAPI header.
**Bug -- stack buffer overflow via `group_stride`**:
```c
struct drm_msm_perfcntr_group g = {0};
void __user *userptr =
u64_to_user_ptr(args->groups + (i * args->group_stride));
if (copy_from_user(&g, userptr, args->group_stride))
return -EFAULT;
```
If `args->group_stride > sizeof(struct drm_msm_perfcntr_group)`, this `copy_from_user` writes past the end of the stack variable `g`. The `group_stride` field exists for future extensibility (to allow adding fields to the struct), but there's no check that it doesn't exceed `sizeof(g)`. Fix:
```c
if (copy_from_user(&g, userptr, min((size_t)args->group_stride, sizeof(g))))
```
**Missing validation on `bufsz_shift`**: There's no upper bound check on `args->bufsz_shift`. A malicious user could pass `bufsz_shift = 30` to try to allocate 1GB of kernel memory:
```c
void *buf __free(kfree) =
kmalloc(1 << args->bufsz_shift, GFP_KERNEL);
```
Add a reasonable upper bound (e.g., `if (args->bufsz_shift > 20)` for a 1MB cap), or at least use `kmalloc()` with `__GFP_NOWARN` to avoid log spam from failed huge allocations.
**Typo in commit message**: "on exist of IFPC" should be "on exit of IFPC".
**Typo in error message** (line 7030):
```c
return UERR(EBUSY, dev, "groups[%d]: to few counters available", i);
```
Should be "too few".
**`strncmp` for group name matching** in `get_group_idx()`:
```c
if (!strncmp(group->name, name, len))
```
where `len = sizeof(g.group_name)` which is 16. This compares at most 16 bytes, but if a group name is exactly 16 bytes (no null terminator), `strncmp` would match a prefix. Since group names are likely short (e.g., "CP", "SP"), this is unlikely to be a practical issue, but `strnlen` + exact match would be more robust.
**Lock ordering**: `get_available_counters()` acquires `gpu->dev->filelist_mutex` while `gpu->perfcntr_lock` is already held. Verify this doesn't conflict with any path that takes these locks in the opposite order (e.g., `drm_file` open/close paths that might touch perfcntr state).
**Race on stream teardown**: In `msm_perfcntrs_stream_release()`, between dropping `perfcntr_lock` and calling `cancel_work_sync(&stream->sel_work)`, `sel_worker` could be scheduled and run. The worker checks `stream != gpu->perfcntrs->stream` under lock, which would be true (since we set it to NULL), so it would bail out via `break`. This is safe -- good design with the documented comment.
**Missing `EPOLLERR`/`EPOLLHUP`**: The `poll` implementation only returns `EPOLLIN`. It might be useful to signal `EPOLLHUP` when the stream is being torn down, but this is a minor enhancement, not a bug.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 34+ messages in thread
* [PATCH v3 00/16] drm/msm: Add PERFCNTR_CONFIG ioctl
@ 2026-05-04 19:06 Rob Clark
2026-05-04 19:06 ` [PATCH v3 01/16] drm/msm: Remove obsolete perf infrastructure Rob Clark
` (15 more replies)
0 siblings, 16 replies; 34+ messages in thread
From: Rob Clark @ 2026-05-04 19:06 UTC (permalink / raw)
To: dri-devel
Cc: linux-arm-msm, freedreno, Akhil P Oommen, Rob Clark,
Abhinav Kumar, Bill Wendling, David Airlie, Dmitry Baryshkov,
Jessica Zhang, Justin Stitt, Konrad Dybcio, open list,
llvm@lists.linux.dev (open list:CLANG/LLVM BUILD SUPPORT:Keyword:\b(?i:clang|llvm)\b), Maarten Lankhorst <maarten.lankhorst@linux.intel.com>, Marijn Suijten,
Maxime Ripard, Nathan Chancellor, Nick Desaulniers, Sean Paul,
Simona Vetter, Thomas Zimmermann
Add a new PERFCNTR_CONFIG ioctl, serving two functions:
1. Global counter collection (restricted to perfmon_capable()) using the
MSM_PERFCNTR_STREAM flag. Global counter sampling is, global, across
contexts. Only a single global counter stream is allowed at a time.
2. Reserve counters for local counter collection. Local counter
collection is local to a cmdstream (GEM_SUBMIT), and as such is
allowed in all processes without additional privileges.
The kernel enforces that counters assigned for global counter collection
do not conflict with counters reserved for local counter collection, and
visa versa. Since local counter collection is scoped to a single cmd-
stream, multiple UMD processes can overlap in their reserved counters.
But cannot conflict with global counter usage.
In the case of local counter collection, the UMD is still responsible
for programming the corresponding SELect registers, and sampling the
counter values, from it's cmdstream. But by performing the reservation
step, the UMD protects itself from the kernel trying to use the same
SEL/counter regs for global counter collection.
For global counter collection, the kernel programs SEL regs, and sets up
a timer for counter sampling. Userspace reads out the sampled values
from the returned perfcntr stream fd. Releasing the global perfcntr
stream is simply a matter of close()ing the fd.
The final two patches wire up the needed support for global counter
stream collection while IFPC is active, and drops disabling of IFPC.
The mesa side of this is at:
https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/41158
igt test at:
https://gitlab.freedesktop.org/robclark/igt-gpu-tools/-/commits/perfcntrs
Changes in v3:
- Fix loop counter issue spotted by Claude review
- Add MSM_PERFCNTR_UPDATE flag to ask kernel to return the actual # of
available counters in case of -E2BIG
- Proper barriers for modifying pwrup_reglist
- Link to v2: https://lore.kernel.org/all/20260424151140.104093-1-robin.clark@oss.qualcomm.com
Changes in v2:
- Rework makefile magic based on Dmitry's suggestion, and add a2xx/a5xx
perfcntr tables (although only a6xx+ is supported at this point)
- Fix compile error for compilers that are picky about a struct that
only contains a flex array
- Drop a6xx_idle() under gpu->lock in a6xx_perfcntr_configure(), replace
with perfcntr_fence that sel_worker can check
- Add a7xx+ pwrup_reglist support for restoring SELect regs on exit from
IFPC. (a6xx doesn't support IFPC, and the pwrup_reglist works a bit
differently)
- Stop disabling IFPC when global counter stream is active.
- Link to v1: https://lore.kernel.org/all/20260420222621.417276-1-robin.clark@oss.qualcomm.com/
Rob Clark (16):
drm/msm: Remove obsolete perf infrastructure
drm/msm: Allow CAP_PERFMON for setting SYSPROF
drm/msm/adreno: Sync registers from mesa
drm/msm/registers: Sync gen_header.py from mesa
drm/msm/registers: Add perfcntr json
drm/msm: Add a6xx+ perfcntr tables
drm/msm: Add sysprof accessors
drm/msm/a6xx: Add yield & flush helper
drm/msm: Add per-context perfcntr state
drm/msm: Add basic perfcntr infrastructure
drm/msm/a6xx+: Add support to configure perfcntrs
drm/msm/a8xx: Add perfcntr flush sequence
drm/msm: Add PERFCNTR_CONFIG ioctl
drm/msm/a6xx: Increase pwrup_reglist size
drm/msm/a6xx: Append SEL regs to dyn pwrup reglist
drm/msm/a6xx: Allow IFPC with perfcntr stream
drivers/gpu/drm/msm/Makefile | 27 +-
drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 7 -
drivers/gpu/drm/msm/adreno/a3xx_gpu.c | 16 -
drivers/gpu/drm/msm/adreno/a4xx_gpu.c | 3 -
drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 16 +-
drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 10 +-
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 217 +-
drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 15 +-
drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 2 +-
drivers/gpu/drm/msm/adreno/a8xx_gpu.c | 33 +-
drivers/gpu/drm/msm/adreno/a8xx_preempt.c | 2 +-
drivers/gpu/drm/msm/adreno/adreno_device.c | 8 +-
drivers/gpu/drm/msm/adreno/adreno_gpu.c | 7 +-
drivers/gpu/drm/msm/msm_debugfs.c | 6 -
drivers/gpu/drm/msm/msm_drv.c | 2 +-
drivers/gpu/drm/msm/msm_drv.h | 13 +-
drivers/gpu/drm/msm/msm_gpu.c | 119 +-
drivers/gpu/drm/msm/msm_gpu.h | 104 +-
drivers/gpu/drm/msm/msm_perf.c | 235 --
drivers/gpu/drm/msm/msm_perfcntr.c | 638 +++++
drivers/gpu/drm/msm/msm_perfcntr.h | 155 ++
drivers/gpu/drm/msm/msm_ringbuffer.h | 2 +
drivers/gpu/drm/msm/msm_submitqueue.c | 3 +-
.../msm/registers/adreno/a2xx_perfcntrs.json | 109 +
drivers/gpu/drm/msm/registers/adreno/a3xx.xml | 8 +-
drivers/gpu/drm/msm/registers/adreno/a5xx.xml | 141 +-
.../msm/registers/adreno/a5xx_perfcntrs.json | 128 +
drivers/gpu/drm/msm/registers/adreno/a6xx.xml | 1300 ++++++-----
.../msm/registers/adreno/a6xx_descriptors.xml | 71 +-
.../drm/msm/registers/adreno/a6xx_enums.xml | 3 +
.../msm/registers/adreno/a6xx_perfcntrs.json | 105 +
.../msm/registers/adreno/a7xx_perfcntrs.json | 228 ++
.../msm/registers/adreno/a8xx_descriptors.xml | 96 +-
.../msm/registers/adreno/a8xx_perfcntrs.json | 240 ++
.../msm/registers/adreno/a8xx_perfcntrs.xml | 1929 +++++++++++++++
.../msm/registers/adreno/adreno_common.xml | 42 +
.../drm/msm/registers/adreno/adreno_pm4.xml | 50 +-
drivers/gpu/drm/msm/registers/gen_header.py | 2079 +++++++++--------
include/uapi/drm/msm_drm.h | 48 +
39 files changed, 6005 insertions(+), 2212 deletions(-)
delete mode 100644 drivers/gpu/drm/msm/msm_perf.c
create mode 100644 drivers/gpu/drm/msm/msm_perfcntr.c
create mode 100644 drivers/gpu/drm/msm/msm_perfcntr.h
create mode 100644 drivers/gpu/drm/msm/registers/adreno/a2xx_perfcntrs.json
create mode 100644 drivers/gpu/drm/msm/registers/adreno/a5xx_perfcntrs.json
create mode 100644 drivers/gpu/drm/msm/registers/adreno/a6xx_perfcntrs.json
create mode 100644 drivers/gpu/drm/msm/registers/adreno/a7xx_perfcntrs.json
create mode 100644 drivers/gpu/drm/msm/registers/adreno/a8xx_perfcntrs.json
create mode 100644 drivers/gpu/drm/msm/registers/adreno/a8xx_perfcntrs.xml
--
2.54.0
^ permalink raw reply [flat|nested] 34+ messages in thread
* [PATCH v3 01/16] drm/msm: Remove obsolete perf infrastructure
2026-05-04 19:06 [PATCH v3 00/16] drm/msm: Add PERFCNTR_CONFIG ioctl Rob Clark
@ 2026-05-04 19:06 ` Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 02/16] drm/msm: Allow CAP_PERFMON for setting SYSPROF Rob Clark
` (14 subsequent siblings)
15 siblings, 1 reply; 34+ messages in thread
From: Rob Clark @ 2026-05-04 19:06 UTC (permalink / raw)
To: dri-devel
Cc: linux-arm-msm, freedreno, Akhil P Oommen, Rob Clark,
Dmitry Baryshkov, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang,
Sean Paul, Marijn Suijten, David Airlie, Simona Vetter,
Konrad Dybcio, open list
Outside of a3xx, this was never really used. And it low-key gets in the
way of the new perfcntr support (or at least it is confusing to have two
things called "perf"). So lets remove it.
This drops the "perf" debugfs file. But these days, nvtop is a better
option. (Plus perfetto for newer gens.)
Signed-off-by: Rob Clark <robin.clark@oss.qualcomm.com>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@oss.qualcomm.com>
---
drivers/gpu/drm/msm/Makefile | 1 -
drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 7 -
drivers/gpu/drm/msm/adreno/a3xx_gpu.c | 16 --
drivers/gpu/drm/msm/adreno/a4xx_gpu.c | 3 -
drivers/gpu/drm/msm/msm_debugfs.c | 6 -
drivers/gpu/drm/msm/msm_drv.c | 1 -
drivers/gpu/drm/msm/msm_drv.h | 5 -
drivers/gpu/drm/msm/msm_gpu.c | 107 ------------
drivers/gpu/drm/msm/msm_gpu.h | 31 ----
drivers/gpu/drm/msm/msm_perf.c | 235 --------------------------
10 files changed, 412 deletions(-)
delete mode 100644 drivers/gpu/drm/msm/msm_perf.c
diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile
index ba45e99be05b..ce00cfb0a875 100644
--- a/drivers/gpu/drm/msm/Makefile
+++ b/drivers/gpu/drm/msm/Makefile
@@ -122,7 +122,6 @@ msm-y += \
msm_gpu_devfreq.o \
msm_io_utils.o \
msm_iommu.o \
- msm_perf.o \
msm_rd.o \
msm_ringbuffer.o \
msm_submitqueue.o \
diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
index d5a5fa9e2cf8..df4cded9143f 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
@@ -489,10 +489,6 @@ static u32 a2xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
return ring->memptrs->rptr;
}
-static const struct msm_gpu_perfcntr perfcntrs[] = {
-/* TODO */
-};
-
static struct msm_gpu *a2xx_gpu_init(struct drm_device *dev)
{
struct a2xx_gpu *a2xx_gpu = NULL;
@@ -518,9 +514,6 @@ static struct msm_gpu *a2xx_gpu_init(struct drm_device *dev)
adreno_gpu = &a2xx_gpu->base;
gpu = &adreno_gpu->base;
- gpu->perfcntrs = perfcntrs;
- gpu->num_perfcntrs = ARRAY_SIZE(perfcntrs);
-
ret = adreno_gpu_init(dev, pdev, adreno_gpu, config->info->funcs, 1);
if (ret)
goto fail;
diff --git a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
index 018183e0ac3f..c17e9777beae 100644
--- a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
@@ -266,12 +266,6 @@ static int a3xx_hw_init(struct msm_gpu *gpu)
/* Turn on performance counters: */
gpu_write(gpu, REG_A3XX_RBBM_PERFCTR_CTL, 0x01);
- /* Enable the perfcntrs that we use.. */
- for (i = 0; i < gpu->num_perfcntrs; i++) {
- const struct msm_gpu_perfcntr *perfcntr = &gpu->perfcntrs[i];
- gpu_write(gpu, perfcntr->select_reg, perfcntr->select_val);
- }
-
gpu_write(gpu, REG_A3XX_RBBM_INT_0_MASK, A3XX_INT0_MASK);
ret = adreno_hw_init(gpu);
@@ -508,13 +502,6 @@ static u32 a3xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
return ring->memptrs->rptr;
}
-static const struct msm_gpu_perfcntr perfcntrs[] = {
- { REG_A3XX_SP_PERFCOUNTER6_SELECT, REG_A3XX_RBBM_PERFCTR_SP_6_LO,
- SP_ALU_ACTIVE_CYCLES, "ALUACTIVE" },
- { REG_A3XX_SP_PERFCOUNTER7_SELECT, REG_A3XX_RBBM_PERFCTR_SP_7_LO,
- SP_FS_FULL_ALU_INSTRUCTIONS, "ALUFULL" },
-};
-
static struct msm_gpu *a3xx_gpu_init(struct drm_device *dev)
{
struct a3xx_gpu *a3xx_gpu = NULL;
@@ -542,9 +529,6 @@ static struct msm_gpu *a3xx_gpu_init(struct drm_device *dev)
adreno_gpu = &a3xx_gpu->base;
gpu = &adreno_gpu->base;
- gpu->perfcntrs = perfcntrs;
- gpu->num_perfcntrs = ARRAY_SIZE(perfcntrs);
-
adreno_gpu->registers = a3xx_registers;
ret = adreno_gpu_init(dev, pdev, adreno_gpu, config->info->funcs, 1);
diff --git a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
index e6ab731f8e9a..6392126f48f2 100644
--- a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
@@ -652,9 +652,6 @@ static struct msm_gpu *a4xx_gpu_init(struct drm_device *dev)
adreno_gpu = &a4xx_gpu->base;
gpu = &adreno_gpu->base;
- gpu->perfcntrs = NULL;
- gpu->num_perfcntrs = 0;
-
ret = adreno_gpu_init(dev, pdev, adreno_gpu, config->info->funcs, 1);
if (ret)
goto fail;
diff --git a/drivers/gpu/drm/msm/msm_debugfs.c b/drivers/gpu/drm/msm/msm_debugfs.c
index 1059a9b29d6a..f12701e286ec 100644
--- a/drivers/gpu/drm/msm/msm_debugfs.c
+++ b/drivers/gpu/drm/msm/msm_debugfs.c
@@ -344,12 +344,6 @@ static int late_init_minor(struct drm_minor *minor)
return ret;
}
- ret = msm_perf_debugfs_init(minor);
- if (ret) {
- DRM_DEV_ERROR(dev->dev, "could not install perf debugfs\n");
- return ret;
- }
-
return 0;
}
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index edc3b4af14f4..3066547f319b 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -87,7 +87,6 @@ static int msm_drm_uninit(struct device *dev, const struct component_ops *gpu_op
msm_gem_shrinker_cleanup(ddev);
- msm_perf_debugfs_cleanup(priv);
msm_rd_debugfs_cleanup(priv);
if (priv->kms)
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index 6d847d593f1a..e53e4f220bed 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -45,7 +45,6 @@ struct msm_gpu;
struct msm_mmu;
struct msm_mdss;
struct msm_rd_state;
-struct msm_perf_state;
struct msm_gem_submit;
struct msm_fence_context;
struct msm_disp_state;
@@ -89,7 +88,6 @@ struct msm_drm_private {
struct msm_rd_state *rd; /* debugfs to dump all submits */
struct msm_rd_state *hangrd; /* debugfs to dump hanging submits */
- struct msm_perf_state *perf;
/**
* total_mem: Total/global amount of memory backing GEM objects.
@@ -442,8 +440,6 @@ void msm_rd_debugfs_cleanup(struct msm_drm_private *priv);
__printf(3, 4)
void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *submit,
const char *fmt, ...);
-int msm_perf_debugfs_init(struct drm_minor *minor);
-void msm_perf_debugfs_cleanup(struct msm_drm_private *priv);
#else
static inline int msm_debugfs_late_init(struct drm_device *dev) { return 0; }
__printf(3, 4)
@@ -451,7 +447,6 @@ static inline void msm_rd_dump_submit(struct msm_rd_state *rd,
struct msm_gem_submit *submit,
const char *fmt, ...) {}
static inline void msm_rd_debugfs_cleanup(struct msm_drm_private *priv) {}
-static inline void msm_perf_debugfs_cleanup(struct msm_drm_private *priv) {}
#endif
struct clk *msm_clk_get(struct platform_device *pdev, const char *name);
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index cf244fd529aa..1bac70473f80 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -699,104 +699,6 @@ void msm_gpu_sysrq_kill(struct msm_gpu *gpu)
}
}
-/*
- * Performance Counters:
- */
-
-/* called under perf_lock */
-static int update_hw_cntrs(struct msm_gpu *gpu, uint32_t ncntrs, uint32_t *cntrs)
-{
- uint32_t current_cntrs[ARRAY_SIZE(gpu->last_cntrs)];
- int i, n = min(ncntrs, gpu->num_perfcntrs);
-
- /* read current values: */
- for (i = 0; i < gpu->num_perfcntrs; i++)
- current_cntrs[i] = gpu_read(gpu, gpu->perfcntrs[i].sample_reg);
-
- /* update cntrs: */
- for (i = 0; i < n; i++)
- cntrs[i] = current_cntrs[i] - gpu->last_cntrs[i];
-
- /* save current values: */
- for (i = 0; i < gpu->num_perfcntrs; i++)
- gpu->last_cntrs[i] = current_cntrs[i];
-
- return n;
-}
-
-static void update_sw_cntrs(struct msm_gpu *gpu)
-{
- ktime_t time;
- uint32_t elapsed;
- unsigned long flags;
-
- spin_lock_irqsave(&gpu->perf_lock, flags);
- if (!gpu->perfcntr_active)
- goto out;
-
- time = ktime_get();
- elapsed = ktime_to_us(ktime_sub(time, gpu->last_sample.time));
-
- gpu->totaltime += elapsed;
- if (gpu->last_sample.active)
- gpu->activetime += elapsed;
-
- gpu->last_sample.active = msm_gpu_active(gpu);
- gpu->last_sample.time = time;
-
-out:
- spin_unlock_irqrestore(&gpu->perf_lock, flags);
-}
-
-void msm_gpu_perfcntr_start(struct msm_gpu *gpu)
-{
- unsigned long flags;
-
- pm_runtime_get_sync(&gpu->pdev->dev);
-
- spin_lock_irqsave(&gpu->perf_lock, flags);
- /* we could dynamically enable/disable perfcntr registers too.. */
- gpu->last_sample.active = msm_gpu_active(gpu);
- gpu->last_sample.time = ktime_get();
- gpu->activetime = gpu->totaltime = 0;
- gpu->perfcntr_active = true;
- update_hw_cntrs(gpu, 0, NULL);
- spin_unlock_irqrestore(&gpu->perf_lock, flags);
-}
-
-void msm_gpu_perfcntr_stop(struct msm_gpu *gpu)
-{
- gpu->perfcntr_active = false;
- pm_runtime_put_sync(&gpu->pdev->dev);
-}
-
-/* returns -errno or # of cntrs sampled */
-int msm_gpu_perfcntr_sample(struct msm_gpu *gpu, uint32_t *activetime,
- uint32_t *totaltime, uint32_t ncntrs, uint32_t *cntrs)
-{
- unsigned long flags;
- int ret;
-
- spin_lock_irqsave(&gpu->perf_lock, flags);
-
- if (!gpu->perfcntr_active) {
- ret = -EINVAL;
- goto out;
- }
-
- *activetime = gpu->activetime;
- *totaltime = gpu->totaltime;
-
- gpu->activetime = gpu->totaltime = 0;
-
- ret = update_hw_cntrs(gpu, ncntrs, cntrs);
-
-out:
- spin_unlock_irqrestore(&gpu->perf_lock, flags);
-
- return ret;
-}
-
/*
* Cmdstream submission/retirement:
*/
@@ -899,7 +801,6 @@ void msm_gpu_retire(struct msm_gpu *gpu)
msm_update_fence(gpu->rb[i]->fctx, gpu->rb[i]->memptrs->fence);
kthread_queue_work(gpu->worker, &gpu->retire_work);
- update_sw_cntrs(gpu);
}
/* add bo's to gpu's ring, and kick gpu: */
@@ -916,8 +817,6 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
submit->seqno = submit->hw_fence->seqno;
- update_sw_cntrs(gpu);
-
/*
* ring->submits holds a ref to the submit, to deal with the case
* that a submit completes before msm_ioctl_gem_submit() returns.
@@ -1009,9 +908,6 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
void *memptrs;
uint64_t memptrs_iova;
- if (WARN_ON(gpu->num_perfcntrs > ARRAY_SIZE(gpu->last_cntrs)))
- gpu->num_perfcntrs = ARRAY_SIZE(gpu->last_cntrs);
-
gpu->dev = drm;
gpu->funcs = funcs;
gpu->name = name;
@@ -1043,9 +939,6 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
timer_setup(&gpu->hangcheck_timer, hangcheck_handler, 0);
- spin_lock_init(&gpu->perf_lock);
-
-
/* Map registers: */
gpu->mmio = msm_ioremap(pdev, config->ioname);
if (IS_ERR(gpu->mmio)) {
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 07abbe33d992..78e1478669be 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -22,7 +22,6 @@
struct msm_gem_submit;
struct msm_gem_vm_log_entry;
-struct msm_gpu_perfcntr;
struct msm_gpu_state;
struct msm_context;
@@ -168,18 +167,6 @@ struct msm_gpu {
struct adreno_smmu_priv adreno_smmu;
- /* performance counters (hw & sw): */
- spinlock_t perf_lock;
- bool perfcntr_active;
- struct {
- bool active;
- ktime_t time;
- } last_sample;
- uint32_t totaltime, activetime; /* sw counters */
- uint32_t last_cntrs[5]; /* hw counters */
- const struct msm_gpu_perfcntr *perfcntrs;
- uint32_t num_perfcntrs;
-
struct msm_ringbuffer *rb[MSM_GPU_MAX_RINGS];
int nr_rings;
@@ -320,19 +307,6 @@ static inline bool msm_gpu_active(struct msm_gpu *gpu)
return false;
}
-/* Perf-Counters:
- * The select_reg and select_val are just there for the benefit of the child
- * class that actually enables the perf counter.. but msm_gpu base class
- * will handle sampling/displaying the counters.
- */
-
-struct msm_gpu_perfcntr {
- uint32_t select_reg;
- uint32_t sample_reg;
- uint32_t select_val;
- const char *name;
-};
-
/*
* The number of priority levels provided by drm gpu scheduler. The
* DRM_SCHED_PRIORITY_KERNEL priority level is treated specially in some
@@ -689,11 +663,6 @@ void msm_devfreq_idle(struct msm_gpu *gpu);
int msm_gpu_hw_init(struct msm_gpu *gpu);
-void msm_gpu_perfcntr_start(struct msm_gpu *gpu);
-void msm_gpu_perfcntr_stop(struct msm_gpu *gpu);
-int msm_gpu_perfcntr_sample(struct msm_gpu *gpu, uint32_t *activetime,
- uint32_t *totaltime, uint32_t ncntrs, uint32_t *cntrs);
-
void msm_gpu_retire(struct msm_gpu *gpu);
void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit);
void msm_gpu_sysrq_kill(struct msm_gpu *gpu);
diff --git a/drivers/gpu/drm/msm/msm_perf.c b/drivers/gpu/drm/msm/msm_perf.c
deleted file mode 100644
index 7768bde6745f..000000000000
--- a/drivers/gpu/drm/msm/msm_perf.c
+++ /dev/null
@@ -1,235 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright (C) 2013 Red Hat
- * Author: Rob Clark <robdclark@gmail.com>
- */
-
-/* For profiling, userspace can:
- *
- * tail -f /sys/kernel/debug/dri/<minor>/gpu
- *
- * This will enable performance counters/profiling to track the busy time
- * and any gpu specific performance counters that are supported.
- */
-
-#ifdef CONFIG_DEBUG_FS
-
-#include <linux/debugfs.h>
-#include <linux/uaccess.h>
-
-#include <drm/drm_file.h>
-
-#include "msm_drv.h"
-#include "msm_gpu.h"
-
-struct msm_perf_state {
- struct drm_device *dev;
-
- bool open;
- int cnt;
- struct mutex read_lock;
-
- char buf[256];
- int buftot, bufpos;
-
- unsigned long next_jiffies;
-};
-
-#define SAMPLE_TIME (HZ/4)
-
-/* wait for next sample time: */
-static int wait_sample(struct msm_perf_state *perf)
-{
- unsigned long start_jiffies = jiffies;
-
- if (time_after(perf->next_jiffies, start_jiffies)) {
- unsigned long remaining_jiffies =
- perf->next_jiffies - start_jiffies;
- int ret = schedule_timeout_interruptible(remaining_jiffies);
- if (ret > 0) {
- /* interrupted */
- return -ERESTARTSYS;
- }
- }
- perf->next_jiffies += SAMPLE_TIME;
- return 0;
-}
-
-static int refill_buf(struct msm_perf_state *perf)
-{
- struct msm_drm_private *priv = perf->dev->dev_private;
- struct msm_gpu *gpu = priv->gpu;
- char *ptr = perf->buf;
- int rem = sizeof(perf->buf);
- int i, n;
-
- if ((perf->cnt++ % 32) == 0) {
- /* Header line: */
- n = scnprintf(ptr, rem, "%%BUSY");
- ptr += n;
- rem -= n;
-
- for (i = 0; i < gpu->num_perfcntrs; i++) {
- const struct msm_gpu_perfcntr *perfcntr = &gpu->perfcntrs[i];
- n = scnprintf(ptr, rem, "\t%s", perfcntr->name);
- ptr += n;
- rem -= n;
- }
- } else {
- /* Sample line: */
- uint32_t activetime = 0, totaltime = 0;
- uint32_t cntrs[5];
- uint32_t val;
- int ret;
-
- /* sleep until next sample time: */
- ret = wait_sample(perf);
- if (ret)
- return ret;
-
- ret = msm_gpu_perfcntr_sample(gpu, &activetime, &totaltime,
- ARRAY_SIZE(cntrs), cntrs);
- if (ret < 0)
- return ret;
-
- val = totaltime ? 1000 * activetime / totaltime : 0;
- n = scnprintf(ptr, rem, "%3d.%d%%", val / 10, val % 10);
- ptr += n;
- rem -= n;
-
- for (i = 0; i < ret; i++) {
- /* cycle counters (I think).. convert to MHz.. */
- val = cntrs[i] / 10000;
- n = scnprintf(ptr, rem, "\t%5d.%02d",
- val / 100, val % 100);
- ptr += n;
- rem -= n;
- }
- }
-
- n = scnprintf(ptr, rem, "\n");
- ptr += n;
- rem -= n;
-
- perf->bufpos = 0;
- perf->buftot = ptr - perf->buf;
-
- return 0;
-}
-
-static ssize_t perf_read(struct file *file, char __user *buf,
- size_t sz, loff_t *ppos)
-{
- struct msm_perf_state *perf = file->private_data;
- int n = 0, ret = 0;
-
- mutex_lock(&perf->read_lock);
-
- if (perf->bufpos >= perf->buftot) {
- ret = refill_buf(perf);
- if (ret)
- goto out;
- }
-
- n = min((int)sz, perf->buftot - perf->bufpos);
- if (copy_to_user(buf, &perf->buf[perf->bufpos], n)) {
- ret = -EFAULT;
- goto out;
- }
-
- perf->bufpos += n;
- *ppos += n;
-
-out:
- mutex_unlock(&perf->read_lock);
- if (ret)
- return ret;
- return n;
-}
-
-static int perf_open(struct inode *inode, struct file *file)
-{
- struct msm_perf_state *perf = inode->i_private;
- struct drm_device *dev = perf->dev;
- struct msm_drm_private *priv = dev->dev_private;
- struct msm_gpu *gpu = priv->gpu;
- int ret = 0;
-
- if (!gpu)
- return -ENODEV;
-
- mutex_lock(&gpu->lock);
-
- if (perf->open) {
- ret = -EBUSY;
- goto out;
- }
-
- file->private_data = perf;
- perf->open = true;
- perf->cnt = 0;
- perf->buftot = 0;
- perf->bufpos = 0;
- msm_gpu_perfcntr_start(gpu);
- perf->next_jiffies = jiffies + SAMPLE_TIME;
-
-out:
- mutex_unlock(&gpu->lock);
- return ret;
-}
-
-static int perf_release(struct inode *inode, struct file *file)
-{
- struct msm_perf_state *perf = inode->i_private;
- struct msm_drm_private *priv = perf->dev->dev_private;
- msm_gpu_perfcntr_stop(priv->gpu);
- perf->open = false;
- return 0;
-}
-
-
-static const struct file_operations perf_debugfs_fops = {
- .owner = THIS_MODULE,
- .open = perf_open,
- .read = perf_read,
- .release = perf_release,
-};
-
-int msm_perf_debugfs_init(struct drm_minor *minor)
-{
- struct msm_drm_private *priv = minor->dev->dev_private;
- struct msm_perf_state *perf;
-
- /* only create on first minor: */
- if (priv->perf)
- return 0;
-
- perf = kzalloc_obj(*perf);
- if (!perf)
- return -ENOMEM;
-
- perf->dev = minor->dev;
-
- mutex_init(&perf->read_lock);
- priv->perf = perf;
-
- debugfs_create_file("perf", S_IFREG | S_IRUGO, minor->debugfs_root,
- perf, &perf_debugfs_fops);
- return 0;
-}
-
-void msm_perf_debugfs_cleanup(struct msm_drm_private *priv)
-{
- struct msm_perf_state *perf = priv->perf;
-
- if (!perf)
- return;
-
- priv->perf = NULL;
-
- mutex_destroy(&perf->read_lock);
-
- kfree(perf);
-}
-
-#endif
--
2.54.0
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH v3 02/16] drm/msm: Allow CAP_PERFMON for setting SYSPROF
2026-05-04 19:06 [PATCH v3 00/16] drm/msm: Add PERFCNTR_CONFIG ioctl Rob Clark
2026-05-04 19:06 ` [PATCH v3 01/16] drm/msm: Remove obsolete perf infrastructure Rob Clark
@ 2026-05-04 19:06 ` Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 04/16] drm/msm/registers: Sync gen_header.py from mesa Rob Clark
` (13 subsequent siblings)
15 siblings, 1 reply; 34+ messages in thread
From: Rob Clark @ 2026-05-04 19:06 UTC (permalink / raw)
To: dri-devel
Cc: linux-arm-msm, freedreno, Akhil P Oommen, Rob Clark,
Dmitry Baryshkov, Sean Paul, Konrad Dybcio, Dmitry Baryshkov,
Abhinav Kumar, Jessica Zhang, Marijn Suijten, David Airlie,
Simona Vetter, open list
Use perfmon_capable() which checks both CAP_SYS_ADMIN and CAP_PERFMON.
This matches what i915 and xe do, and seems more appropriate.
Signed-off-by: Rob Clark <robin.clark@oss.qualcomm.com>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@oss.qualcomm.com>
---
drivers/gpu/drm/msm/adreno/adreno_gpu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 66f80f2d12f9..72b71e9e44f0 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -494,7 +494,7 @@ int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
return 0;
}
case MSM_PARAM_SYSPROF:
- if (!capable(CAP_SYS_ADMIN))
+ if (!perfmon_capable())
return UERR(EPERM, drm, "invalid permissions");
return msm_context_set_sysprof(ctx, gpu, value);
case MSM_PARAM_EN_VM_BIND:
--
2.54.0
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH v3 04/16] drm/msm/registers: Sync gen_header.py from mesa
2026-05-04 19:06 [PATCH v3 00/16] drm/msm: Add PERFCNTR_CONFIG ioctl Rob Clark
2026-05-04 19:06 ` [PATCH v3 01/16] drm/msm: Remove obsolete perf infrastructure Rob Clark
2026-05-04 19:06 ` [PATCH v3 02/16] drm/msm: Allow CAP_PERFMON for setting SYSPROF Rob Clark
@ 2026-05-04 19:06 ` Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 05/16] drm/msm/registers: Add perfcntr json Rob Clark
` (12 subsequent siblings)
15 siblings, 1 reply; 34+ messages in thread
From: Rob Clark @ 2026-05-04 19:06 UTC (permalink / raw)
To: dri-devel
Cc: linux-arm-msm, freedreno, Akhil P Oommen, Rob Clark,
Dmitry Baryshkov, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang,
Sean Paul, Marijn Suijten, David Airlie, Simona Vetter,
Nathan Chancellor, Nick Desaulniers, Bill Wendling, Justin Stitt,
open list
Update gen_header.py to bring in support for generating perfcntr tables.
Sync from https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/40522
Signed-off-by: Rob Clark <robin.clark@oss.qualcomm.com>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@oss.qualcomm.com>
---
| 2079 ++++++++++---------
1 file changed, 1146 insertions(+), 933 deletions(-)
--git a/drivers/gpu/drm/msm/registers/gen_header.py b/drivers/gpu/drm/msm/registers/gen_header.py
index 2acad951f1e2..07e6f0cb4e66 100644
--- a/drivers/gpu/drm/msm/registers/gen_header.py
+++ b/drivers/gpu/drm/msm/registers/gen_header.py
@@ -11,997 +11,1210 @@ import collections
import argparse
import time
import datetime
+import json
+
class Error(Exception):
- def __init__(self, message):
- self.message = message
+ def __init__(self, message):
+ self.message = message
+
class Enum(object):
- def __init__(self, name):
- self.name = name
- self.values = []
-
- def has_name(self, name):
- for (n, value) in self.values:
- if n == name:
- return True
- return False
-
- def names(self):
- return [n for (n, value) in self.values]
-
- def dump(self, is_deprecated):
- use_hex = False
- for (name, value) in self.values:
- if value > 0x1000:
- use_hex = True
-
- print("enum %s {" % self.name)
- for (name, value) in self.values:
- if use_hex:
- print("\t%s = 0x%08x," % (name, value))
- else:
- print("\t%s = %d," % (name, value))
- print("};\n")
-
- def dump_pack_struct(self, is_deprecated):
- pass
+ def __init__(self, name):
+ self.name = name
+ self.values = []
+
+ def has_name(self, name):
+ for (n, value) in self.values:
+ if n == name:
+ return True
+ return False
+
+ def names(self):
+ return [n for (n, value) in self.values]
+
+ def value(self, name):
+ for (n, v) in self.values:
+ if n == name:
+ return v
+
+ def dump(self, has_variants):
+ use_hex = False
+ for (name, value) in self.values:
+ if value > 0x1000:
+ use_hex = True
+
+ print("enum %s {" % self.name)
+ for (name, value) in self.values:
+ if use_hex:
+ print("\t%s = 0x%08x," % (name, value))
+ else:
+ print("\t%s = %d," % (name, value))
+ print("};\n")
+
+ def dump_pack_struct(self, has_variants):
+ pass
+
class Field(object):
- def __init__(self, name, low, high, shr, type, parser):
- self.name = name
- self.low = low
- self.high = high
- self.shr = shr
- self.type = type
-
- builtin_types = [ None, "a3xx_regid", "boolean", "uint", "hex", "int", "fixed", "ufixed", "float", "address", "waddress" ]
-
- maxpos = parser.current_bitsize - 1
-
- if low < 0 or low > maxpos:
- raise parser.error("low attribute out of range: %d" % low)
- if high < 0 or high > maxpos:
- raise parser.error("high attribute out of range: %d" % high)
- if high < low:
- raise parser.error("low is greater than high: low=%d, high=%d" % (low, high))
- if self.type == "boolean" and not low == high:
- raise parser.error("booleans should be 1 bit fields")
- elif self.type == "float" and not (high - low == 31 or high - low == 15):
- raise parser.error("floats should be 16 or 32 bit fields")
- elif self.type not in builtin_types and self.type not in parser.enums:
- raise parser.error("unknown type '%s'" % self.type)
-
- def ctype(self, var_name):
- if self.type is None:
- type = "uint32_t"
- val = var_name
- elif self.type == "boolean":
- type = "bool"
- val = var_name
- elif self.type == "uint" or self.type == "hex" or self.type == "a3xx_regid":
- type = "uint32_t"
- val = var_name
- elif self.type == "int":
- type = "int32_t"
- val = var_name
- elif self.type == "fixed":
- type = "float"
- val = "((int32_t)(%s * %d.0))" % (var_name, 1 << self.radix)
- elif self.type == "ufixed":
- type = "float"
- val = "((uint32_t)(%s * %d.0))" % (var_name, 1 << self.radix)
- elif self.type == "float" and self.high - self.low == 31:
- type = "float"
- val = "fui(%s)" % var_name
- elif self.type == "float" and self.high - self.low == 15:
- type = "float"
- val = "_mesa_float_to_half(%s)" % var_name
- elif self.type in [ "address", "waddress" ]:
- type = "uint64_t"
- val = var_name
- else:
- type = "enum %s" % self.type
- val = var_name
-
- if self.shr > 0:
- val = "(%s >> %d)" % (val, self.shr)
-
- return (type, val)
+ def __init__(self, name, low, high, shr, type, parser):
+ self.name = name
+ self.low = low
+ self.high = high
+ self.shr = shr
+ self.type = type
+
+ builtin_types = [None, "a3xx_regid", "boolean", "uint", "hex",
+ "int", "fixed", "ufixed", "float", "address", "waddress"]
+
+ maxpos = parser.current_bitsize - 1
+
+ if low < 0 or low > maxpos:
+ raise parser.error("low attribute out of range: %d" % low)
+ if high < 0 or high > maxpos:
+ raise parser.error("high attribute out of range: %d" % high)
+ if high < low:
+ raise parser.error(
+ "low is greater than high: low=%d, high=%d" % (low, high))
+ if self.type == "boolean" and not low == high:
+ raise parser.error("booleans should be 1 bit fields")
+ elif self.type == "float" and not (high - low == 31 or high - low == 15):
+ raise parser.error("floats should be 16 or 32 bit fields")
+ elif self.type not in builtin_types and self.type not in parser.enums:
+ raise parser.error("unknown type '%s'" % self.type)
+
+ def ctype(self, var_name):
+ if self.type is None:
+ type = "uint32_t"
+ val = var_name
+ elif self.type == "boolean":
+ type = "bool"
+ val = var_name
+ elif self.type == "uint" or self.type == "hex" or self.type == "a3xx_regid":
+ type = "uint32_t"
+ val = var_name
+ elif self.type == "int":
+ type = "int32_t"
+ val = var_name
+ elif self.type == "fixed":
+ type = "float"
+ val = "(uint32_t)((int32_t)(%s * %d.0))" % (var_name, 1 << self.radix)
+ elif self.type == "ufixed":
+ type = "float"
+ val = "((uint32_t)(%s * %d.0))" % (var_name, 1 << self.radix)
+ elif self.type == "float" and self.high - self.low == 31:
+ type = "float"
+ val = "fui(%s)" % var_name
+ elif self.type == "float" and self.high - self.low == 15:
+ type = "float"
+ val = "_mesa_float_to_half(%s)" % var_name
+ elif self.type in ["address", "waddress"]:
+ type = "uint64_t"
+ val = var_name
+ else:
+ type = "enum %s" % self.type
+ val = var_name
+
+ if self.shr > 0:
+ val = "(%s >> %d)" % (val, self.shr)
+
+ return (type, val)
+
def tab_to(name, value):
- tab_count = (68 - (len(name) & ~7)) // 8
- if tab_count <= 0:
- tab_count = 1
- print(name + ('\t' * tab_count) + value)
+ tab_count = (68 - (len(name) & ~7)) // 8
+ if tab_count <= 0:
+ tab_count = 1
+ print(name + ('\t' * tab_count) + value)
+
+def define_macro(name, value, has_variants):
+ if has_variants:
+ value = "__FD_DEPRECATED " + value
+ tab_to(name, value)
def mask(low, high):
- return ((0xffffffffffffffff >> (64 - (high + 1 - low))) << low)
+ return ((0xffffffffffffffff >> (64 - (high + 1 - low))) << low)
+
def field_name(reg, f):
- if f.name:
- name = f.name.lower()
- else:
- # We hit this path when a reg is defined with no bitset fields, ie.
- # <reg32 offset="0x88db" name="RB_RESOLVE_SYSTEM_BUFFER_ARRAY_PITCH" low="0" high="28" shr="6" type="uint"/>
- name = reg.name.lower()
+ if f.name:
+ name = f.name.lower()
+ else:
+ # We hit this path when a reg is defined with no bitset fields, ie.
+ # <reg32 offset="0x88db" name="RB_RESOLVE_SYSTEM_BUFFER_ARRAY_PITCH" low="0" high="28" shr="6" type="uint"/>
+ name = reg.name.lower()
- if (name in [ "double", "float", "int" ]) or not (name[0].isalpha()):
- name = "_" + name
+ if (name in ["double", "float", "int"]) or not (name[0].isalpha()):
+ name = "_" + name
- return name
+ return name
# indices - array of (ctype, stride, __offsets_NAME)
+
+
def indices_varlist(indices):
- return ", ".join(["i%d" % i for i in range(len(indices))])
+ return ", ".join(["i%d" % i for i in range(len(indices))])
+
def indices_prototype(indices):
- return ", ".join(["%s i%d" % (ctype, idx)
- for (idx, (ctype, stride, offset)) in enumerate(indices)])
+ return ", ".join(["%s i%d" % (ctype, idx)
+ for (idx, (ctype, stride, offset)) in enumerate(indices)])
+
def indices_strides(indices):
- return " + ".join(["0x%x*i%d" % (stride, idx)
- if stride else
- "%s(i%d)" % (offset, idx)
- for (idx, (ctype, stride, offset)) in enumerate(indices)])
+ return " + ".join(["0x%x*i%d" % (stride, idx)
+ if stride else
+ "%s(i%d)" % (offset, idx)
+ for (idx, (ctype, stride, offset)) in enumerate(indices)])
+
def is_number(str):
- try:
- int(str)
- return True
- except ValueError:
- return False
+ try:
+ int(str)
+ return True
+ except ValueError:
+ return False
+
def sanitize_variant(variant):
- if variant and "-" in variant:
- return variant[:variant.index("-")]
- return variant
+ if variant and "-" in variant:
+ return variant[:variant.index("-")]
+ return variant
+
class Bitset(object):
- def __init__(self, name, template):
- self.name = name
- self.inline = False
- self.reg = None
- if template:
- self.fields = template.fields[:]
- else:
- self.fields = []
-
- # Get address field if there is one in the bitset, else return None:
- def get_address_field(self):
- for f in self.fields:
- if f.type in [ "address", "waddress" ]:
- return f
- return None
-
- def dump_regpair_builder(self, reg):
- print("#ifndef NDEBUG")
- known_mask = 0
- for f in self.fields:
- known_mask |= mask(f.low, f.high)
- if f.type in [ "boolean", "address", "waddress" ]:
- continue
- type, val = f.ctype("fields.%s" % field_name(reg, f))
- print(" assert((%-40s & 0x%08x) == 0);" % (val, 0xffffffff ^ mask(0 , f.high - f.low)))
- print(" assert((%-40s & 0x%08x) == 0);" % ("fields.unknown", known_mask))
- print("#endif\n")
-
- print(" return (struct fd_reg_pair) {")
- print(" .reg = (uint32_t)%s," % reg.reg_offset())
- print(" .value =")
- cast = "(uint64_t)" if reg.bit_size == 64 else ""
- for f in self.fields:
- if f.type in [ "address", "waddress" ]:
- continue
- else:
- type, val = f.ctype("fields.%s" % field_name(reg, f))
- print(" (%s%-40s << %2d) |" % (cast, val, f.low))
- value_name = "dword"
- if reg.bit_size == 64:
- value_name = "qword"
- print(" fields.unknown | fields.%s," % (value_name,))
-
- address = self.get_address_field()
- if address:
- print(" .bo = fields.bo,")
- print(" .is_address = true,")
- if f.type == "waddress":
- print(" .bo_write = true,")
- print(" .bo_offset = fields.bo_offset,")
- print(" .bo_shift = %d," % address.shr)
- print(" .bo_low = %d," % address.low)
-
- print(" };")
-
- def dump_pack_struct(self, is_deprecated, reg=None):
- if not reg:
- return
-
- prefix = reg.full_name
-
- print("struct %s {" % prefix)
- for f in self.fields:
- if f.type in [ "address", "waddress" ]:
- tab_to(" __bo_type", "bo;")
- tab_to(" uint32_t", "bo_offset;")
- continue
- name = field_name(reg, f)
-
- type, val = f.ctype("var")
-
- tab_to(" %s" % type, "%s;" % name)
- if reg.bit_size == 64:
- tab_to(" uint64_t", "unknown;")
- tab_to(" uint64_t", "qword;")
- else:
- tab_to(" uint32_t", "unknown;")
- tab_to(" uint32_t", "dword;")
- print("};\n")
-
- depcrstr = ""
- if is_deprecated:
- depcrstr = " FD_DEPRECATED"
- if reg.array:
- print("static inline%s struct fd_reg_pair\npack_%s(uint32_t __i, struct %s fields)\n{" %
- (depcrstr, prefix, prefix))
- else:
- print("static inline%s struct fd_reg_pair\npack_%s(struct %s fields)\n{" %
- (depcrstr, prefix, prefix))
-
- self.dump_regpair_builder(reg)
-
- print("\n}\n")
-
- if self.get_address_field():
- skip = ", { .reg = 0 }"
- else:
- skip = ""
-
- if reg.array:
- print("#define %s(__i, ...) pack_%s(__i, __struct_cast(%s) { __VA_ARGS__ })%s\n" %
- (prefix, prefix, prefix, skip))
- else:
- print("#define %s(...) pack_%s(__struct_cast(%s) { __VA_ARGS__ })%s\n" %
- (prefix, prefix, prefix, skip))
-
-
- def dump(self, is_deprecated, prefix=None, reg=None):
- if prefix is None:
- prefix = self.name
- reg64 = reg and self.reg and self.reg.bit_size == 64
- if reg64:
- print("static inline uint32_t %s_LO(uint32_t val)\n{" % prefix)
- print("\treturn val;\n}")
- print("static inline uint32_t %s_HI(uint32_t val)\n{" % prefix)
- print("\treturn val;\n}")
- for f in self.fields:
- if f.name:
- name = prefix + "_" + f.name
- else:
- name = prefix
-
- if not f.name and f.low == 0 and f.shr == 0 and f.type not in ["float", "fixed", "ufixed"]:
- pass
- elif f.type == "boolean" or (f.type is None and f.low == f.high):
- tab_to("#define %s" % name, "0x%08x" % (1 << f.low))
- else:
- typespec = "ull" if reg64 else "u"
- tab_to("#define %s__MASK" % name, "0x%08x%s" % (mask(f.low, f.high), typespec))
- tab_to("#define %s__SHIFT" % name, "%d" % f.low)
- type, val = f.ctype("val")
- ret_type = "uint64_t" if reg64 else "uint32_t"
- cast = "(uint64_t)" if reg64 else ""
-
- print("static inline %s %s(%s val)\n{" % (ret_type, name, type))
- if f.shr > 0:
- print("\tassert(!(val & 0x%x));" % mask(0, f.shr - 1))
- print("\treturn (%s(%s) << %s__SHIFT) & %s__MASK;\n}" % (cast, val, name, name))
- print()
+ def __init__(self, name, template):
+ self.name = name
+ self.inline = False
+ self.reg = None
+ if template:
+ self.fields = template.fields[:]
+ else:
+ self.fields = []
+
+ # Get address field if there is one in the bitset, else return None:
+ def get_address_field(self):
+ for f in self.fields:
+ if f.type in ["address", "waddress"]:
+ return f
+ return None
+
+ def dump_regpair_builder(self, reg):
+ print("#ifndef NDEBUG")
+ known_mask = 0
+ for f in self.fields:
+ known_mask |= mask(f.low, f.high)
+ if f.type in ["boolean", "address", "waddress"]:
+ continue
+ type, val = f.ctype("fields.%s" % field_name(reg, f))
+ print(" assert((%-40s & 0x%08x) == 0);" %
+ (val, 0xffffffff ^ mask(0, f.high - f.low)))
+ print(" assert((%-40s & 0x%08x) == 0);" %
+ ("fields.unknown", known_mask))
+ print("#endif\n")
+
+ print(" return (struct fd_reg_pair) {")
+ print(" .reg = (uint32_t)%s," % reg.reg_offset())
+ print(" .value =")
+ cast = "(uint64_t)" if reg.bit_size == 64 else ""
+ for f in self.fields:
+ if f.type in ["address", "waddress"]:
+ continue
+ else:
+ type, val = f.ctype("fields.%s" % field_name(reg, f))
+ print(" (%s%-40s << %2d) |" % (cast, val, f.low))
+ value_name = "dword"
+ if reg.bit_size == 64:
+ value_name = "qword"
+ print(" fields.unknown | fields.%s," % (value_name,))
+
+ address = self.get_address_field()
+ if address:
+ print("#ifndef TU_CS_H")
+ print(" .bo = fields.bo,")
+ print(" .is_address = true,")
+ print(" .bo_offset = fields.bo_offset,")
+ print(" .bo_shift = %d," % address.shr)
+ print(" .bo_low = %d," % address.low)
+ print("#else")
+ print(" .is_address = true,")
+ print("#endif")
+
+ print(" };")
+
+ def dump_pack_struct(self, has_variants, reg=None):
+ if not reg:
+ return
+
+ prefix = reg.full_name
+
+ constexpr_mark = " CONSTEXPR"
+
+ print("struct %s {" % prefix)
+ for f in self.fields:
+ if f.type in ["address", "waddress"]:
+ print("#ifndef TU_CS_H")
+ tab_to(" __bo_type", "bo;")
+ tab_to(" uint32_t", "bo_offset;")
+ print("#endif\n")
+ continue
+ name = field_name(reg, f)
+
+ type, val = f.ctype("var")
+
+ tab_to(" %s" % type, "%s;" % name)
+
+ if f.type == "float":
+ # Requires using `fui()` or `_mesa_float_to_half()`
+ constexpr_mark = ""
+ if reg.bit_size == 64:
+ tab_to(" uint64_t", "qword;")
+ tab_to(" uint64_t", "unknown;")
+ else:
+ tab_to(" uint32_t", "dword;")
+ tab_to(" uint32_t", "unknown;")
+ print("};\n")
+
+ if not has_variants:
+ print("static%s inline struct fd_reg_pair" % constexpr_mark)
+ if reg.array:
+ print("pack_%s(uint32_t __i, struct %s fields)\n{" % (prefix, prefix))
+ else:
+ print("pack_%s(struct %s fields)\n{" % (prefix, prefix))
+
+ self.dump_regpair_builder(reg)
+
+ print("\n}\n")
+
+ if self.get_address_field():
+ skip = ", { .reg = 0 }"
+ else:
+ skip = ""
+
+ if reg.array:
+ print("#define %s(__i, ...) pack_%s(__i, __struct_cast(%s) { __VA_ARGS__ })%s\n" %
+ (prefix, prefix, prefix, skip))
+ else:
+ print("#define %s(...) pack_%s(__struct_cast(%s) { __VA_ARGS__ })%s\n" %
+ (prefix, prefix, prefix, skip))
+
+ def dump(self, has_variants, prefix=None, reg=None):
+ if prefix is None:
+ prefix = self.name
+ suffix = ""
+ if self.reg and self.reg.bit_size == 64:
+ print(
+ "static CONSTEXPR inline uint32_t %s_LO(uint32_t val)\n{" % prefix)
+ print("\treturn val;\n}")
+ print(
+ "static CONSTEXPR inline uint32_t %s_HI(uint32_t val)\n{" % prefix)
+ print("\treturn val;\n}")
+ suffix = "ull"
+
+ for f in self.fields:
+ if f.name:
+ name = prefix + "_" + f.name
+ else:
+ name = prefix
+
+ if not f.name and f.low == 0 and f.shr == 0 and f.type not in ["float", "fixed", "ufixed"]:
+ pass
+ elif f.type == "boolean" or (f.type is None and f.low == f.high):
+ tab_to("#define %s" % name, "0x%08x%s" % ((1 << f.low), suffix))
+ else:
+ tab_to("#define %s__MASK" %
+ name, "0x%08x%s" % (mask(f.low, f.high), suffix))
+ tab_to("#define %s__SHIFT" % name, "%d" % f.low)
+ type, val = f.ctype("val")
+ ret_type = "uint64_t" if reg and reg.bit_size == 64 else "uint32_t"
+ cast = "(uint64_t)" if reg and reg.bit_size == 64 else ""
+
+ constexpr_mark = "" if type == "float" else " CONSTEXPR"
+ print("static%s inline %s %s(%s val)\n{" % (
+ constexpr_mark, ret_type, name, type))
+ if f.shr > 0:
+ print("\tassert(!(val & 0x%x));" % mask(0, f.shr - 1))
+ print("\treturn (%s(%s) << %s__SHIFT) & %s__MASK;\n}" %
+ (cast, val, name, name))
+ print()
+
class Array(object):
- def __init__(self, attrs, domain, variant, parent, index_type):
- if "name" in attrs:
- self.local_name = attrs["name"]
- else:
- self.local_name = ""
- self.domain = domain
- self.variant = variant
- self.parent = parent
- self.children = []
- if self.parent:
- self.name = self.parent.name + "_" + self.local_name
- else:
- self.name = self.local_name
- if "offsets" in attrs:
- self.offsets = map(lambda i: "0x%08x" % int(i, 0), attrs["offsets"].split(","))
- self.fixed_offsets = True
- elif "doffsets" in attrs:
- self.offsets = map(lambda s: "(%s)" % s , attrs["doffsets"].split(","))
- self.fixed_offsets = True
- else:
- self.offset = int(attrs["offset"], 0)
- self.stride = int(attrs["stride"], 0)
- self.fixed_offsets = False
- if "index" in attrs:
- self.index_type = index_type
- else:
- self.index_type = None
- self.length = int(attrs["length"], 0)
- if "usage" in attrs:
- self.usages = attrs["usage"].split(',')
- else:
- self.usages = None
-
- def index_ctype(self):
- if not self.index_type:
- return "uint32_t"
- else:
- return "enum %s" % self.index_type.name
-
- # Generate array of (ctype, stride, __offsets_NAME)
- def indices(self):
- if self.parent:
- indices = self.parent.indices()
- else:
- indices = []
- if self.length != 1:
- if self.fixed_offsets:
- indices.append((self.index_ctype(), None, "__offset_%s" % self.local_name))
- else:
- indices.append((self.index_ctype(), self.stride, None))
- return indices
-
- def total_offset(self):
- offset = 0
- if not self.fixed_offsets:
- offset += self.offset
- if self.parent:
- offset += self.parent.total_offset()
- return offset
-
- def dump(self, is_deprecated):
- depcrstr = ""
- if is_deprecated:
- depcrstr = " FD_DEPRECATED"
- proto = indices_varlist(self.indices())
- strides = indices_strides(self.indices())
- array_offset = self.total_offset()
- if self.fixed_offsets:
- print("static inline%s uint32_t __offset_%s(%s idx)" % (depcrstr, self.local_name, self.index_ctype()))
- print("{\n\tswitch (idx) {")
- if self.index_type:
- for val, offset in zip(self.index_type.names(), self.offsets):
- print("\t\tcase %s: return %s;" % (val, offset))
- else:
- for idx, offset in enumerate(self.offsets):
- print("\t\tcase %d: return %s;" % (idx, offset))
- print("\t\tdefault: return INVALID_IDX(idx);")
- print("\t}\n}")
- if proto == '':
- tab_to("#define REG_%s_%s" % (self.domain, self.name), "0x%08x\n" % array_offset)
- else:
- tab_to("#define REG_%s_%s(%s)" % (self.domain, self.name, proto), "(0x%08x + %s )\n" % (array_offset, strides))
-
- def dump_pack_struct(self, is_deprecated):
- pass
-
- def dump_regpair_builder(self):
- pass
+ def __init__(self, attrs, domain, variant, parent, index_type):
+ if "name" in attrs:
+ self.local_name = attrs["name"]
+ else:
+ self.local_name = ""
+ self.domain = domain
+ self.variant = variant
+ self.parent = parent
+ self.children = []
+ if self.parent:
+ self.name = self.parent.name + "_" + self.local_name
+ else:
+ self.name = self.local_name
+ if "offsets" in attrs:
+ self.offsets = map(lambda i: "0x%08x" %
+ int(i, 0), attrs["offsets"].split(","))
+ self.fixed_offsets = True
+ elif "doffsets" in attrs:
+ self.offsets = map(lambda s: "(%s)" %
+ s, attrs["doffsets"].split(","))
+ self.fixed_offsets = True
+ else:
+ self.offset = int(attrs["offset"], 0)
+ self.stride = int(attrs["stride"], 0)
+ self.fixed_offsets = False
+ if "index" in attrs:
+ self.index_type = index_type
+ else:
+ self.index_type = None
+ self.length = int(attrs["length"], 0)
+ if "usage" in attrs:
+ self.usages = attrs["usage"].split(',')
+ else:
+ self.usages = None
+
+ def index_ctype(self):
+ if not self.index_type:
+ return "uint32_t"
+ else:
+ return "enum %s" % self.index_type.name
+
+ # Generate array of (ctype, stride, __offsets_NAME)
+ def indices(self):
+ if self.parent:
+ indices = self.parent.indices()
+ else:
+ indices = []
+ if self.length != 1:
+ if self.fixed_offsets:
+ indices.append((self.index_ctype(), None,
+ "__offset_%s" % self.local_name))
+ else:
+ indices.append((self.index_ctype(), self.stride, None))
+ return indices
+
+ def total_offset(self):
+ offset = 0
+ if not self.fixed_offsets:
+ offset += self.offset
+ if self.parent:
+ offset += self.parent.total_offset()
+ return offset
+
+ def dump(self, has_variants):
+ proto = indices_varlist(self.indices())
+ strides = indices_strides(self.indices())
+ array_offset = self.total_offset()
+ if self.fixed_offsets and not has_variants:
+ print("static CONSTEXPR inline uint32_t __offset_%s(%s idx)" %
+ (self.local_name, self.index_ctype()))
+ print("{\n\tswitch (idx) {")
+ if self.index_type:
+ for val, offset in zip(self.index_type.names(), self.offsets):
+ print("\t\tcase %s: return %s;" % (val, offset))
+ else:
+ for idx, offset in enumerate(self.offsets):
+ print("\t\tcase %d: return %s;" % (idx, offset))
+ print("\t\tdefault: return INVALID_IDX(idx);")
+ print("\t}\n}")
+ if proto == '':
+ define_macro("#define REG_%s_%s" %
+ (self.domain, self.name), "0x%08x\n" % array_offset,
+ has_variants)
+ else:
+ define_macro("#define REG_%s_%s(%s)" % (self.domain, self.name,
+ proto), "(0x%08x + %s )\n" % (array_offset, strides),
+ has_variants)
+
+ def dump_pack_struct(self, has_variants):
+ pass
+
+ def dump_regpair_builder(self):
+ pass
+
class Reg(object):
- def __init__(self, attrs, domain, array, bit_size):
- self.name = attrs["name"]
- self.domain = domain
- self.array = array
- self.offset = int(attrs["offset"], 0)
- self.type = None
- self.bit_size = bit_size
- if array:
- self.name = array.name + "_" + self.name
- array.children.append(self)
- self.full_name = self.domain + "_" + self.name
- if "stride" in attrs:
- self.stride = int(attrs["stride"], 0)
- self.length = int(attrs["length"], 0)
- else:
- self.stride = None
- self.length = None
-
- # Generate array of (ctype, stride, __offsets_NAME)
- def indices(self):
- if self.array:
- indices = self.array.indices()
- else:
- indices = []
- if self.stride:
- indices.append(("uint32_t", self.stride, None))
- return indices
-
- def total_offset(self):
- if self.array:
- return self.array.total_offset() + self.offset
- else:
- return self.offset
-
- def reg_offset(self):
- if self.array:
- offset = self.array.offset + self.offset
- return "(0x%08x + 0x%x*__i)" % (offset, self.array.stride)
- return "0x%08x" % self.offset
-
- def dump(self, is_deprecated):
- depcrstr = ""
- if is_deprecated:
- depcrstr = " FD_DEPRECATED "
- proto = indices_prototype(self.indices())
- strides = indices_strides(self.indices())
- offset = self.total_offset()
- if proto == '':
- tab_to("#define REG_%s" % self.full_name, "0x%08x" % offset)
- else:
- print("static inline%s uint32_t REG_%s(%s) { return 0x%08x + %s; }" % (depcrstr, self.full_name, proto, offset, strides))
-
- if self.bitset.inline:
- self.bitset.dump(is_deprecated, self.full_name, self)
- print("")
-
- def dump_pack_struct(self, is_deprecated):
- if self.bitset.inline:
- self.bitset.dump_pack_struct(is_deprecated, self)
-
- def dump_regpair_builder(self):
- self.bitset.dump_regpair_builder(self)
-
- def dump_py(self):
- print("\tREG_%s = 0x%08x" % (self.full_name, self.offset))
+ def __init__(self, attrs, domain, array, bit_size):
+ self.name = attrs["name"]
+ self.domain = domain
+ self.array = array
+ self.offset = int(attrs["offset"], 0)
+ self.type = None
+ self.bit_size = bit_size
+ if array:
+ self.name = array.name + "_" + self.name
+ array.children.append(self)
+ self.full_name = self.domain + "_" + self.name
+ if "stride" in attrs:
+ self.stride = int(attrs["stride"], 0)
+ self.length = int(attrs["length"], 0)
+ else:
+ self.stride = None
+ self.length = None
+
+ # Generate array of (ctype, stride, __offsets_NAME)
+ def indices(self):
+ if self.array:
+ indices = self.array.indices()
+ else:
+ indices = []
+ if self.stride:
+ indices.append(("uint32_t", self.stride, None))
+ return indices
+
+ def total_offset(self):
+ if self.array:
+ return self.array.total_offset() + self.offset
+ else:
+ return self.offset
+
+ def reg_offset(self):
+ if self.array:
+ offset = self.array.offset + self.offset
+ return "(0x%08x + 0x%x*__i)" % (offset, self.array.stride)
+ return "0x%08x" % self.offset
+
+ def dump(self, has_variants):
+ proto = indices_prototype(self.indices())
+ strides = indices_strides(self.indices())
+ offset = self.total_offset()
+ if proto == '':
+ define_macro("#define REG_%s" % self.full_name, "0x%08x" % offset, has_variants)
+ elif not has_variants:
+ depcrstr = ""
+ if has_variants:
+ depcrstr = " __FD_DEPRECATED "
+ print("static CONSTEXPR inline%s uint32_t REG_%s(%s) { return 0x%08x + %s; }" % (
+ depcrstr, self.full_name, proto, offset, strides))
+
+ if self.bitset.inline:
+ self.bitset.dump(has_variants, self.full_name, self)
+ print("")
+
+ def dump_pack_struct(self, has_variants):
+ if self.bitset.inline:
+ self.bitset.dump_pack_struct(has_variants, self)
+
+ def dump_regpair_builder(self):
+ self.bitset.dump_regpair_builder(self)
+
+ def dump_py(self):
+ offset = self.offset
+ if self.array:
+ offset += self.array.offset
+ print("\tREG_%s = 0x%08x" % (self.full_name, offset))
class Parser(object):
- def __init__(self):
- self.current_array = None
- self.current_domain = None
- self.current_prefix = None
- self.current_prefix_type = None
- self.current_stripe = None
- self.current_bitset = None
- self.current_bitsize = 32
- # The varset attribute on the domain specifies the enum which
- # specifies all possible hw variants:
- self.current_varset = None
- # Regs that have multiple variants.. we only generated the C++
- # template based struct-packers for these
- self.variant_regs = {}
- # Information in which contexts regs are used, to be used in
- # debug options
- self.usage_regs = collections.defaultdict(list)
- self.bitsets = {}
- self.enums = {}
- self.variants = set()
- self.file = []
- self.xml_files = []
-
- def error(self, message):
- parser, filename = self.stack[-1]
- return Error("%s:%d:%d: %s" % (filename, parser.CurrentLineNumber, parser.CurrentColumnNumber, message))
-
- def prefix(self, variant=None):
- if self.current_prefix_type == "variant" and variant:
- return sanitize_variant(variant)
- elif self.current_stripe:
- return self.current_stripe + "_" + self.current_domain
- elif self.current_prefix:
- return self.current_prefix + "_" + self.current_domain
- else:
- return self.current_domain
-
- def parse_field(self, name, attrs):
- try:
- if "pos" in attrs:
- high = low = int(attrs["pos"], 0)
- elif "high" in attrs and "low" in attrs:
- high = int(attrs["high"], 0)
- low = int(attrs["low"], 0)
- else:
- low = 0
- high = self.current_bitsize - 1
-
- if "type" in attrs:
- type = attrs["type"]
- else:
- type = None
-
- if "shr" in attrs:
- shr = int(attrs["shr"], 0)
- else:
- shr = 0
-
- b = Field(name, low, high, shr, type, self)
-
- if type == "fixed" or type == "ufixed":
- b.radix = int(attrs["radix"], 0)
-
- self.current_bitset.fields.append(b)
- except ValueError as e:
- raise self.error(e)
-
- def parse_varset(self, attrs):
- # Inherit the varset from the enclosing domain if not overriden:
- varset = self.current_varset
- if "varset" in attrs:
- varset = self.enums[attrs["varset"]]
- return varset
-
- def parse_variants(self, attrs):
- if "variants" not in attrs:
- return None
-
- variant = attrs["variants"].split(",")[0]
- varset = self.parse_varset(attrs)
-
- if "-" in variant:
- # if we have a range, validate that both the start and end
- # of the range are valid enums:
- start = variant[:variant.index("-")]
- end = variant[variant.index("-") + 1:]
- assert varset.has_name(start)
- if end != "":
- assert varset.has_name(end)
- else:
- assert varset.has_name(variant)
-
- return variant
-
- def add_all_variants(self, reg, attrs, parent_variant):
- # TODO this should really handle *all* variants, including dealing
- # with open ended ranges (ie. "A2XX,A4XX-") (we have the varset
- # enum now to make that possible)
- variant = self.parse_variants(attrs)
- if not variant:
- variant = parent_variant
-
- if reg.name not in self.variant_regs:
- self.variant_regs[reg.name] = {}
- else:
- # All variants must be same size:
- v = next(iter(self.variant_regs[reg.name]))
- assert self.variant_regs[reg.name][v].bit_size == reg.bit_size
-
- self.variant_regs[reg.name][variant] = reg
-
- def add_all_usages(self, reg, usages):
- if not usages:
- return
-
- for usage in usages:
- self.usage_regs[usage].append(reg)
-
- self.variants.add(reg.domain)
-
- def do_validate(self, schemafile):
- if not self.validate:
- return
-
- try:
- from lxml import etree
-
- parser, filename = self.stack[-1]
- dirname = os.path.dirname(filename)
-
- # we expect this to look like <namespace url> schema.xsd.. I think
- # technically it is supposed to be just a URL, but that doesn't
- # quite match up to what we do.. Just skip over everything up to
- # and including the first whitespace character:
- schemafile = schemafile[schemafile.rindex(" ")+1:]
-
- # this is a bit cheezy, but the xml file to validate could be
- # in a child director, ie. we don't really know where the schema
- # file is, the way the rnn C code does. So if it doesn't exist
- # just look one level up
- if not os.path.exists(dirname + "/" + schemafile):
- schemafile = "../" + schemafile
-
- if not os.path.exists(dirname + "/" + schemafile):
- raise self.error("Cannot find schema for: " + filename)
-
- xmlschema_doc = etree.parse(dirname + "/" + schemafile)
- xmlschema = etree.XMLSchema(xmlschema_doc)
-
- xml_doc = etree.parse(filename)
- if not xmlschema.validate(xml_doc):
- error_str = str(xmlschema.error_log.filter_from_errors()[0])
- raise self.error("Schema validation failed for: " + filename + "\n" + error_str)
- except ImportError as e:
- print("lxml not found, skipping validation", file=sys.stderr)
-
- def do_parse(self, filename):
- filepath = os.path.abspath(filename)
- if filepath in self.xml_files:
- return
- self.xml_files.append(filepath)
- file = open(filename, "rb")
- parser = xml.parsers.expat.ParserCreate()
- self.stack.append((parser, filename))
- parser.StartElementHandler = self.start_element
- parser.EndElementHandler = self.end_element
- parser.CharacterDataHandler = self.character_data
- parser.buffer_text = True
- parser.ParseFile(file)
- self.stack.pop()
- file.close()
-
- def parse(self, rnn_path, filename, validate):
- self.path = rnn_path
- self.stack = []
- self.validate = validate
- self.do_parse(filename)
-
- def parse_reg(self, attrs, bit_size):
- self.current_bitsize = bit_size
- if "type" in attrs and attrs["type"] in self.bitsets:
- bitset = self.bitsets[attrs["type"]]
- if bitset.inline:
- self.current_bitset = Bitset(attrs["name"], bitset)
- self.current_bitset.inline = True
- else:
- self.current_bitset = bitset
- else:
- self.current_bitset = Bitset(attrs["name"], None)
- self.current_bitset.inline = True
- if "type" in attrs:
- self.parse_field(None, attrs)
-
- variant = self.parse_variants(attrs)
- if not variant and self.current_array:
- variant = self.current_array.variant
-
- self.current_reg = Reg(attrs, self.prefix(variant), self.current_array, bit_size)
- self.current_reg.bitset = self.current_bitset
- self.current_bitset.reg = self.current_reg
-
- if len(self.stack) == 1:
- self.file.append(self.current_reg)
-
- if variant is not None:
- self.add_all_variants(self.current_reg, attrs, variant)
-
- usages = None
- if "usage" in attrs:
- usages = attrs["usage"].split(',')
- elif self.current_array:
- usages = self.current_array.usages
-
- self.add_all_usages(self.current_reg, usages)
-
- def start_element(self, name, attrs):
- self.cdata = ""
- if name == "import":
- filename = attrs["file"]
- self.do_parse(os.path.join(self.path, filename))
- elif name == "domain":
- self.current_domain = attrs["name"]
- if "prefix" in attrs:
- self.current_prefix = sanitize_variant(self.parse_variants(attrs))
- self.current_prefix_type = attrs["prefix"]
- else:
- self.current_prefix = None
- self.current_prefix_type = None
- if "varset" in attrs:
- self.current_varset = self.enums[attrs["varset"]]
- elif name == "stripe":
- self.current_stripe = sanitize_variant(self.parse_variants(attrs))
- elif name == "enum":
- self.current_enum_value = 0
- self.current_enum = Enum(attrs["name"])
- self.enums[attrs["name"]] = self.current_enum
- if len(self.stack) == 1:
- self.file.append(self.current_enum)
- elif name == "value":
- if "value" in attrs:
- value = int(attrs["value"], 0)
- else:
- value = self.current_enum_value
- self.current_enum.values.append((attrs["name"], value))
- elif name == "reg32":
- self.parse_reg(attrs, 32)
- elif name == "reg64":
- self.parse_reg(attrs, 64)
- elif name == "array":
- self.current_bitsize = 32
- variant = self.parse_variants(attrs)
- index_type = self.enums[attrs["index"]] if "index" in attrs else None
- self.current_array = Array(attrs, self.prefix(variant), variant, self.current_array, index_type)
- if len(self.stack) == 1:
- self.file.append(self.current_array)
- elif name == "bitset":
- self.current_bitset = Bitset(attrs["name"], None)
- if "inline" in attrs and attrs["inline"] == "yes":
- self.current_bitset.inline = True
- self.bitsets[self.current_bitset.name] = self.current_bitset
- if len(self.stack) == 1 and not self.current_bitset.inline:
- self.file.append(self.current_bitset)
- elif name == "bitfield" and self.current_bitset:
- self.parse_field(attrs["name"], attrs)
- elif name == "database":
- self.do_validate(attrs["xsi:schemaLocation"])
-
- def end_element(self, name):
- if name == "domain":
- self.current_domain = None
- self.current_prefix = None
- self.current_prefix_type = None
- elif name == "stripe":
- self.current_stripe = None
- elif name == "bitset":
- self.current_bitset = None
- elif name == "reg32":
- self.current_reg = None
- elif name == "array":
- # if the array has no Reg children, push an implicit reg32:
- if len(self.current_array.children) == 0:
- attrs = {
- "name": "REG",
- "offset": "0",
- }
- self.parse_reg(attrs, 32)
- self.current_array = self.current_array.parent
- elif name == "enum":
- self.current_enum = None
-
- def character_data(self, data):
- self.cdata += data
-
- def dump_reg_usages(self):
- d = collections.defaultdict(list)
- for usage, regs in self.usage_regs.items():
- for reg in regs:
- variants = self.variant_regs.get(reg.name)
- if variants:
- for variant, vreg in variants.items():
- if reg == vreg:
- d[(usage, sanitize_variant(variant))].append(reg)
- else:
- for variant in self.variants:
- d[(usage, sanitize_variant(variant))].append(reg)
-
- print("#ifdef __cplusplus")
-
- for usage, regs in self.usage_regs.items():
- print("template<chip CHIP> constexpr inline uint16_t %s_REGS[] = {};" % (usage.upper()))
-
- for (usage, variant), regs in d.items():
- offsets = []
-
- for reg in regs:
- if reg.array:
- for i in range(reg.array.length):
- offsets.append(reg.array.offset + reg.offset + i * reg.array.stride)
- if reg.bit_size == 64:
- offsets.append(offsets[-1] + 1)
- else:
- offsets.append(reg.offset)
- if reg.bit_size == 64:
- offsets.append(offsets[-1] + 1)
-
- offsets.sort()
-
- print("template<> constexpr inline uint16_t %s_REGS<%s>[] = {" % (usage.upper(), variant))
- for offset in offsets:
- print("\t%s," % hex(offset))
- print("};")
-
- print("#endif")
-
- def has_variants(self, reg):
- return reg.name in self.variant_regs and not is_number(reg.name) and not is_number(reg.name[1:])
-
- def dump(self):
- enums = []
- bitsets = []
- regs = []
- for e in self.file:
- if isinstance(e, Enum):
- enums.append(e)
- elif isinstance(e, Bitset):
- bitsets.append(e)
- else:
- regs.append(e)
-
- for e in enums + bitsets + regs:
- e.dump(self.has_variants(e))
-
- self.dump_reg_usages()
-
-
- def dump_regs_py(self):
- regs = []
- for e in self.file:
- if isinstance(e, Reg):
- regs.append(e)
-
- for e in regs:
- e.dump_py()
-
-
- def dump_reg_variants(self, regname, variants):
- if is_number(regname) or is_number(regname[1:]):
- return
- print("#ifdef __cplusplus")
- print("struct __%s {" % regname)
- # TODO be more clever.. we should probably figure out which
- # fields have the same type in all variants (in which they
- # appear) and stuff everything else in a variant specific
- # sub-structure.
- seen_fields = []
- bit_size = 32
- array = False
- address = None
- for variant in variants.keys():
- print(" /* %s fields: */" % variant)
- reg = variants[variant]
- bit_size = reg.bit_size
- array = reg.array
- for f in reg.bitset.fields:
- fld_name = field_name(reg, f)
- if fld_name in seen_fields:
- continue
- seen_fields.append(fld_name)
- name = fld_name.lower()
- if f.type in [ "address", "waddress" ]:
- if address:
- continue
- address = f
- tab_to(" __bo_type", "bo;")
- tab_to(" uint32_t", "bo_offset;")
- continue
- type, val = f.ctype("var")
- tab_to(" %s" %type, "%s;" %name)
- print(" /* fallback fields: */")
- if bit_size == 64:
- tab_to(" uint64_t", "unknown;")
- tab_to(" uint64_t", "qword;")
- else:
- tab_to(" uint32_t", "unknown;")
- tab_to(" uint32_t", "dword;")
- print("};")
- # TODO don't hardcode the varset enum name
- varenum = "chip"
- print("template <%s %s>" % (varenum, varenum.upper()))
- print("static inline struct fd_reg_pair")
- xtra = ""
- xtravar = ""
- if array:
- xtra = "int __i, "
- xtravar = "__i, "
- print("__%s(%sstruct __%s fields) {" % (regname, xtra, regname))
- for variant in variants.keys():
- if "-" in variant:
- start = variant[:variant.index("-")]
- end = variant[variant.index("-") + 1:]
- if end != "":
- print(" if ((%s >= %s) && (%s <= %s)) {" % (varenum.upper(), start, varenum.upper(), end))
- else:
- print(" if (%s >= %s) {" % (varenum.upper(), start))
- else:
- print(" if (%s == %s) {" % (varenum.upper(), variant))
- reg = variants[variant]
- reg.dump_regpair_builder()
- print(" } else")
- print(" assert(!\"invalid variant\");")
- print(" return (struct fd_reg_pair){};")
- print("}")
-
- if bit_size == 64:
- skip = ", { .reg = 0 }"
- else:
- skip = ""
-
- print("#define %s(VARIANT, %s...) __%s<VARIANT>(%s{__VA_ARGS__})%s" % (regname, xtravar, regname, xtravar, skip))
- print("#endif /* __cplusplus */")
-
- def dump_structs(self):
- for e in self.file:
- e.dump_pack_struct(self.has_variants(e))
-
- for regname in self.variant_regs:
- self.dump_reg_variants(regname, self.variant_regs[regname])
+ def __init__(self):
+ self.current_array = None
+ self.current_domain = None
+ self.current_prefix = None
+ self.current_prefix_type = None
+ self.current_stripe = None
+ self.current_bitset = None
+ self.current_bitsize = 32
+ # The varset attribute on the domain specifies the enum which
+ # specifies all possible hw variants:
+ self.current_varset = None
+ # Regs that have multiple variants.. we only generated the C++
+ # template based struct-packers for these
+ self.variant_regs = {}
+ # Information in which contexts regs are used, to be used in
+ # debug options
+ self.usage_regs = collections.defaultdict(list)
+ self.bitsets = {}
+ self.enums = {}
+ self.variants = set()
+ self.file = []
+ self.xml_files = []
+
+ def error(self, message):
+ parser, filename = self.stack[-1]
+ return Error("%s:%d:%d: %s" % (filename, parser.CurrentLineNumber, parser.CurrentColumnNumber, message))
+
+ def prefix(self, variant=None):
+ if self.current_prefix_type == "variant" and variant:
+ return sanitize_variant(variant)
+ elif self.current_stripe:
+ return self.current_stripe + "_" + self.current_domain
+ elif self.current_prefix:
+ return self.current_prefix + "_" + self.current_domain
+ else:
+ return self.current_domain
+
+ def parse_field(self, name, attrs):
+ try:
+ if "pos" in attrs:
+ high = low = int(attrs["pos"], 0)
+ elif "high" in attrs and "low" in attrs:
+ high = int(attrs["high"], 0)
+ low = int(attrs["low"], 0)
+ else:
+ low = 0
+ high = self.current_bitsize - 1
+
+ if "type" in attrs:
+ type = attrs["type"]
+ else:
+ type = None
+
+ if "shr" in attrs:
+ shr = int(attrs["shr"], 0)
+ else:
+ shr = 0
+
+ b = Field(name, low, high, shr, type, self)
+
+ if type == "fixed" or type == "ufixed":
+ b.radix = int(attrs["radix"], 0)
+
+ self.current_bitset.fields.append(b)
+ except ValueError as e:
+ raise self.error(e)
+
+ def parse_varset(self, attrs):
+ # Inherit the varset from the enclosing domain if not overriden:
+ varset = self.current_varset
+ if "varset" in attrs:
+ varset = self.enums[attrs["varset"]]
+ return varset
+
+ def parse_variants(self, attrs):
+ if "variants" not in attrs:
+ return None
+
+ variant = attrs["variants"].split(",")[0]
+ varset = self.parse_varset(attrs)
+
+ if "-" in variant:
+ # if we have a range, validate that both the start and end
+ # of the range are valid enums:
+ start = variant[:variant.index("-")]
+ end = variant[variant.index("-") + 1:]
+ assert varset.has_name(start)
+ if end != "":
+ assert varset.has_name(end)
+ else:
+ assert varset.has_name(variant)
+
+ return variant
+
+ def add_all_variants(self, reg, attrs, parent_variant):
+ # TODO this should really handle *all* variants, including dealing
+ # with open ended ranges (ie. "A2XX,A4XX-") (we have the varset
+ # enum now to make that possible)
+ variant = self.parse_variants(attrs)
+ if not variant:
+ variant = parent_variant
+
+ if reg.name not in self.variant_regs:
+ self.variant_regs[reg.name] = {}
+ else:
+ # All variants must be same size:
+ v = next(iter(self.variant_regs[reg.name]))
+ assert self.variant_regs[reg.name][v].bit_size == reg.bit_size
+
+ self.variant_regs[reg.name][variant] = reg
+
+ def add_all_usages(self, reg, usages):
+ if not usages:
+ return
+
+ for usage in usages:
+ self.usage_regs[usage].append(reg)
+
+ self.variants.add(reg.domain)
+
+ def do_validate(self, schemafile):
+ if not self.validate:
+ return
+
+ try:
+ from lxml import etree
+
+ parser, filename = self.stack[-1]
+ dirname = os.path.dirname(filename)
+
+ # we expect this to look like <namespace url> schema.xsd.. I think
+ # technically it is supposed to be just a URL, but that doesn't
+ # quite match up to what we do.. Just skip over everything up to
+ # and including the first whitespace character:
+ schemafile = schemafile[schemafile.rindex(" ")+1:]
+
+ # this is a bit cheezy, but the xml file to validate could be
+ # in a child director, ie. we don't really know where the schema
+ # file is, the way the rnn C code does. So if it doesn't exist
+ # just look one level up
+ if not os.path.exists(dirname + "/" + schemafile):
+ schemafile = "../" + schemafile
+
+ if not os.path.exists(dirname + "/" + schemafile):
+ raise self.error("Cannot find schema for: " + filename)
+
+ xmlschema_doc = etree.parse(dirname + "/" + schemafile)
+ xmlschema = etree.XMLSchema(xmlschema_doc)
+
+ xml_doc = etree.parse(filename)
+ if not xmlschema.validate(xml_doc):
+ error_str = str(xmlschema.error_log.filter_from_errors()[0])
+ raise self.error(
+ "Schema validation failed for: " + filename + "\n" + error_str)
+ except ImportError as e:
+ print("lxml not found, skipping validation", file=sys.stderr)
+
+ def do_parse(self, filename):
+ filepath = os.path.abspath(filename)
+ if filepath in self.xml_files:
+ return
+ self.xml_files.append(filepath)
+ file = open(filename, "rb")
+ parser = xml.parsers.expat.ParserCreate()
+ self.stack.append((parser, filename))
+ parser.StartElementHandler = self.start_element
+ parser.EndElementHandler = self.end_element
+ parser.CharacterDataHandler = self.character_data
+ parser.buffer_text = True
+ parser.ParseFile(file)
+ self.stack.pop()
+ file.close()
+
+ def parse(self, rnn_path, filename, validate):
+ self.path = rnn_path
+ self.stack = []
+ self.validate = validate
+ self.do_parse(filename)
+
+ def parse_reg(self, attrs, bit_size):
+ self.current_bitsize = bit_size
+ if "type" in attrs and attrs["type"] in self.bitsets:
+ bitset = self.bitsets[attrs["type"]]
+ if bitset.inline:
+ self.current_bitset = Bitset(attrs["name"], bitset)
+ self.current_bitset.inline = True
+ else:
+ self.current_bitset = bitset
+ else:
+ self.current_bitset = Bitset(attrs["name"], None)
+ self.current_bitset.inline = True
+ if "type" in attrs:
+ self.parse_field(None, attrs)
+
+ variant = self.parse_variants(attrs)
+ if not variant and self.current_array:
+ variant = self.current_array.variant
+
+ self.current_reg = Reg(attrs, self.prefix(
+ variant), self.current_array, bit_size)
+ self.current_reg.bitset = self.current_bitset
+ self.current_bitset.reg = self.current_reg
+
+ if len(self.stack) == 1:
+ self.file.append(self.current_reg)
+
+ if variant is not None:
+ self.add_all_variants(self.current_reg, attrs, variant)
+
+ usages = None
+ if "usage" in attrs:
+ usages = attrs["usage"].split(',')
+ elif self.current_array:
+ usages = self.current_array.usages
+
+ self.add_all_usages(self.current_reg, usages)
+
+ def start_element(self, name, attrs):
+ self.cdata = ""
+ if name == "import":
+ filename = attrs["file"]
+ self.do_parse(os.path.join(self.path, filename))
+ elif name == "domain":
+ self.current_domain = attrs["name"]
+ if "prefix" in attrs:
+ self.current_prefix = sanitize_variant(
+ self.parse_variants(attrs))
+ self.current_prefix_type = attrs["prefix"]
+ else:
+ self.current_prefix = None
+ self.current_prefix_type = None
+ if "varset" in attrs:
+ self.current_varset = self.enums[attrs["varset"]]
+ elif name == "stripe":
+ self.current_stripe = sanitize_variant(self.parse_variants(attrs))
+ elif name == "enum":
+ self.current_enum_value = 0
+ self.current_enum = Enum(attrs["name"])
+ self.enums[attrs["name"]] = self.current_enum
+ if len(self.stack) == 1:
+ self.file.append(self.current_enum)
+ elif name == "value":
+ if "value" in attrs:
+ value = int(attrs["value"], 0)
+ else:
+ value = self.current_enum_value
+ self.current_enum.values.append((attrs["name"], value))
+ elif name == "reg32":
+ self.parse_reg(attrs, 32)
+ elif name == "reg64":
+ self.parse_reg(attrs, 64)
+ elif name == "array":
+ self.current_bitsize = 32
+ variant = self.parse_variants(attrs)
+ index_type = self.enums[attrs["index"]
+ ] if "index" in attrs else None
+ self.current_array = Array(attrs, self.prefix(
+ variant), variant, self.current_array, index_type)
+ if len(self.stack) == 1:
+ self.file.append(self.current_array)
+ elif name == "bitset":
+ self.current_bitset = Bitset(attrs["name"], None)
+ if "inline" in attrs and attrs["inline"] == "yes":
+ self.current_bitset.inline = True
+ self.bitsets[self.current_bitset.name] = self.current_bitset
+ if len(self.stack) == 1 and not self.current_bitset.inline:
+ self.file.append(self.current_bitset)
+ elif name == "bitfield" and self.current_bitset:
+ self.parse_field(attrs["name"], attrs)
+ elif name == "database":
+ self.do_validate(attrs["xsi:schemaLocation"])
+
+ def end_element(self, name):
+ if name == "domain":
+ self.current_domain = None
+ self.current_prefix = None
+ self.current_prefix_type = None
+ elif name == "stripe":
+ self.current_stripe = None
+ elif name == "bitset":
+ self.current_bitset = None
+ elif name == "reg32":
+ self.current_reg = None
+ elif name == "array":
+ # if the array has no Reg children, push an implicit reg32:
+ if len(self.current_array.children) == 0:
+ attrs = {
+ "name": "REG",
+ "offset": "0",
+ }
+ self.parse_reg(attrs, 32)
+ self.current_array = self.current_array.parent
+ elif name == "enum":
+ self.current_enum = None
+
+ def character_data(self, data):
+ self.cdata += data
+
+ def dump_reg_usages(self):
+ d = collections.defaultdict(list)
+ for usage, regs in self.usage_regs.items():
+ for reg in regs:
+ variants = self.variant_regs.get(reg.name)
+ if variants:
+ for variant, vreg in variants.items():
+ if reg == vreg:
+ d[(usage, sanitize_variant(variant))].append(reg)
+ else:
+ for variant in self.variants:
+ d[(usage, sanitize_variant(variant))].append(reg)
+
+ print("#ifdef __cplusplus")
+
+ for usage, regs in self.usage_regs.items():
+ print("template<chip CHIP> constexpr inline uint16_t %s_REGS[] = {};" % (
+ usage.upper()))
+
+ for (usage, variant), regs in d.items():
+ offsets = []
+
+ for reg in regs:
+ if reg.array:
+ for i in range(reg.array.length):
+ offsets.append(reg.array.offset +
+ reg.offset + i * reg.array.stride)
+ if reg.bit_size == 64:
+ offsets.append(offsets[-1] + 1)
+ else:
+ offsets.append(reg.offset)
+ if reg.bit_size == 64:
+ offsets.append(offsets[-1] + 1)
+
+ offsets.sort()
+
+ print("template<> constexpr inline uint16_t %s_REGS<%s>[] = {" % (
+ usage.upper(), variant))
+ for offset in offsets:
+ print("\t%s," % hex(offset))
+ print("};")
+
+ print("#endif")
+
+ def has_variants(self, reg):
+ return reg.name in self.variant_regs and not is_number(reg.name) and not is_number(reg.name[1:])
+
+ def dump(self):
+ enums = []
+ bitsets = []
+ regs = []
+ for e in self.file:
+ if isinstance(e, Enum):
+ enums.append(e)
+ elif isinstance(e, Bitset):
+ bitsets.append(e)
+ else:
+ regs.append(e)
+
+ for e in enums + bitsets + regs:
+ e.dump(self.has_variants(e))
+
+ self.dump_reg_usages()
+
+ def dump_regs_py(self):
+ regs = []
+ for e in self.file:
+ if isinstance(e, Reg):
+ regs.append(e)
+
+ for e in regs:
+ e.dump_py()
+
+ def dump_reg_variants(self, regname, variants):
+ if is_number(regname) or is_number(regname[1:]):
+ return
+ print("#ifdef __cplusplus")
+ print("struct __%s {" % regname)
+ # TODO be more clever.. we should probably figure out which
+ # fields have the same type in all variants (in which they
+ # appear) and stuff everything else in a variant specific
+ # sub-structure.
+ seen_fields = []
+ bit_size = 32
+ array = False
+ address = None
+ constexpr_mark = " CONSTEXPR"
+ for variant in variants.keys():
+ print(" /* %s fields: */" % variant)
+ reg = variants[variant]
+ bit_size = reg.bit_size
+ array = reg.array
+ for f in reg.bitset.fields:
+ fld_name = field_name(reg, f)
+ if fld_name in seen_fields:
+ continue
+ seen_fields.append(fld_name)
+ name = fld_name.lower()
+ if f.type in ["address", "waddress"]:
+ if address:
+ continue
+ address = f
+ print("#ifndef TU_CS_H")
+ tab_to(" __bo_type", "bo;")
+ tab_to(" uint32_t", "bo_offset;")
+ print("#endif")
+ continue
+ type, val = f.ctype("var")
+ tab_to(" %s" % type, "%s;" % name)
+ if f.type == "float":
+ constexpr_mark = ""
+ print(" /* fallback fields: */")
+ if bit_size == 64:
+ tab_to(" uint64_t", "unknown;")
+ tab_to(" uint64_t", "qword;")
+ else:
+ tab_to(" uint32_t", "unknown;")
+ tab_to(" uint32_t", "dword;")
+ print("};")
+ # TODO don't hardcode the varset enum name
+ varenum = "chip"
+ print("template <%s %s>" % (varenum, varenum.upper()))
+ print("static%s inline struct fd_reg_pair" % (constexpr_mark))
+ xtra = ""
+ xtravar = ""
+ if array:
+ xtra = "int __i, "
+ xtravar = "__i, "
+ print("__%s(%sstruct __%s fields) {" % (regname, xtra, regname))
+ for variant in variants.keys():
+ if "-" in variant:
+ start = variant[:variant.index("-")]
+ end = variant[variant.index("-") + 1:]
+ if end != "":
+ print(" if ((%s >= %s) && (%s <= %s)) {" % (
+ varenum.upper(), start, varenum.upper(), end))
+ else:
+ print(" if (%s >= %s) {" % (varenum.upper(), start))
+ else:
+ print(" if (%s == %s) {" % (varenum.upper(), variant))
+ reg = variants[variant]
+ reg.dump_regpair_builder()
+ print(" } else")
+ print(" assert(!\"invalid variant\");")
+ print(" return (struct fd_reg_pair){};")
+ print("}")
+
+ if bit_size == 64:
+ skip = ", { .reg = 0 }"
+ else:
+ skip = ""
+
+ print("#define %s(VARIANT, %s...) __%s<VARIANT>(%s{__VA_ARGS__})%s" % (
+ regname, xtravar, regname, xtravar, skip))
+ print("#endif /* __cplusplus */")
+
+ def dump_structs(self):
+ for e in self.file:
+ e.dump_pack_struct(self.has_variants(e))
+
+ for regname in self.variant_regs:
+ self.dump_reg_variants(regname, self.variant_regs[regname])
def dump_c(args, guard, func):
- p = Parser()
-
- try:
- p.parse(args.rnn, args.xml, args.validate)
- except Error as e:
- print(e, file=sys.stderr)
- exit(1)
-
- print("#ifndef %s\n#define %s\n" % (guard, guard))
-
- print("/* Autogenerated file, DO NOT EDIT manually! */")
-
- print()
- print("#ifdef __KERNEL__")
- print("#include <linux/bug.h>")
- print("#define assert(x) BUG_ON(!(x))")
- print("#else")
- print("#include <assert.h>")
- print("#endif")
- print()
-
- print("#ifdef __cplusplus")
- print("#define __struct_cast(X)")
- print("#else")
- print("#define __struct_cast(X) (struct X)")
- print("#endif")
- print()
-
- print("#ifndef FD_NO_DEPRECATED_PACK")
- print("#define FD_DEPRECATED __attribute__((deprecated))")
- print("#else")
- print("#define FD_DEPRECATED")
- print("#endif")
- print()
-
- func(p)
-
- print()
- print("#undef FD_DEPRECATED")
- print()
-
- print("#endif /* %s */" % guard)
+ p = Parser()
+
+ try:
+ p.parse(args.rnn, args.xml, args.validate)
+ except Error as e:
+ print(e, file=sys.stderr)
+ exit(1)
+
+ print("#ifndef %s\n#define %s\n" % (guard, guard))
+
+ print("/* Autogenerated file, DO NOT EDIT manually! */")
+
+ print()
+ print("#ifdef __KERNEL__")
+ print("#include <linux/bug.h>")
+ print("#define assert(x) BUG_ON(!(x))")
+ print("#else")
+ print("#include <assert.h>")
+ print("#endif")
+ print()
+
+ print("#ifdef __cplusplus")
+ print("#define __struct_cast(X)")
+ print("#define CONSTEXPR constexpr")
+ print("#else")
+ print("#define __struct_cast(X) (struct X)")
+ print("#define CONSTEXPR")
+ print("#endif")
+ print()
+
+ # TODO figure out what to do about fd_reg_stomp_allowed()
+ # vs gcc.. for now only enable the warnings with clang:
+ print("#if defined(__clang__) && !defined(FD_NO_DEPRECATED_PACK)")
+ print("#define __FD_DEPRECATED _Pragma (\"GCC warning \\\"Deprecated reg builder\\\"\")")
+ print("#else")
+ print("#define __FD_DEPRECATED")
+ print("#endif")
+ print()
+
+ func(p)
+
+ print("#endif /* %s */" % guard)
def dump_c_defines(args):
- guard = str.replace(os.path.basename(args.xml), '.', '_').upper()
- dump_c(args, guard, lambda p: p.dump())
+ guard = str.replace(os.path.basename(args.xml), '.', '_').upper()
+ dump_c(args, guard, lambda p: p.dump())
def dump_c_pack_structs(args):
- guard = str.replace(os.path.basename(args.xml), '.', '_').upper() + '_STRUCTS'
- dump_c(args, guard, lambda p: p.dump_structs())
-
+ guard = str.replace(os.path.basename(args.xml),
+ '.', '_').upper() + '_STRUCTS'
+ dump_c(args, guard, lambda p: p.dump_structs())
+
+
+def dump_perfcntrs(args):
+ p = Parser()
+
+ try:
+ p.parse(args.rnn, args.xml, args.validate)
+ except Error as e:
+ print(e, file=sys.stderr)
+ exit(1)
+
+ perfcntrs = json.load(open(args.json, "r", encoding="utf-8"))
+
+ chip_type = p.enums['chip']
+ chip = perfcntrs['chip']
+ if not chip_type.has_name(chip):
+ raise Error("Invalid chip: " + chip)
+
+ groups = perfcntrs['groups']
+
+ guard = "__" + chip + "_PERFCNTRS_"
+ print("#ifndef %s\n#define %s\n" % (guard, guard))
+ print("/* Autogenerated file, DO NOT EDIT manually! */")
+ print()
+ print("#ifdef __KERNEL__")
+ print("#include \"msm_perfcntr.h\"")
+ print("#endif")
+ print()
+
+ def has_variant(variant):
+ if variant is None:
+ return True
+ if "-" in variant:
+ start = chip_type.value(variant[:variant.index("-")])
+ end = chip_type.value(variant[variant.index("-") + 1:])
+ chipn = chip_type.value(chip)
+
+ return (start is None or chipn >= start) and (end is None or chipn <= end)
+ return chip == variant
+
+ # Split out arrays and regs for later access:
+ arrays = {}
+ regs = {}
+ for e in p.file:
+ if isinstance(e, Array) and has_variant(e.variant):
+ arrays[e.local_name] = e
+ if isinstance(e, Reg):
+ regs[e.name] = e
+
+ # For variant regs, overwrite 'regs' entries with correct variant:
+ for regname in p.variant_regs:
+ for (variant, reg) in p.variant_regs[regname].items():
+ if has_variant(variant):
+ regs[regname] = reg
+ break
+
+ for group in groups:
+ name = group['name']
+ name_low = name.lower()
+ num = group['num']
+ countable_type_name = group['countable_type']
+
+ if not countable_type_name in p.enums:
+ raise Error("Invalid type: " + countable_type_name)
+
+ countable_type = p.enums[countable_type_name]
+
+ print("#ifndef __KERNEL__")
+ print("static const struct fd_perfcntr_countable " + name_low + "_countables[] = {")
+ for (name, value) in countable_type.values:
+ # if the countable is prefixed with the chip, strip that:
+ # (note: avoid py3.9 dependency for kernel)
+ if name.startswith(chip + "_"):
+ name = name[len(chip)+1:]
+ print(" { \"" + name + "\", " + str(value) + " },")
+ print("};")
+ print("#endif")
+
+ print("static const struct fd_perfcntr_counter " + name_low + "_counters[] = {")
+ for i in range(0, num):
+ if "reserved" in group and i in group["reserved"]:
+ continue
+ def get_reg(name):
+ # if reg has {} pattern, expand that first:
+ name = name.format(i)
+
+ if name in arrays:
+ arr = arrays[name]
+ return arr.offset + (i * arr.stride)
+
+ if not name in regs:
+ raise Error("Invalid reg: " + name)
+
+ reg = regs[name]
+ return reg.offset
+
+ def get_counter():
+ # if the counter is <reg64> just a single "counter" value
+ # should be specified in the json, but for legacy separate
+ # hi/lo <reg32> pairs "counter_lo" and "counter_hi" should
+ # be specified
+ if "counter" in group:
+ counter = get_reg(group["counter"])
+ return [counter, counter+1]
+ counter_lo = get_reg(group["counter_lo"])
+ counter_hi = get_reg(group["counter_hi"])
+ return [counter_lo, counter_hi]
+
+ (counter_lo, counter_hi) = get_counter()
+ select = get_reg(group['select'])
+
+ select_offset = 0
+ if "select_offset" in group:
+ select_offset = int(group["select_offset"])
+ select = select + select_offset
+
+ slice_select_str = ""
+ if "slice_select" in group:
+ slice_select = group["slice_select"]
+ for reg in slice_select:
+ val = get_reg(reg) + select_offset
+ slice_select_str += "0x%04x, " % val
+
+ # TODO add support for things that need enable/clear regs
+
+ print(" { 0x%04x, {%s}, 0x%04x, 0x%04x }," % (select, slice_select_str, counter_lo, counter_hi))
+ print("};")
+
+ print()
+
+ print("const struct fd_perfcntr_group " + chip.lower() + "_perfcntr_groups[] = {")
+ for group in groups:
+ name = group['name']
+ name_low = name.lower()
+ pipe = 'NONE'
+ if 'pipe' in group:
+ pipe = group['pipe']
+
+ print(" GROUP(\"%s\", PIPE_%s, %s_counters, %s_countables)," % (name, pipe, name_low, name_low))
+
+ print("};")
+ print("const unsigned " + chip.lower() + "_num_perfcntr_groups = ARRAY_SIZE(" + chip.lower() + "_perfcntr_groups);")
+
+ print()
+ print("#endif /* %s */" % guard)
def dump_py_defines(args):
- p = Parser()
+ p = Parser()
- try:
- p.parse(args.rnn, args.xml, args.validate)
- except Error as e:
- print(e, file=sys.stderr)
- exit(1)
+ try:
+ p.parse(args.rnn, args.xml, args.validate)
+ except Error as e:
+ print(e, file=sys.stderr)
+ exit(1)
- file_name = os.path.splitext(os.path.basename(args.xml))[0]
+ file_name = os.path.splitext(os.path.basename(args.xml))[0]
- print("from enum import IntEnum")
- print("class %sRegs(IntEnum):" % file_name.upper())
+ print("from enum import IntEnum")
+ print("class %sRegs(IntEnum):" % file_name.upper())
- os.path.basename(args.xml)
+ os.path.basename(args.xml)
- p.dump_regs_py()
+ p.dump_regs_py()
def main():
- parser = argparse.ArgumentParser()
- parser.add_argument('--rnn', type=str, required=True)
- parser.add_argument('--xml', type=str, required=True)
- parser.add_argument('--validate', default=False, action='store_true')
- parser.add_argument('--no-validate', dest='validate', action='store_false')
+ parser = argparse.ArgumentParser()
+ parser.add_argument('--rnn', type=str, required=True)
+ parser.add_argument('--xml', type=str, required=True)
+ parser.add_argument('--validate', default=False, action='store_true')
+ parser.add_argument('--no-validate', dest='validate', action='store_false')
+
+ subparsers = parser.add_subparsers()
+ subparsers.required = True
- subparsers = parser.add_subparsers()
- subparsers.required = True
+ parser_c_defines = subparsers.add_parser('c-defines')
+ parser_c_defines.set_defaults(func=dump_c_defines)
- parser_c_defines = subparsers.add_parser('c-defines')
- parser_c_defines.set_defaults(func=dump_c_defines)
+ parser_c_pack_structs = subparsers.add_parser('c-pack-structs')
+ parser_c_pack_structs.set_defaults(func=dump_c_pack_structs)
- parser_c_pack_structs = subparsers.add_parser('c-pack-structs')
- parser_c_pack_structs.set_defaults(func=dump_c_pack_structs)
+ parser_perfcntrs = subparsers.add_parser('perfcntrs')
+ parser_perfcntrs.add_argument('--json', type=str, required=True)
+ parser_perfcntrs.set_defaults(func=dump_perfcntrs)
- parser_py_defines = subparsers.add_parser('py-defines')
- parser_py_defines.set_defaults(func=dump_py_defines)
+ parser_py_defines = subparsers.add_parser('py-defines')
+ parser_py_defines.set_defaults(func=dump_py_defines)
- args = parser.parse_args()
- args.func(args)
+ args = parser.parse_args()
+ args.func(args)
if __name__ == '__main__':
- main()
+ main()
--
2.54.0
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH v3 05/16] drm/msm/registers: Add perfcntr json
2026-05-04 19:06 [PATCH v3 00/16] drm/msm: Add PERFCNTR_CONFIG ioctl Rob Clark
` (2 preceding siblings ...)
2026-05-04 19:06 ` [PATCH v3 04/16] drm/msm/registers: Sync gen_header.py from mesa Rob Clark
@ 2026-05-04 19:06 ` Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 06/16] drm/msm: Add a6xx+ perfcntr tables Rob Clark
` (11 subsequent siblings)
15 siblings, 1 reply; 34+ messages in thread
From: Rob Clark @ 2026-05-04 19:06 UTC (permalink / raw)
To: dri-devel
Cc: linux-arm-msm, freedreno, Akhil P Oommen, Rob Clark,
Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul,
Marijn Suijten, David Airlie, Simona Vetter, Konrad Dybcio,
open list
Pull in perfcntr json and wire up generation of perfcntr tables.
Sync from https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/40522
Signed-off-by: Rob Clark <robin.clark@oss.qualcomm.com>
---
drivers/gpu/drm/msm/Makefile | 25 +-
drivers/gpu/drm/msm/msm_perfcntr.h | 48 ++++
.../msm/registers/adreno/a2xx_perfcntrs.json | 109 ++++++++
.../msm/registers/adreno/a5xx_perfcntrs.json | 128 ++++++++++
.../msm/registers/adreno/a6xx_perfcntrs.json | 105 ++++++++
.../msm/registers/adreno/a7xx_perfcntrs.json | 228 +++++++++++++++++
.../msm/registers/adreno/a8xx_perfcntrs.json | 240 ++++++++++++++++++
7 files changed, 882 insertions(+), 1 deletion(-)
create mode 100644 drivers/gpu/drm/msm/msm_perfcntr.h
create mode 100644 drivers/gpu/drm/msm/registers/adreno/a2xx_perfcntrs.json
create mode 100644 drivers/gpu/drm/msm/registers/adreno/a5xx_perfcntrs.json
create mode 100644 drivers/gpu/drm/msm/registers/adreno/a6xx_perfcntrs.json
create mode 100644 drivers/gpu/drm/msm/registers/adreno/a7xx_perfcntrs.json
create mode 100644 drivers/gpu/drm/msm/registers/adreno/a8xx_perfcntrs.json
diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile
index ce00cfb0a875..337634e7e247 100644
--- a/drivers/gpu/drm/msm/Makefile
+++ b/drivers/gpu/drm/msm/Makefile
@@ -176,6 +176,11 @@ quiet_cmd_headergen = GENHDR $@
cmd_headergen = mkdir -p $(obj)/generated && $(PYTHON3) $(src)/registers/gen_header.py \
$(headergen-opts) --rnn $(src)/registers --xml $< c-defines > $@
+# TODO how to do this for a2xx/a5xx which have different .xml arg?
+quiet_cmd_headergen_json = GENHDRJSN $@
+ cmd_headergen_json = mkdir -p $(obj)/generated && $(PYTHON3) $(src)/registers/gen_header.py \
+ $(headergen-opts) --rnn $(src)/registers --xml $(filter %.xml,$^) perfcntrs --json $< > $@
+
$(obj)/generated/%.xml.h: $(src)/registers/adreno/%.xml \
$(src)/registers/adreno/adreno_common.xml \
$(src)/registers/adreno/adreno_pm4.xml \
@@ -192,6 +197,24 @@ $(obj)/generated/%.xml.h: $(src)/registers/display/%.xml \
FORCE
$(call if_changed,headergen)
+ADRENO_PERFCNTRS =
+
+define adreno_perfcntrs
+ADRENO_PERFCNTRS += generated/$(1)_perfcntrs.json.c
+$$(obj)/generated/$(1)_perfcntrs.json.c: $$(src)/registers/adreno/$(1)_perfcntrs.json \
+ $$(src)/registers/adreno/$(2).xml \
+ FORCE
+ $$(call if_changed,headergen_json)
+endef
+
+$(eval $(call adreno_perfcntrs,a2xx,a2xx))
+$(eval $(call adreno_perfcntrs,a5xx,a5xx))
+$(eval $(call adreno_perfcntrs,a6xx,a6xx))
+$(eval $(call adreno_perfcntrs,a7xx,a6xx))
+$(eval $(call adreno_perfcntrs,a8xx,a6xx))
+
+adreno-y += $(ADRENO_PERFCNTRS:.c=.o)
+
ADRENO_HEADERS = \
generated/a2xx.xml.h \
generated/a3xx.xml.h \
@@ -226,4 +249,4 @@ DISPLAY_HEADERS = \
$(addprefix $(obj)/,$(adreno-y)): $(addprefix $(obj)/,$(ADRENO_HEADERS))
$(addprefix $(obj)/,$(msm-display-y)): $(addprefix $(obj)/,$(DISPLAY_HEADERS))
-targets += $(ADRENO_HEADERS) $(DISPLAY_HEADERS)
+targets += $(ADRENO_HEADERS) $(DISPLAY_HEADERS) $(ADRENO_PERFCNTRS)
diff --git a/drivers/gpu/drm/msm/msm_perfcntr.h b/drivers/gpu/drm/msm/msm_perfcntr.h
new file mode 100644
index 000000000000..305dcde15c5e
--- /dev/null
+++ b/drivers/gpu/drm/msm/msm_perfcntr.h
@@ -0,0 +1,48 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
+ */
+
+#ifndef __MSM_PERFCNTR_H__
+#define __MSM_PERFCNTR_H__
+
+#include "linux/array_size.h"
+
+#include "adreno_common.xml.h"
+
+/*
+ * This is a subset of the tables used by mesa. We don't need to
+ * enumerate the countables on the kernel side.
+ */
+
+/* Describes a single counter: */
+struct msm_perfcntr_counter {
+ /* offset of the SELect register to choose what to count: */
+ unsigned select_reg;
+ /* additional SEL regs to enable slice counters (gen8+) */
+ unsigned slice_select_regs[2];
+ /* offset of the lo/hi 32b to read current counter value: */
+ unsigned counter_reg_lo;
+ unsigned counter_reg_hi;
+ /* TODO some counters have enable/clear registers */
+};
+
+/* Describes an entire counter group: */
+struct msm_perfcntr_group {
+ const char *name;
+ enum adreno_pipe pipe;
+ unsigned num_counters;
+ const struct msm_perfcntr_counter *counters;
+};
+
+#define GROUP(_name, _pipe, _counters, _countables) { \
+ .name = _name, \
+ .pipe = _pipe, \
+ .num_counters = ARRAY_SIZE(_counters), \
+ .counters = _counters, \
+ }
+
+#define fd_perfcntr_counter msm_perfcntr_counter
+#define fd_perfcntr_group msm_perfcntr_group
+
+#endif /* __MSM_PERFCNTR_H__ */
diff --git a/drivers/gpu/drm/msm/registers/adreno/a2xx_perfcntrs.json b/drivers/gpu/drm/msm/registers/adreno/a2xx_perfcntrs.json
new file mode 100644
index 000000000000..8095345ffd8e
--- /dev/null
+++ b/drivers/gpu/drm/msm/registers/adreno/a2xx_perfcntrs.json
@@ -0,0 +1,109 @@
+{
+ "chip": "A2XX",
+ "groups": [
+ {
+ "name": "CP",
+ "num": 1,
+ "select": "CP_PERFCOUNTER_SELECT",
+ "counter_lo": "CP_PERFCOUNTER_LO",
+ "counter_hi": "CP_PERFCOUNTER_HI",
+ "countable_type": "a2xx_cp_perfcount_sel"
+ },
+ {
+ "name": "PA_SU",
+ "num": 4,
+ "select": "PA_SU_PERFCOUNTER{}_SELECT",
+ "counter_lo": "PA_SU_PERFCOUNTER{}_LOW",
+ "counter_hi": "PA_SU_PERFCOUNTER{}_HI",
+ "countable_type": "a2xx_su_perfcnt_select"
+ },
+ {
+ "name": "PA_SC",
+ "num": 1,
+ "select": "PA_SC_PERFCOUNTER{}_SELECT",
+ "counter_lo": "PA_SC_PERFCOUNTER{}_LOW",
+ "counter_hi": "PA_SC_PERFCOUNTER{}_HI",
+ "countable_type": "a2xx_sc_perfcnt_select"
+ },
+ {
+ "name": "VGT",
+ "num": 4,
+ "select": "VGT_PERFCOUNTER{}_SELECT",
+ "counter_lo": "VGT_PERFCOUNTER{}_LOW",
+ "counter_hi": "VGT_PERFCOUNTER{}_HI",
+ "countable_type": "a2xx_vgt_perfcount_select"
+ },
+ {
+ "name": "TCR",
+ "num": 2,
+ "select": "TCR_PERFCOUNTER{}_SELECT",
+ "counter_lo": "TCR_PERFCOUNTER{}_LOW",
+ "counter_hi": "TCR_PERFCOUNTER{}_HI",
+ "countable_type": "a2xx_tcr_perfcount_select"
+ },
+ {
+ "name": "TP0",
+ "num": 2,
+ "select": "TP0_PERFCOUNTER{}_SELECT",
+ "counter_lo": "TP0_PERFCOUNTER{}_LOW",
+ "counter_hi": "TP0_PERFCOUNTER{}_HI",
+ "countable_type": "a2xx_tp_perfcount_select"
+ },
+ {
+ "name": "TCM",
+ "num": 2,
+ "select": "TCM_PERFCOUNTER{}_SELECT",
+ "counter_lo": "TCM_PERFCOUNTER{}_LOW",
+ "counter_hi": "TCM_PERFCOUNTER{}_HI",
+ "countable_type": "a2xx_tcm_perfcount_select"
+ },
+ {
+ "name": "TCF",
+ "num": 12,
+ "select": "TCF_PERFCOUNTER{}_SELECT",
+ "counter_lo": "TCF_PERFCOUNTER{}_LOW",
+ "counter_hi": "TCF_PERFCOUNTER{}_HI",
+ "countable_type": "a2xx_tcf_perfcount_select"
+ },
+ {
+ "name": "SQ",
+ "num": 4,
+ "select": "SQ_PERFCOUNTER{}_SELECT",
+ "counter_lo": "SQ_PERFCOUNTER{}_LOW",
+ "counter_hi": "SQ_PERFCOUNTER{}_HI",
+ "countable_type": "a2xx_sq_perfcnt_select"
+ },
+ {
+ "name": "SX",
+ "num": 1,
+ "select": "SX_PERFCOUNTER{}_SELECT",
+ "counter_lo": "SX_PERFCOUNTER{}_LOW",
+ "counter_hi": "SX_PERFCOUNTER{}_HI",
+ "countable_type": "a2xx_sx_perfcnt_select"
+ },
+ {
+ "name": "MH",
+ "num": 2,
+ "select": "MH_PERFCOUNTER{}_SELECT",
+ "counter_lo": "MH_PERFCOUNTER{}_LOW",
+ "counter_hi": "MH_PERFCOUNTER{}_HI",
+ "countable_type": "a2xx_mh_perfcnt_select"
+ },
+ {
+ "name": "RBBM",
+ "num": 2,
+ "select": "RBBM_PERFCOUNTER{}_SELECT",
+ "counter_lo": "RBBM_PERFCOUNTER{}_LO",
+ "counter_hi": "RBBM_PERFCOUNTER{}_HI",
+ "countable_type": "a2xx_rbbm_perfcount1_sel"
+ },
+ {
+ "name": "RB",
+ "num": 4,
+ "select": "RB_PERFCOUNTER{}_SELECT",
+ "counter_lo": "RB_PERFCOUNTER{}_LOW",
+ "counter_hi": "RB_PERFCOUNTER{}_HI",
+ "countable_type": "a2xx_rb_perfcnt_select"
+ }
+ ]
+}
diff --git a/drivers/gpu/drm/msm/registers/adreno/a5xx_perfcntrs.json b/drivers/gpu/drm/msm/registers/adreno/a5xx_perfcntrs.json
new file mode 100644
index 000000000000..d95503543f94
--- /dev/null
+++ b/drivers/gpu/drm/msm/registers/adreno/a5xx_perfcntrs.json
@@ -0,0 +1,128 @@
+{
+ "chip": "A5XX",
+ "groups": [
+ {
+ "name": "CP",
+ "num": 8,
+ "reserved": [ 0 ],
+ "select": "CP_PERFCTR_CP_SEL_{}",
+ "counter_lo": "RBBM_PERFCTR_CP_{}_LO",
+ "counter_hi": "RBBM_PERFCTR_CP_{}_HI",
+ "countable_type": "a5xx_cp_perfcounter_select"
+ },
+ {
+ "name": "CCU",
+ "num": 4,
+ "select": "RB_PERFCTR_CCU_SEL_{}",
+ "counter_lo": "RBBM_PERFCTR_CCU_{}_LO",
+ "counter_hi": "RBBM_PERFCTR_CCU_{}_HI",
+ "countable_type": "a5xx_ccu_perfcounter_select"
+ },
+ {
+ "name": "TSE",
+ "num": 4,
+ "select": "GRAS_PERFCTR_TSE_SEL_{}",
+ "counter_lo": "RBBM_PERFCTR_TSE_{}_LO",
+ "counter_hi": "RBBM_PERFCTR_TSE_{}_HI",
+ "countable_type": "a5xx_tse_perfcounter_select"
+ },
+ {
+ "name": "RAS",
+ "num": 4,
+ "select": "GRAS_PERFCTR_RAS_SEL_{}",
+ "counter_lo": "RBBM_PERFCTR_RAS_{}_LO",
+ "counter_hi": "RBBM_PERFCTR_RAS_{}_HI",
+ "countable_type": "a5xx_ras_perfcounter_select"
+ },
+ {
+ "name": "LRZ",
+ "num": 4,
+ "select": "GRAS_PERFCTR_LRZ_SEL_{}",
+ "counter_lo": "RBBM_PERFCTR_LRZ_{}_LO",
+ "counter_hi": "RBBM_PERFCTR_LRZ_{}_HI",
+ "countable_type": "a5xx_lrz_perfcounter_select"
+ },
+ {
+ "name": "HLSQ",
+ "num": 8,
+ "select": "HLSQ_PERFCTR_HLSQ_SEL_{}",
+ "counter_lo": "RBBM_PERFCTR_HLSQ_{}_LO",
+ "counter_hi": "RBBM_PERFCTR_HLSQ_{}_HI",
+ "countable_type": "a5xx_hlsq_perfcounter_select"
+ },
+ {
+ "name": "PC",
+ "num": 8,
+ "select": "PC_PERFCTR_PC_SEL_{}",
+ "counter_lo": "RBBM_PERFCTR_PC_{}_LO",
+ "counter_hi": "RBBM_PERFCTR_PC_{}_HI",
+ "countable_type": "a5xx_pc_perfcounter_select"
+ },
+ {
+ "name": "RB",
+ "num": 8,
+ "select": "RB_PERFCTR_RB_SEL_{}",
+ "counter_lo": "RBBM_PERFCTR_RB_{}_LO",
+ "counter_hi": "RBBM_PERFCTR_RB_{}_HI",
+ "countable_type": "a5xx_rb_perfcounter_select"
+ },
+ {
+ "name": "RBBM",
+ "num": 4,
+ "reserved": [ 0 ],
+ "select": "RBBM_PERFCTR_RBBM_SEL_{}",
+ "counter_lo": "RBBM_PERFCTR_RBBM_{}_LO",
+ "counter_hi": "RBBM_PERFCTR_RBBM_{}_HI",
+ "countable_type": "a5xx_rbbm_perfcounter_select"
+ },
+ {
+ "name": "SP",
+ "num": 12,
+ "reserved": [ 0 ],
+ "select": "SP_PERFCTR_SP_SEL_{}",
+ "counter_lo": "RBBM_PERFCTR_SP_{}_LO",
+ "counter_hi": "RBBM_PERFCTR_SP_{}_HI",
+ "countable_type": "a5xx_sp_perfcounter_select"
+ },
+ {
+ "name": "TP",
+ "num": 8,
+ "select": "TPL1_PERFCTR_TP_SEL_{}",
+ "counter_lo": "RBBM_PERFCTR_TP_{}_LO",
+ "counter_hi": "RBBM_PERFCTR_TP_{}_HI",
+ "countable_type": "a5xx_tp_perfcounter_select"
+ },
+ {
+ "name": "UCHE",
+ "num": 8,
+ "select": "UCHE_PERFCTR_UCHE_SEL_{}",
+ "counter_lo": "RBBM_PERFCTR_UCHE_{}_LO",
+ "counter_hi": "RBBM_PERFCTR_UCHE_{}_HI",
+ "countable_type": "a5xx_uche_perfcounter_select"
+ },
+ {
+ "name": "VFD",
+ "num": 8,
+ "select": "VFD_PERFCTR_VFD_SEL_{}",
+ "counter_lo": "RBBM_PERFCTR_VFD_{}_LO",
+ "counter_hi": "RBBM_PERFCTR_VFD_{}_HI",
+ "countable_type": "a5xx_vfd_perfcounter_select"
+ },
+ {
+ "name": "VPC",
+ "num": 4,
+ "select": "VPC_PERFCTR_VPC_SEL_{}",
+ "counter_lo": "RBBM_PERFCTR_VPC_{}_LO",
+ "counter_hi": "RBBM_PERFCTR_VPC_{}_HI",
+ "countable_type": "a5xx_vpc_perfcounter_select"
+ },
+ {
+ "name": "VSC",
+ "num": 2,
+ "select": "VSC_PERFCTR_VSC_SEL_{}",
+ "counter_lo": "RBBM_PERFCTR_VSC_{}_LO",
+ "counter_hi": "RBBM_PERFCTR_VSC_{}_HI",
+ "countable_type": "a5xx_vsc_perfcounter_select"
+ }
+ ]
+}
diff --git a/drivers/gpu/drm/msm/registers/adreno/a6xx_perfcntrs.json b/drivers/gpu/drm/msm/registers/adreno/a6xx_perfcntrs.json
new file mode 100644
index 000000000000..8bb31820479e
--- /dev/null
+++ b/drivers/gpu/drm/msm/registers/adreno/a6xx_perfcntrs.json
@@ -0,0 +1,105 @@
+{
+ "chip": "A6XX",
+ "groups": [
+ {
+ "name": "CP",
+ "num": 14,
+ "reserved": [ 0 ],
+ "select": "CP_PERFCTR_CP_SEL",
+ "counter": "RBBM_PERFCTR_CP",
+ "countable_type": "a6xx_cp_perfcounter_select"
+ },
+ {
+ "name": "CCU",
+ "num": 5,
+ "select": "RB_PERFCTR_CCU_SEL",
+ "counter": "RBBM_PERFCTR_CCU",
+ "countable_type": "a6xx_ccu_perfcounter_select"
+ },
+ {
+ "name": "TSE",
+ "num": 4,
+ "select": "GRAS_PERFCTR_TSE_SEL",
+ "counter": "RBBM_PERFCTR_TSE",
+ "countable_type": "a6xx_tse_perfcounter_select"
+ },
+ {
+ "name": "RAS",
+ "num": 4,
+ "select": "GRAS_PERFCTR_RAS_SEL",
+ "counter": "RBBM_PERFCTR_RAS",
+ "countable_type": "a6xx_ras_perfcounter_select"
+ },
+ {
+ "name": "LRZ",
+ "num": 4,
+ "select": "GRAS_PERFCTR_LRZ_SEL",
+ "counter": "RBBM_PERFCTR_LRZ",
+ "countable_type": "a6xx_lrz_perfcounter_select"
+ },
+ {
+ "name": "HLSQ",
+ "num": 6,
+ "select": "HLSQ_PERFCTR_HLSQ_SEL",
+ "counter": "RBBM_PERFCTR_HLSQ",
+ "countable_type": "a6xx_hlsq_perfcounter_select"
+ },
+ {
+ "name": "PC",
+ "num": 8,
+ "select": "PC_PERFCTR_PC_SEL",
+ "counter": "RBBM_PERFCTR_PC",
+ "countable_type": "a6xx_pc_perfcounter_select"
+ },
+ {
+ "name": "RB",
+ "num": 8,
+ "select": "RB_PERFCTR_RB_SEL",
+ "counter": "RBBM_PERFCTR_RB",
+ "countable_type": "a6xx_rb_perfcounter_select"
+ },
+ {
+ "name": "SP",
+ "num": 24,
+ "reserved": [ 0 ],
+ "select": "SP_PERFCTR_SP_SEL",
+ "counter": "RBBM_PERFCTR_SP",
+ "countable_type": "a6xx_sp_perfcounter_select"
+ },
+ {
+ "name": "TP",
+ "num": 12,
+ "select": "TPL1_PERFCTR_TP_SEL",
+ "counter": "RBBM_PERFCTR_TP",
+ "countable_type": "a6xx_tp_perfcounter_select"
+ },
+ {
+ "name": "UCHE",
+ "num": 12,
+ "select": "UCHE_PERFCTR_UCHE_SEL",
+ "counter": "RBBM_PERFCTR_UCHE",
+ "countable_type": "a6xx_uche_perfcounter_select"
+ },
+ {
+ "name": "VFD",
+ "num": 8,
+ "select": "VFD_PERFCTR_VFD_SEL",
+ "counter": "RBBM_PERFCTR_VFD",
+ "countable_type": "a6xx_vfd_perfcounter_select"
+ },
+ {
+ "name": "VPC",
+ "num": 6,
+ "select": "VPC_PERFCTR_VPC_SEL",
+ "counter": "RBBM_PERFCTR_VPC",
+ "countable_type": "a6xx_vpc_perfcounter_select"
+ },
+ {
+ "name": "VSC",
+ "num": 2,
+ "select": "VSC_PERFCTR_VSC_SEL",
+ "counter": "RBBM_PERFCTR_VSC",
+ "countable_type": "a6xx_vsc_perfcounter_select"
+ }
+ ]
+}
diff --git a/drivers/gpu/drm/msm/registers/adreno/a7xx_perfcntrs.json b/drivers/gpu/drm/msm/registers/adreno/a7xx_perfcntrs.json
new file mode 100644
index 000000000000..e60aab1862ec
--- /dev/null
+++ b/drivers/gpu/drm/msm/registers/adreno/a7xx_perfcntrs.json
@@ -0,0 +1,228 @@
+{
+ "chip": "A7XX",
+ "groups": [
+ {
+ "name": "CP",
+ "num": 14,
+ "reserved": [ 0 ],
+ "select": "CP_PERFCTR_CP_SEL",
+ "counter": "RBBM_PERFCTR_CP",
+ "countable_type": "a7xx_cp_perfcounter_select"
+ },
+ {
+ "name": "RBBM",
+ "num": 4,
+ "select": "RBBM_PERFCTR_RBBM_SEL",
+ "counter": "RBBM_PERFCTR_RBBM",
+ "countable_type": "a7xx_rbbm_perfcounter_select"
+ },
+ {
+ "name": "PC",
+ "pipe": "BR",
+ "num": 8,
+ "select": "PC_PERFCTR_PC_SEL",
+ "counter": "RBBM_PERFCTR_PC",
+ "countable_type": "a7xx_pc_perfcounter_select"
+ },
+ {
+ "name": "VFD",
+ "pipe": "BR",
+ "num": 8,
+ "select": "VFD_PERFCTR_VFD_SEL",
+ "counter": "RBBM_PERFCTR_VFD",
+ "countable_type": "a7xx_vfd_perfcounter_select"
+ },
+ {
+ "name": "HLSQ",
+ "pipe": "BR",
+ "num": 6,
+ "select": "SP_PERFCTR_HLSQ_SEL",
+ "counter": "RBBM_PERFCTR_HLSQ",
+ "countable_type": "a7xx_hlsq_perfcounter_select"
+ },
+ {
+ "name": "VPC",
+ "pipe": "BR",
+ "num": 6,
+ "select": "VPC_PERFCTR_VPC_SEL",
+ "counter": "RBBM_PERFCTR_VPC",
+ "countable_type": "a7xx_vpc_perfcounter_select"
+ },
+ {
+ "name": "TSE",
+ "pipe": "BR",
+ "num": 4,
+ "select": "GRAS_PERFCTR_TSE_SEL",
+ "counter": "RBBM_PERFCTR_TSE",
+ "countable_type": "a7xx_tse_perfcounter_select"
+ },
+ {
+ "name": "RAS",
+ "pipe": "BR",
+ "num": 4,
+ "select": "GRAS_PERFCTR_RAS_SEL",
+ "counter": "RBBM_PERFCTR_RAS",
+ "countable_type": "a7xx_ras_perfcounter_select"
+ },
+ {
+ "name": "UCHE",
+ "num": 12,
+ "select": "UCHE_PERFCTR_UCHE_SEL",
+ "counter": "RBBM_PERFCTR_UCHE",
+ "countable_type": "a7xx_uche_perfcounter_select"
+ },
+ {
+ "name": "TP",
+ "pipe": "BR",
+ "num": 12,
+ "select": "TPL1_PERFCTR_TP_SEL",
+ "counter": "RBBM_PERFCTR_TP",
+ "countable_type": "a7xx_tp_perfcounter_select"
+ },
+ {
+ "name": "SP",
+ "pipe": "BR",
+ "num": 24,
+ "select": "SP_PERFCTR_SP_SEL",
+ "counter": "RBBM_PERFCTR_SP",
+ "countable_type": "a7xx_sp_perfcounter_select"
+ },
+ {
+ "name": "RB",
+ "num": 8,
+ "select": "RB_PERFCTR_RB_SEL",
+ "counter": "RBBM_PERFCTR_RB",
+ "countable_type": "a7xx_rb_perfcounter_select"
+ },
+ {
+ "name": "VSC",
+ "num": 2,
+ "select": "VSC_PERFCTR_VSC_SEL",
+ "counter": "RBBM_PERFCTR_VSC",
+ "countable_type": "a7xx_vsc_perfcounter_select"
+ },
+ {
+ "name": "CCU",
+ "num": 5,
+ "select": "RB_PERFCTR_CCU_SEL",
+ "counter": "RBBM_PERFCTR_CCU",
+ "countable_type": "a7xx_ccu_perfcounter_select"
+ },
+ {
+ "name": "LRZ",
+ "pipe": "BR",
+ "num": 4,
+ "select": "GRAS_PERFCTR_LRZ_SEL",
+ "counter": "RBBM_PERFCTR_LRZ",
+ "countable_type": "a7xx_lrz_perfcounter_select"
+ },
+ {
+ "name": "CMP",
+ "num": 4,
+ "select": "RB_PERFCTR_CMP_SEL",
+ "counter": "RBBM_PERFCTR_CMP",
+ "countable_type": "a7xx_cmp_perfcounter_select"
+ },
+ {
+ "name": "UFC",
+ "pipe": "BR",
+ "num": 4,
+ "select": "RB_PERFCTR_UFC_SEL",
+ "counter": "RBBM_PERFCTR_UFC",
+ "countable_type": "a7xx_ufc_perfcounter_select"
+ },
+ {
+ "name": "BV_CP",
+ "num": 7,
+ "select": "CP_BV_PERFCTR_CP_SEL",
+ "counter": "RBBM_PERFCTR2_CP",
+ "countable_type": "a7xx_cp_perfcounter_select"
+ },
+ {
+ "name": "BV_PC",
+ "pipe": "BV",
+ "num": 8,
+ "select_offset": 8,
+ "select": "PC_PERFCTR_PC_SEL",
+ "counter": "RBBM_PERFCTR_BV_PC",
+ "countable_type": "a7xx_pc_perfcounter_select"
+ },
+ {
+ "name": "BV_VFD",
+ "pipe": "BV",
+ "num": 8,
+ "select_offset": 8,
+ "select": "VFD_PERFCTR_VFD_SEL",
+ "counter": "RBBM_PERFCTR_BV_VFD",
+ "countable_type": "a7xx_vfd_perfcounter_select"
+ },
+ {
+ "name": "BV_VPC",
+ "pipe": "BV",
+ "num": 6,
+ "select_offset": 6,
+ "select": "VPC_PERFCTR_VPC_SEL",
+ "counter": "RBBM_PERFCTR_BV_VPC",
+ "countable_type": "a7xx_vpc_perfcounter_select"
+ },
+ {
+ "name": "BV_TP",
+ "pipe": "BV",
+ "num": 6,
+ "select_offset": 12,
+ "select": "TPL1_PERFCTR_TP_SEL",
+ "counter": "RBBM_PERFCTR2_TP",
+ "countable_type": "a7xx_tp_perfcounter_select"
+ },
+ {
+ "name": "BV_SP",
+ "pipe": "BV",
+ "num": 12,
+ "select_offset": 24,
+ "select": "SP_PERFCTR_SP_SEL",
+ "counter": "RBBM_PERFCTR2_SP",
+ "countable_type": "a7xx_sp_perfcounter_select"
+ },
+ {
+ "name": "BV_UFC",
+ "pipe": "BV",
+ "num": 2,
+ "select_offset": 4,
+ "select": "RB_PERFCTR_UFC_SEL",
+ "counter": "RBBM_PERFCTR2_UFC",
+ "countable_type": "a7xx_ufc_perfcounter_select"
+ },
+ {
+ "name": "BV_TSE",
+ "pipe": "BV",
+ "num": 4,
+ "select": "GRAS_PERFCTR_TSE_SEL",
+ "counter": "RBBM_PERFCTR_BV_TSE",
+ "countable_type": "a7xx_tse_perfcounter_select"
+ },
+ {
+ "name": "BV_RAS",
+ "pipe": "BV",
+ "num": 4,
+ "select": "GRAS_PERFCTR_RAS_SEL",
+ "counter": "RBBM_PERFCTR_BV_RAS",
+ "countable_type": "a7xx_ras_perfcounter_select"
+ },
+ {
+ "name": "BV_LRZ",
+ "pipe": "BV",
+ "num": 4,
+ "select": "GRAS_PERFCTR_LRZ_SEL",
+ "counter": "RBBM_PERFCTR_BV_LRZ",
+ "countable_type": "a7xx_lrz_perfcounter_select"
+ },
+ {
+ "name": "BV_HLSQ",
+ "pipe": "BV",
+ "num": 6,
+ "select": "SP_PERFCTR_HLSQ_SEL",
+ "counter": "RBBM_PERFCTR2_HLSQ",
+ "countable_type": "a7xx_hlsq_perfcounter_select"
+ }
+ ]
+}
diff --git a/drivers/gpu/drm/msm/registers/adreno/a8xx_perfcntrs.json b/drivers/gpu/drm/msm/registers/adreno/a8xx_perfcntrs.json
new file mode 100644
index 000000000000..503b113df397
--- /dev/null
+++ b/drivers/gpu/drm/msm/registers/adreno/a8xx_perfcntrs.json
@@ -0,0 +1,240 @@
+{
+ "chip": "A8XX",
+ "groups": [
+ {
+ "name": "CP",
+ "num": 14,
+ "reserved": [ 0 ],
+ "select": "CP_PERFCTR_CP_SEL",
+ "counter": "RBBM_PERFCTR_CP",
+ "countable_type": "a8xx_cp_perfcounter_select"
+ },
+ {
+ "name": "RBBM",
+ "num": 4,
+ "select": "RBBM_PERFCTR_RBBM_SEL",
+ "slice_select": [ "RBBM_SLICE_PERFCTR_RBBM_SEL" ],
+ "counter": "RBBM_PERFCTR_RBBM",
+ "countable_type": "a8xx_rbbm_perfcounter_select"
+ },
+ {
+ "name": "PC",
+ "pipe": "BR",
+ "num": 8,
+ "select": "PC_PERFCTR_PC_SEL",
+ "slice_select": [ "PC_SLICE_PERFCTR_PC_SEL" ],
+ "counter": "RBBM_PERFCTR_PC",
+ "countable_type": "a8xx_pc_perfcounter_select"
+ },
+ {
+ "name": "VFD",
+ "pipe": "BR",
+ "num": 8,
+ "select": "VFD_PERFCTR_VFD_SEL",
+ "counter": "RBBM_PERFCTR_VFD",
+ "countable_type": "a8xx_vfd_perfcounter_select"
+ },
+ {
+ "name": "HLSQ",
+ "pipe": "BR",
+ "num": 6,
+ "select": "SP_PERFCTR_HLSQ_SEL",
+ "slice_select": [ "SP_PERFCTR_HLSQ_SEL_2" ],
+ "counter": "RBBM_PERFCTR_HLSQ",
+ "countable_type": "a8xx_hlsq_perfcounter_select"
+ },
+ {
+ "name": "VPC",
+ "pipe": "BR",
+ "num": 6,
+ "select": "VPC_PERFCTR_VPC_SEL",
+ "slice_select": [ "VPC_PERFCTR_VPC_SEL_1", "VPC_PERFCTR_VPC_SEL_2" ],
+ "counter": "RBBM_PERFCTR_VPC",
+ "countable_type": "a8xx_vpc_perfcounter_select"
+ },
+ {
+ "name": "TSE",
+ "pipe": "BR",
+ "num": 4,
+ "select": "GRAS_PERFCTR_TSE_SEL",
+ "slice_select": [ "GRAS_PERFCTR_TSEFE_SEL" ],
+ "counter": "RBBM_PERFCTR_TSE",
+ "countable_type": "a8xx_tse_perfcounter_select"
+ },
+ {
+ "name": "RAS",
+ "pipe": "BR",
+ "num": 4,
+ "select": "GRAS_PERFCTR_RAS_SEL",
+ "counter": "RBBM_PERFCTR_RAS",
+ "countable_type": "a8xx_ras_perfcounter_select"
+ },
+ {
+ "name": "UCHE",
+ "num": 12,
+ "select": "UCHE_PERFCTR_UCHE_SEL",
+ "counter": "RBBM_PERFCTR_UCHE",
+ "countable_type": "a8xx_uche_perfcounter_select"
+ },
+ {
+ "name": "TP",
+ "pipe": "BR",
+ "num": 12,
+ "select": "TPL1_PERFCTR_TP_SEL",
+ "counter": "RBBM_PERFCTR_TP",
+ "countable_type": "a8xx_tp_perfcounter_select"
+ },
+ {
+ "name": "SP",
+ "pipe": "BR",
+ "num": 24,
+ "select": "SP_PERFCTR_SP_SEL",
+ "counter": "RBBM_PERFCTR_SP",
+ "countable_type": "a8xx_sp_perfcounter_select"
+ },
+ {
+ "name": "RB",
+ "pipe": "BR",
+ "num": 8,
+ "select": "RB_PERFCTR_RB_SEL",
+ "counter": "RBBM_PERFCTR_RB",
+ "countable_type": "a8xx_rb_perfcounter_select"
+ },
+ {
+ "name": "VSC",
+ "num": 2,
+ "select": "VSC_PERFCTR_VSC_SEL",
+ "counter": "RBBM_PERFCTR_VSC",
+ "countable_type": "a8xx_vsc_perfcounter_select"
+ },
+ {
+ "name": "CCU",
+ "pipe": "BR",
+ "num": 5,
+ "select": "RB_PERFCTR_CCU_SEL",
+ "counter": "RBBM_PERFCTR_CCU",
+ "countable_type": "a8xx_ccu_perfcounter_select"
+ },
+ {
+ "name": "LRZ",
+ "pipe": "BR",
+ "num": 4,
+ "select": "GRAS_PERFCTR_LRZ_SEL",
+ "counter": "RBBM_PERFCTR_LRZ",
+ "countable_type": "a8xx_lrz_perfcounter_select"
+ },
+ {
+ "name": "CMP",
+ "num": 4,
+ "select": "RB_PERFCTR_CMP_SEL",
+ "counter": "RBBM_PERFCTR_CMP",
+ "countable_type": "a8xx_cmp_perfcounter_select"
+ },
+ {
+ "name": "UFC",
+ "pipe": "BR",
+ "num": 4,
+ "select": "RB_PERFCTR_UFC_SEL",
+ "counter": "RBBM_PERFCTR_UFC",
+ "countable_type": "a8xx_ufc_perfcounter_select"
+ },
+ {
+ "name": "BV_CP",
+ "num": 7,
+ "select_offset": 14,
+ "select": "CP_PERFCTR_CP_SEL",
+ "counter": "RBBM_PERFCTR2_CP",
+ "countable_type": "a8xx_cp_perfcounter_select"
+ },
+ {
+ "name": "BV_PC",
+ "pipe": "BV",
+ "num": 8,
+ "select_offset": 8,
+ "select": "PC_PERFCTR_PC_SEL",
+ "slice_select": [ "PC_SLICE_PERFCTR_PC_SEL" ],
+ "counter": "RBBM_PERFCTR_BV_PC",
+ "countable_type": "a8xx_pc_perfcounter_select"
+ },
+ {
+ "name": "BV_VFD",
+ "pipe": "BV",
+ "num": 8,
+ "select_offset": 8,
+ "select": "VFD_PERFCTR_VFD_SEL",
+ "counter": "RBBM_PERFCTR_BV_VFD",
+ "countable_type": "a8xx_vfd_perfcounter_select"
+ },
+ {
+ "name": "BV_VPC",
+ "pipe": "BV",
+ "num": 6,
+ "select_offset": 6,
+ "select": "VPC_PERFCTR_VPC_SEL",
+ "slice_select": [ "VPC_PERFCTR_VPC_SEL_1", "VPC_PERFCTR_VPC_SEL_2" ],
+ "counter": "RBBM_PERFCTR_BV_VPC",
+ "countable_type": "a8xx_vpc_perfcounter_select"
+ },
+ {
+ "name": "BV_TP",
+ "pipe": "BV",
+ "num": 8,
+ "select_offset": 12,
+ "select": "TPL1_PERFCTR_TP_SEL",
+ "counter": "RBBM_PERFCTR2_TP",
+ "countable_type": "a8xx_tp_perfcounter_select"
+ },
+ {
+ "name": "BV_SP",
+ "pipe": "BV",
+ "num": 12,
+ "select_offset": 24,
+ "select": "SP_PERFCTR_SP_SEL",
+ "counter": "RBBM_PERFCTR2_SP",
+ "countable_type": "a8xx_sp_perfcounter_select"
+ },
+ {
+ "name": "BV_UFC",
+ "pipe": "BV",
+ "num": 2,
+ "select_offset": 4,
+ "select": "RB_PERFCTR_UFC_SEL",
+ "counter": "RBBM_PERFCTR2_UFC",
+ "countable_type": "a8xx_ufc_perfcounter_select"
+ },
+ {
+ "name": "BV_TSE",
+ "pipe": "BV",
+ "num": 4,
+ "select": "GRAS_PERFCTR_TSE_SEL",
+ "slice_select": [ "GRAS_PERFCTR_TSEFE_SEL" ],
+ "counter": "RBBM_PERFCTR_BV_TSE",
+ "countable_type": "a8xx_tse_perfcounter_select"
+ },
+ {
+ "name": "BV_RAS",
+ "pipe": "BV",
+ "num": 4,
+ "select": "GRAS_PERFCTR_RAS_SEL",
+ "counter": "RBBM_PERFCTR_BV_RAS",
+ "countable_type": "a8xx_ras_perfcounter_select"
+ },
+ {
+ "name": "BV_LRZ",
+ "pipe": "BV",
+ "num": 4,
+ "select": "GRAS_PERFCTR_LRZ_SEL",
+ "counter": "RBBM_PERFCTR_BV_LRZ",
+ "countable_type": "a8xx_lrz_perfcounter_select"
+ },
+ {
+ "name": "BV_HLSQ",
+ "pipe": "BV",
+ "num": 6,
+ "select": "SP_PERFCTR_HLSQ_SEL",
+ "slice_select": [ "SP_PERFCTR_HLSQ_SEL_2" ],
+ "counter": "RBBM_PERFCTR2_HLSQ",
+ "countable_type": "a8xx_hlsq_perfcounter_select"
+ }
+ ]
+}
--
2.54.0
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH v3 06/16] drm/msm: Add a6xx+ perfcntr tables
2026-05-04 19:06 [PATCH v3 00/16] drm/msm: Add PERFCNTR_CONFIG ioctl Rob Clark
` (3 preceding siblings ...)
2026-05-04 19:06 ` [PATCH v3 05/16] drm/msm/registers: Add perfcntr json Rob Clark
@ 2026-05-04 19:06 ` Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 07/16] drm/msm: Add sysprof accessors Rob Clark
` (10 subsequent siblings)
15 siblings, 1 reply; 34+ messages in thread
From: Rob Clark @ 2026-05-04 19:06 UTC (permalink / raw)
To: dri-devel
Cc: linux-arm-msm, freedreno, Akhil P Oommen, Rob Clark, Sean Paul,
Konrad Dybcio, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang,
Marijn Suijten, David Airlie, Simona Vetter, open list
Wire up the generated perfcntr tables for a6xx+. The PERFCNTR_CONFIG
ioctl will use this information to assign counters.
Signed-off-by: Rob Clark <robin.clark@oss.qualcomm.com>
---
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 15 +++++++++++++++
drivers/gpu/drm/msm/msm_gpu.h | 4 ++++
drivers/gpu/drm/msm/msm_perfcntr.h | 9 +++++++++
3 files changed, 28 insertions(+)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index e578417a4949..727281fbef36 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -5,6 +5,7 @@
#include "msm_gem.h"
#include "msm_mmu.h"
#include "msm_gpu_trace.h"
+#include "msm_perfcntr.h"
#include "a6xx_gpu.h"
#include "a6xx_gmu.xml.h"
@@ -2637,6 +2638,20 @@ static struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
adreno_gpu = &a6xx_gpu->base;
gpu = &adreno_gpu->base;
+ if ((ADRENO_6XX_GEN1 <= config->info->family) &&
+ (config->info->family <= ADRENO_6XX_GEN4)) {
+ gpu->perfcntr_groups = a6xx_perfcntr_groups;
+ gpu->num_perfcntr_groups = a6xx_num_perfcntr_groups;
+ } else if ((ADRENO_7XX_GEN1 <= config->info->family) &&
+ (config->info->family <= ADRENO_7XX_GEN3)) {
+ gpu->perfcntr_groups = a7xx_perfcntr_groups;
+ gpu->num_perfcntr_groups = a7xx_num_perfcntr_groups;
+ } else if ((ADRENO_8XX_GEN1 <= config->info->family) &&
+ (config->info->family <= ADRENO_8XX_GEN2)) {
+ gpu->perfcntr_groups = a8xx_perfcntr_groups;
+ gpu->num_perfcntr_groups = a8xx_num_perfcntr_groups;
+ }
+
mutex_init(&a6xx_gpu->gmu.lock);
spin_lock_init(&a6xx_gpu->aperture_lock);
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 78e1478669be..8c08dc065372 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -24,6 +24,7 @@ struct msm_gem_submit;
struct msm_gem_vm_log_entry;
struct msm_gpu_state;
struct msm_context;
+struct msm_perfcntr_group;
struct msm_gpu_config {
const char *ioname;
@@ -262,6 +263,9 @@ struct msm_gpu {
bool allow_relocs;
struct thermal_cooling_device *cooling;
+
+ const struct msm_perfcntr_group *perfcntr_groups;
+ unsigned num_perfcntr_groups;
};
static inline struct msm_gpu *dev_to_gpu(struct device *dev)
diff --git a/drivers/gpu/drm/msm/msm_perfcntr.h b/drivers/gpu/drm/msm/msm_perfcntr.h
index 305dcde15c5e..64a5d29feba1 100644
--- a/drivers/gpu/drm/msm/msm_perfcntr.h
+++ b/drivers/gpu/drm/msm/msm_perfcntr.h
@@ -35,6 +35,15 @@ struct msm_perfcntr_group {
const struct msm_perfcntr_counter *counters;
};
+extern const struct msm_perfcntr_group a6xx_perfcntr_groups[];
+extern const unsigned a6xx_num_perfcntr_groups;
+
+extern const struct msm_perfcntr_group a7xx_perfcntr_groups[];
+extern const unsigned a7xx_num_perfcntr_groups;
+
+extern const struct msm_perfcntr_group a8xx_perfcntr_groups[];
+extern const unsigned a8xx_num_perfcntr_groups;
+
#define GROUP(_name, _pipe, _counters, _countables) { \
.name = _name, \
.pipe = _pipe, \
--
2.54.0
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH v3 07/16] drm/msm: Add sysprof accessors
2026-05-04 19:06 [PATCH v3 00/16] drm/msm: Add PERFCNTR_CONFIG ioctl Rob Clark
` (4 preceding siblings ...)
2026-05-04 19:06 ` [PATCH v3 06/16] drm/msm: Add a6xx+ perfcntr tables Rob Clark
@ 2026-05-04 19:06 ` Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 08/16] drm/msm/a6xx: Add yield & flush helper Rob Clark
` (9 subsequent siblings)
15 siblings, 1 reply; 34+ messages in thread
From: Rob Clark @ 2026-05-04 19:06 UTC (permalink / raw)
To: dri-devel
Cc: linux-arm-msm, freedreno, Akhil P Oommen, Rob Clark, Sean Paul,
Konrad Dybcio, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang,
Marijn Suijten, David Airlie, Simona Vetter, open list
Currently the sysprof param serves two functions, (a) disabling perfcntr
clearing on context switch/preemption, and (b) disabling IFPC. In the
future, with kernel side global perfcntr collection/stream, the decision
about disabling IFPC will change.
To prepare for this, split out two helpers/accessors for the two
different cases. For now, they are the same thing, but this will
change.
Signed-off-by: Rob Clark <robin.clark@oss.qualcomm.com>
---
drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 8 +++-----
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 5 +++--
drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 2 +-
drivers/gpu/drm/msm/adreno/a8xx_gpu.c | 2 +-
drivers/gpu/drm/msm/adreno/a8xx_preempt.c | 2 +-
drivers/gpu/drm/msm/msm_gpu.h | 18 ++++++++++++++++++
6 files changed, 27 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 1b44b9e21ad8..aba08fb76249 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -2036,10 +2036,10 @@ static int a6xx_gmu_get_irq(struct a6xx_gmu *gmu, struct platform_device *pdev,
void a6xx_gmu_sysprof_setup(struct msm_gpu *gpu)
{
+ bool sysprof = msm_gpu_sysprof_no_ifpc(gpu);
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
struct a6xx_gmu *gmu = &a6xx_gpu->gmu;
- unsigned int sysprof_active;
/* Nothing to do if GPU is suspended. We will handle this during GMU resume */
if (!pm_runtime_get_if_active(&gpu->pdev->dev))
@@ -2047,15 +2047,13 @@ void a6xx_gmu_sysprof_setup(struct msm_gpu *gpu)
mutex_lock(&gmu->lock);
- sysprof_active = refcount_read(&gpu->sysprof_active);
-
/*
* 'Perfcounter select' register values are lost during IFPC collapse. To avoid that,
* use the currently unused perfcounter oob vote to block IFPC when sysprof is active
*/
- if ((sysprof_active > 1) && !test_and_set_bit(GMU_STATUS_OOB_PERF_SET, &gmu->status))
+ if (sysprof && !test_and_set_bit(GMU_STATUS_OOB_PERF_SET, &gmu->status))
a6xx_gmu_set_oob(gmu, GMU_OOB_PERFCOUNTER_SET);
- else if ((sysprof_active == 1) && test_and_clear_bit(GMU_STATUS_OOB_PERF_SET, &gmu->status))
+ else if (!sysprof && test_and_clear_bit(GMU_STATUS_OOB_PERF_SET, &gmu->status))
a6xx_gmu_clear_oob(gmu, GMU_OOB_PERFCOUNTER_SET);
mutex_unlock(&gmu->lock);
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 727281fbef36..71f54ab5425d 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -203,7 +203,7 @@ static void get_stats_counter(struct msm_ringbuffer *ring, u32 counter,
static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
struct msm_ringbuffer *ring, struct msm_gem_submit *submit)
{
- bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
+ bool sysprof = msm_gpu_sysprof_no_perfcntr_zap(&a6xx_gpu->base.base);
struct msm_context *ctx = submit->queue->ctx;
struct drm_gpuvm *vm = msm_context_vm(submit->dev, ctx);
struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
@@ -1608,7 +1608,7 @@ static int hw_init(struct msm_gpu *gpu)
a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_BOOT_SLUMBER);
}
- if (!ret && (refcount_read(&gpu->sysprof_active) > 1)) {
+ if (!ret && msm_gpu_sysprof_no_ifpc(gpu)) {
ret = a6xx_gmu_set_oob(gmu, GMU_OOB_PERFCOUNTER_SET);
if (!ret)
set_bit(GMU_STATUS_OOB_PERF_SET, &gmu->status);
@@ -2854,6 +2854,7 @@ const struct adreno_gpu_funcs a8xx_gpu_funcs = {
.create_private_vm = a6xx_create_private_vm,
.get_rptr = a6xx_get_rptr,
.progress = a8xx_progress,
+ .sysprof_setup = a6xx_gmu_sysprof_setup,
},
.init = a6xx_gpu_init,
.get_timestamp = a8xx_gmu_get_timestamp,
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
index df4cbf42e9a4..1e599d4ddea1 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
@@ -261,7 +261,7 @@ void a6xx_preempt_trigger(struct msm_gpu *gpu)
mod_timer(&a6xx_gpu->preempt_timer, jiffies + msecs_to_jiffies(10000));
/* Enable or disable postamble as needed */
- sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
+ sysprof = msm_gpu_sysprof_no_perfcntr_zap(gpu);
if (!sysprof && !a6xx_gpu->postamble_enabled)
preempt_prepare_postamble(a6xx_gpu);
diff --git a/drivers/gpu/drm/msm/adreno/a8xx_gpu.c b/drivers/gpu/drm/msm/adreno/a8xx_gpu.c
index ccfccc45133f..e022c9a162a4 100644
--- a/drivers/gpu/drm/msm/adreno/a8xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a8xx_gpu.c
@@ -849,7 +849,7 @@ static int hw_init(struct msm_gpu *gpu)
*/
a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_GPU_SET);
- if (!ret && (refcount_read(&gpu->sysprof_active) > 1)) {
+ if (!ret && msm_gpu_sysprof_no_perfcntr_zap(gpu)) {
ret = a6xx_gmu_set_oob(gmu, GMU_OOB_PERFCOUNTER_SET);
if (!ret)
set_bit(GMU_STATUS_OOB_PERF_SET, &gmu->status);
diff --git a/drivers/gpu/drm/msm/adreno/a8xx_preempt.c b/drivers/gpu/drm/msm/adreno/a8xx_preempt.c
index 3d8c33ba722e..6cb53a071801 100644
--- a/drivers/gpu/drm/msm/adreno/a8xx_preempt.c
+++ b/drivers/gpu/drm/msm/adreno/a8xx_preempt.c
@@ -242,7 +242,7 @@ void a8xx_preempt_trigger(struct msm_gpu *gpu)
mod_timer(&a6xx_gpu->preempt_timer, jiffies + msecs_to_jiffies(10000));
/* Enable or disable postamble as needed */
- sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
+ sysprof = msm_gpu_sysprof_no_perfcntr_zap(gpu);
if (!sysprof && !a6xx_gpu->postamble_enabled)
preempt_prepare_postamble(a6xx_gpu);
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 8c08dc065372..9e5c753437c2 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -311,6 +311,24 @@ static inline bool msm_gpu_active(struct msm_gpu *gpu)
return false;
}
+static inline bool
+msm_gpu_sysprof_no_perfcntr_zap(struct msm_gpu *gpu)
+{
+ return refcount_read(&gpu->sysprof_active) > 1;
+}
+
+static inline bool
+msm_gpu_sysprof_no_ifpc(struct msm_gpu *gpu)
+{
+ /*
+ * For now, this is the same condition as disabling perfcntr clears
+ * on context switch. But once kernel perfcntr IFPC support is in
+ * place, we will only need to disable IFPC for legacy userspace
+ * setting SYSPROF param.
+ */
+ return msm_gpu_sysprof_no_perfcntr_zap(gpu);
+}
+
/*
* The number of priority levels provided by drm gpu scheduler. The
* DRM_SCHED_PRIORITY_KERNEL priority level is treated specially in some
--
2.54.0
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH v3 08/16] drm/msm/a6xx: Add yield & flush helper
2026-05-04 19:06 [PATCH v3 00/16] drm/msm: Add PERFCNTR_CONFIG ioctl Rob Clark
` (5 preceding siblings ...)
2026-05-04 19:06 ` [PATCH v3 07/16] drm/msm: Add sysprof accessors Rob Clark
@ 2026-05-04 19:06 ` Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 09/16] drm/msm: Add per-context perfcntr state Rob Clark
` (8 subsequent siblings)
15 siblings, 1 reply; 34+ messages in thread
From: Rob Clark @ 2026-05-04 19:06 UTC (permalink / raw)
To: dri-devel
Cc: linux-arm-msm, freedreno, Akhil P Oommen, Rob Clark, Sean Paul,
Konrad Dybcio, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang,
Marijn Suijten, David Airlie, Simona Vetter, open list
It's a common pattern, needing to insert a yield packet before flushing
the rb. And we'll need this once again for configuring perfcntr SEL
regs. So add a helper.
Signed-off-by: Rob Clark <robin.clark@oss.qualcomm.com>
---
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 55 +++++++++++++--------------
drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 1 +
drivers/gpu/drm/msm/adreno/a8xx_gpu.c | 10 +----
3 files changed, 28 insertions(+), 38 deletions(-)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 71f54ab5425d..415902f6e5d7 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -189,6 +189,30 @@ void a6xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
spin_unlock_irqrestore(&ring->preempt_lock, flags);
}
+void
+a6xx_flush_yield(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
+{
+ /* If preemption is enabled */
+ if (gpu->nr_rings > 1) {
+ /* Yield the floor on command completion */
+ OUT_PKT7(ring, CP_CONTEXT_SWITCH_YIELD, 4);
+
+ /*
+ * If dword[2:1] are non zero, they specify an address for
+ * the CP to write the value of dword[3] to on preemption
+ * complete. Write 0 to skip the write
+ */
+ OUT_RING(ring, 0x00);
+ OUT_RING(ring, 0x00);
+ /* Data value - not used if the address above is 0 */
+ OUT_RING(ring, 0x01);
+ /* generate interrupt on preemption completion */
+ OUT_RING(ring, 0x00);
+ }
+
+ a6xx_flush(gpu, ring);
+}
+
static void get_stats_counter(struct msm_ringbuffer *ring, u32 counter,
u64 iova)
{
@@ -597,28 +621,9 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
OUT_PKT7(ring, CP_SET_MARKER, 1);
OUT_RING(ring, 0x100); /* IFPC enable */
- /* If preemption is enabled */
- if (gpu->nr_rings > 1) {
- /* Yield the floor on command completion */
- OUT_PKT7(ring, CP_CONTEXT_SWITCH_YIELD, 4);
-
- /*
- * If dword[2:1] are non zero, they specify an address for
- * the CP to write the value of dword[3] to on preemption
- * complete. Write 0 to skip the write
- */
- OUT_RING(ring, 0x00);
- OUT_RING(ring, 0x00);
- /* Data value - not used if the address above is 0 */
- OUT_RING(ring, 0x01);
- /* generate interrupt on preemption completion */
- OUT_RING(ring, 0x00);
- }
-
-
trace_msm_gpu_submit_flush(submit, adreno_gpu->funcs->get_timestamp(gpu));
- a6xx_flush(gpu, ring);
+ a6xx_flush_yield(gpu, ring);
/* Check to see if we need to start preemption */
if (adreno_is_a8xx(adreno_gpu))
@@ -958,15 +963,7 @@ static int a7xx_preempt_start(struct msm_gpu *gpu)
a6xx_emit_set_pseudo_reg(ring, a6xx_gpu, NULL);
- /* Yield the floor on command completion */
- OUT_PKT7(ring, CP_CONTEXT_SWITCH_YIELD, 4);
- OUT_RING(ring, 0x00);
- OUT_RING(ring, 0x00);
- OUT_RING(ring, 0x00);
- /* Generate interrupt on preemption completion */
- OUT_RING(ring, 0x00);
-
- a6xx_flush(gpu, ring);
+ a6xx_flush_yield(gpu, ring);
return a6xx_idle(gpu, ring) ? 0 : -EINVAL;
}
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
index eb431e5e00b1..99c3e55f5ca8 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
@@ -317,6 +317,7 @@ void a6xx_bus_clear_pending_transactions(struct adreno_gpu *adreno_gpu, bool gx_
void a6xx_gpu_sw_reset(struct msm_gpu *gpu, bool assert);
int a6xx_fenced_write(struct a6xx_gpu *gpu, u32 offset, u64 value, u32 mask, bool is_64b);
void a6xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring);
+void a6xx_flush_yield(struct msm_gpu *gpu, struct msm_ringbuffer *ring);
int a6xx_zap_shader_init(struct msm_gpu *gpu);
void a8xx_bus_clear_pending_transactions(struct adreno_gpu *adreno_gpu, bool gx_off);
diff --git a/drivers/gpu/drm/msm/adreno/a8xx_gpu.c b/drivers/gpu/drm/msm/adreno/a8xx_gpu.c
index e022c9a162a4..124d315b2469 100644
--- a/drivers/gpu/drm/msm/adreno/a8xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a8xx_gpu.c
@@ -488,15 +488,7 @@ static int a8xx_preempt_start(struct msm_gpu *gpu)
a6xx_emit_set_pseudo_reg(ring, a6xx_gpu, NULL);
- /* Yield the floor on command completion */
- OUT_PKT7(ring, CP_CONTEXT_SWITCH_YIELD, 4);
- OUT_RING(ring, 0x00);
- OUT_RING(ring, 0x00);
- OUT_RING(ring, 0x00);
- /* Generate interrupt on preemption completion */
- OUT_RING(ring, 0x00);
-
- a6xx_flush(gpu, ring);
+ a6xx_flush_yield(gpu, ring);
return a8xx_idle(gpu, ring) ? 0 : -EINVAL;
}
--
2.54.0
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH v3 09/16] drm/msm: Add per-context perfcntr state
2026-05-04 19:06 [PATCH v3 00/16] drm/msm: Add PERFCNTR_CONFIG ioctl Rob Clark
` (6 preceding siblings ...)
2026-05-04 19:06 ` [PATCH v3 08/16] drm/msm/a6xx: Add yield & flush helper Rob Clark
@ 2026-05-04 19:06 ` Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 10/16] drm/msm: Add basic perfcntr infrastructure Rob Clark
` (7 subsequent siblings)
15 siblings, 1 reply; 34+ messages in thread
From: Rob Clark @ 2026-05-04 19:06 UTC (permalink / raw)
To: dri-devel
Cc: linux-arm-msm, freedreno, Akhil P Oommen, Rob Clark, Sean Paul,
Konrad Dybcio, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang,
Marijn Suijten, David Airlie, Simona Vetter, open list
The upcoming PERFCNTR_CONFIG ioctl will allow for both global counter
collection, and per-context counter reservation for local (ie. within
a single GEM_SUBMIT ioctl) counter collection.
Any number of contexts can reserve the same counters, but we will need
to ensure that counters reserved for local counter collection do not
conflict with counters used for global counter collection.
So add tracking for per-context local counter reservations.
Signed-off-by: Rob Clark <robin.clark@oss.qualcomm.com>
---
drivers/gpu/drm/msm/msm_gpu.h | 5 +++++
drivers/gpu/drm/msm/msm_perfcntr.h | 21 +++++++++++++++++++++
drivers/gpu/drm/msm/msm_submitqueue.c | 1 +
3 files changed, 27 insertions(+)
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 9e5c753437c2..19484774f369 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -434,6 +434,11 @@ struct msm_context {
* this context.
*/
atomic64_t ctx_mem;
+
+ /**
+ * @perfcntrs: Per-context reserved perfcntrs state
+ */
+ struct msm_perfcntr_context_state *perfctx;
};
struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx);
diff --git a/drivers/gpu/drm/msm/msm_perfcntr.h b/drivers/gpu/drm/msm/msm_perfcntr.h
index 64a5d29feba1..7f0654182496 100644
--- a/drivers/gpu/drm/msm/msm_perfcntr.h
+++ b/drivers/gpu/drm/msm/msm_perfcntr.h
@@ -35,6 +35,27 @@ struct msm_perfcntr_group {
const struct msm_perfcntr_counter *counters;
};
+/**
+ * struct msm_perfcntr_context_state - per-msm_context counter state
+ *
+ * A given counter can either be unused, reserved for global counter
+ * collection exclusively, or reserved for local per-context counter
+ * collection inclusively. Multiple contexts can reserve the same
+ * counter, since SEL reg programming and counter begin/end sampling
+ * happen locally (within a single GEM_SUBMIT ioctl).
+ */
+struct msm_perfcntr_context_state {
+ /** @dummy: Some compilers dislike structs with only a flex array */
+ unsigned dummy;
+
+ /**
+ * @reserved_counters:
+ *
+ * The number of reserved counters indexed by perfcntr group.
+ */
+ unsigned reserved_counters[];
+};
+
extern const struct msm_perfcntr_group a6xx_perfcntr_groups[];
extern const unsigned a6xx_num_perfcntr_groups;
diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c
index 2598d674a99d..a58fe41602c6 100644
--- a/drivers/gpu/drm/msm/msm_submitqueue.c
+++ b/drivers/gpu/drm/msm/msm_submitqueue.c
@@ -66,6 +66,7 @@ void __msm_context_destroy(struct kref *kref)
drm_gpuvm_put(ctx->vm);
kfree(ctx->comm);
kfree(ctx->cmdline);
+ kfree(ctx->perfctx);
kfree(ctx);
}
--
2.54.0
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH v3 10/16] drm/msm: Add basic perfcntr infrastructure
2026-05-04 19:06 [PATCH v3 00/16] drm/msm: Add PERFCNTR_CONFIG ioctl Rob Clark
` (7 preceding siblings ...)
2026-05-04 19:06 ` [PATCH v3 09/16] drm/msm: Add per-context perfcntr state Rob Clark
@ 2026-05-04 19:06 ` Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 11/16] drm/msm/a6xx+: Add support to configure perfcntrs Rob Clark
` (6 subsequent siblings)
15 siblings, 1 reply; 34+ messages in thread
From: Rob Clark @ 2026-05-04 19:06 UTC (permalink / raw)
To: dri-devel
Cc: linux-arm-msm, freedreno, Akhil P Oommen, Rob Clark,
Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul,
Marijn Suijten, David Airlie, Simona Vetter, Konrad Dybcio,
open list
Add the basic infrastructure for tracking assigned perfcntrs.
Signed-off-by: Rob Clark <robin.clark@oss.qualcomm.com>
---
drivers/gpu/drm/msm/Makefile | 1 +
drivers/gpu/drm/msm/adreno/adreno_device.c | 8 +-
drivers/gpu/drm/msm/adreno/adreno_gpu.c | 5 +-
drivers/gpu/drm/msm/msm_drv.h | 6 ++
drivers/gpu/drm/msm/msm_gpu.c | 12 +++
drivers/gpu/drm/msm/msm_gpu.h | 57 +++++++++-
drivers/gpu/drm/msm/msm_perfcntr.c | 120 +++++++++++++++++++++
drivers/gpu/drm/msm/msm_perfcntr.h | 23 ++++
8 files changed, 226 insertions(+), 6 deletions(-)
create mode 100644 drivers/gpu/drm/msm/msm_perfcntr.c
diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile
index 337634e7e247..2466cb32dac5 100644
--- a/drivers/gpu/drm/msm/Makefile
+++ b/drivers/gpu/drm/msm/Makefile
@@ -122,6 +122,7 @@ msm-y += \
msm_gpu_devfreq.o \
msm_io_utils.o \
msm_iommu.o \
+ msm_perfcntr.o \
msm_rd.o \
msm_ringbuffer.o \
msm_submitqueue.o \
diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c
index fc38331ce640..7f20320ef66a 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_device.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_device.c
@@ -307,8 +307,10 @@ MODULE_DEVICE_TABLE(of, dt_match);
static int adreno_runtime_resume(struct device *dev)
{
struct msm_gpu *gpu = dev_to_gpu(dev);
-
- return gpu->funcs->pm_resume(gpu);
+ int ret = gpu->funcs->pm_resume(gpu);
+ if (!ret)
+ ret = msm_perfcntr_resume(gpu);
+ return ret;
}
static int adreno_runtime_suspend(struct device *dev)
@@ -322,6 +324,8 @@ static int adreno_runtime_suspend(struct device *dev)
*/
WARN_ON_ONCE(gpu->active_submits);
+ msm_perfcntr_suspend(gpu);
+
return gpu->funcs->pm_suspend(gpu);
}
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 72b71e9e44f0..ee0bcf985934 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -702,11 +702,10 @@ void adreno_recover(struct msm_gpu *gpu)
struct drm_device *dev = gpu->dev;
int ret;
- // XXX pm-runtime?? we *need* the device to be off after this
- // so maybe continuing to call ->pm_suspend/resume() is better?
-
+ msm_perfcntr_suspend(gpu);
gpu->funcs->pm_suspend(gpu);
gpu->funcs->pm_resume(gpu);
+ msm_perfcntr_resume(gpu);
ret = msm_gpu_hw_init(gpu);
if (ret) {
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index e53e4f220bed..f00b2e7aeb91 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -235,6 +235,12 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
int msm_ioctl_vm_bind(struct drm_device *dev, void *data,
struct drm_file *file);
+int msm_perfcntr_resume(struct msm_gpu *gpu);
+void msm_perfcntr_suspend(struct msm_gpu *gpu);
+
+struct msm_perfcntr_state * msm_perfcntr_init(struct msm_gpu *gpu);
+void msm_perfcntr_cleanup(struct msm_gpu *gpu);
+
#ifdef CONFIG_DEBUG_FS
unsigned long msm_gem_shrinker_shrink(struct drm_device *dev, unsigned long nr_to_scan);
#endif
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 1bac70473f80..bf6845e5719e 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -1028,6 +1028,17 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
refcount_set(&gpu->sysprof_active, 1);
+ mutex_init(&gpu->perfcntr_lock);
+
+ if (gpu->num_perfcntr_groups > 0) {
+ gpu->perfcntrs = msm_perfcntr_init(gpu);
+ if (IS_ERR(gpu->perfcntrs)) {
+ ret = PTR_ERR(gpu->perfcntrs);
+ gpu->perfcntrs = NULL;
+ goto fail;
+ }
+ }
+
return 0;
fail:
@@ -1066,6 +1077,7 @@ void msm_gpu_cleanup(struct msm_gpu *gpu)
}
msm_devfreq_cleanup(gpu);
+ msm_perfcntr_cleanup(gpu);
platform_set_drvdata(gpu->pdev, NULL);
}
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 19484774f369..92710da5009b 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -25,6 +25,7 @@ struct msm_gem_vm_log_entry;
struct msm_gpu_state;
struct msm_context;
struct msm_perfcntr_group;
+struct msm_perfcntr_stream;
struct msm_gpu_config {
const char *ioname;
@@ -93,6 +94,13 @@ struct msm_gpu_funcs {
*/
bool (*progress)(struct msm_gpu *gpu, struct msm_ringbuffer *ring);
void (*sysprof_setup)(struct msm_gpu *gpu);
+
+ /* Configure perfcntr SELect regs: */
+ void (*perfcntr_configure)(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
+ const struct msm_perfcntr_stream *stream);
+
+ /* Flush perfcntrs before reading (optional): */
+ void (*perfcntr_flush)(struct msm_gpu *gpu);
};
/* Additional state for iommu faults: */
@@ -266,6 +274,11 @@ struct msm_gpu {
const struct msm_perfcntr_group *perfcntr_groups;
unsigned num_perfcntr_groups;
+
+ struct msm_perfcntr_state *perfcntrs;
+
+ /** @perfcntr_lock: protects perfcntr related state */
+ struct mutex perfcntr_lock;
};
static inline struct msm_gpu *dev_to_gpu(struct device *dev)
@@ -311,10 +324,52 @@ static inline bool msm_gpu_active(struct msm_gpu *gpu)
return false;
}
+/**
+ * struct msm_perfcntr_group_state - Tracking for the currently allocated counter state
+ */
+struct msm_perfcntr_group_state {
+ /**
+ * @allocated_counters:
+ *
+ * allocated counters for global counter collection. The
+ * corresponding counters are allocated from highest to
+ * lowest, to minimize chance of conflict with old userspace
+ * allocating from lowest to highest.
+ */
+ unsigned allocated_counters;
+
+ /**
+ * @countables:
+ *
+ * The correspnding SELect reg values for the allocated counters
+ */
+ uint32_t countables[];
+};
+
+/**
+ * struct msm_perfcntr_state - overall global perfcntr state
+ */
+struct msm_perfcntr_state {
+ /** @stream: current global counter stream if active */
+ struct msm_perfcntr_stream *stream;
+
+ /**
+ * @groups: Global perfcntr stream group state.
+ *
+ * Conceptually this is part of msm_perfcntr_stream state, but is
+ * statically pre-allocated when the gpu is initialized to simplify
+ * error path cleanup in PERFCNTR_CONFIG ioctl. (__free(kfree)
+ * doesn't really help with variable length arrays of allocated
+ * pointers.)
+ */
+ struct msm_perfcntr_group_state *groups[];
+};
+
static inline bool
msm_gpu_sysprof_no_perfcntr_zap(struct msm_gpu *gpu)
{
- return refcount_read(&gpu->sysprof_active) > 1;
+ return (refcount_read(&gpu->sysprof_active) > 1) ||
+ READ_ONCE(gpu->perfcntrs->stream);
}
static inline bool
diff --git a/drivers/gpu/drm/msm/msm_perfcntr.c b/drivers/gpu/drm/msm/msm_perfcntr.c
new file mode 100644
index 000000000000..09e6aa4b6620
--- /dev/null
+++ b/drivers/gpu/drm/msm/msm_perfcntr.c
@@ -0,0 +1,120 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
+ */
+
+#include "msm_drv.h"
+#include "msm_gpu.h"
+#include "msm_perfcntr.h"
+
+static int
+msm_perfcntr_resume_locked(struct msm_perfcntr_stream *stream)
+{
+ return 0;
+}
+
+int
+msm_perfcntr_resume(struct msm_gpu *gpu)
+{
+ guard(mutex)(&gpu->perfcntr_lock);
+ return msm_perfcntr_resume_locked(gpu->perfcntrs->stream);
+}
+
+static void
+msm_perfcntr_suspend_locked(struct msm_perfcntr_stream *stream)
+{
+}
+
+void
+msm_perfcntr_suspend(struct msm_gpu *gpu)
+{
+ guard(mutex)(&gpu->perfcntr_lock);
+ msm_perfcntr_suspend_locked(gpu->perfcntrs->stream);
+}
+
+/**
+ * msm_perfcntr_group_idx - map idx of perfcntr group to group_idx
+ * @stream: The global perfcntr stream
+ * @n: The requested group_idx
+ *
+ * The PERFCNTR_CONFIG ioctl requested N counters/countables per perfcntr
+ * group, but the order of groups is not required to match the order they
+ * are defined in the perfcntr tables (which is not stable/UABI, only the
+ * group names are UABI).
+ *
+ * But the order samples are returned in the stream should match the
+ * order they are requested in the PERFCNTR_CONFIG ioctl. This helper
+ * handles the order remapping.
+ *
+ * Returns an index into gpu->perfcntr_groups[] and perfcntrs->groups[].
+ */
+uint32_t
+msm_perfcntr_group_idx(const struct msm_perfcntr_stream *stream, uint32_t n)
+{
+ WARN_ON_ONCE(n > stream->nr_groups);
+ return stream->group_idx[n];
+}
+
+/**
+ * msm_perfcntr_counter_base - get idx of the first counter in group
+ * @stream: The global perfcntr stream
+ * @group_idx: the index of the counter group
+ *
+ * For global counter collection, counters are allocated from the end
+ * (last counter) to minimize the chance of conflict with an old UMD
+ * which predates PERFCNTR_CONFIG ioctl (since UMD assigned from 0..N-1).
+ *
+ * Returns the index of first counter to use. An index into
+ * msm_perfcntr_group::counters[].
+ */
+uint32_t
+msm_perfcntr_counter_base(const struct msm_perfcntr_stream *stream, uint32_t group_idx)
+{
+ struct msm_gpu *gpu = stream->gpu;
+ struct msm_perfcntr_state *perfcntrs = gpu->perfcntrs;
+ unsigned num_counters = gpu->perfcntr_groups[group_idx].num_counters;
+ unsigned allocated_counters = perfcntrs->groups[group_idx]->allocated_counters;
+
+ return num_counters - allocated_counters;
+}
+
+struct msm_perfcntr_state *
+msm_perfcntr_init(struct msm_gpu *gpu)
+{
+ struct msm_perfcntr_state *perfcntrs;
+ struct device *dev = &gpu->pdev->dev;
+ size_t sz;
+
+ sz = struct_size(perfcntrs, groups, gpu->num_perfcntr_groups);
+ perfcntrs = devm_kzalloc(dev, sz, GFP_KERNEL);
+ if (!perfcntrs)
+ return ERR_PTR(-ENOMEM);
+
+ for (unsigned i = 0; i < gpu->num_perfcntr_groups; i++) {
+ const struct msm_perfcntr_group *group =
+ &gpu->perfcntr_groups[i];
+
+ sz = struct_size(perfcntrs->groups[i], countables, group->num_counters);
+ perfcntrs->groups[i] = devm_kzalloc(dev, sz, GFP_KERNEL);
+ if (!perfcntrs->groups[i]) {
+ msm_perfcntr_cleanup(gpu);
+ return ERR_PTR(-ENOMEM);
+ }
+ }
+
+ return perfcntrs;
+}
+
+void
+msm_perfcntr_cleanup(struct msm_gpu *gpu)
+{
+ struct msm_perfcntr_state *perfcntrs = gpu->perfcntrs;
+ struct device *dev = &gpu->pdev->dev;
+
+ gpu->perfcntrs = NULL;
+
+ for (unsigned i = 0; i < gpu->num_perfcntr_groups; i++)
+ devm_kfree(dev, perfcntrs->groups[i]);
+
+ devm_kfree(dev, perfcntrs);
+}
diff --git a/drivers/gpu/drm/msm/msm_perfcntr.h b/drivers/gpu/drm/msm/msm_perfcntr.h
index 7f0654182496..bfda19e01535 100644
--- a/drivers/gpu/drm/msm/msm_perfcntr.h
+++ b/drivers/gpu/drm/msm/msm_perfcntr.h
@@ -35,6 +35,29 @@ struct msm_perfcntr_group {
const struct msm_perfcntr_counter *counters;
};
+/**
+ * struct msm_perfcntr_stream - state for a single open stream fd
+ */
+struct msm_perfcntr_stream {
+ /** @gpu: Back-link to the GPU */
+ struct msm_gpu *gpu;
+
+ /** @nr_groups: # of counter groups with enabled counters */
+ uint32_t nr_groups;
+
+ /**
+ * @group_idx: array of nr_groups
+ *
+ * Maps the order of groups in PERFCNTR_CONFIG ioctl to group idx,
+ * so that results in the results stream can be ordered to match
+ * the ioctl call that setup the stream
+ */
+ uint32_t *group_idx;
+};
+
+uint32_t msm_perfcntr_group_idx(const struct msm_perfcntr_stream *stream, uint32_t n);
+uint32_t msm_perfcntr_counter_base(const struct msm_perfcntr_stream *stream, uint32_t group_idx);
+
/**
* struct msm_perfcntr_context_state - per-msm_context counter state
*
--
2.54.0
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH v3 11/16] drm/msm/a6xx+: Add support to configure perfcntrs
2026-05-04 19:06 [PATCH v3 00/16] drm/msm: Add PERFCNTR_CONFIG ioctl Rob Clark
` (8 preceding siblings ...)
2026-05-04 19:06 ` [PATCH v3 10/16] drm/msm: Add basic perfcntr infrastructure Rob Clark
@ 2026-05-04 19:06 ` Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 12/16] drm/msm/a8xx: Add perfcntr flush sequence Rob Clark
` (5 subsequent siblings)
15 siblings, 1 reply; 34+ messages in thread
From: Rob Clark @ 2026-05-04 19:06 UTC (permalink / raw)
To: dri-devel
Cc: linux-arm-msm, freedreno, Akhil P Oommen, Rob Clark, Sean Paul,
Konrad Dybcio, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang,
Marijn Suijten, David Airlie, Simona Vetter, open list
Add support to configure counter SELect regs. In some cases the reg
writes need to happen while the GPU is idle. And for a7xx+, in some
cases SEL regs need to be configured from BV or BR aperture. The
easiest way to deal with this is to configure from the RB.
Signed-off-by: Rob Clark <robin.clark@oss.qualcomm.com>
---
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 69 +++++++++++++++++++++++++++
drivers/gpu/drm/msm/msm_perfcntr.h | 3 ++
drivers/gpu/drm/msm/msm_ringbuffer.h | 2 +
3 files changed, 74 insertions(+)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 415902f6e5d7..30df9bfa9ef8 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -2535,6 +2535,71 @@ static bool a6xx_progress(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
return progress;
}
+static void
+a6xx_perfcntr_configure(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
+ const struct msm_perfcntr_stream *stream)
+{
+ enum adreno_pipe pipe = PIPE_NONE;
+
+ for (unsigned i = 0; i < stream->nr_groups; i++) {
+ unsigned group_idx = msm_perfcntr_group_idx(stream, i);
+ unsigned base = msm_perfcntr_counter_base(stream, group_idx);
+
+ const struct msm_perfcntr_group *group =
+ &gpu->perfcntr_groups[group_idx];
+
+ struct msm_perfcntr_group_state *group_state =
+ gpu->perfcntrs->groups[group_idx];
+
+ if (group->pipe != pipe) {
+ pipe = group->pipe;
+
+ OUT_PKT7(ring, CP_THREAD_CONTROL, 1);
+
+ if (pipe == PIPE_BR) {
+ OUT_RING(ring, CP_SET_THREAD_BR);
+ } else if (pipe == PIPE_BV) {
+ OUT_RING(ring, CP_SET_THREAD_BV);
+ } else {
+ OUT_RING(ring, CP_SET_THREAD_BOTH);
+ }
+ }
+
+ const struct msm_perfcntr_counter *counter = &group->counters[base];
+ unsigned nr = group_state->allocated_counters;
+ OUT_PKT4(ring, counter->select_reg, nr);
+ for (unsigned c = 0; c < nr; c++)
+ OUT_RING(ring, group_state->countables[c]);
+
+ for (unsigned s = 0; s < ARRAY_SIZE(counter->slice_select_regs); s++) {
+ if (!counter->slice_select_regs[s])
+ break;
+
+ OUT_PKT4(ring, counter->slice_select_regs[s], nr);
+ for (unsigned c = 0; c < nr; c++)
+ OUT_RING(ring, group_state->countables[c]);
+ }
+ }
+
+ if (pipe != PIPE_NONE) {
+ OUT_PKT7(ring, CP_THREAD_CONTROL, 1);
+ OUT_RING(ring, CP_SET_THREAD_BOTH);
+ }
+
+ OUT_PKT7(ring, CP_MEM_WRITE, 3);
+ OUT_RING(ring, lower_32_bits(rbmemptr(ring, perfcntr_fence)));
+ OUT_RING(ring, upper_32_bits(rbmemptr(ring, perfcntr_fence)));
+ OUT_RING(ring, stream->sel_fence);
+
+ a6xx_flush_yield(gpu, ring);
+
+ /* Check to see if we need to start preemption */
+ if (adreno_is_a8xx(to_adreno_gpu(gpu)))
+ a8xx_preempt_trigger(gpu);
+ else
+ a6xx_preempt_trigger(gpu);
+}
+
static u32 fuse_to_supp_hw(const struct adreno_info *info, u32 fuse)
{
if (!info->speedbins)
@@ -2753,6 +2818,7 @@ const struct adreno_gpu_funcs a6xx_gpu_funcs = {
.get_rptr = a6xx_get_rptr,
.progress = a6xx_progress,
.sysprof_setup = a6xx_gmu_sysprof_setup,
+ .perfcntr_configure = a6xx_perfcntr_configure,
},
.init = a6xx_gpu_init,
.get_timestamp = a6xx_gmu_get_timestamp,
@@ -2786,6 +2852,7 @@ const struct adreno_gpu_funcs a6xx_gmuwrapper_funcs = {
.create_private_vm = a6xx_create_private_vm,
.get_rptr = a6xx_get_rptr,
.progress = a6xx_progress,
+ .perfcntr_configure = a6xx_perfcntr_configure,
},
.init = a6xx_gpu_init,
.get_timestamp = a6xx_get_timestamp,
@@ -2822,6 +2889,7 @@ const struct adreno_gpu_funcs a7xx_gpu_funcs = {
.get_rptr = a6xx_get_rptr,
.progress = a6xx_progress,
.sysprof_setup = a6xx_gmu_sysprof_setup,
+ .perfcntr_configure = a6xx_perfcntr_configure,
},
.init = a6xx_gpu_init,
.get_timestamp = a6xx_gmu_get_timestamp,
@@ -2852,6 +2920,7 @@ const struct adreno_gpu_funcs a8xx_gpu_funcs = {
.get_rptr = a6xx_get_rptr,
.progress = a8xx_progress,
.sysprof_setup = a6xx_gmu_sysprof_setup,
+ .perfcntr_configure = a6xx_perfcntr_configure,
},
.init = a6xx_gpu_init,
.get_timestamp = a8xx_gmu_get_timestamp,
diff --git a/drivers/gpu/drm/msm/msm_perfcntr.h b/drivers/gpu/drm/msm/msm_perfcntr.h
index bfda19e01535..14506bc37d05 100644
--- a/drivers/gpu/drm/msm/msm_perfcntr.h
+++ b/drivers/gpu/drm/msm/msm_perfcntr.h
@@ -45,6 +45,9 @@ struct msm_perfcntr_stream {
/** @nr_groups: # of counter groups with enabled counters */
uint32_t nr_groups;
+ /** @sel_fence: Fence for SEL reg programming */
+ uint32_t sel_fence;
+
/**
* @group_idx: array of nr_groups
*
diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.h b/drivers/gpu/drm/msm/msm_ringbuffer.h
index d1e49f701c81..28ca8c9f7463 100644
--- a/drivers/gpu/drm/msm/msm_ringbuffer.h
+++ b/drivers/gpu/drm/msm/msm_ringbuffer.h
@@ -37,6 +37,8 @@ struct msm_rbmemptrs {
volatile struct msm_gpu_submit_stats stats[MSM_GPU_SUBMIT_STATS_COUNT];
volatile u64 ttbr0;
volatile u32 context_idr;
+
+ volatile u32 perfcntr_fence;
};
struct msm_cp_state {
--
2.54.0
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH v3 12/16] drm/msm/a8xx: Add perfcntr flush sequence
2026-05-04 19:06 [PATCH v3 00/16] drm/msm: Add PERFCNTR_CONFIG ioctl Rob Clark
` (9 preceding siblings ...)
2026-05-04 19:06 ` [PATCH v3 11/16] drm/msm/a6xx+: Add support to configure perfcntrs Rob Clark
@ 2026-05-04 19:06 ` Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 13/16] drm/msm: Add PERFCNTR_CONFIG ioctl Rob Clark
` (4 subsequent siblings)
15 siblings, 1 reply; 34+ messages in thread
From: Rob Clark @ 2026-05-04 19:06 UTC (permalink / raw)
To: dri-devel
Cc: linux-arm-msm, freedreno, Akhil P Oommen, Rob Clark, Sean Paul,
Konrad Dybcio, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang,
Marijn Suijten, David Airlie, Simona Vetter, open list
With the slice architecture, we need to flush the slice and unslice
counters to perf RAM before reading counters.
Signed-off-by: Rob Clark <robin.clark@oss.qualcomm.com>
---
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 1 +
drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 1 +
drivers/gpu/drm/msm/adreno/a8xx_gpu.c | 20 ++++++++++++++++++++
3 files changed, 22 insertions(+)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 30df9bfa9ef8..a329d20033d7 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -2921,6 +2921,7 @@ const struct adreno_gpu_funcs a8xx_gpu_funcs = {
.progress = a8xx_progress,
.sysprof_setup = a6xx_gmu_sysprof_setup,
.perfcntr_configure = a6xx_perfcntr_configure,
+ .perfcntr_flush = a8xx_perfcntr_flush,
},
.init = a6xx_gpu_init,
.get_timestamp = a8xx_gmu_get_timestamp,
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
index 99c3e55f5ca8..3491a24a9320 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
@@ -334,5 +334,6 @@ void a8xx_preempt_hw_init(struct msm_gpu *gpu);
void a8xx_preempt_trigger(struct msm_gpu *gpu);
void a8xx_preempt_irq(struct msm_gpu *gpu);
bool a8xx_progress(struct msm_gpu *gpu, struct msm_ringbuffer *ring);
+void a8xx_perfcntr_flush(struct msm_gpu *gpu);
void a8xx_recover(struct msm_gpu *gpu);
#endif /* __A6XX_GPU_H__ */
diff --git a/drivers/gpu/drm/msm/adreno/a8xx_gpu.c b/drivers/gpu/drm/msm/adreno/a8xx_gpu.c
index 124d315b2469..6c040f718176 100644
--- a/drivers/gpu/drm/msm/adreno/a8xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a8xx_gpu.c
@@ -1345,3 +1345,23 @@ bool a8xx_progress(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
{
return true;
}
+
+void a8xx_perfcntr_flush(struct msm_gpu *gpu)
+{
+ u32 val;
+
+ /*
+ * Flush delta counters (both perf counters and pipe stats) present in
+ * RBBM_S and RBBM_US to perf RAM logic to get the latest data.
+ */
+ gpu_write(gpu, REG_A8XX_RBBM_PERFCTR_FLUSH_HOST_CMD, BIT(0));
+ gpu_write(gpu, REG_A8XX_RBBM_SLICE_PERFCTR_FLUSH_HOST_CMD, BIT(0));
+
+ /* Ensure all writes are posted before polling status register */
+ wmb();
+
+ if (gpu_poll_timeout(gpu, REG_A8XX_RBBM_PERFCTR_FLUSH_HOST_STATUS, val,
+ val & BIT(0), 100, 100 * 1000)) {
+ dev_err(&gpu->pdev->dev, "Perfcounter flush timed out: status=0x%08x\n", val);
+ }
+}
--
2.54.0
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH v3 13/16] drm/msm: Add PERFCNTR_CONFIG ioctl
2026-05-04 19:06 [PATCH v3 00/16] drm/msm: Add PERFCNTR_CONFIG ioctl Rob Clark
` (10 preceding siblings ...)
2026-05-04 19:06 ` [PATCH v3 12/16] drm/msm/a8xx: Add perfcntr flush sequence Rob Clark
@ 2026-05-04 19:06 ` Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 14/16] drm/msm/a6xx: Increase pwrup_reglist size Rob Clark
` (3 subsequent siblings)
15 siblings, 1 reply; 34+ messages in thread
From: Rob Clark @ 2026-05-04 19:06 UTC (permalink / raw)
To: dri-devel
Cc: linux-arm-msm, freedreno, Akhil P Oommen, Rob Clark,
Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul,
Marijn Suijten, David Airlie, Simona Vetter, Konrad Dybcio,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, open list
Add new UABI and implementation of PERFCNTR_CONFIG ioctl.
A bit more work is required to configure the pwrup_reglist for the GMU
to restore SELect regs on exist of IFPC, before we can stop disabling
IFPC while global counter collection. This will follow in a later
commit, but will be transparent to userspace.
Signed-off-by: Rob Clark <robin.clark@oss.qualcomm.com>
---
drivers/gpu/drm/msm/msm_drv.c | 1 +
drivers/gpu/drm/msm/msm_drv.h | 2 +
drivers/gpu/drm/msm/msm_gpu.h | 3 +
drivers/gpu/drm/msm/msm_perfcntr.c | 510 +++++++++++++++++++++++++++++
drivers/gpu/drm/msm/msm_perfcntr.h | 51 +++
include/uapi/drm/msm_drm.h | 48 +++
6 files changed, 615 insertions(+)
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 3066547f319b..0a7fc06113e0 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -801,6 +801,7 @@ static const struct drm_ioctl_desc msm_ioctls[] = {
DRM_IOCTL_DEF_DRV(MSM_SUBMITQUEUE_CLOSE, msm_ioctl_submitqueue_close, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(MSM_SUBMITQUEUE_QUERY, msm_ioctl_submitqueue_query, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(MSM_VM_BIND, msm_ioctl_vm_bind, DRM_RENDER_ALLOW),
+ DRM_IOCTL_DEF_DRV(MSM_PERFCNTR_CONFIG, msm_ioctl_perfcntr_config, DRM_RENDER_ALLOW),
};
static void msm_show_fdinfo(struct drm_printer *p, struct drm_file *file)
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index f00b2e7aeb91..204e140ac8e9 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -237,6 +237,8 @@ int msm_ioctl_vm_bind(struct drm_device *dev, void *data,
int msm_perfcntr_resume(struct msm_gpu *gpu);
void msm_perfcntr_suspend(struct msm_gpu *gpu);
+int msm_ioctl_perfcntr_config(struct drm_device *dev, void *data,
+ struct drm_file *file);
struct msm_perfcntr_state * msm_perfcntr_init(struct msm_gpu *gpu);
void msm_perfcntr_cleanup(struct msm_gpu *gpu);
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 92710da5009b..67f1e84eb631 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -353,6 +353,9 @@ struct msm_perfcntr_state {
/** @stream: current global counter stream if active */
struct msm_perfcntr_stream *stream;
+ /** @sel_seqno: counter for sel_fence */
+ uint32_t sel_seqno;
+
/**
* @groups: Global perfcntr stream group state.
*
diff --git a/drivers/gpu/drm/msm/msm_perfcntr.c b/drivers/gpu/drm/msm/msm_perfcntr.c
index 09e6aa4b6620..39bec201d5c9 100644
--- a/drivers/gpu/drm/msm/msm_perfcntr.c
+++ b/drivers/gpu/drm/msm/msm_perfcntr.c
@@ -3,13 +3,44 @@
* Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
*/
+#include "drm/drm_file.h"
+#include "drm/msm_drm.h"
+
+#include "linux/anon_inodes.h"
+#include "linux/gfp_types.h"
+#include "linux/poll.h"
+#include "linux/slab.h"
+
#include "msm_drv.h"
#include "msm_gpu.h"
#include "msm_perfcntr.h"
+#include "adreno/adreno_gpu.h"
+
+/* space used: */
+#define fifo_count(stream) \
+ (CIRC_CNT((stream)->fifo.head, (stream)->fifo.tail, (stream)->fifo_size))
+#define fifo_count_to_end(stream) \
+ (CIRC_CNT_TO_END((stream)->fifo.head, (stream)->fifo.tail, (stream)->fifo_size))
+/* space available: */
+#define fifo_space(stream) \
+ (CIRC_SPACE((stream)->fifo.head, (stream)->fifo.tail, (stream)->fifo_size))
+
static int
msm_perfcntr_resume_locked(struct msm_perfcntr_stream *stream)
{
+ if (!stream)
+ return 0;
+
+ /* Reprogram SEL regs on highest priority rb: */
+ struct msm_ringbuffer *ring = stream->gpu->rb[0];
+
+ queue_work(ring->sched.submit_wq, &stream->sel_work);
+
+ hrtimer_start(&stream->sample_timer,
+ ns_to_ktime(stream->sample_period_ns),
+ HRTIMER_MODE_REL_PINNED);
+
return 0;
}
@@ -23,6 +54,22 @@ msm_perfcntr_resume(struct msm_gpu *gpu)
static void
msm_perfcntr_suspend_locked(struct msm_perfcntr_stream *stream)
{
+ if (!stream)
+ return;
+
+ hrtimer_cancel(&stream->sample_timer);
+ kthread_cancel_work_sync(&stream->sample_work);
+
+ /*
+ * We can't use cancel_work_sync() here, since sel_work acquires
+ * gpu->lock which (a) in suspend path can already be held, or
+ * (b) in release path would invert the order of gpu->lock and
+ * gpu->perfcntr_lock. Either would cause deadlock.
+ */
+ cancel_work(&stream->sel_work);
+
+ stream->sel_fence = ++stream->gpu->perfcntrs->sel_seqno;
+ stream->seqno = 0;
}
void
@@ -32,6 +79,469 @@ msm_perfcntr_suspend(struct msm_gpu *gpu)
msm_perfcntr_suspend_locked(gpu->perfcntrs->stream);
}
+static int
+msm_perfcntrs_stream_release(struct inode *inode, struct file *file)
+{
+ struct msm_perfcntr_stream *stream = file->private_data;
+ struct msm_gpu *gpu = stream->gpu;
+
+ scoped_guard (mutex, &gpu->perfcntr_lock) {
+ struct msm_perfcntr_state *perfcntrs = gpu->perfcntrs;
+
+ msm_perfcntr_suspend_locked(stream);
+ perfcntrs->stream = NULL;
+
+ /* release previously allocated counters: */
+ for (unsigned i = 0; i < gpu->num_perfcntr_groups; i++)
+ perfcntrs->groups[i]->allocated_counters = 0;
+ }
+
+ /*
+ * In the suspend path we use async cancel_work(), to avoid blocking
+ * on sel_work, which acquires gpu->lock (which could deadlock since
+ * other paths acquire gpu->lock before perfcntr_lock) or already
+ * hold gpu->lock.
+ *
+ * But since we are freeing the stream, after dropping perfcntr_lock
+ * we need to block until sel_work is done:
+ */
+ cancel_work_sync(&stream->sel_work);
+
+ kfree(stream->group_idx);
+ kfree(stream->fifo.buf);
+ kfree(stream);
+
+ return 0;
+}
+
+static __poll_t
+msm_perfcntrs_stream_poll(struct file *file, poll_table *wait)
+{
+ struct msm_perfcntr_stream *stream = file->private_data;
+ __poll_t events = 0;
+
+ poll_wait(file, &stream->poll_wq, wait);
+
+ /* Are there samples to read? */
+ if (fifo_count(stream) > 0)
+ events |= EPOLLIN;
+
+ return events;
+}
+
+static ssize_t
+msm_perfcntrs_stream_read(struct file *file, char __user *buf,
+ size_t count, loff_t *ppos)
+{
+ struct msm_perfcntr_stream *stream = file->private_data;
+ int ret;
+
+ if (!(file->f_flags & O_NONBLOCK)) {
+ ret = wait_event_interruptible(stream->poll_wq,
+ fifo_count(stream) > 0);
+ if (ret)
+ return ret;
+ }
+
+ guard(mutex)(&stream->read_lock);
+
+ struct circ_buf *fifo = &stream->fifo;
+ const char *fptr = &fifo->buf[fifo->tail];
+
+ /*
+ * Note that smp_load_acquire() is not strictly required
+ * as CIRC_CNT_TO_END() does not access the head more than
+ * once.
+ */
+ count = min_t(size_t, count, fifo_count_to_end(stream));
+ if (copy_to_user(buf, fptr, count))
+ return -EFAULT;
+
+ smp_store_release(&fifo->tail, (fifo->tail + count) & (stream->fifo_size - 1));
+ *ppos += count;
+
+ return count;
+}
+
+static const struct file_operations stream_fops = {
+ .owner = THIS_MODULE,
+ .release = msm_perfcntrs_stream_release,
+ .poll = msm_perfcntrs_stream_poll,
+ .read = msm_perfcntrs_stream_read,
+};
+
+static void
+sel_worker(struct work_struct *w)
+{
+ struct msm_perfcntr_stream *stream =
+ container_of(w, typeof(*stream), sel_work);
+ struct msm_gpu *gpu = stream->gpu;
+ /* Reprogram SEL regs on highest priority rb: */
+ struct msm_ringbuffer *ring = stream->gpu->rb[0];
+
+ /*
+ * If in the process of resuming, wait for that. Otherwise sel_worker
+ * which is enqueued in the resume path can be scheduled before the
+ * resume completes.
+ */
+ pm_runtime_barrier(&gpu->pdev->dev);
+
+ /*
+ * sel_work could end up scheduled before suspend, but running
+ * after. See msm_perfcntr_suspend_locked()
+ *
+ * So if we end up running sel_work after the GPU is already
+ * suspended, just bail. It will be scheduled again after
+ * the GPU is resumed.
+ */
+ if (!pm_runtime_get_if_active(&gpu->pdev->dev))
+ return;
+
+ scoped_guard (mutex, &gpu->lock) {
+ guard(mutex)(&gpu->perfcntr_lock);
+ if (stream != gpu->perfcntrs->stream)
+ break;
+ msm_gpu_hw_init(gpu);
+ gpu->funcs->perfcntr_configure(gpu, ring, stream);
+ }
+
+ pm_runtime_put_autosuspend(&gpu->pdev->dev);
+}
+
+static void
+sample_write(struct msm_perfcntr_stream *stream, int *head, const void *buf, size_t sz)
+{
+ /*
+ * FIFO size is power-of-two, and guaranteed to have enough space to
+ * fit what we are writing. So we should not hit the wrap-around
+ * point writing things that are power-of-two sized
+ */
+ WARN_ON(CIRC_SPACE_TO_END(*head, stream->fifo.tail, stream->fifo_size) < sz);
+
+ memcpy(&stream->fifo.buf[*head], buf, sz);
+
+ /* Advance head, wrapping around if necessary: */
+ *head = (*head + sz) & (stream->fifo_size - 1);
+}
+
+static void
+sample_write_u32(struct msm_perfcntr_stream *stream, int *head, uint32_t val)
+{
+ sample_write(stream, head, &val, sizeof(val));
+}
+
+static void
+sample_write_u64(struct msm_perfcntr_stream *stream, int *head, uint64_t val)
+{
+ sample_write(stream, head, &val, sizeof(val));
+}
+
+static void
+sample_worker(struct kthread_work *work)
+{
+ struct msm_perfcntr_stream *stream =
+ container_of(work, typeof(*stream), sample_work);
+ struct msm_gpu *gpu = stream->gpu;
+ struct msm_rbmemptrs *memptrs = gpu->rb[0]->memptrs;
+
+ if (memptrs->perfcntr_fence != stream->sel_fence)
+ return;
+
+ /*
+ * Ensure we have enough space to capture a sample period's
+ * worth of data:
+ */
+ if (stream->period_size > fifo_space(stream)) {
+ stream->seqno = 0;
+ return;
+ }
+
+ if (gpu->funcs->perfcntr_flush)
+ gpu->funcs->perfcntr_flush(gpu);
+
+ /* Keep local copy of head to avoid updating fifo until the end: */
+ int head = stream->fifo.head;
+
+ /*
+ * We expect the GPU to be powered at this point, as the timer
+ * and kthread work are canceled/flushed in the suspend path:
+ */
+ sample_write_u64(stream, &head,
+ to_adreno_gpu(gpu)->funcs->get_timestamp(gpu));
+ sample_write_u32(stream, &head, stream->seqno++);
+ sample_write_u32(stream, &head, 0);
+
+ for (unsigned i = 0; i < stream->nr_groups; i++) {
+ unsigned group_idx = msm_perfcntr_group_idx(stream, i);
+ unsigned base = msm_perfcntr_counter_base(stream, group_idx);
+
+ const struct msm_perfcntr_group *group =
+ &gpu->perfcntr_groups[group_idx];
+
+ struct msm_perfcntr_group_state *group_state =
+ gpu->perfcntrs->groups[group_idx];
+
+ unsigned nr = group_state->allocated_counters;
+ for (unsigned j = 0; j < nr; j++) {
+ const struct msm_perfcntr_counter *counter =
+ &group->counters[j + base];
+ uint64_t val = gpu_read64(gpu, counter->counter_reg_lo);
+ sample_write_u64(stream, &head, val);
+ }
+ }
+
+ smp_store_release(&stream->fifo.head, head);
+ wake_up_all(&stream->poll_wq);
+}
+
+static enum hrtimer_restart
+sample_timer(struct hrtimer *hrtimer)
+{
+ struct msm_perfcntr_stream *stream =
+ container_of(hrtimer, typeof(*stream), sample_timer);
+
+ kthread_queue_work(stream->gpu->worker, &stream->sample_work);
+
+ hrtimer_forward_now(hrtimer, ns_to_ktime(stream->sample_period_ns));
+
+ return HRTIMER_RESTART;
+}
+
+static int
+get_group_idx(struct msm_gpu *gpu, const char *name, size_t len)
+{
+ for (unsigned i = 0; i < gpu->num_perfcntr_groups; i++) {
+ const struct msm_perfcntr_group *group =
+ &gpu->perfcntr_groups[i];
+ if (!strncmp(group->name, name, len))
+ return i;
+ }
+
+ return -1;
+}
+
+static int
+get_available_counters(struct msm_gpu *gpu, int group_idx, uint32_t flags)
+{
+ struct msm_perfcntr_state *perfcntrs = gpu->perfcntrs;
+
+ /*
+ * For local counter reservation, anything that is not used by
+ * global perfcntr stream is available:
+ */
+ if (!(flags & MSM_PERFCNTR_STREAM)) {
+ return gpu->perfcntr_groups[group_idx].num_counters -
+ perfcntrs->groups[group_idx]->allocated_counters;
+ }
+
+ /*
+ * For global counter collection, anything that is not reserved by
+ * one or more contexts is available:
+ */
+ guard(mutex)(&gpu->dev->filelist_mutex);
+
+ unsigned reserved_counters = 0;
+ struct drm_file *file;
+
+ list_for_each_entry (file, &gpu->dev->filelist, lhead) {
+ struct msm_context *ctx = file->driver_priv;
+
+ if (!ctx || !ctx->perfctx)
+ continue;
+
+ unsigned n = ctx->perfctx->reserved_counters[group_idx];
+ reserved_counters = max(reserved_counters, n);
+ }
+
+ return gpu->perfcntr_groups[group_idx].num_counters - reserved_counters;
+}
+
+int
+msm_ioctl_perfcntr_config(struct drm_device *dev, void *data, struct drm_file *file)
+{
+ struct msm_drm_private *priv = dev->dev_private;
+ const struct drm_msm_perfcntr_config *args = data;
+ struct msm_context *ctx = file->driver_priv;
+ struct msm_gpu *gpu = priv->gpu;
+ int stream_fd = 0;
+
+ if (!gpu || !gpu->num_perfcntr_groups)
+ return -ENXIO;
+
+ struct msm_perfcntr_state *perfcntrs = gpu->perfcntrs;
+
+ /*
+ * Validate args that don't require locks/power first:
+ */
+
+ if (args->flags & ~MSM_PERFCNTR_FLAGS)
+ return UERR(EINVAL, dev, "invalid flags");
+
+ if (args->nr_groups && !args->group_stride)
+ return UERR(EINVAL, dev, "invalid group_stride");
+
+ if (args->flags & MSM_PERFCNTR_STREAM) {
+ if (!perfmon_capable())
+ return UERR(EPERM, dev, "invalid permissions");
+ if (!args->nr_groups)
+ return UERR(EINVAL, dev, "invalid nr_groups");
+ if (!args->period)
+ return UERR(EINVAL, dev, "invalid sampling period");
+ } else {
+ if (args->period)
+ return UERR(EINVAL, dev, "sampling period not allowed");
+ if (args->bufsz_shift)
+ return UERR(EINVAL, dev, "sample buf size not allowed");
+ }
+
+ if (args->nr_groups && !args->groups)
+ return UERR(EINVAL, dev, "no groups");
+
+ /*
+ * To avoid iterating over the groups multiple times, allocate and setup
+ * both a ctx and global stream object. Only one of the two will be
+ * kept in the end.
+ */
+
+ struct msm_perfcntr_context_state *perfctx __free(kfree) = kzalloc(
+ struct_size(perfctx, reserved_counters, gpu->num_perfcntr_groups),
+ GFP_KERNEL);
+ if (!perfctx)
+ return -ENOMEM;
+
+ struct msm_perfcntr_stream *stream __free(kfree) =
+ kzalloc(sizeof(*stream), GFP_KERNEL);
+ if (!stream)
+ return -ENOMEM;
+
+ uint32_t *group_idx __free(kfree) =
+ kcalloc(args->nr_groups, sizeof(uint32_t), GFP_KERNEL);
+ if (!group_idx)
+ return -ENOMEM;
+
+ stream->gpu = gpu;
+ stream->sample_period_ns = args->period;
+ stream->nr_groups = args->nr_groups;
+ stream->fifo_size = 1 << args->bufsz_shift;
+
+ mutex_init(&stream->read_lock);
+
+ guard(pm_runtime_active_auto)(&gpu->pdev->dev);
+ guard(mutex)(&gpu->perfcntr_lock);
+
+ if (args->flags & MSM_PERFCNTR_STREAM) {
+ if (perfcntrs->stream)
+ return UERR(EBUSY, dev, "perfcntr stream already open");
+ }
+
+ size_t bufsz = 16; /* header size includes seqno and 64b timestamp: */
+ int ret = 0;
+
+ for (unsigned i = 0; i < args->nr_groups; i++) {
+ struct drm_msm_perfcntr_group g = {0};
+ void __user *userptr =
+ u64_to_user_ptr(args->groups + (i * args->group_stride));
+
+ if (copy_from_user(&g, userptr, args->group_stride))
+ return -EFAULT;
+
+ if (g.pad)
+ return UERR(EINVAL, dev, "groups[%d]: invalid pad", i);
+
+ int idx = get_group_idx(gpu, g.group_name, sizeof(g.group_name));
+
+ if (idx < 0)
+ return UERR(EINVAL, dev, "groups[%d]: unknown group", i);
+
+ if (g.nr_countables > gpu->perfcntr_groups[idx].num_counters)
+ return UERR(EINVAL, dev, "groups[%d]: too many counters", i);
+
+ if (args->flags & MSM_PERFCNTR_STREAM) {
+ if (g.nr_countables && !g.countables)
+ return UERR(EINVAL, dev, "groups[%d]: no countables", i);
+ } else {
+ if (g.countables)
+ return UERR(EINVAL, dev, "groups[%d]: countables should be NULL", i);
+ }
+
+ int avail_counters = get_available_counters(gpu, idx, args->flags);
+ if (g.nr_countables > avail_counters) {
+ /*
+ * Defer error return until we process all groups, in
+ * case there are other E2BIG groups:
+ */
+ ret = UERR(E2BIG, dev, "groups[%d]: too few counters available", i);
+
+ if (args->flags & MSM_PERFCNTR_UPDATE) {
+ /* Let userspace know how many counters are actually avail: */
+ g.nr_countables = avail_counters;
+ if (copy_to_user(userptr, &g, args->group_stride))
+ return -EFAULT;
+ }
+ }
+
+ group_idx[i] = idx;
+ perfctx->reserved_counters[idx] = g.nr_countables;
+
+ if (args->flags & MSM_PERFCNTR_STREAM) {
+ perfcntrs->groups[idx]->allocated_counters = g.nr_countables;
+
+ size_t sz = sizeof(uint32_t) * g.nr_countables;
+ void __user *userptr = u64_to_user_ptr(g.countables);
+
+ if (copy_from_user(perfcntrs->groups[idx]->countables, userptr, sz))
+ return -EFAULT;
+
+ /* Samples are 64b per countable: */
+ bufsz += 2 * sz;
+ }
+ }
+
+ if (ret)
+ return ret;
+
+ if (args->flags & MSM_PERFCNTR_STREAM) {
+ /*
+ * Validate requested buffer size is large enough for at least
+ * a single sample period.
+ *
+ * Note the circ_buf implementation needs to be 1 byte larger
+ * than max it can hold (see CIRC_SPACE()).
+ */
+ if (bufsz >= stream->fifo_size)
+ return UERR(ETOOSMALL, dev, "required buffer size: %zu", bufsz);
+
+ stream->period_size = bufsz;
+
+ void *buf __free(kfree) =
+ kmalloc(1 << args->bufsz_shift, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+ stream_fd = anon_inode_getfd("[msm_perfcntrs]", &stream_fops, stream, 0);
+ if (stream_fd < 0)
+ return stream_fd;
+
+ INIT_WORK(&stream->sel_work, sel_worker);
+ kthread_init_work(&stream->sample_work, sample_worker);
+ init_waitqueue_head(&stream->poll_wq);
+ hrtimer_setup(&stream->sample_timer, sample_timer,
+ CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+
+ stream->sel_fence = ++perfcntrs->sel_seqno;
+ stream->group_idx = no_free_ptr(group_idx);
+ stream->fifo.buf = no_free_ptr(buf);
+ perfcntrs->stream = no_free_ptr(stream);
+
+ msm_perfcntr_resume_locked(perfcntrs->stream);
+ } else {
+ kfree(ctx->perfctx);
+ ctx->perfctx = no_free_ptr(perfctx);
+ }
+
+ return stream_fd;
+}
+
/**
* msm_perfcntr_group_idx - map idx of perfcntr group to group_idx
* @stream: The global perfcntr stream
diff --git a/drivers/gpu/drm/msm/msm_perfcntr.h b/drivers/gpu/drm/msm/msm_perfcntr.h
index 14506bc37d05..198856b18445 100644
--- a/drivers/gpu/drm/msm/msm_perfcntr.h
+++ b/drivers/gpu/drm/msm/msm_perfcntr.h
@@ -7,6 +7,11 @@
#define __MSM_PERFCNTR_H__
#include "linux/array_size.h"
+#include "linux/circ_buf.h"
+#include "linux/hrtimer.h"
+#include "linux/kthread.h"
+#include "linux/wait.h"
+#include "linux/workqueue.h"
#include "adreno_common.xml.h"
@@ -42,12 +47,49 @@ struct msm_perfcntr_stream {
/** @gpu: Back-link to the GPU */
struct msm_gpu *gpu;
+ /** @sample_timer: Timer to sample counters */
+ struct hrtimer sample_timer;
+
+ /** @poll_wq: Wait queue for waiting for OA data to be available */
+ wait_queue_head_t poll_wq;
+
+ /** @sample_period_ns: Sampling period */
+ uint64_t sample_period_ns;
+
/** @nr_groups: # of counter groups with enabled counters */
uint32_t nr_groups;
+ /** @seqno: counter for collected samples */
+ uint32_t seqno;
+
/** @sel_fence: Fence for SEL reg programming */
uint32_t sel_fence;
+ /**
+ * @sel_work: Worker for SEL reg programming
+ *
+ * Initial SEL reg programming (as opposed to restoring the SEL
+ * regs on runpm resume) must run on the same ordered wq as is
+ * used by drm_sched, to serialize it with GEM_SUBMITs written
+ * into the same ringbuffer.
+ */
+ struct work_struct sel_work;
+
+ /**
+ * @sample_work: Worker for collecting samples
+ */
+ struct kthread_work sample_work;
+
+ /**
+ * @read_lock:
+ *
+ * Fifo access is synchronied on the producer side by virtue
+ * of there being a single timer collecting samples and writing
+ * into the fifo. It is protected on the consumer side by
+ * @read_lock.
+ */
+ struct mutex read_lock;
+
/**
* @group_idx: array of nr_groups
*
@@ -56,6 +98,15 @@ struct msm_perfcntr_stream {
* the ioctl call that setup the stream
*/
uint32_t *group_idx;
+
+ /** @fifo: circular buffer for samples */
+ struct circ_buf fifo;
+
+ /** @fifo_size: circular buffer size */
+ size_t fifo_size;
+
+ /** @period_size: size of data for single sampling period */
+ size_t period_size;
};
uint32_t msm_perfcntr_group_idx(const struct msm_perfcntr_stream *stream, uint32_t n);
diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index b99098792371..289cf228b873 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -491,6 +491,52 @@ struct drm_msm_submitqueue_query {
__u32 pad;
};
+#define MSM_PERFCNTR_STREAM 0x00000001
+#define MSM_PERFCNTR_UPDATE 0x00000002
+#define MSM_PERFCNTR_FLAGS ( \
+ MSM_PERFCNTR_STREAM | \
+ MSM_PERFCNTR_UPDATE | \
+ 0)
+
+struct drm_msm_perfcntr_group {
+ char group_name[16];
+ __u32 nr_countables;
+ __u32 pad;
+ __u64 countables; /* pointer to an array of nr_countables u32 */
+};
+
+/*
+ * Note, for MSM_PERFCNTR_STREAM, the ioctl returns an fd to read recorded
+ * counters. This only works because the ioctl is DRM_IOW(), if we returned
+ * a out param in the ioctl struct the copy_to_user() (in drm_ioctl())
+ * could fault, causing us to leak the fd.
+ *
+ * If the ioctl returns with error E2BIG, that means more counters/countables
+ * are requested than are currently available. If MSM_PERFCNTR_UPDATE flag
+ * is set, drm_msm_perfcntr_group::nr_countables will be updated to return
+ * the actual # of counters available.
+ *
+ * The data read from the has the following format for each sampling period:
+ *
+ * uint64_t timestamp; // CP_ALWAYS_ON_COUNTER captured at sample time
+ * uint32_t seqno; // increments by 1 each period, reset to 0 on discontinuity
+ * uint32_t mbz; // pad out counters to 64b
+ * struct {
+ * uint64_t counter[nr_countables];
+ * } groups[nr_groups];
+ *
+ * The ordering of groups and counters matches the order in PERFCNTR_CONFIG
+ * ioctl.
+ */
+struct drm_msm_perfcntr_config {
+ __u32 flags; /* bitmask of MSM_PERFCNTR_x */
+ __u32 nr_groups; /* # of entries in groups array */
+ __u64 groups; /* pointer to array of drm_msm_perfcntr_group */
+ __u64 period; /* sampling period in ns */
+ __u32 bufsz_shift; /* sample buffer size in bytes is 1<<bufsz_shift */
+ __u32 group_stride; /* sizeof(struct drm_msm_perfcntr_group) */
+};
+
#define DRM_MSM_GET_PARAM 0x00
#define DRM_MSM_SET_PARAM 0x01
#define DRM_MSM_GEM_NEW 0x02
@@ -507,6 +553,7 @@ struct drm_msm_submitqueue_query {
#define DRM_MSM_SUBMITQUEUE_CLOSE 0x0B
#define DRM_MSM_SUBMITQUEUE_QUERY 0x0C
#define DRM_MSM_VM_BIND 0x0D
+#define DRM_MSM_PERFCNTR_CONFIG 0x0E
#define DRM_IOCTL_MSM_GET_PARAM DRM_IOWR(DRM_COMMAND_BASE + DRM_MSM_GET_PARAM, struct drm_msm_param)
#define DRM_IOCTL_MSM_SET_PARAM DRM_IOW (DRM_COMMAND_BASE + DRM_MSM_SET_PARAM, struct drm_msm_param)
@@ -521,6 +568,7 @@ struct drm_msm_submitqueue_query {
#define DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE DRM_IOW (DRM_COMMAND_BASE + DRM_MSM_SUBMITQUEUE_CLOSE, __u32)
#define DRM_IOCTL_MSM_SUBMITQUEUE_QUERY DRM_IOW (DRM_COMMAND_BASE + DRM_MSM_SUBMITQUEUE_QUERY, struct drm_msm_submitqueue_query)
#define DRM_IOCTL_MSM_VM_BIND DRM_IOWR(DRM_COMMAND_BASE + DRM_MSM_VM_BIND, struct drm_msm_vm_bind)
+#define DRM_IOCTL_MSM_PERFCNTR_CONFIG DRM_IOW (DRM_COMMAND_BASE + DRM_MSM_PERFCNTR_CONFIG, struct drm_msm_perfcntr_config)
#if defined(__cplusplus)
}
--
2.54.0
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH v3 14/16] drm/msm/a6xx: Increase pwrup_reglist size
2026-05-04 19:06 [PATCH v3 00/16] drm/msm: Add PERFCNTR_CONFIG ioctl Rob Clark
` (11 preceding siblings ...)
2026-05-04 19:06 ` [PATCH v3 13/16] drm/msm: Add PERFCNTR_CONFIG ioctl Rob Clark
@ 2026-05-04 19:06 ` Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 15/16] drm/msm/a6xx: Append SEL regs to dyn pwrup reglist Rob Clark
` (2 subsequent siblings)
15 siblings, 1 reply; 34+ messages in thread
From: Rob Clark @ 2026-05-04 19:06 UTC (permalink / raw)
To: dri-devel
Cc: linux-arm-msm, freedreno, Akhil P Oommen, Rob Clark, Sean Paul,
Konrad Dybcio, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang,
Marijn Suijten, David Airlie, Simona Vetter, open list
To make room for appending SEL reg programming. Without increasing the
size, we would overflow the pwrup_reglist at ~190 counters on gen8.
Or possibly fewer, considering that some gen8 counter groups also have
separate slice vs unslice SELectors.
Signed-off-by: Rob Clark <robin.clark@oss.qualcomm.com>
---
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index a329d20033d7..61c6b0e781ce 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -1183,7 +1183,7 @@ static int a6xx_ucode_load(struct msm_gpu *gpu)
msm_gem_object_set_name(a6xx_gpu->shadow_bo, "shadow");
}
- a6xx_gpu->pwrup_reglist_ptr = msm_gem_kernel_new(gpu->dev, PAGE_SIZE,
+ a6xx_gpu->pwrup_reglist_ptr = msm_gem_kernel_new(gpu->dev, 2 * PAGE_SIZE,
MSM_BO_WC | MSM_BO_MAP_PRIV,
gpu->vm, &a6xx_gpu->pwrup_reglist_bo,
&a6xx_gpu->pwrup_reglist_iova);
--
2.54.0
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH v3 15/16] drm/msm/a6xx: Append SEL regs to dyn pwrup reglist
2026-05-04 19:06 [PATCH v3 00/16] drm/msm: Add PERFCNTR_CONFIG ioctl Rob Clark
` (12 preceding siblings ...)
2026-05-04 19:06 ` [PATCH v3 14/16] drm/msm/a6xx: Increase pwrup_reglist size Rob Clark
@ 2026-05-04 19:06 ` Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 16/16] drm/msm/a6xx: Allow IFPC with perfcntr stream Rob Clark
2026-05-04 22:06 ` Claude review: drm/msm: Add PERFCNTR_CONFIG ioctl Claude Code Review Bot
15 siblings, 1 reply; 34+ messages in thread
From: Rob Clark @ 2026-05-04 19:06 UTC (permalink / raw)
To: dri-devel
Cc: linux-arm-msm, freedreno, Akhil P Oommen, Rob Clark, Sean Paul,
Konrad Dybcio, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang,
Marijn Suijten, David Airlie, Simona Vetter, open list
This is needed so that SEL reg values are restored on exit from IFPC.
Signed-off-by: Rob Clark <robin.clark@oss.qualcomm.com>
---
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 82 +++++++++++++++++++++++++--
drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 11 +++-
drivers/gpu/drm/msm/adreno/a8xx_gpu.c | 1 +
3 files changed, 87 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 61c6b0e781ce..e047ed550347 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -946,6 +946,7 @@ static void a7xx_patch_pwrup_reglist(struct msm_gpu *gpu)
A7XX_CP_APERTURE_CNTL_HOST_PIPE(PIPE_NONE));
}
lock->dynamic_list_len = dyn_pwrup_reglist_count;
+ a6xx_gpu->dynamic_sel_reglist_offset = dyn_pwrup_reglist_count;
}
static int a7xx_preempt_start(struct msm_gpu *gpu)
@@ -2535,11 +2536,60 @@ static bool a6xx_progress(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
return progress;
}
+static void
+perfcntr_select(struct msm_ringbuffer *ring, enum adreno_pipe pipe,
+ uint32_t regidx, uint32_t *countables, uint32_t nr,
+ uint32_t **reglist)
+{
+ OUT_PKT4(ring, regidx, nr);
+ for (unsigned i = 0; i < nr; i++)
+ OUT_RING(ring, countables[i]);
+
+ if (!*reglist)
+ return;
+
+ for (unsigned i = 0; i < nr; i++) {
+ /*
+ * Bitfield is in same position on a7xx, but only 2 bits..
+ * which is sufficient for NONE/BR/BV:
+ */
+ *(*reglist)++ = A8XX_CP_APERTURE_CNTL_HOST_PIPEID(pipe);
+ *(*reglist)++ = regidx + i;
+ *(*reglist)++ = countables[i];
+ }
+}
+
static void
a6xx_perfcntr_configure(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
const struct msm_perfcntr_stream *stream)
{
+ struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
enum adreno_pipe pipe = PIPE_NONE;
+ uint32_t *reglist = NULL;
+ uint32_t *reglist_sel_start;
+
+ if (to_adreno_gpu(gpu)->info->family >= ADRENO_7XX_GEN1) {
+ WARN_ON(!a6xx_gpu->pwrup_reglist_emitted);
+
+ struct cpu_gpu_lock *lock = a6xx_gpu->pwrup_reglist_ptr;
+ int off = (2 * lock->ifpc_list_len) +
+ (2 * lock->preemption_list_len) +
+ (3 * a6xx_gpu->dynamic_sel_reglist_offset);
+
+ reglist = (uint32_t *)&lock->regs[0];
+ reglist += off;
+ reglist_sel_start = reglist;
+
+ /* Clear any previously configured SEL reg entries: */
+ lock->dynamic_list_len = a6xx_gpu->dynamic_sel_reglist_offset;
+
+ /*
+ * Ensure CP sees the dynamic_list_len update before we
+ * start modifying the SEL entries:
+ */
+ wmb();
+ }
for (unsigned i = 0; i < stream->nr_groups; i++) {
unsigned group_idx = msm_perfcntr_group_idx(stream, i);
@@ -2567,17 +2617,15 @@ a6xx_perfcntr_configure(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
const struct msm_perfcntr_counter *counter = &group->counters[base];
unsigned nr = group_state->allocated_counters;
- OUT_PKT4(ring, counter->select_reg, nr);
- for (unsigned c = 0; c < nr; c++)
- OUT_RING(ring, group_state->countables[c]);
+ perfcntr_select(ring, pipe, counter->select_reg,
+ group_state->countables, nr, ®list);
for (unsigned s = 0; s < ARRAY_SIZE(counter->slice_select_regs); s++) {
if (!counter->slice_select_regs[s])
break;
- OUT_PKT4(ring, counter->slice_select_regs[s], nr);
- for (unsigned c = 0; c < nr; c++)
- OUT_RING(ring, group_state->countables[c]);
+ perfcntr_select(ring, pipe, counter->slice_select_regs[s],
+ group_state->countables, nr, ®list);
}
}
@@ -2591,6 +2639,28 @@ a6xx_perfcntr_configure(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
OUT_RING(ring, upper_32_bits(rbmemptr(ring, perfcntr_fence)));
OUT_RING(ring, stream->sel_fence);
+ /*
+ * Update the pwrup reglist size before flushing. Kgsl does a shared-
+ * memory spinlock dance with SQE to avoid racing with IFPC exit. But
+ * we can skip that since the ringbuffer programming will be executed
+ * by SQE after dynamic reglist size is updated. So even if we lose
+ * the race, the register programming in the rb will overwrite/correct
+ * the SEL regs restored by SQE on IFPC exit, before sampling begins.
+ */
+ if (reglist) {
+ struct cpu_gpu_lock *lock = a6xx_gpu->pwrup_reglist_ptr;
+ unsigned nr_regs = (reglist_sel_start - reglist) / 3;
+
+ /*
+ * Ensure CP sees updates to the pwrup_reglist before it
+ * sees the new (increased) length:
+ */
+ wmb();
+
+ /* Update dynamic reglist len to include new SEL reg programming: */
+ lock->dynamic_list_len = a6xx_gpu->dynamic_sel_reglist_offset + nr_regs;
+ }
+
a6xx_flush_yield(gpu, ring);
/* Check to see if we need to start preemption */
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
index 3491a24a9320..f3cc9478b079 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
@@ -21,17 +21,19 @@ struct cpu_gpu_lock {
uint32_t cpu_req;
uint32_t turn;
union {
+ /* a6xx: */
struct {
uint16_t list_length;
uint16_t list_offset;
};
+ /* a7xx+: */
struct {
uint8_t ifpc_list_len;
uint8_t preemption_list_len;
uint16_t dynamic_list_len;
};
};
- uint64_t regs[62];
+ uint64_t regs[];
};
/**
@@ -100,6 +102,13 @@ struct a6xx_gpu {
uint64_t pwrup_reglist_iova;
bool pwrup_reglist_emitted;
+ /*
+ * Offset of start of SEL regs appended to pwrup_reglist. This
+ * is equal to lock->dynamic_list_len if no SEL regs are appended
+ * to the end of the dynamic reglist.
+ */
+ uint16_t dynamic_sel_reglist_offset;
+
bool has_whereami;
void __iomem *llc_mmio;
diff --git a/drivers/gpu/drm/msm/adreno/a8xx_gpu.c b/drivers/gpu/drm/msm/adreno/a8xx_gpu.c
index 6c040f718176..2ce7c6ac4521 100644
--- a/drivers/gpu/drm/msm/adreno/a8xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a8xx_gpu.c
@@ -468,6 +468,7 @@ static void a8xx_patch_pwrup_reglist(struct msm_gpu *gpu)
}
lock->dynamic_list_len = dyn_pwrup_reglist_count;
+ a6xx_gpu->dynamic_sel_reglist_offset = dyn_pwrup_reglist_count;
done:
a8xx_aperture_clear(gpu);
--
2.54.0
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH v3 16/16] drm/msm/a6xx: Allow IFPC with perfcntr stream
2026-05-04 19:06 [PATCH v3 00/16] drm/msm: Add PERFCNTR_CONFIG ioctl Rob Clark
` (13 preceding siblings ...)
2026-05-04 19:06 ` [PATCH v3 15/16] drm/msm/a6xx: Append SEL regs to dyn pwrup reglist Rob Clark
@ 2026-05-04 19:06 ` Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 22:06 ` Claude review: drm/msm: Add PERFCNTR_CONFIG ioctl Claude Code Review Bot
15 siblings, 1 reply; 34+ messages in thread
From: Rob Clark @ 2026-05-04 19:06 UTC (permalink / raw)
To: dri-devel
Cc: linux-arm-msm, freedreno, Akhil P Oommen, Rob Clark, Sean Paul,
Konrad Dybcio, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang,
Marijn Suijten, David Airlie, Simona Vetter, open list
Now that the dynamic pwrup reglist has SEL reg values to restore
appended, so that SEL regs are restored on IFPC exit, we can stop
completely disabling IFPC while global counter sampling is active.
To accomplish this, we re-use sysprof_setup() with a force_on param
to inhibit IFPC specifically while the counter regs are being read,
while leaving IFPC enabled the rest of the time.
Signed-off-by: Rob Clark <robin.clark@oss.qualcomm.com>
---
drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 4 ++--
drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 2 +-
drivers/gpu/drm/msm/msm_gpu.h | 10 ++--------
drivers/gpu/drm/msm/msm_perfcntr.c | 8 ++++++++
drivers/gpu/drm/msm/msm_submitqueue.c | 2 +-
5 files changed, 14 insertions(+), 12 deletions(-)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index aba08fb76249..3fe0f1cda46a 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -2034,9 +2034,9 @@ static int a6xx_gmu_get_irq(struct a6xx_gmu *gmu, struct platform_device *pdev,
return irq;
}
-void a6xx_gmu_sysprof_setup(struct msm_gpu *gpu)
+void a6xx_gmu_sysprof_setup(struct msm_gpu *gpu, bool force_on)
{
- bool sysprof = msm_gpu_sysprof_no_ifpc(gpu);
+ bool sysprof = msm_gpu_sysprof_no_ifpc(gpu) | force_on;
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
struct a6xx_gmu *gmu = &a6xx_gpu->gmu;
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
index f3cc9478b079..eecc71843bed 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
@@ -280,7 +280,7 @@ void a6xx_gmu_clear_oob(struct a6xx_gmu *gmu, enum a6xx_gmu_oob_state state);
int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node);
int a6xx_gmu_wrapper_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node);
void a6xx_gmu_remove(struct a6xx_gpu *a6xx_gpu);
-void a6xx_gmu_sysprof_setup(struct msm_gpu *gpu);
+void a6xx_gmu_sysprof_setup(struct msm_gpu *gpu, bool force_on);
void a6xx_preempt_init(struct msm_gpu *gpu);
void a6xx_preempt_hw_init(struct msm_gpu *gpu);
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 67f1e84eb631..93124c032dd4 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -93,7 +93,7 @@ struct msm_gpu_funcs {
* for cmdstream that is buffered in this FIFO upstream of the CP fw.
*/
bool (*progress)(struct msm_gpu *gpu, struct msm_ringbuffer *ring);
- void (*sysprof_setup)(struct msm_gpu *gpu);
+ void (*sysprof_setup)(struct msm_gpu *gpu, bool force_on);
/* Configure perfcntr SELect regs: */
void (*perfcntr_configure)(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
@@ -378,13 +378,7 @@ msm_gpu_sysprof_no_perfcntr_zap(struct msm_gpu *gpu)
static inline bool
msm_gpu_sysprof_no_ifpc(struct msm_gpu *gpu)
{
- /*
- * For now, this is the same condition as disabling perfcntr clears
- * on context switch. But once kernel perfcntr IFPC support is in
- * place, we will only need to disable IFPC for legacy userspace
- * setting SYSPROF param.
- */
- return msm_gpu_sysprof_no_perfcntr_zap(gpu);
+ return refcount_read(&gpu->sysprof_active) > 1;
}
/*
diff --git a/drivers/gpu/drm/msm/msm_perfcntr.c b/drivers/gpu/drm/msm/msm_perfcntr.c
index 39bec201d5c9..d8ec65fa25f0 100644
--- a/drivers/gpu/drm/msm/msm_perfcntr.c
+++ b/drivers/gpu/drm/msm/msm_perfcntr.c
@@ -256,6 +256,10 @@ sample_worker(struct kthread_work *work)
return;
}
+ /* Inhibit IFPC while accessing registers: */
+ if (gpu->funcs->sysprof_setup)
+ gpu->funcs->sysprof_setup(gpu, true);
+
if (gpu->funcs->perfcntr_flush)
gpu->funcs->perfcntr_flush(gpu);
@@ -290,6 +294,10 @@ sample_worker(struct kthread_work *work)
}
}
+ /* Re-enable IFPC: */
+ if (gpu->funcs->sysprof_setup)
+ gpu->funcs->sysprof_setup(gpu, false);
+
smp_store_release(&stream->fifo.head, head);
wake_up_all(&stream->poll_wq);
}
diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c
index a58fe41602c6..1a5a77b28016 100644
--- a/drivers/gpu/drm/msm/msm_submitqueue.c
+++ b/drivers/gpu/drm/msm/msm_submitqueue.c
@@ -42,7 +42,7 @@ int msm_context_set_sysprof(struct msm_context *ctx, struct msm_gpu *gpu, int sy
/* Some gpu families require additional setup for sysprof */
if (gpu->funcs->sysprof_setup)
- gpu->funcs->sysprof_setup(gpu);
+ gpu->funcs->sysprof_setup(gpu, false);
ctx->sysprof = sysprof;
--
2.54.0
^ permalink raw reply related [flat|nested] 34+ messages in thread
* Claude review: drm/msm: Add PERFCNTR_CONFIG ioctl
2026-05-04 19:06 [PATCH v3 00/16] drm/msm: Add PERFCNTR_CONFIG ioctl Rob Clark
` (14 preceding siblings ...)
2026-05-04 19:06 ` [PATCH v3 16/16] drm/msm/a6xx: Allow IFPC with perfcntr stream Rob Clark
@ 2026-05-04 22:06 ` Claude Code Review Bot
15 siblings, 0 replies; 34+ messages in thread
From: Claude Code Review Bot @ 2026-05-04 22:06 UTC (permalink / raw)
To: dri-devel-reviews
Overall Series Review
Subject: drm/msm: Add PERFCNTR_CONFIG ioctl
Author: Rob Clark <robin.clark@oss.qualcomm.com>
Patches: 16
Reviewed: 2026-05-05T08:06:01.354257
---
This is a well-structured 16-patch series from Rob Clark adding a new `PERFCNTR_CONFIG` ioctl to the MSM/Adreno DRM driver. The series cleanly replaces the old debugfs-based performance counter infrastructure with a proper UAPI for both global (kernel-sampled) and local (per-cmdstream, UMD-managed) counter collection. The architecture is sound: global counter streaming via an fd with circular buffer, local reservations tracked per-context, conflict detection between the two modes, and proper IFPC handling via the pwrup reglist for a7xx+.
The code is v3 and has already had some review (Dmitry's R-b on patches 1-2). The series is generally in good shape, but I have a few issues ranging from a behavioral change introduced by a refactoring to some locking/error-path concerns.
**Key concerns:**
1. The `a6xx_flush_yield` refactoring silently changes behavior in the `preempt_start` paths (dword[3] changes from 0x00 to 0x01).
2. The `nr_regs` calculation in patch 15 has the subtraction operands reversed.
3. `guard(pm_runtime_active_auto)` does not check the return value of `pm_runtime_get_sync()`, which can fail.
4. The IOCTL has no upper bound on `bufsz_shift`, allowing potentially very large kernel allocations.
5. Missing `NULL` check on `gpu->perfcntrs` in `msm_gpu_sysprof_no_perfcntr_zap()` before the series finishes initializing perfcntrs for all GPU generations.
---
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 34+ messages in thread
* Claude review: drm/msm: Remove obsolete perf infrastructure
2026-05-04 19:06 ` [PATCH v3 01/16] drm/msm: Remove obsolete perf infrastructure Rob Clark
@ 2026-05-04 22:06 ` Claude Code Review Bot
0 siblings, 0 replies; 34+ messages in thread
From: Claude Code Review Bot @ 2026-05-04 22:06 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Clean removal of the old debugfs-based `msm_perf.c` and all its plumbing (`perf_lock`, `perfcntr_active`, `last_cntrs[]`, `msm_gpu_perfcntr` struct, etc.) from `msm_gpu.h`, `msm_gpu.c`, the per-GPU init files, and `msm_debugfs.c`.
No issues. Already has R-b from Dmitry.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 34+ messages in thread
* Claude review: drm/msm: Allow CAP_PERFMON for setting SYSPROF
2026-05-04 19:06 ` [PATCH v3 02/16] drm/msm: Allow CAP_PERFMON for setting SYSPROF Rob Clark
@ 2026-05-04 22:06 ` Claude Code Review Bot
0 siblings, 0 replies; 34+ messages in thread
From: Claude Code Review Bot @ 2026-05-04 22:06 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
One-liner changing `capable(CAP_SYS_ADMIN)` to `perfmon_capable()` in `adreno_set_param()`:
```c
- if (!capable(CAP_SYS_ADMIN))
+ if (!perfmon_capable())
```
Good change, aligns with i915/xe precedent. Already has R-b from Dmitry.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 34+ messages in thread
* Claude review: drm/msm/registers: Sync gen_header.py from mesa
2026-05-04 19:06 ` [PATCH v3 04/16] drm/msm/registers: Sync gen_header.py from mesa Rob Clark
@ 2026-05-04 22:06 ` Claude Code Review Bot
0 siblings, 0 replies; 34+ messages in thread
From: Claude Code Review Bot @ 2026-05-04 22:06 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Large sync of the code generator from mesa. Not reviewed in detail as it's a tool sync.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 34+ messages in thread
* Claude review: drm/msm/registers: Add perfcntr json
2026-05-04 19:06 ` [PATCH v3 05/16] drm/msm/registers: Add perfcntr json Rob Clark
@ 2026-05-04 22:06 ` Claude Code Review Bot
0 siblings, 0 replies; 34+ messages in thread
From: Claude Code Review Bot @ 2026-05-04 22:06 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Adds JSON files describing performance counter groups for a2xx, a5xx, a6xx, a7xx, and a8xx. Machine-generated data, no issues.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 34+ messages in thread
* Claude review: drm/msm: Add a6xx+ perfcntr tables
2026-05-04 19:06 ` [PATCH v3 06/16] drm/msm: Add a6xx+ perfcntr tables Rob Clark
@ 2026-05-04 22:06 ` Claude Code Review Bot
0 siblings, 0 replies; 34+ messages in thread
From: Claude Code Review Bot @ 2026-05-04 22:06 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Wires up the generated perfcntr tables. The family range checks look correct:
```c
if ((ADRENO_6XX_GEN1 <= config->info->family) &&
(config->info->family <= ADRENO_6XX_GEN4)) {
```
Minor style nit: the outer parentheses around each condition in the `if` are unnecessary per kernel style but harmless.
The `msm_perfcntr.h` header introduces the `msm_perfcntr_group` struct with `name`, `pipe`, `num_counters`, `num_countables`, `countables`, and `counters` — the foundation for everything else. Looks clean.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 34+ messages in thread
* Claude review: drm/msm: Add sysprof accessors
2026-05-04 19:06 ` [PATCH v3 07/16] drm/msm: Add sysprof accessors Rob Clark
@ 2026-05-04 22:06 ` Claude Code Review Bot
0 siblings, 0 replies; 34+ messages in thread
From: Claude Code Review Bot @ 2026-05-04 22:06 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Adds `msm_gpu_sysprof_no_perfcntr_zap()` and `msm_gpu_sysprof_no_ifpc()` inline helpers. These are preparation for later patches. Clean.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 34+ messages in thread
* Claude review: drm/msm/a6xx: Add yield & flush helper
2026-05-04 19:06 ` [PATCH v3 08/16] drm/msm/a6xx: Add yield & flush helper Rob Clark
@ 2026-05-04 22:06 ` Claude Code Review Bot
0 siblings, 0 replies; 34+ messages in thread
From: Claude Code Review Bot @ 2026-05-04 22:06 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
**Issue: Behavioral change in preempt_start paths.**
The new `a6xx_flush_yield()` helper hardcodes dword[3] as `0x01`:
```c
OUT_RING(ring, 0x01);
```
But the `a7xx_preempt_start()` and `a8xx_preempt_start()` code that is being replaced used `0x00` for dword[3]:
```c
- OUT_RING(ring, 0x00); /* dword[3] was 0x00 in preempt_start */
```
Meanwhile the `a7xx_submit()` path already used `0x01`. So this refactoring silently changes the preempt_start behavior. The comment says "Data value - not used if the address above is 0" so this is likely benign, but it should either be called out in the commit message as an intentional change, or the helper should accept the data value as a parameter to preserve the original behavior.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 34+ messages in thread
* Claude review: drm/msm: Add per-context perfcntr state
2026-05-04 19:06 ` [PATCH v3 09/16] drm/msm: Add per-context perfcntr state Rob Clark
@ 2026-05-04 22:06 ` Claude Code Review Bot
0 siblings, 0 replies; 34+ messages in thread
From: Claude Code Review Bot @ 2026-05-04 22:06 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Adds `msm_perfcntr_context_state` with a flex array `reserved_counters[]` and a `dummy` member:
```c
struct msm_perfcntr_context_state {
unsigned dummy;
unsigned reserved_counters[];
};
```
The `dummy` field with a comment about compilers disliking flex-only structs is fine — this is a known C limitation. The `kfree(ctx->perfctx)` in `__msm_context_destroy` is correct.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 34+ messages in thread
* Claude review: drm/msm: Add basic perfcntr infrastructure
2026-05-04 19:06 ` [PATCH v3 10/16] drm/msm: Add basic perfcntr infrastructure Rob Clark
@ 2026-05-04 22:06 ` Claude Code Review Bot
0 siblings, 0 replies; 34+ messages in thread
From: Claude Code Review Bot @ 2026-05-04 22:06 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
This adds `msm_perfcntr.c` with init/cleanup/resume/suspend stubs and group_idx/counter_base helpers.
**Issue: `msm_gpu_sysprof_no_perfcntr_zap()` NULL dereference.**
The inline function is changed to:
```c
return (refcount_read(&gpu->sysprof_active) > 1) ||
READ_ONCE(gpu->perfcntrs->stream);
```
But `gpu->perfcntrs` is only set when `gpu->num_perfcntr_groups > 0` (which is a6xx+ only). For older GPUs (a2xx-a5xx), `gpu->perfcntrs` will be NULL, causing a NULL pointer dereference if this function is ever called on those GPUs. This should be guarded:
```c
return (refcount_read(&gpu->sysprof_active) > 1) ||
(gpu->perfcntrs && READ_ONCE(gpu->perfcntrs->stream));
```
**Minor: `devm_kzalloc`/`devm_kfree` usage.** Using `devm_*` allocations then manually freeing in `msm_perfcntr_cleanup()` is a bit unusual. The `devm_kfree` calls in cleanup are necessary since the device outlives the cleanup, but mixing devm with manual lifetime management can be confusing. Not a bug, just worth noting.
**Minor: `msm_perfcntr_resume()` doesn't check `gpu->perfcntrs` for NULL** before accessing `gpu->perfcntrs->stream`, same concern as above.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 34+ messages in thread
* Claude review: drm/msm/a6xx+: Add support to configure perfcntrs
2026-05-04 19:06 ` [PATCH v3 11/16] drm/msm/a6xx+: Add support to configure perfcntrs Rob Clark
@ 2026-05-04 22:06 ` Claude Code Review Bot
0 siblings, 0 replies; 34+ messages in thread
From: Claude Code Review Bot @ 2026-05-04 22:06 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
The `a6xx_perfcntr_configure()` function programs SEL registers via the ringbuffer and writes a `perfcntr_fence` to signal completion. The code correctly handles pipe switching for BR/BV/BOTH threads.
The `sel_fence` field added to `msm_perfcntr_stream` is used to gate sampling — the sample_worker checks `memptrs->perfcntr_fence != stream->sel_fence` before collecting data. Good design to avoid sampling stale counter configurations.
The `perfcntr_fence` field added to `msm_rbmemptrs` is fine — it's appended at the end of the struct.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 34+ messages in thread
* Claude review: drm/msm/a8xx: Add perfcntr flush sequence
2026-05-04 19:06 ` [PATCH v3 12/16] drm/msm/a8xx: Add perfcntr flush sequence Rob Clark
@ 2026-05-04 22:06 ` Claude Code Review Bot
0 siblings, 0 replies; 34+ messages in thread
From: Claude Code Review Bot @ 2026-05-04 22:06 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Clean a8xx-specific flush implementation:
```c
gpu_write(gpu, REG_A8XX_RBBM_PERFCTR_FLUSH_HOST_CMD, BIT(0));
gpu_write(gpu, REG_A8XX_RBBM_SLICE_PERFCTR_FLUSH_HOST_CMD, BIT(0));
wmb();
if (gpu_poll_timeout(...)) { dev_err(...); }
```
The `wmb()` before polling is correct. The 100us poll interval and 100ms timeout are reasonable. Only wired up for a8xx, not a6xx/a7xx — that's expected since the flush mechanism differs per generation.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 34+ messages in thread
* Claude review: drm/msm: Add PERFCNTR_CONFIG ioctl
2026-05-04 19:06 ` [PATCH v3 13/16] drm/msm: Add PERFCNTR_CONFIG ioctl Rob Clark
@ 2026-05-04 22:06 ` Claude Code Review Bot
0 siblings, 0 replies; 34+ messages in thread
From: Claude Code Review Bot @ 2026-05-04 22:06 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
This is the core patch. Several observations:
**Issue: No upper bound on `bufsz_shift`.**
```c
stream->fifo_size = 1 << args->bufsz_shift;
...
void *buf = kmalloc(1 << args->bufsz_shift, GFP_KERNEL);
```
A malicious/buggy user could pass `bufsz_shift = 30` and try to allocate 1 GB. The ioctl requires `perfmon_capable()` for STREAM mode so this is somewhat mitigated, but a bounds check would be prudent. Also, if `bufsz_shift` is 0 and STREAM is not set, `fifo_size` would be 1 which is odd but harmless since the FIFO is never used.
**Issue: `guard(pm_runtime_active_auto)` does not check return value.**
```c
guard(pm_runtime_active_auto)(&gpu->pdev->dev);
```
This expands to `pm_runtime_get_sync()` which can fail. The guard pattern doesn't allow checking the return value. If the device fails to resume, the ioctl will proceed with a powered-off GPU. Consider using an explicit `pm_runtime_get_sync()` with error checking, or at minimum documenting why this is acceptable.
**Good: UAPI design.** The `DRM_IOW` direction is correctly chosen — the comment explains that returning the fd as a return value (rather than via an output parameter) avoids the `copy_to_user` fault-after-fd-creation problem. The `MSM_PERFCNTR_UPDATE` flag for returning available counter counts on `E2BIG` is a nice touch for discoverability.
**Good: Locking discipline.** The `perfcntr_lock` protects counter allocation state, `read_lock` protects FIFO consumer side, and the single-producer pattern (kthread worker) avoids needing locks on the producer side. The `sel_work` is correctly dispatched on the scheduler's submit_wq to serialize with GEM_SUBMITs.
**Minor: `UERR(ETOOSMALL, ...)`.** Is `ETOOSMALL` a standard errno? I believe the correct name is `ETOOMANYREFS` or similar — this may be a custom MSM driver macro. If `UERR` handles non-standard names, fine; otherwise this should be `EINVAL` or a more standard error.
**Minor: `group_stride` validation.** The stride is validated to be non-zero when `nr_groups > 0`, but there's no check that `group_stride >= sizeof(struct drm_msm_perfcntr_group)`. A too-small stride would cause `copy_from_user` to read partial data. However, since `copy_from_user` uses `args->group_stride` as the size, a smaller stride would just read fewer fields — the zero-initialization of `g` mitigates this.
**Minor: `scoped_guard` with `break`.** In `sel_worker`:
```c
scoped_guard (mutex, &gpu->lock) {
guard(mutex)(&gpu->perfcntr_lock);
if (stream != gpu->perfcntrs->stream)
break;
...
}
```
Using `break` to exit a `scoped_guard` block is valid but unusual — it works because `scoped_guard` expands to a loop-like construct. This is fine but may surprise readers.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 34+ messages in thread
* Claude review: drm/msm/a6xx: Increase pwrup_reglist size
2026-05-04 19:06 ` [PATCH v3 14/16] drm/msm/a6xx: Increase pwrup_reglist size Rob Clark
@ 2026-05-04 22:06 ` Claude Code Review Bot
0 siblings, 0 replies; 34+ messages in thread
From: Claude Code Review Bot @ 2026-05-04 22:06 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Simple change from `PAGE_SIZE` to `2 * PAGE_SIZE` for the pwrup reglist allocation. Needed because SEL registers will be appended in patch 15. Clean.
Also converts `cpu_gpu_lock::regs[62]` to a flex array `regs[]`, which is correct given the increased allocation size.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 34+ messages in thread
* Claude review: drm/msm/a6xx: Append SEL regs to dyn pwrup reglist
2026-05-04 19:06 ` [PATCH v3 15/16] drm/msm/a6xx: Append SEL regs to dyn pwrup reglist Rob Clark
@ 2026-05-04 22:06 ` Claude Code Review Bot
0 siblings, 0 replies; 34+ messages in thread
From: Claude Code Review Bot @ 2026-05-04 22:06 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
**Bug: `nr_regs` subtraction is backwards.**
```c
unsigned nr_regs = (reglist_sel_start - reglist) / 3;
```
At this point, `reglist` has been advanced past the SEL entries, so `reglist > reglist_sel_start`. The subtraction `reglist_sel_start - reglist` would be negative (wrapping to a huge unsigned value). This should be:
```c
unsigned nr_regs = (reglist - reglist_sel_start) / 3;
```
The division by 3 is correct — each dynamic reglist entry is 3 uint32_t values (pipe/aperture, register, value).
**Good: Barrier placement.** The two `wmb()` barriers are correctly placed:
1. Before modifying SEL entries: ensures CP sees the reduced `dynamic_list_len` before entries are modified.
2. Before increasing `dynamic_list_len`: ensures CP sees the new entries before the new length.
The comment about skipping the shared-memory spinlock dance with SQE (as kgsl does) is well-reasoned — the ringbuffer programming will correct any race with IFPC exit.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 34+ messages in thread
* Claude review: drm/msm/a6xx: Allow IFPC with perfcntr stream
2026-05-04 19:06 ` [PATCH v3 16/16] drm/msm/a6xx: Allow IFPC with perfcntr stream Rob Clark
@ 2026-05-04 22:06 ` Claude Code Review Bot
0 siblings, 0 replies; 34+ messages in thread
From: Claude Code Review Bot @ 2026-05-04 22:06 UTC (permalink / raw)
To: dri-devel-reviews
Patch Review
Changes `sysprof_setup()` signature to add `bool force_on`, then uses it in `sample_worker` to inhibit IFPC while reading counter registers and re-enable it afterwards.
The `msm_gpu_sysprof_no_ifpc()` is decoupled from `msm_gpu_sysprof_no_perfcntr_zap()`:
```c
static inline bool msm_gpu_sysprof_no_ifpc(struct msm_gpu *gpu)
{
return refcount_read(&gpu->sysprof_active) > 1;
}
```
This means IFPC is now only disabled for legacy userspace SYSPROF usage, not for the new kernel-managed perfcntr stream. The perfcntr stream instead briefly inhibits IFPC during sampling via `sysprof_setup(gpu, true/false)`. This is correct — the pwrup reglist (patch 15) handles restoring SEL registers on IFPC exit.
**Minor: bitwise OR vs logical OR.**
```c
bool sysprof = msm_gpu_sysprof_no_ifpc(gpu) | force_on;
```
This uses bitwise OR (`|`) on bools. While functionally correct, `||` would be more conventional for boolean operands. Very minor.
**Minor: diff artifact.** There appears to be a whitespace-damaged comment in the `msm_gpu_sysprof_no_ifpc()` diff (leading `+` on the comment line that was part of the removed block). This may just be a display artifact from the mbox formatting.
---
Generated by Claude Code Patch Reviewer
^ permalink raw reply [flat|nested] 34+ messages in thread
end of thread, other threads:[~2026-05-04 22:06 UTC | newest]
Thread overview: 34+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2026-05-04 19:06 [PATCH v3 00/16] drm/msm: Add PERFCNTR_CONFIG ioctl Rob Clark
2026-05-04 19:06 ` [PATCH v3 01/16] drm/msm: Remove obsolete perf infrastructure Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 02/16] drm/msm: Allow CAP_PERFMON for setting SYSPROF Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 04/16] drm/msm/registers: Sync gen_header.py from mesa Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 05/16] drm/msm/registers: Add perfcntr json Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 06/16] drm/msm: Add a6xx+ perfcntr tables Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 07/16] drm/msm: Add sysprof accessors Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 08/16] drm/msm/a6xx: Add yield & flush helper Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 09/16] drm/msm: Add per-context perfcntr state Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 10/16] drm/msm: Add basic perfcntr infrastructure Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 11/16] drm/msm/a6xx+: Add support to configure perfcntrs Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 12/16] drm/msm/a8xx: Add perfcntr flush sequence Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 13/16] drm/msm: Add PERFCNTR_CONFIG ioctl Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 14/16] drm/msm/a6xx: Increase pwrup_reglist size Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 15/16] drm/msm/a6xx: Append SEL regs to dyn pwrup reglist Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 19:06 ` [PATCH v3 16/16] drm/msm/a6xx: Allow IFPC with perfcntr stream Rob Clark
2026-05-04 22:06 ` Claude review: " Claude Code Review Bot
2026-05-04 22:06 ` Claude review: drm/msm: Add PERFCNTR_CONFIG ioctl Claude Code Review Bot
-- strict thread matches above, loose matches on Subject: below --
2026-04-20 22:25 [PATCH 00/13] " Rob Clark
2026-04-20 22:25 ` [PATCH 13/13] " Rob Clark
2026-04-22 23:13 ` Claude review: " Claude Code Review Bot
2026-04-22 23:13 ` Claude Code Review Bot
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox