From: Lizhi Hou <lizhi.hou@amd.com>
To: Mario Limonciello <mario.limonciello@amd.com>,
<ogabbay@kernel.org>, <quic_jhugo@quicinc.com>,
<dri-devel@lists.freedesktop.org>,
<maciej.falkowski@linux.intel.com>
Cc: <linux-kernel@vger.kernel.org>, <max.zhen@amd.com>,
<sonal.santan@amd.com>
Subject: Re: [PATCH V1] accel/amdxdna: Fix runtime suspend deadlock when there is pending job
Date: Tue, 10 Mar 2026 11:00:18 -0700
Message-ID: <6e3a74ca-403e-80eb-19b3-4aae21f8b743@amd.com>
In-Reply-To: <75bf73a3-496c-4b84-ab49-3c4ace01b953@amd.com>
On 3/10/26 10:50, Mario Limonciello wrote:
> On 3/10/26 12:49 PM, Lizhi Hou wrote:
>> The runtime suspend callback drains the running job workqueue before
>> suspending the device. If a job is still executing and calls
>> pm_runtime_resume_and_get(), it can deadlock with the runtime suspend
>> path.
>>
>> Fix this by moving pm_runtime_resume_and_get() from the job execution
>> routine to the job submission routine, ensuring the device is resumed
>> before the job is queued and avoiding the deadlock during runtime
>> suspend.
>>
>> Signed-off-by: Lizhi Hou <lizhi.hou@amd.com>
> Fixes tag?
I missed it again. I will send V2.
Thanks,
Lizhi
>> ---
>> drivers/accel/amdxdna/aie2_ctx.c | 14 ++------------
>> drivers/accel/amdxdna/amdxdna_ctx.c | 10 ++++++++++
>> 2 files changed, 12 insertions(+), 12 deletions(-)
>>
>> diff --git a/drivers/accel/amdxdna/aie2_ctx.c b/drivers/accel/amdxdna/aie2_ctx.c
>> index afee5e667f77..c0d348884f74 100644
>> --- a/drivers/accel/amdxdna/aie2_ctx.c
>> +++ b/drivers/accel/amdxdna/aie2_ctx.c
>> @@ -165,7 +165,6 @@ aie2_sched_notify(struct amdxdna_sched_job *job)
>> trace_xdna_job(&job->base, job->hwctx->name, "signaled fence", job->seq);
>> - amdxdna_pm_suspend_put(job->hwctx->client->xdna);
>> job->hwctx->priv->completed++;
>> dma_fence_signal(fence);
>> @@ -290,19 +289,11 @@ aie2_sched_job_run(struct drm_sched_job *sched_job)
>> struct dma_fence *fence;
>> int ret;
>> - ret = amdxdna_pm_resume_get(hwctx->client->xdna);
>> - if (ret)
>> + if (!hwctx->priv->mbox_chann)
>> return NULL;
>> - if (!hwctx->priv->mbox_chann) {
>> - amdxdna_pm_suspend_put(hwctx->client->xdna);
>> - return NULL;
>> - }
>> -
>> - if (!mmget_not_zero(job->mm)) {
>> - amdxdna_pm_suspend_put(hwctx->client->xdna);
>> + if (!mmget_not_zero(job->mm))
>> return ERR_PTR(-ESRCH);
>> - }
>> kref_get(&job->refcnt);
>> fence = dma_fence_get(job->fence);
>> @@ -333,7 +324,6 @@ aie2_sched_job_run(struct drm_sched_job *sched_job)
>> out:
>> if (ret) {
>> - amdxdna_pm_suspend_put(hwctx->client->xdna);
>> dma_fence_put(job->fence);
>> aie2_job_put(job);
>> mmput(job->mm);
>> diff --git a/drivers/accel/amdxdna/amdxdna_ctx.c b/drivers/accel/amdxdna/amdxdna_ctx.c
>> index 666dfd7b2a80..838430903a3e 100644
>> --- a/drivers/accel/amdxdna/amdxdna_ctx.c
>> +++ b/drivers/accel/amdxdna/amdxdna_ctx.c
>> @@ -17,6 +17,7 @@
>> #include "amdxdna_ctx.h"
>> #include "amdxdna_gem.h"
>> #include "amdxdna_pci_drv.h"
>> +#include "amdxdna_pm.h"
>> #define MAX_HWCTX_ID 255
>> #define MAX_ARG_COUNT 4095
>> @@ -445,6 +446,7 @@ amdxdna_arg_bos_lookup(struct amdxdna_client *client,
>> void amdxdna_sched_job_cleanup(struct amdxdna_sched_job *job)
>> {
>> trace_amdxdna_debug_point(job->hwctx->name, job->seq, "job release");
>> + amdxdna_pm_suspend_put(job->hwctx->client->xdna);
>> amdxdna_arg_bos_put(job);
>> amdxdna_gem_put_obj(job->cmd_bo);
>> dma_fence_put(job->fence);
>> @@ -482,6 +484,12 @@ int amdxdna_cmd_submit(struct amdxdna_client *client,
>> goto cmd_put;
>> }
>> + ret = amdxdna_pm_resume_get(xdna);
>> + if (ret) {
>> + XDNA_ERR(xdna, "Resume failed, ret %d", ret);
>> + goto put_bos;
>> + }
>> +
>> idx = srcu_read_lock(&client->hwctx_srcu);
>> hwctx = xa_load(&client->hwctx_xa, hwctx_hdl);
>> if (!hwctx) {
>> @@ -522,6 +530,8 @@ int amdxdna_cmd_submit(struct amdxdna_client *client,
>> dma_fence_put(job->fence);
>> unlock_srcu:
>> srcu_read_unlock(&client->hwctx_srcu, idx);
>> + amdxdna_pm_suspend_put(xdna);
>> +put_bos:
>> amdxdna_arg_bos_put(job);
>> cmd_put:
>> amdxdna_gem_put_obj(job->cmd_bo);
>
Thread overview: 5+ messages
2026-03-10 17:49 [PATCH V1] accel/amdxdna: Fix runtime suspend deadlock when there is pending job Lizhi Hou
2026-03-10 17:50 ` Mario Limonciello
2026-03-10 18:00 ` Lizhi Hou [this message]
2026-03-11 3:06 ` Claude review: " Claude Code Review Bot
2026-03-11 3:06 ` Claude Code Review Bot