public inbox for drm-ai-reviews@public-inbox.freedesktop.org
From: Chenyuan Mi <chenyuan_mi@163.com>
To: alexander.deucher@amd.com, christian.koenig@amd.com
Cc: Arunpravin.PaneerSelvam@amd.com, airlied@gmail.com,
	simona@ffwll.ch, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Subject: [PATCH 1/2] drm/amdgpu: protect waitq access with userq_mutex in wait IOCTL
Date: Mon,  9 Mar 2026 10:22:28 +0800	[thread overview]
Message-ID: <20260309022229.63071-2-chenyuan_mi@163.com> (raw)
In-Reply-To: <20260309022229.63071-1-chenyuan_mi@163.com>

amdgpu_userq_wait_ioctl() accesses the wait queue object obtained
from xa_load() without holding userq_mutex or taking a reference on
the queue. A concurrent AMDGPU_USERQ_OP_FREE call can destroy and
free the queue between the xa_load() and the subsequent
xa_alloc(&waitq->fence_drv_xa, ...), resulting in a use-after-free.

This is a regression introduced by commit 4b27406380b0
("drm/amdgpu: Add queue id support to the user queue wait IOCTL").
That commit removed the indirect fence_drv_xa_ptr model, along with
the NULL-check safety net added by commit ed5fdc1fc282 ("drm/amdgpu:
Fix the use-after-free issue in wait IOCTL"), and replaced it with a
direct waitq->fence_drv_xa access without adding any lifetime
protection around the new waitq pointer.

Fix this by holding userq_mutex across the xa_load() and the
subsequent fence_drv_xa operations, matching the locking used by
the destroy path.

Fixes: 4b27406380b0 ("drm/amdgpu: Add queue id support to the user queue wait IOCTL")
Cc: stable@vger.kernel.org
Signed-off-by: Chenyuan Mi <chenyuan_mi@163.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c
index 8013260e29dc..1785ea7c18fe 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c
@@ -912,8 +912,10 @@ int amdgpu_userq_wait_ioctl(struct drm_device *dev, void *data,
 		 */
 		num_fences = dma_fence_dedup_array(fences, num_fences);
 
+		mutex_lock(&userq_mgr->userq_mutex);
 		waitq = xa_load(&userq_mgr->userq_xa, wait_info->waitq_id);
 		if (!waitq) {
+			mutex_unlock(&userq_mgr->userq_mutex);
 			r = -EINVAL;
 			goto free_fences;
 		}
@@ -932,6 +934,7 @@ int amdgpu_userq_wait_ioctl(struct drm_device *dev, void *data,
 				r = dma_fence_wait(fences[i], true);
 				if (r) {
 					dma_fence_put(fences[i]);
+					mutex_unlock(&userq_mgr->userq_mutex);
 					goto free_fences;
 				}
 
@@ -948,8 +951,10 @@ int amdgpu_userq_wait_ioctl(struct drm_device *dev, void *data,
 			 */
 			r = xa_alloc(&waitq->fence_drv_xa, &index, fence_drv,
 				     xa_limit_32b, GFP_KERNEL);
-			if (r)
+			if (r) {
+				mutex_unlock(&userq_mgr->userq_mutex);
 				goto free_fences;
+			}
 
 			amdgpu_userq_fence_driver_get(fence_drv);
 
@@ -961,6 +966,7 @@ int amdgpu_userq_wait_ioctl(struct drm_device *dev, void *data,
 			/* Increment the actual userq fence count */
 			cnt++;
 		}
+		mutex_unlock(&userq_mgr->userq_mutex);
 
 		wait_info->num_fences = cnt;
 		/* Copy userq fence info to user space */
-- 
2.53.0



Thread overview: 8+ messages
2026-03-09  2:22 [PATCH 0/2] drm/amdgpu: fix use-after-free in userq signal/wait IOCTLs Chenyuan Mi
2026-03-09  2:22 ` Chenyuan Mi [this message]
2026-03-09 10:07   ` [PATCH 1/2] drm/amdgpu: protect waitq access with userq_mutex in wait IOCTL Christian König
2026-03-10  2:44     ` Claude review: " Claude Code Review Bot
2026-03-09  2:22 ` [PATCH 2/2] drm/amdgpu: protect queue access in signal IOCTL Chenyuan Mi
2026-03-09 10:09   ` Christian König
2026-03-10  2:44     ` Claude review: " Claude Code Review Bot
2026-03-10  2:44 ` Claude review: drm/amdgpu: fix use-after-free in userq signal/wait IOCTLs Claude Code Review Bot
