From: Leonardo Cesar
To: alexander.deucher@amd.com, christian.koenig@amd.com, airlied@gmail.com, simona@ffwll.ch
Cc: Leonardo Cesar, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: [PATCH] drm/amdgpu: deduplicate ring preempt ib function
Date: Tue, 21 Apr 2026 17:03:00 -0300
Message-ID: <20260421200311.15624-1-leonardocesar@usp.br>
X-Mailer: git-send-email 2.43.0

The ring preemption function is identical in gfx_v11_0 and gfx_v12_0.
Move the common logic into a generic helper in amdgpu_gfx.c to remove
the duplication and simplify future maintenance.

Signed-off-by: Leonardo Cesar
---
v1 -> v2:
- Removed the wrapper functions for gfx_v11 and gfx_v12 and updated the
  call sites directly.
 drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c | 51 ++++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h |  2 +
 drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c  | 52 +------------------------
 drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c  | 52 +------------------------
 4 files changed, 55 insertions(+), 102 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
index 2956e45c9..a157cbd8e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
@@ -2684,3 +2684,54 @@ void amdgpu_debugfs_compute_sched_mask_init(struct amdgpu_device *adev)
 
 #endif
 }
+int amdgpu_gfx_ring_preempt_ib(struct amdgpu_ring *ring)
+{
+        int i, r = 0;
+        struct amdgpu_device *adev = ring->adev;
+        struct amdgpu_kiq *kiq = &adev->gfx.kiq[0];
+        struct amdgpu_ring *kiq_ring = &kiq->ring;
+        unsigned long flags;
+
+        if (adev->enable_mes)
+                return -EINVAL;
+
+        if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
+                return -EINVAL;
+
+        spin_lock_irqsave(&kiq->ring_lock, flags);
+
+        if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->unmap_queues_size)) {
+                spin_unlock_irqrestore(&kiq->ring_lock, flags);
+                return -ENOMEM;
+        }
+
+        /* assert preemption condition */
+        amdgpu_ring_set_preempt_cond_exec(ring, false);
+
+        /* assert IB preemption, emit the trailing fence */
+        kiq->pmf->kiq_unmap_queues(kiq_ring, ring, PREEMPT_QUEUES_NO_UNMAP,
+                                   ring->trail_fence_gpu_addr,
+                                   ++ring->trail_seq);
+        amdgpu_ring_commit(kiq_ring);
+
+        spin_unlock_irqrestore(&kiq->ring_lock, flags);
+
+        /* poll the trailing fence */
+        for (i = 0; i < adev->usec_timeout; i++) {
+                if (ring->trail_seq ==
+                    le32_to_cpu(*(ring->trail_fence_cpu_addr)))
+                        break;
+                udelay(1);
+        }
+
+        if (i >= adev->usec_timeout) {
+                r = -EINVAL;
+                DRM_ERROR("ring %d failed to preempt ib\n", ring->idx);
+        }
+
+        /* deassert preemption condition */
+        amdgpu_ring_set_preempt_cond_exec(ring, true);
+        return r;
+}
+
+
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
index a0cf0a3b4..77050f988 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
@@ -664,6 +664,8 @@ void amdgpu_gfx_csb_preamble_end(u32 *buffer, u32 count);
 void amdgpu_debugfs_gfx_sched_mask_init(struct amdgpu_device *adev);
 void amdgpu_debugfs_compute_sched_mask_init(struct amdgpu_device *adev);
 
+int amdgpu_gfx_ring_preempt_ib(struct amdgpu_ring *ring);
+
 static inline const char *amdgpu_gfx_compute_mode_desc(int mode)
 {
         switch (mode) {
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
index 5097de940..1ba848bfa 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
@@ -6206,56 +6206,6 @@ static void gfx_v11_0_ring_emit_gfx_shadow(struct amdgpu_ring *ring,
         ring->set_q_mode_offs = offs;
 }
 
-static int gfx_v11_0_ring_preempt_ib(struct amdgpu_ring *ring)
-{
-        int i, r = 0;
-        struct amdgpu_device *adev = ring->adev;
-        struct amdgpu_kiq *kiq = &adev->gfx.kiq[0];
-        struct amdgpu_ring *kiq_ring = &kiq->ring;
-        unsigned long flags;
-
-        if (adev->enable_mes)
-                return -EINVAL;
-
-        if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
-                return -EINVAL;
-
-        spin_lock_irqsave(&kiq->ring_lock, flags);
-
-        if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->unmap_queues_size)) {
-                spin_unlock_irqrestore(&kiq->ring_lock, flags);
-                return -ENOMEM;
-        }
-
-        /* assert preemption condition */
-        amdgpu_ring_set_preempt_cond_exec(ring, false);
-
-        /* assert IB preemption, emit the trailing fence */
-        kiq->pmf->kiq_unmap_queues(kiq_ring, ring, PREEMPT_QUEUES_NO_UNMAP,
-                                   ring->trail_fence_gpu_addr,
-                                   ++ring->trail_seq);
-        amdgpu_ring_commit(kiq_ring);
-
-        spin_unlock_irqrestore(&kiq->ring_lock, flags);
-
-        /* poll the trailing fence */
-        for (i = 0; i < adev->usec_timeout; i++) {
-                if (ring->trail_seq ==
-                    le32_to_cpu(*(ring->trail_fence_cpu_addr)))
-                        break;
-                udelay(1);
-        }
-
-        if (i >= adev->usec_timeout) {
-                r = -EINVAL;
-                DRM_ERROR("ring %d failed to preempt ib\n", ring->idx);
-        }
-
-        /* deassert preemption condition */
-        amdgpu_ring_set_preempt_cond_exec(ring, true);
-        return r;
-}
-
 static void gfx_v11_0_ring_emit_de_meta(struct amdgpu_ring *ring, bool resume)
 {
         struct amdgpu_device *adev = ring->adev;
@@ -7295,7 +7245,7 @@ static const struct amdgpu_ring_funcs gfx_v11_0_ring_funcs_gfx = {
         .emit_cntxcntl = gfx_v11_0_ring_emit_cntxcntl,
         .emit_gfx_shadow = gfx_v11_0_ring_emit_gfx_shadow,
         .init_cond_exec = gfx_v11_0_ring_emit_init_cond_exec,
-        .preempt_ib = gfx_v11_0_ring_preempt_ib,
+        .preempt_ib = amdgpu_gfx_ring_preempt_ib,
         .emit_frame_cntl = gfx_v11_0_ring_emit_frame_cntl,
         .emit_wreg = gfx_v11_0_ring_emit_wreg,
         .emit_reg_wait = gfx_v11_0_ring_emit_reg_wait,
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
index 65c33823a..6cf244349 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
@@ -4611,56 +4611,6 @@ static unsigned gfx_v12_0_ring_emit_init_cond_exec(struct amdgpu_ring *ring,
         return ret;
 }
 
-static int gfx_v12_0_ring_preempt_ib(struct amdgpu_ring *ring)
-{
-        int i, r = 0;
-        struct amdgpu_device *adev = ring->adev;
-        struct amdgpu_kiq *kiq = &adev->gfx.kiq[0];
-        struct amdgpu_ring *kiq_ring = &kiq->ring;
-        unsigned long flags;
-
-        if (adev->enable_mes)
-                return -EINVAL;
-
-        if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
-                return -EINVAL;
-
-        spin_lock_irqsave(&kiq->ring_lock, flags);
-
-        if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->unmap_queues_size)) {
-                spin_unlock_irqrestore(&kiq->ring_lock, flags);
-                return -ENOMEM;
-        }
-
-        /* assert preemption condition */
-        amdgpu_ring_set_preempt_cond_exec(ring, false);
-
-        /* assert IB preemption, emit the trailing fence */
-        kiq->pmf->kiq_unmap_queues(kiq_ring, ring, PREEMPT_QUEUES_NO_UNMAP,
-                                   ring->trail_fence_gpu_addr,
-                                   ++ring->trail_seq);
-        amdgpu_ring_commit(kiq_ring);
-
-        spin_unlock_irqrestore(&kiq->ring_lock, flags);
-
-        /* poll the trailing fence */
-        for (i = 0; i < adev->usec_timeout; i++) {
-                if (ring->trail_seq ==
-                    le32_to_cpu(*(ring->trail_fence_cpu_addr)))
-                        break;
-                udelay(1);
-        }
-
-        if (i >= adev->usec_timeout) {
-                r = -EINVAL;
-                DRM_ERROR("ring %d failed to preempt ib\n", ring->idx);
-        }
-
-        /* deassert preemption condition */
-        amdgpu_ring_set_preempt_cond_exec(ring, true);
-        return r;
-}
-
 static void gfx_v12_0_ring_emit_rreg(struct amdgpu_ring *ring, uint32_t reg,
                                      uint32_t reg_val_offs)
 {
@@ -5539,7 +5489,7 @@ static const struct amdgpu_ring_funcs gfx_v12_0_ring_funcs_gfx = {
         .pad_ib = amdgpu_ring_generic_pad_ib,
         .emit_cntxcntl = gfx_v12_0_ring_emit_cntxcntl,
         .init_cond_exec = gfx_v12_0_ring_emit_init_cond_exec,
-        .preempt_ib = gfx_v12_0_ring_preempt_ib,
+        .preempt_ib = amdgpu_gfx_ring_preempt_ib,
         .emit_wreg = gfx_v12_0_ring_emit_wreg,
         .emit_reg_wait = gfx_v12_0_ring_emit_reg_wait,
         .emit_reg_write_reg_wait = gfx_v12_0_ring_emit_reg_write_reg_wait,
-- 
2.43.0