From mboxrd@z Thu Jan 1 00:00:00 1970
From: Karunika Choo
To: dri-devel@lists.freedesktop.org
Cc: nd@arm.com, Boris Brezillon, Steven Price, Liviu Dudau,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
	David Airlie, Simona Vetter, linux-kernel@vger.kernel.org
Subject: [PATCH v3 1/8] drm/panthor: Pass an iomem pointer to GPU register access helpers
Date: Mon, 27 Apr 2026 16:59:27 +0100
Message-ID: <20260427155934.416502-2-karunika.choo@arm.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20260427155934.416502-1-karunika.choo@arm.com>
References: <20260427155934.416502-1-karunika.choo@arm.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
List-Id: Direct Rendering Infrastructure - Development

Convert the Panthor register access helpers to take an iomem pointer
instead of a panthor_device pointer. This makes the helpers usable with
block-local registers instead of routing all accesses through
ptdev->iomem. It is a preparatory change for splitting the register
space by component and for moving callers away from cross-component
register accesses.

No functional change intended.

v3:
- Pick up R-bs from Liviu and Steve

v2:
- Pick up Ack from Boris.
Reviewed-by: Steven Price
Reviewed-by: Liviu Dudau
Acked-by: Boris Brezillon
Signed-off-by: Karunika Choo
---
 drivers/gpu/drm/panthor/panthor_device.c |  2 +-
 drivers/gpu/drm/panthor/panthor_device.h | 78 ++++++++++++------------
 drivers/gpu/drm/panthor/panthor_drv.c    |  6 +-
 drivers/gpu/drm/panthor/panthor_fw.c     | 22 +++----
 drivers/gpu/drm/panthor/panthor_gpu.c    | 42 ++++++-------
 drivers/gpu/drm/panthor/panthor_hw.c     | 47 +++++++-------
 drivers/gpu/drm/panthor/panthor_mmu.c    | 29 +++++----
 drivers/gpu/drm/panthor/panthor_pwr.c    | 61 +++++++++---------
 drivers/gpu/drm/panthor/panthor_sched.c  |  2 +-
 9 files changed, 146 insertions(+), 143 deletions(-)

diff --git a/drivers/gpu/drm/panthor/panthor_device.c b/drivers/gpu/drm/panthor/panthor_device.c
index bc62a498a8a8..d62017b73409 100644
--- a/drivers/gpu/drm/panthor/panthor_device.c
+++ b/drivers/gpu/drm/panthor/panthor_device.c
@@ -43,7 +43,7 @@ static int panthor_gpu_coherency_init(struct panthor_device *ptdev)
 	/* Check if the ACE-Lite coherency protocol is actually supported by the GPU.
 	 * ACE protocol has never been supported for command stream frontend GPUs.
 	 */
-	if ((gpu_read(ptdev, GPU_COHERENCY_FEATURES) &
+	if ((gpu_read(ptdev->iomem, GPU_COHERENCY_FEATURES) &
 	     GPU_COHERENCY_PROT_BIT(ACE_LITE))) {
 		ptdev->gpu_info.selected_coherency = GPU_COHERENCY_ACE_LITE;
 		return 0;
diff --git a/drivers/gpu/drm/panthor/panthor_device.h b/drivers/gpu/drm/panthor/panthor_device.h
index 5cba272f9b4d..285bf7e4439e 100644
--- a/drivers/gpu/drm/panthor/panthor_device.h
+++ b/drivers/gpu/drm/panthor/panthor_device.h
@@ -505,7 +505,7 @@ static irqreturn_t panthor_ ## __name ## _irq_raw_handler(int irq, void *data)
 	struct panthor_device *ptdev = pirq->ptdev;	\
 	enum panthor_irq_state old_state;	\
 	\
-	if (!gpu_read(ptdev, __reg_prefix ## _INT_STAT))	\
+	if (!gpu_read(ptdev->iomem, __reg_prefix ## _INT_STAT))	\
 		return IRQ_NONE;	\
 	\
 	guard(spinlock_irqsave)(&pirq->mask_lock);	\
@@ -515,7 +515,7 @@ static irqreturn_t panthor_ ## __name ## _irq_raw_handler(int irq, void *data)
 	if (old_state != PANTHOR_IRQ_STATE_ACTIVE)	\
 		return IRQ_NONE;	\
 	\
-	gpu_write(ptdev, __reg_prefix ## _INT_MASK, 0);	\
+	gpu_write(ptdev->iomem, __reg_prefix ## _INT_MASK, 0);	\
 	return IRQ_WAKE_THREAD;	\
 }	\
 	\
@@ -534,7 +534,7 @@ static irqreturn_t panthor_ ## __name ## _irq_threaded_handler(int irq, void *da
 	 * right before the HW event kicks in. TLDR; it's all expected races we're	\
 	 * covered for.	\
 	 */	\
-	u32 status = gpu_read(ptdev, __reg_prefix ## _INT_RAWSTAT) & pirq->mask;	\
+	u32 status = gpu_read(ptdev->iomem, __reg_prefix ## _INT_RAWSTAT) & pirq->mask;	\
 	\
 	if (!status)	\
 		break;	\
@@ -550,7 +550,7 @@ static irqreturn_t panthor_ ## __name ## _irq_threaded_handler(int irq, void *da
 					   PANTHOR_IRQ_STATE_PROCESSING,	\
 					   PANTHOR_IRQ_STATE_ACTIVE);	\
 		if (old_state == PANTHOR_IRQ_STATE_PROCESSING)	\
-			gpu_write(ptdev, __reg_prefix ## _INT_MASK, pirq->mask);	\
+			gpu_write(ptdev->iomem, __reg_prefix ## _INT_MASK, pirq->mask);	\
 	}	\
 	\
 	return ret;	\
@@ -560,7 +560,7 @@ static inline void panthor_ ## __name ## _irq_suspend(struct panthor_irq *pirq)
 {	\
 	scoped_guard(spinlock_irqsave, &pirq->mask_lock) {	\
 		atomic_set(&pirq->state, PANTHOR_IRQ_STATE_SUSPENDING);	\
-		gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, 0);	\
+		gpu_write(pirq->ptdev->iomem, __reg_prefix ## _INT_MASK, 0);	\
 	}	\
 	synchronize_irq(pirq->irq);	\
 	atomic_set(&pirq->state, PANTHOR_IRQ_STATE_SUSPENDED);	\
@@ -571,8 +571,8 @@ static inline void panthor_ ## __name ## _irq_resume(struct panthor_irq *pirq)
 	guard(spinlock_irqsave)(&pirq->mask_lock);	\
 	\
 	atomic_set(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE);	\
-	gpu_write(pirq->ptdev, __reg_prefix ## _INT_CLEAR, pirq->mask);	\
-	gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, pirq->mask);	\
+	gpu_write(pirq->ptdev->iomem, __reg_prefix ## _INT_CLEAR, pirq->mask);	\
+	gpu_write(pirq->ptdev->iomem, __reg_prefix ## _INT_MASK, pirq->mask);	\
 }	\
 	\
 static int panthor_request_ ## __name ## _irq(struct panthor_device *ptdev,	\
@@ -603,7 +603,7 @@ static inline void panthor_ ## __name ## _irq_enable_events(struct panthor_irq *
 	 * If the IRQ is suspended/suspending, the mask is restored at resume time.	\
 	 */	\
 	if (atomic_read(&pirq->state) == PANTHOR_IRQ_STATE_ACTIVE)	\
-		gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, pirq->mask);	\
+		gpu_write(pirq->ptdev->iomem, __reg_prefix ## _INT_MASK, pirq->mask);	\
 }	\
 	\
 static inline void panthor_ ## __name ## _irq_disable_events(struct panthor_irq *pirq, u32 mask)\
@@ -617,80 +617,80 @@ static inline void panthor_ ## __name ## _irq_disable_events(struct panthor_irq
 	 * If the IRQ is suspended/suspending, the mask is restored at resume time.	\
 	 */	\
 	if (atomic_read(&pirq->state) == PANTHOR_IRQ_STATE_ACTIVE)	\
-		gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, pirq->mask);	\
+		gpu_write(pirq->ptdev->iomem, __reg_prefix ## _INT_MASK, pirq->mask);	\
 }
 
 extern struct workqueue_struct *panthor_cleanup_wq;
 
-static inline void gpu_write(struct panthor_device *ptdev, u32 reg, u32 data)
+static inline void gpu_write(void __iomem *iomem, u32 reg, u32 data)
 {
-	writel(data, ptdev->iomem + reg);
+	writel(data, iomem + reg);
 }
 
-static inline u32 gpu_read(struct panthor_device *ptdev, u32 reg)
+static inline u32 gpu_read(void __iomem *iomem, u32 reg)
 {
-	return readl(ptdev->iomem + reg);
+	return readl(iomem + reg);
 }
 
-static inline u32 gpu_read_relaxed(struct panthor_device *ptdev, u32 reg)
+static inline u32 gpu_read_relaxed(void __iomem *iomem, u32 reg)
 {
-	return readl_relaxed(ptdev->iomem + reg);
+	return readl_relaxed(iomem + reg);
 }
 
-static inline void gpu_write64(struct panthor_device *ptdev, u32 reg, u64 data)
+static inline void gpu_write64(void __iomem *iomem, u32 reg, u64 data)
 {
-	gpu_write(ptdev, reg, lower_32_bits(data));
-	gpu_write(ptdev, reg + 4, upper_32_bits(data));
+	gpu_write(iomem, reg, lower_32_bits(data));
+	gpu_write(iomem, reg + 4, upper_32_bits(data));
 }
 
-static inline u64 gpu_read64(struct panthor_device *ptdev, u32 reg)
+static inline u64 gpu_read64(void __iomem *iomem, u32 reg)
 {
-	return (gpu_read(ptdev, reg) | ((u64)gpu_read(ptdev, reg + 4) << 32));
+	return (gpu_read(iomem, reg) | ((u64)gpu_read(iomem, reg + 4) << 32));
 }
 
-static inline u64 gpu_read64_relaxed(struct panthor_device *ptdev, u32 reg)
+static inline u64 gpu_read64_relaxed(void __iomem *iomem, u32 reg)
 {
-	return (gpu_read_relaxed(ptdev, reg) |
-		((u64)gpu_read_relaxed(ptdev, reg + 4) << 32));
+	return (gpu_read_relaxed(iomem, reg) |
+		((u64)gpu_read_relaxed(iomem, reg + 4) << 32));
 }
 
-static inline u64 gpu_read64_counter(struct panthor_device *ptdev, u32 reg)
+static inline u64 gpu_read64_counter(void __iomem *iomem, u32 reg)
 {
 	u32 lo, hi1, hi2;
 	do {
-		hi1 = gpu_read(ptdev, reg + 4);
-		lo = gpu_read(ptdev, reg);
-		hi2 = gpu_read(ptdev, reg + 4);
+		hi1 = gpu_read(iomem, reg + 4);
+		lo = gpu_read(iomem, reg);
+		hi2 = gpu_read(iomem, reg + 4);
 	} while (hi1 != hi2);
 
 	return lo | ((u64)hi2 << 32);
 }
 
-#define gpu_read_poll_timeout(dev, reg, val, cond, delay_us, timeout_us) \
+#define gpu_read_poll_timeout(iomem, reg, val, cond, delay_us, timeout_us) \
 	read_poll_timeout(gpu_read, val, cond, delay_us, timeout_us, false, \
-			  dev, reg)
+			  iomem, reg)
 
-#define gpu_read_poll_timeout_atomic(dev, reg, val, cond, delay_us, \
+#define gpu_read_poll_timeout_atomic(iomem, reg, val, cond, delay_us, \
				     timeout_us) \
 	read_poll_timeout_atomic(gpu_read, val, cond, delay_us, timeout_us, \
-				 false, dev, reg)
+				 false, iomem, reg)
 
-#define gpu_read64_poll_timeout(dev, reg, val, cond, delay_us, timeout_us) \
+#define gpu_read64_poll_timeout(iomem, reg, val, cond, delay_us, timeout_us) \
 	read_poll_timeout(gpu_read64, val, cond, delay_us, timeout_us, false, \
-			  dev, reg)
+			  iomem, reg)
 
-#define gpu_read64_poll_timeout_atomic(dev, reg, val, cond, delay_us, \
+#define gpu_read64_poll_timeout_atomic(iomem, reg, val, cond, delay_us, \
				       timeout_us) \
 	read_poll_timeout_atomic(gpu_read64, val, cond, delay_us, timeout_us, \
-				 false, dev, reg)
+				 false, iomem, reg)
 
-#define gpu_read_relaxed_poll_timeout_atomic(dev, reg, val, cond, delay_us, \
+#define gpu_read_relaxed_poll_timeout_atomic(iomem, reg, val, cond, delay_us, \
					     timeout_us) \
 	read_poll_timeout_atomic(gpu_read_relaxed, val, cond, delay_us, \
-				 timeout_us, false, dev, reg)
+				 timeout_us, false, iomem, reg)
 
-#define gpu_read64_relaxed_poll_timeout(dev, reg, val, cond, delay_us, \
+#define gpu_read64_relaxed_poll_timeout(iomem, reg, val, cond, delay_us, \
					timeout_us) \
 	read_poll_timeout(gpu_read64_relaxed, val, cond, delay_us, timeout_us, \
-			  false, dev, reg)
+			  false, iomem, reg)
 
 #endif
diff --git a/drivers/gpu/drm/panthor/panthor_drv.c b/drivers/gpu/drm/panthor/panthor_drv.c
index 73fc983dc9b4..4f926c861fba 100644
--- a/drivers/gpu/drm/panthor/panthor_drv.c
+++ b/drivers/gpu/drm/panthor/panthor_drv.c
@@ -839,7 +839,7 @@ static int panthor_query_timestamp_info(struct panthor_device *ptdev,
 	}
 
 	if (flags & DRM_PANTHOR_TIMESTAMP_GPU_OFFSET)
-		arg->timestamp_offset = gpu_read64(ptdev, GPU_TIMESTAMP_OFFSET);
+		arg->timestamp_offset = gpu_read64(ptdev->iomem, GPU_TIMESTAMP_OFFSET);
 	else
 		arg->timestamp_offset = 0;
 
@@ -854,7 +854,7 @@ static int panthor_query_timestamp_info(struct panthor_device *ptdev,
 		query_start_time = 0;
 
 	if (flags & DRM_PANTHOR_TIMESTAMP_GPU)
-		arg->current_timestamp = gpu_read64_counter(ptdev, GPU_TIMESTAMP);
+		arg->current_timestamp = gpu_read64_counter(ptdev->iomem, GPU_TIMESTAMP);
 	else
 		arg->current_timestamp = 0;
 
@@ -870,7 +870,7 @@ static int panthor_query_timestamp_info(struct panthor_device *ptdev,
 	}
 
 	if (flags & DRM_PANTHOR_TIMESTAMP_GPU_CYCLE_COUNT)
-		arg->cycle_count = gpu_read64_counter(ptdev, GPU_CYCLE_COUNT);
+		arg->cycle_count = gpu_read64_counter(ptdev->iomem, GPU_CYCLE_COUNT);
 	else
 		arg->cycle_count = 0;
 
diff --git a/drivers/gpu/drm/panthor/panthor_fw.c b/drivers/gpu/drm/panthor/panthor_fw.c
index be0da5b1f3ab..69a19751a314 100644
--- a/drivers/gpu/drm/panthor/panthor_fw.c
+++ b/drivers/gpu/drm/panthor/panthor_fw.c
@@ -1054,7 +1054,7 @@ static void panthor_fw_init_global_iface(struct panthor_device *ptdev)
				 GLB_CFG_POWEROFF_TIMER |
				 GLB_CFG_PROGRESS_TIMER);
 
-	gpu_write(ptdev, CSF_DOORBELL(CSF_GLB_DOORBELL_ID), 1);
+	gpu_write(ptdev->iomem, CSF_DOORBELL(CSF_GLB_DOORBELL_ID), 1);
 
 	/* Kick the watchdog. */
 	mod_delayed_work(ptdev->reset.wq, &ptdev->fw->watchdog.ping_work,
@@ -1069,7 +1069,7 @@ static void panthor_job_irq_handler(struct panthor_device *ptdev, u32 status)
 	if (tracepoint_enabled(gpu_job_irq))
 		start = ktime_get_ns();
 
-	gpu_write(ptdev, JOB_INT_CLEAR, status);
+	gpu_write(ptdev->iomem, JOB_INT_CLEAR, status);
 
 	if (!ptdev->fw->booted && (status & JOB_INT_GLOBAL_IF))
 		ptdev->fw->booted = true;
@@ -1097,13 +1097,13 @@ static int panthor_fw_start(struct panthor_device *ptdev)
 	ptdev->fw->booted = false;
 	panthor_job_irq_enable_events(&ptdev->fw->irq, ~0);
 	panthor_job_irq_resume(&ptdev->fw->irq);
-	gpu_write(ptdev, MCU_CONTROL, MCU_CONTROL_AUTO);
+	gpu_write(ptdev->iomem, MCU_CONTROL, MCU_CONTROL_AUTO);
 
 	if (!wait_event_timeout(ptdev->fw->req_waitqueue,
				ptdev->fw->booted,
				msecs_to_jiffies(1000))) {
 		if (!ptdev->fw->booted &&
-		    !(gpu_read(ptdev, JOB_INT_STAT) & JOB_INT_GLOBAL_IF))
+		    !(gpu_read(ptdev->iomem, JOB_INT_STAT) & JOB_INT_GLOBAL_IF))
			timedout = true;
 	}
 
@@ -1114,7 +1114,7 @@ static int panthor_fw_start(struct panthor_device *ptdev)
			[MCU_STATUS_HALT] = "halt",
			[MCU_STATUS_FATAL] = "fatal",
 		};
-		u32 status = gpu_read(ptdev, MCU_STATUS);
+		u32 status = gpu_read(ptdev->iomem, MCU_STATUS);
 
 		drm_err(&ptdev->base, "Failed to boot MCU (status=%s)",
			status < ARRAY_SIZE(status_str) ? status_str[status] : "unknown");
@@ -1128,8 +1128,8 @@ static void panthor_fw_stop(struct panthor_device *ptdev)
 {
 	u32 status;
 
-	gpu_write(ptdev, MCU_CONTROL, MCU_CONTROL_DISABLE);
-	if (gpu_read_poll_timeout(ptdev, MCU_STATUS, status,
+	gpu_write(ptdev->iomem, MCU_CONTROL, MCU_CONTROL_DISABLE);
+	if (gpu_read_poll_timeout(ptdev->iomem, MCU_STATUS, status,
				  status == MCU_STATUS_DISABLED, 10, 100000))
 		drm_err(&ptdev->base, "Failed to stop MCU");
 }
@@ -1139,7 +1139,7 @@ static bool panthor_fw_mcu_halted(struct panthor_device *ptdev)
 	struct panthor_fw_global_iface *glb_iface = panthor_fw_get_glb_iface(ptdev);
 	bool halted;
 
-	halted = gpu_read(ptdev, MCU_STATUS) == MCU_STATUS_HALT;
+	halted = gpu_read(ptdev->iomem, MCU_STATUS) == MCU_STATUS_HALT;
 	if (panthor_fw_has_glb_state(ptdev))
 		halted &= (GLB_STATE_GET(glb_iface->output->ack) == GLB_STATE_HALT);
 
@@ -1156,7 +1156,7 @@ static void panthor_fw_halt_mcu(struct panthor_device *ptdev)
 	else
 		panthor_fw_update_reqs(glb_iface, req, GLB_HALT, GLB_HALT);
 
-	gpu_write(ptdev, CSF_DOORBELL(CSF_GLB_DOORBELL_ID), 1);
+	gpu_write(ptdev->iomem, CSF_DOORBELL(CSF_GLB_DOORBELL_ID), 1);
 }
 
 static bool panthor_fw_wait_mcu_halted(struct panthor_device *ptdev)
@@ -1414,7 +1414,7 @@ void panthor_fw_ring_csg_doorbells(struct panthor_device *ptdev, u32 csg_mask)
 	struct panthor_fw_global_iface *glb_iface = panthor_fw_get_glb_iface(ptdev);
 
 	panthor_fw_toggle_reqs(glb_iface, doorbell_req, doorbell_ack, csg_mask);
-	gpu_write(ptdev, CSF_DOORBELL(CSF_GLB_DOORBELL_ID), 1);
+	gpu_write(ptdev->iomem, CSF_DOORBELL(CSF_GLB_DOORBELL_ID), 1);
 }
 
 static void panthor_fw_ping_work(struct work_struct *work)
@@ -1429,7 +1429,7 @@ static void panthor_fw_ping_work(struct work_struct *work)
 		return;
 
 	panthor_fw_toggle_reqs(glb_iface, req, ack, GLB_PING);
-	gpu_write(ptdev, CSF_DOORBELL(CSF_GLB_DOORBELL_ID), 1);
+	gpu_write(ptdev->iomem, CSF_DOORBELL(CSF_GLB_DOORBELL_ID), 1);
 
 	ret = panthor_fw_glb_wait_acks(ptdev, GLB_PING, &acked, 100);
 	if (ret) {
diff --git a/drivers/gpu/drm/panthor/panthor_gpu.c b/drivers/gpu/drm/panthor/panthor_gpu.c
index 2ab444ee8c71..bdb72cebccb3 100644
--- a/drivers/gpu/drm/panthor/panthor_gpu.c
+++ b/drivers/gpu/drm/panthor/panthor_gpu.c
@@ -56,7 +56,7 @@ struct panthor_gpu {
 
 static void panthor_gpu_coherency_set(struct panthor_device *ptdev)
 {
-	gpu_write(ptdev, GPU_COHERENCY_PROTOCOL,
+	gpu_write(ptdev->iomem, GPU_COHERENCY_PROTOCOL,
		  ptdev->gpu_info.selected_coherency);
 }
 
@@ -75,26 +75,26 @@ static void panthor_gpu_l2_config_set(struct panthor_device *ptdev)
 	}
 
 	for (i = 0; i < ARRAY_SIZE(data->asn_hash); i++)
-		gpu_write(ptdev, GPU_ASN_HASH(i), data->asn_hash[i]);
+		gpu_write(ptdev->iomem, GPU_ASN_HASH(i), data->asn_hash[i]);
 
-	l2_config = gpu_read(ptdev, GPU_L2_CONFIG);
+	l2_config = gpu_read(ptdev->iomem, GPU_L2_CONFIG);
 	l2_config |= GPU_L2_CONFIG_ASN_HASH_ENABLE;
-	gpu_write(ptdev, GPU_L2_CONFIG, l2_config);
+	gpu_write(ptdev->iomem, GPU_L2_CONFIG, l2_config);
 }
 
 static void panthor_gpu_irq_handler(struct panthor_device *ptdev, u32 status)
 {
-	gpu_write(ptdev, GPU_INT_CLEAR, status);
+	gpu_write(ptdev->iomem, GPU_INT_CLEAR, status);
 
 	if (tracepoint_enabled(gpu_power_status) &&
	    (status & GPU_POWER_INTERRUPTS_MASK))
 		trace_gpu_power_status(ptdev->base.dev,
-				       gpu_read64(ptdev, SHADER_READY),
-				       gpu_read64(ptdev, TILER_READY),
-				       gpu_read64(ptdev, L2_READY));
+				       gpu_read64(ptdev->iomem, SHADER_READY),
+				       gpu_read64(ptdev->iomem, TILER_READY),
+				       gpu_read64(ptdev->iomem, L2_READY));
 
 	if (status & GPU_IRQ_FAULT) {
-		u32 fault_status = gpu_read(ptdev, GPU_FAULT_STATUS);
-		u64 address = gpu_read64(ptdev, GPU_FAULT_ADDR);
+		u32 fault_status = gpu_read(ptdev->iomem, GPU_FAULT_STATUS);
+		u64 address = gpu_read64(ptdev->iomem, GPU_FAULT_ADDR);
 
 		drm_warn(&ptdev->base, "GPU Fault 0x%08x (%s) at 0x%016llx\n",
			 fault_status, panthor_exception_name(ptdev, fault_status & 0xFF),
@@ -204,7 +204,7 @@ int panthor_gpu_block_power_off(struct panthor_device *ptdev,
 	u32 val;
 	int ret;
 
-	ret = gpu_read64_relaxed_poll_timeout(ptdev, pwrtrans_reg, val,
+	ret = gpu_read64_relaxed_poll_timeout(ptdev->iomem, pwrtrans_reg, val,
					      !(mask & val), 100, timeout_us);
 	if (ret) {
 		drm_err(&ptdev->base,
@@ -213,9 +213,9 @@ int panthor_gpu_block_power_off(struct panthor_device *ptdev,
 		return ret;
 	}
 
-	gpu_write64(ptdev, pwroff_reg, mask);
+	gpu_write64(ptdev->iomem, pwroff_reg, mask);
 
-	ret = gpu_read64_relaxed_poll_timeout(ptdev, pwrtrans_reg, val,
+	ret = gpu_read64_relaxed_poll_timeout(ptdev->iomem, pwrtrans_reg, val,
					      !(mask & val), 100, timeout_us);
 	if (ret) {
 		drm_err(&ptdev->base,
@@ -247,7 +247,7 @@ int panthor_gpu_block_power_on(struct panthor_device *ptdev,
 	u32 val;
 	int ret;
 
-	ret = gpu_read64_relaxed_poll_timeout(ptdev, pwrtrans_reg, val,
+	ret = gpu_read64_relaxed_poll_timeout(ptdev->iomem, pwrtrans_reg, val,
					      !(mask & val), 100, timeout_us);
 	if (ret) {
 		drm_err(&ptdev->base,
@@ -256,9 +256,9 @@ int panthor_gpu_block_power_on(struct panthor_device *ptdev,
 		return ret;
 	}
 
-	gpu_write64(ptdev, pwron_reg, mask);
+	gpu_write64(ptdev->iomem, pwron_reg, mask);
 
-	ret = gpu_read64_relaxed_poll_timeout(ptdev, rdy_reg, val,
+	ret = gpu_read64_relaxed_poll_timeout(ptdev->iomem, rdy_reg, val,
					      (mask & val) == val, 100,
					      timeout_us);
 	if (ret) {
@@ -326,7 +326,7 @@ int panthor_gpu_flush_caches(struct panthor_device *ptdev,
 	spin_lock_irqsave(&ptdev->gpu->reqs_lock, flags);
 	if (!(ptdev->gpu->pending_reqs & GPU_IRQ_CLEAN_CACHES_COMPLETED)) {
 		ptdev->gpu->pending_reqs |= GPU_IRQ_CLEAN_CACHES_COMPLETED;
-		gpu_write(ptdev, GPU_CMD, GPU_FLUSH_CACHES(l2, lsc, other));
+		gpu_write(ptdev->iomem, GPU_CMD, GPU_FLUSH_CACHES(l2, lsc, other));
 	} else {
 		ret = -EIO;
 	}
@@ -340,7 +340,7 @@ int panthor_gpu_flush_caches(struct panthor_device *ptdev,
				    msecs_to_jiffies(100))) {
 		spin_lock_irqsave(&ptdev->gpu->reqs_lock, flags);
 		if ((ptdev->gpu->pending_reqs & GPU_IRQ_CLEAN_CACHES_COMPLETED) != 0 &&
-		    !(gpu_read(ptdev, GPU_INT_RAWSTAT) & GPU_IRQ_CLEAN_CACHES_COMPLETED))
+		    !(gpu_read(ptdev->iomem, GPU_INT_RAWSTAT) & GPU_IRQ_CLEAN_CACHES_COMPLETED))
			ret = -ETIMEDOUT;
 		else
			ptdev->gpu->pending_reqs &= ~GPU_IRQ_CLEAN_CACHES_COMPLETED;
@@ -370,8 +370,8 @@ int panthor_gpu_soft_reset(struct panthor_device *ptdev)
 	if (!drm_WARN_ON(&ptdev->base,
			 ptdev->gpu->pending_reqs & GPU_IRQ_RESET_COMPLETED)) {
 		ptdev->gpu->pending_reqs |= GPU_IRQ_RESET_COMPLETED;
-		gpu_write(ptdev, GPU_INT_CLEAR, GPU_IRQ_RESET_COMPLETED);
-		gpu_write(ptdev, GPU_CMD, GPU_SOFT_RESET);
+		gpu_write(ptdev->iomem, GPU_INT_CLEAR, GPU_IRQ_RESET_COMPLETED);
+		gpu_write(ptdev->iomem, GPU_CMD, GPU_SOFT_RESET);
 	}
 	spin_unlock_irqrestore(&ptdev->gpu->reqs_lock, flags);
 
@@ -380,7 +380,7 @@ int panthor_gpu_soft_reset(struct panthor_device *ptdev)
				    msecs_to_jiffies(100))) {
 		spin_lock_irqsave(&ptdev->gpu->reqs_lock, flags);
 		if ((ptdev->gpu->pending_reqs & GPU_IRQ_RESET_COMPLETED) != 0 &&
-		    !(gpu_read(ptdev, GPU_INT_RAWSTAT) & GPU_IRQ_RESET_COMPLETED))
+		    !(gpu_read(ptdev->iomem, GPU_INT_RAWSTAT) & GPU_IRQ_RESET_COMPLETED))
			timedout = true;
 		else
			ptdev->gpu->pending_reqs &= ~GPU_IRQ_RESET_COMPLETED;
diff --git a/drivers/gpu/drm/panthor/panthor_hw.c b/drivers/gpu/drm/panthor/panthor_hw.c
index d135aa6724fa..9309d0938212 100644
--- a/drivers/gpu/drm/panthor/panthor_hw.c
+++ b/drivers/gpu/drm/panthor/panthor_hw.c
@@ -194,35 +194,38 @@ static int panthor_gpu_info_init(struct panthor_device *ptdev)
 {
 	unsigned int i;
 
-	ptdev->gpu_info.csf_id = gpu_read(ptdev, GPU_CSF_ID);
-	ptdev->gpu_info.gpu_rev = gpu_read(ptdev, GPU_REVID);
-	ptdev->gpu_info.core_features = gpu_read(ptdev, GPU_CORE_FEATURES);
-	ptdev->gpu_info.l2_features = gpu_read(ptdev, GPU_L2_FEATURES);
-	ptdev->gpu_info.tiler_features = gpu_read(ptdev, GPU_TILER_FEATURES);
-	ptdev->gpu_info.mem_features = gpu_read(ptdev, GPU_MEM_FEATURES);
-	ptdev->gpu_info.mmu_features = gpu_read(ptdev, GPU_MMU_FEATURES);
-	ptdev->gpu_info.thread_features = gpu_read(ptdev, GPU_THREAD_FEATURES);
-	ptdev->gpu_info.max_threads = gpu_read(ptdev, GPU_THREAD_MAX_THREADS);
-	ptdev->gpu_info.thread_max_workgroup_size = gpu_read(ptdev, GPU_THREAD_MAX_WORKGROUP_SIZE);
-	ptdev->gpu_info.thread_max_barrier_size = gpu_read(ptdev, GPU_THREAD_MAX_BARRIER_SIZE);
-	ptdev->gpu_info.coherency_features = gpu_read(ptdev, GPU_COHERENCY_FEATURES);
+	ptdev->gpu_info.csf_id = gpu_read(ptdev->iomem, GPU_CSF_ID);
+	ptdev->gpu_info.gpu_rev = gpu_read(ptdev->iomem, GPU_REVID);
+	ptdev->gpu_info.core_features = gpu_read(ptdev->iomem, GPU_CORE_FEATURES);
+	ptdev->gpu_info.l2_features = gpu_read(ptdev->iomem, GPU_L2_FEATURES);
+	ptdev->gpu_info.tiler_features = gpu_read(ptdev->iomem, GPU_TILER_FEATURES);
+	ptdev->gpu_info.mem_features = gpu_read(ptdev->iomem, GPU_MEM_FEATURES);
+	ptdev->gpu_info.mmu_features = gpu_read(ptdev->iomem, GPU_MMU_FEATURES);
+	ptdev->gpu_info.thread_features = gpu_read(ptdev->iomem, GPU_THREAD_FEATURES);
+	ptdev->gpu_info.max_threads = gpu_read(ptdev->iomem, GPU_THREAD_MAX_THREADS);
+	ptdev->gpu_info.thread_max_workgroup_size =
+		gpu_read(ptdev->iomem, GPU_THREAD_MAX_WORKGROUP_SIZE);
+	ptdev->gpu_info.thread_max_barrier_size =
+		gpu_read(ptdev->iomem, GPU_THREAD_MAX_BARRIER_SIZE);
+	ptdev->gpu_info.coherency_features = gpu_read(ptdev->iomem, GPU_COHERENCY_FEATURES);
 
 	for (i = 0; i < 4; i++)
-		ptdev->gpu_info.texture_features[i] = gpu_read(ptdev, GPU_TEXTURE_FEATURES(i));
+		ptdev->gpu_info.texture_features[i] =
+			gpu_read(ptdev->iomem, GPU_TEXTURE_FEATURES(i));
 
-	ptdev->gpu_info.as_present = gpu_read(ptdev, GPU_AS_PRESENT);
+	ptdev->gpu_info.as_present = gpu_read(ptdev->iomem, GPU_AS_PRESENT);
 
 	/* Introduced in arch 11.x */
-	ptdev->gpu_info.gpu_features = gpu_read64(ptdev, GPU_FEATURES);
+	ptdev->gpu_info.gpu_features = gpu_read64(ptdev->iomem, GPU_FEATURES);
 
 	if (panthor_hw_has_pwr_ctrl(ptdev)) {
 		/* Introduced in arch 14.x */
-		ptdev->gpu_info.l2_present = gpu_read64(ptdev, PWR_L2_PRESENT);
-		ptdev->gpu_info.tiler_present = gpu_read64(ptdev, PWR_TILER_PRESENT);
-		ptdev->gpu_info.shader_present = gpu_read64(ptdev, PWR_SHADER_PRESENT);
+
ptdev->gpu_info.l2_present = gpu_read64(ptdev->iomem, PWR_L2_PRESENT); + ptdev->gpu_info.tiler_present = gpu_read64(ptdev->iomem, PWR_TILER_PRESENT); + ptdev->gpu_info.shader_present = gpu_read64(ptdev->iomem, PWR_SHADER_PRESENT); } else { - ptdev->gpu_info.shader_present = gpu_read64(ptdev, GPU_SHADER_PRESENT); - ptdev->gpu_info.tiler_present = gpu_read64(ptdev, GPU_TILER_PRESENT); - ptdev->gpu_info.l2_present = gpu_read64(ptdev, GPU_L2_PRESENT); + ptdev->gpu_info.shader_present = gpu_read64(ptdev->iomem, GPU_SHADER_PRESENT); + ptdev->gpu_info.tiler_present = gpu_read64(ptdev->iomem, GPU_TILER_PRESENT); + ptdev->gpu_info.l2_present = gpu_read64(ptdev->iomem, GPU_L2_PRESENT); } return overload_shader_present(ptdev); @@ -287,7 +290,7 @@ static int panthor_hw_bind_device(struct panthor_device *ptdev) static int panthor_hw_gpu_id_init(struct panthor_device *ptdev) { - ptdev->gpu_info.gpu_id = gpu_read(ptdev, GPU_ID); + ptdev->gpu_info.gpu_id = gpu_read(ptdev->iomem, GPU_ID); if (!ptdev->gpu_info.gpu_id) return -ENXIO; diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c index fc930ee158a5..6a8ac2239b9c 100644 --- a/drivers/gpu/drm/panthor/panthor_mmu.c +++ b/drivers/gpu/drm/panthor/panthor_mmu.c @@ -522,9 +522,8 @@ static int wait_ready(struct panthor_device *ptdev, u32 as_nr) /* Wait for the MMU status to indicate there is no active command, in * case one is pending. 
*/ - ret = gpu_read_relaxed_poll_timeout_atomic(ptdev, AS_STATUS(as_nr), val, - !(val & AS_STATUS_AS_ACTIVE), - 10, 100000); + ret = gpu_read_relaxed_poll_timeout_atomic(ptdev->iomem, AS_STATUS(as_nr), val, + !(val & AS_STATUS_AS_ACTIVE), 10, 100000); if (ret) { panthor_device_schedule_reset(ptdev); @@ -541,7 +540,7 @@ static int as_send_cmd_and_wait(struct panthor_device *ptdev, u32 as_nr, u32 cmd /* write AS_COMMAND when MMU is ready to accept another command */ status = wait_ready(ptdev, as_nr); if (!status) { - gpu_write(ptdev, AS_COMMAND(as_nr), cmd); + gpu_write(ptdev->iomem, AS_COMMAND(as_nr), cmd); status = wait_ready(ptdev, as_nr); } @@ -592,9 +591,9 @@ static int panthor_mmu_as_enable(struct panthor_device *ptdev, u32 as_nr, panthor_mmu_irq_enable_events(&ptdev->mmu->irq, panthor_mmu_as_fault_mask(ptdev, as_nr)); - gpu_write64(ptdev, AS_TRANSTAB(as_nr), transtab); - gpu_write64(ptdev, AS_MEMATTR(as_nr), memattr); - gpu_write64(ptdev, AS_TRANSCFG(as_nr), transcfg); + gpu_write64(ptdev->iomem, AS_TRANSTAB(as_nr), transtab); + gpu_write64(ptdev->iomem, AS_MEMATTR(as_nr), memattr); + gpu_write64(ptdev->iomem, AS_TRANSCFG(as_nr), transcfg); return as_send_cmd_and_wait(ptdev, as_nr, AS_COMMAND_UPDATE); } @@ -629,9 +628,9 @@ static int panthor_mmu_as_disable(struct panthor_device *ptdev, u32 as_nr, if (recycle_slot) return 0; - gpu_write64(ptdev, AS_TRANSTAB(as_nr), 0); - gpu_write64(ptdev, AS_MEMATTR(as_nr), 0); - gpu_write64(ptdev, AS_TRANSCFG(as_nr), AS_TRANSCFG_ADRMODE_UNMAPPED); + gpu_write64(ptdev->iomem, AS_TRANSTAB(as_nr), 0); + gpu_write64(ptdev->iomem, AS_MEMATTR(as_nr), 0); + gpu_write64(ptdev->iomem, AS_TRANSCFG(as_nr), AS_TRANSCFG_ADRMODE_UNMAPPED); return as_send_cmd_and_wait(ptdev, as_nr, AS_COMMAND_UPDATE); } @@ -784,7 +783,7 @@ int panthor_vm_active(struct panthor_vm *vm) */ fault_mask = panthor_mmu_as_fault_mask(ptdev, as); if (ptdev->mmu->as.faulty_mask & fault_mask) { - gpu_write(ptdev, MMU_INT_CLEAR, fault_mask); + gpu_write(ptdev->iomem, 
MMU_INT_CLEAR, fault_mask); ptdev->mmu->as.faulty_mask &= ~fault_mask; } @@ -1731,7 +1730,7 @@ static int panthor_vm_lock_region(struct panthor_vm *vm, u64 start, u64 size) mutex_lock(&ptdev->mmu->as.slots_lock); if (vm->as.id >= 0 && size) { /* Lock the region that needs to be updated */ - gpu_write64(ptdev, AS_LOCKADDR(vm->as.id), + gpu_write64(ptdev->iomem, AS_LOCKADDR(vm->as.id), pack_region_range(ptdev, &start, &size)); /* If the lock succeeded, update the locked_region info. */ @@ -1792,8 +1791,8 @@ static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status) u32 access_type; u32 source_id; - fault_status = gpu_read(ptdev, AS_FAULTSTATUS(as)); - addr = gpu_read64(ptdev, AS_FAULTADDRESS(as)); + fault_status = gpu_read(ptdev->iomem, AS_FAULTSTATUS(as)); + addr = gpu_read64(ptdev->iomem, AS_FAULTADDRESS(as)); /* decode the fault status */ exception_type = fault_status & 0xFF; @@ -1824,7 +1823,7 @@ static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status) * Note that COMPLETED irqs are never cleared, but this is fine * because they are always masked. 
*/ - gpu_write(ptdev, MMU_INT_CLEAR, mask); + gpu_write(ptdev->iomem, MMU_INT_CLEAR, mask); if (ptdev->mmu->as.slots[as].vm) ptdev->mmu->as.slots[as].vm->unhandled_fault = true; diff --git a/drivers/gpu/drm/panthor/panthor_pwr.c b/drivers/gpu/drm/panthor/panthor_pwr.c index ed3b2b4479ca..b77c85ad733a 100644 --- a/drivers/gpu/drm/panthor/panthor_pwr.c +++ b/drivers/gpu/drm/panthor/panthor_pwr.c @@ -55,7 +55,7 @@ struct panthor_pwr { static void panthor_pwr_irq_handler(struct panthor_device *ptdev, u32 status) { spin_lock(&ptdev->pwr->reqs_lock); - gpu_write(ptdev, PWR_INT_CLEAR, status); + gpu_write(ptdev->iomem, PWR_INT_CLEAR, status); if (unlikely(status & PWR_IRQ_COMMAND_NOT_ALLOWED)) drm_err(&ptdev->base, "PWR_IRQ: COMMAND_NOT_ALLOWED"); @@ -74,14 +74,14 @@ PANTHOR_IRQ_HANDLER(pwr, PWR, panthor_pwr_irq_handler); static void panthor_pwr_write_command(struct panthor_device *ptdev, u32 command, u64 args) { if (args) - gpu_write64(ptdev, PWR_CMDARG, args); + gpu_write64(ptdev->iomem, PWR_CMDARG, args); - gpu_write(ptdev, PWR_COMMAND, command); + gpu_write(ptdev->iomem, PWR_COMMAND, command); } static bool reset_irq_raised(struct panthor_device *ptdev) { - return gpu_read(ptdev, PWR_INT_RAWSTAT) & PWR_IRQ_RESET_COMPLETED; + return gpu_read(ptdev->iomem, PWR_INT_RAWSTAT) & PWR_IRQ_RESET_COMPLETED; } static bool reset_pending(struct panthor_device *ptdev) @@ -96,7 +96,7 @@ static int panthor_pwr_reset(struct panthor_device *ptdev, u32 reset_cmd) drm_WARN(&ptdev->base, 1, "Reset already pending"); } else { ptdev->pwr->pending_reqs |= PWR_IRQ_RESET_COMPLETED; - gpu_write(ptdev, PWR_INT_CLEAR, PWR_IRQ_RESET_COMPLETED); + gpu_write(ptdev->iomem, PWR_INT_CLEAR, PWR_IRQ_RESET_COMPLETED); panthor_pwr_write_command(ptdev, reset_cmd, 0); } } @@ -185,7 +185,7 @@ static int panthor_pwr_domain_wait_transition(struct panthor_device *ptdev, u32 u64 val; int ret = 0; - ret = gpu_read64_poll_timeout(ptdev, pwrtrans_reg, val, !(PWR_ALL_CORES_MASK & val), 100, + ret = 
gpu_read64_poll_timeout(ptdev->iomem, pwrtrans_reg, val, !(PWR_ALL_CORES_MASK & val), 100, timeout_us); if (ret) { drm_err(&ptdev->base, "%s domain power in transition, pwrtrans(0x%llx)", @@ -198,17 +198,17 @@ static int panthor_pwr_domain_wait_transition(struct panthor_device *ptdev, u32 static void panthor_pwr_debug_info_show(struct panthor_device *ptdev) { - drm_info(&ptdev->base, "GPU_FEATURES: 0x%016llx", gpu_read64(ptdev, GPU_FEATURES)); - drm_info(&ptdev->base, "PWR_STATUS: 0x%016llx", gpu_read64(ptdev, PWR_STATUS)); - drm_info(&ptdev->base, "L2_PRESENT: 0x%016llx", gpu_read64(ptdev, PWR_L2_PRESENT)); - drm_info(&ptdev->base, "L2_PWRTRANS: 0x%016llx", gpu_read64(ptdev, PWR_L2_PWRTRANS)); - drm_info(&ptdev->base, "L2_READY: 0x%016llx", gpu_read64(ptdev, PWR_L2_READY)); - drm_info(&ptdev->base, "TILER_PRESENT: 0x%016llx", gpu_read64(ptdev, PWR_TILER_PRESENT)); - drm_info(&ptdev->base, "TILER_PWRTRANS: 0x%016llx", gpu_read64(ptdev, PWR_TILER_PWRTRANS)); - drm_info(&ptdev->base, "TILER_READY: 0x%016llx", gpu_read64(ptdev, PWR_TILER_READY)); - drm_info(&ptdev->base, "SHADER_PRESENT: 0x%016llx", gpu_read64(ptdev, PWR_SHADER_PRESENT)); - drm_info(&ptdev->base, "SHADER_PWRTRANS: 0x%016llx", gpu_read64(ptdev, PWR_SHADER_PWRTRANS)); - drm_info(&ptdev->base, "SHADER_READY: 0x%016llx", gpu_read64(ptdev, PWR_SHADER_READY)); + drm_info(&ptdev->base, "GPU_FEATURES: 0x%016llx", gpu_read64(ptdev->iomem, GPU_FEATURES)); + drm_info(&ptdev->base, "PWR_STATUS: 0x%016llx", gpu_read64(ptdev->iomem, PWR_STATUS)); + drm_info(&ptdev->base, "L2_PRESENT: 0x%016llx", gpu_read64(ptdev->iomem, PWR_L2_PRESENT)); + drm_info(&ptdev->base, "L2_PWRTRANS: 0x%016llx", gpu_read64(ptdev->iomem, PWR_L2_PWRTRANS)); + drm_info(&ptdev->base, "L2_READY: 0x%016llx", gpu_read64(ptdev->iomem, PWR_L2_READY)); + drm_info(&ptdev->base, "TILER_PRESENT: 0x%016llx", gpu_read64(ptdev->iomem, PWR_TILER_PRESENT)); + drm_info(&ptdev->base, "TILER_PWRTRANS: 0x%016llx", gpu_read64(ptdev->iomem, PWR_TILER_PWRTRANS)); 
+ drm_info(&ptdev->base, "TILER_READY: 0x%016llx", gpu_read64(ptdev->iomem, PWR_TILER_READY)); + drm_info(&ptdev->base, "SHADER_PRESENT: 0x%016llx", gpu_read64(ptdev->iomem, PWR_SHADER_PRESENT)); + drm_info(&ptdev->base, "SHADER_PWRTRANS: 0x%016llx", gpu_read64(ptdev->iomem, PWR_SHADER_PWRTRANS)); + drm_info(&ptdev->base, "SHADER_READY: 0x%016llx", gpu_read64(ptdev->iomem, PWR_SHADER_READY)); } static int panthor_pwr_domain_transition(struct panthor_device *ptdev, u32 cmd, u32 domain, @@ -240,13 +240,13 @@ static int panthor_pwr_domain_transition(struct panthor_device *ptdev, u32 cmd, return ret; /* domain already in target state, return early */ - if ((gpu_read64(ptdev, ready_reg) & mask) == expected_val) + if ((gpu_read64(ptdev->iomem, ready_reg) & mask) == expected_val) return 0; panthor_pwr_write_command(ptdev, pwr_cmd, mask); - ret = gpu_read64_poll_timeout(ptdev, ready_reg, val, (mask & val) == expected_val, 100, - timeout_us); + ret = gpu_read64_poll_timeout(ptdev->iomem, ready_reg, val, (mask & val) == expected_val, + 100, timeout_us); if (ret) { drm_err(&ptdev->base, "timeout waiting on %s power domain transition, cmd(0x%x), arg(0x%llx)", @@ -279,7 +279,7 @@ static int panthor_pwr_domain_transition(struct panthor_device *ptdev, u32 cmd, static int retract_domain(struct panthor_device *ptdev, u32 domain) { const u32 pwr_cmd = PWR_COMMAND_DEF(PWR_COMMAND_RETRACT, domain, 0); - const u64 pwr_status = gpu_read64(ptdev, PWR_STATUS); + const u64 pwr_status = gpu_read64(ptdev->iomem, PWR_STATUS); const u64 delegated_mask = PWR_STATUS_DOMAIN_DELEGATED(domain); const u64 allow_mask = PWR_STATUS_DOMAIN_ALLOWED(domain); u64 val; @@ -288,8 +288,9 @@ static int retract_domain(struct panthor_device *ptdev, u32 domain) if (drm_WARN_ON(&ptdev->base, domain == PWR_COMMAND_DOMAIN_L2)) return -EPERM; - ret = gpu_read64_poll_timeout(ptdev, PWR_STATUS, val, !(PWR_STATUS_RETRACT_PENDING & val), - 0, PWR_RETRACT_TIMEOUT_US); + ret = gpu_read64_poll_timeout(ptdev->iomem, 
PWR_STATUS, val, + !(PWR_STATUS_RETRACT_PENDING & val), 0, + PWR_RETRACT_TIMEOUT_US); if (ret) { drm_err(&ptdev->base, "%s domain retract pending", get_domain_name(domain)); return ret; @@ -306,7 +307,7 @@ static int retract_domain(struct panthor_device *ptdev, u32 domain) * On successful retraction * allow-flag will be set with delegated-flag being cleared. */ - ret = gpu_read64_poll_timeout(ptdev, PWR_STATUS, val, + ret = gpu_read64_poll_timeout(ptdev->iomem, PWR_STATUS, val, ((delegated_mask | allow_mask) & val) == allow_mask, 10, PWR_TRANSITION_TIMEOUT_US); if (ret) { @@ -333,7 +334,7 @@ static int retract_domain(struct panthor_device *ptdev, u32 domain) static int delegate_domain(struct panthor_device *ptdev, u32 domain) { const u32 pwr_cmd = PWR_COMMAND_DEF(PWR_COMMAND_DELEGATE, domain, 0); - const u64 pwr_status = gpu_read64(ptdev, PWR_STATUS); + const u64 pwr_status = gpu_read64(ptdev->iomem, PWR_STATUS); const u64 allow_mask = PWR_STATUS_DOMAIN_ALLOWED(domain); const u64 delegated_mask = PWR_STATUS_DOMAIN_DELEGATED(domain); u64 val; @@ -362,7 +363,7 @@ static int delegate_domain(struct panthor_device *ptdev, u32 domain) * On successful delegation * allow-flag will be cleared with delegated-flag being set. */ - ret = gpu_read64_poll_timeout(ptdev, PWR_STATUS, val, + ret = gpu_read64_poll_timeout(ptdev->iomem, PWR_STATUS, val, ((delegated_mask | allow_mask) & val) == delegated_mask, 10, PWR_TRANSITION_TIMEOUT_US); if (ret) { @@ -410,7 +411,7 @@ static int panthor_pwr_delegate_domains(struct panthor_device *ptdev) */ static int panthor_pwr_domain_force_off(struct panthor_device *ptdev, u32 domain) { - const u64 domain_ready = gpu_read64(ptdev, get_domain_ready_reg(domain)); + const u64 domain_ready = gpu_read64(ptdev->iomem, get_domain_ready_reg(domain)); int ret; /* Domain already powered down, early exit. 
*/ @@ -471,7 +472,7 @@ int panthor_pwr_init(struct panthor_device *ptdev) int panthor_pwr_reset_soft(struct panthor_device *ptdev) { - if (!(gpu_read64(ptdev, PWR_STATUS) & PWR_STATUS_ALLOW_SOFT_RESET)) { + if (!(gpu_read64(ptdev->iomem, PWR_STATUS) & PWR_STATUS_ALLOW_SOFT_RESET)) { drm_err(&ptdev->base, "RESET_SOFT not allowed"); return -EOPNOTSUPP; } @@ -482,7 +483,7 @@ int panthor_pwr_reset_soft(struct panthor_device *ptdev) void panthor_pwr_l2_power_off(struct panthor_device *ptdev) { const u64 l2_allow_mask = PWR_STATUS_DOMAIN_ALLOWED(PWR_COMMAND_DOMAIN_L2); - const u64 pwr_status = gpu_read64(ptdev, PWR_STATUS); + const u64 pwr_status = gpu_read64(ptdev->iomem, PWR_STATUS); /* Abort if L2 power off constraints are not satisfied */ if (!(pwr_status & l2_allow_mask)) { @@ -508,7 +509,7 @@ void panthor_pwr_l2_power_off(struct panthor_device *ptdev) int panthor_pwr_l2_power_on(struct panthor_device *ptdev) { - const u32 pwr_status = gpu_read64(ptdev, PWR_STATUS); + const u32 pwr_status = gpu_read64(ptdev->iomem, PWR_STATUS); const u32 l2_allow_mask = PWR_STATUS_DOMAIN_ALLOWED(PWR_COMMAND_DOMAIN_L2); int ret; diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c index 41d6369fa9c0..55660d9056ed 100644 --- a/drivers/gpu/drm/panthor/panthor_sched.c +++ b/drivers/gpu/drm/panthor/panthor_sched.c @@ -3372,7 +3372,7 @@ queue_run_job(struct drm_sched_job *sched_job) if (resume_tick) sched_resume_tick(ptdev); - gpu_write(ptdev, CSF_DOORBELL(queue->doorbell_id), 1); + gpu_write(ptdev->iomem, CSF_DOORBELL(queue->doorbell_id), 1); if (!sched->pm.has_ref && !(group->blocked_queues & BIT(job->queue_idx))) { pm_runtime_get(ptdev->base.dev); -- 2.43.0