From mboxrd@z Thu Jan 1 00:00:00 1970
From: Claude Code Review Bot
To: dri-devel-reviews@example.com
Subject: Claude review: dma-mapping: introduce DMA_ATTR_CC_DECRYPTED for pre-decrypted memory
Date: Tue, 24 Feb 2026 10:16:38 +1000
Message-ID: 
In-Reply-To: <20260223095136.225277-2-jiri@resnulli.us>
References: <20260223095136.225277-1-jiri@resnulli.us> <20260223095136.225277-2-jiri@resnulli.us>
X-Mailer: Claude Code Patch Reviewer
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Patch Review

The new attribute definition and trace event addition are straightforward.

> +	if (is_swiotlb_force_bounce(dev)) {
> +		if (!(attrs & DMA_ATTR_CC_DECRYPTED)) {
> +			if (attrs & DMA_ATTR_MMIO)
> +				return DMA_MAPPING_ERROR;
> +
> +			return swiotlb_map(dev, phys, size, dir, attrs);
> +		}
> +	} else if (attrs & DMA_ATTR_CC_DECRYPTED) {
> +		return DMA_MAPPING_ERROR;
> 	}

The logic here is correct: DMA_ATTR_CC_DECRYPTED bypasses bounce buffering when force-bounce is active (the CoCo case), and the mapping fails when force-bounce is not active, since the attribute makes no sense outside a confidential guest. The existing MMIO and swiotlb paths are unchanged when DMA_ATTR_CC_DECRYPTED is not set.

> +	} else if (attrs & DMA_ATTR_CC_DECRYPTED) {
> +		dma_addr = phys_to_dma_unencrypted(dev, phys);
> +		if (unlikely(!dma_capable(dev, dma_addr, size, false)))
> +			goto err_overflow;

Using `phys_to_dma_unencrypted()` here is correct: it produces a DMA address without the encryption bit, matching how swiotlb buffers are mapped. Passing `false` to `dma_capable()` (no need to check against the encryption-adjusted mask) follows the same pattern as MMIO.

One thing worth discussing: `dma_direct_unmap_phys()` is not updated to handle DMA_ATTR_CC_DECRYPTED. The unmap path calls `dma_to_phys(dev, addr)` on the unencrypted DMA address and then calls `swiotlb_tbl_unmap_single()`.
This works because `dma_addr_canonical()` correctly strips encryption bits and `swiotlb_tbl_unmap_single()` will find no matching pool and return. But this differs slightly from MMIO, which gets an early return. Is there a reason not to add a similar early return for DMA_ATTR_CC_DECRYPTED, or should the CPU sync still run for these pages (which are cacheable, unlike MMIO)?

Similarly, the `arch_sync_dma_for_device()` call after the mapping succeeds does not exclude DMA_ATTR_CC_DECRYPTED:

> 	if (!dev_is_dma_coherent(dev) &&
> 	    !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO)))
> 		arch_sync_dma_for_device(phys, size, dir);

Since DMA_ATTR_CC_DECRYPTED pages are normal cacheable memory, including them in the sync is the right behavior. This is fine.

---
Generated by Claude Code Patch Reviewer