From mboxrd@z Thu Jan  1 00:00:00 1970
From: Claude Code Review Bot
To: dri-devel-reviews@example.com
Subject: Claude review: dma-buf: heaps: system: add system_cc_decrypted
 heap for explicitly decrypted memory
Date: Tue, 24 Feb 2026 10:16:38 +1000
Message-ID: 
In-Reply-To: <20260223095136.225277-3-jiri@resnulli.us>
References: <20260223095136.225277-1-jiri@resnulli.us>
 <20260223095136.225277-3-jiri@resnulli.us>
X-Mailer: Claude Code Patch Reviewer
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Patch Review

The overall structure is reasonable: a `system_heap_priv` structure
distinguishes the two heap variants, and the code correctly applies
`pgprot_decrypted()` for mmap and vmap, `DMA_ATTR_CC_DECRYPTED` for DMA
mapping, and `set_memory_decrypted()`/`set_memory_encrypted()` for
allocation and free.

**Partial decryption error path bug:**

> +	if (decrypted) {
> +		for_each_sgtable_sg(table, sg, i) {
> +			ret = system_heap_set_page_decrypted(sg_page(sg));
> +			if (ret)
> +				goto free_pages;
> +		}
> +	}

If `system_heap_set_page_decrypted()` fails at entry K (having
successfully decrypted entries 0 through K-1), the code jumps to
`free_pages`:

> free_pages:
> 	for_each_sgtable_sg(table, sg, i) {
> 		struct page *p = sg_page(sg);
>
> +		if (buffer->decrypted &&
> +		    system_heap_set_page_encrypted(p))
> +			continue;
> 		__free_pages(p, compound_order(p));
> 	}

The `for_each_sgtable_sg` macro reinitializes `i` to 0, so this
iterates all pages from the beginning. Pages K through the end were
never decrypted, yet `set_memory_encrypted()` is called on them. On
CoCo VMs, calling `set_memory_encrypted()` on a page that was never
transitioned to shared may fail (the hypervisor validates state
transitions). If it fails, the page is leaked despite being perfectly
fine to free. This should be fixed by tracking how many pages were
successfully decrypted so the cleanup only re-encrypts those.
**Intentional page leak and DoS concern:**

> +	/*
> +	 * Intentionally leak pages that cannot be re-encrypted
> +	 * to prevent decrypted memory from being reused.
> +	 */
> +	if (buffer->decrypted &&
> +	    system_heap_set_page_encrypted(page))
> +		continue;

John Stultz already raised this in his reply. If
`set_memory_encrypted()` can fail under stress, a malicious guest
process could repeatedly allocate from this heap and trigger leaks.
Would it be worth maintaining a list of leaked pages so the kernel can
retry re-encryption later, or is the expectation that
`set_memory_encrypted()` essentially never fails in practice? The
design decision to leak rather than free decrypted memory is correct
for security, but the recoverability question deserves an answer.

**HIGHMEM guard:**

> +	if (IS_ENABLED(CONFIG_HIGHMEM) ||
> +	    !cc_platform_has(CC_ATTR_MEM_ENCRYPT))
> +		return 0;

This correctly prevents registration of the cc_decrypted heap when
pages might not have direct map addresses (HIGHMEM) or when memory
encryption is not active. Good.

**Unnecessary UAPI changes:**

> -/* Currently no heap flags */
> -#define DMA_HEAP_VALID_HEAP_FLAGS (0ULL)
> +#define DMA_HEAP_VALID_HEAP_FLAGS (0)

And:

> +#include

As John Stultz noted, these changes appear to be leftovers from v1
(which used flags). In v2, the approach is a separate heap with no new
flags, so no UAPI header changes should be needed. The `0ULL` to `0`
change also subtly changes the type of the macro: `~(0)` is a negative
`int`, but converting it to a 64-bit unsigned type yields the same
all-ones value as `~(0ULL)`, so the difference is latent rather than an
immediate bug.

---
Generated by Claude Code Patch Reviewer