From mboxrd@z Thu Jan 1 00:00:00 1970
From: Claude Code Review Bot
To: dri-devel-reviews@example.com
Subject: Claude review: nvme-pci: implement dma_token backed requests
Date: Tue, 05 May 2026 11:26:01 +1000
Message-ID:
In-Reply-To: <5cecb1157ab784f9f303a91449fdf11b03aa6002.1777475843.git.asml.silence@gmail.com>
References: <5cecb1157ab784f9f303a91449fdf11b03aa6002.1777475843.git.asml.silence@gmail.com>
X-Mailer: Claude Code Patch Reviewer
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Patch Review

This is the largest patch in the series, implementing the NVMe PCI backend for dmabuf-backed I/O.

**Bug: Wrong `sizeof` in `nvme_create_dmabuf_token`**

```c
data = kzalloc(sizeof(data), GFP_KERNEL);
```

This allocates `sizeof(struct nvme_dmabuf_token *)` (a pointer, 8 bytes on 64-bit) instead of `sizeof(struct nvme_dmabuf_token)` (the struct itself). It should be:

```c
data = kzalloc(sizeof(*data), GFP_KERNEL);
```

`struct nvme_dmabuf_token` contains a `struct dma_buf_attachment *`, which is also 8 bytes, so on 64-bit this happens to allocate just enough memory by coincidence. It is still wrong and fragile: as soon as the struct grows past pointer size, the allocation becomes too small and writes past it corrupt memory.

**Resource leak: `nvme_create_dmabuf_token` error path**

```c
data = kzalloc(sizeof(data), GFP_KERNEL);
if (!data)
	return -ENOMEM;

token->dev_priv = data;
token->dev_ops = &nvme_dma_token_ops;

attach = dma_buf_dynamic_attach(dmabuf, dev->dev,
				&nvme_dmabuf_importer_ops, token);
if (IS_ERR(attach))
	return PTR_ERR(attach);
```

If `dma_buf_dynamic_attach()` fails, `data` is leaked: it was `kzalloc()`'d but never freed. The caller (`io_dmabuf_token_create`) does `memset(token, 0, sizeof(*token))` and `dma_buf_put(dmabuf)` on failure, but `data` is orphaned. The error path should `kfree(data)` before returning.

**Potential out-of-bounds in `nvme_dmabuf_token_map`**

```c
nr_entries = token->dmabuf->size / NVME_CTRL_PAGE_SIZE;
dma_list = kmalloc_array(nr_entries, sizeof(dma_list[0]), GFP_KERNEL);
...
	while (sg_len) {
		dma_list[i++] = dma_addr;
```

If the scatter-gather entries don't line up exactly with the pre-computed `nr_entries`, `i` can exceed `nr_entries`. The `sg_len % NVME_CTRL_PAGE_SIZE` check prevents this for individual entries, but nothing verifies that the total SG length matches `dmabuf->size`. If the DMA mapping coalesces or splits pages differently than assumed, the write to `dma_list[i++]` overflows the allocation.

**DMA sync granularity**

```c
while (length > 0) {
	u64 dma_addr = dma_list[map_idx++];
	...
	length -= NVME_CTRL_PAGE_SIZE;
}
```

The sync loop always syncs full `NVME_CTRL_PAGE_SIZE` chunks, even for the last partial page. That is fine for correctness (syncing extra bytes is harmless). A partial first page could otherwise leave `length` short and terminate the loop early, but the `length += offset & (NVME_CTRL_PAGE_SIZE - 1)` adjustment before the loop already accounts for that, so no change is needed here.

---

Generated by Claude Code Patch Reviewer