From: Claude Code Review Bot <claude-review@example.com>
To: dri-devel-reviews@example.com
Subject: Claude review: misc: fastrpc: Add polling mode support for fastRPC driver
Date: Thu, 23 Apr 2026 07:50:44 +1000
Message-ID: <review-patch4-20260422092409.4107093-5-ekansh.gupta@oss.qualcomm.com>
In-Reply-To: <20260422092409.4107093-5-ekansh.gupta@oss.qualcomm.com>

Patch Review

**Assessment: Has several issues that need addressing.**

**Issue 1 (Build break): `of_machine_get_match` does not exist**

```c
data->poll_mode_supported = soc_data->poll_mode_supported ||
    of_machine_get_match(fastrpc_poll_supported_machines);
```

The kernel has `of_machine_device_match()` (returns `bool`) and `of_machine_get_match_data()` (returns `const void *`). Since you only need a boolean "is this machine in the list", use `of_machine_device_match()`:

```c
data->poll_mode_supported = soc_data->poll_mode_supported ||
    of_machine_device_match(fastrpc_poll_supported_machines);
```

**Issue 2: `readl_poll_timeout_atomic` on DMA buffer memory**

```c
ret = readl_poll_timeout_atomic(ctx->poll_addr, val,
                (val == FASTRPC_POLL_RESPONSE) || ctx->is_work_done, 1,
                FASTRPC_POLL_MAX_TIMEOUT_US);
```

`ctx->poll_addr` points into `ctx->buf->virt`, a DMA coherent buffer allocated via `dma_alloc_coherent()`, not an MMIO register. `readl()` takes an `__iomem` pointer (sparse will flag the type mismatch) and expands to arch-specific I/O accessors and barriers that are inappropriate for regular memory.

Use `read_poll_timeout(READ_ONCE, ...)` or a custom polling loop with `READ_ONCE()` instead. This also avoids the atomic context restriction (see Issue 3).
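A minimal sketch of that replacement, assuming `ctx->poll_addr` points at a `u32` inside the coherent buffer (as in the patch):

```c
/* Plain-memory poll: READ_ONCE() is an ordinary volatile load, which is
 * the right primitive for dma_alloc_coherent() memory. read_poll_timeout()
 * sleeps between reads, so it must run in process context. */
ret = read_poll_timeout(READ_ONCE, val,
			(val == FASTRPC_POLL_RESPONSE) || READ_ONCE(ctx->is_work_done),
			1, FASTRPC_POLL_MAX_TIMEOUT_US, false,
			*(u32 *)ctx->poll_addr);
```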

**Issue 3: 10ms atomic busy-wait**

`readl_poll_timeout_atomic` busy-waits with `udelay()` between reads and never sleeps, so the polling CPU spins for up to the full timeout. With `FASTRPC_POLL_MAX_TIMEOUT_US = 10000` (10 ms), that is a very long busy-wait. The non-atomic variants (`readl_poll_timeout` / `read_poll_timeout`) sleep with `usleep_range()` between reads, which is much more scheduler-friendly, and this code path is in process context (an ioctl handler), so sleeping is fine.
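If the iopoll macros do not fit, the custom loop mentioned in Issue 2 could look like this (a sketch reusing field names from the patch; process context only):

```c
/* Sleeps between reads, so it is scheduler-friendly. */
u64 deadline = ktime_get_ns() + FASTRPC_POLL_MAX_TIMEOUT_US * NSEC_PER_USEC;
u32 val;

for (;;) {
	val = READ_ONCE(*(u32 *)ctx->poll_addr);
	if (val == FASTRPC_POLL_RESPONSE || READ_ONCE(ctx->is_work_done))
		break;
	if (ktime_get_ns() > deadline)
		return -ETIMEDOUT;
	usleep_range(1, 10);
}
```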

**Issue 4: Data race on `ctx->is_work_done`**

`ctx->is_work_done` is read in the polling condition and written from `fastrpc_rpmsg_callback` (interrupt context) without any synchronization:

```c
// In poll_for_remote_response (process context):
(val == FASTRPC_POLL_RESPONSE) || ctx->is_work_done

// In fastrpc_rpmsg_callback (interrupt context):
ctx->is_work_done = true;
```

Plain accesses to a shared flag can be torn, hoisted out of the loop, or cached in a register by the compiler. This should use `READ_ONCE(ctx->is_work_done)` in the polling condition and `WRITE_ONCE(ctx->is_work_done, true)` in the callback (or an atomic flag).

Similarly in `poll_for_remote_response`:
```c
if (!ret && val == FASTRPC_POLL_RESPONSE) {
    ctx->is_work_done = true;
```
This write should also use `WRITE_ONCE`.
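Put together, the suggested pairing looks like this (a sketch; field and function names are from the patch):

```c
// In fastrpc_rpmsg_callback (interrupt context): publish completion.
WRITE_ONCE(ctx->is_work_done, true);

// In poll_for_remote_response (process context):
if (!ret && val == FASTRPC_POLL_RESPONSE)
	WRITE_ONCE(ctx->is_work_done, true);

// Polling condition:
(val == FASTRPC_POLL_RESPONSE) || READ_ONCE(ctx->is_work_done)
```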

**Issue 5: `ctx->retval = 0` in poll path may race with callback**

In `poll_for_remote_response`:
```c
if (!ret && val == FASTRPC_POLL_RESPONSE) {
    ctx->is_work_done = true;
    ctx->retval = 0;
}
```

The callback also sets `ctx->retval = rsp->retval`. If the DSP writes the poll response *and* sends an rpmsg response, both paths could write `retval` concurrently. The poll path hardcodes `retval = 0`, which may not match the actual DSP return value. If the DSP puts the real return value in the rpmsg response, blindly setting it to 0 in the poll path loses error information.
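One way to close the write race, sketched under the assumption that `ctx->is_work_done` is converted to an `atomic_t`: let whichever path completes first own `retval`.

```c
/* Whichever path transitions the flag 0 -> 1 owns ctx->retval; the rpmsg
 * callback would perform the same cmpxchg before writing rsp->retval.
 * Note the poll path still hardcodes 0 and so still loses the real DSP
 * return code; the protocol needs to define where that value lives. */
if (!ret && val == FASTRPC_POLL_RESPONSE) {
	if (atomic_cmpxchg(&ctx->is_work_done, 0, 1) == 0)
		ctx->retval = 0;
}
```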

**Issue 6: Missing `dma_rmb()` after polling**

The existing code has a `dma_rmb()` barrier between receiving the completion and calling `fastrpc_put_args()`:
```c
/* make sure that all memory writes by DSP are seen by CPU */
dma_rmb();
```

After the polling path succeeds, the same barrier is needed to ensure the CPU sees all DSP writes to the output buffers. In the patched flow this holds: `fastrpc_wait_for_completion()` returns 0 and execution falls through to the existing `dma_rmb()` and `fastrpc_put_args()`, so the polling-success path still hits the barrier. No change needed, but worth keeping in mind if the flow is ever restructured.

**Issue 7: `__maybe_unused` on the match table**

```c
static const struct of_device_id fastrpc_poll_supported_machines[] __maybe_unused = {
```

The annotation covers the `!CONFIG_OF` case: there, `of_machine_device_match()` is a stub returning `false`, so the table would indeed go unused. A cleaner approach would be to wrap the call in `IS_ENABLED(CONFIG_OF)`, or simply accept that this driver practically requires OF.
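A sketch of the `IS_ENABLED()` variant, reusing the probe assignment from Issue 1:

```c
/* IS_ENABLED(CONFIG_OF) keeps the table syntactically referenced, so
 * __maybe_unused can be dropped; the compiler eliminates the dead branch
 * (and the table) when OF is disabled. */
data->poll_mode_supported = soc_data->poll_mode_supported ||
	(IS_ENABLED(CONFIG_OF) &&
	 of_machine_device_match(fastrpc_poll_supported_machines));
```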

**Issue 8: IOCTL number 12 skipped for `SET_OPTION`**

The existing header has ioctl numbers 1-11 and then 13. The new `FASTRPC_IOCTL_SET_OPTION` is assigned number 12, which fills the gap. This is fine and intentional.

**Issue 9: `FASTRPC_POLL_MODE` value documentation says "handle > 20" but code uses `FASTRPC_MAX_STATIC_HANDLE`**

The UAPI header comment says:
```c
 * - Only applies to dynamic modules (handle > 20)
```

And the code uses:
```c
if (handle > FASTRPC_MAX_STATIC_HANDLE && ...)
```

`FASTRPC_MAX_STATIC_HANDLE` is defined as 20, so the two are consistent, but hardcoding "20" in the UAPI comment is fragile. Consider referencing the macro name instead.
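For instance, the comment could read (a sketch; `FASTRPC_MAX_STATIC_HANDLE` currently lives in the driver, so it would either need to move into the UAPI header or be mentioned by name only):

```c
 * - Only applies to dynamic modules (handle > FASTRPC_MAX_STATIC_HANDLE)
```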

**Minor: `fastrpc_wait_for_response` is marked `inline`**

```c
static inline int fastrpc_wait_for_response(...)
```

The `inline` keyword is unnecessary — the compiler will decide. Kernel style generally avoids explicit `inline` on static functions unless there's a specific reason.

---
Generated by Claude Code Patch Reviewer
