From: Claude Code Review Bot <claude-review@example.com>
To: dri-devel-reviews@example.com
Subject: Claude review: accel/amdxdna: Fix dead lock for suspend and resume
Date: Wed, 11 Feb 2026 16:16:32 +1000
Message-ID: <review-patch1-20260210191626.311976-1-lizhi.hou@amd.com>
In-Reply-To: <20260210191626.311976-1-lizhi.hou@amd.com>

Patch Review

**Commit Message Analysis:**
- Clearly describes the deadlock scenario
- Missing: Fixes tag, impact description, testing methodology
- Missing: whether the deadlock has been observed in practice or is only theoretical
- Missing: Alternative approaches considered

**Code Review:**

#### 1. Core Lock Inversion Fix (amdxdna_pm.h:337-346)

```c
+static inline int amdxdna_pm_resume_get_locked(struct amdxdna_dev *xdna)
+{
+	int ret;
+
+	mutex_unlock(&xdna->dev_lock);
+	ret = amdxdna_pm_resume_get(xdna);
+	mutex_lock(&xdna->dev_lock);
+
+	return ret;
+}
```

**Issues:**
1. **No lockdep annotation** - Should use `lockdep_assert_held()` to verify caller holds lock
2. **No validation after relock** - Device state could change while lock was dropped:
   - Device could be removed (hot-unplug)
   - Suspend could complete and immediately start again
   - Hardware could be reset
3. **Error path unclear** - If `amdxdna_pm_resume_get()` fails, the lock is reacquired but the device may still be suspended
4. **Window of vulnerability** - Other threads can acquire `dev_lock` between the unlock and the relock and modify device state

**Suggested improvements:**
```c
static inline int amdxdna_pm_resume_get_locked(struct amdxdna_dev *xdna)
{
	int ret;

	lockdep_assert_held(&xdna->dev_lock);

	mutex_unlock(&xdna->dev_lock);
	ret = amdxdna_pm_resume_get(xdna);
	mutex_lock(&xdna->dev_lock);

	/* TODO: Validate device state unchanged? */
	return ret;
}
```
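
If the TODO above is acted on, one possible shape for the post-relock tail is sketched below. The `dev_status` field, the `AMDXDNA_DEV_UNPLUGGED` value, and the `amdxdna_pm_suspend_put()` helper are placeholders for whatever state tracking and put helper the driver actually provides; this is a sketch, not the driver's API:

```c
	mutex_unlock(&xdna->dev_lock);
	ret = amdxdna_pm_resume_get(xdna);
	mutex_lock(&xdna->dev_lock);
	if (ret)
		return ret;

	/*
	 * The device may have been unplugged or re-suspended while dev_lock
	 * was dropped; dev_status and AMDXDNA_DEV_UNPLUGGED are hypothetical.
	 */
	if (xdna->dev_status == AMDXDNA_DEV_UNPLUGGED) {
		amdxdna_pm_suspend_put(xdna);	/* matching put helper, name assumed */
		return -ENODEV;
	}

	return 0;
}
```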

#### 2. Suspend Callback Changes (aie2_pci.c:193-199)

```c
 static int aie2_hw_suspend(struct amdxdna_dev *xdna)
 {
 	struct amdxdna_client *client;
 
-	guard(mutex)(&xdna->dev_lock);
 	list_for_each_entry(client, &xdna->client_list, node)
 		aie2_hwctx_suspend(client);
```

**Critical Issue:**
- **Removes lock protection from the client_list traversal**
- Within this function, `client_list` is now walked **without any visible lock protection**
- This risks a **use-after-free** - clients can be added or removed during the iteration
- `list_for_each_entry()` provides no synchronization of its own (a defensive sketch follows below)
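
Whatever the final locking scheme turns out to be, a `lockdep_assert_held()` inside `aie2_hw_suspend()` would make the protection requirement explicit and, with CONFIG_PROVE_LOCKING, catch an unlocked walk at runtime. A minimal sketch; the tail of the function and its return value are elided/assumed here:

```c
static int aie2_hw_suspend(struct amdxdna_dev *xdna)
{
	struct amdxdna_client *client;

	/* Document and (under CONFIG_PROVE_LOCKING) verify the expected lock. */
	lockdep_assert_held(&xdna->dev_lock);

	list_for_each_entry(client, &xdna->client_list, node)
		aie2_hwctx_suspend(client);

	/* ... rest of the suspend path as in the existing function ... */
	return 0;
}
```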

**The commit message says:** "acquire dev_lock in the resume callback to keep the locking consistent"

But the code **removes** the lock from suspend! This is the **opposite** of what's described.

**Where is the lock now?** Looking at amdxdna_pm.c:313-319:

```c
 int amdxdna_pm_suspend(struct device *dev)
 {
 	struct amdxdna_dev *xdna = to_xdna_dev(dev_get_drvdata(dev));
 	int ret = -EOPNOTSUPP;
 
+	guard(mutex)(&xdna->dev_lock);
 	if (xdna->dev_info->ops->suspend)
 		ret = xdna->dev_info->ops->suspend(xdna);
```

**Ah!** The lock moved **up** to the PM callback wrapper. This is actually correct, but:
- The lock is now held across the **entire** suspend/resume operation
- This is the **original deadlock** scenario - holding dev_lock while in PM callbacks
- This **does not fix the deadlock**, it makes it **worse**

**Deadlock scenario still exists:**
1. IOCTL thread: holds dev_lock → calls `pm_runtime_resume_and_get()`, which must wait for any in-flight runtime suspend to finish
2. Runtime PM: that in-flight suspend is `amdxdna_pm_suspend()`, which now starts with `guard(mutex)(&xdna->dev_lock)`
3. `amdxdna_pm_suspend()`: blocks trying to acquire dev_lock held by the IOCTL thread → **DEADLOCK** (spelled out as a timeline below)
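
The same scenario written out as two concurrent call chains (function names as used elsewhere in this review; PM-core behavior is summarized in the comment, not quoted):

```c
/*
 * Thread A (ioctl path)                  Thread B (runtime-PM suspend)
 *
 * mutex_lock(&xdna->dev_lock);
 *                                        amdxdna_pm_suspend(dev)
 *                                          guard(mutex)(&xdna->dev_lock);
 *                                          -> blocks: A holds dev_lock
 * amdxdna_pm_resume_get(xdna);
 *   -> pm_runtime_resume_and_get()
 *      waits for B's in-flight suspend
 *      to complete -> it never does
 *
 * Circular wait: A waits on B's suspend, B waits on A's dev_lock.
 */
```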

#### 3. IOCTL Path Conversions (aie2_pci.c, aie2_ctx.c, aie2_pm.c)

Multiple callsites changed from `amdxdna_pm_resume_get()` to `amdxdna_pm_resume_get_locked()`:

```c
-	ret = amdxdna_pm_resume_get(xdna);
+	ret = amdxdna_pm_resume_get_locked(xdna);
```

**Issue:**
- These callsites already hold `dev_lock` (which is exactly why the `_locked` variant is needed)
- After the unlock/lock cycle, **no validation** that:
  - Hardware context still valid
  - Device still present
  - Resources not freed
  
**Example - aie2_ctx.c:629-678 (aie2_hwctx_init):**

```c
ret = amdxdna_pm_resume_get_locked(xdna);  // Drops & reacquires dev_lock
if (ret)
	goto free_col_list;

// Device could have been suspended/removed while lock was dropped
ret = xdna->dev_info->ops->hwctx_config(hwctx, ...); // Uses hardware!
```

**Race condition:** Hardware could be in suspend state or removed entirely.
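
One way to narrow that window would be a hot-unplug guard around the hardware access after the helper returns. `drm_dev_enter()`/`drm_dev_exit()` is the standard DRM mechanism for this, but the `ddev` member name and the `put_pm` unwind label below are assumptions made for this sketch only:

```c
	int idx;

	ret = amdxdna_pm_resume_get_locked(xdna);	/* drops and reacquires dev_lock */
	if (ret)
		goto free_col_list;

	/* Re-check that the device was not unplugged while dev_lock was dropped. */
	if (!drm_dev_enter(&xdna->ddev, &idx)) {	/* "ddev" field name assumed */
		ret = -ENODEV;
		goto put_pm;				/* hypothetical unwind label */
	}

	ret = xdna->dev_info->ops->hwctx_config(hwctx, ...);
	drm_dev_exit(idx);
```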

#### 4. Unrelated SRCU Removal (amdxdna_ctx.c:260-279)

```c
-	mutex_lock(&xdna->dev_lock);
-	idx = srcu_read_lock(&client->hwctx_srcu);
+	guard(mutex)(&xdna->dev_lock);
 	hwctx = xa_load(&client->hwctx_xa, args->handle);
 	if (!hwctx) {
 		XDNA_DBG(xdna, "PID %d failed to get hwctx %d", client->pid, args->handle);
 		ret = -EINVAL;
-		goto unlock_srcu;
+		goto free_buf;
 	}
 
 	ret = xdna->dev_info->ops->hwctx_config(hwctx, args->param_type, val, buf, buf_size);
 
-unlock_srcu:
-	srcu_read_unlock(&client->hwctx_srcu, idx);
```

**Critical Issue:**
- **Removes SRCU read-side critical section** 
- SRCU protects against hwctx being freed during use
- Now relying **only** on `dev_lock` - but we just introduced code that **drops** dev_lock!
- This creates **use-after-free** vulnerability:
  1. Thread A: loads hwctx from xa_load()
  2. Thread A: calls `hwctx_config()` which calls `amdxdna_pm_resume_get_locked()`
  3. Thread A: drops dev_lock in `_locked()` helper
  4. Thread B: acquires dev_lock, destroys hwctx, frees memory
  5. Thread A: reacquires dev_lock, returns from `_locked()`
  6. Thread A: continues using freed hwctx → **use-after-free**

**This removal is WRONG and introduces memory safety bugs.**

The same issue exists in `amdxdna_hwctx_sync_debug_bo()` at amdxdna_ctx.c:308-331.
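
For contrast, the pre-patch shape that this section argues should be kept, reconstructed from the removed lines above (debug message and error handling trimmed; the placement of the final unlock is assumed, since the diff does not show it):

```c
	mutex_lock(&xdna->dev_lock);
	idx = srcu_read_lock(&client->hwctx_srcu);

	hwctx = xa_load(&client->hwctx_xa, args->handle);
	if (!hwctx) {
		ret = -EINVAL;
		goto unlock_srcu;
	}

	ret = xdna->dev_info->ops->hwctx_config(hwctx, args->param_type, val, buf, buf_size);

unlock_srcu:
	srcu_read_unlock(&client->hwctx_srcu, idx);
	mutex_unlock(&xdna->dev_lock);
```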

#### 5. Missing Error Handling

Throughout the patch, when `amdxdna_pm_resume_get_locked()` fails, the caller continues with `dev_lock` held while the device remains suspended. Should there be cleanup or state validation?

**Summary of Technical Issues:**

| Issue | Severity | Location |
|-------|----------|----------|
| client_list traversal without lock | **CRITICAL** | aie2_pci.c:198 |
| SRCU removal creates use-after-free | **CRITICAL** | amdxdna_ctx.c:261-274 |
| Deadlock still possible | **HIGH** | amdxdna_pm.c:313-327 |
| No state validation after relock | **HIGH** | All `_locked()` callsites |
| No lockdep annotations | **MEDIUM** | amdxdna_pm.h:337 |
| Missing Fixes tag | **LOW** | Commit message |

**Fundamental Design Flaw:**

The approach of dropping locks to avoid deadlock is **not correct**. The proper solutions are:

1. **Refcount-based approach**: Take the PM reference before acquiring locks (see the sketch after this list)
2. **Separate locks**: Use different locks for PM and device state
3. **Lock ordering**: Define and enforce consistent lock ordering
4. **PM workqueues**: Handle PM transitions asynchronously
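
A rough sketch of option 1: take the PM reference before any lock, so the runtime-PM callbacks never contend with a `dev_lock` holder. The ioctl shape and the `amdxdna_pm_put()` name are illustrative assumptions, not this driver's actual API:

```c
static int amdxdna_some_ioctl(struct amdxdna_dev *xdna)
{
	int ret;

	/* Resume (and pin) the device first, with no locks held. */
	ret = amdxdna_pm_resume_get(xdna);
	if (ret)
		return ret;

	scoped_guard(mutex, &xdna->dev_lock) {
		/* ... device/hwctx state manipulation, all under dev_lock ... */
	}

	amdxdna_pm_put(xdna);	/* matching put; exact helper name assumed */
	return 0;
}
```

With this ordering, `amdxdna_pm_suspend()` can keep its `guard(mutex)` because no thread ever waits on runtime PM while holding `dev_lock`.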

**Recommendation: NAK**

This patch introduces more bugs than it fixes:
- Use-after-free from SRCU removal
- Potential use-after-free from client_list access
- Race conditions from lock dropping
- May not even fix the original deadlock

The driver needs a **comprehensive locking redesign**, not a tactical patch.

---
Generated by Claude Code Patch Reviewer
