public inbox for drm-ai-reviews@public-inbox.freedesktop.org
* [PATCH v8 0/3] Virtio-GPU S4 (hibernation) support
@ 2026-04-29 20:47 dongwon.kim
  2026-04-29 20:47 ` [PATCH v8 1/3] drm/virtio: Freeze and restore hooks to support suspend and resume dongwon.kim
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: dongwon.kim @ 2026-04-29 20:47 UTC (permalink / raw)
  To: dri-devel, airlied, kraxel, dmitry.osipenko

From: Dongwon Kim <dongwon.kim@intel.com>

This patch series implements the .freeze and .restore hooks for the virtio-gpu 
driver, alongside a PM notifier to manage object restoration during S4 
(hibernation) cycles.

The series is structured as follows:

1. The first patch introduces `virtgpu_freeze` and `virtgpu_restore`. These
   functions handle the teardown of virtio queues before suspension and their
   subsequent recreation during the resume process.

2. The second patch implements a tracking and restoration mechanism for
   `virtio_gpu_object` instances. This is essential because the host (e.g., QEMU)
   destroys all associated graphics resources during the virtio-gpu reset
   triggered by the hibernation/resume cycle.

3. The third patch adds a PM notifier to handle the resubmission of these
   tracked virtio-gpu objects to the host once the guest restores its
   virtqueues.

Together, these changes ensure that the virtio-gpu driver can survive 
hibernation without resource loss or invalid resource errors.

v2: - Added 10ms sleep in virtgpu_freeze to prevent lockups during resumption.

v3: - Replaced the fixed 10ms delay with wait calls that trigger once the 
      virtio queue is empty.
      (Dmitry Osipenko)

v4: - Limited the scope to S4 (hibernation). S3 resource loss can be avoided 
      via QEMU configurations (e.g., x-pcie-pm-no-soft-reset=true).
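
      For illustration, the QEMU-side S3 workaround mentioned above might
      look like the following invocation. This is a hedged sketch: the
      x-pcie-pm-no-soft-reset property belongs on the PCIe root port in
      front of the GPU, and the machine options, port ID, and display
      backend shown here are only illustrative.

```shell
# Sketch: place virtio-gpu behind a PCIe root port that skips the
# soft reset on PM transitions, so device state survives S3.
qemu-system-x86_64 \
    -machine q35 -enable-kvm -m 4G \
    -device pcie-root-port,id=rp1,x-pcie-pm-no-soft-reset=true \
    -device virtio-vga,bus=rp1 \
    -display gtk
```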

v5: - Implemented removal of objects from the restore list before freeing to 
      prevent a use-after-free situation.
      (Nirmoy Das)

    - Protected restore list operations with a spinlock.
      (Nirmoy Das)

    - Moved the restore list node into the virtio_gpu_bo struct to optimize 
      memory usage.
      (Dmitry Osipenko)

    - Removed unused header: drm_atomic_helper.h.
      (Dmitry Osipenko)

v6: - Added support for objects backed by imported dma-bufs.
      (Dmitry Osipenko)

    - Excluded virgl 3D objects from the restore list as they are currently 
      non-recoverable.
      (Dmitry Osipenko)

    - Renamed the list node to 'restore_node'.
      (Nirmoy Das)

    - Switched from spinlock to mutex for restore list updates.
      (Nirmoy Das)

    - Initialized restore_node at creation to allow safe 'list_empty' checks.

    - Introduced a 'hibernation' flag in the PM notifier to trigger restoration 
      at the correct stage in `virtgpu_restore`.

v7: - Added helper: virtio_gpu_add_object_to_restore_list.
      (Dmitry Osipenko)

    - Implemented unreferencing of all host-side objects before hibernation 
      to ensure a clean state upon resume.
      (Dmitry Osipenko)

v8: - Introduced virtio_gpu_remove_from_restore_list helper and ensured it is 
      called across all object destruction paths (General, Prime, and VRAM) 
      to prevent use-after-free.
      (Dmitry Osipenko)

    - Relocated restore list removal logic from the final cleanup stage to 
      the initial free stage for improved synchronization.
      (Dmitry Osipenko)

    - Added missing drm_print.h header.

Dongwon Kim (3):
  drm/virtio: Freeze and restore hooks to support suspend and resume
  drm/virtio: Add support for saving and restoring virtio_gpu_objects
  drm/virtio: Add PM notifier to restore objects after hibernation

 drivers/gpu/drm/virtio/virtgpu_drv.c    | 74 ++++++++++++++++++++-
 drivers/gpu/drm/virtio/virtgpu_drv.h    | 24 ++++++-
 drivers/gpu/drm/virtio/virtgpu_kms.c    | 54 +++++++++++++--
 drivers/gpu/drm/virtio/virtgpu_object.c | 88 ++++++++++++++++++++++++-
 drivers/gpu/drm/virtio/virtgpu_prime.c  | 45 ++++++++++++-
 drivers/gpu/drm/virtio/virtgpu_vq.c     | 13 +++-
 drivers/gpu/drm/virtio/virtgpu_vram.c   |  5 +-
 7 files changed, 289 insertions(+), 14 deletions(-)

-- 
2.34.1


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH v8 1/3] drm/virtio: Freeze and restore hooks to support suspend and resume
  2026-04-29 20:47 [PATCH v8 0/3] Virtio-GPU S4 (hibernation) support dongwon.kim
@ 2026-04-29 20:47 ` dongwon.kim
  2026-05-05  1:00   ` Claude review: " Claude Code Review Bot
  2026-04-29 20:47 ` [PATCH v8 2/3] drm/virtio: Add support for saving and restoring virtio_gpu_objects dongwon.kim
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 8+ messages in thread
From: dongwon.kim @ 2026-04-29 20:47 UTC (permalink / raw)
  To: dri-devel, airlied, kraxel, dmitry.osipenko

From: Dongwon Kim <dongwon.kim@intel.com>

The virtio device needs to delete all virtqueues before the VM suspends,
then reinitialize them upon resume.

v2: A 10ms sleep was added in virtgpu_freeze to avoid a situation where
    the driver locks up during resumption.

v3: Plain 10ms delay was replaced with wait calls which wait until
    the virtio queue is empty.
    (Dmitry Osipenko)

v4: Change wait_event to wait_event_timeout to prevent an indefinite wait
    (Nirmoy Das)

Suggested-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Tested-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Cc: Vivek Kasireddy <vivek.kasireddy@intel.com>
Cc: Nirmoy Das <nirmoyd@nvidia.com>
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/gpu/drm/virtio/virtgpu_drv.c | 62 +++++++++++++++++++++++++++-
 drivers/gpu/drm/virtio/virtgpu_drv.h |  1 +
 drivers/gpu/drm/virtio/virtgpu_kms.c | 23 ++++++++---
 3 files changed, 79 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c
index a5ce96fb8a1d..397f3d4f5906 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.c
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
@@ -164,6 +164,62 @@ static unsigned int features[] = {
 	VIRTIO_GPU_F_RESOURCE_BLOB,
 	VIRTIO_GPU_F_CONTEXT_INIT,
 };
+
+#ifdef CONFIG_PM_SLEEP
+static int virtgpu_freeze(struct virtio_device *vdev)
+{
+	struct drm_device *dev = vdev->priv;
+	struct virtio_gpu_device *vgdev = dev->dev_private;
+	int error;
+
+	error = drm_mode_config_helper_suspend(dev);
+	if (error) {
+		DRM_ERROR("suspend error %d\n", error);
+		return error;
+	}
+
+	flush_work(&vgdev->obj_free_work);
+	flush_work(&vgdev->ctrlq.dequeue_work);
+	flush_work(&vgdev->cursorq.dequeue_work);
+	flush_work(&vgdev->config_changed_work);
+
+	wait_event_timeout(vgdev->ctrlq.ack_queue,
+		vgdev->ctrlq.vq->num_free == vgdev->ctrlq.vq->num_max,
+		5 * HZ);
+
+	wait_event_timeout(vgdev->cursorq.ack_queue,
+		vgdev->cursorq.vq->num_free == vgdev->cursorq.vq->num_max,
+		5 * HZ);
+
+	vdev->config->del_vqs(vdev);
+
+	return 0;
+}
+
+static int virtgpu_restore(struct virtio_device *vdev)
+{
+	struct drm_device *dev = vdev->priv;
+	struct virtio_gpu_device *vgdev = dev->dev_private;
+	int error;
+
+	error = virtio_gpu_find_vqs(vgdev);
+	if (error) {
+		DRM_ERROR("failed to find virt queues\n");
+		return error;
+	}
+
+	virtio_device_ready(vdev);
+
+	error = drm_mode_config_helper_resume(dev);
+	if (error) {
+		DRM_ERROR("resume error %d\n", error);
+		return error;
+	}
+
+	return 0;
+}
+#endif
+
 static struct virtio_driver virtio_gpu_driver = {
 	.feature_table = features,
 	.feature_table_size = ARRAY_SIZE(features),
@@ -172,7 +228,11 @@ static struct virtio_driver virtio_gpu_driver = {
 	.probe = virtio_gpu_probe,
 	.remove = virtio_gpu_remove,
 	.shutdown = virtio_gpu_shutdown,
-	.config_changed = virtio_gpu_config_changed
+	.config_changed = virtio_gpu_config_changed,
+#ifdef CONFIG_PM_SLEEP
+	.freeze = virtgpu_freeze,
+	.restore = virtgpu_restore,
+#endif
 };
 
 static int __init virtio_gpu_driver_init(void)
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index f17660a71a3e..1279f998c8e0 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -300,6 +300,7 @@ void virtio_gpu_deinit(struct drm_device *dev);
 void virtio_gpu_release(struct drm_device *dev);
 int virtio_gpu_driver_open(struct drm_device *dev, struct drm_file *file);
 void virtio_gpu_driver_postclose(struct drm_device *dev, struct drm_file *file);
+int virtio_gpu_find_vqs(struct virtio_gpu_device *vgdev);
 
 /* virtgpu_gem.c */
 int virtio_gpu_gem_object_open(struct drm_gem_object *obj,
diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
index 80ba69b4860b..90634a5b0ad7 100644
--- a/drivers/gpu/drm/virtio/virtgpu_kms.c
+++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
@@ -115,15 +115,28 @@ static void virtio_gpu_get_capsets(struct virtio_gpu_device *vgdev,
 	vgdev->num_capsets = num_capsets;
 }
 
-int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev)
+int virtio_gpu_find_vqs(struct virtio_gpu_device *vgdev)
 {
 	struct virtqueue_info vqs_info[] = {
 		{ "control", virtio_gpu_ctrl_ack },
 		{ "cursor", virtio_gpu_cursor_ack },
 	};
-	struct virtio_gpu_device *vgdev;
-	/* this will expand later */
 	struct virtqueue *vqs[2];
+	int ret;
+
+	ret = virtio_find_vqs(vgdev->vdev, 2, vqs, vqs_info, NULL);
+	if (ret)
+		return ret;
+
+	vgdev->ctrlq.vq = vqs[0];
+	vgdev->cursorq.vq = vqs[1];
+
+	return 0;
+}
+
+int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev)
+{
+	struct virtio_gpu_device *vgdev;
 	u32 num_scanouts, num_capsets;
 	int ret = 0;
 
@@ -207,13 +220,11 @@ int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev)
 	DRM_INFO("features: %ccontext_init\n",
 		 vgdev->has_context_init ? '+' : '-');
 
-	ret = virtio_find_vqs(vgdev->vdev, 2, vqs, vqs_info, NULL);
+	ret = virtio_gpu_find_vqs(vgdev);
 	if (ret) {
 		DRM_ERROR("failed to find virt queues\n");
 		goto err_vqs;
 	}
-	vgdev->ctrlq.vq = vqs[0];
-	vgdev->cursorq.vq = vqs[1];
 	ret = virtio_gpu_alloc_vbufs(vgdev);
 	if (ret) {
 		DRM_ERROR("failed to alloc vbufs\n");
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH v8 2/3] drm/virtio: Add support for saving and restoring virtio_gpu_objects
  2026-04-29 20:47 [PATCH v8 0/3] Virtio-GPU S4 (hibernation) support dongwon.kim
  2026-04-29 20:47 ` [PATCH v8 1/3] drm/virtio: Freeze and restore hooks to support suspend and resume dongwon.kim
@ 2026-04-29 20:47 ` dongwon.kim
  2026-05-05  1:00   ` Claude review: " Claude Code Review Bot
  2026-04-29 20:47 ` [PATCH v8 3/3] drm/virtio: Add PM notifier to restore objects after hibernation dongwon.kim
  2026-05-05  1:00 ` Claude review: Virtio-GPU S4 (hibernation) support Claude Code Review Bot
  3 siblings, 1 reply; 8+ messages in thread
From: dongwon.kim @ 2026-04-29 20:47 UTC (permalink / raw)
  To: dri-devel, airlied, kraxel, dmitry.osipenko

From: Dongwon Kim <dongwon.kim@intel.com>

When the host KVM/QEMU resumes from hibernation, it loses all graphics
resources previously submitted by the guest OS, as the QEMU process is
terminated during the suspend-resume cycle. This leads to invalid resource
errors when the guest OS attempts to interact with the host using those
resources after resumption.

To resolve this, the virtio-gpu driver now tracks all active virtio_gpu_objects
and provides a mechanism to restore them by re-submitting the objects to QEMU
when needed (e.g., during resume from hibernation).

v2: - Backing pages are reattached during restore if bo->attached was set
      before suspension

v3: - Restoration is no longer triggered via .restore; instead, it is handled
      by a PM notifier only during hibernation.

v4: - Remove virtio_gpu_object from the restore list before freeing the object
      to prevent a use-after-free situation.
      (Nirmoy Das)

    - Protect restore list operations with a spinlock
      (Nirmoy Das)

    - Initialize ret with 0 in virtio_gpu_object_restore_all
      (Nirmoy Das)

    - Move restore list node into virtio_gpu_bo struct to reduce memory usage
      (Dmitry Osipenko)

v5: - Include object backed by imported dmabuf
      (Dmitry Osipenko)

    - Do not store virgl objects in the restore_list, as virgl 3D objects are
      not recoverable.
      (Dmitry Osipenko)

    - Rename 'list', the node within restore_list, to 'restore_node'
      (Nirmoy Das)

    - Use mutex instead of spinlock when updating restore_list
      (Nirmoy Das)

    - Initialize restore_node when the virtio_gpu_object is created. Since not
      all objects are added to the restore_list, this allows 'list_empty' to
      safely determine whether the object needs to be removed from the list
      when it is freed.

v6: - Add a helper, virtio_gpu_add_object_to_restore_list
      (Dmitry Osipenko)

v7: - Add drm_print.h

v8: - Introduce virtio_gpu_remove_from_restore_list helper and ensure its call in
      all object destruction paths (General, Prime, and VRAM) to prevent
      use-after-free.
      (Dmitry Osipenko)

    - Relocate the restore list removal logic from the final cleanup stage
      to the initial free stage for better synchronization.
      (Dmitry Osipenko)

Cc: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Cc: Vivek Kasireddy <vivek.kasireddy@intel.com>
Cc: Nirmoy Das <nirmoyd@nvidia.com>
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h    | 15 ++++++
 drivers/gpu/drm/virtio/virtgpu_kms.c    |  3 ++
 drivers/gpu/drm/virtio/virtgpu_object.c | 70 +++++++++++++++++++++++++
 drivers/gpu/drm/virtio/virtgpu_prime.c  | 43 +++++++++++++++
 drivers/gpu/drm/virtio/virtgpu_vram.c   |  3 ++
 5 files changed, 134 insertions(+)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 1279f998c8e0..f91f31b861b8 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -98,6 +98,10 @@ struct virtio_gpu_object {
 
 	int uuid_state;
 	uuid_t uuid;
+
+	/* for restoration of objects after hibernation */
+	struct virtio_gpu_object_params params;
+	struct list_head restore_node;
 };
 #define gem_to_virtio_gpu_obj(gobj) \
 	container_of((gobj), struct virtio_gpu_object, base.base)
@@ -265,6 +269,8 @@ struct virtio_gpu_device {
 	struct work_struct obj_free_work;
 	spinlock_t obj_free_lock;
 	struct list_head obj_free_list;
+	struct mutex obj_restore_lock;
+	struct list_head obj_restore_list;
 
 	struct virtio_gpu_drv_capset *capsets;
 	uint32_t num_capsets;
@@ -467,6 +473,7 @@ void virtio_gpu_fence_event_process(struct virtio_gpu_device *vdev,
 				    u64 fence_id);
 
 /* virtgpu_object.c */
+void virtio_gpu_remove_from_restore_list(struct virtio_gpu_object *bo);
 void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo);
 struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev,
 						size_t size);
@@ -479,6 +486,12 @@ bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo);
 
 int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
 			       uint32_t *resid);
+
+void virtio_gpu_add_object_to_restore_list(struct virtio_gpu_device *vgdev,
+					   struct virtio_gpu_object *bo);
+
+int virtio_gpu_object_restore_all(struct virtio_gpu_device *vgdev);
+
 /* virtgpu_prime.c */
 int virtio_gpu_resource_assign_uuid(struct virtio_gpu_device *vgdev,
 				    struct virtio_gpu_object *bo);
@@ -493,6 +506,8 @@ int virtgpu_dma_buf_import_sgt(struct virtio_gpu_mem_entry **ents,
 			       unsigned int *nents,
 			       struct virtio_gpu_object *bo,
 			       struct dma_buf_attachment *attach);
+int virtgpu_dma_buf_obj_resubmit(struct virtio_gpu_device *vgdev,
+				 struct virtio_gpu_object *bo);
 
 /* virtgpu_debugfs.c */
 void virtio_gpu_debugfs_init(struct drm_minor *minor);
diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
index 90634a5b0ad7..ad12ea7165c4 100644
--- a/drivers/gpu/drm/virtio/virtgpu_kms.c
+++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
@@ -171,6 +171,8 @@ int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev)
 		  virtio_gpu_array_put_free_work);
 	INIT_LIST_HEAD(&vgdev->obj_free_list);
 	spin_lock_init(&vgdev->obj_free_lock);
+	INIT_LIST_HEAD(&vgdev->obj_restore_list);
+	mutex_init(&vgdev->obj_restore_lock);
 
 #ifdef __LITTLE_ENDIAN
 	if (virtio_has_feature(vgdev->vdev, VIRTIO_GPU_F_VIRGL))
@@ -299,6 +301,7 @@ void virtio_gpu_deinit(struct drm_device *dev)
 	flush_work(&vgdev->config_changed_work);
 	virtio_reset_device(vgdev->vdev);
 	vgdev->vdev->config->del_vqs(vgdev->vdev);
+	mutex_destroy(&vgdev->obj_restore_lock);
 }
 
 void virtio_gpu_release(struct drm_device *dev)
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index ec9efacc6919..a1ea65f7b1de 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -63,6 +63,15 @@ static void virtio_gpu_resource_id_put(struct virtio_gpu_device *vgdev, uint32_t
 		ida_free(&vgdev->resource_ida, id - 1);
 }
 
+void virtio_gpu_remove_from_restore_list(struct virtio_gpu_object *bo)
+{
+	struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
+
+	mutex_lock(&vgdev->obj_restore_lock);
+	list_del_init(&bo->restore_node);
+	mutex_unlock(&vgdev->obj_restore_lock);
+}
+
 void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo)
 {
 	struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
@@ -94,6 +103,7 @@ static void virtio_gpu_free_object(struct drm_gem_object *obj)
 	struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
 
 	if (bo->created) {
+		virtio_gpu_remove_from_restore_list(bo);
 		virtio_gpu_cmd_unref_resource(vgdev, bo);
 		virtio_gpu_notify(vgdev);
 		/* completion handler calls virtio_gpu_cleanup_object() */
@@ -220,6 +230,8 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 		return PTR_ERR(shmem_obj);
 	bo = gem_to_virtio_gpu_obj(&shmem_obj->base);
 
+	INIT_LIST_HEAD(&bo->restore_node);
+
 	ret = virtio_gpu_resource_id_get(vgdev, &bo->hw_res_handle);
 	if (ret < 0)
 		goto err_free_gem;
@@ -258,6 +270,12 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 		virtio_gpu_object_attach(vgdev, bo, ents, nents);
 	}
 
+	if (!params->virgl) {
+		/* store non-virgl object with its param to the restore list */
+		bo->params = *params;
+		virtio_gpu_add_object_to_restore_list(vgdev, bo);
+	}
+
 	*bo_ptr = bo;
 	return 0;
 
@@ -271,3 +289,55 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 	drm_gem_shmem_free(shmem_obj);
 	return ret;
 }
+
+void virtio_gpu_add_object_to_restore_list(struct virtio_gpu_device *vgdev,
+					   struct virtio_gpu_object *bo)
+{
+	mutex_lock(&vgdev->obj_restore_lock);
+	list_add_tail(&bo->restore_node, &vgdev->obj_restore_list);
+	mutex_unlock(&vgdev->obj_restore_lock);
+}
+
+int virtio_gpu_object_restore_all(struct virtio_gpu_device *vgdev)
+{
+	struct virtio_gpu_object *bo, *tmp;
+	struct virtio_gpu_mem_entry *ents;
+	unsigned int nents;
+	int ret = 0;
+
+	mutex_lock(&vgdev->obj_restore_lock);
+	list_for_each_entry_safe(bo, tmp, &vgdev->obj_restore_list,
+				 restore_node) {
+		if (drm_gem_is_imported(&bo->base.base)) {
+			ret = virtgpu_dma_buf_obj_resubmit(vgdev, bo);
+			if (ret)
+				break;
+
+			continue;
+		}
+
+		if (bo->params.blob || bo->attached) {
+			ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents,
+							   &nents);
+			if (ret)
+				break;
+		}
+
+		if (bo->params.blob) {
+			virtio_gpu_cmd_resource_create_blob(vgdev, bo,
+							    &bo->params,
+							    ents, nents);
+		} else {
+			virtio_gpu_cmd_create_resource(vgdev, bo, &bo->params,
+						       NULL, NULL);
+			if (bo->attached) {
+				bo->attached = false;
+				virtio_gpu_object_attach(vgdev, bo, ents,
+							 nents);
+			}
+		}
+	}
+	mutex_unlock(&vgdev->obj_restore_lock);
+
+	return ret;
+}
diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c
index 70b3b836e1c9..27eaab9f5cfa 100644
--- a/drivers/gpu/drm/virtio/virtgpu_prime.c
+++ b/drivers/gpu/drm/virtio/virtgpu_prime.c
@@ -23,6 +23,7 @@
  */
 
 #include <drm/drm_prime.h>
+#include <drm/drm_print.h>
 #include <linux/virtio_dma_buf.h>
 
 #include "virtgpu_drv.h"
@@ -216,6 +217,7 @@ static void virtgpu_dma_buf_free_obj(struct drm_gem_object *obj)
 	}
 
 	if (bo->created) {
+		virtio_gpu_remove_from_restore_list(bo);
 		virtio_gpu_cmd_unref_resource(vgdev, bo);
 		virtio_gpu_notify(vgdev);
 		return;
@@ -262,6 +264,12 @@ static int virtgpu_dma_buf_init_obj(struct drm_device *dev,
 	dma_buf_unpin(attach);
 	dma_resv_unlock(resv);
 
+	/* store the dmabuf imported object with its params to
+	 * the restore list
+	 */
+	bo->params = params;
+	virtio_gpu_add_object_to_restore_list(vgdev, bo);
+
 	return 0;
 
 err_import:
@@ -272,6 +280,39 @@ static int virtgpu_dma_buf_init_obj(struct drm_device *dev,
 	return ret;
 }
 
+int virtgpu_dma_buf_obj_resubmit(struct virtio_gpu_device *vgdev,
+				 struct virtio_gpu_object *bo)
+{
+	struct virtio_gpu_mem_entry *ents;
+	struct scatterlist *sl;
+	int i;
+
+	if (!bo->sgt) {
+		DRM_ERROR("no sgt bound to virtio_gpu_object\n");
+		return -ENOMEM;
+	}
+
+	ents = kvmalloc_array(bo->sgt->nents,
+			      sizeof(struct virtio_gpu_mem_entry),
+			      GFP_KERNEL);
+	if (!ents) {
+		DRM_ERROR("failed to allocate ent list\n");
+		return -ENOMEM;
+	}
+
+	for_each_sgtable_dma_sg(bo->sgt, sl, i) {
+		ents[i].addr = cpu_to_le64(sg_dma_address(sl));
+		ents[i].length = cpu_to_le32(sg_dma_len(sl));
+		ents[i].padding = 0;
+	}
+
+	virtio_gpu_cmd_resource_create_blob(vgdev, bo, &bo->params,
+					    ents, bo->sgt->nents);
+
+	return 0;
+}
+
+
 static const struct drm_gem_object_funcs virtgpu_gem_dma_buf_funcs = {
 	.free = virtgpu_dma_buf_free_obj,
 };
@@ -317,6 +358,8 @@ struct drm_gem_object *virtgpu_gem_prime_import(struct drm_device *dev,
 	if (!bo)
 		return ERR_PTR(-ENOMEM);
 
+	INIT_LIST_HEAD(&bo->restore_node);
+
 	obj = &bo->base.base;
 	obj->resv = buf->resv;
 	obj->funcs = &virtgpu_gem_dma_buf_funcs;
diff --git a/drivers/gpu/drm/virtio/virtgpu_vram.c b/drivers/gpu/drm/virtio/virtgpu_vram.c
index 084e80227433..35f3f592ab19 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vram.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vram.c
@@ -18,6 +18,7 @@ static void virtio_gpu_vram_free(struct drm_gem_object *obj)
 		if (unmap)
 			virtio_gpu_cmd_unmap(vgdev, bo);
 
+		virtio_gpu_remove_from_restore_list(bo);
 		virtio_gpu_cmd_unref_resource(vgdev, bo);
 		virtio_gpu_notify(vgdev);
 		return;
@@ -200,6 +201,8 @@ int virtio_gpu_vram_create(struct virtio_gpu_device *vgdev,
 	obj = &vram->base.base.base;
 	obj->funcs = &virtio_gpu_vram_funcs;
 
+	INIT_LIST_HEAD(&vram->base.restore_node);
+
 	params->size = PAGE_ALIGN(params->size);
 	drm_gem_private_object_init(vgdev->ddev, obj, params->size);
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH v8 3/3] drm/virtio: Add PM notifier to restore objects after hibernation
  2026-04-29 20:47 [PATCH v8 0/3] Virtio-GPU S4 (hibernation) support dongwon.kim
  2026-04-29 20:47 ` [PATCH v8 1/3] drm/virtio: Freeze and restore hooks to support suspend and resume dongwon.kim
  2026-04-29 20:47 ` [PATCH v8 2/3] drm/virtio: Add support for saving and restoring virtio_gpu_objects dongwon.kim
@ 2026-04-29 20:47 ` dongwon.kim
  2026-05-05  1:00   ` Claude review: " Claude Code Review Bot
  2026-05-05  1:00 ` Claude review: Virtio-GPU S4 (hibernation) support Claude Code Review Bot
  3 siblings, 1 reply; 8+ messages in thread
From: dongwon.kim @ 2026-04-29 20:47 UTC (permalink / raw)
  To: dri-devel, airlied, kraxel, dmitry.osipenko

From: Dongwon Kim <dongwon.kim@intel.com>

Register a PM notifier in virtio-gpu to handle suspend/hibernate
events. On PM_POST_HIBERNATION, restore all GPU objects so that the
driver can properly recover after resume.

v2: Remove unused header - drm_atomic_helper.h
    (Dmitry Osipenko)

v3: Objects for the virgl use case can't be recovered after resume, so
    S4 is blocked when virgl is enabled
    (Dmitry Osipenko)

v4: Restoring objects in the PM notifier is too late, as virtio-gpu
    message communication begins in virtgpu_restore once virtqueues
    are re-established. To address this, a 'hibernation' flag is set
    during the PM_HIBERNATION_PREPARE phase in the notifier. This flag
    is then used in virtgpu_restore to detect if the system is resuming
    from S4, allowing objects to be recovered immediately after virtqueues
    are reconfigured.

v5: Unreference all objects before hibernation so they can be removed
    on the host side, since they will be fully restored anyway. This
    prevents the situation where host-side hibernation fails (leaving
    all associated resources still alive) while the virtio-gpu driver
    still attempts to restore those objects.
    (Dmitry Osipenko)

Suggested-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Cc: Vivek Kasireddy <vivek.kasireddy@intel.com>
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/gpu/drm/virtio/virtgpu_drv.c    | 12 +++++++++++
 drivers/gpu/drm/virtio/virtgpu_drv.h    |  8 ++++++-
 drivers/gpu/drm/virtio/virtgpu_kms.c    | 28 +++++++++++++++++++++++++
 drivers/gpu/drm/virtio/virtgpu_object.c | 18 +++++++++++++++-
 drivers/gpu/drm/virtio/virtgpu_prime.c  |  2 +-
 drivers/gpu/drm/virtio/virtgpu_vq.c     | 13 +++++++++---
 drivers/gpu/drm/virtio/virtgpu_vram.c   |  2 +-
 7 files changed, 76 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c
index 397f3d4f5906..bc3c0cc69eae 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.c
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
@@ -178,6 +178,9 @@ static int virtgpu_freeze(struct virtio_device *vdev)
 		return error;
 	}
 
+	if (vgdev->hibernation)
+		virtio_gpu_object_unref_all(vgdev);
+
 	flush_work(&vgdev->obj_free_work);
 	flush_work(&vgdev->ctrlq.dequeue_work);
 	flush_work(&vgdev->cursorq.dequeue_work);
@@ -210,6 +213,15 @@ static int virtgpu_restore(struct virtio_device *vdev)
 
 	virtio_device_ready(vdev);
 
+	if (vgdev->hibernation) {
+		vgdev->hibernation = false;
+		error = virtio_gpu_object_restore_all(vgdev);
+		if (error) {
+			DRM_ERROR("Failed to recover virtio-gpu objects\n");
+			return error;
+		}
+	}
+
 	error = drm_mode_config_helper_resume(dev);
 	if (error) {
 		DRM_ERROR("resume error %d\n", error);
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index f91f31b861b8..c75db6ad51fc 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -261,6 +261,7 @@ struct virtio_gpu_device {
 	bool has_resource_blob;
 	bool has_host_visible;
 	bool has_context_init;
+	bool hibernation;
 	struct virtio_shm_region host_visible_region;
 	struct drm_mm host_visible_mm;
 
@@ -277,6 +278,8 @@ struct virtio_gpu_device {
 	uint64_t capset_id_mask;
 	struct list_head cap_cache;
 
+	struct notifier_block pm_nb;
+
 	/* protects uuid state when exporting */
 	spinlock_t resource_export_lock;
 	/* protects map state and host_visible_mm */
@@ -341,7 +344,8 @@ void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev,
 				    struct virtio_gpu_object_array *objs,
 				    struct virtio_gpu_fence *fence);
 void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev,
-				   struct virtio_gpu_object *bo);
+				   struct virtio_gpu_object *bo,
+				   int no_cb);
 int virtio_gpu_panic_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
 					     uint64_t offset,
 					     uint32_t width, uint32_t height,
@@ -492,6 +496,8 @@ void virtio_gpu_add_object_to_restore_list(struct virtio_gpu_device *vgdev,
 
 int virtio_gpu_object_restore_all(struct virtio_gpu_device *vgdev);
 
+void virtio_gpu_object_unref_all(struct virtio_gpu_device *vgdev);
+
 /* virtgpu_prime.c */
 int virtio_gpu_resource_assign_uuid(struct virtio_gpu_device *vgdev,
 				    struct virtio_gpu_object *bo);
diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
index ad12ea7165c4..5688e8f39dd6 100644
--- a/drivers/gpu/drm/virtio/virtgpu_kms.c
+++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
@@ -26,6 +26,8 @@
 #include <linux/virtio.h>
 #include <linux/virtio_config.h>
 #include <linux/virtio_ring.h>
+#include <linux/suspend.h>
+#include <linux/pm_runtime.h>
 
 #include <drm/drm_file.h>
 #include <drm/drm_managed.h>
@@ -134,6 +136,25 @@ int virtio_gpu_find_vqs(struct virtio_gpu_device *vgdev)
 	return 0;
 }
 
+static int virtio_gpu_pm_notifier(struct notifier_block *nb, unsigned long mode,
+				  void *data)
+{
+	struct virtio_gpu_device *vgdev = container_of(nb,
+						struct virtio_gpu_device,
+						pm_nb);
+
+	if (mode == PM_HIBERNATION_PREPARE) {
+		if (vgdev->has_virgl_3d) {
+			DRM_ERROR("S4 not allowed when VIRGL is enabled\n");
+			return notifier_from_errno(-EPERM);
+		}
+
+		vgdev->hibernation = true;
+	}
+
+	return NOTIFY_DONE;
+}
+
 int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev)
 {
 	struct virtio_gpu_device *vgdev;
@@ -270,6 +291,12 @@ int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev)
 		wait_event_timeout(vgdev->resp_wq, !vgdev->display_info_pending,
 				   5 * HZ);
 	}
+
+	vgdev->pm_nb.notifier_call = virtio_gpu_pm_notifier;
+	ret = register_pm_notifier(&vgdev->pm_nb);
+	if (ret)
+		goto err_scanouts;
+
 	return 0;
 
 err_scanouts:
@@ -302,6 +329,7 @@ void virtio_gpu_deinit(struct drm_device *dev)
 	virtio_reset_device(vgdev->vdev);
 	vgdev->vdev->config->del_vqs(vgdev->vdev);
 	mutex_destroy(&vgdev->obj_restore_lock);
+	unregister_pm_notifier(&vgdev->pm_nb);
 }
 
 void virtio_gpu_release(struct drm_device *dev)
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index a1ea65f7b1de..e4275274c2fa 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -104,7 +104,7 @@ static void virtio_gpu_free_object(struct drm_gem_object *obj)
 
 	if (bo->created) {
 		virtio_gpu_remove_from_restore_list(bo);
-		virtio_gpu_cmd_unref_resource(vgdev, bo);
+		virtio_gpu_cmd_unref_resource(vgdev, bo, false);
 		virtio_gpu_notify(vgdev);
 		/* completion handler calls virtio_gpu_cleanup_object() */
 		return;
@@ -341,3 +341,19 @@ int virtio_gpu_object_restore_all(struct virtio_gpu_device *vgdev)
 
 	return ret;
 }
+
+void virtio_gpu_object_unref_all(struct virtio_gpu_device *vgdev)
+{
+	struct virtio_gpu_object *bo, *tmp;
+
+	mutex_lock(&vgdev->obj_restore_lock);
+	list_for_each_entry_safe(bo, tmp, &vgdev->obj_restore_list,
+				 restore_node)
+		if (bo->created) {
+			virtio_gpu_cmd_unref_resource(vgdev, bo, true);
+			virtio_gpu_notify(vgdev);
+		}
+
+	mutex_unlock(&vgdev->obj_restore_lock);
+}
+
diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c
index 27eaab9f5cfa..56686c453763 100644
--- a/drivers/gpu/drm/virtio/virtgpu_prime.c
+++ b/drivers/gpu/drm/virtio/virtgpu_prime.c
@@ -218,7 +218,7 @@ static void virtgpu_dma_buf_free_obj(struct drm_gem_object *obj)
 
 	if (bo->created) {
 		virtio_gpu_remove_from_restore_list(bo);
-		virtio_gpu_cmd_unref_resource(vgdev, bo);
+		virtio_gpu_cmd_unref_resource(vgdev, bo, false);
 		virtio_gpu_notify(vgdev);
 		return;
 	}
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 67865810a2e7..f454ba8553b0 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -626,14 +626,21 @@ static void virtio_gpu_cmd_unref_cb(struct virtio_gpu_device *vgdev,
 }
 
 void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev,
-				   struct virtio_gpu_object *bo)
+				   struct virtio_gpu_object *bo,
+				   int no_cb)
 {
 	struct virtio_gpu_resource_unref *cmd_p;
 	struct virtio_gpu_vbuffer *vbuf;
 	int ret;
 
-	cmd_p = virtio_gpu_alloc_cmd_cb(vgdev, &vbuf, sizeof(*cmd_p),
-					virtio_gpu_cmd_unref_cb);
+	if (no_cb) {
+		cmd_p = virtio_gpu_alloc_cmd_cb(vgdev, &vbuf, sizeof(*cmd_p),
+						NULL);
+	} else {
+		cmd_p = virtio_gpu_alloc_cmd_cb(vgdev, &vbuf, sizeof(*cmd_p),
+						virtio_gpu_cmd_unref_cb);
+	}
+
 	memset(cmd_p, 0, sizeof(*cmd_p));
 
 	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_UNREF);
diff --git a/drivers/gpu/drm/virtio/virtgpu_vram.c b/drivers/gpu/drm/virtio/virtgpu_vram.c
index 35f3f592ab19..db64d89539c6 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vram.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vram.c
@@ -19,7 +19,7 @@ static void virtio_gpu_vram_free(struct drm_gem_object *obj)
 			virtio_gpu_cmd_unmap(vgdev, bo);
 
 		virtio_gpu_remove_from_restore_list(bo);
-		virtio_gpu_cmd_unref_resource(vgdev, bo);
+		virtio_gpu_cmd_unref_resource(vgdev, bo, false);
 		virtio_gpu_notify(vgdev);
 		return;
 	}
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Claude review: Virtio-GPU S4 (hibernation) support
  2026-04-29 20:47 [PATCH v8 0/3] Virtio-GPU S4 (hibernation) support dongwon.kim
                   ` (2 preceding siblings ...)
  2026-04-29 20:47 ` [PATCH v8 3/3] drm/virtio: Add PM notifier to restore objects after hibernation dongwon.kim
@ 2026-05-05  1:00 ` Claude Code Review Bot
  3 siblings, 0 replies; 8+ messages in thread
From: Claude Code Review Bot @ 2026-05-05  1:00 UTC (permalink / raw)
  To: dri-devel-reviews

Overall Series Review

Subject: Virtio-GPU S4 (hibernation) support
Author: dongwon.kim@intel.com
Patches: 4
Reviewed: 2026-05-05T11:00:34.590670

---

This is a well-structured 3-patch series adding S4 (hibernation) support to the virtio-gpu driver. The approach is sound: patch 1 adds the virtio freeze/restore plumbing, patch 2 tracks GPU objects for restoration, and patch 3 wires up the PM notifier to orchestrate unref-before-suspend and re-create-on-resume. The series has been through significant review iteration (v8) with Dmitry Osipenko and Nirmoy Das, and many earlier issues have been addressed. 

However, several issues remain:

- The `no_cb` parameter added to `virtio_gpu_cmd_unref_resource` has a latent use-after-free if the virtqueue submission fails.
- `wait_event_timeout` return values are silently ignored during freeze, meaning queue drain failures are undetected.
- Minor style/type issues: `int no_cb` should be `bool`, unnecessary `#include`, and missing braces in a macro loop body.

The overall design is reasonable — virgl objects are correctly excluded (non-recoverable), imported dma-buf objects get special restoration handling, and the PM notifier cleanly scopes the work to S4 only.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Claude review: drm/virtio: Freeze and restore hooks to support suspend and resume
  2026-04-29 20:47 ` [PATCH v8 1/3] drm/virtio: Freeze and restore hooks to support suspend and resume dongwon.kim
@ 2026-05-05  1:00   ` Claude Code Review Bot
  0 siblings, 0 replies; 8+ messages in thread
From: Claude Code Review Bot @ 2026-05-05  1:00 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

This patch adds `virtgpu_freeze()` / `virtgpu_restore()` and extracts `virtio_gpu_find_vqs()` as a reusable helper. The refactoring of `virtio_gpu_init()` to call the new helper is clean.

**Issue 1 (medium): `wait_event_timeout` return values ignored in freeze**

```c
wait_event_timeout(vgdev->ctrlq.ack_queue,
    vgdev->ctrlq.vq->num_free == vgdev->ctrlq.vq->num_max,
    5 * HZ);

wait_event_timeout(vgdev->cursorq.ack_queue,
    vgdev->cursorq.vq->num_free == vgdev->cursorq.vq->num_max,
    5 * HZ);
```

If either wait times out, the function proceeds to `del_vqs()` without warning. This means in-flight commands could be silently lost. At minimum a `DRM_WARN` on timeout would help debugging; better would be to return an error from freeze, which would abort the hibernation attempt.

**Observation: `del_vqs()` called via `config->del_vqs` vs direct**

In `virtgpu_freeze()`:
```c
vdev->config->del_vqs(vdev);
```

This is fine and matches the style used in `virtio_gpu_deinit()`.

Otherwise the patch is straightforward and correct.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Claude review: drm/virtio: Add support for saving and restoring virtio_gpu_objects
  2026-04-29 20:47 ` [PATCH v8 2/3] drm/virtio: Add support for saving and restoring virtio_gpu_objects dongwon.kim
@ 2026-05-05  1:00   ` Claude Code Review Bot
  0 siblings, 0 replies; 8+ messages in thread
From: Claude Code Review Bot @ 2026-05-05  1:00 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

This patch adds the object tracking infrastructure: a per-device restore list protected by a mutex, and logic to re-create objects on the host after hibernation.

**Issue 1 (minor): VRAM objects initialized but never tracked**

In `virtgpu_vram.c`:
```c
INIT_LIST_HEAD(&vram->base.restore_node);
```

VRAM objects get their `restore_node` initialized but are never added to the restore list, so the `virtio_gpu_remove_from_restore_list()` call in `virtio_gpu_vram_free()` is a harmless no-op on an initialized-but-unlinked node. If VRAM objects are intentionally excluded from restoration (like virgl objects), a brief comment saying so would help, since the removal call in the free path suggests they were meant to be tracked.

**Issue 2 (minor): Misleading error code in `virtgpu_dma_buf_obj_resubmit()`**

```c
if (!bo->sgt) {
    DRM_ERROR("no sgt bound to virtio_gpu_object\n");
    return -ENOMEM;
}
```

`-ENOMEM` doesn't describe "no sgt bound". `-EINVAL` or `-ENOENT` would be more accurate.

**Issue 3 (nit): Inconsistent allocation macro**

`virtgpu_dma_buf_obj_resubmit()` uses `kvmalloc_array()` directly:
```c
ents = kvmalloc_array(bo->sgt->nents,
              sizeof(struct virtio_gpu_mem_entry),
              GFP_KERNEL);
```

while `virtio_gpu_object_shmem_init()` uses the `kvmalloc_objs` macro for the same type. Minor inconsistency.

**Issue 4 (nit): Extra blank line**

After `virtgpu_dma_buf_obj_resubmit()` there are two blank lines before `static const struct drm_gem_object_funcs`. Should be one.

**Observation: Restore logic correctness**

The `virtio_gpu_object_restore_all()` logic correctly handles three cases:
1. Imported dma-buf objects → `virtgpu_dma_buf_obj_resubmit()`
2. Blob objects → `shmem_init` + `cmd_resource_create_blob()`
3. Regular (non-blob, non-virgl) objects → `shmem_init` + `cmd_create_resource()` + `object_attach()`

The conditional `if (bo->params.blob || bo->attached)` before `shmem_init` correctly gates the ents/nents initialization to paths that use them. However, `ents` and `nents` are uninitialized at declaration — adding `= NULL` / `= 0` would silence potential compiler warnings and improve clarity.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Claude review: drm/virtio: Add PM notifier to restore objects after hibernation
  2026-04-29 20:47 ` [PATCH v8 3/3] drm/virtio: Add PM notifier to restore objects after hibernation dongwon.kim
@ 2026-05-05  1:00   ` Claude Code Review Bot
  0 siblings, 0 replies; 8+ messages in thread
From: Claude Code Review Bot @ 2026-05-05  1:00 UTC (permalink / raw)
  To: dri-devel-reviews

Patch Review

This patch ties everything together with the PM notifier and adds the unref-before-hibernate / restore-after-resume logic.

**Issue 1 (bug): Use-after-free risk in `virtio_gpu_cmd_unref_resource` with `no_cb=true`**

```c
vbuf->resp_cb_data = bo;
ret = virtio_gpu_queue_ctrl_buffer(vgdev, vbuf);
if (ret < 0)
    virtio_gpu_cleanup_object(bo);
```

When `no_cb=true` (called from `virtio_gpu_object_unref_all`), if `virtio_gpu_queue_ctrl_buffer()` fails, `virtio_gpu_cleanup_object(bo)` is called. This frees the BO while it's still on the restore list and still referenced by userspace GEM handles. The `no_cb` path needs different error handling — it should not call `virtio_gpu_cleanup_object()` since the intent is to keep the guest-side object alive. A simple approach:

```c
if (ret < 0 && !no_cb)
    virtio_gpu_cleanup_object(bo);
```

**Issue 2 (style): `no_cb` parameter type and naming**

```c
void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev,
                   struct virtio_gpu_object *bo,
                   int no_cb);
```

`int no_cb` should be `bool`. Also, negative-sense parameter names are confusing — `true` means "don't do the callback". Consider `bool skip_cleanup` for clarity.

**Issue 3 (style): Missing braces in `virtio_gpu_object_unref_all`**

```c
list_for_each_entry_safe(bo, tmp, &vgdev->obj_restore_list,
             restore_node)
    if (bo->created) {
        virtio_gpu_cmd_unref_resource(vgdev, bo, true);
        virtio_gpu_notify(vgdev);
    }
```

The `list_for_each_entry_safe` macro body should have braces around it per kernel coding style when the body is a compound statement. Also, `_safe` is not needed here since entries are not removed from the list during iteration — plain `list_for_each_entry` would suffice.

**Issue 4 (minor): Unnecessary include**

```c
#include <linux/pm_runtime.h>
```

in `virtgpu_kms.c` — the code only uses `register_pm_notifier()` / `unregister_pm_notifier()` from `<linux/suspend.h>`. `pm_runtime.h` appears unused.

**Issue 5 (minor): Per-object `virtio_gpu_notify` in unref_all loop**

```c
list_for_each_entry_safe(bo, tmp, ...) 
    if (bo->created) {
        virtio_gpu_cmd_unref_resource(vgdev, bo, true);
        virtio_gpu_notify(vgdev);
    }
```

Calling `virtio_gpu_notify()` after every single unref is inefficient. A single notify after the loop would batch the kick to the host.

**Issue 6 (minor): Stale `hibernation` flag after aborted hibernation**

The PM notifier sets `vgdev->hibernation = true` on `PM_HIBERNATION_PREPARE` but never clears it on `PM_POST_HIBERNATION` (which fires if hibernation is aborted). The flag is only cleared in `virtgpu_restore()`. If hibernation is attempted but aborts before freeze() runs, the flag remains set. While this is likely benign in practice (freeze/restore are only called during actual S4), handling `PM_POST_HIBERNATION` to clear the flag would be more robust:

```c
if (mode == PM_POST_HIBERNATION)
    vgdev->hibernation = false;
```

**Issue 7 (minor): Partial state on restore failure**

In `virtgpu_restore()`, if `virtio_gpu_object_restore_all()` fails:
```c
if (vgdev->hibernation) {
    vgdev->hibernation = false;
    error = virtio_gpu_object_restore_all(vgdev);
    if (error) {
        DRM_ERROR("Failed to recover virtio-gpu objects\n");
        return error;
    }
}
```

The function returns an error but the device is already marked ready (`virtio_device_ready(vdev)`) and virtqueues are established. The error path doesn't tear down the partially initialized state, though there may not be a clean way to recover at that point.

---
Generated by Claude Code Patch Reviewer

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2026-05-05  1:00 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-29 20:47 [PATCH v8 0/3] Virtio-GPU S4 (hibernation) support dongwon.kim
2026-04-29 20:47 ` [PATCH v8 1/3] drm/virtio: Freeze and restore hooks to support suspend and resume dongwon.kim
2026-05-05  1:00   ` Claude review: " Claude Code Review Bot
2026-04-29 20:47 ` [PATCH v8 2/3] drm/virtio: Add support for saving and restoring virtio_gpu_objects dongwon.kim
2026-05-05  1:00   ` Claude review: " Claude Code Review Bot
2026-04-29 20:47 ` [PATCH v8 3/3] drm/virtio: Add PM notifier to restore objects after hibernation dongwon.kim
2026-05-05  1:00   ` Claude review: " Claude Code Review Bot
2026-05-05  1:00 ` Claude review: Virtio-GPU S4 (hibernation) support Claude Code Review Bot

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox