From: dongwon.kim@intel.com
To: dri-devel@lists.freedesktop.org, airlied@redhat.com,
kraxel@redhat.com, dmitry.osipenko@collabora.com
Subject: [PATCH v8 2/3] drm/virtio: Add support for saving and restoring virtio_gpu_objects
Date: Wed, 29 Apr 2026 13:47:28 -0700 [thread overview]
Message-ID: <20260429204729.993669-3-dongwon.kim@intel.com> (raw)
In-Reply-To: <20260429204729.993669-1-dongwon.kim@intel.com>
From: Dongwon Kim <dongwon.kim@intel.com>
When the host KVM/QEMU resumes from hibernation, it loses all graphics
resources previously submitted by the guest OS, as the QEMU process is
terminated during the suspend-resume cycle. This leads to invalid resource
errors when the guest OS attempts to interact with the host using those
resources after resumption.
To resolve this, the virtio-gpu driver now tracks all active virtio_gpu_objects
and provides a mechanism to restore them by re-submitting the objects to QEMU
when needed (e.g., during resume from hibernation).
v2: - Re-attach backing only if bo->attached was set previously
v3: - Restoration is no longer triggered via .restore; instead, it is handled
by a PM notifier only during hibernation.
v4: - Remove virtio_gpu_object from the restore list before freeing the object
to prevent a use-after-free situation.
(Nirmoy Das)
- Protect restore list operations with a spinlock
(Nirmoy Das)
- Initialize ret with 0 in virtio_gpu_object_restore_all
(Nirmoy Das)
- Move restore list node into virtio_gpu_bo struct to reduce memory usage
(Dmitry Osipenko)
v5: - Include object backed by imported dmabuf
(Dmitry Osipenko)
- Not storing virgl objects in the restore_list as virgl 3D objects are not
recoverable.
(Dmitry Osipenko)
- Rename 'list', the node in the restore_list, to 'restore_node'
(Nirmoy Das)
- Use a mutex instead of a spinlock when updating the restore_list
(Nirmoy Das)
- Initialize restore_node when the virtio_gpu_object is created, so that
'list_empty' can determine at free time whether the object needs to be
removed from the restore_list, since not all objects are added to the list.
v6: - Add a helper, virtio_gpu_add_object_to_restore_list
(Dmitry Osipenko)
v7: - Add drm_print.h
v8: - Introduce the virtio_gpu_remove_from_restore_list helper and ensure it is
called in all object destruction paths (general, prime, and VRAM) to
prevent use-after-free.
(Dmitry Osipenko)
- Relocate the restore list removal logic from the final cleanup stage
to the initial free stage for better synchronization.
(Dmitry Osipenko)
Cc: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Cc: Vivek Kasireddy <vivek.kasireddy@intel.com>
Cc: Nirmoy Das <nirmoyd@nvidia.com>
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
drivers/gpu/drm/virtio/virtgpu_drv.h | 15 ++++++
drivers/gpu/drm/virtio/virtgpu_kms.c | 3 ++
drivers/gpu/drm/virtio/virtgpu_object.c | 70 +++++++++++++++++++++++++
drivers/gpu/drm/virtio/virtgpu_prime.c | 43 +++++++++++++++
drivers/gpu/drm/virtio/virtgpu_vram.c | 3 ++
5 files changed, 134 insertions(+)
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 1279f998c8e0..f91f31b861b8 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -98,6 +98,10 @@ struct virtio_gpu_object {
int uuid_state;
uuid_t uuid;
+
+ /* for restoration of objects after hibernation */
+ struct virtio_gpu_object_params params;
+ struct list_head restore_node;
};
#define gem_to_virtio_gpu_obj(gobj) \
container_of((gobj), struct virtio_gpu_object, base.base)
@@ -265,6 +269,8 @@ struct virtio_gpu_device {
struct work_struct obj_free_work;
spinlock_t obj_free_lock;
struct list_head obj_free_list;
+ struct mutex obj_restore_lock;
+ struct list_head obj_restore_list;
struct virtio_gpu_drv_capset *capsets;
uint32_t num_capsets;
@@ -467,6 +473,7 @@ void virtio_gpu_fence_event_process(struct virtio_gpu_device *vdev,
u64 fence_id);
/* virtgpu_object.c */
+void virtio_gpu_remove_from_restore_list(struct virtio_gpu_object *bo);
void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo);
struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev,
size_t size);
@@ -479,6 +486,12 @@ bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo);
int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
uint32_t *resid);
+
+void virtio_gpu_add_object_to_restore_list(struct virtio_gpu_device *vgdev,
+ struct virtio_gpu_object *bo);
+
+int virtio_gpu_object_restore_all(struct virtio_gpu_device *vgdev);
+
/* virtgpu_prime.c */
int virtio_gpu_resource_assign_uuid(struct virtio_gpu_device *vgdev,
struct virtio_gpu_object *bo);
@@ -493,6 +506,8 @@ int virtgpu_dma_buf_import_sgt(struct virtio_gpu_mem_entry **ents,
unsigned int *nents,
struct virtio_gpu_object *bo,
struct dma_buf_attachment *attach);
+int virtgpu_dma_buf_obj_resubmit(struct virtio_gpu_device *vgdev,
+ struct virtio_gpu_object *bo);
/* virtgpu_debugfs.c */
void virtio_gpu_debugfs_init(struct drm_minor *minor);
diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
index 90634a5b0ad7..ad12ea7165c4 100644
--- a/drivers/gpu/drm/virtio/virtgpu_kms.c
+++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
@@ -171,6 +171,8 @@ int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev)
virtio_gpu_array_put_free_work);
INIT_LIST_HEAD(&vgdev->obj_free_list);
spin_lock_init(&vgdev->obj_free_lock);
+ INIT_LIST_HEAD(&vgdev->obj_restore_list);
+ mutex_init(&vgdev->obj_restore_lock);
#ifdef __LITTLE_ENDIAN
if (virtio_has_feature(vgdev->vdev, VIRTIO_GPU_F_VIRGL))
@@ -299,6 +301,7 @@ void virtio_gpu_deinit(struct drm_device *dev)
flush_work(&vgdev->config_changed_work);
virtio_reset_device(vgdev->vdev);
vgdev->vdev->config->del_vqs(vgdev->vdev);
+ mutex_destroy(&vgdev->obj_restore_lock);
}
void virtio_gpu_release(struct drm_device *dev)
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index ec9efacc6919..a1ea65f7b1de 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -63,6 +63,15 @@ static void virtio_gpu_resource_id_put(struct virtio_gpu_device *vgdev, uint32_t
ida_free(&vgdev->resource_ida, id - 1);
}
+void virtio_gpu_remove_from_restore_list(struct virtio_gpu_object *bo)
+{
+ struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
+
+ mutex_lock(&vgdev->obj_restore_lock);
+ list_del_init(&bo->restore_node);
+ mutex_unlock(&vgdev->obj_restore_lock);
+}
+
void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo)
{
struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
@@ -94,6 +103,7 @@ static void virtio_gpu_free_object(struct drm_gem_object *obj)
struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
if (bo->created) {
+ virtio_gpu_remove_from_restore_list(bo);
virtio_gpu_cmd_unref_resource(vgdev, bo);
virtio_gpu_notify(vgdev);
/* completion handler calls virtio_gpu_cleanup_object() */
@@ -220,6 +230,8 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
return PTR_ERR(shmem_obj);
bo = gem_to_virtio_gpu_obj(&shmem_obj->base);
+ INIT_LIST_HEAD(&bo->restore_node);
+
ret = virtio_gpu_resource_id_get(vgdev, &bo->hw_res_handle);
if (ret < 0)
goto err_free_gem;
@@ -258,6 +270,12 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
virtio_gpu_object_attach(vgdev, bo, ents, nents);
}
+ if (!params->virgl) {
+ /* store non-virgl object with its param to the restore list */
+ bo->params = *params;
+ virtio_gpu_add_object_to_restore_list(vgdev, bo);
+ }
+
*bo_ptr = bo;
return 0;
@@ -271,3 +289,55 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
drm_gem_shmem_free(shmem_obj);
return ret;
}
+
+void virtio_gpu_add_object_to_restore_list(struct virtio_gpu_device *vgdev,
+ struct virtio_gpu_object *bo)
+{
+ mutex_lock(&vgdev->obj_restore_lock);
+ list_add_tail(&bo->restore_node, &vgdev->obj_restore_list);
+ mutex_unlock(&vgdev->obj_restore_lock);
+}
+
+int virtio_gpu_object_restore_all(struct virtio_gpu_device *vgdev)
+{
+ struct virtio_gpu_object *bo, *tmp;
+ struct virtio_gpu_mem_entry *ents;
+ unsigned int nents;
+ int ret = 0;
+
+ mutex_lock(&vgdev->obj_restore_lock);
+ list_for_each_entry_safe(bo, tmp, &vgdev->obj_restore_list,
+ restore_node) {
+ if (drm_gem_is_imported(&bo->base.base)) {
+ ret = virtgpu_dma_buf_obj_resubmit(vgdev, bo);
+ if (ret)
+ break;
+
+ continue;
+ }
+
+ if (bo->params.blob || bo->attached) {
+ ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents,
+ &nents);
+ if (ret)
+ break;
+ }
+
+ if (bo->params.blob) {
+ virtio_gpu_cmd_resource_create_blob(vgdev, bo,
+ &bo->params,
+ ents, nents);
+ } else {
+ virtio_gpu_cmd_create_resource(vgdev, bo, &bo->params,
+ NULL, NULL);
+ if (bo->attached) {
+ bo->attached = false;
+ virtio_gpu_object_attach(vgdev, bo, ents,
+ nents);
+ }
+ }
+ }
+ mutex_unlock(&vgdev->obj_restore_lock);
+
+ return ret;
+}
diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c
index 70b3b836e1c9..27eaab9f5cfa 100644
--- a/drivers/gpu/drm/virtio/virtgpu_prime.c
+++ b/drivers/gpu/drm/virtio/virtgpu_prime.c
@@ -23,6 +23,7 @@
*/
#include <drm/drm_prime.h>
+#include <drm/drm_print.h>
#include <linux/virtio_dma_buf.h>
#include "virtgpu_drv.h"
@@ -216,6 +217,7 @@ static void virtgpu_dma_buf_free_obj(struct drm_gem_object *obj)
}
if (bo->created) {
+ virtio_gpu_remove_from_restore_list(bo);
virtio_gpu_cmd_unref_resource(vgdev, bo);
virtio_gpu_notify(vgdev);
return;
@@ -262,6 +264,12 @@ static int virtgpu_dma_buf_init_obj(struct drm_device *dev,
dma_buf_unpin(attach);
dma_resv_unlock(resv);
+ /* store the dmabuf imported object with its params to
+ * the restore list
+ */
+ bo->params = params;
+ virtio_gpu_add_object_to_restore_list(vgdev, bo);
+
return 0;
err_import:
@@ -272,6 +280,39 @@ static int virtgpu_dma_buf_init_obj(struct drm_device *dev,
return ret;
}
+int virtgpu_dma_buf_obj_resubmit(struct virtio_gpu_device *vgdev,
+ struct virtio_gpu_object *bo)
+{
+ struct virtio_gpu_mem_entry *ents;
+ struct scatterlist *sl;
+ int i;
+
+ if (!bo->sgt) {
+ DRM_ERROR("no sgt bound to virtio_gpu_object\n");
+ return -ENOMEM;
+ }
+
+ ents = kvmalloc_array(bo->sgt->nents,
+ sizeof(struct virtio_gpu_mem_entry),
+ GFP_KERNEL);
+ if (!ents) {
+ DRM_ERROR("failed to allocate ent list\n");
+ return -ENOMEM;
+ }
+
+ for_each_sgtable_dma_sg(bo->sgt, sl, i) {
+ ents[i].addr = cpu_to_le64(sg_dma_address(sl));
+ ents[i].length = cpu_to_le32(sg_dma_len(sl));
+ ents[i].padding = 0;
+ }
+
+ virtio_gpu_cmd_resource_create_blob(vgdev, bo, &bo->params,
+ ents, bo->sgt->nents);
+
+ return 0;
+}
+
+
static const struct drm_gem_object_funcs virtgpu_gem_dma_buf_funcs = {
.free = virtgpu_dma_buf_free_obj,
};
@@ -317,6 +358,8 @@ struct drm_gem_object *virtgpu_gem_prime_import(struct drm_device *dev,
if (!bo)
return ERR_PTR(-ENOMEM);
+ INIT_LIST_HEAD(&bo->restore_node);
+
obj = &bo->base.base;
obj->resv = buf->resv;
obj->funcs = &virtgpu_gem_dma_buf_funcs;
diff --git a/drivers/gpu/drm/virtio/virtgpu_vram.c b/drivers/gpu/drm/virtio/virtgpu_vram.c
index 084e80227433..35f3f592ab19 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vram.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vram.c
@@ -18,6 +18,7 @@ static void virtio_gpu_vram_free(struct drm_gem_object *obj)
if (unmap)
virtio_gpu_cmd_unmap(vgdev, bo);
+ virtio_gpu_remove_from_restore_list(bo);
virtio_gpu_cmd_unref_resource(vgdev, bo);
virtio_gpu_notify(vgdev);
return;
@@ -200,6 +201,8 @@ int virtio_gpu_vram_create(struct virtio_gpu_device *vgdev,
obj = &vram->base.base.base;
obj->funcs = &virtio_gpu_vram_funcs;
+ INIT_LIST_HEAD(&vram->base.restore_node);
+
params->size = PAGE_ALIGN(params->size);
drm_gem_private_object_init(vgdev->ddev, obj, params->size);
--
2.34.1