We now write a debug message and then handle things via the fallback path.
Fixes error messages when trying to import incompatible dmabufs.
(in my case: llvmpipe dmabufs into radv)
Instead of allocating one large descriptor pool and hoping we never run
out of descriptors, allocate small ones dynamically, so we know we never
run out.
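Roughly, the allocation path now looks like the sketch below. This is only an illustration of the approach with made-up helper names (create_small_pool(), allocate_descriptor_set(), POOL_MAX_SETS), not the actual GTK code; real code would also keep the old pools around so they can be freed later:

    #include <vulkan/vulkan.h>

    /* Sketch: allocate small, fixed-size descriptor pools on demand and
     * start a new one when allocation from the current pool fails. */
    #define POOL_MAX_SETS 64

    static VkDescriptorPool
    create_small_pool (VkDevice device)
    {
      VkDescriptorPoolSize size = {
        .type = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER,
        .descriptorCount = POOL_MAX_SETS,
      };
      VkDescriptorPoolCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO,
        .maxSets = POOL_MAX_SETS,
        .poolSizeCount = 1,
        .pPoolSizes = &size,
      };
      VkDescriptorPool pool;

      vkCreateDescriptorPool (device, &info, NULL, &pool);
      return pool;
    }

    static VkDescriptorSet
    allocate_descriptor_set (VkDevice               device,
                             VkDescriptorPool      *pool,
                             VkDescriptorSetLayout  layout)
    {
      VkDescriptorSetAllocateInfo info = {
        .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO,
        .descriptorPool = *pool,
        .descriptorSetCount = 1,
        .pSetLayouts = &layout,
      };
      VkDescriptorSet set;

      if (vkAllocateDescriptorSets (device, &info, &set) != VK_SUCCESS)
        {
          /* Pool exhausted (or fragmented): chain a fresh small pool
           * instead of failing, so we can never run out. */
          *pool = create_small_pool (device);
          info.descriptorPool = *pool;
          vkAllocateDescriptorSets (device, &info, &set);
        }

      return set;
    }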
Test included, though the test doesn't fail in CI, because llvmpipe
doesn't care about pool size limits. It does fail on my AMD GPU though.
A fun side note about that test is that the GL renderer handles it best
in normal operation because it caches offscreens per node and we draw the
same node repeatedly.
But the replay test expands them into duplicated unique nodes, and then
the GL renderer runs out of command queue length, so I had to disable
the test on it.
There is now a GskGpuYcbcr struct that maintains all the Vulkan
machinery related to YCbCrConversions.
It's a GskGpuCached, so it will make itself go away when it is no longer
used, e.g. when a video has stopped playing.
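As a rough illustration (the GskGpuYcbcrSketch type and its field names below are made up, not the actual GTK definitions), the object bundles the conversion with the Vulkan objects that depend on it:

    #include <vulkan/vulkan.h>

    /* Illustrative sketch of what such a cached object bundles; the real
     * GskGpuYcbcr lives in GTK and derives from GskGpuCached. */
    typedef struct
    {
      /* GskGpuCached parent goes here, so the cache can expire it */
      VkSamplerYcbcrConversion vk_conversion; /* the YCbCr conversion */
      VkSampler                vk_sampler;    /* immutable sampler using it */
      VkDescriptorSetLayout    vk_layout;     /* layout baking in the sampler */
    } GskGpuYcbcrSketch;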
Instead of trying to cram all descriptors into one large array and only
binding it at the start, we now keep 1 descriptor set per image+sampler
combo and just rebind it every time we switch textures.
This is the very dumb solution that essentially maps to what GL does,
but the performance impact is negligible compared to the complicated
dance we were attempting before.
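In Vulkan terms, switching textures then boils down to a single rebind along these lines (the bind_texture() helper and its arguments are illustrative):

    #include <vulkan/vulkan.h>

    /* Sketch: one descriptor set per image+sampler combo, rebound whenever
     * the current texture changes. */
    static void
    bind_texture (VkCommandBuffer  cmd,
                  VkPipelineLayout layout,
                  VkDescriptorSet  image_sampler_set)
    {
      vkCmdBindDescriptorSets (cmd,
                               VK_PIPELINE_BIND_POINT_GRAPHICS,
                               layout,
                               0,    /* first set */
                               1, &image_sampler_set,
                               0, NULL);
    }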
If desired, try creating GL_SRGB images. Pass a try_srgb boolean down to
the image creation functions and have them attempt to create images like
that.
When it is not possible to create sRGB images in the given format, just
fall back to regular images. The calling code is meant to check the
GSK_GPU_IMAGE_SRGB flag to determine the actual format of the resulting
image.
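One possible shape of that fallback on the Vulkan side, with a hypothetical format_to_srgb() helper; the point is only that a format without an sRGB variant yields a regular image without the GSK_GPU_IMAGE_SRGB flag set:

    #include <vulkan/vulkan.h>
    #include <stdbool.h>

    /* Sketch: map a format to its sRGB variant when one exists; the caller
     * tries that format first and falls back to the plain one on failure.
     * This helper is illustrative, not GTK code. */
    static bool
    format_to_srgb (VkFormat format, VkFormat *out_srgb)
    {
      switch (format)
        {
        case VK_FORMAT_R8G8B8A8_UNORM:
          *out_srgb = VK_FORMAT_R8G8B8A8_SRGB;
          return true;
        case VK_FORMAT_B8G8R8A8_UNORM:
          *out_srgb = VK_FORMAT_B8G8R8A8_SRGB;
          return true;
        default:
          /* No sRGB variant: create a regular image and do not set
           * GSK_GPU_IMAGE_SRGB. */
          return false;
        }
    }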
The VK_IMAGE_LAYOUT_UNDEFINED layout means that the data held by the
texture can be discarded, and we don't want to discard it. Because the
Vulkan spec is unclear (see [1] for a discussion), err on the side of
caution and use VK_IMAGE_LAYOUT_GENERAL.
Fixes import failures with WebKit.
[1] https://github.com/ValveSoftware/gamescope/issues/356
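The relevant difference is only in the old layout of the barrier; a sketch with illustrative arguments:

    #include <vulkan/vulkan.h>

    /* Sketch: transition an imported dmabuf image without discarding its
     * contents by using VK_IMAGE_LAYOUT_GENERAL as the old layout instead
     * of VK_IMAGE_LAYOUT_UNDEFINED. */
    static void
    transition_imported_image (VkCommandBuffer cmd, VkImage image)
    {
      VkImageMemoryBarrier barrier = {
        .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
        .srcAccessMask = 0,
        .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,
        .oldLayout = VK_IMAGE_LAYOUT_GENERAL, /* not UNDEFINED: keep the pixels */
        .newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
        .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .image = image,
        .subresourceRange = {
          .aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
          .levelCount = 1,
          .layerCount = 1,
        },
      };

      vkCmdPipelineBarrier (cmd,
                            VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,
                            VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                            0, 0, NULL, 0, NULL, 1, &barrier);
    }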
Add GSK_GPU_IMAGE_RENDERABLE and GSK_GPU_IMAGE_FILTERABLE and make sure
to check formats for this feature.
This requires reorganizing code to actually do this work instead of just
pretending formats are supported.
This fixes GLES upload tests with NGL.
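Checking the format features could look roughly like this on the Vulkan side (the ImageFlags enum merely stands in for the GSK_GPU_IMAGE_* flags and is illustrative):

    #include <vulkan/vulkan.h>

    /* Sketch: derive renderable/filterable flags from the actual format
     * features instead of assuming support. */
    typedef enum
    {
      IMAGE_RENDERABLE = 1 << 0, /* stands in for GSK_GPU_IMAGE_RENDERABLE */
      IMAGE_FILTERABLE = 1 << 1, /* stands in for GSK_GPU_IMAGE_FILTERABLE */
    } ImageFlags;

    static ImageFlags
    query_format_flags (VkPhysicalDevice gpu, VkFormat format)
    {
      VkFormatProperties props;
      ImageFlags flags = 0;

      vkGetPhysicalDeviceFormatProperties (gpu, format, &props);

      if (props.optimalTilingFeatures & VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BIT)
        flags |= IMAGE_RENDERABLE;
      if (props.optimalTilingFeatures & VK_FORMAT_FEATURE_SAMPLED_IMAGE_FILTER_LINEAR_BIT)
        flags |= IMAGE_FILTERABLE;

      return flags;
    }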
This ensures both that we signal a semaphore for a dmabuf when we export
an image and that we import semaphores for dmabufs and wait on them.
Fixes Vulkan node-editor displaying the Vulkan renderer in the sidebar.
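On the import side, the idea is to wrap the dmabuf's sync fd in a semaphore that the queue submission can wait on. A sketch assuming VK_KHR_external_semaphore_fd, with error handling omitted and a made-up helper name:

    #include <vulkan/vulkan.h>

    /* Sketch: import a sync fd from a dmabuf as a binary semaphore so the
     * submit can wait on it before sampling the image. */
    static VkSemaphore
    import_dmabuf_semaphore (VkDevice                   device,
                             int                        sync_fd,
                             PFN_vkImportSemaphoreFdKHR import_semaphore_fd)
    {
      VkSemaphoreCreateInfo create_info = {
        .sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO,
      };
      VkSemaphore semaphore;

      vkCreateSemaphore (device, &create_info, NULL, &semaphore);

      VkImportSemaphoreFdInfoKHR import_info = {
        .sType = VK_STRUCTURE_TYPE_IMPORT_SEMAPHORE_FD_INFO_KHR,
        .semaphore = semaphore,
        .flags = VK_SEMAPHORE_IMPORT_TEMPORARY_BIT,
        .handleType = VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_SYNC_FD_BIT,
        .fd = sync_fd,
      };
      import_semaphore_fd (device, &import_info);

      /* The semaphore then goes into VkSubmitInfo.pWaitSemaphores. */
      return semaphore;
    }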
Make gsk_renderer_render_texture() create a dmabuf texture if that is
possible.
If it isn't (i.e. if we're not on Linux or if dmabufs are otherwise not
working), fall back to the previous code of creating a memory texture.
We now handle the case where memory is not HOST_CACHED.
We also track the memory type now so we can avoid mapping image memory
that is not HOST_CACHED and use buffer transfers instead.
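The check itself is just a memory-property lookup; a sketch with an illustrative helper name:

    #include <vulkan/vulkan.h>
    #include <stdbool.h>

    /* Sketch: only map memory directly for readback when it is
     * HOST_CACHED; otherwise use a buffer transfer instead. */
    static bool
    memory_type_is_host_cached (VkPhysicalDevice gpu, uint32_t memory_type_index)
    {
      VkPhysicalDeviceMemoryProperties props;

      vkGetPhysicalDeviceMemoryProperties (gpu, &props);

      return (props.memoryTypes[memory_type_index].propertyFlags &
              VK_MEMORY_PROPERTY_HOST_CACHED_BIT) != 0;
    }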
This adds GSK_GPU_IMAGE_CAN_MIPMAP and GSK_GPU_IMAGE_MIPMAP flags and
support to ensure_image() and image creation functions for creating a
mipmapped image.
Mipmaps are created using the new mipmap op that uses
glGenerateMipmap() on GL and equivalent blit ops on Vulkan.
This is then used to ensure the image is mipmapped when rendering it
with a texture-scale node.
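The GL side of the mipmap op boils down to roughly this (illustrative helper, assuming the texture's base level is already populated); the Vulkan side does the equivalent with a chain of vkCmdBlitImage() calls:

    #include <epoxy/gl.h>

    /* Sketch: let the driver build the remaining mip levels once the
     * base level has been uploaded or rendered. */
    static void
    generate_mipmaps (GLuint texture_id)
    {
      glBindTexture (GL_TEXTURE_2D, texture_id);
      glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                       GL_LINEAR_MIPMAP_LINEAR);
      glGenerateMipmap (GL_TEXTURE_2D);
    }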
Add a GSK_GPU_IMAGE_STRAIGHT_ALPHA flag and use it for images that have
straight alpha.
Make sure those images get passed through a premultiplying pass with
the new straight alpha shader.
Also remove the old Postprocess flags from the Vulkan image that were a
leftover from copying that code from the old Vulkan renderer.
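For illustration, the premultiplying pass computes per pixel what this CPU-side sketch does for an RGBA8 value (the real work happens in the new straight-alpha shader on the GPU):

    #include <stdint.h>

    /* Sketch: multiply the color channels by alpha, with rounding, to
     * turn straight alpha into premultiplied alpha. */
    static void
    premultiply_pixel (uint8_t rgba[4])
    {
      uint8_t a = rgba[3];

      rgba[0] = (rgba[0] * a + 127) / 255;
      rgba[1] = (rgba[1] * a + 127) / 255;
      rgba[2] = (rgba[2] * a + 127) / 255;
    }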
This now uses all the previously added features to allow displaying YUV
images.
Also add a utility function that turns an image into a toggle ref for a
texture. This makes sure that reffing the image also refs the texture
and that ensures that textures stay alive as long as the image is in
use.
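Roughly, the toggle-ref mechanism works like the sketch below; the function names are made up and the real utility lives in the GPU renderer code:

    #include <glib-object.h>

    /* Sketch: while anything other than the texture itself references the
     * image, the image pins the texture; once the image is only referenced
     * by the texture, that extra texture reference is dropped again. */
    static void
    image_toggle_notify (gpointer data, GObject *image, gboolean is_last_ref)
    {
      GObject *texture = data;

      if (is_last_ref)
        g_object_unref (texture);   /* image no longer in use elsewhere */
      else
        g_object_ref (texture);     /* image gained an outside ref, pin texture */
    }

    static void
    image_set_texture (GObject *image, GObject *texture)
    {
      /* assumes the image currently has an outside reference */
      g_object_ref (texture);
      g_object_add_toggle_ref (image, image_toggle_notify, texture);
    }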
For now, the flags are just there; nobody uses them yet.
The only flag is EXTERNAL, which for now I'm using for YUV buffers,
though it's a bit undefined what that means.
Images can now have samplers - meaning they must be rendered with that
sampler. That also means the sampler must be handled as an immutable
sampler in descriptor sets.
These samplers can be created with a samplerYcbcrConversion, so code has
been added to pass that conversion when creating the image view.
Also add code to GskVulkanFrame to track immutable samplers.
Nobody is making use of this yet.
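A sketch of the Vulkan plumbing involved, with illustrative names and example filter choices; error handling is omitted:

    #include <vulkan/vulkan.h>

    /* Sketch: create a sampler tied to a VkSamplerYcbcrConversion and bake
     * it into a descriptor set layout as an immutable sampler. */
    static VkDescriptorSetLayout
    create_ycbcr_layout (VkDevice device, VkSamplerYcbcrConversion conversion)
    {
      VkSamplerYcbcrConversionInfo conversion_info = {
        .sType = VK_STRUCTURE_TYPE_SAMPLER_YCBCR_CONVERSION_INFO,
        .conversion = conversion,
      };
      VkSamplerCreateInfo sampler_info = {
        .sType = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO,
        .pNext = &conversion_info,
        .magFilter = VK_FILTER_LINEAR,
        .minFilter = VK_FILTER_LINEAR,
        .addressModeU = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE,
        .addressModeV = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE,
        .addressModeW = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE,
      };
      VkSampler sampler;
      vkCreateSampler (device, &sampler_info, NULL, &sampler);

      VkDescriptorSetLayoutBinding binding = {
        .binding = 0,
        .descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER,
        .descriptorCount = 1,
        .stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT,
        .pImmutableSamplers = &sampler,
      };
      VkDescriptorSetLayoutCreateInfo layout_info = {
        .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO,
        .bindingCount = 1,
        .pBindings = &binding,
      };
      VkDescriptorSetLayout layout;
      vkCreateDescriptorSetLayout (device, &layout_info, NULL, &layout);

      /* The same VkSamplerYcbcrConversionInfo must also be chained into
       * the VkImageViewCreateInfo when creating the image view. */
      return layout;
    }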
... and use it to initialize the "proper" projection matrix to use in
shaders.
The resulting viewport will go from top left (0,0) to bottom right
(width, height) and the z clipping plane will go from -10000 to 10000.
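Assuming graphene is used for the matrix math, the projection described above amounts to something like:

    #include <graphene.h>

    /* Sketch: orthographic projection with the origin in the top left,
     * (width, height) in the bottom right, and z clipped to [-10000, 10000]. */
    static void
    init_projection (graphene_matrix_t *projection, float width, float height)
    {
      graphene_matrix_init_ortho (projection,
                                  0, width,       /* left, right */
                                  0, height,      /* top, bottom */
                                  -10000, 10000); /* near, far */
    }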
This heaves over an initial chunk of code from the Vulkan renderer to
execute shaders.
The only shader that exists for now is a shader that draws a single
texture.
We use that to replace the blit op we were doing before.
For now, it just renders using cairo, uploads the result to the GPU,
blits it onto the framebuffer and then is happy.
But it can do that using Vulkan and using GL (no idea which version).
The most important thing still missing is shaders.
It also has a bunch of copy/paste from the Vulkan renderer that isn't
used yet.
But I didn't want to rip it out and then try to copy it back later.