Compare commits

...

193 Commits

Author SHA1 Message Date
Matthias Clasen
ad2c2592c1 inspector: Update the logging tab
Make the logging-related checkboxes match the flags we have.
2024-01-05 20:54:14 -05:00
Matthias Clasen
d6cfe2365b inspector: Show renderer information
Show the max texture size for the GL renderer, and the optimization
flags (i.e. what is set with GSK_GPU_SKIP) for the GPU renderers.
2024-01-05 20:41:16 -05:00
Matthias Clasen
2883e527ec gpu: Add a private getter for opt flags
We want to show these flags in the inspector, so make them
available.
2024-01-05 20:40:38 -05:00
Matthias Clasen
b4ea4087c3 inspector: Show more Vulkan information
Use GskVulkanDevice for the Vulkan information, and also show
numeric limits and which Vulkan features are enabled.
2024-01-05 20:09:30 -05:00
Matthias Clasen
0554d44d9e inspector: Don't check for nonexisting types
GskVkOldRenderer does not exist anymore.
2024-01-05 19:41:27 -05:00
Matthias Clasen
cfc0cb1553 Update docs
Update the docs for environment variables that we parse to include
GDK_VULKAN_SKIP and GSK_GPU_SKIP.
2024-01-05 19:36:34 -05:00
Matthias Clasen
7543584492 gpu: Cosmetics
Tweak debug value descriptions.
2024-01-05 19:36:34 -05:00
Matthias Clasen
0e9ff94d3e gpu: Whitespace fixes 2024-01-05 19:36:34 -05:00
Benjamin Otte
3fd8ddc38b gpu: Setup attribute locations
Make the generator emit the correct glBindAttribLocation()
calls.

Usually this was done correctly, but we can't rely on it. So do it
explicitly.
2024-01-04 14:54:29 +01:00
Benjamin Otte
b175acbdf5 gl: Don't initialize texture storage in the wrong format
We're calling glTexImage2D() right after with the correct format.

So add the special format "0" to avoid initializing the texture memory.
2024-01-04 14:54:29 +01:00
Benjamin Otte
b67d2018ad testsuite: Add a test for a recent mipmap generation bug
Tests the fix in ab1fba6fdc.

Related: #6298
Related: !6704
2024-01-04 14:54:28 +01:00
Benjamin Otte
d56d1eda4a gpu: Add GSK_GPU_SKIP=blit
This allows bypassing all blit operations using gsk_gpu_blit_op()
in favor of shaders.
2024-01-04 14:54:28 +01:00
Benjamin Otte
7047cb6553 gpu: Add more assertions to blitop
The blitop is nasty, so we should make sure we have supported images
when using it.
2024-01-04 14:54:28 +01:00
Benjamin Otte
36b0df8e74 FIXME: Disable ubershader in NGL CI
Gets around the ubershader compile taking multiple minutes and causing
each test to time out.
2024-01-04 14:54:28 +01:00
Matthias Clasen
7c426add4f testsuite: Add NGL and Vulkan renderer to compare tests
Add a testsuite called gsk-compare-vulkan to run
the gsk renderer tests with the Vulkan renderer and
gsk-compare-ngl to run them with the NGL renderer.

To run the tests locally, you can do:

meson test -C _build --suite gsk-compare-vulkan
2024-01-04 14:54:28 +01:00
Benjamin Otte
afd2ed5101 gpu: Generate mipmap by default when downscaling too much
When downscaling more than 2x in either dimension, force mipmap use for
the texture in a texture node.
It improves the quality of textures but takes extra work.

The GL renderer does this, too (for textures that aren't in the icon cache).

This can be disabled via GSK_GPU_SKIP=mipmap.

Fixes the big-checkerboard-scaled-down2 test.
2024-01-04 14:54:28 +01:00
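
A minimal sketch of that check, with hypothetical names (the 2x threshold is the one stated above):

  /* Hypothetical sketch: force mipmapping when the texture is drawn
   * at less than half its native size in either dimension. */
  static gboolean
  should_mipmap (float texture_width, float texture_height,
                 float target_width, float target_height)
  {
    return texture_width > 2.0f * target_width ||
           texture_height > 2.0f * target_height;
  }
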
Benjamin Otte
f971b1bb15 gpu: Add supersampling for gradients
Unless GSK_GPU_SKIP=gradients is given, we sample every point 4x instead
of 1x. That makes the shader run slower (by roughly a factor of 2.5)
but it improves quality quite a bit.
2024-01-04 14:54:28 +01:00
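
A hedged GLSL sketch of the supersampling, where gradient_color() stands in for the per-gradient evaluation:

  // Average 4 samples spread across the pixel instead of taking a
  // single sample at the center.
  vec4
  supersample_gradient (vec2 pos)
  {
    vec2 dx = 0.25 * dFdx (pos);
    vec2 dy = 0.25 * dFdy (pos);

    return 0.25 * (gradient_color (pos - dx - dy) +
                   gradient_color (pos - dx + dy) +
                   gradient_color (pos + dx - dy) +
                   gradient_color (pos + dx + dy));
  }
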
Benjamin Otte
bcb927d1d7 gpu: Add a blend mode shader
I'm a bit unsure about using the zero rect in the fallback situation
where one image doesn't exist, but it seems to work.

This removes the last pattern-only rendernode and with that the last
fallback usage with disabled ubershader.
2024-01-04 14:54:28 +01:00
Benjamin Otte
c1ba79f0f7 gpu: Use variations in the blur shader
Have one variation for colorizing to a shadow color and another
variation that just blurs.
2024-01-04 14:54:28 +01:00
Benjamin Otte
b0a3976337 gpu: Use variations in the straight-alpha shader
This way we can toggle opacity handling on/off.

The shader slowly turns into a fancy texture op - but I don't want to
rename it to "fancytexture" just yet.
2024-01-04 14:54:28 +01:00
Benjamin Otte
7a6377d1ff gpu: Use variations for radial gradients 2024-01-04 14:54:27 +01:00
Benjamin Otte
4100436c50 gpu: Use variations for the mask mode 2024-01-04 14:54:27 +01:00
Benjamin Otte
2ce60f1260 gpu: Make linear gradients use variations 2024-01-04 14:54:27 +01:00
Benjamin Otte
88e5b6ba8d gpu: Make box shadow shader use variations
Use it to differentiate between inset and outset shadows.
2024-01-04 14:54:27 +01:00
Benjamin Otte
0268248d9e gpu: Introduce the concept of "variation"
A variation is a #define/specialization constant that every shader can
use to specialize itself as it sees fit.

This commit adds the infrastructure, future commits will add
implementations.
2024-01-04 14:54:27 +01:00
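
A hedged sketch of what a variation can look like on the Vulkan side, as a specialization constant (the VARIATION_* name is hypothetical); on GL, injecting a #define with the chosen value achieves the same:

  layout(constant_id = 0) const uint GSK_VARIATION = 0u;

  #define VARIATION_INSET_SHADOW (1u << 0)

  // Shaders can branch on this at no runtime cost, since the value
  // is baked in at pipeline creation.
  bool
  has_variation (uint variation)
  {
    return (GSK_VARIATION & variation) != 0u;
  }
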
Benjamin Otte
3976f88fed png: Don't set a size limit when saving
gdk_texture_save_to_png_bytes() cannot fail, so ensure that it doesn't.

Testsuite has been updated to check for this case.

Note that we do not load the PNG file that we generate here.
Loading is a lot more scary than saving after all.
If people want to load oversized PNG files, they should use a real PNG
loader.
2024-01-04 14:54:27 +01:00
Benjamin Otte
5bf21e264e gpu: Add a cross-fade shader 2024-01-04 14:54:27 +01:00
Benjamin Otte
407d08ff0d gpu: Handle a clipping corner case properly
If we enter the situation where we need to redirect the clipping to an
offscreen, make sure that:

* the ubershader gets only used when beneficial

* we size the offscreen properly and don't let it grow infinitely.

Fixes the clip-intersection-fail-opacity test
2024-01-04 14:54:27 +01:00
Benjamin Otte
2cb7433065 gpu: Handle opacity in the Cairo path
Fixes random calls to add_fallback_node() from code that wants to handle
opacity.

Also makes Cairo nodes work with opacity, of course.
2024-01-04 14:54:27 +01:00
Benjamin Otte
d08e5f7d82 gpu: Handle alpha in image_op() wrapper
There are various places where the alpha is implicitly assumed to be
handled, so just handle it.

As a bonus, this simplifies a bunch of code and makes the texture node
rendering work with alpha.
2024-01-04 14:54:27 +01:00
Benjamin Otte
a21fcd9ca6 gpu: Optimize box-shadow shader
Like in the border shader, don't draw the transparent inside part,
which can be large for a window's shadow.
2024-01-04 14:54:26 +01:00
Benjamin Otte
f8b0c95894 gpu: Replace a fallback with offscreens
Use an offscreen and mask it if the clips get too complicated.

Technically, the code could be improved to set the rounded clip on the
offscreen instead of rendering it as a mask, but that would require more
sophisticated tracking of clip regions by respecting the scissor, and
the current clip handling can't do that yet.

This removes one of the last places where the GPU renderer was still
using Cairo fallbacks.
2024-01-04 14:54:26 +01:00
Benjamin Otte
c46ab19f76 gpu: Add gsk_gpu_node_processor_add_images()
This is for generating descriptors for more than 1 image. The arguments
for this function are very awkward, but I couldn't come up with better
ones and the function isn't that important.

And the calling places still look a lot nicer now.
2024-01-04 14:54:26 +01:00
Benjamin Otte
11232b2611 gpu: Add gsk_gpu_node_processor_init_draw/finish_draw()
These initialize/finish an offscreen and start drawing to it.

It simplifies the process of using offscreens quite a bit that way.
2024-01-04 14:54:26 +01:00
Benjamin Otte
caa898103f gpu: Add stroke support
Same as for fill nodes, we render the stroke to a mask and then run a
mask shader.

The color node child optimization is included already.
2024-01-04 14:54:26 +01:00
Benjamin Otte
2692cb9a6e gpu: Optimize solid color fills 2024-01-04 14:54:26 +01:00
Benjamin Otte
9ba8c95b8b gpu: Add fill node support
For now this uses Cairo to generate a mask and then runs a mask op.

This is different from just using fallback in that the child is rendered
with the GPU and not via fallback.
2024-01-04 14:54:26 +01:00
Benjamin Otte
58953432d3 gpu: Add a radial gradient shader
This is mainly copy/paste, so now this almost identical gradient code
exists 3 times.
That's kinda suboptimal.
2024-01-04 14:54:26 +01:00
Benjamin Otte
ac6d854a3a gpu: Add a conic gradient shader 2024-01-04 14:54:26 +01:00
Benjamin Otte
89b65ec9b9 gpu: Split linear gradient shader into 2 parts
A generic part that can be shared by all gradient shaders that does the
color stop handling and a gradient-specific part that needs to be
implemented individually by each gradient implementation.
2024-01-04 14:54:25 +01:00
Benjamin Otte
0ed32f4d2d gpu: Change the cairo upload op prototype
Make it take a draw function instead of a node.

This way, we can draw more fancy stuff with Cairo.
2024-01-04 14:54:25 +01:00
Benjamin Otte
6cabaad562 gpu: Handle >7 color stops
If there are more than 7 color stops, we can split the gradient into
multiple gradients with color stops like so:
  0, 1, 2, 3, 4, 5, transparent
  transparent, 6, 7, 8, 9, 10, transparent
  ...
  transparent, n-2, n-1, n
and use the new BLEND_ADD to draw them on top of each other.

Adapt the testcase that tests this to use colors that work with the fancy
algorithm we use now, so that BLEND_ADD and transitions to transparent
do not cause issues.
2024-01-04 14:54:25 +01:00
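
A hedged C sketch of the split, where add_stop() is a hypothetical helper. The scheme works because, with premultiplied alpha, the two sub-gradients meeting at a seam sum back to the original interpolation under ADD blending:

  /* Emit stops [start..end] of the full gradient, padded with
   * transparent guard stops at the neighboring stops' offsets. */
  static void
  emit_chunk (const GskColorStop *stops, gsize n_stops,
              gsize start, gsize end)
  {
    const GdkRGBA transparent = { 0, 0, 0, 0 };

    if (start > 0)
      add_stop (stops[start - 1].offset, &transparent);
    for (gsize i = start; i <= end; i++)
      add_stop (stops[i].offset, &stops[i].color);
    if (end + 1 < n_stops)
      add_stop (stops[end + 1].offset, &transparent);
  }
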
Benjamin Otte
801225c3d3 gpu: Make the ubershader use the correct coordinates
Instead of scaled coordinates, use the unscaled ones.

This ensures that gradients get computed correctly, as they are not safe
against nonorthogonal transforms - like scales with different scale
factors.
2024-01-04 14:54:25 +01:00
Benjamin Otte
66b783cc7e gpu: Make blend modes configurable
For now, we only have OVER and ADD blend modes. This commit doesn't use
ADD, it just sets up all the machinery and refactors things.
2024-01-04 14:54:25 +01:00
Benjamin Otte
136d3d9439 gpu: Add a linear-gradient shader
The shader can only deal with up to 7 color stops - but that's good
enough for the real world.

Plus, we have the uber shader.

And if that fails, we can still fall back to Cairo.

The code also doesn't handle repeating linear gradients yet.
2024-01-04 14:54:25 +01:00
Benjamin Otte
ec5aa5e736 gpu: Add a mask shader
This shader can take over from the ubershader. And it can be used
instead of launching the ubershader when no offscreens are necessary.

Also includes an optimization that uses the colorize shader when
appropriate.
2024-01-04 14:54:25 +01:00
Benjamin Otte
f862f66e4a rgba: Add a few macros
... and use them.

Those macros hopefully make code more readable.
2024-01-04 14:54:25 +01:00
Benjamin Otte
73143dc7ab gpu: Use ubershader for repeat nodes when possible 2024-01-04 14:54:25 +01:00
Benjamin Otte
ee54f5a50a gpu: Add a repeat node renderer
The ubershader has some corner cases where it can't be used, in
particular when the child is massively larger than the repeat node and
the repeat node is used to clip lots of the source.
2024-01-04 14:54:25 +01:00
Benjamin Otte
db93fd6b7b renderer: Add Vulkan renderer to the list of renderers
It's better than the Cairo renderer, so use it instead.

It's still only picked once GL fails, so it will probably only ever be
picked when people use GDK_DEBUG=gl-disable, but at least it will be
picked.
2024-01-04 14:54:24 +01:00
Benjamin Otte
fb6ee45064 renderer: Split function
Per-backend renderers and GL renderers are different things, so treat
them as such.

Also, try the GL renderer unconditionally. The renderer initialization
code will take care of GL not being available.
2024-01-04 14:54:24 +01:00
Benjamin Otte
f6f43cd374 vulkan: Add a Vulkan downloader
This is using the Vulkan renderer.

It also allows claiming support for all the formats that only Vulkan
supports, but that neither GL nor native mmap can handle.
2024-01-04 14:54:24 +01:00
Benjamin Otte
0473a429ac gpu: Implement a GdkDmabufDownloader 2024-01-04 14:54:24 +01:00
Benjamin Otte
d225f40acd gpu: Add a boolean flag allow_dmabuf to the downloadop
It can be set to force the downloadop to not create dmabuf textures.
2024-01-04 14:54:24 +01:00
Benjamin Otte
2bcee1e69e gpu: Update to memoryformat Vulkan code
The existing code for setting up formats was copied from the old Vulkan
renderer and never updated.
2024-01-04 14:54:24 +01:00
Benjamin Otte
9a81ed9ab1 testsuite: Add new renderers to the scaling test 2024-01-04 14:54:24 +01:00
Benjamin Otte
fb9c4c15be gpu: Reorganize format handling
Add GSK_GPU_IMAGE_RENDERABLE and GSK_GPU_IMAGE_FILTERABLE and make sure
to check formats for this feature.

This requires reorganizing code to actually do this work instead of just
pretending formats are supported.

This fixes GLES upload tests with NGL.
2024-01-04 14:54:24 +01:00
Benjamin Otte
f2ac427035 gpu: Add debug messages
Add FALLBACK debug messages when a texture upload format is not
supported.
2024-01-04 14:54:24 +01:00
Benjamin Otte
95d7c7de3a vulkan: Use vulkan feature flags for incremental present
I did it because it unifies the code.

But it also gains the benefit of being debuggable because it can
now be turned off via GDK_VULKAN_SKIP=incremental-present
2024-01-04 14:54:24 +01:00
Benjamin Otte
80a2cbb447 gpu: sync dmabufs via semaphores
This ensures both that we signal a semaphore for a dmabuf when we export
an image and that we import semaphores for dmabufs and wait on them.

Fixes Vulkan node-editor displaying the Vulkan renderer in the sidebar.
2024-01-04 14:54:23 +01:00
Benjamin Otte
14a1c53358 gpu: Make VulkanRealDescriptor keep the frame
There's too much interaction between the two to warrant not having it
around.
2024-01-04 14:54:23 +01:00
Benjamin Otte
fcb8e920c8 gpu: Add support for blend modes 2024-01-04 14:54:23 +01:00
Benjamin Otte
8003d875ee gpu: Make Vulkan renderer provide dmabuf textures
Make gsk_renderer_render_texture() create a dmabuf texture if that is
possible.

If it isn't (ie if we're not on Linux or if dmabufs are otherwise not
working) fall back to the previous code of creating a memory texture.
2024-01-04 14:54:23 +01:00
Benjamin Otte
ce08750906 gpu: Add gsk_gpu_device_create_download_image()
This way, we can differentiate between regular offscreens and images
that are meant to be used for gsk_renderer_render_texture() or similar.
2024-01-04 14:54:23 +01:00
Benjamin Otte
15c72b32bd gpu: Update the pipeline cache
When a new shader was compiled, queue a save of the pipeline cache.
2024-01-04 14:54:23 +01:00
Benjamin Otte
21841872f4 gpu: Implement support for multiple storage buffers
When using the uber shader a lot, we may overflow the (only 16kB large)
storage buffer.

Stop crashing when that happens and instead just allocate a new one.
2024-01-04 14:54:23 +01:00
Benjamin Otte
9ef440a577 gpu: Handle storage buffers via descriptors
This makes the (currently single) storage buffer handled by
GskGpuDescriptors.
A side effect is that we now have support for multiple buffers in place.
We just have to use it.

Mixed into this commit is a complete rework of the pattern writer.
Instead of writing straight into the buffer (complete with repeatedly
backtracking when we have to do offscreens), write into a temporary
buffer and copy into the storage buffer on committing.
2024-01-04 14:54:23 +01:00
Georges Basile Stavracas Neto
83b922f2ee gpu/renderer: Improve scale detection
The GL branch should eventually call into gdk_gl_context_get_scale(),
which is what checks for GDK_DEBUG=gl-fractional; whereas the Vulkan
branch needs no change.
2024-01-04 14:54:23 +01:00
Benjamin Otte
a6bcc2208a gpu: Rename some descriptors APIs
We want to support buffers here, too, so make the names unambiguous.
2024-01-04 14:54:22 +01:00
Benjamin Otte
f7f61fadb3 gpu: Only run uber shaders if beneficial
If we have the choice between running the ubershader or a normal shader
with offscreens, make the choice depend on if the ubershader would
offscreen anyway.
If so, just run the normal shader.

This really gets rid of all ubershader invocations in Adwaita
widget-factory.
2024-01-04 14:54:22 +01:00
Benjamin Otte
8bb3cca512 gpu: Add a color matrix shader
This allows avoiding the uber shader in 2 important cases:

1. opacity - and Adwaita uses that a lot
2. color-matrix - it's used for symbolic icons
2024-01-04 14:54:22 +01:00
Benjamin Otte
6dc9736b37 gpu: Add support for subsurfaces 2024-01-04 14:54:22 +01:00
Benjamin Otte
cd054a6e6d gpu: Rework caching layer
Instead of using an enum, use a usual custom class struct like we use
for GskGpuOp.

As a side effect of that refactoring, the display gained a hash table
for textures where we can't use the render data because the texture is
used in multiple renderers.
The goal here is that a texture is always cached and we can ensure that
there is a 1:1 relation between textures and their GskGpuImage. This is
important in particular for external textures - like dmabufs - where we
absolutely don't want 2 images with 2 device memories, and where we use
toggle references to keep them alive.
2024-01-04 14:54:22 +01:00
Benjamin Otte
2e11cf754c gpu: Handle opacity in a bunch of nodes 2024-01-04 14:54:22 +01:00
Benjamin Otte
880aaab037 gpu: Add optimization for opacity nodes
Track the global opacity and allow nodes to handle it themselves.

If the nodes don't, fall back to what we did previously: Use the
ubershader.
2024-01-04 14:54:22 +01:00
Benjamin Otte
a295d59db8 gpu: Make the uber shader handle affine transforms
Fixes the last (non-blur) offscreens in Nautilus.
2024-01-04 14:54:22 +01:00
Benjamin Otte
13e6ecb330 gpu: Replace clip node fallback with uber or offscreen
We don't need to use a fallback here anymore, we have enough code by now
to do this smarter.
2024-01-04 14:54:22 +01:00
Benjamin Otte
88c25199ce gpu: Split out a function
We want to use it elsewhere.
2024-01-04 14:54:22 +01:00
Benjamin Otte
743dfc276a gpu: Replace fallback with offscreen
We don't need to fall back with transform nodes when the clip is
too complex. We can redirect to an offscreen instead, which doesn't have
a clip.
2024-01-04 14:54:21 +01:00
Benjamin Otte
134d809a7d gpu: Split out a function
I want to use it in more places.
2024-01-04 14:54:21 +01:00
Benjamin Otte
3936e7ee7d gpu: Be stricter about texture units
Reserve 3 texture units per immutable sampler (because that's the
maximum per YUV sampler).
Ensure that the max-sampler calculations always include the immutable
samplers, too.
2024-01-04 14:54:21 +01:00
Benjamin Otte
a5c7339920 gpu: Improve memory handling on Vulkan
We now handle the case where memory is not HOST_CACHED.

We also track the memory type now so we can avoid mapping image memory
that is not HOST_CACHED and use buffer transfers instead.
2024-01-04 14:54:21 +01:00
Benjamin Otte
e8fb0e71c9 gpu: Make the texture ladder handle 32 textures
So now we can put more textures in one descriptor set even if dynamic
indexing isn't supported.
2024-01-04 14:54:21 +01:00
Benjamin Otte
9d0b15273d gpu: Require Vulkan 1.2 shaders for dynamic indexing
Shader compilers struggle with compiling code that indexes texture
arrays by indexes, so keep the fallback shaders simple and don't do that
there.

There's not much of a performance difference anyway between those two
methods.
2024-01-04 14:54:21 +01:00
Benjamin Otte
168be35bed gpu: Add back single descriptors set usage with descriptor indexing 2024-01-04 14:54:21 +01:00
Benjamin Otte
9777188ae4 gpu: Make descriptor-indexing optional
Do extra work if it's not available.
2024-01-04 14:54:21 +01:00
Benjamin Otte
21f52cacf9 gpu: Remove UPDATE_AFTER_BIND flag
We don't update after binding.
2024-01-04 14:54:21 +01:00
Benjamin Otte
5ed293e3df gpu: Handle multiple image descriptors
In the case where descriptor indexing is not enabled and the number of
max images is small (or we use extensive amounts of immutable samplers),
we need to be able to switch descriptors.

This patch makes that possible.
2024-01-04 14:54:21 +01:00
Benjamin Otte
c51c3b2660 gpu: Cache GL state
That way we don't need to setup the textures and program for every
command.
2024-01-04 14:54:20 +01:00
Benjamin Otte
5dc455f011 gpu: Add a CommandState struct to the command vfuncs
This way, we can make it writable and track things like the active
textures and the current program.

We don't do that yet, but we can.
2024-01-04 14:54:20 +01:00
Benjamin Otte
fbd972498f gpu: Handle missing support for nonuniform texture accesses
We compile custom shaders for Vulkan 1.0 that don't require the
extension.

We also ensure that our accesses are uniform by only executing one
shader at a time.
2024-01-04 14:54:20 +01:00
Benjamin Otte
b87d82de7e gpu: Change the meson code for how SPIR-V is built
This does the same thing, but in a way that's a bit more flexible.
And it prepares the next commit...
2024-01-04 14:54:20 +01:00
Benjamin Otte
c165b9d47c gpu: Add GDK_VULKAN_SKIP env var
The variable forces skipping of optional Vulkan features. This is useful
for testing.

Also debug-print the features in use.
2024-01-04 14:54:20 +01:00
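
For instance, a hypothetical invocation that forces the slow descriptor-set path and disables Ycbcr support:

  GDK_VULKAN_SKIP=descriptor-indexing,ycbcr GSK_RENDERER=vulkan gtk4-demo
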
Benjamin Otte
37d4e29c44 gpu: Set max samplers/buffers based on features
If we run older code, we don't have enough samplers and buffers
available. So make sure to reflect that.
2024-01-04 14:54:20 +01:00
Benjamin Otte
1923cad9f7 gpu: Add Vulkan feature flags for more advanced features
Code continues if those features aren't available, but is probably
broken.
2024-01-04 14:54:20 +01:00
Benjamin Otte
9349ba8330 gpu: Update shader code for different buffer/sampler sizes
Use specialization constants for that.
2024-01-04 14:54:20 +01:00
Benjamin Otte
0c72233a9d gpu: Make PipelineLayout objects do more things
Let the objects track the number of samplers or buffers needed.

This is a required step for making Vulkan work with less featureful
(read: mobile) implementations.
2024-01-04 14:54:20 +01:00
Benjamin Otte
9733b8d4e1 gpu: Track position fwidth explicitly
This is relevant when encountering repeat nodes, where the repeat cutoff
will make the fwidth of the position go wild otherwise.

Gradients require more work now, because we need to compute offsets
twice - once for the pixel, once for the offset.
2024-01-04 14:54:20 +01:00
Benjamin Otte
4d9ff5e46a gpu: Add support for dmabuf import to GL 2024-01-04 14:54:19 +01:00
Benjamin Otte
9cf776d02e gpu: Prepare GL rendering for samplerExternalEOS
Carry an n_external_textures variable around when selecting programs and
compile different programs for different amounts of external textures.

For now, this code is unused, but dmabufs will need it.
2024-01-04 14:54:19 +01:00
Benjamin Otte
55dbbabd0c gpu: Add importing of GL textures to the GL renderer
Syncing with the GLsync is kind of a hack (because we just do it on
import), but it works.
2024-01-04 14:54:19 +01:00
Benjamin Otte
46baaa9337 gpu: Add support for texture-scale nodes
This adds GSK_GPU_IMAGE_CAN_MIPMAP and GSK_GPU_IMAGE_MIPMAP flags and
support to ensure_image() and image creation functions for creating a
mipmapped image.

Mipmaps are created using the new mipmap op that uses
glGenerateMipmap() on GL and equivalent blit ops on Vulkan.

This is then used to ensure the image is mipmapped when rendering it
with a texture-scale node.
2024-01-04 14:54:19 +01:00
Benjamin Otte
697f2b8e95 gpu: Add blitting support
Add GSK_GPU_IMAGE_NO_BLIT flag for textures that can't be blitted from.

Use a blit op to do image copies otherwise.
2024-01-04 14:54:19 +01:00
Benjamin Otte
9af809a7ff testsuite: Add a test to check scaling behavior
Various GL texture formats do not support linear scaling or mipmap
generation.

Add a test that checks this.
2024-01-04 14:54:19 +01:00
Benjamin Otte
2e25936383 testsuite: Update memorytexture test to use new renderers
1. Refactor checks for GL support

2. Add NGL and Vulkan
2024-01-04 01:36:13 +01:00
Benjamin Otte
7652dbcacf gpu: Add straight alpha support
Add a GSK_GPU_IMAGE_STRAIGHT_ALPHA flag and use it for images that have
straight alpha.
Make sure those images get passed through a premultiplying pass with
the new straight alpha shader.

Also remove the old Postprocess flags from the Vulkan image that were a
leftover from copying that code from the old Vulkan renderer.
2024-01-04 01:36:13 +01:00
Benjamin Otte
8df8e1283f gpu: Work around Ycbcr not working with dynamically uniform indices
There's a well hidden line in the spec that says in
https://registry.khronos.org/vulkan/specs/1.3/html/chap15.html#interfaces-resources-descset

  If the combined image sampler enables sampler Y′CBCR conversion,
  it **must** be indexed only by constant integral expressions when
  aggregated into arrays in shader code, irrespective of the
  shaderSampledImageArrayDynamicIndexing feature.

So we'll use the same trick that we use for old GL here and do an
if dance that gives us dynamically uniform expressions.
2024-01-04 01:36:13 +01:00
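
A hedged GLSL sketch of that if dance (array size hypothetical): every sampler access uses a constant index, and the branch conditions stay dynamically uniform:

  vec4
  sample_ycbcr (uint id, vec2 uv)
  {
    if (id == 0u)
      return texture (immutable_textures[0], uv);
    else if (id == 1u)
      return texture (immutable_textures[1], uv);
    else
      return texture (immutable_textures[2], uv);
  }
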
Benjamin Otte
2199822d5f gpu: Add dmabuf import for Vulkan
This now uses all the previously added features to allow displaying YUV
images.

Also add a utility function that turns an image into a toggle ref for a
texture. This makes sure that reffing the image also refs the texture
and that ensures that textures stay alive as long as the image is in
use.
2024-01-04 01:36:13 +01:00
Benjamin Otte
16a3a04405 vulkan: Add support for enumerating dmabuf formats
This code does not add a downloader, so we do not claim support for all
the new formats.

It just queries the formats. But this can be used to import dmabufs
directly into the Vulkan renderer.
2024-01-04 01:36:13 +01:00
Benjamin Otte
f21938cae6 gpu: Handle flags for images
For now, the flags are just there, and nobody uses them yet.
The only flag is EXTERNAL, which for now I'm using for YUV buffers,
though it's a bit undefined what that means.
2024-01-04 01:36:12 +01:00
Benjamin Otte
2cb06f212b gpu: Add a mipmap sampler
It's not used yet, but the sampler infrastructure needs to be expanded,
so I decided to split this out to make regressions easier to find.
2024-01-04 01:36:12 +01:00
Benjamin Otte
18e39f86b6 gpu: Add support for immutable samplers to Vulkan
Images can now have samplers - meaning they must be rendered with that
sampler. It also means that sampler must be handled as an immutable
sampler in descriptorsets.
These samplers can be created with a samplerYcbcrConversion, so code has
been added to pass that conversion when creating the imageview.

Also add code to GskVulkanFrame to track immutable samplers.

Nobody is making use of this yet.
2024-01-04 01:36:12 +01:00
Benjamin Otte
f13f70a448 gpu: Hook up immutable samplers to shaders
Define an array with a compile-time-constant variable size for the
immutable samplers.

A bunch of work is necessary to ensure that at least one element is in
the sampler array, because the GLSL code
  sampler2D immutable_textures[0];
is invalid.
2024-01-04 01:36:12 +01:00
Benjamin Otte
faf24c4bfe gpu: Add GskVulkanPipelineLayout
This allows having different layouts so that we can support immutable
samplers, which are required for multiplane and YUV formats.

We don't use them yet.
2024-01-04 01:36:12 +01:00
Benjamin Otte
894db4d96e gpu: Add a cache for YcbcrConversions
We index them only by VkFormat for now because we don't have another
differentiator.

It's unused so far.
2024-01-04 01:36:12 +01:00
Benjamin Otte
d6593f3a3b gpu: Add an "external" allocator to Vulkan
The allocator is supposed to be used with externally allocated vkMemory
objects that are meant to be freed normally - in particular dmabufs.
2024-01-04 01:36:12 +01:00
Benjamin Otte
3f42d92eaf gpu: Add GdkDisplay::vulkan_features
Use it to collect the optional features we are interested in and turn
them on only if available.

For now we add the dmabuf features, but we don't use them yet.
2024-01-04 01:36:12 +01:00
Benjamin Otte
33d531b417 gpu: Create Vulkan samplers on-demand 2024-01-04 01:36:12 +01:00
Benjamin Otte
89b1a0a10b gpu: Make Vulkan image formats check use newer functions
This is just an update of all vkGetFoo() calls to use vkGetFoo2().
2024-01-04 01:36:12 +01:00
Benjamin Otte
6e76e8b56e gpu: Move caching to the upload_texture() function
So when uploading a texture, we will automatically put it into the cache
now.
2024-01-04 01:36:11 +01:00
Benjamin Otte
f4879a0f2e gpu: Factor out uploading textures into a vfunc
This way GL and Vulkan can run custom code to import GL textures and
dmabufs.

This function also decides if and how to cache the textures it creates.
2024-01-04 01:36:11 +01:00
Benjamin Otte
cfb2a31382 gpu: Apply clip to ubershader bounds
Fixes excessive bounds when using the ubershader for huge nodes
contained inside clip nodes.
2024-01-04 01:36:11 +01:00
Benjamin Otte
40797e146d gpu: Fail to create images that are too big
It's up to the renderers to handle the NULL return value.
2024-01-04 01:36:11 +01:00
Benjamin Otte
5e6517a43f gpu: Allow texture uploads to fail
The main reason here is that we want to not fail when the texture size
is larger than the supported GpuImage size.

When that happens, for now we just fall back slowly - ultimately to
drawing with Cairo, which is going to be clipped.
2024-01-04 01:36:11 +01:00
Benjamin Otte
095b798e8c gpu: Add render_texture() fallback impl for huge sizes
This copies over the GLRenderer approach of step-by-step filling a
memorytexture.

It just adds some extra niceties by respecting the best format.
2024-01-04 01:36:11 +01:00
Benjamin Otte
d259b5a8cc gpu: Add gsk_gpu_device_get_max_image_size()
... and initialize it properly.
2024-01-04 01:36:11 +01:00
Benjamin Otte
3c11c703e6 gpu: Handle overlapping rounded rect corners
Have a fallback in place for the most egregious abuses of rounded
corners, like
  0 0 50 50 / 50 0
and the like.

Fixes obscure border colors.
2024-01-04 01:36:11 +01:00
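
A hedged sketch of the usual CSS-style normalization for such abuses (names hypothetical): find the worst overlap along each edge and scale every radius by it:

  static float
  corner_scale (float width, float height, const graphene_size_t corner[4])
  {
    /* corner[]: 0 = top-left, 1 = top-right,
     *           2 = bottom-right, 3 = bottom-left */
    float s = 1.0f;

    if (corner[0].width + corner[1].width > width)       /* top edge */
      s = MIN (s, width / (corner[0].width + corner[1].width));
    if (corner[2].width + corner[3].width > width)       /* bottom edge */
      s = MIN (s, width / (corner[2].width + corner[3].width));
    if (corner[0].height + corner[3].height > height)    /* left edge */
      s = MIN (s, height / (corner[0].height + corner[3].height));
    if (corner[1].height + corner[2].height > height)    /* right edge */
      s = MIN (s, height / (corner[1].height + corner[2].height));

    return s;
  }
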
Benjamin Otte
78113ac63c gpu: Add a box shadow shader
Code was inspired mainly by
  https://madebyevan.com/shaders/fast-rounded-rectangle-shadows/
and
  https://pcwalton.github.io/_posts/2015-12-21-drawing-css-box-shadows-in-webrender.html

So far the results aren't cached, that's the task of future commits.
2024-01-04 01:36:11 +01:00
Benjamin Otte
e9f5328010 gpu: Add a rounded color shader
There's multiple uses I want it for:

1. Generating the box-shadow area for blurring
2. Generating masks for rounded-rect masking
3. Optimizing the common use case of rounded-clip + color

Only the last one is implemented in this commit.
2024-01-04 01:36:11 +01:00
Benjamin Otte
463cd55395 gpu: Turn globals into macros
This way, we can be more flexible in refactoring how we handle globals
(guess what we're gonna do next).
2024-01-04 01:36:10 +01:00
Benjamin Otte
ebf291eb2d gpu: Don't try to be smart
Don't try to use all those fancy GL features like glMapBuffer() and
such. Just malloc() some buffer memory and glBufferSubData() it later.

That works everywhere and is faster than (almost?) any combination of
fancy new buffer APIs. And yes I'm frustrated because I played with
those flags and none of them were better than this.

Doubles the framerate on my discrete AMD GPU.
2024-01-04 01:36:10 +01:00
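
A hedged sketch of the pattern (the reallocating glBufferData() call is illustrative):

  /* Accumulate vertex data in plain malloc'ed memory while recording
   * ops, then upload it in one go before drawing. */
  guchar *data = g_malloc (size);
  /* ... write vertices into data ... */
  glBindBuffer (GL_ARRAY_BUFFER, vbo);
  glBufferData (GL_ARRAY_BUFFER, size, NULL, GL_STATIC_DRAW);
  glBufferSubData (GL_ARRAY_BUFFER, 0, size, data);
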
Benjamin Otte
e54ff3110b gpu: Use GL_STREAM_DRAW for the push constants buffer
This seems to hit a bunch of optimizations and makes push constants
slightly faster.
2024-01-04 01:36:10 +01:00
Benjamin Otte
84505db614 gpu: Merge ops on GL, too
Just like on Vulkan, try to minimize the glDrawArrays() calls by merging
adjacent ops.
2024-01-04 01:36:10 +01:00
Benjamin Otte
5e95af0c4c gpu: Refactor image handling
Introduce a new GskGpuImageDescriptors object that tracks descriptors
for a set of images that can be managed by the GPU.
Then have each GskGpuShaderOp just reference the descriptors object it is
using, so that the code can set things up properly.

To reference an image, the ops now just reference their descriptor -
which is the uint32 we've been sending to the shaders since forever.
2024-01-04 01:36:10 +01:00
Benjamin Otte
ee89451a78 gpu: Add atlas support
... and use it for glyphs.
2024-01-04 01:36:10 +01:00
Benjamin Otte
998ede9e87 gpu: Add a GL optimization
Use glDrawArraysInstancedBaseInstance() to draw. (Yay for GL naming.)
That allows setting up the offset in the vertex array without having to
glVertexAttribPointer() everything again.

However, this is only supported since GL 4.2 and not at all in stock GLES,
so we need to have code that can work without it.
Fortunately, it is mandatory in Vulkan, so every recent GPU supports it.
And if that GPU has a proper driver, it will also expose the GL extension
for it.
(Hint: You can check https://opengles.gpuinfo.org/listextensions.php for
how many proper drivers exist outside of Mesa.)
2024-01-04 01:36:10 +01:00
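
A hedged sketch of the two code paths (helper name and stride hypothetical):

  if (have_base_instance)
    {
      glDrawArraysInstancedBaseInstance (GL_TRIANGLES, 0, 6,
                                         n_ops, first_op);
    }
  else
    {
      /* Without the extension, re-point the vertex attributes at the
       * right offset, then draw without a base instance. */
      setup_vertex_attributes (first_op * op_stride);
      glDrawArraysInstanced (GL_TRIANGLES, 0, 6, n_ops);
    }
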
Benjamin Otte
9fe0eab943 gpu: Make border shader usable for inset/outset
... and use it for those when unblurred.
2024-01-04 01:36:10 +01:00
Benjamin Otte
f0835717b9 gpu: Add GSK_GPU_SKIP env var
The env var allows skipping various optimizations in the GPU shader.

This is useful for testing during development when trying to figure
out how to make a renderer as fast as possible.

We could also use it to enable/disable optimizations depending on GL
version or so, but I didn't think about that too much yet.
2024-01-04 01:36:10 +01:00
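
For instance, a hypothetical invocation that skips the uber shader and mipmap generation:

  GSK_GPU_SKIP=uber,mipmap GSK_RENDERER=ngl gtk4-demo
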
Benjamin Otte
33f02a6365 gpu: Copy the clear trick from the Vulkan shader
When drawing opaque color regions that are large enough, use
vkCmdClearAttachments()/glClear() instead of a shader. This speeds up
background rendering, in particular on older GPUs.

See the commit messages of
  bb2cd7225e
  ce042f7ba1
  0edd7547c1
for a further discussion of performance impacts.
2024-01-04 01:36:09 +01:00
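
A hedged sketch of the GL side (GSK works with premultiplied alpha, hence the multiplications):

  glEnable (GL_SCISSOR_TEST);
  glScissor (x, y, width, height);
  glClearColor (color.red * color.alpha,
                color.green * color.alpha,
                color.blue * color.alpha,
                color.alpha);
  glClear (GL_COLOR_BUFFER_BIT);
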
Benjamin Otte
0d39299337 gpu: Add a color shader
We don't want to use the pattern shader for simple colors, slow GPUs do
not like this at all.
2024-01-04 01:36:09 +01:00
Benjamin Otte
144303afe6 gpu: Implement blur nodes
With the work already done with shadow nodes, this one was easy.
2024-01-04 01:36:09 +01:00
Benjamin Otte
3e2a93352f gpu: Implement shadow nodes 2024-01-04 01:36:09 +01:00
Benjamin Otte
4f3c3a05c9 gpu: Change sorting for ops
The previous algorithm would reverse the order of subpasses, which leads
to unexpected behavior if dependent subpasses are not added as children
of a subpass, but just as a previous subpass - like when a subpass is
used multiple times later.

An example for this is a shadow node with multiple shadows - the source
of the shadow is used by the multiple shadows.

So ensure that adjacent subpasses stay in the same order.
2024-01-04 01:36:09 +01:00
Benjamin Otte
733f936350 gpu: Add a "transparent" sampler
This is using the equivalent of EXTEND_NONE, but I wasn't sure what to
call it.

It's unused atm.
2024-01-04 01:36:09 +01:00
Benjamin Otte
cee93a5dc8 gpu: Turn off optimizing in glslc
The code generated by glslc -O is optimized worse by Mesa than
code generated unoptimized.

So generate unoptimized code until somebody figures out what's going
wrong here.
2024-01-04 01:36:09 +01:00
Benjamin Otte
c5dac3298f gpu: Add support for mask patterns 2024-01-04 01:36:09 +01:00
Benjamin Otte
7917b08320 gpu: Add support for cross-fades 2024-01-04 01:36:09 +01:00
Benjamin Otte
83a411890f gpu: Add repeat nodes
They're done using the pattern shader.

The pattern shader now gained a stack where vec4's can be pushed and
popped back later, which allows storing the position before computing
the new position inside the repeat node's child.
2024-01-04 01:36:09 +01:00
Benjamin Otte
4473f5d439 gpu: Introduce gsk_texture() shader function/macro
Due to GLES and old GL not allowing non-constant texture array
lookups,we need to turn the array lookup into a big switch statementin
those versions, and that requires putting the texture() call into that
switch.

But with that trick, we can use texture IDs in GLSL.
2024-01-04 01:36:08 +01:00
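
A hedged sketch of what the function/macro can expand to on those GLSL versions (array size hypothetical):

  uniform sampler2D textures[4];

  vec4
  gsk_texture (uint id, vec2 uv)
  {
    switch (id)
      {
      case 0u: return texture (textures[0], uv);
      case 1u: return texture (textures[1], uv);
      case 2u: return texture (textures[2], uv);
      default: return texture (textures[3], uv);
      }
  }
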
Benjamin Otte
da29dd3fbb gpu: Add colorize shader
... and use it for glyphs.

The name is a slight variation of the "coloring" name from the GL
renderer.
The functionality is exactly what the "glyph" shader from the Vulkan
renderer does.
2024-01-04 01:36:08 +01:00
Benjamin Otte
655dcba6ae gpu: Improve conic gradient rendering
1. Compute the fwidth() twice with shifted offsets (sketched below)
   That way, we avoid glitches at the boundary between 0.0 and 1.0,
   because by offsetting it by 0.5, that boundary goes away.
   Then we take the min() of both which gives us the one we care about.

2. Set the gradient to repeating
   By doing that, we don't get values at the 0.0/1.0 boundary clamped,
   but things smoothly transition.
   This smoothes the line at that boundary and makes it look just like
   every other line.
2024-01-04 01:36:08 +01:00
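
A hedged GLSL sketch of point 1, with `offset` being the angular gradient offset in [0, 1):

  // Measure the filter width twice, once as-is and once shifted by
  // 0.5 (wrapped), then keep the smaller value so the 0.0/1.0
  // wrap-around doesn't blow up the derivative.
  float w = min (fwidth (offset),
                 fwidth (fract (offset + 0.5)));
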
Benjamin Otte
c459184b64 gpu: Round offscreens to pixel boundaries
Instead of strictly rounding to the given clip rectangle, increase the
rectangle to the next pixel boundary.

Also add docs that the clip_bounds do not influence the actual size of
the returned image.
2024-01-04 01:36:08 +01:00
Benjamin Otte
275aa1b549 gpu: Add GskGpuPatternWriter
It's just an object that encapsulates everything needed to create (the
data for) a pattern op.

It also clarifies which code does what, because now the NodeProcessor
and the PatternWriter are 2 different things.
2024-01-04 01:36:08 +01:00
Benjamin Otte
1fa8ce7226 gpu: Make shader image access a vfunc
That allows shaders to handle textures differently.

In particular, it will allow the pattern shader to take a huge number
of textures.
2024-01-04 01:36:08 +01:00
Benjamin Otte
c94f900946 gpu: Add clip pattern
So now we can clip inside an opacity node without needing fallback.
2024-01-04 01:36:08 +01:00
Benjamin Otte
41a6237799 gpu: Passthrough subsurface nodes
We don't support subsurfaces for now, so we can just ignore the nodes.
2024-01-04 01:36:08 +01:00
Benjamin Otte
5d73d7073c gpu: Add support for debug nodes
Passthrough is always easy.
2024-01-04 01:36:08 +01:00
Benjamin Otte
b06610e614 gpu: Add a border shader
Pretty much a copy of the Vulkan border shader.

A notable change is that the input arguments are changed, because GL
gets confused if you put a mat4 at the end.
2024-01-04 01:36:08 +01:00
Benjamin Otte
1757934582 gpu: Handle nested buffer writes
Doing get_node_as_image() may spawn a new buffer writer that
writes into the same buffer when rendering an offscreen with patterns.

So as a more or less hacky workaround, we now abort the current buffer
write and restart it once we've created the image.
2024-01-04 01:36:07 +01:00
Benjamin Otte
c34b2df412 gpu: Implement conic gradients 2024-01-04 01:36:07 +01:00
Benjamin Otte
59ad96bc30 gpu: Add radial gradients 2024-01-04 01:36:07 +01:00
Benjamin Otte
248a24aa63 gpu: Add linear gradients to pattern shader
This copy/pastes the gist of the Vulkan gradient renderer.
2024-01-04 01:36:07 +01:00
Benjamin Otte
5d5e63eb1d gpu: Add glyphs support
This is very rudimentary, but it's a step in the direction of getting
text going.

There's lots of things missing still.
2024-01-04 01:36:07 +01:00
Benjamin Otte
14081c2c59 gpu: Add a utility function for colors in patterns
... and use it.
2024-01-04 01:36:07 +01:00
Benjamin Otte
3744cf5b8b gpu: Add color-matrix handling to pattern shader 2024-01-04 01:36:07 +01:00
Benjamin Otte
3ba82ab8aa gpu: Make pattern creation always succeed
If creation fails, create an offscreen image instead and draw that as a
texture.

Because offscreens basically always succeed, we can pretty much assume
success everywhere - apart from pattern creation functions that also
create images, because they can run out of shader space.
2024-01-04 01:36:07 +01:00
Benjamin Otte
4b024a8a9e gpu: Have a get_node_as_pattern() function
... and use it to replace all the individual parts of code that get the
node as a pattern.
2024-01-04 01:36:07 +01:00
Benjamin Otte
a354fadd32 gpu: Add a texture pattern
Needs a lot of infrastructure for handling images, but we're handling
images now.
2024-01-04 01:36:06 +01:00
Benjamin Otte
71d064c68e gpu: Move the pattern code into the nodeprocessor
We need the nodeprocessor infrastructure to create patterns, so keeping
it in a different source file would just cause header headaches.
2024-01-04 01:36:06 +01:00
Benjamin Otte
98e52eea50 gpu: Add a texture cache
... and use it when drawing textures.

We'll need it in other places later, but for now that's what we have.
2024-01-04 01:36:06 +01:00
Benjamin Otte
312423eab1 gpu: Make frames carry a timestamp
Frames now carry a timestamp for when they are used.

This is mainly intended to attach timestamps to cached items (textures
or glyphs), but it could in theory also be used when profiling.

We use wallclock time here, not server time, because it's cheaper and
because we're more interested in the local machine we're rendering on.
2024-01-04 01:36:06 +01:00
Benjamin Otte
658a68b112 gpu: Make patterns do opacity nodes
Of course, for now this only works for opacity nodes that contain color
nodes, but we're still building up to more useful stuff here.
2024-01-04 01:36:06 +01:00
Benjamin Otte
b0e26fe13c gpu: Move pattern code into its own file
Now we can extend the pattern creation easily - and we can add new
patterns quickly later.

Plus, we need to keep this file in sync with pattern.glsl and it's neat
when those 2 files reference only each other.
2024-01-04 01:36:06 +01:00
Benjamin Otte
a34fd1c318 gpu: Add float array to shaders and add an ubershader
... and use it for a naive color node implementation using both
so I can test it actually works.
2024-01-04 01:36:06 +01:00
Benjamin Otte
78fd547fd4 gpu: Clip fallback nodes to current clip
Avoids uploading parts of the node that aren't visible.
2024-01-04 01:36:06 +01:00
Benjamin Otte
9cebb8f395 gpu: Set scissor rect before clearing
The render area that restricts clearing on Vulkan needs to be respected
by the GL renderer, too.
2024-01-04 01:36:06 +01:00
Benjamin Otte
bbf162f8e5 gpu: Move global syncing out
This is necessary so that fallback code can properly sync itself,
instead of just add_node().

Fixes a bunch of glitches when fallbacks would be used.
2024-01-04 01:36:06 +01:00
Benjamin Otte
e0d411d13f gpu: Add a flip_y argument to shader execution
Because GL flips its shit sometimes (i.e. when it's the framebuffer),
pass the height of the target as the flip variable, so commands
that need to operate on the pixels can flip the y axis around this value.
2024-01-04 01:36:05 +01:00
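
A hedged sketch of how a command can apply it, with flip_y being the target height (or 0 when no flipping is needed):

  if (flip_y)
    y = flip_y - y - height;   /* flip the rect's y origin */
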
Benjamin Otte
108fb83723 gpu: Add handling for (rounded) clip node(s)
This is again mostly a copy of the Vulkan renderer.

It's a bit awkward codewise with the new invalidation framework,
because we need to cache the previous values individually now,
but it's a lot more fine-grained, and we don't emit globals multiple
times when clips are nested.
2024-01-04 01:36:05 +01:00
Benjamin Otte
6cc21122dc gpu: Add scissor operation
Nothing is using it yet (we don't do clipping) apart from initializing
the scissor rect at startup.
2024-01-04 01:36:05 +01:00
Benjamin Otte
737f04f6fe gpu: Add support for transform nodes
This essentially copies the Vulkan renderer machinery, but adapts it to
the new handling with pending globals.
2024-01-04 01:36:05 +01:00
Benjamin Otte
9a6c55566d gpu: Document the coordinate systems we use 2024-01-04 01:36:05 +01:00
Benjamin Otte
d767d56284 gpu: Add gsk_gpu_image_get_projection_matrix()
... and use it to initialize the "proper" projection matrix to use in
shaders.

The resulting viewport will go from top left (0,0) to bottom right
(width, height) and the z clipping plane will go from -10000 to 10000.
2024-01-04 01:36:05 +01:00
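
A hedged sketch using graphene's orthographic initializer:

  graphene_matrix_init_ortho (&projection,
                              0, width,        /* left, right */
                              0, height,       /* top, bottom */
                              -10000, 10000);  /* z near, far */
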
Benjamin Otte
0b029fe497 gpu: Handle container nodes
Now everything is slower because we upload every other node
individually!
2024-01-04 01:36:05 +01:00
Benjamin Otte
9e6fad7727 gpu: Add ability to run shaders
This heaves over an initial chunk of code from the Vulkan renderer to
execute shaders.

The only shader that exists for now is a shader that draws a single
texture.
We use that to replace the blit op we were doing before.
2024-01-04 01:36:05 +01:00
Benjamin Otte
df985a53ea inspector: Learn about new renderers 2024-01-04 01:36:05 +01:00
Benjamin Otte
971cbc6478 gsk: Add new renderers to the GSK_RENDERER env var
Use:
  "ngl" for the new GL renderer
  "vulkan" for the new Vulkan renderer
2024-01-04 01:36:05 +01:00
Benjamin Otte
ffb247e975 node-editor: Add the new renderer(s) 2024-01-04 01:36:04 +01:00
Benjamin Otte
05ec9949dc gl: Undeprecate the NGL renderer for now
It's used for the new GPU renderer.
2024-01-04 01:36:04 +01:00
Benjamin Otte
5cf8b66244 gpu: Use gdk_draw_context_empty_frame() when appropriate
It's a new function, so make use of it.
2024-01-04 01:36:04 +01:00
Benjamin Otte
e7890b98a3 gpu: Add outline of new GPU renderer
For now, it just renders using cairo, uploads the result to the GPU,
blits it onto the framebuffer and then is happy.

But it can do that using Vulkan and using GL (no idea which version).

The most important thing still missing is shaders.

It also has a bunch of copy/paste from the Vulkan renderer that isn't
used yet.
But I didn't want to rip it out and then try to copy it back later.
2024-01-04 01:36:04 +01:00
Benjamin Otte
28214e3b46 vulkan: Turn Vulkan instances into init/ref/unref
This is not yet used.
2024-01-04 01:36:04 +01:00
Benjamin Otte
c69a0ebc2b vulkan: Remove GskVulkanRenderer
We want to introduce a new one next.

Technically, this breaks API, because gsk_vulkan_renderer_new() is going
away, but practically, we're gonna bring it back once we introduce that
renderer in a few commits.
2024-01-04 01:36:04 +01:00
244 changed files with 24926 additions and 10466 deletions

View File

@@ -1157,6 +1157,9 @@ node_editor_window_realize (GtkWidget *widget)
node_editor_window_add_renderer (self,
gsk_gl_renderer_new (),
"OpenGL");
node_editor_window_add_renderer (self,
gsk_ngl_renderer_new (),
"NGL");
#ifdef GDK_RENDERING_VULKAN
node_editor_window_add_renderer (self,
gsk_vulkan_renderer_new (),

View File

@@ -274,49 +274,46 @@ are only available when GTK has been configured with `-Ddebug=true`.
`renderer`
: General renderer information
`cairo`
: cairo renderer information
`opengl`
: OpenGL renderer information
`shaders`
: Shaders
`surface`
: Surfaces
`vulkan`
: Vulkan renderer information
`shaders`
: Information about shaders
`surface`
: Information about surfaces
`fallback`
: Information about fallbacks
: Information about fallback usage in renderers
`glyphcache`
: Information about glyph caching
`verbose`
: Print verbose output while rendering
A number of options affect behavior instead of logging:
`diff`
: Show differences
`geometry`
: Show borders
: Show borders (when using cairo)
`full-redraw`
: Force full redraws for every frame
: Force full redraws
`sync`
: Sync after each frame
`staging`
: Use a staging image for texture upload (Vulkan only)
`offload-disable`
: Disable graphics offload to subsurfaces
`vulkan-staging-image`
: Use a staging image for Vulkan texture upload
`vulkan-staging-buffer`
: Use a staging buffer for Vulkan texture upload
`cairo`
: Overlay error pattern over cairo drawing (finds fallbacks)
The special value `all` can be used to turn on all debug options. The special
value `help` can be used to obtain a list of all supported debug options.
@@ -357,6 +354,38 @@ the default selection of the device that is used for Vulkan rendering.
The special value `list` can be used to obtain a list of all Vulkan
devices.
### `GDK_VULKAN_SKIP`
This variable can be set to a list of values, which cause GDK to
disable certain features of the Vulkan support.
`dmabuf`
: Never import Dmabufs
`ycbcr`
: Do not support Ycbcr textures
`descriptor-indexing`
: Force slow descriptor set layout codepath
`dynamic-indexing`
: Hardcode small number of buffer and texture arrays
`nonuniform-indexing`
: Split draw calls to ensure uniform texture accesses
`semaphore-export`
: Disable sync of exported dmabufs
`semaphore-import`
: Disable sync of imported dmabufs
`incremental-present`
: Do not send damage regions
The special value `all` can be used to turn on all values. The special
value `help` can be used to obtain a list of all supported values.
### `GSK_RENDERER`
If set, selects the GSK renderer to use. The following renderers can
@@ -378,6 +407,9 @@ using and the GDK backend supports them:
`gl`
: Selects the "gl" OpenGL renderer
`ngl`
: Selects the "ngl" OpenGL renderer
`vulkan`
: Selects the Vulkan renderer
@@ -395,6 +427,32 @@ to remedy this on the GTK side; the best bet before trying the above
workarounds is to try to update your graphics drivers and Nahimic
installation.
### `GSK_GPU_SKIP`
This variable can be set to a list of values, which cause GSK to
disable certain features of the "ngl" and "vulkan" renderers.
`uber`
: Don't use the uber shader
`clear`
: Use shaders instead of vkCmdClearAttachment()/glClear()
`blit`
: Use shaders instead of vkCmdBlit()/glBlitFramebuffer()
`gradients`
: Don't supersample gradients
`mipmap`
: Avoid creating mipmaps
`gl-baseinstance`
: Assume no ARB/EXT_base_instance support
The special value `all` can be used to turn on all values. The special
value `help` can be used to obtain a list of all supported values.
### `GSK_MAX_TEXTURE_SIZE`
Limit texture size to the minimum of this value and the OpenGL limit

View File

@@ -40,7 +40,7 @@
#include "gdkglcontextprivate.h"
#include "gdkmonitorprivate.h"
#include "gdkrectangle.h"
#include "gdkvulkancontext.h"
#include "gdkvulkancontextprivate.h"
#ifdef HAVE_EGL
#include <epoxy/egl.h>
@@ -416,6 +416,13 @@ gdk_display_dispose (GObject *object)
g_clear_pointer (&display->egl_dmabuf_formats, gdk_dmabuf_formats_unref);
g_clear_pointer (&display->egl_external_formats, gdk_dmabuf_formats_unref);
#ifdef GDK_RENDERING_VULKAN
if (display->vk_dmabuf_formats)
{
gdk_display_unref_vulkan (display);
g_assert (display->vk_dmabuf_formats == NULL);
}
#endif
g_clear_object (&priv->gl_context);
#ifdef HAVE_EGL
@@ -1939,6 +1946,10 @@ gdk_display_init_dmabuf (GdkDisplay *self)
#ifdef HAVE_DMABUF
if (!GDK_DISPLAY_DEBUG_CHECK (self, DMABUF_DISABLE))
{
#ifdef GDK_RENDERING_VULKAN
gdk_display_add_dmabuf_downloader (self, gdk_vulkan_get_dmabuf_downloader (self, builder));
#endif
#ifdef HAVE_EGL
gdk_display_add_dmabuf_downloader (self, gdk_dmabuf_get_egl_downloader (self, builder));
#endif

View File

@@ -41,6 +41,17 @@ G_BEGIN_DECLS
typedef struct _GdkDisplayClass GdkDisplayClass;
typedef enum {
GDK_VULKAN_FEATURE_DMABUF = 1 << 0,
GDK_VULKAN_FEATURE_YCBCR = 1 << 1,
GDK_VULKAN_FEATURE_DESCRIPTOR_INDEXING = 1 << 2,
GDK_VULKAN_FEATURE_DYNAMIC_INDEXING = 1 << 3,
GDK_VULKAN_FEATURE_NONUNIFORM_INDEXING = 1 << 4,
GDK_VULKAN_FEATURE_SEMAPHORE_EXPORT = 1 << 5,
GDK_VULKAN_FEATURE_SEMAPHORE_IMPORT = 1 << 6,
GDK_VULKAN_FEATURE_INCREMENTAL_PRESENT = 1 << 7,
} GdkVulkanFeatures;
/* Tracks information about the device grab on this display */
typedef struct
{
@@ -107,6 +118,8 @@ struct _GdkDisplay
char *vk_pipeline_cache_etag;
guint vk_save_pipeline_cache_source;
GHashTable *vk_shader_modules;
GdkDmabufFormats *vk_dmabuf_formats;
GdkVulkanFeatures vulkan_features;
guint vulkan_refcount;
#endif /* GDK_RENDERING_VULKAN */

View File

@@ -1919,6 +1919,19 @@ gdk_dmabuf_get_memory_format (guint32 fourcc,
}
#ifdef GDK_RENDERING_VULKAN
gboolean
gdk_dmabuf_vk_get_nth (gsize n,
guint32 *fourcc,
VkFormat *vk_format)
{
if (n >= G_N_ELEMENTS (supported_formats))
return FALSE;
*fourcc = supported_formats[n].fourcc;
*vk_format = supported_formats[n].vk.format;
return TRUE;
}
VkFormat
gdk_dmabuf_get_vk_format (guint32 fourcc,
VkComponentMapping *out_components)

View File

@@ -54,6 +54,9 @@ gboolean gdk_dmabuf_get_memory_format (guint32
gboolean premultiplied,
GdkMemoryFormat *out_format);
#ifdef GDK_RENDERING_VULKAN
gboolean gdk_dmabuf_vk_get_nth (gsize n,
guint32 *fourcc,
VkFormat *vk_format);
VkFormat gdk_dmabuf_get_vk_format (guint32 fourcc,
VkComponentMapping *out_components);
#endif

View File

@@ -642,7 +642,7 @@ gdk_rgba_parser_parse (GtkCssParser *parser,
{
if (gtk_css_token_is_ident (token, "transparent"))
{
*rgba = (GdkRGBA) { 0, 0, 0, 0 };
*rgba = GDK_RGBA_TRANSPARENT;
}
else if (gdk_rgba_parse (rgba, gtk_css_token_get_string (token)))
{

View File

@@ -34,6 +34,11 @@
((_GDK_RGBA_SELECT_COLOR(str, 2, 4) << 4) | _GDK_RGBA_SELECT_COLOR(str, 2, 5)) / 255., \
((sizeof(str) % 4 == 1) ? ((_GDK_RGBA_SELECT_COLOR(str, 3, 6) << 4) | _GDK_RGBA_SELECT_COLOR(str, 3, 7)) : 0xFF) / 255. })
#define GDK_RGBA_INIT_ALPHA(rgba,opacity) ((GdkRGBA) { (rgba)->red, (rgba)->green, (rgba)->blue, (rgba)->alpha * (opacity) })
#define GDK_RGBA_TRANSPARENT ((GdkRGBA) { 0, 0, 0, 0 })
#define GDK_RGBA_BLACK ((GdkRGBA) { 0, 0, 0, 1 })
#define GDK_RGBA_WHITE ((GdkRGBA) { 1, 1, 1, 1 })
gboolean gdk_rgba_parser_parse (GtkCssParser *parser,
GdkRGBA *rgba);

View File

@@ -24,10 +24,27 @@
#include "gdkvulkancontextprivate.h"
#include "gdkdebugprivate.h"
#include "gdkdmabufformatsbuilderprivate.h"
#include "gdkdmabuffourccprivate.h"
#include "gdkdmabuftextureprivate.h"
#include "gdkdisplayprivate.h"
#include <glib/gi18n-lib.h>
#include <math.h>
#ifdef GDK_RENDERING_VULKAN
static const GdkDebugKey gsk_vulkan_feature_keys[] = {
{ "dmabuf", GDK_VULKAN_FEATURE_DMABUF, "Never import Dmabufs" },
{ "ycbcr", GDK_VULKAN_FEATURE_YCBCR, "Do not support Ycbcr textures" },
{ "descriptor-indexing", GDK_VULKAN_FEATURE_DESCRIPTOR_INDEXING, "Force slow descriptor set layout codepath" },
{ "dynamic-indexing", GDK_VULKAN_FEATURE_DYNAMIC_INDEXING, "Hardcode small number of buffer and texure arrays" },
{ "nonuniform-indexing", GDK_VULKAN_FEATURE_NONUNIFORM_INDEXING, "Split draw calls to ensure uniform texture accesses" },
{ "semaphore-export", GDK_VULKAN_FEATURE_SEMAPHORE_EXPORT, "Disable sync of exported dmabufs" },
{ "semaphore-import", GDK_VULKAN_FEATURE_SEMAPHORE_IMPORT, "Disable sync of imported dmabufs" },
{ "incremental-present", GDK_VULKAN_FEATURE_INCREMENTAL_PRESENT, "Do not send damage regions" },
};
#endif
/**
* GdkVulkanContext:
*
@@ -60,9 +77,6 @@ struct _GdkVulkanContextPrivate {
guint n_images;
VkImage *images;
cairo_region_t **regions;
gboolean has_present_region;
#endif
guint32 draw_index;
@@ -533,6 +547,74 @@ physical_device_supports_extension (VkPhysicalDevice device,
return FALSE;
}
static gboolean
physical_device_check_features (VkPhysicalDevice device,
GdkVulkanFeatures *out_features)
{
VkPhysicalDeviceVulkan12Features v12_features = {
.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VULKAN_1_2_FEATURES,
};
VkPhysicalDeviceSamplerYcbcrConversionFeatures ycbcr_features = {
.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SAMPLER_YCBCR_CONVERSION_FEATURES,
.pNext = &v12_features
};
VkPhysicalDeviceFeatures2 features = {
.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2,
.pNext = &ycbcr_features
};
VkExternalSemaphoreProperties semaphore_props = {
.sType = VK_STRUCTURE_TYPE_EXTERNAL_SEMAPHORE_PROPERTIES,
};
vkGetPhysicalDeviceFeatures2 (device, &features);
vkGetPhysicalDeviceExternalSemaphoreProperties (device,
&(VkPhysicalDeviceExternalSemaphoreInfo) {
.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_EXTERNAL_SEMAPHORE_INFO,
.handleType = VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_SYNC_FD_BIT,
},
&semaphore_props);
*out_features = 0;
if (features.features.shaderUniformBufferArrayDynamicIndexing &&
features.features.shaderSampledImageArrayDynamicIndexing)
*out_features |= GDK_VULKAN_FEATURE_DYNAMIC_INDEXING;
if (v12_features.descriptorIndexing &&
v12_features.descriptorBindingPartiallyBound &&
v12_features.descriptorBindingVariableDescriptorCount &&
v12_features.descriptorBindingSampledImageUpdateAfterBind &&
v12_features.descriptorBindingStorageBufferUpdateAfterBind)
*out_features |= GDK_VULKAN_FEATURE_DESCRIPTOR_INDEXING;
else if (physical_device_supports_extension (device, VK_EXT_DESCRIPTOR_INDEXING_EXTENSION_NAME))
*out_features |= GDK_VULKAN_FEATURE_DESCRIPTOR_INDEXING;
if (v12_features.shaderSampledImageArrayNonUniformIndexing &&
v12_features.shaderStorageBufferArrayNonUniformIndexing)
*out_features |= GDK_VULKAN_FEATURE_NONUNIFORM_INDEXING;
if (ycbcr_features.samplerYcbcrConversion)
*out_features |= GDK_VULKAN_FEATURE_YCBCR;
if (physical_device_supports_extension (device, VK_KHR_EXTERNAL_MEMORY_FD_EXTENSION_NAME) &&
physical_device_supports_extension (device, VK_EXT_IMAGE_DRM_FORMAT_MODIFIER_EXTENSION_NAME))
*out_features |= GDK_VULKAN_FEATURE_DMABUF;
if (physical_device_supports_extension (device, VK_KHR_EXTERNAL_SEMAPHORE_FD_EXTENSION_NAME))
{
if (semaphore_props.externalSemaphoreFeatures & VK_EXTERNAL_SEMAPHORE_FEATURE_EXPORTABLE_BIT)
*out_features |= GDK_VULKAN_FEATURE_SEMAPHORE_EXPORT;
if (semaphore_props.externalSemaphoreFeatures & VK_EXTERNAL_SEMAPHORE_FEATURE_IMPORTABLE_BIT)
*out_features |= GDK_VULKAN_FEATURE_SEMAPHORE_IMPORT;
}
if (physical_device_supports_extension (device, VK_KHR_INCREMENTAL_PRESENT_EXTENSION_NAME))
*out_features |= GDK_VULKAN_FEATURE_INCREMENTAL_PRESENT;
return TRUE;
}
static void
gdk_vulkan_context_begin_frame (GdkDrawContext *draw_context,
GdkMemoryDepth depth,
@@ -578,42 +660,38 @@ gdk_vulkan_context_end_frame (GdkDrawContext *draw_context,
GdkVulkanContext *context = GDK_VULKAN_CONTEXT (draw_context);
GdkVulkanContextPrivate *priv = gdk_vulkan_context_get_instance_private (context);
GdkSurface *surface = gdk_draw_context_get_surface (draw_context);
VkPresentRegionsKHR *regionsptr = VK_NULL_HANDLE;
VkPresentRegionsKHR regions;
GdkDisplay *display = gdk_draw_context_get_display (draw_context);
VkRectLayerKHR *rectangles;
double scale;
int n_regions;
scale = gdk_surface_get_scale (surface);
n_regions = cairo_region_num_rectangles (painted);
rectangles = g_alloca (sizeof (VkRectLayerKHR) * n_regions);
for (int i = 0; i < n_regions; i++)
if (display->vulkan_features & GDK_VULKAN_FEATURE_INCREMENTAL_PRESENT)
{
cairo_rectangle_int_t r;
double scale;
cairo_region_get_rectangle (painted, i, &r);
scale = gdk_surface_get_scale (surface);
n_regions = cairo_region_num_rectangles (painted);
rectangles = g_alloca (sizeof (VkRectLayerKHR) * n_regions);
rectangles[i] = (VkRectLayerKHR) {
.layer = 0,
.offset.x = (int) floor (r.x * scale),
.offset.y = (int) floor (r.y * scale),
.extent.width = (int) ceil (r.width * scale),
.extent.height = (int) ceil (r.height * scale),
};
for (int i = 0; i < n_regions; i++)
{
cairo_rectangle_int_t r;
cairo_region_get_rectangle (painted, i, &r);
rectangles[i] = (VkRectLayerKHR) {
.layer = 0,
.offset.x = (int) floor (r.x * scale),
.offset.y = (int) floor (r.y * scale),
.extent.width = (int) ceil (r.width * scale),
.extent.height = (int) ceil (r.height * scale),
};
}
}
else
{
rectangles = NULL;
n_regions = 0;
}
regions = (VkPresentRegionsKHR) {
.sType = VK_STRUCTURE_TYPE_PRESENT_REGIONS_KHR,
.swapchainCount = 1,
.pRegions = &(VkPresentRegionKHR) {
.rectangleCount = n_regions,
.pRectangles = rectangles,
},
};
if (priv->has_present_region)
regionsptr = &regions;
GDK_VK_CHECK (vkQueuePresentKHR, gdk_vulkan_context_get_queue (context),
&(VkPresentInfoKHR) {
@@ -629,7 +707,14 @@ gdk_vulkan_context_end_frame (GdkDrawContext *draw_context,
.pImageIndices = (uint32_t[]) {
priv->draw_index
},
.pNext = regionsptr,
.pNext = rectangles == NULL ? NULL : &(VkPresentRegionsKHR) {
.sType = VK_STRUCTURE_TYPE_PRESENT_REGIONS_KHR,
.swapchainCount = 1,
.pRegions = &(VkPresentRegionKHR) {
.rectangleCount = n_regions,
.pRectangles = rectangles,
},
}
});
cairo_region_destroy (priv->regions[priv->draw_index]);
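Note the rounding directions above when damage rectangles are converted to
device pixels: offsets are floored and extents are ceiled, so fractional
scales round the damage outwards instead of clipping it. A worked example
with illustrative values:
/* r = { .x = 10, .y = 10, .width = 5, .height = 5 } at scale = 1.5:
 *   offset.x = floor (10 * 1.5) = 15
 *   offset.y = floor (10 * 1.5) = 15
 *   extent.width  = ceil (5 * 1.5) = 8
 *   extent.height = ceil (5 * 1.5) = 8
 * so 7.5 device pixels of damage are reported as 8.
 */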
@@ -699,7 +784,7 @@ gdk_vulkan_context_real_init (GInitable *initable,
VkBool32 supported;
uint32_t i;
priv->vulkan_ref = gdk_display_ref_vulkan (display, error);
priv->vulkan_ref = gdk_display_init_vulkan (display, error);
if (!priv->vulkan_ref)
return FALSE;
@@ -819,9 +904,6 @@ gdk_vulkan_context_real_init (GInitable *initable,
if (priv->formats[GDK_MEMORY_U16].vk_format.format == VK_FORMAT_UNDEFINED)
priv->formats[GDK_MEMORY_U16] = priv->formats[GDK_MEMORY_FLOAT32];
priv->has_present_region = physical_device_supports_extension (display->vk_physical_device,
VK_KHR_INCREMENTAL_PRESENT_EXTENSION_NAME);
if (!gdk_vulkan_context_check_swapchain (context, error))
goto out_surface;
@@ -1122,10 +1204,8 @@ gdk_vulkan_save_pipeline_cache_cb (gpointer data)
}
void
gdk_vulkan_context_pipeline_cache_updated (GdkVulkanContext *self)
gdk_display_vulkan_pipeline_cache_updated (GdkDisplay *display)
{
GdkDisplay *display = gdk_draw_context_get_display (GDK_DRAW_CONTEXT (self));
g_clear_handle_id (&display->vk_save_pipeline_cache_source, g_source_remove);
display->vk_save_pipeline_cache_source = g_timeout_add_seconds_full (G_PRIORITY_DEFAULT_IDLE - 10,
10, /* random choice that is not now */
@@ -1150,14 +1230,6 @@ gdk_display_create_pipeline_cache (GdkDisplay *display)
}
}
VkPipelineCache
gdk_vulkan_context_get_pipeline_cache (GdkVulkanContext *self)
{
g_return_val_if_fail (GDK_IS_VULKAN_CONTEXT (self), NULL);
return gdk_draw_context_get_display (GDK_DRAW_CONTEXT (self))->vk_pipeline_cache;
}
/**
* gdk_vulkan_context_get_image_format:
* @context: a `GdkVulkanContext`
@@ -1268,6 +1340,7 @@ gdk_display_create_vulkan_device (GdkDisplay *display,
const char *override;
gboolean list_devices;
int first, last;
GdkVulkanFeatures skip_features;
uint32_t n_devices = 0;
GDK_VK_CHECK(vkEnumeratePhysicalDevices, display->vk_instance, &n_devices, NULL);
@@ -1287,6 +1360,10 @@ gdk_display_create_vulkan_device (GdkDisplay *display,
first = 0;
last = n_devices;
skip_features = gdk_parse_debug_var ("GDK_VULKAN_SKIP",
gsk_vulkan_feature_keys,
G_N_ELEMENTS (gsk_vulkan_feature_keys));
override = g_getenv ("GDK_VULKAN_DEVICE");
list_devices = FALSE;
if (override)
@@ -1368,11 +1445,14 @@ gdk_display_create_vulkan_device (GdkDisplay *display,
for (i = first; i < last; i++)
{
GdkVulkanFeatures features, device_features;
uint32_t n_queue_props;
if (!physical_device_supports_extension (devices[i], VK_EXT_DESCRIPTOR_INDEXING_EXTENSION_NAME))
if (!physical_device_check_features (devices[i], &device_features))
continue;
features = device_features & ~skip_features;
vkGetPhysicalDeviceQueueFamilyProperties (devices[i], &n_queue_props, NULL);
VkQueueFamilyProperties *queue_props = g_newa (VkQueueFamilyProperties, n_queue_props);
vkGetPhysicalDeviceQueueFamilyProperties (devices[i], &n_queue_props, queue_props);
@@ -1381,18 +1461,22 @@ gdk_display_create_vulkan_device (GdkDisplay *display,
if (queue_props[j].queueFlags & VK_QUEUE_GRAPHICS_BIT)
{
GPtrArray *device_extensions;
gboolean has_incremental_present;
has_incremental_present = physical_device_supports_extension (devices[i],
VK_KHR_INCREMENTAL_PRESENT_EXTENSION_NAME);
device_extensions = g_ptr_array_new ();
g_ptr_array_add (device_extensions, (gpointer) VK_KHR_SWAPCHAIN_EXTENSION_NAME);
g_ptr_array_add (device_extensions, (gpointer) VK_KHR_MAINTENANCE_3_EXTENSION_NAME);
g_ptr_array_add (device_extensions, (gpointer) VK_EXT_DESCRIPTOR_INDEXING_EXTENSION_NAME);
if (has_incremental_present)
if (features & GDK_VULKAN_FEATURE_DESCRIPTOR_INDEXING)
g_ptr_array_add (device_extensions, (gpointer) VK_EXT_DESCRIPTOR_INDEXING_EXTENSION_NAME);
if (features & GDK_VULKAN_FEATURE_DMABUF)
{
g_ptr_array_add (device_extensions, (gpointer) VK_KHR_EXTERNAL_MEMORY_FD_EXTENSION_NAME);
g_ptr_array_add (device_extensions, (gpointer) VK_EXT_IMAGE_DRM_FORMAT_MODIFIER_EXTENSION_NAME);
}
if (features & (GDK_VULKAN_FEATURE_SEMAPHORE_IMPORT | GDK_VULKAN_FEATURE_SEMAPHORE_EXPORT))
g_ptr_array_add (device_extensions, (gpointer) VK_KHR_EXTERNAL_SEMAPHORE_FD_EXTENSION_NAME);
if (features & GDK_VULKAN_FEATURE_INCREMENTAL_PRESENT)
g_ptr_array_add (device_extensions, (gpointer) VK_KHR_INCREMENTAL_PRESENT_EXTENSION_NAME);
#define ENABLE_IF(flag) ((features & (flag)) ? VK_TRUE : VK_FALSE)
GDK_DISPLAY_DEBUG (display, VULKAN, "Using Vulkan device %u, queue %u", i, j);
if (GDK_VK_CHECK (vkCreateDevice, devices[i],
&(VkDeviceCreateInfo) {
@@ -1406,15 +1490,19 @@ gdk_display_create_vulkan_device (GdkDisplay *display,
},
.enabledExtensionCount = device_extensions->len,
.ppEnabledExtensionNames = (const char * const *) device_extensions->pdata,
.pNext = &(VkPhysicalDeviceVulkan12Features) {
.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VULKAN_1_2_FEATURES,
.shaderSampledImageArrayNonUniformIndexing = VK_TRUE,
.shaderStorageBufferArrayNonUniformIndexing = VK_TRUE,
.descriptorIndexing = VK_TRUE,
.descriptorBindingPartiallyBound = VK_TRUE,
.descriptorBindingVariableDescriptorCount = VK_TRUE,
.descriptorBindingSampledImageUpdateAfterBind = VK_TRUE,
.descriptorBindingStorageBufferUpdateAfterBind = VK_TRUE,
.pNext = &(VkPhysicalDeviceVulkan11Features) {
.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VULKAN_1_1_FEATURES,
.samplerYcbcrConversion = ENABLE_IF (GDK_VULKAN_FEATURE_YCBCR),
.pNext = &(VkPhysicalDeviceVulkan12Features) {
.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VULKAN_1_2_FEATURES,
.shaderSampledImageArrayNonUniformIndexing = ENABLE_IF (GDK_VULKAN_FEATURE_NONUNIFORM_INDEXING),
.shaderStorageBufferArrayNonUniformIndexing = ENABLE_IF (GDK_VULKAN_FEATURE_NONUNIFORM_INDEXING),
.descriptorIndexing = ENABLE_IF (GDK_VULKAN_FEATURE_DESCRIPTOR_INDEXING),
.descriptorBindingPartiallyBound = ENABLE_IF (GDK_VULKAN_FEATURE_DESCRIPTOR_INDEXING),
.descriptorBindingVariableDescriptorCount = ENABLE_IF (GDK_VULKAN_FEATURE_DESCRIPTOR_INDEXING),
.descriptorBindingSampledImageUpdateAfterBind = ENABLE_IF (GDK_VULKAN_FEATURE_DESCRIPTOR_INDEXING),
.descriptorBindingStorageBufferUpdateAfterBind = ENABLE_IF (GDK_VULKAN_FEATURE_DESCRIPTOR_INDEXING),
}
}
},
NULL,
@@ -1423,12 +1511,26 @@ gdk_display_create_vulkan_device (GdkDisplay *display,
g_ptr_array_unref (device_extensions);
continue;
}
#undef ENABLE_IF
g_ptr_array_unref (device_extensions);
display->vk_physical_device = devices[i];
vkGetDeviceQueue(display->vk_device, j, 0, &display->vk_queue);
display->vk_queue_family_index = j;
display->vulkan_features = features;
GDK_DISPLAY_DEBUG (display, VULKAN, "Enabled features (use GDK_VULKAN_SKIP env var to disable):");
for (i = 0; i < G_N_ELEMENTS (gsk_vulkan_feature_keys); i++)
{
GDK_DISPLAY_DEBUG (display, VULKAN, " %s: %s",
gsk_vulkan_feature_keys[i].key,
(features & gsk_vulkan_feature_keys[i].value) ? "YES" :
((skip_features & gsk_vulkan_feature_keys[i].value) ? "disabled via env var" :
(((device_features & gsk_vulkan_feature_keys[i].value) == 0) ? "not supported" :
"Hum, what? This should not happen.")));
}
return TRUE;
}
}
@@ -1625,9 +1727,23 @@ gdk_display_create_vulkan_instance (GdkDisplay *display,
return TRUE;
}
/*
* gdk_display_init_vulkan:
* @display: a display
* @error: return location for an error
*
* Initializes Vulkan, setting @error and returning %FALSE on failure.
*
* If Vulkan is already initialized, this function returns
* %TRUE and increases the refcount of the existing instance.
*
* You need to call gdk_display_unref_vulkan() to close it again.
*
* Returns: %TRUE if Vulkan is initialized.
**/
gboolean
gdk_display_ref_vulkan (GdkDisplay *display,
GError **error)
gdk_display_init_vulkan (GdkDisplay *display,
GError **error)
{
if (display->vulkan_refcount == 0)
{
@@ -1640,6 +1756,23 @@ gdk_display_ref_vulkan (GdkDisplay *display,
return TRUE;
}
/*
* gdk_display_ref_vulkan:
* @display: a GdkDisplay
*
* Increases the refcount of an existing Vulkan instance.
*
* This function must not be called unless Vulkan has already been
* initialized; if that cannot be guaranteed, call
* gdk_display_init_vulkan() instead.
**/
void
gdk_display_ref_vulkan (GdkDisplay *display)
{
g_assert (display->vulkan_refcount > 0);
display->vulkan_refcount++;
}
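A sketch of how the init/ref/unref calls are meant to pair up, relying only
on the behavior documented in the comments above (illustrative, not existing
GDK code):
static gboolean
with_vulkan (GdkDisplay  *display,
             GError     **error)
{
  /* First user: initializes, or refs an already-existing instance */
  if (!gdk_display_init_vulkan (display, error))
    return FALSE;
  /* Extra refs are only valid once Vulkan is known to be initialized */
  gdk_display_ref_vulkan (display);
  /* ... use the Vulkan instance ... */
  gdk_display_unref_vulkan (display);  /* matches the ref */
  gdk_display_unref_vulkan (display);  /* matches the init; the last unref tears down */
  return TRUE;
}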
void
gdk_display_unref_vulkan (GdkDisplay *display)
{
@@ -1653,6 +1786,9 @@ gdk_display_unref_vulkan (GdkDisplay *display)
if (display->vulkan_refcount > 0)
return;
GDK_DEBUG (VULKAN, "Closing Vulkan instance");
display->vulkan_features = 0;
g_clear_pointer (&display->vk_dmabuf_formats, gdk_dmabuf_formats_unref);
g_hash_table_iter_init (&iter, display->vk_shader_modules);
while (g_hash_table_iter_next (&iter, &key, &value))
{
@@ -1690,6 +1826,95 @@ gdk_display_unref_vulkan (GdkDisplay *display)
display->vk_instance = VK_NULL_HANDLE;
}
/* Hack. We don't include gsk/gsk.h here to avoid a build order problem
* with the generated header gskenumtypes.h, so we need to hack around
* a bit to access the gsk api we need.
*/
typedef struct _GskRenderer GskRenderer;
extern GskRenderer * gsk_vulkan_renderer_new (void);
extern gboolean gsk_renderer_realize (GskRenderer *renderer,
GdkSurface *surface,
GError **error);
GdkDmabufDownloader *
gdk_vulkan_get_dmabuf_downloader (GdkDisplay *display,
GdkDmabufFormatsBuilder *builder)
{
GdkDmabufFormatsBuilder *vulkan_builder;
GskRenderer *renderer;
VkDrmFormatModifierPropertiesEXT modifier_list[100];
VkDrmFormatModifierPropertiesListEXT modifier_props = {
.sType = VK_STRUCTURE_TYPE_DRM_FORMAT_MODIFIER_PROPERTIES_LIST_EXT,
.pNext = NULL,
.pDrmFormatModifierProperties = modifier_list,
};
VkFormatProperties2 props = {
.sType = VK_STRUCTURE_TYPE_FORMAT_PROPERTIES_2,
.pNext = &modifier_props,
};
VkFormat vk_format;
guint32 fourcc;
GError *error = NULL;
gsize i, j;
g_assert (display->vk_dmabuf_formats == NULL);
if (!gdk_display_init_vulkan (display, NULL))
return NULL;
if ((display->vulkan_features & GDK_VULKAN_FEATURE_DMABUF) == 0)
return NULL;
vulkan_builder = gdk_dmabuf_formats_builder_new ();
for (i = 0; gdk_dmabuf_vk_get_nth (i, &fourcc, &vk_format); i++)
{
if (vk_format == VK_FORMAT_UNDEFINED)
continue;
modifier_props.drmFormatModifierCount = G_N_ELEMENTS (modifier_list);
vkGetPhysicalDeviceFormatProperties2 (display->vk_physical_device,
vk_format,
&props);
g_warn_if_fail (modifier_props.drmFormatModifierCount < G_N_ELEMENTS (modifier_list));
for (j = 0; j < modifier_props.drmFormatModifierCount; j++)
{
GDK_DISPLAY_DEBUG (display, DMABUF,
"Vulkan supports dmabuf format %.4s::%016llx with %u planes and features 0x%x",
(char *) &fourcc,
(long long unsigned) modifier_list[j].drmFormatModifier,
modifier_list[j].drmFormatModifierPlaneCount,
modifier_list[j].drmFormatModifierTilingFeatures);
if (modifier_list[j].drmFormatModifier == DRM_FORMAT_MOD_LINEAR)
continue;
gdk_dmabuf_formats_builder_add_format (vulkan_builder,
fourcc,
modifier_list[j].drmFormatModifier);
}
}
display->vk_dmabuf_formats = gdk_dmabuf_formats_builder_free_to_formats (vulkan_builder);
gdk_dmabuf_formats_builder_add_formats (builder, display->vk_dmabuf_formats);
renderer = gsk_vulkan_renderer_new ();
if (!gsk_renderer_realize (renderer, NULL, &error))
{
g_warning ("Failed to realize GL renderer: %s", error->message);
g_error_free (error);
g_object_unref (renderer);
return NULL;
}
return GDK_DMABUF_DOWNLOADER (renderer);
}
VkShaderModule
gdk_display_get_vk_shader_module (GdkDisplay *self,
const char *resource_name)


@@ -23,6 +23,8 @@
#include "gdkvulkancontext.h"
#include "gdkdebugprivate.h"
#include "gdkdmabufprivate.h"
#include "gdkdmabufdownloaderprivate.h"
#include "gdkdrawcontextprivate.h"
#include "gdkenums.h"
@@ -69,15 +71,18 @@ gdk_vulkan_handle_result (VkResult res,
#define GDK_VK_CHECK(func, ...) gdk_vulkan_handle_result (func (__VA_ARGS__), G_STRINGIFY (func))
gboolean gdk_display_ref_vulkan (GdkDisplay *display,
gboolean gdk_display_init_vulkan (GdkDisplay *display,
GError **error);
void gdk_display_ref_vulkan (GdkDisplay *display);
void gdk_display_unref_vulkan (GdkDisplay *display);
GdkDmabufDownloader * gdk_vulkan_get_dmabuf_downloader (GdkDisplay *display,
GdkDmabufFormatsBuilder *builder);
VkShaderModule gdk_display_get_vk_shader_module (GdkDisplay *display,
const char *resource_name);
VkPipelineCache gdk_vulkan_context_get_pipeline_cache (GdkVulkanContext *self);
void gdk_vulkan_context_pipeline_cache_updated (GdkVulkanContext *self);
void gdk_display_vulkan_pipeline_cache_updated (GdkDisplay *display);
GdkMemoryFormat gdk_vulkan_context_get_offscreen_format (GdkVulkanContext *context,
GdkMemoryDepth depth);


@@ -419,6 +419,9 @@ gdk_save_png (GdkTexture *texture)
if (!png)
return NULL;
/* 2^31-1 is the maximum size for PNG files */
png_set_user_limits (png, (1u << 31) - 1, (1u << 31) - 1);
info = png_create_info_struct (png);
if (!info)
{


@@ -23,20 +23,24 @@ def replace_if_changed(new, old):
gl_source_shaders = []
ngl_source_shaders = []
vulkan_compiled_shaders = []
gpu_vulkan_compiled_shaders = []
vulkan_shaders = []
for f in sys.argv[2:]:
if f.endswith('.glsl'):
if f.startswith('ngl'):
ngl_source_shaders.append(f);
if f.find('gsk/gpu') > -1:
ngl_source_shaders.append(f)
else:
gl_source_shaders.append(f)
elif f.endswith('.spv'):
vulkan_compiled_shaders.append(f)
if f.find('gsk/gpu') > -1:
gpu_vulkan_compiled_shaders.append(f)
else:
vulkan_compiled_shaders.append(f)
elif f.endswith('.frag') or f.endswith('.vert'):
vulkan_shaders.append(f)
else:
sys.exit(-1) # FIXME: error message
raise Exception(f"No idea what XML to generate for {f}")
xml = '''<?xml version='1.0' encoding='UTF-8'?>
<gresources>
@@ -50,7 +54,7 @@ for f in gl_source_shaders:
xml += '\n'
for f in ngl_source_shaders:
xml += ' <file alias=\'ngl/{0}\'>ngl/resources/{0}</file>\n'.format(os.path.basename(f))
xml += ' <file alias=\'shaders/gl/{0}\'>gpu/shaders/{0}</file>\n'.format(os.path.basename(f))
xml += '\n'
@@ -59,6 +63,11 @@ for f in vulkan_compiled_shaders:
xml += '\n'
for f in gpu_vulkan_compiled_shaders:
xml += ' <file alias=\'shaders/vulkan/{0}\'>gpu/shaders/{0}</file>\n'.format(os.path.basename(f))
xml += '\n'
for f in vulkan_shaders:
xml += ' <file alias=\'vulkan/{0}\'>vulkan/resources/{0}</file>\n'.format(os.path.basename(f))


@@ -1435,6 +1435,8 @@ gsk_gl_command_queue_create_texture (GskGLCommandQueue *self,
switch (format)
{
case 0:
break;
case GL_RGBA8:
glTexImage2D (GL_TEXTURE_2D, 0, format, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
break;
@@ -1689,7 +1691,7 @@ gsk_gl_command_queue_upload_texture_chunks (GskGLCommandQueue *self,
height = MIN (height, self->max_texture_size);
}
texture_id = gsk_gl_command_queue_create_texture (self, width, height, GL_RGBA8);
texture_id = gsk_gl_command_queue_create_texture (self, width, height, 0);
if (texture_id == -1)
return texture_id;


@@ -537,51 +537,3 @@ gsk_gl_renderer_try_compile_gl_shader (GskGLRenderer *renderer,
return program != NULL;
}
typedef struct {
GskRenderer parent_instance;
} GskNglRenderer;
typedef struct {
GskRendererClass parent_class;
} GskNglRendererClass;
G_DEFINE_TYPE (GskNglRenderer, gsk_ngl_renderer, GSK_TYPE_RENDERER)
static void
gsk_ngl_renderer_init (GskNglRenderer *renderer)
{
}
static gboolean
gsk_ngl_renderer_realize (GskRenderer *renderer,
GdkSurface *surface,
GError **error)
{
g_set_error_literal (error,
G_IO_ERROR, G_IO_ERROR_FAILED,
"please use the GL renderer instead");
return FALSE;
}
static void
gsk_ngl_renderer_class_init (GskNglRendererClass *class)
{
GSK_RENDERER_CLASS (class)->realize = gsk_ngl_renderer_realize;
}
/**
* gsk_ngl_renderer_new:
*
* Same as gsk_gl_renderer_new().
*
* Returns: (transfer full): a new GL renderer
*
* Deprecated: 4.4: Use gsk_gl_renderer_new()
*/
GskRenderer *
gsk_ngl_renderer_new (void)
{
G_GNUC_BEGIN_IGNORE_DEPRECATIONS
return g_object_new (gsk_ngl_renderer_get_type (), NULL);
G_GNUC_END_IGNORE_DEPRECATIONS
}


@@ -40,9 +40,9 @@ GType gsk_gl_renderer_get_type (void) G_GNUC_CONST;
GDK_AVAILABLE_IN_4_2
GskRenderer *gsk_gl_renderer_new (void);
GDK_DEPRECATED_IN_4_4_FOR (gsk_gl_renderer_get_type)
GDK_AVAILABLE_IN_ALL
GType gsk_ngl_renderer_get_type (void) G_GNUC_CONST;
GDK_DEPRECATED_IN_4_4_FOR (gsk_gl_renderer_new)
GDK_AVAILABLE_IN_ALL
GskRenderer *gsk_ngl_renderer_new (void);
G_END_DECLS

gsk/gpu/gskglbuffer.c (new file, 97 lines)

@@ -0,0 +1,97 @@
#include "config.h"
#include "gskglbufferprivate.h"
struct _GskGLBuffer
{
GskGpuBuffer parent_instance;
GLenum target;
GLuint buffer_id;
GLenum access;
guchar *data;
};
G_DEFINE_TYPE (GskGLBuffer, gsk_gl_buffer, GSK_TYPE_GPU_BUFFER)
static void
gsk_gl_buffer_finalize (GObject *object)
{
GskGLBuffer *self = GSK_GL_BUFFER (object);
g_free (self->data);
glDeleteBuffers (1, &self->buffer_id);
G_OBJECT_CLASS (gsk_gl_buffer_parent_class)->finalize (object);
}
static guchar *
gsk_gl_buffer_map (GskGpuBuffer *buffer)
{
GskGLBuffer *self = GSK_GL_BUFFER (buffer);
return self->data;
}
static void
gsk_gl_buffer_unmap (GskGpuBuffer *buffer)
{
GskGLBuffer *self = GSK_GL_BUFFER (buffer);
gsk_gl_buffer_bind (self);
glBufferSubData (self->target, 0, gsk_gpu_buffer_get_size (buffer), self->data);
}
static void
gsk_gl_buffer_class_init (GskGLBufferClass *klass)
{
GskGpuBufferClass *buffer_class = GSK_GPU_BUFFER_CLASS (klass);
GObjectClass *gobject_class = G_OBJECT_CLASS (klass);
buffer_class->map = gsk_gl_buffer_map;
buffer_class->unmap = gsk_gl_buffer_unmap;
gobject_class->finalize = gsk_gl_buffer_finalize;
}
static void
gsk_gl_buffer_init (GskGLBuffer *self)
{
}
GskGpuBuffer *
gsk_gl_buffer_new (GLenum target,
gsize size,
GLenum access)
{
GskGLBuffer *self;
self = g_object_new (GSK_TYPE_GL_BUFFER, NULL);
gsk_gpu_buffer_setup (GSK_GPU_BUFFER (self), size);
self->target = target;
self->access = access;
glGenBuffers (1, &self->buffer_id);
glBindBuffer (target, self->buffer_id);
glBufferData (target, size, NULL, GL_STATIC_DRAW);
self->data = g_malloc (size);
return GSK_GPU_BUFFER (self);
}
void
gsk_gl_buffer_bind (GskGLBuffer *self)
{
glBindBuffer (self->target, self->buffer_id);
}
void
gsk_gl_buffer_bind_base (GskGLBuffer *self,
GLuint index)
{
glBindBufferBase (self->target, index, self->buffer_id);
}
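Taken together, a caller streams data through the CPU-side copy roughly like
this (a sketch; gsk_gpu_buffer_map()/gsk_gpu_buffer_unmap() are assumed to be
the GskGpuBuffer wrappers for the map/unmap vfuncs implemented above):
GskGpuBuffer *buffer;
guchar *data;
guchar vertices[64] = { 0, };  /* placeholder vertex data */
buffer = gsk_gl_buffer_new (GL_ARRAY_BUFFER, sizeof (vertices), GL_WRITE_ONLY);
data = gsk_gpu_buffer_map (buffer);          /* returns the heap-allocated shadow copy */
memcpy (data, vertices, sizeof (vertices));  /* write on the CPU side */
gsk_gpu_buffer_unmap (buffer);               /* one glBufferSubData() upload */
gsk_gl_buffer_bind (GSK_GL_BUFFER (buffer));
/* ... set up attributes and draw ... */
g_object_unref (buffer);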


@@ -0,0 +1,22 @@
#pragma once
#include "gskgpubufferprivate.h"
#include "gskgldeviceprivate.h"
G_BEGIN_DECLS
#define GSK_TYPE_GL_BUFFER (gsk_gl_buffer_get_type ())
G_DECLARE_FINAL_TYPE (GskGLBuffer, gsk_gl_buffer, GSK, GL_BUFFER, GskGpuBuffer)
GskGpuBuffer * gsk_gl_buffer_new (GLenum target,
gsize size,
GLenum access);
void gsk_gl_buffer_bind (GskGLBuffer *self);
void gsk_gl_buffer_bind_base (GskGLBuffer *self,
GLuint index);
G_END_DECLS

gsk/gpu/gskgldescriptors.c (new file, 144 lines)

@@ -0,0 +1,144 @@
#include "config.h"
#include "gskgldescriptorsprivate.h"
#include "gskglbufferprivate.h"
#include "gskglimageprivate.h"
struct _GskGLDescriptors
{
GskGpuDescriptors parent_instance;
GskGLDevice *device;
guint n_external;
};
G_DEFINE_TYPE (GskGLDescriptors, gsk_gl_descriptors, GSK_TYPE_GPU_DESCRIPTORS)
static void
gsk_gl_descriptors_finalize (GObject *object)
{
GskGLDescriptors *self = GSK_GL_DESCRIPTORS (object);
g_object_unref (self->device);
G_OBJECT_CLASS (gsk_gl_descriptors_parent_class)->finalize (object);
}
static gboolean
gsk_gl_descriptors_add_image (GskGpuDescriptors *desc,
GskGpuImage *image,
GskGpuSampler sampler,
guint32 *out_descriptor)
{
GskGLDescriptors *self = GSK_GL_DESCRIPTORS (desc);
gsize used_texture_units;
used_texture_units = gsk_gpu_descriptors_get_n_images (desc) + 2 * self->n_external;
if (gsk_gpu_image_get_flags (image) & GSK_GPU_IMAGE_EXTERNAL)
{
if (16 - used_texture_units < 3)
return FALSE;
*out_descriptor = (self->n_external << 1) | 1;
self->n_external++;
return TRUE;
}
else
{
if (used_texture_units >= 16)
return FALSE;
*out_descriptor = (gsk_gpu_descriptors_get_n_images (desc) - self->n_external) << 1;
return TRUE;
}
}
static gboolean
gsk_gl_descriptors_add_buffer (GskGpuDescriptors *desc,
GskGpuBuffer *buffer,
guint32 *out_descriptor)
{
gsize used_buffers;
used_buffers = gsk_gpu_descriptors_get_n_buffers (desc);
if (used_buffers >= 11)
return FALSE;
*out_descriptor = used_buffers;
return TRUE;
}
static void
gsk_gl_descriptors_class_init (GskGLDescriptorsClass *klass)
{
GskGpuDescriptorsClass *descriptors_class = GSK_GPU_DESCRIPTORS_CLASS (klass);
GObjectClass *object_class = G_OBJECT_CLASS (klass);
object_class->finalize = gsk_gl_descriptors_finalize;
descriptors_class->add_image = gsk_gl_descriptors_add_image;
descriptors_class->add_buffer = gsk_gl_descriptors_add_buffer;
}
static void
gsk_gl_descriptors_init (GskGLDescriptors *self)
{
}
GskGpuDescriptors *
gsk_gl_descriptors_new (GskGLDevice *device)
{
GskGLDescriptors *self;
self = g_object_new (GSK_TYPE_GL_DESCRIPTORS, NULL);
self->device = g_object_ref (device);
return GSK_GPU_DESCRIPTORS (self);
}
guint
gsk_gl_descriptors_get_n_external (GskGLDescriptors *self)
{
return self->n_external;
}
void
gsk_gl_descriptors_use (GskGLDescriptors *self)
{
GskGpuDescriptors *desc = GSK_GPU_DESCRIPTORS (self);
gsize i, ext, n_textures;
n_textures = 16 - 3 * self->n_external;
ext = 0;
for (i = 0; i < gsk_gpu_descriptors_get_n_images (desc); i++)
{
GskGLImage *image = GSK_GL_IMAGE (gsk_gpu_descriptors_get_image (desc, i));
if (gsk_gpu_image_get_flags (GSK_GPU_IMAGE (image)) & GSK_GPU_IMAGE_EXTERNAL)
{
glActiveTexture (GL_TEXTURE0 + n_textures + 3 * ext);
gsk_gl_image_bind_texture (image);
ext++;
}
else
{
glActiveTexture (GL_TEXTURE0 + i - ext);
gsk_gl_image_bind_texture (image);
glBindSampler (i - ext, gsk_gl_device_get_sampler_id (self->device, gsk_gpu_descriptors_get_sampler (desc, i)));
}
}
for (i = 0; i < gsk_gpu_descriptors_get_n_buffers (desc); i++)
{
GskGLBuffer *buffer = GSK_GL_BUFFER (gsk_gpu_descriptors_get_buffer (desc, i));
/* index 0 are the globals, we start at 1 */
gsk_gl_buffer_bind_base (buffer, i + 1);
}
}
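The slot arithmetic above splits GL's 16 guaranteed texture units between
regular and external images, budgeting three units per external image
(presumably to leave room for multi-plane sampling of dmabuf imports), and
records the kind of each descriptor in its low bit. A worked example with
hypothetical numbers:
/* n_external = 2, so n_textures = 16 - 3 * 2 = 10:
 *
 *   i-th regular image  -> texture unit i         (units 0..9)
 *   external image 0    -> texture unit 10 + 3*0  (unit 10)
 *   external image 1    -> texture unit 10 + 3*1  (unit 13)
 *
 * Descriptor values encode the kind in the low bit:
 *   regular:  slot << 1        -> 0, 2, 4, ...
 *   external: (slot << 1) | 1  -> 1, 3, 5, ...
 */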


@@ -0,0 +1,19 @@
#pragma once
#include "gskgpudescriptorsprivate.h"
#include "gskgldeviceprivate.h"
G_BEGIN_DECLS
#define GSK_TYPE_GL_DESCRIPTORS (gsk_gl_descriptors_get_type ())
G_DECLARE_FINAL_TYPE (GskGLDescriptors, gsk_gl_descriptors, GSK, GL_DESCRIPTORS, GskGpuDescriptors)
GskGpuDescriptors * gsk_gl_descriptors_new (GskGLDevice *device);
guint gsk_gl_descriptors_get_n_external (GskGLDescriptors *self);
void gsk_gl_descriptors_use (GskGLDescriptors *self);
G_END_DECLS

gsk/gpu/gskgldevice.c (new file, 684 lines)

@@ -0,0 +1,684 @@
#include "config.h"
#include "gskgldeviceprivate.h"
#include "gskdebugprivate.h"
#include "gskgpushaderopprivate.h"
#include "gskglbufferprivate.h"
#include "gskglimageprivate.h"
#include "gdk/gdkdisplayprivate.h"
#include "gdk/gdkglcontextprivate.h"
#include <glib/gi18n-lib.h>
struct _GskGLDevice
{
GskGpuDevice parent_instance;
GHashTable *gl_programs;
const char *version_string;
GdkGLAPI api;
guint sampler_ids[GSK_GPU_SAMPLER_N_SAMPLERS];
};
struct _GskGLDeviceClass
{
GskGpuDeviceClass parent_class;
};
typedef struct _GLProgramKey GLProgramKey;
struct _GLProgramKey
{
const GskGpuShaderOpClass *op_class;
guint32 variation;
GskGpuShaderClip clip;
guint n_external_textures;
};
G_DEFINE_TYPE (GskGLDevice, gsk_gl_device, GSK_TYPE_GPU_DEVICE)
static guint
gl_program_key_hash (gconstpointer data)
{
const GLProgramKey *key = data;
return GPOINTER_TO_UINT (key->op_class) ^
key->clip ^
(key->variation << 2) ^
(key->n_external_textures << 24);
}
static gboolean
gl_program_key_equal (gconstpointer a,
gconstpointer b)
{
const GLProgramKey *keya = a;
const GLProgramKey *keyb = b;
return keya->op_class == keyb->op_class &&
keya->variation == keyb->variation &&
keya->clip == keyb->clip &&
keya->n_external_textures == keyb->n_external_textures;
}
static GskGpuImage *
gsk_gl_device_create_offscreen_image (GskGpuDevice *device,
gboolean with_mipmap,
GdkMemoryDepth depth,
gsize width,
gsize height)
{
GskGLDevice *self = GSK_GL_DEVICE (device);
return gsk_gl_image_new (self,
gdk_memory_depth_get_format (depth),
GSK_GPU_IMAGE_RENDERABLE | GSK_GPU_IMAGE_FILTERABLE,
width,
height);
}
static GskGpuImage *
gsk_gl_device_create_upload_image (GskGpuDevice *device,
gboolean with_mipmap,
GdkMemoryFormat format,
gsize width,
gsize height)
{
GskGLDevice *self = GSK_GL_DEVICE (device);
return gsk_gl_image_new (self,
format,
0,
width,
height);
}
static GskGpuImage *
gsk_gl_device_create_download_image (GskGpuDevice *device,
GdkMemoryDepth depth,
gsize width,
gsize height)
{
GskGLDevice *self = GSK_GL_DEVICE (device);
return gsk_gl_image_new (self,
gdk_memory_depth_get_format (depth),
GSK_GPU_IMAGE_RENDERABLE,
width,
height);
}
static GskGpuImage *
gsk_gl_device_create_atlas_image (GskGpuDevice *device,
gsize width,
gsize height)
{
GskGLDevice *self = GSK_GL_DEVICE (device);
return gsk_gl_image_new (self,
GDK_MEMORY_R8G8B8A8_PREMULTIPLIED,
GSK_GPU_IMAGE_RENDERABLE,
width,
height);
}
static void
gsk_gl_device_finalize (GObject *object)
{
GskGLDevice *self = GSK_GL_DEVICE (object);
GskGpuDevice *device = GSK_GPU_DEVICE (self);
g_object_steal_data (G_OBJECT (gsk_gpu_device_get_display (device)), "-gsk-gl-device");
gdk_gl_context_make_current (gdk_display_get_gl_context (gsk_gpu_device_get_display (device)));
g_hash_table_unref (self->gl_programs);
glDeleteSamplers (G_N_ELEMENTS (self->sampler_ids), self->sampler_ids);
G_OBJECT_CLASS (gsk_gl_device_parent_class)->finalize (object);
}
static void
gsk_gl_device_class_init (GskGLDeviceClass *klass)
{
GskGpuDeviceClass *gpu_device_class = GSK_GPU_DEVICE_CLASS (klass);
GObjectClass *object_class = G_OBJECT_CLASS (klass);
gpu_device_class->create_offscreen_image = gsk_gl_device_create_offscreen_image;
gpu_device_class->create_atlas_image = gsk_gl_device_create_atlas_image;
gpu_device_class->create_upload_image = gsk_gl_device_create_upload_image;
gpu_device_class->create_download_image = gsk_gl_device_create_download_image;
object_class->finalize = gsk_gl_device_finalize;
}
static void
free_gl_program (gpointer program)
{
glDeleteProgram (GPOINTER_TO_UINT (program));
}
static void
gsk_gl_device_init (GskGLDevice *self)
{
self->gl_programs = g_hash_table_new_full (gl_program_key_hash, gl_program_key_equal, g_free, free_gl_program);
}
static void
gsk_gl_device_setup_samplers (GskGLDevice *self)
{
struct {
GLuint min_filter;
GLuint mag_filter;
GLuint wrap;
} sampler_flags[GSK_GPU_SAMPLER_N_SAMPLERS] = {
[GSK_GPU_SAMPLER_DEFAULT] = {
.min_filter = GL_LINEAR,
.mag_filter = GL_LINEAR,
.wrap = GL_CLAMP_TO_EDGE,
},
[GSK_GPU_SAMPLER_TRANSPARENT] = {
.min_filter = GL_LINEAR,
.mag_filter = GL_LINEAR,
.wrap = GL_CLAMP_TO_BORDER,
},
[GSK_GPU_SAMPLER_REPEAT] = {
.min_filter = GL_LINEAR,
.mag_filter = GL_LINEAR,
.wrap = GL_REPEAT,
},
[GSK_GPU_SAMPLER_NEAREST] = {
.min_filter = GL_NEAREST,
.mag_filter = GL_NEAREST,
.wrap = GL_CLAMP_TO_EDGE,
},
[GSK_GPU_SAMPLER_MIPMAP_DEFAULT] = {
.min_filter = GL_LINEAR_MIPMAP_LINEAR,
.mag_filter = GL_LINEAR,
.wrap = GL_CLAMP_TO_EDGE,
}
};
guint i;
glGenSamplers (G_N_ELEMENTS (self->sampler_ids), self->sampler_ids);
for (i = 0; i < G_N_ELEMENTS (self->sampler_ids); i++)
{
glSamplerParameteri (self->sampler_ids[i], GL_TEXTURE_MIN_FILTER, sampler_flags[i].min_filter);
glSamplerParameteri (self->sampler_ids[i], GL_TEXTURE_MAG_FILTER, sampler_flags[i].mag_filter);
glSamplerParameteri (self->sampler_ids[i], GL_TEXTURE_WRAP_S, sampler_flags[i].wrap);
glSamplerParameteri (self->sampler_ids[i], GL_TEXTURE_WRAP_T, sampler_flags[i].wrap);
}
}
GskGpuDevice *
gsk_gl_device_get_for_display (GdkDisplay *display,
GError **error)
{
GskGLDevice *self;
GdkGLContext *context;
GLint max_texture_size;
self = g_object_get_data (G_OBJECT (display), "-gsk-gl-device");
if (self)
return GSK_GPU_DEVICE (g_object_ref (self));
if (!gdk_display_prepare_gl (display, error))
return NULL;
context = gdk_display_get_gl_context (display);
/* GLES 2 is not supported */
if (!gdk_gl_context_check_version (context, "3.0", "3.0"))
{
g_set_error (error, GDK_GL_ERROR, GDK_GL_ERROR_NOT_AVAILABLE,
_("OpenGL ES 2.0 is not supported by this renderer."));
return NULL;
}
self = g_object_new (GSK_TYPE_GL_DEVICE, NULL);
gdk_gl_context_make_current (context);
glGetIntegerv (GL_MAX_TEXTURE_SIZE, &max_texture_size);
gsk_gpu_device_setup (GSK_GPU_DEVICE (self), display, max_texture_size);
self->version_string = gdk_gl_context_get_glsl_version_string (context);
self->api = gdk_gl_context_get_api (context);
gsk_gl_device_setup_samplers (self);
g_object_set_data (G_OBJECT (display), "-gsk-gl-device", self);
return GSK_GPU_DEVICE (self);
}
static char *
prepend_line_numbers (char *code)
{
GString *s;
char *p;
int line;
s = g_string_new ("");
p = code;
line = 1;
while (*p)
{
char *end = strchr (p, '\n');
if (end)
end = end + 1; /* Include newline */
else
end = p + strlen (p);
g_string_append_printf (s, "%3d| ", line++);
g_string_append_len (s, p, end - p);
p = end;
}
g_free (code);
return g_string_free (s, FALSE);
}
static gboolean
gsk_gl_device_check_shader_error (int shader_id,
GError **error)
{
GLint status;
GLint log_len;
GLint code_len;
char *log;
char *code;
glGetShaderiv (shader_id, GL_COMPILE_STATUS, &status);
if G_LIKELY (status == GL_TRUE)
return TRUE;
glGetShaderiv (shader_id, GL_INFO_LOG_LENGTH, &log_len);
log = g_malloc0 (log_len + 1);
glGetShaderInfoLog (shader_id, log_len, NULL, log);
glGetShaderiv (shader_id, GL_SHADER_SOURCE_LENGTH, &code_len);
code = g_malloc0 (code_len + 1);
glGetShaderSource (shader_id, code_len, NULL, code);
code = prepend_line_numbers (code);
g_set_error (error,
GDK_GL_ERROR,
GDK_GL_ERROR_COMPILATION_FAILED,
"Compilation failure in shader.\n"
"Source Code:\n"
"%s\n"
"\n"
"Error Message:\n"
"%s\n"
"\n",
code,
log);
g_free (code);
g_free (log);
return FALSE;
}
static void
print_shader_info (const char *prefix,
GLuint shader_id,
const char *name)
{
if (GSK_DEBUG_CHECK (SHADERS))
{
int code_len;
glGetShaderiv (shader_id, GL_SHADER_SOURCE_LENGTH, &code_len);
if (code_len > 0)
{
char *code;
code = g_malloc0 (code_len + 1);
glGetShaderSource (shader_id, code_len, NULL, code);
code = prepend_line_numbers (code);
g_message ("%s %d, %s:\n%s",
prefix, shader_id,
name ? name : "unnamed",
code);
g_free (code);
}
}
}
static GLuint
gsk_gl_device_load_shader (GskGLDevice *self,
const char *program_name,
GLenum shader_type,
guint32 variation,
GskGpuShaderClip clip,
guint n_external_textures,
GError **error)
{
GString *preamble;
char *resource_name;
GBytes *bytes;
GLuint shader_id;
preamble = g_string_new (NULL);
g_string_append (preamble, self->version_string);
g_string_append (preamble, "\n");
if (self->api == GDK_GL_API_GLES)
{
if (n_external_textures > 0)
{
g_string_append (preamble, "#extension GL_OES_EGL_image_external_essl3 : require\n");
g_string_append (preamble, "#extension GL_OES_EGL_image_external : require\n");
}
g_string_append (preamble, "#define GSK_GLES 1\n");
g_assert (3 * n_external_textures <= 16);
}
else
{
g_assert (n_external_textures == 0);
}
g_string_append_printf (preamble, "#define N_TEXTURES %u\n", 16 - 3 * n_external_textures);
g_string_append_printf (preamble, "#define N_EXTERNAL_TEXTURES %u\n", n_external_textures);
switch (shader_type)
{
case GL_VERTEX_SHADER:
g_string_append (preamble, "#define GSK_VERTEX_SHADER 1\n");
break;
case GL_FRAGMENT_SHADER:
g_string_append (preamble, "#define GSK_FRAGMENT_SHADER 1\n");
break;
default:
g_assert_not_reached ();
return 0;
}
g_string_append_printf (preamble, "#define GSK_VARIATION %uu\n", variation);
switch (clip)
{
case GSK_GPU_SHADER_CLIP_NONE:
g_string_append (preamble, "#define GSK_SHADER_CLIP GSK_GPU_SHADER_CLIP_NONE\n");
break;
case GSK_GPU_SHADER_CLIP_RECT:
g_string_append (preamble, "#define GSK_SHADER_CLIP GSK_GPU_SHADER_CLIP_RECT\n");
break;
case GSK_GPU_SHADER_CLIP_ROUNDED:
g_string_append (preamble, "#define GSK_SHADER_CLIP GSK_GPU_SHADER_CLIP_ROUNDED\n");
break;
default:
g_assert_not_reached ();
break;
}
resource_name = g_strconcat ("/org/gtk/libgsk/shaders/gl/", program_name, ".glsl", NULL);
bytes = g_resources_lookup_data (resource_name, 0, error);
g_free (resource_name);
if (bytes == NULL)
return 0;
shader_id = glCreateShader (shader_type);
glShaderSource (shader_id,
2,
(const char *[]) {
preamble->str,
g_bytes_get_data (bytes, NULL),
},
NULL);
g_bytes_unref (bytes);
g_string_free (preamble, TRUE);
glCompileShader (shader_id);
print_shader_info (shader_type == GL_FRAGMENT_SHADER ? "fragment" : "vertex", shader_id, program_name);
if (!gsk_gl_device_check_shader_error (shader_id, error))
{
glDeleteShader (shader_id);
return 0;
}
return shader_id;
}
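For illustration, for a GLES fragment shader with one external texture,
variation 0 and no clip, the preamble assembled above comes out as follows
(assuming the context's version string is "#version 300 es"):
#version 300 es
#extension GL_OES_EGL_image_external_essl3 : require
#extension GL_OES_EGL_image_external : require
#define GSK_GLES 1
#define N_TEXTURES 13
#define N_EXTERNAL_TEXTURES 1
#define GSK_FRAGMENT_SHADER 1
#define GSK_VARIATION 0u
#define GSK_SHADER_CLIP GSK_GPU_SHADER_CLIP_NONE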
static GLuint
gsk_gl_device_load_program (GskGLDevice *self,
const GskGpuShaderOpClass *op_class,
guint32 variation,
GskGpuShaderClip clip,
guint n_external_textures,
GError **error)
{
GLuint vertex_shader_id, fragment_shader_id, program_id;
GLint link_status;
vertex_shader_id = gsk_gl_device_load_shader (self, op_class->shader_name, GL_VERTEX_SHADER, variation, clip, n_external_textures, error);
if (vertex_shader_id == 0)
return 0;
fragment_shader_id = gsk_gl_device_load_shader (self, op_class->shader_name, GL_FRAGMENT_SHADER, variation, clip, n_external_textures, error);
if (fragment_shader_id == 0)
return 0;
program_id = glCreateProgram ();
glAttachShader (program_id, vertex_shader_id);
glAttachShader (program_id, fragment_shader_id);
op_class->setup_attrib_locations (program_id);
glLinkProgram (program_id);
glGetProgramiv (program_id, GL_LINK_STATUS, &link_status);
glDetachShader (program_id, vertex_shader_id);
glDeleteShader (vertex_shader_id);
glDetachShader (program_id, fragment_shader_id);
glDeleteShader (fragment_shader_id);
if (link_status == GL_FALSE)
{
char *buffer = NULL;
int log_len = 0;
glGetProgramiv (program_id, GL_INFO_LOG_LENGTH, &log_len);
if (log_len > 0)
{
/* log_len includes the terminating NUL */
buffer = g_malloc0 (log_len);
glGetProgramInfoLog (program_id, log_len, NULL, buffer);
}
g_set_error (error,
GDK_GL_ERROR,
GDK_GL_ERROR_LINK_FAILED,
"Linking failure in shader: %s",
buffer ? buffer : "");
g_free (buffer);
glDeleteProgram (program_id);
return 0;
}
return program_id;
}
void
gsk_gl_device_use_program (GskGLDevice *self,
const GskGpuShaderOpClass *op_class,
guint32 variation,
GskGpuShaderClip clip,
guint n_external_textures)
{
GError *error = NULL;
GLuint program_id;
GLProgramKey key = {
.op_class = op_class,
.variation = variation,
.clip = clip,
.n_external_textures = n_external_textures
};
guint i, n_textures;
program_id = GPOINTER_TO_UINT (g_hash_table_lookup (self->gl_programs, &key));
if (program_id)
{
glUseProgram (program_id);
return;
}
program_id = gsk_gl_device_load_program (self, op_class, variation, clip, n_external_textures, &error);
if (program_id == 0)
{
g_critical ("Failed to load shader program: %s", error->message);
g_clear_error (&error);
return;
}
g_hash_table_insert (self->gl_programs, g_memdup2 (&key, sizeof (GLProgramKey)), GUINT_TO_POINTER (program_id));
glUseProgram (program_id);
n_textures = 16 - 3 * n_external_textures;
for (i = 0; i < n_external_textures; i++)
{
char *name = g_strdup_printf ("external_textures[%u]", i);
glUniform1i (glGetUniformLocation (program_id, name), n_textures + 3 * i);
g_free (name);
}
for (i = 0; i < n_textures; i++)
{
char *name = g_strdup_printf ("textures[%u]", i);
glUniform1i (glGetUniformLocation (program_id, name), i);
g_free (name);
}
}
GLuint
gsk_gl_device_get_sampler_id (GskGLDevice *self,
GskGpuSampler sampler)
{
g_return_val_if_fail (sampler < G_N_ELEMENTS (self->sampler_ids), 0);
return self->sampler_ids[sampler];
}
static gboolean
gsk_gl_device_get_format_flags (GskGLDevice *self,
GdkGLContext *context,
GdkMemoryFormat format,
GskGpuImageFlags *out_flags)
{
GdkGLMemoryFlags gl_flags;
*out_flags = 0;
gl_flags = gdk_gl_context_get_format_flags (context, format);
if (!(gl_flags & GDK_GL_FORMAT_USABLE))
return FALSE;
if (gl_flags & GDK_GL_FORMAT_RENDERABLE)
*out_flags |= GSK_GPU_IMAGE_RENDERABLE;
else if (gdk_gl_context_get_use_es (context))
*out_flags |= GSK_GPU_IMAGE_NO_BLIT;
if (gl_flags & GDK_GL_FORMAT_FILTERABLE)
*out_flags |= GSK_GPU_IMAGE_FILTERABLE;
if ((gl_flags & (GDK_GL_FORMAT_RENDERABLE | GDK_GL_FORMAT_FILTERABLE)) == (GDK_GL_FORMAT_RENDERABLE | GDK_GL_FORMAT_FILTERABLE))
*out_flags |= GSK_GPU_IMAGE_CAN_MIPMAP;
if (gdk_memory_format_alpha (format) == GDK_MEMORY_ALPHA_STRAIGHT)
*out_flags |= GSK_GPU_IMAGE_STRAIGHT_ALPHA;
return TRUE;
}
void
gsk_gl_device_find_gl_format (GskGLDevice *self,
GdkMemoryFormat format,
GskGpuImageFlags required_flags,
GdkMemoryFormat *out_format,
GskGpuImageFlags *out_flags,
GLint *out_gl_internal_format,
GLenum *out_gl_format,
GLenum *out_gl_type,
GLint out_swizzle[4])
{
GdkGLContext *context = gdk_gl_context_get_current ();
GskGpuImageFlags flags;
GdkMemoryFormat alt_format;
const GdkMemoryFormat *fallbacks;
gsize i;
/* First, try the actual format */
if (gsk_gl_device_get_format_flags (self, context, format, &flags) &&
((flags & required_flags) == required_flags))
{
*out_format = format;
*out_flags = flags;
gdk_memory_format_gl_format (format,
out_gl_internal_format,
out_gl_format,
out_gl_type,
out_swizzle);
return;
}
/* Second, try the potential RGBA format */
if (gdk_memory_format_gl_rgba_format (format,
&alt_format,
out_gl_internal_format,
out_gl_format,
out_gl_type,
out_swizzle) &&
gsk_gl_device_get_format_flags (self, context, alt_format, &flags) &&
((flags & required_flags) == required_flags))
{
*out_format = format;
*out_flags = flags;
return;
}
/* Next, try the fallbacks */
fallbacks = gdk_memory_format_get_fallbacks (format);
for (i = 0; fallbacks[i] != -1; i++)
{
if (gsk_gl_device_get_format_flags (self, context, fallbacks[i], &flags) &&
((flags & required_flags) == required_flags))
{
*out_format = fallbacks[i];
*out_flags = flags;
gdk_memory_format_gl_format (fallbacks[i],
out_gl_internal_format,
out_gl_format,
out_gl_type,
out_swizzle);
return;
}
}
/* fallbacks will always fallback to a supported format */
g_assert_not_reached ();
}


@@ -0,0 +1,33 @@
#pragma once
#include "gskgpudeviceprivate.h"
G_BEGIN_DECLS
#define GSK_TYPE_GL_DEVICE (gsk_gl_device_get_type ())
G_DECLARE_FINAL_TYPE (GskGLDevice, gsk_gl_device, GSK, GL_DEVICE, GskGpuDevice)
GskGpuDevice * gsk_gl_device_get_for_display (GdkDisplay *display,
GError **error);
void gsk_gl_device_use_program (GskGLDevice *self,
const GskGpuShaderOpClass *op_class,
guint32 variation,
GskGpuShaderClip clip,
guint n_external_textures);
GLuint gsk_gl_device_get_sampler_id (GskGLDevice *self,
GskGpuSampler sampler);
void gsk_gl_device_find_gl_format (GskGLDevice *self,
GdkMemoryFormat format,
GskGpuImageFlags required_flags,
GdkMemoryFormat *out_format,
GskGpuImageFlags *out_flags,
GLint *out_gl_internal_format,
GLenum *out_gl_format,
GLenum *out_gl_type,
GLint out_swizzle[4]);
G_END_DECLS

gsk/gpu/gskglframe.c (new file, 240 lines)

@@ -0,0 +1,240 @@
#include "config.h"
#include "gskglframeprivate.h"
#include "gskgpuglobalsopprivate.h"
#include "gskgpuopprivate.h"
#include "gskgpushaderopprivate.h"
#include "gskglbufferprivate.h"
#include "gskgldescriptorsprivate.h"
#include "gskgldeviceprivate.h"
#include "gskglimageprivate.h"
#include "gdkdmabuftextureprivate.h"
#include "gdkglcontextprivate.h"
#include "gdkgltextureprivate.h"
struct _GskGLFrame
{
GskGpuFrame parent_instance;
GLuint globals_buffer_id;
guint next_texture_slot;
GHashTable *vaos;
};
struct _GskGLFrameClass
{
GskGpuFrameClass parent_class;
};
G_DEFINE_TYPE (GskGLFrame, gsk_gl_frame, GSK_TYPE_GPU_FRAME)
static gboolean
gsk_gl_frame_is_busy (GskGpuFrame *frame)
{
return FALSE;
}
static void
gsk_gl_frame_setup (GskGpuFrame *frame)
{
GskGLFrame *self = GSK_GL_FRAME (frame);
glGenBuffers (1, &self->globals_buffer_id);
}
static void
gsk_gl_frame_cleanup (GskGpuFrame *frame)
{
GskGLFrame *self = GSK_GL_FRAME (frame);
self->next_texture_slot = 0;
GSK_GPU_FRAME_CLASS (gsk_gl_frame_parent_class)->cleanup (frame);
}
static GskGpuImage *
gsk_gl_frame_upload_texture (GskGpuFrame *frame,
gboolean with_mipmap,
GdkTexture *texture)
{
if (GDK_IS_GL_TEXTURE (texture))
{
GdkGLTexture *gl_texture = GDK_GL_TEXTURE (texture);
if (gdk_gl_context_is_shared (GDK_GL_CONTEXT (gsk_gpu_frame_get_context (frame)),
gdk_gl_texture_get_context (gl_texture)))
{
GskGpuImage *image;
GLsync sync;
image = gsk_gl_image_new_for_texture (GSK_GL_DEVICE (gsk_gpu_frame_get_device (frame)),
texture,
gdk_gl_texture_get_id (gl_texture),
FALSE,
gdk_gl_texture_has_mipmap (gl_texture) ? (GSK_GPU_IMAGE_CAN_MIPMAP | GSK_GPU_IMAGE_MIPMAP) : 0);
/* This is a hack, but it works */
sync = gdk_gl_texture_get_sync (gl_texture);
if (sync)
glWaitSync (sync, 0, GL_TIMEOUT_IGNORED);
return image;
}
}
else if (GDK_IS_DMABUF_TEXTURE (texture))
{
gboolean external;
GLuint tex_id;
tex_id = gdk_gl_context_import_dmabuf (GDK_GL_CONTEXT (gsk_gpu_frame_get_context (frame)),
gdk_texture_get_width (texture),
gdk_texture_get_height (texture),
gdk_dmabuf_texture_get_dmabuf (GDK_DMABUF_TEXTURE (texture)),
&external);
if (tex_id)
{
return gsk_gl_image_new_for_texture (GSK_GL_DEVICE (gsk_gpu_frame_get_device (frame)),
texture,
tex_id,
TRUE,
(external ? GSK_GPU_IMAGE_EXTERNAL | GSK_GPU_IMAGE_NO_BLIT : 0));
}
}
return GSK_GPU_FRAME_CLASS (gsk_gl_frame_parent_class)->upload_texture (frame, with_mipmap, texture);
}
static GskGpuDescriptors *
gsk_gl_frame_create_descriptors (GskGpuFrame *frame)
{
return GSK_GPU_DESCRIPTORS (gsk_gl_descriptors_new (GSK_GL_DEVICE (gsk_gpu_frame_get_device (frame))));
}
static GskGpuBuffer *
gsk_gl_frame_create_vertex_buffer (GskGpuFrame *frame,
gsize size)
{
GskGLFrame *self = GSK_GL_FRAME (frame);
/* We could also reassign them all to the new buffer here?
* Is that faster?
*/
g_hash_table_remove_all (self->vaos);
return gsk_gl_buffer_new (GL_ARRAY_BUFFER, size, GL_WRITE_ONLY);
}
static GskGpuBuffer *
gsk_gl_frame_create_storage_buffer (GskGpuFrame *frame,
gsize size)
{
return gsk_gl_buffer_new (GL_UNIFORM_BUFFER, size, GL_WRITE_ONLY);
}
static void
gsk_gl_frame_submit (GskGpuFrame *frame,
GskGpuBuffer *vertex_buffer,
GskGpuOp *op)
{
GskGLFrame *self = GSK_GL_FRAME (frame);
GskGLCommandState state = { 0, };
glEnable (GL_SCISSOR_TEST);
glEnable (GL_DEPTH_TEST);
glDepthFunc (GL_LEQUAL);
glEnable (GL_BLEND);
if (vertex_buffer)
gsk_gl_buffer_bind (GSK_GL_BUFFER (vertex_buffer));
gsk_gl_frame_bind_globals (self);
glBufferData (GL_UNIFORM_BUFFER,
sizeof (GskGpuGlobalsInstance),
NULL,
GL_STREAM_DRAW);
while (op)
{
op = gsk_gpu_op_gl_command (op, frame, &state);
}
}
static void
gsk_gl_frame_finalize (GObject *object)
{
GskGLFrame *self = GSK_GL_FRAME (object);
g_hash_table_unref (self->vaos);
glDeleteBuffers (1, &self->globals_buffer_id);
G_OBJECT_CLASS (gsk_gl_frame_parent_class)->finalize (object);
}
static void
gsk_gl_frame_class_init (GskGLFrameClass *klass)
{
GskGpuFrameClass *gpu_frame_class = GSK_GPU_FRAME_CLASS (klass);
GObjectClass *object_class = G_OBJECT_CLASS (klass);
gpu_frame_class->is_busy = gsk_gl_frame_is_busy;
gpu_frame_class->setup = gsk_gl_frame_setup;
gpu_frame_class->cleanup = gsk_gl_frame_cleanup;
gpu_frame_class->upload_texture = gsk_gl_frame_upload_texture;
gpu_frame_class->create_descriptors = gsk_gl_frame_create_descriptors;
gpu_frame_class->create_vertex_buffer = gsk_gl_frame_create_vertex_buffer;
gpu_frame_class->create_storage_buffer = gsk_gl_frame_create_storage_buffer;
gpu_frame_class->submit = gsk_gl_frame_submit;
object_class->finalize = gsk_gl_frame_finalize;
}
static void
free_vao (gpointer vao)
{
glDeleteVertexArrays (1, (GLuint[1]) { GPOINTER_TO_UINT (vao) });
}
static void
gsk_gl_frame_init (GskGLFrame *self)
{
self->vaos = g_hash_table_new_full (g_direct_hash, g_direct_equal, NULL, free_vao);
}
void
gsk_gl_frame_use_program (GskGLFrame *self,
const GskGpuShaderOpClass *op_class,
guint32 variation,
GskGpuShaderClip clip,
guint n_external_textures)
{
GLuint vao;
gsk_gl_device_use_program (GSK_GL_DEVICE (gsk_gpu_frame_get_device (GSK_GPU_FRAME (self))),
op_class,
variation,
clip,
n_external_textures);
vao = GPOINTER_TO_UINT (g_hash_table_lookup (self->vaos, op_class));
if (vao)
{
glBindVertexArray (vao);
return;
}
glGenVertexArrays (1, &vao);
glBindVertexArray (vao);
op_class->setup_vao (0);
g_hash_table_insert (self->vaos, (gpointer) op_class, GUINT_TO_POINTER (vao));
}
void
gsk_gl_frame_bind_globals (GskGLFrame *self)
{
glBindBufferBase (GL_UNIFORM_BUFFER, 0, self->globals_buffer_id);
}


@@ -0,0 +1,19 @@
#pragma once
#include "gskgpuframeprivate.h"
G_BEGIN_DECLS
#define GSK_TYPE_GL_FRAME (gsk_gl_frame_get_type ())
G_DECLARE_FINAL_TYPE (GskGLFrame, gsk_gl_frame, GSK, GL_FRAME, GskGpuFrame)
void gsk_gl_frame_use_program (GskGLFrame *self,
const GskGpuShaderOpClass *op_class,
guint32 variation,
GskGpuShaderClip clip,
guint n_external_textures);
void gsk_gl_frame_bind_globals (GskGLFrame *self);
G_END_DECLS

gsk/gpu/gskglimage.c (new file, 308 lines)

@@ -0,0 +1,308 @@
#include "config.h"
#include "gskglimageprivate.h"
#include "gdk/gdkdisplayprivate.h"
#include "gdk/gdkglcontextprivate.h"
struct _GskGLImage
{
GskGpuImage parent_instance;
guint texture_id;
guint framebuffer_id;
GLint gl_internal_format;
GLenum gl_format;
GLenum gl_type;
guint owns_texture : 1;
};
struct _GskGLImageClass
{
GskGpuImageClass parent_class;
};
G_DEFINE_TYPE (GskGLImage, gsk_gl_image, GSK_TYPE_GPU_IMAGE)
static void
gsk_gl_image_get_projection_matrix (GskGpuImage *image,
graphene_matrix_t *out_projection)
{
GskGLImage *self = GSK_GL_IMAGE (image);
GSK_GPU_IMAGE_CLASS (gsk_gl_image_parent_class)->get_projection_matrix (image, out_projection);
if (self->texture_id == 0)
graphene_matrix_scale (out_projection, 1.f, -1.f, 1.f);
}
static void
gsk_gl_image_finalize (GObject *object)
{
GskGLImage *self = GSK_GL_IMAGE (object);
if (self->framebuffer_id)
glDeleteFramebuffers (1, &self->framebuffer_id);
if (self->owns_texture)
glDeleteTextures (1, &self->texture_id);
G_OBJECT_CLASS (gsk_gl_image_parent_class)->finalize (object);
}
static void
gsk_gl_image_class_init (GskGLImageClass *klass)
{
GskGpuImageClass *image_class = GSK_GPU_IMAGE_CLASS (klass);
GObjectClass *object_class = G_OBJECT_CLASS (klass);
image_class->get_projection_matrix = gsk_gl_image_get_projection_matrix;
object_class->finalize = gsk_gl_image_finalize;
}
static void
gsk_gl_image_init (GskGLImage *self)
{
}
GskGpuImage *
gsk_gl_image_new_backbuffer (GskGLDevice *device,
GdkMemoryFormat format,
gsize width,
gsize height)
{
GskGLImage *self;
GskGpuImageFlags flags;
GLint swizzle[4];
self = g_object_new (GSK_TYPE_GL_IMAGE, NULL);
/* We only do this so these variables get initialized */
gsk_gl_device_find_gl_format (device,
format,
0,
&format,
&flags,
&self->gl_internal_format,
&self->gl_format,
&self->gl_type,
swizzle);
gsk_gpu_image_setup (GSK_GPU_IMAGE (self), flags, format, width, height);
/* texture_id == 0 means backbuffer */
return GSK_GPU_IMAGE (self);
}
GskGpuImage *
gsk_gl_image_new (GskGLDevice *device,
GdkMemoryFormat format,
GskGpuImageFlags required_flags,
gsize width,
gsize height)
{
GskGLImage *self;
GLint swizzle[4];
GskGpuImageFlags flags;
gsize max_size;
max_size = gsk_gpu_device_get_max_image_size (GSK_GPU_DEVICE (device));
if (width > max_size || height > max_size)
return NULL;
self = g_object_new (GSK_TYPE_GL_IMAGE, NULL);
gsk_gl_device_find_gl_format (device,
format,
required_flags,
&format,
&flags,
&self->gl_internal_format,
&self->gl_format,
&self->gl_type,
swizzle);
gsk_gpu_image_setup (GSK_GPU_IMAGE (self),
flags,
format,
width, height);
glGenTextures (1, &self->texture_id);
self->owns_texture = TRUE;
glActiveTexture (GL_TEXTURE0);
glBindTexture (GL_TEXTURE_2D, self->texture_id);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D (GL_TEXTURE_2D, 0, self->gl_internal_format, width, height, 0, self->gl_format, self->gl_type, NULL);
/* Only apply swizzle if really needed, might not even be
* supported if default values are set
*/
if (swizzle[0] != GL_RED || swizzle[1] != GL_GREEN || swizzle[2] != GL_BLUE || swizzle[3] != GL_ALPHA)
{
/* Set each channel independently since GLES 3.0 doesn't support the iv method */
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_R, swizzle[0]);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_G, swizzle[1]);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, swizzle[2]);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_A, swizzle[3]);
}
return GSK_GPU_IMAGE (self);
}
GskGpuImage *
gsk_gl_image_new_for_texture (GskGLDevice *device,
GdkTexture *owner,
GLuint tex_id,
gboolean take_ownership,
GskGpuImageFlags extra_flags)
{
GdkMemoryFormat format, real_format;
GskGpuImageFlags flags;
GskGLImage *self;
GLint swizzle[4];
format = gdk_texture_get_format (owner);
self = g_object_new (GSK_TYPE_GL_IMAGE, NULL);
gsk_gl_device_find_gl_format (device,
format,
0,
&real_format,
&flags,
&self->gl_internal_format,
&self->gl_format,
&self->gl_type,
swizzle);
if (format != real_format)
flags = GSK_GPU_IMAGE_NO_BLIT |
(gdk_memory_format_alpha (format) == GDK_MEMORY_ALPHA_STRAIGHT ? GSK_GPU_IMAGE_STRAIGHT_ALPHA : 0);
gsk_gpu_image_setup (GSK_GPU_IMAGE (self),
flags | extra_flags,
format,
gdk_texture_get_width (owner),
gdk_texture_get_height (owner));
gsk_gpu_image_toggle_ref_texture (GSK_GPU_IMAGE (self), owner);
self->texture_id = tex_id;
self->owns_texture = take_ownership;
return GSK_GPU_IMAGE (self);
}
void
gsk_gl_image_bind_texture (GskGLImage *self)
{
if (gsk_gpu_image_get_flags (GSK_GPU_IMAGE (self)) & GSK_GPU_IMAGE_EXTERNAL)
glBindTexture (GL_TEXTURE_EXTERNAL_OES, self->texture_id);
else
glBindTexture (GL_TEXTURE_2D, self->texture_id);
}
void
gsk_gl_image_bind_framebuffer_target (GskGLImage *self,
GLenum target)
{
GLenum status;
if (self->framebuffer_id)
{
glBindFramebuffer (target, self->framebuffer_id);
return;
}
/* We're the renderbuffer */
if (self->texture_id == 0)
{
glBindFramebuffer (target, 0);
return;
}
glGenFramebuffers (1, &self->framebuffer_id);
glBindFramebuffer (target, self->framebuffer_id);
glFramebufferTexture2D (target, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, self->texture_id, 0);
status = glCheckFramebufferStatus (target);
switch (status)
{
case GL_FRAMEBUFFER_COMPLETE:
break;
case GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT:
g_critical ("glCheckFramebufferStatus() returned GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT. Expect broken rendering.");
break;
case GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS:
g_critical ("glCheckFramebufferStatus() returned GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS. Expect broken rendering.");
break;
case GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT:
g_critical ("glCheckFramebufferStatus() returned GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT. Expect broken rendering.");
break;
case GL_FRAMEBUFFER_UNSUPPORTED:
g_critical ("glCheckFramebufferStatus() returned GL_FRAMEBUFFER_UNSUPPORTED. Expect broken rendering.");
break;
default:
g_critical ("glCheckFramebufferStatus() returned %u (0x%x). Expect broken rendering.", status, status);
break;
}
}
void
gsk_gl_image_bind_framebuffer (GskGLImage *self)
{
gsk_gl_image_bind_framebuffer_target (self, GL_FRAMEBUFFER);
}
gboolean
gsk_gl_image_is_flipped (GskGLImage *self)
{
return self->texture_id == 0;
}
GLint
gsk_gl_image_get_gl_internal_format (GskGLImage *self)
{
return self->gl_internal_format;
}
GLenum
gsk_gl_image_get_gl_format (GskGLImage *self)
{
return self->gl_format;
}
GLenum
gsk_gl_image_get_gl_type (GskGLImage *self)
{
return self->gl_type;
}
GLuint
gsk_gl_image_steal_texture (GskGLImage *self)
{
g_assert (self->owns_texture);
if (self->framebuffer_id)
{
glDeleteFramebuffers (1, &self->framebuffer_id);
self->framebuffer_id = 0;
}
self->owns_texture = FALSE;
return self->texture_id;
}


@@ -0,0 +1,41 @@
#pragma once
#include "gskgpuimageprivate.h"
#include "gskgldeviceprivate.h"
G_BEGIN_DECLS
#define GSK_TYPE_GL_IMAGE (gsk_gl_image_get_type ())
G_DECLARE_FINAL_TYPE (GskGLImage, gsk_gl_image, GSK, GL_IMAGE, GskGpuImage)
GskGpuImage * gsk_gl_image_new_backbuffer (GskGLDevice *device,
GdkMemoryFormat format,
gsize width,
gsize height);
GskGpuImage * gsk_gl_image_new (GskGLDevice *device,
GdkMemoryFormat format,
GskGpuImageFlags required_flags,
gsize width,
gsize height);
GskGpuImage * gsk_gl_image_new_for_texture (GskGLDevice *device,
GdkTexture *owner,
GLuint tex_id,
gboolean take_ownership,
GskGpuImageFlags extra_flags);
void gsk_gl_image_bind_texture (GskGLImage *self);
void gsk_gl_image_bind_framebuffer (GskGLImage *self);
void gsk_gl_image_bind_framebuffer_target (GskGLImage *self,
GLenum target);
gboolean gsk_gl_image_is_flipped (GskGLImage *self);
GLint gsk_gl_image_get_gl_internal_format (GskGLImage *self);
GLenum gsk_gl_image_get_gl_format (GskGLImage *self);
GLenum gsk_gl_image_get_gl_type (GskGLImage *self);
GLuint gsk_gl_image_steal_texture (GskGLImage *self);
G_END_DECLS


@@ -0,0 +1,86 @@
#include "config.h"
#include "gskgpublendmodeopprivate.h"
#include "gskenumtypes.h"
#include "gskgpuframeprivate.h"
#include "gskgpuprintprivate.h"
#include "gskrectprivate.h"
#include "gpu/shaders/gskgpublendmodeinstance.h"
typedef struct _GskGpuBlendModeOp GskGpuBlendModeOp;
struct _GskGpuBlendModeOp
{
GskGpuShaderOp op;
};
static void
gsk_gpu_blend_mode_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuShaderOp *shader = (GskGpuShaderOp *) op;
GskGpuBlendmodeInstance *instance;
instance = (GskGpuBlendmodeInstance *) gsk_gpu_frame_get_vertex_data (frame, shader->vertex_offset);
gsk_gpu_print_op (string, indent, "blend-mode");
gsk_gpu_print_rect (string, instance->rect);
gsk_gpu_print_image_descriptor (string, shader->desc, instance->bottom_id);
gsk_gpu_print_enum (string, GSK_TYPE_BLEND_MODE, shader->variation);
gsk_gpu_print_image_descriptor (string, shader->desc, instance->top_id);
gsk_gpu_print_newline (string);
}
static const GskGpuShaderOpClass GSK_GPU_BLEND_MODE_OP_CLASS = {
{
GSK_GPU_OP_SIZE (GskGpuBlendModeOp),
GSK_GPU_STAGE_SHADER,
gsk_gpu_shader_op_finish,
gsk_gpu_blend_mode_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_shader_op_vk_command,
#endif
gsk_gpu_shader_op_gl_command
},
"gskgpublendmode",
sizeof (GskGpuBlendmodeInstance),
#ifdef GDK_RENDERING_VULKAN
&gsk_gpu_blendmode_info,
#endif
gsk_gpu_blendmode_setup_attrib_locations,
gsk_gpu_blendmode_setup_vao
};
void
gsk_gpu_blend_mode_op (GskGpuFrame *frame,
GskGpuShaderClip clip,
GskGpuDescriptors *desc,
const graphene_rect_t *rect,
const graphene_point_t *offset,
float opacity,
GskBlendMode blend_mode,
guint32 bottom_descriptor,
const graphene_rect_t *bottom_rect,
guint32 top_descriptor,
const graphene_rect_t *top_rect)
{
GskGpuBlendmodeInstance *instance;
gsk_gpu_shader_op_alloc (frame,
&GSK_GPU_BLEND_MODE_OP_CLASS,
blend_mode,
clip,
desc,
&instance);
gsk_gpu_rect_to_float (rect, offset, instance->rect);
instance->opacity = opacity;
gsk_gpu_rect_to_float (bottom_rect, offset, instance->bottom_rect);
instance->bottom_id = bottom_descriptor;
gsk_gpu_rect_to_float (top_rect, offset, instance->top_rect);
instance->top_id = top_descriptor;
}

View File

@@ -1,20 +1,22 @@
 #pragma once
-#include "gskvulkanopprivate.h"
+#include "gskgpushaderopprivate.h"
 #include <graphene.h>
 G_BEGIN_DECLS
-void gsk_vulkan_blend_mode_op (GskVulkanRender *render,
-                               GskVulkanShaderClip clip,
-                               const graphene_rect_t *bounds,
+void gsk_gpu_blend_mode_op (GskGpuFrame *frame,
+                            GskGpuShaderClip clip,
+                            GskGpuDescriptors *desc,
+                            const graphene_rect_t *rect,
+                            const graphene_point_t *offset,
                             float opacity,
                             GskBlendMode blend_mode,
-                            GskVulkanImage *top_image,
-                            const graphene_rect_t *top_rect,
-                            const graphene_rect_t *top_tex_rect,
-                            GskVulkanImage *bottom_image,
-                            const graphene_rect_t *bottom_rect,
-                            const graphene_rect_t *bottom_tex_rect);
+                            guint32 start_descriptor,
+                            const graphene_rect_t *start_rect,
+                            guint32 end_descriptor,
+                            const graphene_rect_t *end_rect);
 G_END_DECLS

103
gsk/gpu/gskgpublendop.c Normal file
View File

@@ -0,0 +1,103 @@
#include "config.h"
#include "gskgpublendopprivate.h"
#include "gskgpuopprivate.h"
#include "gskgpuprintprivate.h"
typedef struct _GskGpuBlendOp GskGpuBlendOp;
struct _GskGpuBlendOp
{
GskGpuOp op;
GskGpuBlend blend;
};
static void
gsk_gpu_blend_op_finish (GskGpuOp *op)
{
}
static void
gsk_gpu_blend_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuBlendOp *self = (GskGpuBlendOp *) op;
gsk_gpu_print_op (string, indent, "blend");
switch (self->blend)
{
case GSK_GPU_BLEND_OVER:
gsk_gpu_print_string (string, "over");
break;
case GSK_GPU_BLEND_ADD:
gsk_gpu_print_string (string, "add");
break;
default:
g_assert_not_reached ();
break;
}
gsk_gpu_print_newline (string);
}
#ifdef GDK_RENDERING_VULKAN
static GskGpuOp *
gsk_gpu_blend_op_vk_command (GskGpuOp *op,
GskGpuFrame *frame,
GskVulkanCommandState *state)
{
GskGpuBlendOp *self = (GskGpuBlendOp *) op;
state->blend = self->blend;
return op->next;
}
#endif
static GskGpuOp *
gsk_gpu_blend_op_gl_command (GskGpuOp *op,
GskGpuFrame *frame,
GskGLCommandState *state)
{
GskGpuBlendOp *self = (GskGpuBlendOp *) op;
switch (self->blend)
{
case GSK_GPU_BLEND_OVER:
glBlendFunc (GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
break;
case GSK_GPU_BLEND_ADD:
glBlendFunc (GL_ONE, GL_ONE);
break;
default:
g_assert_not_reached ();
break;
}
return op->next;
}
static const GskGpuOpClass GSK_GPU_BLEND_OP_CLASS = {
GSK_GPU_OP_SIZE (GskGpuBlendOp),
GSK_GPU_STAGE_COMMAND,
gsk_gpu_blend_op_finish,
gsk_gpu_blend_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_blend_op_vk_command,
#endif
gsk_gpu_blend_op_gl_command
};
void
gsk_gpu_blend_op (GskGpuFrame *frame,
GskGpuBlend blend)
{
GskGpuBlendOp *self;
self = (GskGpuBlendOp *) gsk_gpu_op_alloc (frame, &GSK_GPU_BLEND_OP_CLASS);
self->blend = blend;
}
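
For reference, the two blend setups above compute the following (assuming premultiplied alpha, which is what GSK's memory formats use):

/* OVER: dst = src + (1 - src.alpha) * dst
 * ADD:  dst = src + dst
 */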

View File

@@ -0,0 +1,12 @@
#pragma once
#include "gskgputypesprivate.h"
G_BEGIN_DECLS
void gsk_gpu_blend_op (GskGpuFrame *frame,
GskGpuBlend blend);
G_END_DECLS

223
gsk/gpu/gskgpublitop.c Normal file
View File

@@ -0,0 +1,223 @@
#include "config.h"
#include "gskgpublitopprivate.h"
#include "gskglimageprivate.h"
#include "gskgpuprintprivate.h"
#ifdef GDK_RENDERING_VULKAN
#include "gskvulkanimageprivate.h"
#endif
typedef struct _GskGpuBlitOp GskGpuBlitOp;
struct _GskGpuBlitOp
{
GskGpuOp op;
GskGpuImage *src_image;
GskGpuImage *dest_image;
cairo_rectangle_int_t src_rect;
cairo_rectangle_int_t dest_rect;
GskGpuBlitFilter filter;
};
static void
gsk_gpu_blit_op_finish (GskGpuOp *op)
{
GskGpuBlitOp *self = (GskGpuBlitOp *) op;
g_object_unref (self->src_image);
g_object_unref (self->dest_image);
}
static void
gsk_gpu_blit_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuBlitOp *self = (GskGpuBlitOp *) op;
gsk_gpu_print_op (string, indent, "blit");
gsk_gpu_print_int_rect (string, &self->dest_rect);
gsk_gpu_print_newline (string);
}
#ifdef GDK_RENDERING_VULKAN
static GskGpuOp *
gsk_gpu_blit_op_vk_command (GskGpuOp *op,
GskGpuFrame *frame,
GskVulkanCommandState *state)
{
GskGpuBlitOp *self = (GskGpuBlitOp *) op;
VkImageLayout src_layout, dest_layout;
VkFilter filter;
src_layout = gsk_vulkan_image_get_vk_image_layout (GSK_VULKAN_IMAGE (self->src_image));
if (src_layout != VK_IMAGE_LAYOUT_SHARED_PRESENT_KHR &&
src_layout != VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL &&
src_layout != VK_IMAGE_LAYOUT_GENERAL)
{
gsk_vulkan_image_transition (GSK_VULKAN_IMAGE (self->src_image),
state->semaphores,
state->vk_command_buffer,
VK_PIPELINE_STAGE_TRANSFER_BIT,
VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
VK_ACCESS_TRANSFER_READ_BIT);
src_layout = VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL;
}
dest_layout = gsk_vulkan_image_get_vk_image_layout (GSK_VULKAN_IMAGE (self->dest_image));
if (dest_layout != VK_IMAGE_LAYOUT_SHARED_PRESENT_KHR &&
dest_layout != VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL &&
dest_layout != VK_IMAGE_LAYOUT_GENERAL)
{
gsk_vulkan_image_transition (GSK_VULKAN_IMAGE (self->dest_image),
state->semaphores,
state->vk_command_buffer,
VK_PIPELINE_STAGE_TRANSFER_BIT,
VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
VK_ACCESS_TRANSFER_WRITE_BIT);
dest_layout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
}
switch (self->filter)
{
default:
g_assert_not_reached ();
G_GNUC_FALLTHROUGH;
case GSK_GPU_BLIT_LINEAR:
filter = VK_FILTER_LINEAR;
break;
case GSK_GPU_BLIT_NEAREST:
filter = VK_FILTER_NEAREST;
break;
}
vkCmdBlitImage (state->vk_command_buffer,
gsk_vulkan_image_get_vk_image (GSK_VULKAN_IMAGE (self->src_image)),
src_layout,
gsk_vulkan_image_get_vk_image (GSK_VULKAN_IMAGE (self->dest_image)),
dest_layout,
1,
&(VkImageBlit) {
.srcSubresource = {
.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
.mipLevel = 0,
.baseArrayLayer = 0,
.layerCount = 1
},
.srcOffsets = {
{
.x = self->src_rect.x,
.y = self->src_rect.y,
.z = 0,
},
{
.x = self->src_rect.x + self->src_rect.width,
.y = self->src_rect.y + self->src_rect.height,
.z = 1
}
},
.dstSubresource = {
.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
.mipLevel = 0,
.baseArrayLayer = 0,
.layerCount = 1
},
.dstOffsets = {
{
.x = self->dest_rect.x,
.y = self->dest_rect.y,
.z = 0,
},
{
.x = self->dest_rect.x + self->dest_rect.width,
.y = self->dest_rect.y + self->dest_rect.height,
.z = 1,
}
},
},
filter);
return op->next;
}
#endif
static GskGpuOp *
gsk_gpu_blit_op_gl_command (GskGpuOp *op,
GskGpuFrame *frame,
GskGLCommandState *state)
{
GskGpuBlitOp *self = (GskGpuBlitOp *) op;
GLenum filter;
gsk_gl_image_bind_framebuffer_target (GSK_GL_IMAGE (self->src_image), GL_READ_FRAMEBUFFER);
gsk_gl_image_bind_framebuffer_target (GSK_GL_IMAGE (self->dest_image), GL_DRAW_FRAMEBUFFER);
switch (self->filter)
{
default:
g_assert_not_reached ();
G_GNUC_FALLTHROUGH;
case GSK_GPU_BLIT_LINEAR:
filter = GL_LINEAR;
break;
case GSK_GPU_BLIT_NEAREST:
filter = GL_NEAREST;
break;
}
glDisable (GL_SCISSOR_TEST);
glBlitFramebuffer (self->src_rect.x,
self->src_rect.y,
self->src_rect.x + self->src_rect.width,
self->src_rect.y + self->src_rect.height,
self->dest_rect.x,
state->flip_y ? state->flip_y - self->dest_rect.y - self->dest_rect.height
: self->dest_rect.y,
self->dest_rect.x + self->dest_rect.width,
state->flip_y ? state->flip_y - self->dest_rect.y
: self->dest_rect.y + self->dest_rect.height,
GL_COLOR_BUFFER_BIT,
filter);
glEnable (GL_SCISSOR_TEST);
return op->next;
}
static const GskGpuOpClass GSK_GPU_BLIT_OP_CLASS = {
GSK_GPU_OP_SIZE (GskGpuBlitOp),
GSK_GPU_STAGE_PASS,
gsk_gpu_blit_op_finish,
gsk_gpu_blit_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_blit_op_vk_command,
#endif
gsk_gpu_blit_op_gl_command
};
void
gsk_gpu_blit_op (GskGpuFrame *frame,
GskGpuImage *src_image,
GskGpuImage *dest_image,
const cairo_rectangle_int_t *src_rect,
const cairo_rectangle_int_t *dest_rect,
GskGpuBlitFilter filter)
{
GskGpuBlitOp *self;
g_assert ((gsk_gpu_image_get_flags (src_image) & GSK_GPU_IMAGE_NO_BLIT) == 0);
g_assert (filter != GSK_GPU_BLIT_LINEAR || (gsk_gpu_image_get_flags (src_image) & GSK_GPU_IMAGE_FILTERABLE) == GSK_GPU_IMAGE_FILTERABLE);
g_assert ((gsk_gpu_image_get_flags (dest_image) & GSK_GPU_IMAGE_RENDERABLE) == GSK_GPU_IMAGE_RENDERABLE);
self = (GskGpuBlitOp *) gsk_gpu_op_alloc (frame, &GSK_GPU_BLIT_OP_CLASS);
self->src_image = g_object_ref (src_image);
self->dest_image = g_object_ref (dest_image);
self->src_rect = *src_rect;
self->dest_rect = *dest_rect;
self->filter = filter;
}
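
The y-flip in the GL path above is easier to read spelled out; a sketch assuming state->flip_y holds the height of a bottom-left-origin framebuffer (and is 0 for unflipped render targets):

/* mapping a top-left-origin rect into GL window coordinates */
int H = state->flip_y;
int gl_y0 = H - (dest_rect.y + dest_rect.height);  /* lower edge in GL coords */
int gl_y1 = H - dest_rect.y;                       /* upper edge in GL coords */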

View File

@@ -0,0 +1,20 @@
#pragma once
#include "gskgpuopprivate.h"
G_BEGIN_DECLS
typedef enum {
GSK_GPU_BLIT_NEAREST,
GSK_GPU_BLIT_LINEAR
} GskGpuBlitFilter;
void gsk_gpu_blit_op (GskGpuFrame *frame,
GskGpuImage *src_image,
GskGpuImage *dest_image,
const cairo_rectangle_int_t *src_rect,
const cairo_rectangle_int_t *dest_rect,
GskGpuBlitFilter filter);
G_END_DECLS

131
gsk/gpu/gskgpublurop.c Normal file
View File

@@ -0,0 +1,131 @@
#include "config.h"
#include "gskgpubluropprivate.h"
#include "gskgpuframeprivate.h"
#include "gskgpuprintprivate.h"
#include "gskrectprivate.h"
#include "gdk/gdkrgbaprivate.h"
#include "gpu/shaders/gskgpublurinstance.h"
#define VARIATION_COLORIZE 1
typedef struct _GskGpuBlurOp GskGpuBlurOp;
struct _GskGpuBlurOp
{
GskGpuShaderOp op;
};
static void
gsk_gpu_blur_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuShaderOp *shader = (GskGpuShaderOp *) op;
GskGpuBlurInstance *instance;
instance = (GskGpuBlurInstance *) gsk_gpu_frame_get_vertex_data (frame, shader->vertex_offset);
gsk_gpu_print_op (string, indent, "blur");
gsk_gpu_print_rect (string, instance->rect);
gsk_gpu_print_image_descriptor (string, shader->desc, instance->tex_id);
gsk_gpu_print_newline (string);
}
static const GskGpuShaderOpClass GSK_GPU_BLUR_OP_CLASS = {
{
GSK_GPU_OP_SIZE (GskGpuBlurOp),
GSK_GPU_STAGE_SHADER,
gsk_gpu_shader_op_finish,
gsk_gpu_blur_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_shader_op_vk_command,
#endif
gsk_gpu_shader_op_gl_command
},
"gskgpublur",
sizeof (GskGpuBlurInstance),
#ifdef GDK_RENDERING_VULKAN
&gsk_gpu_blur_info,
#endif
gsk_gpu_blur_setup_attrib_locations,
gsk_gpu_blur_setup_vao
};
static void
gsk_gpu_blur_op_full (GskGpuFrame *frame,
guint32 variation,
GskGpuShaderClip clip,
GskGpuDescriptors *desc,
guint32 descriptor,
const graphene_rect_t *rect,
const graphene_point_t *offset,
const graphene_rect_t *tex_rect,
const graphene_vec2_t *blur_direction,
const GdkRGBA *blur_color)
{
GskGpuBlurInstance *instance;
gsk_gpu_shader_op_alloc (frame,
&GSK_GPU_BLUR_OP_CLASS,
variation,
clip,
desc,
&instance);
gsk_gpu_rect_to_float (rect, offset, instance->rect);
gsk_gpu_rect_to_float (tex_rect, offset, instance->tex_rect);
graphene_vec2_to_float (blur_direction, instance->blur_direction);
gsk_gpu_rgba_to_float (blur_color, instance->blur_color);
instance->tex_id = descriptor;
}
void
gsk_gpu_blur_op (GskGpuFrame *frame,
GskGpuShaderClip clip,
GskGpuDescriptors *desc,
guint32 descriptor,
const graphene_rect_t *rect,
const graphene_point_t *offset,
const graphene_rect_t *tex_rect,
const graphene_vec2_t *blur_direction)
{
gsk_gpu_blur_op_full (frame,
0,
clip,
desc,
descriptor,
rect,
offset,
tex_rect,
blur_direction,
&GDK_RGBA_TRANSPARENT);
}
void
gsk_gpu_blur_shadow_op (GskGpuFrame *frame,
GskGpuShaderClip clip,
GskGpuDescriptors *desc,
guint32 descriptor,
const graphene_rect_t *rect,
const graphene_point_t *offset,
const graphene_rect_t *tex_rect,
const graphene_vec2_t *blur_direction,
const GdkRGBA *shadow_color)
{
gsk_gpu_blur_op_full (frame,
VARIATION_COLORIZE,
clip,
desc,
descriptor,
rect,
offset,
tex_rect,
blur_direction,
shadow_color);
}
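
Gaussian blur is separable, so callers typically run this op twice, once per axis, with an intermediate offscreen in between. A sketch (descriptor and offscreen setup omitted; variable names are hypothetical):

graphene_vec2_t dir;

graphene_vec2_init (&dir, radius, 0.0f);              /* horizontal pass */
gsk_gpu_blur_op (frame, clip, desc, src_descriptor,
                 rect, offset, tex_rect, &dir);

graphene_vec2_init (&dir, 0.0f, radius);              /* vertical pass */
gsk_gpu_blur_op (frame, clip, desc, pass1_descriptor, /* reads pass 1 result */
                 rect, offset, tex_rect, &dir);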

View File

@@ -0,0 +1,30 @@
#pragma once
#include "gskgpushaderopprivate.h"
#include <graphene.h>
G_BEGIN_DECLS
void gsk_gpu_blur_op (GskGpuFrame *frame,
GskGpuShaderClip clip,
GskGpuDescriptors *desc,
guint32 descriptor,
const graphene_rect_t *rect,
const graphene_point_t *offset,
const graphene_rect_t *tex_rect,
const graphene_vec2_t *blur_direction);
void gsk_gpu_blur_shadow_op (GskGpuFrame *frame,
GskGpuShaderClip clip,
GskGpuDescriptors *desc,
guint32 descriptor,
const graphene_rect_t *rect,
const graphene_point_t *offset,
const graphene_rect_t *tex_rect,
const graphene_vec2_t *blur_direction,
const GdkRGBA *shadow_color);
G_END_DECLS

131
gsk/gpu/gskgpuborderop.c Normal file
View File

@@ -0,0 +1,131 @@
#include "config.h"
#include "gskgpuborderopprivate.h"
#include "gskgpuframeprivate.h"
#include "gskgpuprintprivate.h"
#include "gskgpushaderopprivate.h"
#include "gsk/gskroundedrectprivate.h"
#include "gpu/shaders/gskgpuborderinstance.h"
typedef struct _GskGpuBorderOp GskGpuBorderOp;
struct _GskGpuBorderOp
{
GskGpuShaderOp op;
};
static gboolean
color_equal (const float *color1,
const float *color2)
{
return gdk_rgba_equal (&(GdkRGBA) { color1[0], color1[1], color1[2], color1[3] },
                       &(GdkRGBA) { color2[0], color2[1], color2[2], color2[3] });
}
static void
gsk_gpu_border_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuShaderOp *shader = (GskGpuShaderOp *) op;
GskGpuBorderInstance *instance;
instance = (GskGpuBorderInstance *) gsk_gpu_frame_get_vertex_data (frame, shader->vertex_offset);
gsk_gpu_print_op (string, indent, "border");
gsk_gpu_print_rounded_rect (string, instance->outline);
gsk_gpu_print_rgba (string, (const float *) &instance->border_colors[0]);
if (!color_equal (&instance->border_colors[12], &instance->border_colors[0]) ||
!color_equal (&instance->border_colors[8], &instance->border_colors[0]) ||
!color_equal (&instance->border_colors[4], &instance->border_colors[0]))
{
gsk_gpu_print_rgba (string, &instance->border_colors[4]);
gsk_gpu_print_rgba (string, &instance->border_colors[8]);
gsk_gpu_print_rgba (string, &instance->border_colors[12]);
}
g_string_append_printf (string, "%g ", instance->border_widths[0]);
if (instance->border_widths[0] != instance->border_widths[1] ||
instance->border_widths[0] != instance->border_widths[2] ||
instance->border_widths[0] != instance->border_widths[3])
{
g_string_append_printf (string, "%g %g %g ",
instance->border_widths[1],
instance->border_widths[2],
instance->border_widths[3]);
}
gsk_gpu_print_newline (string);
}
#ifdef GDK_RENDERING_VULKAN
static GskGpuOp *
gsk_gpu_border_op_vk_command (GskGpuOp *op,
GskGpuFrame *frame,
GskVulkanCommandState *state)
{
return gsk_gpu_shader_op_vk_command_n (op, frame, state, 8);
}
#endif
static GskGpuOp *
gsk_gpu_border_op_gl_command (GskGpuOp *op,
GskGpuFrame *frame,
GskGLCommandState *state)
{
return gsk_gpu_shader_op_gl_command_n (op, frame, state, 8);
}
static const GskGpuShaderOpClass GSK_GPU_BORDER_OP_CLASS = {
{
GSK_GPU_OP_SIZE (GskGpuBorderOp),
GSK_GPU_STAGE_SHADER,
gsk_gpu_shader_op_finish,
gsk_gpu_border_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_border_op_vk_command,
#endif
gsk_gpu_border_op_gl_command
},
"gskgpuborder",
sizeof (GskGpuBorderInstance),
#ifdef GDK_RENDERING_VULKAN
&gsk_gpu_border_info,
#endif
gsk_gpu_border_setup_attrib_locations,
gsk_gpu_border_setup_vao
};
void
gsk_gpu_border_op (GskGpuFrame *frame,
GskGpuShaderClip clip,
const GskRoundedRect *outline,
const graphene_point_t *offset,
const graphene_point_t *inside_offset,
const float widths[4],
const GdkRGBA colors[4])
{
GskGpuBorderInstance *instance;
guint i;
gsk_gpu_shader_op_alloc (frame,
&GSK_GPU_BORDER_OP_CLASS,
0,
clip,
NULL,
&instance);
gsk_rounded_rect_to_float (outline, offset, instance->outline);
for (i = 0; i < 4; i++)
{
instance->border_widths[i] = widths[i];
gsk_gpu_rgba_to_float (&colors[i], &instance->border_colors[4 * i]);
}
instance->offset[0] = inside_offset->x;
instance->offset[1] = inside_offset->y;
}
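
The instance layout assumed by the print function above: each of the four border colors occupies four floats, in CSS side order.

/* colors[0..3] = top, right, bottom, left (CSS order), stored as RGBA
 * at border_colors[0], [4], [8] and [12] respectively, hence the
 * offsets compared in gsk_gpu_border_op_print(). */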

View File

@@ -1,13 +1,17 @@
 #pragma once
-#include "gskvulkanopprivate.h"
+#include "gskgputypesprivate.h"
+#include "gsktypes.h"
 #include <graphene.h>
 G_BEGIN_DECLS
-void gsk_vulkan_border_op (GskVulkanRender *render,
-                           GskVulkanShaderClip clip,
+void gsk_gpu_border_op (GskGpuFrame *frame,
+                        GskGpuShaderClip clip,
                         const GskRoundedRect *outline,
                         const graphene_point_t *offset,
                         const graphene_point_t *inside_offset,
                         const float widths[4],
                         const GdkRGBA colors[4]);

112
gsk/gpu/gskgpuboxshadowop.c Normal file
View File

@@ -0,0 +1,112 @@
#include "config.h"
#include "gskgpuboxshadowopprivate.h"
#include "gskgpuframeprivate.h"
#include "gskgpuprintprivate.h"
#include "gskgpushaderopprivate.h"
#include "gskrectprivate.h"
#include "gsk/gskroundedrectprivate.h"
#include "gpu/shaders/gskgpuboxshadowinstance.h"
#define VARIATION_INSET 1
typedef struct _GskGpuBoxShadowOp GskGpuBoxShadowOp;
struct _GskGpuBoxShadowOp
{
GskGpuShaderOp op;
};
static void
gsk_gpu_box_shadow_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuShaderOp *shader = (GskGpuShaderOp *) op;
GskGpuBoxshadowInstance *instance;
instance = (GskGpuBoxshadowInstance *) gsk_gpu_frame_get_vertex_data (frame, shader->vertex_offset);
gsk_gpu_print_op (string, indent, shader->variation & VARIATION_INSET ? "inset-shadow" : "outset-shadow");
gsk_gpu_print_rounded_rect (string, instance->outline);
gsk_gpu_print_rgba (string, instance->color);
g_string_append_printf (string, "%g %g %g %g ",
instance->shadow_offset[0], instance->shadow_offset[1],
instance->blur_radius, instance->shadow_spread);
gsk_gpu_print_newline (string);
}
#ifdef GDK_RENDERING_VULKAN
static GskGpuOp *
gsk_gpu_box_shadow_op_vk_command (GskGpuOp *op,
GskGpuFrame *frame,
GskVulkanCommandState *state)
{
return gsk_gpu_shader_op_vk_command_n (op, frame, state, 8);
}
#endif
static GskGpuOp *
gsk_gpu_box_shadow_op_gl_command (GskGpuOp *op,
GskGpuFrame *frame,
GskGLCommandState *state)
{
return gsk_gpu_shader_op_gl_command_n (op, frame, state, 8);
}
static const GskGpuShaderOpClass GSK_GPU_BOX_SHADOW_OP_CLASS = {
{
GSK_GPU_OP_SIZE (GskGpuBoxShadowOp),
GSK_GPU_STAGE_SHADER,
gsk_gpu_shader_op_finish,
gsk_gpu_box_shadow_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_box_shadow_op_vk_command,
#endif
gsk_gpu_box_shadow_op_gl_command
},
"gskgpuboxshadow",
sizeof (GskGpuBoxshadowInstance),
#ifdef GDK_RENDERING_VULKAN
&gsk_gpu_boxshadow_info,
#endif
gsk_gpu_boxshadow_setup_attrib_locations,
gsk_gpu_boxshadow_setup_vao
};
void
gsk_gpu_box_shadow_op (GskGpuFrame *frame,
GskGpuShaderClip clip,
gboolean inset,
const graphene_rect_t *bounds,
const GskRoundedRect *outline,
const graphene_point_t *shadow_offset,
float spread,
float blur_radius,
const graphene_point_t *offset,
const GdkRGBA *color)
{
GskGpuBoxshadowInstance *instance;
/* Use border shader for no blurring */
g_return_if_fail (blur_radius > 0.0f);
gsk_gpu_shader_op_alloc (frame,
&GSK_GPU_BOX_SHADOW_OP_CLASS,
inset ? VARIATION_INSET : 0,
clip,
NULL,
&instance);
gsk_gpu_rect_to_float (bounds, offset, instance->bounds);
gsk_rounded_rect_to_float (outline, offset, instance->outline);
gsk_gpu_rgba_to_float (color, instance->color);
instance->shadow_offset[0] = shadow_offset->x;
instance->shadow_offset[1] = shadow_offset->y;
instance->shadow_spread = spread;
instance->blur_radius = blur_radius;
}

View File

@@ -0,0 +1,23 @@
#pragma once
#include "gskgputypesprivate.h"
#include "gsktypes.h"
#include <graphene.h>
G_BEGIN_DECLS
void gsk_gpu_box_shadow_op (GskGpuFrame *frame,
GskGpuShaderClip clip,
gboolean inset,
const graphene_rect_t *bounds,
const GskRoundedRect *outline,
const graphene_point_t *shadow_offset,
float spread,
float blur_radius,
const graphene_point_t *offset,
const GdkRGBA *color);
G_END_DECLS

52
gsk/gpu/gskgpubuffer.c Normal file
View File

@@ -0,0 +1,52 @@
#include "config.h"
#include "gskgpubufferprivate.h"
typedef struct _GskGpuBufferPrivate GskGpuBufferPrivate;
struct _GskGpuBufferPrivate
{
gsize size;
};
G_DEFINE_TYPE_WITH_PRIVATE (GskGpuBuffer, gsk_gpu_buffer, G_TYPE_OBJECT)
static void
gsk_gpu_buffer_class_init (GskGpuBufferClass *klass)
{
}
static void
gsk_gpu_buffer_init (GskGpuBuffer *self)
{
}
void
gsk_gpu_buffer_setup (GskGpuBuffer *self,
gsize size)
{
GskGpuBufferPrivate *priv = gsk_gpu_buffer_get_instance_private (self);
priv->size = size;
}
gsize
gsk_gpu_buffer_get_size (GskGpuBuffer *self)
{
GskGpuBufferPrivate *priv = gsk_gpu_buffer_get_instance_private (self);
return priv->size;
}
guchar *
gsk_gpu_buffer_map (GskGpuBuffer *self)
{
return GSK_GPU_BUFFER_GET_CLASS (self)->map (self);
}
void
gsk_gpu_buffer_unmap (GskGpuBuffer *self)
{
GSK_GPU_BUFFER_GET_CLASS (self)->unmap (self);
}
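
Typical caller-side use of the map/unmap pair (a sketch; the buffer itself comes from a backend-specific constructor):

guchar *data;

data = gsk_gpu_buffer_map (buffer);
memcpy (data, vertices, n_bytes);   /* must stay within gsk_gpu_buffer_get_size() */
gsk_gpu_buffer_unmap (buffer);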

View File

@@ -0,0 +1,42 @@
#pragma once
#include "gskgputypesprivate.h"
G_BEGIN_DECLS
#define GSK_TYPE_GPU_BUFFER (gsk_gpu_buffer_get_type ())
#define GSK_GPU_BUFFER(o) (G_TYPE_CHECK_INSTANCE_CAST ((o), GSK_TYPE_GPU_BUFFER, GskGpuBuffer))
#define GSK_GPU_BUFFER_CLASS(k) (G_TYPE_CHECK_CLASS_CAST ((k), GSK_TYPE_GPU_BUFFER, GskGpuBufferClass))
#define GSK_IS_GPU_BUFFER(o) (G_TYPE_CHECK_INSTANCE_TYPE ((o), GSK_TYPE_GPU_BUFFER))
#define GSK_IS_GPU_BUFFER_CLASS(k) (G_TYPE_CHECK_CLASS_TYPE ((k), GSK_TYPE_GPU_BUFFER))
#define GSK_GPU_BUFFER_GET_CLASS(o) (G_TYPE_INSTANCE_GET_CLASS ((o), GSK_TYPE_GPU_BUFFER, GskGpuBufferClass))
typedef struct _GskGpuBufferClass GskGpuBufferClass;
struct _GskGpuBuffer
{
GObject parent_instance;
};
struct _GskGpuBufferClass
{
GObjectClass parent_class;
guchar * (* map) (GskGpuBuffer *self);
void (* unmap) (GskGpuBuffer *self);
};
GType gsk_gpu_buffer_get_type (void) G_GNUC_CONST;
void gsk_gpu_buffer_setup (GskGpuBuffer *self,
gsize size);
gsize gsk_gpu_buffer_get_size (GskGpuBuffer *self);
guchar * gsk_gpu_buffer_map (GskGpuBuffer *self);
void gsk_gpu_buffer_unmap (GskGpuBuffer *self);
G_DEFINE_AUTOPTR_CLEANUP_FUNC(GskGpuBuffer, g_object_unref)
G_END_DECLS

125
gsk/gpu/gskgpuclearop.c Normal file
View File

@@ -0,0 +1,125 @@
#include "config.h"
#include "gskgpuclearopprivate.h"
#include "gskgpuopprivate.h"
#include "gskgpuprintprivate.h"
/* for gsk_gpu_rgba_to_float() */
#include "gskgpushaderopprivate.h"
typedef struct _GskGpuClearOp GskGpuClearOp;
struct _GskGpuClearOp
{
GskGpuOp op;
cairo_rectangle_int_t rect;
GdkRGBA color;
};
static void
gsk_gpu_clear_op_finish (GskGpuOp *op)
{
}
static void
gsk_gpu_clear_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuClearOp *self = (GskGpuClearOp *) op;
float rgba[4];
gsk_gpu_print_op (string, indent, "clear");
gsk_gpu_print_int_rect (string, &self->rect);
gsk_gpu_rgba_to_float (&self->color, rgba);
gsk_gpu_print_rgba (string, rgba);
gsk_gpu_print_newline (string);
}
#ifdef GDK_RENDERING_VULKAN
static void
gsk_gpu_init_clear_value (VkClearValue *value,
const GdkRGBA *rgba)
{
gsk_gpu_rgba_to_float (rgba, value->color.float32);
}
static GskGpuOp *
gsk_gpu_clear_op_vk_command (GskGpuOp *op,
GskGpuFrame *frame,
GskVulkanCommandState *state)
{
GskGpuClearOp *self = (GskGpuClearOp *) op;
VkClearValue clear_value;
gsk_gpu_init_clear_value (&clear_value, &self->color);
vkCmdClearAttachments (state->vk_command_buffer,
1,
&(VkClearAttachment) {
VK_IMAGE_ASPECT_COLOR_BIT,
0,
clear_value,
},
1,
&(VkClearRect) {
{
{ self->rect.x, self->rect.y },
{ self->rect.width, self->rect.height },
},
0,
1
});
return op->next;
}
#endif
static GskGpuOp *
gsk_gpu_clear_op_gl_command (GskGpuOp *op,
GskGpuFrame *frame,
GskGLCommandState *state)
{
GskGpuClearOp *self = (GskGpuClearOp *) op;
int scissor[4];
glGetIntegerv (GL_SCISSOR_BOX, scissor);
if (state->flip_y)
glScissor (self->rect.x, state->flip_y - self->rect.y - self->rect.height, self->rect.width, self->rect.height);
else
glScissor (self->rect.x, self->rect.y, self->rect.width, self->rect.height);
glClearColor (self->color.red, self->color.green, self->color.blue, self->color.alpha);
glClear (GL_COLOR_BUFFER_BIT);
glScissor (scissor[0], scissor[1], scissor[2], scissor[3]);
return op->next;
}
static const GskGpuOpClass GSK_GPU_CLEAR_OP_CLASS = {
GSK_GPU_OP_SIZE (GskGpuClearOp),
GSK_GPU_STAGE_COMMAND,
gsk_gpu_clear_op_finish,
gsk_gpu_clear_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_clear_op_vk_command,
#endif
gsk_gpu_clear_op_gl_command
};
void
gsk_gpu_clear_op (GskGpuFrame *frame,
const cairo_rectangle_int_t *rect,
const GdkRGBA *color)
{
GskGpuClearOp *self;
self = (GskGpuClearOp *) gsk_gpu_op_alloc (frame, &GSK_GPU_CLEAR_OP_CLASS);
self->rect = *rect;
self->color = *color;
}

View File

@@ -1,10 +1,10 @@
 #pragma once
-#include "gskvulkanopprivate.h"
+#include "gskgputypesprivate.h"
 G_BEGIN_DECLS
-void gsk_vulkan_clear_op (GskVulkanRender *render,
+void gsk_gpu_clear_op (GskGpuFrame *frame,
                        const cairo_rectangle_int_t *rect,
                        const GdkRGBA *color);

293
gsk/gpu/gskgpuclip.c Normal file
View File

@@ -0,0 +1,293 @@
#include "config.h"
#include "gskgpuclipprivate.h"
#include "gskrectprivate.h"
#include "gskroundedrectprivate.h"
#include "gsktransform.h"
void
gsk_gpu_clip_init_empty (GskGpuClip *clip,
const graphene_rect_t *rect)
{
clip->type = GSK_GPU_CLIP_NONE;
gsk_rounded_rect_init_from_rect (&clip->rect, rect, 0);
}
void
gsk_gpu_clip_init_rect (GskGpuClip *clip,
const graphene_rect_t *rect)
{
clip->type = GSK_GPU_CLIP_RECT;
gsk_rounded_rect_init_from_rect (&clip->rect, rect, 0);
}
void
gsk_gpu_clip_init_copy (GskGpuClip *self,
const GskGpuClip *src)
{
self->type = src->type;
gsk_rounded_rect_init_copy (&self->rect, &src->rect);
}
static gboolean
gsk_gpu_clip_init_after_intersection (GskGpuClip *self,
GskRoundedRectIntersection res)
{
if (res == GSK_INTERSECTION_NOT_REPRESENTABLE)
return FALSE;
if (res == GSK_INTERSECTION_EMPTY)
self->type = GSK_GPU_CLIP_ALL_CLIPPED;
else if (gsk_rounded_rect_is_rectilinear (&self->rect))
self->type = GSK_GPU_CLIP_RECT;
else
self->type = GSK_GPU_CLIP_ROUNDED;
return TRUE;
}
gboolean
gsk_gpu_clip_intersect_rect (GskGpuClip *dest,
const GskGpuClip *src,
const graphene_rect_t *rect)
{
GskRoundedRectIntersection res;
if (gsk_rect_contains_rect (rect, &src->rect.bounds))
{
gsk_gpu_clip_init_copy (dest, src);
return TRUE;
}
if (!gsk_rect_intersects (rect, &src->rect.bounds))
{
dest->type = GSK_GPU_CLIP_ALL_CLIPPED;
return TRUE;
}
switch (src->type)
{
case GSK_GPU_CLIP_ALL_CLIPPED:
dest->type = GSK_GPU_CLIP_ALL_CLIPPED;
break;
case GSK_GPU_CLIP_NONE:
gsk_gpu_clip_init_copy (dest, src);
if (gsk_rect_intersection (&dest->rect.bounds, rect, &dest->rect.bounds))
dest->type = GSK_GPU_CLIP_RECT;
else
dest->type = GSK_GPU_CLIP_ALL_CLIPPED;
break;
case GSK_GPU_CLIP_RECT:
gsk_gpu_clip_init_copy (dest, src);
if (!gsk_rect_intersection (&dest->rect.bounds, rect, &dest->rect.bounds))
dest->type = GSK_GPU_CLIP_ALL_CLIPPED;
break;
case GSK_GPU_CLIP_ROUNDED:
res = gsk_rounded_rect_intersect_with_rect (&src->rect, rect, &dest->rect);
if (!gsk_gpu_clip_init_after_intersection (dest, res))
return FALSE;
break;
default:
g_assert_not_reached ();
return FALSE;
}
return TRUE;
}
gboolean
gsk_gpu_clip_intersect_rounded_rect (GskGpuClip *dest,
const GskGpuClip *src,
const GskRoundedRect *rounded)
{
GskRoundedRectIntersection res;
if (gsk_rounded_rect_contains_rect (rounded, &src->rect.bounds))
{
gsk_gpu_clip_init_copy (dest, src);
return TRUE;
}
if (!gsk_rect_intersects (&rounded->bounds, &src->rect.bounds))
{
dest->type = GSK_GPU_CLIP_ALL_CLIPPED;
return TRUE;
}
switch (src->type)
{
case GSK_GPU_CLIP_ALL_CLIPPED:
dest->type = GSK_GPU_CLIP_ALL_CLIPPED;
break;
case GSK_GPU_CLIP_NONE:
case GSK_GPU_CLIP_RECT:
res = gsk_rounded_rect_intersect_with_rect (rounded, &src->rect.bounds, &dest->rect);
if (!gsk_gpu_clip_init_after_intersection (dest, res))
return FALSE;
break;
case GSK_GPU_CLIP_ROUNDED:
res = gsk_rounded_rect_intersection (&src->rect, rounded, &dest->rect);
if (!gsk_gpu_clip_init_after_intersection (dest, res))
return FALSE;
break;
default:
g_assert_not_reached ();
return FALSE;
}
return TRUE;
}
void
gsk_gpu_clip_scale (GskGpuClip *dest,
const GskGpuClip *src,
float scale_x,
float scale_y)
{
dest->type = src->type;
gsk_rounded_rect_scale_affine (&dest->rect,
&src->rect,
1.0f / scale_x, 1.0f / scale_y,
0, 0);
}
gboolean
gsk_gpu_clip_transform (GskGpuClip *dest,
const GskGpuClip *src,
GskTransform *transform,
const graphene_rect_t *viewport)
{
switch (src->type)
{
default:
g_assert_not_reached ();
return FALSE;
case GSK_GPU_CLIP_ALL_CLIPPED:
gsk_gpu_clip_init_copy (dest, src);
return TRUE;
case GSK_GPU_CLIP_NONE:
case GSK_GPU_CLIP_RECT:
case GSK_GPU_CLIP_ROUNDED:
switch (gsk_transform_get_category (transform))
{
case GSK_TRANSFORM_CATEGORY_IDENTITY:
gsk_gpu_clip_init_copy (dest, src);
return TRUE;
case GSK_TRANSFORM_CATEGORY_2D_TRANSLATE:
{
float dx, dy;
gsk_transform_to_translate (transform, &dx, &dy);
gsk_gpu_clip_init_copy (dest, src);
dest->rect.bounds.origin.x -= dx;
dest->rect.bounds.origin.y -= dy;
}
return TRUE;
case GSK_TRANSFORM_CATEGORY_2D_AFFINE:
{
float dx, dy, scale_x, scale_y;
gsk_transform_to_affine (transform, &scale_x, &scale_y, &dx, &dy);
scale_x = 1. / scale_x;
scale_y = 1. / scale_y;
gsk_gpu_clip_init_copy (dest, src);
dest->rect.bounds.origin.x = (dest->rect.bounds.origin.x - dx) * scale_x;
dest->rect.bounds.origin.y = (dest->rect.bounds.origin.y - dy) * scale_y;
dest->rect.bounds.size.width *= scale_x;
dest->rect.bounds.size.height *= scale_y;
if (src->type == GSK_GPU_CLIP_ROUNDED)
{
dest->rect.corner[0].width *= scale_x;
dest->rect.corner[0].height *= scale_y;
dest->rect.corner[1].width *= scale_x;
dest->rect.corner[1].height *= scale_y;
dest->rect.corner[2].width *= scale_x;
dest->rect.corner[2].height *= scale_y;
dest->rect.corner[3].width *= scale_x;
dest->rect.corner[3].height *= scale_y;
}
}
return TRUE;
case GSK_TRANSFORM_CATEGORY_UNKNOWN:
case GSK_TRANSFORM_CATEGORY_ANY:
case GSK_TRANSFORM_CATEGORY_3D:
case GSK_TRANSFORM_CATEGORY_2D:
default:
return FALSE;
}
}
}
gboolean
gsk_gpu_clip_may_intersect_rect (const GskGpuClip *self,
const graphene_point_t *offset,
const graphene_rect_t *rect)
{
graphene_rect_t r = *rect;
r.origin.x += offset->x;
r.origin.y += offset->y;
switch (self->type)
{
default:
g_assert_not_reached ();
case GSK_GPU_CLIP_ALL_CLIPPED:
return FALSE;
case GSK_GPU_CLIP_NONE:
case GSK_GPU_CLIP_RECT:
case GSK_GPU_CLIP_ROUNDED:
return gsk_rect_intersects (&self->rect.bounds, &r);
}
}
gboolean
gsk_gpu_clip_contains_rect (const GskGpuClip *self,
const graphene_point_t *offset,
const graphene_rect_t *rect)
{
graphene_rect_t r = *rect;
r.origin.x += offset->x;
r.origin.y += offset->y;
switch (self->type)
{
default:
g_assert_not_reached ();
case GSK_GPU_CLIP_ALL_CLIPPED:
return FALSE;
case GSK_GPU_CLIP_NONE:
case GSK_GPU_CLIP_RECT:
return gsk_rect_contains_rect (&self->rect.bounds, &r);
case GSK_GPU_CLIP_ROUNDED:
return gsk_rounded_rect_contains_rect (&self->rect, &r);
}
}
GskGpuShaderClip
gsk_gpu_clip_get_shader_clip (const GskGpuClip *self,
const graphene_point_t *offset,
const graphene_rect_t *rect)
{
if (self->type == GSK_GPU_CLIP_NONE ||
gsk_gpu_clip_contains_rect (self, offset, rect))
return GSK_GPU_SHADER_CLIP_NONE;
else if (self->type == GSK_GPU_CLIP_RECT)
return GSK_GPU_SHADER_CLIP_RECT;
else
return GSK_GPU_SHADER_CLIP_ROUNDED;
}
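
How a renderer might drive this clip tracking while walking render nodes (a sketch, not code from the diff):

GskGpuClip new_clip;

if (!gsk_gpu_clip_intersect_rect (&new_clip, &current_clip, clip_bounds))
  {
    /* intersection not representable (e.g. two overlapping rounded
     * corners): render the child offscreen and clip when compositing */
  }
else if (new_clip.type == GSK_GPU_CLIP_ALL_CLIPPED)
  {
    /* everything is clipped away: skip the subtree entirely */
  }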

View File

@@ -1,70 +1,66 @@
 #pragma once
+#include "gskgputypesprivate.h"
 #include <gdk/gdk.h>
 #include <graphene.h>
 #include <gsk/gskroundedrect.h>
 G_BEGIN_DECLS
-typedef enum {
-  GSK_VULKAN_SHADER_CLIP_NONE,
-  GSK_VULKAN_SHADER_CLIP_RECT,
-  GSK_VULKAN_SHADER_CLIP_ROUNDED
-} GskVulkanShaderClip;
 typedef enum {
   /* The whole area is clipped, no drawing is necessary.
    * This can't be handled by return values because for return
    * values we return if clips could even be computed.
    */
-  GSK_VULKAN_CLIP_ALL_CLIPPED,
+  GSK_GPU_CLIP_ALL_CLIPPED,
   /* No clipping is necessary, but the clip rect is set
    * to the actual bounds of the underlying framebuffer
    */
-  GSK_VULKAN_CLIP_NONE,
+  GSK_GPU_CLIP_NONE,
   /* The clip is a rectangular area */
-  GSK_VULKAN_CLIP_RECT,
+  GSK_GPU_CLIP_RECT,
   /* The clip is a rounded rectangle */
-  GSK_VULKAN_CLIP_ROUNDED
-} GskVulkanClipComplexity;
+  GSK_GPU_CLIP_ROUNDED
+} GskGpuClipComplexity;
-typedef struct _GskVulkanClip GskVulkanClip;
+typedef struct _GskGpuClip GskGpuClip;
-struct _GskVulkanClip
+struct _GskGpuClip
 {
-  GskVulkanClipComplexity type;
-  GskRoundedRect rect;
+  GskGpuClipComplexity type;
+  GskRoundedRect rect;
 };
-void gsk_vulkan_clip_init_empty (GskVulkanClip *clip,
+void gsk_gpu_clip_init_empty (GskGpuClip *clip,
                               const graphene_rect_t *rect);
-void gsk_vulkan_clip_init_copy (GskVulkanClip *self,
-                                const GskVulkanClip *src);
-void gsk_vulkan_clip_init_rect (GskVulkanClip *clip,
+void gsk_gpu_clip_init_copy (GskGpuClip *self,
+                             const GskGpuClip *src);
+void gsk_gpu_clip_init_rect (GskGpuClip *clip,
                              const graphene_rect_t *rect);
-gboolean gsk_vulkan_clip_intersect_rect (GskVulkanClip *dest,
-                                         const GskVulkanClip *src,
+gboolean gsk_gpu_clip_intersect_rect (GskGpuClip *dest,
+                                      const GskGpuClip *src,
                                       const graphene_rect_t *rect) G_GNUC_WARN_UNUSED_RESULT;
-gboolean gsk_vulkan_clip_intersect_rounded_rect (GskVulkanClip *dest,
-                                                 const GskVulkanClip *src,
+gboolean gsk_gpu_clip_intersect_rounded_rect (GskGpuClip *dest,
+                                              const GskGpuClip *src,
                                               const GskRoundedRect *rounded) G_GNUC_WARN_UNUSED_RESULT;
-void gsk_vulkan_clip_scale (GskVulkanClip *dest,
-                            const GskVulkanClip *src,
+void gsk_gpu_clip_scale (GskGpuClip *dest,
+                         const GskGpuClip *src,
                          float scale_x,
                          float scale_y);
-gboolean gsk_vulkan_clip_transform (GskVulkanClip *dest,
-                                    const GskVulkanClip *src,
+gboolean gsk_gpu_clip_transform (GskGpuClip *dest,
+                                 const GskGpuClip *src,
                                  GskTransform *transform,
                                  const graphene_rect_t *viewport) G_GNUC_WARN_UNUSED_RESULT;
-gboolean gsk_vulkan_clip_contains_rect (const GskVulkanClip *self,
+gboolean gsk_gpu_clip_contains_rect (const GskGpuClip *self,
                                      const graphene_point_t *offset,
                                      const graphene_rect_t *rect) G_GNUC_WARN_UNUSED_RESULT;
-gboolean gsk_vulkan_clip_may_intersect_rect (const GskVulkanClip *self,
+gboolean gsk_gpu_clip_may_intersect_rect (const GskGpuClip *self,
                                           const graphene_point_t *offset,
                                           const graphene_rect_t *rect) G_GNUC_WARN_UNUSED_RESULT;
-GskVulkanShaderClip gsk_vulkan_clip_get_shader_clip (const GskVulkanClip *self,
+GskGpuShaderClip gsk_gpu_clip_get_shader_clip (const GskGpuClip *self,
                                                const graphene_point_t *offset,
                                                const graphene_rect_t *rect);

View File

@@ -0,0 +1,79 @@
#include "config.h"
#include "gskgpucolorizeopprivate.h"
#include "gskgpuframeprivate.h"
#include "gskgpuprintprivate.h"
#include "gskrectprivate.h"
#include "gpu/shaders/gskgpucolorizeinstance.h"
typedef struct _GskGpuColorizeOp GskGpuColorizeOp;
struct _GskGpuColorizeOp
{
GskGpuShaderOp op;
};
static void
gsk_gpu_colorize_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuShaderOp *shader = (GskGpuShaderOp *) op;
GskGpuColorizeInstance *instance;
instance = (GskGpuColorizeInstance *) gsk_gpu_frame_get_vertex_data (frame, shader->vertex_offset);
gsk_gpu_print_op (string, indent, "colorize");
gsk_gpu_print_rect (string, instance->rect);
gsk_gpu_print_image_descriptor (string, shader->desc, instance->tex_id);
gsk_gpu_print_rgba (string, instance->color);
gsk_gpu_print_newline (string);
}
static const GskGpuShaderOpClass GSK_GPU_COLORIZE_OP_CLASS = {
{
GSK_GPU_OP_SIZE (GskGpuColorizeOp),
GSK_GPU_STAGE_SHADER,
gsk_gpu_shader_op_finish,
gsk_gpu_colorize_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_shader_op_vk_command,
#endif
gsk_gpu_shader_op_gl_command
},
"gskgpucolorize",
sizeof (GskGpuColorizeInstance),
#ifdef GDK_RENDERING_VULKAN
&gsk_gpu_colorize_info,
#endif
gsk_gpu_colorize_setup_attrib_locations,
gsk_gpu_colorize_setup_vao
};
void
gsk_gpu_colorize_op (GskGpuFrame *frame,
GskGpuShaderClip clip,
GskGpuDescriptors *descriptors,
guint32 descriptor,
const graphene_rect_t *rect,
const graphene_point_t *offset,
const graphene_rect_t *tex_rect,
const GdkRGBA *color)
{
GskGpuColorizeInstance *instance;
gsk_gpu_shader_op_alloc (frame,
&GSK_GPU_COLORIZE_OP_CLASS,
0,
clip,
descriptors,
&instance);
gsk_gpu_rect_to_float (rect, offset, instance->rect);
gsk_gpu_rect_to_float (tex_rect, offset, instance->tex_rect);
instance->tex_id = descriptor;
gsk_gpu_rgba_to_float (color, instance->color);
}

View File

@@ -1,12 +1,15 @@
 #pragma once
-#include "gskvulkanopprivate.h"
+#include "gskgpushaderopprivate.h"
 #include <graphene.h>
 G_BEGIN_DECLS
-void gsk_vulkan_glyph_op (GskVulkanRender *render,
-                          GskVulkanShaderClip clip,
-                          GskVulkanImage *image,
+void gsk_gpu_colorize_op (GskGpuFrame *frame,
+                          GskGpuShaderClip clip,
+                          GskGpuDescriptors *desc,
+                          guint32 descriptor,
                           const graphene_rect_t *rect,
                           const graphene_point_t *offset,
                           const graphene_rect_t *tex_rect,

View File

@@ -0,0 +1,111 @@
#include "config.h"
#include "gskgpucolormatrixopprivate.h"
#include "gskgpuframeprivate.h"
#include "gskgpuprintprivate.h"
#include "gskrectprivate.h"
#include "gpu/shaders/gskgpucolormatrixinstance.h"
typedef struct _GskGpuColorMatrixOp GskGpuColorMatrixOp;
struct _GskGpuColorMatrixOp
{
GskGpuShaderOp op;
};
static void
gsk_gpu_color_matrix_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuShaderOp *shader = (GskGpuShaderOp *) op;
GskGpuColormatrixInstance *instance;
instance = (GskGpuColormatrixInstance *) gsk_gpu_frame_get_vertex_data (frame, shader->vertex_offset);
gsk_gpu_print_op (string, indent, "color-matrix");
gsk_gpu_print_rect (string, instance->rect);
gsk_gpu_print_image_descriptor (string, shader->desc, instance->tex_id);
gsk_gpu_print_newline (string);
}
static const GskGpuShaderOpClass GSK_GPU_COLOR_MATRIX_OP_CLASS = {
{
GSK_GPU_OP_SIZE (GskGpuColorMatrixOp),
GSK_GPU_STAGE_SHADER,
gsk_gpu_shader_op_finish,
gsk_gpu_color_matrix_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_shader_op_vk_command,
#endif
gsk_gpu_shader_op_gl_command
},
"gskgpucolormatrix",
sizeof (GskGpuColormatrixInstance),
#ifdef GDK_RENDERING_VULKAN
&gsk_gpu_colormatrix_info,
#endif
gsk_gpu_colormatrix_setup_attrib_locations,
gsk_gpu_colormatrix_setup_vao
};
void
gsk_gpu_color_matrix_op (GskGpuFrame *frame,
GskGpuShaderClip clip,
GskGpuDescriptors *desc,
guint32 descriptor,
const graphene_rect_t *rect,
const graphene_point_t *offset,
const graphene_rect_t *tex_rect,
const graphene_matrix_t *color_matrix,
const graphene_vec4_t *color_offset)
{
GskGpuColormatrixInstance *instance;
gsk_gpu_shader_op_alloc (frame,
&GSK_GPU_COLOR_MATRIX_OP_CLASS,
0,
clip,
desc,
&instance);
gsk_gpu_rect_to_float (rect, offset, instance->rect);
gsk_gpu_rect_to_float (tex_rect, offset, instance->tex_rect);
instance->tex_id = descriptor;
graphene_matrix_to_float (color_matrix, instance->color_matrix);
graphene_vec4_to_float (color_offset, instance->color_offset);
}
void
gsk_gpu_color_matrix_op_opacity (GskGpuFrame *frame,
GskGpuShaderClip clip,
GskGpuDescriptors *desc,
guint32 descriptor,
const graphene_rect_t *rect,
const graphene_point_t *offset,
const graphene_rect_t *tex_rect,
float opacity)
{
graphene_matrix_t matrix;
graphene_matrix_init_from_float (&matrix,
(float[16]) {
1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
0, 0, 0, opacity
});
gsk_gpu_color_matrix_op (frame,
clip,
desc,
descriptor,
rect,
offset,
tex_rect,
&matrix,
graphene_vec4_zero ());
}
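
The opacity helper above is just a special case of the color matrix:

/* (r, g, b, a) -> (r, g, b, opacity * a): the identity matrix with
 * m[3][3] = opacity and a zero offset vector, so the whole source
 * fades uniformly. */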

View File

@@ -1,26 +1,29 @@
 #pragma once
-#include "gskvulkanopprivate.h"
+#include "gskgpushaderopprivate.h"
 #include <graphene.h>
 G_BEGIN_DECLS
-void gsk_vulkan_color_matrix_op (GskVulkanRender *render,
-                                 GskVulkanShaderClip clip,
-                                 GskVulkanImage *image,
+void gsk_gpu_color_matrix_op (GskGpuFrame *frame,
+                              GskGpuShaderClip clip,
+                              GskGpuDescriptors *desc,
+                              guint32 descriptor,
                               const graphene_rect_t *rect,
                               const graphene_point_t *offset,
                               const graphene_rect_t *tex_rect,
                               const graphene_matrix_t *color_matrix,
                               const graphene_vec4_t *color_offset);
-void gsk_vulkan_color_matrix_op_opacity (GskVulkanRender *render,
-                                         GskVulkanShaderClip clip,
-                                         GskVulkanImage *image,
+void gsk_gpu_color_matrix_op_opacity (GskGpuFrame *frame,
+                                      GskGpuShaderClip clip,
+                                      GskGpuDescriptors *desc,
+                                      guint32 descriptor,
                                       const graphene_rect_t *rect,
                                       const graphene_point_t *offset,
                                       const graphene_rect_t *tex_rect,
                                       float opacity);
 G_END_DECLS

74
gsk/gpu/gskgpucolorop.c Normal file
View File

@@ -0,0 +1,74 @@
#include "config.h"
#include "gskgpucoloropprivate.h"
#include "gskgpuframeprivate.h"
#include "gskgpuprintprivate.h"
#include "gskgpushaderopprivate.h"
#include "gskrectprivate.h"
#include "gpu/shaders/gskgpucolorinstance.h"
typedef struct _GskGpuColorOp GskGpuColorOp;
struct _GskGpuColorOp
{
GskGpuShaderOp op;
};
static void
gsk_gpu_color_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuShaderOp *shader = (GskGpuShaderOp *) op;
GskGpuColorInstance *instance;
instance = (GskGpuColorInstance *) gsk_gpu_frame_get_vertex_data (frame, shader->vertex_offset);
gsk_gpu_print_op (string, indent, "color");
gsk_gpu_print_rect (string, instance->rect);
gsk_gpu_print_rgba (string, instance->color);
gsk_gpu_print_newline (string);
}
static const GskGpuShaderOpClass GSK_GPU_COLOR_OP_CLASS = {
{
GSK_GPU_OP_SIZE (GskGpuColorOp),
GSK_GPU_STAGE_SHADER,
gsk_gpu_shader_op_finish,
gsk_gpu_color_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_shader_op_vk_command,
#endif
gsk_gpu_shader_op_gl_command
},
"gskgpucolor",
sizeof (GskGpuColorInstance),
#ifdef GDK_RENDERING_VULKAN
&gsk_gpu_color_info,
#endif
gsk_gpu_color_setup_attrib_locations,
gsk_gpu_color_setup_vao
};
void
gsk_gpu_color_op (GskGpuFrame *frame,
GskGpuShaderClip clip,
const graphene_rect_t *rect,
const graphene_point_t *offset,
const GdkRGBA *color)
{
GskGpuColorInstance *instance;
gsk_gpu_shader_op_alloc (frame,
&GSK_GPU_COLOR_OP_CLASS,
0,
clip,
NULL,
&instance);
gsk_gpu_rect_to_float (rect, offset, instance->rect);
gsk_gpu_rgba_to_float (color, instance->color);
}

View File

@@ -1,11 +1,13 @@
 #pragma once
-#include "gskvulkanopprivate.h"
+#include "gskgputypesprivate.h"
 #include <graphene.h>
 G_BEGIN_DECLS
-void gsk_vulkan_color_op (GskVulkanRender *render,
-                          GskVulkanShaderClip clip,
+void gsk_gpu_color_op (GskGpuFrame *frame,
+                       GskGpuShaderClip clip,
                        const graphene_rect_t *rect,
                        const graphene_point_t *offset,
                        const GdkRGBA *color);

View File

@@ -0,0 +1,95 @@
#include "config.h"
#include "gskgpuconicgradientopprivate.h"
#include "gskgpuframeprivate.h"
#include "gskgpuprintprivate.h"
#include "gskrectprivate.h"
#include "gpu/shaders/gskgpuconicgradientinstance.h"
#define VARIATION_SUPERSAMPLING (1 << 0)
typedef struct _GskGpuConicGradientOp GskGpuConicGradientOp;
struct _GskGpuConicGradientOp
{
GskGpuShaderOp op;
};
static void
gsk_gpu_conic_gradient_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuShaderOp *shader = (GskGpuShaderOp *) op;
GskGpuConicgradientInstance *instance;
instance = (GskGpuConicgradientInstance *) gsk_gpu_frame_get_vertex_data (frame, shader->vertex_offset);
gsk_gpu_print_op (string, indent, "conic-gradient");
gsk_gpu_print_rect (string, instance->rect);
gsk_gpu_print_newline (string);
}
static const GskGpuShaderOpClass GSK_GPU_CONIC_GRADIENT_OP_CLASS = {
{
GSK_GPU_OP_SIZE (GskGpuConicGradientOp),
GSK_GPU_STAGE_SHADER,
gsk_gpu_shader_op_finish,
gsk_gpu_conic_gradient_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_shader_op_vk_command,
#endif
gsk_gpu_shader_op_gl_command
},
"gskgpuconicgradient",
sizeof (GskGpuConicgradientInstance),
#ifdef GDK_RENDERING_VULKAN
&gsk_gpu_conicgradient_info,
#endif
gsk_gpu_conicgradient_setup_attrib_locations,
gsk_gpu_conicgradient_setup_vao
};
void
gsk_gpu_conic_gradient_op (GskGpuFrame *frame,
GskGpuShaderClip clip,
const graphene_rect_t *rect,
const graphene_point_t *center,
float angle,
const graphene_point_t *offset,
const GskColorStop *stops,
gsize n_stops)
{
GskGpuConicgradientInstance *instance;
g_assert (n_stops > 1);
g_assert (n_stops <= 7);
gsk_gpu_shader_op_alloc (frame,
&GSK_GPU_CONIC_GRADIENT_OP_CLASS,
(gsk_gpu_frame_should_optimize (frame, GSK_GPU_OPTIMIZE_GRADIENTS) ? VARIATION_SUPERSAMPLING : 0),
clip,
NULL,
&instance);
gsk_gpu_rect_to_float (rect, offset, instance->rect);
gsk_gpu_point_to_float (center, offset, instance->center);
instance->angle = angle;
gsk_gpu_rgba_to_float (&stops[MIN (n_stops - 1, 6)].color, instance->color6);
instance->offsets1[2] = stops[MIN (n_stops - 1, 6)].offset;
gsk_gpu_rgba_to_float (&stops[MIN (n_stops - 1, 5)].color, instance->color5);
instance->offsets1[1] = stops[MIN (n_stops - 1, 5)].offset;
gsk_gpu_rgba_to_float (&stops[MIN (n_stops - 1, 4)].color, instance->color4);
instance->offsets1[0] = stops[MIN (n_stops - 1, 4)].offset;
gsk_gpu_rgba_to_float (&stops[MIN (n_stops - 1, 3)].color, instance->color3);
instance->offsets0[3] = stops[MIN (n_stops - 1, 3)].offset;
gsk_gpu_rgba_to_float (&stops[MIN (n_stops - 1, 2)].color, instance->color2);
instance->offsets0[2] = stops[MIN (n_stops - 1, 2)].offset;
gsk_gpu_rgba_to_float (&stops[1].color, instance->color1);
instance->offsets0[1] = stops[1].offset;
gsk_gpu_rgba_to_float (&stops[0].color, instance->color0);
instance->offsets0[0] = stops[0].offset;
}
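
The MIN() ladder above implements clamp-to-last-stop packing:

/* The shader always reads 7 stops. With n_stops < 7, every unused
 * slot i receives stops[n_stops - 1] via MIN (n_stops - 1, i), so
 * the gradient simply stays constant past the last real stop. */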

View File

@@ -1,17 +1,21 @@
 #pragma once
-#include "gskvulkanopprivate.h"
+#include "gskgpushaderopprivate.h"
+#include "gskrendernode.h"
 #include <graphene.h>
 G_BEGIN_DECLS
-void gsk_vulkan_outset_shadow_op (GskVulkanRender *render,
-                                  GskVulkanShaderClip clip,
-                                  const GskRoundedRect *outline,
+void gsk_gpu_conic_gradient_op (GskGpuFrame *frame,
+                                GskGpuShaderClip clip,
+                                const graphene_rect_t *rect,
+                                const graphene_point_t *center,
+                                float angle,
                                 const graphene_point_t *offset,
-                                const GdkRGBA *color,
-                                const graphene_point_t *shadow_offset,
-                                float spread,
-                                float blur_radius);
+                                const GskColorStop *stops,
+                                gsize n_stops);
 G_END_DECLS

View File

@@ -0,0 +1,86 @@
#include "config.h"
#include "gskgpucrossfadeopprivate.h"
#include "gskgpuframeprivate.h"
#include "gskgpuprintprivate.h"
#include "gskrectprivate.h"
#include "gpu/shaders/gskgpucrossfadeinstance.h"
typedef struct _GskGpuCrossFadeOp GskGpuCrossFadeOp;
struct _GskGpuCrossFadeOp
{
GskGpuShaderOp op;
};
static void
gsk_gpu_cross_fade_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuShaderOp *shader = (GskGpuShaderOp *) op;
GskGpuCrossfadeInstance *instance;
instance = (GskGpuCrossfadeInstance *) gsk_gpu_frame_get_vertex_data (frame, shader->vertex_offset);
gsk_gpu_print_op (string, indent, "cross-fade");
gsk_gpu_print_rect (string, instance->rect);
gsk_gpu_print_image_descriptor (string, shader->desc, instance->start_id);
gsk_gpu_print_image_descriptor (string, shader->desc, instance->end_id);
g_string_append_printf (string, "%g%%", 100 * instance->opacity_progress[1]);
gsk_gpu_print_newline (string);
}
static const GskGpuShaderOpClass GSK_GPU_CROSS_FADE_OP_CLASS = {
{
GSK_GPU_OP_SIZE (GskGpuCrossFadeOp),
GSK_GPU_STAGE_SHADER,
gsk_gpu_shader_op_finish,
gsk_gpu_cross_fade_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_shader_op_vk_command,
#endif
gsk_gpu_shader_op_gl_command
},
"gskgpucrossfade",
sizeof (GskGpuCrossfadeInstance),
#ifdef GDK_RENDERING_VULKAN
&gsk_gpu_crossfade_info,
#endif
gsk_gpu_crossfade_setup_attrib_locations,
gsk_gpu_crossfade_setup_vao
};
void
gsk_gpu_cross_fade_op (GskGpuFrame *frame,
GskGpuShaderClip clip,
GskGpuDescriptors *desc,
const graphene_rect_t *rect,
const graphene_point_t *offset,
float opacity,
float progress,
guint32 start_descriptor,
const graphene_rect_t *start_rect,
guint32 end_descriptor,
const graphene_rect_t *end_rect)
{
GskGpuCrossfadeInstance *instance;
gsk_gpu_shader_op_alloc (frame,
&GSK_GPU_CROSS_FADE_OP_CLASS,
0,
clip,
desc,
&instance);
gsk_gpu_rect_to_float (rect, offset, instance->rect);
instance->opacity_progress[0] = opacity;
instance->opacity_progress[1] = progress;
gsk_gpu_rect_to_float (start_rect, offset, instance->start_rect);
instance->start_id = start_descriptor;
gsk_gpu_rect_to_float (end_rect, offset, instance->end_rect);
instance->end_id = end_descriptor;
}
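
A note on the packed vertex attribute used above:

/* opacity_progress packs two scalars into one vec2:
 *   [0] = overall opacity applied to the blended result,
 *   [1] = progress, the start->end mix factor that the print
 *         function reports as a percentage. */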

View File

@@ -1,20 +1,22 @@
 #pragma once
-#include "gskvulkanopprivate.h"
+#include "gskgpushaderopprivate.h"
 #include <graphene.h>
 G_BEGIN_DECLS
-void gsk_vulkan_cross_fade_op (GskVulkanRender *render,
-                               GskVulkanShaderClip clip,
-                               const graphene_rect_t *bounds,
+void gsk_gpu_cross_fade_op (GskGpuFrame *frame,
+                            GskGpuShaderClip clip,
+                            GskGpuDescriptors *desc,
+                            const graphene_rect_t *rect,
+                            const graphene_point_t *offset,
                             float opacity,
                             float progress,
-                            GskVulkanImage *start_image,
+                            guint32 start_descriptor,
                             const graphene_rect_t *start_rect,
-                            const graphene_rect_t *start_tex_rect,
-                            GskVulkanImage *end_image,
-                            const graphene_rect_t *end_rect,
-                            const graphene_rect_t *end_tex_rect);
+                            guint32 end_descriptor,
+                            const graphene_rect_t *end_rect);
 G_END_DECLS

240
gsk/gpu/gskgpudescriptors.c Normal file
View File

@@ -0,0 +1,240 @@
#include "config.h"
#include "gskgpudescriptorsprivate.h"
typedef struct _GskGpuImageEntry GskGpuImageEntry;
typedef struct _GskGpuBufferEntry GskGpuBufferEntry;
struct _GskGpuImageEntry
{
GskGpuImage *image;
GskGpuSampler sampler;
guint32 descriptor;
};
struct _GskGpuBufferEntry
{
GskGpuBuffer *buffer;
guint32 descriptor;
};
static void
gsk_gpu_image_entry_clear (gpointer data)
{
GskGpuImageEntry *entry = data;
g_object_unref (entry->image);
}
static void
gsk_gpu_buffer_entry_clear (gpointer data)
{
GskGpuBufferEntry *entry = data;
g_object_unref (entry->buffer);
}
#define GDK_ARRAY_NAME gsk_gpu_image_entries
#define GDK_ARRAY_TYPE_NAME GskGpuImageEntries
#define GDK_ARRAY_ELEMENT_TYPE GskGpuImageEntry
#define GDK_ARRAY_FREE_FUNC gsk_gpu_image_entry_clear
#define GDK_ARRAY_BY_VALUE 1
#define GDK_ARRAY_PREALLOC 16
#define GDK_ARRAY_NO_MEMSET 1
#include "gdk/gdkarrayimpl.c"
#define GDK_ARRAY_NAME gsk_gpu_buffer_entries
#define GDK_ARRAY_TYPE_NAME GskGpuBufferEntries
#define GDK_ARRAY_ELEMENT_TYPE GskGpuBufferEntry
#define GDK_ARRAY_FREE_FUNC gsk_gpu_buffer_entry_clear
#define GDK_ARRAY_BY_VALUE 1
#define GDK_ARRAY_PREALLOC 4
#define GDK_ARRAY_NO_MEMSET 1
#include "gdk/gdkarrayimpl.c"
typedef struct _GskGpuDescriptorsPrivate GskGpuDescriptorsPrivate;
struct _GskGpuDescriptorsPrivate
{
GskGpuImageEntries images;
GskGpuBufferEntries buffers;
};
G_DEFINE_TYPE_WITH_PRIVATE (GskGpuDescriptors, gsk_gpu_descriptors, G_TYPE_OBJECT)
static void
gsk_gpu_descriptors_finalize (GObject *object)
{
GskGpuDescriptors *self = GSK_GPU_DESCRIPTORS (object);
GskGpuDescriptorsPrivate *priv = gsk_gpu_descriptors_get_instance_private (self);
gsk_gpu_image_entries_clear (&priv->images);
gsk_gpu_buffer_entries_clear (&priv->buffers);
G_OBJECT_CLASS (gsk_gpu_descriptors_parent_class)->finalize (object);
}
static void
gsk_gpu_descriptors_class_init (GskGpuDescriptorsClass *klass)
{
GObjectClass *object_class = G_OBJECT_CLASS (klass);
object_class->finalize = gsk_gpu_descriptors_finalize;
}
static void
gsk_gpu_descriptors_init (GskGpuDescriptors *self)
{
GskGpuDescriptorsPrivate *priv = gsk_gpu_descriptors_get_instance_private (self);
gsk_gpu_image_entries_init (&priv->images);
gsk_gpu_buffer_entries_init (&priv->buffers);
}
gsize
gsk_gpu_descriptors_get_n_images (GskGpuDescriptors *self)
{
GskGpuDescriptorsPrivate *priv = gsk_gpu_descriptors_get_instance_private (self);
return gsk_gpu_image_entries_get_size (&priv->images);
}
gsize
gsk_gpu_descriptors_get_n_buffers (GskGpuDescriptors *self)
{
GskGpuDescriptorsPrivate *priv = gsk_gpu_descriptors_get_instance_private (self);
return gsk_gpu_buffer_entries_get_size (&priv->buffers);
}
void
gsk_gpu_descriptors_set_size (GskGpuDescriptors *self,
gsize n_images,
gsize n_buffers)
{
GskGpuDescriptorsPrivate *priv = gsk_gpu_descriptors_get_instance_private (self);
g_assert (n_images <= gsk_gpu_image_entries_get_size (&priv->images));
gsk_gpu_image_entries_set_size (&priv->images, n_images);
g_assert (n_buffers <= gsk_gpu_buffer_entries_get_size (&priv->buffers));
gsk_gpu_buffer_entries_set_size (&priv->buffers, n_buffers);
}
GskGpuImage *
gsk_gpu_descriptors_get_image (GskGpuDescriptors *self,
gsize id)
{
GskGpuDescriptorsPrivate *priv = gsk_gpu_descriptors_get_instance_private (self);
const GskGpuImageEntry *entry = gsk_gpu_image_entries_get (&priv->images, id);
return entry->image;
}
GskGpuSampler
gsk_gpu_descriptors_get_sampler (GskGpuDescriptors *self,
gsize id)
{
GskGpuDescriptorsPrivate *priv = gsk_gpu_descriptors_get_instance_private (self);
const GskGpuImageEntry *entry = gsk_gpu_image_entries_get (&priv->images, id);
return entry->sampler;
}
gsize
gsk_gpu_descriptors_find_image (GskGpuDescriptors *self,
guint32 descriptor)
{
GskGpuDescriptorsPrivate *priv = gsk_gpu_descriptors_get_instance_private (self);
gsize i;
for (i = 0; i < gsk_gpu_image_entries_get_size (&priv->images); i++)
{
const GskGpuImageEntry *entry = gsk_gpu_image_entries_get (&priv->images, i);
if (entry->descriptor == descriptor)
return i;
}
g_return_val_if_reached ((gsize) -1);
}
GskGpuBuffer *
gsk_gpu_descriptors_get_buffer (GskGpuDescriptors *self,
gsize id)
{
GskGpuDescriptorsPrivate *priv = gsk_gpu_descriptors_get_instance_private (self);
const GskGpuBufferEntry *entry = gsk_gpu_buffer_entries_get (&priv->buffers, id);
return entry->buffer;
}
gboolean
gsk_gpu_descriptors_add_image (GskGpuDescriptors *self,
GskGpuImage *image,
GskGpuSampler sampler,
guint32 *out_descriptor)
{
GskGpuDescriptorsPrivate *priv = gsk_gpu_descriptors_get_instance_private (self);
gsize i;
guint32 descriptor;
for (i = 0; i < gsk_gpu_image_entries_get_size (&priv->images); i++)
{
const GskGpuImageEntry *entry = gsk_gpu_image_entries_get (&priv->images, i);
if (entry->image == image && entry->sampler == sampler)
{
*out_descriptor = entry->descriptor;
return TRUE;
}
}
if (!GSK_GPU_DESCRIPTORS_GET_CLASS (self)->add_image (self, image, sampler, &descriptor))
return FALSE;
gsk_gpu_image_entries_append (&priv->images,
&(GskGpuImageEntry) {
.image = g_object_ref (image),
.sampler = sampler,
.descriptor = descriptor
});
*out_descriptor = descriptor;
return TRUE;
}
gboolean
gsk_gpu_descriptors_add_buffer (GskGpuDescriptors *self,
GskGpuBuffer *buffer,
guint32 *out_descriptor)
{
GskGpuDescriptorsPrivate *priv = gsk_gpu_descriptors_get_instance_private (self);
gsize i;
guint32 descriptor;
for (i = 0; i < gsk_gpu_buffer_entries_get_size (&priv->buffers); i++)
{
const GskGpuBufferEntry *entry = gsk_gpu_buffer_entries_get (&priv->buffers, i);
if (entry->buffer == buffer)
{
*out_descriptor = entry->descriptor;
return TRUE;
}
}
if (!GSK_GPU_DESCRIPTORS_GET_CLASS (self)->add_buffer (self, buffer, &descriptor))
return FALSE;
gsk_gpu_buffer_entries_append (&priv->buffers,
&(GskGpuBufferEntry) {
.buffer = g_object_ref (buffer),
.descriptor = descriptor
});
*out_descriptor = descriptor;
return TRUE;
}
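
Caller-side sketch of the deduplicating lookup (the sampler constant comes from this codebase's GskGpuSampler enum; treat the error handling as hypothetical):

guint32 descriptor;

if (!gsk_gpu_descriptors_add_image (desc, image,
                                    GSK_GPU_SAMPLER_DEFAULT,
                                    &descriptor))
  {
    /* backend is out of slots: flush and begin a new descriptor set */
  }
/* asking again for the same (image, sampler) pair is cheap: the
 * cached descriptor is returned without calling into the backend */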

View File

@@ -0,0 +1,61 @@
#pragma once
#include "gskgputypesprivate.h"
G_BEGIN_DECLS
#define GSK_TYPE_GPU_DESCRIPTORS (gsk_gpu_descriptors_get_type ())
#define GSK_GPU_DESCRIPTORS(o) (G_TYPE_CHECK_INSTANCE_CAST ((o), GSK_TYPE_GPU_DESCRIPTORS, GskGpuDescriptors))
#define GSK_GPU_DESCRIPTORS_CLASS(k) (G_TYPE_CHECK_CLASS_CAST ((k), GSK_TYPE_GPU_DESCRIPTORS, GskGpuDescriptorsClass))
#define GSK_IS_GPU_DESCRIPTORS(o) (G_TYPE_CHECK_INSTANCE_TYPE ((o), GSK_TYPE_GPU_DESCRIPTORS))
#define GSK_IS_GPU_DESCRIPTORS_CLASS(k) (G_TYPE_CHECK_CLASS_TYPE ((k), GSK_TYPE_GPU_DESCRIPTORS))
#define GSK_GPU_DESCRIPTORS_GET_CLASS(o) (G_TYPE_INSTANCE_GET_CLASS ((o), GSK_TYPE_GPU_DESCRIPTORS, GskGpuDescriptorsClass))
typedef struct _GskGpuDescriptorsClass GskGpuDescriptorsClass;
struct _GskGpuDescriptors
{
GObject parent_instance;
};
struct _GskGpuDescriptorsClass
{
GObjectClass parent_class;
gboolean (* add_image) (GskGpuDescriptors *self,
GskGpuImage *image,
GskGpuSampler sampler,
guint32 *out_id);
gboolean (* add_buffer) (GskGpuDescriptors *self,
GskGpuBuffer *buffer,
guint32 *out_id);
};
GType gsk_gpu_descriptors_get_type (void) G_GNUC_CONST;
gsize gsk_gpu_descriptors_get_n_images (GskGpuDescriptors *self);
gsize gsk_gpu_descriptors_get_n_buffers (GskGpuDescriptors *self);
void gsk_gpu_descriptors_set_size (GskGpuDescriptors *self,
gsize n_images,
gsize n_buffers);
GskGpuImage * gsk_gpu_descriptors_get_image (GskGpuDescriptors *self,
gsize id);
GskGpuSampler gsk_gpu_descriptors_get_sampler (GskGpuDescriptors *self,
gsize id);
gsize gsk_gpu_descriptors_find_image (GskGpuDescriptors *self,
guint32 descriptor);
GskGpuBuffer * gsk_gpu_descriptors_get_buffer (GskGpuDescriptors *self,
gsize id);
gboolean gsk_gpu_descriptors_add_image (GskGpuDescriptors *self,
GskGpuImage *image,
GskGpuSampler sampler,
guint32 *out_descriptor);
gboolean gsk_gpu_descriptors_add_buffer (GskGpuDescriptors *self,
GskGpuBuffer *buffer,
guint32 *out_descriptor);
G_DEFINE_AUTOPTR_CLEANUP_FUNC(GskGpuDescriptors, g_object_unref)
G_END_DECLS

723
gsk/gpu/gskgpudevice.c Normal file
View File

@@ -0,0 +1,723 @@
#include "config.h"
#include "gskgpudeviceprivate.h"
#include "gskgpuframeprivate.h"
#include "gskgpuuploadopprivate.h"
#include "gdk/gdkdisplayprivate.h"
#include "gdk/gdktextureprivate.h"
#define MAX_SLICES_PER_ATLAS 64
#define ATLAS_SIZE 1024
#define MAX_ATLAS_ITEM_SIZE 256
typedef struct _GskGpuCached GskGpuCached;
typedef struct _GskGpuCachedClass GskGpuCachedClass;
typedef struct _GskGpuCachedAtlas GskGpuCachedAtlas;
typedef struct _GskGpuCachedGlyph GskGpuCachedGlyph;
typedef struct _GskGpuCachedTexture GskGpuCachedTexture;
typedef struct _GskGpuDevicePrivate GskGpuDevicePrivate;
struct _GskGpuDevicePrivate
{
GdkDisplay *display;
gsize max_image_size;
GskGpuCached *first_cached;
GskGpuCached *last_cached;
guint cache_gc_source;
GHashTable *texture_cache;
GHashTable *glyph_cache;
GskGpuCachedAtlas *current_atlas;
};
G_DEFINE_TYPE_WITH_PRIVATE (GskGpuDevice, gsk_gpu_device, G_TYPE_OBJECT)
/* {{{ Cached base class */
struct _GskGpuCachedClass
{
gsize size;
void (* free) (GskGpuDevice *device,
GskGpuCached *cached);
gboolean (* should_collect) (GskGpuDevice *device,
GskGpuCached *cached,
gint64 timestamp);
};
struct _GskGpuCached
{
const GskGpuCachedClass *class;
GskGpuCachedAtlas *atlas;
GskGpuCached *next;
GskGpuCached *prev;
};
static void
gsk_gpu_cached_free (GskGpuDevice *device,
GskGpuCached *cached)
{
GskGpuDevicePrivate *priv = gsk_gpu_device_get_instance_private (device);
if (cached->next)
cached->next->prev = cached->prev;
else
priv->last_cached = cached->prev;
if (cached->prev)
cached->prev->next = cached->next;
else
priv->first_cached = cached->next;
cached->class->free (device, cached);
}
static gboolean
gsk_gpu_cached_should_collect (GskGpuDevice *device,
GskGpuCached *cached,
gint64 timestamp)
{
return cached->class->should_collect (device, cached, timestamp);
}
static gpointer
gsk_gpu_cached_new (GskGpuDevice *device,
const GskGpuCachedClass *class,
GskGpuCachedAtlas *atlas)
{
GskGpuDevicePrivate *priv = gsk_gpu_device_get_instance_private (device);
GskGpuCached *cached;
cached = g_malloc0 (class->size);
cached->class = class;
cached->atlas = atlas;
cached->prev = priv->last_cached;
priv->last_cached = cached;
if (cached->prev)
cached->prev->next = cached;
else
priv->first_cached = cached;
return cached;
}
static void
gsk_gpu_cached_use (GskGpuDevice *device,
GskGpuCached *cached,
gint64 timestamp)
{
/* FIXME */
}
/* }}} */
/* {{{ CachedAtlas */
struct _GskGpuCachedAtlas
{
GskGpuCached parent;
GskGpuImage *image;
gsize n_slices;
struct {
gsize width;
gsize height;
} slices[MAX_SLICES_PER_ATLAS];
};
static void
gsk_gpu_cached_atlas_free (GskGpuDevice *device,
GskGpuCached *cached)
{
GskGpuCachedAtlas *self = (GskGpuCachedAtlas *) cached;
g_object_unref (self->image);
g_free (self);
}
static gboolean
gsk_gpu_cached_atlas_should_collect (GskGpuDevice *device,
GskGpuCached *cached,
gint64 timestamp)
{
/* FIXME */
return FALSE;
}
static const GskGpuCachedClass GSK_GPU_CACHED_ATLAS_CLASS =
{
sizeof (GskGpuCachedAtlas),
gsk_gpu_cached_atlas_free,
gsk_gpu_cached_atlas_should_collect
};
static GskGpuCachedAtlas *
gsk_gpu_cached_atlas_new (GskGpuDevice *device)
{
GskGpuCachedAtlas *self;
self = gsk_gpu_cached_new (device, &GSK_GPU_CACHED_ATLAS_CLASS, NULL);
self->image = GSK_GPU_DEVICE_GET_CLASS (device)->create_atlas_image (device, ATLAS_SIZE, ATLAS_SIZE);
return self;
}
/* }}} */
/* {{{ CachedTexture */
struct _GskGpuCachedTexture
{
GskGpuCached parent;
/* atomic */ GdkTexture *texture;
GskGpuImage *image;
};
static void
gsk_gpu_cached_texture_free (GskGpuDevice *device,
GskGpuCached *cached)
{
GskGpuCachedTexture *self = (GskGpuCachedTexture *) cached;
gboolean texture_still_alive;
texture_still_alive = g_atomic_pointer_exchange (&self->texture, NULL) != NULL;
g_object_unref (self->image);
if (!texture_still_alive)
g_free (self);
}
static gboolean
gsk_gpu_cached_texture_should_collect (GskGpuDevice *device,
GskGpuCached *cached,
gint64 timestamp)
{
/* FIXME */
return FALSE;
}
static const GskGpuCachedClass GSK_GPU_CACHED_TEXTURE_CLASS =
{
sizeof (GskGpuCachedTexture),
gsk_gpu_cached_texture_free,
gsk_gpu_cached_texture_should_collect
};
static void
gsk_gpu_cached_texture_destroy_cb (gpointer data)
{
GskGpuCachedTexture *cache = data;
gboolean cache_still_alive;
cache_still_alive = g_atomic_pointer_exchange (&cache->texture, NULL) != NULL;
if (!cache_still_alive)
g_free (cache);
}
static GskGpuCachedTexture *
gsk_gpu_cached_texture_new (GskGpuDevice *device,
GdkTexture *texture,
GskGpuImage *image)
{
GskGpuDevicePrivate *priv = gsk_gpu_device_get_instance_private (device);
GskGpuCachedTexture *self;
if (gdk_texture_get_render_data (texture, device))
gdk_texture_clear_render_data (texture);
else if ((self = g_hash_table_lookup (priv->texture_cache, texture)))
{
g_hash_table_remove (priv->texture_cache, texture);
g_object_weak_unref (G_OBJECT (texture), (GWeakNotify) gsk_gpu_cached_texture_destroy_cb, self);
}
self = gsk_gpu_cached_new (device, &GSK_GPU_CACHED_TEXTURE_CLASS, NULL);
self->texture = texture;
self->image = g_object_ref (image);
if (!gdk_texture_set_render_data (texture, device, self, gsk_gpu_cached_texture_destroy_cb))
{
g_object_weak_ref (G_OBJECT (texture), (GWeakNotify) gsk_gpu_cached_texture_destroy_cb, self);
g_hash_table_insert (priv->texture_cache, texture, self);
}
return self;
}
/* }}} */
/* {{{ CachedGlyph */
struct _GskGpuCachedGlyph
{
GskGpuCached parent;
PangoFont *font;
PangoGlyph glyph;
GskGpuGlyphLookupFlags flags;
float scale;
GskGpuImage *image;
graphene_rect_t bounds;
graphene_point_t origin;
};
static void
gsk_gpu_cached_glyph_free (GskGpuDevice *device,
GskGpuCached *cached)
{
GskGpuCachedGlyph *self = (GskGpuCachedGlyph *) cached;
g_object_unref (self->font);
g_object_unref (self->image);
g_free (self);
}
static gboolean
gsk_gpu_cached_glyph_should_collect (GskGpuDevice *device,
GskGpuCached *cached,
gint64 timestamp)
{
/* FIXME */
return FALSE;
}
static guint
gsk_gpu_cached_glyph_hash (gconstpointer data)
{
const GskGpuCachedGlyph *glyph = data;
return GPOINTER_TO_UINT (glyph->font) ^
glyph->glyph ^
(glyph->flags << 24) ^
((guint) (glyph->scale * PANGO_SCALE));
}
static gboolean
gsk_gpu_cached_glyph_equal (gconstpointer v1,
gconstpointer v2)
{
const GskGpuCachedGlyph *glyph1 = v1;
const GskGpuCachedGlyph *glyph2 = v2;
return glyph1->font == glyph2->font
&& glyph1->glyph == glyph2->glyph
&& glyph1->flags == glyph2->flags
&& glyph1->scale == glyph2->scale;
}
static const GskGpuCachedClass GSK_GPU_CACHED_GLYPH_CLASS =
{
sizeof (GskGpuCachedGlyph),
gsk_gpu_cached_glyph_free,
gsk_gpu_cached_glyph_should_collect
};
/* }}} */
/* {{{ GskGpuDevice */
void
gsk_gpu_device_gc (GskGpuDevice *self,
gint64 timestamp)
{
GskGpuDevicePrivate *priv = gsk_gpu_device_get_instance_private (self);
GskGpuCached *cached, *next;
for (cached = priv->first_cached; cached != NULL; cached = next)
{
next = cached->next;
if (gsk_gpu_cached_should_collect (self, cached, timestamp))
gsk_gpu_cached_free (self, cached);
}
}
static void
gsk_gpu_device_clear_cache (GskGpuDevice *self)
{
GskGpuDevicePrivate *priv = gsk_gpu_device_get_instance_private (self);
for (GskGpuCached *cached = priv->first_cached; cached; cached = cached->next)
{
if (cached->prev == NULL)
g_assert (priv->first_cached == cached);
else
g_assert (cached->prev->next == cached);
if (cached->next == NULL)
g_assert (priv->last_cached == cached);
else
g_assert (cached->next->prev == cached);
}
while (priv->first_cached)
gsk_gpu_cached_free (self, priv->first_cached);
g_assert (priv->last_cached == NULL);
}
static void
gsk_gpu_device_dispose (GObject *object)
{
GskGpuDevice *self = GSK_GPU_DEVICE (object);
GskGpuDevicePrivate *priv = gsk_gpu_device_get_instance_private (self);
gsk_gpu_device_clear_cache (self);
g_hash_table_unref (priv->glyph_cache);
g_hash_table_unref (priv->texture_cache);
G_OBJECT_CLASS (gsk_gpu_device_parent_class)->dispose (object);
}
static void
gsk_gpu_device_finalize (GObject *object)
{
GskGpuDevice *self = GSK_GPU_DEVICE (object);
GskGpuDevicePrivate *priv = gsk_gpu_device_get_instance_private (self);
g_object_unref (priv->display);
G_OBJECT_CLASS (gsk_gpu_device_parent_class)->finalize (object);
}
static void
gsk_gpu_device_class_init (GskGpuDeviceClass *klass)
{
GObjectClass *object_class = G_OBJECT_CLASS (klass);
object_class->dispose = gsk_gpu_device_dispose;
object_class->finalize = gsk_gpu_device_finalize;
}
static void
gsk_gpu_device_init (GskGpuDevice *self)
{
GskGpuDevicePrivate *priv = gsk_gpu_device_get_instance_private (self);
priv->glyph_cache = g_hash_table_new (gsk_gpu_cached_glyph_hash,
gsk_gpu_cached_glyph_equal);
priv->texture_cache = g_hash_table_new (g_direct_hash,
g_direct_equal);
}
void
gsk_gpu_device_setup (GskGpuDevice *self,
GdkDisplay *display,
gsize max_image_size)
{
GskGpuDevicePrivate *priv = gsk_gpu_device_get_instance_private (self);
priv->display = g_object_ref (display);
priv->max_image_size = max_image_size;
}
GdkDisplay *
gsk_gpu_device_get_display (GskGpuDevice *self)
{
GskGpuDevicePrivate *priv = gsk_gpu_device_get_instance_private (self);
return priv->display;
}
gsize
gsk_gpu_device_get_max_image_size (GskGpuDevice *self)
{
GskGpuDevicePrivate *priv = gsk_gpu_device_get_instance_private (self);
return priv->max_image_size;
}
GskGpuImage *
gsk_gpu_device_create_offscreen_image (GskGpuDevice *self,
gboolean with_mipmap,
GdkMemoryDepth depth,
gsize width,
gsize height)
{
return GSK_GPU_DEVICE_GET_CLASS (self)->create_offscreen_image (self, with_mipmap, depth, width, height);
}
GskGpuImage *
gsk_gpu_device_create_upload_image (GskGpuDevice *self,
gboolean with_mipmap,
GdkMemoryFormat format,
gsize width,
gsize height)
{
return GSK_GPU_DEVICE_GET_CLASS (self)->create_upload_image (self, with_mipmap, format, width, height);
}
GskGpuImage *
gsk_gpu_device_create_download_image (GskGpuDevice *self,
GdkMemoryDepth depth,
gsize width,
gsize height)
{
return GSK_GPU_DEVICE_GET_CLASS (self)->create_download_image (self, depth, width, height);
}
/* This rounds up to the next number that has <= 2 bits set:
* 1, 2, 3, 4, 6, 8, 12, 16, 24, 32, 48, 64, 96, 128, ...
* Consecutive values grow by a factor of roughly sqrt(2), so waste stays bounded
*/
static gsize
round_up_atlas_size (gsize num)
{
gsize storage = g_bit_storage (num);
num = num + (((1 << storage) - 1) >> 2);
num &= (((gsize) 7) << storage) >> 2;
return num;
}
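As a sanity check of the bit trick above, a hypothetical test (not part of this change) would pass:
/* Hypothetical check of round_up_atlas_size (); each input rounds up
 * to the next value with <= 2 bits set, per the comment above. */
static void
test_round_up_atlas_size (void)
{
  g_assert_cmpuint (round_up_atlas_size (5), ==, 6);   /* 101  -> 110   */
  g_assert_cmpuint (round_up_atlas_size (7), ==, 8);   /* 111  -> 1000  */
  g_assert_cmpuint (round_up_atlas_size (9), ==, 12);  /* 1001 -> 1100  */
  g_assert_cmpuint (round_up_atlas_size (13), ==, 16); /* 1101 -> 10000 */
}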
static gboolean
gsk_gpu_cached_atlas_allocate (GskGpuCachedAtlas *atlas,
gsize width,
gsize height,
gsize *out_x,
gsize *out_y)
{
gsize i;
gsize waste, slice_waste;
gsize best_slice;
gsize y, best_y;
gboolean can_add_slice;
best_y = 0;
best_slice = G_MAXSIZE;
can_add_slice = atlas->n_slices < MAX_SLICES_PER_ATLAS;
if (can_add_slice)
waste = height; /* Require less than 100% waste */
else
waste = G_MAXSIZE; /* Accept any slice, we can't make better ones */
for (i = 0, y = 0; i < atlas->n_slices; y += atlas->slices[i].height, i++)
{
if (atlas->slices[i].height < height || ATLAS_SIZE - atlas->slices[i].width < width)
continue;
slice_waste = atlas->slices[i].height - height;
if (slice_waste < waste)
{
waste = slice_waste;
best_slice = i;
best_y = y;
if (waste == 0)
break;
}
}
if (best_slice >= i && i == atlas->n_slices)
{
if (!can_add_slice)
return FALSE;
atlas->n_slices++;
if (atlas->n_slices == MAX_SLICES_PER_ATLAS)
atlas->slices[i].height = ATLAS_SIZE - y;
else
atlas->slices[i].height = round_up_atlas_size (MAX (height, 4));
atlas->slices[i].width = 0;
best_y = y;
best_slice = i;
}
*out_x = atlas->slices[best_slice].width;
*out_y = best_y;
atlas->slices[best_slice].width += width;
g_assert (atlas->slices[best_slice].width <= ATLAS_SIZE);
return TRUE;
}
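A worked example of the slice ("shelf") allocator above: a first request for a 20×18 item finds no slice, so a new slice of height round_up_atlas_size (18) == 24 opens at y == 0 and the item lands at (0, 0); a following 30×20 request fits that slice with 4 pixels of height waste and lands at (20, 0). A new slice is only stacked below once no existing slice is tall enough or has enough remaining width.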
static void
gsk_gpu_device_ensure_atlas (GskGpuDevice *self,
gboolean recreate,
gint64 timestamp)
{
GskGpuDevicePrivate *priv = gsk_gpu_device_get_instance_private (self);
if (priv->current_atlas && !recreate)
return;
priv->current_atlas = gsk_gpu_cached_atlas_new (self);
}
GskGpuImage *
gsk_gpu_device_get_atlas_image (GskGpuDevice *self)
{
GskGpuDevicePrivate *priv = gsk_gpu_device_get_instance_private (self);
gsk_gpu_device_ensure_atlas (self, FALSE, g_get_monotonic_time ());
return priv->current_atlas->image;
}
static GskGpuImage *
gsk_gpu_device_add_atlas_image (GskGpuDevice *self,
gint64 timestamp,
gsize width,
gsize height,
gsize *out_x,
gsize *out_y)
{
GskGpuDevicePrivate *priv = gsk_gpu_device_get_instance_private (self);
if (width > MAX_ATLAS_ITEM_SIZE || height > MAX_ATLAS_ITEM_SIZE)
return NULL;
gsk_gpu_device_ensure_atlas (self, FALSE, timestamp);
if (gsk_gpu_cached_atlas_allocate (priv->current_atlas, width, height, out_x, out_y))
{
gsk_gpu_cached_use (self, (GskGpuCached *) priv->current_atlas, timestamp);
return priv->current_atlas->image;
}
gsk_gpu_device_ensure_atlas (self, TRUE, timestamp);
if (gsk_gpu_cached_atlas_allocate (priv->current_atlas, width, height, out_x, out_y))
{
gsk_gpu_cached_use (self, (GskGpuCached *) priv->current_atlas, timestamp);
return priv->current_atlas->image;
}
return NULL;
}
GskGpuImage *
gsk_gpu_device_lookup_texture_image (GskGpuDevice *self,
GdkTexture *texture,
gint64 timestamp)
{
GskGpuDevicePrivate *priv = gsk_gpu_device_get_instance_private (self);
GskGpuCachedTexture *cache;
cache = gdk_texture_get_render_data (texture, self);
if (cache == NULL)
cache = g_hash_table_lookup (priv->texture_cache, texture);
if (cache)
{
return g_object_ref (cache->image);
}
return NULL;
}
void
gsk_gpu_device_cache_texture_image (GskGpuDevice *self,
GdkTexture *texture,
gint64 timestamp,
GskGpuImage *image)
{
GskGpuCachedTexture *cache;
cache = gsk_gpu_cached_texture_new (self, texture, image);
gsk_gpu_cached_use (self, (GskGpuCached *) cache, timestamp);
}
GskGpuImage *
gsk_gpu_device_lookup_glyph_image (GskGpuDevice *self,
GskGpuFrame *frame,
PangoFont *font,
PangoGlyph glyph,
GskGpuGlyphLookupFlags flags,
float scale,
graphene_rect_t *out_bounds,
graphene_point_t *out_origin)
{
GskGpuDevicePrivate *priv = gsk_gpu_device_get_instance_private (self);
GskGpuCachedGlyph lookup = {
.font = font,
.glyph = glyph,
.flags = flags,
.scale = scale
};
GskGpuCachedGlyph *cache;
PangoRectangle ink_rect;
graphene_rect_t rect;
graphene_point_t origin;
GskGpuImage *image;
gsize atlas_x, atlas_y, padding;
cache = g_hash_table_lookup (priv->glyph_cache, &lookup);
if (cache)
{
gsk_gpu_cached_use (self, (GskGpuCached *) cache, gsk_gpu_frame_get_timestamp (frame));
*out_bounds = cache->bounds;
*out_origin = cache->origin;
return cache->image;
}
pango_font_get_glyph_extents (font, glyph, &ink_rect, NULL);
origin.x = floor (ink_rect.x * scale / PANGO_SCALE);
origin.y = floor (ink_rect.y * scale / PANGO_SCALE);
rect.size.width = ceil ((ink_rect.x + ink_rect.width) * scale / PANGO_SCALE) - origin.x;
rect.size.height = ceil ((ink_rect.y + ink_rect.height) * scale / PANGO_SCALE) - origin.y;
padding = 1;
image = gsk_gpu_device_add_atlas_image (self,
gsk_gpu_frame_get_timestamp (frame),
rect.size.width + 2 * padding, rect.size.height + 2 * padding,
&atlas_x, &atlas_y);
if (image)
{
g_object_ref (image);
rect.origin.x = atlas_x + padding;
rect.origin.y = atlas_y + padding;
cache = gsk_gpu_cached_new (self, &GSK_GPU_CACHED_GLYPH_CLASS, priv->current_atlas);
}
else
{
image = gsk_gpu_device_create_upload_image (self, FALSE, GDK_MEMORY_DEFAULT, rect.size.width, rect.size.height);
rect.origin.x = 0;
rect.origin.y = 0;
padding = 0;
cache = gsk_gpu_cached_new (self, &GSK_GPU_CACHED_GLYPH_CLASS, NULL);
}
cache->font = g_object_ref (font);
cache->glyph = glyph;
cache->flags = flags;
cache->scale = scale;
cache->bounds = rect;
cache->image = image;
cache->origin = GRAPHENE_POINT_INIT (- origin.x + (flags & 3) / 4.f,
- origin.y + ((flags >> 2) & 3) / 4.f);
gsk_gpu_upload_glyph_op (frame,
cache->image,
font,
glyph,
&(cairo_rectangle_int_t) {
.x = rect.origin.x - padding,
.y = rect.origin.y - padding,
.width = rect.size.width + 2 * padding,
.height = rect.size.height + 2 * padding,
},
scale,
&GRAPHENE_POINT_INIT (cache->origin.x + 1,
cache->origin.y + 1));
g_hash_table_insert (priv->glyph_cache, cache, cache);
gsk_gpu_cached_use (self, (GskGpuCached *) cache, gsk_gpu_frame_get_timestamp (frame));
*out_bounds = cache->bounds;
*out_origin = cache->origin;
return cache->image;
}
/* }}} */

gsk/gpu/gskgpudeviceprivate.h Normal file

@@ -0,0 +1,103 @@
#pragma once
#include "gskgputypesprivate.h"
#include <graphene.h>
G_BEGIN_DECLS
#define GSK_TYPE_GPU_DEVICE (gsk_gpu_device_get_type ())
#define GSK_GPU_DEVICE(o) (G_TYPE_CHECK_INSTANCE_CAST ((o), GSK_TYPE_GPU_DEVICE, GskGpuDevice))
#define GSK_GPU_DEVICE_CLASS(k) (G_TYPE_CHECK_CLASS_CAST ((k), GSK_TYPE_GPU_DEVICE, GskGpuDeviceClass))
#define GSK_IS_GPU_DEVICE(o) (G_TYPE_CHECK_INSTANCE_TYPE ((o), GSK_TYPE_GPU_DEVICE))
#define GSK_IS_GPU_DEVICE_CLASS(k) (G_TYPE_CHECK_CLASS_TYPE ((k), GSK_TYPE_GPU_DEVICE))
#define GSK_GPU_DEVICE_GET_CLASS(o) (G_TYPE_INSTANCE_GET_CLASS ((o), GSK_TYPE_GPU_DEVICE, GskGpuDeviceClass))
typedef struct _GskGpuDeviceClass GskGpuDeviceClass;
struct _GskGpuDevice
{
GObject parent_instance;
};
struct _GskGpuDeviceClass
{
GObjectClass parent_class;
GskGpuImage * (* create_offscreen_image) (GskGpuDevice *self,
gboolean with_mipmap,
GdkMemoryDepth depth,
gsize width,
gsize height);
GskGpuImage * (* create_atlas_image) (GskGpuDevice *self,
gsize width,
gsize height);
GskGpuImage * (* create_upload_image) (GskGpuDevice *self,
gboolean with_mipmap,
GdkMemoryFormat format,
gsize width,
gsize height);
GskGpuImage * (* create_download_image) (GskGpuDevice *self,
GdkMemoryDepth depth,
gsize width,
gsize height);
};
GType gsk_gpu_device_get_type (void) G_GNUC_CONST;
void gsk_gpu_device_setup (GskGpuDevice *self,
GdkDisplay *display,
gsize max_image_size);
void gsk_gpu_device_gc (GskGpuDevice *self,
gint64 timestamp);
GdkDisplay * gsk_gpu_device_get_display (GskGpuDevice *self);
gsize gsk_gpu_device_get_max_image_size (GskGpuDevice *self);
GskGpuImage * gsk_gpu_device_get_atlas_image (GskGpuDevice *self);
GskGpuImage * gsk_gpu_device_create_offscreen_image (GskGpuDevice *self,
gboolean with_mipmap,
GdkMemoryDepth depth,
gsize width,
gsize height);
GskGpuImage * gsk_gpu_device_create_upload_image (GskGpuDevice *self,
gboolean with_mipmap,
GdkMemoryFormat format,
gsize width,
gsize height);
GskGpuImage * gsk_gpu_device_create_download_image (GskGpuDevice *self,
GdkMemoryDepth depth,
gsize width,
gsize height);
GskGpuImage * gsk_gpu_device_lookup_texture_image (GskGpuDevice *self,
GdkTexture *texture,
gint64 timestamp);
void gsk_gpu_device_cache_texture_image (GskGpuDevice *self,
GdkTexture *texture,
gint64 timestamp,
GskGpuImage *image);
typedef enum
{
GSK_GPU_GLYPH_X_OFFSET_1 = 0x1,
GSK_GPU_GLYPH_X_OFFSET_2 = 0x2,
GSK_GPU_GLYPH_X_OFFSET_3 = 0x3,
GSK_GPU_GLYPH_Y_OFFSET_1 = 0x4,
GSK_GPU_GLYPH_Y_OFFSET_2 = 0x8,
GSK_GPU_GLYPH_Y_OFFSET_3 = 0xC
} GskGpuGlyphLookupFlags;
GskGpuImage * gsk_gpu_device_lookup_glyph_image (GskGpuDevice *self,
GskGpuFrame *frame,
PangoFont *font,
PangoGlyph glyph,
GskGpuGlyphLookupFlags flags,
float scale,
graphene_rect_t *out_bounds,
graphene_point_t *out_origin);
G_DEFINE_AUTOPTR_CLEANUP_FUNC(GskGpuDevice, g_object_unref)
G_END_DECLS
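The four offset bits above encode subpixel positioning: bits 0-1 hold the x offset and bits 2-3 the y offset, in quarter-pixel steps; gsk_gpu_device_lookup_glyph_image() decodes them as (flags & 3) / 4.f and ((flags >> 2) & 3) / 4.f. A caller could derive the flags from the fractional glyph position roughly like this (a sketch with a hypothetical helper name, not code from this diff):
/* Sketch (hypothetical): round the fractional position to quarter pixels. */
static GskGpuGlyphLookupFlags
glyph_lookup_flags_for_position (float x, float y)
{
  guint qx = ((guint) roundf (4 * (x - floorf (x)))) & 3;
  guint qy = ((guint) roundf (4 * (y - floorf (y)))) & 3;
  return (GskGpuGlyphLookupFlags) (qx | (qy << 2));
}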

gsk/gpu/gskgpudownloadop.c Normal file

@@ -0,0 +1,347 @@
#include "config.h"
#include "gskgpudownloadopprivate.h"
#include "gskgpuframeprivate.h"
#include "gskglimageprivate.h"
#include "gskgpuimageprivate.h"
#include "gskgpuprintprivate.h"
#ifdef GDK_RENDERING_VULKAN
#include "gskvulkanbufferprivate.h"
#include "gskvulkanframeprivate.h"
#include "gskvulkanimageprivate.h"
#endif
#include "gdk/gdkdmabuftextureprivate.h"
#include "gdk/gdkglcontextprivate.h"
#ifdef HAVE_DMABUF
#include <linux/dma-buf.h>
#endif
typedef struct _GskGpuDownloadOp GskGpuDownloadOp;
typedef void (* GdkGpuDownloadOpCreateFunc) (GskGpuDownloadOp *);
struct _GskGpuDownloadOp
{
GskGpuOp op;
GskGpuImage *image;
gboolean allow_dmabuf;
GdkGpuDownloadOpCreateFunc create_func;
GskGpuDownloadFunc func;
gpointer user_data;
GdkTexture *texture;
GskGpuBuffer *buffer;
#ifdef GDK_RENDERING_VULKAN
VkSemaphore vk_semaphore;
#endif
};
static void
gsk_gpu_download_op_finish (GskGpuOp *op)
{
GskGpuDownloadOp *self = (GskGpuDownloadOp *) op;
if (self->create_func)
self->create_func (self);
self->func (self->user_data, self->texture);
g_object_unref (self->texture);
g_object_unref (self->image);
g_clear_object (&self->buffer);
}
static void
gsk_gpu_download_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuDownloadOp *self = (GskGpuDownloadOp *) op;
gsk_gpu_print_op (string, indent, "download");
gsk_gpu_print_image (string, self->image);
gsk_gpu_print_newline (string);
}
#ifdef GDK_RENDERING_VULKAN
/* The code needs to run here because vkGetSemaphoreFdKHR() may
* only be called after the semaphore has been submitted via
* vkQueueSubmit().
*/
static void
gsk_gpu_download_op_vk_sync_semaphore (GskGpuDownloadOp *self)
{
PFN_vkGetSemaphoreFdKHR func_vkGetSemaphoreFdKHR;
GdkDisplay *display;
int fd, sync_file_fd;
/* Don't look at where I store my variables plz */
display = gdk_dmabuf_texture_get_display (GDK_DMABUF_TEXTURE (self->texture));
fd = gdk_dmabuf_texture_get_dmabuf (GDK_DMABUF_TEXTURE (self->texture))->planes[0].fd;
func_vkGetSemaphoreFdKHR = (PFN_vkGetSemaphoreFdKHR) vkGetDeviceProcAddr (display->vk_device, "vkGetSemaphoreFdKHR");
/* vkGetSemaphoreFdKHR implicitly resets the semaphore.
* But who cares, we're about to delete it. */
if (GSK_VK_CHECK (func_vkGetSemaphoreFdKHR, display->vk_device,
&(VkSemaphoreGetFdInfoKHR) {
.sType = VK_STRUCTURE_TYPE_SEMAPHORE_GET_FD_INFO_KHR,
.semaphore = self->vk_semaphore,
.handleType = VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_SYNC_FD_BIT,
},
&sync_file_fd) == VK_SUCCESS)
{
gdk_dmabuf_import_sync_file (fd, DMA_BUF_SYNC_WRITE, sync_file_fd);
close (sync_file_fd);
}
vkDestroySemaphore (display->vk_device, self->vk_semaphore, NULL);
}
static void
gsk_gpu_download_op_vk_create (GskGpuDownloadOp *self)
{
GBytes *bytes;
guchar *data;
gsize width, height, stride;
GdkMemoryFormat format;
data = gsk_gpu_buffer_map (self->buffer);
width = gsk_gpu_image_get_width (self->image);
height = gsk_gpu_image_get_height (self->image);
format = gsk_gpu_image_get_format (self->image);
stride = width * gdk_memory_format_bytes_per_pixel (format);
bytes = g_bytes_new (data, stride * height);
self->texture = gdk_memory_texture_new (width,
height,
format,
bytes,
stride);
g_bytes_unref (bytes);
gsk_gpu_buffer_unmap (self->buffer);
}
static GskGpuOp *
gsk_gpu_download_op_vk_command (GskGpuOp *op,
GskGpuFrame *frame,
GskVulkanCommandState *state)
{
GskGpuDownloadOp *self = (GskGpuDownloadOp *) op;
gsize width, height, stride;
#ifdef HAVE_DMABUF
if (self->allow_dmabuf)
self->texture = gsk_vulkan_image_to_dmabuf_texture (GSK_VULKAN_IMAGE (self->image));
if (self->texture)
{
GskVulkanDevice *device = GSK_VULKAN_DEVICE (gsk_gpu_frame_get_device (frame));
VkDevice vk_device = gsk_vulkan_device_get_vk_device (device);
gsk_gpu_device_cache_texture_image (GSK_GPU_DEVICE (device), self->texture, gsk_gpu_frame_get_timestamp (frame), self->image);
if (gsk_vulkan_device_has_feature (device, GDK_VULKAN_FEATURE_SEMAPHORE_EXPORT))
{
GSK_VK_CHECK (vkCreateSemaphore, vk_device,
&(VkSemaphoreCreateInfo) {
.sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO,
.pNext = &(VkExportSemaphoreCreateInfo) {
.sType = VK_STRUCTURE_TYPE_EXPORT_SEMAPHORE_CREATE_INFO,
.handleTypes = VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_SYNC_FD_BIT,
},
},
NULL,
&self->vk_semaphore);
gsk_vulkan_semaphores_add_signal (state->semaphores, self->vk_semaphore);
self->create_func = gsk_gpu_download_op_vk_sync_semaphore;
}
return op->next;
}
#endif
width = gsk_gpu_image_get_width (self->image);
height = gsk_gpu_image_get_height (self->image);
stride = width * gdk_memory_format_bytes_per_pixel (gsk_gpu_image_get_format (self->image));
self->buffer = gsk_vulkan_buffer_new_read (GSK_VULKAN_DEVICE (gsk_gpu_frame_get_device (frame)),
height * stride);
gsk_vulkan_image_transition (GSK_VULKAN_IMAGE (self->image),
state->semaphores,
state->vk_command_buffer,
VK_PIPELINE_STAGE_TRANSFER_BIT,
VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
VK_ACCESS_TRANSFER_READ_BIT);
vkCmdCopyImageToBuffer (state->vk_command_buffer,
gsk_vulkan_image_get_vk_image (GSK_VULKAN_IMAGE (self->image)),
VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
gsk_vulkan_buffer_get_vk_buffer (GSK_VULKAN_BUFFER (self->buffer)),
1,
(VkBufferImageCopy[1]) {
{
.bufferOffset = 0,
.bufferRowLength = width,
.bufferImageHeight = height,
.imageSubresource = {
.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
.mipLevel = 0,
.baseArrayLayer = 0,
.layerCount = 1
},
.imageOffset = {
.x = 0,
.y = 0,
.z = 0
},
.imageExtent = {
.width = width,
.height = height,
.depth = 1
}
}
});
vkCmdPipelineBarrier (state->vk_command_buffer,
VK_PIPELINE_STAGE_TRANSFER_BIT,
VK_PIPELINE_STAGE_HOST_BIT,
0,
0, NULL,
1, &(VkBufferMemoryBarrier) {
.sType = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER,
.srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
.dstAccessMask = VK_ACCESS_HOST_READ_BIT,
.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
.buffer = gsk_vulkan_buffer_get_vk_buffer (GSK_VULKAN_BUFFER (self->buffer)),
.offset = 0,
.size = VK_WHOLE_SIZE,
},
0, NULL);
self->create_func = gsk_gpu_download_op_vk_create;
return op->next;
}
#endif
typedef struct _GskGLTextureData GskGLTextureData;
struct _GskGLTextureData
{
GdkGLContext *context;
GLuint texture_id;
GLsync sync;
};
static void
gsk_gl_texture_data_free (gpointer user_data)
{
GskGLTextureData *data = user_data;
gdk_gl_context_make_current (data->context);
g_clear_pointer (&data->sync, glDeleteSync);
glDeleteTextures (1, &data->texture_id);
g_object_unref (data->context);
g_free (data);
}
static GskGpuOp *
gsk_gpu_download_op_gl_command (GskGpuOp *op,
GskGpuFrame *frame,
GskGLCommandState *state)
{
GskGpuDownloadOp *self = (GskGpuDownloadOp *) op;
GdkGLTextureBuilder *builder;
GskGLTextureData *data;
GdkGLContext *context;
context = GDK_GL_CONTEXT (gsk_gpu_frame_get_context (frame));
data = g_new0 (GskGLTextureData, 1);
data->context = g_object_ref (context);
data->texture_id = gsk_gl_image_steal_texture (GSK_GL_IMAGE (self->image));
if (gdk_gl_context_has_sync (context))
data->sync = glFenceSync (GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
builder = gdk_gl_texture_builder_new ();
gdk_gl_texture_builder_set_context (builder, context);
gdk_gl_texture_builder_set_id (builder, data->texture_id);
gdk_gl_texture_builder_set_format (builder, gsk_gpu_image_get_format (self->image));
gdk_gl_texture_builder_set_width (builder, gsk_gpu_image_get_width (self->image));
gdk_gl_texture_builder_set_height (builder, gsk_gpu_image_get_height (self->image));
gdk_gl_texture_builder_set_sync (builder, data->sync);
self->texture = gdk_gl_texture_builder_build (builder,
gsk_gl_texture_data_free,
data);
g_object_unref (builder);
return op->next;
}
static const GskGpuOpClass GSK_GPU_DOWNLOAD_OP_CLASS = {
GSK_GPU_OP_SIZE (GskGpuDownloadOp),
GSK_GPU_STAGE_COMMAND,
gsk_gpu_download_op_finish,
gsk_gpu_download_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_download_op_vk_command,
#endif
gsk_gpu_download_op_gl_command
};
void
gsk_gpu_download_op (GskGpuFrame *frame,
GskGpuImage *image,
gboolean allow_dmabuf,
GskGpuDownloadFunc func,
gpointer user_data)
{
GskGpuDownloadOp *self;
self = (GskGpuDownloadOp *) gsk_gpu_op_alloc (frame, &GSK_GPU_DOWNLOAD_OP_CLASS);
self->image = g_object_ref (image);
self->allow_dmabuf = allow_dmabuf;
self->func = func;
self->user_data = user_data;
}
static void
gsk_gpu_download_save_png_cb (gpointer filename,
GdkTexture *texture)
{
gdk_texture_save_to_png (texture, filename);
g_free (filename);
}
void
gsk_gpu_download_png_op (GskGpuFrame *frame,
GskGpuImage *image,
const char *filename_format,
...)
{
va_list args;
char *filename;
va_start (args, filename_format);
filename = g_strdup_vprintf (filename_format, args);
va_end (args);
gsk_gpu_download_op (frame,
image,
FALSE,
gsk_gpu_download_save_png_cb,
filename);
}

gsk/gpu/gskgpudownloadopprivate.h Normal file

@@ -0,0 +1,22 @@
#pragma once
#include "gskgpuopprivate.h"
G_BEGIN_DECLS
typedef void (* GskGpuDownloadFunc) (gpointer user_data,
GdkTexture *texture);
void gsk_gpu_download_op (GskGpuFrame *frame,
GskGpuImage *image,
gboolean allow_dmabuf,
GskGpuDownloadFunc func,
gpointer user_data);
void gsk_gpu_download_png_op (GskGpuFrame *frame,
GskGpuImage *image,
const char *filename_format,
...) G_GNUC_PRINTF(3, 4);
G_END_DECLS

gsk/gpu/gskgpuframe.c Normal file

@@ -0,0 +1,668 @@
#include "config.h"
#include "gskgpuframeprivate.h"
#include "gskgpubufferprivate.h"
#include "gskgpudeviceprivate.h"
#include "gskgpudownloadopprivate.h"
#include "gskgpuimageprivate.h"
#include "gskgpunodeprocessorprivate.h"
#include "gskgpuopprivate.h"
#include "gskgpurendererprivate.h"
#include "gskgpurenderpassopprivate.h"
#include "gskgpuuploadopprivate.h"
#include "gskdebugprivate.h"
#include "gskrendererprivate.h"
#include "gdk/gdkdmabufdownloaderprivate.h"
#include "gdk/gdktexturedownloaderprivate.h"
#define DEFAULT_VERTEX_BUFFER_SIZE (128 * 1024)
/* GL_MAX_UNIFORM_BLOCK_SIZE is at least 16384 */
#define DEFAULT_STORAGE_BUFFER_SIZE (16 * 1024 * 64)
#define GDK_ARRAY_NAME gsk_gpu_ops
#define GDK_ARRAY_TYPE_NAME GskGpuOps
#define GDK_ARRAY_ELEMENT_TYPE guchar
#define GDK_ARRAY_BY_VALUE 1
#include "gdk/gdkarrayimpl.c"
typedef struct _GskGpuFramePrivate GskGpuFramePrivate;
struct _GskGpuFramePrivate
{
GskGpuRenderer *renderer;
GskGpuDevice *device;
GskGpuOptimizations optimizations;
gint64 timestamp;
GskGpuOps ops;
GskGpuOp *first_op;
GskGpuBuffer *vertex_buffer;
guchar *vertex_buffer_data;
gsize vertex_buffer_used;
GskGpuBuffer *storage_buffer;
guchar *storage_buffer_data;
gsize storage_buffer_used;
};
G_DEFINE_TYPE_WITH_PRIVATE (GskGpuFrame, gsk_gpu_frame, G_TYPE_OBJECT)
static void
gsk_gpu_frame_default_setup (GskGpuFrame *self)
{
}
static void
gsk_gpu_frame_default_cleanup (GskGpuFrame *self)
{
GskGpuFramePrivate *priv = gsk_gpu_frame_get_instance_private (self);
GskGpuOp *op;
gsize i;
for (i = 0; i < gsk_gpu_ops_get_size (&priv->ops); i += op->op_class->size)
{
op = (GskGpuOp *) gsk_gpu_ops_index (&priv->ops, i);
gsk_gpu_op_finish (op);
}
gsk_gpu_ops_set_size (&priv->ops, 0);
}
static void
gsk_gpu_frame_cleanup (GskGpuFrame *self)
{
GSK_GPU_FRAME_GET_CLASS (self)->cleanup (self);
}
static GskGpuImage *
gsk_gpu_frame_default_upload_texture (GskGpuFrame *self,
gboolean with_mipmap,
GdkTexture *texture)
{
GskGpuImage *image;
image = gsk_gpu_upload_texture_op_try (self, with_mipmap, texture);
if (image)
g_object_ref (image);
return image;
}
static void
gsk_gpu_frame_dispose (GObject *object)
{
GskGpuFrame *self = GSK_GPU_FRAME (object);
gsk_gpu_frame_cleanup (self);
G_OBJECT_CLASS (gsk_gpu_frame_parent_class)->dispose (object);
}
static void
gsk_gpu_frame_finalize (GObject *object)
{
GskGpuFrame *self = GSK_GPU_FRAME (object);
GskGpuFramePrivate *priv = gsk_gpu_frame_get_instance_private (self);
gsk_gpu_ops_clear (&priv->ops);
g_clear_object (&priv->vertex_buffer);
g_clear_object (&priv->storage_buffer);
g_object_unref (priv->device);
G_OBJECT_CLASS (gsk_gpu_frame_parent_class)->finalize (object);
}
static void
gsk_gpu_frame_class_init (GskGpuFrameClass *klass)
{
GObjectClass *object_class = G_OBJECT_CLASS (klass);
klass->setup = gsk_gpu_frame_default_setup;
klass->cleanup = gsk_gpu_frame_default_cleanup;
klass->upload_texture = gsk_gpu_frame_default_upload_texture;
object_class->dispose = gsk_gpu_frame_dispose;
object_class->finalize = gsk_gpu_frame_finalize;
}
static void
gsk_gpu_frame_init (GskGpuFrame *self)
{
GskGpuFramePrivate *priv = gsk_gpu_frame_get_instance_private (self);
gsk_gpu_ops_init (&priv->ops);
}
void
gsk_gpu_frame_setup (GskGpuFrame *self,
GskGpuRenderer *renderer,
GskGpuDevice *device,
GskGpuOptimizations optimizations)
{
GskGpuFramePrivate *priv = gsk_gpu_frame_get_instance_private (self);
/* no reference, the renderer owns us */
priv->renderer = renderer;
priv->device = g_object_ref (device);
priv->optimizations = optimizations;
GSK_GPU_FRAME_GET_CLASS (self)->setup (self);
}
GskGpuDevice *
gsk_gpu_frame_get_device (GskGpuFrame *self)
{
GskGpuFramePrivate *priv = gsk_gpu_frame_get_instance_private (self);
return priv->device;
}
GdkDrawContext *
gsk_gpu_frame_get_context (GskGpuFrame *self)
{
GskGpuFramePrivate *priv = gsk_gpu_frame_get_instance_private (self);
return gsk_gpu_renderer_get_context (priv->renderer);
}
gint64
gsk_gpu_frame_get_timestamp (GskGpuFrame *self)
{
GskGpuFramePrivate *priv = gsk_gpu_frame_get_instance_private (self);
return priv->timestamp;
}
gboolean
gsk_gpu_frame_should_optimize (GskGpuFrame *self,
GskGpuOptimizations optimization)
{
GskGpuFramePrivate *priv = gsk_gpu_frame_get_instance_private (self);
return (priv->optimizations & optimization) == optimization;
}
static void
gsk_gpu_frame_verbose_print (GskGpuFrame *self,
const char *heading)
{
GskGpuFramePrivate *priv = gsk_gpu_frame_get_instance_private (self);
if (GSK_RENDERER_DEBUG_CHECK (GSK_RENDERER (priv->renderer), VERBOSE))
{
GskGpuOp *op;
guint indent = 1;
GString *string = g_string_new (heading);
g_string_append (string, ":\n");
for (op = priv->first_op; op; op = op->next)
{
if (op->op_class->stage == GSK_GPU_STAGE_END_PASS)
indent--;
gsk_gpu_op_print (op, self, string, indent);
if (op->op_class->stage == GSK_GPU_STAGE_BEGIN_PASS)
indent++;
}
gdk_debug_message ("%s", string->str);
g_string_free (string, TRUE);
}
}
static void
gsk_gpu_frame_seal_ops (GskGpuFrame *self)
{
GskGpuFramePrivate *priv = gsk_gpu_frame_get_instance_private (self);
GskGpuOp *last, *op;
guint i;
priv->first_op = (GskGpuOp *) gsk_gpu_ops_index (&priv->ops, 0);
last = priv->first_op;
for (i = last->op_class->size; i < gsk_gpu_ops_get_size (&priv->ops); i += op->op_class->size)
{
op = (GskGpuOp *) gsk_gpu_ops_index (&priv->ops, i);
last->next = op;
last = op;
}
}
typedef struct
{
struct {
GskGpuOp *first;
GskGpuOp *last;
} upload, command;
} SortData;
static GskGpuOp *
gsk_gpu_frame_sort_render_pass (GskGpuFrame *self,
GskGpuOp *op,
SortData *sort_data)
{
SortData subpasses = { { NULL, NULL }, { NULL, NULL } };
while (op)
{
switch (op->op_class->stage)
{
case GSK_GPU_STAGE_UPLOAD:
if (sort_data->upload.first == NULL)
sort_data->upload.first = op;
else
sort_data->upload.last->next = op;
sort_data->upload.last = op;
op = op->next;
break;
case GSK_GPU_STAGE_COMMAND:
case GSK_GPU_STAGE_SHADER:
if (sort_data->command.first == NULL)
sort_data->command.first = op;
else
sort_data->command.last->next = op;
sort_data->command.last = op;
op = op->next;
break;
case GSK_GPU_STAGE_PASS:
if (subpasses.command.first == NULL)
subpasses.command.first = op;
else
subpasses.command.last->next = op;
subpasses.command.last = op;
op = op->next;
break;
case GSK_GPU_STAGE_BEGIN_PASS:
if (subpasses.command.first == NULL)
subpasses.command.first = op;
else
subpasses.command.last->next = op;
subpasses.command.last = op;
/* append subpass to existing subpasses */
op = gsk_gpu_frame_sort_render_pass (self, op->next, &subpasses);
break;
case GSK_GPU_STAGE_END_PASS:
sort_data->command.last->next = op;
sort_data->command.last = op;
op = op->next;
goto out;
default:
g_assert_not_reached ();
break;
}
}
out:
/* prepend subpasses to the current pass */
if (subpasses.upload.first)
{
if (sort_data->upload.first != NULL)
subpasses.upload.last->next = sort_data->upload.first;
else
sort_data->upload.last = subpasses.upload.last;
sort_data->upload.first = subpasses.upload.first;
}
if (subpasses.command.first)
{
if (sort_data->command.first != NULL)
subpasses.command.last->next = sort_data->command.first;
else
sort_data->command.last = subpasses.command.last;
sort_data->command.first = subpasses.command.first;
}
return op;
}
static void
gsk_gpu_frame_sort_ops (GskGpuFrame *self)
{
GskGpuFramePrivate *priv = gsk_gpu_frame_get_instance_private (self);
SortData sort_data = { { NULL, }, };
gsk_gpu_frame_sort_render_pass (self, priv->first_op, &sort_data);
if (sort_data.upload.first)
{
sort_data.upload.last->next = sort_data.command.first;
priv->first_op = sort_data.upload.first;
}
else
priv->first_op = sort_data.command.first;
if (sort_data.command.last)
sort_data.command.last->next = NULL;
}
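A worked example of the reordering done by the two functions above: ops recorded as BEGIN_PASS(A), shader S1, BEGIN_PASS(B), upload U, shader S2, END_PASS(B), shader S3, END_PASS(A) come out as U, BEGIN_PASS(B), S2, END_PASS(B), BEGIN_PASS(A), S1, S3, END_PASS(A). Uploads are hoisted to the very front, and every subpass is emitted in full before the pass that consumes its result.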
gpointer
gsk_gpu_frame_alloc_op (GskGpuFrame *self,
gsize size)
{
GskGpuFramePrivate *priv = gsk_gpu_frame_get_instance_private (self);
gsize pos;
pos = gsk_gpu_ops_get_size (&priv->ops);
gsk_gpu_ops_splice (&priv->ops,
pos,
0, FALSE,
NULL,
size);
return gsk_gpu_ops_index (&priv->ops, pos);
}
GskGpuImage *
gsk_gpu_frame_upload_texture (GskGpuFrame *self,
gboolean with_mipmap,
GdkTexture *texture)
{
GskGpuFramePrivate *priv = gsk_gpu_frame_get_instance_private (self);
GskGpuImage *image;
image = GSK_GPU_FRAME_GET_CLASS (self)->upload_texture (self, with_mipmap, texture);
if (image)
gsk_gpu_device_cache_texture_image (priv->device, texture, priv->timestamp, image);
return image;
}
GskGpuDescriptors *
gsk_gpu_frame_create_descriptors (GskGpuFrame *self)
{
return GSK_GPU_FRAME_GET_CLASS (self)->create_descriptors (self);
}
static GskGpuBuffer *
gsk_gpu_frame_create_vertex_buffer (GskGpuFrame *self,
gsize size)
{
return GSK_GPU_FRAME_GET_CLASS (self)->create_vertex_buffer (self, size);
}
static GskGpuBuffer *
gsk_gpu_frame_create_storage_buffer (GskGpuFrame *self,
gsize size)
{
return GSK_GPU_FRAME_GET_CLASS (self)->create_storage_buffer (self, size);
}
static inline gsize
round_up (gsize number, gsize divisor)
{
return (number + divisor - 1) / divisor * divisor;
}
gsize
gsk_gpu_frame_reserve_vertex_data (GskGpuFrame *self,
gsize size)
{
GskGpuFramePrivate *priv = gsk_gpu_frame_get_instance_private (self);
gsize size_needed;
if (priv->vertex_buffer == NULL)
priv->vertex_buffer = gsk_gpu_frame_create_vertex_buffer (self, DEFAULT_VERTEX_BUFFER_SIZE);
size_needed = round_up (priv->vertex_buffer_used, size) + size;
if (gsk_gpu_buffer_get_size (priv->vertex_buffer) < size_needed)
{
gsize old_size = gsk_gpu_buffer_get_size (priv->vertex_buffer);
GskGpuBuffer *new_buffer = gsk_gpu_frame_create_vertex_buffer (self, old_size * 2);
guchar *new_data = gsk_gpu_buffer_map (new_buffer);
if (priv->vertex_buffer_data)
{
memcpy (new_data, priv->vertex_buffer_data, old_size);
gsk_gpu_buffer_unmap (priv->vertex_buffer);
}
g_object_unref (priv->vertex_buffer);
priv->vertex_buffer = new_buffer;
priv->vertex_buffer_data = new_data;
}
priv->vertex_buffer_used = size_needed;
return size_needed - size;
}
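For example (hypothetical numbers): with vertex_buffer_used == 100 and size == 64, round_up (100, 64) == 128, so the caller gets offset 128 and usage grows to 192. Every instance therefore starts at a multiple of its own size, which the instanced vertex attribute setup relies on.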
guchar *
gsk_gpu_frame_get_vertex_data (GskGpuFrame *self,
gsize offset)
{
GskGpuFramePrivate *priv = gsk_gpu_frame_get_instance_private (self);
if (priv->vertex_buffer_data == NULL)
priv->vertex_buffer_data = gsk_gpu_buffer_map (priv->vertex_buffer);
return priv->vertex_buffer_data + offset;
}
static void
gsk_gpu_frame_ensure_storage_buffer (GskGpuFrame *self)
{
GskGpuFramePrivate *priv = gsk_gpu_frame_get_instance_private (self);
if (priv->storage_buffer_data != NULL)
return;
if (priv->storage_buffer == NULL)
priv->storage_buffer = gsk_gpu_frame_create_storage_buffer (self, DEFAULT_STORAGE_BUFFER_SIZE);
priv->storage_buffer_data = gsk_gpu_buffer_map (priv->storage_buffer);
}
GskGpuBuffer *
gsk_gpu_frame_write_storage_buffer (GskGpuFrame *self,
const guchar *data,
gsize size,
gsize *out_offset)
{
GskGpuFramePrivate *priv = gsk_gpu_frame_get_instance_private (self);
gsize offset;
gsk_gpu_frame_ensure_storage_buffer (self);
offset = priv->storage_buffer_used;
if (offset + size > gsk_gpu_buffer_get_size (priv->storage_buffer))
{
g_assert (offset > 0);
gsk_gpu_buffer_unmap (priv->storage_buffer);
g_clear_object (&priv->storage_buffer);
priv->storage_buffer_data = NULL;
priv->storage_buffer_used = 0;
gsk_gpu_frame_ensure_storage_buffer (self);
offset = priv->storage_buffer_used;
}
if (size)
{
memcpy (priv->storage_buffer_data + offset, data, size);
priv->storage_buffer_used += size;
}
*out_offset = offset;
return priv->storage_buffer;
}
gboolean
gsk_gpu_frame_is_busy (GskGpuFrame *self)
{
return GSK_GPU_FRAME_GET_CLASS (self)->is_busy (self);
}
static void
copy_texture (gpointer user_data,
GdkTexture *texture)
{
GdkTexture **target = (GdkTexture **) user_data;
*target = g_object_ref (texture);
}
static void
gsk_gpu_frame_record (GskGpuFrame *self,
gint64 timestamp,
GskGpuImage *target,
const cairo_region_t *clip,
GskRenderNode *node,
const graphene_rect_t *viewport,
GdkTexture **texture)
{
GskGpuFramePrivate *priv = gsk_gpu_frame_get_instance_private (self);
cairo_rectangle_int_t extents;
priv->timestamp = timestamp;
if (clip)
{
cairo_region_get_extents (clip, &extents);
}
else
{
extents = (cairo_rectangle_int_t) {
0, 0,
gsk_gpu_image_get_width (target),
gsk_gpu_image_get_height (target)
};
}
gsk_gpu_render_pass_begin_op (self,
target,
&extents,
GSK_RENDER_PASS_PRESENT);
gsk_gpu_node_processor_process (self,
target,
&extents,
node,
viewport);
gsk_gpu_render_pass_end_op (self,
target,
GSK_RENDER_PASS_PRESENT);
if (texture)
gsk_gpu_download_op (self, target, TRUE, copy_texture, texture);
}
static void
gsk_gpu_frame_submit (GskGpuFrame *self)
{
GskGpuFramePrivate *priv = gsk_gpu_frame_get_instance_private (self);
gsk_gpu_frame_seal_ops (self);
gsk_gpu_frame_verbose_print (self, "start of frame");
gsk_gpu_frame_sort_ops (self);
gsk_gpu_frame_verbose_print (self, "after sort");
if (priv->vertex_buffer)
{
gsk_gpu_buffer_unmap (priv->vertex_buffer);
priv->vertex_buffer_data = NULL;
priv->vertex_buffer_used = 0;
}
if (priv->storage_buffer_data)
{
gsk_gpu_buffer_unmap (priv->storage_buffer);
priv->storage_buffer_data = NULL;
priv->storage_buffer_used = 0;
}
GSK_GPU_FRAME_GET_CLASS (self)->submit (self,
priv->vertex_buffer,
priv->first_op);
}
void
gsk_gpu_frame_render (GskGpuFrame *self,
gint64 timestamp,
GskGpuImage *target,
const cairo_region_t *region,
GskRenderNode *node,
const graphene_rect_t *viewport,
GdkTexture **texture)
{
gsk_gpu_frame_cleanup (self);
gsk_gpu_frame_record (self, timestamp, target, region, node, viewport, texture);
gsk_gpu_frame_submit (self);
}
typedef struct _Download Download;
struct _Download
{
GdkMemoryFormat format;
guchar *data;
gsize stride;
};
static void
do_download (gpointer user_data,
GdkTexture *texture)
{
Download *download = user_data;
GdkTextureDownloader downloader;
gdk_texture_downloader_init (&downloader, texture);
gdk_texture_downloader_set_format (&downloader, download->format);
gdk_texture_downloader_download_into (&downloader, download->data, download->stride);
gdk_texture_downloader_finish (&downloader);
g_free (download);
}
void
gsk_gpu_frame_download_texture (GskGpuFrame *self,
gint64 timestamp,
GdkTexture *texture,
GdkMemoryFormat format,
guchar *data,
gsize stride)
{
GskGpuFramePrivate *priv = gsk_gpu_frame_get_instance_private (self);
GskGpuImage *image;
image = gsk_gpu_device_lookup_texture_image (priv->device, texture, timestamp);
if (image == NULL)
image = gsk_gpu_frame_upload_texture (self, FALSE, texture);
if (image == NULL)
{
g_critical ("Could not upload texture");
return;
}
gsk_gpu_frame_cleanup (self);
priv->timestamp = timestamp;
gsk_gpu_download_op (self,
image,
FALSE,
do_download,
g_memdup (&(Download) {
.format = format,
.data = data,
.stride = stride
}, sizeof (Download)));
gsk_gpu_frame_submit (self);
g_object_unref (image);
}

gsk/gpu/gskgpuframeprivate.h Normal file

@@ -0,0 +1,89 @@
#pragma once
#include "gskgpurenderer.h"
#include "gskgputypesprivate.h"
G_BEGIN_DECLS
#define GSK_TYPE_GPU_FRAME (gsk_gpu_frame_get_type ())
#define GSK_GPU_FRAME(o) (G_TYPE_CHECK_INSTANCE_CAST ((o), GSK_TYPE_GPU_FRAME, GskGpuFrame))
#define GSK_GPU_FRAME_CLASS(k) (G_TYPE_CHECK_CLASS_CAST ((k), GSK_TYPE_GPU_FRAME, GskGpuFrameClass))
#define GSK_IS_GPU_FRAME(o) (G_TYPE_CHECK_INSTANCE_TYPE ((o), GSK_TYPE_GPU_FRAME))
#define GSK_IS_GPU_FRAME_CLASS(k) (G_TYPE_CHECK_CLASS_TYPE ((k), GSK_TYPE_GPU_FRAME))
#define GSK_GPU_FRAME_GET_CLASS(o) (G_TYPE_INSTANCE_GET_CLASS ((o), GSK_TYPE_GPU_FRAME, GskGpuFrameClass))
typedef struct _GskGpuFrameClass GskGpuFrameClass;
struct _GskGpuFrame
{
GObject parent_instance;
};
struct _GskGpuFrameClass
{
GObjectClass parent_class;
gboolean (* is_busy) (GskGpuFrame *self);
void (* setup) (GskGpuFrame *self);
void (* cleanup) (GskGpuFrame *self);
GskGpuImage * (* upload_texture) (GskGpuFrame *self,
gboolean with_mipmap,
GdkTexture *texture);
GskGpuDescriptors * (* create_descriptors) (GskGpuFrame *self);
GskGpuBuffer * (* create_vertex_buffer) (GskGpuFrame *self,
gsize size);
GskGpuBuffer * (* create_storage_buffer) (GskGpuFrame *self,
gsize size);
void (* submit) (GskGpuFrame *self,
GskGpuBuffer *vertex_buffer,
GskGpuOp *op);
};
GType gsk_gpu_frame_get_type (void) G_GNUC_CONST;
void gsk_gpu_frame_setup (GskGpuFrame *self,
GskGpuRenderer *renderer,
GskGpuDevice *device,
GskGpuOptimizations optimizations);
GdkDrawContext * gsk_gpu_frame_get_context (GskGpuFrame *self) G_GNUC_PURE;
GskGpuDevice * gsk_gpu_frame_get_device (GskGpuFrame *self) G_GNUC_PURE;
gint64 gsk_gpu_frame_get_timestamp (GskGpuFrame *self) G_GNUC_PURE;
gboolean gsk_gpu_frame_should_optimize (GskGpuFrame *self,
GskGpuOptimizations optimization) G_GNUC_PURE;
gpointer gsk_gpu_frame_alloc_op (GskGpuFrame *self,
gsize size);
GskGpuImage * gsk_gpu_frame_upload_texture (GskGpuFrame *self,
gboolean with_mipmap,
GdkTexture *texture);
GskGpuDescriptors * gsk_gpu_frame_create_descriptors (GskGpuFrame *self);
gsize gsk_gpu_frame_reserve_vertex_data (GskGpuFrame *self,
gsize size);
guchar * gsk_gpu_frame_get_vertex_data (GskGpuFrame *self,
gsize offset);
GskGpuBuffer * gsk_gpu_frame_write_storage_buffer (GskGpuFrame *self,
const guchar *data,
gsize size,
gsize *out_offset);
gboolean gsk_gpu_frame_is_busy (GskGpuFrame *self);
void gsk_gpu_frame_render (GskGpuFrame *self,
gint64 timestamp,
GskGpuImage *target,
const cairo_region_t *region,
GskRenderNode *node,
const graphene_rect_t *viewport,
GdkTexture **texture);
void gsk_gpu_frame_download_texture (GskGpuFrame *self,
gint64 timestamp,
GdkTexture *texture,
GdkMemoryFormat format,
guchar *data,
gsize stride);
G_DEFINE_AUTOPTR_CLEANUP_FUNC(GskGpuFrame, g_object_unref)
G_END_DECLS

gsk/gpu/gskgpuglobalsop.c Normal file

@@ -0,0 +1,101 @@
#include "config.h"
#include "gskgpuglobalsopprivate.h"
#include "gskglframeprivate.h"
#include "gskgpuframeprivate.h"
#include "gskgpuprintprivate.h"
#include "gskroundedrectprivate.h"
#ifdef GDK_RENDERING_VULKAN
#include "gskvulkandeviceprivate.h"
#include "gskvulkanframeprivate.h"
#include "gskvulkandescriptorsprivate.h"
#endif
typedef struct _GskGpuGlobalsOp GskGpuGlobalsOp;
struct _GskGpuGlobalsOp
{
GskGpuOp op;
GskGpuGlobalsInstance instance;
};
static void
gsk_gpu_globals_op_finish (GskGpuOp *op)
{
}
static void
gsk_gpu_globals_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
gsk_gpu_print_op (string, indent, "globals");
gsk_gpu_print_newline (string);
}
#ifdef GDK_RENDERING_VULKAN
static GskGpuOp *
gsk_gpu_globals_op_vk_command (GskGpuOp *op,
GskGpuFrame *frame,
GskVulkanCommandState *state)
{
GskGpuGlobalsOp *self = (GskGpuGlobalsOp *) op;
vkCmdPushConstants (state->vk_command_buffer,
gsk_vulkan_device_get_vk_pipeline_layout (GSK_VULKAN_DEVICE (gsk_gpu_frame_get_device (frame)),
gsk_vulkan_descriptors_get_pipeline_layout (state->desc)),
VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_FRAGMENT_BIT,
0,
sizeof (self->instance),
&self->instance);
return op->next;
}
#endif
static GskGpuOp *
gsk_gpu_globals_op_gl_command (GskGpuOp *op,
GskGpuFrame *frame,
GskGLCommandState *state)
{
GskGpuGlobalsOp *self = (GskGpuGlobalsOp *) op;
gsk_gl_frame_bind_globals (GSK_GL_FRAME (frame));
/* FIXME: Does it matter if we glBufferData() or glBufferSubData() here? */
glBufferSubData (GL_UNIFORM_BUFFER,
0,
sizeof (self->instance),
&self->instance);
return op->next;
}
static const GskGpuOpClass GSK_GPU_GLOBALS_OP_CLASS = {
GSK_GPU_OP_SIZE (GskGpuGlobalsOp),
GSK_GPU_STAGE_COMMAND,
gsk_gpu_globals_op_finish,
gsk_gpu_globals_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_globals_op_vk_command,
#endif
gsk_gpu_globals_op_gl_command
};
void
gsk_gpu_globals_op (GskGpuFrame *frame,
const graphene_vec2_t *scale,
const graphene_matrix_t *mvp,
const GskRoundedRect *clip)
{
GskGpuGlobalsOp *self;
self = (GskGpuGlobalsOp *) gsk_gpu_op_alloc (frame, &GSK_GPU_GLOBALS_OP_CLASS);
graphene_matrix_to_float (mvp, self->instance.mvp);
gsk_rounded_rect_to_float (clip, graphene_point_zero (), self->instance.clip);
graphene_vec2_to_float (scale, self->instance.scale);
}
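Note that sizeof (GskGpuGlobalsInstance) is (16 + 12 + 2) * 4 == 120 bytes, which fits within the 128-byte maxPushConstantsSize minimum the Vulkan spec guarantees, so the vkCmdPushConstants() path above works on any conformant implementation.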

gsk/gpu/gskgpuglobalsopprivate.h Normal file

@@ -1,14 +1,22 @@
 #pragma once
-#include "gskvulkanopprivate.h"
+#include "gskgpuopprivate.h"
+#include <gsk/gskroundedrect.h>
+#include <graphene.h>
 G_BEGIN_DECLS
-const VkPushConstantRange *
-gsk_vulkan_push_constants_get_ranges (void) G_GNUC_PURE;
-uint32_t gsk_vulkan_push_constants_get_range_count (void) G_GNUC_PURE;
+typedef struct _GskGpuGlobalsInstance GskGpuGlobalsInstance;
-void gsk_vulkan_push_constants_op (GskVulkanRender *render,
+struct _GskGpuGlobalsInstance
+{
+float mvp[16];
+float clip[12];
+float scale[2];
+};
+void gsk_gpu_globals_op (GskGpuFrame *frame,
 const graphene_vec2_t *scale,
 const graphene_matrix_t *mvp,
 const GskRoundedRect *clip);

gsk/gpu/gskgpuimage.c Normal file

@@ -0,0 +1,164 @@
#include "config.h"
#include "gskgpuimageprivate.h"
typedef struct _GskGpuImagePrivate GskGpuImagePrivate;
struct _GskGpuImagePrivate
{
GskGpuImageFlags flags;
GdkMemoryFormat format;
gsize width;
gsize height;
};
#define ORTHO_NEAR_PLANE -10000
#define ORTHO_FAR_PLANE 10000
G_DEFINE_TYPE_WITH_PRIVATE (GskGpuImage, gsk_gpu_image, G_TYPE_OBJECT)
static void
gsk_gpu_image_get_default_projection_matrix (GskGpuImage *self,
graphene_matrix_t *out_projection)
{
GskGpuImagePrivate *priv = gsk_gpu_image_get_instance_private (self);
graphene_matrix_init_ortho (out_projection,
0, priv->width,
0, priv->height,
ORTHO_NEAR_PLANE,
ORTHO_FAR_PLANE);
}
static void
gsk_gpu_image_texture_toggle_ref_cb (gpointer texture,
GObject *image,
gboolean is_last_ref)
{
if (is_last_ref)
g_object_unref (texture);
else
g_object_ref (texture);
}
static void
gsk_gpu_image_dispose (GObject *object)
{
GskGpuImage *self = GSK_GPU_IMAGE (object);
GskGpuImagePrivate *priv = gsk_gpu_image_get_instance_private (self);
if (priv->flags & GSK_GPU_IMAGE_TOGGLE_REF)
{
priv->flags &= ~GSK_GPU_IMAGE_TOGGLE_REF;
G_OBJECT (self)->ref_count++;
g_object_remove_toggle_ref (G_OBJECT (self), gsk_gpu_image_texture_toggle_ref_cb, NULL);
}
G_OBJECT_CLASS (gsk_gpu_image_parent_class)->dispose (object);
}
static void
gsk_gpu_image_class_init (GskGpuImageClass *klass)
{
GObjectClass *object_class = G_OBJECT_CLASS (klass);
object_class->dispose = gsk_gpu_image_dispose;
klass->get_projection_matrix = gsk_gpu_image_get_default_projection_matrix;
}
static void
gsk_gpu_image_init (GskGpuImage *self)
{
}
void
gsk_gpu_image_setup (GskGpuImage *self,
GskGpuImageFlags flags,
GdkMemoryFormat format,
gsize width,
gsize height)
{
GskGpuImagePrivate *priv = gsk_gpu_image_get_instance_private (self);
priv->flags = flags;
priv->format = format;
priv->width = width;
priv->height = height;
}
/*
* gsk_gpu_image_toggle_ref_texture:
* @self: a GskGpuImage
* @texture: the texture owning @self
*
* This function must be called whenever the texture owns the data
* used by the image. It will then add a toggle ref, so that ref'ing
* the image will ref the texture and unref'ing the image will unref it
* again.
*
* This ensures that whenever the image is used, the texture will keep
* being referenced and not go away. But once all the image's references
* get unref'ed, the texture is free to go away.
**/
void
gsk_gpu_image_toggle_ref_texture (GskGpuImage *self,
GdkTexture *texture)
{
GskGpuImagePrivate *priv = gsk_gpu_image_get_instance_private (self);
g_assert ((priv->flags & GSK_GPU_IMAGE_TOGGLE_REF) == 0);
priv->flags |= GSK_GPU_IMAGE_TOGGLE_REF;
g_object_ref (texture);
g_object_add_toggle_ref (G_OBJECT (self), gsk_gpu_image_texture_toggle_ref_cb, texture);
g_object_unref (self);
}
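A minimal sketch of the intended call pattern (hypothetical caller; create_image_for_texture is made up for illustration):
GskGpuImage *image = create_image_for_texture (device, texture); /* hypothetical */
gsk_gpu_image_toggle_ref_texture (image, texture); /* converts our strong ref into the toggle ref */
/* From here on, g_object_ref (image) keeps @texture alive too;
 * when the last image ref drops, the toggle ref releases the texture. */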
GdkMemoryFormat
gsk_gpu_image_get_format (GskGpuImage *self)
{
GskGpuImagePrivate *priv = gsk_gpu_image_get_instance_private (self);
return priv->format;
}
gsize
gsk_gpu_image_get_width (GskGpuImage *self)
{
GskGpuImagePrivate *priv = gsk_gpu_image_get_instance_private (self);
return priv->width;
}
gsize
gsk_gpu_image_get_height (GskGpuImage *self)
{
GskGpuImagePrivate *priv = gsk_gpu_image_get_instance_private (self);
return priv->height;
}
GskGpuImageFlags
gsk_gpu_image_get_flags (GskGpuImage *self)
{
GskGpuImagePrivate *priv = gsk_gpu_image_get_instance_private (self);
return priv->flags;
}
void
gsk_gpu_image_set_flags (GskGpuImage *self,
GskGpuImageFlags flags)
{
GskGpuImagePrivate *priv = gsk_gpu_image_get_instance_private (self);
priv->flags |= flags;
}
void
gsk_gpu_image_get_projection_matrix (GskGpuImage *self,
graphene_matrix_t *out_projection)
{
GSK_GPU_IMAGE_GET_CLASS (self)->get_projection_matrix (self, out_projection);
}

gsk/gpu/gskgpuimageprivate.h Normal file

@@ -0,0 +1,54 @@
#pragma once
#include "gskgputypesprivate.h"
#include <graphene.h>
G_BEGIN_DECLS
#define GSK_TYPE_GPU_IMAGE (gsk_gpu_image_get_type ())
#define GSK_GPU_IMAGE(o) (G_TYPE_CHECK_INSTANCE_CAST ((o), GSK_TYPE_GPU_IMAGE, GskGpuImage))
#define GSK_GPU_IMAGE_CLASS(k) (G_TYPE_CHECK_CLASS_CAST ((k), GSK_TYPE_GPU_IMAGE, GskGpuImageClass))
#define GSK_IS_GPU_IMAGE(o) (G_TYPE_CHECK_INSTANCE_TYPE ((o), GSK_TYPE_GPU_IMAGE))
#define GSK_IS_GPU_IMAGE_CLASS(k) (G_TYPE_CHECK_CLASS_TYPE ((k), GSK_TYPE_GPU_IMAGE))
#define GSK_GPU_IMAGE_GET_CLASS(o) (G_TYPE_INSTANCE_GET_CLASS ((o), GSK_TYPE_GPU_IMAGE, GskGpuImageClass))
typedef struct _GskGpuImageClass GskGpuImageClass;
struct _GskGpuImage
{
GObject parent_instance;
};
struct _GskGpuImageClass
{
GObjectClass parent_class;
void (* get_projection_matrix) (GskGpuImage *self,
graphene_matrix_t *out_projection);
};
GType gsk_gpu_image_get_type (void) G_GNUC_CONST;
void gsk_gpu_image_setup (GskGpuImage *self,
GskGpuImageFlags flags,
GdkMemoryFormat format,
gsize width,
gsize height);
void gsk_gpu_image_toggle_ref_texture (GskGpuImage *self,
GdkTexture *texture);
GdkMemoryFormat gsk_gpu_image_get_format (GskGpuImage *self);
gsize gsk_gpu_image_get_width (GskGpuImage *self);
gsize gsk_gpu_image_get_height (GskGpuImage *self);
GskGpuImageFlags gsk_gpu_image_get_flags (GskGpuImage *self);
void gsk_gpu_image_set_flags (GskGpuImage *self,
GskGpuImageFlags flags);
void gsk_gpu_image_get_projection_matrix (GskGpuImage *self,
graphene_matrix_t *out_projection);
G_DEFINE_AUTOPTR_CLEANUP_FUNC(GskGpuImage, g_object_unref)
G_END_DECLS

gsk/gpu/gskgpulineargradientop.c Normal file

@@ -0,0 +1,101 @@
#include "config.h"
#include "gskgpulineargradientopprivate.h"
#include "gskgpuframeprivate.h"
#include "gskgpuprintprivate.h"
#include "gskrectprivate.h"
#include "gpu/shaders/gskgpulineargradientinstance.h"
#define VARIATION_SUPERSAMPLING (1 << 0)
#define VARIATION_REPEATING (1 << 1)
typedef struct _GskGpuLinearGradientOp GskGpuLinearGradientOp;
struct _GskGpuLinearGradientOp
{
GskGpuShaderOp op;
};
static void
gsk_gpu_linear_gradient_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuShaderOp *shader = (GskGpuShaderOp *) op;
GskGpuLineargradientInstance *instance;
instance = (GskGpuLineargradientInstance *) gsk_gpu_frame_get_vertex_data (frame, shader->vertex_offset);
if (shader->variation & VARIATION_REPEATING)
gsk_gpu_print_op (string, indent, "repeating-linear-gradient");
else
gsk_gpu_print_op (string, indent, "linear-gradient");
gsk_gpu_print_rect (string, instance->rect);
gsk_gpu_print_newline (string);
}
static const GskGpuShaderOpClass GSK_GPU_LINEAR_GRADIENT_OP_CLASS = {
{
GSK_GPU_OP_SIZE (GskGpuLinearGradientOp),
GSK_GPU_STAGE_SHADER,
gsk_gpu_shader_op_finish,
gsk_gpu_linear_gradient_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_shader_op_vk_command,
#endif
gsk_gpu_shader_op_gl_command
},
"gskgpulineargradient",
sizeof (GskGpuLineargradientInstance),
#ifdef GDK_RENDERING_VULKAN
&gsk_gpu_lineargradient_info,
#endif
gsk_gpu_lineargradient_setup_attrib_locations,
gsk_gpu_lineargradient_setup_vao
};
void
gsk_gpu_linear_gradient_op (GskGpuFrame *frame,
GskGpuShaderClip clip,
gboolean repeating,
const graphene_rect_t *rect,
const graphene_point_t *start,
const graphene_point_t *end,
const graphene_point_t *offset,
const GskColorStop *stops,
gsize n_stops)
{
GskGpuLineargradientInstance *instance;
g_assert (n_stops > 1);
g_assert (n_stops <= 7);
gsk_gpu_shader_op_alloc (frame,
&GSK_GPU_LINEAR_GRADIENT_OP_CLASS,
(repeating ? VARIATION_REPEATING : 0) |
(gsk_gpu_frame_should_optimize (frame, GSK_GPU_OPTIMIZE_GRADIENTS) ? VARIATION_SUPERSAMPLING : 0),
clip,
NULL,
&instance);
gsk_gpu_rect_to_float (rect, offset, instance->rect);
gsk_gpu_point_to_float (start, offset, instance->startend);
gsk_gpu_point_to_float (end, offset, &instance->startend[2]);
gsk_gpu_rgba_to_float (&stops[MIN (n_stops - 1, 6)].color, instance->color6);
instance->offsets1[2] = stops[MIN (n_stops - 1, 6)].offset;
gsk_gpu_rgba_to_float (&stops[MIN (n_stops - 1, 5)].color, instance->color5);
instance->offsets1[1] = stops[MIN (n_stops - 1, 5)].offset;
gsk_gpu_rgba_to_float (&stops[MIN (n_stops - 1, 4)].color, instance->color4);
instance->offsets1[0] = stops[MIN (n_stops - 1, 4)].offset;
gsk_gpu_rgba_to_float (&stops[MIN (n_stops - 1, 3)].color, instance->color3);
instance->offsets0[3] = stops[MIN (n_stops - 1, 3)].offset;
gsk_gpu_rgba_to_float (&stops[MIN (n_stops - 1, 2)].color, instance->color2);
instance->offsets0[2] = stops[MIN (n_stops - 1, 2)].offset;
gsk_gpu_rgba_to_float (&stops[1].color, instance->color1);
instance->offsets0[1] = stops[1].offset;
gsk_gpu_rgba_to_float (&stops[0].color, instance->color0);
instance->offsets0[0] = stops[0].offset;
}
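Because the instance has room for exactly seven stops, the MIN (n_stops - 1, k) clamping above duplicates the last real stop into the unused slots: with n_stops == 3, for example, color2 holds stops[2] and color3 through color6 hold stops[2] as well, so the shader can unconditionally evaluate seven stops.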

gsk/gpu/gskgpulineargradientopprivate.h Normal file

@@ -1,16 +1,20 @@
 #pragma once
-#include "gskvulkanopprivate.h"
+#include "gskgpushaderopprivate.h"
 #include "gskrendernode.h"
 #include <graphene.h>
 G_BEGIN_DECLS
-void gsk_vulkan_linear_gradient_op (GskVulkanRender *render,
-GskVulkanShaderClip clip,
+void gsk_gpu_linear_gradient_op (GskGpuFrame *frame,
+GskGpuShaderClip clip,
+gboolean repeating,
 const graphene_rect_t *rect,
+const graphene_point_t *offset,
 const graphene_point_t *start,
 const graphene_point_t *end,
-gboolean repeating,
-const graphene_point_t *offset,
 const GskColorStop *stops,
 gsize n_stops);

gsk/gpu/gskgpumaskop.c Normal file

@@ -0,0 +1,84 @@
#include "config.h"
#include "gskgpumaskopprivate.h"
#include "gskgpuframeprivate.h"
#include "gskgpuprintprivate.h"
#include "gskrectprivate.h"
#include "gpu/shaders/gskgpumaskinstance.h"
typedef struct _GskGpuMaskOp GskGpuMaskOp;
struct _GskGpuMaskOp
{
GskGpuShaderOp op;
};
static void
gsk_gpu_mask_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuShaderOp *shader = (GskGpuShaderOp *) op;
GskGpuMaskInstance *instance;
instance = (GskGpuMaskInstance *) gsk_gpu_frame_get_vertex_data (frame, shader->vertex_offset);
gsk_gpu_print_op (string, indent, "mask");
gsk_gpu_print_rect (string, instance->rect);
gsk_gpu_print_image_descriptor (string, shader->desc, instance->source_id);
gsk_gpu_print_image_descriptor (string, shader->desc, instance->mask_id);
gsk_gpu_print_newline (string);
}
static const GskGpuShaderOpClass GSK_GPU_MASK_OP_CLASS = {
{
GSK_GPU_OP_SIZE (GskGpuMaskOp),
GSK_GPU_STAGE_SHADER,
gsk_gpu_shader_op_finish,
gsk_gpu_mask_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_shader_op_vk_command,
#endif
gsk_gpu_shader_op_gl_command
},
"gskgpumask",
sizeof (GskGpuMaskInstance),
#ifdef GDK_RENDERING_VULKAN
&gsk_gpu_mask_info,
#endif
gsk_gpu_mask_setup_attrib_locations,
gsk_gpu_mask_setup_vao
};
void
gsk_gpu_mask_op (GskGpuFrame *frame,
GskGpuShaderClip clip,
GskGpuDescriptors *desc,
const graphene_rect_t *rect,
const graphene_point_t *offset,
float opacity,
GskMaskMode mask_mode,
guint32 source_descriptor,
const graphene_rect_t *source_rect,
guint32 mask_descriptor,
const graphene_rect_t *mask_rect)
{
GskGpuMaskInstance *instance;
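/* The mask mode doubles as the shader variation, picking the
 * alpha/luminance/inverted sampling variant of the mask shader */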
gsk_gpu_shader_op_alloc (frame,
&GSK_GPU_MASK_OP_CLASS,
mask_mode,
clip,
desc,
&instance);
gsk_gpu_rect_to_float (rect, offset, instance->rect);
gsk_gpu_rect_to_float (source_rect, offset, instance->source_rect);
instance->source_id = source_descriptor;
gsk_gpu_rect_to_float (mask_rect, offset, instance->mask_rect);
instance->mask_id = mask_descriptor;
instance->opacity = opacity;
}


@@ -1,19 +1,22 @@
 #pragma once
-#include "gskvulkanopprivate.h"
+#include "gskgpushaderopprivate.h"
 #include <graphene.h>
 G_BEGIN_DECLS
-void gsk_vulkan_mask_op (GskVulkanRender *render,
-                         GskVulkanShaderClip clip,
+void gsk_gpu_mask_op (GskGpuFrame *frame,
+                      GskGpuShaderClip clip,
+                      GskGpuDescriptors *desc,
                       const graphene_rect_t *rect,
                       const graphene_point_t *offset,
-                      GskVulkanImage *source,
+                      float opacity,
+                      GskMaskMode mask_mode,
+                      guint32 source_descriptor,
                       const graphene_rect_t *source_rect,
-                      const graphene_rect_t *source_tex_rect,
-                      GskVulkanImage *mask,
-                      const graphene_rect_t *mask_rect,
-                      const graphene_rect_t *mask_tex_rect,
-                      GskMaskMode mask_mode);
+                      guint32 mask_descriptor,
+                      const graphene_rect_t *mask_rect);
 G_END_DECLS

gsk/gpu/gskgpumipmapop.c Normal file

@@ -0,0 +1,193 @@
#include "config.h"
#include "gskgpumipmapopprivate.h"
#include "gskglimageprivate.h"
#include "gskgpuprintprivate.h"
#ifdef GDK_RENDERING_VULKAN
#include "gskvulkanimageprivate.h"
#endif
typedef struct _GskGpuMipmapOp GskGpuMipmapOp;
struct _GskGpuMipmapOp
{
GskGpuOp op;
GskGpuImage *image;
};
static void
gsk_gpu_mipmap_op_finish (GskGpuOp *op)
{
GskGpuMipmapOp *self = (GskGpuMipmapOp *) op;
g_object_unref (self->image);
}
static void
gsk_gpu_mipmap_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuMipmapOp *self = (GskGpuMipmapOp *) op;
gsk_gpu_print_op (string, indent, "mipmap");
gsk_gpu_print_image (string, self->image);
gsk_gpu_print_newline (string);
}
#ifdef GDK_RENDERING_VULKAN
static GskGpuOp *
gsk_gpu_mipmap_op_vk_command (GskGpuOp *op,
GskGpuFrame *frame,
GskVulkanCommandState *state)
{
GskGpuMipmapOp *self = (GskGpuMipmapOp *) op;
GskVulkanImage *image;
VkImage vk_image;
gsize width, height;
guint i, n_levels;
image = GSK_VULKAN_IMAGE (self->image);
vk_image = gsk_vulkan_image_get_vk_image (image);
width = gsk_gpu_image_get_width (self->image);
height = gsk_gpu_image_get_height (self->image);
n_levels = gsk_vulkan_mipmap_levels (width, height);
/* optimize me: only transition mipmap layers 1..n, but not 0 */
gsk_vulkan_image_transition (image,
state->semaphores,
state->vk_command_buffer,
VK_PIPELINE_STAGE_TRANSFER_BIT,
VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
VK_ACCESS_TRANSFER_WRITE_BIT);
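/* Each iteration inserts a barrier that turns level i into a transfer
 * source, then blits it at half size into level i + 1; the loop exits
 * after the final barrier, leaving every level in TRANSFER_SRC_OPTIMAL */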
for (i = 0; /* we break after the barrier */ ; i++)
{
vkCmdPipelineBarrier (state->vk_command_buffer,
VK_PIPELINE_STAGE_TRANSFER_BIT,
VK_PIPELINE_STAGE_TRANSFER_BIT,
0,
0, NULL,
0, NULL,
1, &(VkImageMemoryBarrier) {
.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
.srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
.dstAccessMask = VK_ACCESS_TRANSFER_READ_BIT,
.oldLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
.newLayout = VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
.image = vk_image,
.subresourceRange = {
.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
.baseMipLevel = i,
.levelCount = 1,
.baseArrayLayer = 0,
.layerCount = 1
},
});
if (i + 1 == n_levels)
break;
vkCmdBlitImage (state->vk_command_buffer,
vk_image,
VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
vk_image,
VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
1,
&(VkImageBlit) {
.srcSubresource = {
.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
.mipLevel = i,
.baseArrayLayer = 0,
.layerCount = 1
},
.srcOffsets = {
{
.x = 0,
.y = 0,
.z = 0,
},
{
.x = width,
.y = height,
.z = 1
}
},
.dstSubresource = {
.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
.mipLevel = i + 1,
.baseArrayLayer = 0,
.layerCount = 1
},
.dstOffsets = {
{
.x = 0,
.y = 0,
.z = 0,
},
{
.x = width / 2,
.y = height / 2,
.z = 1,
}
},
},
VK_FILTER_LINEAR);
width = MAX (1, width / 2);
height = MAX (1, height / 2);
}
gsk_vulkan_image_set_vk_image_layout (image,
VK_PIPELINE_STAGE_TRANSFER_BIT,
VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
VK_ACCESS_TRANSFER_READ_BIT);
return op->next;
}
#endif
static GskGpuOp *
gsk_gpu_mipmap_op_gl_command (GskGpuOp *op,
GskGpuFrame *frame,
GskGLCommandState *state)
{
GskGpuMipmapOp *self = (GskGpuMipmapOp *) op;
glActiveTexture (GL_TEXTURE0);
gsk_gl_image_bind_texture (GSK_GL_IMAGE (self->image));
/* the texture binding changed behind the descriptors' back, so force them to be re-applied */
state->desc = NULL;
glGenerateMipmap (GL_TEXTURE_2D);
return op->next;
}
static const GskGpuOpClass GSK_GPU_MIPMAP_OP_CLASS = {
GSK_GPU_OP_SIZE (GskGpuMipmapOp),
GSK_GPU_STAGE_PASS,
gsk_gpu_mipmap_op_finish,
gsk_gpu_mipmap_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_mipmap_op_vk_command,
#endif
gsk_gpu_mipmap_op_gl_command
};
void
gsk_gpu_mipmap_op (GskGpuFrame *frame,
GskGpuImage *image)
{
GskGpuMipmapOp *self;
g_assert ((gsk_gpu_image_get_flags (image) & (GSK_GPU_IMAGE_CAN_MIPMAP | GSK_GPU_IMAGE_MIPMAP)) == GSK_GPU_IMAGE_CAN_MIPMAP);
self = (GskGpuMipmapOp *) gsk_gpu_op_alloc (frame, &GSK_GPU_MIPMAP_OP_CLASS);
self->image = g_object_ref (image);
gsk_gpu_image_set_flags (image, GSK_GPU_IMAGE_MIPMAP);
}


@@ -0,0 +1,11 @@
#pragma once
#include "gskgpuopprivate.h"
G_BEGIN_DECLS
void gsk_gpu_mipmap_op (GskGpuFrame *frame,
GskGpuImage *image);
G_END_DECLS

File diff suppressed because it is too large


@@ -0,0 +1,14 @@
#pragma once
#include "gskgputypesprivate.h"
#include "gsktypes.h"
G_BEGIN_DECLS
void gsk_gpu_node_processor_process (GskGpuFrame *frame,
GskGpuImage *target,
const cairo_rectangle_int_t *clip,
GskRenderNode *node,
const graphene_rect_t *viewport);
G_END_DECLS

gsk/gpu/gskgpuop.c Normal file

@@ -0,0 +1,51 @@
#include "config.h"
#include "gskgpuopprivate.h"
#include "gskgpuframeprivate.h"
GskGpuOp *
gsk_gpu_op_alloc (GskGpuFrame *frame,
const GskGpuOpClass *op_class)
{
GskGpuOp *op;
op = gsk_gpu_frame_alloc_op (frame, op_class->size);
op->op_class = op_class;
return op;
}
void
gsk_gpu_op_finish (GskGpuOp *op)
{
op->op_class->finish (op);
}
void
gsk_gpu_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
op->op_class->print (op, frame, string, indent);
}
#ifdef GDK_RENDERING_VULKAN
GskGpuOp *
gsk_gpu_op_vk_command (GskGpuOp *op,
GskGpuFrame *frame,
GskVulkanCommandState *state)
{
return op->op_class->vk_command (op, frame, state);
}
#endif
GskGpuOp *
gsk_gpu_op_gl_command (GskGpuOp *op,
GskGpuFrame *frame,
GskGLCommandState *state)
{
return op->op_class->gl_command (op, frame, state);
}

gsk/gpu/gskgpuopprivate.h Normal file

@@ -0,0 +1,99 @@
#pragma once
#include <gdk/gdk.h>
#include "gskgputypesprivate.h"
G_BEGIN_DECLS
typedef enum
{
GSK_GPU_STAGE_UPLOAD,
GSK_GPU_STAGE_PASS,
GSK_GPU_STAGE_COMMAND,
GSK_GPU_STAGE_SHADER,
/* magic ones */
GSK_GPU_STAGE_BEGIN_PASS,
GSK_GPU_STAGE_END_PASS
} GskGpuStage;
typedef struct _GskGLCommandState GskGLCommandState;
typedef struct _GskVulkanCommandState GskVulkanCommandState;
struct _GskGLCommandState
{
gsize flip_y;
struct {
const GskGpuOpClass *op_class;
guint32 variation;
GskGpuShaderClip clip;
gsize n_external;
} current_program;
GskGLDescriptors *desc;
};
#ifdef GDK_RENDERING_VULKAN
struct _GskVulkanCommandState
{
VkRenderPass vk_render_pass;
VkFormat vk_format;
VkCommandBuffer vk_command_buffer;
GskGpuBlend blend;
GskVulkanDescriptors *desc;
GskVulkanSemaphores *semaphores;
};
#endif
struct _GskGpuOp
{
const GskGpuOpClass *op_class;
GskGpuOp *next;
};
struct _GskGpuOpClass
{
gsize size;
GskGpuStage stage;
void (* finish) (GskGpuOp *op);
void (* print) (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent);
#ifdef GDK_RENDERING_VULKAN
GskGpuOp * (* vk_command) (GskGpuOp *op,
GskGpuFrame *frame,
GskVulkanCommandState *state);
#endif
GskGpuOp * (* gl_command) (GskGpuOp *op,
GskGpuFrame *frame,
GskGLCommandState *state);
};
/* ensures alignment of ops to multiples of 16 bytes - and that makes graphene happy */
#define GSK_GPU_OP_SIZE(struct_name) ((sizeof(struct_name) + 15) & ~15)
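/* e.g. a 20-byte op struct is padded to 32 bytes: (20 + 15) & ~15 == 32 */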
GskGpuOp * gsk_gpu_op_alloc (GskGpuFrame *frame,
const GskGpuOpClass *op_class);
void gsk_gpu_op_finish (GskGpuOp *op);
void gsk_gpu_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent);
#ifdef GDK_RENDERING_VULKAN
GskGpuOp * gsk_gpu_op_vk_command (GskGpuOp *op,
GskGpuFrame *frame,
GskVulkanCommandState *state);
#endif
GskGpuOp * gsk_gpu_op_gl_command (GskGpuOp *op,
GskGpuFrame *frame,
GskGLCommandState *state);
G_END_DECLS

gsk/gpu/gskgpuprint.c Normal file

@@ -0,0 +1,135 @@
#include "config.h"
#include "gskgpuprintprivate.h"
#include "gskgpudescriptorsprivate.h"
#include "gskgpuimageprivate.h"
void
gsk_gpu_print_indent (GString *string,
guint indent)
{
g_string_append_printf (string, "%*s", 2 * indent, "");
}
void
gsk_gpu_print_op (GString *string,
guint indent,
const char *op_name)
{
gsk_gpu_print_indent (string, indent);
g_string_append (string, op_name);
g_string_append_c (string, ' ');
}
void
gsk_gpu_print_string (GString *string,
const char *s)
{
g_string_append (string, s);
g_string_append_c (string, ' ');
}
void
gsk_gpu_print_enum (GString *string,
GType type,
int value)
{
GEnumClass *class;
class = g_type_class_ref (type);
gsk_gpu_print_string (string, g_enum_get_value (class, value)->value_nick);
g_type_class_unref (class);
}
void
gsk_gpu_print_rect (GString *string,
const float rect[4])
{
g_string_append_printf (string, "%g %g %g %g ",
rect[0], rect[1],
rect[2], rect[3]);
}
void
gsk_gpu_print_int_rect (GString *string,
const cairo_rectangle_int_t *rect)
{
g_string_append_printf (string, "%d %d %d %d ",
rect->x, rect->y,
rect->width, rect->height);
}
void
gsk_gpu_print_rounded_rect (GString *string,
const float rect[12])
{
gsk_gpu_print_rect (string, (const float *) rect);
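/* Corner radii are printed CSS-style: omitted when all zero, "variable"
 * when any corner's x/y radii differ, one value when uniform, four
 * values otherwise */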
if (rect[4] == 0.0 && rect[5] == 0.0 &&
rect[6] == 0.0 && rect[7] == 0.0 &&
rect[8] == 0.0 && rect[9] == 0.0 &&
rect[10] == 0.0 && rect[11] == 0.0)
return;
g_string_append (string, "/ ");
if (rect[4] != rect[5] ||
rect[6] != rect[7] ||
rect[8] != rect[9] ||
rect[10] != rect[11])
{
g_string_append (string, "variable ");
}
else if (rect[4] != rect[6] ||
rect[4] != rect[8] ||
rect[4] != rect[10])
{
g_string_append_printf (string, "%g %g %g %g ",
rect[4], rect[6],
rect[8], rect[10]);
}
else
{
g_string_append_printf (string, "%g ", rect[4]);
}
}
void
gsk_gpu_print_rgba (GString *string,
const float rgba[4])
{
GdkRGBA color = { rgba[0], rgba[1], rgba[2], rgba[3] };
char *s = gdk_rgba_to_string (&color);
g_string_append (string, s);
g_string_append_c (string, ' ');
g_free (s);
}
void
gsk_gpu_print_newline (GString *string)
{
if (string->len && string->str[string->len - 1] == ' ')
string->str[string->len - 1] = '\n';
else
g_string_append_c (string, '\n');
}
void
gsk_gpu_print_image (GString *string,
GskGpuImage *image)
{
g_string_append_printf (string, "%zux%zu ",
gsk_gpu_image_get_width (image),
gsk_gpu_image_get_height (image));
}
void
gsk_gpu_print_image_descriptor (GString *string,
GskGpuDescriptors *desc,
guint32 descriptor)
{
gsize id = gsk_gpu_descriptors_find_image (desc, descriptor);
gsk_gpu_print_image (string, gsk_gpu_descriptors_get_image (desc, id));
}


@@ -0,0 +1,35 @@
#pragma once
#include "gskgputypesprivate.h"
#include "gskroundedrect.h"
#include <cairo.h>
#include <graphene.h>
void gsk_gpu_print_indent (GString *string,
guint indent);
void gsk_gpu_print_op (GString *string,
guint indent,
const char *op_name);
void gsk_gpu_print_newline (GString *string);
void gsk_gpu_print_string (GString *string,
const char *s);
void gsk_gpu_print_enum (GString *string,
GType type,
int value);
void gsk_gpu_print_rect (GString *string,
const float rect[4]);
void gsk_gpu_print_int_rect (GString *string,
const cairo_rectangle_int_t *rect);
void gsk_gpu_print_rounded_rect (GString *string,
const float rect[12]);
void gsk_gpu_print_rgba (GString *string,
const float rgba[4]);
void gsk_gpu_print_image (GString *string,
GskGpuImage *image);
void gsk_gpu_print_image_descriptor (GString *string,
GskGpuDescriptors *desc,
guint32 descriptor);


@@ -0,0 +1,105 @@
#include "config.h"
#include "gskgpuradialgradientopprivate.h"
#include "gskgpuframeprivate.h"
#include "gskgpuprintprivate.h"
#include "gskrectprivate.h"
#include "gpu/shaders/gskgpuradialgradientinstance.h"
#define VARIATION_SUPERSAMPLING (1 << 0)
#define VARIATION_REPEATING (1 << 1)
typedef struct _GskGpuRadialGradientOp GskGpuRadialGradientOp;
struct _GskGpuRadialGradientOp
{
GskGpuShaderOp op;
};
static void
gsk_gpu_radial_gradient_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuShaderOp *shader = (GskGpuShaderOp *) op;
GskGpuRadialgradientInstance *instance;
instance = (GskGpuRadialgradientInstance *) gsk_gpu_frame_get_vertex_data (frame, shader->vertex_offset);
if (shader->variation & VARIATION_REPEATING)
gsk_gpu_print_op (string, indent, "repeating-radial-gradient");
else
gsk_gpu_print_op (string, indent, "radial-gradient");
gsk_gpu_print_rect (string, instance->rect);
gsk_gpu_print_newline (string);
}
static const GskGpuShaderOpClass GSK_GPU_RADIAL_GRADIENT_OP_CLASS = {
{
GSK_GPU_OP_SIZE (GskGpuRadialGradientOp),
GSK_GPU_STAGE_SHADER,
gsk_gpu_shader_op_finish,
gsk_gpu_radial_gradient_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_shader_op_vk_command,
#endif
gsk_gpu_shader_op_gl_command
},
"gskgpuradialgradient",
sizeof (GskGpuRadialgradientInstance),
#ifdef GDK_RENDERING_VULKAN
&gsk_gpu_radialgradient_info,
#endif
gsk_gpu_radialgradient_setup_attrib_locations,
gsk_gpu_radialgradient_setup_vao
};
void
gsk_gpu_radial_gradient_op (GskGpuFrame *frame,
GskGpuShaderClip clip,
gboolean repeating,
const graphene_rect_t *rect,
const graphene_point_t *center,
const graphene_point_t *radius,
float start,
float end,
const graphene_point_t *offset,
const GskColorStop *stops,
gsize n_stops)
{
GskGpuRadialgradientInstance *instance;
g_assert (n_stops > 1);
g_assert (n_stops <= 7);
gsk_gpu_shader_op_alloc (frame,
&GSK_GPU_RADIAL_GRADIENT_OP_CLASS,
(repeating ? VARIATION_REPEATING : 0) |
(gsk_gpu_frame_should_optimize (frame, GSK_GPU_OPTIMIZE_GRADIENTS) ? VARIATION_SUPERSAMPLING : 0),
clip,
NULL,
&instance);
gsk_gpu_rect_to_float (rect, offset, instance->rect);
gsk_gpu_point_to_float (center, offset, instance->center_radius);
gsk_gpu_point_to_float (radius, graphene_point_zero(), &instance->center_radius[2]);
instance->startend[0] = start;
instance->startend[1] = end;
gsk_gpu_rgba_to_float (&stops[MIN (n_stops - 1, 6)].color, instance->color6);
instance->offsets1[2] = stops[MIN (n_stops - 1, 6)].offset;
gsk_gpu_rgba_to_float (&stops[MIN (n_stops - 1, 5)].color, instance->color5);
instance->offsets1[1] = stops[MIN (n_stops - 1, 5)].offset;
gsk_gpu_rgba_to_float (&stops[MIN (n_stops - 1, 4)].color, instance->color4);
instance->offsets1[0] = stops[MIN (n_stops - 1, 4)].offset;
gsk_gpu_rgba_to_float (&stops[MIN (n_stops - 1, 3)].color, instance->color3);
instance->offsets0[3] = stops[MIN (n_stops - 1, 3)].offset;
gsk_gpu_rgba_to_float (&stops[MIN (n_stops - 1, 2)].color, instance->color2);
instance->offsets0[2] = stops[MIN (n_stops - 1, 2)].offset;
gsk_gpu_rgba_to_float (&stops[1].color, instance->color1);
instance->offsets0[1] = stops[1].offset;
gsk_gpu_rgba_to_float (&stops[0].color, instance->color0);
instance->offsets0[0] = stops[0].offset;
}


@@ -0,0 +1,25 @@
#pragma once
#include "gskgpushaderopprivate.h"
#include "gskrendernode.h"
#include <graphene.h>
G_BEGIN_DECLS
void gsk_gpu_radial_gradient_op (GskGpuFrame *frame,
GskGpuShaderClip clip,
gboolean repeating,
const graphene_rect_t *rect,
const graphene_point_t *center,
const graphene_point_t *radius,
float start,
float end,
const graphene_point_t *offset,
const GskColorStop *stops,
gsize n_stops);
G_END_DECLS

gsk/gpu/gskgpurenderer.c Normal file

@@ -0,0 +1,497 @@
#include "config.h"
#include "gskgpurendererprivate.h"
#include "gskdebugprivate.h"
#include "gskgpudeviceprivate.h"
#include "gskgpuframeprivate.h"
#include "gskprivate.h"
#include "gskrendererprivate.h"
#include "gskrendernodeprivate.h"
#include "gskgpuimageprivate.h"
#include "gdk/gdkdebugprivate.h"
#include "gdk/gdkdisplayprivate.h"
#include "gdk/gdkdmabuftextureprivate.h"
#include "gdk/gdkdrawcontextprivate.h"
#include "gdk/gdkprofilerprivate.h"
#include "gdk/gdktextureprivate.h"
#include "gdk/gdktexturedownloaderprivate.h"
#include "gdk/gdkdrawcontextprivate.h"
#include <graphene.h>
#define GSK_GPU_MAX_FRAMES 4
static const GdkDebugKey gsk_gpu_optimization_keys[] = {
{ "uber", GSK_GPU_OPTIMIZE_UBER, "Don't use the uber shader" },
{ "clear", GSK_GPU_OPTIMIZE_CLEAR, "Use shaders instead of vkCmdClearAttachment()/glClear()" },
{ "blit", GSK_GPU_OPTIMIZE_BLIT, "Use shaders instead of vkCmdBlit()/glBlitFramebuffer()" },
{ "gradients", GSK_GPU_OPTIMIZE_GRADIENTS, "Don't supersample gradients" },
{ "mipmap", GSK_GPU_OPTIMIZE_MIPMAP, "Avoid creating mipmaps" },
{ "gl-baseinstance", GSK_GPU_OPTIMIZE_GL_BASE_INSTANCE, "Assume no ARB/EXT_base_instance support" },
};
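/* These keys are matched against the GSK_GPU_SKIP environment variable
 * in gsk_gpu_renderer_class_init () below, e.g. GSK_GPU_SKIP=mipmap */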
typedef struct _GskGpuRendererPrivate GskGpuRendererPrivate;
struct _GskGpuRendererPrivate
{
GskGpuDevice *device;
GdkDrawContext *context;
GskGpuOptimizations optimizations;
GskGpuFrame *frames[GSK_GPU_MAX_FRAMES];
};
static void gsk_gpu_renderer_dmabuf_downloader_init (GdkDmabufDownloaderInterface *iface);
G_DEFINE_TYPE_EXTENDED (GskGpuRenderer, gsk_gpu_renderer, GSK_TYPE_RENDERER, 0,
G_ADD_PRIVATE (GskGpuRenderer)
G_IMPLEMENT_INTERFACE (GDK_TYPE_DMABUF_DOWNLOADER,
gsk_gpu_renderer_dmabuf_downloader_init))
static void
gsk_gpu_renderer_make_current (GskGpuRenderer *self)
{
GSK_GPU_RENDERER_GET_CLASS (self)->make_current (self);
}
static GskGpuFrame *
gsk_gpu_renderer_create_frame (GskGpuRenderer *self)
{
GskGpuRendererPrivate *priv = gsk_gpu_renderer_get_instance_private (self);
GskGpuRendererClass *klass = GSK_GPU_RENDERER_GET_CLASS (self);
GskGpuFrame *result;
result = g_object_new (klass->frame_type, NULL);
gsk_gpu_frame_setup (result, self, priv->device, priv->optimizations);
return result;
}
static void
gsk_gpu_renderer_dmabuf_downloader_close (GdkDmabufDownloader *downloader)
{
gsk_renderer_unrealize (GSK_RENDERER (downloader));
}
static gboolean
gsk_gpu_renderer_dmabuf_downloader_supports (GdkDmabufDownloader *downloader,
GdkDmabufTexture *texture,
GError **error)
{
GskGpuRenderer *self = GSK_GPU_RENDERER (downloader);
const GdkDmabuf *dmabuf;
GdkDmabufFormats *formats;
dmabuf = gdk_dmabuf_texture_get_dmabuf (texture);
formats = GSK_GPU_RENDERER_GET_CLASS (self)->get_dmabuf_formats (self);
if (!gdk_dmabuf_formats_contains (formats, dmabuf->fourcc, dmabuf->modifier))
{
g_set_error (error,
GDK_DMABUF_ERROR, GDK_DMABUF_ERROR_UNSUPPORTED_FORMAT,
"Unsupported dmabuf format: %.4s:%#" G_GINT64_MODIFIER "x",
(char *) &dmabuf->fourcc, dmabuf->modifier);
return FALSE;
}
return TRUE;
}
static void
gsk_gpu_renderer_dmabuf_downloader_download (GdkDmabufDownloader *downloader,
GdkDmabufTexture *texture,
GdkMemoryFormat format,
guchar *data,
gsize stride)
{
GskGpuRenderer *self = GSK_GPU_RENDERER (downloader);
GskGpuFrame *frame;
gsk_gpu_renderer_make_current (self);
frame = gsk_gpu_renderer_create_frame (self);
gsk_gpu_frame_download_texture (frame,
g_get_monotonic_time(),
GDK_TEXTURE (texture),
format,
data,
stride);
g_object_unref (frame);
}
static void
gsk_gpu_renderer_dmabuf_downloader_init (GdkDmabufDownloaderInterface *iface)
{
iface->close = gsk_gpu_renderer_dmabuf_downloader_close;
iface->supports = gsk_gpu_renderer_dmabuf_downloader_supports;
iface->download = gsk_gpu_renderer_dmabuf_downloader_download;
}
static cairo_region_t *
get_render_region (GskGpuRenderer *self)
{
GskGpuRendererPrivate *priv = gsk_gpu_renderer_get_instance_private (self);
const cairo_region_t *damage;
cairo_region_t *scaled_damage;
double scale;
scale = gsk_gpu_renderer_get_scale (self);
damage = gdk_draw_context_get_frame_region (priv->context);
scaled_damage = cairo_region_create ();
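/* Scale the damage region from surface to buffer coordinates, rounding
 * each rectangle outward to whole pixels */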
for (int i = 0; i < cairo_region_num_rectangles (damage); i++)
{
cairo_rectangle_int_t rect;
cairo_region_get_rectangle (damage, i, &rect);
cairo_region_union_rectangle (scaled_damage, &(cairo_rectangle_int_t) {
.x = (int) floor (rect.x * scale),
.y = (int) floor (rect.y * scale),
.width = (int) ceil ((rect.x + rect.width) * scale) - floor (rect.x * scale),
.height = (int) ceil ((rect.y + rect.height) * scale) - floor (rect.y * scale),
});
}
return scaled_damage;
}
static GskGpuFrame *
gsk_gpu_renderer_get_frame (GskGpuRenderer *self)
{
GskGpuRendererPrivate *priv = gsk_gpu_renderer_get_instance_private (self);
guint i;
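/* Reuse the first idle frame, or create one in the first empty slot;
 * if all GSK_GPU_MAX_FRAMES frames are busy, wait for one to finish */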
while (TRUE)
{
for (i = 0; i < G_N_ELEMENTS (priv->frames); i++)
{
if (priv->frames[i] == NULL)
{
priv->frames[i] = gsk_gpu_renderer_create_frame (self);
return priv->frames[i];
}
if (!gsk_gpu_frame_is_busy (priv->frames[i]))
return priv->frames[i];
}
GSK_GPU_RENDERER_GET_CLASS (self)->wait (self, priv->frames, GSK_GPU_MAX_FRAMES);
}
}
static gboolean
gsk_gpu_renderer_realize (GskRenderer *renderer,
GdkSurface *surface,
GError **error)
{
GskGpuRenderer *self = GSK_GPU_RENDERER (renderer);
GskGpuRendererPrivate *priv = gsk_gpu_renderer_get_instance_private (self);
GskGpuOptimizations context_optimizations;
GdkDisplay *display;
if (surface)
display = gdk_surface_get_display (surface);
else
display = gdk_display_get_default ();
priv->device = GSK_GPU_RENDERER_GET_CLASS (self)->get_device (display, error);
if (priv->device == NULL)
return FALSE;
priv->context = GSK_GPU_RENDERER_GET_CLASS (self)->create_context (self, display, surface, &context_optimizations, error);
if (priv->context == NULL)
{
g_clear_object (&priv->device);
return FALSE;
}
priv->optimizations &= context_optimizations;
return TRUE;
}
static void
gsk_gpu_renderer_unrealize (GskRenderer *renderer)
{
GskGpuRenderer *self = GSK_GPU_RENDERER (renderer);
GskGpuRendererPrivate *priv = gsk_gpu_renderer_get_instance_private (self);
gsize i, j;
gsk_gpu_renderer_make_current (self);
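/* Free idle frames and compact the still-busy ones to the front of the
 * array, then wait on those; repeat until none are left */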
while (TRUE)
{
for (i = 0, j = 0; i < G_N_ELEMENTS (priv->frames); i++)
{
if (priv->frames[i] == NULL)
break;
if (gsk_gpu_frame_is_busy (priv->frames[i]))
{
if (i > j)
{
priv->frames[j] = priv->frames[i];
priv->frames[i] = NULL;
}
j++;
continue;
}
g_clear_object (&priv->frames[i]);
}
if (j == 0)
break;
GSK_GPU_RENDERER_GET_CLASS (self)->wait (self, priv->frames, j);
}
g_clear_object (&priv->context);
g_clear_object (&priv->device);
}
static GdkTexture *
gsk_gpu_renderer_fallback_render_texture (GskGpuRenderer *self,
GskRenderNode *root,
const graphene_rect_t *rounded_viewport)
{
GskGpuRendererPrivate *priv = gsk_gpu_renderer_get_instance_private (self);
GskGpuImage *image;
gsize width, height, max_size, image_width, image_height;
gsize x, y, size, bpp, stride;
GdkMemoryFormat format;
GdkMemoryDepth depth;
GBytes *bytes;
guchar *data;
GdkTexture *texture;
GdkTextureDownloader downloader;
GskGpuFrame *frame;
max_size = gsk_gpu_device_get_max_image_size (priv->device);
depth = gsk_render_node_get_preferred_depth (root);
do
{
image = gsk_gpu_device_create_download_image (priv->device,
gsk_render_node_get_preferred_depth (root),
MIN (max_size, rounded_viewport->size.width),
MIN (max_size, rounded_viewport->size.height));
max_size /= 2;
}
while (image == NULL);
format = gsk_gpu_image_get_format (image);
bpp = gdk_memory_format_bytes_per_pixel (format);
image_width = gsk_gpu_image_get_width (image);
image_height = gsk_gpu_image_get_height (image);
width = rounded_viewport->size.width;
height = rounded_viewport->size.height;
stride = width * bpp;
size = stride * height;
data = g_malloc_n (stride, height);
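/* Render the viewport in tiles no larger than the biggest download image
 * the device managed to allocate, downloading each tile into its place
 * in one big pixel buffer */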
for (y = 0; y < height; y += image_height)
{
for (x = 0; x < width; x += image_width)
{
texture = NULL;
if (image == NULL)
image = gsk_gpu_device_create_download_image (priv->device,
depth,
MIN (image_width, width - x),
MIN (image_height, height - y));
frame = gsk_gpu_renderer_create_frame (self);
gsk_gpu_frame_render (frame,
g_get_monotonic_time(),
image,
NULL,
root,
&GRAPHENE_RECT_INIT (rounded_viewport->origin.x + x,
rounded_viewport->origin.y + y,
image_width,
image_height),
&texture);
g_object_unref (frame);
g_assert (texture);
gdk_texture_downloader_init (&downloader, texture);
gdk_texture_downloader_set_format (&downloader, format);
gdk_texture_downloader_download_into (&downloader,
data + stride * y + x * bpp,
stride);
gdk_texture_downloader_finish (&downloader);
g_object_unref (texture);
g_clear_object (&image);
}
}
bytes = g_bytes_new_take (data, size);
texture = gdk_memory_texture_new (width, height, GDK_MEMORY_DEFAULT, bytes, stride);
g_bytes_unref (bytes);
return texture;
}
static GdkTexture *
gsk_gpu_renderer_render_texture (GskRenderer *renderer,
GskRenderNode *root,
const graphene_rect_t *viewport)
{
GskGpuRenderer *self = GSK_GPU_RENDERER (renderer);
GskGpuRendererPrivate *priv = gsk_gpu_renderer_get_instance_private (self);
GskGpuFrame *frame;
GskGpuImage *image;
GdkTexture *texture;
graphene_rect_t rounded_viewport;
gsk_gpu_renderer_make_current (self);
rounded_viewport = GRAPHENE_RECT_INIT (viewport->origin.x,
viewport->origin.y,
ceil (viewport->size.width),
ceil (viewport->size.height));
image = gsk_gpu_device_create_download_image (priv->device,
gsk_render_node_get_preferred_depth (root),
rounded_viewport.size.width,
rounded_viewport.size.height);
if (image == NULL)
return gsk_gpu_renderer_fallback_render_texture (self, root, &rounded_viewport);
frame = gsk_gpu_renderer_create_frame (self);
texture = NULL;
gsk_gpu_frame_render (frame,
g_get_monotonic_time(),
image,
NULL,
root,
&rounded_viewport,
&texture);
g_object_unref (frame);
g_object_unref (image);
/* check that the callback setting the texture was actually called, as it's technically async */
g_assert (texture);
return texture;
}
static void
gsk_gpu_renderer_render (GskRenderer *renderer,
GskRenderNode *root,
const cairo_region_t *region)
{
GskGpuRenderer *self = GSK_GPU_RENDERER (renderer);
GskGpuRendererPrivate *priv = gsk_gpu_renderer_get_instance_private (self);
GskGpuFrame *frame;
GskGpuImage *backbuffer;
cairo_region_t *render_region;
GdkSurface *surface;
if (cairo_region_is_empty (region))
{
gdk_draw_context_empty_frame (priv->context);
return;
}
gdk_draw_context_begin_frame_full (priv->context,
gsk_render_node_get_preferred_depth (root),
region);
gsk_gpu_renderer_make_current (self);
backbuffer = GSK_GPU_RENDERER_GET_CLASS (self)->get_backbuffer (self);
frame = gsk_gpu_renderer_get_frame (self);
render_region = get_render_region (self);
surface = gdk_draw_context_get_surface (priv->context);
gsk_gpu_frame_render (frame,
g_get_monotonic_time(),
backbuffer,
render_region,
root,
&GRAPHENE_RECT_INIT (
0, 0,
gdk_surface_get_width (surface),
gdk_surface_get_height (surface)
),
NULL);
gdk_draw_context_end_frame (priv->context);
g_clear_pointer (&render_region, cairo_region_destroy);
}
static double
gsk_gpu_renderer_real_get_scale (GskGpuRenderer *self)
{
GskGpuRendererPrivate *priv = gsk_gpu_renderer_get_instance_private (self);
GdkSurface *surface;
surface = gdk_draw_context_get_surface (priv->context);
return gdk_surface_get_scale (surface);
}
static void
gsk_gpu_renderer_class_init (GskGpuRendererClass *klass)
{
GskRendererClass *renderer_class = GSK_RENDERER_CLASS (klass);
renderer_class->supports_offload = TRUE;
renderer_class->realize = gsk_gpu_renderer_realize;
renderer_class->unrealize = gsk_gpu_renderer_unrealize;
renderer_class->render = gsk_gpu_renderer_render;
renderer_class->render_texture = gsk_gpu_renderer_render_texture;
gsk_ensure_resources ();
klass->optimizations = -1;
klass->optimizations &= ~gdk_parse_debug_var ("GSK_GPU_SKIP",
gsk_gpu_optimization_keys,
G_N_ELEMENTS (gsk_gpu_optimization_keys));
klass->get_scale = gsk_gpu_renderer_real_get_scale;
}
static void
gsk_gpu_renderer_init (GskGpuRenderer *self)
{
GskGpuRendererPrivate *priv = gsk_gpu_renderer_get_instance_private (self);
priv->optimizations = GSK_GPU_RENDERER_GET_CLASS (self)->optimizations;
}
GdkDrawContext *
gsk_gpu_renderer_get_context (GskGpuRenderer *self)
{
GskGpuRendererPrivate *priv = gsk_gpu_renderer_get_instance_private (self);
return priv->context;
}
GskGpuDevice *
gsk_gpu_renderer_get_device (GskGpuRenderer *self)
{
GskGpuRendererPrivate *priv = gsk_gpu_renderer_get_instance_private (self);
return priv->device;
}
double
gsk_gpu_renderer_get_scale (GskGpuRenderer *self)
{
return GSK_GPU_RENDERER_GET_CLASS (self)->get_scale (self);
}
GskGpuOptimizations
gsk_gpu_renderer_get_optimizations (GskGpuRenderer *self)
{
GskGpuRendererPrivate *priv = gsk_gpu_renderer_get_instance_private (self);
return priv->optimizations;
}

gsk/gpu/gskgpurenderer.h Normal file

@@ -0,0 +1,38 @@
/*
* Copyright © 2016 Benjamin Otte
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library. If not, see <http://www.gnu.org/licenses/>.
*/
#pragma once
#include <gdk/gdk.h>
#include <gsk/gskrenderer.h>
G_BEGIN_DECLS
#define GSK_TYPE_GPU_RENDERER (gsk_gpu_renderer_get_type ())
#define GSK_GPU_RENDERER(obj) (G_TYPE_CHECK_INSTANCE_CAST ((obj), GSK_TYPE_GPU_RENDERER, GskGpuRenderer))
#define GSK_IS_GPU_RENDERER(obj) (G_TYPE_CHECK_INSTANCE_TYPE ((obj), GSK_TYPE_GPU_RENDERER))
typedef struct _GskGpuRenderer GskGpuRenderer;
GDK_AVAILABLE_IN_ALL
GType gsk_gpu_renderer_get_type (void) G_GNUC_CONST;
G_DEFINE_AUTOPTR_CLEANUP_FUNC(GskGpuRenderer, g_object_unref)
G_END_DECLS


@@ -0,0 +1,52 @@
#pragma once
#include "gskgpurenderer.h"
#include "gskgputypesprivate.h"
#include "gskrendererprivate.h"
G_BEGIN_DECLS
#define GSK_GPU_RENDERER_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST ((klass), GSK_TYPE_GPU_RENDERER, GskGpuRendererClass))
#define GSK_IS_GPU_RENDERER_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE ((klass), GSK_TYPE_GPU_RENDERER))
#define GSK_GPU_RENDERER_GET_CLASS(obj) (G_TYPE_INSTANCE_GET_CLASS ((obj), GSK_TYPE_GPU_RENDERER, GskGpuRendererClass))
typedef struct _GskGpuRendererClass GskGpuRendererClass;
struct _GskGpuRenderer
{
GskRenderer parent_instance;
};
struct _GskGpuRendererClass
{
GskRendererClass parent_class;
GType frame_type;
GskGpuOptimizations optimizations; /* subclasses cannot override this */
GskGpuDevice * (* get_device) (GdkDisplay *display,
GError **error);
GdkDrawContext * (* create_context) (GskGpuRenderer *self,
GdkDisplay *display,
GdkSurface *surface,
GskGpuOptimizations *supported,
GError **error);
void (* make_current) (GskGpuRenderer *self);
GskGpuImage * (* get_backbuffer) (GskGpuRenderer *self);
void (* wait) (GskGpuRenderer *self,
GskGpuFrame **frame,
gsize n_frames);
double (* get_scale) (GskGpuRenderer *self);
GdkDmabufFormats * (* get_dmabuf_formats) (GskGpuRenderer *self);
};
GdkDrawContext * gsk_gpu_renderer_get_context (GskGpuRenderer *self);
GskGpuDevice * gsk_gpu_renderer_get_device (GskGpuRenderer *self);
double gsk_gpu_renderer_get_scale (GskGpuRenderer *self);
GskGpuOptimizations gsk_gpu_renderer_get_optimizations (GskGpuRenderer *self);
G_END_DECLS


@@ -0,0 +1,370 @@
#include "config.h"
#include "gskgpurenderpassopprivate.h"
#include "gskglimageprivate.h"
#include "gskgpudeviceprivate.h"
#include "gskgpuframeprivate.h"
#include "gskgpunodeprocessorprivate.h"
#include "gskgpuprintprivate.h"
#include "gskgpushaderopprivate.h"
#include "gskrendernodeprivate.h"
#ifdef GDK_RENDERING_VULKAN
#include "gskvulkanimageprivate.h"
#include "gskvulkandescriptorsprivate.h"
#endif
typedef struct _GskGpuRenderPassOp GskGpuRenderPassOp;
struct _GskGpuRenderPassOp
{
GskGpuOp op;
GskGpuImage *target;
cairo_rectangle_int_t area;
GskRenderPassType pass_type;
};
static void
gsk_gpu_render_pass_op_finish (GskGpuOp *op)
{
GskGpuRenderPassOp *self = (GskGpuRenderPassOp *) op;
g_object_unref (self->target);
}
static void
gsk_gpu_render_pass_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuRenderPassOp *self = (GskGpuRenderPassOp *) op;
gsk_gpu_print_op (string, indent, "begin-render-pass");
gsk_gpu_print_image (string, self->target);
gsk_gpu_print_newline (string);
}
#ifdef GDK_RENDERING_VULKAN
static VkImageLayout
gsk_gpu_render_pass_type_to_vk_image_layout (GskRenderPassType type)
{
switch (type)
{
default:
g_assert_not_reached ();
G_GNUC_FALLTHROUGH;
case GSK_RENDER_PASS_PRESENT:
return VK_IMAGE_LAYOUT_PRESENT_SRC_KHR;
case GSK_RENDER_PASS_OFFSCREEN:
return VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
}
}
static void
gsk_gpu_render_pass_op_do_barriers (GskGpuRenderPassOp *self,
GskVulkanCommandState *state)
{
GskGpuShaderOp *shader;
GskGpuOp *op;
GskGpuDescriptors *desc = NULL;
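/* Walk this pass's shader ops up front and transition the images of
 * every descriptor set they use, since layout transitions can't be
 * recorded once the render pass has begun */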
for (op = ((GskGpuOp *) self)->next;
op->op_class->stage != GSK_GPU_STAGE_END_PASS;
op = op->next)
{
if (op->op_class->stage != GSK_GPU_STAGE_SHADER)
continue;
shader = (GskGpuShaderOp *) op;
if (shader->desc == NULL || shader->desc == desc)
continue;
if (desc == NULL)
{
gsk_vulkan_descriptors_bind (GSK_VULKAN_DESCRIPTORS (shader->desc), state->desc, state->vk_command_buffer);
state->desc = GSK_VULKAN_DESCRIPTORS (shader->desc);
}
desc = shader->desc;
gsk_vulkan_descriptors_transition (GSK_VULKAN_DESCRIPTORS (desc), state->semaphores, state->vk_command_buffer);
}
if (desc == NULL)
gsk_vulkan_descriptors_transition (state->desc, state->semaphores, state->vk_command_buffer);
}
static GskGpuOp *
gsk_gpu_render_pass_op_vk_command (GskGpuOp *op,
GskGpuFrame *frame,
GskVulkanCommandState *state)
{
GskGpuRenderPassOp *self = (GskGpuRenderPassOp *) op;
/* nesting frame passes not allowed */
g_assert (state->vk_render_pass == VK_NULL_HANDLE);
gsk_gpu_render_pass_op_do_barriers (self, state);
state->vk_format = gsk_vulkan_image_get_vk_format (GSK_VULKAN_IMAGE (self->target));
state->vk_render_pass = gsk_vulkan_device_get_vk_render_pass (GSK_VULKAN_DEVICE (gsk_gpu_frame_get_device (frame)),
state->vk_format,
gsk_vulkan_image_get_vk_image_layout (GSK_VULKAN_IMAGE (self->target)),
gsk_gpu_render_pass_type_to_vk_image_layout (self->pass_type));
vkCmdSetViewport (state->vk_command_buffer,
0,
1,
&(VkViewport) {
.x = 0,
.y = 0,
.width = gsk_gpu_image_get_width (self->target),
.height = gsk_gpu_image_get_height (self->target),
.minDepth = 0,
.maxDepth = 1
});
vkCmdBeginRenderPass (state->vk_command_buffer,
&(VkRenderPassBeginInfo) {
.sType = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO,
.renderPass = state->vk_render_pass,
.framebuffer = gsk_vulkan_image_get_vk_framebuffer (GSK_VULKAN_IMAGE(self->target),
state->vk_render_pass),
.renderArea = {
{ self->area.x, self->area.y },
{ self->area.width, self->area.height }
},
.clearValueCount = 1,
.pClearValues = (VkClearValue [1]) {
{ .color = { .float32 = { 0.f, 0.f, 0.f, 0.f } } }
}
},
VK_SUBPASS_CONTENTS_INLINE);
op = op->next;
while (op->op_class->stage != GSK_GPU_STAGE_END_PASS)
{
op = gsk_gpu_op_vk_command (op, frame, state);
}
op = gsk_gpu_op_vk_command (op, frame, state);
return op;
}
#endif
static GskGpuOp *
gsk_gpu_render_pass_op_gl_command (GskGpuOp *op,
GskGpuFrame *frame,
GskGLCommandState *state)
{
GskGpuRenderPassOp *self = (GskGpuRenderPassOp *) op;
/* nesting frame passes not allowed */
g_assert (state->flip_y == 0);
gsk_gl_image_bind_framebuffer (GSK_GL_IMAGE (self->target));
if (gsk_gl_image_is_flipped (GSK_GL_IMAGE (self->target)))
state->flip_y = gsk_gpu_image_get_height (self->target);
else
state->flip_y = 0;
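/* flip_y holds the target height when GL stores the image bottom-up;
 * scissor rects (here and in the scissor op) are flipped accordingly */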
glViewport (0, 0,
gsk_gpu_image_get_width (self->target),
gsk_gpu_image_get_height (self->target));
if (state->flip_y)
glScissor (self->area.x, state->flip_y - self->area.y - self->area.height, self->area.width, self->area.height);
else
glScissor (self->area.x, self->area.y, self->area.width, self->area.height);
glClearColor (0, 0, 0, 0);
glClear (GL_COLOR_BUFFER_BIT);
op = op->next;
while (op->op_class->stage != GSK_GPU_STAGE_END_PASS)
{
op = gsk_gpu_op_gl_command (op, frame, state);
}
op = gsk_gpu_op_gl_command (op, frame, state);
return op;
}
static const GskGpuOpClass GSK_GPU_RENDER_PASS_OP_CLASS = {
GSK_GPU_OP_SIZE (GskGpuRenderPassOp),
GSK_GPU_STAGE_BEGIN_PASS,
gsk_gpu_render_pass_op_finish,
gsk_gpu_render_pass_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_render_pass_op_vk_command,
#endif
gsk_gpu_render_pass_op_gl_command
};
typedef struct _GskGpuFramePassEndOp GskGpuFramePassEndOp;
struct _GskGpuFramePassEndOp
{
GskGpuOp op;
GskGpuImage *target;
GskRenderPassType pass_type;
};
static void
gsk_gpu_render_pass_end_op_finish (GskGpuOp *op)
{
GskGpuFramePassEndOp *self = (GskGpuFramePassEndOp *) op;
g_object_unref (self->target);
}
static void
gsk_gpu_render_pass_end_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuFramePassEndOp *self = (GskGpuFramePassEndOp *) op;
gsk_gpu_print_op (string, indent, "end-render-pass");
gsk_gpu_print_image (string, self->target);
gsk_gpu_print_newline (string);
}
#ifdef GDK_RENDERING_VULKAN
static GskGpuOp *
gsk_gpu_render_pass_end_op_vk_command (GskGpuOp *op,
GskGpuFrame *frame,
GskVulkanCommandState *state)
{
GskGpuFramePassEndOp *self = (GskGpuFramePassEndOp *) op;
vkCmdEndRenderPass (state->vk_command_buffer);
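/* The render pass only took care of mip level 0; if the target can be
 * mipmapped, transition the remaining levels to the same final layout */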
if (gsk_gpu_image_get_flags (self->target) & GSK_GPU_IMAGE_CAN_MIPMAP)
{
vkCmdPipelineBarrier (state->vk_command_buffer,
gsk_vulkan_image_get_vk_pipeline_stage (GSK_VULKAN_IMAGE (self->target)),
VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
0,
0, NULL,
0, NULL,
1, &(VkImageMemoryBarrier) {
.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
.srcAccessMask = gsk_vulkan_image_get_vk_access (GSK_VULKAN_IMAGE (self->target)),
.dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
.oldLayout = gsk_vulkan_image_get_vk_image_layout (GSK_VULKAN_IMAGE (self->target)),
.newLayout = gsk_gpu_render_pass_type_to_vk_image_layout (self->pass_type),
.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
.image = gsk_vulkan_image_get_vk_image (GSK_VULKAN_IMAGE (self->target)),
.subresourceRange = {
.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
.baseMipLevel = 1,
.levelCount = VK_REMAINING_MIP_LEVELS,
.baseArrayLayer = 0,
.layerCount = 1
},
});
}
gsk_vulkan_image_set_vk_image_layout (GSK_VULKAN_IMAGE (self->target),
VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
gsk_gpu_render_pass_type_to_vk_image_layout (self->pass_type),
VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT);
state->vk_render_pass = VK_NULL_HANDLE;
state->vk_format = VK_FORMAT_UNDEFINED;
return op->next;
}
#endif
static GskGpuOp *
gsk_gpu_render_pass_end_op_gl_command (GskGpuOp *op,
GskGpuFrame *frame,
GskGLCommandState *state)
{
state->flip_y = 0;
return op->next;
}
static const GskGpuOpClass GSK_GPU_RENDER_PASS_END_OP_CLASS = {
GSK_GPU_OP_SIZE (GskGpuFramePassEndOp),
GSK_GPU_STAGE_END_PASS,
gsk_gpu_render_pass_end_op_finish,
gsk_gpu_render_pass_end_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_render_pass_end_op_vk_command,
#endif
gsk_gpu_render_pass_end_op_gl_command
};
void
gsk_gpu_render_pass_begin_op (GskGpuFrame *frame,
GskGpuImage *image,
const cairo_rectangle_int_t *area,
GskRenderPassType pass_type)
{
GskGpuRenderPassOp *self;
self = (GskGpuRenderPassOp *) gsk_gpu_op_alloc (frame, &GSK_GPU_RENDER_PASS_OP_CLASS);
self->target = g_object_ref (image);
self->area = *area;
self->pass_type = pass_type;
}
void
gsk_gpu_render_pass_end_op (GskGpuFrame *frame,
GskGpuImage *image,
GskRenderPassType pass_type)
{
GskGpuFramePassEndOp *self;
self = (GskGpuFramePassEndOp *) gsk_gpu_op_alloc (frame, &GSK_GPU_RENDER_PASS_END_OP_CLASS);
self->target = g_object_ref (image);
self->pass_type = pass_type;
}
GskGpuImage *
gsk_gpu_render_pass_op_offscreen (GskGpuFrame *frame,
const graphene_vec2_t *scale,
const graphene_rect_t *viewport,
GskRenderNode *node)
{
GskGpuImage *image;
int width, height;
width = ceil (graphene_vec2_get_x (scale) * viewport->size.width);
height = ceil (graphene_vec2_get_y (scale) * viewport->size.height);
image = gsk_gpu_device_create_offscreen_image (gsk_gpu_frame_get_device (frame),
FALSE,
gsk_render_node_get_preferred_depth (node),
width, height);
gsk_gpu_render_pass_begin_op (frame,
image,
&(cairo_rectangle_int_t) { 0, 0, width, height },
GSK_RENDER_PASS_OFFSCREEN);
gsk_gpu_node_processor_process (frame,
image,
&(cairo_rectangle_int_t) { 0, 0, width, height },
node,
viewport);
gsk_gpu_render_pass_end_op (frame,
image,
GSK_RENDER_PASS_OFFSCREEN);
return image;
}


@@ -0,0 +1,32 @@
#pragma once
#include "gskgputypesprivate.h"
#include "gsktypes.h"
#include <graphene.h>
G_BEGIN_DECLS
/* We only need this for the final VkImageLayout, but don't tell anyone */
typedef enum
{
GSK_RENDER_PASS_OFFSCREEN,
GSK_RENDER_PASS_PRESENT
} GskRenderPassType;
void gsk_gpu_render_pass_begin_op (GskGpuFrame *frame,
GskGpuImage *image,
const cairo_rectangle_int_t *area,
GskRenderPassType pass_type);
void gsk_gpu_render_pass_end_op (GskGpuFrame *frame,
GskGpuImage *image,
GskRenderPassType pass_type);
GskGpuImage * gsk_gpu_render_pass_op_offscreen (GskGpuFrame *frame,
const graphene_vec2_t *scale,
const graphene_rect_t *viewport,
GskRenderNode *node);
G_END_DECLS


@@ -0,0 +1,75 @@
#include "config.h"
#include "gskgpuroundedcoloropprivate.h"
#include "gskgpuframeprivate.h"
#include "gskgpuprintprivate.h"
#include "gskgpushaderopprivate.h"
#include "gsk/gskroundedrectprivate.h"
#include "gpu/shaders/gskgpuroundedcolorinstance.h"
typedef struct _GskGpuRoundedColorOp GskGpuRoundedColorOp;
struct _GskGpuRoundedColorOp
{
GskGpuShaderOp op;
};
static void
gsk_gpu_rounded_color_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuShaderOp *shader = (GskGpuShaderOp *) op;
GskGpuRoundedcolorInstance *instance;
instance = (GskGpuRoundedcolorInstance *) gsk_gpu_frame_get_vertex_data (frame, shader->vertex_offset);
gsk_gpu_print_op (string, indent, "rounded-color");
gsk_gpu_print_rounded_rect (string, instance->outline);
gsk_gpu_print_rgba (string, instance->color);
gsk_gpu_print_newline (string);
}
static const GskGpuShaderOpClass GSK_GPU_ROUNDED_COLOR_OP_CLASS = {
{
GSK_GPU_OP_SIZE (GskGpuRoundedColorOp),
GSK_GPU_STAGE_SHADER,
gsk_gpu_shader_op_finish,
gsk_gpu_rounded_color_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_shader_op_vk_command,
#endif
gsk_gpu_shader_op_gl_command
},
"gskgpuroundedcolor",
sizeof (GskGpuRoundedcolorInstance),
#ifdef GDK_RENDERING_VULKAN
&gsk_gpu_roundedcolor_info,
#endif
gsk_gpu_roundedcolor_setup_attrib_locations,
gsk_gpu_roundedcolor_setup_vao
};
void
gsk_gpu_rounded_color_op (GskGpuFrame *frame,
GskGpuShaderClip clip,
const GskRoundedRect *outline,
const graphene_point_t *offset,
const GdkRGBA *color)
{
GskGpuRoundedcolorInstance *instance;
gsk_gpu_shader_op_alloc (frame,
&GSK_GPU_ROUNDED_COLOR_OP_CLASS,
0,
clip,
NULL,
&instance);
gsk_rounded_rect_to_float (outline, offset, instance->outline);
gsk_gpu_rgba_to_float (color, instance->color);
}


@@ -0,0 +1,18 @@
#pragma once
#include "gskgputypesprivate.h"
#include "gsktypes.h"
#include <graphene.h>
G_BEGIN_DECLS
void gsk_gpu_rounded_color_op (GskGpuFrame *frame,
GskGpuShaderClip clip,
const GskRoundedRect *outline,
const graphene_point_t *offset,
const GdkRGBA *color);
G_END_DECLS

gsk/gpu/gskgpuscissorop.c Normal file

@@ -0,0 +1,90 @@
#include "config.h"
#include "gskgpuscissoropprivate.h"
#include "gskgpuopprivate.h"
#include "gskgpuprintprivate.h"
typedef struct _GskGpuScissorOp GskGpuScissorOp;
struct _GskGpuScissorOp
{
GskGpuOp op;
cairo_rectangle_int_t rect;
};
static void
gsk_gpu_scissor_op_finish (GskGpuOp *op)
{
}
static void
gsk_gpu_scissor_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuScissorOp *self = (GskGpuScissorOp *) op;
gsk_gpu_print_op (string, indent, "scissor");
gsk_gpu_print_int_rect (string, &self->rect);
gsk_gpu_print_newline (string);
}
#ifdef GDK_RENDERING_VULKAN
static GskGpuOp *
gsk_gpu_scissor_op_vk_command (GskGpuOp *op,
GskGpuFrame *frame,
GskVulkanCommandState *state)
{
GskGpuScissorOp *self = (GskGpuScissorOp *) op;
vkCmdSetScissor (state->vk_command_buffer,
0,
1,
&(VkRect2D) {
{ self->rect.x, self->rect.y },
{ self->rect.width, self->rect.height },
});
return op->next;
}
#endif
static GskGpuOp *
gsk_gpu_scissor_op_gl_command (GskGpuOp *op,
GskGpuFrame *frame,
GskGLCommandState *state)
{
GskGpuScissorOp *self = (GskGpuScissorOp *) op;
if (state->flip_y)
glScissor (self->rect.x, state->flip_y - self->rect.y - self->rect.height, self->rect.width, self->rect.height);
else
glScissor (self->rect.x, self->rect.y, self->rect.width, self->rect.height);
return op->next;
}
static const GskGpuOpClass GSK_GPU_SCISSOR_OP_CLASS = {
GSK_GPU_OP_SIZE (GskGpuScissorOp),
GSK_GPU_STAGE_COMMAND,
gsk_gpu_scissor_op_finish,
gsk_gpu_scissor_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_scissor_op_vk_command,
#endif
gsk_gpu_scissor_op_gl_command
};
void
gsk_gpu_scissor_op (GskGpuFrame *frame,
const cairo_rectangle_int_t *rect)
{
GskGpuScissorOp *self;
self = (GskGpuScissorOp *) gsk_gpu_op_alloc (frame, &GSK_GPU_SCISSOR_OP_CLASS);
self->rect = *rect;
}


@@ -1,10 +1,10 @@
 #pragma once
-#include "gskvulkanopprivate.h"
+#include "gskgputypesprivate.h"
 G_BEGIN_DECLS
-void gsk_vulkan_scissor_op (GskVulkanRender *render,
+void gsk_gpu_scissor_op (GskGpuFrame *frame,
                          const cairo_rectangle_int_t *rect);

gsk/gpu/gskgpushaderop.c Normal file

@@ -0,0 +1,200 @@
#include "config.h"
#include "gskgpushaderopprivate.h"
#include "gskgpuframeprivate.h"
#include "gskgldescriptorsprivate.h"
#include "gskgldeviceprivate.h"
#include "gskglframeprivate.h"
#include "gskglimageprivate.h"
#ifdef GDK_RENDERING_VULKAN
#include "gskvulkandescriptorsprivate.h"
#include "gskvulkandeviceprivate.h"
#endif
void
gsk_gpu_shader_op_finish (GskGpuOp *op)
{
GskGpuShaderOp *self = (GskGpuShaderOp *) op;
g_clear_object (&self->desc);
}
#ifdef GDK_RENDERING_VULKAN
GskGpuOp *
gsk_gpu_shader_op_vk_command_n (GskGpuOp *op,
GskGpuFrame *frame,
GskVulkanCommandState *state,
gsize instance_scale)
{
GskGpuShaderOp *self = (GskGpuShaderOp *) op;
GskGpuShaderOpClass *shader_op_class = (GskGpuShaderOpClass *) op->op_class;
GskVulkanDescriptors *desc;
GskGpuOp *next;
gsize i;
desc = GSK_VULKAN_DESCRIPTORS (self->desc);
if (desc && state->desc != desc)
{
gsk_vulkan_descriptors_bind (desc, state->desc, state->vk_command_buffer);
state->desc = desc;
}
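/* Merge consecutive ops with the same class, descriptors, variation and
 * clip whose vertex data is contiguous into one instanced draw (capped
 * at 10000); only attempted when nonuniform indexing is available */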
i = 1;
if (gsk_vulkan_device_has_feature (GSK_VULKAN_DEVICE (gsk_gpu_frame_get_device (frame)),
GDK_VULKAN_FEATURE_NONUNIFORM_INDEXING))
{
for (next = op->next; next && i < 10 * 1000; next = next->next)
{
GskGpuShaderOp *next_shader = (GskGpuShaderOp *) next;
if (next->op_class != op->op_class ||
next_shader->desc != self->desc ||
next_shader->variation != self->variation ||
next_shader->clip != self->clip ||
next_shader->vertex_offset != self->vertex_offset + i * shader_op_class->vertex_size)
break;
i++;
}
}
else
next = op->next;
vkCmdBindPipeline (state->vk_command_buffer,
VK_PIPELINE_BIND_POINT_GRAPHICS,
gsk_vulkan_device_get_vk_pipeline (GSK_VULKAN_DEVICE (gsk_gpu_frame_get_device (frame)),
gsk_vulkan_descriptors_get_pipeline_layout (state->desc),
shader_op_class,
self->variation,
self->clip,
state->blend,
state->vk_format,
state->vk_render_pass));
vkCmdDraw (state->vk_command_buffer,
6 * instance_scale, i,
0, self->vertex_offset / shader_op_class->vertex_size);
return next;
}
GskGpuOp *
gsk_gpu_shader_op_vk_command (GskGpuOp *op,
GskGpuFrame *frame,
GskVulkanCommandState *state)
{
return gsk_gpu_shader_op_vk_command_n (op, frame, state, 1);
}
#endif
GskGpuOp *
gsk_gpu_shader_op_gl_command_n (GskGpuOp *op,
GskGpuFrame *frame,
GskGLCommandState *state,
gsize instance_scale)
{
GskGpuShaderOp *self = (GskGpuShaderOp *) op;
GskGpuShaderOpClass *shader_op_class = (GskGpuShaderOpClass *) op->op_class;
GskGLDescriptors *desc;
GskGpuOp *next;
gsize i, n_external;
desc = GSK_GL_DESCRIPTORS (self->desc);
if (desc)
n_external = gsk_gl_descriptors_get_n_external (desc);
else
n_external = 0;
if (state->current_program.op_class != op->op_class ||
state->current_program.variation != self->variation ||
state->current_program.clip != self->clip ||
state->current_program.n_external != n_external)
{
state->current_program.op_class = op->op_class;
state->current_program.variation = self->variation;
state->current_program.clip = self->clip;
state->current_program.n_external = n_external;
gsk_gl_frame_use_program (GSK_GL_FRAME (frame),
shader_op_class,
self->variation,
self->clip,
n_external);
}
if (desc != state->desc && desc)
{
gsk_gl_descriptors_use (desc);
state->desc = desc;
}
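/* Same merging as the Vulkan path: batch identical, vertex-contiguous
 * ops into a single instanced draw. Without base-instance support the
 * VAO is re-pointed at the right vertex offset instead */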
i = 1;
for (next = op->next; next && i < 10 * 1000; next = next->next)
{
GskGpuShaderOp *next_shader = (GskGpuShaderOp *) next;
if (next->op_class != op->op_class ||
next_shader->desc != self->desc ||
next_shader->variation != self->variation ||
next_shader->clip != self->clip ||
next_shader->vertex_offset != self->vertex_offset + i * shader_op_class->vertex_size)
break;
i++;
}
if (gsk_gpu_frame_should_optimize (frame, GSK_GPU_OPTIMIZE_GL_BASE_INSTANCE))
{
glDrawArraysInstancedBaseInstance (GL_TRIANGLES,
0,
6 * instance_scale,
i,
self->vertex_offset / shader_op_class->vertex_size);
}
else
{
shader_op_class->setup_vao (self->vertex_offset);
glDrawArraysInstanced (GL_TRIANGLES,
0,
6 * instance_scale,
i);
}
return next;
}
GskGpuOp *
gsk_gpu_shader_op_gl_command (GskGpuOp *op,
GskGpuFrame *frame,
GskGLCommandState *state)
{
return gsk_gpu_shader_op_gl_command_n (op, frame, state, 1);
}
GskGpuShaderOp *
gsk_gpu_shader_op_alloc (GskGpuFrame *frame,
const GskGpuShaderOpClass *op_class,
guint32 variation,
GskGpuShaderClip clip,
GskGpuDescriptors *desc,
gpointer out_vertex_data)
{
GskGpuShaderOp *self;
self = (GskGpuShaderOp *) gsk_gpu_op_alloc (frame, &op_class->parent_class);
self->variation = variation;
self->clip = clip;
if (desc)
self->desc = g_object_ref (desc);
else
self->desc = NULL;
self->vertex_offset = gsk_gpu_frame_reserve_vertex_data (frame, op_class->vertex_size);
*((gpointer *) out_vertex_data) = gsk_gpu_frame_get_vertex_data (frame, self->vertex_offset);
return self;
}


@@ -0,0 +1,80 @@
#pragma once
#include "gskgpuopprivate.h"
#include "gskgputypesprivate.h"
G_BEGIN_DECLS
struct _GskGpuShaderOp
{
GskGpuOp parent_op;
GskGpuDescriptors *desc;
guint32 variation;
GskGpuShaderClip clip;
gsize vertex_offset;
};
struct _GskGpuShaderOpClass
{
GskGpuOpClass parent_class;
const char * shader_name;
gsize vertex_size;
#ifdef GDK_RENDERING_VULKAN
const VkPipelineVertexInputStateCreateInfo *vertex_input_state;
#endif
void (* setup_attrib_locations) (GLuint program);
void (* setup_vao) (gsize offset);
};
GskGpuShaderOp * gsk_gpu_shader_op_alloc (GskGpuFrame *frame,
const GskGpuShaderOpClass *op_class,
guint32 variation,
GskGpuShaderClip clip,
GskGpuDescriptors *desc,
gpointer out_vertex_data);
void gsk_gpu_shader_op_finish (GskGpuOp *op);
#ifdef GDK_RENDERING_VULKAN
GskGpuOp * gsk_gpu_shader_op_vk_command_n (GskGpuOp *op,
GskGpuFrame *frame,
GskVulkanCommandState *state,
gsize instance_scale);
GskGpuOp * gsk_gpu_shader_op_vk_command (GskGpuOp *op,
GskGpuFrame *frame,
GskVulkanCommandState *state);
#endif
GskGpuOp * gsk_gpu_shader_op_gl_command_n (GskGpuOp *op,
GskGpuFrame *frame,
GskGLCommandState *state,
gsize instance_scale);
GskGpuOp * gsk_gpu_shader_op_gl_command (GskGpuOp *op,
GskGpuFrame *frame,
GskGLCommandState *state);
static inline void
gsk_gpu_rgba_to_float (const GdkRGBA *rgba,
float values[4])
{
values[0] = rgba->red;
values[1] = rgba->green;
values[2] = rgba->blue;
values[3] = rgba->alpha;
}
#include <graphene.h>
static inline void
gsk_gpu_point_to_float (const graphene_point_t *point,
const graphene_point_t *offset,
float values[2])
{
values[0] = point->x + offset->x;
values[1] = point->y + offset->y;
}
G_END_DECLS


@@ -0,0 +1,82 @@
#include "config.h"
#include "gskgpustraightalphaopprivate.h"
#include "gskgpuframeprivate.h"
#include "gskgpuprintprivate.h"
#include "gskrectprivate.h"
#include "gpu/shaders/gskgpustraightalphainstance.h"
#define VARIATION_OPACITY (1 << 0)
#define VARIATION_STRAIGHT_ALPHA (1 << 1)
typedef struct _GskGpuStraightAlphaOp GskGpuStraightAlphaOp;
struct _GskGpuStraightAlphaOp
{
GskGpuShaderOp op;
};
static void
gsk_gpu_straight_alpha_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuShaderOp *shader = (GskGpuShaderOp *) op;
GskGpuStraightalphaInstance *instance;
instance = (GskGpuStraightalphaInstance *) gsk_gpu_frame_get_vertex_data (frame, shader->vertex_offset);
gsk_gpu_print_op (string, indent, "straight-alpha");
gsk_gpu_print_rect (string, instance->rect);
gsk_gpu_print_image_descriptor (string, shader->desc, instance->tex_id);
gsk_gpu_print_newline (string);
}
static const GskGpuShaderOpClass GSK_GPU_STRAIGHT_ALPHA_OP_CLASS = {
{
GSK_GPU_OP_SIZE (GskGpuStraightAlphaOp),
GSK_GPU_STAGE_SHADER,
gsk_gpu_shader_op_finish,
gsk_gpu_straight_alpha_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_shader_op_vk_command,
#endif
gsk_gpu_shader_op_gl_command
},
"gskgpustraightalpha",
sizeof (GskGpuStraightalphaInstance),
#ifdef GDK_RENDERING_VULKAN
&gsk_gpu_straightalpha_info,
#endif
gsk_gpu_straightalpha_setup_attrib_locations,
gsk_gpu_straightalpha_setup_vao
};
void
gsk_gpu_straight_alpha_op (GskGpuFrame *frame,
GskGpuShaderClip clip,
float opacity,
GskGpuDescriptors *desc,
guint32 descriptor,
const graphene_rect_t *rect,
const graphene_point_t *offset,
const graphene_rect_t *tex_rect)
{
GskGpuStraightalphaInstance *instance;
gsk_gpu_shader_op_alloc (frame,
&GSK_GPU_STRAIGHT_ALPHA_OP_CLASS,
(opacity < 1.0 ? VARIATION_OPACITY : 0) |
VARIATION_STRAIGHT_ALPHA,
clip,
desc,
&instance);
gsk_gpu_rect_to_float (rect, offset, instance->rect);
gsk_gpu_rect_to_float (tex_rect, offset, instance->tex_rect);
instance->tex_id = descriptor;
instance->opacity = opacity;
}


@@ -1,17 +1,19 @@
 #pragma once
-#include "gskvulkanopprivate.h"
+#include "gskgpushaderopprivate.h"
 #include <graphene.h>
 G_BEGIN_DECLS
-void gsk_vulkan_inset_shadow_op (GskVulkanRender *render,
-                                 GskVulkanShaderClip clip,
-                                 const GskRoundedRect *outline,
+void gsk_gpu_straight_alpha_op (GskGpuFrame *frame,
+                                GskGpuShaderClip clip,
+                                float opacity,
+                                GskGpuDescriptors *desc,
+                                guint32 descriptor,
                                 const graphene_rect_t *rect,
                                 const graphene_point_t *offset,
-                                const GdkRGBA *color,
-                                const graphene_point_t *shadow_offset,
-                                float spread,
-                                float blur_radius);
+                                const graphene_rect_t *tex_rect);
 G_END_DECLS

gsk/gpu/gskgputextureop.c Normal file

@@ -0,0 +1,76 @@
#include "config.h"
#include "gskgputextureopprivate.h"
#include "gskgpuframeprivate.h"
#include "gskgpuprintprivate.h"
#include "gskrectprivate.h"
#include "gpu/shaders/gskgputextureinstance.h"
typedef struct _GskGpuTextureOp GskGpuTextureOp;
struct _GskGpuTextureOp
{
GskGpuShaderOp op;
};
static void
gsk_gpu_texture_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuShaderOp *shader = (GskGpuShaderOp *) op;
GskGpuTextureInstance *instance;
instance = (GskGpuTextureInstance *) gsk_gpu_frame_get_vertex_data (frame, shader->vertex_offset);
gsk_gpu_print_op (string, indent, "texture");
gsk_gpu_print_rect (string, instance->rect);
gsk_gpu_print_image_descriptor (string, shader->desc, instance->tex_id);
gsk_gpu_print_newline (string);
}
static const GskGpuShaderOpClass GSK_GPU_TEXTURE_OP_CLASS = {
{
GSK_GPU_OP_SIZE (GskGpuTextureOp),
GSK_GPU_STAGE_SHADER,
gsk_gpu_shader_op_finish,
gsk_gpu_texture_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_shader_op_vk_command,
#endif
gsk_gpu_shader_op_gl_command
},
"gskgputexture",
sizeof (GskGpuTextureInstance),
#ifdef GDK_RENDERING_VULKAN
&gsk_gpu_texture_info,
#endif
gsk_gpu_texture_setup_attrib_locations,
gsk_gpu_texture_setup_vao
};
void
gsk_gpu_texture_op (GskGpuFrame *frame,
GskGpuShaderClip clip,
GskGpuDescriptors *desc,
guint32 descriptor,
const graphene_rect_t *rect,
const graphene_point_t *offset,
const graphene_rect_t *tex_rect)
{
GskGpuTextureInstance *instance;
gsk_gpu_shader_op_alloc (frame,
&GSK_GPU_TEXTURE_OP_CLASS,
0,
clip,
desc,
&instance);
gsk_gpu_rect_to_float (rect, offset, instance->rect);
gsk_gpu_rect_to_float (tex_rect, offset, instance->tex_rect);
instance->tex_id = descriptor;
}


@@ -1,13 +1,15 @@
 #pragma once
-#include "gskvulkanopprivate.h"
+#include "gskgpushaderopprivate.h"
 #include <graphene.h>
 G_BEGIN_DECLS
-void gsk_vulkan_texture_op (GskVulkanRender *render,
-GskVulkanShaderClip clip,
-GskVulkanImage *image,
-GskVulkanRenderSampler sampler,
+void gsk_gpu_texture_op (GskGpuFrame *frame,
+GskGpuShaderClip clip,
+GskGpuDescriptors *desc,
+guint32 descriptor,
 const graphene_rect_t *rect,
 const graphene_point_t *offset,
 const graphene_rect_t *tex_rect);


@@ -0,0 +1,122 @@
#pragma once
#include <gdk/gdk.h>
#include "gsk/gskenums.h"
#include "gdk/gdkmemoryformatprivate.h"
#define GSK_GPU_PATTERN_STACK_SIZE 16
typedef struct _GskGLDescriptors GskGLDescriptors;
typedef struct _GskGpuBuffer GskGpuBuffer;
typedef struct _GskGpuDescriptors GskGpuDescriptors;
typedef struct _GskGpuDevice GskGpuDevice;
typedef struct _GskGpuFrame GskGpuFrame;
typedef struct _GskGpuImage GskGpuImage;
typedef struct _GskGpuOp GskGpuOp;
typedef struct _GskGpuOpClass GskGpuOpClass;
typedef struct _GskGpuShaderOp GskGpuShaderOp;
typedef struct _GskGpuShaderOpClass GskGpuShaderOpClass;
typedef struct _GskVulkanDescriptors GskVulkanDescriptors;
typedef struct _GskVulkanSemaphores GskVulkanSemaphores;
typedef enum {
GSK_GPU_IMAGE_EXTERNAL = (1 << 0),
GSK_GPU_IMAGE_TOGGLE_REF = (1 << 1),
GSK_GPU_IMAGE_STRAIGHT_ALPHA = (1 << 2),
GSK_GPU_IMAGE_NO_BLIT = (1 << 3),
GSK_GPU_IMAGE_CAN_MIPMAP = (1 << 4),
GSK_GPU_IMAGE_MIPMAP = (1 << 5),
GSK_GPU_IMAGE_FILTERABLE = (1 << 6),
GSK_GPU_IMAGE_RENDERABLE = (1 << 7),
} GskGpuImageFlags;
typedef enum {
GSK_GPU_SAMPLER_DEFAULT,
GSK_GPU_SAMPLER_TRANSPARENT,
GSK_GPU_SAMPLER_REPEAT,
GSK_GPU_SAMPLER_NEAREST,
GSK_GPU_SAMPLER_MIPMAP_DEFAULT,
/* add more */
GSK_GPU_SAMPLER_N_SAMPLERS
} GskGpuSampler;
typedef enum {
GSK_GPU_SHADER_CLIP_NONE,
GSK_GPU_SHADER_CLIP_RECT,
GSK_GPU_SHADER_CLIP_ROUNDED
} GskGpuShaderClip;
typedef enum {
GSK_GPU_BLEND_OVER,
GSK_GPU_BLEND_ADD
} GskGpuBlend;
typedef enum {
GSK_GPU_PATTERN_DONE,
GSK_GPU_PATTERN_COLOR,
GSK_GPU_PATTERN_OPACITY,
GSK_GPU_PATTERN_TEXTURE,
GSK_GPU_PATTERN_STRAIGHT_ALPHA,
GSK_GPU_PATTERN_COLOR_MATRIX,
GSK_GPU_PATTERN_GLYPHS,
GSK_GPU_PATTERN_LINEAR_GRADIENT,
GSK_GPU_PATTERN_REPEATING_LINEAR_GRADIENT,
GSK_GPU_PATTERN_RADIAL_GRADIENT,
GSK_GPU_PATTERN_REPEATING_RADIAL_GRADIENT,
GSK_GPU_PATTERN_CONIC_GRADIENT,
GSK_GPU_PATTERN_CLIP,
GSK_GPU_PATTERN_ROUNDED_CLIP,
GSK_GPU_PATTERN_REPEAT_PUSH,
GSK_GPU_PATTERN_POSITION_POP,
GSK_GPU_PATTERN_PUSH_COLOR,
GSK_GPU_PATTERN_POP_CROSS_FADE,
GSK_GPU_PATTERN_POP_MASK_ALPHA,
GSK_GPU_PATTERN_POP_MASK_INVERTED_ALPHA,
GSK_GPU_PATTERN_POP_MASK_LUMINANCE,
GSK_GPU_PATTERN_POP_MASK_INVERTED_LUMINANCE,
GSK_GPU_PATTERN_AFFINE,
GSK_GPU_PATTERN_BLEND_DEFAULT,
GSK_GPU_PATTERN_BLEND_MULTIPLY,
GSK_GPU_PATTERN_BLEND_SCREEN,
GSK_GPU_PATTERN_BLEND_OVERLAY,
GSK_GPU_PATTERN_BLEND_DARKEN,
GSK_GPU_PATTERN_BLEND_LIGHTEN,
GSK_GPU_PATTERN_BLEND_COLOR_DODGE,
GSK_GPU_PATTERN_BLEND_COLOR_BURN,
GSK_GPU_PATTERN_BLEND_HARD_LIGHT,
GSK_GPU_PATTERN_BLEND_SOFT_LIGHT,
GSK_GPU_PATTERN_BLEND_DIFFERENCE,
GSK_GPU_PATTERN_BLEND_EXCLUSION,
GSK_GPU_PATTERN_BLEND_COLOR,
GSK_GPU_PATTERN_BLEND_HUE,
GSK_GPU_PATTERN_BLEND_SATURATION,
GSK_GPU_PATTERN_BLEND_LUMINOSITY,
} GskGpuPatternType;
G_STATIC_ASSERT (GSK_GPU_PATTERN_BLEND_MULTIPLY == GSK_GPU_PATTERN_BLEND_DEFAULT + GSK_BLEND_MODE_MULTIPLY);
G_STATIC_ASSERT (GSK_GPU_PATTERN_BLEND_SCREEN == GSK_GPU_PATTERN_BLEND_DEFAULT + GSK_BLEND_MODE_SCREEN);
G_STATIC_ASSERT (GSK_GPU_PATTERN_BLEND_OVERLAY == GSK_GPU_PATTERN_BLEND_DEFAULT + GSK_BLEND_MODE_OVERLAY);
G_STATIC_ASSERT (GSK_GPU_PATTERN_BLEND_DARKEN == GSK_GPU_PATTERN_BLEND_DEFAULT + GSK_BLEND_MODE_DARKEN);
G_STATIC_ASSERT (GSK_GPU_PATTERN_BLEND_LIGHTEN == GSK_GPU_PATTERN_BLEND_DEFAULT + GSK_BLEND_MODE_LIGHTEN);
G_STATIC_ASSERT (GSK_GPU_PATTERN_BLEND_COLOR_DODGE == GSK_GPU_PATTERN_BLEND_DEFAULT + GSK_BLEND_MODE_COLOR_DODGE);
G_STATIC_ASSERT (GSK_GPU_PATTERN_BLEND_COLOR_BURN == GSK_GPU_PATTERN_BLEND_DEFAULT + GSK_BLEND_MODE_COLOR_BURN);
G_STATIC_ASSERT (GSK_GPU_PATTERN_BLEND_HARD_LIGHT == GSK_GPU_PATTERN_BLEND_DEFAULT + GSK_BLEND_MODE_HARD_LIGHT);
G_STATIC_ASSERT (GSK_GPU_PATTERN_BLEND_SOFT_LIGHT == GSK_GPU_PATTERN_BLEND_DEFAULT + GSK_BLEND_MODE_SOFT_LIGHT);
G_STATIC_ASSERT (GSK_GPU_PATTERN_BLEND_DIFFERENCE == GSK_GPU_PATTERN_BLEND_DEFAULT + GSK_BLEND_MODE_DIFFERENCE);
G_STATIC_ASSERT (GSK_GPU_PATTERN_BLEND_EXCLUSION == GSK_GPU_PATTERN_BLEND_DEFAULT + GSK_BLEND_MODE_EXCLUSION);
G_STATIC_ASSERT (GSK_GPU_PATTERN_BLEND_COLOR == GSK_GPU_PATTERN_BLEND_DEFAULT + GSK_BLEND_MODE_COLOR);
G_STATIC_ASSERT (GSK_GPU_PATTERN_BLEND_HUE == GSK_GPU_PATTERN_BLEND_DEFAULT + GSK_BLEND_MODE_HUE);
G_STATIC_ASSERT (GSK_GPU_PATTERN_BLEND_SATURATION == GSK_GPU_PATTERN_BLEND_DEFAULT + GSK_BLEND_MODE_SATURATION);
G_STATIC_ASSERT (GSK_GPU_PATTERN_BLEND_LUMINOSITY == GSK_GPU_PATTERN_BLEND_DEFAULT + GSK_BLEND_MODE_LUMINOSITY);
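
Because of the static asserts above, a blend pattern can be derived from a GskBlendMode with plain offset arithmetic; a minimal sketch (the helper name is illustrative):

/* Sketch: the G_STATIC_ASSERTs above guarantee this stays valid. */
static inline GskGpuPatternType
gsk_gpu_pattern_for_blend_mode (GskBlendMode mode)
{
  return (GskGpuPatternType) (GSK_GPU_PATTERN_BLEND_DEFAULT + mode);
}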
typedef enum {
GSK_GPU_OPTIMIZE_UBER = 1 << 0,
GSK_GPU_OPTIMIZE_CLEAR = 1 << 1,
GSK_GPU_OPTIMIZE_BLIT = 1 << 2,
GSK_GPU_OPTIMIZE_GRADIENTS = 1 << 3,
GSK_GPU_OPTIMIZE_MIPMAP = 1 << 4,
/* These require hardware support */
GSK_GPU_OPTIMIZE_GL_BASE_INSTANCE = 1 << 5,
} GskGpuOptimizations;
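
These are the bits that the GSK_GPU_SKIP environment variable (mentioned in the commit messages above) clears; a simplified sketch of the idea, assuming a plain substring parser rather than GTK's real debug-variable machinery:

/* Illustrative only: GTK parses GSK_GPU_SKIP through its debug-flags
 * helpers; this stand-in shows the effect, not the real code path. */
static GskGpuOptimizations
apply_gsk_gpu_skip (GskGpuOptimizations supported)
{
  const char *skip = g_getenv ("GSK_GPU_SKIP");

  if (skip == NULL)
    return supported;

  if (strstr (skip, "uber"))
    supported &= ~GSK_GPU_OPTIMIZE_UBER;
  if (strstr (skip, "clear"))
    supported &= ~GSK_GPU_OPTIMIZE_CLEAR;
  if (strstr (skip, "blit"))
    supported &= ~GSK_GPU_OPTIMIZE_BLIT;
  if (strstr (skip, "gradients"))
    supported &= ~GSK_GPU_OPTIMIZE_GRADIENTS;
  if (strstr (skip, "mipmap"))
    supported &= ~GSK_GPU_OPTIMIZE_MIPMAP;

  return supported;
}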

gsk/gpu/gskgpuuberop.c Normal file

@@ -0,0 +1,74 @@
#include "config.h"
#include "gskgpuuberopprivate.h"
#include "gskgpuframeprivate.h"
#include "gskgpuprintprivate.h"
#include "gskgpushaderopprivate.h"
#include "gskrectprivate.h"
#include "gpu/shaders/gskgpuuberinstance.h"
typedef struct _GskGpuUberOp GskGpuUberOp;
struct _GskGpuUberOp
{
GskGpuShaderOp op;
};
static void
gsk_gpu_uber_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuShaderOp *shader = (GskGpuShaderOp *) op;
GskGpuUberInstance *instance;
instance = (GskGpuUberInstance *) gsk_gpu_frame_get_vertex_data (frame, shader->vertex_offset);
gsk_gpu_print_op (string, indent, "uber");
gsk_gpu_print_rect (string, instance->rect);
gsk_gpu_print_newline (string);
}
static const GskGpuShaderOpClass GSK_GPU_UBER_OP_CLASS = {
{
GSK_GPU_OP_SIZE (GskGpuUberOp),
GSK_GPU_STAGE_SHADER,
gsk_gpu_shader_op_finish,
gsk_gpu_uber_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_shader_op_vk_command,
#endif
gsk_gpu_shader_op_gl_command
},
"gskgpuuber",
sizeof (GskGpuUberInstance),
#ifdef GDK_RENDERING_VULKAN
&gsk_gpu_uber_info,
#endif
gsk_gpu_uber_setup_attrib_locations,
gsk_gpu_uber_setup_vao
};
void
gsk_gpu_uber_op (GskGpuFrame *frame,
GskGpuShaderClip clip,
const graphene_rect_t *rect,
const graphene_point_t *offset,
GskGpuDescriptors *desc,
guint32 pattern_id)
{
GskGpuUberInstance *instance;
gsk_gpu_shader_op_alloc (frame,
&GSK_GPU_UBER_OP_CLASS,
0,
clip,
desc,
&instance);
gsk_gpu_rect_to_float (rect, offset, instance->rect);
instance->pattern_id = pattern_id;
}
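
Unlike the specialized ops, the uber op carries no per-instance texture data: the pattern_id refers to a stream of GskGpuPatternType opcodes (see the types header above) that the ubershader interprets at fragment time.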


@@ -1,15 +1,17 @@
 #pragma once
-#include "gskvulkanopprivate.h"
+#include "gskgpushaderopprivate.h"
 #include <graphene.h>
 G_BEGIN_DECLS
-void gsk_vulkan_convert_op (GskVulkanRender *render,
-GskVulkanShaderClip clip,
-GskVulkanImage *image,
+void gsk_gpu_uber_op (GskGpuFrame *frame,
+GskGpuShaderClip clip,
 const graphene_rect_t *rect,
 const graphene_point_t *offset,
-const graphene_rect_t *tex_rect);
+GskGpuDescriptors *desc,
+guint32 pattern_id);
 G_END_DECLS

gsk/gpu/gskgpuuploadop.c Normal file

@@ -0,0 +1,626 @@
#include "config.h"
#include "gskgpuuploadopprivate.h"
#include "gskgpuframeprivate.h"
#include "gskgpuimageprivate.h"
#include "gskgpuprintprivate.h"
#include "gskgldeviceprivate.h"
#include "gskglimageprivate.h"
#ifdef GDK_RENDERING_VULKAN
#include "gskvulkanbufferprivate.h"
#include "gskvulkanimageprivate.h"
#endif
#include "gdk/gdkglcontextprivate.h"
#include "gsk/gskdebugprivate.h"
static GskGpuOp *
gsk_gpu_upload_op_gl_command_with_area (GskGpuOp *op,
GskGpuFrame *frame,
GskGpuImage *image,
const cairo_rectangle_int_t *area,
void (* draw_func) (GskGpuOp *, guchar *, gsize))
{
GskGLImage *gl_image = GSK_GL_IMAGE (image);
GdkMemoryFormat format;
GdkGLContext *context;
gsize stride, bpp;
guchar *data;
guint gl_format, gl_type;
context = GDK_GL_CONTEXT (gsk_gpu_frame_get_context (frame));
format = gsk_gpu_image_get_format (image);
bpp = gdk_memory_format_bytes_per_pixel (format);
stride = area->width * bpp;
data = g_malloc (area->height * stride);
draw_func (op, data, stride);
gl_format = gsk_gl_image_get_gl_format (gl_image);
gl_type = gsk_gl_image_get_gl_type (gl_image);
glActiveTexture (GL_TEXTURE0);
gsk_gl_image_bind_texture (gl_image);
glPixelStorei (GL_UNPACK_ALIGNMENT, gdk_memory_format_alignment (format));
/* GL_UNPACK_ROW_LENGTH is available on desktop GL, OpenGL ES >= 3.0, or if
* the GL_EXT_unpack_subimage extension for OpenGL ES 2.0 is available
*/
if (stride == gsk_gpu_image_get_width (image) * bpp)
{
glTexSubImage2D (GL_TEXTURE_2D, 0, area->x, area->y, area->width, area->height, gl_format, gl_type, data);
}
else if (stride % bpp == 0 && gdk_gl_context_has_unpack_subimage (context))
{
glPixelStorei (GL_UNPACK_ROW_LENGTH, stride / bpp);
glTexSubImage2D (GL_TEXTURE_2D, 0, area->x, area->y, area->width, area->height, gl_format, gl_type, data);
glPixelStorei (GL_UNPACK_ROW_LENGTH, 0);
}
else
{
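/* No GL_UNPACK_ROW_LENGTH available: upload the image one row at a time */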
gsize i;
for (i = 0; i < area->height; i++)
glTexSubImage2D (GL_TEXTURE_2D, 0, area->x, area->y + i, area->width, 1, gl_format, gl_type, data + (i * stride));
}
glPixelStorei (GL_UNPACK_ALIGNMENT, 4);
g_free (data);
return op->next;
}
static GskGpuOp *
gsk_gpu_upload_op_gl_command (GskGpuOp *op,
GskGpuFrame *frame,
GskGpuImage *image,
void (* draw_func) (GskGpuOp *, guchar *, gsize))
{
return gsk_gpu_upload_op_gl_command_with_area (op,
frame,
image,
&(cairo_rectangle_int_t) {
0, 0,
gsk_gpu_image_get_width (image),
gsk_gpu_image_get_height (image)
},
draw_func);
}
#ifdef GDK_RENDERING_VULKAN
static GskGpuOp *
gsk_gpu_upload_op_vk_command_with_area (GskGpuOp *op,
GskGpuFrame *frame,
GskVulkanCommandState *state,
GskVulkanImage *image,
const cairo_rectangle_int_t *area,
void (* draw_func) (GskGpuOp *, guchar *, gsize),
GskGpuBuffer **buffer)
{
gsize stride;
guchar *data;
stride = area->width * gdk_memory_format_bytes_per_pixel (gsk_gpu_image_get_format (GSK_GPU_IMAGE (image)));
*buffer = gsk_vulkan_buffer_new_write (GSK_VULKAN_DEVICE (gsk_gpu_frame_get_device (frame)),
area->height * stride);
data = gsk_gpu_buffer_map (*buffer);
draw_func (op, data, stride);
gsk_gpu_buffer_unmap (*buffer);
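/* Make the host write visible to the transfer stage before the copy below reads the staging buffer */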
vkCmdPipelineBarrier (state->vk_command_buffer,
VK_PIPELINE_STAGE_HOST_BIT,
VK_PIPELINE_STAGE_TRANSFER_BIT,
0,
0, NULL,
1, &(VkBufferMemoryBarrier) {
.sType = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER,
.srcAccessMask = VK_ACCESS_HOST_WRITE_BIT,
.dstAccessMask = VK_ACCESS_TRANSFER_READ_BIT,
.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
.buffer = gsk_vulkan_buffer_get_vk_buffer (GSK_VULKAN_BUFFER (*buffer)),
.offset = 0,
.size = VK_WHOLE_SIZE,
},
0, NULL);
gsk_vulkan_image_transition (image,
state->semaphores,
state->vk_command_buffer,
VK_PIPELINE_STAGE_TRANSFER_BIT,
VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
VK_ACCESS_TRANSFER_WRITE_BIT);
vkCmdCopyBufferToImage (state->vk_command_buffer,
gsk_vulkan_buffer_get_vk_buffer (GSK_VULKAN_BUFFER (*buffer)),
gsk_vulkan_image_get_vk_image (image),
VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
1,
(VkBufferImageCopy[1]) {
{
.bufferOffset = 0,
.bufferRowLength = area->width,
.bufferImageHeight = area->height,
.imageSubresource = {
.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
.mipLevel = 0,
.baseArrayLayer = 0,
.layerCount = 1
},
.imageOffset = {
.x = area->x,
.y = area->y,
.z = 0
},
.imageExtent = {
.width = area->width,
.height = area->height,
.depth = 1
}
}
});
return op->next;
}
static GskGpuOp *
gsk_gpu_upload_op_vk_command (GskGpuOp *op,
GskGpuFrame *frame,
GskVulkanCommandState *state,
GskVulkanImage *image,
void (* draw_func) (GskGpuOp *, guchar *, gsize),
GskGpuBuffer **buffer)
{
gsize stride;
guchar *data;
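/* If the image memory is host-mappable, write the pixels directly and skip the staging-buffer path below */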
data = gsk_vulkan_image_get_data (image, &stride);
if (data)
{
draw_func (op, data, stride);
*buffer = NULL;
return op->next;
}
return gsk_gpu_upload_op_vk_command_with_area (op,
frame,
state,
image,
&(cairo_rectangle_int_t) {
0, 0,
gsk_gpu_image_get_width (GSK_GPU_IMAGE (image)),
gsk_gpu_image_get_height (GSK_GPU_IMAGE (image)),
},
draw_func,
buffer);
}
#endif
typedef struct _GskGpuUploadTextureOp GskGpuUploadTextureOp;
struct _GskGpuUploadTextureOp
{
GskGpuOp op;
GskGpuImage *image;
GskGpuBuffer *buffer;
GdkTexture *texture;
};
static void
gsk_gpu_upload_texture_op_finish (GskGpuOp *op)
{
GskGpuUploadTextureOp *self = (GskGpuUploadTextureOp *) op;
g_object_unref (self->image);
g_clear_object (&self->buffer);
g_object_unref (self->texture);
}
static void
gsk_gpu_upload_texture_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuUploadTextureOp *self = (GskGpuUploadTextureOp *) op;
gsk_gpu_print_op (string, indent, "upload-texture");
gsk_gpu_print_image (string, self->image);
gsk_gpu_print_newline (string);
}
static void
gsk_gpu_upload_texture_op_draw (GskGpuOp *op,
guchar *data,
gsize stride)
{
GskGpuUploadTextureOp *self = (GskGpuUploadTextureOp *) op;
GdkTextureDownloader *downloader;
downloader = gdk_texture_downloader_new (self->texture);
gdk_texture_downloader_set_format (downloader, gsk_gpu_image_get_format (self->image));
gdk_texture_downloader_download_into (downloader, data, stride);
gdk_texture_downloader_free (downloader);
}
#ifdef GDK_RENDERING_VULKAN
static GskGpuOp *
gsk_gpu_upload_texture_op_vk_command (GskGpuOp *op,
GskGpuFrame *frame,
GskVulkanCommandState *state)
{
GskGpuUploadTextureOp *self = (GskGpuUploadTextureOp *) op;
return gsk_gpu_upload_op_vk_command (op,
frame,
state,
GSK_VULKAN_IMAGE (self->image),
gsk_gpu_upload_texture_op_draw,
&self->buffer);
}
#endif
static GskGpuOp *
gsk_gpu_upload_texture_op_gl_command (GskGpuOp *op,
GskGpuFrame *frame,
GskGLCommandState *state)
{
GskGpuUploadTextureOp *self = (GskGpuUploadTextureOp *) op;
return gsk_gpu_upload_op_gl_command (op,
frame,
self->image,
gsk_gpu_upload_texture_op_draw);
}
static const GskGpuOpClass GSK_GPU_UPLOAD_TEXTURE_OP_CLASS = {
GSK_GPU_OP_SIZE (GskGpuUploadTextureOp),
GSK_GPU_STAGE_UPLOAD,
gsk_gpu_upload_texture_op_finish,
gsk_gpu_upload_texture_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_upload_texture_op_vk_command,
#endif
gsk_gpu_upload_texture_op_gl_command
};
GskGpuImage *
gsk_gpu_upload_texture_op_try (GskGpuFrame *frame,
gboolean with_mipmap,
GdkTexture *texture)
{
GskGpuUploadTextureOp *self;
GskGpuImage *image;
image = gsk_gpu_device_create_upload_image (gsk_gpu_frame_get_device (frame),
with_mipmap,
gdk_texture_get_format (texture),
gdk_texture_get_width (texture),
gdk_texture_get_height (texture));
if (image == NULL)
return NULL;
if (GSK_DEBUG_CHECK (FALLBACK))
{
GEnumClass *enum_class = g_type_class_ref (GDK_TYPE_MEMORY_FORMAT);
if (gdk_texture_get_format (texture) != gsk_gpu_image_get_format (image))
{
gdk_debug_message ("Unsupported format %s, converting on CPU to %s",
g_enum_get_value (enum_class, gdk_texture_get_format (texture))->value_nick,
g_enum_get_value (enum_class, gsk_gpu_image_get_format (image))->value_nick);
}
if (with_mipmap && !(gsk_gpu_image_get_flags (image) & GSK_GPU_IMAGE_CAN_MIPMAP))
{
gdk_debug_message ("Format %s does not support mipmaps",
g_enum_get_value (enum_class, gsk_gpu_image_get_format (image))->value_nick);
}
g_type_class_unref (enum_class);
}
self = (GskGpuUploadTextureOp *) gsk_gpu_op_alloc (frame, &GSK_GPU_UPLOAD_TEXTURE_OP_CLASS);
self->texture = g_object_ref (texture);
self->image = image;
return self->image;
}
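
One lifetime detail worth noting: the op holds the only reference to the image it created, and gsk_gpu_upload_texture_op_finish() drops it, so the returned pointer is only borrowed for the frame. A sketch of a caller that caches the image (the cache variable is illustrative):

/* Sketch: keeping the uploaded image beyond the current frame. */
GskGpuImage *image;

image = gsk_gpu_upload_texture_op_try (frame, FALSE, texture);
if (image != NULL)
  cached_image = g_object_ref (image);  /* 'cached_image' is illustrative */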
typedef struct _GskGpuUploadCairoOp GskGpuUploadCairoOp;
struct _GskGpuUploadCairoOp
{
GskGpuOp op;
GskGpuImage *image;
graphene_rect_t viewport;
GskGpuCairoFunc func;
gpointer user_data;
GDestroyNotify user_destroy;
GskGpuBuffer *buffer;
};
static void
gsk_gpu_upload_cairo_op_finish (GskGpuOp *op)
{
GskGpuUploadCairoOp *self = (GskGpuUploadCairoOp *) op;
g_object_unref (self->image);
if (self->user_destroy)
self->user_destroy (self->user_data);
g_clear_object (&self->buffer);
}
static void
gsk_gpu_upload_cairo_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuUploadCairoOp *self = (GskGpuUploadCairoOp *) op;
gsk_gpu_print_op (string, indent, "upload-cairo");
gsk_gpu_print_image (string, self->image);
gsk_gpu_print_newline (string);
}
static void
gsk_gpu_upload_cairo_op_draw (GskGpuOp *op,
guchar *data,
gsize stride)
{
GskGpuUploadCairoOp *self = (GskGpuUploadCairoOp *) op;
cairo_surface_t *surface;
cairo_t *cr;
int width, height;
width = gsk_gpu_image_get_width (self->image);
height = gsk_gpu_image_get_height (self->image);
surface = cairo_image_surface_create_for_data (data,
CAIRO_FORMAT_ARGB32,
width, height,
stride);
cairo_surface_set_device_scale (surface,
width / self->viewport.size.width,
height / self->viewport.size.height);
cr = cairo_create (surface);
cairo_set_operator (cr, CAIRO_OPERATOR_CLEAR);
cairo_paint (cr);
cairo_set_operator (cr, CAIRO_OPERATOR_OVER);
cairo_translate (cr, -self->viewport.origin.x, -self->viewport.origin.y);
self->func (self->user_data, cr);
cairo_destroy (cr);
cairo_surface_finish (surface);
cairo_surface_destroy (surface);
}
#ifdef GDK_RENDERING_VULKAN
static GskGpuOp *
gsk_gpu_upload_cairo_op_vk_command (GskGpuOp *op,
GskGpuFrame *frame,
GskVulkanCommandState *state)
{
GskGpuUploadCairoOp *self = (GskGpuUploadCairoOp *) op;
return gsk_gpu_upload_op_vk_command (op,
frame,
state,
GSK_VULKAN_IMAGE (self->image),
gsk_gpu_upload_cairo_op_draw,
&self->buffer);
}
#endif
static GskGpuOp *
gsk_gpu_upload_cairo_op_gl_command (GskGpuOp *op,
GskGpuFrame *frame,
GskGLCommandState *state)
{
GskGpuUploadCairoOp *self = (GskGpuUploadCairoOp *) op;
return gsk_gpu_upload_op_gl_command (op,
frame,
self->image,
gsk_gpu_upload_cairo_op_draw);
}
static const GskGpuOpClass GSK_GPU_UPLOAD_CAIRO_OP_CLASS = {
GSK_GPU_OP_SIZE (GskGpuUploadCairoOp),
GSK_GPU_STAGE_UPLOAD,
gsk_gpu_upload_cairo_op_finish,
gsk_gpu_upload_cairo_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_upload_cairo_op_vk_command,
#endif
gsk_gpu_upload_cairo_op_gl_command
};
GskGpuImage *
gsk_gpu_upload_cairo_op (GskGpuFrame *frame,
const graphene_vec2_t *scale,
const graphene_rect_t *viewport,
GskGpuCairoFunc func,
gpointer user_data,
GDestroyNotify user_destroy)
{
GskGpuUploadCairoOp *self;
self = (GskGpuUploadCairoOp *) gsk_gpu_op_alloc (frame, &GSK_GPU_UPLOAD_CAIRO_OP_CLASS);
self->image = gsk_gpu_device_create_upload_image (gsk_gpu_frame_get_device (frame),
FALSE,
GDK_MEMORY_DEFAULT,
ceil (graphene_vec2_get_x (scale) * viewport->size.width),
ceil (graphene_vec2_get_y (scale) * viewport->size.height));
self->viewport = *viewport;
self->func = func;
self->user_data = user_data;
self->user_destroy = user_destroy;
return self->image;
}
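
The func is not run here; it is invoked later, at command-emission time, with a cairo context whose device scale and translation are already set up (see gsk_gpu_upload_cairo_op_draw() above), so it can draw in viewport coordinates. A hedged sketch of a caller that renders a node as fallback content; frame, scale and node are assumed to be in scope:

/* Illustrative GskGpuCairoFunc: render a node through cairo. */
static void
draw_node_fallback (gpointer user_data,
                    cairo_t *cr)
{
  gsk_render_node_draw (GSK_RENDER_NODE (user_data), cr);
}

image = gsk_gpu_upload_cairo_op (frame,
                                 &scale,
                                 &node->bounds,
                                 draw_node_fallback,
                                 gsk_render_node_ref (node),
                                 (GDestroyNotify) gsk_render_node_unref);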
typedef struct _GskGpuUploadGlyphOp GskGpuUploadGlyphOp;
struct _GskGpuUploadGlyphOp
{
GskGpuOp op;
GskGpuImage *image;
cairo_rectangle_int_t area;
PangoFont *font;
PangoGlyph glyph;
float scale;
graphene_point_t origin;
GskGpuBuffer *buffer;
};
static void
gsk_gpu_upload_glyph_op_finish (GskGpuOp *op)
{
GskGpuUploadGlyphOp *self = (GskGpuUploadGlyphOp *) op;
g_object_unref (self->image);
g_object_unref (self->font);
g_clear_object (&self->buffer);
}
static void
gsk_gpu_upload_glyph_op_print (GskGpuOp *op,
GskGpuFrame *frame,
GString *string,
guint indent)
{
GskGpuUploadGlyphOp *self = (GskGpuUploadGlyphOp *) op;
gsk_gpu_print_op (string, indent, "upload-glyph");
gsk_gpu_print_int_rect (string, &self->area);
g_string_append_printf (string, "glyph %u @ %g ", self->glyph, self->scale);
gsk_gpu_print_newline (string);
}
static void
gsk_gpu_upload_glyph_op_draw (GskGpuOp *op,
guchar *data,
gsize stride)
{
GskGpuUploadGlyphOp *self = (GskGpuUploadGlyphOp *) op;
cairo_surface_t *surface;
cairo_t *cr;
surface = cairo_image_surface_create_for_data (data,
CAIRO_FORMAT_ARGB32,
self->area.width,
self->area.height,
stride);
cairo_surface_set_device_offset (surface, self->origin.x, self->origin.y);
cairo_surface_set_device_scale (surface, self->scale, self->scale);
cr = cairo_create (surface);
cairo_set_operator (cr, CAIRO_OPERATOR_CLEAR);
cairo_paint (cr);
cairo_set_operator (cr, CAIRO_OPERATOR_OVER);
/* Make sure the entire surface is initialized to transparent black */
cairo_set_source_rgba (cr, 0, 0, 0, 0);
cairo_rectangle (cr, 0.0, 0.0, self->area.width, self->area.height);
cairo_fill (cr);
/* Draw glyph */
cairo_set_source_rgba (cr, 1, 1, 1, 1);
pango_cairo_show_glyph_string (cr,
self->font,
&(PangoGlyphString) {
.num_glyphs = 1,
.glyphs = (PangoGlyphInfo[1]) { {
.glyph = self->glyph
} }
});
cairo_destroy (cr);
cairo_surface_finish (surface);
cairo_surface_destroy (surface);
}
#ifdef GDK_RENDERING_VULKAN
static GskGpuOp *
gsk_gpu_upload_glyph_op_vk_command (GskGpuOp *op,
GskGpuFrame *frame,
GskVulkanCommandState *state)
{
GskGpuUploadGlyphOp *self = (GskGpuUploadGlyphOp *) op;
return gsk_gpu_upload_op_vk_command_with_area (op,
frame,
state,
GSK_VULKAN_IMAGE (self->image),
&self->area,
gsk_gpu_upload_glyph_op_draw,
&self->buffer);
}
#endif
static GskGpuOp *
gsk_gpu_upload_glyph_op_gl_command (GskGpuOp *op,
GskGpuFrame *frame,
GskGLCommandState *state)
{
GskGpuUploadGlyphOp *self = (GskGpuUploadGlyphOp *) op;
return gsk_gpu_upload_op_gl_command_with_area (op,
frame,
self->image,
&self->area,
gsk_gpu_upload_glyph_op_draw);
}
static const GskGpuOpClass GSK_GPU_UPLOAD_GLYPH_OP_CLASS = {
GSK_GPU_OP_SIZE (GskGpuUploadGlyphOp),
GSK_GPU_STAGE_UPLOAD,
gsk_gpu_upload_glyph_op_finish,
gsk_gpu_upload_glyph_op_print,
#ifdef GDK_RENDERING_VULKAN
gsk_gpu_upload_glyph_op_vk_command,
#endif
gsk_gpu_upload_glyph_op_gl_command,
};
void
gsk_gpu_upload_glyph_op (GskGpuFrame *frame,
GskGpuImage *image,
PangoFont *font,
const PangoGlyph glyph,
const cairo_rectangle_int_t *area,
float scale,
const graphene_point_t *origin)
{
GskGpuUploadGlyphOp *self;
self = (GskGpuUploadGlyphOp *) gsk_gpu_op_alloc (frame, &GSK_GPU_UPLOAD_GLYPH_OP_CLASS);
self->image = g_object_ref (image);
self->area = *area;
self->font = g_object_ref (font);
self->glyph = glyph;
self->scale = scale;
self->origin = *origin;
}


@@ -0,0 +1,32 @@
#pragma once
#include "gskgpuopprivate.h"
#include "gsktypes.h"
G_BEGIN_DECLS
typedef void (* GskGpuCairoFunc) (gpointer user_data,
cairo_t *cr);
GskGpuImage * gsk_gpu_upload_texture_op_try (GskGpuFrame *frame,
gboolean with_mipmap,
GdkTexture *texture);
GskGpuImage * gsk_gpu_upload_cairo_op (GskGpuFrame *frame,
const graphene_vec2_t *scale,
const graphene_rect_t *viewport,
GskGpuCairoFunc func,
gpointer user_data,
GDestroyNotify user_destroy);
void gsk_gpu_upload_glyph_op (GskGpuFrame *frame,
GskGpuImage *image,
PangoFont *font,
PangoGlyph glyph,
const cairo_rectangle_int_t *area,
float scale,
const graphene_point_t *origin);
G_END_DECLS

gsk/gpu/gsknglrenderer.c Normal file

@@ -0,0 +1,152 @@
#include "config.h"
#include "gsknglrendererprivate.h"
#include "gskgpuimageprivate.h"
#include "gskgpurendererprivate.h"
#include "gskgldeviceprivate.h"
#include "gskglframeprivate.h"
#include "gskglimageprivate.h"
#include "gdk/gdkdisplayprivate.h"
#include "gdk/gdkglcontextprivate.h"
struct _GskNglRenderer
{
GskGpuRenderer parent_instance;
GskGpuImage *backbuffer;
};
struct _GskNglRendererClass
{
GskGpuRendererClass parent_class;
};
G_DEFINE_TYPE (GskNglRenderer, gsk_ngl_renderer, GSK_TYPE_GPU_RENDERER)
static GdkDrawContext *
gsk_ngl_renderer_create_context (GskGpuRenderer *renderer,
GdkDisplay *display,
GdkSurface *surface,
GskGpuOptimizations *supported,
GError **error)
{
GdkGLContext *context;
if (surface)
context = gdk_surface_create_gl_context (surface, error);
else
context = gdk_display_create_gl_context (display, error);
if (context == NULL)
return NULL;
/* GLES 2 is not supported */
gdk_gl_context_set_required_version (context, 3, 0);
if (!gdk_gl_context_realize (context, error))
{
g_object_unref (context);
return NULL;
}
gdk_gl_context_make_current (context);
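/* Start from all optimizations enabled, then clear the unsupported bits */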
*supported = -1;
if (!gdk_gl_context_check_version (context, "4.2", "9.9") &&
!epoxy_has_gl_extension ("GL_EXT_base_instance") &&
!epoxy_has_gl_extension ("GL_ARB_base_instance"))
*supported &= ~GSK_GPU_OPTIMIZE_GL_BASE_INSTANCE;
return GDK_DRAW_CONTEXT (context);
}
static void
gsk_ngl_renderer_make_current (GskGpuRenderer *renderer)
{
gdk_gl_context_make_current (GDK_GL_CONTEXT (gsk_gpu_renderer_get_context (renderer)));
}
static GskGpuImage *
gsk_ngl_renderer_get_backbuffer (GskGpuRenderer *renderer)
{
GskNglRenderer *self = GSK_NGL_RENDERER (renderer);
GdkDrawContext *context;
GdkSurface *surface;
double scale;
context = gsk_gpu_renderer_get_context (renderer);
surface = gdk_draw_context_get_surface (context);
scale = gsk_gpu_renderer_get_scale (renderer);
if (self->backbuffer == NULL ||
gsk_gpu_image_get_width (self->backbuffer) != ceil (gdk_surface_get_width (surface) * scale) ||
gsk_gpu_image_get_height (self->backbuffer) != ceil (gdk_surface_get_height (surface) * scale))
{
g_clear_object (&self->backbuffer);
self->backbuffer = gsk_gl_image_new_backbuffer (GSK_GL_DEVICE (gsk_gpu_renderer_get_device (renderer)),
GDK_MEMORY_DEFAULT /* FIXME */,
ceil (gdk_surface_get_width (surface) * scale),
ceil (gdk_surface_get_height (surface) * scale));
}
return self->backbuffer;
}
static void
gsk_ngl_renderer_wait (GskGpuRenderer *self,
GskGpuFrame **frame,
gsize n_frames)
{
}
static double
gsk_ngl_renderer_get_scale (GskGpuRenderer *self)
{
GdkDrawContext *context = gsk_gpu_renderer_get_context (self);
return gdk_gl_context_get_scale (GDK_GL_CONTEXT (context));
}
static GdkDmabufFormats *
gsk_ngl_renderer_get_dmabuf_formats (GskGpuRenderer *renderer)
{
GdkDisplay *display = GDK_DISPLAY (gdk_draw_context_get_display (gsk_gpu_renderer_get_context (renderer)));
return display->egl_dmabuf_formats;
}
static void
gsk_ngl_renderer_class_init (GskNglRendererClass *klass)
{
GskGpuRendererClass *gpu_renderer_class = GSK_GPU_RENDERER_CLASS (klass);
gpu_renderer_class->frame_type = GSK_TYPE_GL_FRAME;
gpu_renderer_class->get_device = gsk_gl_device_get_for_display;
gpu_renderer_class->create_context = gsk_ngl_renderer_create_context;
gpu_renderer_class->make_current = gsk_ngl_renderer_make_current;
gpu_renderer_class->get_backbuffer = gsk_ngl_renderer_get_backbuffer;
gpu_renderer_class->wait = gsk_ngl_renderer_wait;
gpu_renderer_class->get_scale = gsk_ngl_renderer_get_scale;
gpu_renderer_class->get_dmabuf_formats = gsk_ngl_renderer_get_dmabuf_formats;
}
static void
gsk_ngl_renderer_init (GskNglRenderer *self)
{
}
/**
* gsk_ngl_renderer_new:
*
* Creates an instance of the new experimental GL renderer.
*
* Returns: (transfer full): a new GL renderer
*/
GskRenderer *
gsk_ngl_renderer_new (void)
{
return g_object_new (GSK_TYPE_NGL_RENDERER, NULL);
}
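
Like any GskRenderer, the instance still has to be realized before it can render; a minimal usage sketch, assuming a GdkSurface in hand:

/* Sketch: bring up the experimental NGL renderer on a surface. */
GskRenderer *renderer = gsk_ngl_renderer_new ();
GError *error = NULL;

if (!gsk_renderer_realize (renderer, surface, &error))
  {
    g_warning ("NGL renderer: %s", error->message);
    g_clear_error (&error);
    g_clear_object (&renderer);
  }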

Some files were not shown because too many files have changed in this diff.