It's not possible to get the surface damage before calling
Scene::paint(), which is a big problem because it blocks proper surface
damage and buffer damage calculation when walking the render layer tree.
This change reworks the scene compositing stages to allow getting the
next surface damage before calling Scene::paint().
The main challenge is that effects can expand the surface damage, so we
have to call prePaintWindow() and prePaintScreen() before actually
starting to paint. Currently, however, prePaintWindow() is called only
after rendering has started.
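As a rough illustration (the names below are made up, not the actual
effects API), an effect that paints something around a window, e.g. a
shadow, needs to grow the damaged area during the pre-paint pass, before
any rendering happens:

    #include <QRegion>

    // Expand each damaged rect so the area covered by the shadow is
    // repainted as well.
    QRegion expandDamageForShadow(const QRegion &windowDamage, int shadowRadius)
    {
        QRegion expanded;
        for (const QRect &rect : windowDamage) {
            expanded += rect.adjusted(-shadowRadius, -shadowRadius,
                                      shadowRadius, shadowRadius);
        }
        return expanded;
    }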
This change makes the Scene call prePaintWindow() and prePaintScreen()
before painting starts, so it's possible to know the surface damage
beforehand. Unfortunately, it's also a breaking change: some fullscreen
effects will have to adapt to the new Scene paint order. Paint hooks
will be invoked in the following order (sketched below):
* prePaintScreen() once per frame
* prePaintWindow() once per frame
* paintScreen() can be called multiple times
* paintWindow() can be called as many times as paintScreen()
* postPaintWindow() once per frame
* postPaintScreen() once per frame
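A minimal sketch of that order, with hypothetical names and stub bodies
(this is not the actual Scene code):

    #include <vector>

    struct WindowSketch {};

    void prePaintScreen() {}
    void prePaintWindow(WindowSketch *) {}
    void paintScreen() {}   // in the real Scene, this in turn calls paintWindow()
    void postPaintWindow(WindowSketch *) {}
    void postPaintScreen() {}

    void paintFrame(const std::vector<WindowSketch *> &stackingOrder, int passes)
    {
        prePaintScreen();                      // once per frame
        for (WindowSketch *w : stackingOrder) {
            prePaintWindow(w);                 // once per frame (per window)
        }
        for (int i = 0; i < passes; ++i) {
            paintScreen();                     // possibly several times per frame
        }
        for (WindowSketch *w : stackingOrder) {
            postPaintWindow(w);                // once per frame (per window)
        }
        postPaintScreen();                     // once per frame
    }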
After walking the render layer tree, the Compositor will poke the render
backend for the back buffer repair region and combine it with the
surface damage to get the buffer damage. The buffer damage can then be
passed to the render backend (in order to optimize performance with
tiled GPUs) and to Scene::paint(), which will determine what parts of
the scene have to be repainted based on the buffer damage.
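In terms of regions, the combination step boils down to a union (the
helper name here is made up for illustration):

    #include <QRegion>

    // The buffer damage covers both what changes between the current and
    // the next frame (surface damage) and what must be repainted to bring
    // the back buffer up to date (repair region).
    QRegion computeBufferDamage(const QRegion &surfaceDamage,
                                const QRegion &bufferRepairRegion)
    {
        return surfaceDamage | bufferRepairRegion;
    }

On tiled GPUs, the backend can feed this region to the driver, e.g. via
EGL_KHR_partial_update, so only the damaged tiles have to be touched.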
The responsibilities of the Scene must be reduced to painting only so we
can move forward with layer-based compositing.
This change moves the direct scanout logic from the OpenGL scene to the
base scene class and the Compositor. It makes the OpenGL scene less
overloaded and allows the direct scanout logic to be shared.
This makes the Scene less overloaded and it's needed for things such as
render layers.
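For context, direct scanout means skipping compositing entirely when a
fullscreen window's buffer can be put on the output as-is. A very rough
sketch of the decision, with made-up names and simplified criteria (not
the actual KWin code):

    // Hypothetical types; in reality this involves the output, hardware
    // planes and the client buffer format.
    struct OutputSketch { int width; int height; };
    struct BufferSketch { int width; int height; bool opaque; };

    // Returns true if the fullscreen window's buffer can be presented
    // directly, bypassing the Scene; otherwise the frame is composited.
    bool canScanOutDirectly(const OutputSketch &output, const BufferSketch &buffer)
    {
        return buffer.opaque
            && buffer.width == output.width
            && buffer.height == output.height;
    }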
In hindsight, it would be great to merge checkGraphicsReset() and
beginFrame(), e.g. make beginFrame() return the status, like in QRhi or
VkSwapchain. If the status is OUT_OF_DATE or something similar,
reinitialize the compositor.
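Something along these lines (purely hypothetical names, not an actual
API proposal made by this change):

    enum class FrameStatus {
        Ok,        // frame started, proceed with painting
        OutOfDate, // graphics reset / swapchain out of date
    };

    struct BackendSketch {
        FrameStatus beginFrame() { return FrameStatus::Ok; } // stub
    };

    void compositeOnce(BackendSketch &backend)
    {
        if (backend.beginFrame() == FrameStatus::OutOfDate) {
            // Reinitialize compositing instead of painting this frame.
            return;
        }
        // ... paint the scene and submit the frame ...
    }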
This unifies the frame hooks for the OpenGL and QPainter render
backends. There are a couple of reasons why it's a good idea: it
provides one mental framework for starting to paint a frame, and it lets
the Compositor start and submit frames itself. The latter is very cool
because it gives the Compositor more power over compositing.
Besides unifying the frame hooks, this also cleans up the argument
naming mess in endFrame() a bit. As is, "damage" and "damagedRegion" are
very confusing names. The "damage" argument has been renamed to
"renderedRegion" because that's what it is: the region that has been
repainted by the Scene. It's different from the damagedRegion, which
specifies the surface damage, i.e. the difference between the current
and the next frame, while the renderedRegion may also include a region
that had to be repainted to repair the back buffer. The main reason we
need the renderedRegion is the X11 platform; on Wayland, it's unused.
In the future, we will need to extend this API with output layers.
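To illustrate the naming (this declaration is only illustrative; the
real endFrame() may take additional parameters, e.g. which output is
being painted):

    #include <QRegion>

    // renderedRegion: everything the Scene actually repainted this frame,
    //                 possibly including extra area repainted only to
    //                 repair the back buffer (needed on X11, unused on
    //                 Wayland).
    // damagedRegion:  the surface damage, i.e. the difference between the
    //                 current frame and the next one.
    void endFrame(const QRegion &renderedRegion, const QRegion &damagedRegion);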
The main idea behind the render backend is to decouple the low-level
bits from the scenes. The end goal is to make the render backend provide
render targets that the scene can render into.
Design-wise, such a split is more flexible than the current state. For
example, we could start experimenting with QtQuick (assuming that the
legacy scene is properly encapsulated) or create multiple scenes, e.g.
one per output layer.
So far, the RenderBackend class contains only one getter; more things
will be moved from the Scene as it makes sense.
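A rough sketch of where this is heading, with hypothetical names (not
the actual interface):

    #include <QRegion>

    class RenderTargetSketch; // e.g. a wrapper around an FBO or a QImage

    // The backend hides the low-level, API-specific bits and hands the
    // scene a render target to paint into.
    class RenderBackendSketch
    {
    public:
        virtual ~RenderBackendSketch() = default;

        virtual RenderTargetSketch *beginFrame() = 0;
        virtual void endFrame(const QRegion &renderedRegion,
                              const QRegion &damagedRegion) = 0;
    };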