AbstractOutput is not so abstract, and it's common to avoid the word
"Abstract" in class names because it doesn't contribute any new
information. Dropping it also significantly reduces the line width in
some places.
The main motivation behind this change is to unify the render target
representation across the opengl and software renderers and to avoid
accessing the render backend directly in order to get the render target.
Using the global coordinate system when specifying output layer damage
regions would be very confusing. In order to make the coordinate system
comprehensible, use the layer-local coordinate system.
The infinite region is used to tell the Compositor when it needs to
repaint the entire layer.
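An illustrative sketch of both conventions; layer->addRepaint() is a
hypothetical call, while infiniteRegion() mirrors kwin's existing helper:

    // Damage is reported in layer-local coordinates: a cursor layer
    // repaints its own top-left 24x24 corner like this, regardless of
    // where the layer sits on the screen.
    layer->addRepaint(QRegion(0, 0, 24, 24));

    // An effectively unbounded region that tells the Compositor to
    // repaint the whole layer.
    inline QRegion infiniteRegion()
    {
        return QRegion(INT_MIN / 2, INT_MIN / 2, INT_MAX, INT_MAX);
    }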
The .clang-format file is based on the one in ECM, except for the
following style options:
- AlwaysBreakBeforeMultilineStrings
- BinPackArguments
- BinPackParameters
- ColumnLimit
- BreakBeforeBraces
- KeepEmptyLinesAtTheStartOfBlocks
Some effects (e.g. AnimationEffect) transform windows without setting
the PAINT_SCREEN_WITH_TRANSFORMED_WINDOWS flag. Make the scene disable
render optimizations in that case.
Whether AnimationEffect does the right thing here is up for debate.
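A rough sketch of the idea; makesWindowsTransformed() is a hypothetical
query, not the actual kwin API:

    // If an effect may transform windows without announcing it, behave
    // as if the whole screen were transformed and skip the per-window
    // clipping optimizations.
    if (effects->makesWindowsTransformed()) { // hypothetical helper
        data.mask |= PAINT_SCREEN_WITH_TRANSFORMED_WINDOWS;
    }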
Window painting is no longer split into two phases, PAINT_WINDOW_OPAQUE
and PAINT_WINDOW_TRANSLUCENT.
PAINT_WINDOW_TRANSLUCENT is used as a hint to the occlusion culling
logic to ignore the opaque region.
Given that, the handling of the opaque region can be simplified: if no
effect sets the PAINT_WINDOW_TRANSLUCENT flag, the opaque region can be
used as is.
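A sketch of the resulting occlusion pass (field names are illustrative):

    // Walk windows top to bottom; a window's opaque region hides what
    // is below it, unless an effect flagged the window as translucent.
    QRegion occluded;
    for (WindowPrePaintData &data : topToBottom) {
        data.clip = occluded; // area this window doesn't need to paint
        if (!(data.mask & PAINT_WINDOW_TRANSLUCENT)) {
            occluded |= data.opaque.translated(data.position);
        }
    }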
It's not possible to get the surface damage before calling
Scene::paint(), which is a big problem because it blocks proper surface
damage and buffer damage calculation when walking the render layer tree.
This change reworks the scene compositing stages to allow getting the
next surface damage before calling Scene::paint().
The main challenge is that effects can expand the surface damage, so we
have to call prePaintWindow() and prePaintScreen() before actually
starting to paint. However, prePaintWindow() is currently called only
after rendering has started.
This change makes the Scene call prePaintWindow() and prePaintScreen()
up front so that the surface damage is known beforehand. Unfortunately,
it's also a breaking change: some fullscreen effects will have to adapt
to the new Scene paint order. Paint hooks will be invoked in the
following order (a simplified sketch follows the list):
* prePaintScreen() once per frame
* prePaintWindow() once per frame
* paintScreen() can be called multiple times
* paintWindow() can be called as many times as paintScreen()
* postPaintWindow() once per frame
* postPaintScreen() once per frame
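In simplified pseudo-C++ (hook names from the list above, everything
else illustrative), a frame now proceeds as follows:

    prePaintScreen(screenData, presentTime);          // once per frame
    for (Window *w : stackingOrder) {
        prePaintWindow(windowData[w], presentTime);   // once per frame
    }
    // The surface damage is fully known here, before rendering starts.
    while (needsAnotherPass) {
        paintScreen(mask, region, screenData);        // may run several times;
    }                                                 // paintWindow() runs inside it
    for (Window *w : stackingOrder) {
        postPaintWindow(w);                           // once per frame
    }
    postPaintScreen();                                // once per frame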
After walking the render layer tree, the Compositor will poke the render
backend for the back buffer repair region and combine it with the
surface damage to get the buffer damage, which can be passed to the
render backend (in order to optimize performance on tiled gpus) and to
Scene::paint(), which will determine what parts of the scene have to be
repainted based on the buffer damage.
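Conceptually (the accessor names are assumptions for illustration):

    // Combine scene damage with the back buffer's stale region (e.g.
    // due to buffer age) to compute this frame's buffer damage.
    const QRegion surfaceDamage = layerTree->collectDamage();
    const QRegion repair = backend->backBufferRepairRegion(output);
    const QRegion bufferDamage = surfaceDamage | repair;

    backend->beginFrame(output, bufferDamage); // hint for tiled GPUs
    scene->paint(bufferDamage);                // repaint only damaged parts
    backend->endFrame(output, surfaceDamage);  // surface damage is presented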
We already try to ensure that the surface damage is within the render
target bounds. Avoid clipping the surface damage in the render backend;
it's a somewhat excessive task there and arguably belongs an abstraction
level above.
If the main surface is translucent (e.g. it contains only the drop
shadow) but its subsurface is opaque, the "window->isOpaque()" check
will produce a false positive.
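For instance, querying opacity per surface item instead of per window
avoids the false positive (a sketch with assumed helper names):

    // Collect the opaque region from every surface item in the
    // window's tree rather than trusting a single per-window flag.
    QRegion opaque;
    for (SurfaceItem *item : collectSurfaceItems(window)) { // assumed helper
        opaque |= item->opaqueRegion().translated(item->positionInWindow());
    }
    const bool fullyOpaque = opaque.contains(window->frameGeometry());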
The software cursor has always been a major source of problems.
Hopefully, porting it to RenderLayer will help us with that.
Note that the cursor layer is currently visible only when using the
software cursor; however, this will change once the Compositor can
allocate a real hardware cursor plane.
Currently, the software cursor uses graphics-specific APIs (OpenGL and
QPainter) to paint itself. That will change in the future once the
rendering parts are extracted from the Scene into a reusable helper.
This is the first tiny step towards layer-based compositing in kwin.
The RenderLayer represents a layer with some contents. The actual
contents are represented by the RenderLayerDelegate class.
Currently, the RenderLayer is just a simple class responsible for
geometry and repaints, but it will grow in the future; for example,
render layers need to form a tree.
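A minimal sketch of the split (the signatures are illustrative):

    // RenderLayer handles geometry and repaint bookkeeping; the actual
    // contents of the layer are painted by a delegate.
    class RenderLayerDelegate
    {
    public:
        virtual ~RenderLayerDelegate() = default;
        virtual void paint(const QRegion &region) = 0;
    };

    class RenderLayer : public QObject
    {
    public:
        QRect geometry() const;
        void setGeometry(const QRect &rect);
        void addRepaint(const QRegion &region); // layer-local coordinates
        RenderLayerDelegate *delegate() const;
        // Later: parent()/children() so layers can form a tree.
    };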
The next big missing component in layer-based compositing is output
layers. When output layers are added, each render layer will either have
an output layer assigned to it or inherit its output layer from its
parent.
The render layer tree wouldn't be affected by changes to the output
layer tree, so the transition between software and hardware cursors can
be seamless.
The next big milestone will be to try to port some of the existing kwin
functionality to the RenderLayer, e.g. the software cursor or screen
edges.
The responsibilities of the Scene must be reduced to painting only so we
can move forward with the layer-based compositing.
This change moves the direct scanout logic from the opengl scene to the
base scene class and the compositor. It makes the opengl scene less
overloaded and allows the direct scanout logic to be shared.
paintScreen() already tries to ensure that the damage region doesn't
extend outside the scene geometry. With this change, it will also clip
the damage region to the render target rect, which saves us an extra
region intersection and simplifies the code that calls paintScreen().
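i.e. roughly (renderTargetRect() is an assumed accessor):

    // Clamp incoming damage once, inside paintScreen(), so callers no
    // longer perform their own region intersection.
    QRegion damage = requestedDamage & QRegion(renderTargetRect());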
Having a render loop in the Platform has always been awkward. Another
way to interpret a platform that doesn't support per-screen rendering is
that all outputs share the same render loop.
On X11, Scene::painted_screen is going to correspond to the primary
screen, but we should not rely on that assumption!
Neither SceneQPainter nor SceneOpenGL has to compute the projection
matrix by itself. It can be done by the Scene when setting the
projection matrix. The main benefit of this change is that it reduces
the amount of custom setup code around paintScreen(), which brings us
one step closer to getting rid of the graphics-specific paint()
functions and just calling paintScreen().
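The shared setup could look roughly like this (a sketch using
QMatrix4x4; the actual Scene code may differ):

    // Build a y-down orthographic projection from the render target
    // rect once, in the Scene, instead of in each scene implementation.
    QMatrix4x4 projection;
    const QRect rect = renderTargetRect; // assumed known to the Scene
    projection.ortho(rect.x(), rect.x() + rect.width(),
                     rect.y() + rect.height(), rect.y(), 0, 65535);
    setProjectionMatrix(projection); // consumed by SceneOpenGL/SceneQPainter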
Because GLRenderTarget and GLVertexBuffer use the global coordinate
system, they are not ergonomic in render layers.
Assigning the device pixel ratio to GLRenderTarget and GLVertexBuffer is
a questionable API design choice too. Scaling is a window system
abstraction that is absent in OpenGL and Vulkan; for example, it's not
possible to create an OpenGL texture with a scale factor of 2, as
textures only work with device pixels.
This change makes GLRenderTarget and GLVertexBuffer more ergonomic for
usages other than rendering the workspace by removing all of the global
coordinate system and scaling logic. That is now the responsibility of
the users of those two classes.
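After the change, callers do the logical-to-device conversion
themselves, e.g. (an illustrative snippet):

    // The caller, not GLRenderTarget/GLVertexBuffer, applies the scale.
    const qreal scale = output->scale(); // e.g. 2.0 on a hidpi screen
    const QRect deviceRect(qRound(logicalRect.x() * scale),
                           qRound(logicalRect.y() * scale),
                           qRound(logicalRect.width() * scale),
                           qRound(logicalRect.height() * scale));
    glViewport(deviceRect.x(), deviceRect.y(),
               deviceRect.width(), deviceRect.height());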
On Wayland, a window can have subsurfaces, and the spec doesn't require
the main surface and its sub-surfaces to have the same scale factor.
Given that, Toplevel::bufferScale() makes no sense for Wayland windows,
so this change drops it to make the code more reasonable and to prevent
people from using it.
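The scale is queried where it is well defined instead: per surface.
KWaylandServer's SurfaceInterface exposes bufferScale(), so roughly
(surfaceItem->surface() is an assumed accessor):

    // Each surface item asks its own surface; a main surface at scale 1
    // can have a subsurface at scale 2.
    const qint32 scale = surfaceItem->surface()->bufferScale();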
The Compositor contains nothing that can potentially get dirty and need
repainting.
As is, the advantages of this move aren't really noticeable, but it
makes sense with multiple scenes.
Backend parts are far from ideal, they can be improved later on as we
progress with the scene redesign.
Currently, the scene owns the renderer, which gives the scene
responsibilities beyond painting windows and also imposes limitations on
what we can do; for example, there can be only one scene.
This change decouples the scene and the renderer so the scene is more
easily swappable.
Scenes are no longer implemented as plugins because the opengl backend
and scene creation need to be wrapped in opengl safety points. We could
still create the render backend and then go through the list of scene
plugins, but accessing the concrete scene implementations directly is
much simpler. Besides, implementing the scenes as plugins is not
worthwhile: there are only two of them and each contributes a very small
amount of binary size. On the other hand, plugins also mean extra hard
drive accesses that kwin has to perform just to function as expected.
This decouples the management of the Shadow from the scene window and
allows multiple items to share the same Shadow.
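Conceptually (a sketch, not the final shape):

    // The Shadow is owned alongside the window; items only reference
    // it, so several items can be backed by the same Shadow.
    class ShadowItem : public Item
    {
    public:
        explicit ShadowItem(Shadow *shadow, Item *parent = nullptr);
    private:
        Shadow *m_shadow; // not owned
    };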
Currently, kwin has a single scene graph, but it makes sense to create a
scene graph per output, as outputs can have different layers, etc. This
would also allow QtQuick to share more textures with kwin, which is
worth doing for optimization purposes in the future.
We use surfaceless contexts with internal windows. We also require
the EGL_KHR_surfaceless_context extension for making context current
without outputs.
Arguably, we could use pbuffers, but since mainstream drivers (Mesa and
NVIDIA) support surfaceless contexts, the extra complexity doesn't buy
us anything.
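With EGL_KHR_surfaceless_context, binding the context without any
surface is a single standard call:

    #include <EGL/egl.h>

    // Make the context current with no draw/read surface; valid when
    // the EGL_KHR_surfaceless_context extension is available.
    bool makeCurrentSurfaceless(EGLDisplay display, EGLContext context)
    {
        return eglMakeCurrent(display, EGL_NO_SURFACE, EGL_NO_SURFACE,
                              context) == EGL_TRUE;
    }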
This further decouples scene items from scene windows. The SurfaceItem
still needs to access the underlying window; I would like to iterate on
that later.
With this change, it will be possible to introduce a WindowItem factory
function in the Toplevel class.
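i.e. something along these lines (an assumed shape, not the final API):

    // Each Toplevel subclass knows which WindowItem represents it.
    class Toplevel : public QObject
    {
    public:
        virtual WindowItem *createItem() = 0; // factory for the scene
    };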
Makes it possible to apply the dpms settings per screen instead of
applying them to all screens, which is wrong on many levels.
This will be even more important with other effects like rotation.