AbstractOutput is not so abstract, and it's common to avoid the word
"Abstract" in class names as it doesn't contribute any new information.
The shorter name also significantly reduces the line width in some places.
The main motivation behind this change is to unify render target
representation across opengl and software renderers and avoid accessing
the render backend directly in order to get the render target.
With these two actions being separate, RenderLoop can record the time
spent in endFrame (for example, for multi-gpu transfers) without risking
also recording blocking swapbuffer calls, and endFrame can later be
moved to the output layer.
Using the global coordinate system when specifying output layer damage
regions would be very confusing. In order to make the coordinate system
comprehensible, use the layer-local coordinate system.
The infinite region is used to tell the Compositor that it needs to
repaint the entire layer.
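For example, a delegate might schedule repaints like this (a minimal
sketch; addRepaint() and infiniteRegion() follow kwin's existing
helpers, but the call sites are illustrative):

    // Damage is specified in layer-local coordinates: (0, 0) is the
    // layer's top-left corner, no matter where the layer is on screen.
    layer->addRepaint(QRegion(0, 0, 16, 16));

    // An infinite region tells the Compositor to repaint the whole layer.
    layer->addRepaint(infiniteRegion());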
The .clang-format file is based on the one in ECM, except for the
following style options:
- AlwaysBreakBeforeMultilineStrings
- BinPackArguments
- BinPackParameters
- ColumnLimit
- BreakBeforeBraces
- KeepEmptyLinesAtTheStartOfBlocks
It's not possible to get the surface damage before calling
Scene::paint(), which is a big problem because it blocks proper surface
damage and buffer damage calculation when walking the render layer tree.
This change reworks the scene compositing stages to allow getting the
next surface damage before calling Scene::paint().
The main challenge is that the effects can expand the surface damage.
We have to call prePaintWindow() and prePaintScreen() before actually
starting to paint; however, prePaintWindow() is currently called only
after rendering has started.
This change makes the Scene call prePaintWindow() and prePaintScreen() so
it's possible to know the surface damage beforehand. Unfortunately, it's
also a breaking change. Some fullscreen effects will have to adapt to
the new Scene paint order. Paint hooks will be invoked in the following
order:
* prePaintScreen() once per frame
* prePaintWindow() once per frame
* paintScreen() can be called multiple times
* paintWindow() can be called as many times as paintScreen()
* postPaintWindow() once per frame
* postPaintScreen() once per frame
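A rough sketch of a single frame under the new order (the driver loop
below is illustrative, not kwin's actual code; only the hook names and
their multiplicity come from the list above):

    // Once per frame: effects may expand the damage here.
    prePaintScreen(screenData, presentTime);
    for (EffectWindow *window : windows) {
        prePaintWindow(window, windowData, presentTime);
    }

    // Possibly multiple times, e.g. once per output with per-screen
    // rendering; each paintScreen() pass invokes paintWindow() in turn.
    for (Output *output : outputs) {
        paintScreen(mask, region, screenData);
    }

    // Once per frame.
    for (EffectWindow *window : windows) {
        postPaintWindow(window);
    }
    postPaintScreen();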
After walking the render layer tree, the Compositor will poke the render
backend for the back buffer repair region and combine it with the
surface damage to get the buffer damage, which can be passed to the
render backend (in order to optimize performance with tiled gpus) and
Scene::paint(), which will determine what parts of the scene have to be
repainted based on the buffer damage.
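In pseudo-code, the resulting per-frame flow could look like this (the
names are assumptions; QRegion's operator| is the union):

    QRegion surfaceDamage = collectDamage(renderLayerTree); // from the prePaint hooks
    QRegion repairRegion = backend->beginFrame(output);     // outdated back buffer parts
    QRegion bufferDamage = surfaceDamage | repairRegion;

    scene->paint(bufferDamage); // repaints only what the buffer damage covers
    backend->endFrame(output, bufferDamage, surfaceDamage); // damage hint for tiled gpus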
Otherwise the connection isn't severed when the layer is destroyed,
leading to crashes when screen resolution changes.
We don't actually need `this` to access `workspace()`, and we have
a guarded `output` as sender in the other case.
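The fix boils down to passing a context object to connect(); a
simplified sketch (the signal and slot names are placeholders):

    // Without a context object, the connection outlives the layer and
    // the slot fires on a dangling pointer when the resolution changes.
    QObject::connect(output, &Output::geometryChanged,
                     [layer]() { layer->addRepaintFull(); });

    // With the layer as context, Qt severs the connection automatically
    // when the layer is destroyed.
    QObject::connect(output, &Output::geometryChanged,
                     layer, &RenderLayer::addRepaintFull);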
Notifications are really only useful in a setting with a full
shell environment where there is a notification center to display them.
Signed-off-by: Victoria Fischer <victoria.fischer@mbition.io>
Software cursor has always been a major source of problems. Hopefully,
porting it to RenderLayer will help us with that.
Note that the cursor layer is currently visible only when using the
software cursor; however, this will change once the Compositor can
allocate a real hardware cursor plane.
Currently, software cursor uses graphics-specific APIs (OpenGL and
QPainter) to paint itself. That will change in the future, when the
rendering parts are extracted from the Scene into a reusable helper.
This is the first tiny step towards the layer-based compositing in kwin.
The RenderLayer represents a layer with some contents. The actual
contents are represented by the RenderLayerDelegate class.
Currently, the RenderLayer is just a simple class responsible for
geometry and repaints, but it will grow in the future. For example,
render layers need to form a tree.
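A condensed sketch of the split (the real classes carry more API; the
method set shown here is partly an assumption):

    // The delegate provides the actual contents...
    class RenderLayerDelegate
    {
    public:
        virtual ~RenderLayerDelegate() = default;
        virtual void paint(const QRegion &region) = 0;
    };

    // ...while the layer handles geometry and repaints.
    class RenderLayer
    {
    public:
        explicit RenderLayer(RenderLayerDelegate *delegate);

        QRect geometry() const;
        void setGeometry(const QRect &rect);

        void addRepaint(const QRegion &region); // layer-local coordinates
        void addRepaintFull();
    };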
The next big missing component in the layer-based compositing is output
layers. When output layers are added, each render layer will either have
an output layer assigned to it or inherit its output layer from the
parent.
The render layer tree wouldn't be affected by changes to the output
layer tree, so the transition between software and hardware cursors can
be seamless.
The next big milestone will be to try to port some of the existing kwin
functionality to the RenderLayer, e.g. the software cursor or screen edges.
The responsibilities of the Scene must be reduced to painting only so we
can move forward with the layer-based compositing.
This change moves direct scanout logic from the opengl scene to the base
scene class and the compositor. This makes the opengl scene less
overloaded and allows the direct scanout logic to be shared.
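Conceptually, the shared logic is a simple fallback (an illustrative
sketch, not the actual code):

    // Try to put the fullscreen surface's buffer directly on the output;
    // composite with the scene only if direct scanout is not possible.
    if (!scene->directScanout(output)) {
        scene->paint(output, bufferDamage);
    }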
Having a render loop in the Platform has always been awkward. Another
way to interpret the platform not supporting per screen rendering would
be that all outputs share the same render loop.
On X11, Scene::painted_screen is going to correspond to the primary
screen, but we should not rely on that assumption!
With connection(), we will look up the x11 connection property on the
kwinApp() object, which is less efficient than just calling a method on
the app object.
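In other words (the lookup inside connection() is simplified here):

    // Before: a free function that resolves the connection indirectly.
    xcb_connection_t *c = connection();

    // After: one direct method call on the application object.
    xcb_connection_t *c = kwinApp()->x11Connection();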
Otherwise animated cursors won't work. Hopefully, this will fix the
pointer input test.
It would be great to refactor cursor handling so it's simpler; that can
be done later.
This makes the Scene less overloaded and it's needed for things such as
render layers.
In hindsight, it would be great to merge checkGraphicsReset() and
beginFrame(), e.g. make beginFrame() return a status like in QRhi or
VkSwapchain. If it's OUT_OF_DATE or something similar, reinitialize the
compositor.
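Such a merged API could look roughly like this (entirely hypothetical,
modeled on QRhi::beginFrame() and Vulkan's VK_ERROR_OUT_OF_DATE_KHR):

    enum class FrameStatus {
        Ok,        // rendering can proceed
        OutOfDate, // the target is stale; reinitialize the compositor
        Error,
    };

    FrameStatus status = backend->beginFrame(output);
    if (status == FrameStatus::OutOfDate) {
        reinitializeCompositing();
        return;
    }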
Many effects use the stacking order property of the effects handler in
their constructors. This means that windows should have compositing set
up by the time the effects are loaded.
After changing how binary effect plugins are loaded, i.e. loading them
immediately instead of queueing them, some effects broke because the
effects handler was created before the windows had set up compositing.
This change attempts to fix those effects by rearranging the compositor
startup code so that windows set up compositing first, and only then is
the effects pointer created.
The Compositor contains nothing that can potentially get dirty and need
repainting.
As is, the advantages of this move aren't really noticeable, but it
makes sense with multiple scenes.
Backend parts are far from ideal, they can be improved later on as we
progress with the scene redesign.
The main idea behind the render backend is to decouple low level bits
from scenes. The end goal is to make the render backend provide render
targets where the scene can render.
Design-wise, such a split is more flexible than the current state, for
example we could start experimenting with using qtquick (assuming that
the legacy scene is properly encapsulated) or creating multiple scenes,
for example for each output layer, etc.
So far, the RenderBackend class only contains one getter, more stuff will
be moved from the Scene as it makes sense.
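The end goal could eventually give the backend an interface along these
lines (hypothetical; the class introduced by this change has only a
single getter so far):

    class RenderBackend
    {
    public:
        virtual ~RenderBackend() = default;

        // Hand the scene a target to render into for the given output...
        virtual RenderTarget *beginFrame(Output *output) = 0;
        // ...and present the result once the scene has finished painting.
        virtual void endFrame(Output *output, const QRegion &damage) = 0;
    };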
The GlStrictBinding flag indicates whether it's okay not to re-bind the X11
pixmap to the OpenGL surface texture if the corresponding window is damaged.
It doesn't really affect the SceneOpenGL, only low level backend stuff.
Currently, the scene owns the renderer, which gives the scene
responsibilities other than painting windows, and it also puts some
limitations on what we can do (for example, there can be only one
scene).
This change decouples the scene and the renderer so the scene is more
swappable.
Scenes are no longer implemented as plugins because opengl backend
and scene creation needs to be wrapped in opengl safety points. We
could still create the render backend and then go through the list
of scene plugins, but accessing concrete scene implementation is
much much simpler. Besides that, having scenes implemented as plugins
is not worthwhile because there are only two scenes, and each contributes
only a very small amount of binary size. On the other hand, we still need to
take into account how many times kwin accesses the hard drive to load
plugins in order to function as expected.
Before attempting to create scenes, kwin will redirect windows, but if
the opengl scene can't be created, it won't unredirect windows, which
seems to cause issues on aarch64.
BUG: 443953
This makes the compositingPossible() output more elaborate, and adjusts
the messages' priorities to qCWarning.
I thought about dropping the "Compositing is not possible"
(qCCritical) message. It's not critical for X11 and Wayland
depends on OpenGL. OTOH it's from a different component. So in
the end I settled for qCWarning again; there is no real way to know
whether the platform will consider this a critical problem (and
eventually abort).
Then I saw X11StandalonePlatform::compositingNotPossibleReason,
which has more user-friendly, Qt rich text / HTML encoded output,
but that seems overkill for the terminal; now I would like to
know where this is actually used...
While at it, report "manually" suspended compositing via qCInfo
instead of qCDebug.
It's confusing to have two signals (virtualScreenGeometryChanged() and
screenGeometryChanged()) that indicate the same thing.
This change ports parts of kwin from the screenGeometryChanged() signal
to the virtualScreenGeometryChanged() signal with the main motivation to
drop the former.
The virtualScreenGeometryChanged() signal was chosen as the replacement
for the sake of consistency with EffectsHandler's virtualScreenGeometry
and virtualScreenSize properties.
When the aboutToDestroy signal is emitted, the compositor object is
already partially destroyed, which contradicts the name of the signal.
In addition, it will be better to call the stop method in the
destructors of X11Compositor and WaylandCompositor, as that allows them
to add custom cleanup code when compositing is turned off.
The Xrender backend was added at the time when OpenGL drivers were not
particularly stable. Nowadays though, it's a totally different situation.
The OpenGL render backend has been the default one for many years. It's
quite stable, and it allows implementing many advanced features that
other render backends don't.
Many features are not tested with it during the development cycle; the
only time it gets noticed is when changes in other parts of kwin break
the build of the xrender backend. Effectively, the xrender backend is
unmaintained nowadays.
Given that the xrender backend is effectively unmaintained and our focus
being shifted towards wayland, this change drops the xrender backend in
favor of the opengl backend.
Besides being de-facto unmaintained, another issue is that QtQuick does
not support and most likely will never support the Xrender API. This
poses a problem, as we want thumbnail items to be natively integrated
into the qtquick scene graph.