Things such as Output, InputDevice and so on are meant to be
multi-purpose. In order to make this separation clearer, this change
moves that code to the core directory. Some things still link to the
abstraction level above (kwin); they can be tackled in future refactors.
Ideally, code in core/ should depend only on other code in core/ or on
system libs.
This change adjusts the window management abstractions in kwin to cope
with the drm backend providing more than just "desktop" outputs.
Besides that, it has other potential benefits - for example, the
Workspace could start managing the allocation of the placeholder output
by itself, leading to some simplifications in the drm backend. Another
is that it lets us move wayland code out of the drm backend.
The main reason to drop multi-head support is that it has simply been
unmaintained for many, many years. When implementing a feature, we don't
even bother checking whether multi-head is broken, KCMs don't handle
multi-head, and window management features are written for Xinerama. In
general, KWin is optimized for the Xinerama-like operation mode, which
is provided out of the box.
If you use multi-head for esoteric gpu stuff, consider using kwin_wayland!
The Workspace has two stacks - one with managed windows and deleted
windows, the other with the windows from the first stack plus
override-redirect windows.
This change merges both stacks. It has several benefits - we will be
able to move the window elevation code to the Workspace and streamline
the scene code; for example, it will be possible to have a root item.
Another advantage is that unmanaged windows will have the
Window::stackingOrder() property set, which can be useful in the future
in qml effects (or the qtquick scene, if we push harder on that front).
Finally, kwin will make fewer X11 calls when restacking managed windows.
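A minimal sketch of what the merged stack could look like - the member
and helper names here are illustrative, not the actual kwin API:

```cpp
#include <QList>
#include <algorithm>
#include <iterator>

// One unified stack, bottom to top: managed windows, deleted windows and
// override-redirect windows all live in the same list.
QList<Window *> Workspace::stackingOrder() const
{
    return m_stack;
}

// Code that only cares about managed windows filters the unified stack.
QList<Window *> Workspace::managedStackingOrder() const
{
    QList<Window *> result;
    std::copy_if(m_stack.constBegin(), m_stack.constEnd(),
                 std::back_inserter(result),
                 [](Window *window) { return !window->isUnmanaged(); });
    return result;
}
```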
Effects may perform cleanup when a deleted window is removed. If the
SceneWindow is accessed during that cleanup, kwin may crash because the
Scene processes Workspace::deletedRemoved() before the effects do.
In order to fix the null pointer dereference, this change makes the
Window destroy its associated SceneWindow.
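A sketch of the ownership change, assuming the Window holds a pointer to
its SceneWindow (the member name is illustrative):

```cpp
// The Window now tears down its SceneWindow itself, so by the time effects
// perform cleanup for a removed deleted window, nothing dereferences a
// SceneWindow that the Scene destroyed earlier in the same signal chain.
Window::~Window()
{
    delete m_sceneWindow;
    m_sceneWindow = nullptr;
}
```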
This makes KWin switch to an in-tree copy of the KWaylandServer codebase.
The KWaylandServer namespace has been left as-is. It will be addressed
later by renaming classes in order to fit in the KWin namespace.
AbstractOutput is not so abstract, and it's common to avoid the word
"Abstract" in class names as it doesn't contribute any new information.
Dropping it also significantly reduces the line width in some places.
The main motivation behind this change is to unify the render target
representation across the opengl and software renderers and to avoid
accessing the render backend directly in order to get the render target.
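One way to picture such a unified representation - a sketch only, the
real type may look different - is a small handle that both renderers can
produce:

```cpp
#include <QImage>
#include <variant>

class GLFramebuffer; // stands in for the gl-side render target type

// A render target handle shared by the opengl and software renderers, so
// callers no longer need to reach into the render backend to find out
// where a frame is drawn.
class RenderTarget
{
public:
    explicit RenderTarget(GLFramebuffer *fbo) : m_handle(fbo) {}
    explicit RenderTarget(QImage *image) : m_handle(image) {}

    std::variant<GLFramebuffer *, QImage *> nativeHandle() const { return m_handle; }

private:
    std::variant<GLFramebuffer *, QImage *> m_handle;
};
```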
With these two actions being separate, the RenderLoop can record the time
spent in endFrame (for example for multi-gpu transfers) without risking
also recording blocking swapbuffer calls, and endFrame can later be moved
to the output layer.
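Roughly, the split could look as follows; method names here loosely
mirror the commit text rather than the exact API, and present() stands
for the buffer swap:

```cpp
// Illustrative frame submission with the two steps separated: the
// RenderLoop timer brackets only the rendering work, not the present.
void submitFrame(RenderLoop *renderLoop, RenderBackend *backend, Output *output)
{
    renderLoop->beginFrame();
    backend->endFrame(output);  // e.g. multi-gpu transfer; recorded as render time
    renderLoop->endFrame();     // stop the measurement before presenting
    backend->present(output);   // swapbuffers; may block, not recorded
}
```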
Using the global coordinate system when specifying output layer damage
regions would be very confusing. In order to make the coordinate system
comprehensible, use the layer-local coordinate system.
The infinite region is used to tell the Compositor when it needs to
repaint the entire layer.
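For illustration, damage is expressed relative to the layer's own origin,
and an effectively unbounded region requests a full-layer repaint - a
sketch assuming an addRepaint() style API and kwin's infiniteRegion()
helper:

```cpp
// Damage coordinates are relative to the layer, not to the global scene:
// (0, 0) is always the layer's top-left corner, regardless of where the
// layer sits on screen.
layer->addRepaint(QRect(0, 0, 64, 64));

// The infinite region is a marker that asks the Compositor to repaint
// the entire layer.
layer->addRepaint(infiniteRegion());
```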
The .clang-format file is based on the one in ECM, except for the
following style options:
- AlwaysBreakBeforeMultilineStrings
- BinPackArguments
- BinPackParameters
- ColumnLimit
- BreakBeforeBraces
- KeepEmptyLinesAtTheStartOfBlocks
It's not possible to get the surface damage before calling
Scene::paint(), which is a big problem because it blocks proper surface
damage and buffer damage calculation when walking the render layer tree.
This change reworks the scene compositing stages to allow getting the
next surface damage before calling Scene::paint().
The main challenge is that the effects can expand the surface damage, so
we have to call prePaintWindow() and prePaintScreen() before actually
starting to paint. However, prePaintWindow() is currently called only
after rendering has started.
This change makes the Scene call prePaintWindow() and prePaintScreen()
up front, so it's possible to know the surface damage beforehand.
Unfortunately, it's also a breaking change: some fullscreen effects will
have to adapt to the new Scene paint order. Paint hooks will be invoked
in the following order (see the sketch after this list):
* prePaintScreen() once per frame
* prePaintWindow() once per frame
* paintScreen() can be called multiple times
* paintWindow() can be called as many times as paintScreen()
* postPaintWindow() once per frame
* postPaintScreen() once per frame
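Put as a sketch, a single frame now flows roughly like this; the driver
loop, the RenderPass type and the data containers are simplified
stand-ins for the real code:

```cpp
// One frame under the new order (data setup omitted).
effects->prePaintScreen(screenData, presentTime);           // once per frame
for (EffectWindow *w : stackingOrder) {
    effects->prePaintWindow(w, windowData[w], presentTime); // once per frame
}
// The surface damage is known at this point, before any painting starts.
for (const RenderPass &pass : passes) {                     // e.g. one per screen
    effects->paintScreen(pass.mask, pass.region, screenData);
    // paintWindow() runs from inside paintScreen() for each visible window
}
for (EffectWindow *w : stackingOrder) {
    effects->postPaintWindow(w);                            // once per frame
}
effects->postPaintScreen();                                 // once per frame
```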
After walking the render layer tree, the Compositor will poke the render
backend for the back buffer repair region and combine it with the
surface damage to get the buffer damage, which can be passed to the
render backend (in order to optimize performance with tiled gpus) and to
Scene::paint(), which will determine what parts of the scene have to be
repainted based on the buffer damage.
Otherwise, the connection isn't severed when the layer is destroyed,
leading to crashes when the screen resolution changes.
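The usual Qt fix is to pass the layer as the context object of the
connection, so Qt severs it automatically when the layer is destroyed;
the sender and signal here are illustrative:

```cpp
// With `layer` as the context argument, the connection is disconnected
// automatically when the layer is deleted, so the slot can never run
// against a dangling layer after a resolution change.
QObject::connect(output, &Output::geometryChanged, layer, [layer]() {
    layer->addRepaintFull();
});
```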
We don't actually need `this` to access `workspace()`, and we have
a guarded `output` as sender in the other case.
Notifications are really only useful in a setting with a full
shell environment where there is a notification center to display them.
Signed-off-by: Victoria Fischer <victoria.fischer@mbition.io>
Software cursor has always been a major source of problems. Hopefully,
porting it to RenderLayer will help us with that.
Note that the cursor layer is currently visible only when using the
software cursor; however, that will change once the Compositor can
allocate a real hardware cursor plane.
Currently, the software cursor uses graphics-specific APIs (OpenGL and
QPainter) to paint itself. That will change in the future, when the
rendering parts are extracted from the Scene into a reusable helper.
This is the first tiny step towards layer-based compositing in kwin. The
RenderLayer represents a layer with some contents; the actual contents
are provided by the RenderLayerDelegate class.
Currently, the RenderLayer is just a simple class responsible for
geometry and repaints, but it will grow in the future. For example,
render layers need to form a tree.
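A condensed sketch of the split described above; the member names are
illustrative:

```cpp
#include <QObject>
#include <QRect>
#include <QRegion>

// The delegate provides the actual contents of a layer.
class RenderLayerDelegate
{
public:
    virtual ~RenderLayerDelegate() = default;
    virtual void paint(const QRegion &damage) = 0;
};

// The layer itself only tracks geometry and accumulated repaints.
class RenderLayer : public QObject
{
public:
    explicit RenderLayer(RenderLayerDelegate *delegate)
        : m_delegate(delegate) {}

    RenderLayerDelegate *delegate() const { return m_delegate; }
    void setGeometry(const QRect &rect) { m_geometry = rect; }
    void addRepaint(const QRegion &region) { m_repaints += region; }

private:
    RenderLayerDelegate *m_delegate;
    QRect m_geometry;
    QRegion m_repaints; // consumed when the layer is painted
};
```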
The next biggest (and currently missing) component in layer-based
compositing is output layers. When output layers are added, each render
layer will either have an output layer assigned to it or inherit its
output layer from the parent.
The render layer tree won't be affected by changes to the output layer
tree, so the transition between software and hardware cursors can be
seamless.
The next big milestone will be to try porting some of the existing kwin
functionality to the RenderLayer, e.g. the software cursor or screen
edges.
The responsibilities of the Scene must be reduced to painting only so
that we can move forward with layer-based compositing.
This change moves the direct scanout logic from the opengl scene to the
base scene class and the compositor. It makes the opengl scene less
overloaded and allows sharing the direct scanout logic.
Having a render loop in the Platform has always been awkward. Another
way to interpret a platform not supporting per-screen rendering is that
all outputs share the same render loop.
On X11, Scene::painted_screen is going to correspond to the primary
screen, but we should not rely on that assumption!
With connection(), we look up the x11 connection property on the
kwinApp() object, which is less efficient than just calling a method on
the app object.
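In other words - assuming connection() is the property-based helper
being replaced:

```cpp
// Before: connection() reads the x11 connection property off the
// application object on every call.
xcb_connection_t *c = connection();

// After: call the accessor on the application object directly, skipping
// the property lookup.
xcb_connection_t *c2 = kwinApp()->x11Connection();
```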
Otherwise, animated cursors won't work. Hopefully, this will fix the
pointer input test.
It would be great to refactor cursor handling so it's simpler; that can
be done later.
This makes the Scene less overloaded and it's needed for things such as
render layers.
In hindsight, it would be great to merge checkGraphicsReset() and
beginFrame(), e.g. make beginFrame() return a status like QRhi or
VkSwapchain do. If the status is OUT_OF_DATE or similar, reinitialize
the compositor.
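Sketched out, that merged API could look like this; the enum, the member
names and the helper are hypothetical:

```cpp
// beginFrame() reports the device state instead of requiring a separate
// checkGraphicsReset() call, in the spirit of QRhi::beginFrame() or
// Vulkan's VK_ERROR_OUT_OF_DATE_KHR.
enum class FrameStatus {
    Ok,
    OutOfDate, // device lost or reset; the compositor must reinitialize
};

void Compositor::composite()
{
    if (m_backend->beginFrame() == FrameStatus::OutOfDate) {
        reinitializeCompositor();
        return;
    }
    // ... paint and present as usual ...
}
```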
Many effects use the stacking order property of the effects handler in
their constructors. This means that windows should have compositing set
up by the time the effects are loaded.
After changing how binary effect plugins are loaded, i.e. loading them
immediately instead of queueing the loading, some effects broke because
the effects handler was created before the windows had set up
compositing.
This change attempts to fix those effects by rearranging the compositor
startup code so that windows set up compositing first, and the effects
pointer is created afterwards.
The Compositor contains nothing that can potentially get dirty and need
repainting.
As is, the advantages of this move aren't really noticeable, but it
makes sense with multiple scenes.
Backend parts are far from ideal; they can be improved later on as we
progress with the scene redesign.