Using blurRegion to indicate whether a decoration supports blur, instead of the metadata-json approach, has the following benefits (a usage sketch follows the list):
- decorations can now enable or disable blur based on user preference
- theme engines such as Aurorae no longer have to enforce blur on or off for all of their themes; they can support blur-enabled and blur-disabled themes at the same time if they want to
- blurRegion is empty by default, so the Korners bug is fixed for all solid Aurorae themes. Breeze and Oxygen have set **blur:false**, so nothing changes for them.
- all Aurorae themes that do not require blur free up system resources by default
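As a rough usage sketch, a decoration could toggle blur like this (the class name and the setBlurRegion()-style setter are assumptions, not necessarily the exact KDecoration2 API):

```cpp
#include <KDecoration2/Decoration>

// MyAuroraeDecoration is hypothetical; it stands in for any decoration
// implementation built on KDecoration2.
void MyAuroraeDecoration::updateBlur(bool themeWantsBlur)
{
    if (themeWantsBlur) {
        setBlurRegion(QRegion(rect())); // blur everything behind the frame
    } else {
        setBlurRegion(QRegion());       // empty region (the default): no blur
    }
}
```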
This adds support for animating showing/hiding of the input method panel
to the sliding popup effect, if the input panel is of type "Toplevel".
This is mainly intended to animate showing the virtual keyboard and has
been primarily tested with Maliit. It replaces the client-side animation
that Maliit would do, instead performing the animation on the KWin side,
which provides a significantly smoother experience.
This ensures that we get a warning if the config header is not included,
instead of compiling the code as if the feature were disabled. Interestingly,
some checks already used #if KWIN_BUILD_*, so those were generating -Wundef
warnings when the feature was disabled. Commit 886173cab assumed that all
those features were already defined to 0 or 1, so this unbreaks the build if
any of the features is disabled.
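To illustrate the difference between the two styles of check (KWIN_BUILD_ACTIVITIES is used as an example; the header is assumed to be generated with #cmakedefine01, so each feature macro is always defined to 0 or 1):

```cpp
#include "config-kwin.h"

#ifdef KWIN_BUILD_ACTIVITIES // silently evaluates to "disabled" if the
#endif                       // include above is forgotten

#if KWIN_BUILD_ACTIVITIES    // emits a -Wundef warning if the include above
#endif                       // is forgotten, because the macro is undefined
```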
Fixes: 886173cab ("Reduce ifdefs in Workspace::supportInformation()")
This is the first tiny step towards the layer-based compositing in kwin.
The RenderLayer represents a layer with some contents. The actual
contents are represented by the RenderLayerDelegate class.
Currently, the RenderLayer is just a simple class responsible for
geometry and repaints, but it will grow in the future. For example,
render layers need to form a tree.
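A simplified sketch of the split (illustrative, not the exact kwin declarations):

```cpp
#include <QRect>
#include <QRegion>
#include <memory>

// The delegate provides the actual contents of a layer.
class RenderLayerDelegate
{
public:
    virtual ~RenderLayerDelegate() = default;
    virtual void paint(const QRegion &region) = 0;
};

// The layer itself only deals with geometry and accumulated repaints.
class RenderLayer
{
public:
    explicit RenderLayer(std::unique_ptr<RenderLayerDelegate> delegate)
        : m_delegate(std::move(delegate))
    {
    }

    void setGeometry(const QRect &rect) { m_geometry = rect; }
    void addRepaint(const QRegion &region) { m_repaints += region; }

    void paint()
    {
        m_delegate->paint(m_repaints); // the delegate draws the contents
        m_repaints = QRegion();        // damage has been consumed
    }

private:
    std::unique_ptr<RenderLayerDelegate> m_delegate;
    QRect m_geometry;
    QRegion m_repaints;
};
```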
The next biggest (still missing) component in the layer-based compositing
is output layers. When output layers are added, each render layer will
either have an output layer assigned to it or inherit its output layer
from the parent.
The render layer tree wouldn't be affected by changes to the output
layer tree, so the transition between software and hardware cursors can
be seamless.
The next big milestone will be to try to port some of the existing kwin
functionality to the RenderLayer, e.g. the software cursor or screen edges.
The responsibilities of the Scene must be reduced to painting only so we
can move forward with the layer-based compositing.
This change moves direct scanout logic from the opengl scene to the base
scene class and the compositor. It makes the opengl scene less
overloaded and allows the direct scanout logic to be shared.
Because the GLRenderTarget and the GLVertexBuffer use the global
coordinate system, they are not ergonomic in render layers.
Assigning the device pixel ratio to GLRenderTarget and GLVertexBuffer is
an interesting api design choice too. Scaling is a window system
abstraction, which is absent in OpenGL or Vulkan. For example, it's not
possible to create an OpenGL texture with a scale factor of 2. It only
works with device pixels.
This change makes the GLRenderTarget and the GLVertexBuffer more
ergonomic for usages other than rendering the workspace by removing all
the global coordinate system and scaling stuff. That's the
responsibility of the users of those two classes.
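In other words, a caller now converts logical coordinates to device pixels itself before touching the two classes; roughly:

```cpp
#include <QRect>

// Sketch: map logical geometry to device pixels up front. The scale factor
// would come from the output; 2.0 here is just an example value.
const qreal scale = 2.0;
const QRect logicalRect(0, 0, 1280, 720);
const QRect deviceRect(logicalRect.topLeft() * scale,
                       logicalRect.size() * scale);
// Only deviceRect is handed to GLRenderTarget / GLVertexBuffer code paths.
```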
This change replaces abort() with the Q_ASSERT and Q_UNREACHABLE() macros
to make the kwin code base consistent. Besides that, Q_UNREACHABLE() may
provide the compiler with more information that can be used to generate
more efficient machine code.
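For instance, a switch that previously ended in abort() becomes (illustrative):

```cpp
#include <QtGlobal>

enum class Mode { Copy, Move };

int cost(Mode mode)
{
    switch (mode) {
    case Mode::Copy:
        return 1;
    case Mode::Move:
        return 2;
    }
    Q_UNREACHABLE(); // was abort(): asserts in debug builds, and acts as an
                     // optimizer hint in release builds
}
```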
EffectQuickScene is not used strictly by effects; Aurorae decorations
use it too, to render window decorations.
This change renames the EffectQuickView/Scene to
OffscreenQuickView/Scene to clear up the naming scheme.
The Compositor contains nothing that can potentially get dirty and need
repainting.
As is, the advantages of this move aren't really noticeable, but it
makes sense with multiple scenes.
Backend parts are far from ideal; they can be improved later on as we
progress with the scene redesign.
The main idea behind the render backend is to decouple low level bits
from scenes. The end goal is to make the render backend provide render
targets where the scene can render.
Design-wise, such a split is more flexible than the current state; for
example, we could start experimenting with using qtquick (assuming that
the legacy scene is properly encapsulated) or create multiple scenes,
for instance one per output layer, etc.
So far, the RenderBackend class only contains one getter, more stuff will
be moved from the Scene as it makes sense.
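A sketch of the rough shape of the seam (illustrative; the getter's exact name is an assumption):

```cpp
#include <kwinglobals.h> // for kwin's existing CompositingType enum

// The backend abstracts the low level bits away from the scene.
class RenderBackend
{
public:
    virtual ~RenderBackend() = default;

    // The single getter so far.
    virtual CompositingType compositingType() const = 0;

    // The end goal is for the backend to hand out render targets that the
    // scene renders into, e.g. something along the lines of:
    // virtual RenderTarget *renderTarget(Output *output) = 0;
};
```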
The screen count can be retrieved by checking the number of items in the
EffectsHandler.screens property.
The replacements for the numberScreensChanged signal are the screenAdded
and screenRemoved signals.
The main motivation behind this change is to clean up the screens api
and reduce the number of usages of the Screens class.
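A migration sketch for effect code (MyEffect and its slot are hypothetical; the EffectsHandler members follow the description above):

```cpp
// Old: query a screen count and react to numberScreensChanged.
// New: count the screens list and track additions/removals.
const int screenCount = effects->screens().count();

connect(effects, &EffectsHandler::screenAdded,
        this, &MyEffect::slotScreenCountChanged);
connect(effects, &EffectsHandler::screenRemoved,
        this, &MyEffect::slotScreenCountChanged);
```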
This lays down some groundwork for realtime gestures in Wayland,
so that gestures that are 1:1 with user motion on a touchpad are
now possible to implement.
Due to earlier commits, this is mostly just glue code to make a
convenient API.
Gestures implemented with this API are four-finger gestures, to
avoid conflicting with apps that may use two- or three-finger
gestures.
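What registering such a gesture could look like from an effect (the registration function and its signature are assumptions based on the description, not a confirmed API):

```cpp
// Hypothetical: a realtime four-finger swipe that tracks finger motion 1:1.
QAction *gestureAction = new QAction(this);
connect(gestureAction, &QAction::triggered, this, &MyEffect::activate);

effects->registerRealtimeTouchpadSwipeShortcut(
    SwipeDirection::Up, gestureAction,
    [this](qreal progress) {
        // Invoked continuously while the fingers move; progress is 0.0-1.0.
        setPartialActivation(progress);
    });
```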
Active output is a window management concept. It indicates what output
new windows have to be placed on if they have no output hint. So
Workspace seems to be a better place for it than the Screens class, which
is obsolete.
Currently, the EffectsHandler has two signals that are emitted when the
combined geometry of all outputs changes - virtualScreenGeometryChanged()
and screenGeometryChanged(). Having two signals is most likely a
historical artifact.
This change untangles the screenGeometryChanged() signal from the
Workspace and makes it the same as the virtualScreenGeometryChanged()
signal.
The main idea behind _NET_WM_FRAME_OVERLAP is to extend the borders of
the server-side decoration so the application can draw on top of it. It
was inspired by a similar feature in Windows.
However, _NET_WM_FRAME_OVERLAP is basically unused. Neither GTK nor Qt
supports it, and I have never seen any application that uses it.
At the moment, kwin is the only compositing window manager that supports
_NET_WM_FRAME_OVERLAP; mutter, compiz, compton, and so on do not
support it.
Since _NET_WM_FRAME_OVERLAP is practically unused, there's no point in
keeping support for it.
This change shouldn't affect any existing app, as the _NET_WM_FRAME_OVERLAP
atom is not listed in _NET_SUPPORTED.
Currently, thumbnail items are rendered by kwin. This means that qtquick
code cannot do things such as applying shader effects to window thumbnails
or simply drawing custom controls on top of thumbnails.
With this change, task switchers and qml extensions will be able to
place their own contents on top of thumbnails and apply custom effects
to them.
In order to integrate window thumbnails, a window is rendered on the kwin
side using its own opengl context. A fence is inserted in the command
stream to ensure that the qtquick machinery doesn't start using the
offscreen texture while there are still rendering commands being executed.
Thumbnails are rendered into offscreen textures as we don't have full
control over when qtquick windows render their contents and to work around
the fact that things such as VAOs can't be shared across OpenGL contexts.
WindowThumbnailItem and DesktopThumbnailItem act as texture providers.
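The fence itself is standard OpenGL sync-object usage; roughly:

```cpp
#include <epoxy/gl.h>

// After kwin renders the window into the offscreen texture in its own context:
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
glFlush(); // make sure the fence actually reaches the GPU

// Later, in the qtquick rendering context, before sampling the texture:
glWaitSync(fence, 0, GL_TIMEOUT_IGNORED); // GPU-side wait, no CPU stall
glDeleteSync(fence);
```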
It's unused, and the advantages of keeping it are outweighed by the
disadvantages - the returned value is dependent on the window type.
If you need to draw a drop-shadow that matches the shape of the window
or something along that line, render the window into an offscreen
texture and sample the alpha channel in a fragment shader.
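A sketch of that approach, with the fragment shader embedded as a string (illustrative only):

```cpp
// The shadow takes its shape from the window texture's alpha channel.
const char *shadowFragmentShader = R"(
    varying vec2 texcoord0;
    uniform sampler2D sampler;
    uniform vec4 shadowColor;

    void main()
    {
        float alpha = texture2D(sampler, texcoord0).a;
        gl_FragColor = shadowColor * alpha; // shadow follows the window shape
    }
)";
```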
At the moment, we handle window quads inefficiently. Window quads from
all items are merged into a single list just to be broken up again.
This change removes window quads from libkwineffects. This allows us to
handle window quads efficiently. Furthermore, we could optimize methods
such as WindowVertex::left() and so on. KWin spends a considerable amount
of time in those methods when many windows have to be composited.
It's a necessary prerequisite for making the wl_surface painting code
role-agnostic.
The Xrender backend was added at a time when OpenGL drivers were not
particularly stable. Nowadays, though, it's a totally different situation.
The OpenGL render backend has been the default one for many years. It's
quite stable, and it allows implementing many advanced features that
other render backends don't.
Many features are not tested with the xrender backend during the
development cycle; the only time it gets noticed is when changes in other
parts of kwin break the build of the xrender backend. Effectively, the
xrender backend is unmaintained nowadays.
Given that the xrender backend is effectively unmaintained and our focus
being shifted towards wayland, this change drops the xrender backend in
favor of the opengl backend.
Besides the backend being de facto unmaintained, another issue is that
QtQuick does not support, and most likely never will support, the Xrender
API. This poses a problem, as we want thumbnail items to be natively
integrated in the qtquick scene graph.
With the introduction of scene items, the Scene::Window::bufferShape()
method was removed as it makes no sense on wayland - a window may have
several sub-surfaces, so a single region to indicate the shape of the
window won't work.
SurfaceItem::shape() returns the shape of a surface. On Wayland, it
corresponds to the rect of the wl_surface. On X11, it corresponds to the
client window rect inside the frame window, or the custom shape region if
the client has set one.
On the other hand, EffectWindow::shape() means a completely different
thing. If the window is decorated, it needs to return the rect of the
decoration. Otherwise, it has to return the shape region if there's one.
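In pseudocode, the described semantics amount to something like this (the accessor names are assumptions):

```cpp
// Illustrative only, not the actual implementation.
QRegion effectWindowShape(const EffectWindow *window)
{
    if (window->hasDecoration()) {
        return QRegion(window->frameRect()); // the rect of the decoration
    }
    if (!window->shapeRegion().isEmpty()) {
        return window->shapeRegion(); // custom X11 shape region (e.g. xeyes)
    }
    return QRegion(window->frameRect());
}
```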
In the future, the EffectWindow::shape() function must be removed, as it
doesn't fit the item-based design. The main reason why we have it at
all is that the x server doesn't support translucency; setting a
shape region is a (hacky) way to work around that limitation, with xeyes
being a notable example.
BUG: 437138
BUG: 435862
This is to improve code readability and to make it easier, going forward,
to differentiate between methods that are used during interactive
move-resize and normal move-resize methods.