AbstractOutput is not so abstract, and it's common to avoid the word
"Abstract" in class names as it doesn't contribute any new information.
Dropping it also significantly reduces the line width in some places.
Added this interface to the VirtualDesktopManager. Realtime touchpad gestures update the interface to allow for macOS-style desktop switching.
Also makes gesture-based switching use the natural direction.
BUG: 185710
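A rough illustration of the idea (hypothetical names, not the actual
VirtualDesktopManager interface): while the fingers move, the gesture
continuously updates a fractional desktop offset, and the switch is
only committed when the gesture ends.

    #include <QtGlobal>

    // Sketch: a realtime swipe feeds its motion into a fractional
    // offset between desktops; releasing the fingers snaps to the
    // nearest desktop, macOS-style.
    class DesktopSwitchGesture
    {
    public:
        void update(qreal delta)
        {
            m_offset += delta; // tracks the fingers 1:1
        }
        void end()
        {
            m_offset = qRound(m_offset); // snap to the nearest desktop
        }

    private:
        qreal m_offset = 0; // fractional position between desktops
    };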
The main motivation behind this change is to unify render target
representation across the OpenGL and software renderers and to avoid
accessing the render backend directly in order to get the render target.
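For example, a unified representation could be as simple as a wrapper
that both renderers hand out. A minimal sketch under assumed names
(GLFramebufferId is a stand-in, not a kwin type):

    #include <QImage>
    #include <variant>

    // Sketch: a single render target type that both the OpenGL and
    // the software (QImage-based) renderers can provide, so callers
    // no longer reach into a specific backend.
    using GLFramebufferId = unsigned int;

    class RenderTarget
    {
    public:
        explicit RenderTarget(GLFramebufferId fbo) : m_native(fbo) {}
        explicit RenderTarget(QImage *image) : m_native(image) {}

    private:
        std::variant<GLFramebufferId, QImage *> m_native;
    };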
- added an option to hide the frametime graph
- added an option to hide the "this is a benchmark" message
- the FPS counter is now placed on the "active" monitor by default
- removed the hard limit of 100 for the FPS counter
- added an option to color the text based on the FPS value
With this change, the Workspace provides clientArea() overloads
that take only AbstractOutput and VirtualDesktop. Integer ids are
obsolete as they are unstable.
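The new overloads might look roughly like this (a sketch; the real
signature likely carries more parameters):

    #include <QRect>

    class AbstractOutput;
    class VirtualDesktop;

    // Sketch: clientArea() keyed by output and desktop objects rather
    // than by unstable integer ids.
    class Workspace
    {
    public:
        QRect clientArea(const AbstractOutput *output,
                         const VirtualDesktop *desktop) const;
    };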
Swipe with three fingers:
- left to switch to the previous virtual desktop
- right to switch to the next virtual desktop
- up and down to toggle the overview
CCBUG: 439925
The .clang-format file is based on the one in ECM, except for the
following style options:
- AlwaysBreakBeforeMultilineStrings
- BinPackArguments
- BinPackParameters
- ColumnLimit
- BreakBeforeBraces
- KeepEmptyLinesAtTheStartOfBlocks
Support realtime activation for screen edge gestures, making it
possible for effects to show half-triggered states while dragging from
the edge with a finger. This makes them much more usable.
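A minimal sketch of the idea (hypothetical names): the edge reports a
progress value in [0, 1] while the finger drags, and the effect renders
a partially-triggered state from it.

    #include <algorithm>

    // Sketch: an effect consuming realtime edge-gesture progress.
    struct EdgeDrivenEffect
    {
        // called repeatedly while the finger drags away from the edge
        void setPartialActivation(double progress)
        {
            m_activation = std::clamp(progress, 0.0, 1.0);
        }
        // called once the drag passes the trigger threshold
        void activateFully() { m_activation = 1.0; }

        double m_activation = 0.0; // drives the half-triggered rendering
    };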
It's not possible to get the surface damage before calling
Scene::paint(), which is a big problem because it blocks proper
surface damage and buffer damage calculation when walking the render
layer tree.
This change reworks the scene compositing stages to allow getting the
next surface damage before calling Scene::paint().
The main challenge is that effects can expand the surface damage. We
have to call prePaintWindow() and prePaintScreen() before actually
starting to paint; however, prePaintWindow() is currently called after
rendering has already started.
This change makes Scene call prePaintWindow() and prePaintScreen() so
it's possible to know the surface damage beforehand. Unfortunately, it's
also a breaking change. Some fullscreen effects will have to adapt to
the new Scene paint order. Paint hooks will be invoked in the following
order:
* prePaintScreen() once per frame
* prePaintWindow() once per frame
* paintScreen() can be called multiple times
* paintWindow() can be called as many times as paintScreen()
* postPaintWindow() once per frame
* postPaintScreen() once per frame
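Sketched as code, one frame under the new order looks roughly like
this (simplified stand-ins, not the actual kwin classes):

    #include <QList>
    #include <QRegion>

    // Stand-ins for the real kwin types, only to sketch the call order.
    struct Window { };
    struct Scene {
        void prePaintScreen() { }
        void prePaintWindow(Window *) { }
        void paintScreen(const QRegion &) { } // paintWindow() runs from here
        void postPaintWindow(Window *) { }
        void postPaintScreen() { }
    };

    // One frame under the new paint order (simplified).
    void paintFrame(Scene *scene, const QList<Window *> &windows,
                    const QList<QRegion> &repaints)
    {
        scene->prePaintScreen();        // once per frame
        for (Window *w : windows) {
            scene->prePaintWindow(w);   // once per frame, may expand damage
        }
        // the surface damage is now known before any painting happens
        for (const QRegion &region : repaints) {
            scene->paintScreen(region); // possibly multiple times per frame
        }
        for (Window *w : windows) {
            scene->postPaintWindow(w);  // once per frame
        }
        scene->postPaintScreen();       // once per frame
    }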
After walking the render layer tree, the Compositor will poke the render
backend for the back buffer repair region and combine it with the
surface damage to get the buffer damage, which can be passed to the
render backend (in order to optimize performance with tiled gpus) and
Scene::paint(), which will determine what parts of the scene have to
be repainted based on the buffer damage.
Having a blurRegion to identify whether a decoration supports blur, instead of the metadata-json way, has the following benefits:
- decorations can now provide blur or not based on user preference
- theme engines such as Aurorae do not have to enforce blur on their themes; they can support blur-enabled and blur-disabled themes at the same time if they want to
- the blurRegion is empty by default, so the Korners bug will be fixed for all solid Aurorae themes. Breeze and Oxygen have set **blur:false**, so nothing changes for them.
- all Aurorae themes that do not require blur will free up system resources by default
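Conceptually, the contract looks like this (a sketch with assumed
method names, not the exact decoration API):

    #include <QRegion>

    // Sketch: the decoration simply exposes the region to blur; an
    // empty region (the default) means the decoration needs no blur.
    class Decoration
    {
    public:
        QRegion blurRegion() const { return m_blurRegion; }

    protected:
        void setBlurRegion(const QRegion &region) { m_blurRegion = region; }

    private:
        QRegion m_blurRegion; // empty by default
    };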
This adds support for animating showing/hiding of the input method panel
to the sliding popup effect, if the input panel is of type "Toplevel".
This is mainly intended to animate showing the virtual keyboard and has
been primarily tested with Maliit. It replaces the client-side
animation that Maliit would do, instead performing the animation on
the KWin side, which provides a significantly smoother experience.
This ensures that we get a warning if the config header is not
included, instead of compiling the code as if the feature was
disabled. Interestingly, some checks already used #if KWIN_BUILD_*, so
those were generating -Wundef warnings when the feature is disabled.
Commit 886173cab assumed that all those features were already defined
to 0 or 1, so this unbreaks the build if any of the features is
disabled.
Fixes: 886173cab ("Reduce ifdefs in Workspace::supportInformation()")
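To illustrate the difference (KWIN_BUILD_ACTIVITIES standing in for
any of the feature macros):

    // With #ifdef, forgetting to include config-kwin.h silently
    // compiles the feature away:
    #ifdef KWIN_BUILD_ACTIVITIES
    // ... feature code, silently skipped if the header is missing
    #endif

    // With #if plus -Wundef, the same mistake is diagnosed, because
    // the macro is evaluated while undefined:
    #if KWIN_BUILD_ACTIVITIES
    // ... feature code
    #endif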
This is the first tiny step towards layer-based compositing in kwin.
The RenderLayer represents a layer with some contents. The actual
contents are represented by the RenderLayerDelegate class.
Currently, the RenderLayer is just a simple class responsible for
geometry and repaints, but it will grow in the future. For example,
render layers need to form a tree.
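In this first iteration the split looks roughly like this (simplified
sketch; the real classes differ in detail):

    #include <QRect>
    #include <QRegion>

    // Sketch: the layer owns geometry and repaint tracking, while
    // what is actually drawn lives in the delegate.
    class RenderLayerDelegate
    {
    public:
        virtual ~RenderLayerDelegate() = default;
        virtual void paint(const QRegion &damage) = 0;
    };

    class RenderLayer
    {
    public:
        explicit RenderLayer(RenderLayerDelegate *delegate)
            : m_delegate(delegate) { }

        void setGeometry(const QRect &rect) { m_geometry = rect; }
        void addRepaint(const QRegion &region) { m_repaints += region; }

    private:
        RenderLayerDelegate *m_delegate;
        QRect m_geometry;
        QRegion m_repaints;
    };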
The next biggest missing component in the layer-based compositing is
output layers. When output layers are added, each render layer will
either have an output layer assigned to it or inherit its output layer
from the parent.
The render layer tree wouldn't be affected by changes to the output
layer tree, so the transition between software and hardware cursors
can be seamless.
The next big milestone will be to try to port some of the existing
kwin functionality to the RenderLayer, e.g. the software cursor or
screen edges.
The responsibilities of the Scene must be reduced to painting only so we
can move forward with the layer-based compositing.
This change moves the direct scanout logic from the OpenGL scene to
the base scene class and the compositor. It makes the OpenGL scene
less overloaded and allows the direct scanout logic to be shared.
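The resulting shape, roughly (made-up names): the compositor asks the
scene whether the frame can be scanned out directly, and only
composites when it can't.

    // Sketch of the shared path: try direct scanout first, composite
    // only when it fails.
    struct Output { };
    struct Scene {
        // true if the topmost window's buffer can go on the output as-is
        bool tryDirectScanout(Output *) { return false; }
        void paint(Output *) { }
    };

    void composite(Scene *scene, Output *output)
    {
        if (scene->tryDirectScanout(output)) {
            return; // buffer presented directly, no compositing needed
        }
        scene->paint(output);
    }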
Because the GLRenderTarget and the GLVertexBuffer use the global
coordinate system, they are not ergonomic in render layers.
Assigning the device pixel ratio to the GLRenderTarget and the
GLVertexBuffer is a questionable API design choice too. Scaling is a
window system abstraction that is absent in OpenGL and Vulkan; for
example, it's not possible to create an OpenGL texture with a scale
factor of 2. OpenGL only works with device pixels.
This change makes the GLRenderTarget and the GLVertexBuffer more
ergonomic for usages other than rendering the workspace by removing all
the global coordinate system and scaling stuff. That's the
responsibility of the users of those two classes.
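After this change, mapping from logical to device coordinates is done
by the caller, along these lines (a sketch, not kwin code):

    #include <QRect>

    // Sketch: the caller resolves scaling up front, so GLRenderTarget
    // and GLVertexBuffer only ever see device pixels.
    QRect mapToDevice(const QRect &logical, qreal scale)
    {
        return QRect(qRound(logical.x() * scale),
                     qRound(logical.y() * scale),
                     qRound(logical.width() * scale),
                     qRound(logical.height() * scale));
    }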
This change replaces abort() with the Q_ASSERT and Q_UNREACHABLE()
macros to make the kwin code base consistent. Besides that,
Q_UNREACHABLE() may provide the compiler with additional information
that can be used to generate more efficient machine code.
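For example, in an exhaustively handled switch, Q_UNREACHABLE() both
documents the invariant and lets the compiler drop the impossible
branch:

    #include <QtGlobal>

    enum class Direction { Left, Right };

    int sign(Direction direction)
    {
        switch (direction) {
        case Direction::Left:
            return -1;
        case Direction::Right:
            return 1;
        }
        Q_UNREACHABLE(); // every enumerator is handled above
    }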
EffectQuickScene is not used strictly by effects; Aurorae decorations
also use it to render window decorations. This change renames
EffectQuickView/Scene to OffscreenQuickView/Scene to clear up the
naming scheme.
The Compositor contains nothing that can potentially get dirty and need
repainting.
As is, the advantages of this move aren't really noticeable, but it
makes sense with multiple scenes.
Backend parts are far from ideal; they can be improved later on as we
progress with the scene redesign.
The main idea behind the render backend is to decouple low level bits
from scenes. The end goal is to make the render backend provide render
targets where the scene can render.
Design-wise, such a split is more flexible than the current state. For
example, we could start experimenting with using QtQuick (assuming
that the legacy scene is properly encapsulated), or create multiple
scenes, e.g. one per output layer.
So far, the RenderBackend class contains only one getter; more will be
moved out of the Scene as it makes sense.
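In its end-goal form the backend is little more than an interface the
scene can query; a sketch (the getter's name here is an assumption):

    class RenderTarget;

    // Sketch: the backend hands the scene a target to render into.
    class RenderBackend
    {
    public:
        virtual ~RenderBackend() = default;
        virtual RenderTarget *renderTarget() const = 0;
    };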
The screen count can be retrieved by checking the number of items in
the EffectsHandler.screens property. The replacement for the
numberScreensChanged signal is the pair of screenAdded and
screenRemoved signals.
The main motivation behind this change is to clean up the screens API
and reduce the number of usages of the Screens class.
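For consumers that only cared about the count, the migration is
mechanical; a stand-alone sketch of the pattern (plain C++, not the
actual effects API):

    #include <cstddef>
    #include <vector>

    struct EffectScreenStub { }; // stand-in for the effects screen type

    // Sketch: track screens individually and derive the count, instead
    // of listening for a bare numberScreensChanged-style signal.
    struct ScreenList
    {
        std::vector<EffectScreenStub *> screens;

        void screenAdded(EffectScreenStub *s) { screens.push_back(s); }
        void screenRemoved(EffectScreenStub *s) { std::erase(screens, s); }

        std::size_t count() const { return screens.size(); }
    };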
This lays down some groundwork for realtime gestures on Wayland, so
that gestures that track the user's motion on a touchpad 1:1 can now
be implemented.
Due to earlier commits, this is mostly just glue code to make a
convenient API.
Gestures implemented with this API are four-finger gestures, to
avoid conflicting with apps that may use two or three-finger
gestures.
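The API surface such a gesture needs is roughly of this shape (names
and signature are assumptions, not the actual kwin API):

    #include <functional>

    // Sketch: a realtime gesture reports incremental progress while
    // the fingers move, then either completes or cancels.
    enum class SwipeDirection { Left, Right, Up, Down };

    struct RealtimeSwipeGesture
    {
        SwipeDirection direction = SwipeDirection::Left;
        int fingerCount = 4; // four fingers, to avoid clashing with apps

        std::function<void(double)> progress; // called 1:1 with motion
        std::function<void()> triggered;      // lifted past the threshold
        std::function<void()> cancelled;      // gesture abandoned
    };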
Active output is a window management concept. It indicates what output
new windows have to be placed on if they have no output hint. So
Workspace seems to be a better place for it than the Screens class, which
is obsolete.
Currently, the EffectsHandler has two signals that are emitted when
the combined geometry of all outputs changes:
virtualScreenGeometryChanged() and screenGeometryChanged(). Having two
signals is most likely a historical artifact.
This change untangles the screenGeometryChanged() signal from the
Workspace and makes it the same as the virtualScreenGeometryChanged()
signal.