We use the PMF syntax, so the isValid() check is unnecessary: the
compiler will complain about a wrong signal at compile time. This makes
writing autotests feel less boilerplate-heavy.
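For illustration, the difference boils down to this (the object and signal
here are just placeholders, not the ones used in the test):

    // String-based syntax: a mistyped signal is only caught at runtime,
    // so the spy has to be checked with isValid().
    QSignalSpy stringSpy(output, SIGNAL(geometryChanged()));
    QVERIFY(stringSpy.isValid());

    // PMF (pointer-to-member-function) syntax: a wrong signal name or
    // signature fails to compile, so no isValid() check is needed.
    QSignalSpy pmfSpy(output, &Output::geometryChanged);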
It also adds more test cases to OutputChangesTest, in particular for
swapping outputs.
Swapping outputs is an interesting case because outputs can temporarily
overlap, so workspace()->outputAt() can return the wrong output and the
window will stick to the wrong output.
Currently, the Workspace processes output updates as they occur. For
example, when the drm backend scans connectors, the Workspace handles
hotplugged outputs one by one, and if an output configuration changes the
mode of several outputs, the Workspace processes the output layout
updates one by one instead of handling them in one pass. The main reason
for the current behavior is simplicity.
However, that can create issues because the output layout can temporarily
be in a degenerate state, and features such as sticking windows to their
outputs will be broken.
In order to fix that, this change makes the Workspace process batched
output updates. There are several challenges: disconnected outputs have
to stay alive when the outputsQueried signal is emitted, and the Workspace
needs to determine on its own which outputs have been added or removed.
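Roughly, the diffing could look like the sketch below; it is not the actual
implementation, and the outputs() getter as well as the m_outputs member are
assumed names.

    void Workspace::updateOutputs()
    {
        const QList<Output *> current = kwinApp()->platform()->outputs(); // assumed getter
        QList<Output *> added;
        QList<Output *> removed;
        for (Output *output : current) {
            if (!m_outputs.contains(output)) {
                added.append(output);
            }
        }
        for (Output *output : std::as_const(m_outputs)) {
            if (!current.contains(output)) {
                removed.append(output); // still alive at this point
            }
        }
        m_outputs = current;
        // handle added and removed outputs here, in a single pass
    }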
Separate trigger progress and semantic progress in gestures.
Move effect activation and desktop switching over to semantic progress.
Allow semantic progress to exceed 1 for overshoot in animations.
I've added the VerticalAxis, HorizontalAxis, DirectionlessSwipe and BiDirectionalPinch gesture directions.
These are all combinations of other gesture directions that semantically work well together.
I've implemented these gestures, changed some labels, and improved documentation.
Also, add a vector signal to SwipeGesture.
- Now only 1 GestureDirection enum
- Now only 1 registerGesture() call
- The 4 kinds of gesture (Pinch/Swipe) and (Touchpad/Touchscreen) in globalshortcuts.h/cpp are merged into 1 GestureShortcut
- Change from range to set of finger counts in gestures
No behavior should change, just a refactor.
This might be the root cause of random ASAN errors in testQuickTiling.
From commit 617291c6974d232ee99c4c49e891ce16863e3d6e:
The internal EventQueue is a child of the registry object. This means
that after the registry is destroyed, all proxy objects in that event
queue will have a dangling reference to it, which is not a problem
as long as the wl_display_dispatch() function is not called.
The wl_display_dispatch() function uses wl_proxy's queue reference to
enqueue incoming events to that queue.
Unfortunately, during teardown, the internal ConnectionThread may
dispatch events right after the registry object has been destroyed,
which can lead to a crash.
In order to fix the crash, we need to destroy all proxy objects and only
after that we can destroy the event queue. It's okay if wayland events
are dispatched in between.
i.e. the EventQueue object must be destroyed last to avoid hitting
dangling pointers.
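The required teardown order, roughly sketched (names are illustrative, not
the exact test code):

    // 1. Destroy every proxy object that was attached to the queue,
    //    including the registry, which is itself a wl_proxy.
    delete compositor;
    delete shm;
    delete registry;
    // 2. Only now destroy the event queue: no proxy references it anymore,
    //    so a late wl_display_dispatch() cannot enqueue events into freed memory.
    delete queue;
    // 3. Finally tear down the connection thread.
    delete connection;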
This re-enables the crossfade between the old window picture and the new one in the maximize and morphingpopup effects.
It does that using the OffscreenEffect redirect() feature.
BUG:439689
BUG:435423
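The gist of the usage, as a hedged sketch rather than the actual effect code:

    // In a subclass of OffscreenEffect: redirect() makes the window contents
    // available as an offscreen texture that the effect can blend itself.
    void MaximizeLikeEffect::onMaximizedChanged(EffectWindow *window)
    {
        redirect(window); // keep the old window picture around
        // animate, cross-fading the old picture with the new contents,
        // then call unredirect(window) once the animation finishes
    }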
Things such as Output, InputDevice and so on are meant to be
multi-purpose. In order to make this separation clearer, this change
moves that code into the core directory. Some things still link to the
abstraction level above (kwin); they can be tackled in future refactors.
Ideally, code in core/ should depend either on other code in core/ or on
system libs.
Other policy enums are declared in options.h, so let's do the same for the
placement policy. Besides consistency, another advantage of moving the
enum into the KWin namespace is that the enum can be forward declared.
An application that does not support text-input has no way of
communicating with the input method, so even if you show the input
method the application receives nothing. As a fallback, instead send
fake key events so the application still gets something at least.
The key events are synthesised based on the text string that the
input method sends, which may result in things that do not actually
correspond to real keys. Unfortunately I do not see a way around that.
CCBUG: 439911
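Roughly, the synthesis looks like this; only xkb_utf32_to_keysym() is a real
xkbcommon call, the send helpers are placeholders:

    #include <xkbcommon/xkbcommon.h>

    void sendFakeKeyEvents(const QString &text)
    {
        for (const QChar ch : text) {
            const xkb_keysym_t sym = xkb_utf32_to_keysym(ch.unicode());
            if (sym == XKB_KEY_NoSymbol) {
                continue; // some characters simply have no matching key
            }
            sendKeyPress(sym);   // placeholder helpers for the fake events
            sendKeyRelease(sym);
        }
    }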
Currently, the main user of these two functions is the X11 standalone
platform.
This change ports that code to Workspace::geometry(), which is not great
but the X11 backend already depends on the Workspace indirectly via the
Screens. Not sure if it's worth making the standalone X11 backend track
the xinerama rect internally.
This class can be used to create an anonymous file, for instance
to pass data between the compositor and clients by means of a
file descriptor, as is done in various Wayland protocols, notably
the keymap exchange.
It also implements sealing the file, so that it can be shared
between multiple clients without them being able to modify it.
If supported, memfd_create is used; otherwise a `QTemporaryFile`
is used.
Signed-off-by: Victoria Fischer <victoria.fischer@mbition.io>
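A minimal sketch of the memfd path with sealing (error handling omitted, the
file name is arbitrary; this is not the exact implementation):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <cstring>

    int createSealedFile(const void *data, size_t size)
    {
        const int fd = memfd_create("kwin-shared", MFD_CLOEXEC | MFD_ALLOW_SEALING);
        ftruncate(fd, size);
        void *mapped = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        memcpy(mapped, data, size);
        munmap(mapped, size);
        // Seal the file so clients can neither resize nor modify it.
        fcntl(fd, F_ADD_SEALS, F_SEAL_GROW | F_SEAL_SHRINK | F_SEAL_WRITE | F_SEAL_SEAL);
        return fd;
    }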
Since the screen number is well-known, we can look up the default
screen on demand. Note that xcb_get_setup() is pretty cheap as it
simply returns a const pointer to pre-allocated data.
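For illustration, the on-demand lookup boils down to something like this sketch:

    #include <xcb/xcb.h>

    xcb_screen_t *defaultScreen(xcb_connection_t *connection, int screenNumber)
    {
        // xcb_get_setup() only returns a pointer to data the connection
        // already holds, so calling it repeatedly is cheap.
        xcb_screen_iterator_t it = xcb_setup_roots_iterator(xcb_get_setup(connection));
        for (int i = 0; it.rem; ++i, xcb_screen_next(&it)) {
            if (i == screenNumber) {
                return it.data;
            }
        }
        return nullptr;
    }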
At the moment, a platform should provide two output lists: one that
lists all available outputs, and another that contains only enabled
outputs. In general, this amounts to some boilerplate code and forces
backends to be implemented in a certain way, which is sometimes
inconvenient. For example, if an output is disabled or enabled, it would
be simpler if we only changed Output::isEnabled(); otherwise we need to
start accounting for corner cases such as the order in which
Output::isEnabled() and Platform::enabledOutputs() are changed, etc.
On X11, when a window is maximised and the client is unable to fulfill the
space provided, we centre align the window.
With the new floating point geometry, the behaviour of centring changes.
Instead of a 1 pixel gap at the top, we get a 0.5 pixel gap on either side.
When we get into the codepath to "fix" the window in `closeHeight` we
only move the top, giving us an invalid buffer size.
We don't really want to change the logic here; on Xwayland with the
scaling opt-out path it's feasible for a fractional logical size to
still be representable. This code rounds to the native unit after all
the logic has taken effect.
It's important for tablet devices to be able to specify which section
of the display the tablet will be mapped to. This setting allows
specifying this by providing some options that do so relative to the
output size.
CCBUG: 433045
The Session can be useful not only to the platform backend but also
input backends and for things such as vt switching, etc. Therefore it's
better to have the Application own the Session.
Platform backends are provided as plugins. This is great for
extensibility, but the disadvantages of this design outweigh the
benefits.
The number of backends will be limited, it's safe to say that we will
have to maintain three backends for many years to come - kms/drm,
virtual, and wayland. The plugin system adds unnecessary complexity.
Startup logic is affected too. At the moment, platform backends provide
the session object, which is awkward as it starts adding dependencies
between backends. It will be nicer if the session is created depending
on the loaded session type.
In some cases, wayland code needs to talk to the backend directly, e.g.
for drm leasing, etc. With the plugin architecture it's hard to do that.
Not impossible though, we can approach it as in Qt 6, but it's still
harder than linking the code directly.
Of course, the main disadvantage of shipping backends in a lib is that
you will need to patch kwin if you need a custom platform; however, such
cases will be rare.
Despite that disadvantage, I still think that it's a step in the right
direction where the goal is to have multi-purpose backends and other
reusable components of kwin.
The legacy X11 standalone platform is linked directly to kwin_x11
executable, while the remaining backends are linked to libkwin.
The original intention behind creating plugins before the workspace was
to handle the case where kwin_wayland may need to wait until outputs are
available. However, since things have changed a lot in that regard,
plugins can be loaded after the workspace now.
The main benefit behind this is that plugins can be simpler, they won't
need to track when the workspace is created.
On X11, plugins are already loaded after the workspace is instantiated.
This change adjusts the window management abstractions in kwin for the
drm backend providing more than just "desktop" outputs.
Besides that, it has other potential benefits - for example, the
Workspace could start managing allocation of the placeholder output by
itself, thus leading to some simplifications in the drm backend. Another
is that it lets us move wayland code from the drm backend.
We gain nothing with it. XCB setup logic in the Xwayland server has to
be moved to the workspace layer anyway. For example, this move of
responsibilities will be needed to support running more than just one
instance of Xwayland. Architecture-wise, it would be cleaner too.
Unfortunately, it breaks encapsulation of the Application, but this can
be taken care of later.
With fractional scaling, integer-based logical geometry may not match
device pixels. Once we have a floating point base we can fix that. This
is also important for our X11 scale override: with a scale of 2 we could
get logical sizes with halves.
We already have all input being floating point; this doubles down on it
for all remaining geometry.
- Outputs remain integer to ensure that any screen on the right remains
aligned.
- Placement also remains integer based for now.
- Repainting is untouched as we always expand outwards
(QRectF::toAlignedRect()).
- Decoration is untouched for now.
- Rules are integer in the config, but floating in the adjusting/API.
This should also be fine.
At some point we'll add a method to snap to the device pixel grid,
effectively `round(value * dpr) / dpr`, though right now things
mostly work.
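For reference, such a snapping helper could look like this sketch (not in the
tree yet):

    #include <QtGlobal>
    #include <cmath>

    // Round in device pixels, then convert back to logical coordinates.
    qreal snapToPixelGrid(qreal logical, qreal dpr)
    {
        return std::round(logical * dpr) / dpr;
    }
    // e.g. with dpr = 2, a logical value of 10.3 snaps to 10.5 (device pixel 21)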
This also gets rid of a lot of hacks for QRect right and bottom, which
are very confusing.
Parts to watch out for in the port are:
- QRectF::contains now includes edges
- QRectF::right and bottom are now sane, so previous hacks have to be
removed
- QRectF(QPoint, QPoint) behaves differently for the same reason
- QRectF::center too
Some values in the test results are adjusted where they were the result
of QRect::center, because using QRectF's center should behave the same to
the user.
The Screens object is created by Workspace on X11. This change makes X11
and Wayland behave more similarly. As is, Screens is a helper for
window management code; don't use it in backends. Note that the X11 backend
already uses Screens; that needs to be addressed separately.
We don't really care about the window showing up until we're calling
showInputPanel, but since Workspace::windowAdded is triggered for any
window that gets added, the test sometimes fails because count() is 2
instead of 1. To avoid that, only create the spy when it's actually
relevant instead of all the way at the start before any other setup is
done.
ScreenPaintData provides a way to transform the painted screen, e.g.
scale or translate it. From an API point of view, it's great: it allows
fullscreen effects to transform the workspace in various ways.
On the other hand, such effects end up fighting the default scene
painting algorithm. For example, just have a look at the slide effect!
With fullscreen effects, it's better to leave the decision of how the
screen should be painted to them. For example, such an approach is taken
in some wayland compositors, e.g. wayfire, and our qtquick effects
already operate in a similar fashion.
Given that, strip the ScreenPaintData of all available transforms. The
main motivation behind this change is to improve encapsulation of item
painting code and simplify model-view-projection code in kwin. It will
also make the job of extracting item code for sharing purposes easier.
The IdleDetector is an idle detection helper. Its purpose is to reduce
code duplication in our private KIdleTime plugin and the idle wayland
protocol, and make user activity simulation less error prone.
Anything in xcb_ structs is always in X-local coordinates; all member
variables aside from buffers are in kwin-local space.
This patch ignores a few paths that are not relevant on wayland.
Currently, if you want to use TimeLine, you need to track the last
presentation timestamp which boils down to carrying some boilerplate
code.
The current situation can be improved by making TimeLine work with
presentation timestamps.
Part-of: <https://invent.kde.org/plasma/kwin/-/merge_requests/2473>
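The before/after difference, sketched with approximate signatures (the exact
TimeLine method names may differ):

    // Before: every effect had to remember the last presentation timestamp
    // and feed the timeline with a delta.
    std::chrono::milliseconds delta = presentTime - m_lastPresentTime;
    m_lastPresentTime = presentTime;
    timeLine.update(delta);

    // After: pass the presentation timestamp directly and let TimeLine
    // keep track of the previous one internally.
    timeLine.advance(presentTime);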
Firstly, we weren't waiting for a signal at all; we were relying on events
being processed externally, which is wrong.
Secondly, ScreenLocker::KSldApp::self()->lockState() is tri-state:
unlocked, acquiring, locked. This gets compressed to a boolean where
acquiring and locked are the same.
If we run the tests whilst we're still acquiring the lock screen, we can
call unlocked before we've finished locking. The greeter might then be
shown afterwards, triggering a re-lock. It's a confused state.
The XRender backend has been removed, leaving most of KWinXRenderUtils unused.
The few features that are still used, notably `XRenderPicture` and the pict
format, are moved into the x11/common directory.
Signed-off-by: Victoria Fischer <victoria.fischer@mbition.io>
This change makes the WindowItem track the opacity and schedule a
repaint. It further decouples the legacy scene from core window
abstractions.
It's an API-breaking change: WindowPaintData can no longer make windows
more opaque, it only provides an additional opacity factor.
With this, the WindowItem will know whether it's actually visible. As
a result, if a native wayland window has been minimized, kwin won't
try to schedule a new frame if just a frame callback has been committed.
EffectWindow::enablePainting() and EffectWindow::disablePainting() act
as a stone in the shoe. They have the final say on whether the given
window is visible, and they are invoked too late in the rendering process.
The WindowItem needs to know whether the window is visible in advance,
before compositing starts.
This change replaces EffectWindow::enablePainting() and
EffectWindow::disablePainting() with EffectWindow::refVisible() and
EffectWindow::unrefVisible(). If an effect calls the refVisible()
function, the window will be kept visible regardless of its state. It
should be called when a window is minimized or closed, etc. If an effect
doesn't want to paint a window, it should not call effects->paintWindow().
EffectWindow::refVisible() doesn't replace EffectWindow::refWindow() but
supplements it. refVisible() only ensures that a window will be kept
visible while refWindow() ensures that the window won't be destroyed
until the effect is done with it.
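As a hedged sketch of the intended usage (the argument lists are simplified;
the real refVisible()/unrefVisible() API may take a reason or a reference
holder):

    void FadeOutLikeEffect::slotWindowClosed(EffectWindow *w)
    {
        w->refWindow();  // keep the closed window object alive for the animation
        w->refVisible(); // and keep it visible while we fade it out
    }

    void FadeOutLikeEffect::animationFinished(EffectWindow *w)
    {
        w->unrefVisible();
        w->unrefWindow();
    }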
Add the possibility to implement realtime screen edge gestures in scripted
effects, and implement it in the windowaperture show desktop effect.
* Expose registerRealtimeScreenEdge to JavaScript; the callback will be
a JS function.
* Add the concept of freezeInTime() in the animation JS bindings:
it will either create an animation frozen at a given time or freeze a running
animation that can be restored and run to completion at any time.
* Add an edges property only for showdesktop, as it's not directly on the effect configuration.
libinput_device_get_user_data() can be used to get the Device object
associated with a libinput_device. That way, we won't need to maintain a
private list of all input devices.
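Roughly, it works like this; the Device constructor shown here is an assumed
signature for kwin's wrapper:

    #include <libinput.h>

    void handleDeviceAdded(libinput_device *native)
    {
        auto device = new Device(native);               // kwin's wrapper object
        libinput_device_set_user_data(native, device);  // remember it on the native device
    }

    Device *deviceForNative(libinput_device *native)
    {
        // No list lookup needed anymore.
        return static_cast<Device *>(libinput_device_get_user_data(native));
    }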
This makes KWin switch to an in-tree copy of the KWaylandServer codebase.
KWaylandServer namespace has been left as is. It will be addressed later
by renaming classes in order to fit in the KWin namespace.
When we do more color management stuff we'll need it in more places;
making it a hard requirement reduces the amount of needed ifdefs and
should make adding color management features a little simpler.
AbstractOutput is not so abstract, and it's common to avoid the word
"Abstract" in class names as it doesn't contribute any new information.
The rename also significantly reduces the line width in some places.
Added this interface to the VirtualDesktopManager. Realtime touchpad gestures update the interface to allow for macOS-style desktop switching.
Also makes gestured switching use the natural direction.
BUG: 185710
Input event flow has been refactored so all input events originate from
input devices.
The X11 backend uses InputRedirection so make it forward events to
relevant input device handlers.
Reduce the code duplication and boilerplate to create and destroy
the test clients by making the related variables class members.
Handle special cases using an extensible flag mechanism.
With this change, the Workspace provides clientArea() overloads
that take only AbstractOutput and VirtualDesktop. Integer ids are
obsolete as they are unstable.
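Roughly, the overloads in question (exact signatures may differ):

    // New: stable, object-based overload.
    QRect clientArea(clientAreaOption option, const AbstractOutput *output,
                     const VirtualDesktop *desktop) const;

    // Old: id-based overload; screen and desktop ids are unstable.
    QRect clientArea(clientAreaOption option, int screen, int desktop) const;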
Swipe with three fingers
- left to switch to the previous virtual desktop
- right to switch to the next virtual desktop
- up and down to toggle the overview
CCBUG: 439925
The .clang-format file is based on the one in ECM except the following
style options:
- AlwaysBreakBeforeMultilineStrings
- BinPackArguments
- BinPackParameters
- ColumnLimit
- BreakBeforeBraces
- KeepEmptyLinesAtTheStartOfBlocks
These are two conceptually different tasks that were intertwined.
On its own it doesn't accomplish anything, but it is an important refactor
for longer term goals, namely:
- moving xwayland into kwin_wayland_wrapper with our wayland restart
handling support
- having multiple X connections
Behaviour should be the same.
[6/6] Make autotests create fake input devices
This test was the only one where input() could return a nullptr. With
this test removed, autotests can now expect input() to always return a
sane valid value and are therefore simpler to write.
That test belongs in kwayland-server anyway, and kwayland-server's test
suite already tests that starting without XDG_RUNTIME_DIR is a no-no.
[5/6] Make autotests create fake input devices
Migrate all input simulation functions from kwinApp()->platform()->...
to their counterparts in the Test namespace.
[4/6] Make autotests create fake input devices
This translates all required input simulation methods from
kwinApp()->platform()->... to separate functions in the Test namespace.
[3/6] Make autotests create fake input devices
This commit adds back all three VirtualInputDevices for simulating
keyboard, touch and pointer input events from autotests.
[1/6] Make autotests create fake input devices
The goal of this patch set is simulating user input in unit tests via
InputDevices and no longer use the Platform to fake input. This matches
more closely with how input is processed when running a full plasma
wayland session, i.e. with the DRM and libinput backends.
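The flavour of the port, as a sketch (the key code and timestamp handling are
illustrative):

    // Old: input simulated through the Platform.
    kwinApp()->platform()->keyboardKeyPressed(KEY_A, timestamp++);
    kwinApp()->platform()->keyboardKeyReleased(KEY_A, timestamp++);

    // New: free functions in the Test namespace, backed by fake InputDevices.
    Test::keyboardKeyPressed(KEY_A, timestamp++);
    Test::keyboardKeyReleased(KEY_A, timestamp++);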
This ensures that we get a warning if the config header is not included
instead of compiling the code as if the feature was disabled. Interestingly,
some checks already used #if KWIN_BUILD_*, so those were generating -Wundef
warnings when the feature was disabled. Commit 886173cab assumed that all
those features were already defined to 0/1, so this unbreaks the build if
any of the features is disabled.
Fixes: 886173cab ("Reduce ifdefs in Workspace::supportInformation()")
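For illustration (the option name is just an example):

    // config-kwin.h is generated with #cmakedefine01, so the macro is always
    // defined, to either 0 or 1.
    #include <config-kwin.h>

    #if KWIN_BUILD_ACTIVITIES // if the header were missing, -Wundef would warn here
    // activities-specific code
    #endif

    // The old #ifdef style silently compiled the feature out when the header
    // was not included:
    // #ifdef KWIN_BUILD_ACTIVITIES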
It's a leftover from the times when the widget style was using a wayland
connection. Breeze had to destroy all wayland resources before
terminating the internal connection.
The Xcursor loading code has hardcoded search paths; in order to take into
account distros installing app data in a different location,
libwayland-cursor sets ICONDIR to the icon directory computed based
on the install prefix.
However, that won't work with gitlab CI because it relocates binaries. A
more robust way to find cursors would be to use QStandardPaths to find
all the icon directories on the system.
Another advantage of using our own cursor loading code is that it allows us
to reuse cursor images that are symlinks. For example, with
breeze_cursors, almost half of the files in the cursors directory are
symlinks.
The main disadvantage of this approach is that we would have to keep the
search paths up to date. On the other hand, there are not that many
of them, e.g. ~/.icons, ~/.local/share/icons, /usr/share/icons,
/usr/local/share/icons. The last three are implicitly handled by
QStandardPaths.
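A sketch of what the QStandardPaths-based lookup could look like; the helper
itself is illustrative, not existing kwin code:

    #include <QDir>
    #include <QStandardPaths>
    #include <QString>
    #include <QStringList>

    QStringList cursorSearchPaths(const QString &theme)
    {
        QStringList paths;
        paths << QDir::homePath() + QLatin1String("/.icons/") + theme + QLatin1String("/cursors");
        // Covers ~/.local/share/icons, /usr/share/icons, /usr/local/share/icons, ...
        const QStringList dataDirs = QStandardPaths::locateAll(
            QStandardPaths::GenericDataLocation, QStringLiteral("icons"),
            QStandardPaths::LocateDirectory);
        for (const QString &dir : dataDirs) {
            paths << dir + QLatin1Char('/') + theme + QLatin1String("/cursors");
        }
        return paths;
    }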
With the xdg_toplevel.configure_bounds support, the compositor is
finally able to tell the client the maximum recommended window size.
That approach allows us to keep the compositor side simple and it
prevents (as long as the app is well behaved) annoying visual glitches
such as mapping the window with one size and then quickly resizing it to
the final size.
There are two operations failing currently, so we QEXPECT_FAIL
them for now:
- A new client is not moved to the screen set by the rule
(Wayland only). Affects Apply, Force and Remember
- Disabling and enabling an output will not move the client
to the Forced screen (On X11 and Wayland, BUG:409979)
If the user wants to move a tiled window, but changes their mind and
tiles the window back to the previous position, the geometryRestore()
will be corrupted because initialMoveResizeGeometry() is the same as the
geometry of the window in the tiled mode.
This change fixes tracking of the geometry restore by precomputing the
geometry restore when an interactive move starts. That way, if the window
is untiled and tiled again without releasing the left pointer button, the
geometry restore will be set to the correct value in setQuickTileMode().
This change also adjusts the test suite so such a subtle case won't be
broken again without noticing it.
It can also be applied to client-side decorations. As long as the
compositor can ask the client to use some specific decoration mode, the
"no border" property can be set.
If there's only one configure event that changes the position of the
window and it gets acknowledged but no buffer is attached yet, and a new
configure is sent, then the ConfigurePosition flag won't be inherited
by the new configure event and the window will be misplaced.
In order to fix that, this change makes XdgSurfaceClient pop the last
acknowledged configure event from the m_configureEvents list only when
it's about to be applied for sure.
BUG: 448856
Historically, noBorder() was used for two things:
* as a substitute for AbstractClient::isDecorated()
* to determine whether the AbstractClient should have a decoration
With async decoration updates refactoring, a few things around
noBorder() have changed, which exposed an existing bug in the handling
of borderless maximized windows.
It's possible to have a case where an initially maximized window makes
an xdg_toplevel.set_maximized request before the initial commit, but
creates the decoration object after the initial commit.
Since XdgToplevelClient::userCanSetNoBorder() would return false when
maximize() is called in XdgToplevelClient::initialize(), m_userNoBorder
won't be updated and therefore the window can end up having a server
side decoration.
Previously, it wasn't the case because kwin would do nothing if the
decoration is installed and its preferred mode changes after the initial
commit but before the surface is mapped. With async decoration fixes,
kwin would react as expected, which unfortunately has exposed the bug.
The root cause of the problem is the fact that noBorder() is overloaded,
which makes it error-prone.
This patch changes how the noBorder property is treated. Now, it only
indicates whether the compositor wants the window to have no borders. If
noBorder() is true, it means that the compositor doesn't want the window
to have a server-side decoration; on the other hand, if noBorder() is
false, it doesn't imply that the window should have a decoration.
BUG: 448740
geometryRestore() is no longer updated after mapping the window, so
setQuickTileMode() has to update geometryRestore() explicitly to the
correct value.
With this change, geometryRestore() will be updated as follows:
* if the window is tiled, geometryRestore() is valid, nothing to do
* if the window has been dragged to the top edge, set geometryRestore() to
the geometry that the window had when the move started
* otherwise, use the current move resize geometry
dontInteractiveMoveResize() was added to work around kwin sending bad
configure events when double clicking mpv to make it fullscreen.
With async geometry updates fixed, dontInteractiveMoveResize() can
finally be removed.
Another reason to remove dontInteractiveMoveResize() is that it can make
kwin crash with a debug build. For example, if you enable resizing
maximized windows in the breeze decoration settings and resize a maximized
window, kwin would eventually crash in
the AbstractClient::handleInteractiveMoveResize() function because neither
isInteractiveMove() nor isInteractiveResize() returns true.
On Wayland, the move resize geometry and the frame geometry are
completely out of sync.
This change synchronizes emitting of the clientStepUserMovedResized
signal to the move resize geometry changes.
It simplifies code of InternalClient and XdgSurfaceClient, and makes
adding support for other shell surface protocols easier as there's less
boilerplate stuff that you would need to take care of.
It's not practical; regular users don't care about window geometry. One
could argue that it can be useful for creating window rules, but the window
rules KCM pulls the relevant properties from kwin.
If needed, one can reimplement this feature as a QtQuick script that creates
an overlay window positioned above the window that is being interactively
moved or resized.
As with a real hardware wl_keyboard, the key should be sent before the
modifier change. For example, a Left Ctrl press and release should produce
key events in the order Control_L and Control+Control_L.
Observed in kdevelop: isEnabled() could be false when switching
between different tabs with Ctrl+Tab, but Qt may still call show()
if you click on the text editor widget. This leads to isEnabled == false
while setActive(true) is called. This leaves kdevelop in an unusable state
because a keyboard grab will be created and no key event will reach the
application, since isEnabled == false. Under normal circumstances, the key
would reach the widget first and trigger another text_input_v2 enable to
make the input method work properly.
text-input-v3 does not have preedit styling; instead, it can only
specify the cursor range. Try to keep track of any
highlight/selection style ranges and combine them together. If the
combined range matches the cursor position, use it as the cursor range.
Currently, if a window switches between SSD and CSD, it is possible to
encounter a "corrupted" state where the server-side decoration is wrapped
around the window while it still has the client-side decoration.
The xdg-decoration protocol fixes this problem by saying that decoration
updates are bound to xdg_surface configure events.
At the moment, kwin sort of applies decoration updates immediately. With
this change, decoration updates will be done according to the spec.
If the compositor wants to create a decoration, it will send a configure
event and apply the decoration when the configure event is acked by the
client. In order to send the configure event with a good window geometry
size, kwin will create the decoration to query the border size but not
assign it to the client yet. As is, the KDecoration API doesn't make
querying the border size ahead of time easy. The decoration plugin can
assign arbitrary border sizes to windows as it pleases. We could change
that, but it effectively means starting KDecoration3 and setting the
existing window deco ecosystem around kwin on fire a second time; that's
off the table.
If the compositor wants to remove the decoration, it will send a
configure event. When the configure event is acked and the surface is
committed, the window decoration will be destroyed.
Syncing decoration updates to configure events ensures that we cannot
end up having both a client-side and a server-side decoration. It also
helps us fix a bunch of geometry related issues caused by creating
and destroying the decoration without any surface buffer attached yet.
BUG: 445259
The output management test checks the implementation of output
management capabilities in the virtual backend, which is not helpful.
This change replaces it with a more useful test that verifies how
windows are placed after an output change.
TestXdgShellClientRules implicitly assumes that the kwinrc config is
referenced only by the RuleBook object.
However, after changing the default placement policy in the
WaylandTestApplication, that's no longer the case. The kwinApp() object
now also holds a reference to the main config file. Because of that,
previous window rules leak into the next tests, which breaks them.
In order to address that issue, this change makes TestXdgShellClientRules
open a separate config and wipe it clean after each test run. Not great,
but there doesn't seem to be any other way around it with the current
KSharedConfig API.
It's more common to see the parent object being the last argument in Qt,
and this way you won't need to specify a nullptr parent explicitly if the
xdg-popup or the xdg-toplevel surface doesn't need to be configured,
which makes tests slightly easier to read.
Effect loading is already tested using integration tests. For example,
the maximize test verifies that the maximize effect is loaded _and_ that it
actually does something useful when a window is maximized or restored;
testScriptedEffectLoader only verifies that the effect is loaded, which
is less helpful than what the integration tests provide us.
But perhaps the main problem with these tests is that they require us
to build a mockverse around them. This litters the code with ifdef
preprocessor directives and makes changing such code a living nightmare.
Another problem with these two tests is that they cannot use OpenGL,
because that would mean mocking OpenGL, which we are obviously not going
to do. With integration tests, it's not a problem.
The bottom line is that unit tests can be useful but they make life
notoriously difficult when it comes to testing components that depend on
other components.
Currently, kwin expects that the xdg-decoration is installed before the
initial commit. However, decoration tests do that after the initial
commit, which makes testMaximizeAndChangeDecorationModeAfterInitialCommit()
silently pass.
On a second look, it seems that the xdg-decoration spec is okay with the
xdg-decoration being created after the first commit (as long as it's
done before the surface is mapped). This needs to be fixed separately.
CCBUG: 445259
If the preferred decoration mode changes after the initial commit but
before the surface is mapped, there's a chance that kwin can send a bad
configure event; it's been the case in the past. Add a test to prevent
such cases from going unnoticed.
If the decoration is destroyed before the window is mapped, kwin can
respond with a configure event that has 0x0 size. New tests check that
problematic case.
BUG: 444962
Ever since the effects were changed to static, each of the integration
tests includes all the effects. The result of this is that with a debug
build each test is now 60MiB or more. With the number of tests, this
results in ~8 GiB of disk space used just for KWin's binary output
directory, which is rather excessive.
Since the tests all share a common framework library, we can change that
library to a shared library and that way avoid linking all the effects
into each test.
Most of this is shuffling around some link libraries in the integration
test CMakeLists, however, I needed to export the Xwayland class as it is
used by one of the tests but wasn't exported.
EffectQuickScene is not used strictly by effects; aurorae decorations
also use it to render window decorations.
This change renames the EffectQuickView/Scene to
OffscreenQuickView/Scene to clear up the naming scheme.
Since binary effects are installed in their own directory, checking
service type is redundant. Also, KPluginMetaData::serviceTypes() has
been deprecated.
Task: https://phabricator.kde.org/T14483
The Compositor contains nothing that can potentially get dirty and need
repainting.
As is, the advantages of this move aren't really noticeable, but it
makes sense with multiple scenes.
Backend parts are far from ideal, they can be improved later on as we
progress with the scene redesign.
The main idea behind the render backend is to decouple low level bits
from scenes. The end goal is to make the render backend provide render
targets where the scene can render.
Design-wise, such a split is more flexible than the current state, for
example we could start experimenting with using qtquick (assuming that
the legacy scene is properly encapsulated) or creating multiple scenes,
for example for each output layer, etc.
So far, the RenderBackend class only contains one getter; more stuff will
be moved from the Scene as it makes sense.
This improves file organization in kwin by putting backends in a single
directory.
It also makes it easier for new contributors to discover kwin's low level
components, because the plugins directory may be the last place they look.
When one hears "plugin", the first thing that comes to mind is
regular plugins, not low level backends.