Because the screen edges test is not an integration test, it is very
hard to change output-related code in libkwin; screens.cpp needs a few
ifdefs just to compile.
This change rewrites the screen edges test as an integration test so
that we can use other components of kwin in screens.cpp and
screenedge.cpp without ifdef guards.
It's not a one-to-one port.
Active output is a window management concept. It indicates which output
new windows should be placed on if they have no output hint. So
Workspace seems to be a better place for it than the Screens class, which
is obsolete.
This effect is meant to be a replacement for the Present Windows and
Desktop Grid effects. It is written in QML.
So far, this effect implements only the basic features of the Present
Windows effect. Desktop management features will be added later.
CCBUG: 295775
CCBUG: 303438
This patch has one behavioral change: raiseOrLowerClient() will not
work if the client is not on the current virtual desktop.
However, raiseOrLowerClient() can be called only in two cases:
* the user triggers the raise-or-lower shortcut for the active client.
Since the active client is on the current virtual desktop, this is not
an issue
* an X11 window restacks itself. It makes little sense for an X11 window
to restack itself while it's inactive or not on the current virtual
desktop. Also, the Opposite restack mode is rarely used; some window
managers don't even bother implementing it. So, having such a constraint
should not be a problem.
The main reason for not allowing raiseOrLowerClient() for windows that
are not on the current virtual desktop is that a window can be on
multiple virtual desktops. If a window is on virtual desktops A and B,
the only viable option is to toggle its stacking position while it is on
the current desktop, since kwin does not maintain a per-virtual-desktop
stacking order.
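A minimal sketch of the resulting guard (the helper
isTopmostOnCurrentDesktop() is hypothetical, standing in for the actual
stacking-order check; this is not the literal kwin code):

    // Bail out early for windows that are not on the current virtual
    // desktop, since there is no per-virtual-desktop stacking order.
    void Workspace::raiseOrLowerClient(AbstractClient *client)
    {
        if (!client || !client->isOnCurrentDesktop()) {
            return;
        }
        if (isTopmostOnCurrentDesktop(client)) { // hypothetical helper
            lowerClient(client);
        } else {
            raiseClient(client);
        }
    }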
toplevel.h is included in many places. Changing virtualdesktops.h may
trigger a rebuild of all of kwin.
With this change, only cpp files that use virtualdesktops.h will need to
be recompiled.
The main motivation behind the split is to simplify client buffer code
and make it easier to add new features, for example referencing the shm
pool when a shm buffer is destroyed, or monitoring linux dmabuf file
descriptors for readability, etc.
Also, a referenced ClientBuffer cannot be destroyed, unlike the old
BufferInterface.
Since we adapted inputmethod to support input methods like ibus, the
input method can be active but not have a visible panel.
This adds an extra property that indicates whether the panel is visible
at any given time. This will allow us to properly render the virtual
keyboard hide button in Plasma Mobile (or wherever we need it).
So far, calling setActive(true) would issue a deactivation and then
another activation. This sometimes makes maliit crash, and we can
achieve the same result by simply issuing a reset.
It's unused, and the advantages of keeping it are outweighed by the
disadvantages - the returned value is dependent on the window type.
If you need to draw a drop-shadow that matches the shape of the window
or something along those lines, render the window into an offscreen
texture and sample the alpha channel in a fragment shader.
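For illustration, the sampling could look roughly like this (a GLSL
fragment embedded as a C++ string; uniform and varying names are made
up, not an existing kwin shader):

    // GLSL: tint a drop shadow using the window's alpha channel, sampled
    // from the offscreen texture the window was rendered into.
    static const char *shadowFragmentSource = R"glsl(
        #version 140
        uniform sampler2D windowTexture;
        in vec2 texcoord;
        out vec4 fragColor;
        void main()
        {
            float alpha = texture(windowTexture, texcoord).a;
            fragColor = vec4(0.0, 0.0, 0.0, 0.5 * alpha);
        }
    )glsl";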
The looking glass effect creates an OpenGL texture at load time without
a current context, which aborts the test.
The test should be converted into an integration test. Mocking the GL
library is simply not feasible and not worth the trouble. :/
It is error-prone to have multiple sources for the same data. If the
base implementation (Compositor::compositing()) changes, other helpers
can get out of sync.
At the moment, the test depends on the implicit client connection flush
in BufferInterface::unref(). It is highly desirable to get rid of that
connection flush, as it allows us to batch Wayland events better. It
also allows us to remove a system call from a hot path.
At the moment, we handle window quads inefficiently. Window quads from
all items are merged into a single list just to be broken up again.
This change removes window quads from libkwineffects. This allows us to
handle window quads efficiently. Furthermore, we could optimize methods
such as WindowVertex::left() and so on. KWin spends a considerable
amount of time in those methods when many windows have to be composited.
It's a necessary prerequisite for making wl_surface painting code role
agnostic.
Drop-shadows impact performance quite significantly with the software
render backend. This change also makes it easier to prepare render
backends for the item-based design.
The XRender backend was added at a time when OpenGL drivers were not
particularly stable. Nowadays though, the situation is completely
different. The OpenGL render backend has been the default one for many
years. It's quite stable, and it allows implementing many advanced
features that other render backends don't.
Many features are not tested with the XRender backend during the
development cycle; the only time it gets noticed is when changes in
other parts of kwin break its build. Effectively, the XRender backend is
unmaintained nowadays.
Given that the XRender backend is effectively unmaintained and our focus
is shifting towards Wayland, this change drops the XRender backend in
favor of the OpenGL backend.
Besides being de facto unmaintained, another issue is that QtQuick does
not support, and most likely will never support, the XRender API. This
poses a problem as we want thumbnail items to be natively integrated
into the QtQuick scene graph.
With the ongoing scene redesign, this effect needs to be rewritten.
However, given that it is not widely used, based on support information
from various bug reports, and our available manpower is sparse, the most
reasonable thing is to drop the effect, unfortunately.
With the ongoing scene redesign, this effect needs to be rewritten.
However, given that it is not widely used, based on support information
from various bug reports, and our available manpower is sparse, the most
reasonable thing is to drop the effect, unfortunately.
With the ongoing scene redesign, this effect needs to be rewritten.
However, given that it is not widely used, based on support information
from various bug reports, and our available manpower is sparse, the most
reasonable thing is to drop the effect, unfortunately.
With the ongoing scene redesign, this effect needs to be rewritten.
However, given that it is not widely used, based on support information
from various bug reports, and our available manpower is sparse, the most
reasonable thing is to drop the effect, unfortunately.
Window management features were written with synchronous geometry
updates in mind. Currently, this poses a big problem on Wayland because
geometry updates are done in an asynchronous fashion there.
At the moment, geometry is updated in a so-called pseudo-asynchronous
fashion, meaning that the frame geometry will be reset to the old value
once geometry updates are unblocked. The main drawback of this approach
is that it is too error-prone and the data flow is hard to comprehend.
It is worth noting that there is already machinery to perform
asynchronous geometry updates, which is used during interactive
move/resize operations.
This change extends the move/resize geometry usage beyond interactive
move/resize to make asynchronous geometry updates less error-prone and
easier to comprehend.
With the proposed solution, all geometry updates must be done on the
move/resize geometry first. After that, the new geometry is passed on to
the Client-specific implementation of moveResizeInternal().
More specifically, frameGeometry() returns the current frame geometry
and is primarily useful only to the scene. If you want to move or resize
a window, you need to use moveResizeGeometry(), because it corresponds
to the last requested frame geometry.
It is worth noting that moveResizeGeometry() returns the desired
bounding geometry. The client may commit the xdg_toplevel surface with a
slightly smaller window geometry, for example to enforce a specific
aspect ratio, but the client is not allowed to resize beyond the size
indicated by moveResizeGeometry().
The data flow is very simple: moveResize() updates the move/resize
geometry and calls the client-specific implementation of the
moveResizeInternal() method. Based on whether a configure event is
needed, moveResizeInternal() will update the frameGeometry() either
immediately or after the client commits a new buffer.
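A simplified sketch of that flow (class and member names here are
illustrative, not the literal kwin code):

    #include <QRect>

    class ExampleClient
    {
    public:
        // Current frame geometry, used by the scene.
        QRect frameGeometry() const { return m_frameGeometry; }
        // Last requested frame geometry.
        QRect moveResizeGeometry() const { return m_moveResizeGeometry; }

        void moveResize(const QRect &rect)
        {
            // Every geometry update goes through the move/resize geometry
            // first, then the subclass-specific implementation takes over.
            m_moveResizeGeometry = rect;
            moveResizeInternal(rect);
        }

    protected:
        // An X11 implementation can update frameGeometry() immediately; an
        // xdg-shell implementation sends a configure event and updates
        // frameGeometry() only after the client commits a buffer acking it.
        virtual void moveResizeInternal(const QRect &rect) = 0;

        QRect m_frameGeometry;
        QRect m_moveResizeGeometry;
    };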
Unfortunately, both the compositor and xdg-shell clients try to update
the window geometry, which means conflicts between the two are possible.
With this change, the compositor's move/resize geometry will be synced
only if there are no pending configure events, i.e. the user is not
currently trying to resize the window.
This is to improve code readability and make it easier to differentiate
between methods that are used during interactive move-resize and normal
move-resize methods in the future.
We need to emit the clientFinishUserMovedResized signal to notify
effects such as translucency that the interactive move-resize is
finished. Otherwise, the set() animation won't be cancelled and the
window will get stuck frozen.
BUG: 409376
The order in which Xwayland surfaces are associated with X11 windows is
undefined, meaning that we cannot assume that a newly created X11 window
won't have a surface associated with it already.
On X11, the lockscreen greeter is an override-redirect window, so the
Scale and Glide effects ignore it.
On Wayland, the lockscreen greeter is a regular window, so both effects
try to animate it when the screen is unlocked, which looks bad.
This reduces the number of usages of xStackingOrder(), which simplifies
the reasoning about when it can be marked as dirty.
Since internal windows are now in the regular stack, InternalWindowTest
can use stackingOrder().
As for X11ClientTest, there's no specific reason why it uses the x stack
instead of the regular one.
We need to make sure that the information from
toplevelConfigureRequestedSpy is in place before we use it; otherwise we
get an empty size and the test doesn't work.
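For instance, something along these lines in the test (a sketch that
assumes the spy variable from the surrounding test code):

    // Wait until the configure event has actually arrived before reading
    // the size it carries; otherwise the size is empty.
    QVERIFY(toplevelConfigureRequestedSpy.wait());
    const QSize size = toplevelConfigureRequestedSpy.last().at(0).toSize();
    QVERIFY(!size.isEmpty());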
We were expecting a tooltip to be closed when clicking its
transientParent, but that is explicitly not something we are after: we
close popups when we click either other clients or the actual client on
the decoration.
This change makes the test end up clicking another, unrelated window
instead of the parent one.
When debugging modifier_only_shortcut_test in _waylandonly mode, I saw
that it was failing, among other things, because some aspects were not
initialised.
This changes every test we have to run the new
Test::initWaylandWorkspace(), which calls waylandServer()->initWorkspace()
but also makes sure that WaylandServer::initialized has been emitted
before we proceed.
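A sketch of what such a helper can look like (the exact implementation
lives in kwin's test support code):

    void Test::initWaylandWorkspace()
    {
        QSignalSpy initializedSpy(waylandServer(), &WaylandServer::initialized);
        waylandServer()->initWorkspace();
        if (initializedSpy.isEmpty()) {
            QVERIFY(initializedSpy.wait());
        }
    }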
Starting with 48c3376927e5e9c13377bf3cfc8b0c411783e7f3 in kglobalaccel,
KGlobalAccel won't work in desktop environments other than Plasma.
We need to set XDG_CURRENT_DESKTOP=KDE to ensure that global shortcuts
still work.
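One way to do that from C++ startup or test code (a sketch; where kwin
actually sets it may differ):

    qputenv("XDG_CURRENT_DESKTOP", QByteArrayLiteral("KDE"));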
Currently, the fullscreen state is updated synchronously, but it needs
to be done in an asynchronous fashion.
This change removes some tests, as they don't add any value;
testFullscreen() covers them all.
Xcursors don't support HiDPI, so if a HiDPI cursor is needed, kwin will
scale the desired size by the scale factor and ask the Xcursor helpers
to load a theme with the given name and size.
However, the theme loading code doesn't take into account that the
Xcursor theme loading helpers may not return cursor sprites of size
size * scale if the theme has no such size.
For example, if the cursor theme only provides sizes 24, 36, and 48 and
kwin attempts to load cursors of size 48 with a scale factor of 2, we
will get cursors of size 48 instead of 96. Unfortunately, this results
in the cursor shrinking when hovering decorations, because kwin doesn't
know that the effective scale factor (1) is different from the requested
scale factor (2).
In order to fix loading of HiDPI cursors, we need to approximate the
effective scale factor of every cursor sprite as we load it.
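A rough sketch of the idea (names are illustrative, not the actual kwin
cursor-theme code):

    #include <QImage>

    struct CursorSprite
    {
        QImage image;
        qreal effectiveScale; // scale the loaded sprite actually provides
    };

    // desiredSize is the unscaled cursor size configured by the user. The
    // theme may only ship e.g. 24/36/48 px cursors, so the sprite returned
    // by Xcursor can be smaller than desiredSize * requestedScale.
    CursorSprite makeSprite(const QImage &image, int desiredSize)
    {
        const qreal effectiveScale = qreal(image.height()) / desiredSize;
        return CursorSprite{image, effectiveScale};
    }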
If a decoration is created for an already mapped maximized window, check
the workspace position to ensure that the window still fits the maximize
area.
BUG: 432326
Re-use Qt's implementation of handling non-Latin layouts here.
For full ASCII range support (Alt+`, etc.), Qt still needs to be
patched; see QTBUG-90611.
BUG: 375518
At the moment, the session code is far from being extensible. If we
decide to add support for libseatd, it will be a challenging task with
the current design of session management code. The goal of this
refactoring is to fix that.
Another motivation behind this change is to prepare session related code
for upstreaming to kwayland-server where it belongs.
This provides the compositor a way to indicate which output is being
rendered. Effects such as Screenshot can check the provided screen
object in order to function as expected.
Summary:
QScriptEngine has been deprecated for years and suffers from bitrot.
Plasma hit one super major bug with it in 5.11.0 and has now ported
away.
Main porting notes:
- The ability to create low-level functions no longer exists.
The old global functions are exposed on the ScriptedEffect instance
and then the QJSValue wrappers of the globalObject are modified to
trampoline the methods at a wrapper level (see the sketch after these
notes).
- We can then use QJSEngine to automatically do argument error checking
rather than unmarshalling every QJSValue manually, which removes a
significant amount of code.
- We can't make FPX2 a native type, so these are passed as QJSValue
arguments and unboxed there.
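A minimal sketch of the trampoline pattern (simplified, not the exact
ScriptedEffect code):

    // Expose the ScriptedEffect instance to the engine and re-publish one
    // of its invokable methods as a global function, as the old API had it.
    QJSValue self = m_engine->newQObject(this);
    m_engine->globalObject().setProperty(QStringLiteral("animate"),
                                         self.property(QStringLiteral("animate")));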
Long term I want overloads for animate that take int/QSize/QPoint which
are native JS types, but that might be an API break.
Test Plan:
Hopefully comprehensive unit test which passes
Tested fade/fadeDesktop manually.
It's a very invasive change, so I expect some things will be broken;
please help test any JS effects.
Reviewers: #kwin, mart, fvogt
Subscribers: fvogt, zzag, kwin
Tags: #kwin
Differential Revision: https://phabricator.kde.org/D14536
Otherwise the input method test seems to fail with the following error:
"The name org.kde.kwin.testvirtualkeyboard was not provided by any
.service files"
According to the spec, when the pointer enters a surface, the contents
of the cursor become undefined. The client should call set_cursor() to
make sure that the cursor image is correct.
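A client-side sketch of that, using the plain wayland-client API (the
surface passed in as user data is assumed to already have the cursor
image attached):

    #include <wayland-client.h>

    // Called when the pointer enters one of the client's surfaces. Without
    // the set_cursor() call, the cursor contents are undefined from here on.
    static void pointerEnter(void *data, wl_pointer *pointer, uint32_t serial,
                             wl_surface * /*surface*/,
                             wl_fixed_t /*sx*/, wl_fixed_t /*sy*/)
    {
        auto *cursorSurface = static_cast<wl_surface *>(data);
        wl_pointer_set_cursor(pointer, serial, cursorSurface, 0, 0);
    }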
In case the compositor wants to cancel a touch sequence, we need to
ignore subsequent touch motion and touch up events until a new sequence
is initiated by the user.
Previously, it was implicitly handled by clearing the mapping table
between the touch slots and touch ids generated by kwayland-server.
Once in a while, we receive complaints from fellow KDE developers about
the file organization of kwin. This change addresses some of those
complaints by moving all of the source code into a separate directory,
src/, thus making the project structure more traditional. Things such as
tests are kept in their own toplevel directories.
This change may wreak havoc on merge requests that add new files to kwin,
but if a patch modifies an already existing file, git should be smart
enough to figure out that the file has been relocated.
We may potentially split the src/ directory further to make navigating
the source code easier, but hopefully this is good enough already.
Occasionally, I see complaints about the file organization of kwin,
which is fair enough.
This change makes the source code more relocatable by removing relative
paths from includes.
CMAKE_CURRENT_SOURCE_DIR was added to the interface include directories
of the kwin library. This means that as long as you link against the
kwin target, the real location of the library's source code doesn't
matter.
With autotests, things are not as convenient as with the kwin target.
Some tests use cpp files from kwin core. If we move all source code into
a src/ directory, they will need to be adjusted, but mostly only in the
build scripts.
This allows running Xwayland apps as root. Xwayland is started with an
empty Xauthority file; after kwin has received the display number, the
file is updated with an actual authority entry.
BUG: 432625
As was pointed out in 6acf35e4cc, it is better to return raw pointers
than QPointers, because returning a QPointer is equivalent to
constructing a new one.
Warning messages are not the kind of messages that should be ignored,
they indicate that something is off or wrong.
Also, this makes triaging bugs easier as we no longer have to ask people
to run kwin with the QT_LOGGING_RULES environment variable set.
This logs to a tracefs filesystem, which can be viewed in tools such as
gpuvis to see precise timings of activities in relation to other trace
markers from X or graphics drivers.
This patch is loosely based on D23114, though modified with thread
safety, support for string building, and a RAII pattern for durations.
Ultimately, that expanded it somewhat.
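A rough sketch of the duration-marker idea (the path and marker format
here are assumptions for illustration, not the exact kwin
implementation):

    #include <QFile>
    #include <QString>

    class TraceDuration
    {
    public:
        explicit TraceDuration(const QString &name)
            : m_name(name)
        {
            // Markers written here show up in the ftrace/gpuvis timeline.
            m_file.setFileName(QStringLiteral("/sys/kernel/tracing/trace_marker"));
            if (m_file.open(QIODevice::WriteOnly)) {
                m_file.write(QStringLiteral("%1 begin\n").arg(m_name).toUtf8());
            }
        }
        ~TraceDuration()
        {
            if (m_file.isOpen()) {
                m_file.write(QStringLiteral("%1 end\n").arg(m_name).toUtf8());
            }
        }

    private:
        QFile m_file;
        QString m_name;
    };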
At the moment, our frame scheduling infrastructure is still heavily
based on Xinerama-style rendering. Specifically, we assume that painting
is driven by a single timer, etc.
This change introduces a new type - RenderLoop. Its main purpose is to
drive compositing on a specific output, or in case of X11, on the
overlay window.
With RenderLoop, compositing is synchronized to vblank events. It
exposes the last and the next estimated presentation timestamp. The
expected presentation timestamp can be used by effects to ensure that
animations are synchronized with the upcoming vblank event.
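A simplified sketch of what such an interface might look like (not the
exact kwin API):

    #include <QObject>
    #include <chrono>

    class RenderLoop : public QObject
    {
        Q_OBJECT
    public:
        // Timestamp of the last presented frame.
        std::chrono::nanoseconds lastPresentationTimestamp() const;
        // Estimated timestamp of the upcoming vblank/presentation; effects
        // can use it to keep animations in sync with the next frame.
        std::chrono::nanoseconds nextPresentationTimestamp() const;

    Q_SIGNALS:
        // Emitted when the associated output (or the X11 overlay window)
        // should be repainted.
        void frameRequested(RenderLoop *loop);
    };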
On Wayland, every output has its own render loop. On X11, per-screen
rendering is not possible; therefore the platform exposes the render
loop for the overlay window. Ideally, the Scene should expose the
RenderLoop, but as a first step towards better compositing scheduling,
this is good enough for the time being.
The RenderLoop tries to minimize the latency by delaying compositing as
close as possible to the next vblank event. One tricky thing about it is
that if compositing is too close to the next vblank event, animations
may become a little bit choppy. However, increasing the latency reduces
the choppiness.
Given that there is no "silver bullet" solution for the choppiness
issue, a new option has been added in the Compositing KCM to specify the
amount of latency. By default, it's "Medium," but if a user is not
satisfied with the default, they can tweak it.
At the moment, the Screens class is convoluted with ifdefs because of
MockScreens.
The goal of this change is to reduce the number of usages of the
MockScreens class so it is possible to get rid of the ifdefs.
Since the Screens class is a convenience wrapper around AbstractOutput
objects that come from the Platform, it should not be platform-specific.
By dropping createScreens(), output-related code becomes simpler.
This change introduces a new component - ColorManager - that is
responsible for color management.
At the moment, it's very naive and is useful only for updating gamma
ramps, but in the future it will be extended with more CMS-related
features.
The ColorManager depends on the lcms2 library. This is an optional
dependency; if lcms2 is not installed, the color manager won't be built.
This also fixes the issue where colord and nightcolor overwrite each
other's gamma ramps. With this change, the ColorManager will resolve the
conflict between the two.
Effects are given the interval between two consecutive frames. The main
flaw of this approach is that if the Compositor transitions from the
idle state to the "active" state, i.e. when there is something to
repaint, effects may see a very large interval between the last painted
frame and the current one. In order to address this issue, the Scene
invalidates the timer that is used to measure time between consecutive
frames before the Compositor is about to become idle.
While this works perfectly fine with Xinerama-style rendering, with per
screen rendering, determining whether the compositor is about to idle is
rather a tedious task mostly because a single output can't be used for
the test.
Furthermore, since the Compositor schedules pointless repaints just to
ensure that it's idle, it might take several attempts to figure out
whether the scene timer must be invalidated if you use (true) per screen
rendering.
Ideally, all effects should use a timeline helper that is aware of the
underlying render loop and its timings. However, this option is off the
table because it would involve a lot of work to implement.
An alternative and much simpler option is to pass the expected
presentation time to effects rather than the time between consecutive
frames. This means that effects are responsible for determining how much
animation timelines have to be advanced. Typically, an effect would
store the presentation timestamp provided in prePaint{Screen,Window} and
use it in the subsequent prePaint{Screen,Window} call to estimate how
much time has passed between the last and the next frame.
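For example, an effect could do something like this (a sketch; the
effect class and its members are illustrative):

    void ExampleEffect::prePaintScreen(ScreenPrePaintData &data,
                                       std::chrono::milliseconds presentTime)
    {
        // Time elapsed since the previously stored presentation timestamp.
        std::chrono::milliseconds delta = std::chrono::milliseconds::zero();
        if (m_lastPresentTime.count()) {
            delta = presentTime - m_lastPresentTime;
        }
        m_lastPresentTime = presentTime;

        m_timeLine.update(delta); // advance the animation by the elapsed time
        effects->prePaintScreen(data, presentTime);
    }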
Unfortunately, this is an API incompatible change. However, it shouldn't
take a lot of work to port third-party binary effects, which don't use the
AnimationEffect class, to the new API. On the bright side, we no longer
need to be concerned about the Compositor getting idle.
We do still try to determine whether the Compositor is about to idle,
primarily because the OpenGL render backend swaps buffers on present,
but that will change with the ongoing compositing timing rework.
There were multiple other cases of placing the mouse between screens at
the start of tests. It seems to be all copy-paste.
Only maximise and pointerConstraints were failing before this, but we
may as well fix all of them.