The original intention behind creating plugins before the workspace was
to handle the case where kwin_wayland may need to wait until outputs are
available. However, since things have changed a lot in that regard,
plugins can be loaded after the workspace now.
The main benefit of this is that plugins can be simpler: they won't
need to track when the workspace is created.
On X11, plugins are already loaded after the workspace is instantiated.
This change adjusts the window management abstractions in kwin for the
drm backend providing more than just "desktop" outputs.
Besides that, it has other potential benefits - for example, the
Workspace could start managing allocation of the placeholder output by
itself, thus leading to some simplifications in the drm backend. Another
is that it lets us move wayland code out of the drm backend.
We gain nothing with it. XCB setup logic in the Xwayland server has to
be moved to the workspace layer anyway. For example, this move of
responsibilities will be needed to support running more than just one
instance of Xwayland. Architecture-wise, it would be cleaner too.
Unfortunately, it breaks encapsulation of the Application, but this can
be taken care of later.
With fractional scaling, integer-based logical geometry may not match
device pixels. Once we have a floating point base we can fix that. This
is also important for our X11 scale override: with a scale of 2 we could
get logical sizes with halves (e.g. a 101-pixel-wide window becomes 50.5
logical units).
We already have all input being floating point; this doubles down on it
for all remaining geometry.
- Outputs remain integer to ensure that any screen on the right remains
aligned.
- Placement also remains integer based for now.
- Repainting is untouched as we always expand outwards
(QRectF::toAlignedRect()).
- Decoration is untouched for now.
- Rules are integer in the config, but floating in the adjusting/API.
This should also be fine.
At some point we'll add a method to snap to the device pixel grid,
effectively `round(value * dpr) / dpr`, though right now things
mostly work.
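A minimal sketch of what such a helper could look like (the name snapToPixelGrid is made up, not an existing KWin function):

```cpp
#include <QtGlobal>
#include <cmath>

// Hypothetical helper, not current KWin API: snap a logical coordinate to the
// device pixel grid for a given device pixel ratio (dpr).
static qreal snapToPixelGrid(qreal value, qreal dpr)
{
    return std::round(value * dpr) / dpr;
}
// e.g. snapToPixelGrid(10.3, 2.0) == 10.5, i.e. exactly 21 device pixels
```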
This also gets rid of a lot of hacks for QRect right and bottom which
are very confusing.
Parts to watch out for in the port (illustrated below) are:
- QRectF::contains now includes edges
- QRectF::right and bottom are now sane, so previous hacks have to be
removed
- QRectF(QPoint, QPoint) behaves differently for the same reason
- QRectF::center too
Some expected values in test results were adjusted where they came from
QRect::center, because using QRectF's center should behave the same to
the user.
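For reference, a small illustration of the QRect/QRectF differences listed above (the values follow standard Qt semantics):

```cpp
#include <QRect>
#include <QRectF>

void illustrate()
{
    QRect ri(0, 0, 100, 50);
    // QRect keeps the historical off-by-one: right() == x() + width() - 1,
    // so ri.right() == 99, ri.bottom() == 49, ri.center() == QPoint(49, 24).

    QRectF rf(0, 0, 100, 50);
    // QRectF is "sane": right() == x() + width(), so rf.right() == 100.0,
    // rf.bottom() == 50.0, rf.center() == QPointF(50, 25), and
    // rf.contains(QPointF(100, 25)) is true because edges are included.
}
```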
The Screens object is created by Workspace on X11. This change makes X11
and Wayland behave more similarly. As is, Screens is a helper for
window management code; don't use it in backends. Note that the X11 backend
already uses Screens; that needs to be addressed individually.
We don't really care about the window showing up until we're calling
showInputPanel, but since Workspace::windowAdded is triggered for any
window that gets added, the test sometimes fails because count() is 2
instead of 1. To avoid that, only create the spy when it's actually
relevant instead of all the way at the start before any other setup is
done.
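A rough sketch of the resulting test structure (variable names are illustrative; the signal is the Workspace::windowAdded mentioned above):

```cpp
// ... earlier setup that may add other windows; no spy exists yet ...

// Create the spy only once it becomes relevant, right before the input panel
// is shown, so windows added during setup don't inflate count().
QSignalSpy windowAddedSpy(workspace(), &Workspace::windowAdded);
// ... trigger showInputPanel() ...
QVERIFY(windowAddedSpy.wait());
QCOMPARE(windowAddedSpy.count(), 1);
```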
ScreenPaintData provides a way to transform the painted screen, e.g.
scale or translate it. From an API point of view, it's great: it allows
fullscreen effects to transform the workspace in various ways.
On the other hand, such effects end up fighting the default scene
painting algorithm. For example, just have a look at the slide effect!
With fullscreen effects, it's better to leave the decision of how the
screen should be painted to them. For example, such an approach is taken
in some wayland compositors, e.g. wayfire, and our qtquick effects
already operate in a similar fashion.
Given that, strip the ScreenPaintData of all available transforms. The
main motivation behind this change is to improve encapsulation of item
painting code and simplify model-view-projection code in kwin. It will
also make the job of extracting item code for sharing purposes easier.
The IdleDetector is an idle detection helper. Its purpose is to reduce
code duplication in our private KIdleTime plugin and the idle wayland
protocol, and make user activity simulation less error prone.
Anything in xcb_ structs is always in X local space; all member variables
aside from buffers are in kwin local space.
This patch ignores a few paths that are not relevant on wayland.
Currently, if you want to use TimeLine, you need to track the last
presentation timestamp which boils down to carrying some boilerplate
code.
The current situation can be improved by making TimeLine work with
presentation timestamps.
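Roughly, the boilerplate in question looks like this (a simplified sketch, not the actual KWin signatures):

```cpp
#include <chrono>

struct Animation
{
    std::chrono::milliseconds lastPresentTime{0};
    // TimeLine timeLine; // KWin's animation helper

    void onPrePaint(std::chrono::milliseconds presentTime)
    {
        // Manual delta bookkeeping that every TimeLine user has to carry.
        std::chrono::milliseconds delta{0};
        if (lastPresentTime.count() != 0) {
            delta = presentTime - lastPresentTime;
        }
        lastPresentTime = presentTime;
        // timeLine.update(delta); // advance the animation by the computed delta
    }
};
```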
Part-of: <https://invent.kde.org/plasma/kwin/-/merge_requests/2473>
Firstly, we weren't waiting for a signal at all; we were relying on events
being processed externally, which is wrong.
Secondly, ScreenLocker::KSldApp::self()->lockState() is tri-state:
unlocked, acquiring, locked. This gets compressed to a boolean where
acquiring and locked are the same.
If we run the tests whilst we're still acquiring the lock screen we can
call unlocked before we've finished locking. The greeter might then be
shown afterwards triggering a re-lock. It's a confused state.
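A sketch of the kind of wait this implies (the lockStateChanged signal and the Locked enum value are assumptions based on kscreenlocker, not verified here):

```cpp
// Wait for the lock-state signal and only proceed once the state is really
// "locked", so "acquiring" is no longer treated the same as "locked".
QSignalSpy lockStateChangedSpy(ScreenLocker::KSldApp::self(),
                               &ScreenLocker::KSldApp::lockStateChanged);
QVERIFY(lockStateChangedSpy.isValid());
while (ScreenLocker::KSldApp::self()->lockState() != ScreenLocker::KSldApp::Locked) {
    QVERIFY(lockStateChangedSpy.wait());
}
```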
The XRender backend has been removed, leaving most of KWinXRenderUtils unused.
The few features that are still used, notably `XRenderPicture` and the pict
format, are moved into the x11/common directory.
Signed-off-by: Victoria Fischer <victoria.fischer@mbition.io>
This change makes the WindowItem track the opacity and schedule a
repaint. It further decouples the legacy scene from core window
abstractions.
It's an API-breaking change. WindowPaintData can no longer make windows
more opaque; it only provides an additional opacity factor.
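As an illustration of the new semantics (MyEffect and the factor are made up; the hook is the usual Effect::paintWindow override):

```cpp
void MyEffect::paintWindow(EffectWindow *w, int mask, QRegion region, WindowPaintData &data)
{
    // The paint data only carries an extra factor that is multiplied with the
    // window's own opacity, so the result can never exceed that opacity.
    data.multiplyOpacity(0.5);
    effects->paintWindow(w, mask, region, data);
}
```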
With this, the WindowItem will know whether it's actually visible. As
a result, if a native wayland window has been minimized, kwin won't
try to schedule a new frame if just a frame callback has been committed.
EffectWindow::enablePainting() and EffectWindow::disablePainting() act
as a stone in the shoe. They have the final say in whether the given
window is visible, and they are invoked too late in the rendering process.
WindowItem needs to know whether the window is visible in advance,
before compositing starts.
This change replaces EffectWindow::enablePainting() and
EffectWindow::disablePainting() with EffectWindow::refVisible() and
EffectWindow::unrefVisible(). If an effect calls the refVisible()
function, the window will be kept visible regardless of its state. It
should be called when a window is minimized or closed, etc. If an effect
doesn't want to paint a window, it should not call effects->paintWindow().
EffectWindow::refVisible() doesn't replace EffectWindow::refWindow() but
supplements it. refVisible() only ensures that a window will be kept
visible while refWindow() ensures that the window won't be destroyed
until the effect is done with it.
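An illustrative sketch based on the description above (the effect name is made up and the exact signatures may differ):

```cpp
void FadeOutEffect::slotWindowClosed(EffectWindow *w)
{
    // Keep the closed window visible for the duration of the animation...
    w->refVisible();
    // ...and keep the underlying window object alive until we're done with it.
    w->refWindow();
    // Once the animation finishes, call unrefVisible() and unrefWindow() again.
}
```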
Add the possibility to implement realtime screen edge gestures in scripted
effects, and implement it in the windowaperture show desktop effect.
* Expose registerRealtimeScreenEdge to JavaScript; the callback will be
a JS function.
* Add the concept of freezeInTime() in the animation JS bindings:
it will either create an animation frozen at a given time or freeze a running
animation that can be restored and run to completion at any time.
* Add an edges property only for showdesktop, as it's not directly on the
effect configuration.
libinput_device_get_user_data() can be used to get the Device object
associated with a libinput_device. That way, we won't need to maintain a
private list of all input devices.
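For illustration, the pattern boils down to something like this (the surrounding call sites are assumptions; the libinput functions themselves are real API):

```cpp
// When a device is added: remember KWin's Device wrapper on the native handle.
libinput_device_set_user_data(nativeDevice, device);

// Later, e.g. while processing an event for nativeDevice: recover the wrapper
// without consulting a separately maintained list.
Device *found = static_cast<Device *>(libinput_device_get_user_data(nativeDevice));
```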
This makes KWin switch to an in-tree copy of the KWaylandServer codebase.
The KWaylandServer namespace has been left as is; it will be addressed later
by renaming classes in order to fit in the KWin namespace.
When we do more color management stuff we'll need it in more places.
Making it a hard requirement reduces the amount of needed ifdefs and
should make adding color management features a little simpler.