Client-side wrappers for input-method-unstable-v1 fail to build because
wl_keyboard_interface is referenced in the header file generated by
wayland-scanner.
Unfortunately, qt6_generate_wayland_protocol_client_sources() forces the
--include-core-only argument. This is addressed in Qt 6.4.1, but in the
meantime let's ship a copy of the Qt6WaylandClientMacros.cmake file until
the required Qt version is out.
In libinput 1.19, three new pointer axis events were added in order to
provide support for high-resolution scrolling.
LIBINPUT_EVENT_POINTER_AXIS is de facto deprecated and new users of
libinput should instead use SCROLL_WHEEL, SCROLL_FINGER, and
SCROLL_CONTINUOUS.
Discrete deltas were replaced with v120 delta values. 120 corresponds to
a single discrete delta. Smaller values correspond to "partial" wheel
ticks.
https://gitlab.freedesktop.org/wayland/wayland/-/merge_requests/72
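For reference, a minimal sketch (not the kwin code) of how the new event
types are consumed with the libinput 1.19 API:

```cpp
#include <libinput.h>

void handleScrollEvent(libinput_event *event)
{
    switch (libinput_event_get_type(event)) {
    case LIBINPUT_EVENT_POINTER_SCROLL_WHEEL: {
        libinput_event_pointer *pointerEvent = libinput_event_get_pointer_event(event);
        if (libinput_event_pointer_has_axis(pointerEvent, LIBINPUT_POINTER_AXIS_SCROLL_VERTICAL)) {
            // 120 corresponds to one wheel detent; smaller values are partial ticks.
            const double v120 = libinput_event_pointer_get_scroll_value_v120(
                pointerEvent, LIBINPUT_POINTER_AXIS_SCROLL_VERTICAL);
            const double ticks = v120 / 120.0;
            // forward `ticks` to the rest of the input pipeline...
        }
        break;
    }
    case LIBINPUT_EVENT_POINTER_SCROLL_FINGER:
    case LIBINPUT_EVENT_POINTER_SCROLL_CONTINUOUS:
        // These report plain scroll values via libinput_event_pointer_get_scroll_value().
        break;
    default:
        break;
    }
}
```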
This change extends the OutputChangesTest so it also covers the cases
where a maximized and a fullscreen window are moved back to their original
output when it is hotplugged.
We use the PMF syntax, so the isValid() check is unnecessary as the
compiler will catch a wrong signal at compile time. It makes writing
autotests feel less boilerplate-heavy.
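For example (the signal name here is illustrative):

```cpp
// The PMF form is checked by the compiler, so QVERIFY(spy.isValid()) becomes redundant.
QSignalSpy spy(output, &Output::geometryChanged);
// versus the string-based form, where a typo is only caught at runtime:
//   QSignalSpy spy(output, SIGNAL(geometryChanged()));
//   QVERIFY(spy.isValid());
```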
This change adds more test cases to OutputChangesTest, particularly for
swapping outputs.
Swapping outputs is an interesting case because outputs can temporarily
overlap, so workspace()->outputAt() can return the wrong output and the
window will stick to the wrong output.
Currently the Workspace processes output updates as they occur. For
example, when the drm backend scans connectors, the Workspace will handle
hotplugged outputs one by one, and if an output configuration changes the
mode of several outputs, the Workspace will process the output layout
updates one by one instead of handling them in one pass. The main reason
for the current behavior is simplicity.
However, that can create issues because the output layout can temporarily
be in a degenerate state, and features such as sticking windows to their
outputs will break.
In order to fix that, this change makes the Workspace process batched
output updates. There are several challenges: disconnected outputs have
to stay alive when the outputsQueried signal is emitted, and the workspace
needs to determine on its own which outputs have been added or removed.
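Roughly, and with hypothetical names, handling the update in one pass
means diffing the previous output list against the queried one while the
removed outputs are still alive:

```cpp
void Workspace::handleOutputsQueried(const QVector<Output *> &queried)
{
    QVector<Output *> added;
    QVector<Output *> removed;
    for (Output *output : queried) {
        if (!m_outputs.contains(output)) {
            added.append(output);
        }
    }
    for (Output *output : m_outputs) {
        if (!queried.contains(output)) {
            removed.append(output);
        }
    }
    m_outputs = queried;
    // One pass over the windows with the final layout; the removed outputs
    // are only deleted by the backend after this returns.
    rearrangeWindows(added, removed); // hypothetical helper
}
```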
Separate trigger progress and semantic progress in gesture.
Move effect activation and desktop switching over to semantic progress.
Allow semantic progress to exceed 1 for overshoot in animations.
I've added the VerticalAxis, HorizontalAxis, DirectionlessSwipe and BiDirectionalPinch gesture directions.
These are all combinations of other gesture directions that semantically work well together.
I've implemented these gestures, changed some labels, and improved the documentation.
Also:
- Add a vector signal to SwipeGesture
- Now only 1 GestureDirection enum
- Now only 1 registerGesture() call
- The 4 kinds of gesture (Pinch/Swipe) and (Touchpad/Touchscreen) in globalshortcuts.h/cpp are merged into 1 GestureShortcut
- Change from a range to a set of finger counts in gestures
No behavior should change, just a refactor.
This might be the root cause of random ASAN errors in testQuickTiling.
From commit 617291c6974d232ee99c4c49e891ce16863e3d6e:
The internal EventQueue is a child of the registry object. This means
that after the registry is destroyed, all proxy objects in that event
queue are going to have an invalid reference to it, which is not a problem
as long as the wl_display_dispatch() function is not called.
The wl_display_dispatch() function uses wl_proxy's queue reference to
enqueue incoming events to that queue.
Unfortunately, during teardown, the internal ConnectionThread may
dispatch events right after the registry object has been destroyed,
which can lead to a crash.
In order to fix the crash, we need to destroy all proxy objects and only
after that we can destroy the event queue. It's okay if wayland events
are dispatched in between.
i.e. the EventQueue object must be destroyed last to avoid hitting
dangling pointers.
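Expressed with raw wayland-client calls (a sketch, not the actual
KWayland code), the safe order is:

```cpp
#include <wayland-client.h>

void teardown(wl_display *display, wl_registry *registry, wl_event_queue *queue)
{
    // Destroy every proxy assigned to the queue first, including the registry...
    wl_registry_destroy(registry);
    // ...and only then destroy the queue itself. Events may still be dispatched
    // in between; that's fine because no proxy references the queue anymore.
    wl_event_queue_destroy(queue);
    wl_display_disconnect(display);
}
```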
This re-enables the crossfade between the old window picture and the new one in the maximize and morphingpopups effects.
It does that with the OffscreenEffect redirect() feature.
BUG:439689
BUG:435423
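A minimal sketch of what redirect() enables on the C++ side (the class
and slot names here are illustrative; KWin::OffscreenEffect comes from
libkwineffects):

```cpp
class MaximizeCrossFade : public KWin::OffscreenEffect
{
public:
    void slotMaximizeChanged(KWin::EffectWindow *window)
    {
        // Capture the old window contents into an offscreen texture; while
        // the window stays redirected, the effect can blend the cached
        // picture with the new, resized contents.
        redirect(window);
    }

    void slotAnimationEnded(KWin::EffectWindow *window)
    {
        // Drop the cached texture once the crossfade is done.
        unredirect(window);
    }
};
```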
Things such as Output, InputDevice and so on are meant to be
multi-purpose. In order to make this separation clearer, this change
moves that code into the core directory. Some things still link to the
abstraction level above (kwin); they can be tackled in future refactors.
Ideally code in core/ should depend either on other code in core/ or
system libs.
Other policy enums are declared in options.h, so let's do the same for
the placement policy. Besides consistency, another advantage of moving the
enum into the KWin namespace is that the enum can be forward declared.
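For illustration (the enum and function names are hypothetical): an enum
with a fixed underlying type, or an enum class, can be forward declared,
so headers that only mention the type no longer need to include options.h:

```cpp
namespace KWin
{
class Window;
enum PlacementPolicy : int; // forward declaration; no include needed
void placeWindow(Window *window, PlacementPolicy policy);
}
```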
An application that does not support text-input has no way of
communicating with the input method, so even if you show the input
method the application receives nothing. As a fallback, instead send
fake key events so the application still gets something at least.
The key events are synthesised based on the text string that the
input method sends, which may result in things that do not actually
correspond to real keys. Unfortunately I do not see a way around that.
CCBUG: 439911
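A hedged sketch of the fallback path; xkb_utf32_to_keysym() is real
xkbcommon API, the helper around it is hypothetical:

```cpp
#include <xkbcommon/xkbcommon.h>
#include <QString>

void sendKeysym(xkb_keysym_t keysym, bool pressed); // hypothetical helper

void sendFakeKeyEvents(const QString &text)
{
    const auto ucs4 = text.toUcs4();
    for (const uint codepoint : ucs4) {
        const xkb_keysym_t keysym = xkb_utf32_to_keysym(codepoint);
        if (keysym == XKB_KEY_NoSymbol) {
            continue; // nothing resembling a real key for this character
        }
        sendKeysym(keysym, true /* press */);
        sendKeysym(keysym, false /* release */);
    }
}
```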
Currently, the main user of these two functions is the X11 standalone
platform.
This change ports that code to Workspace::geometry(), which is not great
but the X11 backend already depends on the Workspace indirectly via the
Screens. Not sure if it's worth making the standalone X11 backend track
the xinerama rect internally.
This class can be used to create an anonymous file, for instance
to pass data between the compositor and clients by means of a
file descriptor, as is done in various Wayland protocols, notably
the keymap exchange.
It also implements sealing the file, so that it can be shared
between multiple clients without them being able to modify it.
If supported, memfd_create is used, otherwise a `QTemporaryFile`
is used.
Signed-off-by: Victoria Fischer <victoria.fischer@mbition.io>
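A condensed sketch of the memfd_create() path; the `QTemporaryFile`
fallback and the actual class interface are left out:

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int createSealedAnonymousFile(const void *data, size_t size)
{
    const int fd = memfd_create("kwin-anonymous-file", MFD_CLOEXEC | MFD_ALLOW_SEALING);
    if (fd == -1) {
        return -1; // fall back to QTemporaryFile here
    }
    if (ftruncate(fd, size) == -1 || write(fd, data, size) != static_cast<ssize_t>(size)) {
        close(fd);
        return -1;
    }
    // Seal the file so clients receiving the fd cannot resize or modify it.
    fcntl(fd, F_ADD_SEALS, F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_WRITE | F_SEAL_SEAL);
    return fd;
}
```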
Since the screen number is well-known, we can look up the default
screen on demand. Note that xcb_get_setup() is pretty cheap as it
simply returns a const pointer to pre-allocated data.
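For example, something along these lines (a sketch):

```cpp
#include <xcb/xcb.h>

xcb_screen_t *defaultScreen(xcb_connection_t *connection, int screenNumber)
{
    // Walk the setup's screen iterator screenNumber times instead of
    // caching the screen pointer at startup.
    xcb_screen_iterator_t it = xcb_setup_roots_iterator(xcb_get_setup(connection));
    for (int i = 0; i < screenNumber && it.rem; ++i) {
        xcb_screen_next(&it);
    }
    return it.data;
}
```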
At the moment, a platform should provide two output lists: one that
lists all available outputs, and another that contains only the
enabled outputs. In general, this amounts to some boilerplate code and
forces backends to be implemented in a certain way, which is sometimes
inconvenient. For example, if an output is disabled or enabled, it would be
simpler to only change Output::isEnabled(); otherwise we need to
start accounting for corner cases such as the order in which
Output::isEnabled() and Platform::enabledOutputs() are changed, etc.
On X11, when a window is maximised and the client is unable to fulfill
the space provided, we centre-align the window.
With the new floating point geometry the centring behaviour changes.
Instead of a 1 pixel gap at the top, we get a 0.5 pixel gap on either side.
When we get into the codepath to "fix" the window in `closeHeight` we
only move the top, giving us an invalid buffer size.
We don't really want to change the logic here; on Xwayland with the
scaling opt-out path it's feasible for a fractional logical size to
still be representable. This code rounds to the native unit after all
the logic has taken effect.
It's important for tablet devices to be able to specify which section
of the display the tablet will be mapped to. This setting allows
specifying this by providing some options that do so relative to the
output size.
CCBUG: 433045
The Session can be useful not only to the platform backend but also
input backends and for things such as vt switching, etc. Therefore it's
better to have the Application own the Session.
Platform backends are provided as plugins. This is great for
extensibility, but the disadvantages of this design outweigh the
benefits.
The number of backends will be limited, it's safe to say that we will
have to maintain three backends for many years to come - kms/drm,
virtual, and wayland. The plugin system adds unnecessary complexity.
Startup logic is affected too. At the moment, platform backends provide
the session object, which is awkward as it starts adding dependencies
between backends. It will be nicer if the session is created depending
on the loaded session type.
In some cases, wayland code needs to talk to the backend directly, e.g.
for drm leasing, etc. With the plugin architecture it's hard to do that.
Not impossible though, we can approach it as in Qt 6, but it's still
harder than linking the code directly.
Of course, the main disadvantage of shipping backends in a lib is that
you will need to patch kwin if you need a custom platform; however, such
cases will be rare.
Despite that disadvantage, I still think that it's a step in the right
direction where the goal is to have multi-purpose backends and other
reusable components of kwin.
The legacy X11 standalone platform is linked directly to kwin_x11
executable, while the remaining backends are linked to libkwin.
The original intention behind creating plugins before the workspace was
to handle the case where kwin_wayland may need to wait until outputs are
available. However, since things have changed a lot in that regard,
plugins can be loaded after the workspace now.
The main benefit behind this is that plugins can be simpler, they won't
need to track when the workspace is created.
On X11, plugins are already loaded after the workspace is instantiated.
This change adjusts the window management abstractions in kwin for the
drm backend providing more than just "desktop" outputs.
Besides that, it has other potential benefits - for example, the
Workspace could start managing allocation of the placeholder output by
itself, thus leading to some simplifications in the drm backend. Another
is that it lets us move wayland code out of the drm backend.
We gain nothing with it. XCB setup logic in the Xwayland server has to
be moved to the workspace layer anyway. For example, this move of
responsibilities will be needed to support running more than just one
instance of Xwayland. Architecture-wise, it would be cleaner too.
Unfortunately, it breaks the encapsulation of the Application, but this can
be taken care of later.
With fractional scaling, integer-based logical geometry may not match
device pixels. Once we have a floating point base we can fix that. This is
also important for our X11 scale override; with a scale of 2 we could
get logical sizes with halves.
We already have all input being floating point; this doubles down on it
for all remaining geometry.
- Outputs remain integer to ensure that any screen on the right remains
  aligned.
- Placement also remains integer-based for now.
- Repainting is untouched as we always expand outwards
  (QRectF::toAlignedRect()).
- Decoration is untouched for now.
- Rules are integer in the config, but floating point in the adjusting/API.
  This should also be fine.
At some point we'll add a method to snap to the device pixel grid,
effectively `round(value * dpr) / dpr`, though right now things mostly work.
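That helper would presumably look roughly like:

```cpp
#include <QtGlobal>
#include <cmath>

inline qreal snapToPixelGrid(qreal value, qreal dpr)
{
    return std::round(value * dpr) / dpr;
}
// e.g. with dpr == 2, snapToPixelGrid(10.3, 2) == 10.5
```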
This also gets rid of a lot of hacks for QRect right and bottom, which are
very confusing.
Parts to watch out for in the port are:
- QRectF::contains now includes edges
- QRectF::right and bottom are now sane, so previous hacks have to be removed
- QRectF(QPoint, QPoint) behaves differently for the same reason
- QRectF::center too
Some test results contain adjusted values that come from QRect::center(),
because using QRectF's center should behave the same to the user.
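A quick illustration of those differences:

```cpp
#include <QRect>
#include <QRectF>

void rectSemantics()
{
    const QRect ri(0, 0, 10, 10);
    const QRectF rf(0, 0, 10, 10);

    // QRect::right()/bottom() are off by one for historical reasons...
    Q_ASSERT(ri.right() == 9 && ri.bottom() == 9);
    // ...while QRectF reports the mathematically expected edges.
    Q_ASSERT(qFuzzyCompare(rf.right(), 10.0) && qFuzzyCompare(rf.bottom(), 10.0));

    // QRectF::contains() includes points on the right/bottom edges.
    Q_ASSERT(!ri.contains(QPoint(10, 10)));
    Q_ASSERT(rf.contains(QPointF(10, 10)));
}
```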
The Screens object is created by the Workspace on X11. This change makes X11
and Wayland behave more similarly. As it is, Screens is a helper for
window management code and shouldn't be used in backends. Note that the X11
backend already uses the Screens; it needs to be addressed individually.
We don't really care about the window showing up until we're calling
showInputPanel, but since Workspace::windowAdded is triggered for any
window that gets added, the test sometimes fails because count() is 2
instead of 1. To avoid that, only create the spy when it's actually
relevant instead of all the way at the start before any other setup is
done.
ScreenPaintData provides a way to transform the painted screen, e.g.
scale or translate. From an API point of view, it's great. It allows
fullscreen effects to transform the workspace in various ways.
On the other hand, such effects end up fighting the default scene
painting algorithm. For example, just have a look at the slide effect!
With fullscreen effects, it's better to leave the decision of how the
screen should be painted to them. For example, such an approach is taken in
some wayland compositors, e.g. wayfire, and our qtquick effects already
operate in a similar fashion.
Given that, strip the ScreenPaintData of all available transforms. The
main motivation behind this change is to improve encapsulation of item
painting code and simplify model-view-projection code in kwin. It will
also make the job of extracting item code for sharing purposes easier.
The IdleDetector is an idle detection helper. Its purpose is to reduce
code duplication in our private KIdleTime plugin and the idle wayland
protocol, and make user activity simulation less error prone.
Anything in xcb_ structs is always in X local space; all member variables
aside from buffers are in kwin local space.
This patch ignores a few paths that are not relevant on wayland.
Currently, if you want to use TimeLine, you need to track the last
presentation timestamp which boils down to carrying some boilerplate
code.
The current situation can be improved by making TimeLine work with
presentation timestamps.
Part-of: <https://invent.kde.org/plasma/kwin/-/merge_requests/2473>
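A hedged before/after sketch, assuming the new entry point is a TimeLine
method that takes the presentation timestamp directly (the class names
are illustrative):

```cpp
// Before: every user carried the previous presentation timestamp around.
void OldStyleEffect::prePaintScreen(KWin::ScreenPrePaintData &data, std::chrono::milliseconds presentTime)
{
    std::chrono::milliseconds delta = std::chrono::milliseconds::zero();
    if (m_lastPresentTime.count()) {
        delta = presentTime - m_lastPresentTime;
    }
    m_lastPresentTime = presentTime;
    m_timeLine.update(delta);
    KWin::effects->prePaintScreen(data, presentTime);
}

// After (assumed API): the TimeLine remembers the last timestamp itself.
void NewStyleEffect::prePaintScreen(KWin::ScreenPrePaintData &data, std::chrono::milliseconds presentTime)
{
    m_timeLine.advance(presentTime);
    KWin::effects->prePaintScreen(data, presentTime);
}
```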
Firstly, we weren't waiting for a signal at all; we were relying on events
being processed externally, which is wrong.
Secondly, ScreenLocker::KSldApp::self()->lockState() is tri-state:
unlocked, acquiring, locked. This gets compressed into a boolean where
acquiring and locked are the same.
If we run the tests whilst we're still acquiring the lock screen we can
call unlocked before we've finished locking. The greeter might then be
shown afterwards, triggering a re-lock. It's a confused state.
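The fix boils down to something like this sketch, assuming the KSldApp
lock-state enum exposes the Locked value:

```cpp
// Wait explicitly until the locker reports Locked, rather than treating
// AcquiringLock and Locked as the same boolean state.
QSignalSpy lockStateChangedSpy(ScreenLocker::KSldApp::self(), &ScreenLocker::KSldApp::lockStateChanged);
while (ScreenLocker::KSldApp::self()->lockState() != ScreenLocker::KSldApp::Locked) {
    QVERIFY(lockStateChangedSpy.wait());
}
```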
The XRender backend has been removed, leaving most of KWinXRenderUtils unused.
The few features that are still used, notably `XRenderPicture` and the pict
format, are moved into the x11/common directory.
Signed-off-by: Victoria Fischer <victoria.fischer@mbition.io>
This change makes the WindowItem track the opacity and schedule a
repaint. It further decouples the legacy scene from core window
abstractions.
It's an API-breaking change. WindowPaintData can no longer make windows
more opaque; it only provides an additional opacity factor.
With this, the WindowItem will know whether it's actually visible. As
a result, if a native wayland window has been minimized, kwin won't
try to schedule a new frame if just a frame callback has been committed.
EffectWindow::enablePainting() and EffectWindow::disablePainting() act
as a stone in the shoe. They have the final say on whether the given window
is visible and they are invoked too late in the rendering process.
WindowItem needs to know whether the window is visible in advance,
before compositing starts.
This change replaces EffectWindow::enablePainting() and
EffectWindow::disablePainting() with EffectWindow::refVisible() and
EffectWindow::unrefVisible(). If an effect calls the refVisible()
function, the window will be kept visible regardless of its state. It
should be called when a window is minimized or closed, etc. If an effect
doesn't want to paint a window, it should not call effects->paintWindow().
EffectWindow::refVisible() doesn't replace EffectWindow::refWindow() but
supplements it. refVisible() only ensures that a window will be kept
visible while refWindow() ensures that the window won't be destroyed
until the effect is done with it.
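A rough usage sketch; the exact parameter of refVisible()/unrefVisible()
is assumed here, and the effect class is hypothetical:

```cpp
void FadeOutEffect::slotWindowClosed(KWin::EffectWindow *w)
{
    w->refWindow();   // keep the window object alive until the animation ends
    w->refVisible(0); // keep it eligible for painting (reason argument assumed)
    m_animations.insert(w);
}

void FadeOutEffect::slotAnimationEnded(KWin::EffectWindow *w)
{
    w->unrefVisible(0);
    w->unrefWindow();
    m_animations.remove(w);
}
```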
Add the possibility to implement realtime screen edge gestures in scripted effects,
and implement it in the windowaperture show desktop effect.
* Expose registerRealtimeScreenEdge to JavaScript; the callback will be
a JS function.
* Add the concept of freezeInTime() to the animation JS bindings; it will
either create an animation frozen at a given time or freeze a running animation
that can be restored and run to completion at any time.
* Add an edges property only for showdesktop, as it's not directly in the
effect configuration.
libinput_device_get_user_data() can be used to get the Device object
associated with a libinput_device. That way, we won't need to maintain a
private list of all input devices.
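For example (a sketch; the Device wrapper, Connection class, and the
static getter are kwin-side names used here only for illustration):

```cpp
#include <libinput.h>

void Connection::handleDeviceAdded(libinput_device *native)
{
    // Stash the wrapper in libinput's per-device user data when it is created...
    auto device = new Device(native);
    libinput_device_set_user_data(native, device);
}

Device *Device::get(libinput_device *native)
{
    // ...and fetch it back when events arrive, without a separate list.
    return static_cast<Device *>(libinput_device_get_user_data(native));
}
```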