The IdleDetector is an idle detection helper. Its purpose is to reduce
code duplication between our private KIdleTime plugin and the idle wayland
protocol implementation, and to make user activity simulation less error-prone.
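A rough sketch of what such a helper could look like; the class name matches,
but the signals and methods shown here are illustrative rather than the actual
API:

    #include <QObject>
    #include <QTimer>
    #include <chrono>

    class IdleDetector : public QObject
    {
        Q_OBJECT
    public:
        explicit IdleDetector(std::chrono::milliseconds timeout, QObject *parent = nullptr)
            : QObject(parent)
        {
            m_timer.setSingleShot(true);
            m_timer.setInterval(timeout);
            connect(&m_timer, &QTimer::timeout, this, [this]() {
                m_isIdle = true;
                Q_EMIT idle();
            });
            m_timer.start();
        }

        // Called whenever user activity is seen or simulated, e.g. by the
        // KIdleTime plugin or the idle wayland protocol implementation.
        void activity()
        {
            if (m_isIdle) {
                m_isIdle = false;
                Q_EMIT resumed();
            }
            m_timer.start(); // restart the idle timeout
        }

    Q_SIGNALS:
        void idle();
        void resumed();

    private:
        QTimer m_timer;
        bool m_isIdle = false;
    };

Both consumers would then share the same timeout and activity bookkeeping
instead of each rolling their own.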
With this, only the main and the libinput threads will use realtime
scheduling, so it will be harder to leak realtime scheduling to somebody
else.
The only caveat is that kwin would need to keep CAP_SYS_NICE around.
On the other hand, it's needed anyway to ensure that kwin_wayland is
able to get high priority EGL contexts with some drivers, e.g. Intel.
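For illustration, this is roughly how a thread can opt into realtime
scheduling on Linux with plain POSIX calls; it is not kwin's actual code, and
without CAP_SYS_NICE (or a suitable RLIMIT_RTPRIO) the request simply fails
and the thread keeps its normal priority:

    #include <pthread.h>
    #include <sched.h>
    #include <cstdio>

    static void enableRealtimeScheduling()
    {
        sched_param param{};
        param.sched_priority = sched_get_priority_min(SCHED_RR);

        // Promote only the calling thread, e.g. the main or libinput thread.
        const int err = pthread_setschedparam(pthread_self(), SCHED_RR, &param);
        if (err != 0) {
            std::fprintf(stderr, "realtime scheduling unavailable (%d), continuing without it\n", err);
        }
    }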
QX11Info is used in `platform.cpp`, which is used by both
kwin_x11 and kwin_wayland. The latter only gets it implicitly
through KGlobalAccelPrivate.
Ideally, platform.cpp would be split so that kwin_wayland doesn't require
QX11Info in the first place.
Signed-off-by: Victoria Fischer <victoria.fischer@mbition.io>
This allows a large amount of custom rendering code to be tossed.
Furthermore, it removes the build-time dependency on Plasma Framework
for FrameSvg and Theme from KWin core as it's pulled in through QML
imports now.
It also cleans up the API and removes functions that are effectively
unused or no-op after this change.
For instance, effects often destroy their effect frames
in pre/postPaintScreen, which would now destroy an `OffscreenQuickView`
and thus change the current GL context. This is alleviated by delaying
destruction of the internal view.
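A minimal sketch of the deferred destruction, with hypothetical names
(EffectFrameImpl, m_view) standing in for the real classes:

    #include <QObject>
    #include <QPointer>

    class EffectFrameImpl
    {
    public:
        explicit EffectFrameImpl(QObject *view)
            : m_view(view) // in KWin this would be the OffscreenQuickView backing the frame
        {
        }

        ~EffectFrameImpl()
        {
            // Effects may delete their frames from pre/postPaintScreen.
            // Deleting the view right there would switch the GL context
            // mid-paint, so hand it to the event loop instead and let it be
            // destroyed on the next cycle.
            if (m_view) {
                m_view->deleteLater();
            }
        }

    private:
        QPointer<QObject> m_view;
    };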
Support for text cross-fade and selection frames, which are not used
by any of the built-in effects, is dropped.
Signed-off-by: Eike Hein <eike.hein@mbition.io>
This makes KWin switch to an in-tree copy of the KWaylandServer codebase.
The KWaylandServer namespace has been left as is. It will be addressed
later by renaming classes in order to fit into the KWin namespace.
Instead of creating a gammaramp object with a fixed size, make the color
device create a color transformation object that can be used to construct
arbitrary LUTs. This is needed in order to support tiled displays well
and is useful for further color management work.
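A sketch of the idea with illustrative names: the transformation maps a
normalized channel value, and a LUT of whatever size a display (or tile of a
tiled display) needs can be sampled from it:

    #include <QVector>
    #include <cmath>
    #include <cstdint>

    class ColorTransformation
    {
    public:
        // Apply the transformation to a normalized value in [0, 1]. Here a
        // plain gamma curve stands in for the real color pipeline.
        double transform(double value) const
        {
            return std::pow(value, 1.0 / m_gamma);
        }

        // Build a LUT of an arbitrary size, so outputs with different gamma
        // ramp sizes can all be driven from the same object.
        QVector<uint16_t> createLut(int size) const
        {
            QVector<uint16_t> lut(size);
            for (int i = 0; i < size; ++i) {
                const double input = size > 1 ? double(i) / (size - 1) : 0.0;
                lut[i] = uint16_t(std::lround(transform(input) * 0xffff));
            }
            return lut;
        }

    private:
        double m_gamma = 2.2;
    };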
When we do more color management work we'll need it in more places;
making it a hard requirement reduces the number of needed ifdefs and
should make adding color management features a little simpler.
AbstractOutput is not so Abstract and it's common to avoid the word
"Abstract" in class names as it doesn't contribute any new information.
It also significantly reduces the line width in some places.
The main motivation behind this change is to unify the render target
representation across the opengl and software renderers and to avoid
accessing the render backend directly in order to get the render target.
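A minimal sketch of what a unified render target could look like, assuming it
wraps either an OpenGL framebuffer or a QImage for the QPainter backend; all
names here are illustrative:

    #include <QImage>
    #include <variant>

    struct GLFramebufferHandle
    {
        unsigned int fbo = 0;
        int width = 0;
        int height = 0;
    };

    class RenderTarget
    {
    public:
        explicit RenderTarget(GLFramebufferHandle fbo)
            : m_nativeHandle(fbo)
        {
        }
        explicit RenderTarget(QImage *image)
            : m_nativeHandle(image)
        {
        }

        // The scene asks the target for its native handle instead of
        // reaching into the render backend to find out where to draw.
        std::variant<GLFramebufferHandle, QImage *> nativeHandle() const
        {
            return m_nativeHandle;
        }

    private:
        std::variant<GLFramebufferHandle, QImage *> m_nativeHandle;
    };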
Notifications are really only useful in a setting with a full
shell environment where there is a notification center to display them.
Signed-off-by: Victoria Fischer <victoria.fischer@mbition.io>
Software cursor has always been a major source of problems. Hopefully,
porting it to RenderLayer will help us with that.
Note that the cursor layer is currently visible only when using the
software cursor; however, this will change once the Compositor can
allocate a real hardware cursor plane.
Currently, the software cursor uses graphics-specific APIs (OpenGL and
QPainter) to paint itself. That will change in the future when the
rendering parts are extracted from the Scene into a reusable helper.
This is the first tiny step towards layer-based compositing in kwin.
The RenderLayer represents a layer with some contents. The actual
contents are provided by the RenderLayerDelegate class.
Currently, the RenderLayer is just a simple class responsible for
geometry and repaints, but it will grow in the future. For example,
render layers need to form a tree.
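An illustrative sketch of the split, with simplified signatures: the layer
owns geometry and repaint tracking, while the delegate provides the actual
contents:

    #include <QRect>
    #include <QRegion>

    class RenderLayerDelegate
    {
    public:
        virtual ~RenderLayerDelegate() = default;
        // Paint the layer's contents into the given region.
        virtual void paint(const QRegion &region) = 0;
    };

    class RenderLayer
    {
    public:
        explicit RenderLayer(RenderLayerDelegate *delegate)
            : m_delegate(delegate)
        {
        }

        QRect geometry() const { return m_geometry; }
        void setGeometry(const QRect &rect)
        {
            if (m_geometry != rect) {
                addRepaint(m_geometry); // the old position needs a repaint too
                m_geometry = rect;
                addRepaint(m_geometry);
            }
        }

        void addRepaint(const QRegion &region) { m_repaints += region; }
        QRegion repaints() const { return m_repaints; }

        void paint()
        {
            m_delegate->paint(m_repaints);
            m_repaints = QRegion();
        }

    private:
        RenderLayerDelegate *m_delegate;
        QRect m_geometry;
        QRegion m_repaints;
    };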
The next biggest (and currently missing) component in layer-based
compositing is output layers. When output layers are added, each render
layer will either have an output layer assigned to it or inherit its
output layer from the parent.
The render layer tree won't be affected by changes to the output layer
tree, so the transition between software and hardware cursors can be
seamless.
The next big milestone will be to try to port some existing kwin
functionality to the RenderLayer, e.g. the software cursor or screen edges.
It's not practical; regular users don't care about window geometry. One
could argue that it can be useful for creating window rules, but the
window rules KCM pulls the relevant properties from kwin.
If needed, one can reimplement this feature as a QtQuick script that creates
an overlay window positioned above the window that is being interactively
moved or resized.
While a stylus is in proximity we want to show its cursor. In this commit
it only gets shown on move because, for a still unknown reason, the
position is out of date before the first move event.
BUG: 443921
The main idea behind the render backend is to decouple low level bits
from scenes. The end goal is to make the render backend provide render
targets where the scene can render.
Design-wise, such a split is more flexible than the current state. For
example, we could start experimenting with using QtQuick (assuming that
the legacy scene is properly encapsulated), or creating multiple scenes,
e.g. one per output layer.
So far, the RenderBackend class only contains one getter; more stuff will
be moved from the Scene as it makes sense.
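A hypothetical sketch of that direction: the backend hands out a render
target per output, and the scene stays unaware of whether it is backed by
OpenGL or QPainter. The getter and the Output/RenderTarget types shown here
are illustrative, not the ones the class actually has:

    class Output;
    class RenderTarget;

    class RenderBackend
    {
    public:
        virtual ~RenderBackend() = default;

        // Where the scene should render for the given output. An OpenGL
        // backend would return a target wrapping a framebuffer, a software
        // backend one wrapping a QImage.
        virtual RenderTarget *renderTarget(Output *output) = 0;
    };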
Hardware constraints limit the number of crtcs and which connector + crtc
combinations can work together. The current code searches for working
combinations when a hotplug happens, but that's not enough: it also needs
to happen when the user enables or disables outputs and when modesets are
done, and the configuration change needs to be applied with a single atomic
commit.
This commit removes the hard dependency of DrmPipeline on crtcs by moving
the pending state of outputs from the drm objects to DrmPipeline itself,
which ensures that it's independent from the set of drm objects currently
used. It also changes requests from KScreen to be applied truly atomically.
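A rough sketch of the single-atomic-commit idea using plain libdrm calls; the
pipeline bookkeeping is heavily simplified and the property ids are assumed to
have been looked up already:

    #include <xf86drm.h>
    #include <xf86drmMode.h>
    #include <cstdint>
    #include <vector>

    struct PendingPipeline
    {
        uint32_t connectorId;
        uint32_t connectorCrtcPropId; // the connector's "CRTC_ID" property
        uint32_t crtcId;              // 0 when the output is disabled
    };

    // Test (or apply) the whole configuration at once, so either every output
    // gets its requested state or nothing changes. A real modeset would also
    // set the CRTC's MODE_ID and ACTIVE properties in the same request.
    static bool commitPipelines(int drmFd, const std::vector<PendingPipeline> &pipelines, bool testOnly)
    {
        drmModeAtomicReqPtr req = drmModeAtomicAlloc();
        if (!req) {
            return false;
        }
        bool ok = true;
        for (const PendingPipeline &pipeline : pipelines) {
            if (drmModeAtomicAddProperty(req, pipeline.connectorId,
                                         pipeline.connectorCrtcPropId, pipeline.crtcId) < 0) {
                ok = false;
                break;
            }
        }
        if (ok) {
            const uint32_t flags = DRM_MODE_ATOMIC_ALLOW_MODESET
                | (testOnly ? DRM_MODE_ATOMIC_TEST_ONLY : 0);
            ok = drmModeAtomicCommit(drmFd, req, flags, nullptr) == 0;
        }
        drmModeAtomicFree(req);
        return ok;
    }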
The main motivation behind this change is to prepare the input abstractions
for virtual input devices, so the wl_seat can properly advertise its
capabilities and the cursor can be properly mapped/unmapped when a fake
pointer is added or removed on a system without a hardware mouse connected.
With this, there are three abstractions: InputDevice, InputBackend, and
InputRedirection.
An InputDevice represents an input device such as a mouse, a keyboard, a
tablet, etc. The InputBackend class notifies the InputRedirection about
(dis-)connected devices. The InputRedirection manages the input devices.
Such a design allows the event flow for real and virtual input devices
to be unified.
There can be several input backends active at the same time, for example
the libinput backend and an input backend that provides virtual input
devices, e.g. libeis or org_kde_kwin_fake_input.
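A simplified sketch of the first two abstractions, with illustrative
signatures; InputRedirection would listen to deviceAdded/deviceRemoved from
every backend and update the wl_seat capabilities accordingly:

    #include <QObject>
    #include <QPointF>

    // A single physical or virtual device (mouse, keyboard, tablet, ...).
    class InputDevice : public QObject
    {
        Q_OBJECT
    public:
        using QObject::QObject;
        virtual bool isPointer() const = 0;
        virtual bool isKeyboard() const = 0;

    Q_SIGNALS:
        // Device events are emitted as signals and handled uniformly,
        // regardless of where the device comes from.
        void pointerMotion(const QPointF &delta);
    };

    // Discovers devices and announces them; libinput, libeis or the
    // org_kde_kwin_fake_input protocol could each provide one of these.
    class InputBackend : public QObject
    {
        Q_OBJECT
    public:
        using QObject::QObject;
        virtual void initialize() = 0;

    Q_SIGNALS:
        void deviceAdded(InputDevice *device);
        void deviceRemoved(InputDevice *device);
    };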