While it could be useful with tiled displays, the isFormatSupported and
supportedModifier functions can be called before prepareModeset, at which
point m_formats is still empty. Additionally, they're neither in a hot path nor
performance critical.
Whether or not we want to use explicit modifiers for our own surfaces has no
bearing on which format+modifier combinations drm planes support. This way
direct scanout works by default, without having to explicitly enable modifiers.
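As a rough illustration (hypothetical helper, not the actual kwin code), direct
scanout of a client buffer only needs the plane to advertise the buffer's
format+modifier pair, regardless of how our own surfaces are allocated:

    #include <QMap>
    #include <QVector>
    #include <cstdint>

    // planeFormats: DRM fourcc format -> modifiers the plane advertises
    // (e.g. parsed from its IN_FORMATS property).
    bool planeSupportsBuffer(const QMap<uint32_t, QVector<uint64_t>> &planeFormats,
                             uint32_t format, uint64_t modifier)
    {
        const auto it = planeFormats.constFind(format);
        return it != planeFormats.constEnd() && it->contains(modifier);
    }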
systemd takes care of setting and dropping master permissions when
sending PauseDevice and ResumeDevice signals.
When the ResumeDevice signal is received, the relevant drm device should
already have master permissions set up.
On the other hand, when the active property changes, there's still a
chance that systemd hasn't granted drm master permissions to us yet.
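Sketched below, assuming logind's ResumeDevice(u, u, h) signal signature and a
hypothetical receiver slot (the real kwin code differs): react to ResumeDevice
rather than to the Active property, and only touch the device once drm master
is actually held:

    #include <QDBusConnection>
    #include <QDBusUnixFileDescriptor>
    #include <QObject>
    #include <QString>
    #include <xf86drm.h>

    // Listen for ResumeDevice on the logind session object; when it arrives,
    // the drm device it refers to should already have master permissions.
    void listenForResume(QObject *receiver, const QString &sessionPath)
    {
        QDBusConnection::systemBus().connect(
            QStringLiteral("org.freedesktop.login1"), sessionPath,
            QStringLiteral("org.freedesktop.login1.Session"),
            QStringLiteral("ResumeDevice"),
            receiver, SLOT(handleResumeDevice(quint32,quint32,QDBusUnixFileDescriptor)));
    }

    // A handler for the Active property, on the other hand, must not assume
    // master permissions yet; drmIsMaster() (recent libdrm) can verify them.
    bool canModeset(int drmFd)
    {
        return drmIsMaster(drmFd);
    }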
In case a modeset needs to be performed, the drm backend will test all
pipelines to ensure that the new mode won't cause any bandwidth issues on
other outputs, etc.
To do that, it may delay presenting frames. If the new configuration
doesn't work, it needs to notify about failed frames.
However, the relevant code that notifies the RenderLoop about failed
atomic commits doesn't check if there's actually a pending modeset
present.
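For reference, the testing itself can be expressed with a TEST_ONLY atomic
commit, roughly like this (a sketch; assembling the request from all pipelines
is assumed to happen elsewhere):

    #include <xf86drm.h>
    #include <xf86drmMode.h>

    // Ask the driver to validate the combined configuration (bandwidth,
    // crtc limits, ...) without touching the hardware or queueing a flip.
    bool testConfiguration(int drmFd, drmModeAtomicReq *req)
    {
        const int err = drmModeAtomicCommit(
            drmFd, req,
            DRM_MODE_ATOMIC_TEST_ONLY | DRM_MODE_ATOMIC_ALLOW_MODESET,
            nullptr);
        return err == 0;
    }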
When switching between VTs, systemd can revoke master permissions from
kwin. To make things even trickier, kwin can try to present a frame
in that short time span.
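In that situation the presentation commit simply fails. A commit attempted
without drm master typically fails with EACCES; a sketch of how such a failure
could be told apart from a genuine driver error (hypothetical handling, the
real code is more involved):

    #include <cerrno>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    enum class PresentResult { Success, MasterLost, Failed };

    PresentResult presentFrame(int drmFd, drmModeAtomicReq *req, void *userData)
    {
        const int ret = drmModeAtomicCommit(
            drmFd, req, DRM_MODE_ATOMIC_NONBLOCK | DRM_MODE_PAGE_FLIP_EVENT, userData);
        if (ret == 0) {
            return PresentResult::Success;
        }
        if (ret == -EACCES || errno == EACCES) {
            // Master was revoked (e.g. during a VT switch); don't treat this
            // as a failed frame, just retry once the device is resumed.
            return PresentResult::MasterLost;
        }
        return PresentResult::Failed;
    }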
EffectQuickScene is not used exclusively by effects; aurorae decorations
also use it to render window decorations.
This change renames the EffectQuickView/Scene to
OffscreenQuickView/Scene to clear up the naming scheme.
Fixes a crash I have with dpms + suspend, which was caused by the udev
event for updating outputs being called before the output got enabled
again. When DrmGpu::updateOutputs got called, it removed the crtc from
the inactive output and then disabled the output afterwards. Instead,
only remove crtcs if an output is really disabled.
This also makes it possible to generalize the logic for lease outputs, and could
in the future allow for faster dpms on/off switching.
This unifies frame hooks for the OpenGL and QPainter render backends. There
are a couple of reasons why it's a good idea: it provides one mental
framework for starting to paint a frame, and the Compositor will be able to
start and submit frames itself. The latter is very cool because it gives the
Compositor more power over compositing.
Besides unifying frame hooks, this also cleans up the arg naming mess
in endFrame() a bit. As is, "damage" and "damagedRegion" are very confusing
names. The "damage" arg has been renamed to "renderedRegion", because that's
what it is. The renderedRegion arg specifies the region that has been
repainted by the Scene. It's different from the damagedRegion as that
one specifies the surface damage, i.e. the difference between the
current and the next frame, while the renderedRegion may include a
region that had to be repainted to repair the back buffer. The main
reason why we need renderedRegion is the X11 platform. On Wayland, it's
unused.
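A sketch of what the distinction looks like in the hook signatures (simplified
and hypothetical; the exact interface in kwin differs):

    #include <QRegion>

    class RenderBackend
    {
    public:
        virtual ~RenderBackend() = default;

        // Called by the Compositor when a new frame starts.
        virtual QRegion beginFrame() = 0;

        // renderedRegion: everything the Scene actually repainted, including
        // repairs to the back buffer (needed on X11, unused on Wayland).
        // damagedRegion: the difference between the current and the next
        // frame, i.e. the surface damage handed to the display.
        virtual void endFrame(const QRegion &renderedRegion,
                              const QRegion &damagedRegion) = 0;
    };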
In the future, we will need to extend this api with output layers.
The ifdefs for have_gbm obfuscate the code unnecessarily - the drm backend
is not a great experience with qpainter, so in practice no one should ship
it without gbm anyway.
The Compositor contains nothing that can potentially get dirty and need
repainting.
As is, the advantages of this move aren't really noticeable, but it
makes sense with multiple scenes.
Backend parts are far from ideal; they can be improved later on as we
progress with the scene redesign.
The proprietary NVidia driver now supports gbm, which vastly improves the
user experience. For older devices that will not get gbm support, dropping
EglStreams will likely not have a big impact, as it has several session-breaking
issues anyway.
By removing the backend, a lot of logic can be simplified, most notably multi-gpu.
The main motivation behind this change is to move management of drm
blobs out of property wrappers into specialized wrappers to simplify state
management with blobs.
Connector mode blobs are created on demand.
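Such a specialized wrapper could look roughly like this (a sketch; the real
class carries more state and error handling):

    #include <xf86drmMode.h>
    #include <cstddef>
    #include <cstdint>

    // Owns a drm property blob and destroys it with the wrapper, so callers
    // never juggle raw blob ids.
    class DrmBlob
    {
    public:
        DrmBlob(int fd, const void *data, size_t size)
            : m_fd(fd)
        {
            drmModeCreatePropertyBlob(fd, data, size, &m_blobId);
        }
        ~DrmBlob()
        {
            if (m_blobId) {
                drmModeDestroyPropertyBlob(m_fd, m_blobId);
            }
        }
        DrmBlob(const DrmBlob &) = delete;
        DrmBlob &operator=(const DrmBlob &) = delete;

        uint32_t id() const { return m_blobId; }

    private:
        int m_fd;
        uint32_t m_blobId = 0;
    };

    // "Created on demand": a blob for a drmModeModeInfo is only built when
    // the mode actually needs to be programmed, e.g.
    //   DrmBlob modeBlob(fd, &modeInfo, sizeof(modeInfo));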
When we switch CRTCs, it can happen that a CRTC stays enabled but no longer
has any connectors. In that case the kernel may reject our atomic commit,
which would cause the modeset to fail. To counteract that, properly disable
unused drm objects.
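Concretely, disabling a leftover CRTC just means adding it to the same atomic
request as the rest of the modeset (a sketch; looking up the ACTIVE and
MODE_ID property ids is assumed to happen elsewhere):

    #include <xf86drmMode.h>
    #include <cstdint>

    // Turn an unused CRTC off in the same commit that reassigns its
    // connector, so the kernel never sees an enabled CRTC without connectors.
    void disableCrtc(drmModeAtomicReq *req, uint32_t crtcId,
                     uint32_t activePropId, uint32_t modeIdPropId)
    {
        drmModeAtomicAddProperty(req, crtcId, activePropId, 0);
        drmModeAtomicAddProperty(req, crtcId, modeIdPropId, 0);
    }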
Currently KWin is combining modesets with presentation, which causes problems
when multiple monitors are used and crtcs need to be switched around, because
taking away a CRTC from another output causes the driver to disable the
other output. In order to avoid such problems, delay presentation until
all pipelines are ready to present and then do a modeset with a single atomic
commit. To process the resulting page flip events properly, this commit also
ports KWin to page_flip_handler2 and changes how the pageFlipped and
notifyFrameFailed signals are processed.
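The relevant bit of the port, sketched (event loop integration omitted):
page_flip_handler2 additionally reports the crtc id, which makes it possible
to match a flip event to the right pipeline when several outputs share one
atomic commit:

    #include <xf86drm.h>

    // With page_flip_handler2 the crtc id identifies which pipeline the
    // completed flip belongs to.
    static void pageFlipped(int fd, unsigned int sequence, unsigned int sec,
                            unsigned int usec, unsigned int crtcId, void *userData)
    {
        // look up the pipeline for crtcId and notify its RenderLoop here
    }

    void dispatchDrmEvents(int drmFd)
    {
        drmEventContext context = {};
        context.version = 3; // page_flip_handler2 requires context version >= 3
        context.page_flip_handler2 = pageFlipped;
        drmHandleEvent(drmFd, &context);
    }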
Hardware constraints limit the number of crtcs and which connector + crtc
combinations can work together. The current code searches for working
combinations when a hotplug happens, but that's not enough: the search also
needs to happen when the user enables or disables outputs and when modesets are
done, and the configuration change needs to be applied with a single atomic
commit.
This commit removes the hard dependency of DrmPipeline on crtcs by moving
the pending state of outputs from the drm objects to DrmPipeline itself,
which ensures that it's independent of the set of drm objects currently
used. It also changes requests from KScreen to be applied truly atomically.
This allows using base opengl backends in libkwin, which can be useful
later on for moving the ownership of render backends from
the Scene class to the Compositor class.
This improves file organization in kwin by putting backends in a single
directory.
It also makes it easier for new contributors to discover kwin's low level
components, because the plugins directory may be the last place one would
look. When one hears "plugin", the first thing that comes to mind is
regular plugins, not low level backends.