The proprietary NVIDIA driver now supports GBM, which vastly improves the
user experience. For older devices that will not get GBM support, dropping
EglStreams will likely not have a big impact, as the backend has several
session-breaking issues anyway.
Removing the backend also allows a lot of logic to be simplified, most
notably the multi-GPU code.
The current "Minimize Overlapping" window placement tends to position
windows in locations that seem completely random, typically in a screen
corner. It is doing this because, true to its name, it is trying to
avoid overlapping other windows as much as possible. However in practice
this is rarely helpful. When the user opens a new window, it's because
they want to use it, and positioning the window far from where the
user is likely to be looking is counter-productive. This is even more
true on today's large and wide displays, where placing the window in a
corner may position it entirely outside the user's current field of
vision. We get bug reports about this exact issue for notifications
(which always appear in a corner by default) from users of such screens.
For notifications, this can be justifiable because notifications are
designed to be ignorable; app windows on the other hand, are not.
As a result, I commonly see Plasma users open windows and then
immediately, reflexively grab the window's titlebar and drag it to the
center of the screen. I have seen my wife do this. I have seen every
YouTube reviewer of Plasma do this. I have even seen fellow KDE
developers at sprints do this. It seems like quite a common impulse
to want a newly-opened window to appear in the center of the screen,
which is where the user is likely to already be looking.
Thankfully, KWin already has a window placement mode that does this
automatically: "Centered". Accordingly, this commit changes the default
KWin window placement mode from "Minimize Overlapping" to "Centered".
No kconf migration script is provided because this is a better default
for most people in most cases, and existing users are highly likely to
appreciate this change.
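For reference, the placement mode is stored in kwinrc; to the best of my
knowledge the relevant key looks like the snippet below, but double-check
against the current config schema before relying on it:

    [Windows]
    Placement=Centered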
The main motivation behind this change is to move management of drm
blobs out of the property wrappers and into specialized wrappers, to
simplify state management with blobs.
Connector mode blobs are created on demand.
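A minimal sketch of what such a specialized wrapper could look like, built
on plain libdrm calls (the class name and layout are illustrative, not the
actual KWin code):

    #include <xf86drmMode.h>
    #include <cstddef>
    #include <cstdint>

    // Illustrative RAII wrapper: creates a DRM property blob (for example a
    // connector mode blob) on construction and destroys it on destruction.
    class DrmBlob
    {
    public:
        DrmBlob(int fd, const void *data, std::size_t size)
            : m_fd(fd)
        {
            if (drmModeCreatePropertyBlob(fd, data, size, &m_id) != 0) {
                m_id = 0; // creation failed, blobId() stays 0
            }
        }
        ~DrmBlob()
        {
            if (m_id != 0) {
                drmModeDestroyPropertyBlob(m_fd, m_id);
            }
        }
        DrmBlob(const DrmBlob &) = delete;
        DrmBlob &operator=(const DrmBlob &) = delete;

        uint32_t blobId() const { return m_id; }

    private:
        int m_fd;
        uint32_t m_id = 0;
    };

With such a wrapper, a connector's mode blob can be created lazily the
first time a pipeline needs it and freed automatically when the state is
dropped.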
When we switch CRTCs it can happen that a CRTC stays enabled yet no longer
has any connectors. In this case the kernel may reject our atomic commit,
which would cause the modeset to fail. To counteract that, properly disable
unused drm objects.
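Roughly, disabling a leftover CRTC and detaching its connector can be done
in one atomic request like this (a sketch, not the actual KWin code; the
property IDs are assumed to have been looked up beforehand):

    #include <xf86drmMode.h>
    #include <cstdint>

    // Sketch: turn off a CRTC that no longer drives any connector and
    // detach the connector from it, all as part of a single atomic commit.
    bool disableUnused(int fd, uint32_t connectorId, uint32_t connCrtcIdProp,
                       uint32_t crtcId, uint32_t crtcActiveProp,
                       uint32_t crtcModeIdProp)
    {
        drmModeAtomicReq *req = drmModeAtomicAlloc();
        if (!req) {
            return false;
        }
        drmModeAtomicAddProperty(req, connectorId, connCrtcIdProp, 0); // no CRTC
        drmModeAtomicAddProperty(req, crtcId, crtcActiveProp, 0);      // inactive
        drmModeAtomicAddProperty(req, crtcId, crtcModeIdProp, 0);      // no mode
        const int err = drmModeAtomicCommit(fd, req,
                                            DRM_MODE_ATOMIC_ALLOW_MODESET,
                                            nullptr);
        drmModeAtomicFree(req);
        return err == 0;
    }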
Currently KWin combines modesets with presentation, which causes problems
when multiple monitors are used and CRTCs need to be switched around,
because taking a CRTC away from another output causes the driver to disable
that other output. In order to avoid such problems, delay presentation until
all pipelines are ready to present and then do the modeset with a single
atomic commit. To process the resulting page flip events properly, this
commit also ports KWin to page_flip_handler2 and changes how the pageFlipped
and notifyFrameFailed signals are processed.
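For context, page_flip_handler2 is the libdrm event callback that, unlike
the older page_flip_handler, also reports the CRTC the flip happened on,
which is what allows routing each event to the right pipeline. A simplified
sketch of the plumbing (not the actual KWin code):

    #include <xf86drm.h>

    // Called by libdrm for every page flip event; crtc_id identifies the
    // pipeline the event belongs to, so pageFlipped / notifyFrameFailed can
    // be dispatched to the correct output.
    static void pageFlipHandler(int fd, unsigned int sequence,
                                unsigned int tv_sec, unsigned int tv_usec,
                                unsigned int crtc_id, void *user_data)
    {
        // Look up the pipeline for crtc_id and forward the timestamp here.
    }

    void dispatchDrmEvents(int fd)
    {
        drmEventContext context = {};
        context.version = 3; // page_flip_handler2 needs at least version 3
        context.page_flip_handler2 = pageFlipHandler;
        drmHandleEvent(fd, &context);
    }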
Hardware constraints limit the number of CRTCs and which connector + CRTC
combinations can work together. The current code searches for working
combinations when a hotplug happens, but that's not enough: the search also
needs to happen when the user enables or disables outputs and when modesets
are done, and the resulting configuration change needs to be applied with a
single atomic commit.
This commit removes the hard dependency of DrmPipeline on CRTCs by moving
the pending state of outputs from the drm objects to DrmPipeline itself,
which ensures that it's independent of the set of drm objects currently in
use. It also changes requests from KScreen to be applied truly atomically.
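The usual way to check whether a candidate connector + CRTC assignment can
actually work is to build the corresponding atomic request and ask the
kernel with a test-only commit before applying anything. A sketch, assuming
the request has already been filled with the candidate assignment:

    #include <xf86drmMode.h>

    // Sketch: ask the kernel whether the fully populated atomic request
    // describing a candidate connector/CRTC assignment would be accepted,
    // without actually applying it.
    bool testConfiguration(int fd, drmModeAtomicReq *req)
    {
        return drmModeAtomicCommit(fd, req,
                                   DRM_MODE_ATOMIC_TEST_ONLY |
                                       DRM_MODE_ATOMIC_ALLOW_MODESET,
                                   nullptr) == 0;
    }

If the test fails, the search moves on to the next candidate; once a working
assignment is found, the same request can be committed for real, which keeps
the whole configuration change within a single atomic commit.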
The GlStrictBinding flag indicates whether it's okay not to re-bind the X11
pixmap to the OpenGL surface texture if the corresponding window is damaged.
It doesn't really affect SceneOpenGL itself, only low-level backend stuff.
This ensures that the window will have correct geometry if a maximized
window changes preferred decoration mode. X11Client does something
similar, see X11Client::updateShape().
In hindsight, perhaps AbstractClient::{create,destroy}Decoration() should
preserve the old frame geometry, but it's not clear how to do that: it would
require decoration updates to be truly async, otherwise there will be ugly
flickering.
Currently, the scene owns the renderer, which puts responsibilities on the
scene beyond painting windows, and it also places limitations on what we
can do; for example, there can be only one scene.
This change decouples the scene and the renderer so the scene is more
easily swappable.
Scenes are no longer implemented as plugins because OpenGL backend and
scene creation need to be wrapped in OpenGL safety points. We could still
create the render backend and then go through the list of scene plugins,
but accessing a concrete scene implementation directly is much simpler.
Besides that, implementing scenes as plugins is not worthwhile: there are
only two scenes and each contributes a very small amount of binary size,
while every plugin is one more thing kwin has to load from disk before it
can function as expected.
This allows using base OpenGL backends in libkwin, which can be useful
later on for moving the ownership of render backends from the Scene class
to the Compositor class.
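As a purely hypothetical illustration of the intended ownership (the class
and method names below are made up for the example and are not KWin's
actual API), the end state would look roughly like this:

    #include <memory>

    // Hypothetical sketch: the compositor owns the render backend and hands
    // a non-owning pointer to whichever scene it creates directly, with no
    // plugin loading involved.
    class RenderBackend
    {
    public:
        virtual ~RenderBackend() = default;
    };

    class OpenGLRenderBackend : public RenderBackend {};
    class SoftwareRenderBackend : public RenderBackend {};

    class Scene
    {
    public:
        explicit Scene(RenderBackend *backend) : m_backend(backend) {}
        virtual ~Scene() = default;

    private:
        RenderBackend *m_backend; // not owned
    };

    class Compositor
    {
    public:
        void start()
        {
            // Create the backend first (wrapped in OpenGL safety points in
            // the real code), then instantiate the matching scene directly.
            m_backend = std::make_unique<OpenGLRenderBackend>();
            m_scene = std::make_unique<Scene>(m_backend.get());
        }

    private:
        std::unique_ptr<RenderBackend> m_backend;
        std::unique_ptr<Scene> m_scene;
    };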
It has been disabled with Mesa for almost half a decade due to false
positives, and even where it isn't disabled, it adds to the startup time.
The commit message that added the self test doesn't explain why it was
added, but if it was added to detect unstable drivers, it's not worth it.
With an opaque fullscreen window we can be sure that items under it don't
actually require us to repaint. This should yield some small efficiency
improvements and resolve stutter with adaptive sync.
BUG: 443872
FIXED-IN: 5.23.3
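The underlying idea is simple occlusion culling; as a hypothetical sketch
(not the actual KWin code), painting can stop as soon as an item is found
that is opaque and covers the whole output:

    #include <QRect>
    #include <QVector>

    struct Item
    {
        QRect geometry;
        bool opaque;
    };

    // Hypothetical sketch: walk the items from topmost to bottommost and
    // return how many of them actually need to be painted; everything below
    // the first opaque item covering the whole output is fully occluded.
    int itemsToPaint(const QVector<Item> &topToBottom, const QRect &output)
    {
        int count = 0;
        for (const Item &item : topToBottom) {
            ++count;
            if (item.opaque && item.geometry.contains(output)) {
                break; // nothing below this item can be visible
            }
        }
        return count;
    }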
When binding, we just need to be talking to the one client to make sure
it's set up. This saves us from waking up every other process only to
realise that nothing happened.
Windows in workspace.clientList() are sorted in the map order. This
means that the minimize all script will try to activate the last mapped
window when unminimizing windows, which is a bit annoying.
This change ensures that the minimize all script doesn't activate the wrong
window by minimizing and unminimizing windows in the stacking order.
It's not a bullet-proof solution, but it should produce good enough
results.