On Wayland, a window can have subsurfaces. The spec doesn't require the
main surface and its sub-surfaces to have the same scale factor.
Given that Toplevel::bufferScale() therefore makes no sense for Wayland
windows, this change drops it to make the code more reasonable and to
prevent new code from relying on it.
With a persistent vbo, kwin allocates one sufficiently large buffer and
sub-allocates memory out of it.
In order to prevent overwriting vertex buffer data that is still being
accessed by the GPU, fences are inserted at the end of the frame.
Currently, signaled fences are destroyed after the buffer swap operation,
which seems a bit odd because the just-inserted fence most likely won't be
signaled yet. Perhaps it's a historical artifact?
This change rearranges fence cleanup so it's performed right before
starting a new frame. With it, kwin will most likely re-use the
previously used memory chunk because there will be plenty of time for
the fence to become signaled.
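A minimal sketch of the idea, assuming libepoxy for the GL entry points and
hypothetical helper names (KWin's actual GLVertexBuffer code is more
involved): a fence is inserted when a frame is posted, and signaled fences
are reaped right before the next frame starts, freeing their buffer regions
for reuse.

    #include <epoxy/gl.h>
    #include <cstddef>
    #include <deque>

    struct FrameFence {
        GLsync sync;
        size_t bufferOffset; // end of the vbo region used by that frame
    };

    std::deque<FrameFence> fences;

    void framePosted(size_t usedUpTo)
    {
        // Guard everything written so far; the fence signals once the GPU
        // has finished processing the commands that read from the buffer.
        fences.push_back({glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0), usedUpTo});
    }

    void beginFrame()
    {
        // Reap fences that have been signaled since the last swap. A whole
        // frame has passed, so the oldest fences are very likely signaled
        // and their memory chunks can be reused immediately.
        while (!fences.empty()) {
            const GLenum status = glClientWaitSync(fences.front().sync, 0, 0);
            if (status != GL_ALREADY_SIGNALED && status != GL_CONDITION_SATISFIED) {
                break;
            }
            glDeleteSync(fences.front().sync);
            fences.pop_front(); // the region up to bufferOffset is free again
        }
    }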
Another motivation behind this change is to make refactoring SceneOpenGL
code easier. As is, m_backend->endFrame() is wrapped between
GLVertexBuffer::endOfFrame() and GLVertexBuffer::framePosted(). Because of
that, the Compositor can't call m_backend->endFrame() itself, which would
be desirable for cleaning up render backend abstractions.
Currently, when screencasting a window, kwin may render the window into a
temporary offscreen texture, copy that offscreen texture to the dma-buf
render target, and then discard the offscreen texture.
Allocating and deallocating offscreen textures is inefficient. Another
issue is that the screencast plugin uses Scene::Window::windowTexture(),
which is a blocker for killing scene windows.
This change introduces a base ScreenCastSource type. It allows us to
move away from Scene::Window::windowTexture() and make the dma-buf code
path efficient with applications such as Firefox that utilize
sub-surfaces.
With the ScreenCastSource, kwin can also provide screen cast frames with
an arbitrary device pixel ratio.
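A rough sketch of what such a source abstraction could look like (the names
below are illustrative, not necessarily the actual KWin API): the screencast
plugin talks only to this interface, so it no longer needs
Scene::Window::windowTexture(), and a source can render straight into the
dma-buf backed render target instead of going through a temporary offscreen
texture.

    #include <QSize>

    class GLRenderTarget;

    class ScreenCastSource
    {
    public:
        virtual ~ScreenCastSource() = default;

        virtual bool hasAlphaChannel() const = 0;

        // Size in device pixels, which lets the source provide frames with
        // an arbitrary device pixel ratio.
        virtual QSize textureSize() const = 0;

        // Render the current contents directly into the given render target,
        // e.g. one backed by a dma-buf, avoiding intermediate copies.
        virtual void render(GLRenderTarget *target) = 0;
    };

    // Concrete sources could then wrap a window, an output, etc., e.g.
    // class WindowScreenCastSource : public ScreenCastSource { ... };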
Like for toplevel clients, apply plasmashell roles to popups as well, but
limit them: the dock and desktop roles are not allowed on popups, as they
don't make sense there.
This makes it possible to recognize plasma tooltips as tooltips, treating
them in a way closer to X, and makes morphingpopups work on Wayland.
After the user edits the name of a desktop, the search field is no longer
focused. If the user then starts typing, one could expect the text to be
forwarded to the search field without requiring a click.
This change forwards unhandled key events to the search field to ensure
that searching is intuitive.
Since binary effects are installed in their own directory, checking the
service type is redundant. Also, KPluginMetaData::serviceTypes() has
been deprecated.
Task: https://phabricator.kde.org/T14483
The ifdefs for have_gbm obfuscate the code unnecessarily - the drm backend
is not a great experience with qpainter, so in practice no one should ship
it without gbm anyway.
The Compositor contains nothing that can potentially get dirty and need
repainting.
As is, the advantages of this move aren't really noticeable, but it
makes sense with multiple scenes.
Backend parts are far from ideal; they can be improved later on as we
progress with the scene redesign.
The main idea behind the render backend is to decouple low-level bits
from scenes. The end goal is to make the render backend provide render
targets where the scene can render.
Design-wise, such a split is more flexible than the current state. For
example, we could start experimenting with QtQuick (assuming that the
legacy scene is properly encapsulated) or create multiple scenes, e.g.
one per output layer.
So far, the RenderBackend class contains only one getter; more stuff will
be moved from the Scene as it makes sense.
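An illustrative sketch of the intended split (the class and method names
below are assumptions, not the exact KWin API): the RenderBackend owns the
low-level details and, eventually, hands out render targets, while the
Scene only paints into whatever target it is given.

    class Output;
    class RenderTarget;

    class RenderBackend
    {
    public:
        virtual ~RenderBackend() = default;

        // For now, only simple getters live here...
        virtual bool supportsDirectScanout() const { return false; }

        // ...but the end goal is for the backend to hand the scene a render
        // target to paint into and to accept the finished frame back for
        // presentation.
        virtual RenderTarget *beginFrame(Output *output) = 0;
        virtual void endFrame(Output *output) = 0;
    };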
The proprietary NVidia driver now supports gbm, which vastly improves the
user experience. For older devices that will not get gbm support, dropping
EglStreams will likely not have a big impact, as it has several
session-breaking issues anyway.
By removing the backend, a lot of logic can be simplified, most notably
multi-gpu handling.
The current "Minimize Overlapping" window placement tends to position
windows in locations that seem completely random, typically in a screen
corner. It is doing this because, true to its name, it is trying to
avoid overlapping other windows as much as possible. However, in practice
this is rarely helpful. When the user opens a new window, it's because
they want to use it, and positioning the window far from where the
user is likely to be looking is counter-productive. This is even more
true on today's large and wide displays, where placing the window in a
corner may position it entirely outside the user's current field of
vision. We get bug reports about this exact issue for notifications
(which always appear in a corner by default) from users of such screens.
For notifications, this can be justifiable because notifications are
designed to be ignorable; app windows, on the other hand, are not.
As a result, I commonly see Plasma users open windows and then
immediately, reflexively grab the window's titlebar and drag it to the
center of the screen. I have seen my wife do this. I have seen every
YouTube reviewer of Plasma do this. I have even seen fellow KDE
developers at sprints do this. It seems like quite a common impulse
to want a newly-opened window to appear in the center of the screen,
which is where the user is likely to already be looking.
Thankfully, KWin already has a window placement mode that does this
automatically: "Centered". Accordingly, this commit changes the default
KWin window placement mode from "Minimize Overlapping" to "Centered".
No kconf migration script is provided because this is a better default
for most people in most cases, and existing users are highly likely to
appreciate this change.
The main motivation behind this change is to move management of drm
blobs out of property wrappers and into specialized wrappers, to simplify
state management with blobs.
Connector mode blobs are created on demand.
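A minimal sketch of what such a specialized wrapper could look like (the
class is hypothetical, not the exact KWin implementation): the blob is
created from the given data and destroyed with the wrapper, so property
wrappers no longer need to track blob lifetimes themselves, and mode blobs
can be created lazily.

    #include <xf86drmMode.h>
    #include <cstddef>
    #include <cstdint>

    class DrmBlob
    {
    public:
        DrmBlob(int fd, const void *data, size_t length)
            : m_fd(fd)
        {
            if (drmModeCreatePropertyBlob(fd, data, length, &m_blobId) != 0) {
                m_blobId = 0; // creation failed; blobId() == 0 means "no blob"
            }
        }
        ~DrmBlob()
        {
            if (m_blobId) {
                drmModeDestroyPropertyBlob(m_fd, m_blobId);
            }
        }
        uint32_t blobId() const { return m_blobId; }

    private:
        int m_fd;
        uint32_t m_blobId = 0;
    };

    // A connector mode blob can then be created on demand, e.g.:
    // if (!m_modeBlob) {
    //     m_modeBlob = std::make_shared<DrmBlob>(fd, &modeInfo, sizeof(modeInfo));
    // }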
When we switch CRTCs, it can happen that a CRTC stays enabled yet no
longer has any connectors. In this case the kernel may reject our atomic
commit, which would cause the modeset to fail. To counteract that,
properly disable unused drm objects.
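A simplified sketch of the idea (property IDs would normally be looked up
from the kernel first; the helper below is hypothetical): unused CRTCs are
explicitly turned off in the same atomic request that performs the rest of
the modeset, so the kernel never sees an enabled CRTC without connectors.

    #include <xf86drmMode.h>

    void disableUnusedCrtc(drmModeAtomicReq *req, uint32_t crtcId,
                           uint32_t activePropId, uint32_t modeIdPropId)
    {
        // ACTIVE = 0 and MODE_ID = 0 shut the CRTC down cleanly as part of
        // the same commit, instead of leaving it enabled with no connectors.
        drmModeAtomicAddProperty(req, crtcId, activePropId, 0);
        drmModeAtomicAddProperty(req, crtcId, modeIdPropId, 0);
    }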
Currently KWin combines modesets with presentation, which causes problems
when multiple monitors are used and CRTCs need to be switched around,
because taking away a CRTC from another output causes the driver to
disable the other output. In order to avoid such problems, delay
presentation until all pipelines are ready to present and then do the
modeset with a single atomic commit. To process the resulting page flip
events properly, this commit also ports KWin to page_flip_handler2 and
changes how the pageFlipped and notifyFrameFailed signals are processed.
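A sketch of the relevant libdrm plumbing (simplified compared to the actual
KWin code): page_flip_handler2 also reports the CRTC id, which makes it
possible to match a flip event back to the right pipeline when several
CRTCs are committed at once.

    #include <xf86drm.h>
    #include <xf86drmMode.h>

    static void pageFlipHandler2(int fd, unsigned int sequence,
                                 unsigned int tv_sec, unsigned int tv_usec,
                                 unsigned int crtc_id, void *user_data)
    {
        // Look up the pipeline that drives crtc_id and notify it that its
        // frame was presented (or failed), rather than guessing from
        // user_data alone.
    }

    void dispatchDrmEvents(int fd)
    {
        drmEventContext context = {};
        context.version = 3; // page_flip_handler2 requires context version >= 3
        context.page_flip_handler2 = pageFlipHandler2;
        drmHandleEvent(fd, &context);
    }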
Hardware constraints limit the number of CRTCs and which connector + CRTC
combinations can work together. The current code searches for working
combinations when a hotplug happens, but that's not enough: it also needs
to happen when the user enables or disables outputs and when modesets are
done, and the configuration change needs to be applied with a single
atomic commit.
This commit removes the hard dependency of DrmPipeline on CRTCs by moving
the pending state of outputs from the drm objects to DrmPipeline itself,
which ensures that it's independent of the set of drm objects currently
in use. It also changes requests from KScreen to be applied truly
atomically.
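A rough illustration of how a candidate connector/CRTC assignment can be
validated and applied in one go (the helper is hypothetical): the full
configuration is built into one atomic request, tested with TEST_ONLY, and
committed for real only if the kernel accepts it.

    #include <xf86drm.h>
    #include <xf86drmMode.h>

    bool testAndCommitConfiguration(int fd, drmModeAtomicReq *req)
    {
        // TEST_ONLY lets the kernel check hardware constraints (available
        // CRTCs, valid connector + CRTC combinations) without touching the
        // display.
        if (drmModeAtomicCommit(fd, req,
                                DRM_MODE_ATOMIC_TEST_ONLY | DRM_MODE_ATOMIC_ALLOW_MODESET,
                                nullptr) != 0) {
            return false; // try the next connector + CRTC combination
        }
        // The configuration is valid, so apply it with a single real commit.
        return drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_ALLOW_MODESET,
                                   nullptr) == 0;
    }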