The Item schedules repaints per scene delegate. Currently, no scene
delegates are attached when the software cursor is used, which results in
the cursor freezing as soon as it stops moving.
The issue is addressed by using SceneDelegate instead of RenderLayerDelegate.
The proposed code is not great, but on the other hand, the plan is to
embed the software cursor in the workspace scene if needed.
BUG: 490440
While the focus pattern follows rather naturally for pointers and
keyboards, on touch screens it doesn't so much.
This change adapts our touch infrastructure to allow multiple surfaces
to receive touch events without forcing all interactions into the same
one.
Signed-off-by: Victoria Fischer <victoria.fischer@mercedes-benz.com>
The shape region is used to clip the window contents. In practice,
though, it's used by only a few applications, the most notable being xeyes.
The APIs that the shape region requires are manageable, but a much
simpler design would be preferred.
As for shape region support on Wayland, it's not widely supported across
Wayland compositors; to my knowledge, only two support it, and some
compositors even want to disable the XSHAPE extension altogether.
This change makes the Xwayland windows ignore the shape region to see if
any real-world applications are affected by it. If not, then we could
safely simplify the scene bits later.
The Xorg session is unaffected by this change.
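For context, a minimal sketch of what an XSHAPE client such as xeyes does
to set a bounding shape region, written against plain xcb; the window is
assumed to already exist and error handling is omitted:

    #include <xcb/xcb.h>
    #include <xcb/shape.h>

    // Replace the window's bounding shape with two overlapping rectangles,
    // giving it a non-rectangular outline that the compositor must clip to.
    void setCrossShape(xcb_connection_t *conn, xcb_window_t window)
    {
        const xcb_rectangle_t rects[] = {
            { 10, 0, 80, 100 }, // x, y, width, height
            { 0, 10, 100, 80 },
        };
        xcb_shape_rectangles(conn,
                             XCB_SHAPE_SO_SET,       // replace the current shape
                             XCB_SHAPE_SK_BOUNDING,  // the outer (bounding) shape
                             XCB_CLIP_ORDERING_UNSORTED,
                             window,
                             0, 0,                   // x/y offset
                             2, rects);
        xcb_flush(conn);
    }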
It's used only by the decoration renderer, and even that doesn't need it
because the atlas parts are padded.
From the API point of view, it's worth looking for alternative solutions,
like integrating the render target clear step into the render passes.
Texture uploading code usually doesn't need to clear the texture anyway,
because it is going to overwrite its contents.
The main motivation behind this change is to make the ItemRendererOpenGL
use a homogeneous coordinate space for texture coordinates in order to
simplify rendering code.
Device pixels have been chosen because they are more agnostic about
the graphics API.
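As a rough, non-KWin illustration of why device-pixel texture coordinates
are convenient: the renderer only needs the texture size to map them to
the normalized range that OpenGL samples from, so every texture can be
treated uniformly:

    #include <QMatrix4x4>
    #include <QSizeF>

    // Build a matrix that maps device-pixel texture coordinates to the
    // normalized [0, 1] range expected by OpenGL sampling.
    QMatrix4x4 deviceToNormalized(const QSizeF &textureSize)
    {
        QMatrix4x4 matrix;
        matrix.scale(1.0f / textureSize.width(), 1.0f / textureSize.height());
        return matrix;
    }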
Some clients have two or more completely opaque surfaces stacked on top of each other.
Optimizing the lower ones out makes direct scanout happen more often and more efficiently
when multiple planes are involved.
The alpha modifier protocol allows clients to set a multiplier for the opacity
of a surface, letting them offload some operations to KWin, which in turn may
offload them to KMS in the future.
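For illustration, a hedged client-side sketch of how a client might use
alpha-modifier-v1; the generated header name and the already-bound manager
and surface objects are assumptions, and the multiplier is a uint32 where
UINT32_MAX means fully opaque:

    #include <cstdint>
    #include <limits>
    #include <wayland-client.h>
    #include "alpha-modifier-v1-client-protocol.h" // generated by wayland-scanner (assumed name)

    void setSurfaceOpacity(wp_alpha_modifier_v1 *alphaManager,
                           wl_surface *surface, double opacity)
    {
        wp_alpha_modifier_surface_v1 *alphaSurface =
            wp_alpha_modifier_v1_get_surface(alphaManager, surface);
        // Map [0.0, 1.0] to the protocol's uint32 multiplier range.
        const uint32_t factor =
            static_cast<uint32_t>(opacity * std::numeric_limits<uint32_t>::max());
        wp_alpha_modifier_surface_v1_set_multiplier(alphaSurface, factor);
        // The new multiplier is double-buffered and takes effect on the
        // next wl_surface.commit.
    }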
When a wl_surface is unmapped, we need to stop updating the buffer
in SurfacePixmapWayland.
However, SurfaceItemWayland::freeze() doesn't unset m_surface, so
the SurfacePixmapWayland keeps updating the buffer even after the
surface is unmapped. This results in some closed windows losing their
contents when playing a window closing animation.
Right now it's just a helper to mark items as being affected by some effect,
so that direct scanout of the relevant item can be prevented without needing
to block direct scanout for the whole screen.
Rendering intents describe how to handle mapping between different colorspaces:
what to do with out-of-gamut values and what to do if the white point doesn't match.
This way, clients can choose which behavior their content should get.
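As a purely conceptual sketch (not KWin's color pipeline), the choice a
client makes roughly boils down to clipping out-of-gamut values versus
compressing everything to fit; both helpers below are made up for
illustration:

    #include <algorithm>

    struct RGB { double r, g, b; };

    // Colorimetric-style handling: out-of-gamut values are clamped, so
    // in-gamut colors stay exact but detail outside the gamut is lost.
    RGB clipToGamut(const RGB &c)
    {
        return { std::clamp(c.r, 0.0, 1.0),
                 std::clamp(c.g, 0.0, 1.0),
                 std::clamp(c.b, 0.0, 1.0) };
    }

    // A crude stand-in for perceptual-style handling: scale the pixel so its
    // largest channel fits, trading accuracy of in-gamut colors for keeping
    // some out-of-gamut detail.
    RGB compressToGamut(const RGB &c)
    {
        const double peak = std::max({ c.r, c.g, c.b, 1.0 });
        return { c.r / peak, c.g / peak, c.b / peak };
    }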
The mastering display colorimetry describes what part of the colorspace
is actually used, which is important when we're sending desired metadata
about a screen using the rec.2020 container colorspace, or when the client
uses an "infinite" / extended colorspace like scRGB
This moves some of the responsibilities up in the stack, which simplifies
the backends and opens up some future possibilities like making direct scanout
work for non-surface items.
We don't need a pixmap for direct scanout, and the drm backend destroys the pixmap
when direct scanout is successful... so this check created a loop of direct scanout
working and not working, and worse, the client reallocating its buffers each time.
BUG: 485639
BUG: 485730
BUG: 485712
CCBUG: 477016
linux-drm-syncobj-v1 allows drivers and apps to synchronize KWin's buffer access
to their rendering, and synchronize their rendering to KWin's buffer release. This
fixes severe glitches with the proprietary NVIDIA driver and allows for some
performance improvements with Mesa too.
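A hedged client-side sketch of the idea, assuming a generated protocol
header, an already-bound manager, and a DRM syncobj fd; the timeline points
are split into hi/lo halves because the protocol requests take 32-bit pairs:

    #include <cstdint>
    #include <wayland-client.h>
    #include "linux-drm-syncobj-v1-client-protocol.h" // generated header (assumed name)

    void commitWithExplicitSync(wp_linux_drm_syncobj_manager_v1 *manager,
                                wl_surface *surface, int syncobjFd,
                                uint64_t acquirePoint, uint64_t releasePoint)
    {
        wp_linux_drm_syncobj_surface_v1 *syncSurface =
            wp_linux_drm_syncobj_manager_v1_get_surface(manager, surface);
        wp_linux_drm_syncobj_timeline_v1 *timeline =
            wp_linux_drm_syncobj_manager_v1_import_timeline(manager, syncobjFd);

        // The compositor must wait for acquirePoint before reading the buffer...
        wp_linux_drm_syncobj_surface_v1_set_acquire_point(
            syncSurface, timeline,
            static_cast<uint32_t>(acquirePoint >> 32),
            static_cast<uint32_t>(acquirePoint & 0xffffffff));
        // ...and must signal releasePoint once it no longer needs the buffer.
        wp_linux_drm_syncobj_surface_v1_set_release_point(
            syncSurface, timeline,
            static_cast<uint32_t>(releasePoint >> 32),
            static_cast<uint32_t>(releasePoint & 0xffffffff));

        wl_surface_commit(surface);
    }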
If two items display image data, the item renderer needs to special case
each item. It's not an extensible design, and my long term goal is to
introduce a separate tree specifically to solve this problem and also
help with computing the repaint damage automatically, instead of issuing
scheduleRepaint()s manually.
The first step is to refactor the item renderer so it merely takes the
input data and renders it. At the moment, it's not exactly the case
because surface textures are updated while painting the items, which
inherently requires special casing. This change moves surface texture
update code to the surface item so it's easier to refactor rendering code
in the item renderer.
Now that we have Wayland around, there's a whole branch of dependencies
that shouldn't be necessary anymore.
This allows building KWin without all of it, giving us a much more
compact setup for cases where all the legacy software isn't necessary
anymore.
Bundle KWindowSystem X11-specific headers into it too, since it's part
of the same process.
Signed-off-by: Victoria Fischer <victoria.fischer@mbition.io>
It's needed to properly render transformed overlay items. Ideally, the
ItemRenderer would split items that can be rendered with and without the
scissor test on its own. But we are not there yet, so pass the
PAINT_SCREEN_TRANSFORMED flag to force the ItemRendererOpenGL to use
hardware clipping.
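"Hardware clipping" here refers to the OpenGL scissor test, which discards
fragments outside a device-pixel rectangle on the GPU; a generic sketch of
what enabling it looks like (not the actual ItemRendererOpenGL code):

    #include <epoxy/gl.h>

    // Clip all subsequent draws to a device-pixel rectangle; the origin is
    // at the bottom-left corner of the framebuffer.
    void drawWithScissor(GLint x, GLint y, GLsizei width, GLsizei height)
    {
        glEnable(GL_SCISSOR_TEST);
        glScissor(x, y, width, height);
        // ... draw the transformed overlay item here ...
        glDisable(GL_SCISSOR_TEST);
    }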
Opaque is a QRegion in logical pixels; using .toRect() will round to the
nearest integer in either direction. This can mean an area is considered
opaque outside the rendered area, leading to glitchy contents on
shadows.
This is most noticeable on X11 windows when fractional scaling is
used.
Long term I hope to move Item::opaque to QList<QRectF> and
WindowPrePaintData::opaque to device pixels.
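To illustrate the hazard with a hypothetical helper (roundedInside() is
not an existing Qt or KWin function): at 1.25 scale a logical rect can land
on a fraction, and .toRect() may then include a pixel column that is only
partially covered, whereas rounding inwards can only shrink the opaque area:

    #include <QRect>
    #include <QRectF>
    #include <QtMath>
    #include <algorithm>

    // Largest integer rect fully contained in the fractional rect.
    QRect roundedInside(const QRectF &rect)
    {
        const int left = qCeil(rect.left());
        const int top = qCeil(rect.top());
        const int right = qFloor(rect.right());
        const int bottom = qFloor(rect.bottom());
        return QRect(left, top, std::max(0, right - left), std::max(0, bottom - top));
    }

    // Example: QRect(5, 5, 64, 64) scaled by 1.25 is QRectF(6.25, 6.25, 80, 80).
    // QRectF(6.25, 6.25, 80, 80).toRect()        -> QRect(6, 6, 80, 80), which
    //   marks the partially covered column at x = 6 as opaque.
    // roundedInside(QRectF(6.25, 6.25, 80, 80))  -> QRect(7, 7, 79, 79), which
    //   stays entirely inside the rendered area.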
Window::layoutDecorationRects() uses KDecoration2::Decoration::rect() to
get the bounding decoration rect.
While Decoration::rect() should normally match Window::rect(), they can
diverge for a brief moment during async geometry updates. The worst
possible case is that the cached item quads may not be invalidated when
the geometry updates settle.
To fix that, make DecorationItem monitor decorated client size changes
instead of window frame geometry changes. The reason for that is that
Decoration::size() is effectively decorated client size with added border
margins.
This way no extra buffer space is going to be wasted for a decoration
that isn't there, and it might be nicer for fractional scaling as KWin
won't need to deal with border size voodoo cases.
A window is added to the workspace when it's mapped. It's assumed that
the first Window::windowShown signal indicates that. But it's not
entirely true. For example, if setHidden(false); setHidden(true); are
called in succession, the window will be marked as ready for painting
even though it isn't.
The Window::readyForPaintingChanged() signal fixes that. It's emitted
when the window is actually mapped.