Windows have two kinds of repaints - window repaints and layer repaints.
The main difference between the two is that the former are specified in
window-local coordinates, while the latter are specified in global
screen coordinates.
Window repaints are useful when the position of the window doesn't
matter, for example for repainting damaged regions. But their biggest
issue is that with per screen rendering, it's not possible to determine
exactly which screens have to be repainted. The final area affected by
a window repaint is known only at compositing time. If a window gets
damaged, we have to schedule a repaint on ALL outputs. Understandably,
this costs a little bit in terms of performance.
This change replaces the window repaints with the layer repaints. By
doing so, we can avoid scheduling repaints on outputs that don't
intersect with the dirty region and improve performance.
At the moment, our frame scheduling infrastructure is still heavily
based on Xinerama-style rendering. Specifically, we assume that painting
is driven by a single timer, etc.
This change introduces a new type - RenderLoop. Its main purpose is to
drive compositing on a specific output, or in case of X11, on the
overlay window.
With RenderLoop, compositing is synchronized to vblank events. It
exposes the last and the next estimated presentation timestamp. The
expected presentation timestamp can be used by effects to ensure that
animations are synchronized with the upcoming vblank event.
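For illustration, such an interface could look roughly like the sketch
below. The method and signal names are assumptions for the example, not
necessarily the final API:

    #include <QObject>
    #include <chrono>

    class RenderLoop : public QObject
    {
        Q_OBJECT
    public:
        // Timestamp of the last presented frame.
        std::chrono::nanoseconds lastPresentationTimestamp() const;
        // Estimated timestamp of the upcoming presentation (next vblank).
        std::chrono::nanoseconds nextPresentationTimestamp() const;
        // Ask the loop to schedule a new compositing cycle.
        void scheduleRepaint();

    Q_SIGNALS:
        // Emitted when compositing should start for the next frame.
        void frameRequested(RenderLoop *loop);
    };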
On Wayland, every output has its own render loop. On X11, per screen
rendering is not possible, therefore the platform exposes the render
loop for the overlay window. Ideally, the Scene should expose the
RenderLoop, but as a first step towards better compositing scheduling,
the current arrangement is good enough for the time being.
The RenderLoop tries to minimize latency by delaying compositing as
close as possible to the next vblank event. One tricky thing here is
that if compositing starts too close to the next vblank event,
animations may become a little bit choppy; increasing the latency
reduces the choppiness.
Given that there is no "silver bullet" solution for the choppiness
issue, a new option has been added in the Compositing KCM to specify
the amount of latency. By default, it's "Medium," but if a user is not
satisfied with the upstream default, they can tweak it.
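Conceptually, the scheduling boils down to something like the following
sketch; the helper name and parameters are illustrative, with the
latency option feeding the safety margin:

    #include <algorithm>
    #include <chrono>

    using namespace std::chrono;

    // How long to wait before starting to composite the next frame.
    nanoseconds compositingDelay(nanoseconds timeUntilVblank,
                                 nanoseconds expectedRenderTime,
                                 nanoseconds latencyMargin)
    {
        // Start as late as possible while still leaving enough time to
        // render the frame plus the user-selected safety margin.
        return std::max(timeUntilVblank - expectedRenderTime - latencyMargin,
                        nanoseconds::zero());
    }

A larger latencyMargin trades input latency for smoother animations,
which is exactly the trade-off the new KCM option exposes.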
In order to unlock per screen rendering, we need to track repaints for
every screen individually. While we could do this in the Compositor class,
tracking repaints in the Scene seems a better alternative in the long
run because we will have to instantiate a Scene for each composited
screen one day.
In rare cases, the Compositor has to perform compositing even if there
is nothing to repaint, for example, when a client has committed a frame
callback to get notified about the next vblank event without damaging
the surface.
The compositing timing algorithm assumes that glXSwapBuffers() and
eglSwapBuffers() block. While this was true a long time ago with NVIDIA
drivers, nowadays it's not the case. The NVIDIA driver queues several
buffers in advance, and if the application runs out of them, it will
block. With Mesa drivers, swapping buffers has never been blocking.
This change makes the render backends swap buffers right after ending
a compositing cycle. This may potentially block, but it shouldn't be
an issue with modern drivers. If it proves to be one, we can move
glXSwapBuffers() and eglSwapBuffers() to a separate thread.
Note that this change breaks the compositing timing algorithm, but
it's already sort of broken with Mesa drivers.
Effects are given the interval between two consecutive frames. The main
flaw of this approach is that if the Compositor transitions from the
idle state to the "active" state, i.e. when there is something to
repaint, effects may see a very large interval between the last painted
frame and the current one. In order to address this issue, the Scene
invalidates the timer that is used to measure time between consecutive
frames before the Compositor is about to become idle.
While this works perfectly fine with Xinerama-style rendering, with
per screen rendering, determining whether the compositor is about to
idle is a rather tedious task, mostly because a single output can't be
used for the test.
Furthermore, since the Compositor schedules pointless repaints just to
ensure that it's idle, it might take several attempts to figure out
whether the scene timer must be invalidated when (true) per screen
rendering is used.
Ideally, all effects should use a timeline helper that is aware of the
underlying render loop and its timings. However, this option is off the
table because it would involve a lot of work to implement.
An alternative and much simpler option is to pass the expected
presentation time to effects rather than the time between consecutive
frames. This means that effects are responsible for determining how far
animation timelines have to be advanced. Typically, an effect stores
the presentation timestamp provided in prePaint{Screen,Window} and uses
it in the subsequent prePaint{Screen,Window} call to estimate the
amount of time that has passed between the two frames.
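A port might look roughly like this (an illustrative sketch; the
m_lastPresentTime member and the m_timeline helper are assumptions for
the example):

    void MyEffect::prePaintScreen(ScreenPrePaintData &data,
                                  std::chrono::milliseconds presentTime)
    {
        // Time between the upcoming frame and the previously painted one.
        // On the very first frame there is no previous timestamp yet.
        std::chrono::milliseconds delta(0);
        if (m_lastPresentTime.count() != 0) {
            delta = presentTime - m_lastPresentTime;
        }
        m_lastPresentTime = presentTime;

        m_timeline.advance(delta); // hypothetical animation helper
        effects->prePaintScreen(data, presentTime);
    }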
Unfortunately, this is an API incompatible change. However, it shouldn't
take a lot of work to port third-party binary effects, which don't use the
AnimationEffect class, to the new API. On the bright side, we no longer
need to be concerned about the Compositor getting idle.
We do still try to determine whether the Compositor is about to idle,
primarily, because the OpenGL render backend swaps buffers on present,
but that will change with the ongoing compositing timing rework.
In order to allow per screen rendering, we need the Compositor to be
able to drive rendering on each screen. Currently, that's not possible
because Scene::paint() paints all screens.
With this change, the Compositor will be able to ask the Scene to paint
only the screen with a given id.
Once the main surface has been unmapped, we are no longer interested in
any changes that indicate that the window quads cache should be
discarded.
This also fixes a bug where the scene holds a subsurface monitor object
even after the associated window has been destroyed.
AnimationEffect schedules repaints in postPaintWindow() and performs
cleanup in prePaintScreen(). With X11-style rendering, this doesn't
cause any issues: scheduled repaints will be reset during the next
compositing cycle.
But with per screen rendering, we might hit the following case:
- Paint screen 0
- Reset scheduled repaints
- AnimationEffect::prePaintScreen(): update the timeline
- AnimationEffect::postPaintScreen(): schedule a repaint
- Paint screen 1
- Reset scheduled repaints
- AnimationEffect::prePaintScreen(): destroy the animation
- AnimationEffect::postPaintScreen(): no repaint is scheduled
- Return to the event loop
In this scenario, the repaint region scheduled by AnimationEffect will
be lost when compositing is performed on screen 1.
There is no other way to fix this issue but to maintain repaint regions
for each individual screen when per screen rendering is enabled.
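A minimal sketch of such bookkeeping (the container layout and the
screenGeometry() helper are assumptions):

    QVector<QRegion> m_repaints; // one dirty region per screen

    void Scene::addRepaint(const QRegion &region)
    {
        for (int screenId = 0; screenId < m_repaints.count(); ++screenId) {
            // Accumulate only the part that intersects this screen, so
            // a repaint scheduled during the paint pass of screen 0 can
            // no longer be lost when screen 1 is painted.
            const QRegion dirty = region & screenGeometry(screenId);
            if (!dirty.isEmpty()) {
                m_repaints[screenId] += dirty;
            }
        }
    }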
BUG: 428439
If you play a video and the software cursor doesn't hover over it, the
shadow cast by the cursor will get darker and darker with every frame.
The main reason is that kwin paints the software cursor even if the
rect behind it hasn't been damaged or repainted.
Currently, we use glFinish() to ensure that stream consumers don't see
corrupted or rather incomplete buffers. This is a serious issue because
glFinish() not only prevents the gpu from processing new GL commands,
but it also blocks the compositor.
This change addresses the blocking issue by using native fences. With
the proposed change, after finishing recording a frame, a fence is
inserted in the command stream. When the native fence is signaled, the
pending pipewire buffer will be enqueued.
If the EGL_ANDROID_native_fence_sync extension is not supported, we'll
fall back to using glFinish().
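Roughly, the non-blocking path could look like the sketch below. The
extension entry points are resolved via eglGetProcAddress() in
practice; enqueuePendingBuffer() is a hypothetical helper, and includes
and error handling are omitted for brevity:

    // After recording a frame, insert a native fence into the command
    // stream instead of calling glFinish().
    EGLSyncKHR sync = eglCreateSyncKHR(eglDisplay,
                                       EGL_SYNC_NATIVE_FENCE_ANDROID, nullptr);
    glFlush(); // submit pending commands so the fence fd becomes available
    const int fenceFd = eglDupNativeFenceFDANDROID(eglDisplay, sync);

    // Enqueue the pipewire buffer once the fence fd signals readable,
    // instead of blocking the compositor.
    auto notifier = new QSocketNotifier(fenceFd, QSocketNotifier::Read);
    QObject::connect(notifier, &QSocketNotifier::activated, [=]() {
        enqueuePendingBuffer(); // hypothetical: hand the buffer to pipewire
        eglDestroySyncKHR(eglDisplay, sync);
        close(fenceFd);
        notifier->deleteLater();
    });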
Every time Platform::supportsQpaContext() is called, we go through the
list of supported extensions and perform a string comparison. This is
not exactly cheap.
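One way to avoid the repeated scan is to cache the result, e.g. (a
sketch; checkQpaContextSupport() stands in for the existing string
comparison, and m_supportsQpaContext would be a mutable
std::optional<bool> member):

    bool Platform::supportsQpaContext() const
    {
        // Perform the expensive extension-string scan only once.
        if (!m_supportsQpaContext.has_value()) {
            m_supportsQpaContext = checkQpaContextSupport();
        }
        return *m_supportsQpaContext;
    }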
Uses a setter and clear method pattern rather than having the code
repeated.
Instead of keeping a QPointer, we are now a QObject and get notified
about the pending destruction directly, so we can clear the pointer
when necessary.
Currently, we don't compute the clip region properly for some client-
side decorated applications, for example gedit, due to mixing several
separate coordinate spaces.
This change ensures that the window pixmap shape and the opaque region
are in the same coordinate space - the window pixmap coordinates.
In order to simplify mapping regions from the window pixmap coordinates
to the global screen coordinates, a new helper method was introduced in
the WindowPixmap class - mapToGlobal().
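The helper might look roughly like this, assuming the pixmap knows its
offset within the window; the member names are illustrative:

    QRegion WindowPixmap::mapToGlobal(const QRegion &region) const
    {
        // window pixmap coordinates -> window-local -> global screen
        return region.translated(position() + toplevel()->pos());
    }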
Summary:
Notify the driver about the parts of the screen that will be repainted.
In some cases this can be beneficial. It is especially useful on lima
and panfrost devices (e.g. pinephone, pinebook, pinebook pro).
Test Plan:
Tested on a pinebook pro with a recent Mesa version.
Basically I implemented it, then it didn't work, and I fixed it.
As a next step, we may want to look into our damage algorithm.
The main advantage of SPDX license identifiers over the traditional
license headers is that it's more difficult to overlook inappropriate
licenses for kwin, for example GPL 3. We also don't have to copy a
lot of boilerplate text.
In order to create this change, I ran licensedigger -r -c from the
toplevel source directory.
We need a couple of connections to ensure that the window pixmap, the
window quad cache, and the window shape get discarded when the geometry
of the toplevel has been changed. Currently, those connections are
created with the receiver object being the scene. The problem is that
the associated wayland surface may outlive the toplevel, and we don't
clean up the connections after the scene window has been destroyed.
The fact that the connections don't get destroyed can lead to accessing
dangling pointers, which may result in a crash.
In order to ensure that the connections are broken automatically when
the scene window is destroyed, we need to make the receiver object the
scene window. That way, the connections will be destroyed
automatically.
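Concretely, passing the scene window as the context object lets Qt do
the cleanup for us (signal and slot names are illustrative):

    // With sceneWindow as the receiver, Qt breaks the connection
    // automatically once sceneWindow is destroyed, so the lambda can
    // never run against a dangling pointer.
    QObject::connect(toplevel, &Toplevel::geometryShapeChanged,
                     sceneWindow, [sceneWindow]() {
                         sceneWindow->invalidateQuadsCache();
                     });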
We currently deal with three distinct coordinate spaces - the window
pixmap coordinate space, the window coordinate space, and the buffer
pixel coordinate space.
This change introduces a couple of helper methods to make it easier
to map points from the window pixmap space to the other two spaces.
The main motivation behind the new helpers is to break the direct
relationship between the surface-local coordinates and buffer pixel
coordinates for wayland surfaces.
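For example, mapping to buffer pixels could look like this sketch
(accessor names assumed); on Wayland the two spaces differ by the
buffer scale, while on X11 they coincide:

    QPointF WindowPixmap::mapToBuffer(const QPointF &point) const
    {
        if (surface()) {
            // Wayland: buffer pixels = surface-local units * buffer scale.
            return point * surface()->scale();
        }
        return point; // X11: the two coordinate spaces coincide
    }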
No window quads are generated for sub-surfaces right now. This leads to
issues with effects that operate on window quads, e.g. magic lamp and
wobbly windows. Furthermore, the OpenGL scene needs window quads to
properly clip windows during the rendering process.
The best way to render sub-surfaces would be with a little help from a
scene graph. Contrary to GNOME, KDE hasn't developed any scene graph
implementation that we could use in kwin. As a short term solution, this
change adjusts the scene to generate window quads.
Window quads are generated as we traverse the current window pixmap
tree in a depth-first manner. In order to match a list of quads with a
particular WindowPixmap, we assign an id to each quad.
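In simplified form, the traversal could look like this (a sketch;
makeQuadsForRect() is a hypothetical helper, and real quad generation
also fills in texture coordinates):

    WindowQuadList buildQuads(WindowPixmap *pixmap, int *id)
    {
        WindowQuadList quads;
        const int currentId = (*id)++;

        // Generate quads for this pixmap and tag them with its id so
        // they can be matched back to the WindowPixmap later.
        quads += makeQuadsForRect(pixmap->rect(), currentId);

        // Depth-first recursion into the sub-surface children.
        const auto children = pixmap->children();
        for (WindowPixmap *child : children) {
            quads += buildQuads(child, id);
        }
        return quads;
    }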
BUG: 387313
FIXED-IN: 5.19.0
Differential Revision: https://phabricator.kde.org/D29131
We need to release the previous window pixmap if the new pixmap is
valid. However, this currently happens only when the client has
attached either an fbo buffer or a wl_buffer. If an internal client
has attached a raster buffer, the previous window pixmap won't be
released.
In order to ensure that the previous window pixmap is released no
matter what type of buffer has been attached, this change refactors
WindowPixmap to use isValid() to verify that the new window pixmap
is valid.
Differential Revision: https://phabricator.kde.org/D29131
In order to generate window quads for sub-surfaces, we need a valid
window pixmap tree. The problem is that the window pixmap tree is
created too late in the rendering process. This change adjusts the
scene so it creates window pixmap trees before buildQuads().
Differential Revision: https://phabricator.kde.org/D29131
Summary:
Screenshots taken on screens with a scale factor were downscaled by
that factor, making them blurry. This prevents taking screenshots that
demonstrate HiDPI-related bugs under Wayland.
This fixes the case of a single screenshot, but not the rest:
Multiscreen screenshots still downscale the screens by their scale
factor.
Spectacle's rectangular selection screenshot is broken as soon as a
scale factor different from 1 is used on any screen.
Test Plan:
Under Wayland with a scale factor set on a screen, take a screenshot
using Spectacle.
The output image is not downscaled and has the same size as the screen
resolution.
There is no change to any other screenshot mode, or under X.
Reviewers: davidedmundson, #kwin
Reviewed By: davidedmundson, #kwin
Subscribers: kwin
Tags: #kwin
Differential Revision: https://phabricator.kde.org/D29010
Summary:
This saves copying some objects, especially the PaintData classes,
which are not copy-on-write.
It also follows the practice in other parts of the codebase.
Test Plan: Running it right now
Reviewers: #kwin, davidedmundson
Reviewed By: #kwin, davidedmundson
Subscribers: kwin
Tags: #kwin
Differential Revision: https://phabricator.kde.org/D28031
Summary:
When a window is being interactively resized, its contents may jump.
The reason is that KWin renders a partially resized client window. The
Composite extension spec says that a window gets a new pixmap each time
it is resized or mapped. This applies to the frame window, but not to
the client window itself: if the client window is resized, the
off-screen storage for the frame window won't be reallocated.
Therefore, KWin may render a partially resized client window if the
client doesn't attempt to stay in sync with our rendering loop.
Currently, the only way to do that is to use extended frame counters,
which are not supported by KWin.
So, in order to fix visual artifacts during interactive resize, we need
to somehow force the re-allocation of off-screen storage for the frame
window. Unfortunately, the Composite extension doesn't provide any
request to do that, so the only option we have is to resize the frame
window.
BUG: 415839
FIXED-IN: 5.18.0
Reviewers: #kwin
Subscribers: davidedmundson, ngraham, alexde, fredrik, kwin
Tags: #kwin
Differential Revision: https://phabricator.kde.org/D26914
Summary:
Add a small getter to query internally whether the backend supports
swap events. It defaults to true, as that is the default in the GBM
Wayland backend.
Test Plan: i915
Reviewers: #kwin
Subscribers: kwin
Tags: #kwin
Maniphest Tasks: T11071
Differential Revision: https://phabricator.kde.org/D25298
This reverts commit 9151bb7b9e.
This reverts commit ac4dce1c20.
This reverts commit 754b72d155.
In order to make the fix work, we need to redirect the client window
instead of the frame window. However, we cannot do that because
Xwayland expects the toplevel window (in our case, the frame window)
to be redirected.
Another solution to the texture bleeding issue must be found.
CCBUG: 257566
CCBUG: 360549
Summary:
Since the KDE 4.2 - 4.3 times, KWin hasn't painted window decorations
on real X11 windows, except when compositing is turned off. This
leaves us with a problem: the actual client contents are inside a
larger texture with no useful pixel data around them. This and
decoration texture bleeding are the main factors that contribute to
the 1px gap between the server-side decoration and the client contents
with effects such as wobbly windows and zoom.
Another problem with using the frame pixmap instead of the client
pixmap is that it doesn't quite go along with wayland. It only makes
it more difficult to abstract window quad generation in the scene.
Since we don't actually need the frame window when compositing is on,
there is nothing stopping us from redirecting client windows instead
of frame windows. This will help us fix the texture bleeding issue and
also help with the ongoing redesign of the scene.
Test Plan: X11 clients are still composited.
Reviewers: #kwin, davidedmundson
Reviewed By: #kwin, davidedmundson
Subscribers: davidedmundson, kwin
Tags: #kwin
Differential Revision: https://phabricator.kde.org/D25610
Summary:
Qt has its own thing where a type might also have a corresponding list
alias, e.g. QObject and QObjectList, QWidget and QWidgetList. I don't
know why Qt does that, maybe for historical reasons, but what matters
is that we copied this pattern in KWin. While this pattern might be
useful with some long list types, for example
QList<QWeakPointer<TabBoxClient>> TabBoxClientList
in general, it causes more harm than good. For example, we've got two
new client types; do we need corresponding list typedefs for them? If
not, why do we have ClientList and so on?
Another problem with these typedefs is that you need to include the
utils.h header in order to use them. A better way to handle such
things is to just forward declare a client class (if that's possible)
and use it directly with QList or QVector. This way translation units
don't get "bloated" with utils.h stuff for no apparent reason.
So, in order to make code more consistent and easier to follow, this
change drops some of our custom typedefs. Namely ConstClientList,
ClientList, DeletedList, UnmanagedList, ToplevelList, and GroupList.
Test Plan: Compiles.
Reviewers: #kwin
Subscribers: kwin
Tags: #kwin
Differential Revision: https://phabricator.kde.org/D24950
Summary:
Currently our Scene is quite naive about geometry. It assumes that the
window frame wraps the attached buffer/client. While this is true for
X11 clients, such a geometry model is not suitable for client-side
decorated clients, in our case xdg-shell clients that set a window
geometry other than the bounding rectangle of the main surface.
In general, the proposed solution doesn't make any concrete assumptions
about the order between frame and buffer geometry, however we may still
need to reconsider the design of Scene once it starts to generate quads
for sub-surfaces.
Reviewers: #kwin, davidedmundson
Reviewed By: #kwin, davidedmundson
Subscribers: davidedmundson, romangg, kwin
Tags: #kwin
Maniphest Tasks: T10867
Differential Revision: https://phabricator.kde.org/D24462
Summary:
Compositing on X11 was done time-shifted, meaning that we painted
first, then waited one vblank interval, and presented the previous
paint result on prepareRenderingFrame. This is supposed to make sure
we don't miss the vblank and, in case buffer swaps block until
retrace, to be able to keep issuing commands and present only shortly
before the next vblank.
This is counter-intuitive and not how we do it on Wayland, or even on
Mesa with X. The reason seems to be that the GLX backend was originally
written against the NVIDIA proprietary driver, which needed this, but
nowadays even this driver defaults to non-blocking behavior on buffer
swaps.
Therefore, remove this legacy anomaly fully and present directly after
painting. We then wait one refresh cycle; in the future we can optimize
this by delaying the paint and present until shortly before vsync.
Test Plan: kwin_x11 tested on i915 and Nvidia proprietary driver.
Reviewers: #kwin
Subscribers: zzag, alexeymin, kwin
Tags: #kwin
Maniphest Tasks: T11071
Differential Revision: https://phabricator.kde.org/D23514
Summary:
Opting out of vsync does not make sense for an X11 compositor. In the
end we want clients to be able to present async if they want to, but
the compositor is supposed to send swaps with vsync to the X server in
order to not generate tearing artifacts.
There was also detection logic which did some questionable things in
case vsync was not available. I don't think this is necessary at all,
since we can just always run a timer to present with or without vsync.
Test Plan: kwin_x11 tested on i915.
Reviewers: #kwin, zzag
Subscribers: zzag, kwin
Tags: #kwin
Maniphest Tasks: T11071
Differential Revision: https://phabricator.kde.org/D23511
Summary:
In order to properly implement xdg_surface.set_window_geometry we need
two kinds of geometry - frame and buffer. The frame geometry specifies
the visible bounds of the client on the screen, excluding client-side
drop shadows. The buffer geometry specifies the rectangle that the
attached buffer or X11 pixmap occupies on the screen.
This change renames the geometry property to frameGeometry in order to
reflect the new meaning assigned to it, as well as to make it easier
to differentiate between frame geometry and buffer geometry in the
future.
Reviewers: #kwin
Subscribers: kwin
Tags: #kwin
Differential Revision: https://phabricator.kde.org/D24334
Summary:
EffectQuickView/Scene is a convenient class to render a QtQuick
scenegraph into an effect.
Current methods (such as present windows) involve creating an
underlying platform window, which is expensive, causes a headache to
filter out again in the rest of the code, and only works as an
overlay.
The new class exposes things more natively to an effect where we don't
mess with real windows, we can perform the painting anywhere in the view
and we don't have issues with hiding/closing.
QtQuick has both software and hardware accelerated modes, and kwin also
has 3 render backends. Every combination is supported.
* When used in OpenGL mode for both, we render into an FBO and export
the texture ID; then it's up to the effect to render that into the
scene.
* When using software QtQuick rendering, we blit into an image, upload
that into a KWinGLTexture, which serves as an abstraction layer, and
render that into the scene.
* When using GL for QtQuick and XRender/QPainter in kwin, everything
is rendered into the internal FBO, blitted, and exported as an image.
* When using software rendering for both, an image gets passed
directly.
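A hypothetical usage from an effect might look like this; the method
names follow the description above but are assumptions, and the QML
file name is an example:

    // Render a QML scene offscreen and obtain a texture that can be
    // composited during the effect's paint pass (OpenGL path).
    auto view = new EffectQuickScene(this);
    view->setSource(QUrl::fromLocalFile(QStringLiteral("overlay.qml")));
    view->setGeometry(QRect(0, 0, 400, 300));

    // Later, while painting:
    GLTexture *texture = view->bufferAsTexture(); // assumed accessor
    // ... draw the texture with the usual GL helpers ...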
Mouse and keyboard events can be forwarded, but only if the effect
intercepts them.
The class is meant to be generic enough that we can remove all the
QtQuick code from Aurorae.
The intention is also to replace EffectFrameImpl with this backend so
we can kill all of the EffectFrame code throughout the scenes.
The close button in present windows will also be ported to this,
simplifying that code base.
Classes that handle the rendering and handling QML are intentionally
split so that in the future we can have a declarative effects API create
overlays from within the same context. Similar to how one can
instantiate windows from a typical QML scene.
Notes:
I don't like how I pass the kwin GL context from the backends into the
effect, but I need something that works with the library separation.
It also currently has a wayland problem if I create a QOpenGLContext
before the QPA is set up with a scene - but I don't have anything
better.
I know for the EffectFrame we need an API to push things through the
effects stack to handle blur/invert etc. Will deal with that when we
port the EffectFrame.
Test Plan: Used in an effect
Reviewers: #kwin, zzag
Reviewed By: #kwin, zzag
Subscribers: zzag, kwin
Tags: #kwin
Differential Revision: https://phabricator.kde.org/D24215