Effects are given the interval between two consecutive frames. The main
flaw of this approach is that if the Compositor transitions from the idle
state to the "active" state, i.e. when there is something to repaint,
effects may see a very large interval between the last painted frame and
the current one. In order to address this issue, the Scene invalidates the
timer that is used to measure time between consecutive frames when the
Compositor is about to become idle.
While this works perfectly fine with Xinerama-style rendering, with per
screen rendering, determining whether the compositor is about to become
idle is rather tedious, mostly because no single output can be used for
the test.
Furthermore, since the Compositor schedules pointless repaints just to
ensure that it's idle, with (true) per screen rendering it might take
several attempts to figure out whether the scene timer must be
invalidated.
Ideally, all effects should use a timeline helper that is aware of the
underlying render loop and its timings. However, this option is off the
table because it would involve a lot of work to implement.
An alternative and much simpler option is to pass the expected presentation
time to effects rather than the time between consecutive frames. This means
that effects become responsible for determining how much their animation
timelines have to be advanced. Typically, an effect would store the
presentation timestamp provided in either prePaint{Screen,Window} and
use it in the subsequent prePaint{Screen,Window} call to estimate how
much time has passed since the last frame.
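As an illustration only (ExampleEffect, m_lastPresentTime, and m_timeline
are hypothetical names, not part of this change), an effect could derive
the delta like this:

    void ExampleEffect::prePaintScreen(ScreenPrePaintData &data,
                                       std::chrono::milliseconds presentTime)
    {
        // Derive the frame delta from the previously stored presentation timestamp.
        std::chrono::milliseconds delta = std::chrono::milliseconds::zero();
        if (m_lastPresentTime.count() != 0) {
            delta = presentTime - m_lastPresentTime;
        }
        m_lastPresentTime = presentTime;

        m_timeline.update(delta); // advance the animation by the elapsed time
        effects->prePaintScreen(data, presentTime);
    }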
Unfortunately, this is an API-incompatible change. However, it shouldn't
take a lot of work to port third-party binary effects, which don't use the
AnimationEffect class, to the new API. On the bright side, we no longer
need to be concerned about the Compositor becoming idle.
We do still try to determine whether the Compositor is about to become
idle, primarily because the OpenGL render backend swaps buffers on present,
but that will change with the ongoing compositing timing rework.
This change moves the XRender backend to the platformsupport directory,
similar to the OpenGL and QPainter backends. This allows putting
platform-specific logic in XRenderBackend.
The buffer offset for client-side decorated windows is not 0; this,
combined with mixing the frame position and the client size, may result in
clipped thumbnails of client-side decorated applications such as gedit.
BUG: 428595
Currently, the OpenGLBackend and the QPainterBackend have hooks to
indicate the start and the end of a compositing cycle, but the hooks have
different names in the two backends. This change fixes that inconsistency.
In order to allow per screen rendering, we need the Compositor to be
able to drive rendering on each screen. Currently, that's not possible
because Scene::paint() paints all screens.
With this change, the Compositor will be able to ask the Scene to paint
only the screen with a given id.
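Roughly the intended call pattern, as a sketch (the exact signature and
the repaintsForScreen() helper are illustrative):

    void Compositor::performCompositing(int screenId)
    {
        const QRegion repaints = repaintsForScreen(screenId); // hypothetical helper
        m_scene->paint(screenId, repaints);                   // paint only this screen
    }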
All platforms that provide support for the QPainter render backend use
per screen rendering. Since there is no way to test Xinerama-style
rendering, it's better to drop the dead code.
QGraphicsRotation and QGraphicsScale are QObject wrappers. That's not
useful in data structures where we're creating multiple of these every
frame, and it's large enough to show up in hotspot as taking over 1% of a
regular frame. We don't even use the QGraphicsRotation mapping inside the
scene, for a reason, so it's not giving us much.
It's technically an API break in libkwineffects. Pragmatically, no-one
would use these. We also lose QGraphicsScale's origin, but we never
exposed this in PaintData's public header.
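For illustration, the replacement boils down to plain value types along
these lines (the name is hypothetical):

    // No QObject machinery, cheap enough to create many times per frame.
    struct RotationData {
        QVector3D axis = QVector3D(0, 0, 1);
        QVector3D origin;
        qreal angle = 0;
    };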
If window thumbnails have to be downscaled, it's up to the application
which filter to use. Also, we don't really use the Lanczos filter because
both the x and y scale factors are 1.
AnimationEffect schedules repaints in postPaintWindow() and performs
cleanup in prePaintScreen(). With X11-style rendering, this doesn't cause
any issues; scheduled repaints will be reset during the next compositing
cycle.
But with per screen rendering, we might hit the following case:
- Paint screen 0
- Reset scheduled repaints
- AnimationEffect::prePaintScreen(): update the timeline
- AnimationEffect::postPaintScreen(): schedule a repaint
- Paint screen 1
- Reset scheduled repaints
- AnimationEffect::prePaintScreen(): destroy the animation
- AnimationEffect::postPaintScreen(): no repaint is scheduled
- Return to the event loop
In this scenario, the repaint region scheduled by AnimationEffect will
be lost when compositing is performed on screen 1.
There is no other way to fix this issue but to maintain a separate repaint
region for each screen when per screen rendering is enabled.
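A minimal sketch of the idea, assuming m_repaints is a QVector<QRegion>
indexed by screen id (the names are illustrative):

    // addRepaint() accumulates into every screen's region; rendering screen i
    // consumes only m_repaints[i].
    void Compositor::addRepaint(const QRegion &region)
    {
        for (QRegion &repaints : m_repaints) {
            repaints += region;
        }
    }

    QRegion Compositor::repaintsForScreen(int screenId)
    {
        const QRegion repaints = m_repaints[screenId];
        m_repaints[screenId] = QRegion();
        return repaints;
    }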
BUG: 428439
We use the GL_LINEAR magnification filter. This means that GL_REPEAT
wrap mode cannot be used for the software cursor because sampling texels
beyond the right texture edge is the same as sampling texels on the
left edge. This may produce undesired visual artifacts.
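The usual remedy is to clamp instead, e.g.:

    // With GL_LINEAR, clamp so texels outside the texture are never fetched
    // from the opposite edge.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);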
If an output is rotated, we will compute a transform matrix for the
cursor plane to rotate its contents.
In order to compute that matrix, we need the rect of the cursor in
device-independent pixels, the scale factor, and the output transform.
The problem is that we currently provide the rect of the cursor in native
pixels. This may result in the cursor being partially or fully clipped.
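Roughly the conversion that is needed, as a sketch (nativeRect and output
are assumed here):

    // Convert the cursor rect from native pixels to device-independent pixels
    // before computing the plane transform.
    const qreal scale = output->scale();
    const QRectF cursorRect(nativeRect.topLeft() / scale, nativeRect.size() / scale);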
CCBUG: 424589
If you play a video and the software cursor isn't hovering over it, the
shadow cast by the cursor will get darker and darker with every frame.
The main reason is that kwin paints the software cursor even if the rect
behind it hasn't been damaged or repainted.
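Conceptually, the cursor should only be painted when its rect intersects
what is being repainted; a sketch (the names are illustrative):

    // Skip the software cursor if the area under it wasn't repainted this cycle.
    if (repaintRegion.intersects(cursorGeometry.toAlignedRect())) {
        paintSoftwareCursor(repaintRegion);
    }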
If a cursor animation is driven purely by frame callbacks and kwin
uses hardware cursors, CPU usage may spike to 100%.
This change addresses that issue by sending frame callbacks after a
compositing cycle has been performed.
GLTexture::width() and GLTexture::height() return the size of the cursor
texture in native pixels, but we need a size in device-independent pixels.
CCBUG: 424589
Currently, we use glFinish() to ensure that stream consumers don't see
corrupted, or rather incomplete, buffers. This is a serious issue because
glFinish() not only prevents the GPU from processing new GL commands, but
also blocks the compositor.
This change addresses the blocking issue by using native fences. With
the proposed change, after finishing recording a frame, a fence is
inserted in the command stream. When the native fence is signaled, the
pending pipewire buffer will be enqueued.
If the EGL_ANDROID_native_fence_sync extension is not supported, we'll
fall back to using glFinish().
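A sketch of the fence insertion, assuming the extension entry points have
been resolved via eglGetProcAddress():

    // Insert a native fence after recording the frame and export it as a fd.
    EGLSyncKHR sync = eglCreateSyncKHR(eglDisplay, EGL_SYNC_NATIVE_FENCE_ANDROID, nullptr);
    glFlush(); // submit the GL commands so the fence can eventually signal
    const int fenceFd = eglDupNativeFenceFDANDROID(eglDisplay, sync);
    // Poll fenceFd (e.g. with a QSocketNotifier) and enqueue the pipewire
    // buffer only once it becomes readable, i.e. the GPU has finished the frame.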
Every time Platform::supportsQpaContext() is called, we go through the
list of supported extensions and perform a string comparison, which is
not really cheap.
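The obvious remedy is to cache the result; a sketch (the member and helper
names are hypothetical):

    bool Platform::supportsQpaContext() const
    {
        if (!m_supportsQpaContext.has_value()) {
            m_supportsQpaContext = checkQpaContextSupport(); // scans the extension list once
        }
        return *m_supportsQpaContext;
    }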
Uses a setter and clear method pattern rather than having the code
repeated.
Instead of keeping a QPointer, we are now a QObject and get notified about
the destruction intent directly, so we can clear the pointer when
necessary.
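Illustrative pattern (the member name is hypothetical):

    // Clear the raw pointer as soon as the tracked object announces its destruction.
    connect(m_view, &QObject::destroyed, this, [this]() {
        m_view = nullptr;
    });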
A timer could have fired at any time. We process multiple QtQuickViews
on timers, which change the GL context.
Deleting a kwin GLTexture calls glDeleteTextures/glDeleteFramebuffers.
Surprisingly I haven't seen a crash report from this, but it doesn't
look right.
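The safe pattern looks roughly like this (m_texture is illustrative):

    // Make kwin's GL context current before GLTexture destructors run from a
    // timer-driven code path.
    effects->makeOpenGLContextCurrent();
    m_texture.reset(); // ~GLTexture() calls glDeleteTextures()/glDeleteFramebuffers()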
Summary:
Notify the driver about the parts of the screen that will be repainted.
In some cases this can be beneficial. This is especially useful on lima
and panfrost devices (e.g. pinephone, pinebook, pinebook pro).
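For example, with EGL_KHR_partial_update the hint can be passed like this
(entry point resolved via eglGetProcAddress(); the rect values are
placeholders):

    // One damage rect in buffer coordinates: x, y, width, height.
    EGLint damage[] = { 0, 0, 256, 256 };
    eglSetDamageRegionKHR(eglDisplay, eglSurface, damage, 1);
    // ... render only the damaged area, then present the buffer.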
Test Plan:
Tested on a pinebook pro with a late mesa version.
Basically I implemented it, then it didn't work, and I fixed it.
Maybe as a next step we want to look into our damage algorithm.
The main advantage of SPDX license identifiers over the traditional
license headers is that it's more difficult to overlook inappropriate
licenses for kwin, for example GPL 3. We also don't have to copy a
lot of boilerplate text.
In order to create this change, I ran licensedigger -r -c from the
toplevel source directory.
We currently deal with three distinct coordinate spaces - the window
pixmap coordinate space, the window coordinate space, and the buffer
pixel coordinate space.
This change introduces a couple of helper methods to make it easier
to map points from the window pixmap space to the other two spaces.
The main motivation behind the new helpers is to break the direct
relationship between the surface-local coordinates and buffer pixel
coordinates for wayland surfaces.
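A sketch of the sort of helper meant here (the name and the scale member
are illustrative):

    // Map a point from window-pixmap-local coordinates to buffer pixels; for a
    // wayland surface this is where the buffer scale would be applied.
    QPointF WindowPixmap::mapToBuffer(const QPointF &point) const
    {
        return point * m_bufferScale;
    }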
Summary: Don't include the \n at the end of the debug messages
Test Plan: Now I can see the debug errors without an empty line below
Reviewers: #kwin, zzag
Reviewed By: #kwin, zzag
Subscribers: zzag, kwin
Tags: #kwin
Differential Revision: https://phabricator.kde.org/D29684
No window quads are generated for sub-surfaces right now. This leads to
issues with effects that operate on window quads, e.g. magic lamp and
wobbly windows. Furthermore, the OpenGL scene needs window quads to
properly clip windows during the rendering process.
The best way to render sub-surfaces would be with a little help from a
scene graph. Contrary to GNOME, KDE hasn't developed any scene graph
implementation that we could use in kwin. As a short-term solution, this
change adjusts the scene to generate window quads for sub-surfaces as well.
Window quads are generated as we traverse the current window pixmap tree
in depth-first order. In order to match a list of quads with a particular
WindowPixmap, we assign an id to each quad.
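A sketch of the traversal (the helper and accessor names are illustrative):

    WindowQuadList buildQuadsRecursively(WindowPixmap *pixmap)
    {
        // Quads for this pixmap carry its id so they can be matched back to
        // the WindowPixmap later.
        WindowQuadList quads = makeQuadsForPixmap(pixmap);
        const auto children = pixmap->children();
        for (WindowPixmap *child : children) {
            quads += buildQuadsRecursively(child); // depth-first over the pixmap tree
        }
        return quads;
    }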
BUG: 387313
FIXED-IN: 5.19.0
Differential Revision: https://phabricator.kde.org/D29131
In order to generate window quads for sub-surfaces, we need a valid
window pixmap tree. The problem is that the window pixmap tree is
created too late in the rendering process. This change adjusts the
scene so it creates window pixmap trees before buildQuads().
Differential Revision: https://phabricator.kde.org/D29131