This change introduces a new component, ColorManager, that is
responsible for color management.
At the moment, it's very naive. It is useful only for updating gamma
ramps. But in the future, it will be extended with more CMS-related
features.
The ColorManager depends on the lcms2 library. This is an optional
dependency; if lcms2 is not installed, the color manager won't be built.
This also fixes the issue where colord and Night Color overwrite each
other's gamma ramps. With this change, the ColorManager will resolve the
conflict between the two.
One of the annoying things about EGL headers is that they include
platform headers by default; on X11, for example, that means Xlib.h.
The problem with Xlib.h is that it uses the #define preprocessor
directive to declare constants, and those constants have very generic
names, e.g. 'None', which typically conflict with enums, etc.
In order to work around the bad things coming from Xlib.h, we include
the fixx11.h file, which contains workarounds that redefine some of
Xlib's types.
There are two cross-vendor flags (EGL_NO_PLATFORM_SPECIFIC_TYPES and
EGL_NO_X11) that can be used to prevent EGL headers from including
platform-specific headers such as Xlib.h [1]
The benefit of setting those two flags is that you can simply include
EGL/egl.h or epoxy/egl.h and the world won't explode due to Xlib.h.
MESA_EGL_NO_X11_HEADERS is set as well to support older versions of Mesa.
[1] https://github.com/KhronosGroup/EGL-Registry/pull/111
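As a rough illustration of the intent (whether the guards end up as
compile definitions or in source is an implementation detail), the
include site looks roughly like this:

    #define EGL_NO_PLATFORM_SPECIFIC_TYPES
    #define EGL_NO_X11
    #define MESA_EGL_NO_X11_HEADERS // equivalent guard for older Mesa
    #include <epoxy/egl.h>          // Xlib.h is no longer dragged in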
Effects are given the interval between two consecutive frames. The main
flaw of this approach is that if the Compositor transitions from the idle
state to the "active" state, i.e. when there is something to repaint,
effects may see a very large interval between the last painted frame and
the current one. In order to address this issue, the Scene invalidates
the timer that is used to measure time between consecutive frames right
before the Compositor becomes idle.
While this works perfectly fine with Xinerama-style rendering, with per
screen rendering, determining whether the compositor is about to become
idle is rather a tedious task, mostly because a single output can't be
used for the test.
Furthermore, since the Compositor schedules pointless repaints just to
ensure that it's idle, it might take several attempts to figure out
whether the scene timer must be invalidated if you use (true) per screen
rendering.
Ideally, all effects should use a timeline helper that is aware of the
underlying render loop and its timings. However, this option is off the
table because it would involve a lot of work to implement.
An alternative and much simpler option is to pass the expected
presentation time to effects rather than the time between consecutive
frames. This means that effects are responsible for determining how much
their animation timelines have to be advanced. Typically, an effect will
store the presentation timestamp provided in one prePaint{Screen,Window}
call and use it in the subsequent prePaint{Screen,Window} call to
estimate the amount of time that has passed between the two frames.
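A rough sketch of what an effect would do under the new approach
(member names like m_lastPresentTime and m_timeline are illustrative,
not part of the actual API):

    void MyEffect::prePaintScreen(ScreenPrePaintData &data, std::chrono::milliseconds presentTime)
    {
        // Estimate the time that has passed since the last painted frame.
        std::chrono::milliseconds delta = std::chrono::milliseconds::zero();
        if (m_lastPresentTime.count() != 0) {
            delta = presentTime - m_lastPresentTime;
        }
        m_lastPresentTime = presentTime;

        m_timeline.update(delta); // hypothetical helper: advance the animation by the elapsed time
        effects->prePaintScreen(data, presentTime);
    }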
Unfortunately, this is an API-incompatible change. However, it shouldn't
take a lot of work to port third-party binary effects, which don't use the
AnimationEffect class, to the new API. On the bright side, we no longer
need to be concerned about the Compositor becoming idle.
We do still try to determine whether the Compositor is about to become
idle, primarily because the OpenGL render backend swaps buffers on
present, but that will change with the ongoing compositing timing rework.
These signals can be useful if you want to know exactly which output has
been disabled or enabled.
The outputEnabled signal is emitted after the outputAdded signal, and
the outputDisabled signal is emitted before the outputRemoved signal.
If eglSwapBuffers() fails, frame scheduling will be broken. KWin can't
recover from that, but having a log message might still be useful for
debugging purposes.
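Conceptually, the check amounts to something like the following (the
exact log category and wording are assumptions here):

    if (!eglSwapBuffers(eglDisplay, surface)) {
        qCCritical(KWIN_CORE) << "eglSwapBuffers() failed:" << eglGetError();
    }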
This change moves the XRender backend to the platformsupport directory,
similar to the OpenGL and QPainter backends. This allows putting
platform-specific logic in XRenderBackend.
In case hardware transforms can be used again, the shadow framebuffer
must be destroyed; otherwise rendered results will be distorted due to a
mismatch between the dimensions of the shadow framebuffer and the mode
size.
At the moment, the gbm_device for the primary device is destroyed before
the EGLDisplay is destroyed. This results in a crash in Mesa.
In order to fix the crash, this change ensures that the EGLDisplay is
destroyed before the gbm device.
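In other words, the teardown order needs to be roughly the following
(variable names are illustrative):

    eglTerminate(m_eglDisplay);      // release the EGL display first
    m_eglDisplay = EGL_NO_DISPLAY;
    gbm_device_destroy(m_gbmDevice); // then the gbm device it was created from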
This makes eglSwapBuffers() fail with per screen rendering enabled. In
the long term, the screencast plugin has to create its own OpenGL context
and capture window frames after a compositing cycle has been performed.
However, this is currently tricky to do because of monitor screencasting.
Night Color adjusts the color temperature based on the current time in
your location. It's not a generic color correction module per se.
We need a central component that can be used by both Night Color and the
colord integration to tweak gamma ramps, and which will be able to
resolve conflicts between the two. The Night Color manager cannot be
such a component because of its very specific use case.
This change converts Night Color into a plugin to prepare some space for
such a component.
The tricky part is that the D-Bus API of Night Color has "ColorCorrect"
in its name. I'm afraid we cannot do much about that without breaking
API compatibility.
As things are right now, we can only do 32-bit textures for dmabuf (see
gbm_bo_format in gbm.h). This means that we were lying to our receivers
when we had 24-bit textures, since we were giving them a 32-bit texture
instead.
This changes it so that we request a dummy texture before starting; if
we are offered one, we assume such textures are available and offer a
32-bit stream (i.e. BGRA) directly.
The buffer offset for client-side decorated windows is not 0; this,
combined with mixing the frame position and the client size, may result
in clipped thumbnails of client-side decorated applications such as
gedit.
BUG: 428595
The krunner integration doesn't really belong in kwin; it has nothing to
do with compositing or anything else that is the domain of a compositor.
Given that, a plugin suits the krunner integration best.
This change introduces basic colord integration in the Wayland session.
It is implemented as a binary plugin.
If an output is connected, the plugin will create the corresponding
colord device using the D-Bus API and start monitoring the device for
changes.
When a colord device changes, the plugin will read the VCGT tag of the
current ICC color profile and apply it.
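With lcms2, reading the VCGT tag and sampling the per-channel tone
curves into gamma ramps looks roughly like this (a sketch; the variable
names, ramp layout, and profile path are illustrative):

    #include <lcms2.h>

    cmsHPROFILE profile = cmsOpenProfileFromFile("/some/profile.icc", "r");
    if (profile) {
        // The VCGT tag stores one tone curve per channel (red, green, blue).
        cmsToneCurve **vcgt = static_cast<cmsToneCurve **>(cmsReadTag(profile, cmsSigVcgtTag));
        if (vcgt) {
            for (uint32_t i = 0; i < rampSize; ++i) {
                const uint16_t index = (i * 0xffff) / (rampSize - 1);
                red[i]   = cmsEvalToneCurve16(vcgt[0], index);
                green[i] = cmsEvalToneCurve16(vcgt[1], index);
                blue[i]  = cmsEvalToneCurve16(vcgt[2], index);
            }
        }
        cmsCloseProfile(profile);
    }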
The scripting API is not suitable for implementing all the features that
should not be implemented in libkwin. For example, the krunner
integration and screencasting don't belong compiled right into kwin, and
yet we currently have no other choice.
This change introduces a quick and dirty plugin infrastructure that
can be used to implement things such as colord integration, krunner
integration, etc.
Without the KWindowSystem integration plugin, the Wayland experience will
be negatively affected because windows created by kwin itself won't behave
as desired. Therefore it makes little sense to load this plugin at runtime.
On Wayland, we know we're always going to load our internal QPA. Instead
of shipping a plugin and loading it dynamically, we can use Qt static
plugins.
This should result in slightly faster load times, but also reduce the
number of moving pieces for kwin.
This also prevents anyone outside kwin from loading our QPA, which
wouldn't have made any sense and would just have crashed.
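With a static plugin, the import boils down to something like this (the
plugin class name here is an assumption, purely for illustration):

    #include <QtPlugin>

    // Make the statically linked QPA plugin known to Qt's plugin machinery.
    Q_IMPORT_PLUGIN(KWinIntegrationPlugin)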
Currently, the OpenGLBackend and the QPainterBackend have hooks to
indicate the start and the end of a compositing cycle, but in both cases
the hooks have different names. This change fixes that inconsistency.
In order to allow per screen rendering, we need the Compositor to be
able to drive rendering on each screen. Currently, that's not possible
because Scene::paint() paints all screens.
With this change, the Compositor will be able to ask the Scene to paint
only the screen with a specific id.
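Conceptually, the call site shifts from one scene-wide paint to one call
per screen; a hypothetical shape of the new entry point (the exact
signature may differ):

    // Paint only the given screen instead of the whole scene at once.
    scene->paint(screenId, damage, windows);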
Qt 5.15 introduced a new syntax for defining Connections. Fix warnings like this one:
QML Connections: Implicitly defined onFoo properties in Connections are deprecated. Use this syntax instead: function onFoo(<arguments>) { ... }
All platforms that provide support for the QPainter render backend use
per screen rendering. Since there is no way to test Xinerama-style
rendering, it's better to drop the dead code.
Listen to logind for the resume notification and turn the outputs on
when it arrives, much like we do when a key is pressed.
This way laptops come back on when the lid opens.
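A sketch of the idea via logind's D-Bus interface (the receiving class
and slot name are illustrative; kwin's session code differs in detail):

    #include <QDBusConnection>

    QDBusConnection::systemBus().connect(
        QStringLiteral("org.freedesktop.login1"),
        QStringLiteral("/org/freedesktop/login1"),
        QStringLiteral("org.freedesktop.login1.Manager"),
        QStringLiteral("PrepareForSleep"),
        this, SLOT(handlePrepareForSleep(bool)));

    void Session::handlePrepareForSleep(bool sleep)
    {
        if (!sleep) {
            // false means we just resumed: turn the outputs back on,
            // as if a key had been pressed.
        }
    }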
BUG: 428424
QGraphicsRotation and QGraphicsScale are QObject wrappers. That is not
useful in data structures where we're creating multiple of these every
frame; it's large enough to show up in hotspot as taking over 1% of a
regular frame.
We don't even use the QGraphicsRotation mapping inside the scene for a
reason, so it's not giving us much.
It's technically an API break in libkwineffects. Pragmatically, no-one
would use these. We also lose QGraphicsScale's origin, but we never
exposed this in PaintData's public header.
If window thumbnails have to be downscaled, it's up to the application
what filter must be used. Also, we don't really use the lanczos filter
because both x and y scale factors are 1.
AnimationEffect schedules repaints in postPaintWindow() and performs
cleanup in prePaintScreen(). With the X11-style rendering, this doesn't
cause any issues: scheduled repaints will be reset during the next
compositing cycle.
But with per screen rendering, we might hit the following case:
- Paint screen 0
- Reset scheduled repaints
- AnimationEffect::prePaintScreen(): update the timeline
- AnimationEffect::postPaintScreen(): schedule a repaint
- Paint screen 1
- Reset scheduled repaints
- AnimationEffect::prePaintScreen(): destroy the animation
- AnimationEffect::postPaintScreen(): no repaint is scheduled
- Return to the event loop
In this scenario, the repaint region scheduled by AnimationEffect will
be lost when compositing is performed on screen 1.
There is no other way to fix this issue than to maintain repaint regions
per individual screen when per screen rendering is enabled.
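A hypothetical sketch of the bookkeeping (member names are illustrative):
keep one repaint region per screen instead of a single global one, and
when painting screen N consume and reset only that screen's region.

    QVector<QRegion> m_repaints; // one entry per screen

    void Compositor::addRepaint(const QRegion &region)
    {
        for (QRegion &repaints : m_repaints) {
            repaints += region;
        }
    }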
BUG: 428439
They would override KScreen in case we were using a docking station that
brings two displays.
We'd get:
- udev: event for the first hotplugged screen
- kwin: process all screens properly (both)
- kscreen: would offer the right configuration for such displays
- udev: event for the second hotplugged screen
- kwin: restore the configuration
- kscreen: would think this is a conscious decision and embrace it as the
  configuration
With this change, we only re-read the configuration if the outputs
changed.
At the moment, despite the protocol supporting it, we were not providing
the EDIDs. KScreen was falling back to the output name, so it didn't fail
horribly, but it's still a good idea to provide all the data.
When dragging files on the desktop, the cursor image might be just too
big for the cursor plane, in which case we need to abandon hardware
cursors for a brief moment and use a software cursor. Once the files
have been dropped and the cursor image is small enough, we can go back
to using hw cursors.
BUG: 424589
We use the GL_LINEAR magnification filter. This means that GL_REPEAT
wrap mode cannot be used for the software cursor because sampling texels
beyond the right texture edge is the same as sampling texels on the
left edge. This may produce undesired visual artifacts.
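Presumably the remedy is a clamping wrap mode for the cursor texture; a
minimal sketch with raw GL calls (kwin's GLTexture wrapper may expose
this differently):

    cursorTexture->bind();
    // Never sample across the texture edge when GL_LINEAR filtering is used.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);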
Currently, if there is no pointer, only the hardware cursor will be
hidden. If the software cursor is forced, you are going to see a dead
immovable cursor.
If an output is rotated, we will compute a transform matrix for the
cursor plane to rotate its contents.
In order to compute that matrix, we need the rect of the cursor in
device-independent pixels, the scale factor, and the output transform.
The problem is that we provide the rect of the cursor in native pixels.
This may result in the cursor being partially or fully clipped.
CCBUG: 424589
If you play a video and the software cursor doesn't hover over it, the
shadow cast by the cursor will get darker and darker with every frame.
The main reason for that is that kwin paints the software cursor even
if the rect behind it hasn't been damaged or repainted.
If a cursor animation is driven purely by frame callbacks and kwin
uses hardware cursors, the cpu usage may spike to 100%.
This change addresses that issue by sending frame callbacks after a
compositing cycle has been performed.
GLTexture::width() and GLTexture::height() return the size of the cursor
texture in native pixels, but we need a size in device independent pixels.
CCBUG: 424589
Qt checks the OpenGL version to determine if some features can be
enabled. This change ensures that the format returned by
EGLPlatformContext has the OpenGL version, the context profile, and the
format options (e.g. whether it's a debug context) properly initialized.
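Roughly, the returned format gets filled in along these lines (variable
names are illustrative; the actual values are queried from the created
context):

    QSurfaceFormat format;
    format.setRenderableType(QSurfaceFormat::OpenGL);
    format.setMajorVersion(majorVersion);
    format.setMinorVersion(minorVersion);
    format.setProfile(QSurfaceFormat::CoreProfile);
    if (isDebugContext) {
        format.setOption(QSurfaceFormat::DebugContext);
    }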
Currently, we use glFinish() to ensure that stream consumers don't see
corrupted or rather incomplete buffers. This is a serious issue because
glFinish() not only prevents the gpu from processing new GL commands,
but it also blocks the compositor.
This change addresses the blocking issue by using native fences. With
the proposed change, after finishing recording a frame, a fence is
inserted in the command stream. When the native fence is signaled, the
pending pipewire buffer will be enqueued.
If the EGL_ANDROID_native_fence_sync extension is not supported, we'll
fall back to using glFinish().
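Conceptually, the fence-based path looks like this (a sketch; the
extension entry points are resolved through eglGetProcAddress in
practice):

    EGLSyncKHR sync = eglCreateSyncKHR(display, EGL_SYNC_NATIVE_FENCE_ANDROID, nullptr);
    glFlush(); // submit the recorded commands the fence will follow
    const int fenceFd = eglDupNativeFenceFDANDROID(display, sync);
    // Watch fenceFd (e.g. with a QSocketNotifier); once it becomes readable,
    // the GPU is done and the pending pipewire buffer can be enqueued.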
On Wayland, internal windows that use OpenGL are rendered into fbos,
which are later handed over to kwin. In order to achieve that, our QPA
creates OpenGL contexts that share resources with the scene's context.
The problems start when compositing is restarted. If the user changes
any compositing settings, the underlying render backend will be
reinitialized and with it, the scene's context will be destroyed. Thus,
we can no longer accept framebuffer objects from internal windows.
This change addresses the framebuffer object sharing problem by adding
a so-called global share context. It persists throughout the lifetime of
kwin. It can never be made current. The scene context and all contexts
created in our QPA share resources with it.
Therefore we can destroy the scene OpenGL context without affecting
OpenGL contexts owned by internal windows, e.g. the outline visual or
tabbox.
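In EGL terms, the idea looks roughly like this (names are illustrative):

    // Created once, never made current, used only as a sharing hub.
    EGLContext globalShareContext = eglCreateContext(display, config, EGL_NO_CONTEXT, attribs);

    // The scene context (and, analogously, the QPA contexts) share with it,
    // so destroying the scene context doesn't invalidate resources owned by
    // internal windows.
    EGLContext sceneContext = eglCreateContext(display, config, globalShareContext, attribs);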
It's worth noting that Qt provides a way to create a global share
context. But for our purposes it's not suitable since the share
context must be known when QGuiApplication attempts to instantiate a
QOpenGLContext object. At that moment, the backend is not initialized
and thus the EGLDisplay is not available yet.
BUG: 415798
If the surfaceless context extension is unsupported by the underlying
platform, the QPA will use the EGLSurface of the first output to make
OpenGL contexts current.
If an internal window attempts to make an OpenGL context current while
compositing is being restarted, as is typically the case with the
composited outline visual, the QPA will either try to make the context
current with a no longer valid EGLSurface for the first output or will
crash during the call to Platform::supportsSurfacelessContext(). The
latter needs more explanation: after the compositingToggled() signal has
been emitted, there is no scene, and supportsSurfacelessContext() doesn't
handle this case.
In either case, we could return EGL_NO_SURFACE if compositing is being
restarted, but if the underlying platform doesn't support the surfaceless
context extension, then the composited outline will not be able to
delete used textures, framebuffer objects, etc.
This change addresses that problem by making sure that every platform
window has a pbuffer allocated in case the surfaceless context extension
is unsupported.
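The fallback amounts to creating a small pbuffer per platform window,
along these lines (the dimensions are arbitrary; nothing is ever
rendered into it):

    const EGLint attribs[] = {
        EGL_WIDTH, 16,
        EGL_HEIGHT, 16,
        EGL_NONE
    };
    // Only exists so eglMakeCurrent() has a valid surface when
    // EGL_KHR_surfaceless_context is unavailable.
    EGLSurface pbuffer = eglCreatePbufferSurface(display, config, attribs);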
Currently, flip output transformations in the software fallback code
path are equivalent to normal rotate output transformations.
This change implements flip output transformations according to the
wl_output spec.
Currently, when the DRM platform uses cursor planes, the cursor on
a rotated output may be cropped because the math behind the current
cursor transform matrix is off.
In order to fix the cropping issue, this change replaces the current
cursor transform matrix with the core part of the surface-to-buffer
matrix, which was written against the wl_output spec.
BUG: 427605
CCBUG: 427060
Currently, every time compositing is restarted, both the gbm and the egl
streams backend will re-obtain the EGLDisplay object.
This is wrong because the core assumption is that the EGL display doesn't
change once it has been obtained.
On some devices, GPU nodes are also exposed as /dev/dri/cardX; they are
not useful for KMS purposes and do not have display resources.
If we encounter such cards, skip them.
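The check boils down to asking libdrm whether the node has any CRTCs and
connectors, roughly like this (the function name is illustrative):

    #include <xf86drmMode.h>

    static bool hasDisplayResources(int fd)
    {
        drmModeRes *resources = drmModeGetResources(fd);
        if (!resources) {
            return false; // render-only node, not useful for KMS
        }
        const bool ok = resources->count_crtcs > 0 && resources->count_connectors > 0;
        drmModeFreeResources(resources);
        return ok;
    }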