This is done by converting the sRGB + gamma 2.2 input from clients to
linear light in the color space of the output (currently BT.709 or
BT.2020) in a shadow buffer, and then converting from the shadow buffer
to the transfer function the output needs (sRGB or PQ).
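Roughly, the per-channel math for the two passes looks like this (a minimal sketch with hypothetical helper names; the real work happens in shaders, and the gamut conversion matrix between BT.709 and BT.2020 is omitted):

```cpp
#include <cmath>

// Pass 1: decode the client's sRGB + gamma 2.2 content to linear light
// before writing it into the shadow buffer.
static float gamma22ToLinear(float encoded)
{
    return std::pow(encoded, 2.2f);
}

// Pass 2a: encode the shadow buffer with the sRGB transfer function for
// SDR outputs.
static float linearToSrgb(float linear)
{
    return linear <= 0.0031308f ? linear * 12.92f
                                : 1.055f * std::pow(linear, 1.0f / 2.4f) - 0.055f;
}

// Pass 2b: encode with the PQ (SMPTE ST 2084) transfer function for HDR
// outputs; the input is normalized so that 1.0 == 10000 cd/m^2.
static float linearToPq(float linear)
{
    constexpr float m1 = 2610.0f / 16384.0f;
    constexpr float m2 = 2523.0f / 4096.0f * 128.0f;
    constexpr float c1 = 3424.0f / 4096.0f;
    constexpr float c2 = 2413.0f / 4096.0f * 32.0f;
    constexpr float c3 = 2392.0f / 4096.0f * 32.0f;
    const float p = std::pow(linear, m1);
    return std::pow((c1 + c2 * p) / (1.0f + c3 * p), m2);
}
```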
The main motivation is to avoid scattering graphics buffer handling code
around kwin.
The DmaBufParams struct has been moved to the OutputBackend, but with the
introduction of buffer allocators, the screencasting code will need to be
ported to the new abstractions at some point in the future.
A client can specify the following flags when creating a linux dmabuf
client buffer:
- y_invert
- interlaced
- bottom_first
Only the y_invert flag is supported by kwin; the interlaced and the
bottom_first flags are ignored. In practice, most clients don't specify
any dmabuf flags. For example, neither the EGL nor the Vulkan WSI uses
the y_invert flag.
The y_invert flag is also undesirable because it blocks optimizations
such as direct scanout: DRM assumes that the origin is in the top left
corner.
Therefore, this change drops support for the linux dmabuf flags. From
the protocol perspective, this is fine: it can be viewed as buffer
import failing with the specified flags.
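Concretely, rejecting flagged buffers can look roughly like this (a sketch with illustrative names, not kwin's actual handler):

```cpp
// Illustrative zwp_linux_buffer_params_v1.create handler. kwin no longer
// honors y_invert, interlaced or bottom_first; per the protocol, the
// compositor may fail the import, so reject any nonzero flags.
static void createParams(wl_client *client, wl_resource *resource,
                         int32_t width, int32_t height,
                         uint32_t format, uint32_t flags)
{
    if (flags != 0) {
        zwp_linux_buffer_params_v1_send_failed(resource);
        return;
    }
    // ... proceed with the normal dmabuf import path ...
}
```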
LinuxDmaBufV1ClientBuffer contains properties (formats and flags) that
are not available in the base GraphicsBuffer type, and there's no reason
to move them there.
In order to get rid of those properties (and eventually hide the
LinuxDmaBufV1ClientBuffer type from the public API), this change adds a
DmaBufAttributes getter to GraphicsBuffer.
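The shape of the new getter could be pictured roughly as follows (a simplified sketch; the exact fields and naming in kwin may differ):

```cpp
#include <cstdint>

struct DmaBufAttributes
{
    int planeCount = 0;
    int width = 0;
    int height = 0;
    uint32_t format = 0;   // DRM fourcc code
    uint64_t modifier = 0; // DRM format modifier

    int fd[4] = {-1, -1, -1, -1};
    uint32_t offset[4] = {0, 0, 0, 0};
    uint32_t pitch[4] = {0, 0, 0, 0};
};

class GraphicsBuffer
{
public:
    // Returns nullptr for buffers that aren't dmabuf-backed (e.g. shm),
    // so callers don't need to know the concrete buffer type.
    virtual const DmaBufAttributes *dmaBufAttributes() const;
    // ...
};
```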
Nowadays, most clients have switched to the linux-dmabuf protocol,
except Xwayland, which still needs the wl-drm protocol.
On the other hand, we would like to unify some buffer handling code.
There are a few options:
- drop the support for the wl-drm protocol: not doable, because Xwayland
still needs it, even though it also uses the linux dmabuf feedback
protocol
- re-implement the wl-drm protocol
- re-implement the minimal part of the wl-drm protocol needed by
Xwayland
This change takes the third option. Only the node name and the
capabilities will be sent. The buffer factory requests are not
implemented, but they can be if we discover that some clients need them.
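With the generated wl_drm server bindings, the minimal bind handler boils down to something like this (the `Drm` type and `drmImplementation` table are illustrative; the latter would stub out the unimplemented buffer factory requests):

```cpp
static void drmBind(wl_client *client, void *data, uint32_t version, uint32_t id)
{
    Drm *drm = static_cast<Drm *>(data);
    wl_resource *resource =
        wl_resource_create(client, &wl_drm_interface, int(version), id);
    if (!resource) {
        wl_client_post_no_memory(client);
        return;
    }
    wl_resource_set_implementation(resource, &drmImplementation, drm, nullptr);

    // Tell the client which DRM node to open...
    wl_drm_send_device(resource, drm->nodeName.c_str());
    // ...and that buffers are shared via PRIME fds rather than flink names.
    wl_drm_send_capabilities(resource, WL_DRM_CAPABILITY_PRIME);
}
```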
With the dmabuf multi-gpu path, a buffer is imported to the secondary GPU
and presented directly, but importing a buffer that's usable for scanout
is not possible that way on most hardware. To avoid needing a CPU copy
in those cases, this commit introduces a fallback where the buffer is
imported for rendering only, and then copied to a local buffer that's
presented on the screen.
CCBUG: 452219
CCBUG: 465809
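A sketch of that fallback, with hypothetical helper and type names (tryImportForDirectScanout, importAsTexture, and the local swapchain are all illustrative):

```cpp
std::shared_ptr<DrmFramebuffer> importForScanout(GraphicsBuffer *buffer)
{
    // Zero-copy path: works only if the secondary GPU can scan out the
    // imported buffer directly, which most hardware cannot.
    if (auto framebuffer = tryImportForDirectScanout(buffer)) {
        return framebuffer;
    }

    // Fallback: import the buffer for rendering only...
    GLTexture *texture = importAsTexture(buffer);
    if (!texture) {
        return nullptr; // the caller has to resort to a CPU copy
    }

    // ...then blit it into a locally allocated, scanout-capable buffer.
    auto slot = m_swapchain->acquire();
    blitTextureToFramebuffer(texture, slot->framebuffer());
    return slot->scanoutFramebuffer();
}
```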
The EGL_WL_bind_wayland_display definitions are needed in only one cpp
file, so move them there instead.
We have to duplicate the EGL_WL_bind_wayland_display definitions because
libepoxy doesn't define them for us.
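For reference, the duplicated definitions boil down to the tokens and function pointer types from the extension spec (abridged):

```cpp
#ifndef EGL_WL_bind_wayland_display
#define EGL_WL_bind_wayland_display 1
#define EGL_WAYLAND_BUFFER_WL     0x31D5 // eglCreateImageKHR target
#define EGL_WAYLAND_PLANE_WL      0x31D6
#define EGL_TEXTURE_Y_U_V_WL      0x31D7
#define EGL_TEXTURE_Y_UV_WL       0x31D8
#define EGL_TEXTURE_Y_XUXV_WL     0x31D9
#define EGL_WAYLAND_Y_INVERTED_WL 0x31DB

struct wl_display;
struct wl_resource;
typedef EGLBoolean (EGLAPIENTRYP PFNEGLBINDWAYLANDDISPLAYWLPROC)(EGLDisplay dpy, struct wl_display *display);
typedef EGLBoolean (EGLAPIENTRYP PFNEGLUNBINDWAYLANDDISPLAYWLPROC)(EGLDisplay dpy, struct wl_display *display);
typedef EGLBoolean (EGLAPIENTRYP PFNEGLQUERYWAYLANDBUFFERWLPROC)(EGLDisplay dpy, struct wl_resource *buffer, EGLint attribute, EGLint *value);
#endif
```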
At the moment, the render backend provides its own specific
implementation of LinuxDmaBufV1ClientBuffer. This has its limitations.
For example, due to the strong coupling, compositing restarts must be
handled carefully. It's also hard to have a generic code path to import
dmabufs, which would be nice to have in order to unify graphics buffer
allocation across the various backends; currently, it's all scattered.
To make the code simpler, this change drops the commented-out YUV import
code path for now. Given that Mesa handles it implicitly, the need for
it is no longer as urgent.
At the moment, the buffers for the WSI are allocated implicitly by the
EGL implementation, which is fine for "normal" use cases. But we start
hitting the ceiling the moment we need to do something more advanced.
For example, the EGL backend creates a dummy fbo object wrapping the
default framebuffer, meaning that we cannot pass it to QtQuick (because
QtQuick can use its own OpenGL context).
Another reason for using explicit buffers is that it lets us clean up
some output related abstractions.
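A minimal sketch of what explicit allocation looks like, assuming GBM plus the EGL_EXT_image_dma_buf_import extension (the helper name is made up, and fd/bo lifetime handling is mostly omitted):

```cpp
#include <epoxy/egl.h>
#include <gbm.h>

// Allocate a buffer ourselves instead of letting EGL do it implicitly,
// then import it as an EGLImage that any GL context (including the one
// QtQuick uses) can wrap in a texture/fbo and render to.
EGLImageKHR allocateExplicitBuffer(gbm_device *gbmDevice, EGLDisplay display,
                                   uint32_t width, uint32_t height)
{
    gbm_bo *bo = gbm_bo_create(gbmDevice, width, height,
                               GBM_FORMAT_XRGB8888, GBM_BO_USE_RENDERING);
    if (!bo) {
        return EGL_NO_IMAGE_KHR;
    }

    const EGLint attributes[] = {
        EGL_WIDTH, EGLint(width),
        EGL_HEIGHT, EGLint(height),
        EGL_LINUX_DRM_FOURCC_EXT, GBM_FORMAT_XRGB8888,
        EGL_DMA_BUF_PLANE0_FD_EXT, gbm_bo_get_fd(bo),
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, EGLint(gbm_bo_get_offset(bo, 0)),
        EGL_DMA_BUF_PLANE0_PITCH_EXT, EGLint(gbm_bo_get_stride(bo)),
        EGL_NONE,
    };

    // The import dups the fd, so the exported fd should be closed
    // afterwards; the bo must stay alive as long as the image is in use.
    return eglCreateImageKHR(display, EGL_NO_CONTEXT, EGL_LINUX_DMA_BUF_EXT,
                             nullptr, attributes);
}
```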
The only backend that does it differently is the DRM backend, and even
there it's just an extension that takes GBM into account; otherwise it's
effectively copy-pasted code.
The goal is to create surface items for things that are not in the
workspace scene. RenderBackend is perhaps not a great place for these
factory functions; on the other hand, this change merely rewires code
from Scene to RenderBackend. I think that in the distant future we could
make surface items pick the surface texture type on their own; for what
it's worth, that's what we would do in QtQuick.
With the current, broken behavior in Mesa, the timeout will always be
reached. GPU resets don't take anywhere near even a second, so making
the user wait for 10 seconds serves no purpose.
Being a compositor, kwin has to conform to certain interfaces. That
means a lot of virtual functions and function tables to integrate with
C APIs. Naturally, we don't always want to use every argument in such
functions.
Since we get -Wunused-parameter from -Wall, we currently have to
annotate those unused arguments in order to suppress compiler warnings.
However, I don't think that extra work is worth it. We cannot change the
prototypes to fix the warnings the proper way. Q_UNUSED and similar
macros are not good indicators of whether an argument is used either; we
tend to overlook adding or removing them. I've also noticed that
Q_UNUSED is not used in practice to guide the removal of no longer
needed parameters.
Therefore, I think it's worth adding the -Wno-unused-parameter compiler
option to stop the compiler from producing warnings about unused
parameters. It changes nothing except that we don't need to write
Q_UNUSED anymore, which can be really cumbersome at times. Note that it
doesn't affect unused variables: you'll still get a -Wunused-variable
compiler warning if a variable is unused.
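For illustration, the kind of boilerplate this removes (a hypothetical interface and override, not actual kwin code):

```cpp
#include <QtGlobal> // for Q_UNUSED
#include <cstdint>

// A callback interface whose prototype we cannot change:
struct Callbacks
{
    virtual void handleEvent(int fd, uint32_t mask, void *userData) = 0;
};

// Before: the unused parameters must be annotated to keep the build quiet.
struct ExampleHandler : Callbacks
{
    void handleEvent(int fd, uint32_t mask, void *userData) override
    {
        Q_UNUSED(mask)
        Q_UNUSED(userData)
        processFd(fd); // hypothetical helper; only fd is actually used
    }
    void processFd(int fd);
};

// After: with -Wno-unused-parameter, the Q_UNUSED lines can simply be
// dropped; unused local *variables* still trigger -Wunused-variable.
```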
Things such as Output, InputDevice, and so on are meant to be
multi-purpose. In order to make this separation clearer, this change
moves that code into the core directory. Some things still link to the
abstraction level above (kwin); they can be tackled in future refactors.
Ideally, code in core/ should depend only on other code in core/ or on
system libs.
Currently, the main user of these two functions is the X11 standalone
platform.
This change ports that code to Workspace::geometry(), which is not
great, but the X11 backend already depends on the Workspace indirectly
via the Screens. It's unclear whether it's worth making the standalone
X11 backend track the Xinerama rect internally.