We use the hw module's backlight and set the color to either 0
or FF depending on whether we want the screen on or off.
This turns the backlight off properly. It is bound to the toggleBlank
functionality so that we always turn on/off the backlight depending
on whether our compositor is on or off.
In addition we listen to key release events on the power button to
toggle the state.
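A minimal sketch of what that toggle could look like, assuming the Android
lights HAL (hardware/lights.h) as exposed through libhybris; the helper
names are hypothetical:

    #include <hardware/hardware.h>
    #include <hardware/lights.h>
    #include <cstring>

    // Hypothetical helper: open the backlight device of the lights hw module.
    static light_device_t *openBacklight()
    {
        const hw_module_t *module = nullptr;
        if (hw_get_module(LIGHTS_HARDWARE_MODULE_ID, &module) != 0) {
            return nullptr;
        }
        hw_device_t *device = nullptr;
        if (module->methods->open(module, LIGHT_ID_BACKLIGHT, &device) != 0) {
            return nullptr;
        }
        return reinterpret_cast<light_device_t *>(device);
    }

    // Hypothetical helper: color 0xFFffffff turns the backlight on, 0 off.
    void toggleBacklight(light_device_t *backlight, bool screenOn)
    {
        if (!backlight) {
            return;
        }
        light_state_t state;
        std::memset(&state, 0, sizeof(state));
        state.color = screenOn ? 0xFFffffffu : 0u;
        state.flashMode = LIGHT_FLASH_NONE;
        state.brightnessMode = BRIGHTNESS_MODE_USER;
        backlight->set_light(backlight, &state);
    }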
REVIEW: 126083
If we don't have a dedicated device identifier we should use the default
mechanism which involves using the WAYLAND_DISPLAY or WAYLAND_SOCKET env
variable.
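A minimal sketch of that fallback, using plain libwayland-client:
wl_display_connect with a null name falls back to the WAYLAND_SOCKET fd
or the WAYLAND_DISPLAY socket name. The surrounding helper is hypothetical:

    #include <wayland-client.h>
    #include <string>

    // Hypothetical: connect to the host compositor. Without a dedicated
    // identifier, let libwayland pick the default from the WAYLAND_SOCKET
    // or WAYLAND_DISPLAY environment variables.
    wl_display *connectToHost(const std::string &socketName)
    {
        if (socketName.empty()) {
            return wl_display_connect(nullptr);
        }
        return wl_display_connect(socketName.c_str());
    }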
We are going to switch to a proper nested approach similar to the
x11 backend. Given that, we no longer want to run fullscreen
but just open a nested window.
This gives a better teardown experience as it goes to black instead
of showing an outdated screen, and it also disables vsync, which fixes
a crash on teardown.
For the functions from GL_FOO_robustness we want to resolve them
ourselves in order to add a custom implementation if the extension is
not available. Unfortunately, once epoxy.h is included this breaks, as
epoxy defines the names and so through the preprocessor epoxy always wins.
So we need different names: all functions from robustness get a "kwin"
prefix and the usage is changed everywhere in kwin source code.
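A rough sketch of the renaming idea, with illustrative function names;
the actual resolution code in kwin may differ:

    #include <epoxy/gl.h>
    #include <epoxy/egl.h>

    // The "kwin" prefix avoids clashing with the names epoxy defines via
    // the preprocessor.
    using GetGraphicsResetStatus_func = GLenum (*)();
    GetGraphicsResetStatus_func kwinGlGetGraphicsResetStatus = nullptr;

    // Custom implementation used when the robustness extension is missing.
    static GLenum fallbackGetGraphicsResetStatus()
    {
        return GL_NO_ERROR;
    }

    void resolveRobustnessFunctions(bool hasRobustness, bool isGles)
    {
        if (hasRobustness) {
            const char *name = isGles ? "glGetGraphicsResetStatusEXT"
                                      : "glGetGraphicsResetStatusARB";
            kwinGlGetGraphicsResetStatus =
                reinterpret_cast<GetGraphicsResetStatus_func>(eglGetProcAddress(name));
        }
        if (!kwinGlGetGraphicsResetStatus) {
            kwinGlGetGraphicsResetStatus = fallbackGetGraphicsResetStatus;
        }
    }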
REVIEW: 125883
Beginning of proper multiscreen support!
We load configuration sets for the connected outputs. Each set of
screens represents a unique configuration. For that we use the md5
sum of the edid+connector as the uuid of an output. Each of the md5 sums
is then used to create the uuid of the output set. We can be quite certain
that this will generate unique ids for the use cases we will face.
The uuids are used as group names, and from there we read the global
position.
The uuids are considered internal information. It is not intended for
users to configure manually in the config file. The intended way to
configure will be the OutputManagementInterface which recently got added
to KWayland. Once KWin applies a configuration it will store it to config
so that it can be loaded on next startup.
The configuration looks like:
[DrmOutputs][abcdef0123][0123abcdef]
Position=0,0
[DrmOutputs][abcdef0123][fbca3bcdef]
Position=1280,0
This is an example for two outputs set next to each other.
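A minimal sketch of how such uuids could be derived with QCryptographicHash;
the exact inputs and formatting used by KWin may differ:

    #include <QByteArray>
    #include <QCryptographicHash>
    #include <QVector>

    // One uuid per output, derived from its EDID blob plus connector name.
    QByteArray outputUuid(const QByteArray &edid, const QByteArray &connector)
    {
        QCryptographicHash hash(QCryptographicHash::Md5);
        hash.addData(edid);
        hash.addData(connector);
        return hash.result().toHex();
    }

    // The uuid of the whole output set, derived from the individual uuids;
    // this becomes the group name below [DrmOutputs].
    QByteArray outputSetUuid(const QVector<QByteArray> &uuids)
    {
        QCryptographicHash hash(QCryptographicHash::Md5);
        for (const QByteArray &uuid : uuids) {
            hash.addData(uuid);
        }
        return hash.result().toHex();
    }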
Reviewed-By: Sebastian Kügler
As we don't have a GLPlatform before the backend is fully created,
AbstractEglBackend gains a new method isOpenGLES() -> bool
which decides based on QOpenGLContext::openGLModuleType().
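A minimal sketch of such a check; the real implementation may also take
environment overrides into account:

    #include <QOpenGLContext>

    bool AbstractEglBackend::isOpenGLES() const
    {
        // No GLPlatform is available yet, so fall back to the module type
        // Qt resolved at runtime.
        return QOpenGLContext::openGLModuleType() == QOpenGLContext::LibGLES;
    }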
Heavily inspired by how the glxbackend works: present happens on
rendering start and not on end of frame. In addition, present needs to
check whether there is something to show so that it doesn't block
incorrectly. This is needed as present might also be called when going idle.
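A rough sketch of that guard, with hypothetical member and helper names:

    // Called at the start of rendering and also when going idle.
    void HwcomposerBackend::present()
    {
        if (!m_hasPendingFrame) {
            // Nothing was rendered since the last present; bail out so we
            // don't block waiting for a vsync that will never be consumed.
            return;
        }
        m_hasPendingFrame = false;
        swapAndWaitForVsync(); // hypothetical helper doing the actual swap
    }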
With this change the Nexus 5 shows a decent refresh rate in the
totally accurate fps effect. Before, it was capped at around 30 fps,
which indicates that the refresh rate was halved.
On the tearing front the change does not seem to have any negative
impact.
This changes how we synchronize through vsync. We use a mutex and a
wait condition to synchronize the threads. When presenting the frame
our main gui thread blocks and will be woken up by the vsync event
(or a timeout of max 1 frame time slot). In order to minimize the
blocked time we use the blocksForRetrace functionality from the GLX
compositor.
Given this change we no longer need to tell the compositor that we
are swapping the frame; it's blocked anyway. Also we don't need the
failsafe QTimer anymore.
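A minimal sketch of that synchronization, assuming hypothetical names and
a 60 Hz frame slot:

    #include <QMutex>
    #include <QMutexLocker>
    #include <QWaitCondition>

    class VsyncWaiter
    {
    public:
        // Called on the main gui thread when presenting a frame; blocks
        // until the vsync event arrives or at most one frame slot passed.
        void waitForVsync()
        {
            QMutexLocker locker(&m_mutex);
            if (!m_vsyncPending) {
                m_condition.wait(&m_mutex, 16); // ~1 frame at 60 Hz
            }
            m_vsyncPending = false;
        }

        // Called from the hwcomposer vsync callback (another thread).
        void vsyncArrived()
        {
            QMutexLocker locker(&m_mutex);
            m_vsyncPending = true;
            m_condition.wakeAll();
        }

    private:
        QMutex m_mutex;
        QWaitCondition m_condition;
        bool m_vsyncPending = false;
    };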
With this change applied, a Nexus 5 passes the "Martin
tortures phone test". It doesn't tear anymore and the experience
is smooth.
I'm rather disappointed by the fact that we need to block in order
to get vsync. This means Android/hwcomposer is as bad as GLX. So
much for the "Android stack is so awesome"; in fact, it's not. Anybody
thinking it's awesome should compare it to DRM/KMS and especially atomic
modesetting. Yes, it's possible to present frames without tearing and
without having to block the rendering thread.
Reviewed-By: Marco Martin and Bhushan Shah
The newer API is designed for the case where outputs are disabled and
makes sure that we don't have to abuse aboutToSwapBuffers. This
also prevents possible conflicts between blocking during rendering and
screens being off.
Reviewed-By: Bhushan Shah
According to the hwcomposer documentation:
"It is a (silent) error to have HWC_EVENT_VSYNC enabled when calling
hwc_composer_device.set(..., 0, 0, 0) (screen off)".
Because of that we may not enable vsync directly after toggling the
output, but need to wait until after the set call.
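A rough sketch of that ordering, assuming the hwcomposer 1.x API
(eventControl, HWC_EVENT_VSYNC); class and member names are hypothetical:

    #include <hardware/hwcomposer.h>

    // Disabling vsync is safe right away; enabling is deferred until after
    // the next set() call has been performed.
    void HwcomposerBackend::toggleVsync(bool enable)
    {
        if (!enable) {
            m_device->eventControl(m_device, HWC_DISPLAY_PRIMARY, HWC_EVENT_VSYNC, 0);
            m_vsyncEnablePending = false;
            return;
        }
        m_vsyncEnablePending = true;
    }

    // Called right after hwc set() has been performed for the output.
    void HwcomposerBackend::frameSubmitted()
    {
        if (m_vsyncEnablePending) {
            m_device->eventControl(m_device, HWC_DISPLAY_PRIMARY, HWC_EVENT_VSYNC, 1);
            m_vsyncEnablePending = false;
        }
    }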
Reviewed-by: Bhushan Shah
Apparently we don't get a vsync event for the first frame during startup,
which blocks the compositor till the end of days. Thus a timer is added
which triggers the vsync handling after 1 sec if we didn't get an event.
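A minimal sketch of such a fallback, with hypothetical names:

    #include <QTimer>

    // Arm a one-second fallback whenever we start waiting for vsync; if the
    // real event never arrives (as for the first frame), trigger the vsync
    // handling ourselves so the compositor doesn't stay blocked.
    void HwcomposerBackend::startVsyncFallback()
    {
        if (!m_vsyncFallbackTimer) {
            m_vsyncFallbackTimer = new QTimer(this);
            m_vsyncFallbackTimer->setSingleShot(true);
            m_vsyncFallbackTimer->setInterval(1000);
            connect(m_vsyncFallbackTimer, &QTimer::timeout,
                    this, &HwcomposerBackend::vsyncFallback); // hypothetical slot
        }
        m_vsyncFallbackTimer->start();
    }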
Note: qt5-qpa-hwcomposer-plugin does the vsync in a different way:
it uses a wait condition to truly block in present until the vsync.
Maybe we need to do that as well.
Very basic: all screens have the same size and are ordered from left to
right. It's mostly meant to allow easy test cases with multi-screen.
The quick tiling test demonstrates how it's used.
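A minimal sketch of that layout; the helper is hypothetical:

    #include <QRect>
    #include <QSize>
    #include <QVector>

    // count identical screens of the given size, placed next to each other
    // from left to right, starting at 0,0.
    QVector<QRect> screenGeometries(int count, const QSize &size)
    {
        QVector<QRect> geometries;
        geometries.reserve(count);
        for (int i = 0; i < count; ++i) {
            geometries << QRect(i * size.width(), 0, size.width(), size.height());
        }
        return geometries;
    }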
Based on a previous patch done by David Edmundson and heavily
inspired by qt5-qpa-hwcomposer-plugin [1].
The change requires a newer libhybris than the one used by Ubuntu. In
fact it can be built against current master (at the time of writing [2]).
REVIEW: 125606
[1] https://github.com/mer-hybris/qt5-qpa-hwcomposer-plugin
[2] bd6df6a306
Libinput does work on libhybris-enabled devices. There is no need to
use Android's input stack. This simplifies our code a lot and increases
sharing with normal Linux systems.
What's tricky is convincing the system to let us use libinput through our
logind helper. Logind fails to open the files for us if we start KWin
over either ssh or adb shell. We need to get it into a proper session, so
only a kwin started through a helper like simplelogin will be able to use
libinput.
REVIEW: 125608
The backend is based on drm's gbm backend and also uses the GBM
platform, but with the default display, allowing the driver to pick
a device. In addition it doesn't use a surface, but a surfaceless
context with a framebuffer object to render to. Given that, it diverged
too much from drm's backend to allow more code sharing.
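A rough sketch of that setup, assuming EGL_EXT_platform_base,
EGL_MESA_platform_gbm and EGL_KHR_surfaceless_context are available; error
handling is omitted:

    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    EGLDisplay createDefaultGbmDisplay()
    {
        // Passing a null native display (the default display) on the GBM
        // platform lets the driver pick a device.
        auto getPlatformDisplay = reinterpret_cast<PFNEGLGETPLATFORMDISPLAYEXTPROC>(
            eglGetProcAddress("eglGetPlatformDisplayEXT"));
        return getPlatformDisplay(EGL_PLATFORM_GBM_MESA, nullptr, nullptr);
    }

    void makeSurfacelessCurrent(EGLDisplay display, EGLContext context)
    {
        // No EGL surface at all (requires EGL_KHR_surfaceless_context);
        // rendering then targets a framebuffer object created with
        // glGenFramebuffers/glFramebufferTexture2D instead of a surface.
        eglMakeCurrent(display, EGL_NO_SURFACE, EGL_NO_SURFACE, context);
    }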
If KWin is started from a tty it gets a proper driver; if KWin is started
in an X11 session it only gets llvmpipe. Given that, there is a chance that
the autotests using the virtual backend will fail. In that case a follow-up
patch will enforce either O2 or Q.
A new backend which doesn't present the rendered output. It uses a
QPainter scene, renders to a QImage, but doesn't present it anywhere.
Thus it is a truly virtual backend.
By exporting the environment variable KWIN_WAYLAND_VIRTUAL_SCREENSHOTS
the backend creates a temporary dir, prints the path to stdout and
saves each rendered frame into that directory. On exit the directory
is deleted again.
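A minimal sketch of that behaviour; member names are hypothetical, the
environment variable is the one named above:

    #include <QImage>
    #include <QString>
    #include <QTemporaryDir>
    #include <cstdio>

    // Only active when KWIN_WAYLAND_VIRTUAL_SCREENSHOTS is exported.
    void VirtualBackend::initScreenshotDir()
    {
        if (!qEnvironmentVariableIsSet("KWIN_WAYLAND_VIRTUAL_SCREENSHOTS")) {
            return;
        }
        m_screenshotDir.reset(new QTemporaryDir); // removed again on exit
        printf("%s\n", qPrintable(m_screenshotDir->path()));
    }

    // Save every rendered frame into the temporary directory.
    void VirtualBackend::frameRendered(const QImage &frame)
    {
        if (!m_screenshotDir) {
            return;
        }
        frame.save(QStringLiteral("%1/frame-%2.png")
                       .arg(m_screenshotDir->path())
                       .arg(m_frameCounter++));
    }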