It is quite a bit easier to reason about the conversion to device
coordinates when we actually have code that does that, instead of
implicitly assuming OpenGL handles it. Additionally, it means we don't
need to convert back to logical coordinates again when we're rounding
pixel values.
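For illustration, a minimal sketch of such an explicit conversion; the helper name and the use of QRectF::toRect() for rounding are assumptions, not KWin's actual code:

```cpp
#include <QRect>
#include <QRectF>

// Hypothetical helper: map a logical rect to device pixels and round
// there, so no conversion back to logical coordinates is needed.
QRect logicalToDevice(const QRectF &logical, qreal scale)
{
    const QRectF device(logical.x() * scale, logical.y() * scale,
                        logical.width() * scale, logical.height() * scale);
    return device.toRect(); // rounds to the nearest device pixel
}
```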
Crossfade is now handled only by offscreen effects, not by regular scene painting. There is no need for scene code to be aware of it and use a more expensive shader.
Things such as Output, InputDevice and so on are meant to be multi-purpose. In order to make this separation clearer, this change moves that code into the core/ directory. Some things still link to the abstraction level above (kwin); they can be tackled in future refactors. Ideally, code in core/ should depend only on other code in core/ or on system libs.
Quad generation needs a valid surface pixmap. This issue did not surface before because the pixmap was only accessed when iterating over the region, which was typically empty without a pixmap.
Signed-off-by: Victoria Fischer <victoria.fischer@mbition.io>
Relying on the texture matrix to normalize means we multiply every UV coordinate by 1/scale, which leads to floating point errors and thus errors in the UV coordinates. Instead, if we calculate normalized coordinates directly, we avoid the floating point error and get correct UV coordinates.
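As a rough sketch (names are illustrative, not KWin API), the difference is between scaling unnormalized coordinates through the texture matrix and dividing by the real texture size once:

```cpp
// Texture-matrix approach: every UV is multiplied by 1/scale at draw
// time, accumulating floating point error.
// Direct approach: compute normalized UVs from the texture size once.
struct UV {
    float u;
    float v;
};

UV normalizedUV(float xDevice, float yDevice, float texWidth, float texHeight)
{
    return UV{xDevice / texWidth, yDevice / texHeight};
}
```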
Longer term the plan is to make all UV coordinates normalized and get
rid of the CoordinateType altogether.
With fractional scaling, integer-based logical geometry may not match device pixels. Once we have a floating point base, we can fix that. This is also important for our X11 scale override: with a scale of 2 we could get logical sizes with halves.
We already have all input being floating point; this doubles down on it for all remaining geometry.
- Outputs remain integer to ensure that any screen on the right remains
aligned.
- Placement also remains integer based for now.
- Repainting is untouched as we always expand outwards (QRectF::toAlignedRect()).
- Decoration is untouched for now
- Rules are integer in the config but floating point in the adjusting/API; this should also be fine.
At some point we'll add a method to snap to the device pixel grid, effectively `round(value * dpr) / dpr`, though right now things mostly work.
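Sketched out, that snapping method would look like this (a hypothetical helper using the formula above):

```cpp
#include <QtGlobal>
#include <cmath>

// Align a logical coordinate to the device pixel grid at the given
// device pixel ratio.
qreal snapToPixelGrid(qreal value, qreal dpr)
{
    return std::round(value * dpr) / dpr;
}
```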
This also gets rid of a lot of hacks for QRect::right() and QRect::bottom(), which are very confusing.
Parts to watch out for in the port (see the snippet below):
- QRectF::contains() now includes edges
- QRectF::right() and QRectF::bottom() are now sane, so previous hacks have to be removed
- QRectF(QPoint, QPoint) behaves differently for the same reason
- QRectF::center() too
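The differences can be checked with plain Qt; this is standard, documented QRect/QRectF behavior:

```cpp
#include <QRect>
#include <QRectF>
#include <cassert>

int main()
{
    const QRect r(0, 0, 10, 10);
    const QRectF rf(0, 0, 10, 10);

    // QRect::right() is left() + width() - 1 for historical reasons;
    // QRectF::right() is the sane left() + width().
    assert(r.right() == 9);
    assert(rf.right() == 10.0);

    // The bottom-right corner lies on the edge: only QRectF contains it.
    assert(!r.contains(QPoint(10, 10)));
    assert(rf.contains(QPointF(10, 10)));

    // center() differs too, due to integer arithmetic.
    assert(r.center() == QPoint(4, 4));
    assert(rf.center() == QPointF(5, 5));
    return 0;
}
```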
Some values in test results were adjusted where they came from QRect::center(), because using QRectF::center() should behave the same for the user.
When we render the individual components of a decoration into an atlas, we ceil the positions of the individual component parts so they don't risk overlapping. See SceneOpenGLDecorationRenderer::render(). This wasn't done when setting the overall texture height, which can cause the bottommost part of the atlas (the right edge) to go out of view.
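A hedged sketch of the idea (illustrative names, not the actual renderer code): the atlas height has to be accumulated with the same ceiling that is applied to the individual parts:

```cpp
#include <cmath>
#include <vector>

// If each part's device-pixel extent is ceiled to avoid overlap, the
// overall texture height must be computed the same way, otherwise the
// last part can extend past the end of the atlas.
int atlasHeight(const std::vector<double> &partHeights, double scale)
{
    int height = 0;
    for (double h : partHeights) {
        height += static_cast<int>(std::ceil(h * scale));
    }
    return height;
}
```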
BUG: 453745
ScreenPaintData provides a way to transform the painted screen, e.g. scale or translate it. From an API point of view, it's great: it allows fullscreen effects to transform the workspace in various ways. On the other hand, such effects end up fighting the default scene painting algorithm; for example, just have a look at the slide effect! With fullscreen effects, it's better to leave the decision of how the screen should be painted to them. For example, such an approach is taken in some wayland compositors, e.g. wayfire, and our qtquick effects already operate in a similar fashion.
Given that, strip the ScreenPaintData of all available transforms. The
main motivation behind this change is to improve encapsulation of item
painting code and simplify model-view-projection code in kwin. It will
also make the job of extracting item code for sharing purposes easier.
There are a few benefits to using smart pointers from the standard library:
- std::unique_ptr has move semantics. With move semantics, transfer of ownership
can be properly expressed
- std::shared_ptr is more efficient than QSharedPointer
- more developers are used to them, making contributions for newcomers easier
We're also already using a mix of both; because Qt shared pointers provide no benefits, porting to standard smart pointers improves consistency in the code base. Because of that, this commit ports most uses of QSharedPointer to std::shared_ptr and some uses of QScopedPointer to std::unique_ptr.
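For example (generic C++, not KWin code), move-only ownership makes the transfer explicit at the call site:

```cpp
#include <memory>
#include <utility>
#include <vector>

struct Texture { /* ... */ };

class Cache
{
public:
    // Taking std::unique_ptr by value documents that the cache assumes
    // ownership; the caller must std::move() the pointer in.
    void add(std::unique_ptr<Texture> texture)
    {
        m_textures.push_back(std::move(texture));
    }

private:
    std::vector<std::unique_ptr<Texture>> m_textures;
};

int main()
{
    auto texture = std::make_unique<Texture>();
    Cache cache;
    cache.add(std::move(texture)); // ownership visibly transferred
    // texture is now null; QSharedPointer cannot express this.
    return 0;
}
```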
This change makes the WindowItem track the opacity and schedule a repaint. It further decouples the legacy scene from core window abstractions. It's an API-breaking change: WindowPaintData can no longer make windows more opaque; it only provides an additional opacity factor.
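In other words, the effective opacity becomes a product; a sketch of the model, not the literal KWin code:

```cpp
#include <QtGlobal>

// WindowPaintData contributes only a multiplicative factor in [0, 1],
// so a window can never be painted more opaque than it already is.
qreal paintedOpacity(qreal windowOpacity, qreal opacityFactor)
{
    return windowOpacity * opacityFactor;
}
```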
This allows us to toss a large amount of custom rendering code.
Furthermore, it removes the build-time dependency on Plasma Framework
for FrameSvg and Theme from KWin core as it's pulled in through QML
imports now.
It also cleans up the API and removes functions that are effectively
unused or no-op after this change.
For instance, effects often destroy their effect frames in pre/postPaintScreen, which would now destroy an `OffscreenQuickView`, which changes the GL context. This is alleviated by delaying destruction of the internal view.
Support for the text cross-fade and selection frame features, which are not used by any of the built-in effects, is dropped.
Signed-off-by: Eike Hein <eike.hein@mbition.io>
The lanczos filter depends on the effect system, which makes changing the painting code from SceneWindow to Item very difficult. The last big users of the lanczos filter, the present windows and desktop grid effects, were re-written in QML. The only remaining user of the lanczos filter is the thumbnail aside effect. Given that it's a really obscure use case, switching to the linear filter won't be very noticeable.
As a backup plan, one can reimplement the thumbnailaside effect using
QML. The lanczos filter is already implemented in plasma-framework.
Item::isVisible() is false if either the item or one of its ancestors has been marked hidden.
In some cases, kwin may render invisible windows, for example for window
thumbnails.
This change makes rendering code use explicit visibility status when
rendering to ensure that it's still possible to render invisible windows.
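A minimal sketch of the distinction, with illustrative types: effective visibility walks the ancestor chain, while rendering can be asked to ignore it:

```cpp
struct Item {
    bool explicitVisible = true;
    Item *parent = nullptr;

    // False if this item or any ancestor has been marked hidden.
    bool isVisible() const
    {
        for (const Item *item = this; item; item = item->parent) {
            if (!item->explicitVisible) {
                return false;
            }
        }
        return true;
    }
};

// Rendering code can pass the desired visibility explicitly, e.g. to
// render a thumbnail of a window that is itself hidden.
void render(const Item &item, bool forceVisible)
{
    if (!forceVisible && !item.isVisible()) {
        return;
    }
    // ... draw the item ...
}
```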
If the window filter rejects a window, that window won't be in the stacking_order and therefore won't be painted, so finalDrawWindow() checking again whether the window is accepted is redundant extra work.
AbstractOutput is not so abstract, and it's common to avoid the word "Abstract" in class names as it doesn't contribute any new information. The rename also significantly reduces the line width in some places.
The main motivation behind this change is to unify render target representation across the opengl and software renderers and avoid accessing the render backend directly in order to get the render target.
GLRenderTarget doesn't provide a generic abstraction for framebuffer
objects, so let's call GLRenderTarget what it is - a framebuffer.
Renaming the GLRenderTarget class allows us to use the term "render
target" which abstracts fbos or shm images without creating confusion.
The .clang-format file is based on the one in ECM, except for the following style options:
- AlwaysBreakBeforeMultilineStrings
- BinPackArguments
- BinPackParameters
- ColumnLimit
- BreakBeforeBraces
- KeepEmptyLinesAtTheStartOfBlocks
In conjunction with a buffer format with an alpha channel, this allows KWin's backdrop to be transparent, so a display layer below can be seen through it.
Signed-off-by: Eike Hein <eike.hein@mbition.io>
It's not possible to get the surface damage before calling
Scene::paint(), which is a big problem because it blocks proper surface
damage and buffer damage calculation when walking render layer tree.
This change reworks the scene compositing stages to allow getting the
next surface damage before calling Scene::paint().
The main challenge is that the effects can expand the surface damage. We have to call prePaintWindow() and prePaintScreen() before actually starting to paint; however, prePaintWindow() is currently called after rendering has started.
This change makes Scene call prePaintWindow() and prePaintScreen() so
it's possible to know the surface damage beforehand. Unfortunately, it's
also a breaking change. Some fullscreen effects will have to adapt to
the new Scene paint order. Paint hooks will be invoked in the following
order:
* prePaintScreen() once per frame
* prePaintWindow() once per frame
* paintScreen() can be called multiple times
* paintWindow() can be called as many times as paintScreen()
* postPaintWindow() once per frame
* postPaintScreen() once per frame
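A standalone sketch of the resulting order (stand-in types, not KWin's classes), where multiple paintScreen() passes can happen between the pre and post hooks:

```cpp
#include <iostream>
#include <string>
#include <vector>

struct Scene {
    std::vector<std::string> windows{"panel", "browser"};

    void paintFrame(int screenPasses)
    {
        std::cout << "prePaintScreen\n";                     // once per frame
        for (const auto &w : windows)
            std::cout << "prePaintWindow(" << w << ")\n";    // once per frame
        // The surface damage is fully known at this point, before
        // any painting has started.
        for (int i = 0; i < screenPasses; ++i) {
            std::cout << "paintScreen pass " << i << "\n";   // possibly many times
            for (const auto &w : windows)
                std::cout << "  paintWindow(" << w << ")\n"; // as many as paintScreen
        }
        for (const auto &w : windows)
            std::cout << "postPaintWindow(" << w << ")\n";   // once per frame
        std::cout << "postPaintScreen\n";                    // once per frame
    }
};

int main()
{
    Scene{}.paintFrame(2);
    return 0;
}
```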
After walking the render layer tree, the Compositor will poke the render backend for the back buffer repair region and combine it with the surface damage to get the buffer damage, which can be passed to the render backend (in order to optimize performance with tiled GPUs) and to Scene::paint(), which will determine what parts of the scene have to be repainted based on the buffer damage.
Software cursor has always been a major source of problems. Hopefully,
porting it to RenderLayer will help us with that.
Note that the cursor layer is currently visible only when using the software cursor; however, this will change once the Compositor can allocate a real hardware cursor plane.
Currently, software cursor uses graphics-specific APIs (OpenGL and
QPainter) to paint itself. That will be changed in the future when
rendering parts are extracted from the Scene in a reusable helper.
The responsibilities of the Scene must be reduced to painting only so we
can move forward with the layer-based compositing.
This change moves the direct scanout logic from the opengl scene to the base scene class and the compositor. It makes the opengl scene less overloaded and makes it possible to share the direct scanout logic.
paintScreen() already tries to ensure that the damage region doesn't go
outside the scene geometry. With this change, it will try to clip the
damage region to the render target rect, which saves us an extra region
intersection and simplifies code that calls paintScreen().
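The clipping itself is a single region intersection (a sketch; names are illustrative):

```cpp
#include <QRect>
#include <QRegion>

// Clip the damage to the render target so callers of paintScreen()
// don't need their own intersection step.
QRegion clipDamage(const QRegion &damage, const QRect &renderTargetRect)
{
    return damage & renderTargetRect;
}
```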
Having a render loop in the Platform has always been awkward. Another
way to interpret the platform not supporting per screen rendering would
be that all outputs share the same render loop.
On X11, Scene::painted_screen is going to correspond to the primary screen; we should not rely on this assumption, though!
Neither SceneQPainter nor SceneOpenGL has to compute the projection matrix by itself. It can be done by the Scene when setting the
projection matrix. The main benefit behind this change is that it
reduces the amount of custom setup code around paintScreen(), which
makes us one step closer to getting rid of graphics-specific paint()
function and just calling paintScreen().
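For illustration, a hedged sketch of such a centralized setup using Qt's matrix API; the function name and the flipped-Y convention are assumptions:

```cpp
#include <QMatrix4x4>
#include <QRect>

// Build an orthographic projection for a render target rect, with the
// Y axis flipped so (0, 0) is the top-left corner, as is usual in 2D
// compositing. The Scene computes this once instead of each backend.
QMatrix4x4 projectionForRect(const QRect &renderTarget)
{
    QMatrix4x4 projection;
    projection.ortho(renderTarget.x(), renderTarget.x() + renderTarget.width(),
                     renderTarget.y() + renderTarget.height(), renderTarget.y(),
                     -1.0f, 1.0f);
    return projection;
}
```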
This allows us to make the GLRenderTarget a bit nicer when using it to wrap the default fbo, as we don't know what the color attachment texture is besides its size. This means that the responsibility of ensuring that the color attachment outlives the fbo now falls to the caller. However, most kwin code has already been written that way, so it's not an issue.
Because the GLRenderTarget and the GLVertexBuffer use the global
coordinate system, they are not ergonomic in render layers.
Assigning the device pixel ratio to GLRenderTarget and GLVertexBuffer is an interesting API design choice too. Scaling is a window system abstraction that is absent in OpenGL and Vulkan; for example, it's not possible to create an OpenGL texture with a scale factor of 2. OpenGL only works with device pixels.
This change makes the GLRenderTarget and the GLVertexBuffer more ergonomic for uses other than rendering the workspace by removing all the global coordinate system and scaling stuff. That's now the responsibility of the users of those two classes.
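Under the new split, a caller would map its logical geometry to device pixels itself before handing vertices to the buffer; a sketch with a hypothetical helper:

```cpp
#include <QRectF>

// The GL classes now consume device pixels only; callers apply the
// window-system scale themselves before filling vertex buffers.
QRectF logicalToDevicePixels(const QRectF &logical, qreal scale)
{
    return QRectF(logical.topLeft() * scale, logical.size() * scale);
}
```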