It allows us to encapsulate SurfaceItem rendering. It's needed to add
support for a YUV->RGB conversion fallback path.
Effects that use this property must be ported to OffscreenEffect, see
also OffscreenEffect::setShader().
This is a BREAKING CHANGE!
Our logical coordinates for the shape can be floating point. The shape
is used to determine the final vertices on screen.
The commit appears to introduce some new loops, but they're mostly what
QRegion would be doing internally, so it shouldn't impact performance.
In most cases we just have a single rectangle in our shape anyway.
opaqueRegion is unchanged for now.
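A rough sketch of the kind of loop involved (the vertex type and function are hypothetical, not the commit's code): each rectangle of the floating point shape expands into four corner vertices, which is roughly the iteration QRegion used to hide internally.

```cpp
#include <QRectF>
#include <QVector>

struct Vertex
{
    double x, y;
};

// Expand each rectangle of the shape into four corner vertices.
static QVector<Vertex> shapeToVertices(const QVector<QRectF> &shape)
{
    QVector<Vertex> vertices;
    vertices.reserve(shape.count() * 4);
    for (const QRectF &rect : shape) {
        vertices << Vertex{rect.left(), rect.top()}
                 << Vertex{rect.right(), rect.top()}
                 << Vertex{rect.right(), rect.bottom()}
                 << Vertex{rect.left(), rect.bottom()};
    }
    return vertices;
}
```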
Being a compositor, kwin has to conform to certain interfaces, which
means a lot of virtual functions and function tables to integrate with
C APIs. Naturally, we don't always want to use every argument in such
functions.
Since we get -Wunused-parameter from -Wall, we currently have to plumb
those unused arguments through just to suppress compiler warnings.
However, I don't think that extra work is worth it. We cannot change or
alter the prototypes to fix the warning the desired way. Q_UNUSED and
similar macros are not good indicators of whether an argument is used
either; we tend to overlook adding or removing them. I've also noticed
that Q_UNUSED is not used to guide the removal of no-longer-needed
parameters.
Therefore, I think it's worth adding the -Wno-unused-parameter compiler
option to stop the compiler from producing warnings about unused
parameters. It changes nothing except that we no longer need to add
Q_UNUSED, which can be really cumbersome sometimes. Note that it doesn't
affect unused variables: you'll still get a -Wunused-variable compiler
warning if a variable is unused.
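A hypothetical illustration of the situation (the class and callback are made up, not kwin code): a prototype that can't be changed, where one implementation simply doesn't need every argument.

```cpp
#include <QDebug>

// An interface with a callback where not every implementation
// needs every argument.
class OutputChangeListener
{
public:
    virtual ~OutputChangeListener() = default;

    // We can't drop "scale" from the prototype, so with -Wall the unused
    // parameter warns unless we add boilerplate like this:
    virtual void outputChanged(int width, int height, qreal scale)
    {
        Q_UNUSED(scale) // exactly the noise -Wno-unused-parameter removes
        qDebug() << "output resized to" << width << "x" << height;
    }
};
```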
RenderGeometry is a list of vertices that should be rendered. It's
needed so we can convert WindowQuadList to device coordinates and do
clipping in device coordinates. The intention is also to use it to
replace arbitrary float arrays in effects eventually.
This rounds the position of geometry in the OpenGL scene, which means
things that are fractionally scaled and end up between pixel boundaries
are instead aligned to pixels. This doesn't fully solve the fractional
scaling blurriness, as we still need the application to provide a buffer
with the correct size. If we do have a buffer with the correct size, we
will now render properly aligned. For example, internal clients should
now be correct.
It is quite a bit easier to reason about the conversion to device
coordinates when we actually have code that does that, instead of
implicitly assuming OpenGL handles it. Additionally, it means we don't
need to convert back to logical coordinates again when we're rounding
pixel values.
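A sketch of the idea (the helper is illustrative, not the actual scene code): map a logical position to device coordinates and round, so vertices land on pixel boundaries instead of between them.

```cpp
#include <QPointF>
#include <cmath>

// Scale a logical position to device coordinates, then round so the
// resulting vertex sits exactly on a pixel boundary.
static QPointF roundToDevicePixels(const QPointF &logical, qreal scale)
{
    return QPointF(std::round(logical.x() * scale),
                   std::round(logical.y() * scale));
}
```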
Crossfade is now handled not by regular scene painting, but only by
offscreen effects. There is no need for scene code to be aware of it and
use a more expensive shader.
Things such as Output, InputDevice and so on are made to be
multi-purpose. In order to make this separation clearer, this change
moves that code into the core directory. Some things still link to the
abstraction level above (kwin); they can be tackled in future refactors.
Ideally, code in core/ should depend only on other code in core/ or on
system libs.
Quad generation needs a valid surface pixmap. This did not surface
before because the pixmap was only accessed when looping over the
region, which was typically empty without a pixmap.
Signed-off-by: Victoria Fischer <victoria.fischer@mbition.io>
Relying on the texture matrix to normalize means we multiply every UV
coordinate by 1/scale, which leads to floating point errors and thus
errors in the UV coordinates. If we instead calculate normalized
coordinates directly, we avoid the floating point error and get proper
UV coordinates.
Longer term the plan is to make all UV coordinates normalized and get
rid of the CoordinateType altogether.
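A tiny standalone demonstration of the difference (the values are chosen for illustration, not taken from kwin): multiplying by a rounded reciprocal, which is effectively what a normalizing texture matrix does, rounds twice, while a direct division rounds only once.

```cpp
#include <cstdio>

int main()
{
    const double width = 49.0; // 1/49 is not exactly representable
    const double x = 49.0;     // the rightmost coordinate

    // Reciprocal-multiply: the reciprocal is rounded, then the product.
    const double viaMatrix = x * (1.0 / width); // one ulp below 1.0
    // Direct division: a single correctly rounded operation.
    const double direct = x / width;            // exactly 1.0

    std::printf("via matrix: %.17g\ndirect:     %.17g\n", viaMatrix, direct);
}
```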
With fractional scaling, integer-based logical geometry may not match
device pixels. Once we have a floating point base we can fix that. This
is also important for our X11 scale override: with a scale of 2 we could
get logical sizes with halves.
We already have all input being floating point, this doubles down on it
for all remaining geometry.
- Outputs remain integer to ensure that any screen on the right remains
aligned.
- Placement also remains integer based for now.
- Repainting is untouched as we always expand outwards
(QRectF::toAlignedRect()).
- Decoration is untouched for now
- Rules are integer in the config, but floating in the adjusting/API.
This should also be fine.
At some point we'll add a method to snap to the device pixel grid,
effectively `round(value * dpr) / dpr`, though right now things mostly
work.
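A sketch of that future snapping helper (the name is hypothetical): align a logical value to the device pixel grid for a given device pixel ratio.

```cpp
#include <cmath>

// Snap a logical coordinate to the nearest device pixel, then convert
// back to logical coordinates.
static double snapToDevicePixels(double value, double dpr)
{
    return std::round(value * dpr) / dpr;
}
// e.g. snapToDevicePixels(10.3, 2.0) == 10.5 (device pixel 21 at dpr 2)
```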
This also gets rid of a lot of hacks for QRect right and bottom, which
are very confusing.
Parts to watch out for in the port are:
- QRectF::contains now includes edges
- QRectF::right and QRectF::bottom are now sane, so previous hacks have
to be removed
- QRectF(QPoint, QPoint) behaves differently for the same reason
- QRectF::center too
Some values in test results were adjusted because they were the result
of QRect::center(); using QRectF's center should behave the same for the
user.
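The right/bottom difference is easy to demonstrate with a standalone snippet (not code from the port itself):

```cpp
#include <QDebug>
#include <QRect>
#include <QRectF>

int main()
{
    const QRect ri(0, 0, 10, 10);
    const QRectF rf(0, 0, 10, 10);

    // QRect keeps the historical off-by-one: right() == left() + width() - 1.
    qDebug() << ri.right() << ri.center();   // 9 QPoint(4,4)
    // QRectF is sane: right() == left() + width().
    qDebug() << rf.right() << rf.center();   // 10 QPointF(5,5)

    // QRectF::contains() includes the geometric edges:
    qDebug() << ri.contains(QPoint(10, 5));  // false
    qDebug() << rf.contains(QPointF(10, 5)); // true
}
```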
When we render the individual components of a decoration into an atlas,
we ceil the positions of the individual component parts so they don't
risk overlapping. See SceneOpenGLDecorationRenderer::render.
This isn't done when we set the overall texture height, which can cause
the bottommost part of the atlas (the right edge) to go out of view.
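A hypothetical sketch of the mismatch (the helper and the two-part atlas are illustrative): the atlas length must be the sum of the ceiled part sizes; ceiling only the total can come up short and clip the last part.

```cpp
#include <cmath>

// Sum the parts after ceiling each one to whole device pixels,
// matching how the individual parts are placed in the atlas.
static int atlasLength(double partA, double partB, double scale)
{
    return static_cast<int>(std::ceil(partA * scale))
         + static_cast<int>(std::ceil(partB * scale));
}
// e.g. atlasLength(10.3, 10.3, 1.0) == 22, whereas
// std::ceil((10.3 + 10.3) * 1.0) == 21 would clip one device pixel
```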
BUG: 453745
ScreenPaintData provides a way to transform the painted screen, e.g.
scale or translate it. From an API point of view, it's great: it allows
fullscreen effects to transform the workspace in various ways.
On the other hand, such effects end up fighting the default scene
painting algorithm. For example, just have a look at the slide effect!
With fullscreen effects, it's better to leave the decision of how the
screen should be painted to them. For example, such an approach is taken
in some wayland compositors, e.g. wayfire, and our qtquick effects
already operate in a similar fashion.
Given that, strip the ScreenPaintData of all available transforms. The
main motivation behind this change is to improve encapsulation of item
painting code and simplify model-view-projection code in kwin. It will
also make the job of extracting item code for sharing purposes easier.
There are a few benefits to using smart pointers from the standard library:
- std::unique_ptr has move semantics. With move semantics, transfer of ownership
can be properly expressed
- std::shared_ptr is more efficient than QSharedPointer
- more developers are used to them, making contributions easier for newcomers
We're also already using a mix of both; since the Qt shared pointers
provide no benefits, porting to standard smart pointers improves
consistency in the code base. Because of that, this commit ports most
uses of QSharedPointer to std::shared_ptr, and some uses of
QScopedPointer to std::unique_ptr.
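For illustration, a minimal sketch of the move-semantics point (the types are hypothetical): with std::unique_ptr the transfer of ownership is visible at the call site and enforced by the compiler.

```cpp
#include <memory>
#include <utility>

struct Texture
{
};

class Item
{
public:
    // Taking std::unique_ptr by value documents the ownership transfer.
    void setTexture(std::unique_ptr<Texture> texture)
    {
        m_texture = std::move(texture);
    }

private:
    std::unique_ptr<Texture> m_texture;
};

int main()
{
    auto texture = std::make_unique<Texture>();
    Item item;
    item.setTexture(std::move(texture)); // "texture" is null from here on
}
```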
This change makes the WindowItem track the opacity and schedule a
repaint. It further decouples the legacy scene from core window
abstractions.
It's an API breaking change: WindowPaintData can no longer make windows
more opaque, it only provides an additional opacity factor.
This allows us to toss a large amount of custom rendering code.
Furthermore, it removes the build-time dependency on Plasma Framework
for FrameSvg and Theme from KWin core as it's pulled in through QML
imports now.
It also cleans up the API and removes functions that are effectively
unused or no-op after this change.
For instance, effects often destroy their effect frames
in pre/postPaintScreen, which would now destroy an `OffscreenQuickView`
and change the GL context. This is alleviated by delaying destruction
of the internal view.
Support for the features of text cross-fade and selection frame,
which are not used by any of the built-in effects, is dropped.
Signed-off-by: Eike Hein <eike.hein@mbition.io>
The lanczos filter depends on the effect system, which makes it very
difficult to change the painting code from SceneWindow to Item.
The last big users of the lanczos filter, the present windows and
desktop grid effects, were re-written in QML. The only remaining user of
the lanczos filter is the thumbnail aside effect. Given that it's a
really obscure use case, switching to the linear filter won't be very
noticeable.
As a backup plan, one can reimplement the thumbnailaside effect using
QML. The lanczos filter is already implemented in plasma-framework.
Item::isVisible() is false if either the item itself or one of its
ancestors has been marked hidden.
In some cases, kwin may render invisible windows, for example for window
thumbnails.
This change makes rendering code use explicit visibility status when
rendering to ensure that it's still possible to render invisible windows.
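A minimal sketch of the ancestor-aware check this describes (not Item's actual implementation): an item is visible only if neither it nor any of its ancestors has been marked hidden.

```cpp
class Item
{
public:
    // Visible only if this item and every ancestor is marked visible.
    bool isVisible() const
    {
        return m_visible && (!m_parent || m_parent->isVisible());
    }
    void setVisible(bool visible) { m_visible = visible; }

private:
    Item *m_parent = nullptr;
    bool m_visible = true;
};
```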
If the window filter rejects a window, that window won't be in the
stacking_order and hence won't be painted, so finalDrawWindow()
checking again whether the window is accepted is redundant work.
AbstractOutput is not so abstract, and it's common to avoid the word
"Abstract" in class names as it doesn't contribute any new information.
Dropping it also significantly reduces the line width in some places.
The main motivation behind this change is to unify the render target
representation across the opengl and software renderers and avoid
accessing the render backend directly in order to get the render target.
GLRenderTarget doesn't provide a generic abstraction for framebuffer
objects, so let's call GLRenderTarget what it is - a framebuffer.
Renaming the GLRenderTarget class allows us to use the term "render
target" which abstracts fbos or shm images without creating confusion.
The .clang-format file is based on the one in ECM, except for the
following style options:
- AlwaysBreakBeforeMultilineStrings
- BinPackArguments
- BinPackParameters
- ColumnLimit
- BreakBeforeBraces
- KeepEmptyLinesAtTheStartOfBlocks
In conjunction with a buffer format that has an alpha channel, this
allows KWin's backdrop to be transparent, so one can see through to a
display layer below.
Signed-off-by: Eike Hein <eike.hein@mbition.io>
It's not possible to get the surface damage before calling
Scene::paint(), which is a big problem because it blocks proper surface
damage and buffer damage calculation when walking the render layer tree.
This change reworks the scene compositing stages to allow getting the
next surface damage before calling Scene::paint().
The main challenge is that the effects can expand the surface damage. We
have to call prePaintWindow() and prePaintScreen() before actually
starting to paint. However, prePaintWindow() is currently called after
rendering has started.
This change makes Scene call prePaintWindow() and prePaintScreen()
before rendering starts, so it's possible to know the surface damage
beforehand. Unfortunately, it's also a breaking change. Some fullscreen
effects will have to adapt to the new Scene paint order. Paint hooks
will be invoked in the following order:
* prePaintScreen() once per frame
* prePaintWindow() once per frame
* paintScreen() can be called multiple times
* paintWindow() can be called as many times as paintScreen()
* postPaintWindow() once per frame
* postPaintScreen() once per frame
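As a sketch of that order (the hook names mirror the effect hooks above; the bodies are illustrative stand-ins, not Scene code), the key point is that both pre-paint hooks run before any painting, so the surface damage is known before Scene::paint() is entered:

```cpp
#include <cstdio>
#include <vector>

struct Window { const char *name; };

void prePaintScreen() { std::puts("prePaintScreen"); }
void prePaintWindow(Window *w) { std::printf("prePaintWindow(%s)\n", w->name); }
void paintWindow(Window *w) { std::printf("paintWindow(%s)\n", w->name); }
void paintScreen(const std::vector<Window *> &order)
{
    std::puts("paintScreen");
    for (Window *w : order) {
        paintWindow(w); // called as many times as paintScreen()
    }
}
void postPaintWindow(Window *w) { std::printf("postPaintWindow(%s)\n", w->name); }
void postPaintScreen() { std::puts("postPaintScreen"); }

void paintFrame(const std::vector<Window *> &order, int screenPasses)
{
    prePaintScreen();             // once per frame
    for (Window *w : order) {
        prePaintWindow(w);        // once per frame, before painting starts
    }
    // the surface damage can be computed here, before rendering begins
    for (int i = 0; i < screenPasses; ++i) {
        paintScreen(order);       // possibly multiple times per frame
    }
    for (Window *w : order) {
        postPaintWindow(w);       // once per frame
    }
    postPaintScreen();            // once per frame
}
```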
After walking the render layer tree, the Compositor will poke the render
backend for the back buffer repair region and combine it with the
surface damage to get the buffer damage, which can be passed to the
render backend (in order to optimize performance with tiled gpus) and to
Scene::paint(), which will determine what parts of the scene have to be
repainted based on the buffer damage.
Software cursor has always been a major source of problems. Hopefully,
porting it to RenderLayer will help us with that.
Note that the cursor layer is currently visible only when using software
cursor, however it will be changed once the Compositor can allocate
a real hardware cursor plane.
Currently, the software cursor uses graphics-specific APIs (OpenGL and
QPainter) to paint itself. That will change in the future, when the
rendering parts are extracted from the Scene into a reusable helper.