Kdenlive and vector graphics

Hello,

I have been using Kdenlive for quite some time and I am very happy with it. However, I make somewhat educational videos where I need to draw lots of shapes on screen to point things out, nothing fancy, I only need ellipses, rectangles and arrows. This is an area where my (otherwise super smooth) workflow becomes clunky in Kdenlive.

Importing SVGs directly gives extremely pixelated results, while exporting vector graphics to PNG every time is quite cumbersome, so I have found a few tricks to have this done right inside kdenlive. For ellipses and rectangles I (ab)use the motion tracker effect, as I find it faster than working with titles (plus resizing ellipses in titles is buggy as hell), while for drawing arrows I use two alpha shapes (a triangle and a rectangle).

This is still less than ideal because of the following reasons:

  1. shapes drawn this way are weirdly pixelated
  2. when trying to anti-alias the shapes, say with blur, they develop a dark halo at the edges; this is visible even with a white shape on a white background
  3. I have no control over the arrow’s stroke

Recently I’ve seen how well kdenlive integrates with glaxnimate, especially the automatic launch of the program and the export of the background video into the animation for reference. On the other hand I have also seen how well OpenShot now integrates with Inkscape; it is close to what kdenlive does with glaxnimate, except it does not export the background for reference, but it is way quicker than working in the kdenlive title editor or exporting every shape to PNG all the time.

So I am wondering

  1. is there a way to adjust the conversion from SVG to raster graphics when importing SVGs directly, so that they don’t pixelate?
  2. are there other tricks that I could use to speed up this process and have high quality, easily customizable graphics in my videos?
  3. could it be a good idea to try and implement a new effect, perhaps starting from the motion tracker, dedicated to drawing shapes on screen (including arrows)? And if so, where to start?
  4. is there any plan to integrate kdenlive with Inkscape or similar software (drawio? libreoffice draw?) to a similar extent as glaxnimate?

Thanks in advance for your help, cheers!


Kdenlive doesn’t always behave well with resources that don’t match the project frame size. At present it tries to rescale them to fit the project size, and that doesn’t always work as desired and creates issues when multiple transforms are stacked.

This is a known issue we hope to improve in the future.

That said though, SVGs behaving like that seems like a bug. I hadn’t noticed this problem before with SVGs I’ve used, but a quick test now shows me one that scales up nicely from a very small image and stays clean at large magnifications, and another (which is ostensibly larger than the first) which pixelates horribly when imported into Kdenlive.

I’m not quite sure offhand what the important difference between them is - the troublesome one I created in fig and exported to svg with fig2dev, the other in inkscape. And for bonus points the troublesome one looks much worse in the clip monitor than it does when placed on the timeline, and adding a transform effect to it (with no transformation) seems to improve it by antialiasing the otherwise sharp and jagged edges.

This does seem like a bug. And it’s possibly complicated by the fact we explicitly disable smooth interpolation in some cases (because that upsets the people who do ‘hard edge’ animations). If there isn’t an existing bug open about it, it’s probably worth opening one with an example svg and project file showing a case where this happens badly.

are there other tricks that I could use to speed up this process and have high quality, easily customizable graphics in my videos?

I’ve used colour clips and masks and text characters in the past, but that’s still in the scope of “hacks” not what I’d call a preferred technique.

could it be a good idea to try and implement a new effect

What advantage would there be to having an ‘effect’ for this, rather than just a clip with alpha that you can apply any effects you like to?

Is there any plan to integrate kdenlive with Inkscape or similar software (drawio? libreoffice draw?) to a similar extent as glaxnimate?

None that I’m aware of, but anything is possible if someone can make the case for this enabling something which makes the effort to do and maintain that worthwhile.

What would “integration” with inkscape do that you couldn’t just do by editing a project clip with inkscape in the already possible way?

Hello Ron, thanks for your thorough answer.

I’ll see if I can generate “clean” SVGs with inkscape like you did, and perhaps open a bug report too.

As for a new effect, from what I can tell it is not super handy to create, say, an ellipse with a given stroke width with alpha shapes only, and arrows are even less friendly. The shapes from the motion tracker are a bit pixelated and have that halo effect I mentioned when anti-aliased. So I thought it could be nice to have an effect that does what the motion tracker does but hopefully better, and maybe includes other shapes like arrows, with more control over fill and stroke, rounded edges and so on.

As for integration with inkscape (or similar), it could look a lot like what happens in glaxnimate. Say I have an SVG image in my project and I drag it into the timeline as a clip. If I wanted to modify it, it could be super handy if I could double-click on it (or something) and it would automatically (i) open inkscape, (ii) extract a frame from the current video at the playhead, (iii) set that frame as a background in inkscape for reference, and (iv) when I save and exit after drawing, remove the background image and refresh the clip in the project bin / timeline. These are all operations I can do manually, so perhaps I could try to write an external script that does this. I hope this clarified some things, what are your thoughts?

With a bit of coaching I might try to implement that effect I mentioned myself, at least as a proof of concept…

It’s not super trivial or immediately obvious how to do that in gimp either - but there are tools it is trivial to do that in, and the files they create are trivial to import as project (or library) resources.

So I thought it could be nice to have an effect that does what the motion tracker does

The primary task of the motion tracker isn’t to draw shapes - that’s really just an offshoot of one common use for it being to obscure things. It can be used to control the motion of any clip containing any shapes you like if you need them to track motion - but drawing those shapes isn’t its job.

maybe includes other shapes like arrows, with more control over fill and stroke, rounded edges and so on.

You can create anything there’s a Unicode code point for, and a font that draws it how you like, as a text/title clip. And anything there isn’t, in an image editor that’s suitable for the shapes or other things you want to create. What do we gain by re-implementing a subset of that? Who decides what subset? Where does the scope creep end?

As for integration with inkscape (or similar), it could look a lot like

I think you missed what I was asking. We could put a lot of work into “integrating” with any arbitrary tool this way, but what do we actually gain? What could we do with this that you can’t already do with glaxnimate? Or that you can’t already do just by editing the file in question with the specialised tool you want to use for some job?

There’s definitely things we can improve in this space. But the best way to do that isn’t necessarily “absorb or reimplement every other tool”.

Hello Ron,

What is missing from a specialized tool is having the background imported for reference, as happens in glaxnimate. That feature is really great, but glaxnimate is not something I am able to work with as it doesn’t seem to provide the precision I need. I’ll give it yet another shot just to be sure.

However, I think I finally figured it out. I made an SVG that is exactly as large as my video resolution, where I draw one shape at a time. I import those shapes into kdenlive and they render well without pixelation. Then I can right-click on the clips in the bin and select “edit clip”, which opens them in inkscape (after it has been appropriately configured). Once the drawing is saved it updates in kdenlive too after a few seconds. I can totally work like this, and honestly I don’t even think I need the background image as reference. I just draw a shape with the right proportions and then move and rotate it with the transform effect.

What really enables me to work now is that the SVGs are imported properly without having to be exported to PNG, and that I can open inkscape straight from within kdenlive. I select my clip in the timeline, right click, clip in project bin, right click, edit. It’s a few clicks but totally doable.

Thanks for the tips, you gave me the inspiration to look a little deeper and, sure enough, I found a workable solution.

Which is possible because glaxnimate specifically added the features needed to support that level of integration with other tools and is designed to be able to work that way.

We can’t just magic our way into inkscape, or whatever other tool some people prefer over both glaxnimate and inkscape. Those tools need to support that, and someone needs to implement and maintain that.

But that said, if you’re working on an OS that permits sane concurrent access to files (read: probably anything but Windows) - it works perfectly well to have a kdenlive bin clip also open in inkscape, or any other tool that can manipulate it, and any change you make and save in inkscape will be automatically reflected in kdenlive.

I made an SVG that is exactly as large as my video resolution

Right now, that’s basically the best advice for project resources of any kind. Ensure they all use the project frame size/rate, even if that means some pre-processing before adding them to the project. In some cases you can get away with not doing that, but it’s the path with the fewest surprises in it.
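As an illustration of that advice, a blank full-frame SVG template could be generated with a few lines of Python; this is just a sketch, and the 1920x1080 canvas is an assumption, so substitute your own project profile:

```python
# Writes a blank SVG whose canvas exactly matches the project frame size,
# so Kdenlive never has to rescale it (1920x1080 / 1080p is assumed here).
TEMPLATE = (
    '<svg xmlns="http://www.w3.org/2000/svg" '
    'width="{w}" height="{h}" viewBox="0 0 {w} {h}">\n'
    '</svg>\n'
)

def write_template(path, w=1920, h=1080):
    """Create an empty full-frame SVG ready to draw one shape into."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(TEMPLATE.format(w=w, h=h))
```

Opening the result in inkscape then gives a document whose page matches the project frame, so shapes land on the timeline at 1:1 scale.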

What really enables me to work now is that the SVGs are imported properly without having to be exported to PNG, and that I can open inkscape straight from within kdenlive.

It should work the same with PNG or any other resource type too (modulo scaling up a bitmap), and if you use a drag-and-droppy file browser, it will work straight out of that as well, you can open whatever you like in whatever you like, and drag new things you create to kdenlive to drop them in the bin or timeline.

This is the standard desktop / filesystem integration that we get for free on a suitable OS, and generally covers most use cases.

I’m actually working on windows and it seems to work ok.

As for wishing our way into Inkscape, there might be a way of doing it without Inkscape’s support, which I could even achieve with an external script. Imagine the following steps, which I can do completely manually and which, as such, could be automated. When one opens an SVG file from within kdenlive with “edit clip”:

  1. grab a frame from the preview playhead, possibly disabling the svg clip being edited if present in the timeline
  2. embed the frame into the SVG to be edited as a locked background layer
  3. open the SVG in the editor of choice
  4. somehow wait for the SVG editor to be closed
  5. remove the background layer from the SVG and refresh it in the project

I think this should be scriptable, even in python; for instance I could write my own script that does this and set it as the executable for the “svg editor” setting in the preferences, so that instead of just opening the editor it runs the whole script.
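A minimal sketch of steps 2–5 could look like this, using the stdlib `xml.etree` instead of lxml for self-containedness. The layer id is a hypothetical marker of my choosing, the locked-layer attributes are based on how Inkscape marks its layers (`inkscape:groupmode` and `sodipodi:insensitive`), and the extracted frame path is assumed to be produced by step 1 outside this snippet:

```python
import subprocess
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
INK_NS = "http://www.inkscape.org/namespaces/inkscape"
SODI_NS = "http://sodipodi.sourceforge.net/DTD/sodipodi-0.0.dtd"
REF_LAYER_ID = "kdenlive-reference-layer"  # hypothetical marker id

ET.register_namespace("", SVG_NS)
ET.register_namespace("inkscape", INK_NS)
ET.register_namespace("sodipodi", SODI_NS)

def add_reference_layer(svg_path, frame_png, width, height):
    """Step 2: embed the extracted frame as a locked background layer."""
    tree = ET.parse(svg_path)
    root = tree.getroot()
    layer = ET.Element(f"{{{SVG_NS}}}g", {
        "id": REF_LAYER_ID,
        f"{{{INK_NS}}}groupmode": "layer",
        f"{{{INK_NS}}}label": "reference (temporary)",
        f"{{{SODI_NS}}}insensitive": "true",  # Inkscape's "locked layer" flag
    })
    ET.SubElement(layer, f"{{{SVG_NS}}}image", {
        "href": frame_png,
        "x": "0", "y": "0",
        "width": str(width), "height": str(height),
    })
    root.insert(0, layer)  # first child = bottom of the stacking order
    tree.write(svg_path)

def remove_reference_layer(svg_path):
    """Step 5: strip the background layer again after editing."""
    tree = ET.parse(svg_path)
    root = tree.getroot()
    for g in root.findall(f"{{{SVG_NS}}}g"):
        if g.get("id") == REF_LAYER_ID:
            root.remove(g)
    tree.write(svg_path)

def edit_with_reference(svg_path, frame_png, editor="inkscape"):
    """Steps 3 and 4: launch the editor and block until it exits."""
    add_reference_layer(svg_path, frame_png, 1920, 1080)
    try:
        subprocess.run([editor, svg_path])  # blocks until the editor closes
    finally:
        remove_reference_layer(svg_path)
```

The blocking `subprocess.run` call handles the "somehow wait for the SVG editor to be closed" step, and the `finally` block makes sure the reference layer is removed even if the editor crashes.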

I will try to implement this in python and come back to you.

I have a follow-up question: now the SVGs I import look “right”, however when I scale them down and rotate them (in whichever order) I have the impression that they look more pixelated than I expect. I can see it tries to anti-alias, but I can clearly see jagged edges in the final 1080p output. I want to run more tests to “quantify” this pixelation more precisely, but in the meantime I wanted to ask if this is a known problem.

@Ron I was able to implement exactly the functionality I described with a python script that dynamically inserts and removes a background image (taken from the project) while the SVG is being edited. The script replaces Inkscape in the “default image editor” setting in kdenlive. The added benefit is that I can choose the graphical application used depending on the file extension. The only problem is that I need to extract the frame manually so it can be used as background. I can work with that but I wonder if I could automate that too with some scripting. The project contains the playhead position so that could be read from my python script somehow (provided that I save the project right before) but I wonder if there would be any trick to script the extraction of the current frame and saving it alongside the image to be edited.
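The per-extension dispatch part could be sketched roughly like this; the editor mapping is of course my own assumption, to be adjusted to taste:

```python
import pathlib
import subprocess

# Hypothetical mapping from file extension to preferred editor binary.
EDITORS = {".svg": "inkscape", ".xcf": "gimp", ".kra": "krita"}
FALLBACK = "gimp"  # used for any extension not listed above

def pick_editor(path):
    """Choose a graphical editor based on the file's extension."""
    return EDITORS.get(pathlib.Path(path).suffix.lower(), FALLBACK)

def open_in_editor(path):
    # Blocks until the editor exits, mirroring kdenlive's "edit clip" behaviour.
    subprocess.run([pick_editor(path), str(path)])
```

Since kdenlive only offers a single "default image editor" setting, pointing that setting at a wrapper like this is what makes the per-extension choice possible.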

A related question: this trick with python works perfectly with the kdenlive installation from the Ubuntu repo, but not with the Kdenlive flatpak. The issue is that when the python script is launched from the Kdenlive flatpak it can’t import the “lxml” module, which is needed to make these under-the-hood changes to the SVG file and which is installed as a system package. I already added “/usr/lib/python3/dist-packages” and “/usr/local/lib/python3.12/dist-packages” to PYTHONPATH within the script itself, and granted kdenlive access to those folders through Flatseal. I can see that those folders are now in PYTHONPATH even when the script is launched by Kdenlive (flatpak), but I still get an import error. Any advice?
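For what it’s worth, pointing PYTHONPATH at the host’s dist-packages rarely works here, because lxml is a compiled extension built against the host’s Python, which generally won’t match the interpreter shipped in the Flatpak runtime. One possible workaround, assuming the Flatpak has been granted the `org.freedesktop.Flatpak` talk permission (e.g. via Flatseal), is to re-launch the helper on the host with `flatpak-spawn --host` instead; a sketch:

```python
import os
import subprocess

def build_command(script_args, in_flatpak):
    """Build the command line, escaping the sandbox when under Flatpak."""
    if in_flatpak:
        # flatpak-spawn --host runs the command outside the sandbox,
        # using the host's python3 and its installed packages (like lxml).
        return ["flatpak-spawn", "--host", "python3", *script_args]
    return ["python3", *script_args]

def run_helper(script_args):
    # /.flatpak-info exists inside every Flatpak sandbox.
    in_flatpak = os.path.exists("/.flatpak-info")
    return subprocess.run(build_command(script_args, in_flatpak))
```

This sidesteps the ABI mismatch entirely, at the cost of requiring a working python3 and lxml on the host.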

So I guess I really should start by saying, if you’re having fun experimenting with this and learning what works and what doesn’t - then don’t let me or anyone else discourage you, go hard and have fun!

… but as a practical solution, something that only works when you put your left index finger on your nose, and your right middle finger on the precise center of your left earlobe, and hold your tongue Just Right, and then cough … can be a fun curiosity to play with, but isn’t really ever going to grow up to be a User Interface for Srs Users who just want to Get Things Done.

As I said before, there is no general interface for this sort of thing to just plug random apps into, both sides need to be able to communicate whatever it is they need to share to do whatever it is that you want using them together to be able to do. And if I was going to try to kludge something like this, I probably wouldn’t start with trying to scrape data and fabricate intermediate objects.

There’s lots of ways you can get part-way there. Probably the simplest would be to just use a compositing window manager that lets you make app windows transparent. Then you could just put the semi-transparent image editor on top of whatever background you want as a reference for the images.

Even simpler (and almost always better!) is making the need for this just vanish. As you pretty much did once you realised you should compose your mask shapes as library images at full frame size, then scale and translate them into the position you want them on the timeline.

But really, if I needed to do this task today (or more importantly, again 6 months from now after I’ve forgotten all the special preconditions that the “do it semi-magically” button I made requires), the most reliably simple approach, which works for every corner case, would just be to export the frame of interest (which I can equally do in kdenlive or vlc or any of many other tools I might be viewing the video in at the time) as a temporary working file, then import that as a layer into inkscape, or gimp, or any of many other image editing tools I might want to create something with.

And I can do that with not much more than two keystrokes in each app, which will work no matter which machine I’m using those tools on or who configured them to prefer what. Yeah, it could be simpler, but a layer of fragile complexity only has to fail once to waste (probably much) more of my time than a whole year’s worth of 4 simple keystrokes.

But I am a toolmaker, and love good automation of mundane repetition, so if tilting at this windmill is scratching an itch for you, then I’ll definitely tell you to scratch it until it bleeds and learn everything you can from it - just don’t mistake it for a grail quest that will change the world for anyone else.

Re flatpak, the sandboxing is always going to make this sort of thing hard - which might be a feature if you want to make them hard, but it really doesn’t add any significant security against an actually malicious and capable attacker, so if you don’t want this sort of thing to be hard, it’s mostly also The Wrong Tool For the Job.

So I’ll say this again:

If you’re on Linux, really, just use the Kdenlive AppImages. And I say that as someone who maintains most of my own personal tooling in distro-packaged form, and has my own personal packaging for Kdenlive and its dependencies for dev version testing. But for serious video editing, I use the stable release appimages, and keep an archive of them for if I ever need them to make a minor tweak to an old project.

They’re the most reliable, best configured, best tested, least restrictive builds available for Linux. At best a good and diligent distro packager can reproduce that, but the number of them putting in that sort of effort and engaging with developers or users is sadly minimal, and the problems we see reported here from distro-package users tend to be many and unending.

Anyway, I love what you’re trying to do. Just don’t confuse “I can do something” with “It’s a good idea to do it that way” :smiley:

Hello,

Thanks for the tip about appimages. I’ll try with that.

As for the tool, I’ll be selfish and say I just want to make a tool/automation that works for me (external to kdenlive), perhaps publish it on github, but as long as I’m happy I don’t really care if anyone else likes it or uses it.

I actually am quite happy now, but I still have to do two things: (i) grab the current frame and (ii) open the clip for editing. So the only thing I would like to try and do is bundle these two actions into one with some trickery.

So, are you perhaps aware of any method to grab the frame at the current playhead from outside of kdenlive (i.e. from my script) while the project is open and not yet saved? Or do you know a way I can add a frame-grab operation within kdenlive every time before the “default image editor” is launched when editing a clip?

How would that even work? The ‘current’ frame could be in the clip monitor and unrelated to the playhead at all, and if it is in the timeline, it could be a composition of multiple tracks and effects that doesn’t exist “outside of kdenlive” until you actually render it …

You can keep trying to layer kludges on this if you’re doing it for the academic exercise, but if you’re doing it to actually use, I’d be going back to the drawing board for a different approach. Or maybe even tossing some wishlist ideas at the glaxnimate folk if there’s concrete things you think some other tool does better which is in the scope of what they’d like to be good at?

This solution is actually super usable, even with having to grab the reference frame manually it’s pure bliss compared to not having anything. Even more so as it allows me to edit different image formats with different programs (Inkscape for SVG, Gimp for all else). Do you have a different approach in mind that I can implement entirely on my own like this?

As for grabbing the frame at the playhead, indeed I would grab it from the timeline with effects and all, this is exactly what I’m doing manually, so I was hoping there could be some kind of API interface that I could use from within my script to tell kdenlive to grab, render and save a frame from the timeline at the current playhead.

Unfortunately, Kdenlive doesn’t have an API you could use just like that …

So … a bit like the standard xdg-open utility, except without the integration to the system and desktop which makes it work the same for all callers and support every file type that you have a preferred handler installed or configured for?

Do you have a different approach in mind

I thought I was pretty clear on the bit where I would approach this very differently to what you have and why. But it is your problem, and your learning experience, and you’re doing the work, and I don’t have to use it …

and I’m still not really clear on what it is that you think inkscape can do that glaxnimate can’t which makes all this effort and all these compromise shortcomings worthwhile?

Is there something concrete and significant, or is it really just “I prefer purple jellybeans”?

So … a bit like the standard xdg-open utility

How would this xdg-open work under windows? Besides, for example my default editor for SVGs is libreoffice draw, but when working with kdenlive I prefer Inkscape, so the default app system-wide is not always necessarily the preferred one in kdenlive. Regardless, what would one type into the “default image editor” setting in Kdenlive to make this work the way you describe?

I thought I was pretty clear on the bit where I would approach this very differently to what you have and why.

Ok how? I’m afraid I missed it. All I want is a method to import a reference frame into a graphics editor, so that I can paint the various shapes in the right position for the video being displayed, and then remove it. What is your suggestion there?

or is it really just “I prefer purple jellybeans”?

Now, I tried to work with glaxnimate but from what I can tell it does not have grid functionality with “snapping”, which makes it extremely hard to do precision work (no, the various “align” options are not enough: try to draw an arrow with fill and stroke of different colours, then try to resize the tail only, so that the head maintains the same aspect and position, and you’ll know what I’m talking about) :slight_smile:

Maybe like this?

All I want is …

Something we’ve repeatedly explained doesn’t exist, and isn’t part of the design of either application.

If you don’t want to use things that do exist, I’ve run out of ways to say that, sorry. You’ve hit the limit of what “Just wrap it in python” can sanely do for you - and you could have done that with already existing tools.

Ok perfect, I think we have been talking past each other a bit because I’m perfectly happy with the current solution. Having the frame extraction be automated was a nice to have, not a must. But for me it was essential to “solve the problem” quickly and effectively, and what I have is enough.

As for feature requests, I think there is value in being result oriented and creating new workflows with external tools and some python glue as long as it is actually practical, if for no other reason than to test whether the workflow even makes sense and if there is a broader interest. Indeed if nobody cares it means I’m doing this for myself, and at that point I’m happy where I am now.

What I’ll do is formalize my solution a bit, publish it on github as an external tool and perhaps make a youtube video explaining how it all works. If we see that there is interest around the topic we can pick up this discussion again and see if we can integrate the workflow in a more “native” way.

I maintain that, at least conceptually, the workflow is well self-contained and could realistically be replicated within kdenlive without requiring an editor to implement special features. That goes for sure for SVG images and .xcf/.psd raster images, obviously not for pure raster images as they do not support layers and as such the trick I described would not be viable. But again, this is something to think about in the future.

In the meantime, thank you for the enlightening discussion :slight_smile:

I’m not sure what you’re imagining might change …

You’ve done the software equivalent of sticking a coat hanger into a power socket because you wanted to use an appliance that didn’t have the correct plug.

If you want to do that to yourself in your own home, knock yourself out - but people cheering you on in social media can’t ever vote it into being a sensible thing to do. Moreso when at best all it does is save a couple of keystrokes in a very niche use case, that even you realised you didn’t really need in order to do what you originally needed.

The sane path to any genuine improvement here is never going to involve sifting through the undocumented entrails of some other tool’s output. This isn’t a ‘solution’ or a ‘workflow’, it’s a hail mary kludge that once kinda worked on a random sunday afternoon in June under some very limited conditions. There’s nothing making any promises it will still work after tomorrow, or with files created with or at any point touched by other tools.

It’s great that you’re experimenting and learning. But this isn’t even the cleanest of all the really ugly kludges that could achieve a similar result, so please don’t advertise it as something we support or recommend or are thinking about ‘integrating’ in the future.

If you’d really rather write a script to mangle SVG files than talk to the glaxnimate maintainers about adding support for a snap grid, you’d probably be better off writing one that can normalise ‘stray’ points and broken symmetry than digging even deeper holes further down this dead end…

You’ve done the software equivalent of sticking a coat hanger into a power socket because you wanted to use an appliance that didn’t have the correct plug.

It’s a temporary glue layer, quite elegant too if I do say so myself, for the purpose of testing an idea. I implemented the functionality I have in mind (or a good 90% of it) with minimum effort so it can be showcased and tested. What goes on under the hood is of no interest right now; all I care about is the user experience, and at least for me, it’s quite nice.

If you want to do that to yourself in your own home, knock yourself out - but people cheering you on in social media can’t ever vote it into being a sensible thing to do. Moreso when at best all it does is save a couple of keystrokes in a very niche use case, that even you realised you didn’t really need in order to do what you originally needed.

Here I am really struggling to understand where this negativity comes from. My demo is already very useful, it allows me to draw graphics in an SVG file with a temporary background reference, so I can draw what I need in the right place and with the right size. All this being editor agnostic and saving a bunch of time by not having to manually import a background image, positioning it, then removing it afterwards.

This is exactly what functional demos are for: letting people get a concrete feel for an idea, so they can talk facts instead of speculation and decide whether or not to invest limited resources into the “clean” development of the associated high-level feature.

And why wouldn’t people be able to vote for this feature? I reiterate that I’m not suggesting anyone use the python script. What I mean is the high-level feature, which reads: “an editor-agnostic method to have the current timeline frame temporarily and automatically set as a background layer while editing an SVG from within a project”.

I am sure glaxnimate is a serviceable animation editor, but it is not on par with inkscape when it comes to vector graphics, besides the lack of a grid (which I guess could be introduced in due time following a feature request) and the fact that one would have to treat all vector graphics as animations. I would argue that with little effort we could enable the same type of functionality in an editor-agnostic way for vector graphics, and even for raster graphics that support layers.

Now, is there a “market” for this feature? Is it worth developing? Who knows, but having a simple demo that users, maintainers and contributors can try out would be a good first step to get a feeling for that.

And if not enough people care to go in and add a feature, fair enough, at least there will still be my little python script for the ones who do need this kind of functionality, so that we can do our job more efficiently without having to convince others.

Because “fairies really exist” isn’t something you can vote into being true.

What I mean is the high-level feature which …

does not exist, and has no stable foundation to be wished into existence upon.