Deleting a file from Dolphin is very slow, any solution?

When I delete any file in Dolphin, it takes an unusually long time to move it to the Trash. Small files and big files both have the same problem.
My OS is on a 128 GB SSD, and my home partition is on a 1 TB HDD. What could be the reason? I didn’t face this issue a few months ago, but now deleting a file is slow. However, when I delete using Shift+Delete, the files are deleted instantly.

I am running Fedora Kinoite 38.


Using btrfs here, and I noticed that with less free space it takes increasingly more time.

I start feeling it below 300 GB of free space; with 500 GB or more there’s practically no delay.

There was a bug that made moving files to the trash slow regardless of file size. Not sure if that has been fixed already.

As a workaround, try emptying the trash folder or setting it to unlimited size. That fixed the issue for me.


Likely your data is being written into increasingly smaller free leftover spaces in partly filled blocks, because no completely empty blocks remain.

(Once no completely empty blocks remain, every block on an SSD has to be read completely, combined with a chunk of the new data, and written back. The more pieces your data has to be broken into to fit these partly free spaces, the slower the whole process.)

If that’s the cause, there is btrfs balance to join partly filled blocks into completely filled ones, thereby creating completely empty blocks that can be written to without the read-and-join procedure.

What puzzles me is that you noticed a slowdown with 300 GB of free space. That is way too much to need a balance. Maybe df didn’t get it right; for btrfs you need btrfs device usage.
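If you want to check, here is a sketch (the mount point /home is a placeholder; the btrfs commands need the btrfs-progs tools and root):

```shell
# Plain df can misjudge free space on btrfs:
df -h /home

# btrfs' own accounting shows allocated vs. unallocated space
# per device (run as root):
sudo btrfs device usage /home

# If a lot of space is allocated but little of it is used, a filtered
# balance can compact partly filled block groups (here: those less
# than 50% full), freeing up completely empty blocks again:
sudo btrfs balance start -dusage=50 /home
```

The -dusage filter keeps the balance cheap, since fully used block groups are left alone.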


Hmm, I’d guess it’s okay on an HDD. And there’s always something else running, too :sweat_smile:

I still have 450 GB of space left on the HDD, so that’s not the case. However, last night I found a solution, though not the reason.

There were still some files left in ~/.local/share/Trash/. Emptying the trash didn’t remove those files. I don’t know why they were stuck, so I did a hard delete:

rm -rf ~/.local/share/Trash/

which removed all the files stuck in the Trash folder. Now files move to the Trash instantly.
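If you’d rather not delete the Trash directory itself, a slightly gentler variant (a sketch; it assumes the standard FreeDesktop trash layout with files/ and info/ subdirectories) empties only its contents:

```shell
# Resolve the trash directory (XDG_DATA_HOME falls back to ~/.local/share)
TRASH="${XDG_DATA_HOME:-$HOME/.local/share}/Trash"

# Make sure the expected structure exists, then delete everything
# inside files/ and info/ while keeping the directories themselves
mkdir -p "$TRASH/files" "$TRASH/info"
find "$TRASH/files" "$TRASH/info" -mindepth 1 -delete
```

That way, trash tools don’t have to recreate the directory structure on the next delete.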


Ha! No miracle then.

On an SSD every block is just looked up in a table (or so), while on spinning rust a head with its inertia (however small) must be positioned, and that takes time (however small). And the partially filled blocks are scattered all over the platter…

You might want to see if a different filesystem works better for you.

Or mount btrfs with autodefrag:
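A sketch of what that could look like in /etc/fstab (the UUID, mount point and other options are placeholders; autodefrag is the relevant part):

```
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home  btrfs  defaults,autodefrag  0 0
```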

More info, from basics to Arch Wiki tips & tricks:

(I knew it would be useful to keep notes on what helped me! :sun_with_face: )


Already using it:


Knew Forza’s, but nice to see some additional reading from Manjaro, thanks.

Ha! Yes, a miracle, but a bad one:

trash looks like copy not move

I was just trashing 2 × 25 GB. Not sure how the trash is supposed to work, but it looks like it’s copying the files into the trash instead of moving them, eh?

I just switched from Bullseye to Bookworm, so I’m pretty sure it wasn’t like this before. Does this make any sense, or does it justify a bug report?

@komraDE, can you create a new topic? Since this topic is already marked as solved, I think that would be a good idea. If you think your issue is related to this topic, you can link it.

There are already a few bug reports, like 461847 – dolphin slow to move files to trash - delay of a few seconds before file is moved, regardless of file size, and at least one other.


Check with btrfs device usage and df before, while, and after moving $bigfile to the trash. That should confirm or rule out copying.

Did that; it really moves it from the HDD to the SSD. New topic here:

Just a quick comment: that’s an extreme zstd compression level you have there. According to a zstd benchmark from a random guy on the Internet, saving/copying with a compression level of 15 takes about 1500% longer (i.e., 16 times as long) than with a more reasonable level of 3 (or lower), while only saving a few percent of disk space.

Not really sure how I achieved my black magic here :crazy_face: but according to those findings my 4750G couldn’t write at well over 200 MB/s, which it actually does…

I don’t really recall my details, but I was testing too. It’s just that compression must be enforced; otherwise btrfs keeps skipping it.

Well, to me a few percent of disk space mean more than an idle CPU. It shaved off “merely” 200–500 GB of 18 TB, which seemed good enough for me to keep it.

Perhaps the lousy numbers from the sheet can be blamed on a very well compressed input file (cuda-10.1.168-1-x86_64.pkg.tar.xz suggests so), but who has only already densely compressed tar files? Real life has PDFs, pictures, movies, backups and everything else. Why bother manually selecting and zipping files when the filesystem can take care of that?

Don’t get me wrong, but the sheet’s approach of using a single, already compressed file seems moot (though without further info/context I certainly don’t want to blame them or the results).

Still, one might claim that it’s a waste of energy… maybe so; I didn’t measure that, but I have the TDP throttled and it still works without a hiccup.

Well, the benchmark is a few years old, and maybe you have a faster CPU? Or you’re actually writing uncompressible files, like videos, photos or (compressed) audio files. The zstd algorithm skips trying to compress ‘uncompressible’ blocks, so compressing such files is actually much faster than compressing ‘compressible’ files!

But you can easily benchmark this yourself; the zstd binary has a built-in feature for it. Here’s the result on a binary (which should give an OK compression ratio) for compression levels 1–15, on my slow 12-year-old PC:

$ zstd -b1 -e15 /usr/bin/links
 1#links : 5543688  -> 3389713 (x1.635),  310.9 MB/s, 1192.8 MB/s
 2#links : 5543688  -> 3305174 (x1.677),  262.6 MB/s, 1155.2 MB/s
 3#links : 5543688  -> 3272425 (x1.694),  170.1 MB/s, 1116.1 MB/s
 4#links : 5543688  -> 3260977 (x1.700),  135.9 MB/s, 1105.5 MB/s
 5#links : 5543688  -> 3233151 (x1.715),   74.8 MB/s, 1103.0 MB/s
 6#links : 5543688  -> 3198366 (x1.733),   62.7 MB/s, 1134.5 MB/s
 7#links : 5543688  -> 3190817 (x1.737),   59.3 MB/s, 1137.9 MB/s
 8#links : 5543688  -> 3186682 (x1.740),   47.8 MB/s, 1150.3 MB/s
 9#links : 5543688  -> 3185795 (x1.740),   44.1 MB/s, 1149.6 MB/s
10#links : 5543688  -> 3184048 (x1.741),   34.1 MB/s, 1154.8 MB/s
11#links : 5543688  -> 3182593 (x1.742),   25.6 MB/s, 1158.6 MB/s
12#links : 5543688  -> 3182451 (x1.742),   24.4 MB/s, 1158.4 MB/s
13#links : 5543688  -> 3180186 (x1.743),   13.9 MB/s, 1163.2 MB/s
14#links : 5543688  -> 3180152 (x1.743),   13.8 MB/s, 1169.0 MB/s
15#links : 5543688  -> 3180267 (x1.743),   12.5 MB/s, 1169.8 MB/s

So compression at level 15 takes 310.9 / 12.5 ≈ 25 times as long (i.e., ~2400% longer) as the minimal compression level (1). (On a modern system the speeds should be higher, but the ratios will likely be similar.)

And a compression level of 15 takes 14 times as long as using the default level 3. Using a level of 3, the file takes up (1/1.694 =) 59% of the original size. Using a level of 15, the file takes up a little over 57% of the original size. So there’s a very modest saving in disk space for a lot of extra CPU time.

For uncompressible data, e.g., video files or random data, I get much higher speeds:

# Create a 5 MiB file with random data
$ dd if=/dev/urandom of=random.dat bs=5M count=1
$ zstd -b1 -e15 random.dat
 1#random.dat : 5242880 -> 5243010 (x1.000), 2635.0 MB/s, 6033.1 MB/s
 2#random.dat : 5242880 -> 5243010 (x1.000), 2599.9 MB/s, 6097.9 MB/s
 3#random.dat : 5242880 -> 5243010 (x1.000), 2151.2 MB/s, 6106.4 MB/s
 4#random.dat : 5242880 -> 5243010 (x1.000), 1812.0 MB/s, 6124.0 MB/s
 5#random.dat : 5242880 -> 5243010 (x1.000), 1289.7 MB/s, 6122.1 MB/s
 6#random.dat : 5242880 -> 5243010 (x1.000), 1270.3 MB/s, 6142.0 MB/s
 7#random.dat : 5242880 -> 5243010 (x1.000), 1156.8 MB/s, 6149.1 MB/s
 8#random.dat : 5242880 -> 5243010 (x1.000), 1165.6 MB/s, 6127.2 MB/s
 9#random.dat : 5242880 -> 5243010 (x1.000),  973.2 MB/s, 6151.4 MB/s
10#random.dat : 5242880 -> 5243010 (x1.000),  761.4 MB/s, 6145.3 MB/s
11#random.dat : 5242880 -> 5243010 (x1.000),  686.3 MB/s, 6128.2 MB/s
12#random.dat : 5242880 -> 5243010 (x1.000),  631.8 MB/s, 6143.5 MB/s
13#random.dat : 5242880 -> 5243010 (x1.000),   65.7 MB/s, 6146.1 MB/s
14#random.dat : 5242880 -> 5243010 (x1.000),   63.3 MB/s, 6129.5 MB/s
15#random.dat : 5242880 -> 5243010 (x1.000),   63.0 MB/s, 6131.1 MB/s

So low compression levels give an amazing ‘compression’ speed of > 2000 MB/s (in memory; in practice it will of course be limited by the write speed of the drive). Only for compression levels ≥ 13 is the compression speed ‘low’. But at every level the file isn’t compressed at all (it actually grows by 130 bytes, which is just overhead data).

You’re referring to the compress-force mount option, right? Yes, perhaps surprisingly, it’s actually a good idea to use it when you’re using zstd on a btrfs file system. The reason is that zstd is much faster (and perhaps cleverer?) than btrfs at determining whether data is ‘uncompressible’ (so that it can be stored uncompressed). So (at least for compression levels ≤ 12) it’s better to use compress-force than compress, and let zstd do this job instead of btrfs.
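For illustration, a hypothetical fstab line with that option (the UUID and mount point are placeholders; zstd:3 is the moderate level discussed above):

```
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /data  btrfs  defaults,compress-force=zstd:3  0 0
```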

For compressible files, using level 15 instead of level 3 or 4 will typically save an additional couple of percent of the original file size, perhaps a bit more for very highly compressible files (text files). For typically ‘large’ files, like videos, there will be no saving, just a major increase in time, CPU and electricity use.

BTW, for various reasons it’s actually not that easy to figure out how much space is saved or used for a file or a directory on a btrfs system. But there’s now a nice utility you can use, compsize. The command

sudo compsize file-or-directory

gives you information on the actual compressed and uncompressed size of the file(s), and on the different compression algorithms used (on a btrfs file system, different files can use different compression algorithms, including ‘none’, for uncompressible data). Add the -b argument to output the sizes in bytes. Try running this on a directory with large files, like videos. :slight_smile:

The benchmark was not based on compressing a tar.xz file; if it were, the compression ratio would be 1, i.e., no compression. Instead, it was based on the extracted version of the file (i.e., uncompressed files), resulting in an overall compression ratio of 0.5853 (level 1) to 0.5363 (level 15), which is quite good (you almost halve the disk space used). The default level 3 had a compression ratio of 0.5720, so you would save only 0.036 (= 3.6 percentage points) by using level 15 instead, but spend 1m 35s compressing the files instead of 6 seconds.


I’ve also had this issue, and it’s very strange. I am on an HDD with ext4, but using trash-put, for example, is near instant. Trashing and also emptying the wastebin took loads of time with Dolphin, especially when done for the first time in a session.

The mentioned fix did work, but I think this is a bug in Dolphin or whatever framework is used to move files to the trash.
