When I delete any file in Dolphin, it takes an unusually long time to move to the Trash. Small files and big files both have the same problem.
My OS is on a 128 GB SSD, and my home partition is on a 1 TB HDD. What could be the reason? I didn't face this issue a few months ago, but now deleting a file is slow. However, when I delete with Shift+Delete, files are removed instantly.
Likely your data is written to increasingly smaller free leftover spaces in partly filled blocks, because no completely empty blocks remain.
(Once no completely empty blocks remain, every block in an SSD has to be read completely, combined with a chunk of the new data, and written back. The more pieces your data has to be broken up into, to fit in these partly free spaces, the slower the whole process.)
If that's the cause, there is btrfs balance to join partly filled blocks into completely filled ones, thereby creating completely empty blocks that can be written to without the read-and-join procedure.
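A minimal sketch of such a balance run, assuming the filesystem in question is mounted at /home (adjust the path and usage threshold to your setup):

# rewrite only data block groups that are less than 50% full
sudo btrfs balance start -dusage=50 /home
# check progress from another terminal
sudo btrfs balance status /home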
What puzzles me is that you noticed a slowdown with 300 GB of free space. That is way too much to need a balance. Maybe df didn't get it right; for btrfs you need btrfs device usage.
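For example, assuming the btrfs volume is mounted at /home:

# per-device allocation breakdown
sudo btrfs device usage /home
# overall allocated vs. actually used space
sudo btrfs filesystem usage /home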
I still have 450 GB of space left on the HDD, so that's not the case. However, last night I found a solution, though not the reason.
There were still some files left in ~/.local/share/Trash/. Emptying the trash didn't remove those files. I don't know why they were stuck, so I did a hard delete:
rm -rf ~/.local/share/Trash/
which removed all the files stuck in the Trash folder. Now files move to the Trash instantly.
On an SSD every block is just looked up in a table (or so), while on revolving rust a head with its inertia (however small) must be positioned - and that takes time (however small). And the partially filled blocks are all over the platter…
You might want to see if a different filesystem works better for you.
Or mount btrfs with autodefrag:
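For example (a sketch; /home is assumed to be the btrfs mount point):

# enable autodefrag on the running system to try it out
sudo mount -o remount,autodefrag /home
# to make it permanent, add autodefrag to the btrfs options in /etc/fstab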
More info, from basics to Arch Wiki tips & tricks:
@komraDE can you create a new topic? Since this topic is already marked as solved, I think that would be a good idea. If you think your issue is related to this topic, you can link it.
Just a quick comment: that's an extreme zstd compression level you have there. According to a zstd benchmark from a random guy on the Internet, saving/copying with a compression level of 15 takes about 1500% longer (i.e., 16 times as long) than with a more reasonable level of 3 (or lower), while only saving a few percent of disk space.
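If you want to double-check which level is actually in effect, the mount options show it (assuming the btrfs volume is mounted at /home):

# the level is the number after "zstd:" in the OPTIONS column
findmnt -o TARGET,OPTIONS /home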
Not really sure how I achieved my black magic here, but according to those findings my 4750G shouldn't be able to write at way over 200 MB/s, which it actually does…
I don't really recall the details, but I was testing too. It's just that compression has to be forced, otherwise it keeps skipping compression.
Well, to me a few percent of disk space means more than an idle CPU. It shaved off "merely" 200–500 GB on 18 TB, which seemed good enough for me to keep it.
Perhaps the lousy numbers from the sheet can be blamed on a very well compressed file (cuda-10.1.168-1-x86_64.pkg.tar.xz suggests so), but who has nothing but already densely compressed tar files? Real life has PDFs, pictures, movies, backups and everything else. Why bother manually selecting and zipping files when the filesystem can take care of that?
Don't get me wrong, but the sheet's approach of using a single, compressed file seems moot (though without further info/context I certainly don't want to blame the author or the results).
Still, one might claim that it's a waste of energy… maybe so; I didn't measure that, but I have the TDP throttled and it still works without a hiccup.
Well, the benchmark is a few years old, and maybe you have a faster CPU? Or you’re actually writing uncompressible files, like videos, photos or (compressed) audio files. The zstd algorithm skips trying to compress ‘uncompressible’ blocks, so trying to compress such files is actually much faster than trying to compress ‘compressible’ files!
But you can easily benchmark this yourself. The zstd binary has a built-in feature for this. Here's the result on a binary (which should give an OK compression ratio) for compression levels 1–15, on my slow 12-year-old PC:
So compression using compression level 15 takes 310.9 / 12.5 ≈ 25 times as long (i.e., ~2400% longer) as using the minimal compression level (1). (For a modern system, the speed should be faster, but the ratios will likely be similar.)
And a compression level of 15 takes 14 times as long as using the default level 3. Using a level of 3, the file takes up (1/1.694 =) 59% of the original size. Using a level of 15, the file takes up a little over 57% of the original size. So there’s a very modest saving in disk space for a lot of extra CPU time.
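If you want to reproduce this kind of run, the built-in benchmark is invoked roughly like this (the file name is just a placeholder):

# benchmark compression levels 1 through 15 on a sample file
zstd -b1 -e15 some-binary-file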
For uncompressible data, e.g. video files or random data, I get much higher speeds:
So low compression levels give an amazing ‘compression’ speed of > 2000 MB/s (in memory – it will of course be limited by the write speed of the hard drive). And only for compression levels ≧ 13 is the compression speed ‘low’. But for all compression levels, the file isn’t being compressed at all (it is actually increased by 130 bytes, which is just overhead data).
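To try that case yourself, you can benchmark a chunk of random data (file name and size are arbitrary):

# create 100 MB of random data and benchmark levels 1 through 15 on it
head -c 100M /dev/urandom > random.bin
zstd -b1 -e15 random.bin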
You're referring to the compress-force mount option, right? Yes, perhaps surprisingly, it's actually a good idea to use this when you're using zstd on a btrfs file system. The reason is that zstd is much faster (and perhaps cleverer?) than btrfs at determining whether data is 'uncompressible' (so that it can be stored uncompressed). So (at least for compression levels ≤ 12) it's better to use compress-force than compress, and let zstd do this job instead of btrfs.
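As a sketch, with the mount point and level picked as examples:

# let zstd decide per block whether the data is worth compressing
sudo mount -o remount,compress-force=zstd:3 /home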
For compressible files, using level 15 instead of level 3 or 4 will typically save an additional couple of percent of the original file size, perhaps a bit more for very highly compressible files (text files). For typically 'large' files, like videos, there will be no saving, just a major increase in time, CPU and electricity use.
BTW, for various reasons, it's actually not that easy to figure out how much space is saved or used for a file or a directory on a btrfs system. But there's now a nice utility you can use, compsize. The command
sudo compsize file-or-directory
gives you information on the actual compressed and uncompressed size of the file(s), and on the different compression algorithms used (on a btrfs file system, different files can use different compression algorithms, including 'none' for uncompressible data). Add the argument -b to output the sizes in bytes. Try running this on a directory with large files, like videos.
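For example (the path is just an illustration):

# compressed vs. uncompressed size, in bytes, for everything under ~/Videos
sudo compsize -b ~/Videos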
The benchmark was not based on compressing a tar.xz file. If it were, the compression ratio would be 1, i.e., no compression. Instead, it was based on the extracted version of the file (i.e., uncompressed files), resulting in an overall compression ratio of 0.5853 (level 1) to 0.5363 (level 15), which is quite good (you almost halve the disk space used). The default level 3 had a compression ratio of 0.5720, so you would save only about 0.036 (≈ 3.6 percentage points) by using level 15 instead, while spending 1m 35s compressing the files instead of 6 seconds.
I've also had this issue and it's very strange. I am on an HDD with ext4, but using trash-put, for example, is near-instant. Placing files in the wastebin and emptying it took loads of time with Dolphin, especially when done for the first time in a session.
The mentioned fix did work, but I think this is a bug in Dolphin or whatever framework is used to move files to the trash.
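For reference, trash-put comes from the trash-cli package, so a quick comparison outside Dolphin might look like this (the file name is a placeholder):

# move a file to the trash from the shell, bypassing Dolphin/KIO
trash-put somefile
# list and empty the trash from the shell
trash-list
trash-empty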