Thread #108639567
File: signal-2024-07-16-10-55-19-775.jpg (45.3 KB)
previous: >>108631610
#define __NR_acct 163
https://man7.org/linux/man-pages/man2/acct.2.html
https://man7.org/linux/man-pages/man5/acct.5.html
tl;dr:
log process accounting info
oh, this is pretty neat, honestly. i didn't realize this was a thing. i could definitely see this being useful for profiling or similar goals. anyone ever used it before?
it does one thing, and it does it well!
relevant resources:
man man
man syscalls
https://man7.org/linux/man-pages/
https://linux.die.net/man/
https://elixir.bootlin.com/linux/
https://elixir.bootlin.com/musl/
https://elixir.bootlin.com/glibc/
>>108639567
Man, this one takes me back. Per-process accounting used to be a big fucking deal in the business world, back in the days when departmental billing would track CPU time and put what you used on your department's budget.
File: thanks.png (1.7 MB)
>>108634324
>Try setting vm.dirty_writeback_centisecs = 6000 and vm.laptop_mode = 15 (or however many seconds) + LD_PRELOADing libeatmydata on login. It won't help you with static binaries though.
>>108643459
So you already have a syscall that writes *ALL* dirty memory back to disk, but somehow that's not enough. You need to sync very specific FDs for some reason; it can't be everything.
OK, but what if you have 256 open FDs that you have to sync? Are you going to call syncfs 256 times in a loop? That's fucking retarded. Not only are you paying through the nose for those 512 mode switches, but the underlying device may have supported 32 simultaneous writes ... twenty years ago. These days it's more likely to support up to 64K writes (for just one of up to 64K queues).
So you're telling the kernel to get the same locks, reserve the same resources, allocate the same memory for internal nonsense, write and block the same queue 256 times in a row, for something that the kernel could do in potentially 8 internal submissions.
>so let's just spawn a bunch of threads which call syncfs 256 times total
So now you're paying for additional context switches and thread scheduling, you haven't gotten rid of a single mode switch, and on top of that the kernel has to manage its own global submission queue to cache outstanding file syncs.
>so just use sync again
So syncfs was a toy interface all along? Flushing all 4096 or however many open FDs there are right now is an appropriate workaround?
No.
Stop optimizing single-submission speed and design interfaces that reduce the number of submissions in the first place, or finally admit that you've bolted a microkernel interface onto a (supposed) monolith. Or, above all, get it through your fucking skull that interface and implementation cannot be divorced from one another.