How do you keep track of the "mess" you make on your system?
Programs with custom services, virtual environments, config files in different locations, programs writing data to different places...
I know a lot of stuff runs in Docker these days, but how does a sysadmin remember what they have done on their system? Is it all about documenting and keeping your docs updated? Is there any other way?
(E.g., to install calibre-web I had to create a Python venv; the venv is owned by root in /opt, but the service that starts calibre-web from /etc/systemd/system needs the User=<user> specifier because calibre-web wants to write to a user's home directory. At the same time, the database folder needs to be owned by www-data because I want to read/write it from Nextcloud... So calibre-web is installed as a custom root(?) program, running in a virtual env, can access a folder owned by someone else, but still has to be run by yet another user to store its data there... something like the unit sketched below.)
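For reference, and not claiming this is the right way to do it, the unit ended up looking roughly like this (user names, paths and the entry point are placeholders for my setup):

```ini
# /etc/systemd/system/calibre-web.service -- sketch only, adjust paths/users to your install
[Unit]
Description=Calibre-Web
After=network.target

[Service]
Type=simple
# runs as a normal user because calibre-web wants to write in that user's home
User=myuser
# shared group so the www-data-owned library folder stays accessible
Group=www-data
WorkingDirectory=/opt/calibre-web
# the venv under /opt is root-owned; the exact entry point depends on how calibre-web was installed
ExecStart=/opt/calibre-web/venv/bin/python /opt/calibre-web/cps.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

...and the library/database folder is chown'ed to www-data so Nextcloud can read and write it.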
Leaving aside my current confusion about whether all of this is even right in terms of security, syntax, and ownership:
there is no fucking way I will remember all this stuff a week from now...
So... what do you use, if you use anything at all? Flowcharts? Simple text documents? Both?
I take daily work-log notes in Obsidian, then transclude chunks from those notes into topic notes and attach config files, images, context from the web, etc.
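For example, a topic note for a given service just embeds the relevant sections of the daily notes with Obsidian's `![[...]]` embed syntax (note and heading names below are made up):

```
## calibre-web
![[2024-03-12#calibre-web service tweaks]]
![[2024-03-14#moved library folder to www-data]]
```

That way the daily log stays chronological and the topic note stays current without copy-pasting.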
Obsidian was a game changer for me.
I just paste everything in, add a tag, and forget about it until it's needed.
For anyone not using it yet: there are dozens of plugins and nearly unlimited options for customization.
I have a standard daily-note template with timestamps, a changelog that spans the whole vault, and SQL-like Dataview queries that build dynamic views.
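If it helps, a typical query on one of those topic pages looks something like this; it uses the Dataview plugin's SQL-ish syntax, and the tag and the `changelog` field are just example names:

```dataview
TABLE file.cday AS "Date", changelog AS "What changed"
FROM #calibre-web
SORT file.cday DESC
```

It lists every daily note tagged #calibre-web along with whatever was logged in its changelog field, newest first.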