sudo’s Hall of Pain

  • @[email protected]
    link
    fedilink
    28
    edit-2
    3 months ago

    More than a sysadmin thing, it’s a general Linux user thing: I switched from an eMMC computer with no drives named sdX to one with an NVMe drive and an HDD. I was used to my old PC, where sda was whatever USB stick I plugged in, so I used dd with sda1… I nuked 1 TB of data and I’m still running photorec to try and recover at least something; fml.

    • Atemu
      19 points · 3 months ago

      Am I the only one around here who does backups?

      • @[email protected]
        link
        fedilink
        203 months ago

        Unfortunately, to do backups I need the money to buy a drive to back up onto; most of my PCs are literally made out of scrap parts from broken machines.

        • Atemu
          8 points · 3 months ago

          I use scrapped drives for my cold backups; you can make it work.

          Though in case of extreme financial inability, I’d make an exception to the “no backup, no pity” rule ;)

          • @[email protected]
            link
            fedilink
            13 months ago

            I’m trying to do that, but all of the newer drives I have are being used in machines, while the ones that aren’t connected to anything are old 80 GB IDE drives, so they aren’t really practical for backing up 1 TB of data.

            For the most part I’ve prevented myself from making the same mistake again by adding a 1 GB swap partition at the beginning of the disk, so it doesn’t immediately kill the data partition if I mess up again.

            • Atemu
              0 points · 3 months ago

              I’m trying to do that, but all of the newer drives I have are being used in machines, while the ones that aren’t connected to anything are old 80 GB IDE drives, so they aren’t really practical for backing up 1 TB of data.

              It’s possible to make that work; through discipline and mechanism.

              You’d need about 12 of them, but if you carve your data into <80GB chunks, you can store each chunk on a separate scrap drive and thereby back up 1 TB of data.

              Individual files >80GB are a bit trickier, but they can also be handled by splitting them into parts.
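
              As a sketch of that splitting step (assuming GNU coreutils; big.bin and the chunk size are made-up stand-ins for a real file and a <80GB size):

```shell
# Create a test file to stand in for one that is too big for a single drive.
head -c 1000000 /dev/urandom > big.bin

# Split it into fixed-size, numbered chunks; each chunk can live on its own drive.
split -b 200000 -d big.bin big.bin.part.

# To restore, concatenate the chunks back in order and verify the result.
cat big.bin.part.* > restored.bin
cmp big.bin restored.bin && echo "restore OK"
```

              The numeric suffixes (-d) keep the chunks in order when the shell glob expands them.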

              What such a system requires is rigorous documentation of where stuff is: an index. I use git-annex for this purpose; it comes with many mechanisms to aid this sort of setup, but it’s quite a beast in terms of complexity. You could do everything important it does manually, with discipline and without unreasonable effort.
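
              A bare-bones manual index could be as simple as one checksum manifest per drive (drive01/ below is a hypothetical mount point standing in for one scrap drive):

```shell
# Stand-in for a mounted scrap drive holding two chunks of data.
mkdir -p drive01
echo "chunk A" > drive01/data.part.00
echo "chunk B" > drive01/data.part.01

# Build the index for this drive; keep copies of all index files somewhere safe.
(cd drive01 && sha256sum data.part.* > INDEX.sha256)

# Later: verify the drive's contents against its index.
(cd drive01 && sha256sum -c INDEX.sha256)
```

              Collect the INDEX.sha256 files from all drives in one place and you can answer “which drive holds this chunk, and is it still intact?” without git-annex.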

              For the most part i prevented myself from doing the same mistake again by adding a 1gb swap partition at the beginning of the disk, so it doesn’t immediatly kill the partition if i mess up again.

              Another good practice is to attempt any changes on a test model first. You’d create a sparse test image (truncate -s 1TB disk.img), mount it via loopback, and apply the same partition and filesystem layout that your actual disk has. Then you attempt any planned changes on that loopback device first and verify its filesystems still work.

              • @[email protected]
                link
                fedilink
                03 months ago

                Or mount them in RAID0, or whatever the ZFS equivalent is.

                The downside versus one disk is that many disks have more possible points of failure, any of which takes out the whole array; so ideally another RAID would be best

                • Atemu
                  0 points · 3 months ago

                  That would require all of those disks to be connected at once, which is a logistical nightmare. It would already be hard with modern drives, but also consider that we’re talking IDE drives here; it’s hard enough to connect one of them to a modern system, let alone 12 simultaneously.

                  With an index, you also gain the ability to lose and restore partial data. With a RAID array it’s all or nothing, and you waste a bunch of space on being able to restore everything at once. Using an index, you can simply check which data was lost and prepare another copy of that data on a spare drive.

              • @[email protected]
                link
                fedilink
                0
                edit-2
                3 months ago

                The problem is that I didn’t mean to write to the HDD but to a USB stick; I typed the wrong letter out of habit from the old PC.

                As for the hard drives, I’m already trying to do that; for bigger files I just break them up with split. I’m just waiting until I have enough disks to do that.

                • Atemu
                  1 point · 3 months ago

                  The problem is that I didn’t mean to write to the HDD but to a USB stick; I typed the wrong letter out of habit from the old PC.

                  For that issue, I recommend never using unstable device names and always using /dev/disk/by-id/.
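
                  One way to make that habit stick is a tiny guard around dd; safe_dd and the by-id path below are hypothetical illustrations, not a real tool or device:

```shell
# Wrapper that refuses ambiguous sdX-style names outright.
safe_dd() {
    target="$1"
    case "$target" in
        /dev/disk/by-id/*)
            # Stable name: show which kernel device it resolves to today.
            echo "would write to $(readlink -f "$target")"
            # dd if="$2" of="$target" bs=4M status=progress  # the real write
            ;;
        *)
            echo "refusing: use a /dev/disk/by-id/ path" >&2
            return 1
            ;;
    esac
}

safe_dd /dev/sda || true                        # rejected: unstable name
safe_dd /dev/disk/by-id/usb-Example_Stick-0:0   # accepted (illustrative path)
```

                  Because /dev/disk/by-id/ entries encode the drive model and serial number, a habit-driven typo can’t silently land on the wrong disk.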

                  As for the hard drives, I’m already trying to do that; for bigger files I just break them up with split. I’m just waiting until I have enough disks to do that.

                  I’d highly recommend starting to back up the most important data ASAP rather than waiting until you can back up all of it.

        • @[email protected]
          link
          fedilink
          43 months ago

          I’m in a similar boat, but I tend to mirror my important files across a lot of my drives. Also, whenever I move hard drives from computer to computer, I first look at the drive and copy everything I don’t wanna lose, just in case… Basically, I learned to be careful the hard way a few times lol

        • @[email protected]
          link
          fedilink
          33 months ago

          You can buy (or get) cheap 1 TB SSDs or bigger 2 TB HDDs for under 100€ where I’m from.
          Pairing that with extreme compression from Veeam, not installing all programs in C:\ (or whatever the system directory is on Linux), and doing either volume- or file-level backups should give you plenty of space for those.

          • @[email protected]
            link
            fedilink
            23 months ago

            On Linux it’s just /, and from there you can mount other drives at whatever directory you want

            But also, 100€ ain’t all that cheap for some of us

            • @[email protected]
              link
              fedilink
              23 months ago

              On Linux it’s just /, and from there you can mount other drives at whatever directory you want

              I know root / exists, but I didn’t know a good analog to C:. Thank you though, as some other members might not know it yet :)

              But also, 100€ ain’t all that cheap for some of us

              Certainly. But nowadays even reputable brands bring out >1 TB SSDs/HDDs for little money.
              They should suffice for backup purposes.
              If the money is so tight that even a dedicated drive plus backup software won’t fit, then you can only bridge the time until then with a USB drive or something else with a high enough capacity.
              And any (working) backup is better than none.

      • @[email protected]
        link
        fedilink
        53 months ago

        I have plenty of non-critical Linux ISOs that I don’t back up (because that’d be like 12 TB).

        But I’d still be pissed if I accidentally wiped them.

  • unalivejoy
    21 points · 3 months ago (edited)

    Expectation: apply chmod to all subdirectories.

    Reality: remove read permission.

    For chmod, chown, chattr, etc., -R is what recurses into subdirectories.

    • @[email protected]
      link
      fedilink
      43 months ago

      That’s what -R does in chmod as well? I feel like something here is going completely over my head. Or are you all using another version of chmod?

        • @[email protected]
          link
          fedilink
          63 months ago

          Aha! I didn’t get that you meant the issue was accidentally using -r instead of -R, since both you and OP wrote the uppercase one.

          I’m a lot more used to -R so I instead get caught off by commands where that means something other than recursive :)

          I mostly use symbolic mode and honestly don’t get why everyone else seems to use octal all the time.
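
          For anyone following along, a small demonstration of recursive symbolic mode (demo/ is a throwaway directory; assuming GNU coreutils):

```shell
# Set up a throwaway tree with typical starting permissions.
mkdir -p demo/sub
touch demo/sub/secret.txt
chmod 755 demo demo/sub
chmod 644 demo/sub/secret.txt

# Symbolic mode: strip all group/other access recursively;
# the owner's bits are left exactly as they were.
chmod -R go-rwx demo

stat -c '%a %n' demo demo/sub/secret.txt
# → 700 demo
#   600 demo/sub/secret.txt
```

          An octal equivalent would have to force absolute modes like 700/600 on everything, clobbering execute bits on files that needed them.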

            • unalivejoy
              4 points · 3 months ago

              People probably confuse it with tools like cp, rm, ls, etc., as those use -r for recursion.

            • @[email protected]
              link
              fedilink
              23 months ago

              ls -r actually lists entries in reverse order! It needs -R as well.

              cp and rm accept either.

              Looking at some man pages the only commands I found where -R didn’t work were scp and gzip where it doesn’t do anything, and rsync where it’s “use relative path names”.

              (Caveat: BSD utils might be different, who knows what those devils get up to!)

  • 𝘋𝘪𝘳𝘬
    10 points · 3 months ago

    Some time ago I wanted to clean up my home directory’s file permissions so they would not be readable by group or others. Instead of just removing the group/other permissions, I hard-set all directories to 700 and all files to 600.

    Took quite some time to repair the broken scripts and “application containers”.

    • wellDuuhOP
      4 points · 3 months ago (edited)

      Well, I nuked myself with chmod -R on my home directory this morning… My day is now dedicated to reinstalling NixOS on my laptop… Glad I didn’t do this on a production server…

      Will be extra cautious with -R commands now

      PS: I now see the need for Timeshift despite using NixOS… I could have backed up my home dir and restored the previous state

      • 𝘋𝘪𝘳𝘬
        2 points · 3 months ago

        Imagine accidentally running it on / instead …

        But wasn’t NixOS specifically designed to be protected against such issues?

        • wellDuuhOP
          2 points · 3 months ago (edited)

          😂 Heck no! (Just found out.)

          Nix provides a platform where you define how the system should be, by specifying which versions of apps to install and which configurations to inherit.

          It does not back up any configuration or files that live outside the defined configuration file! And it turns out there are plenty of them.

          What, you changed to a dark theme in Android Studio? That’s stored in .local in your home dir, not in the Nix configuration file.

          For every app I customized from inside the app itself, the changes get thrown into .local.

          Again… Timeshift would have saved me sooo much time.

          This is me, so angry now.

          Edit: I hope this post saves someone a world of pain in the future

          • @[email protected]
            link
            fedilink
            13 months ago

            I’m very confused; I don’t see that -r is a valid option for chmod. What did you even do? I see no clarification anywhere in this post of what actually happened.

            • wellDuuhOP
              2 points · 3 months ago (edited)

              I accidentally scrambled all the permissions in my home directory by running sudo chmod -R -755 .

              The -R does this recursively, throughout every subdirectory under /home/user/.

              While this looks somewhat innocent and harmless, most (if not all) files in the home directory are owned by the normal user. The above command just changed all files’ ownership to root (the privileged user), which brings a lot of nuisance.

              Effects:

              1. To run any app now, you need to open a new terminal and type sudo -E app-name &, every single time. Annoying, but not as much as the following effects…
              2. Running apps this way is not recommended, since the app might change your system configuration without remorse, as it’s launched with root privileges (e.g. network sockets, which might well already be in use by another app or daemon), leading to hundreds of popups telling you that some system app terminated unexpectedly (without any reason whatsoever! Now you have to hunt that reason down in dmesg or something). This can and WILL lead to Linux crashes.
              3. Due to the effects in 2. above, most apps (e.g. Android Studio) WILL prevent you from launching them with root privileges, by quitting immediately when they detect that the privileged user owns the application process. So you wind up with apps that you might never use again 😕

              It’s a world of pain by a thousand tiny cuts.

              Hope this answers all your questions, and yes, it’s -R, not -r.

              Solutions:

              1. Be extra, extra careful while running sudo commands, especially those with -R (recursive) options. Are you in the right directory? (I thought I was; turns out I wasn’t.)

                 In addition to the above, I would try to avoid using . and instead specify the particular directory with ~/path/to/dir. So, instead of sudo chmod -R -755 ., I could have used sudo chmod -R -755 ~/path/to/dir.

              2. Timeshift to the rescue. Back up your home directory (except the Downloads and Video folders), preferably weekly, or daily if you change your system configuration more frequently.
              • @[email protected]
                link
                fedilink
                English
                2
                edit-2
                3 months ago

                The above command just changed all files’ ownership to root (the privileged user)

                Hey uhm, are you sure? That seems wrong.

                For me, the command removes read, write, and execute permissions of the user, and read and execute permissions for everyone else. Which would be expected.

                chown would be the command to change ownership…

                To run any app now, you need to open a new terminal and type sudo -E app-name &,

                You could also try to fix the permissions by running sudo chmod -R u+rwX,g+rX /home/user (note the comma separating the two mode clauses). That will fix all the access permissions first of all. Then, you might have to fix execute permissions (but do this only on files that are meant to be executed!) using chmod +x path/to/file.
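
                The capital X there is what makes the repair safe: it re-adds execute only to directories and to files that already had an execute bit, instead of a blanket +x. A quick demonstration (home-fix/ is a throwaway example directory):

```shell
# A plain file and a script, with group/other access stripped as if damaged.
mkdir -p home-fix/bin
touch home-fix/notes.txt home-fix/bin/tool.sh
chmod 600 home-fix/notes.txt      # plain file, no execute anywhere
chmod 700 home-fix/bin/tool.sh    # script that should stay executable

# X adds x to directories and to tool.sh (already executable),
# but NOT to notes.txt.
chmod -R go+rX home-fix

stat -c '%a %n' home-fix/notes.txt home-fix/bin/tool.sh
# → 644 home-fix/notes.txt
#   755 home-fix/bin/tool.sh
```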

                Solutions: Be extra, extra careful while running sudo commands

                Yes. But you (as the owner) would not even have needed sudo for the chmod command to succeed. I think you might have just slightly misunderstood chmod’s syntax. Your command as given means “recursively, remove the permissions 755” (you have a - in front of them!). It sounds like you probably wanted chmod -R 755 ... (without the -, giving read/write/execute to the owner and read/execute to everyone else). But the symbolic notation above is probably easier to remember. Read the manpage, maybe…

  • @[email protected]
    link
    fedilink
    5
    edit-2
    3 months ago

    Not chmod related, but I’ve made some other interesting mistakes lately.

    Was trying to speed up the boot process on my ancient laptop by changing the startup services. Somehow I ended up with nologin never being unset, which means that regular users aren’t allowed to log in; and since I hadn’t set a root password, no one could log in!

    Installed a different version of Python for a project, accidentally removed the wrong version of Python at the end of the day. When I started the computer the next day, all sorts of interesting things were broken!

  • krolden
    3 points · 3 months ago

    If you’d run chmod with -v (verbose), this wouldn’t have been an issue

  • ∟⊔⊤∦∣≶
    1 point · 3 months ago

    Thank you for your sacrifice.

    I definitely would have made this mistake.