
Need help setting up new SSD as boot drive.

Howdy all! I recently got a bitchin' new SSD, a Samsung 990 EVO Plus 4TB, and I'm struggling to make it my new boot drive while keeping all of my programs, settings, and things just the way I like them. Specs: i7-13700K CPU and an RTX 4070 GPU plugged into an MSI MAG Z790 Tomahawk WiFi mobo, all working harmoniously to run openSUSE Tumbleweed.

Things I have done so far:

  1. Googled that shit; didn't find much that helped, unfortunately. I found a forum thread where a guy was moving from an HDD to an SSD and then removing the HDD, whereas I just want to change the boot drive to the SSD and continue using both drives in the same rig. Someone in that thread recommended Clonezilla, but further down I read that the filesystem UUIDs get copied as well, and that using both drives in the same computer with duplicate UUIDs can cause issues and corrupt data. That scared me off that.
  2. Tried using the YaST Partitioner tool, but the scary warning box it makes you click through and my general lack of any clue what I'm doing scared me off that.
  3. Decided to just fresh-install openSUSE Tumbleweed onto the SSD from a USB stick and then mount the HDD so I could copy everything over that way. Or so I thought. First I ran into the issue of the /home on the HDD not being viewable by my user on the SSD, I guess. Fixed that by unmounting the drive and remounting it with '-o subvol=/' appended to the end of the mount command (got that from Google as well). Now I'm able to view things in /home on the HDD from the user on the SSD, and I've even copied some things over. However, I'm unable to access the .snapshots folder in the root of the HDD, from which I intended to copy the latest snapshot over to the SSD install to bring all of my non-/home stuff across.

So I'm kinda stuck in the middle of transferring now. I have an inclination toward being lazy, so I don't really want to spend time reinstalling all of the Flatpaks and configuring the OS again if I don't have to. Mostly because I've already had one false start with Linux and started fresh then, so this would be the third time setting everything up from scratch. Any help or suggestions are greatly appreciated!

26 comments
  • If you want to clone the existing system onto the new SSD, here are the broad strokes of what you can do.

    1. Get a USB stick and write your Linux distro of choice to it. It doesn't really matter which one; we're just using it to clone the system drive to the new drive. You want the system drive to be totally inactive during the clone, which is why you'll do it from a live USB rather than with the system itself booted.
    2. Shut down the system.
    3. Install the new SSD. DO NOT REMOVE THE CURRENT SYSTEM/BOOT SSD. You should now have two SSDs installed.
    4. If you can't install the second SSD internally, plug it in over USB via an enclosure.
    5. Boot from the live USB.
    6. Open the terminal.
    7. Run lsblk and note the /dev/sdX path of the system drive. Write it down.
    8. From the same output, note the /dev/sdX path of the new SSD. Write it down.
    9. Use the dd command to clone the system drive to the new SSD. The command will look like this:

    `dd if=/dev/existingBootDrive of=/dev/newSSDDrive bs=8M status=progress oflag=direct`

    This command clones the exact data of the system drive to the new SSD. The if portion of the command stands for input file, as in the source of the data you want to clone. Make sure that is your existing boot drive. of is the output file, the destination of the clone. Make sure that is your new SSD.
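    If you want to sanity-check the dd invocation before pointing it at real disks, here's a sketch of the same clone-and-verify flow using throwaway files instead of devices (the .img names are made up for the demo; I've used conv=fsync rather than oflag=direct since regular files don't always support direct I/O):

    ```shell
    # Make a small fake "system drive" (8 MiB of random data).
    dd if=/dev/urandom of=source.img bs=1M count=8 status=none
    # Same style of invocation as the real clone, just file-to-file.
    dd if=source.img of=clone.img bs=8M status=progress conv=fsync
    # A clone must be bit-identical to its source:
    cmp source.img clone.img && echo "clone verified"
    ```

    On the real drives, triple-check the if= and of= arguments against lsblk output; dd will happily overwrite the wrong disk without asking.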

    When you do this, the new drive will appear to be the same size as the old drive. This is a side effect of the cloning, and is easily resolved by resizing the partition(s). How you do this depends on the filesystem, so refer to a resizing guide for your filesystem.

    1. Once you’ve resized the partition/disk, double check the partition UUIDs on the new ssd against what’s in /etc/fstab on the new disk. To do this, run blkid to get a list of all the partitions and their UUIDs. Note the UUIDs of the partitions on the new ssd.
    2. To check /etc/fstab, you’ll have to mount the root (/) partition of the new drive somewhere in the live system. In the terminal you should already be in the home folder of the live system user. Make a new directory with mkdir. Call it whatever you want. So something like: mkdir newboot
    3. run lsblk and make note of the root partition on the new ssd, then mount that to newboot (or whatever you called it) with sudo mount /dev/sdX newboot (where X is the actual device label for the root parition of the new drive`
    4. open /etc/fstab with your terminal text editor of choice. Compare the UUIDs to the ones you noted. If they are the same, you’re golden (they should be the same, but I’ve also had them change on me. ymmv). If they are different, delete the old UUID and replace it with the new UUID for each respective partiiton
    5. Shut down the system
    6. Remove the old boot drive, and install the new boot drive in it’s place
    7. Boot the system. If all goes well, you’ll boot right into tumbleweed as if nothing has changed except you’re running from your shiny new ssd
    8. If it doesn’t boot, boot again from the live usb, and again check the UUIDs to make sure there were no mistakes
    9. Keep the old SSD unmodified in case you need to revert back to it.
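    The UUID comparison in step 13 can be scripted rather than eyeballed. A minimal sketch, using an invented fstab and an invented UUID list standing in for the real /newboot/etc/fstab and the real blkid output:

    ```shell
    # Fake fstab standing in for /newboot/etc/fstab (UUIDs invented for the demo).
    printf '%s\n' \
      'UUID=1111-2222 /boot/efi vfat defaults 0 2' \
      'UUID=aaaabbbb-cccc-dddd-eeee-ffff00001111 / btrfs defaults 0 0' > fstab.sample
    # On the real system this list would come from:  sudo blkid -s UUID -o value
    printf '%s\n' '1111-2222' 'aaaabbbb-cccc-dddd-eeee-ffff00001111' | sort > disk.uuids
    # Extract every UUID fstab expects and compare against what the disk has:
    grep -o 'UUID=[^ ]*' fstab.sample | cut -d= -f2 | sort > fstab.uuids
    cmp -s disk.uuids fstab.uuids && echo "UUIDs match" || echo "MISMATCH: edit fstab"
    ```

    Any mismatch means fstab is pointing at partitions that no longer exist, which is exactly the symptom that stops the clone from booting.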
    • I have hit a bit of a snag. Quick rundown of what I've done: I attempted to use Clonezilla, but learned that it can't clone a larger partition to a smaller one, even if it's mostly "empty" space. So I learned how to use GParted from a live USB of Linux Mint to shrink the partition on my HDD (with all of my stuff on it) to the size of my new SSD (8TB to 4TB) so that it could clone over. Well, Clonezilla ran for a couple hours overnight, and when I went to check things in the morning I got an error attempting to mount the drive as described in step 12. I don't remember the error specifically, something about a superblock, but my googling told me it was most likely an issue with the cloning process.

      So I decided to just follow your directions exactly and use disc destroyer for the first time. It took five-ever, as in almost 5 hours, to copy everything over lol, but I am able to mount it as described in step 12, great joy! But then at step 13, when I type sudo nano /newboot/etc/fstab, I am told that it doesn't exist. I moseyed on over in the file browser and sure enough there is no file at that location. For shits and gigs I ran sudo nano /etc/fstab to look at the one on the live USB version of Linux Mint, and it doesn't seem to be what I should be looking for either:

      overlay / overlay rw 0 0
      tmpfs /tmp tmpfs nosuid,nodev 0 0

      I thought about saying YOLO, but then I remembered this was exactly the UUID stuff I was worried about from that thread I found on Google, so I thought I'd ask before just trying to boot the SSD and seeing what happens.

      Also, I have some clarifying questions about the last few instructions. Step 15 says to remove the source HDD before booting, which I can do to test that the SSD cloned successfully, but after the test I do want to be able to put the HDD back into the computer and reformat it as extra storage space. Does that change anything about what I should do? If I want to use both drives together, do the UUIDs still need to be identical? Or should they be different in that case?

      Thanks again so much for your help. I feel like I'm making progress, and I'm accidentally learning quite a bit in the process.

      • From your post it sounds like you're using Btrfs subvolumes. Did you use the same '-o subvol=/' when mounting newboot in step 12? I'm pretty sure you should be able to see /etc/fstab if you do that.

        • I tried that, and for some reason it only had one directory in /etc, and that was snapper. I unmounted and remounted without the -o subvol=/, checked /etc for fstab again, and this time I found it, so I'm sure I just overlooked it the first time.

          I was able to verify that the UUIDs were all the same, but when I attempted to boot from the SSD it went straight to what I think is the GRUB recovery screen? I just typed shutdown and booted back into the HDD. I guess I'm going to try to clone the drive again. If it doesn't work again I'll probably just bite the bullet, do a fresh install on the SSD, and set everything up manually.

          • Do you have a screenshot of the error or recovery screen?

            Also what do you get if you run sudo btrfs subvolume list /newboot ?

            • [Screenshot of the screen the SSD currently boots to]

              Results of sudo btrfs subvolume list newboot: [screenshot]

              I appreciate your help! I probably won't have time to work on it again really until tomorrow, but I feel like I'm close.

              • Sorry, I'm a little unfamiliar with how openSUSE does things, so that wasn't as useful as I was hoping lol. Did you have the HDD and SSD connected at the same time when you booted? If so, you'll want to disconnect the HDD first.

                Also, when you get to the GRUB boot menu, pressing e shows you the config for the selected boot option; can you post a screenshot of that? You may also be able to tell whether the root UUID listed there matches the one you expect from fstab. You can also remove splash=silent and quiet from the line beginning with linux, which may give you an actual error message, although it's possible the boot process is failing before it even gets to that point. If you post the outputs here I can take a look.

                Edit: looking at your screenshot again, I may have misunderstood what was happening. Is it failing to even load the boot menu? Also, do you know if it's booting via BIOS or UEFI?

                Edit 2: I've thought about this some more and it's looking like it might be a GRUB error rather than anything to do with subvolumes. There are a few things worth checking before going any further. First, boot from your USB with both drives connected and run sudo blkid; assuming your SSD is /dev/sda and your HDD is /dev/sdb, do the UUIDs for the partitions on /dev/sda match the ones on /dev/sdb?

                Again assuming /dev/sda is the new SSD, run sudo mount /dev/sda2 /mnt and sudo mount /dev/sda1 /mnt/boot/efi, then check whether the following two files exist: /mnt/boot/grub2/grub.cfg and /mnt/boot/efi/EFI/opensuse/grub.cfg
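                That existence check can be wrapped in a quick loop. The sketch below fakes the mounted tree with mkdir/touch so it's safe to run anywhere; on a real system you'd drop the two setup lines and change mnt to /mnt:

                ```shell
                # Fake the mounted SSD tree (setup lines are for the demo only).
                mkdir -p mnt/boot/grub2 mnt/boot/efi/EFI/opensuse
                touch mnt/boot/grub2/grub.cfg mnt/boot/efi/EFI/opensuse/grub.cfg
                # The actual check, relative to wherever the SSD root is mounted:
                for f in boot/grub2/grub.cfg boot/efi/EFI/opensuse/grub.cfg; do
                  if [ -f "mnt/$f" ]; then echo "found: $f"; else echo "MISSING: $f"; fi
                done
                ```

                A missing file on either path would point at the GRUB install rather than the filesystem.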

                • Most recently I used GParted to resize the root partition of my HDD (/dev/sda2) to be only a little larger than the amount of data actually on it, taking it from ~7TB to 1TB, mostly so that I wouldn't have to copy "empty" space and also so that the partition would actually fit on my 4TB SSD (/dev/nvme0n1p2). Then I created 3 partitions on the SSD that matched the layout on the HDD (fat=nvme0n1p1, btrfs=nvme0n1p2, linux-swap=nvme0n1p3).

                  I then booted from a USB with Clonezilla Live on it and proceeded to clone partition to partition: sda1>nvme0n1p1, sda2>nvme0n1p2, sda3>nvme0n1p3. The only way I could perform the clones without errors was to run in expert mode, selecting -icds (disables the check for drive size) and -k (can't remember exactly what this one did, something about not copying the partition header or table?). After cloning all partitions I unhooked the HDD inside the case and tried to boot. Hit the same GRUB screen, and hitting e returned: error: ../../grub-core/script/function.c:119: can't find command 'e'.

                  I think it's booting from UEFI? But I'm not sure how to actually tell. I will check for those grub configs in the morning though. Your help is greatly appreciated!

                  • Well, that sounds promising! In that case I suspect the new partitions just have different UUIDs, so you probably need to fix the fstab and regenerate the grub.cfg. Definitely check the UUIDs with sudo blkid and let me know if they are different. It's also probably worth checking that the default Btrfs subvolume hasn't changed: mount both drives, run sudo btrfs subvolume get-default /mountpath for each, and check that the outputs match. If they don't, paste both outputs here and we should be able to fix it.

                    You are almost certainly booting via UEFI, as your system looks to be quite new. Probably the easiest way to check is to look at your fstab; on openSUSE I believe there should be a volume mounted at /boot/efi if you're UEFI booting.
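                    A more direct way to tell, from either the installed system or the live USB: the kernel only creates /sys/firmware/efi when it was booted via UEFI, so a sketch like this settles it:

                    ```shell
                    # /sys/firmware/efi exists only on a UEFI boot.
                    if [ -d /sys/firmware/efi ]; then
                      echo "booted via UEFI"
                    else
                      echo "booted via legacy BIOS"
                    fi
                    ```

                    Note this reports how the *currently running* system was booted, so run it from the environment you care about.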

                    Also, just to help with the next part, could you let me know which distro you're using to boot from USB? From one of your other comments I think it's Mint, isn't it?

                    • sudo blkid shows all UUIDs are the same as the partitions they were cloned from. I'm unable to mount /dev/nvme0n1p2 (the SSD root partition); it gives a "bad superblock" error. A little googling led me to try sudo btrfs rescue super-recover -v /dev/nvme0n1p2, but it told me "all supers are valid, no need to recover". I then ran sudo dmesg and see:

                      BTRFS error (device nvme0n1p2): bad tree block start, mirror 1 want 2521222217728 have 0
                      BTRFS error (device nvme0n1p2): bad tree block start, mirror 2 want 2521222217728 have 0
                      BTRFS error (device nvme0n1p2): failed to read chunk root
                      BTRFS error (device nvme0n1p2): open_ctree failed

                      I think you're right; I'm 99% confident I have seen the /boot/efi directory on my system before.

                      I am using Mint as my live USB image. But now I'm thinking it might have been wiser to use an openSUSE Tumbleweed live image, since I'd reckon it would be better equipped to handle Btrfs.

                      I think I might need to clone the drive again to fix the superblock issue, but I don't know if I want to do it for what would be the 4th or 5th time now. I might just bite the bullet, fresh-install to the SSD again, copy my /home over, and set everything up again. It will be a pain, but not as big a one as this is becoming lol

                      I am very appreciative of your time though! And this experience has certainly taught me more about Linux and given me some familiarity with new commands. So thank you again!

                      • Yeah, sorry, I've not come across that error before, so I have no idea how to fix it without copying the partitions again. I don't think it's anything to do with you not using an openSUSE image; other distros should be just as capable of handling Btrfs. I understand if you've had enough by now and would rather just do a fresh install! However, if you would still like to try cloning, I've tested it and it should be possible using GParted (assuming you can shrink the existing partition small enough to begin with). Small disclaimer: it's possible to lose data if shrinking the partition goes wrong, so don't do this unless you have an existing backup or are comfortable potentially losing the data!

                        First, boot into the live USB with your old HDD connected. Use GParted to shrink the main root partition and apply the changes. Just pick a size that's below the space available on the new drive but a bit bigger than the minimum it will let you shrink to; you can resize it properly once it's copied over. Then reboot and check that the HDD is still bootable. Then boot back into the live USB with both drives connected, delete all the existing partitions off the new SSD, and apply the changes. Open the terminal and run lsblk to check whether the swap partition is mounted; you'll probably see /dev/sda3 listed as swap. If it is, run sudo swapoff /dev/sda3, otherwise it won't let you copy it.

                        You should then be able to use GParted to copy/paste the partitions between the two disks. When you copy the swap partition, make sure it goes at the end of the disk so you can grow the main partition afterwards. For some reason, when testing in a VM I also found I had to increase the size of the swap partition by 1MiB or the copy process kept failing. Apply the changes, then grow the main partition to fill the remaining empty space and apply the changes once again. After that, you should be able to shut down, disconnect the HDD, and boot into a usable system! If you want to use the existing HDD as a data drive, I would just delete all its partitions after plugging it back in and create a new one; that ensures it gets a new UUID. However, I would wait a couple of days to make sure you're happy everything cloned properly!
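                        On the earlier question about reusing the HDD alongside the SSD: once the old drive is repartitioned it gets fresh UUIDs, and you can confirm nothing in the machine still shares one. A quick sketch; the UUID values here are invented, and on a real system the list would come from sudo blkid:

                        ```shell
                        # Fake UUID list standing in for:  sudo blkid -s UUID -o value
                        printf '%s\n' \
                          'aaaabbbb-cccc-dddd-eeee-ffff00001111' \
                          'aaaabbbb-cccc-dddd-eeee-ffff00001111' \
                          '1234abcd-5678-90ef-1234-567890abcdef' > uuids.txt
                        # uniq -d prints only duplicated lines; any output means two
                        # filesystems share a UUID and one of them needs recreating.
                        sort uuids.txt | uniq -d
                        # → aaaabbbb-cccc-dddd-eeee-ffff00001111
                        ```

                        No output from that last pipeline means every filesystem has a unique UUID and both drives can safely live in the same machine.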

    • Thank you so much! I've only got the one SSD and one HDD, sorry that I wasn't very clear in my original post. But I think I can follow your detailed instructions and resolve this. I'll report back when I've had time to do as you've described. Again, much appreciated!
