This problem presupposes metaphysical realism, so you have to be a metaphysical realist to take it seriously. Metaphysical realism is a particular kind of indirect realism which posits that everything we observe is in some sense not real, sometimes likened to a kind of "illusion" created by the mammalian brain (I've also seen people describe it as an "internal simulation"). This is what gets called "consciousness" or sometimes "subjective experience," with the adjective "subjective" used to make clear that it is being interpreted as something unique to conscious subjects and not ontologically real.
If everything we observe is in some sense not reality, then "true" reality must by definition be independent of what we observe. This opens up a whole host of confusing philosophical problems, as it would logically mean the entire universe is invisible/unobservable/nonexperiential, except in the precise configuration of matter in the human brain, which somehow "gives rise to" this property of visibility/observability/experience. It seems difficult to explain this without just presupposing that the property arbitrarily attaches itself to brains in a particular configuration, i.e. treating it as strongly emergent, which is effectively just dualism. Indeed, the philosopher who coined the term "hard problem of consciousness," David Chalmers, is a self-described dualist.
This philosophical problem does not exist in direct realist schools of philosophy, however, such as Jocelyn Benoist's contextual realism, Carlo Rovelli's weak realism, or Alexander Bogdanov's empiriomonism. It is solely a philosophical problem for metaphysical realists, because they begin by positing that there exists some fundamental gap between what we observe and "true" reality, then later have to figure out how to mend the gap. Direct realist philosophies never posit this gap in the first place and treat reality as precisely equivalent to what we observe it to be, so they simply do not posit the existence of "consciousness" at all, and from a direct realist standpoint it would seem odd to even call experience "subjective."
The "hard problem" and the "mind-body problem" are the main reasons I consider myself a direct realist. I find that it is a completely insoluble contradiction at the heart of metaphysical realism, I don't think it even can be solved because you cannot posit a fundamental gap and then mend the gap later without contradicting yourself. There has to be no gap from the get-go. I see these "problems" as not things to be "solved," but just a proof-by-contradiction that metaphysical realism is incorrect. All the arguments against direct realism, on the other hand, are very weak and people who espouse them don't seem to give them much thought.
There is a strange phenomenon in academia of physicists so distraught over the fact that quantum mechanics is probabilistic that they invent a whole multiverse to get around it.
Let's say a photon hits a beam splitter and has a 25% chance of being reflected and a 75% chance of passing through. You could make this prediction deterministic if you claim the universe branches off into a grand multiverse where in 25% of the branches the photon is reflected and in 75% of the branches it passes through. The multiverse would branch off in this way with the same structure every single time, guaranteed.
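To see that the branching story adds no new predictive content, here is a trivial Python sketch (my own illustration, not anything from the physics itself) of the two readings of the same numbers:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Single-world reading: each run of the experiment yields exactly one
# outcome, sampled at random with the predicted probabilities.
runs = rng.choice(["reflected", "transmitted"], size=100_000, p=[0.25, 0.75])
print((runs == "reflected").mean())   # ~0.25 over many runs

# Many-worlds reading: nothing is sampled; every single run "branches"
# with the same fixed weights, so every run is described identically.
branch_weights = {"reflected": 0.25, "transmitted": 0.75}
```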
Believe it or not, while this is a minority opinion, there are quite a few academics who unironically promote this idea just because they like that it restores determinism to the equations. One of them is David Deutsch who, to my knowledge, was the first to publish a paper arguing that quantum computers work by delegating subtasks to branches of the multiverse.
It's just not true at all that the quantum chip gives any evidence for the multiverse, because believing in the multiverse does not make any new predictions. Those who propose this multiverse view (called the Many-Worlds Interpretation) do not actually believe the other branches of the multiverse would be detectable. It is something purely philosophical, introduced to restore determinism, and so there is no test you could do to confirm it. If you believe the outcomes of experiments are just random and there is one universe, you would also predict that we can build quantum computers, so the invention of quantum computers in no way proves a multiverse.
It does not lend credence to the notion at all; that statement doesn't even make sense. Quantum computing is in line with the predictions of quantum mechanics. It is not new physics; it is engineering, the application of physics we already know to build things, so it does not even make sense to suggest that engineering something amounts to "discovering" something fundamentally new about nature.
MWI is just a philosophical worldview from people who dislike that quantum theory is random. Outcomes of experiments are nondeterministic. Bell's theorem proves you cannot simply interpret the nondeterminism as hidden deterministic chaos, because any attempt to restore a deterministic outcome at all would violate other known laws of physics, so you have to just accept that it is nondeterministic.
MWI proponents, who really dislike nondeterminism (for reasons I don't particularly understand), came up with a "clever" workaround. Rather than interpreting probability distributions as just that, probability distributions, you instead interpret them as physical objects in an infinite-dimensional space. Let's say I flip two coins, so the possible outcomes are HH, HT, TH, and TT, and each outcome can be assigned a probability value. Rather than interpreting the probability values as the likelihoods of events occurring, you interpret the "faceness" property of the coins as a multi-dimensional property that is physically "stretched" across four dimensions, where the amount it is "stretched" in each depends upon those values. For example, if the probabilities are 25% HH, 0% HT, 25% TH, and 50% TT, you interpret it as if the coins' "faceness" property is physically stretched out across four physical dimensions with extents of 0.25 HH, 0 HT, 0.25 TH, and 0.5 TT.
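As a concrete illustration (my own sketch, not anything from the original argument), that two-coin distribution is just a list of four numbers, and the disagreement is over what the numbers mean:

```python
import numpy as np

# One number per possible outcome of the two coins.
outcomes = ["HH", "HT", "TH", "TT"]
distribution = np.array([0.25, 0.0, 0.25, 0.5])

# Ordinary reading: each component is the likelihood of that outcome.
assert np.isclose(distribution.sum(), 1.0)

# MWI-style reading: the same four numbers are the physical "extent" of
# the coins' faceness along four dimensions, one dimension per outcome.
for outcome, extent in zip(outcomes, distribution):
    print(outcome, extent)
```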
Of course, in real quantum mechanics it gets even more complicated than this, because probability amplitudes are complex-valued, giving you an additional degree of freedom, so the two "quantum" coins (think of electron spin states) would be stretched out in an eight-dimensional physical space. Additionally, notice how the number of dimensions depends upon the number of possible outcomes, which grows exponentially as 2^N with the number N of coins under consideration. MWI proponents thus posit that each description like this is actually just a limited description due to a limited perspective. In reality, the dimensionality of this physical space would be 2^N where N is the number of possible states of all the particles in the entire universe, so basically infinite. The whole universe is a single giant infinite-dimensional object propagating through this infinite-dimensional space, something they call the "universal wave function."
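To make the bookkeeping concrete, here is a small NumPy sketch (my own illustration, with made-up amplitudes) of how the component count blows up as you combine systems:

```python
import numpy as np

# One "quantum coin": two complex amplitudes, one per face (made-up values).
coin = np.array([1.0, 1.0j]) / np.sqrt(2)

# Combining coins multiplies their component counts together, so the
# joint state vector for N coins has 2**N complex components.
state = coin
for _ in range(9):            # combine 10 coins in total
    state = np.kron(state, coin)
print(state.size)             # 1024 == 2**10
```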
If you believe this, then it kind of restores determinism. If there is a 50% probability a photon will reflect off of a beam splitter and a 50% probability it will pass through, what MWI argues is that there is in fact a 100% chance it will pass through and be reflected simultaneously, because it basically is stretched out in proportions of 0.5 going in both directions. When the observer goes to observe it, the observers themselves also get stretched out in those proportions, simultaneously seeing it pass through and seeing it be reflected. Since this outcome is guaranteed, it is deterministic.
But why do we only perceive a single outcome? MWI proponents chalk it up to how our consciousness interprets the world: it forms models based on a limited perspective, and these perspectives become separated from each other in the universal wave function during a process known as decoherence. This leads to the illusion that only a single perspective can be seen at a time. Even though the human observer is actually stretched out across all possible outcomes, they only believe they can perceive one of them at a time, and which one we settle on is random. I guess it's kind of like the blue-black/white-gold dress thing: your brain just kind of picks one at random, but the randomness is apparent rather than real.
This whole story really is not necessary if you are just fine with saying the outcome is random. There is nothing about quantum computers that changes this story. Crazy David has a bad habit of publishing embarrassingly bad papers in favor of MWI. In one paper he defends MWI with a false dichotomy, pitching MWI as if its only competition were Copenhagen, then straw-manning Copenhagen by equating it with an objective collapse model, a characterization no supporter of that interpretation I am aware of would ever agree to.
In another paper, where he brings up quantum computing, he basically just argues that MWI must be right because it gives a more intuitive understanding of how quantum computing actually provides an advantage: by delegating subtasks to different branches of the multiverse. It's bizarre to me how anyone could think something being "intuitive" or not (it's debatable whether it even is more intuitive) counts as evidence in its favor. At best, it is an argument for utility: if you personally find MWI intuitive (I don't) and it helps you solve problems, then have at it, but pretending this is somehow evidence that there really is a multiverse makes no sense.
Yes, quantum computers can only break a certain class of asymmetric ciphers, but we already have replacements, such as lattice-based cryptography, which no known quantum algorithm can break. NIST even has on their website source code you can download for programs that implement some of these ciphers. We already have standards for quantum-resistant cryptography. Most companies have not switched over since it's slower, but I know some VPN programs claim to have implemented them.
To put it as simply as possible, in quantum mechanics, the outcome of events is random, but unlike classical probability theory, you can express probabilities as complex numbers. For example, it makes sense in quantum mechanics to say an event has a -70.7i% chance of occurring. This is a bit cumbersome to explain, but the purpose of this is that there is a relationship between [the relative orientation between the measurement settings and the physical system being measured] and [the probability of measuring a particular outcome]. Using complex numbers gives you the additional degrees of freedom needed to represent both of these things simultaneously and thus relate them together.
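For what it's worth, in standard textbook language these complex numbers are called probability amplitudes, and the probability you would actually check against experiment is the squared magnitude of the amplitude. Here's a minimal sketch of that bookkeeping, reusing the -70.7i% example from above:

```python
import numpy as np

# The "-70.7i%" from the text, written as a complex amplitude.
amplitude = -0.707j

# The observable probability is the squared magnitude (the Born rule),
# which is always an ordinary number between 0 and 1.
print(abs(amplitude) ** 2)                        # ~0.5, a 50% chance

# Rotating the amplitude's phase changes nothing on its own...
print(abs(np.exp(1j * 1.23) * amplitude) ** 2)    # still ~0.5
# ...but the phase matters as soon as amplitudes get ADDED together,
# which is where the interference described below comes from.
```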
In classical probability theory, since probabilities are only between 0% and 100%, they can only accumulate, while the fact that probabilities in quantum mechanics can be negative allows them to cancel each other out. The likelihood of one event, rather than adding onto another, can basically subtract from it, giving you a total chance of 0% of it occurring. This is known as destructive interference and is pretty much the hallmark effect of quantum mechanics. Even entanglement is really just interference between statistically correlated systems.
If you have seen the double-slit experiment: the particle has some probability of going through one slit or the other, and depending on which slit it goes through, it will have some probability of landing somewhere on the screen. You can compute these two possible paths separately and get two separate probability distributions for where it will land on the screen, which would look like two blobs of possible locations. However, since you do not know which slit it will pass through, to compute the final distribution you need to overlap the two possibilities, effectively adding the two paths' contributions together. Because those contributions are complex-valued, some parts cancel each other out, leaving a 0% chance that the particle will land there, which is why dark bands show up on the screen, in what is referred to as the interference pattern.
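Here is a deliberately stripped-down NumPy sketch of that addition (my own toy model: far-field phases only, ignoring the single-slit envelope, so each slit alone gives a flat distribution here; all the numbers are made up):

```python
import numpy as np

wavelength = 1.0
k = 2 * np.pi / wavelength                 # wavenumber
slit_separation = 5.0
screen_distance = 100.0

x = np.linspace(-40, 40, 2001)             # positions along the screen
# Path length from each slit to each point on the screen.
r1 = np.hypot(screen_distance, x - slit_separation / 2)
r2 = np.hypot(screen_distance, x + slit_separation / 2)

# One complex amplitude per path.
amp1 = np.exp(1j * k * r1)
amp2 = np.exp(1j * k * r2)
blob1 = np.abs(amp1) ** 2                  # pattern from slit 1 alone
blob2 = np.abs(amp2) ** 2                  # pattern from slit 2 alone

# Adding the PROBABILITIES can never cancel: no dark bands...
print((blob1 + blob2).min())               # 2.0 everywhere
# ...but adding the AMPLITUDES first lets them cancel, producing
# near-zero minima: the dark bands of the interference pattern.
combined = np.abs(amp1 + amp2) ** 2
print(combined.min())                      # ~0
```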
Complex-valued probabilities are so strange that some physicists have speculated that maybe there is an issue with the theory. The physicist David Bohm, for example, had the idea of rewriting the complex-valued wave as two separate real-valued functions (a magnitude and a phase). When he did that, he found he could replace the complex-valued probabilities with real-valued probabilities alongside a propagating "pilot wave," kinda like a field.
However, the physicist John Bell later showed that if you do this, then the only way to reproduce the predictions of quantum mechanics would be to violate the speed of light limit. This "pilot wave" field would not be compatible with other known laws of physics, specifically special relativity. Indeed, he would go on to publish a theorem proving that any attempt to get rid of these weird canceling probabilities and replace them with more classical probabilities ends up breaking other known laws of physics.
That's precisely where "entanglement" comes into the picture. Entanglement is just a fancy word for a statistically correlated system. But the statistics of correlated systems, when you have complex-valued probabilities, can make different predictions than when you have only real-valued probabilities; they can lead to certain cancellations that you would not expect otherwise. What Bell proved is that these cancellations in an entangled system could only be reproduced by a classical probability theory if it violated the speed of light limit. Despite a common misconception, Bell did not prove there is anything superluminal in quantum mechanics, only that you cannot replace quantum mechanics with a classical-esque theory without it violating the speed of light limit.
Despite the fact that there are no speed of light violations in quantum mechanics, these interference effects produce results similar to what you would expect if you could violate the speed of light limit. This ultimately allows for more efficient processing and exchange of information throughout the system.
A simple example of this is quantum superdense coding. Let's say I want to send a person a two-bit message, but I don't know yet what the message will be, and I send him a single qubit now anyway (a qubit is the quantum analogue of a bit: when measured, it is always either 0 or 1). Then, a year later, I decide what the message should be, so I send him one more qubit. Interestingly enough, it is in principle possible to set up a situation whereby the recipient, who now holds two qubits, can recover both bits of the message from them, despite the fact that I transmitted one of those qubits long before I even decided what I wanted the message to be.
It's important to understand that this is not because qubits can actually carry more than one bit of information. No one has ever observed a qubit that was not either a 0 or a 1. It cannot be both simultaneously, nor hold any additional information beyond 0 or 1. It is purely a result of the strange cancellation effects of the probabilities: the likelihoods of different events occurring cancel out in a way that is very different from everyday intuition, and you can make clever use of this to cause information to be (locally) exchanged throughout a system more efficiently than should be possible in classical probability theory.
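For the curious, here is a minimal NumPy sketch of the superdense coding protocol described above; the particular assignment of two-bit messages to Pauli operations is just one conventional choice:

```python
import numpy as np

# Pauli operators and the identity.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# Shared entangled pair: (|00> + |11>) / sqrt(2). Qubit 0 stays with the
# sender; qubit 1 was handed to the recipient long ago.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

# The sender encodes two classical bits by acting ONLY on their own qubit.
encodings = {
    (0, 0): I,
    (0, 1): X,
    (1, 0): Z,
    (1, 1): Z @ X,
}

# The four Bell states, one per two-bit message, which the recipient can
# distinguish with a joint measurement once both qubits are in hand.
bell_basis = {
    (0, 0): np.array([1, 0, 0, 1]) / np.sqrt(2),
    (0, 1): np.array([0, 1, 1, 0]) / np.sqrt(2),
    (1, 0): np.array([1, 0, 0, -1]) / np.sqrt(2),
    (1, 1): np.array([0, 1, -1, 0]) / np.sqrt(2),
}

for message, gate in encodings.items():
    sent = np.kron(gate, I) @ bell        # sender touches only qubit 0
    # Probability of each joint-measurement outcome on the recipient's side.
    probs = {m: abs(np.vdot(b, sent)) ** 2 for m, b in bell_basis.items()}
    decoded = max(probs, key=probs.get)
    print(message, "->", decoded)         # every message decodes perfectly
```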
There is another fun example known as the CHSH game. The game is simple: each team is composed of two members, who at the start of the round are each given a card randomly marked 0 or 1. Call the number on the first team member's card X and the number on the second team member's card Y. The objective is for the two team members to turn over their cards and each write their own 0 or 1 on the back; call what they write A and B. When the host collects the cards, he checks whether X AND Y = A XOR B, and if the equality holds, the team scores a point.
The only kicker is that the team members are not allowed to talk to one another; they have to come up with their strategy beforehand. I would challenge you to write out a table and try to think of a strategy that will always work. You will find that it is impossible to score a point more than 75% of the time if the team members cannot communicate, but if they can, they can score a point 100% of the time. If the team members were instead given statistically correlated qubits at the beginning of the round and still disallowed from communicating, they could make use of interference effects to score a point ~85% of the time. They can perform better than should be physically possible in a classical probability theory.
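If you don't want to take the 75% and ~85% figures on faith, here is a NumPy sketch (my own) that brute-forces every deterministic classical strategy and then evaluates one standard quantum strategy: a shared Bell pair, with measurement angles 0 and π/4 for one player and ±π/8 for the other, which wins with probability cos²(π/8):

```python
import numpy as np
from itertools import product

# Classical strategies: each player fixes an answer for each possible card.
best_classical = 0.0
for a0, a1, b0, b1 in product([0, 1], repeat=4):
    wins = sum((x & y) == ([a0, a1][x] ^ [b0, b1][y])
               for x, y in product([0, 1], repeat=2))
    best_classical = max(best_classical, wins / 4)
print(best_classical)             # 0.75

def measurement(theta):
    """Projectors for outcomes 0 and 1 when measuring at angle theta."""
    v0 = np.array([np.cos(theta), np.sin(theta)])
    v1 = np.array([-np.sin(theta), np.cos(theta)])
    return [np.outer(v0, v0), np.outer(v1, v1)]

# Quantum strategy: shared Bell pair, angles chosen per the card received.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
angles_a = [0, np.pi / 4]
angles_b = [np.pi / 8, -np.pi / 8]

win = 0.0
for x, y in product([0, 1], repeat=2):
    for a, b in product([0, 1], repeat=2):
        proj = np.kron(measurement(angles_a[x])[a],
                       measurement(angles_b[y])[b])
        p = np.real(np.vdot(bell, proj @ bell))
        if (x & y) == (a ^ b):
            win += p / 4          # the four card deals are equally likely
print(win)                        # ~0.8536 == cos(pi/8) ** 2
```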
While you can build a quantum computer using electron spin, as you mentioned, it doesn't have to work that way. There are many different technologies that operate differently. All you need is something that can exhibit these quantum interference effects, something that can only be accurately predicted using these complex-valued probabilities. Electron spin is what people often think of first because it is simple to comprehend: electrons can only have two spin values, up or down, which you can map to 0 and 1, and you can directly measure spin using a Stern-Gerlach apparatus. This just makes electron spin a simple way to explain how quantum computing works, but quantum computers definitely do not all operate on electron spin. Some operate on photon polarization, for example. Some operate on the motion of ytterbium ions trapped in an electromagnetic field.
It's kind of like how you can implement bits using different voltage levels, where 0 V = 0 and 3.3 V = 1, or using the direction of magnetic polarization on a spinning platter in a hard drive, where polarization in one direction = 0 and polarization in the opposite direction = 1. There are many different ways of physically implementing a bit. Similarly, there are many different ways of implementing a qubit. A qubit implementation likewise needs at minimum two discrete states to assign to 0 and 1, but on top of this it needs to follow the rules of quantum probability theory rather than classical probability theory.