It's not that Roc only supports base-10 arithmetic. It also supports the typical base-2 floating-point numbers, because in many situations the performance benefits are absolutely worth the cost of precision loss. What sets Roc apart is its choice of default: when you write decimal literals like 0.1 or 0.2 in Roc, by default they're represented as 128-bit fixed-point base-10 numbers that never lose precision, making them reasonable to use for calculations involving money.
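To see the default in action, here's a one-line test (runnable with roc test); since nothing else constrains the literals' types, they default to Dec and the comparison is exact:

```roc
# With no other type information, these literals default to Dec,
# so the addition is exact and this expectation passes.
expect 0.1 + 0.2 == 0.3
```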
Roc's Dec implementation (largely the work of Brendan Hansknecht; thanks, Brendan!) is essentially a 128-bit integer in memory, except one that gets rendered with a decimal point in a hardcoded position: the low 18 digits are treated as coming after the decimal point. This means addition and subtraction use the same instructions as normal integer addition and subtraction. Those run so fast, they can actually outperform addition and subtraction of 64-bit base-2 floats!
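To make that concrete, here's a rough sketch of the idea in Roc itself. Everything here is made up for illustration (the name addSketch, and a 4-digit scale instead of Dec's real 18-digit one); the point is just that two numbers sharing a fixed scale can be added with one ordinary integer addition:

```roc
# Illustration only: pretend a decimal is an I128 storing the number
# scaled by 10^4, so 2.5000 is stored as 25_000 and 0.1234 as 1_234.
# (The real Dec scales by 10^18.)
addSketch : I128, I128 -> I128
addSketch = \a, b ->
    # Both values use the same fixed scale, so plain integer addition
    # produces the correctly scaled sum. No rescaling is needed.
    a + b

expect addSketch 25_000 1_234 == 26_234 # 2.5000 + 0.1234 == 2.6234
```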
Multiplication and division are a different story. Those require splitting the 128 bits into two 64-bit halves, operating on those, and then recombining them into a 128-bit result. (You can look at the implementation to see exactly how it works.) The end result is that multiplication is usually several times slower than 64-bit float multiplication, and division is slower still.
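Here's why multiplication is more work, sketched at half width so the intermediate result fits in an I128. The names and the 9-digit scale are invented for illustration; the real Dec can't widen like this, because there's no 256-bit integer to widen into, which is exactly why it has to split into 64-bit halves and recombine:

```roc
# Illustration only: a 64-bit fixed-point number with 9 fractional digits.
scale : I128
scale = 1_000_000_000

mulSketch : I64, I64 -> I64
mulSketch = \a, b ->
    # Multiplying two scaled values doubles the scale, so we widen to
    # I128 to avoid overflow, multiply, then divide one scale back out.
    wide = Num.toI128 a * Num.toI128 b
    Num.toI64 (wide // scale)

expect mulSketch 500_000_000 500_000_000 == 250_000_000 # 0.5 * 0.5 == 0.25
```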
Some operations, such as sine, cosine, tangent, and square root, have not yet been implemented for Dec. (If implementing any of these for a fixed-point base-10 representation sounds like a fun project, please get in touch! People on Roc chat are always happy to help new contributors get involved.)
If a function takes a number, whether that's an integer, a floating-point base-2 number, or a Dec, you can always write 5 as the number you're passing in. (If it takes an integer, you'll get a compiler error if you try to write 5.5, but 5.5 will be accepted for either floats or decimals.)
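For example (these functions are hypothetical, but the literal behavior is as described above):

```roc
half : Dec -> Dec
half = \x -> x / 2

double : F64 -> F64
double = \x -> x * 2

increment : I64 -> I64
increment = \n -> n + 1

expect half 5 == 2.5    # the literal 5 is accepted as a Dec...
expect double 5 == 10   # ...and as an F64...
expect increment 5 == 6 # ...and as an integer.
# increment 5.5 would not compile: 5.5 can't be an integer.
```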
Because of this, it's actually very rare in practice that you'll write 0.1 + 0.2 in a .roc file and have it use the default numeric type of Dec. Almost always, the type in question will end up being determined by type inference, based on how you use the result of that operation.
For example, if you have a function that says it takes a Dec, and you pass in (0.1 + 0.2), the compiler will do Dec addition and that function will end up receiving 0.3 as its argument. However, if you have a function that says it takes F64 (a 64-bit base-2 floating-point number), and you write (0.1 + 0.2) as its argument, the compiler will infer that those two numbers should be added as floats, and you'll end up passing in the not-quite-0.3 number we've been talking about all along.
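Putting both cases side by side (asDec and asF64 are hypothetical annotated functions, used only to pin down the inferred types):

```roc
asDec : Dec -> Dec
asDec = \x -> x

asF64 : F64 -> F64
asF64 = \x -> x

# Inferred as Dec addition: exact, so this passes.
expect asDec (0.1 + 0.2) == 0.3

# Inferred as F64 addition: the result is the familiar
# 0.30000000000000004, which is not equal to 0.3.
expect asF64 (0.1 + 0.2) != 0.3
```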