Facebook uses Mercurial, but when people praise their developer tooling it's not just that. They have their own CLI built on top of Mercurial that cleans up its errors and commands, it all runs on their own virtual filesystem (EdenFS), their dev testing happens in a customized version of Chromium, they sync code through their own in-house equivalent of GitHub, and all of it connects super nicely into their own customized version of VS Codium.
The source control was so smooth and pleasant that it convinced me that git isn't the be-all and end-all, and the general developer focus was super nice. But some of that tooling was pretty janky and poorly documented, and you had no Stack Overflow to fall back on. And some of it (like EdenFS) really felt like the duct tape holding that overloaded monorepo together (complete with all the jankiness of a duct-tape solution).
The in-house tooling from the massive tech companies is very cool, but I always wonder how it impacts transferable skills. I work in a much smaller shop but intentionally make tech decisions that will give our engineers a highly transferable skill set. If someone wants to leave, it should be easy for them to bring their knowledge to bear elsewhere.
Speaking from my own experience and that of a few other seniors I work with: you try to recreate the solutions you liked at those smaller shops. It may not be identical, but you know what's possible.
I came into a company that didn't have a system for managing errors. At my old job, errors would get grouped automatically and work could be prioritized through the groupings. The new company only handled errors when someone happened to see one, by word of mouth.
Immediately went to work setting up a similar system.
The in-house tooling from the massive tech companies is very cool
I agree. I personally know nothing about tooling like this, but I went through the tooling used at Rockstar (for GTA V, for example) and it was very cool to see how much they had automated and how much easier they'd made the tools to use.
I'm pleased to report that git has made significant strides, and git submodule can now be easily used to achieve a mono-repo-like level of painful jankiness.
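For the uninitiated, a rough sketch of what "easily" looks like (URLs made up):

    # add a shared library as a submodule
    git submodule add https://example.com/shared-lib.git libs/shared-lib
    git commit -m "Add shared-lib"

    # a fresh clone silently gives you empty submodule directories
    # until you remember the incantation:
    git submodule update --init --recursive

    # and git pull doesn't touch submodules either, unless:
    git pull --recurse-submodules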
My best VCS experience so far was working with Plastic SCM. I like how it tracks merges, the code review workflow is also nice, and in general it was pleasant to work with.
Fuck Unity, though, for paywalling it into unusability. Another amazing product bought and killed by absurd monetization, same as Parsec.
I only really have experience from after the acquisition, and it's the only Unity product I've actually found that I like. My only major complaint is that it's not compatible with the base configuration of Palo Alto firewalls, but that's really more of a Palo Alto problem than a Parsec problem.
I still use Parsec for remote access and have no issues with it; it works great and I like it. However, they also used to offer a free SDK (a Unity plugin) to integrate remote play natively into your game (just like the "Invite to Steam Remote Play" button from the Steam SDK), which was exactly what we needed. Steam Remote Play never worked without issues for us, while Parsec worked amazingly well every time we tried it.
I found numerous mentions of the Parsec SDK and how easy it is to integrate, but after Unity bought it, I couldn't find it anywhere. The only mention I could find said that if you need it, you should contact them.
So I did, explaining that we are a small team of students working in our free time on an offline, co-op-only two-player game, that Steam Remote Play wasn't working for us, and that since I had great experience with Parsec, I wanted to know what we had to do to get access to the SDK/Unity plugin.
Unity's answer? Sure, no problem, they would be happy to give us access, with the first step being that we pay them $1,000,000 for it.
Like, wtf? Did they even read the email? How out of touch do you have to be to casually ask a small student team to pay $1,000,000?
I use git daily and still wonder why I had fewer merge issues on a larger team in the 1990s with command-line RCS on Solaris. Maybe we were just more disciplined then; I know we were less likely to work on the same file concurrently. I feel like I spend more time fighting the tools than I ever used to. Some of that is because of the dumb decisions that were made on our project a decade or more ago.
What I was trying to say is that the tools were better about letting us know that another developer was modifying the same file as us, so we would collaborate before creating the conflict.
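For anyone who never used it: RCS worked on explicit per-file locks, so the collision surfaced before you started editing instead of at merge time. Roughly like this, from memory (file name made up):

    # check the file out with a lock; now only you can check in changes
    co -l foo.c
    # ...edit...
    ci -u foo.c    # check in, release the lock, keep a working copy

    # a second developer trying to lock the same file is refused
    # and told who holds the lock (message approximate):
    co -l foo.c
    # co: RCS/foo.c,v: revision 1.4 already locked by alice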
As far as performance goes, Microsoft did manage to make git work for them later on (with many contributions upstreamed and homegrown solutions developed; but then, Facebook is the same, isn't it?).
Mercurial does have a few things going for it, though for most use cases it's behind Git in almost all metrics.
I really do like that it keeps a commit number counter: it's a lot easier to tell that "commit 405572" is newer than "commit 405488" than to compare Git's "commit ea43f56" vs "commit ab446f1". (Though Git does have the describe format, which helps somewhat in this regard, e.g. "0.95b-4204-g1e97859fb" is the 4204th commit after tag 0.95b.)
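If you want a monotonic number out of git, the closest equivalent I know of is counting reachable commits; the catch is that it's only meaningful within one line of history:

    # number of commits reachable from HEAD
    git rev-list --count HEAD

    # the describe format mentioned above: nearest tag, commits since it, short hash
    git describe --tags --long
    # e.g. 0.95b-4204-g1e97859fb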
Rebasing updates those revision numbers, though. It's fine; they're only local anyway.
One thing that makes mercurial better for rebase based flows is obsolescence markers. The old version of the commits still exist after a rebases and are marked as being made obsolete by the new commits. This means somebody you've shared those old commits with isn't left in hyperspace when they fetch your new commits. There's history about what happened being shared.
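If I remember the workflow right (this assumes the rebase and evolve extensions are enabled), you can inspect that shared history afterwards:

    # rebase a feature branch; the old commits become obsolete, not deleted
    hg rebase -s myfeature -d default

    # show the obsolescence history of the current commit:
    # its predecessors and the operations that rewrote them
    hg obslog -r .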
jujutsu is a fresh take on git: you describe the work you're about to do with jj new -m 'message', then do the work. Anything not ignored in .gitignore is ready to commit with jj ci; you don't have to git add anything, and there's no futzing with stashes to switch or refocus work. Need that file back? jj restore FILENAME.
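The whole loop looks something like this (commands from memory; file name made up):

    # describe the change up front, then just start editing
    jj new -m "teach the parser to handle comments"

    # no staging area: everything not ignored is already part of the change
    jj st            # review what you've touched
    jj ci            # finalize it and start the next change

    # clobbered a file? pull it back from the parent revision
    jj restore src/parser.rs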
It's very optimistic to think people will be able to describe what they're going to do before they do it. I find things rarely go exactly as planned and my commit messages usually include some nuance about my changes that I didn't anticipate.
That brings more problems. Despite the scaling challenges, monorepos are clearly the way to go for company code in most cases.
Unfortunately my company uses submodules heavily and it is a complete mess: people duplicating work all over the place, updates in submodules breaking their superprojects because testing becomes intractable, and tons of duplicate submodules because of transitive dependencies. Making cross-repo changes becomes extremely difficult.
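The cross-repo change problem is structural: a superproject only pins a submodule commit, so even a one-line fix is at least two commits and two pushes, multiplied by every repo that embeds the thing. A sketch with made-up names:

    # 1. fix the bug inside the submodule (which checks out detached
    #    by default, so first get on a branch)
    cd libs/shared-lib
    git switch main
    git commit -am "Fix off-by-one in parser"
    git push

    # 2. bump the recorded pointer in this superproject...
    cd ../..
    git add libs/shared-lib
    git commit -m "Bump shared-lib"
    git push
    # 3. ...and repeat step 2 in every other superproject that embeds
    #    shared-lib, directly or transitively.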
But without submodules, how can one share code between (mono-)repos that rely on the same common "module"/library/etc.?
Is it a matter of not letting submodule usage get out of hand and sticking to an upper limit on submodules, or should submodules be avoided entirely for monorepos of a certain scale because there's a better option?
Which version control system is used always depends on the organisation. We actually use both Git and SVN, and each makes sense for the departments using it.
While I'm not using it myself, since we started our small-team hobby project in git and moving away from it would be a bother, there is one SVN use case that would save us a lot of headaches.
SVN being centralized means you can lock files. Merging Unity scenes is a real pain: the tooling mostly doesn't work properly, and you have no quick way to check that nothing was lost. Usually, with several people working on a scene, it ended with us deciding whose work to scrap and redo, because merging wouldn't work properly. You'd end up in a situation where two people had each made hundreds or thousands of changes to a scene, you knew the Unity merge tool was wonky at best, and checking that all of those changes merged properly would take longer and be more error-prone than simply copying one person's work over the other's.
We resorted to simply asking in chat whether anyone had uncommitted work, but with SVN (or any other centralized VCS, I suppose) we wouldn't have to bother with that: you simply lock the scene file and you're safe.
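For reference, the SVN side of this is a couple of commands, and you can even mark scene files so they're read-only until locked (paths made up):

    # make the scene read-only in working copies until someone holds the lock
    svn propset svn:needs-lock yes Assets/Scenes/Main.unity
    svn commit -m "Require a lock on the main scene"

    # grab the lock before editing; nobody else can commit the file meanwhile
    svn lock -m "editing lighting" Assets/Scenes/Main.unity
    # ...edit in Unity...
    svn commit -m "Relight main scene"   # committing releases the lock by default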
Because Facebook is a terrible company that can't even build a functional website. They think they know better than the entire industry, yet can't get basic features like browser history, link sharing, back buttons, or even comments and zooming working. Fuckin idiots.