"We have data on the performance of >50k engineers from 100s of companies. ~9.5% of software engineers do virtually nothing: Ghost Engineers.”
"We have data on the performance of >50k engineers from 100s of companies. ~9.5% of software engineers do virtually nothing: Ghost Engineers.”
Last week, a tweet by Stanford researcher Yegor Denisov-Blanch went viral within Silicon Valley. “We have data on the performance of >50k engineers from 100s of companies,” he tweeted. “~9.5% of software engineers do virtually nothing: Ghost Engineers.”
Denisov-Blanch said that tech companies have given his research team access to their internal code repositories (their internal, private GitHubs, for example) and that, for the last two years, he and his team have been running an algorithm against individual employees’ code. He said that this automated code review shows that nearly 10 percent of employees at the companies analyzed do essentially nothing, and are handsomely compensated for it. A paper about the project offers few details about how the team’s review algorithm works, but it says the model attempts to answer the same questions a human reviewer might ask about any specific segment of code, such as:
“How difficult is the problem that this commit solves?
How many hours would it take you to just write the code in this commit assuming you could fully focus on this task?
How well structured is this source code relative to the previous commits? Quartile within this list
How maintainable is this commit?”
Ghost Engineers, as determined by his algorithm, perform at less than 10 percent of the output of the median software engineer (as in, they are measured as being at least 10 times less productive than the median worker).
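The paper does not publish its scoring model, but the arithmetic of the threshold itself is simple. The sketch below is a purely illustrative Python example, not the researchers’ code: it assumes you already have some per-commit productivity score for each engineer (the names and numbers here are made up) and flags anyone whose total falls below 10 percent of the median.

```python
from statistics import median

def flag_ghost_engineers(scores_by_engineer, threshold=0.10):
    """Flag engineers whose total scored output falls below a fraction
    of the median engineer's output.

    `scores_by_engineer` maps an engineer ID to a list of per-commit
    productivity scores (however those are produced -- the paper does
    not publish its scoring model, so this is purely illustrative).
    """
    totals = {eng: sum(scores) for eng, scores in scores_by_engineer.items()}
    med = median(totals.values())
    cutoff = threshold * med
    return [eng for eng, total in totals.items() if total < cutoff]

# Example with hypothetical per-commit scores.
example = {
    "alice": [3.0, 2.5, 4.0],
    "bob":   [2.0, 3.5],
    "carol": [0.2, 0.1],   # well under 10% of the median total
}
print(flag_ghost_engineers(example))  # ['carol']
```

Everything interesting, of course, is hidden inside how those per-commit scores get produced, which is exactly the part the paper does not detail.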
Denisov-Blanch wrote that tens of thousands of software engineers could be laid off and that companies could save billions of dollars by doing so. “It is insane that ~9.5 percent of software engineers do almost nothing while collecting paychecks,” Denisov-Blanch tweeted. “This unfairly burdens teams, wastes company resources, blocks jobs for others, and limits humanity’s progress. It has to stop.”
The Stanford research has not yet been published in any form outside of a few graphs Denisov-Blanch shared on Twitter. It has not been peer reviewed. But the fact that this sort of analysis is being done at all shows how focused tech companies have become on the idea of “overemployment,” where people work multiple full-time jobs without the knowledge of their employers, and on getting workers to return to the office. Alongside Denisov-Blanch’s project, there has been an incredible amount of investment in worker surveillance tools. (Whether a ~9.5 percent rate of ineffective workers is high is hard to say; it’s unclear what percentage of workers overall are ineffective, or what other industries’ numbers look like.)
Over the weekend, a post on the r/sysadmin subreddit went viral both there and on the r/overemployed subreddit. In that post, a worker said they had just sat through a sales pitch from an unnamed workplace surveillance AI company that purports to give employees “red flags” if their desktop sits idle for “more than 30-60 seconds,” meaning “no ‘meaningful’ mouse and keyboard movement,” attempts to create a “productivity graph” based on computer behavior, and pits workers against each other based on the time it takes to complete specific tasks.
What is becoming clear is that companies are obsessed with catching employees who are underperforming or who are functionally doing nothing at all, and, in a job market that has become much tougher for software engineers, they are feeling emboldened to deploy new surveillance tactics.
“In the past, engineers wielded a lot of power at companies. If you lost your engineers or their trust or demotivated the team—companies were scared shitless by this possibility,” Denisov-Blanch told 404 Media in a phone interview. “Companies looked at having 10-15 percent of engineers being unproductive as the cost of doing business.”
Denisov-Blanch and his colleagues published a paper in September outlining an “algorithmic model” for doing code reviews that essentially assesses software engineers’ productivity. The paper claims that their algorithmic code assessment model “can estimate coding and implementation time with a high degree of accuracy,” essentially suggesting that it can judge worker performance as well as a human code reviewer can, but much more quickly and cheaply.
I asked Denisov-Blanch if he thought his algorithm was scooping up people whose contributions can’t be judged by code commits and code analysis alone. He said that he believes the algorithm has controlled for that, and that companies have told him which specific workers should be excluded from analysis because their job responsibilities extend beyond just pushing code.
“Companies are very interested when we find these people [the ghost engineers] and we run it by them and say ‘it looks like this person is not doing a lot, how does that fit in with their job responsibilities?’” Denisov-Blanch said. “They have to launch a low-key investigation and sometimes they tell us ‘they’re fine,’ and we can exclude them. Other times, they’re very surprised.”
He said that the algorithm they have developed attempts to analyze code quality in addition to simply counting the number of commits (or code pushes) an engineer has made, because the number of commits is already a well-known performance metric that can easily be gamed by pushing meaningless updates or by pushing and then reverting updates over and over. “Some people write empty lines of code and do commits that are meaningless,” he said. “You would think this would be caught during the annual review process, but apparently it isn’t. We started this research because there was no good way to use data in a scalable way that’s transparent and objective around your software engineering team.”
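To make the gaming problem concrete: a raw commit count can be inflated with whitespace-only commits or push-then-revert churn that a human reviewer would discard on sight. Here is a minimal, illustrative Python sketch (not anything from the Stanford paper) that counts an author’s commits in a local git repository while skipping those two obvious padding tricks; the repository path and author name are placeholders.

```python
import subprocess

def run_git(*args, repo="."):
    """Run a git command in `repo` and return its stdout as text."""
    return subprocess.run(
        ["git", "-C", repo, *args],
        capture_output=True, text=True, check=True,
    ).stdout

def substantive_commit_count(author, repo="."):
    """Count an author's commits, skipping two easy ways to pad a raw
    commit count: revert commits, and commits that only add blank lines."""
    shas = run_git("log", f"--author={author}", "--pretty=format:%H", repo=repo).split()
    count = 0
    for sha in shas:
        subject = run_git("show", "-s", "--pretty=format:%s", sha, repo=repo)
        if subject.lower().startswith("revert"):
            continue  # push-then-revert churn adds no real work
        patch = run_git("show", "--unified=0", "--pretty=format:", sha, repo=repo)
        added = [
            line[1:] for line in patch.splitlines()
            if line.startswith("+") and not line.startswith("+++")
        ]
        if added and all(not line.strip() for line in added):
            continue  # commit only adds empty lines
        count += 1
    return count

# Example (hypothetical repo path and author):
# print(substantive_commit_count("jane@example.com", repo="/path/to/repo"))
```

Even a filter like this only catches the crudest padding, which is presumably why the researchers say they try to score the substance of each commit rather than just count commits.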
Much has been written about the rise of “overemployment” during the pandemic, where workers take on multiple full-time remote jobs and manage to juggle them. Some people have realized that they can do a passable enough job at work in just a few hours a day or less.
“I have friends who do this. There’s a lot of anecdotal evidence of people doing this for years and getting away with it. Working two, three, four hours a day and now there’s return-to-office mandates and they have to have their butt in a seat in an office for eight hours a day or so,” he said. “That may be where a lot of the friction with the return-to-office movement comes from, this notion that ‘I can’t work two jobs.’ I have friends, I call them at 11 am on a Wednesday and they’re sleeping, literally. I’m like, ‘Whoa, don’t you work in big tech?’ But nobody checks, and they’ve been doing that for years.”
Denisov-Blanch said that, with massive tech layoffs over the last few years and a more difficult job market, it is no longer the case that software engineers can quit or get laid off and get a new job making the same or more money almost immediately. Meta and X have famously done huge rounds of layoffs, and Elon Musk claimed that X didn’t need those employees to keep the company running. When I asked Denisov-Blanch if his algorithm was being used by any companies in Silicon Valley to help inform layoffs, he said: “I can’t specifically comment on whether we were or were not involved in layoffs [at any company] because we’re under strict privacy agreements.”
The company signup page for the research project, however, tells companies that the “benefits of participation” in the project are “Use the results to support decision-making in your organization. Potentially reduce costs. Gain granular visibility into the output of your engineering processes.”
Denisov-Blanch said that he believes “very tactile workplace surveillance, things like looking at keystrokes—people are going to game them, and it creates a low trust environment and a toxic culture.” He said with his research he is “trying to not do surveillance,” but said that he imagines a future where engineers are judged more like salespeople, who get commission or laid off based on performance.
“Software engineering could be more like this, as long as the thing you’re building is not just counting lines or keystrokes,” he said. “With LLMs and AI, you can make it more meritocratic.”
Denisov-Blanch said he could not name any companies that are part of the study but said that since he posted his thread, “it has really resonated with people,” and that many more companies have reached out to him to sign up within the last few days.
I think most people misunderstand what software engineers do. Writing code is only a small portion of the work for most: analyzing defects and performance issues, covering production support that ends up with unqualified people due to the way support is handled these days, writing documentation or supporting those who do, design work, QE/QA/QC support, code reviews, product meetings, and tons of other stuff. That's why "AI" is not having any luck with just replacing even junior engineers, besides the fact that it just doesn't work.
I could make a paper in 5 minutes about how AI can be used to uniquely identify people by smelling their farts. Doesn't mean anything unless it's been peer reviewed.
Until this paper has been peer reviewed, I give it as much credit as I give a flat earth conspiracy person.
Those numbers seem really sus. "We came up with some kind of bullshit metric magic algorithm and it turns out that if you look at people who score 10% of the average, that's about 10% of the people!!!"
This is bullshit. There are many people hired with the job title "Software Engineer" who don't sit and generate code, for a number of reasons.
You could be on a hybrid team that does projects and support, so you spend 80% of your time attending meetings, working tickets, working with users, and shuffling paper in whatever asinine change management process your company happens to use.
I have worked places where "engineers" ended up having to spend most of their time dicking around in ServiceNow/Remedy/etc. instead of doing their actual jobs. That's shitty business process design and shitty management, and not a reflection of the employee doing nothing.
I spend most of my time in other time wasters like Jira and fucking Aha as well.
If I actually do anything, it only generates more work for me because I have to explain myself to fifteen different parties before making very minor, very necessary changes.
“many more companies have reached out to him to sign up within the last few days”
They're looking for a semi-randomized layoff algorithm.
If this guy were anything but a con artist, he'd show the data that correlates his algorithm with observed performance ratings. And he'd also validate that applying his algorithm is consistent with labor law.
This guy is such a waste of carbon.
Don't be fooled by his title as a "researcher" or by the fact that he's at Stanford.
He's just another Tech Bro, pushing his "product" to greedy companies to make a few bucks for himself.
Who knows how many meetings they’re involved in to constrain the crazy from senior management?
This is more than half of my job. Telling the company owners/other departments "No". Or changing their request to something actually reasonable and selling them that they want that instead.
Makes me think of a trend in F2P (free-to-play) gaming, where there was a correlation between play time and $ spent, so gaming companies would try and optimise for time played. They'd psychologically manipulate their players to spend more time in game with daily quests, battle passes, etc, all in an effort to raise revenues.
What they didn't realise was that players spent time in game because it was fun, and they bought mtx because they enjoyed the game and wanted it to succeed. Optimising for play time had the opposite effect, and made the game a chore. Instead of raising revenues, they actually dropped.
This is why you always have to be careful when chasing metrics. If you pick the wrong one, it can have the opposite of the effect you want.
Yes, but there's also people actually not doing anything. I am dev lead and after building a team, which was a lot of work, I am at a point where I am doing fuck all on most days. Maybe join a few meetings, make some decisions and work on my own stuff otherwise.
Yeah, there are plenty of truly pointless workers, I'm not denying that. But using stupid metrics like commit counting or lines of code per day is counterproductive, and it emphasizes the out-of-touch and inhuman methods of corporate idiots.
I'm not even going to bother to take this seriously at all.
There's something to be said about unfulfilling and 'bullshit jobs'. Aside from the potentially dubious methodology here, consider the implications of this 'finding'.
How about looking at the rentier and profit-sapping features of these massive tech companies?
What a fucking snitch. 9.5% of engineers gotta go, but the CEO getting paid buckets and buckets of money isn't draining the company? Fire 9.5% of engineers who actually have knowledge and are skilled enough to demand a high price for their skills, or the do-fuck-all CEO who comes in via Zoom once a quarter and couldn't open a PDF if their life depended on it. Hmm, what a hard choice 🤔
Frankly anyone telling you they can measure the value of a line of code without any background knowledge is selling BS.
The previous system of managers not-so-secretly counting total commits and lines added was comically stupid.
That has been known not to work since the 1970s. There's probably something in The Mythical Man-Month ridiculing lines of code as a performance metric.
Some of the most productive work I ever did involved ripping out 80k lines of executable code and replacing it with 1500.
This fundamentally misunderstands the domain of software engineering. Most of the time, with an actually difficult problem, the hardest part is devising the solution itself. Which, you know, often involves a lot of thinking and not that much typing. And that entirely puts aside how neurodivergent people - who are somewhat overrepresented in STEM - often arrive at solutions in very different ways that statistical models like these simply don’t account for.
And what this tells me is that automating garbage commits that don't actually do anything is what those employers want. 5000 lines a day but it's all comments? I think so.
You’re 100% right. And I have absolutely done this in the past when some dipshit has the bright idea to tie comp adjustments to SLOC metrics. And it’s more than just comments: you just make EVERYTHING a variable, duplicate as much as possible, and avoid terse syntax at all costs. It makes the codebase nigh unmaintainable… but hey, if you’re gonna hit me in the wallet if I don’t do that, I don’t fucking care about the quality of the codebase under those constraints.
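To make that concrete, here is a toy, purely hypothetical illustration of the trick: both functions below do exactly the same thing, but the padded version racks up many times the line count (the item objects with price and quantity fields are invented for the example).

```python
# Terse version: one line of logic.
def total_price(items):
    return sum(item.price * item.quantity for item in items)

# Padded version: identical behavior, many times the line count,
# written the way you would if you were paid per line.
def total_price_padded(items):
    running_total = 0
    for item in items:
        # Pull every value into its own variable.
        unit_price = item.price
        quantity = item.quantity
        line_total = unit_price * quantity
        # Accumulate in an explicit step.
        running_total = running_total + line_total
    # Return the final total.
    return running_total
```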
Based on my understanding of the workflow from what my developer friend says, sometimes most if not all of your shit is stuck at a point where you need to wait on another part of the project. So I'm imagining that the people they figure are doing nothing aren't the same 9 people out of 100 who simply never work.
Exactly this: highly paid engineers are usually PhDs or otherwise researchers focusing on difficult problems. Their output can't be measured in lines of code committed on GitHub. Never mind time spent mentoring younger engineers, reviewing pull requests, advising management, etc. Ask me how I know.
That said ... at my previous job for a while near the end they were paying me to do very little indeed. I was not happy. Eventually the company ran into trouble, laid a bunch of people off (including me) and now I'm a lot busier at my new job... also happier.
And beyond this, solving the problem is just the baseline. Solving the problem well can take an immense amount of time, often producing solutions that look deceptively simple in the end.
I recently watched a talk about ongoing Java language work (Project Valhalla). They've been working on this particular set of performance improvements for years without a lot to show for it. Apparently, they had some prototypes that worked well but were unwieldy to use. After a lot of refinement, they have a solution that seems completely obvious. It takes a lot of skill to come up with solutions like that, and this type of work would be unjustly punished by algorithms like this.
I agree to an extent that their methodology might be somewhat flawed (we don't know). But I'll assume the analysts know what they're doing to an extent. They seem to have at least attempted to make their algorithm somewhat intelligent.
That said, I've absolutely met software engineers who were basically a waste of space. They take weeks to do something I could've banged out in a couple of hours. Though it's incredibly obvious, they somehow still keep their jobs.
I am not a coder, nor do I work in or have much knowledge of the industry. But I can tell immediately that this looks like some extra fancy BS. Designing a program to detect the quality and quantity of a person's code commits sounds like AI mumbo-jumbo from the start. Even if it were technically possible, it would not tell you whether someone is an effective communicator, coordinates with other team members, shares productive ideas, etc.
The headline should have been:
"Consulting Firm Desperately Tries to Justify its Existence."
Just googled the paper's author. Yep, sure enough, he seems to contract with "FounderPartners," which describes itself as "a team of serial entrepreneurs and M&A advisors."
It does mention that they send some of these in, and sometimes they get responses back that they are fine.
That covers all of your senior engineers who end up spending more time speccing/investigating things than writing code.
This kind of tool is probably very useful in 'fiefdom' companies where middle managers refuse to fire people because then they lose a headcount, or just protect their cronies. Having a central team that cuts across the company investigating that would be a good idea.
Unfortunately, in a lot of cases I can see people being fired over this even though they are doing other work, just because management doesn't understand what they do. Or worse, because someone sells the tool as flawless and they just fire anyone it flags.
It's a long article that I admittedly didn't read all of. I got to the part where it said the details of his algorithm are basically unknown, which means his data means nothing. If someone can't provide proof for their claims, they have no merit.
An LLM that's built entirely on code repo data, and is somehow claiming workers "do virtually nothing" without any sort of outside data, is insane.
One of my big beefs with ML/AI is that these tools can be used to wrap bad ideas in what I will call "machine legitimacy." Which is another way of saying that there are many cases where these models are built up around a bunch of unrealistic assumptions, or trained on data that is not actually generalizable to the applied situation, but will still spit out a value. That value becomes the truth because it came from some automated process. People can't critically interrogate it because the bad assumptions are hidden behind automation.
Alternatively they are on an engineering team and providing their expertise via other means beyond code submission. This entire thing sounds like a sledgehammer trying to do the work of a scalpel.
Or architects, infrastructure engineers… plenty of peripheral functions are hired as “IT engineers” and aren't pushing code to a repo. What a weird article.
I don't doubt the thesis, but reviewing commit history is next to useless. I'm probably not top 50% of activity within our organization but I've literally invented most of our tech and my name is on the patents.
If anything, it's the people who spend all day making pedantic code review comments just to boost git actions who have nothing better to do.
When I was a junior, about 95% of my days were spent writing code. Nowadays? 30-40% maybe. The rest is meetings, code review, and helping colleagues who call me, among other things.
Good luck finding that, Mr. Algorithm. Commit history is basically useless due to another factor as well: for bugs, finding the actual problem and the reason for it is often far more time-consuming than the fix itself.
Yeah I was just about to say one obvious flaw in his methodology is that people could show up as "high productivity" by adding thousands of lines of worthless comments.
Some easy tasks involve pumping out mounds of code/commits, some tasks involve monumental amounts of inter-department cooperation or design discussions with open source communities online or at yearly conferences and result in relatively small amounts of code especially in terms of LOC/day.
This study purports to take this into account to some degree, but I call bullshit. I can barely explain this level of nuance to anyone above my first-line manager; everyone else is just like, "What's taking so long? Can we throw an intern on it to speed things up?" And it's like, sure... after you hire them full-time and spend the next couple of years training them. Oh, you want me to do that too?
The whole tone of this "researcher" makes the bias so clear but I'm sure we'll have all kinds of fancy new monitoring and lay-offs of good people thanks to these sorts of bullshit metrics.
If you want to know whether employees are a waste of space or not, hire good fucking managers that know what they are doing. If they farm that out to tools like this, it's a good sign they don't.
Well, this is so typical of people who just see employees as numbers. How is it possible that a company thinks they have an overemployment problem? Doesn't it mean that the whole company's management is a pile of crap?
Just build your team on people who care, who have time and will to do 1on1s and who can build a culture of trust in your company. Then you will not need to waste money on an algorithmic whip.
Now, if someone comes up with an AI buddy who helps with 1on1s and helps to build a culture of trust in a company, let me know. Anything else is just a waste of time and money, solving problems created by crap management.
I’ve seen it first hand but I don’t know if 9.5% is the correct number. One software guy at my company works for 11 years at this company. He went through so much shit that at this point he doesn’t even sit under the software department anymore, he’s just under finance. All he does is upgrade GitLab once every quarter or so and then he just watches TV and messes around with his homelab in his free time. Comes to the office couple times a week for 3-4 hours to show everyone he is still alive then goes home.
Yeah fr sometimes I need to sit on a problem for a week or talk with coworkers or other teams before a solution presents itself. Programming isn't just writing code, that's practically the last step.
Hell I spend most of my day just reading the old code and the docs just in case I find an opportunity to massively optimize things, and those have been some of my best projects.
How much hubris/ignorance does this guy have to believe his algorithm is accurate enough to detect that “10%” of employees are deadbeats? What precision! If it found 50% deadbeats, that would mean the algorithm might be working.
The worst companies have only 10% deadbeats? Any company with only 10% deadbeats means their management team is doing a great job hiring/managing. Any company with only 50% deadbeat managers would be outstanding.
The thing about being a big organization is that you need to have slack capacity most of the time in order to be able to go quickly in a different direction at certain times. If you don't have excess capacity sitting idle, an unforeseen event can paralyze you.
And slack capacity can be used effectively e.g., spend some time on process improvement. There’s always some saw to sharpen or some technical debt to repay.
This guy loves the taste of the corporate boot on his neck JFC. Who will think of the poor mega corporations! Assholes like this are fueling the rollback of worker rights under the guise of being some worker white knight, meanwhile he spreads propaganda like this:
“This unfairly burdens teams, wastes company resources, blocks jobs for others, and limits humanity’s progress. It has to stop.”
This is what is limiting humanity's progress? Not the mega corps that deceive workers, destroy people's lives, destroy the environment, and suck the life from everything in the name of profit, and, if caught, get a wrist slap already built into their margins?
I spent most of my week poring over logs until I finally figured out what the issue was, then submitted a one-line commit to fix it. If my company used this bullshit, I'd be fired.
If people were gaming the performance monitoring systems to do less work, this is just another step in the cat and mouse game. They’ll figure out how to game this too.
Tbh this sounds like yet another "if we lived in another society this would be great" situation. How much money is spent trying to measure, manage, and control the people doing the actual work?
Honestly though, make an open-source version of this and run it within the team, make the results public to the team, and it's a great tool to motivate coworkers and make sure they are working without needing a manager (as often, at least) to assign KPIs and get status reports.
I worked on a project that had at least 4x as many "management" people of various stripes talking about delivery dates and status as it did engineers. There were like 5 engineers on the project and 20-ish people just talking bullshit and sitting in meetings.
Probably more. I think the bullshit jobs estimate was significantly higher. Yet people being paid are essential for sustaining the demand for the economy's output. If one company gets rid of its bullshit jobs it'll probably get ahead. If most companies do it, all of their profits will fall along with the total economic output.
Some of this is titles and trends. Everyone is gaga over DevOps, so much of operations is done by people who likely identify as engineers and rarely touch the repos. Unless the place is putting configs into repos - which is getting more common but is sort of a pain for bought cloud solutions - they will appear to not be doing anything by this study's "methodology." Start evaluating by code contribution and you will get lots more contribution and a lower-quality code base.