Yeah, I was going to comment on this too, but yes, IMO it almost surely is, because otherwise that person is completely lacking in reading comprehension. How on earth is "making [optimal] decisions" fundamentally different from "mumbo jumbo about threats and opportunities"? Also, the optimal decision-making mentioned just previously is literally the opposite of the definition of "one size fits all". And it goes without saying that it comes with that general air of superiority, the stereotypical confidently wrong response lol.
But maybe I'm wrong and it is in fact very common to make these mistakes; heck, I'm not a native English speaker, maybe I'm missing something here lol.
I got GPT-4 to write some angry comments about this, and it's sort of uncanny:
Leadership is far more than data points and algorithms! It encompasses intuition, human connection, ethical decision-making, and countless intangibles that a machine could never grasp.
Organizations thrive on human connections, shared values, and mutual aspirations. The idea that an algorithm could replace the heartbeat of a company is baffling.
While AI can augment and support decision-making, the visionary leadership and human touch that CEOs provide are irreplaceable. Let's not lose sight of what truly matters.
This is the human touch that CEOs bring to the table, folks.
To be honest, "make green number go up" is probably the best task you could find for an AI to replace humans at. It's all the actual work that takes real multi-variable thought, prediction, skill and consideration.
Exactly. For all the reductionism behind the idea that our leadership (politicians, CEOs, etc.) has no real agency because they are inevitably compelled to act in ways that further capitalism and profit, it's only rationally consistent to replace them with machines. Someone ought to tell them that.
LMAO dude, that's not what CEOs do, that's like a mid-level director of engineering. The CEO's job is to prevent the rest of the executives from looting the company before the shareholders can do it.
I should write a piece about how Neoliberalism is already carving up the CEO (and other) leadership positions. Hedge funds and other capital vultures constantly shuffle the corporate suite to suit their interests, so there is absolutely a place for computers as labor-saving devices for managing a portfolio of companies by these huge capital conglomerates. Cyberpunk was only wrong about the aesthetics.
EDIT: My real hot take should be that CEOs are undergoing Proletarianization. The masters of Capital reveal themselves as its greatest slaves.
I've noticed this too. There are already C-level agencies where a hedge fund can dial up a specific board for their purpose, from fucking over the founders of a seed start-up, to pivoting from a pro-consumer growth model to profit maximisation, to "Strip the copper wiring before the smallholders notice".
See also the heads of banks and financial institutions going from being among the richest captains of industry to mere money butlers, with net worths two orders of magnitude below their clients'.
There's no longer a spectrum of the bourgeoisie: there's your local used-car salesman or medium business owner with 30 employees, and then there's the mega rich. Everyone in between is now Labour Aristocracy, and eventually they're gonna realise that.
Noooooo, not like that! Automation is only for people who didn't go to Harvard!
The funny thing is, the only barrier here is context size. Right now, LLMs have laughably bad context sizes (or attention spans, in human terms: basically how much information a brain or model can keep active at any point in time) compared to humans, but that's going to change. It's not difficult to foresee a near future of LLMs with very, very, superhumanly large context sizes that could make human leadership seem ridiculously incompetent in comparison.

Here's the thing: pyramid-like organizational structures are extremely common because we necessarily have layers of abstraction. The head of the organization can't do their job effectively if they're worried about whether Bob the Welder is going to make it in on time or whether that invoice got paid yet; likewise, Bob the Welder can't do his job if he's getting pulled off work to sit in marketing meetings all day. There's only so much attention any one person can give in a day. The biggest problem is that information gets lost between these layers of abstraction, values don't necessarily remain consistent, and policies and practices aren't uniformly applied, which can make it difficult for customers and even employees to navigate the normal processes of an organization, let alone the abnormal ones.
As LLM context sizes reach superhuman levels, it's conceivable that they could end up flattening organizational structures by being able to be both Bob's supervisor and the CEO (or at least the CEO's assistant), and being able to keep all of the organization's context, down to the individual employee and customer needs, in mind at all times when making decisions. A government or corporation run by a properly aligned super-context AI could possibly be the closest thing we're going to get to utopian leadership, and would likely be both more ethical and more effective than human leadership.
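To make the "context size" point concrete, here's a minimal Python sketch of why a small window forces information loss. It uses a crude whitespace word count as a stand-in for a real tokenizer, and all the names and messages are hypothetical; the point is just that anything older than the window silently falls out of the model's "attention span".

```python
def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: one token per whitespace-separated word.
    return len(text.split())


def fit_to_context(messages: list[str], max_tokens: int) -> list[str]:
    """Keep only the most recent messages that fit in the context window.

    Anything older is silently dropped: the "information lost between
    layers of abstraction" problem in miniature.
    """
    kept: list[str] = []
    total = 0
    for msg in reversed(messages):  # walk from newest to oldest
        cost = count_tokens(msg)
        if total + cost > max_tokens:
            break  # window is full; everything older is forgotten
        kept.append(msg)
        total += cost
    return list(reversed(kept))  # restore chronological order


history = [
    "Bob the Welder called in sick on Monday",
    "Invoice #1042 is still unpaid",
    "Marketing wants a rebrand by Q3",
    "Board meeting moved to Friday",
]

# A tiny 8-token "attention span" forgets everything but the latest update.
print(fit_to_context(history, 8))
```

A "superhuman" context, in this toy framing, is just `max_tokens` large enough that nothing ever has to be dropped, so the same model can hold Bob's schedule and the board agenda simultaneously.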
Good response, and thanks for bringing receipts. I'd love to read this a little later. IMO, though, large language models and generative AI in particular represent the capacity to make the means of production free and open source. True, freely available models that you can run on a gaming computer don't yet measure up to ChatGPT, but I suspect that will change as the emphasis in AI research pivots towards making models more efficient. It's also true that if a general AI is developed, it's not going to be FOSS, though FOSS honestly wouldn't be the worst idea.
With respect to your article on Babbage, I'd like to point out that much of the leadership in AI right now has been leading with the idea that any AI must follow the 3 Hs: Honest, Harmless, and Helpful. I think it's more than just hype, because they're currently burning a lot of cash hiring teams whose whole job is to make sure that we get alignment (that is, constraining a potential super-intelligence with ethical values rather than allowing it to become a paperclip maximizer) correct. To be quite frank, there are a lot of MBAs out there who could stand to pick up those 3 Hs.
The problems facing the world today do not come from leaders having too-short attention spans or inadequate access to information. The problems come from these rulers representing bourgeois rather than proletarian interests. No amount of bazinga is going to overcome class conflict and make the dictatorship of the bourgeoisie make decisions that benefit the masses.
It's possible that if giant-context models are freely available, flat-structured organizations run by AI could outcompete less agile pyramid-structured organizations. It is possible we could see the bourgeoisie hoisted by their own petard.
I've learned everything about being a CEO from Elon Musk, and the primary job of the CEO is to play Elden Ring poorly and complain about children blocking you on twitter
LLMs can already replace those shitty organization wide emails that go out telling us all that we matter, justifying raises below the rate of inflation, and pretending like screwing the client is "actually" delivering added value to the client.
Shareholders should be clamoring for LLMs to replace CEOs, because LLMs aren't going to jack up executive payouts right before they take on a bunch of debt and file for bankruptcy.
Actually computers are extremely qualified to be CEO because the only trait you need is being willing to fuck over as many people as possible to maximize profits for your shareholders. The less empathy, the better. Shareholders want absolute psychos who will fire 50% of the workforce with no hesitation to make line go up by 2%.
I know this sounds like the background of a dystopian sci-fi, but following that logic: why not eventually replace politicians with AI?
The biggest problem I see is the possibility of an enemy hacking the system, but we could put plenty of safeguards in place to deal with that. Also... lobbying + corruption + etc. already exist.
We could still have some human interference, like plebiscites or something similar.
Opinions/criticism to propositions/laws could still be allowed.
The priorities in the AI's logic would be: respecting human rights, guaranteeing basic needs, avoiding conflict (but being able to defend the country), protecting the environment, and then just using the most efficient way to minimize crime rates, bureaucracy, car traffic, etc...
What if we used the power of bazinga to remove all the nasty politics from government?
Surely, a chatbot, by virtue of being technology, would be the perfect dispenser of impartial, unbiased decisions that best serve the common good. Would pre-existing prejudices, inequalities and superstitions be baked into the model from the start? Would it reinforce the class power of the class that set it up in the first place? Would it be inflexible and conservative in the face of a changing world? No, absolutely not. It has computers in it and math that people can't understand, so it must be good!