Wages are still rising faster than prices, but the increases are slowing down
pcouy @lemmy.pierre-couy.fr · Posts 34 · Comments 176 · Joined 2 yr. ago
the SMB [basic monthly wage] rose by 0.8%, and even by 1.4% for blue-collar workers, better off thanks to the automatic uprating of the SMIC [minimum wage]
Well, darn. If I'd known, I would have become a blue-collar worker so I could be better off than everyone else on my minimum wage! This kind of phrasing is borderline unbearable...
Wages are still rising faster than prices, but the increases are slowing down
I find this headline imprecise at best, misleading at worst... In any case, it is very unclear about the nature of the phenomenon being described. Personally, what I understand from this headline is something like "wages have always risen faster than inflation", whereas according to the article's content, it has only been the case for the last 2 or 3 quarters:
"Wages reacted to inflation with a lag. On the other hand, they are slowing down almost at the same time as the price increases, which surprised us a lot," acknowledges Dorian Roucher, head of the economic outlook (Conjoncture) department at Insee.
While households could hardly count on their wages to limit their loss of purchasing power in the face of the inflation shock of the past two years, the gains shaping up for 2024 may well disappoint. In its forecasts, the statistics institute expects the SMB to rise by 2.9% for the current year, after a 4.3% jump in 2023. Real wages would rise only "modestly" in 2024, by 0.6% according to Insee. "Wage dynamics this year would therefore not make up for the cumulative losses suffered by employees in 2022 and 2023, which reached 2.5%," Dorian Roucher points out.
We are still a very long way from the optimism suggested by the article's headline!
What I did is use a wildcard subdomain and certificate. This way, only `pierre-couy.fr` and `*.pierre-couy.fr` ever show up in the transparency logs. Since I'm using Pi-hole with carefully chosen upstream DNS servers, passive DNS replication services do not seem to pick up my subdomains (even the subdomains I share with some relatives, who probably use their ISP's default DNS, do not show up).
This obviously only works if all your subdomains go to the same IP. I've achieved something similar to Cloudflare Tunnels using a combination of nginx and WireGuard on a cheap VPS (I want to write a tutorial about this when I find some time). One side benefit of this setup is that I usually don't need to fiddle with my DNS zone to set up a new subdomain: all I need to do is add a new nginx config file with a `server` section.
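For illustration, here's roughly what one of those per-subdomain config files can look like (a minimal sketch: the subdomain, the 10.0.0.2 WireGuard peer address, the upstream port and the certificate paths are made-up examples, not my actual setup):

```nginx
# /etc/nginx/conf.d/myapp.conf -- hypothetical new subdomain
server {
    listen 443 ssl;
    server_name myapp.pierre-couy.fr;   # already covered by the *.pierre-couy.fr wildcard cert

    # the wildcard certificate is shared by every subdomain
    ssl_certificate     /etc/letsencrypt/live/pierre-couy.fr/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/pierre-couy.fr/privkey.pem;

    location / {
        # 10.0.0.2 is the home server's address on the WireGuard tunnel
        proxy_pass http://10.0.0.2:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```

Since the wildcard DNS record and certificate already exist, reloading nginx is the only remaining step.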
Some scanners will still try to brute-force subdomains. I simply block any IP that hits my VPS with a `Host` header containing a subdomain I did not configure.
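One way to wire that up (again just a sketch; the status code and log path are examples) is a catch-all `default_server` block whose log then feeds a fail2ban jail:

```nginx
# catches any Host header that no other server block matches
server {
    listen 443 ssl default_server;
    server_name _;

    # reuse the wildcard cert so the TLS handshake still completes
    ssl_certificate     /etc/letsencrypt/live/pierre-couy.fr/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/pierre-couy.fr/privkey.pem;

    # log the offending requests so fail2ban can ban the client IP
    access_log /var/log/nginx/unknown_host.log;
    return 444;   # nginx-specific: close the connection without sending a response
}
```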
On this day, exactly 12 years ago (9:30 EDT, 1 Aug 2012), the most expensive software bug ever occurred, in terms of both dollars lost per second and total dollars lost. The company managed to pare its losses through the heroics of Goldman Sachs, and "only" lost $457 million (which led to its dissolution).
Devs were tasked with porting their HFT bot to an upcoming NYSE API service that was announced to go live less than 33 days in the future. So they started a death-march sprint of 80-hour weeks. The HFT bot was written in C++. Because they didn't want to force a recompile, the lead architect decided to keep the exact same class and method signature for their PowerPeg::trade() method, which was the automated testing bot they had been using since 2003. This also meant that they did not have to update the WSDL for the clients that used the bot, either.
They ripped out the old dead code and put in the new code. Code that actually called real logic, instead of the test code, which was designed, by default, to buy the highest offer given to it.
They tested it, they wrote unit tests, everything looked good. So they decided to deploy it at 8 AM EDT, 90 minutes before market open. QA testers tested it in prod, gave the all clear. Everyone was really happy. They'd done it. They'd made the tight deadline and deployed with just 90 minutes to spare...
They immediately went to a sprint standup and then sprint retro meeting. Per their office policy, they left their phones (on mute) at their desks.
During the retro, the markets opened at 9:30 EDT, and the new bot went WILD (!!) It just started buying the highest offer it was given for all of the stocks in its buy list. The markets didn't react very abnormally, because it just looked like they were bullish. But they were buying about $5 million worth of shares per second… Within 2 minutes, warning alarms were going off in their internal banking sector… a huge percentage of their $2.5 billion in operating cash was being depleted, and fast!
People tried to contact the devs, but they were in a remote office in Hoboken (due to the high price of real estate in Manhattan), their phones were muted, and no one was at their computer.
The CEO was seen getting people to run through the halls of the building yelling, and finally the devs noticed. 11 minutes had gone by and the bots had bought over $3 billion of stock. The total cash reserves were depleted. The company was in SERIOUS trouble...
None of the devs could find the source of the bug. The CEO, desperate, asked for solutions. "KILL THE SERVERS!!" one of the devs shouted!!
They got techs @ the datacenter next to the NYSE building to find all 8 servers that ran the bots and DESTROY them with fire axes, just ripping the wires out… And finally, after 37 minutes, the bots stopped trading. Total paper loss: $10.8 billion.
The SEC + NYSE refused to rewind the trades for all but 6 stocks, so the on-paper losses were still at $8 billion. There was no way they could pay. Goldman Sachs stepped in and offered to buy all the stocks @ a for-profit price of $457 million, which they agreed to. All in all, the company lost close to $500 million, all of its corporate clients left, and it went out of business a few weeks later.
Now what was the cause of the bug? Fat-fingered human error during the release.
The sysop had declined to implement CI/CD, which was still in its infancy, probably because that was his full-time job and he was making like $300,000 in 2012 dollars ($500k today). There were 8 servers that housed the bot and a few clients on the same servers.
The sysop had typed out and pasted the correct rsync commands to get the new C++ binary onto the servers, except for server 5 of 8: in that instance, the command had an extra 5 in the server name. That rsync failed, but because he had pasted all of the commands at once, he didn't notice...
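Purely as an illustration (the real host names and paths were never made public, these are invented), this is the kind of pasted batch where a single failed line easily scrolls by unnoticed:

```sh
# hypothetical reconstruction of the deployment paste -- one command per server
rsync -av smars_release.bin deploy@trade-srv-1:/opt/smars/
rsync -av smars_release.bin deploy@trade-srv-2:/opt/smars/
rsync -av smars_release.bin deploy@trade-srv-3:/opt/smars/
rsync -av smars_release.bin deploy@trade-srv-4:/opt/smars/
rsync -av smars_release.bin deploy@trade-srv-55:/opt/smars/   # extra "5": this copy fails, server 5 keeps the old binary
rsync -av smars_release.bin deploy@trade-srv-6:/opt/smars/
rsync -av smars_release.bin deploy@trade-srv-7:/opt/smars/
rsync -av smars_release.bin deploy@trade-srv-8:/opt/smars/
```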
Because the code used the exact same method signature for the trade() method, server 5 was happy to buy up the most expensive offer it was given: it was still running the old Power Peg "sad path" test trading software. If they had changed the method signature, the old code wouldn't have run and the bug wouldn't have happened.
At 9:43 EDT, the devs collectively decided to do a "rollback" to the previous release. This was the worst possible mistake, because it put the Power Peg dead code back onto the other 7 servers, causing the problems to grow exponentially. It took about 3 minutes for anyone in Finance to actually inform them, though. At that point, more than $50 million per second was being lost due to the bug.
It wasn't until 9:58 EDT, when the servers had all been destroyed, that the trading stopped.
Here is a description of the aftermath:
It was not until 9:58 a.m. that Knight engineers identified the root cause and shut down SMARS on all the servers; however, the damage had been done. Knight had executed over 4 million trades in 154 stocks totaling more than 397 million shares; it assumed a net long position in 80 stocks of approximately $3.5 billion as well as a net short position in 74 stocks of approximately $3.15 billion.
28 minutes. $8.65 billion inappropriately purchased. ~1680 seconds. $5.18 million/second.
But after the rollback at 9:43, about $4.4 billion was lost. ~900 seconds. ~$4.9 million/second.
That is the story of how a bad software decision and a fat-fingered manual production release destroyed the most profitable stock trading firm of its time, through the most expensive software bug in human history.
Thanks for the details! Still curious to know how a new instance, with an old domain and fresh keys, would be handled by other instances.
I'm pretty sure they are actually hosting it. The tech is quite different (cofractal uses URLs ending with `{z}/{x}/{y}`, while their tile server uses this stuff that works quite differently).
There is even an "Ignore cache" box in the devtools network tab
Yeah, this probably has to do with the cache. You can try opening the dev tools (F12 in most browsers), going to the network tab, and browsing to pathfinder.social. You should see all requests going out, including "fake requests" to content that you already have locally cached.
They told me about hosting their own tile server earlier today. I'm really impressed by how fast they moved!
A pull request for a privacy page during the onboarding is in the works, and I've been working with them to update the settings page and documentation (with the goal of providing an easy way to switch map providers). They are also working on a privacy policy, and want to ship all of this in a few weeks as part of a single release.
Once again, I'm really impressed with how well they're handling this
That's really, really weird: I cannot resolve the domain to an IP, even after trying a bunch of different DNS servers. If you're on Linux, can you run `nslookup pathfinder.social` and paste the output here?
The fact that it was not bought as soon as the domain expired makes me believe this instance went down before the trend started.
These services usually use one or both of passive DNS replication (running public recursive DNS resolvers and logging every lookup that returns a record) and certificate transparency logs (where certificate authorities publish the domain names for which they issue certificates). A lot of my subdomains are missing from these services.
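The certificate transparency side is easy to check yourself: for example, the crt.sh aggregator lists every certificate issued for a domain and its subdomains (the domain below is just a placeholder):

```sh
# list all names that appear in CT logs for a domain; with a wildcard cert,
# only "example.com" and "*.example.com" show up instead of one entry per subdomain
curl -s "https://crt.sh/?q=%25.example.com&output=json" | jq -r '.[].name_value' | sort -u
```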
With all the botting going on on Reddit, this whole Google AI deal makes me think of the recent paper that demonstrates that, as common sense would suggest, deep learning models collapse when successive generations are trained on the previous generations' output.
It does not seem to be the case. Was it the full domain for this instance?
This is an old post, but I've only recently (I'd say a few months ago) started to see Google's indexing bots pop up in my instance's server logs, so this may be about to change.
never stopped POSTing, even though I configured nginx to always respond 403 to anything from them for about a year now.
Lol, there are definitely some stubborn user agents out there. I've been serving 418 to a bunch of SEO crawlers - with fail2ban configured to drop all packets from their IPs/CIDR ranges after some attempts - for a few months now. They keep coming at the same rate as soon as they get unbanned. I guess they keep sending requests into the void for the whole ban duration.
Using 418 for undesirable requests instead of a more common status code (such as 403) lets me easily filter these blocks in fail2ban, which can help weed out a lot of noise in server logs.
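In practice this only takes a small filter plus a jail; here is a rough sketch of what that can look like (file names, regex and ban time are examples, not my exact config):

```ini
# /etc/fail2ban/filter.d/nginx-teapot.conf
[Definition]
# match any nginx access-log line where the response status was 418
failregex = ^<HOST> -.*"(GET|POST|HEAD|PUT|DELETE).*" 418

# /etc/fail2ban/jail.d/nginx-teapot.conf
[nginx-teapot]
enabled  = true
port     = http,https
filter   = nginx-teapot
logpath  = /var/log/nginx/access.log
maxretry = 3
bantime  = 86400
```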
Your sensitive data and logins are tied to email addresses, which are tied to domains. Lose your domain, and someone can access everything.
I recently stumbled upon an article showing how bad this can be when the expired domains were used for important/serious stuff
I think they do get marked as dead once the Bodis-parked domain stops acting as a Lemmy instance. But I was wondering if a large number of instances "waking up from the dead" and acting maliciously could cause some trouble. Or would such "undead" instances pose no more threat to the fediverse than the same number of newly created malicious instances? I'm mainly thinking about stuff like being in a privileged position to DoS most instances at once, or impersonating accounts that used to actually exist on these "undead" instances.
Is `named` actually running as the `bind` user inside the container? Maybe a `USER bind` line below the `RUN` lines will help.
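Something along these lines, assuming a Debian-based image where the bind9 package already creates the bind user (package name and paths are just an example):

```dockerfile
FROM debian:bookworm-slim

# install BIND; the Debian package creates the "bind" user and group
RUN apt-get update && apt-get install -y bind9 && rm -rf /var/lib/apt/lists/*

# make sure the config and working directories are owned by that user
RUN chown -R bind:bind /etc/bind /var/cache/bind

# drop privileges: everything below this line (and at runtime) runs as "bind"
USER bind

CMD ["named", "-g", "-c", "/etc/bind/named.conf"]
```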
I'll probably look into newer, fancier options such as Caddy one day, but as far as I remember Nginx has never failed me: it's stable, battle-tested, and extremely mature. I can't remember a single time when I've been affected by a breaking change (I could not even find one by searching changelogs), and the feature set makes it very versatile. Newer alternatives seem really interesting, but it seems to me they have quite frequent breaking changes and are not as feature-rich.
That being said, I'd love to see a side-by-side comparison of Nginx and Caddy configs (if anyone wants to translate to Caddy the Nginx caching proxy for OSM I shared earlier this week, that would make a good and useful example), as well as examples of features missing from Nginx. This may give me enough motivation to actually try Caddy :)
(edit: ad->and)
Hmm, I think you replied to the wrong post? :)