Actually, that "brain stops developing at 25" thing is a misconception; iirc, the study that spawned it just ran out of funding when the subjects were 25, and hadn't yet seen brain development slow down at that point (no source on hand, it's past midnight here).
Yeah, did:web exists, but I still called it centralized because did:plc is what's relied on pretty much everywhere (though honestly, domain name handles might actually be did:web, not sure). Didn't know about that dual setup by Bluesky though!
I did notice the @handle.invalid! Thanks!
My understanding was that ActivityPub was basically a rough formalization of existing protocols, designed to be as flexible as possible. More a template than a real protocol. Unfortunately, Mastodon's popularity made a bunch of things de facto obligatory but not well documented, and there are still a bunch of ways to do... anything.
That link doesn't work for me, but I ended up finding a post by them that seems to correspond. Good to know, thanks! Seems like it's realistic but still expensive ($150/mo?), and it's not gonna get cheaper... I hope they figure out a way to make them less centralized.
I believe that's your handle, not your identity. Your handle resolves to your identity, but your identity isn't tied to the handle itself, so it survives even if you lose the domain.
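For reference, resolving a handle to its DID is roughly this (a minimal sketch assuming the HTTPS well-known method and the `reqwest` crate; there's also a DNS TXT variant at `_atproto.<handle>`):

```rust
// Minimal sketch of AT Protocol handle resolution over HTTPS.
// Assumes the `reqwest` crate with the "blocking" feature enabled.
// The returned DID (e.g. "did:plc:...") is the stable identity;
// the handle is just a pointer to it, so losing the domain
// doesn't lose the account.
fn resolve_handle(handle: &str) -> Result<String, reqwest::Error> {
    let url = format!("https://{handle}/.well-known/atproto-did");
    // The endpoint returns the DID as plain text.
    let did = reqwest::blocking::get(url)?.text()?;
    Ok(did.trim().to_string())
}
```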
The aggregator is called the Relay, and I haven't even found anything suggesting one could realistically selfhost it. Then you need to process the massive stream of data coming through it with AppViews, which are tough to run too (there are a few, but not many, iirc).
That said, I am also impressed with the thought behind ATProtocol. It seems much more robust and well-defined than ActivityPub.
Bluesky's federation model is actually quite interesting: they go for a very portable approach versus ActivityPub's instance-based one. Unfortunately, there's still a massive centralization point (the main relay, the only thing that can really handle the firehose), and identity is also centralized, though it has mechanisms to be decentralized.
I don't think this is gonna work. You can't just ban something like that, you have to provide alternatives, and there aren't any. There needs to be a Club Penguin-type "kids internet". Course, dealing with children's data is "too expensive" (and risky), so that's not gonna happen.
How the hell does Vee know that lmao
He did, but that could reasonably be explained away as "incredibly naïve, and it's not direct harm". That said, he probably would have tried to talk to Belos instead if he really was that naïve... hm.
... ohh, Hunter's gonna turn evil, isn't he? Instead of a redemption arc he gets a corruption arc. Amity too, and potentially Lilith as well. Maybe, assuming we're sticking with "seriously evil, King is a head" instead of "cartoon villainy".
Dog heaven is also pig hell. It's a very efficient system.
... man, if we ever meet Odalia in this universe, it's gonna be weird.
I would love to know as well!
I pronounce it da-eh-mon in my head; it sounds more old-timey than "dee-mon".
I wonder how this one survived 400 years though...
Spoilers and explanation of solution:
Each vertex here is one intersection in our hike. We don't actually care about the parts in between, because there's only one way to go. The above is a visualisation of the final path; the red edges are the ones taken. Our graph looks "like that" because it's a hiking trail, not a maze, so there are no dead ends. This took about 2 seconds to generate, due to all the cloning needed to keep track of paths. The two veeery long edges on the ends are pretty obvious choices, but one might notice that pretty much every vertex takes the two maximum paths it has, given the restrictions of the path. There are still some mildly surprising paths, such as (99, 29) -> (89, 37) with a weight of 38. I'm wondering if there's a way to dismiss more paths... This graph is actually pretty free in terms of movement.
My actual solution takes ~150 ms to run (and 8 microseconds for part one with barely any optimization, damnn)
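Roughly, the brute-force search over the contracted graph looks like this (a sketch of the idea, not my actual code; it backtracks a visited list instead of cloning):

```rust
use std::collections::HashMap;

type Node = (usize, usize);

// Sketch: brute-force longest path over the contracted graph, where each
// node is an intersection and each edge weight is the corridor length
// between two intersections.
fn longest_path(
    graph: &HashMap<Node, Vec<(Node, usize)>>,
    current: Node,
    goal: Node,
    visited: &mut Vec<Node>,
) -> Option<usize> {
    if current == goal {
        return Some(0);
    }
    let mut best: Option<usize> = None;
    for &(next, weight) in &graph[&current] {
        if visited.contains(&next) {
            continue; // don't revisit an intersection
        }
        visited.push(next);
        if let Some(rest) = longest_path(graph, next, goal, visited) {
            let total = weight + rest;
            best = Some(best.map_or(total, |b| b.max(total)));
        }
        visited.pop(); // backtrack
    }
    best
}
```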
Anybody got some ideas to optimize today? I've got it down to 65 ms (total) on my desktop, using A* with a visitation map. Each cell in the visitation map contains (in part 1) 16 entries: 4 per direction of movement, one for each level of straightaway. In part 2, I use a map with 11 entries per direction.
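To make the state layout concrete, here's roughly what I mean (simplified sketch, not the exact code from the repo):

```rust
// Sketch of the search state and visitation map (part 1 sizes shown):
// 4 directions x 4 combo levels per cell. For part 2 the inner arrays
// would be [u32; 11] instead.
#[derive(Clone, Copy, PartialEq, Eq)]
struct State {
    pos: (usize, usize),
    dir: usize,   // 0..4: up/right/down/left
    combo: usize, // consecutive steps taken in `dir`
    cost: u32,    // heat loss so far
}

// best[y][x][dir][combo] = lowest cost seen for that exact state
type Visited = Vec<Vec<[[u32; 4]; 4]>>;

fn make_visited(height: usize, width: usize) -> Visited {
    vec![vec![[[u32::MAX; 4]; 4]; width]; height]
}
```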
Optimizations I've implemented:
- use a 2D array instead of a hashset/map. No idea how much this saves, since I did it that way from the start.
- the minimum distance for a specific cell's direction + combo also applies to higher combo levels, for part 1. For part 2, if the current combo is greater than 4, we do the same*. Gains about 70(!!) ms (see the sketch after this list)
- A* heuristic weighting optimization: a weight of about 1% with a Manhattan distance heuristic seems to gain about 15 ms (might just be my input tho)
*Correctness-wise: the reason we're splitting by direction is that there's a difference between being at a cell going up with a 3 combo but a really short path, and going right with a 0 combo but a long path. However, reusing the minimum across combo levels is fine, because a 3 combo in the same direction as a 0 combo is identical, just more restrictive.
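To make that concrete, here's roughly what the check looks like (simplified sketch, not the exact code from the repo; `dist` is the per-cell `[[u32; 4]; 4]` table from the sketch above):

```rust
// Combo-dominance pruning (part 1 version): if this cell+direction was
// already reached with the same or a lower combo at an equal or lower
// cost, the new state can't reach anything more cheaply, so skip it.
// dist[dir][c] = best cost seen at this cell for direction `dir`, combo `c`.
fn is_dominated(dist: &[[u32; 4]; 4], dir: usize, combo: usize, cost: u32) -> bool {
    (0..=combo).any(|c| dist[dir][c] <= cost)
}
```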
Optimizations that could be done but I need to ensure correctness:
- the same optimization as for the combo, but for directions: if I'm at a specific combo+direction, does that imply something about the distance for another direction? Simply applying the same rule to every non-opposite direction isn't correct.
Code: https://codeberg.org/Sekoia/adventofcode/src/branch/main/src/y2023/day17.rs
Warning: quite ugly, there's like 8 copy-pastes for adding to the queue
Is there a way to measure performance without depending on the hardware, i.e. so that two entirely different computers would get the same score for the same code?
I could probably run the program on a server or something, but something local feels more reliable.
My Intel NUC server just died (whenever it's plugged in, it makes a buzzing noise, and the external power LED is off (the internal one is on tho)), so I need a new server box. Any recommendations?
I can salvage the RAM (16 GB DDR4) and hard drive (1TB HDD) off of this one, I believe.
I have a few selfhosted services, but I'm slowly adding more. Currently, they're all on subdomains like linkding.sekoia.example etc. However, that means more DNS records to fetch and more setup. Is there some reason I shouldn't put all my services under a single subdomain with paths (using a reverse proxy), like selfhosted.sekoia.example/linkding?
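For concreteness, the kind of setup I'm imagining is roughly this (hypothetical Caddyfile sketch; I haven't actually set this up, and the second service name and both ports are made up):

```
selfhosted.sekoia.example {
    # handle_path strips the matched prefix before proxying
    handle_path /linkding/* {
        reverse_proxy 127.0.0.1:9090
    }
    handle_path /someotherapp/* {
        reverse_proxy 127.0.0.1:8080
    }
}
```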
According to https://lemmy.blahaj.zone/post/72658 I shouldn't be able to post but if you can see this...
I just want to say that the admins here are great and deserve appreciation, especially during this whole kerfuffle with Reddit :)
Have a good one, mods and admins!