TheDude @sh.itjust.works

Scheduled Upgrade to 0.19.11 - May 2nd 2025 @ 8PM EDT
Yes, this will not change. The /c/Agora will continue to be a platform for members to use to shape this instance.
Our Terms of Service will likely be reviewed in the future, but our core philosophy of “don’t get us in trouble” will stay the same. We’ve always believed that the actions of a few shouldn’t negatively impact the experience for everyone else—and that belief still stands.
When it comes to defederation, we handle it on a case-by-case basis. We only consider it under very specific circumstances, or if there’s a clear mandate from the community. For example, we may choose to defederate from an instance if it's causing harm or violating our rules, especially if their admin team is unresponsive or unreachable.
Whenever we do take such action, we usually start a discussion with the community to keep everyone informed. In fact, this has happened recently—you can read more about it here.
I've gone ahead and applied the update to our configuration files. This issue should be sorted now.
Thanks!
Thanks for bringing this to my attention.
I found that there were some changes to the nginx.conf file, which can be seen here.
It does look like it's fixed in the 19.10 version of Lemmy. I'll do another deep dive later tonight to see if we can just update the nginx configuration without upgrading to 19.10, and if that's the case, I'll schedule another small maintenance window this week to get it pushed out.
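For anyone curious, applying a standalone nginx config change is usually just a matter of editing the file, validating it, and reloading the proxy, without touching the Lemmy containers themselves. A rough sketch, assuming a stock lemmy-ansible style deployment where nginx runs in a compose service named `proxy` (that name is an assumption, adjust to your setup):

```bash
# Validate the updated nginx.conf inside the proxy container before reloading.
# The service name "proxy" is an assumption based on a stock lemmy-ansible deployment.
docker compose exec proxy nginx -t

# Reload nginx without restarting the Lemmy backend or UI containers.
docker compose exec proxy nginx -s reload
```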
Thank you
Give that link a try again; otherwise, as @InEnduringGrowStrong@sh.itjust.works mentioned, send us a PM with your account and we'll manually verify it.
Thanks
Hey there! I'm sorry that you didn't receive the activation email. Can I ask how long ago this was? I recently changed our email provider and want to make sure this didn't happen after that change. Thanks!
test
Hey!
Everything has been reviewed and we're good to move over to 19.9. I'll be posting an announcement in the coming days to give everyone a heads up. I'm aiming towards the end of this week.
This one was received. Thanks!
I want to say that I've been blown away by the amount of support that I've received in the last 24 hours. You all have left me speechless. Thank you to everyone who has donated, and to those who couldn't: know that being a good member of this community is already more than I could ask for. Thank you!
Hey there, I considered this at first, but my understanding with Tesseract is that this would mean we'd be proxying account credentials/tokens of other instances. I'm comfortable doing this for our own instance, but I don't feel it would be responsible to enable it for other instances. If there were a way to be more transparent with users about using it for external instances, I'd reconsider. I'm open to any feedback.
edit: typo
The 2025 SJW Update: Donations, costs and other points
Absolutely. This is exactly why it's important that we get a donation strategy put into place sooner rather than later.
This is odd. I did not upgrade pictrs (the service responsible for image hosting). There may have been an error for some specific table rows during the database migrations. Another possibility could have been an error during the schema update that I believe the new version had to go through.
Would you be able to DM me links to other posts that you come across with broken images? I will look to see if I can identify any common trends that could tell us more.
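If you want a quick check on your end first, hitting the image URL directly and looking at the HTTP status usually tells a truly missing image apart from a client-side hiccup. A small sketch, where the URL is just a placeholder for any broken image link you come across:

```bash
# Print only the HTTP status code for an image URL.
# 200 means the file is still being served; 404/500 suggests the record or file was lost.
# The URL below is a placeholder, not a real image.
curl -s -o /dev/null -w "%{http_code}\n" "https://sh.itjust.works/pictrs/image/example.webp"
```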
Thank you
We sure are!
It's coming. This instance is deployed using lemmy-ansible, but with some slight modifications. I need to review a few things to make sure the transition is smooth. As you pointed out, it's also summertime, which isn't helping with my free time situation.
I have a long-term solution that will speed up upgrades, but I only expect to have it in place by the fall. In the meantime, I'm planning on getting this instance upgraded to 19.5 in the coming weeks.
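For those unfamiliar with lemmy-ansible, an upgrade is normally just a matter of pulling the playbook version that pins the new Lemmy release and re-running it against the host. A rough sketch only; our deployment has local modifications, and the inventory path and flags are the stock ones from the lemmy-ansible README:

```bash
# Pull the playbook revision that pins the new Lemmy release.
git pull

# Re-run the playbook; it re-deploys the containers with the new images.
# Inventory path and --become follow the stock lemmy-ansible instructions.
ansible-playbook -i inventory/hosts lemmy.yml --become
```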
Hey,
I'm due to make a post about the instance's finances soon. I'll get one posted in the next little bit.
I'm still covering costs out of pocket, but my goal has been to have this instance fully funded by its members. I want this instance to stay true to its members and the fediverse, and to be in a position where it can continue to thrive transparently without me. The admin team has been doing an outstanding job keeping this instance safe and moderated for everyone. I don't thank them enough, but they are the true heroes of this instance.
As some may know, I've been working with a local non-profit for the past 6 months to have them accept donations on behalf of this instance. They have been slow to get things done, but it's progressing. I may end up doing what I was trying to avoid and create my own non-profit instead, if that's what it takes. That direction does have its own challenges and would add additional responsibilities to my plate.
In the meantime, even though some of the service costs have gone up slightly, I get joy knowing that I can continue providing this instance to all of you. I'm OK paying the costs to keep the lights on until we can transition to donations.
Yes, I'll get this done
UPDATE: here you go https://oldsh.itjust.works/
There was a little hiccup this morning unrelated to the migration tonight. The Lemmy services like to be restarted every once in a while. Once we move to the new hardware I'll be able to look into implementing a better logging system and hopefully be more proactive when situations like these happen.
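Until that better logging is in place, the periodic restart is the low-tech fix. Something along these lines, where the service names ("lemmy", "lemmy-ui") are assumptions based on a standard docker compose / lemmy-ansible deployment:

```bash
# Restart only the Lemmy backend and UI containers, leaving postgres and pictrs alone.
# Service names assume a standard docker compose deployment.
docker compose restart lemmy lemmy-ui
```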
The biggest consumer of storage on this instance is the image hosting, which we use an external object storage provider for. The second is the database, which is nowhere near the 2TB capacity. 1TB SSDs are cheaper than 2TB SSDs, and I also didn't want to spend more than I needed. As others mentioned, if we need more space or IOPS in the future, I could accomplish this by adding more drives as a quick fix. This server does not support NVMe unless I leverage its PCIe ports, but I don't plan on doing that. By the time this instance gets to the point where 10 SSD drives just aren't cutting it anymore, I'll probably have come across another opportunity to get a new server with better NVMe support.
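For reference, checking how much of that capacity the database is actually using is a one-liner against postgres. A sketch, where the container, user, and database names ("postgres", "lemmy") are assumptions based on a typical docker compose deployment:

```bash
# Report the on-disk size of the Lemmy database.
# Service/user/database names are assumptions; adjust to your deployment.
docker compose exec postgres psql -U lemmy -c "SELECT pg_size_pretty(pg_database_size('lemmy'));"
```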