After confirmation that the team was working on porting the ROM, LineageOS 21 is now officially coming to the Chromecast with Google TV 4K.
-
Maintenance Tonight
UPDATE DAY 2: The backend has successfully been migrated onto new dedicated hosting after some pain. There should not be major downtime from here on. Tomorrow I will be working on integrating a better backup solution, and then I'll leave it alone for a little while.
UPDATE: I was able to deploy the database onto dedicated server hardware tonight, but have not finished moving over the other components I wanted to. You may notice some performance degradation due to increased database-backend latency (...or maybe it will just be better anyway, lol).
I will finish off work on this tomorrow!
Lemdro.id has been struggling with some performance issues lately, as you've likely noticed. This is due to changes made by our hosting provider that cause the database to run much slower. Tonight at 10pm PST, I will be putting lemdro.id into maintenance mode to migrate some parts of the infrastructure to a new dedicated server.
Thanks for your patience!
-
Ongoing federation problems with Lemdroid over the last few weeks
Posts to your instance's communities sometimes seem to take multiple hours to federate, and sometimes don't federate at all.
Any idea what the cause is? I also wonder whether the "Parallel federation sending" feature introduced in Lemmy v0.19.6 could help with the issue.
-
LineageOS 21 now officially supports Chromecast with Google TV (4K) (9to5google.com)
-
I can't upload photos!
I get this error message whenever I try to upload:
{"message":"{\"msg\":\"No space left on device (os error 28)\",\"files\":null}"}
-
State of the lemdro.id (LW federation + infra upgrades + new frontend option)
Hey all! I've done a lot of database maintenance work lately and other things to make lemdro.id better. Wanted to give a quick update on what's up and ask for feedback.
For a while, we were quite a ways behind lemmy.world federation (along with many other instances) due to a technical limitation in lemmy itself that is being worked on. I ended up writing a custom federation buffer that allowed us to process activities more consistently, and I am happy to say that we are fully caught up with LW and will not have that problem again!
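For those curious, here's a rough sketch of the shape of such a buffer: accept incoming activities immediately, queue them locally, and replay them to the lemmy inbox at whatever pace it can handle. The names, ports, and URLs below are illustrative placeholders, not the actual lemdro.id implementation (which also has to deal with HTTP signature details glossed over here).

```python
# Illustrative federation buffer sketch: accept ActivityPub POSTs right away,
# queue them, and forward them to the local lemmy inbox at a steady pace.
# Hostnames, ports, and queue sizes are assumptions, not the real deployment.
import asyncio
from aiohttp import ClientSession, web

LEMMY_INBOX = "http://lemmy-backend:8536/inbox"  # assumed internal address

async def receive_activity(request: web.Request) -> web.Response:
    """Accept the activity immediately so the sending instance never times out."""
    body = await request.read()
    # Keep only the headers lemmy needs; real code must preserve everything
    # required for HTTP signature verification.
    headers = {k: v for k, v in request.headers.items()
               if k.lower() in ("content-type", "signature", "digest", "date")}
    queue: asyncio.Queue = request.app["queue"]
    try:
        queue.put_nowait((body, headers))
    except asyncio.QueueFull:
        return web.Response(status=503)  # buffer full: ask the sender to retry
    return web.Response(status=202)

async def replay_worker(queue: asyncio.Queue) -> None:
    """Drain the buffer and forward activities to lemmy at its own pace."""
    async with ClientSession() as session:
        while True:
            body, headers = await queue.get()
            try:
                await session.post(LEMMY_INBOX, data=body, headers=headers)
            except Exception:
                await asyncio.sleep(1)            # lemmy unavailable: back off
                await queue.put((body, headers))  # crude retry

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue(maxsize=100_000)
    app = web.Application()
    app["queue"] = queue
    app.router.add_post("/inbox", receive_activity)
    runner = web.AppRunner(app)
    await runner.setup()
    await web.TCPSite(runner, "0.0.0.0", 8080).start()
    await replay_worker(queue)

if __name__ == "__main__":
    asyncio.run(main())
```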
Additionally, on the database side of things, I've set up barman in the cluster to allow for point-in-time backups. Basically, we can now restore the database to any arbitrary point in time. This is on top of periodic automatic backups, which also get pulled to storage on both my personal NAS and a Backblaze bucket (both encrypted, of course).
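If you're curious what point-in-time recovery looks like in practice, here's a minimal sketch of driving barman from a script; the server name, backup ID, and destination path are placeholders rather than our real setup.

```python
# Rough sketch of point-in-time recovery with barman, driven from Python.
# The server name, backup ID, and destination directory are placeholders,
# not the actual lemdro.id recovery procedure.
import subprocess

def recover_to(target_time: str,
               server: str = "lemdroid-db",
               backup_id: str = "latest",
               destination: str = "/var/lib/postgresql/recovered") -> None:
    """Restore the chosen base backup and replay WAL up to target_time."""
    subprocess.run(
        [
            "barman", "recover",
            server, backup_id, destination,
            "--target-time", target_time,  # e.g. "2024-06-01 03:15:00"
        ],
        check=True,
    )

if __name__ == "__main__":
    # Replay the database to just before a hypothetical bad migration.
    recover_to("2024-06-01 03:15:00")
```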
Today, I deployed a new frontend at https://next.lemdro.id. This one is very early stages and experimental but is
-
Photon UI rollout
I am rolling out the Photon UI as a replacement for the default lemmy UI right now. Initially, only about 50% of requests will be routed to Photon, determined by a hash of your IP address and user agent (sorry for any inconsistencies...). As I confirm that this configuration is stable, I will slowly increase the percentage until Photon is the new default frontend for lemdro.id.
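For transparency, the bucketing works roughly like the sketch below (a simplified illustration, not the exact code running at the edge): hash the client IP plus user agent into a bucket from 0-99 and compare it against the rollout percentage, so the same client consistently gets the same frontend.

```python
# Simplified sketch of percentage-based frontend routing keyed on a hash of
# client IP + user agent. Not the exact code running on the lemdro.id edge.
import hashlib

PHOTON_ROLLOUT_PERCENT = 50  # gradually increased as stability is confirmed

def use_photon(client_ip: str, user_agent: str,
               percent: int = PHOTON_ROLLOUT_PERCENT) -> bool:
    """Deterministically assign a client to Photon or the old lemmy UI."""
    key = f"{client_ip}|{user_agent}".encode()
    bucket = int.from_bytes(hashlib.sha256(key).digest()[:8], "big") % 100
    return bucket < percent

if __name__ == "__main__":
    print(use_photon("203.0.113.7", "Mozilla/5.0 (X11; Linux x86_64)"))
```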
If you have any difficulties, please reach out. Additionally, the "old" lemmy frontend will remain available at https://l.lemdro.id
Edit: I am aware of some problems with l.lemdro.id. It wasn't designed to run on a subdomain so I'll need to add a proxy layer to it to redirect requests. A task for tomorrow!
FINAL EDIT: https://l.lemdro.id is now fully operational; if you prefer the old lemmy UI, it is available there.
-
Poor sorting experience
Over the last couple of weeks, I managed to track down and fix the root cause of the stale sorting on lemdro.id. My apologies!
-
0.19.3 Update
Edit: Upgrade went off without much of a hitch! Some little tasks left to do but we are now running 0.19.3. Seems like a lot of the federation issues have been solved by this too.
You will have to re-login and 2FA has been reset.
This update is scheduled to take place this weekend. No specific day or time because I am infamously bad at following those. I will try to minimize impact by keeping downtime to lower-traffic periods.
Ideally, there will be no downtime, but if there is, it should last an hour at most. During this time I will put up an "under maintenance" page so you can see what we are up to.
Feel free to join our Matrix space for more information and ongoing updates! My apologies for how long this took - I was in the middle of a big move and a new job.
Additionally, there may be small periods of increased latency or pictures not loading as I perform maintenance on both the backend database and pictrs server in preparation for this upgrade.
-
Update to 0.19.3?
Hello everyone,
Small question for you: do you have any idea when you will update to 0.19.3? There is an issue with upvote federation between 0.18.5 and 0.19.3 at the moment.
-
Recent Downtime
There was a brief (~5 minute) period of downtime on lemdro.id recently. This occurred during a routine database upgrade when some strange issue caused a member of the cluster to become inconsistent and refuse to form quorum.
This locked the entire cluster into an invalid state which took some troubleshooting to fix. My apologies.
I will be rolling out read replicas for folks on the East Coast of the US as well as those in Europe sometime in the next week; you should notice a pretty dramatic reduction in latency if you are in those areas. Additionally, other recent changes have increased reliability and decreased latency, which may or may not be noticeable.
I will post another update before I start rolling out the read replicas, since it is kind of a big change (and I will schedule a time for it).
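As a rough illustration of what read replicas buy us: writes still go to the primary, while read-only queries can be served by whichever replica is closest to you. The hostnames and regions in this sketch are made up, not the real topology.

```python
# Illustrative sketch of read/write splitting across regional read replicas.
# Hostnames and regions are placeholders, not the real lemdro.id topology.
import psycopg2

PRIMARY_DSN = "host=db-primary.internal dbname=lemmy user=lemmy"
REPLICA_DSNS = {
    "us-west": "host=db-primary.internal dbname=lemmy user=lemmy",
    "us-east": "host=db-replica-east.internal dbname=lemmy user=lemmy",
    "europe":  "host=db-replica-eu.internal dbname=lemmy user=lemmy",
}

def connect_for(region: str, readonly: bool):
    """Send writes to the primary, reads to the nearest replica."""
    dsn = REPLICA_DSNS.get(region, PRIMARY_DSN) if readonly else PRIMARY_DSN
    conn = psycopg2.connect(dsn)
    conn.set_session(readonly=readonly)
    return conn

if __name__ == "__main__":
    # A read-only query from Europe lands on the nearby replica.
    with connect_for("europe", readonly=True) as conn, conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM post;")
        print(cur.fetchone())
```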
-
Infrastructure Upgrade Rollback
I rolled out an infrastructure upgrade this past Monday (Dec 4) with the intent of reducing peak response times and removing occasional scaling errors.
Unfortunately, my metrics system showed slightly elevated error rates, so I've decided to roll back these changes for now. I will make another announcement before I roll them out again in the future. Thanks for your support!
-
Inaccessible instances
lemmy.studio and sopuli.xyz are not populating for me. Probably soft blocked? Also, it may be worth opening and pinning a megathread for this topic so the community doesn't get spammed.
edit 10/15: sopuli has started working, studio is still not showing
-
Should lemdro.id change its default instance to Photon? Please vote!
Hey there everyone. I think the Photon project has matured to the point where I feel ready to replace the default lemmy frontend with it. Since this instance serves roughly 1000 people now, I figured this was worth holding a vote on!
Please check out Photon as currently hosted at https://p.lemdro.id.
If you support changing the default frontend to Photon, upvote my comment on this post. If you don't support it, downvote that same comment.
Thanks!
-
Frequent error messages lately?
Has anyone else been getting frequent error messages on lemdro.id over the last couple of days? I am using the standard web interface and keep seeing something to the effect of:
FetchError: request to http://lemdroid-lemmy.flycast/api/v3/post/list?type_=Subscribed&page=1&limit=20&sort=Hot
-
lemmy.studio not populating?
Basically title, can't get any communities from the instance to load. It is in the list of federated instances in the footer.
-
Welcome to Lemdroid! Start here for resources and fresh communities 🪴.
Start your journey into the Fediverse by subscribing to our starter communities. We're actively working with subreddit communities and moderators on their transition over.
Our Mission
Lemdro.id strives to be a fully open source instance with incredible transparency. Visit our GitHub for the nuts and bolts that go into making this instance soar and our Matrix Space to chat with our team and access the read-only backroom admin chat.
Interfaces
- lemdro.id powered by Photon
- l.lemdro.id powered by Lemmy-UI
- m.lemdro.id powered by Voyager
- old.lemdro.id powered by mlmym
- a.lemdro.id powered by Alexandrite
-
Proxy tuning
Earlier today, I identified the root cause of an issue causing annoying intermittent 502 errors. If you've ever had an action load infinitely until you refreshed the page, that was this issue. I deployed a fix and am slowly scaling it down to stress test it. If you encounter infinite loading or an HTTP 502 error, please let me know!
UPDATE: Stress testing complete. Theoretically we should be equipped to handle another 5k users without any intervention from me
-
Image server maintenance
Hello folks! I am migrating the image backend to an S3-compatible provider for cost and reliability reasons. During this time, thumbnails and other images hosted here will be borked, but the rest of Lemdro.id will remain online. Thank you for your patience!
UPDATE: Image migration is gonna take a hot minute. Should be done in around 6 hours, I'll get it fully fixed up in around 7-8 hours when I wake up (~08:30 PDT)
UPDATE 2: It failed, yay! Alright, fine. I turned the image proxying back on. I am migrating to S3 in the background and will switch over when it is done. Any images uploaded in the next 8 hours or so may end up being lost.
UPDATE 3: Migration complete. Will be rolling out the update to S3-backed image storage in around 6 hours (~6pm PDT)
UPDATE 4: Object storage backend deployed! Thanks for your patience folks.
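For anyone wondering what "migrating to S3" involves under the hood, the gist is walking the local media directory and copying every file into an S3-compatible bucket, roughly like the sketch below. The bucket name, endpoint, and paths are placeholders, not the actual configuration.

```python
# Rough sketch of copying a local media directory into an S3-compatible
# bucket with boto3. Bucket name, endpoint, and paths are placeholders.
from pathlib import Path

import boto3

MEDIA_ROOT = Path("/var/lib/pictrs/files")    # assumed local media path
BUCKET = "lemdroid-media"                     # placeholder bucket name
ENDPOINT = "https://s3.example-provider.com"  # any S3-compatible endpoint

def migrate() -> None:
    s3 = boto3.client("s3", endpoint_url=ENDPOINT)
    for path in MEDIA_ROOT.rglob("*"):
        if path.is_file():
            key = str(path.relative_to(MEDIA_ROOT))
            s3.upload_file(str(path), BUCKET, key)
            print(f"uploaded {key}")

if __name__ == "__main__":
    migrate()
```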
-
Upcoming server maintenance (10:30pm PT -> 11:30pm PT)
I'm sure you all have noticed the latency problems on this instance. Stage 1 of my 4-stage scaling roadmap is taking place tonight as I migrate the database to physically run closer to the machines running lemmy.
I will do a more detailed write-up on this later, but the gist is that each db operation required a new connection from lemmy, and that means a brand-new SSL handshake since the db is managed elsewhere. Pooling would solve this, but lemmy does not handle a properly configured pgbouncer correctly in my testing. So the solution is to move the database closer, inside the private network, to avoid SSL handshakes altogether.
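To make the connection cost concrete, here's a small sketch (the DSN is a placeholder): opening a fresh connection per operation pays the TCP + SSL handshake every single time, whereas a pool pays it once per pooled connection and then reuses it.

```python
# Small sketch contrasting per-operation connections with a reused pool.
# The DSN is a placeholder, not the actual lemdro.id configuration.
import psycopg2
from psycopg2.pool import SimpleConnectionPool

DSN = "host=db.internal dbname=lemmy user=lemmy sslmode=require"

def without_pool(n: int) -> None:
    """Each call opens a new connection, i.e. a fresh TCP + SSL handshake."""
    for _ in range(n):
        conn = psycopg2.connect(DSN)
        with conn, conn.cursor() as cur:
            cur.execute("SELECT 1;")
        conn.close()

def with_pool(n: int) -> None:
    """Handshakes happen once up front; connections are reused afterwards."""
    pool = SimpleConnectionPool(minconn=1, maxconn=5, dsn=DSN)
    for _ in range(n):
        conn = pool.getconn()
        try:
            with conn, conn.cursor() as cur:
                cur.execute("SELECT 1;")
        finally:
            pool.putconn(conn)
    pool.closeall()
```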
TL;DR instance gonna go brrrr, downtime starting at 10:30pm pacific time tonight, should be done by 11:30pm