Don't forget that SCOTUS also gave itself the right of judicial review out of thin air.
I mean, I can get most of the books I'm looking for from https://annas-archive.org/, and the website is pretty free of clutter.
Happens to me all the time with music. I'll be jamming out to a song I've loved for years, having followed it since my favorite artist dropped the album. Then TikTok plays the one hook from one song on their album, and people always ask, "Oh! Did you hear that on TikTok?!" ... No... no... no 🙄
This is the one I use to host on a VPS. No clue on its deployability on a Pi.
https://github.com/ubergeek77/Lemmy-Easy-Deploy
It auto-installs and updates. You just need to point a DNS record for whatever domain name you like at your instance. It's pretty straightforward from the documentation.
Just watched Tosh interview one of the actors from Hallmark movies; it was pretty good.
Had to flip my phone to make sure it wasn't a boobs meme with the calculator
When a Decepticon with truck nuts walked up the pyramids, I was out...
Also other personal things about him in general
I ran into the same problem and ended up switching to S3-compatible object storage with Vultr. It's been a while since I did it, but here are the links I used to figure it out. My instance is deployed with Lemmy-Easy-Deploy.
I used a combination of:
https://lemmy.world/post/538280
https://github.com/ubergeek77/Lemmy-Easy-Deploy/blob/main/ADVANCED_CONFIGURATION.md
https://git.asonix.dog/asonix/pict-rs/#user-content-filesystem-to-object-storage-migration
Good luck!
Maybe look into a self-hosted LLM. I've used two recently to help analyze a large volume of books: ingest them into the data set, then chat with the bot for specifics. It worked pretty well, but there are some limitations, such as token length and general hallucinations. Both cite the data they drew on, though, so it helps you check their work.
PrivateGPT - https://github.com/imartinez/privateGPT
Llamaindex - https://github.com/run-llama/llama_index
Both have a simple self-hosted web UI or standalone applications. So, in theory, you should be able to ingest data and then see if it matches any submissions you submit later. I haven't really tried it for this, though, so it might not work.
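The basic ingest-then-query-with-citations pattern both tools use can be sketched in plain Python. This is just a toy illustration (naive keyword-overlap scoring stands in for real embeddings, and all names here are made up), not how either project actually implements it:

```python
# Toy sketch of ingest + cited retrieval. Real tools use embeddings and
# an LLM; here, naive keyword overlap picks the best-matching chunk.

def ingest(docs):
    """Split each (title, text) pair into paragraph chunks with citations."""
    chunks = []
    for title, text in docs:
        for i, para in enumerate(text.split("\n\n")):
            chunks.append({"source": f"{title} para {i + 1}", "text": para})
    return chunks

def query(chunks, question, top_k=1):
    """Return the top_k chunks ranked by word overlap, with their sources."""
    q_words = set(question.lower().split())
    ranked = sorted(
        chunks,
        key=lambda c: len(q_words & set(c["text"].lower().split())),
        reverse=True,
    )
    return [(c["text"], c["source"]) for c in ranked[:top_k]]

docs = [("Moby-Dick", "Call me Ishmael.\n\nThe whale is white.")]
chunks = ingest(docs)
print(query(chunks, "What color is the whale"))
```

The citation attached to each chunk is the part that makes hallucinations checkable: you can look up the cited paragraph and verify the answer yourself.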
Why such a long spoon for tea?
It might be, but the image is similar to other Pastafarian depictions of the Flying Spaghetti Monster.
Midjourney mostly creates scary looking ones with a simple input.
The Mac mini and high-end MacBook Pro will not be among the first wave of Macs to launch with the M3 chip later this year, according to...
cross-posted from: https://linkopath.com/post/11819
> According to Bloomberg's Mark Gurman, Apple is planning to release new MacBook Pro and Mac mini models with the M3 chip in the middle of 2024 at the latest. The new MacBook Pro models will feature the M3 Pro and M3 Max chips, while the Mac mini will only be available with the M3 chip. The new Mac mini is not expected to be released until late 2024 at the earliest. There will be no new 14- and 16-inch MacBook Pro models with the M3 chip in October 2023.