I remember using an app that blocked spam calls using a collaborative database. The one I use now is Truecaller, but it's always trying to get me to subscribe. I liked the one I used before better. What's the best spam-blocking caller ID app that you know of?
I'm looking to create a unified view of data across multiple Debian-based devices without fully replicating the data. My current setup includes:
- Main computer with one HDD
- Server with four drives
- A couple of Raspberry Pis
I want a folder on each device that provides access to the contents from all drives, but without actually copying or syncing the data to each device. I'm aiming for a solution that allows browsing and accessing all files from any device while keeping the actual data in its original location.
I've been looking into using a combination of MergerFS and SSHFS. The idea is to use SSHFS to mount remote directories and then use MergerFS to combine these mounts with local directories into a single view. However, I'm not sure whether I should merge the drives on the server and share the merged folder with all the other systems, or share each drive with each system and merge them locally. Is this the best approach, or are there better alternatives?
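To make the second option concrete, here's roughly what I have in mind on each client (hostnames, paths, and options are just examples, and I haven't tested this yet):

```bash
# Rough, untested sketch of the "merge on each device" variant.
# "server", /srv/disk1, /srv/disk2, and /mnt/pool are placeholder names.
mkdir -p /mnt/server-disk1 /mnt/server-disk2 /mnt/pool

# Mount the server's drives over SSH
sshfs user@server:/srv/disk1 /mnt/server-disk1
sshfs user@server:/srv/disk2 /mnt/server-disk2

# Merge the remote mounts with a local directory into one unified view
mergerfs -o allow_other /mnt/server-disk1:/mnt/server-disk2:/home/user/data /mnt/pool
```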
I want to avoid solutions like Syncthing, Dropbox, or Google Drive that would clone the entire data set to each device. I'm trying to avoid data duplication and save storage space on devices with smaller capacities.
I'd love to hear your thoughts on the MergerFS + SSHFS approach, or if you have any other suggestions that might better fit my needs. Any insights, recommendations, or personal experiences would be greatly appreciated!
Thanks in advance for your help!
Imagine being able to travel back in time and have a conversation with your younger self. What words of wisdom or advice would you share? What experiences or lessons learned would you want them to know about?
The ones I buy contain lemon for preservation, but I don't like the acidic taste of lemon in tomato sauce.
Oh, so it’s mostly a side effect, but they are still primarily being trained to predict the next word.
And the only solution to the dead internet theory is scanning our eyeballs for Worldcoin. There don't seem to be any non-dystopian timelines in our future.
I've been reading about recent research on how the human brain processes and stores memories, and it's fascinating! It seems that our brains compress and store memories in a simplified, low-resolution format rather than as detailed, high-resolution recordings. When we recall these memories, we reconstruct them based on these compressed representations. This process has several advantages, such as efficiency, flexibility, and prioritization of important information.
Given this understanding of human cognition, I can't help but wonder why AI isn't being trained in a similar way. Instead of processing and storing vast amounts of data in high detail, why not develop AI systems that can compress and decompress input like the human brain? This could potentially lead to more efficient learning and memory management in AI, similar to how our brains handle information.
Are there any ongoing efforts in the AI community to explore this approach? What are the challenges and benefits of training AI to mimic this aspect of human memory? I'd love to hear your thoughts!
I agree, but that might complicate things. Instead of votes we could also use time spent reading posts as the engagement metric.
How about something like this?
Quality Engagement Score (QES)
QES = (PCM * AVU) / MAU, where:
- PCM = Posts + Comments per Month
- AVU = Average Votes per User (total monthly upvotes / MAU)
- MAU = Monthly Active Users
PCM measures raw activity, while AVU factors in community approval.
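To make it concrete with made-up numbers: a community with 200 MAU, 800 posts + comments in a month, and 3,000 total upvotes would have PCM = 800, AVU = 3,000 / 200 = 15, and QES = (800 * 15) / 200 = 60.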
I appreciate your perspective, but my focus is on enhancing our measurement of community activity; if you have a more effective metric in mind, I’d love to hear it instead of just pointing out flaws.
Hey fellow Lemmings,
I've been thinking about how we measure the liveliness of our communities, and I believe we're missing the mark with Monthly Active Users (MAU). Here's why I think Posts + Comments per Month (PCM) would be a superior metric:
Why PCM is Better Than MAU
- Quality over Quantity: MAU counts lurkers equally with active participants. PCM focuses on actual engagement.
- Spam Resistance: Creating multiple accounts to inflate MAU is easy. Generating meaningful posts and comments is harder.
- True Reflection of Activity: A community with 1000 MAU but only 10 posts/comments is less vibrant than one with 100 MAU and 500 posts/comments.
- Encourages Participation: Displaying PCM could motivate users to contribute more actively.
- Easier to Track: No need for complex user tracking. Just count posts and comments.
Implementation Ideas
- Show PCM in the community list alongside subscriber count
- Display PCM in each community's sidebar
- Use PCM for sorting "hot" communities
What do you think? Should we petition the Lemmy devs to consider implementing this? Let's discuss!
There are 16M comments per day according to the observer website.
30k communities and 9M posts per day. I find the number of posts per day very hard to believe. Each community would have an average of 300 posts per day, and most communities are abandoned. Maybe it's the bot communities reposting everything from Reddit that inflate the number so much.
> Yeah because first of all, content had to be spread out across 562826 different communities for no reason other than that reddit had lots of communities, after growing for many many years. It started with just a few.
>
> Then 99% of those were created on Lemmy.world, and every new user was directed to sign up at Lemmy.world.
>
> I guess a lot of people here are younger than me and didn't experience forums, but we had like 30 forum channels. That was enough to talk about anything at all. And I believe it's the same here, it would have been enough. And then all channels would have easy to find content.
>
> source
Hey everyone! I'm curious about the number of communities on Lemmy and the activity levels within them. Specifically, is there a reliable source where I can check the total number of communities and the average number of posts per month? It seems like the number of communities might be quite high, but I wonder how low the post activity is across most of them. Any insights or links to resources would be greatly appreciated!
I often find myself browsing videos on different invidious instances or posts on various lemmy instances, and I would love to be able to create a "watch later" list or a "favorite" list that works across all of them. I don't want to have to manually import and export these lists between different instances, either, like I have to do on lemmy, invidious, etc.
I'm currently using a single bookmarks folder to keep track of everything, but I don't like this because it's a mess. I'd like to be able to create two or three different lists for different groups of websites, so that I can easily find what I'm looking for. For example, one favorites list for reddit, tumblr, etc.; another favorites list plus a watch-later list for invidious instances; and other lists for other sites.
Is there any way to achieve this? I'm open to using browser extensions, third-party apps, or any other solutions that might be out there. I would prefer a free solution, but I'm willing to consider paid options as well.
A bookmark can only exist in one folder at a time, whereas I want to be able to add a single item to multiple lists (e.g., both "favorites" and "watch later").
I believe the closest to what I'm looking for are Raindrop.io, Pocket, Wallabag, Hoarder, etc.
https://github.com/hoarder-app/hoarder?tab=readme-ov-file#alternatives
I use Manjaro Linux and Firefox.
I want to create a collage of 20 screenshots from a video, arranged in a 5x4 grid, regardless of the video’s length. How can I do this efficiently on a Linux system?
Specifically, I’d like a way to automatically generate this collage of 20 thumbnails from the video, without having to manually select and arrange the screenshots. The number of thumbnails should always be 20, even if the video is longer or shorter.
Can you suggest a command-line tool or script that can handle this task efficiently on Linux? I’m looking for a solution that is automated and doesn’t require a lot of manual work.
Here's what I've tried, but I only get 20 black boxes:
```bash
#!/bin/bash

# Check if input video exists
if [ ! -f "$1" ]; then
    echo "Error: Input video file not found."
    exit 1
fi

# Get video duration
duration=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 "$1")

# Calculate interval between frames
interval=$((duration / 20))

# Extract 20 frames from the video
for i in {1..20}; do
    ffmpeg -ss $((interval * ($i - 1))) -i "$1" -vf scale=200:-1 -q:v 2 "${1%.*}_frame$i.jpg"
done

# Create collage
montage -mode concatenate -tile 5x4 -geometry +2+2 "${1%.*}"_frame*.jpg output_collage.jpg

# Clean up temporary files
rm "${1%.*}"_frame*.jpg

echo "Collage created: output_collage.jpg"
```
It isn't; the open-source ones that I know of are Llama, Mistral, and Grok.
I used Google before, but since I degoogled, I only have my contacts on my Android phone. However, I would like to be able to access them on Linux too and have them synced.
There's also an open issue to add this to lemmy-ui
Take a look at Chocolate; it perhaps already fills the role of the program you want to make. It gets metadata from different endpoints for Movies, TV Shows, Games, Books, Music, and Live TV.
I'm very interested in this kind of project; I'm specifically looking for something like this that can automatically tag all my media content.
A collection of modern/faster/saner alternatives to common unix commands. - ibraheemdev/modern-unix
DeepSeek Coder: Let the Code Write Itself. - deepseek-ai/DeepSeek-Coder
Permanently Deleted
BleachBit, the popular free system cleaner, has just released a major update — its first since 2021.
CogVLM: Visual Expert for Pretrained Language Models
Presents CogVLM, a powerful open-source visual language foundation model that achieves SotA perf on 10 classic cross-modal benchmarks
repo: https://github.com/THUDM/CogVLM abs: https://arxiv.org/abs/2311.03079
A self-hosted BitTorrent indexer, DHT crawler, content classifier and torrent search engine with web UI, GraphQL API and Servarr stack integration. - bitmagnet-io/bitmagnet
This is a significant release with lots of major and long-requested features. Here's a rundown: Session Resurrection: This version adds a built-in capability to resurrect sessions. Attaching to "ex...
A terminal workspace with batteries included