Posts: 17 · Comments: 74 · Joined: 1 mo. ago

  • Though apparently I didn't need step 6 as it started running after I downloaded it

    Hahaha. It really is a little redundant now that you mention it. I'll remove it from the post. Thank you!

    Good fun. Got me interested in running local LLM for the first time.

    I'm very happy to hear my post motivated you to run an LLM locally for the first time! Did you manage to run any other models? How was your experience? Let us know!

    What type of performance increase should I expect when I spin this up on my 3070 ti?

    That really depends on the model, to be completely honest. Make sure to check the model's requirements. For a small model like llama3.2:2b you can expect a significant speed-up, at least. Once the server is up, something like the sketch below is a quick way to see what you're actually getting.
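
    A minimal sketch, assuming the setup from the post exposes Ollama's default local API at http://localhost:11434 (the model name here is just an example, use whatever you pulled): it sends one prompt and prints the reply plus a rough tokens-per-second figure, which is the easiest number to compare between a phone and a 3070 Ti.

    ```python
    # Sketch: query a locally running Ollama server and print a rough
    # tokens-per-second figure. Endpoint and model name are assumptions.
    import json
    import urllib.request

    payload = {
        "model": "llama3.2",  # use whichever model you actually pulled
        "prompt": "Explain in one sentence what a local LLM is.",
        "stream": False,      # ask for a single JSON object, not a stream
    }

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(req) as resp:
        result = json.loads(resp.read())

    print(result["response"])

    # Ollama reports generation timing in nanoseconds.
    tokens_per_second = result["eval_count"] / (result["eval_duration"] / 1e9)
    print(f"~{tokens_per_second:.1f} tokens/s")
    ```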

  • Of course! I run several snowflake proxies across my devices and their browsers.

  • I didn't use an LLM to make the post. I did, however, use Claude to make it clearer since English is not my first language. I hope that answers your question.

  • I have tried it on more or less 5 spare phones. None of them has less than 4 GB of RAM, however. The rough numbers in the sketch below are why that's enough for the smaller models.
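
    Back-of-the-envelope numbers (my own rough estimate, not from the post): at 4-bit quantization a model needs roughly half a byte per parameter just for its weights, so the small models leave plenty of headroom on a 4 GB phone, while a 7B model already gets tight.

    ```python
    # Rough estimate of the RAM needed to hold a quantized model's weights.
    # Ballpark figures only; the KV cache and the OS need room too.
    def approx_weight_memory_gb(params_billion: float, bits_per_weight: int = 4) -> float:
        total_bytes = params_billion * 1e9 * bits_per_weight / 8
        return total_bytes / 1e9

    for params in (1, 3, 7):
        print(f"{params}B params @ 4-bit ≈ {approx_weight_memory_gb(params):.1f} GB")
    # 1B ≈ 0.5 GB, 3B ≈ 1.5 GB, 7B ≈ 3.5 GB
    ```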

  • I would argue there would not be any noticeable differences.

  • The performance may feel somewhat limited, but that's mostly because Android devices usually have less processing power than computers. For smaller models like the ones I mentioned, though, you likely won't notice much of a difference compared to running them on a computer.

  • That really depends on your threat model. The app isn't monitoring your activity, nor does it have embedded trackers. It pulls content directly from YouTube's CDN. All they (Google) know is your IP address, nothing else. For 99.9% of people that's totally ok.

  • Hmmm... You're right. It does feel a lot more arbitrary when you put it that way.

  • You know what? You actually do have a point.

  • My favorite anime website is down; good thing FMHY has a bunch of great ones to choose from. Migrating sucks, though.

  • There isn't really a natural barrier between North and South America, though. Asia has the Urals.

  • Interesting question... I think it would be possible, yes. Poison the data, in a way.

  • Not Perplexity specifically; I'm talking about the broader "issue" of data mining and its implications :)

  • You're aware that it's in their best interest to make everyone think their """AI""" can execute advanced cognitive tasks, even if it has no ability to do so whatsoever and it's mostly faked?

    Are you sure you read the edits in the post? Because they say the exact opposite: Perplexity isn't all-powerful and all-knowing. It just crawls the web and uses other language models to "digest" what it finds. They are also developing their own LLMs. Ask Perplexity yourself or check the documentation.

    Taking what an """AI""" company has to say about their product at face value in this part of the hype cycle is questionable at best.

    Sure, that might be part of it, but they've always been very transparent about their reliance on third-party models and web crawlers. I'm not even sure what your point here is. Don't take what they say at face value; test the claims yourself. The sketch below shows the kind of "crawl, then digest" pipeline I'm describing.
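
    A very rough sketch of that pattern, purely to illustrate the idea; web_search and summarize_with_llm are hypothetical stand-ins, not Perplexity's actual internals or API.

    ```python
    # Toy "retrieve, then digest" pipeline. Both helpers are placeholders:
    # a real system would call a crawler/search index and a language model.
    from typing import List

    def web_search(query: str) -> List[str]:
        # Placeholder: pretend we fetched a few relevant snippets from the web.
        return [f"(stub snippet for: {query})"]

    def summarize_with_llm(question: str, sources: List[str]) -> str:
        # Placeholder: a real system would prompt an LLM with the sources.
        return f"Answer to '{question}', digested from {len(sources)} source(s)."

    def answer(question: str) -> str:
        sources = web_search(question)                # 1. crawl/search the web
        return summarize_with_llm(question, sources)  # 2. digest with an LLM

    print(answer("What do people post about on Lemmy?"))
    ```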

  • What did you mean by "police" your content?

  • Seems odd that someone from dbzer0 would be very concerned about data ownership. How come?

    That doesn't make much sense. I created this post to spark a discussion and hear different perspectives on data ownership. While I've shared some initial points, I'm more interested in learning what others think about this topic rather than expressing concerns. Please feel free to share your thoughts – as you already have.

    I don't exactly know how Perplexity runs its service. I assume that their AI reacts to such a question by googling the name and then summarizing the results. You certainly received much less info about yourself than you could have gotten via a search engine.

    Feel free to go back to the post and read the edits. They may help shed some light on this. I also recommend checking Perplexity's official docs.

  • The prompt was something like, "What do you know about the user llama@lemmy.dbzer0.com on Lemmy? What can you tell me about his interests?" Initially, it generated a lot of fabricated information, but it would still include one or two accurate details. When I ran the test again, the response was much more accurate compared to the first attempt. It seems that as my account became more established, it became easier for the crawlers to find relevant information.

    It even talked about this very post, in item 3 and in the second bullet point of the "Notable Posts" section.

    However, when I ran the same prompt again (or similar prompts), it started hallucinating a lot of information. So the answers seem very hit or miss. Maybe that's something that improves with some prompt engineering and as one's account gets more established.

  • I think their documentation will help shed some light on this, and reading my edits will hopefully clarify it too. Either way, I always recommend checking the docs! :)