USB-C Multi-Device Charger Recommendations?

I've been playing around with my home office setup. I have multiple laptops to manage (thanks work) and a handful of personal devices. I would love to stop playing the "does this charging brick put out enough juice for this device" game.

I have:

  • 1x 100W Laptop
  • 1x 60W Laptop
  • 1x 30W Router
  • 1x 30W Phone
  • 2x Raspberry Pis

I've been looking at multi-device bricks like the UGREEN Nexode 300W, but hoped someone might know of a similar product for less than $170.
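
For the sake of the math, my worst case (everything pulling max at once, and assuming ~15W per Raspberry Pi, the official Pi 4 supply rating -- a Pi 5 wants more) works out to:

100 + 60 + 30 + 30 + (2 × 15) = 250W

So ~250W is the real target, and a 300W brick leaves comfortable headroom.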

Saving a list of products that are in the ballpark below, in case they help others. Unfortunately most just miss the mark for my use case.

  • Shargeek S140: $80, >100W peak delivery for one device, but drops below that as soon as a second device is plugged in.
  • 200W Omega: at $140 it's a little steep. Plus it doesn't have enough ports for me. For these reasons, I'm out.
  • Anker Prime 200W: at $80 this seems like a winner, but they don't show what happens to the 100W outputs when you plug in a third (or sixth) device. Question pending with their support dept. Update: it can't hit 100W on any port with six devices plugged in.
  • Anker Prime 250W: thanks FutileRecipe for the recommendation! This hits all of the marks and comes in around $140 after a discount. Might be worth the coin.

If you've read this far, thanks for caring! You're why this corner of the internet is so fun. I hope you have a wonderful day.

Can AI even be open source? It's complicated
  • > Please don't assume anything, it's not healthy.

    Explicitly stating assumptions is necessary for good communication. That's why we do it in research. :)

    > it depends on the license of that binary

    It doesn't, actually. A binary alone, by definition, is not open source, as the binary is the product of the source, much like a model is the product of training and refinement processes.

    > You can't just automatically consider something open source

    On this we agree :) which is why saying a model is open source, or slapping a license on it, doesn't make it so.

    > the main point is that you can put closed source license on a model trained from open source data

    1. Actually, the ability to legally produce closed source material depends heavily on how the data is licensed in that case.
    2. This is not the main point at all. This discussion is regarding models that are released under an open source license. My argument is that they cannot be truly open source on their own.
  • Can AI even be open source? It's complicated
  • Quite aggressive there, friend. No need for that.

    You have a point that an intensive and costly training process plays a role in the usefulness of a truly open source gigantic model. I'll assume here that you're referring to the likes of Llama 3.1's heavy variant or a similarly large LLM. Note that I wasn't referring to gigantic LLMs specifically when I said "models"; it's a very broad category.

    However, that doesn't change the definition of open source.

    If I have an SDK to interact with a binary and "use it as [I] please" does that mean the binary is then open source because I can interact with it and integrate it into other systems and publish those if I wish? :)

  • Can AI even be open source? It's complicated
  • Do you plan to sue the provider of your "open source" model? If so, would the goal be to force the provider into full compliance with the license (access to their source code and training set), or to force them to change the license to something they actually comply with?

  • Can AI even be open source? It's complicated
  • You would be obligated, if your goal were to comply with the spirit and definition of open source (and to sleep well at night, in my opinion).

    Do you have the source code and full data set used to train the "open source" model you're referring to?

  • What self hosting feels like (It's painful, please help 🥲)
  • Excellent notes. If I could add anything it would be on number 4 -- just. add. imagery. For the love of your chosen deity, learn the shortcut for a screenshot on your OS. Use it like it's Astroglide and you're trying to get a Cadillac into a doghouse.

    The little red circles or arrows you add in your chosen editing software will do more to convey a point than writing a paragraph on how to get to the right menu.

  • I watched Nvidia's Computex 2024 keynote and it made my blood run cold
  • Believe what you will. I'm not an authority on the topic, but as a researcher in an adjacent field I have a pretty good idea. I also self-host Ollama and SearXNG (a metasearch engine, to be clear, not a first-party search engine), so I have some anecdotal experience to draw on.

    Training even a teeny tiny LLM or ML model can run a typical gaming desktop at 100% for days. Sending a query to a pretrained model hardly even shows up in htop unless the model is gigantic, and even the gigantic models only spike the CPU for a few seconds (until the query completes). SearXNG, again anecdotally, spikes my PC about the same as Mistral in Ollama.

    I would encourage you to look at more explanations like the one below. I'm not just blowing smoke, and I'm not dismissing the very real problem of massive training costs (in money, energy, and water) that you're pointing out.

    https://www.baeldung.com/cs/chatgpt-large-language-models-power-consumption

  • I watched Nvidia's Computex 2024 keynote and it made my blood run cold
  • I don't disagree, but it is useful to point out there are two truths in what you wrote.

    The energy use of one person running an already trained model on their own hardware is trivial.

    Even the energy use of many many people using already trained models (ChatGPT, etc) is still not the problem at hand (probably on the order of the energy usage from a typical search engine).

    The energy use in training these models (the appendage measuring contest between tech giants pretending they're on the cusp of AGI) is where the cost really ramps up.

  • Have you tried NixOS?
  • Love the example here!

    I'm still learning about the available options (e.g. config.services.navidrome.settings.Port -- see the sketch below for the kind of thing I mean). What resources did you find to be the best for learning that kind of thing?

    I'll accept RTFM if that's applicable :)
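
    To make the question concrete, here's the sort of option I'm poking at, as a minimal sketch. The values are illustrative guesses based on search.nixos.org/options, not anything from your config:

    ```nix
    # configuration.nix (sketch) -- enable Navidrome and set the option
    # path mentioned above; settings is free-form and maps to Navidrome's
    # own config keys
    { config, pkgs, ... }:
    {
      services.navidrome = {
        enable = true;
        settings = {
          Port = 4533;            # i.e. config.services.navidrome.settings.Port
          Address = "127.0.0.1";  # another free-form Navidrome key
        };
      };
    }
    ```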

  • rquickshare: Rust implementation of NearbyShare/QuickShare from Android for Linux and macOS.
  • Hm... if I'm reading the README correctly, this is a LAN-only drop mechanism between a phone and a laptop. Syncthing does that already, albeit with a cumbersome number of features and config options for that use case. If that's not accurate I'm sure you'll let me know :)

    I would love to see this develop an AirDrop-esque Bluetooth / PAN phone-to-phone feature though! Especially if a compatible iOS app were available; that would be really slick.

  • Beeper Self Hosting
    github.com: GitHub - beeper/bridge-manager: A tool for running self-hosted bridges with the Beeper Matrix server.

    Is anybody self-hosting Beeper bridges?

    I'm still wary of privacy concerns, as they basically just have you log into every other service through their app (and, as I understand it, that all happens in the closed source part of Beeper's product).

    The linked GitHub README also states that the benefit of hosting their bridge setup is basically that "hosting Matrix is hard", which I don't necessarily believe.
