Home-Assistant Compatible Hub Needed
  • Oh, I back up religiously since Blue failed right after I moved, and back up my backups on my laptop as well. (Literally failed; I lost everything and had to run photorec and three other tools to pick out everything I'd done for the previous six months, since I hadn't copied any of it to a backup on my server because I was prepping to move at the time.)

    So far, OTBR is the biggest stopping issue, since HA runs it but nothing sticks. I admit, moving zwave is my actual biggest dread; zigbees I can probably do in a weekend, but zwave is such hell to unpair and re-pair (though it makes up for it by sticking forever). That's part of the reason I love Thread and Matter; they're almost as sticky as zwave once they pair, and while pairing them is variable (sometimes fast, sometimes not so much), they re-pair themselves pretty consistently if the outage is under 24 hours, and you can deliberately unpair them fairly easily.

  • Home-Assistant Compatible Hub Needed
  • I've been running Home Assistant for roughly five to six years (Pi, then Blue, now Amber, plus a second instance on my server for network integrations like nmap and netgear), but since my SmartThings hub was taking care of zigbee/zwave, until now I used HA as a coordinator for every smart-device ecosystem I was using (Hue, Wyze, Ring, Blink, Alexa, August, Arlo, et al). Sorry that wasn't clear.

    While I've started slowly adding zigbee devices directly, I haven't started with zwave, and thread isn't working for me yet (OTBR is running but nothing sticks). And I really don't want my hub to fail and all my thread/matter devices to be useless when I don't have anything that can access them.

  • Home-Assistant Compatible Hub Needed
  • So far, the OTBR on HA isn't working, but if it's an age-and-device issue, migrating the zigbee and zwave over to HA and leaving my SmartThings for my OTBR devices may work for now. That may at least buy some time to work out how to make HA's OTBR work.

  • Home-Assistant Compatible Hub Needed
  • It's not reliable on thread/matter.

  • Home-Assistant Compatible Hub Needed
  • I use zigbee, zwave, and thread. I've migrated half my contact sensors, a few motion and presence sensors, and two rooms of light bulbs to thread, which HA is...questionable on, hence beginning my search.

  • Home-Assistant Compatible Hub Needed

    My SmartThings hub is (slowly) starting to error out more and more. I'm doing a soft reset monthly to keep everything up (I did a hard reset about a year ago when I moved), which works, but I think it's time I start learning a new hub, preferably one that's not discontinued. My original plan was to put everything in Home Assistant when this time came, but a.) I really like it as my home coordinator with my custom scripts and addons and I don't want to mess with what's working right now, and b.) while I'm getting the hang of running zigbee on there, zwave is in progress and thread is...not really working most of the time.

    So. I need to buy a general all-protocol hub; any recommendations that are fully compatible with Home Assistant? One with custom scripting would be a huge plus; I miss doing that in SmartThings.

    16
    Symlink Creation to /usr/local/bin in HAOS Not Working When Shell Script run as service
  • The shell integration is why this happened; I wanted to run the update script as a service so it could be triggered when the Supervisor or Core versions changed and automatically symlink my scripts into /usr/local/bin in the ssh addon container. The shell integration runs in the homeassistant container, so that's where it became complicated.

  • Symlink Creation to /usr/local/bin in HAOS Not Working When Shell Script run as service
  • So it can be done, it just--required a lot of steps and me making a mapping spreadsheet of all the containers. But! Automations and scripts run in the homeassistant container, while when you ssh in, you're going into the ssh addon container--which should have been obvious, and really was once I finished mapping all the containers.

    Goal: I need /usr/local/bin in the ssh container so I can run scripts over ssh and access my function library script easily without ./path/to/script.

    Summary: ssh into HAOS from the homeassistant container with an HAOS root user (port 22222), run docker exec to get into the ssh addon container, then make your symlinks for /usr/local/bin.

    (Note: this is ridiculously complicated and I know there has to be a better way. But this works so I win.)

    1. Get access to HAOS itself as root: https://developers.home-assistant.io/docs/operating-system/debugging. Verify you can log in successfully.
    2. In homeassistant container:
    • a. create a .ssh folder (/config/.ssh)
    • b. add the authorized_keys file you made in step one.
    • c. add the public and private keys you made in step one (they should be in the ssh addon container).
    • d. set permissions:
    chmod 600 /config/.ssh/authorized_keys
    chmod 600 /config/.ssh/PRIVATE_KEY
    chmod 644 /config/.ssh/PUBLIC_KEY
    chmod 700 /config/.ssh
    
    • e. In /config/shell_scripts.yaml or wherever you put your shell scripts, add the script you want to use to update /usr/local/bin: UPDATE_BIN_SCRIPT: /config/shell_scripts/UPDATE_BIN_SCRIPT
    • f. Restart HA.
    • g. Check it in Developer Tools->Services
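
    Before relying on the service, it's worth a quick sanity check that the key works from inside the homeassistant container; a minimal test, using the same HA_IP_ADDRESS placeholder as the script below:

    # from the homeassistant container: should print "connected" with no password prompt
    ssh -i /config/.ssh/PRIVATE_KEY -p 22222 root@HA_IP_ADDRESS 'echo connected'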

    I have no idea how consistent the ssh addon container name usually is, but it's different on all three of my installs, so insert your container name for SSH_ADDON_CONTAINER_NAME.
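
    If you want to check rather than guess, this should list the candidates (a sketch, run from the HAOS root shell on port 22222; it assumes the addon's container name contains "ssh"):

    # list running containers and filter for the ssh addon
    docker ps --format '{{.Names}}' | grep -i ssh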

    Steps: log in to HAOS, go into the ssh container, and do the update. This is horribly messy, but hey, it works.

    UPDATE_BIN_SCRIPT

    #!/bin/bash

    # OPTIONAL: update some of the very outdated alpine packages in both the
    # homeassistant container and the ssh addon (figlet makes cool ascii art of
    # my server name). The apk line runs twice: once here in the homeassistant
    # container, then again inside the ssh addon container over ssh. Skip these
    # if you don't want the packages.

    # update homeassistant container packages
    apk add coreutils figlet iproute2 iw jq ncurses procps-ng sed util-linux wireless-tools

    # ssh into HAOS (port 22222) and run the rest inside the ssh addon container
    ssh -i /config/.ssh/PRIVATE_KEY -p 22222 root@HA_IP_ADDRESS << EOF
    docker exec SSH_ADDON_CONTAINER_NAME \
    bash -c \
    'apk add coreutils figlet iproute2 iw jq ncurses procps-ng sed util-linux wireless-tools; \
    if [ ! -h /usr/local/bin/SCRIPT1 ]; then echo "SCRIPT1 does not exist"; \
    ln -s /homeassistant/shell_scripts/SCRIPT1 /usr/local/bin/SCRIPT1; echo "Link created"; \
    else echo "Link exists"; fi; \
    if [ ! -h /usr/local/bin/SCRIPT2 ]; then echo "SCRIPT2 does not exist"; \
    ln -s /homeassistant/shell_scripts/SCRIPT2 /usr/local/bin/SCRIPT2; echo "Link created"; \
    else echo "Link exists"; fi'
    EOF

    echo "Done"


    I am going to feel really stupid when I find out there's a much easier way.

  • Symlink Creation to /usr/local/bin in HAOS Not Working When Shell Script run as service
  • Docker containers are designed to be immutable. The moment they're stopped and recreated, any changes to them are thrown out. You're supposed to add a layer to your Docker image if you want to add command-line tools and such. That's why it keeps deleting your stuff every time you update.
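
    For illustration only, a sketch of what "adding a layer" means--build a new image with the packages baked in, rather than installing into a running container (the base image name here is a placeholder, not the actual addon image):

    # bake packages into a new image so container recreation doesn't wipe them
    printf 'FROM some/ssh-addon-base\nRUN apk add coreutils figlet jq\n' > Dockerfile
    docker build -t my-ssh-addon .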

    It took me until I put Home Assistant on my server in a docker container to realize what was going on there. I use docker more now, but it's really, really nothing like this.

    Running the script inside Docker should put it in the right place, but I wouldn't advise doing it that way.

    That's what I've been doing manually over regular ssh (not the 22222 port one).

    To work around the path issue, maybe consider using hard links rather than soft links?

    That's what I think I need to do, but the only 'hard' links--at least according to multiple find -name/find -iname searches over the 22222 ssh port--are all in /mnt/data/docker/overlay2 and /var/lib/docker/overlay2. I get that there's a working pattern with the overlays, but dear God, why.

    Alternatively, you could figure out where HAOS stores the Docker config and add a volume definition of your own. You’ll probably be able to put all of your files in /usr/local/bin by adding a line like “- /path/home/host:/usr/local/bin” in the right place. I don’t know where this config is stored, though.
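
    As a plain-docker illustration of that idea (not a supported HAOS config; the path and image name are placeholders):

    # bind-mount a host directory over /usr/local/bin so its contents survive recreation
    docker run -d -v /path/on/host:/usr/local/bin IMAGE_NAME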

    Okay that makes sense. I guess the first step is to get the container structure and volume.

    Thanks so much! I'll update if I find the solution or die trying.

  • Symlink Creation to /usr/local/bin in HAOS Not Working When Shell Script run as service

    I hope I can explain this correctly; I am only somewhat familiar with docker.

    I have a script that I run after Home Assistant OS updates; it updates the alpine operating system with some extra packages and creates three symlinks in /usr/local/bin for three scripts in /config/shell_scripts. Up until now, it's run perfectly when I run it manually over ssh.

    Then I decided to create an automation to run it automatically after an HAOS update, and while the package updates work, the script says the symlinks already exist so it doesn't create them. They do not exist; HAOS deletes them after an update (and Core updates might too, but I usually update them together, so I've never checked).

    After a lot of frustration, I logged into Home Assistant with root on the 22222 port and found it's checking and finding the previously created symlinks from earlier HAOS updates inside /mnt/data/docker/overlay2/../usr/local/bin and /var/lib/docker/overlay2/../usr/local/bin. At least, that's my guess; googling docker and overlays has been a trip.

    So my question: how do I structure the script so that the symlink check is within the docker container's version of /usr/local/bin so it will create the symlink?

    I get that the answer is probably super obvious, but I am not seeing it. The only other thing I can think to do is export /homeassistant/shell_scripts to PATH, but while that works over ssh, I haven't tested it running as a shell script service, and I really want to automate this process.
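
    For reference, the PATH idea is just this (works in an interactive ssh session; untested as a shell script service, as I said):

    ```
    # make scripts in the shared directory callable without ./path/to/script
    export PATH="$PATH:/homeassistant/shell_scripts"
    ```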

    Screenshot of full script attached; here's the short version I'm using for testing. This has been checked with and without variables for paths.

    ```
    #!/bin/bash

    # variables
    bin_home="/usr/local/bin"
    shell_home="/homeassistant/shell_scripts"

    # add packages
    pkgs="coreutils figlet iproute2 iw jq procps-ng sed util-linux wireless-tools"
    apk add $pkgs
    echo "Packages Updated"

    # create symlinks
    ln -s "$shell_home/lib_comp" "$bin_home/lib_comp"
    ln -s "$shell_home/ipa_status" "$bin_home/ipa_status"
    ln -s "$shell_home/ssh_text" "$bin_home/ssh_text"

    echo "Done"
    ```

    3
    Home Assistant Yellow - What to Expect When You're Configuring

    So my Home Assistant Blue suffered a tragedy last year and I temporarily switched to using a Pi 4. Then Ameridroid had the Yellow (PoE!) available, and here I am with a Yellow, waiting for the CM4 (CM4108032) and SSD to arrive tomorrow, and realizing I have no idea what to expect--or if there's anything I should expect--when I start the config and move this weekend.

    Background: I've run HA for roughly four years now on Pi4, Odroid, and in Docker on my server, but since moving it to the Pi while waiting for the Yellow to come in stock, I've only done basic maintenance while working on other projects, so at a guess, I'm going to have to brush up my python and yaml. And anything that's changed that I may have missed.

    This will be a scratch initial installation; after linking up my integrations and getting my naming conventions consistent, I'll start copying my yaml files and code over and decide how much of my interface I want to keep.

    Any advice would be appreciated if there's anything that's very different with the Yellow hardware. I use two SmartThings hubs for zigbee, zwave, and matter-over-thread control for the most part, but I've been dipping my toes into direct zigbee control (just not that well yet) and trying to get direct matter over thread to work (not going well at all). And do I need my SkyConnect dongle or Sonoff zigbee dongle, or will the built-in module work across the board?

    1
    Home Assistant Core Update 2023.12.0 - /sys/firmware no longer has model data
  • You know, I didn't think of that. I've never run an OS in docker; all I tested my data collection scripts on were my regular VMs, a few times just for fun. And for that matter, most LXC containers I run in Proxmox are privileged to get around restrictions (still haven't found a way for LXCs to let me compile for different architectures, though). HA may have updated their docker to current, which would explain why it happened so suddenly.

    And yes, for now, I'll just do root login to grab the information; it's technically more accurate. I'm just knee-jerk distrustful of using root, to the point that until Proxmox and this last year, I almost forgot it existed unless there was a very weird linux problem I needed it for. Thanks for this information, though; I've only just started seriously working with LXC and docker containers, so that's not an approach I would have considered.

  • Home Assistant Core Update 2023.12.0 - /sys/firmware no longer has model data
  • Full disclosure: I just--and I mean just--got my head wrapped around docker and containers due to installing Proxmox on my server. Right now, my Proxmox server runs an LXC container for docker, and in docker I run Handbrake and MakeMKV images that run their GUIs in a browser or from the command line. They connect to each other through mounting the LXC's /home/user into both, and then I added a connection to the remote shares on my other server so I can send files to my media server. Yes, I did have to map all the mountings out first before I started, but hey, that's how I learn.

    Long way of saying: I am just now able to start understanding how Home Assistant works--someone said Home Assistant OS is basically a hypervisor overseeing a lot of containers, and now that I use Proxmox, that really helped--but I'm still really unfamiliar with the details.

    I installed the full Home Assistant OS on a dedicated Pi4, so it's the only thing it does. Until yesterday, the only part I actually interacted with was the data portion, which is where all my files are, where I configure my GUI and scripts, store addons, etc. The container for this portion runs on Alpine Linux; I can and do install/update/change/build packages I need or like to use in there. It's ephemeral; anything I do outside the data directory (it holds /config, /addons, etc.) gets wiped clean on update, so I reinstall them whenever HA does an update.

    When I run my data collection scripts on my Home Assistant SBC, they take their information from the container, aka Alpine Linux, including reporting my OS as Alpine. All of this worked correctly up until--according to the directory dates--December 10th at 2:40 AM, when /sys/firmware was last updated and everything in it vanished, breaking the symlink to /proc/device-tree/model. This also updated the container OS to Alpine 3.19.0. Data collection runs hourly; one of my Pis ssh's into each computer to run four data collection scripts and updates a browser page I run off apache, so I can check current presence and network status, plus the OS/hardware/running services of all my computers, from the browser (the services script doesn't work on Alpine yet; different structure). I didn't notice until recently because work got super busy, so I was only verifying availability and network status regularly.
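
    For context, the hourly run is roughly this shape (hostnames and the script name here are placeholders; the real scripts do much more):

    # sketch of the hourly collection loop, run from one of my Pis
    for host in ha-pi server1 server2; do
        ssh "$host" '/usr/local/bin/collect_info' >> /var/www/html/status.html
    done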

    These are the packages I install, or switch to an updated/different version of, in the Alpine container to help with this or just for fun:

    • figlet (it's just cute ASCII art for an ssh banner)
    • iproute2 (network info; when updated, it has an option to store network info in a variable as json)
    • iw (wireless adapter info)
    • jq (reads and processes json files)
    • procps-ng (updated uptime package for more options)
    • sed (the updated one can do more than the installed one)
    • util-linux (for the column command in bash)
    • wireless-tools (iwconfig; more wireless data if iw doesn't have it)

    (Note: I think tr may also be updated by one of these.)

    These are the ones I use for data collection that are already installed:

    • lscpu ("Model name" "Vendor ID" "Architecture" "CPU(s)" "CPU min MHz" "CPU max MHz")
    • uname (kernel)

    These are the files I access for data collection:

    • /proc/device-tree/model (Computer model)
    • /proc/meminfo (RAM)
    • /proc/uptime (Uptime)
    • /etc/os-release (Current OS data)
    • /sys/class/thermal/thermal_zone0/temp (CPU temperature for all my SBCs except BeagleBone Black)

    Until this month, all of those files were accessible both before I did the package updates and after. The only one affected was maybe /proc/uptime, by the uptime update to get more options. Again: I've been running these scripts or versions of them for well over a year, and I test individually on each SBC before adding them to my data collection scripts to run remotely; all of these worked on every computer, including whatever SBC was running Home Assistant (an Odroid N2+ until it died a few months ago). And all of them work right now--except /proc/device-tree/model on my Home Assistant SBC. The only way I can get model info is to add an extra ssh into Home Assistant itself as root and grab the data off that file (and while I'm there, get the OS data for Home Assistant instead of Alpine), save it to my shell script directory in my data container, and have my script process that file after it gets the rest from the container.
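
    The workaround, roughly (key path and IP are placeholders):

    # pull model and OS info from HAOS itself, since the container no longer has it
    ssh -i /config/.ssh/PRIVATE_KEY -p 22222 root@HA_IP_ADDRESS \
        'cat /proc/device-tree/model; cat /etc/os-release' \
        > /config/shell_scripts/haos_info.txt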

    That's why I'm weirded out; this is one of the things that is the same on every single Linux OS I've used, including Alpine, so why on earth would this one thing change?

    This could conceivably be an Alpine issue; I downloaded Alpine 3.19.0 to run in Proxmox when I get a chance, and I kind of hope it's a deliberate change in Alpine, because otherwise I can't imagine why on earth the HA team would alter Alpine to break that symlink. Or they could be templating Alpine for the container each time, and this time it accidentally broke. The entire thing is just so weird. Or maybe--though not likely--it's a bug in Alpine 3.19.0, but I doubt it; I can't possibly be the first to notice, it was released at least three weeks ago and I googled a lot.

    I'm honestly not sure it affects anything at all, but it bothers me, so here we are. Though granted, it did make me finally get off my ass and figure out how to log in as root on HA, as well as do a badly needed refactor of my main data collection script (the one that does the ssh'ing) and clean up my computer information scripts, so maybe it was destiny.

  • OpenRGB and Proxmox VMs

    So I just did my first install of Proxmox a few months ago on my home server, and I'm seriously, seriously in love--enough that I decided to use it to run my media server as well; it runs Plex, acts as a NAS for my media and backups, etc.

    Specs:
    • CPU: AMD Ryzen 5 5600G
    • Board: ASRock B550 Steel Legend
    • RAM: 32 GB DDR4 3200
    • OS Drive: Samsung 970 Evo 500GB PCIe 3.0
    • Storage: 50 TB over six drives (all media), all mounted directly to Proxmox

    RGB:
    • Fans: 10 CORSAIR iCUE RGB Elite Performance
    • Power Supply: Corsair CX750F RGB

    Running (always): Containers: Plex, Docker, Jellyfin (still experimenting). (Docker runs images for MakeMKV, Handbrake, and MediaInfo, but it's only up when I'm using them.)

    Before Proxmox, Cassiope ran on Solus Budgie, and before that Xubuntu, and while OpenRGB could be cranky, with Solus I had full control of all the RGB elements, so I know how to use it. I spun up a Xubuntu VM, installed OpenRGB in it, and after some tinkering it connected--but not to everything. I figured that was just because I hadn't done the hardware connections to the VM correctly. Still, it was there.

    It not only did not let me control them, it turned everything RGB off. I had to shut down the server and restart cold for them to come back on, I could not connect, and it also started fucking with my lights. I killed the VM, tried again, couldn't connect at all. Just to see what happened, I installed OpenRGB directly on Proxmox with the image, and it connected.

    It did not go well. Short version: I had to scrub the install and flash my board's BIOS, so no more experimenting on my media server. I moved some of it to my other server, Watson (similar specs, but with an AMD Ryzen 7 3700X, less storage, and 64 GB of CORSAIR Vengeance RGB Pro RAM).

    I tried in Xubuntu, Solus, EndeavourOS, and Linux Mint, then--just to see what happened--directly on Proxmox. This progressed and ended literally the same way, but with more and weirder steps: the VMs would fail, my board started acting up, I (again) tried a direct install to Proxmox, and yes, I had to scrub, flash the BIOS, and reinstall everything.

    I am new to Proxmox, but I've been jumping distros in Linux for well over a decade, and except for EndeavourOS (which is amazing; I finally like Arch), I know all the VM OSes I used very, very well, and all of them worked with OpenRGB perfectly. I have broken many installs and had to scrub over the years--like, a lot--but generally it was something I did either on purpose or because I made a mistake (or, while root, deleted my entire OS; it happens). I was very methodical and very careful, especially when I switched to my other server, and took notes; there was nothing I did--or, for that matter, was even able to do--that should have caused this. So my guess is I need to do some configuration in Proxmox to get my hardware to connect to the VM (or to use it directly on Proxmox), but I don't even know where to start, and after flashing two separate boards, I'm a little shy on guessing randomly. Yes, I have googled a lot, but nothing I read and tried worked.
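
    From my googling so far, the closest thing to a starting point seems to be USB passthrough, something in the direction of this sketch (the VM ID and product ID are placeholders; Corsair's USB vendor ID is 1b1c, and I gather motherboard RGB usually sits on SMBus, which doesn't pass through like USB does):

    # find the USB RGB controller, then hand it to the VM
    lsusb
    qm set 100 -usb0 host=1b1c:XXXX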

    I would really, really like control and use of my fans and lights, but I really, really love Proxmox and the flexibility of a hypervisor, and I'm learning so much and have barely scratched the surface, so I'd like an alternative to having a lot of very pretty lights I can't control. Right now, OpenRGB seems to be the only game in town for iCue or Polychrome when not in Windows, so--help?

    2
    Home Assistant Core Update 2023.12.0 - /sys/firmware no longer has model data
  • Which is why I'm not sure I need a bug report. The part I have non-root access to is inside a docker container, and that's all I needed to collect data. But it's such a random thing to go missing since that core update.

  • Home Assistant Core Update 2023.12.0 - /sys/firmware no longer has model data
  • Since probably October, I've noticed some really, really random problems show up that never used to. And for once, I know it wasn't me messing with the code; I took a sabbatical from HA to learn how to use Proxmox a couple of months ago, and everything worked fine. It was actually a clean install on a new Raspberry Pi, as my Odroid decided to stop working and I haven't had time to learn to solder (hopefully this week, tho). I was kind of wondering if it was the Pi that was the problem.

  • Home Assistant Core Update 2023.12.0 - /sys/firmware no longer has model data
  • I know; I'm trying to write up a clear bug report on this, but I'm honestly not sure if it actually has any effect other than messing up my data collection scripts. Yeah, it's annoying the hell out of me, but I've been going through the documented issues with the core and it doesn't look like anyone else has noticed a problem. I've been trying to figure out if it's caused by one of the alpine packages I run, but not much luck there.

    Note: I enabled root for Home Assistant OS and the symlink and file are fine there.

  • Home Assistant Core Update 2023.12.0 - /sys/firmware no longer has model data

    I know how awkward that title is and I apologize.

    OS: Home Assistant OS 11.2

    Core: 2023.12.3

    Computer: Raspberry Pi 4 Model B Rev 1.5

    Explanation: I run a set of data collection scripts on my home network, and one of the pieces of data is the computer model. On all my other SBCs, the symlink below gets that data.

    Symlink: /proc/device-tree/model

    File Location: /sys/firmware/devicetree/base/model

    The symlink is broken, and when I went to check the firmware directory, it was completely empty. The last update date for /sys/firmware according to ls -la is December 10 at 2:40, which, when I checked my backups, is when core_2023.12.0 installed.
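
    For anyone who wants to compare, this is roughly how I'm checking (inside the container over regular ssh):

    ls -la /proc/device-tree/model          # dangling symlink on the HA container after the update
    ls -la /sys/firmware                    # completely empty on the HA container
    cat /sys/firmware/devicetree/base/model # prints the model on my other Pi 4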

    Attached is what should be in the firmware folder on my other Raspberry Pi 4 Model B Rev 1.5 right now.

    I did a find from root for either the model file or anything vaguely resembling it and I can't find it. Anyone else have this problem or is it just happening to me? Or am I missing something?

    11
    Lemmy.world active users is tapering off while other servers are gaining serious traction.
  • Manually: I had lemmy.world open in one tab and the other server's community page in the other.

  • Is it really a mass exodus? And is it really a mass exodus to lemmy?
  • Logically, I want to say no, not really, but I also would have thought the blackout and ongoing protests wouldn't really affect Reddit and they'd ignore it. Reddit itself, however, seems incredibly determined to pursue a course of action that requires performing This Does Not Affect Us At All as dramatically and publicly as possible at the slightest opportunity, whether anyone cares or not. And that doesn't even include the admins playing subreddit roulette across actively rebelling subs, subs deep in malicious compliance, and subs that have no idea wtf is going on and just want to talk about their weird NSFW fetish in peace.

    So no, I don't think so, but I'm beginning to wonder if Reddit thinks there is and what they're seeing on their side that I'm not.

  • What is your go-to Linux distro and why?
  • I semi-regularly distro-hop, but Xubuntu is the distro I keep coming back to between hops to take a break or when one goes (temporarily) dormant. It's currently running on my primary server/linux machine.

    Reasons:
    1.) It's light on resources.
    2.) It's very simple and clean.
    3.) It works with all the programs I use regularly; only one needs to be hand-compiled (but that one has to be compiled for literally any Linux machine).
    4.) I know it. Scrub/partition/install/configure in under an hour. I can pick up any of my projects immediately where I left off.

  • Does not having any social media presence have any negative impact on a person's social life?
  • The only reason I have social media accounts under my wallet name is to avoid anyone wondering why I'm not on social media (also: grandparents). Everyone IRL I care enough about to actually explain to knows I log in once a year in a separate browser (in incognito), check every privacy setting against my checklist, and update if something important changed (like a job change). LinkedIn I check regularly, but that's because a.) I only connect with people from work, and a lot of them think it's important to have strong networks (and they could be right, no idea), and b.) LinkedIn has an education section my job really likes because it has free classes, and when I get bored at work, I can do a quick class in something (nothing they actually want us to do; I have to work in the nightmare that is Agile, do not make me take yet another class about the benefits of this software development hellscape, thanks).

    Honestly, I try to give the impression I'm not into social media IRL; there are like, three people in my daily life who are allowed into my online life and one because we more or less both got the internet at the same time and started a mailing list together. Don't get me wrong, I know a lot of nice people IRL, but not the type I want to introduce to the friends I made online.

  • Lemmy.world active users is tapering off while other servers are gaining serious traction.
  • I kind of think that's how it's supposed to go, in my made-up-right-this-second knowledge of the evolution of open source federated social media sites. Pick the largest/most active/most varied to get your feet wet and make any weird mistakes you need to make in a crowd where you're one of many and sheer speed of posting means you'll be forgotten in, like, hours. Then you get comfortable and see if this is a forever fit or just an okay-right-now fit.

    I mean, I hard-bond to my first and pretty much settle immediately for life unless something is seriously awry, but even I made a backup account on another instance where I mirrored all my favorite communities, and I'm seriously considering one more on a smaller, more specialized server. Yes, I do get the point of federation, you do not need to explain, but here's the thing: intellectually, I know the population of the Fediverse is orders of magnitude smaller than reddit or pretty much any other social media site, but feelings do not agree. Reddit was like a large, slightly hostile country with a lot of states you avoided always, but especially between dusk and dawn; the scope of the Fediverse is like being on a very small planet in an expanding universe you can watch growing in real time, and it never stops. It's great, but there's something very unsettling about realizing you're eight servers from home surrounded by kpop, or waking up to find you posted in three communities on servers you don't recognize at two AM--and yes, you can get a reputation for that kind of thing.

    My ADHD is living the dream, let's go, but I can see how it would throw people a little.

  • Why I prefer Linux
  • TP-Link AC600

    Oops, this was meant as a reply to someone about the TP-Link AC2100 router in another window, ugh. Too many google results open.

    Let me google the chipset for that one if you haven't found drivers that work yet. For some of the Realtek-based ones, there are drivers by morrownr you can compile yourself.

  • seperis Seperis @lemmy.world
    Posts 5
    Comments 50