NumPy 2.0.0 released
  • I did not know about autolinks - thanks for the link!

    It is interesting how different parsers handle this exact situation. I usually am cautious about it because I typically am not sure how it will be handled if I am not explicit with the URL and additional text.

  • NumPy 2.0.0 released
  • I'm curious about this. The source text of your comment appears to be just the URL with no markdown. For your comment about a markdown parsing bug to be true, shouldn't the URL have been written in markdown with []() notation (or with a space between the URL and the period), since a period is a valid URL character? For example, instead of typing https://google.github.io/styleguide/cppguide.html., should [https://google.github.io/styleguide/cppguide.html.](https://google.github.io/styleguide/cppguide.html) have been typed?
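Incidentally, this is why GFM-style autolink extensions special-case trailing punctuation: the bare URL is trimmed before the link is formed. A minimal sketch of that trimming in Python (simplified; the real spec also does things like balancing parentheses):

```python
# Simplified sketch of GFM-style trailing-punctuation trimming for bare URLs.
# Not the actual spec algorithm - just the basic idea.
TRAILING = ".,:;!?'\")]}"

def split_autolink(text: str) -> tuple[str, str]:
    """Split a bare URL into (link target, trailing punctuation)."""
    url, trail = text, ""
    while url and url[-1] in TRAILING:
        trail = url[-1] + trail
        url = url[:-1]
    return url, trail

link, rest = split_autolink("https://google.github.io/styleguide/cppguide.html.")
# link == "https://google.github.io/styleguide/cppguide.html", rest == "."
```

So a parser following this convention renders the period as plain text after the link, while a stricter parser may treat it as part of the URL, which matches the inconsistency discussed above.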

  • Automated CI/CD Data Snapshots
  • Yes, I am using PersistentVolumes. I have played around with different tools that have backup/snapshot abilities, but I haven't seen a way to integrate that functionality with a CD tool. I'm sure if I spent enough time working through things, I may be able to put together something that allows the CD tool to take a snapshot. However, I think that having it handle rollbacks would be a bit too much for me to handle without assistance.

  • Automated CI/CD Data Snapshots
  • Thanks for the reply! I am currently looking to do this for a Kubernetes cluster running various services to more reliably (and frequently) perform upgrades with automated rollbacks when necessary. At some point in the future, it may include services I am developing, but at the moment that is not the intended use case.

    I am not currently familiar enough with the CI/CD pipeline (currently Renovatebot and ArgoCD) to reliably accomplish automated rollbacks, but I believe I can get everything working with the exception of rolling back a data backup (especially for upgrades that contain backwards incompatible database changes). In terms of storage, I am open to using various selfhosted services/platforms even if it means drastically changing the setup (eg - moving from TrueNAS to Longhorn, moving from Ceph to Proxmox, etc.) if it means I can accomplish this without a noticeable performance degradation to any of the services.

    I understand that it can be challenging (or maybe impossible) to reliably generate backups while the services are running. I also understand that the best way to do this for databases would be to stop the service and perform a database dump. However, I'm not too concerned with losing <10 seconds of data (or however long the backup jobs take) if the backups can be performed in a way that does not result in corrupted data. Realistically, the most common use cases for the rollbacks would be invalid Kubernetes resources/application configuration as a result of the upgrade or the removal/change of a feature that I depend on.

  • Automated CI/CD Data Snapshots

    cross-posted from: https://lemmy.ml/post/16693054

    > Is there a feature in a CI/CD pipeline that creates a snapshot or backup of a service's data prior to running a deployment? The steps of an ideal workflow that I am searching for are similar to:
    >
    > 1. CI tool identifies new version of service and creates a pull request
    > 2. Manually merge pull request
    > 3. CD tool identifies changes to Git repo
    > 4. CD tool creates data snapshot and/or data backup
    > 5. CD tool deploys update
    > 6. Issue with deployment identified that requires rollback
    > 7. Git repo reverted to prior commit and/or Git repo manually modified to prior version of service
    > 8. CD tool identifies the rolled back version
    > 9. (OPTIONAL) CD tool creates data snapshot and/or data backup
    > 10. CD tool reverts to snapshot taken prior to upgrade
    > 11. CD tool deploys service to prior version per the Git repo
    > 12. (OPTIONAL) CD tool prunes data snapshot and/or data backup based on provided parameters (eg - delete snapshots after _ days, only keep 3 most recently deployed snapshots, only keep snapshots for major version releases, only keep one snapshot for each latest major, minor, and patch version, etc.)

    [Question] Automated CI/CD Data Snapshots

    Is there a feature in a CI/CD pipeline that creates a snapshot or backup of a service's data prior to running a deployment? The steps of an ideal workflow that I am searching for are similar to:

    1. CI tool identifies new version of service and creates a pull request
    2. Manually merge pull request
    3. CD tool identifies changes to Git repo
      1. CD tool creates data snapshot and/or data backup
      2. CD tool deploys update
    4. Issue with deployment identified that requires rollback
      1. Git repo reverted to prior commit and/or Git repo manually modified to prior version of service
      2. CD tool identifies the rolled back version
        1. (OPTIONAL) CD tool creates data snapshot and/or data backup
        2. CD tool reverts to snapshot taken prior to upgrade
        3. CD tool deploys service to prior version per the Git repo
    5. (OPTIONAL) CD tool prunes data snapshot and/or data backup based on provided parameters (eg - delete snapshots after _ days, only keep 3 most recently deployed snapshots, only keep snapshots for major version releases, only keep one snapshot for each latest major, minor, and patch version, etc.)
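The optional pruning step is really just a retention policy. A minimal sketch in Python, assuming each snapshot carries a name and a timestamp (the names, fields, and defaults here are illustrative, not any particular CD tool's API):

```python
from datetime import datetime, timedelta

def prune(snapshots, keep_last=3, max_age_days=30, now=None):
    """Return names of snapshots to delete, keeping at most `keep_last`
    snapshots that are also newer than `max_age_days`."""
    now = now if now is not None else datetime.utcnow()
    cutoff = now - timedelta(days=max_age_days)
    ordered = sorted(snapshots, key=lambda s: s["taken"], reverse=True)  # newest first
    keep = {s["name"] for s in ordered[:keep_last] if s["taken"] >= cutoff}
    return [s["name"] for s in ordered if s["name"] not in keep]

snaps = [
    {"name": "pre-v1.2.0", "taken": datetime(2024, 6, 1)},
    {"name": "pre-v1.1.0", "taken": datetime(2024, 5, 1)},
    {"name": "pre-v1.0.0", "taken": datetime(2024, 1, 1)},
]
print(prune(snaps, keep_last=3, max_age_days=60, now=datetime(2024, 6, 2)))
# -> ['pre-v1.0.0']
```

A CD hook could run a policy like this after a successful deployment and delete the returned snapshots.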
    Hosting a public wishlist
  • There are several proprietary options (many/most of which you cannot host). Searching for Amazon Wishlist alternatives should help in putting together a list of potential options. Some additional open source, selfhostable projects that you could start with include:

  • Finally got my server to work properly. (Routing with custom local domain instead of ports)
  • Everything I mentioned works for LAN services as long as you have a domain name. You shouldn't even need to point the domain name to any IP addresses to get it working. As long as you use a domain registrar that respects your privacy appropriately, you should be able to set things up with a good amount of privacy.

    Yes, you can do wildcard certificates through Let's Encrypt. If you use one of the reverse proxies I mentioned, the reverse proxy will create the wildcard certificates and maintain them for you. However, you will likely need to use a DNS challenge. Doing so isn't necessarily difficult. You will likely need to generate an API key or something similar at the domain registrar or DNS service you're using. The process will likely vary depending on what DNS service/company you are using.

  • Finally got my server to work properly. (Routing with custom local domain instead of ports)
  • Congrats on getting everything working - it looks great!

    One piece of (unprovoked, potentially unwanted) advice is to set up SSL. I know you're running your services behind WireGuard, so there isn't too much of a security concern with running them over HTTP. However, as the number of your services or users (family, friends, etc.) increases, you're more likely to run into issues with services not running on HTTPS.

    The creation and renewal of SSL certificates can be done for free (assuming you already have a domain name) and automatically with certain reverse proxy services like NGINXProxyManager or Traefik, which can both be run in Docker. If you set everything up with a wildcard certificate via DNS challenge, you can still keep the services you run hidden from people scanning DNS records on your domain (ie people won't know that an SSL certificate was issued for immich.your.domain). How you set up the DNS challenge will vary by DNS provider and reverse proxy service, but the only additional thing you will likely need to complete a wildcard DNS challenge, regardless of which services you use, is an email address (again, assuming you have a domain name).
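As a concrete illustration, in Traefik the wildcard-plus-DNS-challenge setup reduces to a certificate resolver in the static configuration (a sketch; the provider, email, and storage path are assumptions about your setup):

```yaml
# traefik.yml (static configuration) - illustrative values
certificatesResolvers:
  letsencrypt:
    acme:
      email: you@your.domain            # the one extra requirement besides the domain
      storage: /letsencrypt/acme.json   # where issued certificates are stored
      dnsChallenge:
        provider: cloudflare            # match your DNS provider; API token supplied via env var
```

A router that then requests `your.domain` with a `*.your.domain` SAN keeps individual service names like `immich.your.domain` out of certificate transparency logs.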

  • Best resources to learn more about networking
  • Raspberry Pi + PiHole + PiVPN = Network Gateway Drug

    PiVPN is winding down, though, so you might want to find something different instead. Setting up a regular WireGuard VPN isn't so bad, but it may be simpler to set up a Tailscale Tailnet.

  • Sharing my personal Firefox user.js based on arkenfox's privacy policies.
  • > I was looking for a free opensource sharing plateform first

    What type of sharing platform are you looking for? A git repo? A single file sharing service? A code/text snippet sharing service? Something else?

    There are many options available. Some have free, public instances available for use. Others require you to self host the service. Regardless, you're not stuck using Github just to share your user.js file.

  • Sharing my personal Firefox user.js based on arkenfox's privacy policies.
  • > the only sites I give permenant cookie exception are my selfhosted services

    This is what I was referring to. How are you accomplishing this?

    > I'm still looking for the switches to block all new requests asking to access microphone, location, notification

    I can't help with this at the moment, but if you're still struggling with it I can provide the lines required to disable these items. However, I don't know how to do this with exceptions (ie allowing your selfhosted sites to use that functionality while blocking all other sites). At minimum, though, you could require Firefox to ask you every time a site wants to use something. This may get repetitive for things like your selfhosted sites if you have everything clearing when you exit Firefox.
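For reference, the lines in question are Firefox's standard `permissions.default.*` prefs, where 2 means block and 0 means ask every time. These apply globally, with no per-site exceptions:

```js
user_pref("permissions.default.geo", 2);                  // block location requests
user_pref("permissions.default.microphone", 2);           // block microphone requests
user_pref("permissions.default.camera", 2);               // block camera requests
user_pref("permissions.default.desktop-notification", 2); // block notification prompts
```

Setting a value to 0 instead makes Firefox prompt each time, which is the closest user.js-only approximation of per-site handling.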

  • Sharing my personal Firefox user.js based on arkenfox's privacy policies.
  • Didn't look at the repo thoroughly, but I can appreciate the work that went into this.

    • Is there any reason you went this route instead of just using a user-overrides.js file with the standard arkenfox user.js file?
    • Does the automatic dark theme require enabling any fingerprintable settings (beyond possibly determining the theme of the OS/browser)?
    • How are you handling exceptions for sites? I assumed it would be in the user.js file, but didn't notice anything in particular handling specific URLs differently.
  • I am looking for a privacy respecting android tv box/stick
  • How do you use your Beelink? More specifically what OS (and maybe core/most used apps) do you have installed? How do you interact with it (eg - wireless keyboard/mouse, USB IR receiver, etc.)?

    Any downside to this approach compared to using the Smart TV/Android TV/Apple TV features?

  • Google lays off hundreds in Assistant, hardware, engineering teams
  • > Calls made from speakers and Smart Displays will not show up with a caller ID unless you’re using Duo.

    Is it possible to use Duo still? Google knows it discontinued/merged Duo with Google Meet nearly 18 months ago, right?

  • [@protonprivacy](https://lemmy.world/c/protonprivacy) Any plans to tackle identity? For SSO purposes I’m stuck with say, google but would love to move over to proton.
  • I think that @theomegabit@infosec.exchange is asking for Proton to become an OAuth/OIDC provider. This would allow you to sign into any service, app, platform, etc. that supports it using your Proton account. Some common providers that are widely supported are Google, Apple, Github, Facebook, and Microsoft.

    It is generally considered more secure than using "regular credentials" like username/email and password when using several services. There are a few downsides to this though. One of those downsides is that your OAuth/OIDC provider will have record of all your accounts used through OAuth/OIDC. For example, @theomegabit@infosec.exchange would like to avoid Google knowing about the various services used.

  • How good/bad is Firefox sync.
  • I'm still not sure what point you are trying to make. Your initial claim was:

    > Although Mozilla encrypts the synced data, the necessary account data is shared and used by Google to track those.

    @utopiah@lemmy.ml asked:

    > Are you saying Firefox shares data to Alphabet beyond Google as the default search engine? If so and if it applies to Sync (as if the question from OP here) can you please share sources for that?

    You stated:

    > Mozilla does, sharing your account data

    You also provided evidence that Mozilla uses Google Analytics trackers on Firefox's product information website. I mentioned that this is not sufficient evidence for your claim, as the trackers are independent of Firefox the browser and of Sync. Additionally, the use of trackers for websites is clearly identified in Mozilla's privacy policies, and not much else is mentioned in those policies beyond the trackers and Google's geolocation services in Firefox.

    You've also mentioned Google's contract with Mozilla, which is controversial for many people, but isn't evidence of Mozilla providing user data to Google even in conjunction with the previously mentioned trackers. You then discussed various other browsers, but I'm not sure how that is relevant to your initial claim.

    While it seems we can both agree that Mozilla and its products are far from perfect, your initial claim looks baseless as you have yet to provide any supporting evidence. Do you have any evidence, through things like code reviews or packet inspections of Firefox or Sync, that hints Mozilla is sharing additional information with Google? At this point, I would even accept users documenting some weird behavior, like the recent issue where google.com wouldn't load in Firefox on Android, if someone could connect that behavior to Mozilla sharing data with Google.

  • How good/bad is Firefox sync.
  • I don't understand what point you are trying to make. Mozilla has several privacy policies that cover its various products and services which all seem to follow Mozilla's Privacy Principles and Mozilla's overarching Privacy Policy. Mozilla also has documentation regarding data collection.

    The analytics trackers that you mentioned would fall under Mozilla's Websites Privacy Policy, which does state that it uses Google Analytics and can be easily verified a number of ways such as the services you previously listed.

    However, Firefox Sync uses https://accounts.firefox.com/, which has its own Privacy Policy. There is some confusion around "Firefox Accounts" as it was rebranded to "Mozilla Accounts", which again has its own Privacy Policy. There is no indication that data covered by those policies is shared with Google. If Google Analytics trackers on Mozilla's websites are still a concern for these services, you can verify that the Firefox Accounts and Mozilla Accounts URLs do not contain any Google Analytics trackers.

    Firefox has a Privacy Policy as well, with sections for both Mozilla Accounts and Sync; neither indicates that data is shared with Google. Additionally, the data stored via the Sync service is encrypted. However, Mozilla does collect some telemetry data regarding Sync, and more information can be found in Mozilla's documentation about telemetry for Sync.

    The only thing that I could find about Firefox, Sync, or Firefox Accounts/Mozilla Accounts sharing data with Google was for location services within Firefox. While it would be nice for Firefox not to use Google's geolocation services, it is a reasonable concession and can be disabled.

    Mozilla is most definitely not a perfect company, even when it comes to privacy. Even Firefox has been caught with some privacy issues relatively recently with the unique installation ID.

    Again, I'm not saying that Mozilla is doing nothing wrong. I am saying that your "evidence" that Mozilla is sharing Firefox, Sync, or Firefox Accounts/Mozilla Accounts data with Google because of Google Analytics trackers on some of Mozilla's websites is coincidental at best. Without additional evidence, it is misleading or flat out wrong.

  • How good/bad is Firefox sync.
  • I'm not disputing the results, but this appears to be checking calls made by Firefox's website (https://www.mozilla.org/en-US/Firefox/) and not Firefox, the web browser application. Just because an application's website uses Google Analytics does not mean that the application shares user data with Google.

  • Protectli FW6B
  • Some additional ideas for the Protectli device:

    • backup/redundant OPNsense instance for high availability
    • backup NAS/storage
      • set it up at a family/friend's house
    • a test/QA device for new services or architecture changes
    • travel router/firewall
    • home theater PC
    • Proxmox/virtualization host
      • Kubernetes cluster
    • Tor, I2P, cryptocurrency, etc. node
    • Home Assistant
      • dedicated local STT/TTS/conversation agent
    • NVR
    • low powered desktop PC

    There are so many options. It really depends on what you want, your other devices, the Protectli's specs, your budget, etc.

  • homelab @lemmy.ml rhymepurple @lemmy.ml
    Automated Container Image Updates

    I'm trying to find a video that demonstrated automated container image updates for Kubernetes, similar to Watchtower for Docker. I believe the video was by @geerlingguy@mastodon.social but I can't seem to find it. The closest functionality that I can find to what I recall from the video is k8s-digester. Some key features that were discussed include:

    • Automatically update tagged version number (eg - Image:v1.1.0 -> Image:v1.2.0)
    • Automatically update image based on tagged image's digest for tags like "latest" or "stable"
    • Track container updates through modified configuration files
      • Ability to manage deploying updates through Git workflows to prevent unwanted updates
    • Minimal (if any) downtime
    • This may not have been in the video, but I believe it also discussed managing backups and rollback functionality as part of the upgrade process

    While this tool may be used in a CI/CD pipeline, it's not limited exclusively to Git repositories, as it could be used to monitor container registries from various people or organizations. The tool/process may have also incorporated Ansible.

    If you don't know which video I'm referring to, do you have any suggestions on how to achieve this functionality?

    EDIT: For anyone stumbling on this thread, the video was Meet Renovate - Your Update Automation Bot for Kubernetes and More! by @technotim@mastodon.social, which discusses the Kubernetes tool Renovate.
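For anyone setting this up, a minimal renovate.json that watches Kubernetes manifests might look like this (the fileMatch pattern is an assumption about your repository layout):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "kubernetes": {
    "fileMatch": ["^manifests/.*\\.ya?ml$"]
  }
}
```

Renovate then opens a pull request whenever an image referenced in those manifests has a newer tag or digest, which covers the detection step described above.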

    rhymepurple @lemmy.ml