Lem453 4 days ago • 100%
And something like this can be used as the docker server to hold the repository
Lem453 5 days ago • 100%
I'm surprised no one mentioned this if you are already using kde
Lem453 1 week ago • 100%
This should help
Lem453 1 week ago • 95%
Vaultwarden itself is actually one of the easiest Docker apps to deploy...if you already have the foundation of your home lab set up correctly.
The foundation has a steep learning curve.
Domain name, dynamic DNS updates, port forwarding, reverse proxy. It's not easy to get all of this working perfectly, but once you do, you can use the same foundation to install any app. Once the foundation is in place, additional apps take only a few minutes.
Want ebooks? Calibre takes 10 mins. Want link archiving? Linkwarden takes 10 mins
And on and on
The foundation of your server makes a huge difference. Well worth getting it right at the start and then building on it.
I use this setup: https://youtu.be/liV3c9m_OX8
Local-only websites that use HTTPS (Vaultwarden), and then external websites that also use HTTPS (Jellyfin).
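For anyone who wants a concrete picture of that split, here is a minimal sketch of what the compose side can look like. It assumes Traefik as the reverse proxy with a DNS-challenge cert resolver (as in the video); `example.com`, the resolver name, the image tags and the ACME email/credentials (omitted here) are all placeholders, so treat this as a starting point rather than a working config.

```yaml
services:
  traefik:
    image: traefik:v3.0
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.websecure.address=:443
      # DNS challenge lets you issue certs even for names that never resolve publicly
      - --certificatesresolvers.letsencrypt.acme.dnschallenge=true
      - --certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  jellyfin:
    image: jellyfin/jellyfin
    labels:
      - traefik.enable=true
      # external: port 443 is forwarded from the router, so this works from anywhere
      - traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.com`)
      - traefik.http.routers.jellyfin.entrypoints=websecure
      - traefik.http.routers.jellyfin.tls.certresolver=letsencrypt

  vaultwarden:
    image: vaultwarden/server
    labels:
      - traefik.enable=true
      # internal: *.local.example.com only exists in your local DNS / over WireGuard
      - traefik.http.routers.vaultwarden.rule=Host(`vaultwarden.local.example.com`)
      - traefik.http.routers.vaultwarden.entrypoints=websecure
      - traefik.http.routers.vaultwarden.tls.certresolver=letsencrypt
```

Once this skeleton works, adding another app really is just a few labels.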
Lem453 1 week ago • 100%
What type/brand do you have now?
Lem453 2 weeks ago • 100%
See my comment above
https://lemmy.ca/comment/11490137
I don't like that Obsidian isn't fully open source, but the plugins can't be beat if you use them. Check out some YouTube videos on the top 20 plugins etc. They take the app to a whole new level.
Lem453 2 weeks ago • 100%
The real power of Obsidian is similar to why the Raspberry Pi is so popular: it has such a large community that the plugins are amazing and hard to duplicate.
That being said, I use this to live sync between all my devices. It works with almost the same latency as Google Docs, but it's not meant for multiple people editing the same file at the same time.
Lem453 4 weeks ago • 100%
Is it still a drop-in replacement for Gitea? I've been meaning to switch.
Lem453 1 month ago • 100%
FUTO voice-to-text works nicely and fast on my Pixel 8 Pro. Fractions of a second slower than Google, and that's with the slower English-74 model (more data points, slower). They have an even larger one, but the default is the smaller and faster English-39 model.
Lem453 1 month ago • 100%
Sleep mode seems to be working well for me on Fedora Atomic with KDE (Aurora).
Deep sleep works well and can stay sleeping for days.
Normal sleep rules are working well. The do-not-sleep toggle in the power menu also works to prevent it from sleeping.
The only thing that doesn't work is that Flatpak apps can't prevent the system from sleeping, so watching a video, using HandBrake to encode, etc. will all just allow it to sleep if there is no physical input.
I have a 2018 Dell XPS.
Lem453 2 months ago • 100%
And borgmatic makes retention rules with automatic runs super easy. It's basically a wrapper that runs Borg on the client side.
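As a rough illustration (not my exact file), a borgmatic config is just YAML like this. Recent borgmatic releases use this flat layout, older ones nest the same options under location/storage/retention sections, and the paths, repo URL and passphrase here are obviously placeholders:

```yaml
# /etc/borgmatic/config.yaml (sketch)
source_directories:
  - /home
  - /srv/docker

repositories:
  - path: ssh://xxxxxxxx@xxxxxxxx.repo.borgbase.com/./repo
    label: borgbase

# Borg encrypts client-side using this passphrase
encryption_passphrase: "use-a-long-random-passphrase"

# Retention rules: borgmatic prunes old archives automatically on each run
keep_daily: 7
keep_weekly: 4
keep_monthly: 6
```

Then a cron job or systemd timer just runs `borgmatic` on a schedule and the create/prune/check cycle happens on its own.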
Lem453 2 months ago • 100%
I've been using this for a few months now. It's really great.
Lem453 2 months ago • 100%
Security in layers.
All your services should be using HTTPS. Vaultwarden in particular won't even run without HTTPS unless you bypass a bunch of security measures.
This is how to set up local-only and external HTTPS. I highly recommend this as a baseline setup for every homelab: it lets you choose how much security you want on a per-app basis and makes adding new apps trivially easy.
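The "per-app" part is mostly just a middleware on the reverse proxy. As a sketch, assuming Traefik v3 and placeholder subnets (v2 calls the same middleware ipWhiteList), a dynamic config file like this keeps an app reachable only from the LAN or WireGuard even though it shares the proxy with external apps:

```yaml
# traefik dynamic config (file provider) - sketch
http:
  middlewares:
    local-only:
      ipAllowList:
        sourceRange:
          - 192.168.1.0/24   # LAN
          - 10.8.0.0/24      # WireGuard clients
```

Attach it to a router with a label like `traefik.http.routers.vaultwarden.middlewares=local-only@file` and that app is local-only; leave the label off and it's external. That's the whole per-app decision.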
Lem453 2 months ago • 100%
Anyone with the knowledge to self-host will quickly discover 3-2-1. Whether they choose to follow it is on them, but the data loss won't be from ignorance.
Lem453 2 months ago • 100%
Borg backup to BorgBase is not very expensive, and Borg will encrypt the data; the vault itself is also encrypted.
Lem453 2 months ago • 100%
Keep Vaultwarden behind WireGuard for local-only access, then also use HTTPS certs and a good master password. Very secure like this.
Lem453 2 months ago • 100%
Last I checked, there is an open PR for the PWA on Android to expose the share function. That will allow this to work; however, you will have to install the PWA via Chrome since the share feature for PWAs is proprietary. Sucks because I use Firefox with a bunch of privacy features.
Lem453 2 months ago • 63%
HTTPS already gives you encryption in transit and doesn't need to be on their roadmap.
Encryption at rest could be an option, but seeing how many other projects have trouble with it (Nextcloud), it's probably best to handle this at the file system level with disk encryption.
Lem453 2 months ago • 100%
Same with Jellyfin.
They basically don't accept recurring donations, on purpose.
Lem453 2 months ago • 100%
I've got multiple apps using LDAP, OAuth, and proxy providers on Authentik, and I've not had this happen.
I also use Traefik as the reverse proxy.
I didn't manually create an outpost. I'm not sure what advantage there is unless you have a huge organization and run multiple redundant containers. Regardless, there might be some bug here, because I otherwise have the same setup as you.
I would definitely try updating everything to the latest container version first.
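For comparison, this is roughly what my Traefik side of the proxy provider looks like, written as a dynamic-config sketch. The container name `authentik-server` and port 9000 are assumptions from a typical Authentik compose file, and the `/outpost.goauthentik.io/...` path is the one Authentik's Traefik forward-auth docs describe for the embedded outpost; double-check both against your deployment.

```yaml
http:
  middlewares:
    authentik:
      forwardAuth:
        # embedded outpost endpoint used for Traefik forward auth
        address: http://authentik-server:9000/outpost.goauthentik.io/auth/traefik
        trustForwardHeader: true
        authResponseHeaders:
          - X-authentik-username
          - X-authentik-groups
          - X-authentik-email
```

Any router that gets `middlewares=authentik@file` is then gated by Authentik without touching the app itself.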
Lem453 2 months ago • 100%
For people wanting a very versatile setup, follow this video:
Apps that are accessed from outside the network (Jellyfin) go to jellyfin.domain.com.
Apps that are internal only (Vaultwarden), or reached over WireGuard as extra security, go to vaultwarden.local.domain.com.
Add on Authentik to get single sign-on. Apps like Sonarr that don't have good built-in security can be put behind proxy auth and also only accessed locally or over WireGuard.
Apps that have OAuth integration (Seafile etc.) get single sign-on as well at seafile.domain.com (make these external so you can do share links with others; same for Immich etc.).
With this setup you will be super versatile and can expand to any app you could ever want in the future. A rough sketch of how these tiers combine on individual apps is below.
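Hostnames and middleware names here are placeholders, the snippets are trimmed to just the routing labels, and `local-only` / `authentik` are assumed to be an IP allowlist and an Authentik forward-auth middleware defined elsewhere in the Traefik config:

```yaml
services:
  jellyfin:
    labels:
      # external, protected only by its own login
      - traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.com`)

  seafile:
    labels:
      # external, but logs in through Authentik via its own OAuth/OIDC client
      - traefik.http.routers.seafile.rule=Host(`seafile.example.com`)

  sonarr:
    labels:
      # internal only AND behind Authentik proxy auth
      - traefik.http.routers.sonarr.rule=Host(`sonarr.local.example.com`)
      - traefik.http.routers.sonarr.middlewares=local-only@file,authentik@file
```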
Lem453 2 months ago • 100%
The same as for anything else if your phone gets stolen: you restore from backups.
Aegis lets you make a backup that you can keep yourself, on your computer, your own cloud storage, etc.
Every OS has some kind of built-in vault/encryption feature. Put the file in there. It only needs to be updated when you add another 2FA account (so very infrequently).
Lem453 3 months ago • 100%
Don't use cloud-based 2FA and you won't need to wonder about this.
Aegis is one of several open source 2FA apps you can use instead.
Lem453 3 months ago • 100%
Not using cloud-based 2FA, which is dumb to begin with.
Lem453 3 months ago • 100%
Does anyone know if Dockge allows you to connect directly to a git repo to pull compose files?
This is what I like most about Portainer. I work on the compose files in an IDE and then check them into my self-hosted git repo.
Then in Portainer, the stack is connected to the repo, so I only need to press a button to pull the latest compose, and there is a checkbox to decide whether I want the Docker image to update or not.
It works really well and makes it very easy to roll back if needed.
Lem453 3 months ago • 100%
Bitwarden lets you upload files (key files) and save all your passwords.
Lem453 3 months ago • 100%
Use Aegis, export the keys, and then reimport them every time you switch. Trusting your second factor to a cloud is a disaster waiting to happen.
If you want to get fancy, set up your own cloud server (Nextcloud, Seafile, ownCloud, etc.) and point Aegis's backup folder at the self-hosted cloud for an easy restore every time you switch ROMs.
Lem453 3 months ago • 100%
Ya, I'm using the English-79 voice model (not the default) on a Pixel 8 and it works very well.
Lem453 3 months ago • 100%
If you really want to be pedantic, you could set up RAID 1+0 or 5 and live the true RAM hot-swapping life.
Lem453 3 months ago • 100%
FWIW, Collabora and OpenOffice can integrate with other clouds like Seafile and ownCloud Infinite Scale, so it can be used even without Nextcloud. It can also be used standalone.
Lem453 3 months ago • 100%
It's not easy, but the only way to make it all work without creating massive security holes is to only buy things that allow connection with open standards (which means Home Assistant can connect to them).
Lem453 3 months ago • 100%
The correct way of doing this is to never interact with an IoT device directly. Put all of them on the same network as Home Assistant and then control all of them only via Home Assistant. Then you make one exception so that Home Assistant is accessible from the other networks.
This also allows you to disable internet access for every single IoT device except Home Assistant.
Lem453 3 months ago • 100%
I don't remember all the details. They never went closed source; there was a difference of opinion between the primary devs on the direction the project should take.
It's possible that was related to corporate funding, but I don't know that.
Regardless, it was a fork where some devs stayed with ownCloud and most went with Nextcloud. I moved to Nextcloud at that time as well.
ownCloud now seems to have the resources to completely rewrite it from the ground up, which seems like a great thing.
If the devs have a disagreement again, then the code can just be forked again, AFAIK, just like any other open source project.
Lem453 3 months ago • 100%
If I understand it correctly, layering an application is no more dangerous than a regular install on a non-atomic OS. In other words, every piece of software you have installed on a normal Fedora desktop is not containerized; if it's software you were going to install anyway, layering it is the same as before (albeit with significantly slower installs and updates).
But it means you still get big benefits, because 99% of your software packages are properly containerized.
Lem453 3 months ago • 66%
I only read the beginning, but it says you can use it for private deployments and can't use it commercially. Seems reasonable. Any specific issues?
Lem453 3 months ago • 100%
I have no problem supporting devs, but locking what should be core features behind a paywall is unacceptable for me.
Lem453 3 months ago • 100%
I mean, software that's actively being developed can't be called DOA. Even if it's garbage now (and I don't know if it is), that doesn't mean it can't become useful at a future date.
It's not like a TV show where, once released, it can never be changed.
Lem453 3 months ago • 100%
Oh, never mind, I saw this funding announcement for $6M and assumed it was the same company. Looks like they have many corporate investors...doesn't inspire too much confidence.
Although they are still using the Apache 2 license, and you can see they are very active on GitHub. On the surface it does look like a good FOSS project.
Lem453 3 months ago • 83%
Ya, it was bought by Kiteworks, which provides document management services for corporations (which explains why they mention traceable file access in their features a lot).
That being said, they bought them in 2014 it seems, and it's been a decade now.
Correction: they were bought very recently; they have, however, been accepting corporate funding for more than a decade. That's not bad in and of itself.
The topic of self-hosted cloud software comes up often, but I haven't seen anyone mention ownCloud Infinite Scale (the rewrite in Go).

I started my cloud experience with ownCloud years ago. Then there was a schism and almost all the active devs left for the Nextcloud fork.

I used Nextcloud from its inception until last year, but like many others it always felt brittle (easy to break something) and half baked (features always seemed to be at 75% of what you want). As a result I decided to go with Seafile and stick to the Unix philosophy: get an app that does one thing very well rather than a mega app that tries to do everything. Seafile does this very well. Super fast, works with single sign-on, etc. No bloat.

Then just the other day I discovered that ownCloud has a full rewrite. No PHP, no Apache etc. Check the GitHub: multiple active devs with lots of activity over the last year. The project seems stronger than ever and aims to fix the primary issues of Nextcloud/ownCloud PHP. It's also designed for cloud deployment, so it works well with Docker and should be easy to configure via Docker variables instead of config files mapped into the container (a rough sketch of what that looks like is below).

Anyways, the point of this thread is:

1. If you've never heard of it, like me, then check it out.
2. If you have used it, please post your experiences compared to Nextcloud, Seafile, etc.
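To make the "configured via Docker variables" point concrete, this is roughly the shape of a single-container oCIS deployment. The image name is the official `owncloud/ocis` one; the specific environment variable names and paths are from memory of the oCIS Docker examples and may have changed, so treat them as assumptions and check the current docs.

```yaml
services:
  ocis:
    image: owncloud/ocis
    entrypoint: /bin/sh
    # first run generates the config, then the server starts; no PHP/Apache stack
    command: ["-c", "ocis init || true; ocis server"]
    environment:
      OCIS_URL: https://ocis.example.com   # assumption: the external URL variable
      PROXY_TLS: "false"                   # assumption: TLS terminated by the reverse proxy
    volumes:
      - ocis-config:/etc/ocis
      - ocis-data:/var/lib/ocis

volumes:
  ocis-config:
  ocis-data:
```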
Technically this isn't actually a Seafile issue, however the upload client really should have the ability to run checksums to compare the original file to the file that is being synced to the server (or other device).

I run Docker in a VM that is hosted by Proxmox. Proxmox manages a ZFS array which contains the primary storage that the VM uses. Instead of making the VM disk 1TB+, the VM disk is relatively small since it's only the OS (64GB), and the Docker containers mount a folder on the ZFS array itself, which is several TBs.

This had all been going really well with no issues, until yesterday when I tried to access some old photos and the photos would only load halfway. The top part would be there but the bottom half would be grey/missing. This seemed to be randomly present on numerous photos, however some were normal and others had missing sections. Digging deeper, some files were also corrupt and would not open at all (PDFs, etc). Badness alert....

All my backups come from the server. If the server data has been corrupt for a long time, then all the backups would be corrupt as well. All the files on the Seafile server were originally synced from my desktop, so when I open a file locally on the desktop it all works fine; only when I try to open the file on Seafile does it fail. Also, not all the files were failing, only some. Some old, some new. Even the file sizes didn't seem to consistently predict whether it would work or not.

It got to the point where I could take a photo from my desktop, drag it into a Seafile library via the browser, and it would show a successful upload, but then trying to preview the file wouldn't work, and downloading that very same file back again showed a file size of about 44kb regardless of the original file size.

Google/DDG...can't find anyone that has the same issue...very bad.

Finally I noticed an error in MariaDB: "memory pressure, can't write to disk" (paraphrased). OK, that's odd. The RAM was fine, which is what I assumed it was. HD space can't be the issue since the ZFS array is only 25% full, and both MariaDB and Seafile only have volumes that are on the ZFS array. There are no other volumes...or are there???

Finally, in Portainer I checked the volumes that exist; Seafile only has the two expected ones, data and database. Then I saw hundreds of unused volumes. A quick Google revealed `docker volume prune`, which deleted many GBs worth of volumes that were old and unused. By this point I had already created and recreated the Seafile Docker containers a hundred times with test data and simplified the docker compose as much as possible, but it started working right away. MariaDB started working, and I could now copy a file from the web interface or the client and it would work correctly.

Now I'm going through the process of setting up my original docker compose with all the extras that I had, remaking my user account (luckily it's just me right now), setting up the sync client, and then copying the data from my desktop to my server.

I've got to say, this was scary as shit. My setup uploads files from desktop, laptop, phone etc. to the server via Seafile; from there Borg backup takes incremental backups of the data and sends them remotely. The second I realized that the local data on my computer was fine but the server data was unreliable, I immediately knew that even my backups were now unreliable.

IMHO this is a massive problem. Seafile will happily 'upload' a file and say success, but then trying to redownload the file results in an error since it doesn't exist.
**Things that really should be present to avoid this:**

1. The client should have the option to run a quick checksum on each file after it uploads and compare the original to the uploaded one to ensure data consistency. There should probably be an option to run this afterwards as a check as well, and it could then output a list of files that are inconsistent.
2. The default docker compose should run MariaDB with health checks, so that when it starts throwing errors while the interface still runs, someone can be alerted (see the sketch below).
3. There needs to be some kind of reminder to check in on unused Docker volumes.
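For point 2, a sketch of what that health check could look like in the compose file; the service names, image tag and root password are placeholders, and the exact ping command may need adjusting for your MariaDB version:

```yaml
services:
  db:
    image: mariadb:10.11
    environment:
      MYSQL_ROOT_PASSWORD: changeme
    healthcheck:
      # fails the container health when the database stops answering, instead of failing silently
      test: ["CMD-SHELL", "mariadb-admin ping -h 127.0.0.1 -uroot -p$$MYSQL_ROOT_PASSWORD"]
      interval: 30s
      timeout: 5s
      retries: 3

  seafile:
    image: seafileltd/seafile-mc
    depends_on:
      db:
        condition: service_healthy   # refuses to start against a broken DB
```

For point 3, `docker volume ls -f dangling=true` lists the unused volumes before you prune anything.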
Looking for a self-hosted YouTube front end with an automatic downloader. So you would subscribe to a channel, for example, and it would automatically download all the videos and new uploads. Jellyfin might be able to handle the front-end part, but I'm not sure about automatic downloads and proper file naming and metadata.
The Jellyfin app (self-hosted video streaming) on Steam Deck (installed via desktop mode -> Discover as a Flatpak) doesn't seem to register as 'playing' with the OS. The screen will dim after a few minutes. I'm 'playing' the Jellyfin app as a non-Steam game in game mode. I know I can disable screen dimming in the settings, but is there a way to have it auto-detect when a video is playing and prevent the screen from dimming?
Any suggestions for roasted decaf beans I can get in Canada?
Very solid price, the cheapest I've seen for something like this. Has anyone tried it with OPNsense or other software? The linked thread talks about someone getting 60C load temps, but the ambient air was 37C and they were using an RJ45 transceiver module, which is known to use a lot of power. Wondering if anyone else has experience with this. Seems like a big advancement in what's possible at home scale with non-second-hand equipment. Another article about this: https://liliputing.com/this-small-fanless-pc-is-built-for-networking-with-four-10-gbe-and-five-2-5-gb-ethernet-ports/
This should eventually make its way into Jellyfin. Eager to see the performance improvements.
Beautiful stats for Jellyfin. I just set it up in docker compose yesterday. Love it!
I'm wondering if I can get a device that bridges Z-Wave over Ethernet/WiFi and connect that to my Home Assistant setup. Basically, I have a Home Assistant setup in my house. I want to add a few simple things to my parents' place, but I want it all to be on the same HA instance. On the router at my parents' place, I can install WireGuard to connect it to my LAN, so my parents' network then behaves like part of my LAN. I'm looking for a device that can connect to Z-Wave and then send that info over the LAN to my Home Assistant. Does such a thing exist? Thanks.
By local control, I mean: if the Z-Wave hub is down, will the switch still work as a dumb switch and turn the lights on/off? This is the product I would like to get, but I can't find out whether it allows 'dumb switch' operation. Does anyone have experience with these? https://byjasco.com/ultrapro-z-wave-in-wall-smart-switch-with-quickfit-and-simplewire-white Thanks!
Starship has been stacked and is apparently ready to launch as per Musk. Waiting on FAA approval for second test flight.
Hi all. I just learned about NixOS a few weeks ago.

I'm in the process of migrating several of my Docker services to a new server that will have Proxmox installed as the host and then a VM for Docker. I'm currently using Alpine as the VM and it works well, but one of the main goals of the migration is to use infrastructure as code as much as possible. All my Docker services are docker compose files checked into a git repo that gets deployed. When I need to make a change, I update the git repo and pull down the latest docker compose. I currently have a bunch of steps that I need to do on the Alpine VM to make it ready for Docker (QEMU agent, NFS shares, etc).

NixOS promises to do all of that with a single config file and then create an immutable OS that never changes after that. That seems to fit the philosophy of infrastructure as code and easy reproducibility well.

Has anyone else tried NixOS as a Docker host? Any issues you've encountered?
I'm just starting to upgrade my basic Unraid Docker setup to an infrastructure-as-code setup. I will use Unraid as a NAS only: my media and backups will stay on Unraid, and everything else moves to a separate Proxmox VM backed by an SSD storage array running ZFS. Both the Unraid and Proxmox hosts share their storage via NFS, and each Docker container mounts the NFS volumes as needed. For the containers I use an Alpine VM with Docker. I use Portainer to connect to a Gitea repo (on Unraid) to pull down the docker compose files. So my workflow is: write the compose file in VS Code on my PC, commit it to git, then in Portainer hit the redeploy button and it pulls the latest compose file automatically. What's your setup?