Caching network storage on local container hosts
I have a QNAP TS-873A connected via 10Gbit to a managed Trendnet switch, with a couple of PCs hanging off that for my homelab. The QNAP is running 8x 12TB spinning rust, and while I fight with QNAP about how I'm allowed to use the Optane I installed in it, there's currently no cache acceleration on the NAS itself.

The container host PCs are all mini-PCs with an NVMe boot/OS drive (256GB) and a larger NVMe drive (1TB) for scratch. Each mini-PC will have up to 2x 2.5Gbit connections to the switch, so there should be decent bandwidth available to the QNAP. I'm currently running a handful of Docker Compose stacks, but I'm hoping to move to Swarm or a homelab flavor of Kubernetes; I have no particular attachment to any of them.

My containers currently mount their volumes via the NFS driver, but I'm not sold on this at all. They generally have two kinds of volumes: "working" and "data". The "working" volumes tend to be small (<= 50GB, usually 1-2GB) and involve lots of random IO with aggressive file locking; the SQLite databases in some of my containers absolutely hate being mounted over NFS. The "data" volumes are large and sequential -- camera streams, LLaMa datasets, etc. -- and those have performed fine direct-to-NFS.

Is there a generally accepted way to let my container hosts use their local 1TB NVMe drives as high-speed storage, while the data still logically lives behind a network mount? I want as much data as possible to [eventually] live on the QNAP, since I actually do use it for snapshots and backups; having all that data live only on one node's consumer-grade SSD is not ideal. I'm hoping to keep the source of truth on the QNAP, to allow for container tepid-migration between hosts as applicable. Hot migration is currently out of scope for what I want to do, although if it's an easy enough switch to flip I'd love that too.

I've tried cachefilesd in the past and got worse performance than native NFS mounts. I don't know why -- the drives themselves benchmarked fast, and the NFS connection benchmarked fast, but putting the two together just fell on its face. I'm open to really any option: distributed filesystems like Gluster, iSCSI exports with dm-cache, or anything else.
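For context, here's roughly how the NFS-backed volumes are wired up today. My Compose files declare them through the local driver's NFS options; the one-off CLI equivalent looks like this (the NAS address and export path below are placeholders, not my real layout):

```
# Hypothetical example: an NFS-backed named volume, equivalent to what my
# Compose files declare via driver_opts. 192.168.1.10 and the export path
# are placeholders.
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw,nfsvers=4.1 \
  --opt device=:/share/containers/app_working \
  app_working
```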
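The cachefilesd attempt was the standard FS-Cache arrangement: cachefilesd with its cache directory on the NVMe scratch drive, and the NFS mounts flagged fsc. Reconstructing from memory, it was roughly this (paths and address are placeholders):

```
# Roughly what I tried: FS-Cache via cachefilesd, cache directory on the
# 1TB NVMe scratch drive. Paths and NAS address are placeholders.
apt install cachefilesd                      # Debian/Ubuntu package name

# Key lines in /etc/cachefilesd.conf:
#   dir /mnt/nvme-scratch/fscache            # cache lives on the local NVMe
#   tag qnapcache
#   brun 10%  bcull 7%  bstop 3%             # stock culling thresholds

systemctl enable --now cachefilesd

# NFS mounts need the 'fsc' option for FS-Cache to engage:
mount -t nfs -o vers=4.1,fsc 192.168.1.10:/share/containers /mnt/containers
```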
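And for completeness, the iSCSI + dm-cache idea I keep circling: export a LUN from the QNAP, then use lvmcache to front it with a slice of the local NVMe. I haven't actually built this, so treat it purely as a sketch -- the IQN, device names, and sizes are made up:

```
# Sketch only -- not built yet. IQN, device names, and sizes are invented;
# writethrough caching keeps the QNAP LUN as the source of truth.

# 1. Attach the iSCSI LUN exported by the QNAP
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node -T iqn.2004-04.com.qnap:ts-873a:iscsi.containers -p 192.168.1.10 --login
# (assume the LUN appears as /dev/sdb and the NVMe partition is /dev/nvme1n1p1)

# 2. One volume group spanning the remote LUN and the local NVMe
pvcreate /dev/sdb /dev/nvme1n1p1
vgcreate vg_containers /dev/sdb /dev/nvme1n1p1

# 3. Origin LV on the LUN, cache pool on the NVMe, then glue them together
lvcreate -n lv_data -L 2T vg_containers /dev/sdb
lvcreate --type cache-pool -n lv_cachepool -L 500G vg_containers /dev/nvme1n1p1
lvconvert --type cache --cachepool vg_containers/lv_cachepool \
          --cachemode writethrough vg_containers/lv_data

mkfs.xfs /dev/vg_containers/lv_data
```

If that's the sane route, I'd be curious whether people run it writethrough (slower writes, QNAP always consistent) or writeback (faster, but the local NVMe briefly holds the only copy of dirty blocks).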