#homelab #minipc #proxmox
# General Resources
- [GitHub - awesome-selfhosted/awesome-selfhosted: A list of Free Software network services and web applications which can be hosted on your own servers](https://github.com/awesome-selfhosted/awesome-selfhosted)
    - Interesting things from this to try:
        - Communication
            - Matrix
                - [Element \| Secure collaboration and messaging](https://element.io/)
                - [Conduit - Your own chat server](https://conduit.rs/)
        - Manufacturing
            - CNCjs
            - Manyfold
        - Media Management
            - Jellyfin
                - [Guide to Host Jellyfin for People Coming from Plex : r/selfhosted](https://www.reddit.com/r/selfhosted/comments/1kcmamw/guide_to_host_jellyfin_for_people_coming_from_plex/)
            - ChannelTube // MeTube // Pinchflat
        - Miscellaneous
            - Baby Buddy
            - [GitHub - C4illin/ConvertX: 💾 Self-hosted online file converter. Supports 1000+ formats ⚙️](https://github.com/C4illin/ConvertX)
            - [GitHub - Shelf-nu/shelf.nu: shelf is open source Asset Management Infrastructure for absolutely everyone.](https://github.com/Shelf-nu/shelf.nu)
        - Money Management
            - investigate further!
        - Recipe Management
            - investigate further!
- [Downloads - Proxmox Virtual Environment](https://www.proxmox.com/en/downloads/proxmox-virtual-environment)
- [Intel NUC Proxmox Homelab Build](https://medium.com/nerd-for-tech/intel-nuc-10-proxmox-ve-homelab-ce2ef63075c)
    - Good Proxmox-on-NUC guide
    - Covers backing up the environment to a Synology NFS share!
# Design
- Stealing the 2025 build from [u/noced](https://www.reddit.com/u/noced)
- Hardware
    - [ASUS NUC 15 Pro Slim Barebone Kit RNUC15CRKU50000U B&H Photo](https://www.bhphotovideo.com/c/product/1881417-REG/asus_rnuc15crku50000u_nuc_15_pro_slim.html)
        - $520 for Slim / Core Ultra 5 225H
        - "Slim" fits into a 1U shelf
    - 32GB DDR5 RAM
        - [Crucial 32GB Laptop DDR5 5600 MHz SO-DIMM Memory CT2K16G56C46S5](https://www.bhphotovideo.com/c/product/1735729-REG/crucial_crucial_ram_32gb_kit.html)
            - $78.99 (2025-04-10)
    - 1TB NVMe SSD
        - [Samsung 1TB 990 EVO Plus PCIe 5.0 x2 M.2 Internal MZ-V9S1T0B/AM](https://www.bhphotovideo.com/c/product/1855149-REG/samsung_mz_v9s1t0b_am_1tb_990_evo_plus.html)
            - $74.99 on sale (2025-04-10)
- Proxmox
    - Passing the Arc GPU through to a VM and then into a Docker container
        - [Intel Arc GPU not detected with new Asus NUC 15 Pro : r/Proxmox](https://www.reddit.com/r/Proxmox/comments/1jo54i1/intel_arc_gpu_not_detected_with_new_asus_nuc_15/)
    - Virtual Machine 1 - DockerProd
        - Portainer
        - Plex
            - media share NFS-mounted(?) from Synology
                - READ ONLY???
        - Homebridge
        - Calibre + Calibre-Web?
    - Virtual Machine 2 - DockerDev
        - Immich Photo Library
            - Docker install?
            - photos share NFS-mounted(?) from Synology
        - [Paperless-ngx](https://docs.paperless-ngx.com/)
    - Virtual Machine 3 - Home Assistant (HASS.io)
- Backup Proxmox settings and virtual machines to the Synology backup share
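A minimal sketch of the Synology NFS mounts described above, as guest `/etc/fstab` entries. The NAS hostname and export paths here are assumptions, not values from these notes:

```shell
# hypothetical NAS hostname and export paths -- replace with the real ones
# media share mounted read-only for Plex; photos share read-write for Immich
mkdir -p /mnt/media /mnt/photos
cat >> /etc/fstab <<'EOF'
synology.local:/volume1/media   /mnt/media   nfs  ro,nofail,_netdev  0 0
synology.local:/volume1/photos  /mnt/photos  nfs  rw,nofail,_netdev  0 0
EOF
mount -a
```

`nofail` keeps the VM booting even when the NAS is down, and `ro` enforces the read-only question mark above at the mount level rather than relying on the Synology export settings.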
# Build Log
- 2025-05-06
- NUC Arrived!
- Installed RAM and NVMe drive.
- Download Proxmox VE 8.4 ISO Installer.
- Create a bootable USB stick using BalenaEtcher.
- Boot from Proxmox Install USB.
- Select **Proxmox Install (Graphical)**.
- Select **zfs (RAID0)** for the filesystem.
- Set Country / Time Zone / Keyboard Layout.
- Set root user password and email (store in 1Password).
- Set the Management Network:
- Hostname: **donnager.reynolds.io**
- IP Address / Gateway: Set in home network IP list
- DNS: **1.1.1.1**
- Access via web interface ([currently here](https://10.0.1.129:8006/)) and test login.
- Access the Proxmox Shell from the web interface.
- Run the recommended post-install script, answering yes to all prompts (in particular, disabling the enterprise repository):
```
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/misc/post-pve-install.sh)"
```
# noced's Build Notes from 2025-05-06
Raw build notes from [u/noced](https://www.reddit.com/u/noced) covering his Proxmox + Alpine Linux Docker host setup on the same ASUS NUC 15 Pro.
## Prerequisites
### Local SSH key
```
ssh-keygen -t ed25519 -C "[email protected]"
```
## Proxmox
Post install scripts:
https://community-scripts.github.io/ProxmoxVE/scripts?id=post-pve-install
```
bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/tools/pve/post-pve-install.sh)"
```
### GPU Passthrough – Proxmox
```
apt install proxmox-kernel-6.11
echo "blacklist i915" >> /etc/modprobe.d/blacklist.conf
vim /etc/kernel/cmdline
# append intel_iommu=on iommu=pt so the line reads:
root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt
proxmox-boot-tool refresh
reboot
dmesg | grep -e DMAR -e IOMMU
dmesg | grep 'remapping'
pvesh get /nodes/nuc1/hardware/pci --pci-class-blacklist ""
```
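After the reboot, a common way to confirm the iGPU ended up in its own IOMMU group is to walk sysfs. This is a standard snippet, not part of noced's notes:

```shell
# print each PCI device with its IOMMU group; 00:02.0 (the Arc iGPU)
# should ideally sit in a group by itself before passthrough
for d in /sys/kernel/iommu_groups/*/devices/*; do
  g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
  printf 'IOMMU group %s: ' "$g"
  lspci -nns "${d##*/}"
done | sort -V
```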
## Alpine Linux general setup
### Create VM
Create a new VM in Proxmox
```
qm create 101 --name alpine2 --memory 8192 --cpu host --cores 4 --net0 virtio,bridge=vmbr0,firewall=1 --onboot 1 --ostype l26 --boot order='scsi0;ide2;net0'
qm set 101 --scsihw virtio-scsi-single --scsi0 local-zfs:25
qm set 101 --scsihw virtio-scsi-single --scsi1 nvmedata1:4096
qm set 101 --cdrom local:iso/alpine-extended-3.21.3-x86_64.iso
qm set 101 --hostpci0 00:02.0
qm start 101
```
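Before booting, the resulting VM definition can be sanity-checked from the Proxmox shell (assuming VM ID 101 as above):

```shell
# dump the VM's config; the hostpci0, scsi0, and scsi1 lines set by the
# qm set commands above should all appear in the output
qm config 101
```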
### Install Alpine Linux
Connect to the VM console and start the Alpine Linux installation
Follow the installation prompts:
```
setup-alpine
* Keyboard layout: none
* Hostname: alpine2
* Network interface: eth0
* IP Address: dhcp
* Manual network configuration: no
* Set root password
* Time zone: America/New_York
* Proxy: none
* Mirror: skip
* Setup user: no
* SSH server: OpenSSH
* Allow root login: yes
* Enable SSH key: none
* Disk to use: sda
* How would you like to use the disk: sys
* Erase disk and continue: y
reboot
```
After reboot, log in as root with the password you set during installation
### Copy SSH key from client
```
ssh-copy-id -i ~/.ssh/id_ed25519.pub root@SERVERIP
```
### Disable password/root login
```
vim /etc/ssh/sshd_config
# set the following directives:
PasswordAuthentication no
PermitRootLogin prohibit-password
service sshd restart
```
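One way to confirm the hardened settings actually took effect is sshd's test mode (the path shown is where Alpine installs the daemon; adjust if it differs):

```shell
# -T prints the effective (resolved) server configuration
/usr/sbin/sshd -T | grep -iE '^(passwordauthentication|permitrootlogin)'
```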
### Install updates
```
su
apk add vim
vim /etc/apk/repositories
# uncomment the community repository line
apk update
apk upgrade
```
### Install Docker
Docker Compose reference: https://docs.docker.com/reference/compose-file/
```
apk add docker docker-compose
rc-update add docker boot
service docker start
```
### GPU Passthrough
See https://pve.proxmox.com/wiki/PCI_Passthrough
```
apk add mesa-dri-gallium mesa-va-gallium intel-media-driver libva-intel-driver linux-firmware-i915 vainfo intel-gpu-tools
```
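Inside the VM, the passthrough can be checked with the tools installed above (the render node path is the usual default and may differ):

```shell
# the passed-through GPU should appear as a DRM render node
ls -l /dev/dri/
# vainfo lists the VA-API profiles the driver exposes (hardware transcode)
vainfo --display drm --device /dev/dri/renderD128
```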
## alpine1
```
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always \
-v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest
```
Via https://github.com/homebridge/docker-homebridge/wiki/Homebridge-on-Portainer
## alpine2
This VM host runs Alpine Linux and is the base for Immich in Docker.
* https://immich.app/docs/installation/docker/docker-compose
### Create a new partition on the NVMe drive
```
apk add parted xfsprogs
fdisk -l
parted /dev/sdb --script mklabel gpt
parted /dev/sdb --script mkpart primary xfs 0% 100%
mkfs.xfs /dev/sdb1
mkdir -p /mnt/docker_volume
mount -t xfs /dev/sdb1 /mnt/docker_volume
echo "/dev/sdb1 /mnt/docker_volume xfs defaults,noatime 0 2" >> /etc/fstab
## Verify the mount
df -h
reboot
```
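After the reboot, the fstab entry can be verified; relocating Docker's data root onto the new volume is a possible follow-up (an assumption on my part, not something the notes specify):

```shell
# confirm the fstab entry took effect after the reboot
findmnt /mnt/docker_volume
# optional (assumption): move Docker's data-root onto the new volume
# echo '{ "data-root": "/mnt/docker_volume/docker" }' > /etc/docker/daemon.json
# service docker restart
```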
## Watchtower
https://containrrr.dev/watchtower/
```
version: "3"
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      TZ: "America/New_York"
      WATCHTOWER_SCHEDULE: "0 0 4 * * *"
      WATCHTOWER_CLEANUP: "true"
      WATCHTOWER_NOTIFICATION_REPORT: "true"
      WATCHTOWER_NOTIFICATION_URL: pushover://shoutrrr:<API Key>@<User ID>/?devices=<Device>
      WATCHTOWER_NOTIFICATION_TEMPLATE: |
        {{- if .Report -}}
          {{- with .Report -}}
        {{len .Scanned}} Scanned, {{len .Updated}} Updated, {{len .Failed}} Failed
          {{- range .Updated}}
        - {{.Name}} ({{.ImageName}}): {{.CurrentImageID.ShortID}} updated to {{.LatestImageID.ShortID}}
          {{- end -}}
          {{- range .Fresh}}
        - {{.Name}} ({{.ImageName}}): {{.State}}
          {{- end -}}
          {{- range .Skipped}}
        - {{.Name}} ({{.ImageName}}): {{.State}}: {{.Error}}
          {{- end -}}
          {{- range .Failed}}
        - {{.Name}} ({{.ImageName}}): {{.State}}: {{.Error}}
          {{- end -}}
          {{- end -}}
        {{- else -}}
          {{range .Entries -}}{{.Message}}{{"\n"}}{{- end -}}
        {{- end -}}
```
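Note that Watchtower uses a six-field cron spec (with a leading seconds field), so `0 0 4 * * *` fires daily at 04:00. To run it, save the YAML above as `docker-compose.yml` and bring it up with the standalone docker-compose binary installed earlier:

```shell
# start Watchtower in the background, then tail its logs
docker-compose up -d
docker-compose logs -f watchtower
```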
## Docker Swarm
```
alpine1:~# docker swarm init
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-ABCDEFG12345 192.168.1.220:2377
```
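If the join command scrolls away, the manager node can reprint it, and membership can be verified afterwards:

```shell
# reprint the worker join command on the manager node
docker swarm join-token worker
# list all nodes in the swarm and their status
docker node ls
```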