I’ve been using Portainer to manage my homelab stacks from a single dashboard, which is more convenient than the CLI, but I’m not very satisfied with it so I’ve been looking for alternatives.

Portainer often fails to deploy them and is either silent about it or doesn’t give me much information to work with. The main convenience is that (when it works) it automatically pulls the updated Docker Compose files from my repo and deploys them without any action on my part.

Docker Swarm and Kubernetes seem to be the next ones in line. I have some experience with K8s so I know it can be complex, but I hope it’s a complexity most paid upfront when setting everything up rather than being complicated to maintain.

Do you have any experience with either one of these, or perhaps another way to orchestrate these services?

  • TheFrenchGhosty@lemmy.pussthecat.org · 1 year ago

    Docker Compose as is.

    I used Portainer for like 2 years when I first learned Docker (I only used it to deploy compose files and monitor the containers), but it’s really shit once you know how things work.

  • 7egend@lemmy.ml · 1 year ago

    I use Portainer and have never had any issues with it, and I’m running around 40 stacks at any given time. There’s also Yacht, which is nice; it’s not quite as feature-rich as Portainer, but it’s super easy to use in comparison.

  • tiwenty@lemmy.world · 1 year ago

    Currently it’s all in the CLI: I split my compose files by concern, and use a bash alias with a wildcard to call them all.
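
    Roughly like this (the directory and naming pattern here are just illustrative, not my exact setup):

    ```shell
    # A shell function rather than a plain alias, since a function can
    # expand the glob and forward arguments; each compose file covers
    # one concern (e.g. media, monitoring, networking).
    dcall() {
      for f in "$HOME"/compose/compose-*.yml; do
        docker compose -f "$f" "$@"
      done
    }
    ```

    Then `dcall up -d` brings every stack up and `dcall pull` refreshes all the images.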

    But now as I’m adding a RPi in the stack to add some monitoring and a few light stuff, I’m also thinking of going to Kube. But as you say, it may be tough ^^

  • ilco@feddit.nl · 1 year ago

    I’ve used both Portainer and Yacht. They’re decent ways to manage Docker stacks/apps; it kinda depends on what you want app-wise. I’m also trying out a project named Cosmos (a simplified Portainer-like app with a focus on ease of use) on a friend’s server.

  • loganmarchione@lemmy.world · 1 year ago

    This is definitely an over-engineered setup…

    I store my Docker Compose files in an internal-only git repo (hosted on Gitea).

    Drone is my CI/CD system, and I use Renovatebot to look for updates to container tags (never pull latest). My workflow is this:

    1. Renovatebot runs once a night (at midnight) via Drone in a Docker container (I’ve written about this here). If a new image tag is found, it opens a PR in Gitea.
    2. I manually log in to Gitea and approve the PR.
    3. The PR approval (merging to master) kicks off a Drone workflow that does the following:
      • Runs an Alpine Linux container
      • SSHes from the Alpine Linux container into my Docker host
      • Runs a script (on the Docker host) that basically runs git pull, then docker compose -f "$D" pull and then docker compose -f "$D" up -d.
      • If there is a failure, Drone emails me
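
    The script on the Docker host boils down to something like this (the repo path is illustrative; the real script is in the linked post):

    ```shell
    # Sketch of the on-host deploy script: pull the repo, then pull and
    # restart each stack. Any failure returns nonzero so Drone can email me.
    deploy_stacks() {
      cd /opt/stacks || return 1
      git pull || return 1
      for D in */docker-compose.yml; do
        docker compose -f "$D" pull || return 1
        docker compose -f "$D" up -d || return 1
      done
    }
    ```

    Drone runs this over SSH from the Alpine container; a nonzero exit status is what triggers the failure email.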

    I’ve written about step 3 here.

    This means I never manually update Docker Compose files, I let Renovate manage everything, I approve PRs, then I walk away and let the scripts run.

    I also run a single-node K3s cluster whose manifests are hosted on GitHub. Again, I use Renovate to open PRs, and I run Flux to watch for changes to master, which then redeploys applications.