Overview
This article provides instructions for installing and configuring ArgoCD in Kubernetes.
Installation
The main reason for this task is that OpenShift uses ArgoCD, so we should be familiar with how ArgoCD works.
Images
Installation-wise, it's pretty easy, but there are a couple of changes you'll need to make. First, review the install.yaml file to see which images will be loaded. Bring them into the local repository following those instructions, then update the install.yaml file to point to the local repository.
Next, make sure the imagePullPolicy is set to Always for security reasons. That's also one of the reasons we host the images locally: so we're not constantly pulling from the internet.
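As a rough sketch, assuming a hypothetical local registry at registry.internal.pri:5000, each container entry in install.yaml would end up looking something like this, with only the registry portion of the image changed:

      containers:
      - name: argocd-server
        # registry.internal.pri:5000 is a hypothetical local registry;
        # keep the image name and tag that install.yaml already references
        image: registry.internal.pri:5000/argoproj/argocd:<existing tag>
        imagePullPolicy: Always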
Private Repository
In order to access our private GitLab server and private projects, we'll want to create an SSH public/private key pair. When prompted for a passphrase, simply press Enter for passwordless access. Note that you'll want to save the key pair somewhere safe in case you need to use it again. For ArgoCD, you'll be creating application tiles and repository entries for each project.
ssh-keygen -t rsa
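Optionally, you can give the key its own file name and a comment so it's easy to identify later; the path and comment here are just examples.

ssh-keygen -t rsa -b 4096 -f ~/.ssh/argocd_rsa -C "ArgoCD"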
Next, in GitLab, access Settings, then SSH Keys, and add your new public key. I called mine ArgoCD so I knew which one to manage.
You'll also need to add an entry in ArgoCD under Settings, Repository Certificates and Known Hosts. Since I have several repos on my GitLab server, I simply logged into my bldr0cuomgit1 server, copied the single line for the GitLab server from the known_hosts file, then clicked the Add SSH Known Hosts button and added it to the list. If you don't do this, you'll get a known_hosts error when ArgoCD tries to connect to the repo. You can check the Skip server verification box when creating a connection to bypass this, however it's not secure.
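If you don't have a known_hosts file handy to copy from, ssh-keyscan prints the same style of entries for a host, and its output can be pasted into that dialog; the hostname here is the GitLab server from the repository URL below.

ssh-keyscan lnmt1cuomgitlab.internal.pri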
Next, in ArgoCD under Settings, Repositories, you'll be creating a connection to the repository for the project. Here's the information I entered for my llamas installation.
Name: GitOps Repo
Project: gitops
URL: git@lnmt1cuomgitlab.internal.pri:external-unix/gitops.git
SSH private key data: [ssh private key]
Click Connect and you should get a 'Successful' response for the repo.
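For reference, the same repository connection can also be made with the argocd CLI once you're logged in (see the Command Line Interface section below); the key path here is just an example of wherever you saved the private key.

argocd repo add git@lnmt1cuomgitlab.internal.pri:external-unix/gitops.git \
  --name "GitOps Repo" \
  --ssh-private-key-path ~/.ssh/argocd_rsa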
TLS Update
Per the troubleshooting section below, update the argocd-cmd-params-cm ConfigMap to add server.insecure: "true" under the data section. This ensures ArgoCD works with the haproxy-ingress controller.
Installation
Once that's done, create the argocd namespace file, argocd.yaml, then apply it.
apiVersion: v1
kind: Namespace
metadata:
  name: argocd
kubectl apply -f argocd.yaml
Now that the namespace is created, create the ArgoCD installation by applying the install.yaml file.
kubectl create -f install.yaml
It'll take a few minutes for everything to start, but once it's up, it's all available.
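You can keep an eye on things while they start, and grab the initial admin password that recent ArgoCD releases store in the argocd-initial-admin-secret secret:

# watch the pods come up
kubectl get pods -n argocd
# recent ArgoCD releases store the initial admin password in a secret
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d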
In order to access the user interface, you'll need to create an argocd.dev.internal.pri alias that points to the HAProxy load balancer. In addition, you'll need to apply the ingress.yaml file so the UI is reachable.
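For reference, here's a minimal sketch of what that ingress.yaml might look like with haproxy-ingress; the ingressClassName and the plain HTTP backend port are assumptions that go along with the server.insecure setting above, while the argocd-server service name and port 80 come from the standard install manifest.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server
  namespace: argocd
spec:
  ingressClassName: haproxy
  rules:
  - host: argocd.dev.internal.pri
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              number: 80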
kubectl apply -f ingress.yaml
Command Line Interface
Make sure you pull down the argocd binary, which gives you CLI access to the ArgoCD server.
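One way to pull it down on Linux is straight from the project's GitHub releases; adjust the file name if you're not on amd64.

curl -sSL -o argocd https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
chmod +x argocd
sudo mv argocd /usr/local/bin/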
Troubleshooting
After getting the haproxy-ingress controller installed and running, adding an ingress route to ArgoCD was failing. The route applied successfully, however I was getting the following error from the argocd.dev.internal.pri website I'd configured:
The page isn’t redirecting properly
A quick search found the TLS issue mentioned in the bug report (see References), which sent me over to the Multiple Ingress Objects page. At the end of the linked block of information was this paragraph:
The API server should then be run with TLS disabled. Edit the argocd-server deployment to add the --insecure flag to the argocd-server command, or simply set server.insecure: "true" in the argocd-cmd-params-cm ConfigMap.
That referred me to the ConfigMap page, and I made the following update on the fly (we'll need to fix it in the GitOps repo though).
kubectl edit configmap argocd-cmd-params-cm -n argocd
That brought up a very minimal ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: argocd-cmd-params-cm
    app.kubernetes.io/part-of: argocd
  name: argocd-cmd-params-cm
  namespace: argocd
I made the following change and restarted the argocd-server deployment, and now I have access to both the UI and the argocd CLI. Make sure true is in quotes though, or you'll get an error.
apiVersion: v1
data:
  server.insecure: "true"
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: argocd-cmd-params-cm
    app.kubernetes.io/part-of: argocd
  name: argocd-cmd-params-cm
  namespace: argocd
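To restart argocd-server so it picks up the new setting, a rollout restart of the deployment works (assuming the deployment name from the standard install.yaml):

kubectl -n argocd rollout restart deployment argocd-server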
External Clusters
I want to be able to use ArgoCD on the main cluster to push updates to the remote clusters; basically, manage the llamas band website from one location. To do this, I need to connect the clusters together: log in to the main ArgoCD cluster with the command line tool, argocd, make sure the working area has access to all the clusters in the .kube/config file, and finally use the argocd CLI to connect to the clusters.
The main thing I find with many articles is that they assume information. While I've provided links to where I found things, here I provide the extra details that may have been left out of the linked articles.
Login to ArgoCD
Logging into the main dev ArgoCD environment is pretty easy in general. I had a few problems, but with some help I eventually got logged in. The main thing was figuring out which flags were needed; it took several tries and a better understanding of what I was actually connecting to before I got in.
First off, I had to realize that I should be logging in via the ArgoCD ingress URL, in my case argocd.dev.internal.pri. I still had a few issues and ultimately got the following error:
$ argocd login argocd.dev.internal.pri --skip-test-tls --grpc-web
Username: admin
Password:
FATA[0003] rpc error: code = Unknown desc = Post "https://argocd.dev.internal.pri:443/session.SessionService/Create": x509: certificate is valid for ingress.local, not argocd.dev.internal.pri
I posted a call for help as I was having trouble locating a solution, and eventually someone took pity and provided the answer: the --insecure flag. Since I was already using --skip-test-tls, I didn't even think to see if there was such a flag. And it worked.
$ argocd login argocd.dev.internal.pri --skip-test-tls --grpc-web --insecure
Username: admin
Password:
'admin:login' logged in successfully
Context 'argocd.dev.internal.pri' updated
Merge Kubeconfig
Next, in order for argocd to have sufficient access to the other clusters, you need to merge the cluster configuration files into a single config. You might want to create a service account with admin privileges to keep this access separate from the kubernetes-admin account; since this is my homelab, for now I'm simply using the kubernetes-admin account.
One problem though: in the .kube/config file, the authinfo name is the same for each cluster, kubernetes-admin. But since it's just a label referenced by the context (kubernetes-admin@bldr, for example), you can change each label to get a unique authinfo entry. Back up all files before working on them, of course.
$ kubectl config get-contexts
CURRENT   NAME                    CLUSTER   AUTHINFO           NAMESPACE
          kubernetes-admin@bldr   bldr      kubernetes-admin
          kubernetes-admin@cabo   cabo      kubernetes-admin
          kubernetes-admin@lnmt   lnmt      kubernetes-admin
          kubernetes-admin@tato   tato      kubernetes-admin
If you do the merge as-is, there'll be just one set of credentials named kubernetes-admin and you won't be able to access the other clusters. What I did was change the label in each individual config file before merging them together. Under contexts, change the user to kubernetes-bldr (and similarly for each cluster):
contexts:
- context:
    cluster: bldr
    user: kubernetes-bldr
  name: kubernetes-admin@bldr
And in the users section, also change the name to match.
users:
- name: kubernetes-bldr
With the names changed, you can now merge the files together. I've named mine after each of the clusters, so I have bldr, cabo, tato, and lnmt. If your files are in a different location, add the path to each file.
export KUBECONFIG=bldr:cabo:tato:lnmt
And then merge them into a single file.
kubectl config view --flatten > all-in-one.yaml
Check the file to make sure it at least looks correct, copy it to .kube/config, and then check the contexts.
$ kubectl config get-contexts
CURRENT   NAME                    CLUSTER   AUTHINFO           NAMESPACE
          kubernetes-admin@bldr   bldr      kubernetes-bldr
          kubernetes-admin@cabo   cabo      kubernetes-cabo
          kubernetes-admin@lnmt   lnmt      kubernetes-lnmt
          kubernetes-admin@tato   tato      kubernetes-tato
See, the AUTHINFO entries are all different now. Change contexts to one of the other clusters and check access, as shown below. Once it's all working, you should be able to add them to ArgoCD.
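For example, to switch to one of the other clusters and verify access (the context names are from the get-contexts output above):

kubectl config use-context kubernetes-admin@cabo
kubectl get nodes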
Cluster Add
Now to the heart of the task: adding the remote clusters to ArgoCD. Since we're logged in and have access to all the clusters from a single .kube/config file, we can add each one.
$ argocd cluster add kubernetes-admin@cabo --name cabo0cuomvip1.qa.internal.pri
WARNING: This will create a service account `argocd-manager`
on the cluster referenced by context `kubernetes-admin@cabo`
with full cluster level privileges. Do you want to continue [y/N]? y
INFO[0005] ServiceAccount "argocd-manager" created in namespace "kube-system"
INFO[0005] ClusterRole "argocd-manager-role" created
INFO[0005] ClusterRoleBinding "argocd-manager-role-binding" created
INFO[0010] Created bearer token secret for ServiceAccount "argocd-manager"
Cluster 'https://cabo0cuomvip1.qa.internal.pri:6443' added
And it's added. Check the GUI under Settings, Clusters, and you should see it there.
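You can also confirm from the argocd CLI:

argocd cluster list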