Overview
This is the third wave. The initial article referenced the different waves; as I progress, each wave reflects the priorities for getting the environment back up.
For example, I spent a lot of time getting the Inventory back up first, since that’s where all my Ansible inventory hosts files are located. That let me add the appropriate tags and rebuild the hosts files so I wasn’t doing any steps manually.
Gitlab
Next up is getting Gitlab Enterprise Edition installed. I did perform a backup of my old system; however, I realized it was on Gitlab EE 15 and current is Gitlab EE 18. When I tried to locate the v15 software, it turned out it’s so old that it’s no longer available.
Keep in mind that the whole point of git is that everything involved in a project lives on every contributor’s computer. So I made the decision to just install the current Gitlab EE v18 and simply recreate all the Projects.
There are some 45 Projects spread across four systems, but again, they’re all complete.
What’s the loss then? Mainly the Pull Request history itself. When you commit a change to your code, you probably created a short log entry, but the Pull Request is handled by the overall tool, Gitlab in this case: you can provide a more detailed description of the change, see a graphical representation of the branches, and see who approved the Pull Request. There are other pieces of a more production-like environment that are also lost.
In my case though, it’s just me learning how CI/CD works, and the effort to locate the v15 software and then perform the upgrades is a bit unnecessary. So I recreated all the git projects I’d been working on and loaded them into Gitlab.
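Since every clone is a complete copy of the repository, recreating a Project is mostly a matter of pointing the local working copy at the new server and pushing. A rough sketch, where the hostname and group/project path are made up for illustration:

```shell
# Point the existing working copy at the new Gitlab server
# (hostname and group/project path are hypothetical)
cd ~/projects/my-project
git remote set-url origin git@gitlab.example.com:mygroup/my-project.git

# Push all branches and tags into the freshly created, empty Project
git push --all origin
git push --tags origin
```

Repeat for each Project; the history, branches, and tags all come across since they live in every clone.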
Next up is getting Gitlab Runners in place and installing Jenkins and the two Jenkins agents. Then I can configure the actual CI/CD process.
Jenkins
Well shoot, restoring Jenkins worked perfectly. I followed the instructions over on the Jenkins.io site.
Add the repository. Install fontconfig and java-21-openjdk. Install Jenkins. Restore Jenkins home directory in /var/lib/jenkins. Fix the firewall. Start Jenkins. Access Website.
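As a sketch, those steps look roughly like this on a RHEL-style system; the repository URLs should be verified against the current Jenkins.io instructions, and the backup filename is hypothetical:

```shell
# Add the Jenkins repository and signing key (verify URLs against jenkins.io)
sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io-2023.key

# Install the dependencies and Jenkins itself
sudo dnf install -y fontconfig java-21-openjdk jenkins

# Restore the backed-up home directory (backup filename is hypothetical)
sudo tar -xzf jenkins-home-backup.tar.gz -C /var/lib/jenkins
sudo chown -R jenkins:jenkins /var/lib/jenkins

# Fix the firewall, then start Jenkins and browse to port 8080
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --reload
sudo systemctl enable --now jenkins
```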
Since I was running a version 300 Jenkins and this is a version 500, there are a bunch of deprecated plugins and other plugins that needed to be updated. I really do need to pop out and clean up the plugins. Another day.
Anyway, it just works! Now to do the two agent servers.
Jenkins Agents
These were actually pretty easy. Create the Jenkins account, home directory in /var/lib/jenkins. Make sure the directory is properly owned. Restore the backed up Jenkins home directories. And done!
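On each agent, those steps sketch out to something like this (the backup filename is hypothetical):

```shell
# Create the jenkins account with its home directory in /var/lib/jenkins
sudo useradd --home-dir /var/lib/jenkins --create-home jenkins

# Restore the backed-up agent home directory (filename is hypothetical)
sudo tar -xzf jenkins-agent-home.tar.gz -C /var/lib/jenkins

# Make sure the directory is properly owned by jenkins:jenkins --
# a restore run as root can leave everything owned by root:root
sudo chown -R jenkins:jenkins /var/lib/jenkins
```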
The main thing I needed to do was make sure the Jenkins Controller had ssh access to the two Jenkins Agents. Since it was a restore, all the information was already there.
The problem, though, was that the home directories weren’t owned by jenkins:jenkins; they were owned by root:root, so the Agents weren’t connecting. I deleted and then added in new keys, but it still didn’t work. The docs were talking about agent.jar but I had remoting.jar, and when I tried renaming it I got a permission denied error. It was a puzzle until I realized the jenkins account home directory itself was owned by root:root; changing the ownership to jenkins:jenkins fixed it.
Next up, connect Jenkins with the brand new Gitlab server.
SSH Keys
Since this is a brand new Gitlab server, some of the original settings are missing. Jenkins, on the other hand, is an actual restore, so the credentials are already configured and associated with the various Jenkins jobs.
But the process is simple enough. Instead of a personal or project token, I simply log into the Jenkins account on the Controller, change to the .ssh directory, and capture the id_rsa.pub key information. This gets added to my Gitlab account’s list of SSH keys.
And since I recreated the keys, I updated the credentials in Jenkins I was already using with the new id_rsa private key, and the Jenkins jobs were able to successfully access Gitlab.
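Capturing the key and verifying the connection looks roughly like this; the Gitlab hostname is made up for illustration:

```shell
# On the Jenkins Controller, print the jenkins account's public key
sudo -u jenkins cat /var/lib/jenkins/.ssh/id_rsa.pub

# Paste that output into Gitlab under Preferences -> SSH Keys, then
# verify the jenkins account can reach the server (hostname is hypothetical)
sudo -u jenkins ssh -T git@gitlab.example.com
```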
Remote Server
One of the things I need to do is enable various systems such as the Jenkins account to access the remote server. But because the remote server is still rather old, I’m getting the following error message when attempting to connect:
Unable to negotiate with [remote server] port 22: no matching host key type found. Their offer: ssh-rsa,ssh-dss
On the Windows system, I can add the following to my .ssh/config file:
Host [hostname]
    User [account]
    HostKeyAlgorithms +ssh-rsa
    PubkeyAcceptedAlgorithms +ssh-rsa
While this works on Windows (and on my Mac laptop as well), on my Linux servers I’m getting the following error:
ssh_dispatch_run_fatal: Connection to [remote server] port 22: error in libcrypto
This is related to the crypto policies, which are located in the /etc/crypto-policies directory. The initial configuration in the config file is set to DEFAULT. Check the /usr/share/crypto-policies directory for a list of options:
$ ls -al
total 28
drwxr-xr-x. 9 root root 152 Jan 26 03:00 .
drwxr-xr-x. 125 root root 4096 Feb 13 21:01 ..
drwxr-xr-x. 6 root root 61 Oct 31 04:20 back-ends
drwxr-xr-x. 2 root root 4096 Jan 26 03:00 DEFAULT
-rw-r--r--. 1 root root 680 Sep 5 2025 default-config
drwxr-xr-x. 2 root root 4096 Jan 26 03:00 FIPS
drwxr-xr-x. 2 root root 4096 Jan 26 03:00 FUTURE
drwxr-xr-x. 2 root root 4096 Jan 26 03:00 LEGACY
drwxr-xr-x. 3 root root 109 Jan 26 03:00 policies
drwxr-xr-x. 5 root root 136 Jan 26 03:04 python
-rw-r--r--. 1 root root 167 Oct 31 04:20 reload-cmds.sh
You can use the update-crypto-policies command to modify the configuration, but be aware that it changes the policy for every application on the server, not just ssh. Since this is an internal environment, relaxing it isn’t critical; however, be aware of the hazards:
# update-crypto-policies --set DEFAULT:SHA1
Setting system policy to DEFAULT:SHA1
Note: System-wide crypto policies are applied on application start-up.
It is recommended to restart the system for the change of policies
to fully take place.
While it does say to reboot, the command does reload all pertinent applications. Rebooting just ensures a clean loading of the policies. Check the /etc/crypto-policies/config file and the /etc/crypto-policies/state/current file to verify the change.
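One way to double-check the change from the command line; --show reports the active policy from the state file:

```shell
# Show the active system-wide policy
update-crypto-policies --show

# The same information from the underlying files
cat /etc/crypto-policies/config
cat /etc/crypto-policies/state/current
```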