Now that I can check out files, edit them, and check them back in, the last step is syncing the files to the target server or servers. I’m trying to eliminate the extra static files by putting them into the repo, rather than having them be a second area to manage. Part of the problem is other teams: we want them to be able to manage files without having to log in to the git server and manually touch the old static files.
It works pretty much the same as the previous configuration.
Copy the unixsvc public key to the target server(s).
Set up a script that does a git pull and checks the exit code.
If there are changes, use rsync to sync the data across.
Simple enough script. Set up a cron job to run it every minute and the target server(s) will always be up to date.
I need to test the heck out of this to make sure it works as expected, add the other projects (minus the inventory and status ones, since they’re websites), and finish documenting it so I can enable it at work.
Next up, GitLab and Jenkins. Let’s try this through a web interface using “normal” DevOps tooling.
git. I never did “git” it.
The side client I work for uses SVN; with SVN Workbench, it’s pretty easy to do what I need to – although I’m probably doing it wrong anyhow! At least I can keep source to all the versions of the code we support.
I am learning to be a developer backup for a .NET project at work (a C# MVC project), and we use git with the GitKraken client to sync files. Not too bad, but…
I could never do git from the command line. I guess my development needs are just me controlling my code, and I’ve never had to learn it.
What projects are you working on that require source code control?
Oops – just found the post that describes how you’re using git…
I had suggestions from the guys to use SVN as well, but it appears that the DevOps side is more focused on git. The guys at work on the Dev/Eng side are using git to manage scripts, along with Ansible to push stuff out. It just seemed like git would be a better choice based on the direction the company was going, Dev/Eng-wise.