As a Unix Systems Administrator, I'm a long-time user of the Revision Control System (RCS) for managing configuration files. My first use was managing DNS zone files at NASA Headquarters. Over the past few years, I've been using RCS to manage my personal projects and work shell scripts. While I'm not the only one writing scripts and code, I believe I write the bulk of them. Anyway, I want to bring the team on board with managing scripts: adding theirs into revision control and making it easy for the team to manage their scripts and mine. Then everyone on the team, and any new team members, can benefit from managing each other's scripts.
To get myself set up, I need to come up with a git/RCS Rosetta Stone: not just commands but concepts. That means taking the hacks I currently use to make RCS work in a team environment and bringing the team on board with documentation they can understand and use. I'm doing this because I'm the one mainly using revision control for the scripts, and I want to keep the history of the projects. Honestly though, it's not critical to maintain the current history. Moving straight over to git with the existing files would also work fine, and if problems occur, that may be what happens. We'll see as we progress.
There are plenty of git books and documentation, but a Google search doesn't really turn up a tutorial for moving from managing files in RCS to managing files in git. And while RCS is pretty simple in general, I do have a few scripts that help manage my coding environment. Since it's RCS, there are additional hacks to make it work the way I want, which makes it more difficult for others on the team to manage.
Background and environment setup come first. Then a quick reference for the RCS commands I'm using, followed by the scripts and data files I wrap around the RCS commands, and a list of other scripts I use to manage the environment.
First off, you can poke at the various RCS books online without too much trouble to see what RCS is and how it works. In general, you have a directory of files. You can store revisions in the current directory, or create an RCS subdirectory and the RCS commands will store revisions there. I prefer the latter method as it keeps things cleaner when you're viewing directories.
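Under that scheme, a project directory ends up looking something like this (the project and file names here are just examples):

```
code/inventory/
    backup.sh           <- checked-out working copy
    RCS/
        backup.sh,v     <- revision history maintained by ci/co
```

The `,v` files are what RCS actually versions; the working copy can always be regenerated from them.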
* code – A set of directories that contain checked-out scripts and RCS subdirectories.
** [project] – The source code for that project
** make[project] – A script that creates the provisioning directory structure using the manifest and copying all the files from the static directory.
** manifest.[project] – A file that contains all the scripts that belong to the project. I use a couple of symbols to create directories and list what’s being done.
* Archived data – either code bits that aren't useful any more (in a [project] directory) or older data files. I like to maintain the data imports for historical purposes.
* static – The most current files that aren't code: spreadsheet .csv files for importing, pictures, data files, etc.
* stage – The provisioning staging area. The make[project] script copies the code and all the static data for the project into this directory under a [project] subdirectory.
** [project] – The staged files for the project.
** exclude.[project] – Files and directories that aren’t to be synchronized.
** sync[project] – The script that uses rsync to synchronize the directory structure (code and static data) out to the various servers as required.
* [project] – A project's working directory; an individual's working space for any scripts in that project.
* /var/www/html – My PHP working directory for three projects.
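Tying the directories together, a make[project] script roughly does the following. This is a toy sketch, not the real script: the directory names follow the layout above, but the project name and file contents are invented.

```shell
# Sketch of make[project]: copy the project's code and the current static
# data into stage/[project], then drop a flag file for the sync job.
root=$(mktemp -d)          # stands in for the real top-level directory
proj=inventory             # hypothetical project name

mkdir -p "$root/code/$proj" "$root/static" "$root/stage"
echo 'echo backup' > "$root/code/$proj/backup.sh"
echo 'id,name'     > "$root/static/import.csv"

mkdir -p "$root/stage/$proj"
cp "$root/code/$proj/"*.sh "$root/stage/$proj/"   # the project's scripts
cp "$root/static/"*.csv    "$root/stage/$proj/"   # current static data
touch "$root/stage/sync.$proj"                    # tells sync[project] to run
```

The real script also walks the manifest.[project] file to build the directory structure, which is omitted here.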
The sync[project] scripts are run every minute out of cron. They’re looking for a sync.[project] file which was created by the make[project] script.
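The flag-file check keeps the every-minute cron job cheap. A minimal sketch of that logic, with the rsync call shown as a comment since it needs a real target server (the paths and host are placeholders):

```shell
stage=$(mktemp -d)     # stands in for the real stage directory
proj=inventory         # hypothetical project name
mkdir -p "$stage/$proj"

sync_if_flagged() {
    [ -f "$stage/sync.$proj" ] || return 0   # nothing to do this minute
    rm -f "$stage/sync.$proj"                # consume the flag
    # rsync -az --delete --exclude-from="$stage/exclude.$proj" \
    #       "$stage/$proj/" "deploy@target:/usr/local/httpd/htsecure/"
    echo "synced $proj"
}

sync_if_flagged            # no flag yet: does nothing
touch "$stage/sync.$proj"
sync_if_flagged            # flag present: syncs and removes the flag
```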
The make[project] scripts are run every night at 1am. This ensures the scripts are up to date even if a sync wasn’t performed earlier.
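Together the two schedules look something like this in a crontab (the script paths are placeholders for wherever they actually live):

```
# Every minute: push staged projects that have a sync.[project] flag
* * * * *  /usr/local/bin/syncinventory
# Nightly at 1am: rebuild the staging area even if no sync was done that day
0 1 * * *  /usr/local/bin/makeinventory
```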
The RCS commands I use:
* co – With the -l(ock) option, check out the current revision, place it in the current directory, and lock the revision. This prevents others from updating the script.
* ci – With the -u(nlock) option, check in, don’t lock the script, and leave the original in place for further editing.
* rlog – Show the history of the script.
There are a lot more options and commands but as it’s just me, I haven’t needed to explore too far. These three commands do everything I need.
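Since the end goal is a git/RCS Rosetta Stone, here's a first sketch of where those three commands land in git, run in a throwaway repository. The file name and commit messages are invented for the demo.

```shell
set -e
demo=$(mktemp -d)
cd "$demo"
git init -q
git config user.email demo@example.com
git config user.name  Demo

echo 'echo hello' > backup.sh

# ci -u backup.sh  ->  git add + git commit (the working copy stays in place)
git add backup.sh
git commit -qm 'Initial revision'

# co -l backup.sh  ->  nothing to do; git has no locks, you just edit
echo 'echo hello world' > backup.sh
git commit -qam 'Expand the greeting'

# rlog backup.sh   ->  git log backup.sh
git log --oneline -- backup.sh
```

The big conceptual difference: RCS serializes edits with per-file locks, while git lets everyone edit concurrently and reconciles the results with merges, so the co -l discipline largely disappears.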
The script wrappers I use:
I have several scripts I use to manage the environment. They handle the checks I want done: that work is complete and that all files are included when provisioning.
* check – This is a wrapper around the various check scripts. It greps out the comments from each check script and then lists all the scripts.
* checkdiff – Compares the passed script name with the master script to show you the differences between the two.
* checkin – Runs a ‘checkdiff’ command to show the differences between the two scripts, then runs the RCS ci -u command. As ci doesn’t show differences, I wanted to be able to see what had changed so I could properly document the change.
* checkinstall – Runs through the working directory and returns a file name if a script exists in the working directory but not in the master code directory.
* checkmanifest – Parses the manifest file and reports any files that are checked out and being worked on in the working directory.
* checkout – Sees if there’s a difference between the master script and the current script. If so, it simply exits. Otherwise it checks out the script and copies it into your working directory.
* checkrw – Basically checks for any script in the code directory that has ‘rw’ permissions indicating it’s been checked out. I use it to make sure I haven’t missed any scripts when checking in several.
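The heart of the checkout wrapper is the safety check: never overwrite a working copy that already differs from the master. A hedged sketch of that logic, with all paths invented for the demo (the real script also runs co -l against the RCS subdirectory):

```shell
master=$(mktemp -d)    # stands in for the code directory
work=$(mktemp -d)      # stands in for a working directory
echo 'echo hello' > "$master/backup.sh"

checkout_safe() {
    if [ -f "$work/$1" ] && ! diff -q "$master/$1" "$work/$1" >/dev/null; then
        echo "$1 differs from master; resolve before checking out" >&2
        return 1
    fi
    cp "$master/$1" "$work/$1"   # the real wrapper would run co -l here
    echo "checked out $1"
}

checkout_safe backup.sh                  # clean: copies the file
echo 'echo hacked' > "$work/backup.sh"
checkout_safe backup.sh || true          # dirty: refuses and exits
```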
In the code directory, I have a few scripts that help manage the code.
* findcount – This script creates a list of all files in the project and writes it to a countall file. It also counts lines of code, comments, etc., for statistical purposes.
** countall – The list of all files in the project.
** countall.backup – The make[project] script runs a diff against the countall and countall.backup files. If there’s a difference, the script exits without creating the staging directory. To correct, just copy the countall file over the countall.backup file.
* fixsettings – This script makes sure the settings.php configuration file exists in every directory.
* searchall – This script searches every file in the working directory for the passed keyword. Helpful when looking for all instances of a keyword.
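The countall/countall.backup pair acts as a tripwire. A sketch of that guard, using the file names from above in a throwaway project directory:

```shell
proj=$(mktemp -d)      # stands in for a project directory
printf 'bin/backup.sh\nbin/report.sh\n' > "$proj/countall"
cp "$proj/countall" "$proj/countall.backup"

countall_guard() {
    # 0 = file list unchanged (safe to stage), 1 = changed (needs review)
    diff -q "$1/countall" "$1/countall.backup" >/dev/null
}

countall_guard "$proj" && echo 'unchanged; staging can proceed'

echo 'bin/new.sh' >> "$proj/countall"    # a new file shows up in the project
countall_guard "$proj" || echo 'changed; cp countall countall.backup to accept'
```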
Where do the files go?
* Inventory – The inventory project goes to three servers in /usr/local/httpd/htsecure.
* PHP scripts – These go to four servers, including the Ansible server to build host files, in /usr/local/httpd/bin.
* Shell scripts – These scripts go to just the Jumpstart server and are then synced across all 1,200 servers.
* Kubernetes – These scripts go to each of the Kubernetes clusters in /var/tmp/kubernetes.
As you can see, there are several scripts and the environment is really configured for one person.
The goal is to document how to create your own working server using VirtualBox. One problem with a tool like Vagrant is that the work environment doesn't permit access to outside sites without going through a proxy, so setting up an environment that can be used at work will be beneficial.
1. Create your working environment using VirtualBox. You need a working directory plus a web site for testing the two web projects.
2. Create a Source Code Control git server. I’m using ssh to retrieve projects.
3. Pull the project code to your working server.
4. Check out project code.
5. Check in project code.
6. Push the project code back to the git server.
7. Provision project code to the various servers as listed above.
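Steps 2 through 6 can be sketched end to end. Here a local bare repository stands in for the real ssh-reachable git server, and every name and path is a placeholder:

```shell
set -e
base=$(mktemp -d)
mkdir -p "$base/server"
git init -q --bare -b main "$base/server/inventory.git"   # step 2: the server

git clone -q "$base/server/inventory.git" "$base/work/inventory"  # step 3: pull
cd "$base/work/inventory"
git symbolic-ref HEAD refs/heads/main    # make sure we're on the main branch
git config user.email you@example.com
git config user.name  You

echo 'echo backup' > backup.sh      # steps 4-5: edit, then "check in"
git add backup.sh
git commit -qm 'Add backup script'

git push -q origin main             # step 6: push back to the server
```

Step 7 stays exactly as it is today: the make[project] and sync[project] scripts don't care whether the files came out of RCS or a git working tree.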
But the documentation needs to reference the existing environment in order for the new commands and processes to be understood.