Presenting Homelabs To Potential Employers

A recent posting mentioned documenting your HomeLab on your resume to show you’re going above and beyond what your job requires; it shows initiative and interest. I know when we’re looking at candidates, we want to find people who have external geeky interests like a HomeLab. In some interviews, people have been asked to diagram their HomeLab and are dinged if they don’t have one or can’t document it on the fly.

But an alternative view was also posted: is there a work/life balance? Are you taking the time to do other things outside of tech? Is tech your sole hobby?

Personally, my main hobby is tabletop gaming: board games, war games, card games, dice games, and role playing games. In fact, it partially got me into computers. That, and I was a typesetter on a computerized typesetting machine back in 1982 or so. Typesetting a Dungeons & Dragons character sheet got me thinking about it, and then creating a Car Wars vehicle generation program in BASIC got me a job. But I also tour on a motorcycle, ski, snowshoe, and yes, learn new technologies with my HomeLab.

Since the mid-90’s, my home environment has consisted of at least a separate Firewall system (generally my previous desktop when I upgraded). Back in the early 00’s, I signed up for an external server hosted in Miami where I could put pictures and continue to maintain my Unix knowledge (an OpenBSD server, now a CentOS one).

A few years back I was gifted two Dell R710s that were being replaced at work. I’ve since installed vSphere to have a virtualized environment, with pfSense (firewall plus) as the first virtual server, and then a bunch of servers to isolate the things I do for fun and to learn new tech. Recently I signed up for the VMware Users Group (VMUG) and their Advantage program. This gives me access to the vCenter software and the Operations options to let me turn my two ESX servers into a cluster (mimicking work much better). We’re moving towards a Private Cloud using vRealize technology, so I have access to that too (not installed… yet).

I have a bunch of different servers: syslog, Spacewalk, Samba, a couple of development servers, MySQL, several web servers, a CI/CD pipeline being staged (Jenkins, Artifactory, Ansible, and GitLab), and I’m in the process of rebuilding my Kubernetes servers (3 masters/3 workers).

I’ve recently been tasked with taking ownership of the Kubernetes development process, in part due to my explorations of Kubernetes at home. I’m moving the server management scripts over to a GitLab server at work that I built, again in part due to me setting it up at home (I had Revision Control System (RCS) repositories at home and at work and have since moved to git).

It’s helping me get more familiar with the new DevOps ideas plus the CI/CD pipeline tools, Orchestration with Kubernetes, and Microservices with Docker.

The question though is, should this kind of information be better documented on a resume? Should there be a HomeLab section or a Learning section, something that shows your interest, motivation, and desire even if your job doesn’t require you to use these tools?

There’s also the idea that such things be documented in a Cover Letter. You see the position, supply your resume, and then in the Cover, call out your HomeLab. It might be a better place for some positions where you’re looking at a specific job. Over the years though, I’ve seen positions where you supply only your resume to a company database which is then mined when a position opens. Tough to supply a cover for a job you don’t know about.

It’s something to think about.

Posted in Computers | 2 Comments

If A Tree Falls In The Forest

I write scripts to better manage the Unix servers we’re responsible for. Shell scripts, perl scripts, whatever it takes to get the information needed to stay on top of the environment. Be proactive.

Generally scripts are written when a problem is discovered. As we’re responsible for the support lab and production servers (not the development or engineering labs) and since only production servers are monitored, there are many scripts that duplicate the functions of monitoring. Unlike production, we don’t need to immediately respond to a failed device in the support lab. But we do need to eventually respond. Scripts are then written to handle that.

The monitoring environment has its own unique issues as well. It’s a reactive configuration in general. You can configure things to provide warnings, but it can only warn or alert on issues it’s aware of. And there’s still the issue of monitoring not being available in the support lab.

I’ve been programming in one manner or another (BASIC, C, Perl, shell, PHP, etc.) since 1982 or so (Timex/Sinclair). One of the things I always tried to understand and correct were compile time warnings and errors. Of course errors needed to be taken care of, but to me, warnings weren’t to be ignored either. As such, my server management scripts are more like code than 6 or 7 lines of whatever commands are needed to accomplish the task: variables, comments, white space, indentation, etc. The Inventory program I wrote (Solaris/Apache/MySQL/PHP; the new one is LAMP) looks like this:

Creating list of files in the Code Repository
Blank Lines: 25687
Comments: 10121
Actual Code: 100775
Total Lines: 136946
Number of Scripts: 702

136,000 lines of code, of which about a quarter are blank lines or comments. The data gathering scripts alone are up to 11,000 or so lines across 72 scripts, with another 100 or so scripts that aren’t managed via a source code repository.
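For the curious, counts like these can be generated with a short script. Here’s a rough sketch of the idea (the repository path is a placeholder, and this isn’t the actual report code):

#!/bin/sh
# Rough sketch: count blank, comment, and code lines across a checked out copy.
REPO=/opt/projects    # placeholder path to the checked out repository

echo "Number of Scripts: $(find "$REPO" -type f | wc -l)"

find "$REPO" -type f -exec cat {} + | awk '
  /^[[:space:]]*$/ { blank++;   next }    # blank lines
  /^[[:space:]]*#/ { comment++; next }    # shell/perl style comments
                   { code++ }
  END {
    print "Blank Lines: " blank
    print "Comments: " comment
    print "Actual Code: " code
    print "Total Lines: " blank + comment + code
  }'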

The process generally consists of data captures on each server which are pulled to a central server; then reports are written and made available. One of the more common methods is a comparison file: a verified file compared to the current file. A difference greater than 0 means something’s changed and should be looked at. Since some of the data being gathered is pretty fluid, checking for every little issue would be daunting, so a comparison is used instead. This works pretty well overall, but of course there’s setup (every server data file has to be reviewed and confirmed) and the files have to be reviewed regularly rather than relying on an alert.
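As a minimal sketch of the comparison idea (the file names and paths here are placeholders, not the actual scripts):

#!/bin/sh
# Sketch: diff the verified (baseline) capture against the current capture
# and flag the server when anything has changed.
SERVER="$1"
VERIFIED="/opt/data/$SERVER/packages.verified"    # reviewed and confirmed copy
CURRENT="/opt/data/$SERVER/packages.current"      # pulled from the server today

if ! diff "$VERIFIED" "$CURRENT" > "/tmp/$SERVER.packages.diff"
then
  echo "$SERVER: capture differs from the verified copy, review /tmp/$SERVER.packages.diff"
fi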

Another tool is the log review process. Logs are pulled from each server to a central location, then processed and filtered to remove the noise (the filter file is editable, of course), and the final result is concatenated into a single log file which can then be reviewed for potential issues. In most cases the log entries are inconsequential. Noise. As such they are added to the filter file, which reduces the final log file size. This can become valuable in situations like the lab where monitoring isn’t installed, or where the team doesn’t want to get alerted (paged) but does want to be aware.
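The core of it is simple enough. A sketch, with the locations and filter file as assumptions:

#!/bin/sh
# Sketch: concatenate the pulled logs, strip anything matching the editable
# filter file, and leave a single file to review.
LOGDIR=/opt/logs/incoming         # assumed location of the pulled logs
FILTER=/opt/logs/etc/filters      # one noise pattern per line, edited over time
REPORT=/opt/logs/reports/$(date +%Y%m%d).log

cat "$LOGDIR"/*/messages | grep -v -f "$FILTER" > "$REPORT"

echo "$(wc -l < "$REPORT") log lines left to review in $REPORT"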

But there’s a question. Are the scripts actually valuable? Is the time spent reviewing the daily output greater than the time fixing problems should they occur? These things start off as good intentions but over time become a monolith of scripts and data. At what point do you step back and think, “there should be a better way”?

I got my answer when I was moved to a new team. In the past, I wrote the scripts to help, but if I’m the only one looking at the scripts, is it really helping the team? In moving to the new team, I still have access to the tools but I’m hands off. I can see an error that happened 18 months ago that hasn’t been corrected. I found a failed drive in the support lab 6 months later. I found a failed system fan that had died who knows how long ago.

There was even an attempt to use Nagios as a view into the environment, but there are so many issues that, again, working through them becomes overwhelming.

The newest process is to have a master script check quite a few things on individual servers and present that to admins who log in to work on other tasks. Reviewing that shows over 100,000 checks on 1,200 systems and about 23,000 things that need to be investigated and corrected.
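Presenting the results at login can be as simple as a profile.d snippet along these lines (the report file name is an assumption):

# Drop a summary into /etc/profile.d so admins see outstanding items at login.
cat > /etc/profile.d/checkreport.sh << 'EOF'
if [ -s /var/tmp/server-checks.report ]; then
  echo "Server checks: $(wc -l < /var/tmp/server-checks.report) item(s) need attention"
  echo "Details: /var/tmp/server-checks.report"
fi
EOF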

But is the problem that the scripts aren’t well known enough? Did I fail to transition the use of them? Certainly I’ve passed the knowledge along in email notifications (how the failure was determined) over time, and the scripts are documented internally as well as on the script wiki.

If a tree falls in the forest and no one is around, does it make a sound?

If a script reports an error and no one looks at the output, is there a problem?

The question then becomes, how do I transfer control to the team? I’ve never said, “don’t touch” and have always said, “make changes, feel free” but I suspect there’s still hesitation to make changes to the scripts I’ve created.

The second question is more about whether the scripts are useful. Just because I found a use for them doesn’t mean the signal-to-noise ratio is worth it, or that the time to review beats the time to research and resolve.

Finally, if the scripts are useful but the resulting data is unwieldy, what’s a more useful way of presenting the information? The individual server check scripts seem to be a better solution than a centralized master report with 23,000+ lines, but a review shows limited use of the login report.

Is it time for a meeting to review the scripts? Time to see whether they’re used, whether they’re valuable, and whether they can be trimmed down? Or is there just so much work to manage that the scripts, while useful, can’t be addressed due to the reduction in staff (3 admins for 1,200 systems)?

Posted in Computers | 4 Comments

Yearly State of the Game Room 2017

And here we are again. The yearly State of the Game Room. With the move to a new place, I got to move all the games into a single room which is a positive thing and lets me display them all rather than jumping between a couple of rooms to locate a game. I also have a lot better lighting in this room making gaming a lot easier.

I’ve added a good 6 Kallax shelf squares of games over the year. Bunny Kingdom and The Thing are the two we enjoyed the most and the ones we’ll be taking to the end of year bash. Another one we played was The Others. It was a pretty interesting game with lots of bits, but it ran a bit long a couple of times. I can do two or three hours of a game before I start wanting to end it.

Here’s the list of games I can remember adding this year, from looking at the shelves. I’m sure I missed a couple, especially since we moved this year. I do have an inventory system so I can keep track, but I haven’t been keeping up with it. I’m working on getting it updated over the next few weeks or so. Maybe I’ll have a better list for next year 🙂

* 7th Sea RPG
* 7 Wonders: Duel Pantheon
* Apocalypse Prevention, Inc RPG
* Arkham Horror the Card Game
* Blood Bowl
* Bottom of the 9th
* Bunny Kingdom
* a few Call of Cthulhu books
* Castles of Burgundy Card Game
* Charterstone
* Clank
* Conan RPG
* Conan Board Game
* Dark Souls
* DC Deck Building Multiverse Box
* Dead of Winter Warring Colonies
* Dice Forge
* a few Dungeons & Dragons books
* Elder Sign expansions
* Eldritch Horror expansions
* Evolution Climate
* Exit
* Fallout
* Fate of the Elder Gods
* Feast For Odin
* First Martians
* Five Tribes expansions
* Fragged RPG
* Gaia Project
* Jaipur
* Jump Drive
* Kemet
* Kingdom Builder
* Kingdomino
* Kingsport expansion
* Legendary expansion
* Lovecraft Letters
* Magic Maze
* Mansions of Madness 2nd Edition expansions
* Massive Darkness
* Mountains of Madness
* My Little Pony RPG
* Netrunner packs
* New Angeles
* The Others
* Pandemic Second Season
* Pandemic Iberia
* Pandemic Rising Tide
* Paranoia RPG (Kickstarter)
* a few Pathfinder books
* Queendomino
* a couple of Red Dragon Inn expansions
* a few Savage Worlds books
* Scythe expansions
* Secret Hitler
* a few Shadowrun 5th books
* Sherlock Holmes expansions
* Cities of Splendor
* Talon
* Terraforming Mars
* The Thing
* Ticket to Ride Germany
* Time Stories
* Twilight Imperium
* Via Nebula
* Villages of Valeria
* Whistle Stop
* Whitechapel expansion
* Whitehall
* a few X-Wing minis.

After getting things organized and making room for the next year, I have pictures.







Posted in Gaming | Leave a comment

‘If You Were Smart, You’d Be In Engineering’

This is a comment I heard a few years back from a manager on the Engineering side of the house. Of course it’s stated in a bad way, but the idea behind it is that to advance in technology, your next step should be in Engineering (or Development) and you should be working towards that.

As a Systems Admin for many, many years, I’ve found that I’m good at the job: managing systems and providing solutions in Operations to improve the environment. It also plays to my own strengths and weaknesses. Troubleshooting problems, fixing them, and moving on suits me better than spending all my time on a single product as a programmer or engineer. I’ve had discussions with prior managers about advancement both in Engineering and in Management. I’ve even attended Management Training. After discussion, it was felt that I should work with my strengths.

A few years back, I was moved into an Operations Engineer role. The company started implementing a Plan, Build, Run model and I was moved into the Build team. No real reason was given for the choices. The concept was to move a portion of the team away from day to day maintenance duties in order to focus on Business related Product deployments. It was taking too long for the entire team to get Products into the field, what with being on call and dealing with maintenance issues like failing hardware, so the idea was to pull three people away from the team to be dedicated to Business related Projects.

Sadly, there was no realization that the delays were partly due to multiple groups in Operations having their own priorities; they were more due to the inability of the groups to fully align on a project. With three business groups, each with their own priorities, the time to deploy really didn’t change much and there were fewer people to do the work.

As we moved forward, the company redefined the Build role. Titles changed and we’re now Enterprise Systems Engineers. One of the new goals was to move away from server builds and into server automation: build one or two servers for the project in order to create the automation, then turn the rest of the project over to the Run team as a turnkey solution to complete.

In addition, a new project came along and was designed to follow the Continuous Integration/Continuous Delivery (CI/CD) process. The concept is to deploy software using an incremental process vs the current 6 month to 18 month process.

The current process is to gather together all the change requests that identify problems in the product plus the product enhancement requests and spend time creating the upgraded product. It can take months to get this assembled, get code written and tested, fix problems that crop up, get it through QA testing, and then into Operations hands. And as noted before, deployment itself can take several months due to other projects on my plate and tasks on other groups plates.

The CI/CD process means that there’s a pipeline between Development and Operations. A flow; picture a stream. A problem is discovered, it’s passed back to development, they fix it, put it into the flow, automated testing is performed to make sure it doesn’t break anything, it goes into the production support environment, then production. Rather than taking months to assemble and fix a large release, you have a single minor release and you can identify, fix, and deploy it into production in a matter of minutes. Even better, the Developer or Engineer would be on call for their product. If something happened, they’d be paged along with the teams, could potentially fix it right then, put it into the flow, and let the fix get deployed. Automated testing is key here though, and part of any fix is to fix or add tests to make sure the problem doesn’t happen again, even with other products.
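Conceptually, each stage is just a gate and a change only moves forward when the previous stage passes. A skeletal sketch of the flow (the stage scripts are placeholders; in practice Jenkins or GitLab drives this rather than a shell wrapper):

#!/bin/sh
# Skeletal pipeline sketch: stop at the first failing stage.
set -e

./build.sh                    # compile / package the change
./run_tests.sh                # automated testing gate
./deploy.sh support           # production support environment first
./run_smoke_tests.sh support
./deploy.sh production        # then on to production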

This process is pretty similar to how a single programmer working on a personal project might manage their programs. I have a project I’ve been working on for years that has reached about 140,000 lines of code (actually about 98,000 lines of actual program, with the remaining 42,000 lines being comments or blank lines). I’ve created a personal pipeline, though without automated testing. An identified problem can be fixed and deployed in a few minutes. New requests generally take about 30 minutes to complete and push out to the production locations.

This CI/CD oriented project was interesting. Plus it included creating a Kubernetes environment. I’d had someone comment about DevOps several years earlier so I’d been poking about at articles and even reading The Phoenix Project, a book about CI/CD and DevOps. A second project followed using CI/CD and Kubernetes. A part of the CI/CD process was utilizing Ansible and Ansible Tower, a tool I’ve been interested in using for a few years.

As a Systems Administrator, one of our main responsibilities is to automate. If you’re doing a task more than once, automate it. Write a script. I started out hobby programming, then working part time and then full time, back in the early 80’s. Even though I moved into system administration, I’ve been hobby programming and writing programs to help with work ever since. I believe it’s been helpful in my career as a Server Administrator. I have deeper knowledge about how programs work and can write more effective scripts to manage the environment.

In the new Build environment, it’s been even more helpful. It lets me focus even more on learning automation techniques with cloud computing (I’ve created Amazon Web Services and Google Cloud Services accounts) and working towards automating server builds. Years back I started creating “Gold Image” virtual machine templates. I also started in on scripting server installations. The rest of the team has jumped in and we have an even better installation script framework. Lately I’ve been working on a server validation script. For the almost 1,000 systems we manage, the script is reporting on 85,000 successful server checks with errors called out that can be corrected incrementally by the team (a local login report plus a central reporting structure). As part of this, I’ve started creating Ansible Playbooks and added scripts to our Script framework to make changes to systems. Eventually we’ll be able to use this script to create an automation process to make sure servers are built correctly the first time.
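The validation script boils down to a long list of small checks. A sketch of the shape of it (these particular checks are examples, not the real list):

#!/bin/sh
# Sketch of individual validation checks reporting PASS/FAIL lines that the
# local login report and central report can then total up.
check() {
  desc="$1"
  shift
  if "$@" > /dev/null 2>&1; then
    echo "PASS: $desc"
  else
    echo "FAIL: $desc"
  fi
}

check "ntpd is running"               pgrep -x ntpd
check "postfix is enabled at boot"    systemctl is-enabled postfix
check "resolv.conf has a nameserver"  grep -q nameserver /etc/resolv.conf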

It’s been fun. It’s a challenge and sort of a new way of doing things. I say “sort of” because it still falls into the “automate everything” aspect of being a Systems Admin. With Virtual Machines and Docker Containers, we should be able to further automate server builds. With scripts defining how the environment should look, we should be able to create an automatic build process. We should also be able to create a configuration management environment: a central location defining a server configuration that ensures the server configuration stays stable. You make a change on the server and the configuration script changes it back to match the defined configuration. Changes have to be made centrally.
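A sketch of that idea only, since a tool like Ansible would normally handle it; the central definition path is an assumption:

#!/bin/sh
# Sketch: if a managed file drifts from the centrally defined copy, put it back.
MASTER=/opt/config/master     # central definitions, assumed synced to the server

for file in ntp.conf rsyslog.conf
do
  if ! cmp -s "$MASTER/$file" "/etc/$file"; then
    echo "Restoring /etc/$file to the defined configuration"
    cp "$MASTER/$file" "/etc/$file"
  fi
done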

DevOps though. It seems like this is really “Development learns how to do Operations”. “If you were smart, you’d be in Engineering.” Many of the articles I read, many of the posts on the various forums, many of the discussions I’ve followed on Youtube and in person really seem to be focused on Development becoming Operations. There doesn’t seem to be a lot of work on how to make Operations part of the process. It seems to be assumed that a Systems Admin will move into Engineering or Development or will move on to an environment that still uses the old methods of managing servers. Alternately we’ll be more like COBOL programmers. A niche group that gets called in for Y2K like problems.

I like writing code and I’m certainly pursuing this knowledge. I have a couple of ESX servers at home with almost 100 VMs to test Kubernetes/Docker, work on how to automate server deployments, and work on pipeline products like Jenkins and GitLab. I use them for my own sites in an effort to get knowledgeable about such a role and am starting to use them at work. Not DevOps, as that’s not a position or role; it’s a philosophy of working together. A “DevOps Engineer” is more along the lines of an Engineer working on pipeline products: working to get Product automated testing in place, but also working on getting Operations related automated testing in place.

If you’re not taking time to learn this on your own and the company isn’t working on getting you knowledgeable, then you’re going to fall by the wayside.

Posted in Computers | Tagged , , , , , , , , | Leave a comment

git Version Control for rcs Users – Setting up gitlab

Next up is to install Gitlab. The problem will be the inability to access the ’net from work, so a manual install will need to be done. We’ll see how that goes.

Gitlab is pretty easy to install; just follow the website. I’m going to build the server, generate an RPM list and a file list, install gitlab, and then run a diff to see what changed. It may or may not help in setting up the gitlab server at work.

Comparing the before and after file and RPM listings, there were no differences prior to the install. Doing the installation per the website shows the following installation and dependencies:

Installing:
 gitlab-ce                                   x86_64                      10.1.3-ce.0.el7                       gitlab_gitlab-ce                      353 M
Installing for dependencies:
 audit-libs-python                           x86_64                      2.4.1-5.el7                           jumpstart                              69 k
 checkpolicy                                 x86_64                      2.1.12-6.el7                          jumpstart                             247 k
 libsemanage-python                          x86_64                      2.1.10-18.el7                         jumpstart                              94 k
 policycoreutils-python                      x86_64                      2.2.5-20.el7                          jumpstart                             435 k
 python-IPy                                  noarch                      0.75-6.el7                            jumpstart                              32 k
 setools-libs                                x86_64                      3.3.7-46.el7                          jumpstart                             485 k

For a local installation without access to the ‘net, as long as I install the above dependencies on the server and then the gitlab-ce rpm, I should be good.
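One way to stage that from a machine that can reach the ’net (yumdownloader comes from yum-utils; the destination directory is just an example, and the gitlab-ce repo needs to be configured on the downloading machine):

# On the connected machine, download the packages plus their dependencies.
yum install -y yum-utils
yumdownloader --resolve --destdir /tmp/gitlab-rpms \
    gitlab-ce audit-libs-python checkpolicy libsemanage-python \
    policycoreutils-python python-IPy setools-libs

# After copying /tmp/gitlab-rpms to the offline server:
yum localinstall -y /tmp/gitlab-rpms/*.rpm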

Rebuilding here in a sec to test my theory.

Confirmed. I installed the above packages and once done, installed the gitlab rpm as noted on the website:

export EXTERNAL_URL=http://lnmt1cuomgit1.internal.pri
rpm -ivh gitlab-ce-10.1.3-ce.0.el7.x86_64.rpm

Ready to be used…

Jenkins next…

Posted in Computers | Tagged , , , , | Leave a comment

git Version Control for rcs Users – Configuring gitlab and Jenkins Servers

This is intended to document the steps involved in moving from an RCS based environment to a git based environment. In previous posts, I showed the environment I’m currently working in and the progress to changing from using RCS to using git. I duplicated the process in installing code to the target servers so that I can now use git from the command line very much like I used rcs and my scripts. Now I want to move a bit farther into the devops side of things, learning about setting up an easier way for the team to manage the scripts and code, providing a method for external teams to access some of the code, and replacing my sync scripts with Jenkins.

I’ll need to duplicate my environment again, this time with even the same server names. Since I have an ESX host at home, I can set up several servers that effectively match my official environment.

* git server: lnmt1cuomgit1
* Jenkins server: lnmt1cuomjkns1
* Target Tool server: lnmt1cuomtool1
* Target Status server: lnmt1cuomstat1
* Target Inventory server: lnmt1cuominv1
* Worker server: This will be a local VirtualBox system where I can pull and test scripts and the two websites.

Again, this is partly to help me manage my personal sites better, but also to help with the work effort to get us more focused on Dev type tasks.

The source server will have both gitlab and Jenkins installed; since Jenkins uses port 8080 and gitlab uses port 80, they should be able to cohabitate. Update: and after testing, that’s a big nope. So, two different servers.

At least for the php site (Inventory), I’ll need to do a bit of rewriting as the settings.php file also contains the passwords to the database. Either the settings file needs to be replaced by a table or the passwords will need to be pulled from an external file outside of the site directory structure. Main bits right now though are the scripts the team actually uses. The Inventory has report scripts, data uploads, and of course the front end to the database which the scripts don’t touch.

Environment Specs:

Server           CPUs   RAM (GB)   Disk (GB)   /usr   /opt   /var
lnmt1cuomgit1      2       4          40          4     10      4
lnmt1cuomjkns1     2       4          40          4      8    grow
lnmt1cuomtool1     2       4          50         16      1     16
lnmt1cuomstat1     2       4          40          4      1    grow
lnmt1cuominv1      2       4          40          4      1    grow

On to gitlab…

Posted in Computers | Tagged , , , , , | Leave a comment

Cajun Shrimp and Salmon

Quick recipe.

Prep time: 10 mins
Cook time: 20 mins
Total time: 30 mins

Sweet and savory pan-seared salmon topped with sautéed shrimp in cajun butter sauce. Salmon New Orleans is an unforgettable 30 minute meal your family will crave!

Author: Tiffany
Recipe type: Main dish
Cuisine: American
Serves: 4

Ingredients
4 6-ounce salmon fillets
salt and pepper, to taste
1 pound large shrimp, peeled and de-veined
8 tablespoons butter, divided
2 tablespoons honey

cajun seasoning
½ teaspoon salt
1 teaspoon garlic powder
1 teaspoon paprika
¼ teaspoon pepper
½ teaspoon onion salt
½ teaspoon cayenne pepper
heaping ½ teaspoon dried oregano
¼ teaspoon crushed red pepper flakes

Instructions

In a small bowl stir together all cajun seasoning ingredients and set aside.

Season salmon with salt and pepper to taste. Melt 2 tablespoons butter in a large skillet over medium-high heat. Add honey and whisk to combine (mixture should be bubbly). Add salmon fillets to pan and cook for 5-6 minutes, then flip and cook another 7-8 minutes until salmon is cooked through, flaky, and browned. Transfer salmon to a platter and cover to keep warm.

Add remaining butter to the pan over medium-high heat. Once butter is melted, stir in cajun seasoning. Add shrimp to pan and saute until opaque, about 5-6 minutes.

Serve salmon topped with shrimp. Drizzle any extra pan sauce over the top and garnish with chopped parsley if desired. Serve immediately.

Shrimp

Fill a large pot with water. Bring water to a rolling boil and let it boil for 5 minutes. Drop shrimp in the boiling water. Once the shrimp turns pink and floats to the top it is cooked and ready to be removed from the pot. Use a metal strainer to remove the floating shrimp and transfer to a bowl.

Posted in Cooking | Leave a comment

Refinishing The Deck

The deck at the house is a bit dried out and aged looking. We’d been on the lookout for someone to do the siding and decks as protection is needed here in the mountains. We’re closer to the sun and elements. As we were passing one of the homes up here, we spotted one of the yard signs for Mountain Woodcare. I called them and had them come out and provide a quote on refreshing the siding and deck. The owner, Jeremy (“J” or “Jay”) came out and after a pretty thorough review, he suggested the siding didn’t need to be done right now but the decks could certainly use some TLC.

With the estimate and a quick discussion with Jeanne, we approved the work and they came out Monday last week to get started. We moved all the furniture and such off the deck so they could get right to work 🙂 Over the course of the week, they were out every day stripping the old paint off the rails and power washing everything with chemicals to get things nice and clean and ready for application of the stain and oil based sealant. There was even some sanding that was needed. By Friday they had it all done and it looked excellent. Of course I took before, during and after pictures because I like to be able to compare and see the improvements.

We’d discussed work on the decks with the previous owners and again with Jay. Jay said he recalled coming out to give an estimate, but there was no followup. Based on the looks of things, it’s been 3 or 4 years at least since any maintenance was done on the decks. Fortunately a bit of maintenance and TLC brought the decks back to beauty.

We have essentially four decks. A Kitchen deck, a Master Bedroom Deck, a lower deck that fronts the entire house, and an Entryway deck (upper by the garage, stairs down to the front door and a deck that wraps around to the MBR bathroom). I’ll present each as a block of Before, During, and After pictures.

And as a note Jay was nice enough to take a few extra minutes to power wash the Gazebo. We’ll hit it with some fresh paint this week.

Kitchen Deck – Before

We were trying to show how dry all the wood was before the team got started. All the Before pictures try to catch it at different times of the day so you can see the differences.

Kitchen Deck – During

You can see the wood railings have been stripped and the deck washed. See how the water soaks in to the wood?

Kitchen Deck – After

And After looks great. Jay recommended a bit of a tint to the oil vs a clear oil as UV protection. The wood is apparently Brazilian Redwood. The thing to note later is the beading up of the water on the freshly oiled deck. Looking good.

Master Bedroom Deck – Before

Master Bedroom Deck – During

Master Bedroom Deck – After

Entryway Deck – Before

Entryway Deck – During

Entryway Deck – After

Lower Deck – Before

Lower Deck – During

Lower Deck – After

Posted in Colorado, Deck Refinish, Home Improvement, Rocky Knob | Leave a comment

git Version Control for rcs Users – Synchronization

Now that I can check out files, edit them, and check them back in, the last step is syncing the files with the target server or servers. I’m trying to eliminate the extra static files by putting them into the repo rather than having them be a second area to manage. Part of the problem is other teams: we want them to be able to manage files without having to log in to the git server and manually touch the old static files.

It works pretty much the same as the previous configuration.

Copy the unixsvc public key to the target server(s).

Set up a script to do a pull and check for the error code.

If changes, use rsync to sync the data across.

Simple enough script. Set up a cron job to run every minute and the target server(s) will always be updated.
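Something like this minimal sketch; the paths and target server are placeholders, and rather than parsing the pull output it compares the revision before and after:

#!/bin/sh
# Minimal sync sketch: pull the working clone and rsync to the target
# only when the pull actually brought in changes.
REPO=/opt/git/staging/suite                         # working clone that gets pulled
TARGET=unixsvc@lnmt1cuomtool1:/opt/static/suite     # where the files need to land

cd "$REPO" || exit 1

BEFORE=$(git rev-parse HEAD)
git pull --quiet || exit 1
AFTER=$(git rev-parse HEAD)

if [ "$BEFORE" != "$AFTER" ]; then
  rsync -a --exclude '.git' ./ "$TARGET/"
fi

# Crontab entry to run it every minute:
# * * * * * /usr/local/bin/sync-suite.sh > /dev/null 2>&1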

Need to test the heck out of this to make sure it works as expected. Add the other projects, less the inventory and status ones (they’re websites). And finish documenting it so I can enable it at work.

Next up, gitlab and jenkins. Let’s try this through a web interface using “normal” DevOps.

Posted in Computers | Tagged , , , | 3 Comments

git Version Control for rcs Users – Setup and Usage

At least for someone like me where I’m the only person working on projects, the setup and usage of RCS and git are pretty straightforward. Once we get into team usage, it gets to be a bit more complicated. Right now the team can check out and check in a script but due to permissions, they aren’t able to sync the repositories. Fortunately the scripts do that every minute (checking for the flag file) but it’s a bit cumbersome.

Setup ssh git

There are a few bits that need to be done in order to get git set up.

1. Create the git user on the git server. Make sure you have sufficient space for all the code. I created a 30 gig slice in /opt/git and used it for git’s home directory.

useradd -c 'Git Service Account' -d /opt/git -m git
passwd git

2. You’ll need to add your public keys into git’s .ssh directory as ‘authorized_keys’. Do this for every system that will be pushing to or pulling files from the git server.
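For example (the git server name matches the one used later in this post; ssh-copy-id assumes password logins to the git account are still enabled):

# From each system that needs access, copy your public key to the git account.
ssh-copy-id git@lnmt1cuomgit1.internal.pri

# Or by hand on the git server itself:
cat id_rsa.pub >> /opt/git/.ssh/authorized_keys
chown git:git /opt/git/.ssh/authorized_keys
chmod 600 /opt/git/.ssh/authorized_keys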

3. Create the Master repository on the git server. You won’t need to put any code into the directory but you do need to run ‘git init --bare’ to initialize it.

mkdir /opt/projects
for i in suite inventory status httpd kubernetes changelog admin newuser
do
  mkdir /opt/projects/$i
  cd /opt/projects/$i
  git init --bare
done

The “--bare” option indicates this is a master repository with no working directory of its own. The working directory will be on your home system.

4. On your home system, create the local or working repository.

mkdir projects
cd projects

5. If you’re creating the first repository as I would for the ‘suite’ scripts, make the ‘suite’ directory and initialize it. You’ll want to set a couple of variables as well.

mkdir suite
cd suite
git init
git config --global user.name 'Carl Schelin'
git config --global user.email cschelin@west.com

6. Since I’m converting existing RCS files, I want to bring all the previous changes into git. I’m using rcs-fast-export, a Ruby script that imports all the RCS changes into git. You’ll want to run the script in the directory.

rcs-fast-export.rb . | git fast-import && git reset

Note – this script isn’t working for the inventory application. I suspect it’s because I have three places where the same file name is used but for different purposes. Do some testing before you commit the updates.

7. Once done, push the code up to the git server. This will depend on what you have set up to manage repositories.

git push git@lnmt1cuomgit1.internal.pri:projects/suite master

And you’re done. The project is ready for the team to retrieve and manage.

Team Setup

1. Very similar to above, each member of the team will need to copy their ssh keys over to the git server. Concatenate them with git’s authorized_keys file.

2. Create a projects directory. Don’t forget to set your git configuration variables.

mkdir projects
cd projects
git config --global user.name 'Carl Schelin'
git config --global user.email cschelin@west.com

And you’re ready to edit code.

Managing Code

1. Before you can do anything, you’ll need to retrieve the git project. The first time, you’ll have to clone it.

git clone git@lnmt1cuomgit1.internal.pri:projects/suite

This will retrieve all the files associated with the project you want to manage.

2. For subsequent updates, you’ll want to pull files from the server.

git pull git@lnmt1cuomgit1.internal.pri:projects/suite

3. You’ll now have a ‘suite’ directory. Within that are ‘bin’ and ‘etc’ directories, and the files in them are managed by git. Coming from RCS you’re used to checking files out before editing; git doesn’t lock files, so you can simply start editing, but running ‘git checkout’ on a file gives you a clean copy of the committed version if you want to start fresh. Change to the ‘bin’ directory and check out the ‘chkserver’ script.

cd suite/bin
git checkout chkserver

4. You can now edit the file. Once done, you’ll need to check it back in. It’s a two step process: you ‘add’ the change and then ‘commit’ it.

git add chkserver
git commit chkserver

Your editor of choice will display the current ‘git status’ as comments. Anything that’s not a comment becomes the commit message in the git log.

5. Once done, you’ll need to upload changes to the master.

git push git@lnmt1cuomgit1.internal.pri:projects/suite master

git Commands

List of commands that you’ll find useful. I’ll add more as I explore.

  • git status – Show the status of the project
  • git log – Show the commit log for the project or, if you pass a file name, the commit log for that file.
Posted in Computers | Tagged , , , | Leave a comment